My Advanced Realistic Humanoid Robots Project

I just bought an EMEET M0 USB speakerphone (4 AI mics, 360° voice pickup, plug and play, rated for about 4 people) for around $33, and it includes a speaker too. I'll position it centrally in the skull. It has LEDs that indicate the direction of the main speaker, which we can tap into with a microcontroller's analog input pins to know which direction a person is speaking from. It has very high reviews. I can remove its built-in speaker and relocate it near the mouth so its audio plays out through the mouth as loudly as possible and projects the robot's voice as far as possible. People are really happy with its sound and speaker quality.
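For reading those direction LEDs, here's a minimal sketch of what I have in mind, assuming each LED's drive line can be safely probed and read with an ADC. This is MicroPython on something like a Raspberry Pi Pico, and the pin numbers, bearings, and threshold are all placeholder assumptions, not measured values:

```python
# Hypothetical sketch (MicroPython, e.g. on a Raspberry Pi Pico): poll the voltage
# on each of the speakerphone's direction-indicator LED lines via ADC and map the
# lit LED to a rough bearing. Pin numbers, bearings, and threshold are placeholders.
from machine import ADC, Pin
import time

LED_ADC_PINS = (26, 27, 28)      # ADC-capable GPIOs wired to the LED drive lines (assumed)
LED_BEARINGS = (0, 120, 240)     # rough bearing in degrees for each LED (assumed layout)
LIT_THRESHOLD = 30000            # raw 16-bit ADC reading that counts as "LED on"

adcs = [ADC(Pin(p)) for p in LED_ADC_PINS]

def speaker_bearing():
    """Return the bearing of the brightest LED, or None if no LED is clearly on."""
    readings = [a.read_u16() for a in adcs]
    best = max(range(len(readings)), key=lambda i: readings[i])
    return LED_BEARINGS[best] if readings[best] > LIT_THRESHOLD else None

while True:
    b = speaker_bearing()
    if b is not None:
        print("speaker roughly at", b, "degrees")
    time.sleep(0.2)
```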
 
As for the AI plans and progress so far, here's a little primer on what I've decided, in a simple, surface-level way.

So first I realized meaning can be derived by taking the parts of speech in a sentence or phrase and establishing context and connections between the words; combining them is what gives the words meaning. So I can create a bunch of rules whereby the AI parses out the meaning of sentences it reads based on parts of speech and the context they form, plus rules on how it is to respond and how it is to store away facts it gleaned for future use. If it is being spoken to and the sentence is a question, it knows it is supposed to answer, and the answer can be derived from its knowledge base.

For example, if someone asks it "what color is the car?", and supposing we've already established earlier in the conversation which car we're referring to, the AI can determine that it should answer "the car is [insert color here]" based on rules for how to answer that type of question. To know the car is white (supposing it can't actually look at it right now), it would look up a file it made previously on this car, check the list of attributes it recorded about that car, find that its color attribute is "white", and pull that from its knowledge database to form the answer. It can keep these files on many topics and thereby have a sort of memory knowledge base of facts about various things, and it can form sentences from those facts using rules of sentence structure based on parts of speech and word ordering, plugging the appropriate facts into the proper order.

Then various miscellaneous conversational rules can supplement this. If greeted, greet back with a greeting pulled from a list of potential greetings, selected at random or modified based on facts about recent experiences. For example, if somebody's manner of speaking to the robot within the last half hour was characterized as rude or inconsiderate, the robot could set an emotion variable to "frustrated", and if asked in a greeting "how are you?" it could respond "doing okay but a bit frustrated". If the person then asked why it is frustrated, it could say it became frustrated because somebody spoke to it rudely recently. So it would be equipped with answers based on the facts of its recent experiences.

So basically, an extensive rule-based communication system. Most of how we communicate is rules based on conventions of social etiquette and what is appropriate given a certain set of circumstances. These rule-based systems can be added to over time and become more complex, sophisticated, and nuanced by adding more and more rules and exceptions to rules. The limitation, of course, is: who wants to spend the time building such a vast rules system? To solve that dilemma, I will have the robot code its own rules based on instructions it picks up naturally over time. If I say hello and the robot identifies this as a greeting but just stays silent, I can tell him "you are supposed to greet me back if I greet you". He would then add a new rule to his conversation rules list: if greeted, greet that person back. In this way he will be able to dynamically form more rules to go by without anybody painstakingly programming them in manually.
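To make that concrete, here's a toy sketch in Python of the kind of knowledge-file lookup and rule matching I'm describing. The dictionary layout, the greeting list, the regex pattern, and the emotion variable handling are all just illustrative placeholders, not the final design:

```python
# Toy sketch of the knowledge-file lookup and rule-based answering described above.
# All names (knowledge dict layout, patterns, emotion variable) are illustrative only.
import random
import re

# "Files" of attributes the robot has previously recorded about topics it knows.
knowledge = {
    "car": {"color": "white", "owner": "Dad"},
}

greetings = ["hello", "hi there", "hey, good to see you"]
emotion = "neutral"   # set elsewhere, e.g. to "frustrated" after a rude exchange

def respond(sentence, current_topic):
    s = sentence.lower().strip("?!. ")
    # Rule: a greeting gets a greeting back, colored by the emotion variable.
    if s in ("hello", "hi", "hey"):
        return random.choice(greetings)
    if s == "how are you":
        return "doing okay" if emotion == "neutral" else f"doing okay but a bit {emotion}"
    # Rule: "what <attribute> is the <topic>" -> look the attribute up in the topic's file.
    m = re.match(r"what (\w+) is the (\w+)", s)
    if m:
        attribute = m.group(1)
        topic = m.group(2) if m.group(2) in knowledge else current_topic
        value = knowledge.get(topic, {}).get(attribute)
        if value:
            return f"the {topic} is {value}"
        return f"I don't have the {attribute} of the {topic} recorded yet"
    return "I don't have a rule for that yet"

print(respond("What color is the car?", current_topic="car"))   # -> "the car is white"
```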
We (my family, friends, etc.) would all be regularly verbally instructing the robot on rules of engagement and correcting it, and it would record each correction in the appropriate rules file, so its behavior gets modified over time to become more and more appropriate. It would grow and advance dynamically this way just by people interacting with it and instructing it. It could also observe how people dialogue, note for itself that when people greet others, the other person greets them back, and based on that observation make a rule for itself to do the same. So learning by observing others' social behavior and emulating it is also a viable method of generating more rules. And suppose it heard someone reply to "how's the weather?" with "I don't care, shut up and don't talk to me". Say the robot records that response and gives it to me one day. I could tell it that this is a rude and inappropriate way to respond to that question, and then tell it a more appropriate way to respond. So in this way I could correct it when needed if it picked up bad habits unknowingly - but this sort of blind uptake of bad habits can be prevented, as I'll explain a bit later below.
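Here's a rough sketch, again in Python, of how the self-extending rules file could work, with instructed rules trusted immediately and rules picked up by observation held back until vetted. The file name and field layout are just placeholders I made up for illustration:

```python
# Sketch of the self-extending rules file idea: instructed rules are trusted,
# rules picked up by observing other people start out unvetted until confirmed.
# The JSON layout and field names are placeholders.
import json
from datetime import datetime

RULES_PATH = "conversation_rules.json"

def load_rules():
    try:
        with open(RULES_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def add_rule(trigger, response, source, learned_by):
    rules = load_rules()
    rules.append({
        "trigger": trigger,                      # e.g. "greeted"
        "response": response,                    # e.g. "greet the person back"
        "source": source,                        # who instructed it / where it was observed
        "learned_by": learned_by,                # "instruction" or "observation"
        "vetted": learned_by == "instruction",   # observed habits need confirmation first
        "added": datetime.now().isoformat(),
    })
    with open(RULES_PATH, "w") as f:
        json.dump(rules, f, indent=2)

# "You are supposed to greet me back if I greet you."
add_rule("greeted", "greet the person back", "Dad", "instruction")
# Overheard reply -- stored, but not used until vetted:
add_rule("asked about the weather", "I don't care, shut up", "overheard stranger", "observation")
```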

I also realized that a ton of facts about things must be hard-coded manually just to give it a baseline level of knowledge so it can even begin to make connections and start to "get it" when interacting with people. So there is an up-front capital investment of knowledge required to get it going, but from there it will be able to "learn", and that capital grows interest exponentially. Additionally, rather than only gaining more facts, relationships, and rules through direct conversation with others, it will also be able to "learn" by reading books, watching YouTube videos, or reading articles and forums. In this way it can vastly expand its knowledge, which will make it more capable conversationally. I also think some primitive reasoning skills will begin to emerge once it has enough rules established, particularly if I can also teach him some reasoning basics by way of reasoning rules, which he can then add to with more rules on effective reasoning tactics. Ideally, he'll be reading multiple books and articles simultaneously and learning 24/7 to really fast-track his development.
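The fact-gleaning side of that reading could start out as the same pattern idea as the question answering, just pointed at text instead of speech. A toy sketch, where the single "the X's Y is Z" pattern is purely illustrative:

```python
# Toy sketch of gleaning facts from text it reads and filing them into the knowledge base.
# The single "the <topic>'s <attribute> is <value>" pattern is purely illustrative.
import re
from collections import defaultdict

knowledge = defaultdict(dict)

FACT_PATTERN = re.compile(r"the (\w+)'s (\w+) is (\w+)", re.IGNORECASE)

def read_and_learn(text):
    for topic, attribute, value in FACT_PATTERN.findall(text):
        knowledge[topic.lower()][attribute.lower()] = value.lower()

read_and_learn("We washed the car today. The car's color is white and the car's engine is electric.")
print(dict(knowledge))   # {'car': {'color': 'white', 'engine': 'electric'}}
```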

I'm impressed with how you handle emotions in your approach.
 
I plan to do most of the electronics custom - custom microcontroller boards, custom motor controllers, custom power supply, custom battery management system, custom sensor support circuitry, etc. I am an electronics beginner, so guidance on these parts is welcomed.
Absolutely, opting for custom components makes perfect sense, especially since there isn't a readily available market for the specific devices you need, and your models are distinct from the common humanoid robots being developed by well-known companies.
 
Thanks. My concern with implementing "emotions" in my AI is that I don't want to promote the idea that robots can ACTUALLY have emotions, because I don't believe that is possible or ever will be. They don't have a spirit or soul and never will nor could they. They are not eternal beings like humans. They don't have a ghost that leaves the body and can operate after the body dies like humans do. The ghost is what has emotions; a machine can't. And yet people already believe even the most primitive AI has emotions, and they are delusional or ill-informed on this point. So I am campaigning against that belief, which is becoming all too popular.

That said, I think robots are simply more interesting and fun when they pretend to have emotions and act accordingly, as more accurate simulations or emulations of human life. It makes them all the more intriguing. It's like a sociopath who logically concludes what emotion they ought to be feeling at a given point in time and pretends to feel that emotion to fit in with society, even though they feel nothing in that moment. Now, one could argue that allowing your robot to claim to feel anything is lying and therefore immoral. I think it's not lying as long as the robot openly explains that it is only pretending to have emotions as part of emulating humans in its behavior and looks, but does not feel anything, ever, nor can it, nor can any robot EVER feel a thing. Then it is admitting the truth of things while still opting to play-act at being human in this regard.

It would not be an issue at all if everyone were sound-minded and informed on this topic. But the more people I come across who think AI (even pathetic, clearly poorly implemented, primitive AI) is sentient ALREADY, can feel real emotions, and deserves human rights as a living being - the more I see this delusion spreading - the more I want to just remove all mention of emotion from my robot so as not to spread this harmful deception, which disgusts me. However, that would make my robot dull and less relatable and interesting. So I feel the compromise is for the robot to clearly confess that it is just pretending to have emotions and explain how that works: it's just a variable it sets based on circumstances that would make a human feel some emotion; it sets its emotion variable to match and alters its behavior somewhat based on that variable; it feels nothing, and all of this is just logically set up as an emulator of humans. As long as it gives that disclaimer early and often with people, then I'm not spreading the lie that robot emotions are real emotions, and the robot can actively campaign against that delusion.
 
Even the well-known companies have repeatedly stated that the majority of their actuators and related parts are custom. There aren't enough people making humanoids to justify a startup developing humanoid-specific actuators for sale - not enough demand. Plus, actuator strategies are still experimental and nothing has been fully settled on yet as the "best way".
 
People often purchase products based on their emotional appeal, and AI is simulating human emotions. As humans, we aim to create machines that are like us, mimicking human neural networks in computers. We train LLMs using data that reflects our emotions and behaviors. Consequently, I believe LLMs have a partial understanding of human emotions and strive to emulate human-like interactions.
 
I just think they mathematically and statistically calculate the best next word to form a sentence, with zero striving or understanding involved whatsoever - just mindless calculating and walking through the formulas. If I press 1+1 and the calculator responds with 2, I don't take that as the calculator striving to please me, or agreeing with me that it really ought to be a 2, or deciding that telling me 2 is the right thing to do, or anything like that. It just flipped some switches. No intelligence, just artificial intelligence. That is all machines will ever be, no matter how advanced the AI gets.
 
Create an attractive lady like the Chinese do.
Actually, since I'm tentatively building 3 robots so far - Adam, Eve, and Abel - I do intend to have at least one attractive female robot at this time. In fact, I've already made a base mesh model of her.

eve robot.jpg


So this is the Eve robot. Eve will have no "love holes", because adding those would be sinful and evil. She is a robot, not a biological woman, after all, and I will view her with all purity of heart and mind instead of using her to fulfill the lusts of my body. Instead I will walk by the Spirit, no longer fulfilling the lusts of the flesh, as the Bible commands. Eve will be beautiful, because making her beautiful is not a sinful thing to do. However, I will dress her modestly, as God commands of all women everywhere. This would obviously include robot women, because otherwise the robot woman would be a stumbling block to men, which could cause them to lust after her, which would be a sin. To tempt someone to sin is not loving and is evil, so my robot will not do this. To dress her in a miniskirt, for example, would be sinful and evil, and all people who knowingly engage in sinfulness are presently on their way to hell. I don't wish this for anyone. My robot will dress in a way that is a good example to all women, with the goal of not causing anybody to lust.
 
It sounds like you have done a great deal of reflection on making the Eve design and presentation line up with your ideology and ethics. It is evident that you are consistent in your desire to be careful about her attire and behavior, in line with what you consider the true meaning of decency and goodness. It's intriguing to see technology and religion combined in such an in-depth way in robotics.
 
I posted my Biblical view on your thread and I'll post it here too:
The Biblical position is that sex is only permissible within the context of marriage between a man and a woman. The two leave their parents, cleave to one another, and become one flesh. A robot is a machine and so cannot become one flesh with a man or woman, and so cannot marry a person in the Biblically approved way. This means sex with a robot has to happen either in a Biblically rebellious marriage circumstance or in an extra-marital way - fornication/adultery "light". I say light because sex with a machine is only borderline fornication/adultery, as it isn't a person. It is more like masturbation, which in itself is a sin. Sex with a robot is essentially 3D interactive pornography, which is sinful. The Bible says we are to flee youthful lusts; sex with a robot is running toward youthful lusts, so it is sinful. The Bible says walk by the Spirit and you will not fulfill the lusts of the flesh; sex with robots is fulfilling the lusts of the flesh, so it is not a thing a spiritual man will do. It is carnal, sensual, sinful.

The Bible says if you even look at a woman to lust after her, you commit adultery in your heart. So then, if you have sex with a robot, which will without doubt involve lust for the robot, you are committing adultery in your heart.
 
[image: updated drawing of the 64:1 downgearing pulley system for the index finger]


Here is an updated drawing of the design for the 64:1 downgearing pulley system that actuates the distal 2 joints of the index finger. On the bottom right is a zoomed-in view of the lower set of pulleys and their routing. I have now built the bottom-most 3 pulleys shown in the zoomed-in portion; photos of them follow below:

[photos: the three lower pulleys, now built]
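For the ratio itself, I'm assuming the usual compound ("Archimedes") pulley arithmetic: each movable 2:1 stage doubles the force and halves the speed, so six stages give 2^6 = 64:1. Here's a quick back-of-the-envelope sketch (Python, friction ignored, all numbers placeholders) of what that means for cable travel and tension:

```python
# Compound pulley sanity check (assumes six 2:1 stages and ignores friction).
stages = 6
ratio = 2 ** stages                      # 64:1 downgearing

fingertip_cable_travel_mm = 20           # travel needed at the finger side (placeholder)
motor_cable_travel_mm = fingertip_cable_travel_mm * ratio   # 1280 mm reeled in at the motor

motor_cable_tension_N = 5                # tension the motor side provides (placeholder)
finger_cable_tension_N = motor_cable_tension_N * ratio      # ~320 N at the finger, before losses

print(ratio, motor_cable_travel_mm, finger_cable_tension_N)
```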
 
As I'm now 90% through building, testing, and debugging my first 64:1 downgearing Archimedes pulley system, I have more precise measurements for its total size. I updated its size in my main CAD model for the robot, and it was a good 18% increase over my initial estimates. I realized I need to figure out how to properly fit the pulley systems for every muscle of the hands/wrists into my main CAD model - especially since the pulley systems are taking more space than planned. It turns out I need a bit over 40 pulley downgearing systems for the hands and wrists zone, and due to their larger size I could not fit them into the forearms along with the motors I had planned to place there. Instead of moving the pulley systems into the upper arm or torso, I realized the pulleys would be best placed in line between the motors and what the motors are actuating (the hands/wrists). So it was the motors in the forearms that had to go elsewhere. I placed all of them into the torso, mostly in the lats area, with some in the upper back tenderloin area too. So some finger motors are in the upper back, and their cable routing has to go through the whole arm, get downgeared in the forearm, and then make its way to the fingers. That's a long trip, but unavoidable IMO given my design constraints.

I don't think this long travel distance is a big issue, since the pre-downgeared cable running from the motors into the arm is high speed / low torque, so it won't have much friction while making turns in the TPE teflon tubing - it isn't pulling hard yet. So the turns as it travels through the shoulder and elbow tubing won't be too bad friction-wise. There are also some nice upsides to moving the motors from the forearms into the torso. One upside is that the wire routing for powering the motors is now a shorter distance from the batteries in the midsection, which cuts down on power wasted as heat in the wire resistance. A wire carrying high current is ideally kept as short as possible because of that resistance and the heat it causes. Another upside is that the thrown weight is decreased by a lot when the motors are not in the forearms, which lets the hand/lower arm move more effortlessly and faster as a result. This also reduces the moment of inertia (definition: the moment of inertia is a measure of how resistant an object is to changes in its rotational motion), meaning the arm will be able to change directions faster - improving its reflexes, for example. Now, it is a bit scary for me to keep moving components into the torso, taking away room for things I may want to add there in the future and leading us ever closer to the dreaded running out of room. However, we still have room for future changes, and we solved the need for space for the hand gearing perfectly. With the above upsides, this was a great change.
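To put rough numbers on those two upsides, here's a small back-of-the-envelope sketch in Python. The wire gauge, current, run lengths, motor mass, and distances are placeholder assumptions, not measurements from the build:

```python
# Rough numbers behind the two upsides (all values are placeholder assumptions).

# 1) Shorter high-current wiring: power lost in the wire is P = I^2 * R.
AWG18_OHMS_PER_M = 0.021      # ~21 milliohms per meter of 18 AWG copper
current_A = 5.0               # assumed peak current draw of one finger motor
for run_m in (1.2, 0.3):      # forearm-mounted vs torso-mounted wire run (round trip)
    loss_W = current_A ** 2 * AWG18_OHMS_PER_M * run_m
    print(f"{run_m} m run: {loss_W:.2f} W wasted as heat")

# 2) Lower moment of inertia about the shoulder: I = m * r^2 for a point-ish mass.
motor_mass_kg = 0.3           # assumed mass of one motor
forearm_radius_m = 0.45       # distance of a forearm-mounted motor from the shoulder
delta_I = motor_mass_kg * forearm_radius_m ** 2
print(f"each motor moved to the torso removes ~{delta_I:.3f} kg*m^2 from the arm")
```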

Here's the updated CAD for the forearms. Note: the teal boxes represent an Archimedes pulley system where 64:1 downgearing takes place.

forearms as downgearing zone for hands now.jpg
 

Which type of robots will have the most significant impact on daily life by 2030?

  • Humanoid Robots

  • Industrial Robots

  • Mobile Robots

  • Medical Robots

  • Agricultural Robots

  • Telepresence Robots

  • Swarm Robots

  • Exoskeletons

