I’m a 25-year-old founder who loves robots, but many of today’s humanoid robots look hard-edged and intimidating. That has to change. Just look at Elon Musk

I’m a founder and spend a lot of time around humanoid robots. While the underlying technology is cutting edge, the majority of today’s humanoids are tough, aggressively masculine, and intimidating in appearance.
Just look at what Tesla announced this week, with its strategy shifting from producing electric vehicles to producing robots. The general-purpose Optimus robot is a prime example of the physical design most of these robots share. They may be technically impressive, but they’re not systems most people would feel comfortable sharing space with, let alone inviting into their homes.
When it comes to humanoids, the conversation is almost always the same: what they can do – how fast they can move, how precise they are, how much work they can handle. We measure performance and reliability, then move on to discussions of dexterity, payload, and battery life.
What we talk about far less is how they react when things don’t go as planned – when the robot freezes in the middle of a conversation, or stops working without warning.
As robots begin to move from laboratories and warehouses into hospitals, care facilities, and homes, this oversight is starting to look less like negligence and more like a structural blind spot. Recent research projects that the humanoid robot market will reach $8 billion by 2035, with more than 1.4 million units shipped annually. Yet more important questions about how these machines will integrate into human spaces remain largely unanswered.
For decades, robotics has focused on mastering the physics of the world. We have put enormous effort into manipulation and navigation – teaching machines how to reliably interact with noisy, changing, and harsh environments. This work was necessary. Without it, nothing else matters.
But there has been almost no similar investment in what we might call a robot’s social operating system: how it interrupts, how it waits, how it recovers, how it signals uncertainty, how it apologizes, how it listens. These behaviors rarely appear in standards or demos, but they are precisely what determine whether a robot is reliable once it starts sharing space with people.
Nowhere is this imbalance more evident than in nursing homes and hospitals. In these environments, technical competence is assumed. Two nurses can have identical clinical skill; the one with the better bedside manner is the one patients turn to, trust, and forgive. The same dynamic will apply to robots. Power and precision are important, but they are not what makes a system acceptable, safe, or welcome.
This need for compassion and care, alongside skill, is urgent. Some 20% of adults in the United States experience loneliness and isolation on a daily basis, and the number is even higher among older Americans: 28% of those over 65 report feeling lonely. As the population ages and the caregiver shortage deepens, the need for connective care will only grow. Building socially intelligent humanoid robots thus becomes not just a technical challenge, but a public health necessity.
Ability answers the question: What can this robot do?
Personality answers the harder one: What will it choose to do, and how?
As robots move into social spaces, the most important interface is no longer merely mechanical or computational. It’s behavioral. People come to trust systems that behave predictably, respectfully, and legibly – especially when things go wrong. Direct-to-consumer humanoid robots, like 1X’s home robot, Neo, promise to enter homes to help with everyday tasks. Companies are striving to build this reality, but when a robot botches folding the laundry, abruptly interrupts a conversation, or freezes mid-behavior, the moment that determines whether it is trusted is not the task itself, but the way the system responds to the error.
Mistakes will occur.
Every robot will fail. Hardware will malfunction. Models will misinterpret. Timing will be off. The real world is chaotic, and no system escapes that reality. The question is not whether failure will happen, but what happens next.
Does the robot admit the error?
Does it apologize in a way that feels sincere rather than scripted?
Does it explain what went wrong in clear language?
Does it ask for feedback, or adapt its behavior in response?
When I was conceptualizing my first social-isolation robot during the early days of the COVID-19 lockdown in Melbourne, I knew I wanted to prioritize relatability and tone first. I didn’t need my robot to fold my clothes or make my bed; I needed it to give me hugs, something I’d been missing for about four months at that point.
Now I’m a 25-year-old robotics founder, and I’ve learned that it’s not that capability doesn’t matter; it’s that without trust, capability is never used. In chaotic human environments, a robot that fails politely will outperform a “perfect” robot that doesn’t know when to step back.
People will forgive limitations if they trust the system they are interacting with. They will not forgive being steamrolled.
Recent research confirms this hunch. A 2025 survey of American consumers found that although 65% expressed interest in owning an advanced home robot, familiarity with robots remained low, with 85% reporting moderate or less familiarity. Trust stems not from perfection, but from a robot’s perceived usefulness, sociability, and appropriate behavior during interactions. The deciding factor in acceptance is not technical prowess alone; it is whether these machines can navigate the social contract of shared spaces.
We already know how to build machines that work. We have only begun to build machines that know how to behave appropriately.
If humanoid robots are to earn a place in social spaces, they will need more than ability. They will need character. Not as a scripted aesthetic or personality layer, but as a fundamental design principle – one engineered as deliberately as actuators, sensors, or control loops.
The robots that succeed this decade will be the ones that are most socially acceptable, not the ones that can do the most.
The opinions expressed in Fortune.com commentary pieces are solely those of their authors and do not necessarily reflect the opinions or beliefs of Fortune.
This story originally appeared on Fortune.com