A team of researchers from Columbia University has demonstrated a method that allows robots to learn to model their own bodies. This self-modeling process enabled a robot to decide which movements were most appropriate in different situations and, in effect, plan its next step.


Every change in the posture or position of our body is controlled by our nervous system (motor cortex). The human brain knows how different parts of the body can move and, therefore, can plan and coordinate our every action before it happens. This is possible because the brain contains maps and models of our entire body.
These brain maps allow the brain to guide the movement of various parts of our body, provide us with well-coordinated movements, and even protect us from injuries when we encounter obstacles in our path. Can we do the same for robots? Boyuan Chen, lead author of a new study and assistant professor at Duke University, believes so.
“We humans clearly have a perception of ourselves. Somewhere inside our brain, we have a perception of ourselves, a self-model that informs us how much of the surrounding environment we occupy, and how that occupancy changes as we move.”
Just as the movements of the human body are guided by multiple brain maps, Boyuan and his team have demonstrated that a robot can also develop a kinematic model of itself. A kinematic model is a mathematical description of a robot’s dimensions, its movement capabilities and ranges, and the workspace it can cover at any given time. It is normally used by robot operators to control the actions of the machine. However, after self-modeling, a robot can control itself, because it becomes aware of how different motor commands trigger different body movements.
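To make the idea of a kinematic model concrete, here is a minimal sketch (not the study’s code) of forward kinematics for a hypothetical two-joint planar arm: given the link lengths and joint angles, it computes where each joint, and finally the end effector, ends up in space.

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Toy kinematic model of a planar arm: returns the (x, y) position of
    each joint and, as the last entry, the end effector."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle                  # accumulate joint rotations
        x += length * math.cos(heading)   # advance along the current link
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Example: a two-link arm with 0.3 m links, shoulder at 45 degrees, elbow at 90
print(forward_kinematics([0.3, 0.3], [math.pi / 4, math.pi / 2]))
```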
How did scientists enable robots to model themselves?
There’s no way for scientists to look at the brain maps built inside a person’s head, or at what a person is thinking at any given moment; we don’t have the technology yet. Similarly, if a robot imagines something, a scientist cannot see it by simply peering into the robot’s neural network. The researchers suggest that a robot’s brain is like a “black box,” so they conducted an interesting experiment to find out whether a robot could model itself.


Hod Lipson, one of the study’s authors and director of Columbia University’s Creative Machines Lab, described the experiment in an interview with ZME Science:
“You can imagine yourself; every human can imagine where they are in space, but we don’t know how it works. No one can look into a rat’s mind and say how a rat sees itself.”
So during the study, the researchers surrounded a robot arm called the WidowX 200 with five cameras in a room. The live feed from all the cameras was connected to the robot’s neural network so that the robot could see itself through them. As the WidowX performed a variety of body movements in front of the live-streaming cameras, it began to observe how its various body parts moved in response to different motor commands.
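A rough sketch of what such a data-collection loop might look like follows; the robot and camera interfaces here are hypothetical placeholders, not the study’s actual code. The idea is simply that random motor commands and the resulting camera images are stored as pairs for the neural network to learn from.

```python
import random

def collect_self_observation_data(robot, cameras, num_samples=10_000):
    """Motor 'babbling': send random joint commands and record what the
    surrounding cameras see, producing (command, images) training pairs.
    `robot` and `cameras` are illustrative stand-ins, not real APIs."""
    dataset = []
    for _ in range(num_samples):
        # Sample a random target angle for each joint within its safe range
        command = [random.uniform(lo, hi) for lo, hi in robot.joint_limits]
        robot.move_to(command)                       # execute the command
        images = [cam.capture() for cam in cameras]  # e.g. five viewpoints
        dataset.append((command, images))
    return dataset
```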
After three hours, the robot stopped moving. Its deep neural network gathered all the information needed to model the robot’s entire body. The researchers then conducted another experiment to test whether the robot had successfully modeled itself. They assigned the robot a complex task that involved touching a 3D red sphere while avoiding a major obstacle in its path.
In addition, the robot had to touch the sphere with a particular part of its body (the end effector). To complete the task successfully, the WidowX needed to propose and follow a safe trajectory that would let it reach the sphere without a collision. Surprisingly, the robot did it without any human help, and for the first time Boyuan Chen and his team showed that a robot can learn to model itself.
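One way to picture how a learned self-model supports this kind of planning is sketched below. It assumes, purely for illustration, that the self-model can predict which region of space the body would occupy for a given set of joint angles; the names `self_model`, `predict_occupancy`, and the geometry helpers are hypothetical, not the authors’ implementation.

```python
import random

def plan_reach(self_model, target, obstacle, joint_limits, attempts=5000):
    """Sample candidate joint configurations, ask the learned self-model
    which space each one would occupy, and keep the collision-free pose
    whose predicted end effector lands closest to the target."""
    best, best_distance = None, float("inf")
    for _ in range(attempts):
        candidate = [random.uniform(lo, hi) for lo, hi in joint_limits]
        predicted_body = self_model.predict_occupancy(candidate)
        if predicted_body.intersects(obstacle):
            continue  # the self-model predicts this pose would collide
        distance = predicted_body.end_effector_distance_to(target)
        if distance < best_distance:
            best, best_distance = candidate, distance
    return best  # joint angles to execute, or None if nothing safe was found
```

In the actual study the robot presumably plans a full trajectory rather than a single pose, but the principle is the same: the learned self-model lets the robot check candidate motions against the world before it moves.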
Self-modeling robots could advance the field of artificial intelligence
The WidowX robotic arm isn’t exactly an advanced machine; it can only perform a limited number of actions and movements. The future most people envision, however, will be driven by robots and machines far more complex than the WidowX. When asked whether such a robot could learn to model itself using the same approach, Professor Lipson told ZME Science:
“We’ve done this with a very simple, cheap robot (the WidowX 200) that we can just buy on Amazon, but it should work on other things. Now the question is, how complex can the robot be and will it still work? Will this work for a six-degree-of-freedom robot? Will it work for a driverless car? Will it work for 18 motors, a spider robot? And that’s what we’re going to do next; we’re going to push it forward and try to see how far it can go.”


Many recent AI-based innovations, such as drones, driverless cars, and humanoids like Sophia, perform multiple tasks at the same time. If these machines learn to imagine themselves and others, including humans, it could lead to a robot revolution. The researchers believe that the ability to model themselves and others will allow robots to program, repair, and operate on their own without human supervision.
“We rely on factory robots, we rely on drones, we rely on these robots more and more, and we can’t babysit all these robots all the time. We can’t always model or program them; it’s a lot of work. We want robots to model themselves, and we’re also interested in how robots can model other robots. So they can help each other, take care of themselves, adapt, be more flexible, and I think that will be important,” Professor Lipson said.
The study was published in the journal Science Robotics.