
How Robotic Self-awareness Is Becoming a Reality
Engineers at Columbia University have developed a robot that learns a model of its own body by watching itself move, with the curiosity of a child discovering a hall of mirrors.
We’re often guilty of checking our outfits multiple times in the mirror before heading out—but how often do you hear of a robot gazing at its reflection?
A team of engineers at Columbia University has developed a robot that not only observes its own actions but actively learns about its body through its movements.
A recent study by the Columbia University School of Engineering and Applied Science, published in Science Robotics, details what may be the world’s first ‘self-modelling’ robot: one that can build a kinematic model of its own body to avoid obstacles, plan its movements, and achieve assigned goals.
Childlike Mimesis
Placed inside a circle of live-streaming cameras, a free-moving robotic arm explored its own motion with the avid curiosity of a child who has stumbled upon a hall of mirrors. For three hours, the WidowX 200 Robot Arm tinkered with its joints and studied how different motor commands produced different movements, gradually deriving insights into its own morphology.
From those three hours inside the circle of cameras, the arm’s deep neural network learned the relationship between its motor commands and the movements they produce, building an estimate of the volume its body occupies in 3D space. The researchers were then able to visualize the self-image the robotic arm had formed.
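To make the idea a little more concrete, here is a minimal sketch, in Python with PyTorch, of what such a learned self-model might look like: a small network that takes the arm’s joint angles plus a 3D query point and predicts whether the body occupies that point. The joint count, network size, and synthetic training data are illustrative assumptions, not the configuration used by the Columbia team.

```python
# Minimal sketch of a learned "self-model": a small network that, given the
# arm's joint angles and a 3D query point, predicts whether that point lies
# inside the robot's body. The training data here is synthetic and merely
# stands in for the occupancy labels the researchers derived from camera
# streams; the network shape and training details are illustrative guesses.
import torch
import torch.nn as nn

NUM_JOINTS = 5  # assumed joint count for a WidowX-200-class arm


class SelfModel(nn.Module):
    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, 256),  # joint angles + (x, y, z) query point
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 1),  # logit: does the body occupy this point?
        )

    def forward(self, joint_angles: torch.Tensor, query_points: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_angles, query_points], dim=-1))


model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    # Synthetic batch: random poses, random query points, random occupancy labels.
    # In the real setup these would come from motor commands and camera observations.
    joints = torch.rand(64, NUM_JOINTS) * 2 - 1
    points = torch.rand(64, 3) * 2 - 1
    labels = torch.randint(0, 2, (64, 1)).float()

    logits = model(joints, points)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Queried densely over a grid of points for a given pose, a model along these lines yields the kind of volumetric self-image the researchers visualized, and it can be checked against planned motions to anticipate collisions.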
“We were really curious to see how the robot imagined itself,” comments the Director of Columbia’s Creative Machines Lab, Hod Lipson.
Studying the structure of a neural network, however, yields little meaningful information on its own; there is no straightforward way to trace how the robot makes a particular decision or arrives at a conclusion.
“But you can’t just peek into a neural network, it’s a black box,” observes Lipson.
Robots With a Sense of Self

Self-modelling robots could be useful for deployment in remote environments, such as Mars. (Image: Pixabay)
Researchers argue that a self-modelling robot able to simulate its physical ‘self’ can make for more reliable autonomous systems: systems that are ‘aware’ of their structural shortcomings, wear and tear, and damage incurred on the job. An autonomous system informed of its own morphology can detect, compensate for, or even prevent structural damage, making for a better-functioning machine.
“If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage,” Lipson argues.
The future of advanced robotics holds many surprises. Built right, intelligent bots can automate menial jobs, assist in performing life-saving surgical procedures, and even help in our fight against climate change.