Body Awareness: Scientists Give Robots a Basic Sense of ‘Proprioception’

Many experts believe more general forms of artificial intelligence will be impossible without giving AI a body in the real world. A new approach that allows robots to learn how their body is configured could accelerate this process.

The ability to intuitively sense the layout and positioning of our bodies, something known as proprioception, is a powerful capability. Even more impressive is our capacity to update our internal model of how all these parts are working—and how they work together—depending on both internal factors like injury or external ones like a heavy load.

Replicating these capabilities in robots will be crucial if they’re to operate safely and effectively in real-world situations. Many AI experts also believe that for AI to achieve its full potential, it needs to be physically embodied rather than simply interacting with the real world through abstract mediums like language. Giving machines a way to learn how their body works is likely a crucial ingredient.

Now, a team from the Technical University of Munich has developed a new machine learning approach that allows a wide variety of robots to infer the layout of their bodies using nothing more than feedback from sensors that track the movement of their limbs.

“The embodiment of a robot determines its perceptual and behavioral capabilities,” the researchers write in a paper in Science Robotics describing the work. “Robots capable of autonomously and incrementally building an understanding of their morphology can monitor the state of their dynamics, adapt the representation of their body, and react to changes to it.”

All robots require an internal model of their bodies to operate effectively, but typically this is either hard-coded or learned using external measuring devices or cameras that monitor their movements. In contrast, the new approach attempts to learn the layout of a robot’s body using only data from inertial measurement units—sensors that detect movement—placed on different parts of the robot.

The team’s approach relies on the fact that the signals from sensors that sit close together, or on the same part of the body, will overlap. This makes it possible to analyze the data from these sensors to work out their positions on the robot’s body and their relationships with each other.
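To make that intuition concrete, here is a minimal Python sketch of the idea: sensor streams from the same body segment co-vary strongly, so a pairwise correlation matrix over the streams reveals which sensors belong together. The synthetic data, array shapes, and use of plain Pearson correlation are illustrative assumptions, not the paper’s exact method.

```python
import numpy as np

def sensor_similarity(signals: np.ndarray) -> np.ndarray:
    """signals: (num_sensors, num_timesteps) array of IMU readings,
    e.g., angular velocity magnitude per sensor.
    Returns a (num_sensors, num_sensors) similarity matrix."""
    # Correlation is high for sensors that move together, i.e.,
    # those mounted on the same or adjacent body segments.
    return np.corrcoef(signals)

# Toy example: six sensors split across two independently moving limbs.
rng = np.random.default_rng(0)
limb_a, limb_b = rng.standard_normal((2, 1000))
noise = lambda: 0.1 * rng.standard_normal(1000)
signals = np.vstack([limb_a + noise(), limb_a + noise(), limb_a + noise(),
                     limb_b + noise(), limb_b + noise(), limb_b + noise()])

similarity = sensor_similarity(signals)
# Thresholding the matrix cleanly separates sensors 0-2 from 3-5.
print((similarity > 0.5).astype(int))
```

Thresholding is the crudest possible grouping rule; it stands in here for whatever structure-learning step recovers the kinematic layout from the similarity data.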

First, the team gets the robot to generate sensorimotor data via “motor babbling,” which involves randomly activating the machine’s servos in short bursts to produce unstructured movements. They then use a machine learning approach to work out how the sensors are arranged and to identify subsets that relate to specific limbs and joints.
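The sketch below shows how such a pipeline might be wired together: a motor-babbling loop that logs IMU data while driving the servos randomly, followed by a clustering step that groups co-varying sensors. The robot interface (set_servo, read_imus) is hypothetical, and agglomerative clustering over signal correlations stands in for the team’s actual learning method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def motor_babble(robot, num_servos, steps=500):
    """Drive every servo with short random commands and log IMU readings."""
    log = []
    for _ in range(steps):
        for j in range(num_servos):
            robot.set_servo(j, np.random.uniform(-1.0, 1.0))  # random setpoint
        log.append(robot.read_imus())  # one reading per sensor at this step
    return np.asarray(log).T  # shape: (num_sensors, steps)

def group_sensors(signals, num_limbs):
    """Cluster sensors whose signals co-vary into candidate limb groups."""
    corr = np.corrcoef(signals)
    dist = 1.0 - corr  # strongly correlated sensors -> small distance
    condensed = dist[np.triu_indices_from(dist, k=1)]  # form linkage() expects
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=num_limbs, criterion="maxclust")  # limb labels
```

A real system would likely excite servos more systematically and use richer features than raw correlation, but the overall loop of excite, record, and infer structure matches the description above.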

The researchers applied their approach to a variety of robots in both simulations and real-world experiments, including a robotic arm, a small humanoid robot, and a six-legged robot. They showed that all the robots could develop an understanding of the location of their joints and which way those joints were facing.

More importantly, the approach does not require a massive dataset like the deep learning methods underpinning most modern AI and can instead be carried out in real time. That opens up the prospect of robots that can adapt on the fly to damage or to the addition of new body parts or modules.
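One way to picture that kind of real-time adaptation, as a hedged sketch rather than the paper’s method, is an exponentially weighted running estimate of the sensor correlations: each new reading folds into the body model, so a damaged or newly attached limb shows up as a shift in the correlation structure. The decay rate and update rule here are assumptions.

```python
import numpy as np

class OnlineCorrelation:
    """Exponentially weighted estimate of pairwise sensor correlation."""

    def __init__(self, num_sensors, decay=0.99):
        self.decay = decay
        self.mean = np.zeros(num_sensors)
        self.cov = np.eye(num_sensors) * 1e-6  # small prior for numerical stability

    def update(self, x):
        """Fold one vector of simultaneous sensor readings into the estimate."""
        delta = x - self.mean
        self.mean += (1.0 - self.decay) * delta
        # Standard exponentially weighted covariance update.
        self.cov = self.decay * self.cov + (1.0 - self.decay) * np.outer(delta, x - self.mean)

    def correlation(self):
        std = np.sqrt(np.diag(self.cov))
        return self.cov / np.outer(std, std)
```

Re-clustering this matrix periodically would let a robot notice when a sensor’s group membership changes, which is one plausible signal of damage or a newly added module.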

“We recognize the importance of a robot’s capability to assess and continuously update the knowledge about its morphology autonomously,” the researchers write. “Incremental learning of the morphology would allow robots to adapt their parameters to reflect the changes in the body structure that could result from self-inflicted or externally inflicted actions.”

While understanding how your body works is only a small part of learning how to carry out useful tasks, it is an important ingredient. Giving robots this proprioception-like ability could make them more flexible, adaptable, and safe.

