Robotic process automation has spread across industries worldwide by promising efficiency and accuracy in many operations. However, controversy arises when these robots fail to memorize things the way human beings do. The question now is: can a helper robot behave like a human in critical situations?
Researchers from the University of Maryland have introduced a new way of combining perception and motor commands. They used hyperdimensional computing theory, which they believe has the potential to transform and improve basic AI tasks. The theory approaches these tasks through sensorimotor representation; more precisely, it answers how robots translate what they sense into what they decide to do.
The Ultimate Goal: Enabling AI to Move From Concepts to Signals to Language
A robot’s actuators and sensors are separate systems, linked by a central learning mechanism that infers a required action from sensor data. However, the researchers noticed that challenges arising from this integration slowed robots down in accomplishing sensorimotor tasks. Therefore, they first tried to merge a robot’s own perceptions with its motor capabilities, hoping that such integration would give robots a more systematic way to complete tasks.
With hyperdimensional computing theory, researchers are now integrating hyperdimensional binary vectors (HBVs) into the robot’s operating system. HBVs can represent discrete things, including a concept, an image, an instruction, or a sound, and they can encode every bit of information in a meaningful, structured way. Their biggest advantage is that distinct action possibilities and other information can be fused into the same space, expressed in the same language. In this way, the system aims to create a kind of memory for helper robots.
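To make the idea concrete, here is a minimal, hypothetical sketch of how HBVs can put a percept and a motor command into the same space. It is not the researchers' actual system; it only illustrates the standard hyperdimensional-computing operations the article alludes to: random binary vectors of thousands of bits, XOR for binding, and Hamming similarity for comparison. All names (`random_hbv`, `bind`, `similarity`) are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # HBVs are typically thousands of bits wide

def random_hbv():
    """A random hyperdimensional binary vector (HBV)."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Bind two HBVs with XOR; the result lives in the same space
    but is dissimilar to both inputs."""
    return a ^ b

def similarity(a, b):
    """1.0 for identical vectors; about 0.5 for unrelated random ones."""
    return 1.0 - np.count_nonzero(a ^ b) / D

# A percept and a motor command, represented in the same "language":
image = random_hbv()
motor_cmd = random_hbv()

# Fuse them into a single sensorimotor memory trace:
memory = bind(image, motor_cmd)

# Seeing the percept again recovers the action, because XOR
# is its own inverse: (image ^ motor_cmd) ^ image == motor_cmd
recovered = bind(memory, image)
```

The appeal of this representation is that "what was sensed" and "what was done" are stored as one vector, and either part can be retrieved from the other, which is one way such a system could act as a memory.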