The prospect of a future where artificial intelligence (AI)-powered robots perform everyday tasks like washing dishes or walking dogs has long been a popular science fiction trope. Today, while AI continues to make significant strides, integrating these capabilities into physical robots remains a formidable challenge. So, what is holding back the development of these AI-enabled robotic assistants?
This topic has recently gained traction as AI makes groundbreaking strides across various fields. Notable examples include OpenAI’s ChatGPT, which excels at conversational dialogue, and Midjourney, which generates unique images from text descriptions. Earlier, in 2016, DeepMind’s AlphaGo defeated a world champion at the complex game of Go. Despite these achievements, physical robots are struggling to keep pace with the AI revolution. Recognizing the challenges of integrating AI into robots is important for setting realistic expectations about the future of this technology.
One significant hurdle is Moravec’s paradox: tasks that are simple for humans, such as pouring a cup of coffee, can prove remarkably difficult for a robot. While computers handle abstract thought and reasoning with relative ease, tasks that combine sensory perception with motor function, the kind we take for granted, remain major challenges for machines. Hardware limitations such as unreliable sensors and constrained onboard computation compound these issues, and the unpredictability of the real world makes it harder still to develop AI algorithms for use in robotics.
A notable example is Boston Dynamics’ state-of-the-art robotic dog. Despite its seemingly intelligent behaviour, it does not rely on AI for its movement or for understanding its environment; instead, it leverages traditional robotic approaches, a choice that underlines the current practical limitations of applying AI to robots. It also highlights the need for further research and innovation in this field, indicating that we are still a considerable distance from truly AI-powered robots.
Clearly, a substantial gap exists between human and robotic abilities, but progress is being made in closing it. Several AI systems show promise in helping robots perform tasks involving sensorimotor skills and perception. For instance, Google Robotics’ Reinforcement Learning at Scale project taught robots in a training classroom to sort garbage; when deployed in a real office setting, the robots reduced contamination in the waste bins by 50%. Another example is Google Robotics’ Robotics Transformer-1 (RT-1), which uses a large-scale machine learning model to complete various tasks with mobile robotic arms in a kitchen environment, such as retrieving items from drawers and opening jars. These robots were able to generalize successfully to new tasks and settings.
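To give a concrete sense of the trial-and-error learning behind projects like Reinforcement Learning at Scale, here is a minimal sketch of tabular Q-learning on a hypothetical waste-sorting task. The item types, bins, and rewards below are invented for illustration; the real system learns from camera images with deep neural networks, not a lookup table.

```python
import random

# Hypothetical toy task: three item types, three bins, and a reward of
# +1 for placing an item in its matching bin, -1 otherwise.
ITEM_TYPES = 3          # e.g. recyclable, compost, landfill
BINS = 3                # one correct bin per item type
ALPHA, EPSILON = 0.5, 0.2  # learning rate and exploration rate

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    # Q-table: expected reward for choosing each bin for each item type.
    q = [[0.0] * BINS for _ in range(ITEM_TYPES)]
    for _ in range(episodes):
        item = rng.randrange(ITEM_TYPES)
        # Epsilon-greedy: mostly exploit the best-known bin, sometimes explore.
        if rng.random() < EPSILON:
            bin_choice = rng.randrange(BINS)
        else:
            bin_choice = max(range(BINS), key=lambda b: q[item][b])
        reward = 1.0 if bin_choice == item else -1.0
        # One-step Q-learning update (no successor state in this simple task).
        q[item][bin_choice] += ALPHA * (reward - q[item][bin_choice])
    return q

q = train()
# The learned policy: the best-valued bin for each item type.
policy = [max(range(BINS), key=lambda b: q[item][b]) for item in range(ITEM_TYPES)]
print(policy)
```

The agent is never told which bin is correct; it discovers the mapping purely from reward feedback, which is the core idea that scales up (with far richer sensors and models) to the office-deployed sorting robots.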
While the prospect of fully autonomous, AI-powered robots may seem more distant than science fiction has led us to believe, the progress being made is encouraging. As we continue to explore the potential and limitations of AI, we are gradually moving closer to a world where robots are integral to our everyday lives.
Image generated with Midjourney.