As more robots start showing up in warehouses, offices, and even people's homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot: in this case, a robot dog.
In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings show the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software, and with physical objects as well.
“We have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,” Logan Graham, a member of Anthropic’s red team, which studies models for potential risks, tells WIRED. “This will really require models to interface more with robots.”
Courtesy of Anthropic
Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic, even dangerous, as it advances. Today’s models are not smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people leverage LLMs to program robots could help the industry prepare for the idea of “models eventually self-embodying,” referring to the idea that AI may someday operate physical systems.
It is still unclear why an AI model would decide to take control of a robot, let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic’s brand, and it helps position the company as a key player in the responsible AI movement.
In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group used Claude’s coding model; the other wrote code without AI assistance. The group using Claude was able to complete some, though not all, tasks faster than the human-only programming group. For example, it was able to get the robot to walk around and find a beach ball, something that the human-only group could not figure out.
Anthropic also studied the collaboration dynamics in both teams by recording and analyzing their interactions. They found that the group without access to Claude exhibited more negative sentiment and confusion. This might be because Claude made it quicker to connect to the robot and coded an easier-to-use interface.
The Go2 robot used in Anthropic’s experiments costs $16,900, relatively cheap by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot is able to walk autonomously but generally relies on high-level software commands or a person operating a controller. Go2 is made by Unitree, which is based in Hangzhou, China. Its AI systems are currently the most popular on the market, according to a recent study by SemiAnalysis.
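The “high-level software commands” such robots accept typically boil down to small structured messages, such as velocity targets, sent to an onboard controller over the network. The sketch below is purely illustrative of that pattern: the message layout, IP address, and port are invented placeholders, not Unitree’s actual protocol or SDK.

```python
import socket
import struct
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    """A hypothetical high-level walk command: body velocities, not joint angles."""
    vx: float   # forward velocity, m/s
    vy: float   # lateral velocity, m/s
    yaw: float  # turn rate, rad/s

    def pack(self) -> bytes:
        # Serialize as three little-endian 32-bit floats (illustrative wire format).
        return struct.pack("<3f", self.vx, self.vy, self.yaw)

    @classmethod
    def unpack(cls, data: bytes) -> "VelocityCommand":
        return cls(*struct.unpack("<3f", data))

def send_command(cmd: VelocityCommand,
                 host: str = "192.168.123.161",  # placeholder robot address
                 port: int = 8082) -> None:      # placeholder port
    """Fire one command datagram at the robot's onboard controller."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(cmd.pack(), (host, port))
```

In practice an LLM-assisted workflow like the one in the experiment would generate and iterate on code at roughly this level of abstraction, while the robot’s firmware handles gait and balance underneath.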
The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than just text generators.