AI Improves Control of Robot Arms

A team of Stanford researchers has developed a novel way to control assistive robotic arms that is both more intuitive and faster than existing approaches. The new AI-enabled robot controllers allowed subjects to more efficiently cut tofu and shovel it onto a plate, or to stab a marshmallow, scoop it in icing, and dip it in sprinkles.

Description of Stanford's AI-Powered Robot Arm Control

More than one million American adults use wheelchairs fitted with robot arms to help them perform everyday tasks such as dressing, brushing their teeth, and eating. But the robotic devices now on the market can be hard to control.

Reducing Dimensionality
Typical assistive robots on the market have six or seven joints. To control each of them, a user switches among various joystick modes, which is unintuitive, mentally tiring, and time-consuming.

The team wondered: Can a joystick that gives commands along only two axes (up/down and left/right) nevertheless control a multi-jointed robot smoothly and quickly? For an answer, they turned to a process called dimensionality reduction. The key observation: in any given context, a robot arm doesn't actually need to move every joint in every possible direction to accomplish a particular task.

The process of dimensionality reduction begins with a human moving the robot arm through various task-specific motions, essentially teaching it how to move in a fluid, useful way in a given context. This high-dimensional dataset is then fed through a neural network called an autoencoder, which first compresses the data into just two dimensions and then decodes that compressed representation to try to recreate the original expert data.
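
To make this concrete, here is a minimal sketch of such an autoencoder in PyTorch. The seven-joint input dimension, layer sizes, and training details are illustrative assumptions, not the Stanford team's actual architecture:

```python
# Minimal autoencoder sketch (PyTorch). Joint demonstrations are assumed
# to be 7-dimensional; all sizes here are illustrative, not the paper's.
import torch
import torch.nn as nn

class LatentActionAutoencoder(nn.Module):
    def __init__(self, joint_dim=7, latent_dim=2):
        super().__init__()
        # Encoder: compress a high-dimensional joint motion to 2 latent values.
        self.encoder = nn.Sequential(
            nn.Linear(joint_dim, 32), nn.ReLU(),
            nn.Linear(32, latent_dim),
        )
        # Decoder: reconstruct the full joint motion from the 2-D code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, joint_dim),
        )

    def forward(self, joint_motion):
        return self.decoder(self.encoder(joint_motion))

# Training: push expert demonstrations through the 2-D bottleneck and
# penalize reconstruction error.
model = LatentActionAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
demos = torch.randn(1000, 7)  # placeholder for recorded expert motions

for epoch in range(100):
    recon = model(demos)
    loss = nn.functional.mse_loss(recon, demos)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```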

The next step is where the magic happens: a person gives two-dimensional instructions with a joystick, and the robot recreates the more complex, context-dependent actions the expert trained it to perform. In experiments, users controlling the robot with this "latent action" algorithm alone could pick up an egg, an apple, and a cup of flour and drop them into a bowl (making an "apple pie," so to speak) faster than with an existing approach that required mode switching on a joystick. Despite the increased speed, however, users found the interface unpredictable.
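
At run time, only the decoder half is needed: the joystick's two axes are read as the latent code and expanded into a full joint command. Below is a sketch continuing the autoencoder above; read_joystick and send_joint_velocities are hypothetical stand-ins for a real hardware interface, and the full system also conditions the decoder on the robot's current state (which is how the same joystick motion can mean different things in different contexts), a detail omitted here for brevity:

```python
# Control-loop sketch: treat the two joystick axes as the latent code and
# let the trained decoder expand them into a 7-joint command.
# read_joystick() and send_joint_velocities() are hypothetical stand-ins.
import torch

def control_step(decoder, read_joystick, send_joint_velocities):
    x, y = read_joystick()                        # two axes in [-1, 1]
    z = torch.tensor([x, y], dtype=torch.float32)
    with torch.no_grad():
        joint_cmd = decoder(z)                    # 2-D latent -> 7-D command
    send_joint_velocities(joint_cmd.tolist())
```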

Adding Shared Autonomy
The latent action controller also wasn't very precise. To address that problem, the team blended the latent action algorithm with another called shared autonomy. Here, the novelty lay in how the two algorithms were integrated. "It's not an add-on," says Stanford computer scientist Dorsa Sadigh. "The system is trained all together."

In shared autonomy, the robot begins with a set of "beliefs" about what the user is telling it to do and gains confidence about the goal as additional instructions arrive. Since robots aren't actually sentient, these beliefs are really just probabilities.
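
A common way to implement such beliefs is a Bayesian update: the robot keeps a probability for each candidate goal, raises the probability of goals the user's inputs appear to steer toward, and assists in proportion to its confidence. The goal set, likelihood model, and blending rule below are illustrative assumptions rather than the team's exact formulation:

```python
# Shared-autonomy sketch: maintain a belief over candidate goals, update it
# with Bayes' rule as joystick inputs arrive, and blend autonomous help in
# proportion to confidence. A 2-D workspace is used for simplicity.
import numpy as np

goals = np.array([[0.5, 0.2], [0.1, 0.8], [0.9, 0.9]])  # candidate targets
belief = np.ones(len(goals)) / len(goals)                # uniform prior

def update_belief(belief, hand_pos, user_dir):
    """Raise the probability of goals the user's input points toward."""
    likelihoods = []
    for g in goals:
        to_goal = (g - hand_pos) / (np.linalg.norm(g - hand_pos) + 1e-8)
        # Inputs aligned with the direction to a goal are more likely
        # under that goal (a simple Boltzmann-style likelihood).
        likelihoods.append(np.exp(2.0 * np.dot(to_goal, user_dir)))
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()

def blended_command(belief, hand_pos, user_dir):
    """Mix the user's command with motion toward the most likely goal."""
    g = goals[np.argmax(belief)]
    to_goal = (g - hand_pos) / (np.linalg.norm(g - hand_pos) + 1e-8)
    confidence = belief.max()  # how sure the robot is about the goal
    return confidence * to_goal + (1 - confidence) * user_dir
```

With each new input the belief sharpens toward one goal, so control shifts smoothly from user-driven to robot-assisted motion.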

The combined algorithm (latent actions with shared autonomy) proved both faster and easier for users than the latent action algorithm alone and than the standard controller, either on its own or augmented with shared autonomy.

SOURCE – Stanford University
Written By Brian Wang, Nextbigfuture.com
