Using augmented reality, the app lets the user either walk out the path the robot should take to perform its tasks or draw a workflow directly in real space. The app offers options for how those tasks are performed, such as under a time limit, on repeat, or after another machine has finished its job.
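The article does not detail the app’s internal format, but conceptually the result of that programming step is an ordered list of waypoints and task steps with optional constraints. The sketch below is a hypothetical Python rendering; every name in it (Waypoint, TaskStep, the action strings) is an assumption for illustration, not the app’s actual data model.

```python
# Hypothetical sketch of what a recorded AR workflow might reduce to:
# waypoints captured by walking the path, plus task steps with optional
# constraints (time limit, repetition, a trigger from another machine).
# None of these names come from the app itself.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float  # meters, in the AR world frame
    y: float

@dataclass
class TaskStep:
    action: str                        # e.g. "vacuum", "water_plant"
    waypoint: Waypoint
    time_limit_s: float | None = None  # finish within this many seconds
    repeat: bool = False               # loop this step indefinitely
    wait_for: str | None = None        # ID of a machine that must finish first

workflow = [
    TaskStep("vacuum", Waypoint(0.0, 2.5), repeat=True),
    TaskStep("water_plant", Waypoint(3.1, 1.0), time_limit_s=60),
    TaskStep("pick_up_part", Waypoint(5.0, 0.5), wait_for="3d_printer_1"),
]
```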
After programming, the user drops the phone into a dock attached to the robot. The phone needs to know the type of robot it is “becoming” to perform tasks, but the dock connects wirelessly to the robot’s basic controls and motor.
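The article does not specify the wireless link or message format between the dock and the robot. As a minimal sketch, assume the dock exposes a network endpoint that accepts JSON velocity commands; both the transport and the message schema here are invented for illustration.

```python
# Minimal sketch of the phone driving the robot through the dock.
# The plain TCP socket and JSON command format are assumptions,
# not the project's actual interface.
import json
import socket

def send_drive_command(host: str, port: int, linear: float, angular: float) -> None:
    """Send a velocity command (m/s, rad/s) to the dock's controller."""
    msg = json.dumps({"cmd": "drive", "linear": linear, "angular": angular})
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(msg.encode("utf-8"))

# e.g. nudge the robot forward while turning slightly:
# send_drive_command("192.168.0.42", 9000, linear=0.2, angular=0.1)
```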
The phone is both the eyes and brain for the robot, controlling its navigation and tasks.
“As long as the phone is in the docking station, it is the robot,” Ramani said. “Whatever you move about and do is what the robot will do.”
To get the robot to execute a task that involves wirelessly interacting with another object or machine, the user simply scans that object’s QR code while programming, effectively creating a so-called “Internet of Things” network. Once docked, the phone (as the robot) uses information from the QR code to work with those objects.
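One plausible way this could work, sketched below, is for each machine’s QR code to encode a small payload with the device’s network endpoint, which the docked phone later addresses directly. The payload format and HTTP interface are assumptions, not the app’s published API.

```python
# Hypothetical sketch of the QR-code step: the QR payload (already
# decoded by a scanner) describes a device; the docked phone can then
# send it commands over the local network.
import json
from urllib.request import Request, urlopen

qr_payload = '{"device_id": "3d_printer_1", "endpoint": "http://192.168.0.50:8080"}'

def command_device(payload: str, command: str) -> int:
    """Send a command to the device described by a decoded QR payload."""
    device = json.loads(payload)
    req = Request(
        f"{device['endpoint']}/command",
        data=json.dumps({"cmd": command}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req, timeout=2.0) as resp:
        return resp.status  # e.g. 200 on success

# command_device(qr_payload, "start_print")
```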
The researchers demonstrated this with robots watering a plant, vacuuming and transporting objects. The user can also monitor the robot remotely through the app and make it start or stop a task, such as going to charge its battery or beginning a 3D-printing job. The app provides an option to record video automatically while the phone is docked, so that the user can play the footage back and evaluate a workflow.
Ramani’s lab made it possible for the app to navigate and interact with its environment as the user specifies by building on so-called “simultaneous localization and mapping” (SLAM) algorithms, which are also used in self-driving cars and drones.
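SLAM continuously estimates the phone’s position and orientation within a map it builds on the fly; a simple controller can then convert that pose estimate into drive commands toward the next programmed waypoint. The sketch below assumes a placeholder get_slam_pose() function standing in for whatever tracking API the phone provides, and the gains and thresholds are arbitrary illustrative values.

```python
# Sketch of how a SLAM pose estimate could drive waypoint following.
import math

def step_toward(pose: tuple[float, float, float],
                goal: tuple[float, float]) -> tuple[float, float]:
    """Return (linear, angular) velocities steering from pose toward goal.

    pose is (x, y, heading) in the SLAM map frame; goal is (x, y).
    """
    x, y, heading = pose
    dx, dy = goal[0] - x, goal[1] - y
    distance = math.hypot(dx, dy)
    if distance < 0.05:                  # within 5 cm of the goal: stop
        return 0.0, 0.0
    bearing = math.atan2(dy, dx)
    # wrap the heading error into [-pi, pi)
    error = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    linear = min(0.3, 0.5 * distance)    # cap forward speed at 0.3 m/s
    angular = max(-1.0, min(1.0, 2.0 * error))
    return linear, angular

# linear, angular = step_toward(get_slam_pose(), next_waypoint)
```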
A YouTube video is available at https://www.youtube.com/watch?v=_VCIHPDbcLk.