Figure: (Left) A mechanic wearing a tracked head-worn display performs a maintenance task on a Rolls Royce DART 510 Engine. (Right) The view through the head-worn display, showing information overlaid by the augmented reality system to assist the mechanic.
MIT Technology Review reports that in the not-too-distant future, it might be possible to slip on a pair of augmented-reality (AR) goggles instead of fumbling with a manual while trying to repair a car engine. Instructions overlaid on the real world would show how to complete a task by identifying, for example, exactly where the ignition coil was, and how to wire it up correctly.
A new augmented-reality system developed at Columbia University does just that, and testing by Marine mechanics suggests that it can help users find and begin a maintenance task in almost half the usual time.
A user wears a head-worn display, and the AR system provides assistance by showing 3-D arrows that point to a relevant component, text instructions, floating labels and warnings, and animated, 3-D models of the appropriate tools. An Android-powered G1 smartphone attached to the mechanic’s wrist provides touchscreen controls for cueing up the next sequence of instructions.
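The wrist-worn smartphone's job is essentially a step sequencer over an ordered task list. A minimal sketch of that idea follows; the class, field names, and example steps are hypothetical illustrations, not code from the ARMAR system:

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One maintenance instruction shown in the head-worn display."""
    label: str   # text instruction, floating label, or warning
    target: str  # component the 3-D arrow should point at

class InstructionSequencer:
    """Steps through an ordered task list in response to
    next/previous touch events from the wrist-worn controller."""
    def __init__(self, steps):
        self.steps = steps
        self.index = 0

    @property
    def current(self):
        return self.steps[self.index]

    def next(self):
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current

    def prev(self):
        if self.index > 0:
            self.index -= 1
        return self.current

# Hypothetical task list for illustration only.
task = InstructionSequencer([
    Step("Disconnect battery", "battery terminal"),
    Step("Remove ignition coil bolt", "ignition coil"),
    Step("Unplug coil connector", "coil connector"),
])
task.next()  # advance to the second step
```

The AR renderer would read `task.current` each frame to decide which component to highlight and which label to draw.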
Augmented Reality for Maintenance and Repair (ARMAR) explores the use of augmented reality to aid in the execution of procedural tasks in the maintenance and repair domain. The principal research objective of this project is to determine how real-time computer graphics, overlaid on and registered with the equipment being repaired, can improve the productivity, accuracy, and safety of maintenance personnel. Head-worn, motion-tracked displays augment the user’s physical view of the system with information such as sub-component labeling, guided maintenance steps, real-time diagnostic data, and safety warnings. The virtualization of the user and the maintenance environment allows off-site collaborators to monitor and assist with repairs. Additionally, the integration of real-world knowledge bases with detailed 3-D models provides opportunities to use the system as a maintenance simulator and training tool. This project features the design and implementation of prototypes integrating the latest in motion tracking, mobile computing, wireless networking, 3-D modeling, and human-machine interface technologies.
Henderson and Feiner first gathered laser scans and photography of the inside of the vehicle. They built a 3-D model of the vehicle’s cockpit and developed software for directing and instructing users in performing individual maintenance tasks. Ten cameras inside the cockpit were used to track the position of three infrared LEDs attached to the user’s head-worn display. In the future, the team suggests that it may be more practical for cameras or sensors to be worn by the users themselves.
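The tracking step described above amounts to recovering a rigid-body pose from three labeled marker positions: given where the LEDs sit on the display and where the cameras see them, solve for the rotation and translation of the wearer's head. The sketch below is a minimal, pure-Python illustration of that math (function names and coordinates are invented; ARMAR's actual tracker is not public), assuming the three LEDs are non-collinear and already identified:

```python
import math

def sub(a, b):
    return tuple(a[i] - b[i] for i in range(3))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def frame(p1, p2, p3):
    """Orthonormal frame (rows = axes) built from three non-collinear points."""
    e1 = norm(sub(p2, p1))
    e3 = norm(cross(e1, sub(p3, p1)))
    e2 = cross(e3, e1)
    return (e1, e2, e3)

def head_pose(ref_pts, obs_pts):
    """Rotation R and translation t such that obs = R @ ref + t,
    from the LEDs' reference layout and their observed positions."""
    F_ref = frame(*ref_pts)
    F_obs = frame(*obs_pts)
    # R = F_obs^T @ F_ref (frames store axes as rows)
    R = tuple(tuple(sum(F_obs[k][i] * F_ref[k][j] for k in range(3))
                    for j in range(3)) for i in range(3))
    Rp = tuple(sum(R[i][j] * ref_pts[0][j] for j in range(3)) for i in range(3))
    t = sub(obs_pts[0], Rp)
    return R, t
```

With only three markers the pose is fully determined by this frame construction; systems with more markers typically use a least-squares fit instead to average out measurement noise.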
Six participants carried out 18 tasks using the AR system. For comparison, the same participants also used an untracked headset (showing static text instructions and views without arrows or direction to components) and a stationary computer screen with the same graphics and models used in the headset. The mechanics using the AR system located and started repair tasks 56 percent faster, on average, than when wearing the untracked headset, and 47 percent faster than when using just a stationary computer screen.
One of ARMAR’s research directions has examined which interaction techniques are well suited to conducting AR-assisted procedural tasks. This research led to the creation of Opportunistic Controls, a class of user interaction techniques for AR applications that support gesturing on, and receiving feedback from, otherwise unused affordances already present in the domain environment. Opportunistic Controls leverage the characteristics of these affordances to provide passive haptics that ease gesture input, simplify gesture recognition, and give the user tangible feedback. 3-D widgets are tightly coupled with the affordances to provide visual feedback and hints about each control’s functionality. While not suitable for every user interface scenario, the technique may be a good choice for procedural tasks that demand eye and hand focus and preclude other interaction techniques.
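At its core, an Opportunistic Control maps a tracked fingertip onto command zones anchored to physical affordances (a raised bolt head, a housing edge) near the work area. A toy sketch of that spatial mapping follows; the class, coordinates, and commands are invented for illustration, and a real system would add dwell timing and gesture classification on top:

```python
from dataclasses import dataclass

@dataclass
class OpportunisticControl:
    """A command zone anchored to an otherwise unused physical
    affordance, giving the user passive haptic feedback for free."""
    name: str
    center: tuple  # zone center in tracker coordinates (meters)
    radius: float  # activation radius around the affordance
    command: str   # command fired when the fingertip touches the zone

def hit_test(fingertip, controls):
    """Return the command of the first control the tracked fingertip
    is touching, or None if it is touching none of them."""
    for c in controls:
        d2 = sum((f - p) ** 2 for f, p in zip(fingertip, c.center))
        if d2 <= c.radius ** 2:
            return c.command
    return None

# Hypothetical layout: two affordances near the repair area.
controls = [
    OpportunisticControl("bolt-head A", (0.10, 0.00, 0.30), 0.02, "next_step"),
    OpportunisticControl("housing edge", (0.25, 0.05, 0.30), 0.02, "prev_step"),
]
hit_test((0.105, 0.001, 0.301), controls)  # fingertip resting on bolt-head A
```

Because the zones coincide with real surfaces, the user feels the "button" under a fingertip, which is exactly the passive-haptic benefit the technique is after.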
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and a fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep-technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker, and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.