New Scientist – Drones are getting smaller and smarter, able to navigate and identify targets without GPS or human operators. Micro-aerial vehicles (MAVs) with uncanny navigation and real-time mapping capabilities could soon be zipping through indoor and outdoor spaces, running reconnaissance missions that others cannot. They would allow soldiers to look over hills, see inside buildings and inspect suspicious objects without risk.
Researchers led by Roland Brockers at the NASA Jet Propulsion Laboratory in Pasadena, California, have developed a MAV that uses a camera pointed at the ground to navigate and pick landing spots. It can even identify people and other objects. The system enables the drone to travel through terrain where human control and GPS are unavailable, such as a city street or inside a building.
After takeoff, the autonomous landing (a) and automated ingress (b) algorithms proceed in three phases: Detection, Refinement and Approach. (Figure from the paper "Autonomous landing and ingress of micro-air-vehicles in urban environments based on monocular vision", 12 pages.)
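The three-phase maneuver reads naturally as a small state machine. The phase names come from the paper; the transition conditions and function names below are illustrative assumptions, not JPL's implementation:

```python
from enum import Enum, auto

class Phase(Enum):
    DETECT = auto()    # search imagery for a candidate target (rooftop or opening)
    REFINE = auto()    # track the candidate and refine the target estimate
    APPROACH = auto()  # fly waypoints toward the target, then land or ingress

def next_phase(phase, target_found, estimate_converged):
    """Advance the three-phase maneuver (hypothetical transition logic)."""
    if phase is Phase.DETECT and target_found:
        return Phase.REFINE
    if phase is Phase.REFINE and estimate_converged:
        return Phase.APPROACH
    return phase  # otherwise stay in the current phase
```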
A human operator needs to tell the drone only two things before it sets off: where it is and where its objective is. The craft figures out the rest for itself, using the camera and onboard software to build a 3D map of its surroundings. It can also avoid obstacles and detect surfaces above a predetermined height as possible landing zones. Once it selects a place to put down, it maps the site’s dimensions, moves overhead and lands.
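The landing-zone step described above can be sketched as a scan over a rasterised elevation map: keep patches that sit above the height threshold, are large enough for the vehicle's footprint, and are nearly flat. The function name, thresholds and flatness test here are hypothetical assumptions, not the JPL code:

```python
import numpy as np

def select_landing_zone(elevation, min_height, footprint, flatness=0.05):
    """Return the (row, col) centre of the first elevated, flat patch.

    elevation  -- 2D array of heights in metres (assumed map format)
    min_height -- only surfaces above this height qualify
    footprint  -- patch side length in cells the vehicle needs to land
    flatness   -- maximum height spread tolerated within the patch
    """
    rows, cols = elevation.shape
    for r in range(rows - footprint + 1):
        for c in range(cols - footprint + 1):
            patch = elevation[r:r + footprint, c:c + footprint]
            # Candidate must be elevated everywhere and nearly planar.
            if patch.min() > min_height and np.ptp(patch) < flatness:
                return (r + footprint // 2, c + footprint // 2)
    return None  # no safe spot found
```

On a toy map with a raised platform, the scan picks a point on the platform; a real system would also score candidates and prefer the largest or nearest one.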
In a laboratory experiment, a 50 centimetre by 50 centimetre quadrotor craft equipped with the navigation system was able to take off, travel through an obstacle-filled indoor space and land successfully on an elevated platform. Brockers’s team is now testing the system in larger, more complex environments.
Vijay Kumar of the University of Pennsylvania in Philadelphia says that autonomous navigation and landing capabilities are unprecedented in a drone of this size.
“Typically the information required to locate a landing site and stabilise a vehicle over it is coming in at 100 times a second,” he says. “No one else has been able to design a system so small with this kind of processing power.”
It may not be long before the PD-100 Black Hornet (pictured), which is set to become the world’s smallest operational drone, gets an upgrade as well.
Getting smarter by the day (Image: Prox Dynamics)
As it stands, the PD-100, which has been in testing by Norwegian manufacturer Prox Dynamics since 2008, can navigate autonomously to a target area using onboard GPS or fly a pre-planned route. It can also be controlled by a human operator from up to a kilometre away, has an endurance of up to 25 minutes, can hover for a stable view, and can fly both indoors and out.
At just 20 centimetres long and weighing about 15 grams, the PD-100 makes the drone created by Brockers’s team look like a behemoth. And while it may look like a toy, Prox Dynamics claims it can maintain steady flight in winds of up to 5 metres per second. This has attracted the attention of the UK Ministry of Defence, which last year issued a request for the vehicle under the name “Nano-UAS”.
From the paper’s abstract: Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
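The planar-homography pipeline the abstract mentions starts by estimating a 3×3 homography from tracked feature correspondences on the (assumed planar) scene; the homography is then decomposed into rotation, translation and plane normal to produce approach waypoints. A minimal direct-linear-transform (DLT) sketch of the estimation step in NumPy, not the paper's code:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H (3x3) mapping src -> dst from >= 4 point pairs via DLT.

    Each correspondence (x, y) -> (u, v) contributes two linear
    constraints on the 9 entries of H; the solution is the null
    vector of the stacked constraint matrix (smallest singular vector).
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

Given H and the camera intrinsics, libraries such as OpenCV can recover the candidate rotations, translations and plane normals (e.g. `cv2.decomposeHomographyMat`); the paper's system uses that decomposition to pick waypoints toward the landing surface or opening.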
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and a guest on numerous radio shows and podcasts. He is open to public speaking and advising engagements.