There is a Less Wrong article about AI timeline predictions. I think far better sensors, cameras and motion detection will drive AI capabilities and bring robots out of the labs and effectively into the wild. Sensors and smart-tattoo tagging will make figuring out the environment easy for robots, rather than the blind sudoku puzzle robots currently face using inferior cameras and objects that are untagged and cannot be queried wirelessly.
Here is my timeline of the next few years in AI, robotics and sensors everywhere.
* Terascale neuromorphic chips (memristor synapses, nanostore memory with logic and memory together)
* Many billions and probably trillions of electronic tattoos (less than a penny each in most cases) with processing, sensors, memory, wireless
* 2000 qubit adiabatic quantum computers
* 50 gigapixel cameras for a few thousand dollars, a gigapixel for a few hundred
* Motion detection at ten-micron precision (next generation about the size of a quarter, added to smartphones and tablets), and slightly more expensive versions with the super-high-resolution cameras to cover large volumes
* Very capable robotics and car automation. Robotics still at the co-operation and assistance level for people. Some simple tasks can be fully automated.
* The Human Brain Project (if funded it would be done by then; if not, there are other DARPA and Asian projects of comparable scale)
* Memristors at exascale (supercomputer class), petascale for very affordable systems
* Sensors even more capable
* Electronic tattoos even cheaper and more capable.
* Deep commercialization and adoption of robotics.
* Beamed power and persistent UAVs
* Megascale or gigascale adiabatic quantum computers
I think that there will be big progress in increasing power and usefulness of AI over the next few years.
I am basing this on hardware improvements but not processing power per se (other than the emergence of memristors for memory and synapse like emulation and quantum computers).
I think it is important to make programming AI easier. Giving them eyesight equal to or better than a human's, or the ability to more easily match what we can do with vision, is a huge benefit.
The Leap Motion controller has ten-micron (0.01 mm) accuracy.
The Leap uses a number of camera sensors to map out a workspace of sorts — it’s a 3D space in which you operate as you normally would, with almost none of the Kinect’s angle and distance restrictions. Currently the Leap uses VGA camera sensors, and the workspace is about three cubic feet; Holz told us that bigger, better sensors are the only thing required to make that number more like thirty feet, or three hundred. Leap’s device tracks all movement inside its force field, and is remarkably accurate, down to 0.01mm. It tracks your fingers individually, and knows the difference between your fingers and the pencil you’re holding between two of them.
There has also been the parallel use of many cheap camera sensors to form multi-gigapixel images. This is currently high cost. However, at the cost/resolution sweet spots for different camera sensors, we will be able to scale up resolution with roughly linear cost increases. For example, if a megapixel sensor were available for $1 and the integration electronics could be added for $100, then a gigapixel would cost $1,100 and 100 megapixels about $200. The cost/benefit sweet spot might shift to 5 megapixels for $1.50; then a gigapixel would cost $400 (including the $100 electronics) and 100 megapixels would cost $130.
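The cost scaling above is easy to sanity-check in a few lines. The sensor prices and the $100 integration-electronics figure are the illustrative numbers from the example, not quotes from any vendor:

```python
import math

def array_cost(target_pixels, sensor_pixels, sensor_price, electronics=100.0):
    """Cost of tiling cheap sensors to reach a target resolution.

    Toy model: cost scales linearly with sensor count, plus a fixed
    integration-electronics cost shared by the whole array.
    """
    sensors = math.ceil(target_pixels / sensor_pixels)
    return sensors * sensor_price + electronics

# Scenario 1: 1 MP sensors at $1 each
print(array_cost(1e9, 1e6, 1.00))   # gigapixel: 1100.0
print(array_cost(1e8, 1e6, 1.00))   # 100 MP:    200.0
# Scenario 2: 5 MP sensors at $1.50 each
print(array_cost(1e9, 5e6, 1.50))   # gigapixel: 400.0
print(array_cost(1e8, 5e6, 1.50))   # 100 MP:    130.0
```

The fixed electronics cost is why the per-sensor sweet spot matters so much: at gigapixel scale the sensors dominate, while at 100 megapixels the $100 of integration electronics is most of the bill.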
The availability of cheaper, higher-resolution cameras will increase the workspace volumes of Leap Motion. A 50-gigapixel camera has four times the resolution of normal human vision.
Robots will be able to use high-resolution cameras and Leap Motion to gain detailed awareness of a large area around them at a very affordable price.
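A toy back-of-the-envelope (my own simplification, not Leap's actual method) for why sensor resolution sets workspace size: holding tracking accuracy fixed, the per-axis pixel count must grow roughly linearly with the workspace dimension, which is why bigger, better sensors directly buy a bigger workspace:

```python
def pixels_needed(workspace_m, accuracy_m, oversample=1.0):
    """Rough per-axis pixel count to resolve `accuracy_m` steps across
    a `workspace_m` span with one sensor (toy model: one pixel per
    resolvable step, times an optional oversampling factor)."""
    return workspace_m / accuracy_m * oversample

# Naively, 0.01 mm accuracy across a 1 m span would need ~100,000
# pixels per axis; sub-pixel interpolation and multiple sensors are
# what make Leap-style accuracy feasible with VGA-class hardware.
print(round(pixels_needed(1.0, 1e-5)))  # -> 100000
```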
This will be great for affordable robotic cars. The current electronics for robotic cars cost $50,000 or more.
The improved affordable sensors and cameras will make it easier for artificial intelligence to gain more benefit from pattern recognition.
The AWARE-10 5-10 gigapixel camera is in production and will be on-line later in 2012. Significant improvements have been made to the optics, electronics, and integration of the camera. Some are described here: Camera Evolution. The goal of this DARPA project is to design a long-term production camera that is highly scalable from sub-gigapixel to tens-of-gigapixels.
Thinfilm (Thin Film Electronics of Norway) makes printed, rewritable memories. These can also be integrated with logic elements, sensors, batteries, and displays for mass-market applications such as all-printed RFID tags. The proven high-volume roll-to-roll production of Thinfilm printed memories provides the platform for its Memory Everywhere™ vision. Thinfilm has previously announced technology partnerships to develop an inexpensive, integrated time-temperature sensor for use in monitoring perishable goods and pharmaceuticals.
Spray-on antennas can boost RFID range from 5 feet to 700 feet. Combine this with Thinfilm's printed electronics. There are other roll-to-roll super-cheap electronics as well.
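A rough link-budget sketch shows how large that 5-foot-to-700-foot jump is. Assuming simple free-space 1/r² falloff (a simplification on my part; passive RFID range in practice also depends on tag power-up thresholds and multipath):

```python
import math

def gain_needed_db(r_old_ft, r_new_ft):
    """Link gain (dB) needed to extend a read range under free-space
    1/r^2 falloff: received power drops 20 dB per decade of range."""
    return 20 * math.log10(r_new_ft / r_old_ft)

# A 140x range extension corresponds to roughly 43 dB of added
# antenna/link gain -- an enormous improvement for a passive tag.
print(round(gain_needed_db(5, 700), 1))  # -> 42.9
```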
Antenna sprayed on a tree
The smart tattoos will make it possible to “tag with wireless communication” most physical objects to make it easier for robots and self-driving systems and AI to interact with the physical world.
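A hypothetical sketch of what querying tagged objects could look like from a robot's side. The reader interface and the tag payload fields (object id, type, mass) are invented for illustration; no real tag protocol is implied:

```python
# Hypothetical sketch: a robot resolving nearby objects by querying
# wireless tags instead of inferring everything from vision.
from dataclasses import dataclass

@dataclass
class TagReply:
    object_id: str
    object_type: str
    mass_kg: float

class TagReader:
    """Stand-in for an RF front end that inventories nearby tags."""
    def __init__(self, replies):
        self._replies = replies

    def query_nearby(self):
        # In hardware this would run an inventory round over the air;
        # here it just returns the canned replies.
        return list(self._replies)

reader = TagReader([TagReply("cup-17", "ceramic cup", 0.3),
                    TagReply("door-2", "hinged door", 12.0)])
for tag in reader.query_nearby():
    print(tag.object_id, tag.object_type)
```

The point is that a tag answer replaces an entire vision pipeline step: the robot learns what the object is and how to handle it without having to recognize it.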
Memristors can be made into synapses. It seems HP and Hynix will commercialize terabit memristor memory in 2014 that will be faster and denser than flash.
Synapse-like memristors are something DARPA is working on for neuromorphic chips. By 2017 there should be terascale neuromorphic chips.
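A toy model of why memristors map onto synapses: the device's conductance (the synaptic "weight") drifts as charge flows through it, so repeated pulses strengthen or weaken the connection. The linear drift rule below is a textbook simplification of my own, not HP's actual device physics:

```python
def update_conductance(g, voltage, dt, mu=0.1, g_min=0.0, g_max=1.0):
    """Drift the memristor conductance by an amount proportional to
    the applied voltage-time product, clipped to the device limits."""
    g += mu * voltage * dt
    return max(g_min, min(g_max, g))

g = 0.5
for _ in range(3):                     # three identical potentiating pulses
    g = update_conductance(g, voltage=1.0, dt=1.0)
print(round(g, 2))  # -> 0.8, the "synapse" has strengthened
```

Because the weight is stored in the device itself, a memristor crossbar holds the synaptic state where the computation happens, which is the appeal for neuromorphic chips.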
Europe could fund the Human Brain Project at 1 billion euro, with the announcement in February 2013.
There will likely be multi-thousand-qubit adiabatic quantum computers (D-Wave Systems) within about 5 years. We are at 512 qubits now, and these systems have been used to train image recognition algorithms for Google.
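For concreteness, what an adiabatic machine optimizes is the ground state of an Ising/QUBO objective. A brute-force classical check over a tiny invented instance makes that objective concrete (real instances have hundreds of variables, where brute force is hopeless):

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of bit assignment x under QUBO matrix Q: sum of Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# A tiny made-up instance: diagonal terms bias single bits on,
# the off-diagonal term penalizes turning both on together.
Q = [[-1, 2],
     [ 0, -1]]

# Exhaustive search over all 2^n bit strings stands in for annealing.
best = min(product([0, 1], repeat=2), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```

D-Wave's hardware searches this landscape by slowly evolving a quantum system toward the objective's ground state instead of enumerating assignments.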
Robotics with mobile intelligence seems on the verge of breaking out to new levels: tablet-headed robots, and Heartland Robotics and Foxconn developing robotic arms in the range of $1000.