Question: Your lab has claimed a breakthrough in a synthetic vision system. How does this system work?
Answer: We have essentially developed a field-programmable gate array (FPGA) processor that has been specifically designed for artificial vision. It is a specialized device that effectively brings supercomputing power to synthetic vision, operating about 100 times faster than a laptop computer.
Question: So this system uses neural nets and conventional vision algorithms to allow the system to map its surroundings and recognize objects?
Answer: Yes. Computer vision is a computationally demanding task, so we developed an FPGA-based system that could be portable and inexpensive. We are teaching our system to spot objects, such as vehicles, animals, buildings, or even faces, in real time.
Question: How does this system compare to the human visual system?
Answer: We don't know precisely how the human visual system works, so our system is not based directly on human vision. But it should be capable of performing approximately the same tasks as a human visual system, even though it is substantially less sophisticated.
Question: Can this system be trained to recognize any object in real time?
Answer: Essentially, yes. You can either train the system from scratch on labeled examples, or you can have it learn in an unsupervised manner. If you want it to recognize a face, you show it 10,000 images of faces and 20,000 images without faces.
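The train-from-scratch approach described above can be sketched as a simple supervised binary classifier. This is a minimal illustration only, not the lab's actual system: the synthetic "face"/"non-face" feature vectors and the logistic-regression detector are assumptions standing in for real image data and the FPGA's learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for image feature vectors: "face" examples
# cluster around one mean, "non-face" examples around another.
faces = rng.normal(loc=1.0, scale=0.5, size=(200, 16))       # positive examples
non_faces = rng.normal(loc=-1.0, scale=0.5, size=(400, 16))  # negative examples

X = np.vstack([faces, non_faces])
y = np.concatenate([np.ones(200), np.zeros(400)])

# Simple logistic-regression detector trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "face"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# After training, the detector separates the two classes.
preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice a detector like this would be shown thousands of positive and negative examples, exactly as described above; the imbalance between positive and negative sets mirrors the 10,000-versus-20,000 split in the answer.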
Question: How portable is this system? Can it be used in cellphones?
Answer: The system is currently highly portable and weighs only ounces. The heart of the system is the FPGA, a single specialized chip that could easily fit into a cellphone. This system will scale with Moore's Law.
Question: Can this system analyze full-motion video in real time?
Answer: Due to the power of the hardware, it is capable of analyzing full-motion video in real time. A robot equipped with this system could recognize streets, vehicles, animals, trees, and people. This technology would also be well suited to assisted-living centers, where it could monitor body language for problems.
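Real-time full-motion video analysis comes down to keeping per-frame processing inside a fixed time budget. The sketch below illustrates that constraint under stated assumptions: the frames are random arrays, and `detect_objects` is a hypothetical placeholder for the detection step that would actually run on the FPGA.

```python
import time
import numpy as np

def detect_objects(frame):
    """Placeholder detector: in the real system this step would run on the
    FPGA; here it simply counts bright pixels to stand in for detection."""
    return int(np.sum(frame > 200))

FPS = 30
frame_budget = 1.0 / FPS  # seconds available per frame for real-time operation

rng = np.random.default_rng(1)
# Ten synthetic 320x240 grayscale frames standing in for a video stream.
frames = [rng.integers(0, 256, size=(240, 320), dtype=np.uint8) for _ in range(10)]

results = []
worst = 0.0
for frame in frames:
    start = time.perf_counter()
    results.append(detect_objects(frame))
    worst = max(worst, time.perf_counter() - start)

# Any frame exceeding the budget would break real-time operation.
print(f"processed {len(results)} frames; worst frame "
      f"{worst*1000:.2f} ms against a {frame_budget*1000:.1f} ms budget")
```

The same loop structure applies whether the stream comes from a robot's camera or a monitoring feed; only the detector and the frame source change.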
Question: Are there any plans to equip robots with this system?
Answer: This technology is already being used in robots. My collaborator at New York University is an expert on these algorithms. He has implemented this system in a robot, which can traverse paths. This technology would directly benefit any robot, be it a drone, a UAV, a driverless car, or even a humanoid robot.
Question: Your lab has done work on silicon-on-sapphire. What is the advantage of this over silicon CMOS?
Answer: Our lab makes silicon-on-sapphire wafers for various analogue integrated circuits. Silicon-on-sapphire is a form of silicon-on-insulator (SOI) technology. It offers higher performance than bulk silicon CMOS but is considerably more expensive. The semiconductor industry will eventually migrate to this technology, and our e-lab at Yale already uses it extensively.
Question: Your lab has developed biologically inspired algorithms for allowing real time object recognition. How does this system work?
Answer: We are trying to derive inspiration from the human visual system. We don't yet have the technology to copy the human visual system directly, but we do know that human recognition involves a hierarchical system, and we are trying to replicate that hierarchy using algorithms.
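The hierarchical idea can be sketched as stacked filter-rectify-pool stages, in the spirit of convolutional models of the visual cortex. This is a minimal toy illustration under assumptions of my own (hand-picked edge filters, a random input image), not the lab's actual algorithms.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding), written out explicitly."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def pool2x2(feature_map):
    """2x2 max pooling: keeps the strongest response in each neighbourhood,
    trading spatial resolution for tolerance to small shifts."""
    h, w = feature_map.shape
    h, w = h - h % 2, w - w % 2
    fm = feature_map[:h, :w]
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Stage-1 filters: simple edge detectors, analogous to early visual cortex.
edge_h = np.array([[1.0, 1.0], [-1.0, -1.0]])
edge_v = edge_h.T

rng = np.random.default_rng(2)
image = rng.random((16, 16))

# Each stage: filter, rectify (keep positive responses), then pool.
stage1 = [pool2x2(np.maximum(convolve2d(image, k), 0)) for k in (edge_h, edge_v)]
# Stage 2 combines stage-1 maps into more abstract, lower-resolution features.
stage2 = [pool2x2(np.maximum(convolve2d(fm, edge_h), 0)) for fm in stage1]

print([fm.shape for fm in stage1], [fm.shape for fm in stage2])
```

Each stage shrinks the spatial map while making the features more abstract, which is the essential property of the hierarchy described above.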
Question: It would seem that GPUs would be well suited to synthetic vision.
Answer: Our FPGAs are superior to any CPU or GPU for this work. GPUs are powerful, but they aren't optimized for the things that we do. The FPGA that we use has been specifically designed for this task, so it is substantially more efficient; custom-designed FPGAs will always outperform general-purpose ICs on specific tasks. Our FPGAs require only about 10 watts, and within a few years they will dissipate only 1 watt.
Question: It would seem that this technology would be very valuable for security monitoring.
Answer: Yes, it can augment a human operator, thereby reducing the need for human monitoring. This system could simultaneously monitor multiple video streams, looking for certain objects or behaviors. But I am equally excited about this technology being used for self-driving cars. I think that this is the primary enabling technology for driverless cars and pilotless planes.
Question: You've developed an image sensor array capable of monitoring networks of neurons in real time. To what extent can this system be scaled up?
Answer: Our ultimate goal is to create a cognitive system that can take actions based on what the system sees. But that would require more than computer vision, that would require some form of thought. It will take significant advances in both hardware and software before we get to that level.
Question: How do you see synthetic vision being used in 2020?
Answer: Within a decade, driverless cars will become available. You'll simply tell the car where you want to go and it will take you there. By 2020 I hope to see the first general-purpose domestic robots in operation. Synthetic vision will be standard on robots, which will be able to process text, speech, and vision in real time. Whether these vision systems come from a company I found or from another company, they will form the backbone of future robotic systems.