Ramez gives examples of obstacles to achieving an intelligence explosion:
* important problems like computational chemistry have exponentially increasing complexity
– if designing intelligence is an N² problem, an AI that is 2x as intelligent as the entire team that built it (not just a single human) would only be able to design a new AI that is about 70% as intelligent as itself (2x the design effort buys √2 ≈ 1.41x the intelligence, and 1.41 is about 70% of 2)
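This diminishing-returns argument can be sketched numerically. A minimal model (my own illustration, not Naam's code): assume designing intelligence N costs N² units of effort, so effort E buys intelligence √E, and a designer can apply effort proportional to its own intelligence.

```python
import math

def successor(x):
    """Intelligence of the AI that an intelligence-x designer can build,
    assuming designing intelligence N costs N^2 effort (so effort E buys
    sqrt(E)) and a designer applies effort proportional to its intelligence."""
    return math.sqrt(x)

x = 2.0  # an AI 2x as intelligent as the team that built it (team = 1.0)
print(successor(x) / x)  # ~0.707: its successor is only ~70% as intelligent

# Iterating the loop shows the recursion fizzling rather than exploding:
for generation in range(10):
    x = successor(x)
print(round(x, 4))  # successive generations converge back toward 1.0
```

Under this toy scaling, each generation is weaker relative to its designer, so the feedback loop converges to the baseline instead of taking off.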
* There are already entities with vastly greater than human intelligence working on the problem of augmenting their own intelligence. A great many, in fact. We call them corporations. And while we may have a variety of thoughts about them, not one has achieved transcendence.
Let’s focus on a very particular example: the Intel Corporation. Intel uses the collective brainpower of tens of thousands of humans, and probably millions of CPU cores, to... design better CPUs! (And also to create better software for designing CPUs.) Those better CPUs will run the better software to make the better next generation of CPUs. Yet that feedback loop has not led to a hard-takeoff scenario.
* should Intel, or Google, or some other organization succeed in building a smarter-than-human AI, it won’t immediately be smarter than the entire set of humans and computers that built it
Neuromorphic systems that try to replicate neurons, synapses, and other brain structures are at least one to two decades from achieving the scale of the human mind, and would then likely need another decade or three to run as well as human minds.
D-Wave Systems is doubling qubits every year, releasing a new system every two years with four times the qubits of the previous generation.
* 2013: 512 qubits
* 2015: 2,048 qubits
* 2017: 8,192 qubits
* 2019: 64,000 qubits
* 2021: 256,000 qubits
* 2023: 1 million qubits
* 2025: 4 million qubits
* 2027: 16 million qubits
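The roadmap amounts to a simple geometric extrapolation. A hedged sketch (my own arithmetic, not D-Wave's): qubit count quadrupling every two years from the 512-qubit 2013 system. Note that a strict 4x rule from 512 yields 32,768 qubits in 2019, below the roughly 64,000 listed, so the listed figures evidently assume a faster step at some point.

```python
def projected_qubits(year, base_year=2013, base_qubits=512):
    """Qubits under a strict 'four times the qubits every two years' rule."""
    assert (year - base_year) % 2 == 0, "roadmap steps in two-year increments"
    return base_qubits * 4 ** ((year - base_year) // 2)

for year in range(2013, 2029, 2):
    print(year, projected_qubits(year))
```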
D-Wave could stumble, and their systems may prove unable to provide sufficient speedup. But there are a few other approaches to quantum computing that could also scale.
Silicon quantum dots
* yields of a tiny fraction of 1% have jumped to 80%
* they were producing 100 atom-scale quantum dot structures per minute
* with continuous operation, roughly 100,000 atom-scale quantum dot structures could be built in a day
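A quick arithmetic check of the two throughput figures above (my own back-of-envelope, not from the paper):

```python
per_minute = 100                  # atom-scale structures fabricated per minute
minutes_per_day = 60 * 24
per_day_max = per_minute * minutes_per_day
print(per_day_max)                # 144,000 at a perfect 100% duty cycle

# The ~100,000/day figure implies roughly a 70% duty cycle:
print(round(100_000 / per_day_max, 2))
```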
* they also lay out the case why their dangling-bond approach is better than Michelle Simmons’ phosphorus atom qubits (it is the 21st century, so there are competing atom-scale quantum dot and atom-scale qubit approaches)
Our building block consists of a silicon dangling bond on a H-Si(001) surface, which has been shown to act as a quantum dot. First the fabrication, experimental imaging, and charging character of the dangling bond are discussed. We then show how precise assemblies of such dots can be created to form artificial molecules. Such complex structures can be used as systems with custom optical properties, circuit elements for quantum-dot cellular automata, and quantum computing. Considerations on macro-to-atom connections are discussed.
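The abstract mentions quantum-dot cellular automata (QCA). In QCA the basic logic primitive is a three-input majority gate; pinning one input to 0 or 1 turns it into AND or OR. A minimal truth-table sketch of that idea (illustrative only, not the paper's code):

```python
def majority(a, b, c):
    """Three-input majority vote, the basic QCA logic gate."""
    return int(a + b + c >= 2)

def qca_and(a, b):
    return majority(a, b, 0)   # one input pinned to 0 acts as AND

def qca_or(a, b):
    return majority(a, b, 1)   # one input pinned to 1 acts as OR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, qca_and(a, b), qca_or(a, b))
```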
There are proposals for modular trapped-ion systems, photonic (laser light) quantum systems, and diamond-vacancy approaches. These and other approaches would be greatly enhanced by increasing capabilities in molecular nanotechnology, DNA nanotechnology, synthetic biology, and other types of nanoscale control.
More money for potential breakthrough approaches to Artificial Intelligence
There is also more money from Google, Facebook and venture capital for promising new AI startups. One of the most talked about VC deals in March, for example, was a $40 million round for Vicarious FPC, an artificial intelligence company that had so much hype around it that the biggest names of the tech world – including Mark Zuckerberg and Elon Musk (and Ashton Kutcher) – lined up to participate. And that comes just two months after Google made a monster $400 million bet on DeepMind, an AI start-up based in London. In fact, Google’s Larry Page was so concerned about keeping the DeepMind deal away from Facebook that he took a major role in leading the deal himself.
There are also other signs of renewed interest around AI. Facebook recently opened up a new artificial intelligence lab, and quickly landed one of the top minds in “deep learning” in the nation. Google already had one of the smartest minds in the AI business – Ray Kurzweil – on its books, and has been making all kinds of lateral moves in fields like robotics that could benefit from AI. And, then, at this year’s TED Conference in Vancouver, one of the highlights of the event was the announcement of a new XPRIZE for the creation of an artificially intelligent robot capable of giving a TED Talk that earns a standing ovation from the crowd.
So what’s changed in the past 12 months to make AI a potential hot new trend in the valley?
One is the realization that the creation of a comprehensive AI solution such as IBM Watson – as amazing as it has been – may simply be too expensive to be economically viable over the long haul. Even IBM has been forced to admit that it needs to rethink how it does AI. The company wants Watson to eventually become a $10 billion a year business, but thus far, Watson has only been able to generate $100 million in new business.
The second factor is a realization by companies such as Google and Facebook that they can use AI to solve smaller, real-world problems.
Artificial intelligence and quantum computing can be applied to improve the core businesses of Google, Facebook and other companies (Google search and Facebook’s social graph).
D-Wave Systems has received about $100 million in funding and has had over $40 million in quantum computer sales.
New approaches to artificial intelligence, artificial general intelligence and quantum computing seem on track to get billions in funding each year as important contributors to internet and other businesses.
A profitable technology progression that might lead to artificial general intelligence will likely combine advanced quantum computing, molecular nanotechnology and other technologies such as trillions of sensors.