Are quantum computers needed for AGI?

D-Wave Systems CTO Geordie Rose theorizes about the usefulness, and possible necessity, of quantum computers for human-level and higher artificial intelligence.

Summarizing: are humans somehow getting good approximate solutions to problems that are NP-hard? If so, would quantum computers be useful for developing alternative methods that match or exceed these innate human capabilities?

What got me thinking about intelligence in the first place was the observation that many of the tasks that seem to be difficult for computers, but relatively easy for biological brains, are most naturally thought of as NP-hard optimization problems: basically anything that involves complex pattern matching, such as recognizing speech, inference, relational database search, vision, and learning.
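To see why "NP-hard optimization" implies trouble for exact computation, consider the classic traveling salesman problem: the only known exact approaches do work that grows exponentially with the number of cities. Here is a minimal brute-force sketch on a made-up five-city distance matrix (illustrative data, not from the post):

```python
from itertools import permutations

# Hypothetical symmetric distance matrix for 5 cities (illustrative only).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of a closed tour visiting each city exactly once."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(n):
    """Exact answer by checking all (n-1)! tours -- exponential in n."""
    best = min(permutations(range(1, n)),           # fix city 0 as the start
               key=lambda p: tour_length((0,) + p))  # to skip rotated duplicates
    return (0,) + best, tour_length((0,) + best)

tour, length = brute_force_tsp(5)
print(tour, length)
```

Five cities means only 24 tours to check, but 20 cities already means about 1.2 x 10^17 tours: this is the scaling wall that makes heuristics, and perhaps quantum hardware, interesting.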

Another thing that seems interesting is this: take any algorithm that scales linearly with input size. For the problem this algorithm solves, can you think of a single example where a human could beat a computer? I can’t think of one.
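The point above is easy to make concrete. Finding the maximum of a list is a textbook linear-time task, one pass and one comparison per element, and a computer disposes of a million elements in a fraction of a second, a scale no human can touch:

```python
import random

def find_max(values):
    """Linear scan: one comparison per element, O(n) total work."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

# A million random inputs -- trivial for silicon, hopeless for a human.
data = [random.random() for _ in range(1_000_000)]
print(find_max(data))
```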

Finally: biological brains operate with small amounts of input data (five senses). For example, if we look at a photograph, the total data we receive is quite small.
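A quick back-of-the-envelope calculation supports the "small input" claim: even a detailed photograph is only a few megabytes of raw pixel data. (The resolution figures here are illustrative assumptions, not from the post.)

```python
# Raw size of an uncompressed full-HD colour image (assumed resolution).
width, height, channels = 1920, 1080, 3  # pixels wide, pixels high, RGB
bytes_per_pixel_channel = 1              # 8 bits per colour channel
raw_bytes = width * height * channels * bytes_per_pixel_channel
print(raw_bytes / 1e6, "MB")             # about 6.2 MB of raw data
```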

Is it possible that the notion of complexity classes is important for asking the right questions about intelligence? Here’s a rough outline of an idea.

1. Categorize all of the problems that biological brains have to solve as well-posed computational problems.

2. The subset of these problems that can be solved with algorithms that scale polynomially can always be done better with silicon than with bio-brains. Note that this isn't true in general; it requires the observation that the input problem instance size is small for bio-brains (e.g., the number of pixels in a photograph).

3. There are problems that have survival value to solve that are harder than P.

4. Brains evolved excellent heuristics that provide quick approximate solutions to the problems harder than P, and it is currently on that subset that bio-brains beat silicon.

5. In a hierarchy of difficulty, some problems will be too hard for bio-brains to evolve heuristics for. This means that the primary differentiator in this picture of things will be the “easiest hard problems”. The hard hard problems are too hard for evolution to create hardware heuristics for.

6. The easiest hard problems (the group most likely to have good hardware heuristics and bad software heuristics) are NP-hard optimization problems.
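The "quick approximate solutions" of point 4 can be illustrated with the simplest traveling-salesman heuristic: always move to the nearest unvisited city. It runs in polynomial (roughly O(n^2)) time but carries no optimality guarantee. (The distance matrix is made-up illustrative data, not from the post.)

```python
# Hypothetical symmetric distance matrix for 5 cities (illustrative only).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def nearest_neighbour_tour(start=0):
    """Greedy heuristic: repeatedly visit the closest unvisited city."""
    unvisited = set(range(len(dist))) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[here][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour):
    """Length of the closed tour, including the edge back to the start."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

tour = nearest_neighbour_tour()
print(tour, tour_length(tour))
```

On this toy instance the greedy tour has length 28, slightly worse than the true optimum of 26 for this matrix, but it needed only a handful of comparisons instead of enumerating all (n-1)! tours. That fast-but-approximate trade-off is exactly the niche the post argues evolution filled with hardware heuristics.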

I tend to think now that breakthroughs in machine intelligence are going to come from algorithms, either new classical algorithms or the capability to run quantum algorithms on quantum hardware, and not from Moore's Law advances.
