Artificial Intelligence, Brain Emulation and Singularity Analysis

Anders Sandberg discusses his view of the summit as a speaker and a participant. Anders provides a view into the follow-up discussions that occurred at lunch and other breaks. He also provides good analysis of the Artificial Intelligence talks from the perspective of someone in the field of AI and brain emulation. (H/T Michael Anissimov at Accelerating Future)

I of course talked about whole brain emulation, sketching out my usual arguments for how complex the undertaking is. Randall Koene presented more on the case for why we should go for it, and in an earlier meeting Kenneth Hayworth and Todd Huffman told us about some of the simply amazing progress on the scanning side. Ed Boyden described the amazing progress of optically controlled neurons. I can hardly wait to see what happens when this is combined with some of the scanning techniques. Stuart Hameroff of course thought we needed microtubule quantum processing; I had the fortune to participate in a lunch discussion with him and Max Tegmark on this. I think Stuart's model suffers from the problem that it seems to explain only global gamma synchrony; the quantum part doesn't seem to do any heavy lifting. Overall, among the local neuroscientists there was some discussion about how many people in the singularity community make rather bold claims about neuroscience that are not well supported; even emulation enthusiasts like me get worried when the auditory system gets reduced to just a signal processing pipeline.

Stephen Wolfram and Gregory Benford talked about the singularity and especially about what can be "mined" from the realm of simple computational structures ("some of these universes are complete losers"). During dinner this evolved into an interesting discussion with Robin Hanson about whether we should expect future civilizations to look just like rocks (computronium), especially since the principle of computational equivalence seems to suggest that there might not be any fundamental difference between normal rocks and posthuman rocks. There is also the issue of whether we will become very rich (Wolfram's position) or relatively poor posthumans (Robin's position); this depends on the level of possible coordination.

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity driven by emulated people (as in Robin's economic scenarios) has, given current knowledge, a better chance of being human-friendly than one driven by AGI. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory, we should support WBE. We could not reach any useful consensus on whether AGI or WBE would come first: WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happen they will happen close together, but no drivers seem strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case. There were also some discussions about whether supplying AI researchers with heroin, and philosophers to discuss with, would reduce risks.

J. Storrs Hall's analysis of the Singularity Summit and its topics.
Riffing on robocars

Robocars would save us a trillion dollars of wasted time at the current amount of driving, and they are likely to enable more than a trillion dollars of totally new transportation. And that's a stimulus that would actually work.

Why we need Artificial General Intelligence as soon as possible.

One of the best arguments for developing AI as fast as possible and putting it into use in the real world without delay: the humans making these decisions are messing up big time. We don't need superintelligence to do better, just human-level perception combined with rational decision-making; rational decision-making, I might add, that we already know how to do, and believe and understand is the right way to decide, but simply don't bother to apply to most of our decisions. It's a low bar.
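To make Hall's point a bit more concrete: the "rational decision-making that we already know how to do" is, in its textbook form, expected-utility maximization. The sketch below is mine, not Hall's, and the options, probabilities, and payoffs are hypothetical; it only illustrates how mechanical the calculation is once the numbers are written down.

```python
# Minimal sketch of textbook expected-utility decision-making.
# All options and outcome distributions here are made up for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# Two hypothetical policy choices with assumed outcome distributions.
options = {
    "option_a": [(0.7, 10.0), (0.3, -5.0)],  # likely modest gain, some downside
    "option_b": [(0.2, 50.0), (0.8, -2.0)],  # long shot with a big payoff
}

for name, dist in options.items():
    print(f"{name}: expected utility = {expected_utility(dist):.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)
```

Under these made-up numbers the calculation favors option_b (expected utility 8.4 versus 5.5); the hard part in practice is supplying honest probabilities and utilities, not running the arithmetic.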

“AI — when and how?”

I claim, though, that we do have an existence proof for superintelligence: it's not humans, but human societies. Put a thousand (emulated) brains in a box, and crank up the clock speed to whatever you can. Build in all the communications substrate they might need, and turn them loose. You can try different forms of internal organization (literally try them, experimentally) and give the internal brains the ability to mate electronically, have children, and teach them in various ways. Some forms of human organization, for example the scientific community over the past 500 years, have clearly demonstrated the ability to grow in knowledge and capability at an exponential rate.

In what way could you argue such a box would not be a superintelligence? Indeed, some very smart people such as Marvin Minsky believe that this is pretty much the way our minds already work. And yet this “Society of Minds” would be a model we intuitively understand. And it would help us understand that, in a sense, we have already constructed superintelligent machines.

The real question isn’t whether people are stupid. The real question is whether people make decisions that matter a lot incorrectly.

We’ve replaced kings — human beings — with artificial rule-based decision procedures based on vote-counting and other random esoterica. Likewise the governance of large business enterprises. We don’t need friendliness in markets or politics, we need competence.

Other Reviews and Analysis
Ronald Bailey has an analysis and synthesis of the whole Singularity Summit.

Peter Thiel began his talk on the economics of the singularity by asking the audience to vote on which of seven scenarios they are most worried about. (See Reason’s interview with Thiel here.) The totals below are my estimates from watching the audience as they raised their hands:

A. Singularity happens and robots kill us all, the Skynet scenario (5 percent)
B. Biotech terrorism using something more virulent than smallpox and Ebola combined (30 percent)
C. Nanotech grey goo escapes and eats up all organic matter (5 percent)
D. Israel and Iran engage in a thermonuclear war that goes global (25 percent)
E. A one-world totalitarian state arises (10 percent)
F. Runaway global warming (5 percent)
G. The singularity takes too long to happen (30 percent)

Thiel argued that the last one—that the singularity is going to take too long to happen—is what worries him.

Dresden Codak’s review of the Singularity Summit.