What I Think Are the More Important and Less Important Aspects of Technological Singularity Scenarios

Jamais Cascio has a talk about the Singularity in which he argues that the most important aspect is what concerned people do about the development of Artificial General Intelligence technology: the politics of the different groups and the reactions of governments and markets.

Michael Anissimov at Accelerating Future notes that Jamais is wrong in several of his criticisms of the Singularity Institute and the work to achieve Friendly AI.

Cascio says that many of the people working on AGI at the Singularity Institute are biased and do not appropriately acknowledge the way that their cultural and political biases may work themselves into an AI's programming, and that they view themselves as objective scientific types free from bias.

I find this surprising. Looking back at the intellectual history of the Friendly AI research program, the entire book-length treatment of the topic published in 2001 addresses dozens of questions of the form, "What if the programmers get X wrong?" or "What if the programmers are biased about X?" Essentially, Eliezer Yudkowsky approaches the problem assuming that the programmers will mess up practically everything, and asks, "How can we still get a friendly outcome?" For instance, there is a section on layered mistake detection. The work goes on for hundreds of pages on this very theme; in fact, it is the unifying theme of the document. Shorter summaries of the ideas can be found at "SIAI Guidelines on Friendly AI" and "Features of Friendly AI".
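To make "layered mistake detection" concrete, here is a minimal sketch of the general pattern: several independent checks, each written on the assumption that the other layers may be buggy, must all approve an action before it proceeds. The function names and predicates below are my own illustrative stand-ins, not SIAI's actual design.

```python
# Hypothetical sketch of layered mistake detection: a proposed action
# must pass every independent check, so a bug in any one layer's
# approval logic can still be caught by the others.
from typing import Callable, List

Check = Callable[[str], bool]

def layered_approval(action: str, layers: List[Check]) -> bool:
    # A single veto from any layer blocks the action.
    return all(layer(action) for layer in layers)

# Illustrative layers; the predicates are deliberately crude placeholders.
def matches_stated_goal(action: str) -> bool:
    return "harm" not in action

def within_resource_bounds(action: str) -> bool:
    return len(action) < 1000  # stand-in for a real resource check

def escalate_if_novel(action: str) -> bool:
    return True  # stand-in: route genuinely novel actions to human review

layers = [matches_stated_goal, within_resource_bounds, escalate_if_novel]
print(layered_approval("deploy routine update", layers))  # True
print(layered_approval("action causing harm", layers))    # False: vetoed
```

The layering is the point: no single check is trusted to be correct, which is the same spirit as asking "What if the programmers get X wrong?" at every step.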

My views:
I disagree that there has been, or will be, much effective overall control of the results of technological development by governments and people. Most such efforts tend to be reactive and take a long time to build a coalition that can effect policy. Even when there are regulations and laws, people still break the laws, and the regulations often do not control or prevent what they were intended to. An example was the introduction of the Sarbanes-Oxley rules in response to the Enron scandal; those rules did nothing to mitigate or prevent the later banking and credit problems.

There are people who criticize the Singularity Institute for working on the technical problem of developing Friendly AI instead of, say, thinking about political and social responses. This makes no sense to me. Would these same people have said that those working on the Y2K computer problem should have focused on political and social responses instead of organizing IT departments and programmers to fix the programs and to develop tools that made the process easier and more reliable? The Singularity Institute is concerned that the computer programs and systems built for AI will end up with the technical problem of not doing what we want in general, and of being dangerous. It is perfectly valid to work on a set of attempted solutions in programming, algorithms, and mathematics.

Just as with the Y2K bug, maybe the problems will not be as bad as feared; the efforts to resolve and mitigate the Y2K bug probably did help. Friendly AI, however, is a far tougher software design problem than fixing Y2K bugs in IT systems.

It is also clearly tougher to convince people that advanced AI is a potentially serious issue. Y2K was a very simple thing to explain: most software stored dates with only two digits for the year, and programs would error out or misbehave if we did not fix them. There are critical systems, such as power plant, military, medical, and emergency services software, that you do not want down or operating improperly.
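To show how mechanically simple the Y2K defect was, here is a minimal sketch of the classic two-digit-year failure and one common style of fix, date windowing. The function names and the pivot value are illustrative assumptions, not any particular system's code.

```python
# The classic Y2K defect: years stored as two digits, with arithmetic
# done directly on them.

def age_two_digit(birth_yy: int, current_yy: int) -> int:
    # Fine in 1999: age_two_digit(60, 99) -> 39.
    # Broken in 2000: age_two_digit(60, 0) -> -60.
    return current_yy - birth_yy

def age_windowed(birth_yy: int, current_yy: int, pivot: int = 30) -> int:
    # A common remediation: "windowing" maps a two-digit year to a
    # century based on a pivot (the pivot of 30 here is an assumption;
    # real systems picked their own cutoffs).
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(current_yy) - expand(birth_yy)

print(age_two_digit(60, 99))  # 39, correct before 2000
print(age_two_digit(60, 0))   # -60, the Y2K failure mode
print(age_windowed(60, 0))    # 40, correct after the fix
```

Windowing was only one of several remediation techniques, but it shows why the fix, though labor-intensive across millions of lines of code, was conceptually trivial compared to specifying what an advanced AI should and should not do.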

Slate looked at the history of the response to, and pre-emption of, the Y2K problem.

But then something strange happened: Everyone started worrying about Y2K. Over the next few years, people across the tech industry took up the cause. In 1996, Sen. Daniel Patrick Moynihan asked the Congressional Research Service to investigate the issue, and he became alarmed by the findings. In a letter to President Clinton, Moynihan urged a huge federal response to address what he called the “Year 2000 Time Bomb.” Moynihan clearly expected the worst: “You may wish to turn to the military to take command of dealing with the problem,” he wrote to Clinton.

Bill Clinton’s second term isn’t remembered as a model of comity between the executive and legislative branches. On the issue of Y2K, though, the Republican Congress and the Democratic White House were on the same page: They all pushed for a huge federal task force. The White House appointed a Y2K coordinator, John Koskinen, who headed an effort that spanned every cabinet agency and the military. (Koskinen is now a high-ranking official at Freddie Mac.) Following the government’s lead, just about every business in the country took up the cause of heading off the Y2K crisis.

If the Technological Singularity or large-scale technological disruptions come anywhere close to having some of the expected effects, reactive responses would need to be at least Y2K or credit-crisis scale in scope, and would have short time windows for dealing with bad effects.

For Those Who Do Not Believe in the Technological Singularity

You do not have to believe in the Technological Singularity. But consider what minimum level of AI, robotics, and other technology would be enough to badly disrupt society, employment, and other aspects of civilization.

We just got bitten because a small number of people in critical parts of the financial system got their risk management and assessment wrong. The Y2K bug was a seemingly trivial software design problem and collective blind spot.

Could some form of automation or business process shift decimate your industry or force you to go through a lot of retraining? Computer programmers have already been through this:

* They had to deal with outsourcing to India and other countries.
* They had to deal with shifts in demand for programming languages.
* Web 2.0 and similar systems shrank the number of staff needed to launch and sustain an internet business.

Sample Possibilities:
Staffing levels at telecommunications companies could be devastated by a successful entry by Google (Gizmo5, Google Voice, and a lot of currently dark fiber).

If there were very advanced additive manufacturing systems, far beyond the current capabilities of what is today a $1 billion industry but short of full nanofactory levels, at what level would certain segments of a manufacturing supply chain be displaced, or the need for transporting goods be reduced? It does not have to be all products, just the ones that your company makes.

Minimize the Abstract and Hypothetical – Find Useful Predictive Choices and Relevant Trends

I think attempts should be made to anticipate and project forward in more targeted ways.

Scenarios that assume no significant nanotechnology development and no rapid manufacturing systems are wrong.

DNA nanotechnology and guided self-assembly are clearly going to become very useful and capable.
Additive manufacturing and powerful printable electronics are going to be big and disruptive.
Carbon nanotubes and graphene will have industrial scale production and will be used for materials and electronics.

There will be zettaflop or more powerful computers.
There will be large scale quantum computer systems.

Impotent Reactive Efforts Are the Rule and the Y2K Bug Response the Exception

* People against nuclear weapons
* People against landmines
* People against air pollution (millions dead each year, and sickness increases medical costs by about 30%, according to the World Health Organization)

The ranking of these efforts by attention and perceived importance is not strongly correlated with actual deaths and actual harm.

Getting Hung Up on "What Is Intelligence and Consciousness"

When the Singularity is debated, there is often an excessive focus on uncertainties about intelligence and consciousness. I think the more important issues are:

* How powerful and effective can the technologies become?
* When will they reach critical levels of development and deployment?
* What processes, system architectures, combinations, and methods maximize the effectiveness of technological systems?