Computerworld discusses the impact of Sputnik on the development of computer technology, the internet, and high-risk/high-payoff technology research.
The article makes the case that the United States science and technology research community has returned to a culture that is less likely to pursue high-risk/high-payoff research.
There is a struggle between those who want more high-risk, high-payoff scientific and technological research and development and those who favor only timid, incremental goals, and who ridicule even the description of a high-payoff technological possibility.
DARPA officials are defending themselves against the charge that they are no longer interested in high-risk, high-payoff research, a stance that would leave the United States open to being surprised by another nation's unchallenged success in a high-payoff research area.
“DARPA continues to be interested in high-risk, high-payoff research,” says DARPA spokesperson Jan Walker.
Walker offers the following projects as examples of DARPA’s current research efforts:
– Computing systems able to assimilate knowledge by being immersed in a situation
– Universal [language] translation
– Realistic agent-based societal simulation environments
– Networks that design themselves and collaborate with application services to jointly optimize performance
– Self-forming information infrastructures that automatically organize services and applications
– Routing protocols that allow computers to choose the best path for traffic, and new methods for route discovery for wide area networks
– Devices to interconnect an optically switched backbone with metropolitan-level IP networks
– Photonic communications in a microprocessor having a theoretical maximum performance of 10 TFLOPS (trillion floating-point operations per second)
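One item on the list, routing protocols that let computers choose the best path for traffic, rests on classic shortest-path computation. A minimal sketch using Dijkstra's algorithm (the network topology and link costs here are hypothetical, not taken from any DARPA program):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: lowest-cost path from src to dst.

    graph maps each node to a dict of {neighbor: link_cost}.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            # Reconstruct the path by walking predecessors back to src.
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return cost, path[::-1]
        for neighbor, link_cost in graph.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float('inf')):
                dist[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    return float('inf'), []

# Hypothetical four-node network with weighted links.
net = {
    'A': {'B': 1, 'C': 4},
    'B': {'C': 2, 'D': 5},
    'C': {'D': 1},
}
print(shortest_path(net, 'A', 'D'))  # (4, ['A', 'B', 'C', 'D'])
```

Real routing protocols such as OSPF run essentially this computation at each router over a shared map of link costs; the research goals above push further, toward networks that discover and optimize such routes cooperatively with the applications they carry.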
The Wall Street Journal has run pieces by journalists arguing against artificial intelligence projects that aim at greater-than-human AGI.
There are those, like Dale Carrico, who argue against even talking about “superlative technology”: potentially high-payoff technology such as molecular nanotechnology and greater-than-human artificial general intelligence.
There are many others who argue against projects with aggressive goals in energy, space, and nanotechnology. Often these are the same people who lament the lack of adequate technological solutions for climate change, peak oil, and other potential societal problems.
Many observers indicate that there is a culture that encourages timid technological goals:
Farber sits on a computer science advisory board at the NSF, and he says he has been urging the agency to “take a much more aggressive role in high-risk research.” He explains, “Right now, the mechanisms guarantee that low-risk research gets funded. It’s always, ‘How do you know you can do that when you haven’t done it?’ A program manager is going to tell you, ‘Look, a year from now, I have to write a report that says what this contributed to the country. I can’t take a chance that it’s not going to contribute to the country.'”
A report by the President’s Council of Advisors on Science and Technology, released Sept. 10, indicates that at least some in the White House agree. In “Leadership Under Challenge: Information Technology R&D in a Competitive World,” John H. Marburger, science advisor to the president, said, “The report highlights in particular the need to … rebalance the federal networking and IT research and development portfolio to emphasize more large-scale, long-term, multidisciplinary activities and visionary, high-payoff goals.
According to the Committee on Science, Engineering and Public Policy at the National Academy of Sciences, U.S. industry spent more on tort litigation than on research and development in 2001, the last year for which figures are available. And more than 95% of that R&D is engineering or development, not long-range research, Lazowska says.
Charles M. Herzfeld, a former director of ARPA, speaks on the old and new situations:
We created the whole artificial intelligence community and funded it. And we created the computer science world. When we started [IPTO], there were no computer science departments or computer science professionals in the world. None.
There certainly has been a change, and it’s not for the better. But it may be inevitable. I’m not sure one could start the old ARPA nowadays. It would be illegal, perhaps. We now live under tight controls by many people who don’t understand much about substance.
What was unique about IPTO was that it was very broad technically and philosophically, and nobody told you how to structure it. We structured it. It’s very hard to do that today.
Interviewer Question: But why? Why couldn’t a Licklider come in today and do big things?
Because the people that you have to persuade are too busy, don’t know enough about the subject and are highly risk-averse. When President Eisenhower said, “You, Department X, will do Y,” they’d salute and say, “Yes, sir.” Now they say, “We’ll get back to you.” I blame Congress for a good part of it. And agency heads are all wishy-washy. What’s missing is leadership that understands what it is doing.
If the system does not fund thinking about big problems, you think about small problems.
Thus the big ideas for big problems have gone mostly outside the system.
– SENS, Strategies for Engineered Negligible Senescence (for radical life extension), raises private funds
– The Singularity Institute and companies working on AGI are outside mainstream government and corporate funding
– The nanofactory collaboration is privately funded, with some use of university resources controlled by the researchers
– There was a small UK government-funded project for software control of matter
– Robert Bussard's nuclear fusion project was funded by the Navy
– Tri Alpha Energy's colliding-beam fusion effort raised over $40 million in private funding
– The NASA Institute for Advanced Concepts program was cancelled
I think at least 20% of research funds (government and corporate) should be devoted to high-risk/high-payoff research. This is a model that Google has used with substantial success.
There is also the problem of false negatives in the selection of technology development projects: choosing not to pursue a project that in fact would have succeeded and should have been funded.
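The cost of false negatives can be made concrete with a simple expected-value comparison. All the numbers below are illustrative assumptions, not data from the article: a portfolio of purely incremental projects versus one that reserves a slice of funding for long shots.

```python
# Illustrative expected-value comparison of research portfolios.
# Every probability and payoff below is a made-up assumption.

def expected_payoff(projects):
    """Sum of success_probability * payoff over (p, payoff) pairs."""
    return sum(p * payoff for p, payoff in projects)

# 10 incremental projects: 90% success odds, modest 1.2x payoff each.
timid = [(0.9, 1.2)] * 10

# 8 incremental projects plus 2 long shots: 10% odds, 50x payoff each.
mixed = [(0.9, 1.2)] * 8 + [(0.1, 50.0)] * 2

print(round(expected_payoff(timid), 2))  # 10.8
print(round(expected_payoff(mixed), 2))  # 18.64
```

Under these assumed numbers, rejecting the long shots (the false negatives) costs the portfolio nearly half of its expected payoff, even though each individual long shot will usually fail.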
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.