Elon Musk, CEO of Tesla and SpaceX, has donated $10 million to the Future of Life Institute to fund a global research program aimed at keeping AI beneficial to humanity.
The Future of Life Institute is a volunteer-run research and outreach organization working to mitigate existential risks facing humanity. It is currently focusing on potential risks from the development of human-level artificial intelligence.
The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI researchers Stuart Russell and Francesca Rossi. “I love technology, because it’s what’s made 2015 better than the Stone Age”, says MIT professor and FLI president Max Tegmark. “Our organization studies how we can maximize the benefits of future technologies while avoiding potential pitfalls.”
The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Monday, January 19. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here). “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, says FLI co-founder Viktoriya Krakovna.
“This donation will make a major impact”, said UCSC professor and FLI co-founder Anthony Aguirre: “While heavy industry and government investment has finally brought AI from niche academic research to early forms of a potentially world-transforming technology, to date relatively little funding has been available to help ensure that this change is actually a net positive one for humanity.”
“That AI systems should be beneficial in their effect on human society is a given”, said Stuart Russell, co-author of the standard AI textbook “Artificial Intelligence: A Modern Approach”. “The research that will be funded under this program will make sure that happens. It’s an intrinsic and essential part of doing AI research.”
Skype founder Jaan Tallinn, one of FLI’s founders, agrees: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”
Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI. One such meeting was held in Puerto Rico last week with many of the open-letter signatories.
Contacts at Future of Life Institute:
Max Tegmark: firstname.lastname@example.org
Meia Chita-Tegmark: email@example.com
Jaan Tallinn: firstname.lastname@example.org
Anthony Aguirre: email@example.com
Viktoriya Krakovna: firstname.lastname@example.org
Contacts among AI researchers:
Prof. Tom Dietterich, President of the Association for the Advancement of Artificial Intelligence (AAAI), Director of Intelligent Systems: email@example.com
Prof. Stuart Russell, UC Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: A Modern Approach: firstname.lastname@example.org
Prof. Bart Selman, co-chair of the AAAI presidential panel on long-term AI futures: email@example.com
Prof. Francesca Rossi, Professor of Computer Science, University of Padova and Harvard University, president of the International Joint Conference on Artificial Intelligence (IJCAI)
Prof. Murray Shanahan, Imperial College: firstname.lastname@example.org
Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter
Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.
In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.
A list of signatories to the letter is available.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.