Experts Debating AI Safety

Robin Hanson and Scott Aaronson (a computer science professor and quantum computing expert) debate AI safety.

Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He is known for his work on idea futures and prediction markets, and he was involved in the creation of the Foresight Institute’s Foresight Exchange and DARPA’s FutureMAP project. Robin has also debated Eliezer Yudkowsky on AI/AGI safety; the collection of articles they exchanged back and forth is available at LessWrong.

Robin Hanson and Scott discuss rates of technological change, the AI “foom” scenario, and whether the goals of intelligent systems drift or converge. They also discuss the range of possible intelligences and the range of possible futures.

The discussion covers more than just whether some kind of complete doom scenario is likely; there are other AI impact areas, such as economic disruption and an increased level of AI-enabled crime, along with impacts from other accelerating technologies.

Recent fundamental flaws found in Go-playing AI indicate that current neural nets do not have many (most? all?) concepts properly encoded and do not have consciousness, even though they encode very useful and complex patterns. There is a discussion of whether proper concept understanding emerges as systems get much faster, larger, and better.

Hanson has a series of AI safety discussions, which is topical given the rapid developments in generative AI and large language models.

Hanson has debated AGI doomer and friendly AI proponent Eliezer Yudkowsky.

3 thoughts on “Experts Debating AI Safety”

  1. OK – the elephant in the room is autonomous agents. ChatGPT and even its improved versions seem very useful and safe. But maybe we should consider holding off on playing with the shiny new toy of autonomous agents. We could work toward global agreements to limit them to heavily monitored R&D labs that prioritize safety, hopefully to eventually release limited versions that benefit everyone fairly equally. We’d need to discuss how or whether nations could be convinced to apply this to military AI. Maybe it would be possible to design military autonomous agents with the basic goal to ‘always retain informed and voluntary human oversight’.

  2. The issue is not about whether AI is safe, but what the motives of the AI creators, users, and regulators are. At the end of the day, AI is a tool in the hands of an individual, company, government, or other party – even if given far-reaching ‘freedom’. Since I believe there are far more garbage people in this world than good, and far more malevolent dictator-destroyers than benevolent supporter-creators, we will have many disasters. So, as with nuclear technology, the value of the results is just the values of the user. Since the world is filled with repression, self-inflicted poverty, and a widespread predator-prey mentality, there will also be AI used for selfish benefit, control, and domination. However, for every world-class tech, bio, energy, or economic breakthrough, there are likely to be 10 gross repression/censorship/control uses. Is it worth it? Well, I wasn’t convinced that most of the world was that great before AI, so we can presume it is the lesser of two evils for where it brings us to in a few decades.
