AI Global Risk Discussion Between Elon and UK Prime Minister

Elon Musk says there is a non-zero critical risk from advanced AI. A software package crashing on your laptop is not a critical risk. Elon deals with regulators for space rockets and cars, and he sees a need for a referee for advanced AI: there are referees for sports games, and a similar referee role is needed to act for public safety.

11 thoughts on “AI Global Risk Discussion Between Elon and UK Prime Minister”

  1. Creating and deploying a biological weapon requires a hell of a lot more than having someone (or some machine) that is smart on your payroll. If that were all it took, lots of these crazy states and well-funded terrorist groups would have done biological attacks. Many have tried. We are still here. All the intelligence in the universe doesn’t replace the need to do finicky experiments with glassware, animals, and extreme amounts of protective equipment to keep the people doing the work and transporting the pathogens alive to complete the task.

    You can’t phone a bio weapon in from an LLM.

    None of our fears justify “regulating” this profoundly promising technology by people who don’t understand it. And that includes AI researchers and entrepreneurs who are capable of mistaking the tech for “human level” or “intelligent”. It’s a simulation. It doesn’t think.

    I won’t even touch the idea that it needs to be stopped because it will become actually intelligent and start conquering our land and carrying off our womenfolk.

    • 90% agree. Regulation as bureaucracy and political football has done way more harm than good in realizing technological solutions (the only solutions that matter) to the world’s problems, and it is inefficient. If the same level of ‘regulation’ had been applied to nuclear power, cars, agriculture, and manufacturing, we would still be barely post-steam-engine now, with regular famines and crop failures. Quantifying risk is the greatest of all challenges, and agreeing to undertake reasonable risk has pushed knowledge amazingly, and mostly safely, forward.
      That being said: one good thing about regulation is transparency, as with patents and competition. There is value in knowing ‘what’s out there’ – even though some states *cough*cough*China simply use them as cheat sheets. Many industries thrive on seeing what the leading edge is and finding their own solutions.
      That being said: AI is different. Its level of complexity indicates an ability to analyze and synthesize – pretty much moderate-level design/engineering. Truly new and profound concepts? Probably not. ‘Thinking’ is kind of a nonsense term – like ‘life’ – a navel-gazing exercise at best, whose spectrum and definition are so broad and full of qualifiers as to be effectively useless. I do believe that if an evil technology (doomsday device, etc.) is ever advanced, its development was likely undertaken through an AI process.
      Simple answer: just register massive AI projects and provide updates, and let the hordes of political zombies file their own grievances and reports. The irony of bureaucracy: sometimes it’s good to have, since it trips itself up sometimes and lets things move – e.g. COVID vaccines.

  2. Elon can do no wrong. Right?

    Eh, hmmm, Twitter/X. His private life is an excrement show. So, take his words with a block of salt.

    • You preferred Twitter before Elon’s takeover? You really feel that the relationship that existed between Twitter and various intelligence agencies – and that likely exists now with other major social media companies – is kosher?

      Removing political arguments and positions on politically relevant subjects from public circulation and smearing their proponents – at the request of intelligence agencies or otherwise – is not okay. Elon’s takeover greatly cut that back on a major public forum, providing at least one highly visible outlet where certain views could be put before the public and others challenged, which otherwise would not have existed with such a public reach. Whatever other issues there are with turning the company’s finances around and finding a new path forward, they don’t negate that this is a good thing.

    • Somehow Musk’s “private life” (which I have no clue how you’d have insight into) seems irrelevant to this post.

      • It shows he is good at some things and horribly inept at others. In the end he is just a man and not a messiah.

    • Pfttt. Imagine the internet by 2000 had allowed any individual to create a biological weapon. I bet that by now you, in the wasteland of reality, would be sitting around the campfire with all the other survivors, moaning about the lack of government common sense re: regulation… so yes, please.

      • See Snake Oil Baron’s reply above. The bioweapon info has always been available, yet it takes far more than a set of instructions. We’ve done just fine without regulation. Otherwise we’d still be using AOL or CompuServe walled gardens with little space for innovation.

        • I can read his reply, and I observe:
          1. Every single leader of the LLM labs, from Altman to Hassabis to Geoffrey Hinton, is directly saying bioweapons are a threat. The “they don’t think” line is irrelevant: a chess computer does not “think” – it’s a simulation – AND Magnus Carlsen would be lucky to draw a single game against it. It will do the same with PhD-level research, including bioweapons, including manufacturing them and the safety protocols.
          2. If you are right and there is no risk, slowing AI development will just mean adding a few years to the up-ramp.
          3. If you and Snake Oil Baron are wrong? What would you, for real, in person, say to the hundreds of millions of dead? “Ehhh… maybe oops, sorry about that, I was wrong. Still, what’s a few hundred million dead, eh?”

Comments are closed.