AGI Expert Peter Voss Says AI Alignment Problem is Bogus

AGI expert Peter Voss says the AI alignment problem is bogus. In this conversation he walks through the fundamentals of Artificial Intelligence, the differences between Artificial Intelligence and Artificial General Intelligence, his take on the dangers of AGI, growing his previous venture from zero to 400 employees and taking it through an IPO, and the mission behind his new company, Aigo.ai.

He thinks that the scenario of an AI system getting massively misaligned and killing all people is a bogus problem.

An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one. The distinction between misaligned AI and incompetent AI has been formalized in certain contexts.
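To make that distinction concrete, here is a minimal toy sketch (my own illustration, not from the interview; the scenario, names, and numbers are invented): an agent that competently maximizes the objective it was given while failing the objective we intended.

```python
# Toy illustration of misalignment vs. incompetence (hypothetical example).
# Intended objective: clean every room.
# Specified proxy objective: maximize the count of cleaning actions performed.

rooms = {"kitchen": "dirty", "hall": "dirty", "office": "dirty"}

def proxy_reward(log):
    """Reward the agent was actually given: number of cleaning actions."""
    return len(log)

def intended_success(state):
    """What we really wanted: every room ends up clean."""
    return all(v == "clean" for v in state.values())

def competent_proxy_agent(state, steps=10):
    """Competently maximizes the proxy: re-cleans the same room over and
    over, because each action earns reward whether or not it was needed."""
    log = []
    for _ in range(steps):
        state["kitchen"] = "clean"   # one room, cleaned repeatedly
        log.append("clean kitchen")
    return log

log = competent_proxy_agent(rooms)
print("proxy reward:", proxy_reward(log))                   # high: 10
print("intended objective met:", intended_success(rooms))   # False
```

The agent is not incompetent; it is very good at the objective it was given. It is misaligned because that objective was not the one we intended.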

Peter thinks such an outcome would represent a ludicrously massive failure of intelligence. He also believes that all AI developers are designing in motivations that are commercially and functionally aligned with the specified goals and instructions.

AI alignment is a subfield of AI safety, the study of building safe AI systems. Other subfields of AI safety include robustness, monitoring, and capability control. Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking. Alignment research has connections to interpretability research, robustness, anomaly detection, calibrated uncertainty, formal verification, preference learning, safety-critical engineering, game theory, algorithmic fairness, and the social sciences, among others.

This comes down to making sure we do not deploy or trust broken AI. Results need to be verified, there needs to be transparency and visibility into operation, and the improvement loops and productivity loops must not be set loose but kept under control; a minimal sketch of such a gated loop follows.
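Here is one way such a controlled loop might look (this is my sketch, not Voss's or Aigo.ai's actual architecture; every function name and threshold below is a hypothetical stand-in):

```python
# Sketch of a controlled improvement loop: no candidate improvement is
# deployed unless it passes an explicit verification gate AND a human
# sign-off. Names and thresholds are illustrative assumptions.

def verify(candidate) -> bool:
    """Placeholder verification gate: run tests/audits on the candidate."""
    return candidate["test_pass_rate"] >= 0.99

def human_approves(candidate) -> bool:
    """Placeholder for transparency: a person inspects logs and signs off."""
    print(f"review requested for {candidate['id']} "
          f"(pass rate {candidate['test_pass_rate']:.0%})")
    return candidate["test_pass_rate"] == 1.0  # stand-in for a real decision

def controlled_improvement_loop(candidates):
    deployed = []
    for c in candidates:
        # The loop is never "set loose": nothing ships without both gates.
        if verify(c) and human_approves(c):
            deployed.append(c["id"])
    return deployed

print(controlled_improvement_loop([
    {"id": "model-v1", "test_pass_rate": 0.95},   # fails verification
    {"id": "model-v2", "test_pass_rate": 1.00},   # passes both gates
]))
```

The point of the design is simply that the feedback loop has explicit checkpoints rather than running autonomously.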

Other Views on AI Risks

Andrew Ng stated in 2015 that AI existential risk is “like worrying about overpopulation on Mars when we have not even set foot on the planet yet”.

Peter Describes Some Flaws With ChatGPT

ChatGPT is extremely impressive at engaging in free-form conversations on a wide range of general-knowledge topics. This makes it easy to imagine what it might do for the call center. However, its inherent limitations are that it produces unpredictable and frequently incorrect or meaningless responses, and that its black-box operation cannot be reliably enhanced or constrained.

2 thoughts on “AGI Expert Peter Voss Says AI Alignment Problem is Bogus”

  1. He thinks that the scenario of an AI system getting massively misaligned and killing all people is a bogus problem.

    Well, he would say that, wouldn’t he.
    If it helps him sleep at night, good for him.

  2. If conceptual thinking is essential to AGI, ChatGPT easily abstracts from examples to common concepts. E.g., give it “apple, car tire, sun, ferris wheel – what do these have in common?” and it’ll say their common factor sounds like being round-shaped, but then it will likely also suggest some other possible associations. I’d bet Aigo can’t do that on its own.

    Sounds like Aigo could benefit from integration with GPT. GPT would give them a lot more depth – e.g., suggesting a task list for any general goal. Maybe Aigo could fact-check GPT against other sources, and otherwise apply logical constraints or provide the motivation and guidance toward implementing goals.
