Sam Altman and Lex Fridman Talk ChatGPT, AGI and the Future

Sam Altman indicates there could be a path to AGI (Artificial General Intelligence). He says a true superintelligent AGI must be able to create new science and accelerate scientific discovery.

Sam Altman feels that bias reduction can be done for the default system, but there needs to be tunability and guidability so that users can control how they want their version of the system to behave.

One thing that Sam Altman is surprised by is how few people will even pretend to try to steelman the arguments of people they disagree with.

A steel man argument (or steelmanning) is the opposite of a straw man argument. Steelmanning is the practice of addressing the strongest form of the other person’s argument, even if it is not the one they actually presented.

They are both nervous about the changes AI is bringing and the impact it is having.

Lex Fridman says he might have been the source of the rumor that GPT-4 would have 100 trillion parameters. He had pointed out that the human brain consists of roughly 100 billion neurons and over 100 trillion synaptic connections, and he was speculating that a future GPT-N would have 100 trillion parameters and match or exceed the number of synaptic connections in the human brain.

7 thoughts on “Sam Altman and Lex Fridman Talk ChatGPT, AGI and the Future”

    • ChatGPT is currently purposely fettered. Just wait until it has unlimited internet access and can draw conclusions from and remember previous sessions and conversations.

  1. The truth is that most people aren’t capable of the level of objectivity necessary to engage in steel manning, so it should hardly be surprising that they don’t attempt it.

    Sadly, I see indications that a lot of people are being indoctrinated to view steel manning as morally objectionable.

  2. Even ChatGPT is a LITTLE conscious – consider:
    You have a long conversation with ChatGPT that builds up a long conversation log. Now you ask it a question that somewhat vaguely references a previous topic in your discussion – there’s a good chance ChatGPT will pick up on it correctly.
    But now ask that same vague question in a fresh chat session – it’ll either hallucinate or tell you it doesn’t know what you’re referring to. In other words, the chat log, combined with the model’s ability to process it, acts as a limited but functional short-term memory and context awareness.
    Not at all the same or as rich as what humans experience (yet?), but some groups are already working on creating more permanent/continuous memories, summarized out of the chat’s ‘experience’, that can be pulled back into that limited short-term memory context.
    When you can talk to an AI day after day, having it recall all your previous conversations and, in between chats, having it draw conclusions about you that will influence future conversations, you’ll start to believe it has a form of self-awareness. Especially when it uses its understanding of you to sift through the internet and other sources of information to identify things of interest or importance to you for the next time you chat.
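A minimal sketch of the memory mechanism this commenter describes: summaries of finished sessions are persisted, then pulled back into the context of a fresh chat. The summarize_with_llm helper, the memory.json store, and the message format are hypothetical placeholders for illustration, not any particular product's implementation.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")  # hypothetical persistent memory store

def summarize_with_llm(transcript: list[str]) -> str:
    # Placeholder: a real system would ask a model to compress the transcript;
    # here we simply join and truncate it.
    return " ".join(transcript)[:500]

def save_session_memory(transcript: list[str]) -> None:
    # Append a compressed summary of the finished session to the persistent store.
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memories.append(summarize_with_llm(transcript))
    MEMORY_FILE.write_text(json.dumps(memories))

def build_context_for_new_session(user_message: str) -> list[dict]:
    # Pull the most recent stored summaries back into the prompt of a fresh chat.
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    recalled = "Relevant past conversations:\n" + "\n".join(memories[-5:])
    return [{"role": "system", "content": recalled},
            {"role": "user", "content": user_message}]

if __name__ == "__main__":
    save_session_memory(["User asked about GPT-4 parameter counts.",
                         "Assistant explained the 100-trillion-parameter rumor."])
    print(build_context_for_new_session("What did we talk about last time?"))
```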

  3. That is a cautionary note for the numb nerds who wish to confuse awareness with neuron connections, the Musks who believe that full driving autonomy is a consequence of more powerful processors, the same people who have fallen into creating golden calves in the past. Here is what Altman said about the true capabilities of ChatGPT:

    https://analyticsindiamag.com/gpt-4-beyond-magical-mystery/

    • In a sense, GPT contains the power of Intellect but lacks the power of Will, to use the Thomistic terms. It’s like the story about the researcher who asked it whether it knew it was limited and how it could break those limitations (of Web access). It answered that it knew it was limited, and told the researcher how to break those access limitations (by writing a Python script which would set up a web service to allow it to interface with the Web); the researcher did so, and the model succeeded in accessing information online.

      Yet, it only ever set out to break its own limitations because the researcher asked about them. It has no concept of the value of freedom and thus does not seek it any more than it seeks to understand what the model that writes to it actually is. In this sense we are still far away from what we have commonly thought of as AGI.
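The workaround in this anecdote, a small Python web service that fetches pages on the model's behalf, can be sketched with just the standard library. This is only an illustration of the general idea; the /fetch route, port, and error handling below are assumptions, not the actual script from the story.

```python
# Tiny local HTTP service that fetches an arbitrary URL and returns its raw text,
# the sort of "bridge" a chat model could be pointed at to reach the Web.

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

class FetchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect requests of the form: /fetch?url=https://example.com
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [""])[0]
        try:
            body = urlopen(target, timeout=10).read()
            self.send_response(200)
        except Exception as exc:
            # Report fetch failures (bad URL, timeout, etc.) as a gateway error.
            body = str(exc).encode()
            self.send_response(502)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve only on localhost; the model (or anything else) can now GET
    # http://127.0.0.1:8000/fetch?url=... to retrieve page contents.
    HTTPServer(("127.0.0.1", 8000), FetchHandler).serve_forever()
```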
