Sam Altman Says AGI Soon and AGI Will Help People Do A LOT More

Sam Altman believes that AGI will be created soon. He described his vision of an AGI "World of More."

He predicts AGI will replace fewer jobs than people expect, and that the world will change less than people expect.

It will be an incredible tool for productivity, magnifying what people can do by a factor of two or five.

It will enable some things we could not do at all before.

He sees a new vision of the future, one that OpenAI did not really anticipate when they started.

Sam is very thankful the technology [AI] did go in this direction.

An LLM is a tool that magnifies what humans do. It lets people do their jobs better.

AI does parts of jobs. Jobs will change, and of course some jobs will go away entirely.

But human drives are so strong, and the way society works is so resilient.

Sam thinks AGI will be developed in the "reasonably close-ish" future.

It will change the world much less than we all think.

It will change jobs much less than we all think.

You hear a coder say, "Okay, I’m two times more productive, or three times more productive."

They say they could never code again without this tool.

We will not run out of demand. People can simply do more, and will expect more.

7 thoughts on “Sam Altman Says AGI Soon and AGI Will Help People Do A LOT More”

  1. Of course Sam would say this, whether he believes it or not. Fear would be a motivating factor for controlling AI, which is the last thing that he wants.

  2. I don’t trust Sam Altman. If you are an employer with labor costs at the top of your expense list, won’t you replace your workers with AI robots? If you own a trucking fleet, won’t you replace your drivers with self-driving vehicles rather than keep all your employees and have them drive five times as many trucks? And do we need five times more trucks driven? And if so, why not five times more AI drivers?

    Neither he nor any of the other players has been completely candid about his firing. Why are they hiding that information from us? If it is a Q* track to superintelligence, isn’t that a potentially very dangerous development?

    He’s biased towards the positive benefits of AI, to the extent that he is trying to push these tools forward as quickly as possible even though they may prove as dangerous as they are powerful.

    • It’s ironic to worry about the development of AI while being born into a world with nuclear weapons.

      What will happen, will happen, regardless of your feelings or mine. We are along for the ride, and just as powerless to direct it as our ancestors before us.

      The only thing that’s for absolute certain is that the future will become the present, and the present will become the past.

      • It’s not ironic. It’s a position.

        We don’t believe that a nuclear war is unavoidable. Just the opposite: starting a nuclear war would be such a bold move that things will probably never escalate that far.

        AI, by contrast, seems like an unstoppable force: competition pushes multiple developers to race to be first and to profit from each release, while the effect on most people could be harmful unless society is completely redefined, which would itself be a traumatic event. And that is under the scenario where AI does not go rogue and attack us directly, but is merely used by some humans to gain advantage over others in a race for control.

        The worries are justified.

        • The irony is that you, Zanstel, or you, DougSpace, esteemed internet commenters, believe that by posting a concerned comment on nextbigfuture you have the power or placement in history to effect any kind of change whatsoever in regard to AI, or really anything else.

          But don’t let me burst your bubble of impotent dreamland.

          • I do more than post my concerns in the comments. I have tracked down contact info for some of the leaders in AI and have sent emails urging a contained demonstration approach. No response yet, but the stakes are so high that I’ll continue trying. I encourage others to try to do something practical to make a difference.

            The contained demonstration idea is that sufficiently strong action is unlikely to take place until there is a serious catastrophe. Think about how, before 9/11, we were unwilling to arm drones and take Osama bin Laden out. We also allowed box cutters on planes and let hijackers into the cockpit. But 9/11 changed all that.

            The contained demonstration idea is for an AI research team to establish a highly contained lab and then attempt to be the first to achieve a clearly dangerous (but controlled) result designed to shock the powers that be into taking this problem far more seriously. The goal would then be containment: AI research confined to internationally controlled labs, with new algorithms not published openly and model weights likewise withheld. AI could be offered as a service, but not released into the wild.

            • It’s amusing that you think your concerned emails will outweigh the monumental profit potential of the technology.

              Let me know how it goes for ya
