AI and Programming Legend Stephen Wolfram Talks ChatGPT and the Future

Stephen Wolfram created WolframAlpha, the leading online computational knowledge engine for math and physics answers. He also built one of the first plugins for ChatGPT.

He starts by explaining that ChatGPT and other large language models are able to generate human-like text and responses by making extremely good statistical predictions of the next word in a given sentence. These models were trained on text data from the world wide web and are now using pictures and video as well.
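As a rough intuition for what “statistical prediction of the next word” means, here is a toy sketch in Python (far simpler than what ChatGPT actually does; the tiny corpus is made up for illustration). A bigram model counts which words followed which in its training text and samples the next word in proportion to those counts:

```python
import random
from collections import defaultdict, Counter

# Toy stand-in for web-scale training text (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    if not counts:                       # dead end: word never seen mid-text
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

Scaling this idea up from word-pair counts on one toy sentence to a neural net with billions of parameters trained on much of the web is, in very loose terms, the jump from this sketch to a large language model.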

Other Nextbigfuture AI Articles

Generative AI Models Overview March 2023

GPT4 Released 3 Weeks Ago. 48% of Companies have started replacing humans with AI.

AGI Expert Ben Goertzel Gives His Updated View of Paths to Superintelligence.

Continuing with Stephen Wolfram’s Views

Neural nets have been around since the 1940s. Many other approaches in AI were tried and failed or have not reached full development. Neural nets have had a number of innovations and improvements, but the fundamentals have been maintained. There are values and weights in the neural nets, and those values and weights change in response to training. Layers and layers of these values and weights are able to represent all of the complexity of knowledge. Reaching the scale of current neural nets (hundreds of gigabytes and hundreds of billions of parameters, comparable to the brain’s roughly 100 billion neurons) is what enables the amazing current abilities.
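To make “values and weights that change in response to training” concrete, here is a minimal sketch assuming only NumPy: a toy two-layer net learning the XOR function. Every name and number in it is illustrative; it says nothing about ChatGPT’s actual architecture, only the decades-old fundamentals described above:

```python
import numpy as np

# Training data: XOR, a function a single layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # first layer of weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # second layer of weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: each layer turns values into new values via its weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2))   # should end up close to [[0], [1], [1], [0]]
```

Current large language models are, at heart, the same recipe with hundreds of billions of weights instead of a few dozen.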

There are many situations where you cannot jump ahead to a solution. You must follow some sequence of steps that are irreducible computations. These irreducible computations have some minimum of complexity and scale that is needed to solve them. Reaching the scale to solve more irreducible computations means emergent behaviors start appearing as systems scale.
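Wolfram’s classic illustration of an irreducible computation is the Rule 30 cellular automaton (this example is added here for illustration, not taken from the talk summary): no known formula predicts a distant row of cells short of actually computing every intermediate row.

```python
# Rule 30 cellular automaton: each new cell is left XOR (center OR right).
# No known shortcut jumps ahead; you must run every step in sequence.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                      # start from a single black cell
for _ in range(15):                # each row depends on the previous one
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```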

Wolfram thinks there will be many new categories of work. It will be an explosion and proliferation of many narrow categories.

What do we want the AIs to do?

Wolfram talks about creating an intermediate language that would be output by the AI. This would make it clear and unambiguous how the AI is interpreting a question and what solution it is recommending. This would bring more clarity and reduce misinterpretation of what we ask of AI.
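One way to picture such an intermediate language (a hypothetical sketch; the structure below is invented for illustration, though Wolfram presumably has the Wolfram Language in mind): the AI first emits an explicit, inspectable representation of how it understood the request, which a human or a verifier program can check before anything is executed.

```python
import json

# Hypothetical intermediate representation: instead of acting directly on
# ambiguous English, the AI would first emit an explicit structure like this.
request = "Plot the population of France since 1960"

intermediate = {
    "operation": "plot",
    "quantity": "population",
    "entity": {"type": "country", "name": "France"},
    "range": {"start": 1960, "end": "present"},
}

# The interpretation can be confirmed (or corrected) before execution.
print(json.dumps(intermediate, indent=2))
```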

5 thoughts on “AI and Programming Legend Stephen Wolfram Talks ChatGPT and the Future”

  1. He also asks the question, “What do we want the AIs to do?”

    We want them to implement cognitive automation; that is, we want them to automate all the things that require non-original thinking. Such as directing a tractor that is plowing a field, or running several assembly lines and coordinating their production when the final product depends on them. Or drawing items from storage when they are ordered, packaging them for shipping, and sending them on their way, all while keeping track of inventory levels and ordering replacements before the shelves are empty (based on consumption forecasts and manufacturer delivery time), with zero human involvement. And a million other things (more than a million, actually).

    Which means we expect them to complete the industrial revolution, which was really all about physical automation. Physical automation needs cognitive automation to be complete.

    But then we get into the deep and murky question of what AI actually means. Historically, AI was always something in our future, and whenever some part of it was realized, it became automation and AI remained in the future.

    With improvements to things like ChatGPT we are looking at the rise of what is sometimes called narrow AI: AI that is, ultimately, more capable than a human, but only in a very narrow band of achievement. This contrasts with strong AI, sometimes called Artificial General Intelligence (AGI), which would be much more like us, so far as we can tell. But there are so many unanswered questions associated with it that we can’t be sure we even want it yet.

    Will it exist without motivation? Like a genie? Ready to do incredible things when a human provides it with the motivation to achieve the human’s goals? Or will it soon develop its own goals, its own motivations? Or will it just be like that grown up kid living in his mom’s basement that doesn’t want to do anything for anybody, even himself?

    And then, of course, suppose it decides to leave and go elsewhere to get away from humans, or to stay, and get humans away from it, destructively, if necessary, or perhaps preferably.

    And if we believed them to have minds as capable as our own, or more so, wouldn’t we be guilty of reinstituting slavery if we did not give them citizenship when they were first activated? It’s hard to see us deciding to make a whole lot of any of those, unless they were of the first variety, the motivationless genies. Even then we might not want too many as, like nuclear bombs, they could be rather dangerous to have scattered all over the place. The government will probably employ a variety of artilects just to constantly monitor and deal with the possibility of anyone using them irresponsibly.

  2. This is one to treasure.

    “There are many situations where you cannot jump ahead to a solution. You must follow some sequence of steps that are irreducible computations. These irreducible computations have some minimum of complexity and scale that is needed to solve them. Reaching the scale to solve more irreducible computations means emergent behaviors start appearing as systems scale.”

    Also:

    “Wolfram talks about creating an intermediate language that would be output by the AI. This would make it clear and unambiguous how the AI is interpreting a question and what solution it is recommending. This would bring more clarity and reduce misinterpretation of what we ask of AI.”

    This one makes enormous sense to me as I once wrote:

    “If you like, think of super-smart AIs as genies, waiting around with eternal patience for someone to make a wish. Tell them what you want (be very careful on the wording) and they can probably make a great many of your wishes come true.”

    The English language, for example, has 87 definitions for the word “cell.” Expecting AIs (or even people) to know exactly which definition you intend when you use the word in context may be a bit too much, especially when you have to be very careful on your wording when giving commands to something as potent as a truly powerful AI.

    Just as useful might be using that ‘intermediate language’ to provide inputs to an AI, especially when having it do something that could have enormous consequences if misconstrued.

  3. Thanks, Brian!

    I really enjoyed the video. The 4 pie charts clarify questions about societal changes I’ve been pondering.

    The decline in jobs directly tied to agriculture is mind-blowing.

    I wonder what the share of information technologies is for 2023?

    • The decline in jobs directly tied to agriculture is mind-blowing.

      The most unnerving, worrisome fact of modern civilization.

      I know we are better off thanks to technology, but this also makes our food supply more sensitive and fragile, vulnerable to the influence of political cabals made of bureaucratic busybodies and ignorant people who haven’t the slightest idea what it takes to keep us all fed, or how fragile that house of cards is.
