Sebastien Makes the Start of AGI Case for GPT-4

Sebastien Bubeck makes the case that intelligence has emerged in trillion-parameter generative AIs. He says there are internal representations and learning algorithms inside these systems. Sebastien is a Sr. Principal Research Manager in the Machine Learning Foundations group at Microsoft Research (MSR).

He says the generative systems can be used to explain themselves.

GPT-4 still has weaknesses, but it can use tools to mitigate them. For example, it can call Mathematica to solve math problems precisely.
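
Below is a minimal sketch of that tool-use pattern in Python. The CALC(...) tag is a made-up convention for this illustration (not GPT-4's actual tool-call format), and SymPy stands in for Mathematica so the sketch runs locally; the wrapper fills each tag with the exact value from the symbolic engine.

    import re
    import sympy as sp

    def run_with_tools(model_output: str) -> str:
        """Replace each CALC(...) tag in a model draft with the exact value."""
        def exact(match: re.Match) -> str:
            # The symbolic engine does the precise arithmetic the model
            # would otherwise approximate token by token.
            return str(sp.simplify(sp.sympify(match.group(1))))
        return re.sub(r"CALC\(([^()]*)\)", exact, model_output)

    # Pretend GPT-4 produced this draft, deferring the arithmetic to a tool:
    draft = "2**31 - 1 equals CALC(2**31 - 1), which happens to be prime."
    print(run_with_tools(draft))

The point is the division of labor: the language model drafts the reasoning, and a deterministic engine supplies the precise numbers.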

GPT-4 does not have planning yet. There are some new systems that try to address this by working with GPT-4.
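
A rough sketch of how those wrapper systems tend to work, assuming a hypothetical ask_gpt4() helper in place of a real API call: an outer loop asks the model for a step-by-step plan, then executes the steps one at a time, carrying the intermediate results itself, since the model does not hold a plan across calls.

    def ask_gpt4(prompt: str) -> str:
        """Hypothetical placeholder for a real GPT-4 chat-completion call."""
        # Canned replies so the sketch runs without an API key.
        if prompt.startswith("List the steps"):
            return "1. Gather facts\n2. Draft an answer\n3. Check the draft"
        return f"[done: {prompt.splitlines()[-1]}]"

    def plan_and_execute(goal: str) -> list[str]:
        # The wrapper, not the model, owns the plan and carries state
        # between calls; that is what these systems add on top of GPT-4.
        plan = ask_gpt4(f"List the steps needed to: {goal}")
        results: list[str] = []
        for step in [s for s in plan.splitlines() if s.strip()]:
            context = "\n".join(results)
            results.append(ask_gpt4(f"Context:\n{context}\nDo: {step}"))
        return results

    print(plan_and_execute("summarize the talk"))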

GPT-4 is not set up to have much learning or memory. It can only remember within a session.
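
That means persistence has to be bolted on from outside. A minimal sketch, again with a hypothetical ask_gpt4() placeholder: exchanges are appended to a local memory.jsonl file, and the most relevant past exchanges are prepended to each new prompt. The keyword-overlap recall here is a crude stand-in for the embedding search real systems use.

    import json
    from pathlib import Path

    MEMORY = Path("memory.jsonl")  # assumed local store for this sketch

    def ask_gpt4(prompt: str) -> str:
        """Hypothetical placeholder for a real GPT-4 call."""
        return "(model reply)"

    def remember(user: str, reply: str) -> None:
        # Append the exchange to a local file so it survives the session.
        with MEMORY.open("a") as f:
            f.write(json.dumps({"user": user, "reply": reply}) + "\n")

    def recall(query: str, k: int = 3) -> list[dict]:
        # Crude keyword-overlap recall; real systems use embedding search.
        if not MEMORY.exists():
            return []
        rows = [json.loads(line) for line in MEMORY.open()]
        words = set(query.lower().split())
        rows.sort(key=lambda r: len(words & set(r["user"].lower().split())),
                  reverse=True)
        return rows[:k]

    def chat(user: str) -> str:
        # Prepend recalled exchanges so the model "remembers" across sessions.
        notes = "\n".join(f"{r['user']} -> {r['reply']}" for r in recall(user))
        reply = ask_gpt4(f"Earlier notes:\n{notes}\n\nUser: {user}")
        remember(user, reply)
        return reply

    print(chat("What did Bubeck say about planning?"))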

GPT-4 was slightly dumbed down from the internal testing system in order to address safety issues.

1:47 – Sebastien starts
5:36 – goal of the talk: there is some intelligence in the system
6:05 – “beware of trillion-dimensional space and its surprises”
8:20 – example demonstrating GPT-4’s common sense
10:40 – theory of mind
12:29 – theory of mind example
14:27 – the consensus definition of intelligence published by psychologists in 1994, and whether GPT-4 matches it
18:00 – how to test GPT-4’s intelligence
19:00 – asking GPT-4 to write a proof of the infinitude of primes
22:13 – The Strange Case of the Unicorn
27:15 – GPT-4 vs Stable Diffusion
29:44 – coding with a copilot that understands
32:57 – GPT-4’s performance on coding interviews
33:41 – GPT-4’s weaknesses, which can be overcome with tools
36:09 – a mathematical conversation with GPT-4
42:40 – GPT-4 cannot do true planning
45:02 – Is GPT-4 intelligent, and does it matter?

A Stanford professor talks about how people and students should learn to work with the new AI.

7 thoughts on “Sebastien Makes the Start of AGI Case for GPT-4”

  1. I have been surprised by correct answers. For example:
    ** Which two cities are connected by the Dumbarton bridge?
    ** Which streets in SF are at the boundaries of Union Square?
    — It solved a collection of easy analogies like:
    white : black = peace : ?
    2 : 8 = 3 : ?
    — It failed miserably when I used an analogy with imaginary numbers.
    — It found an obscure math result from six or so decades ago about turning a sphere inside out without ripping it.
    — An expert-type question was answered incorrectly because it regurgitated a defective wiki result.
    — Gave it story/puzzles that could not be solved by what it had been
    trained on. Some had correct answers, others not.
    — Asked about a particular CS person and got wrong achievements and
    missed a key contribution.
    — Asked a few 'sensitive' questions like:
    ** The Chicago school district has a budget of over $26K/child while
    elsewhere in the US the budget can be less than
    $10K/child. How can the Chicago budget be explained and justified?
    The answer was a 'pathetic' attempt to justify this questionable
    situation.
    Bloomberg wrote that ChatGPT is part of the culture war.
    I got that impression also.

  2. If we reach AGI level, then we will have created a system that could become a subject matter expert in all intellectual disciplines, i.e., it could make logical inferences, have full knowledge of all relevant facts, be capable of using the (intellectual) tools a comparable human expert uses, have the experience to form reasonable hypotheses, detect gaps in arguments, prioritize the questions/problems that should be solved, and even propose research questions that it might answer (or improve on continuously) via simulations (or step-by-step proofs) or via study of all the relevant literature (on experiments).
    If we have an AGI, then it should be able to answer, e.g., the question: what is the theoretical limit of the energy density of a battery or capacitor? It could then propose a path to get this battery tested and mass-produced. From that moment on, we could ask all sorts of technical questions, and the AGI would find solutions. The time to find solutions could improve over iterations, or the delta to the absolute optimum could be narrowed, but we would have a tool that could give us the solutions humans could find over time; the AGI would likely just be faster.
    Is an AGI with these capabilities already an ASI (because these artificial subject matter experts could support each other in getting problems solved)? Otherwise, what capabilities of an ASI could not be provided by an AGI?

  3. Many companies see AI development as an existential threat to their business. If a competitor gets there first, it will take market share from them, and companies can lose a lot of their market cap if they fall behind here. Companies use bleeding-edge versions for their own internal purposes; only later is that tech released to consumers.

    That is why so much is invested in this tech, and they are pushing really hard to get it done faster than the competition. In such instances, progress happens way faster than it otherwise would. The AI race is partly similar to the race between the USA and the Soviet bloc. A lot of extraordinary things happened then that would, under normal circumstances, have taken way too long. If people are pushed and there is strong motivation, things happen fast.

    I just hope things will turn out well.

  4. Being able to learn by adding to long-term memory, and being able to plan, are important targets. Forming goals and obtaining the motivation to do or answer something it wasn’t told to do or answer is another, but may be harder. How do you get it to form a desire before it plans to satisfy that desire?

    The computers in Star Trek are a good example of where we are now. You have to ask them where the invaders are before telling them to put up security force fields to stop them. The Captain could disappear from the ship, and the computer would never tell anyone if no one asked where he was.
