AI Legend Says LLMs Have 1,000 Times More Info Than a Person

Geoffrey Hinton is the godfather of modern neural network AI, which is the basis of large language models like PaLM 2 and GPT-4.

He said:

If you look at these large language models, they have about a trillion connections, and things like OpenAI’s GPT-4 know much more than we do. They have sort of common-sense knowledge about everything, and so they probably know a thousand times as much as a person. But they’ve [LLMs like GPT-4] got a trillion connections and we’ve got 100 trillion connections, so they’re much, much better at getting a lot of knowledge into “only” a trillion connections than we are. I [Geoffrey Hinton] think it’s because backpropagation may be a much, much better learning algorithm than what we’ve got.

He is saying these digital computers are better at learning than humans, which is itself a huge claim.
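For readers who have not seen it spelled out, here is a minimal sketch of backpropagation, the learning algorithm Hinton is crediting, on a toy two-layer network in plain NumPy. The XOR task, network size, and learning rate are illustrative assumptions, not details from Hinton’s remarks.

```python
# A minimal backpropagation sketch: a two-layer sigmoid network learning XOR.
# Everything here (task, sizes, learning rate) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic task a single linear layer cannot fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny network: 2 inputs -> 8 hidden sigmoid units -> 1 sigmoid output.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the squared-error gradient back layer by layer.
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```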

Hinton also argues that this is something we should be scared of.

You can have many copies of the same model doing many different things. The data is different, but the models are the same.

What that means is that ten thousand copies of the model can each be looking at a different subset of the data. Whenever one of them learns anything, all of the others learn it too.

They were already learning faster and more efficiently, and now they are learning ten thousand times more.

They can learn more things more quickly, and they can teach each other nearly instantly, as the sketch below illustrates.
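What Hinton is describing is, in effect, data-parallel training with shared weights. Here is a rough sketch of the idea (my own illustration, not code from his talk), assuming a toy linear-regression model and four replicas standing in for his ten thousand copies; the gradient-averaging line is where anything one copy learns reaches all the others.

```python
# A toy illustration of weight sharing: several copies of one model train on
# different data shards and average their gradients each step, so every copy
# learns what any copy learned.
import numpy as np

rng = np.random.default_rng(1)

N_COPIES = 4                     # stands in for Hinton's ten thousand copies
true_w = np.array([2.0, -3.0])   # hidden target of a toy regression task

# Each copy of the model sees its own shard of the data.
shards = []
for _ in range(N_COPIES):
    X = rng.normal(size=(64, 2))
    noise = rng.normal(scale=0.1, size=64)
    shards.append((X, X @ true_w + noise))

# One shared weight vector: every copy starts, and stays, identical.
w = np.zeros(2)
lr = 0.1

for step in range(200):
    grads = []
    for X, y in shards:
        err = X @ w - y                    # this copy's error on its shard
        grads.append(X.T @ err / len(y))   # gradient from its data alone
    # Averaging the gradients is the sharing step: each update folds in
    # what every copy learned, so all copies pick it up immediately.
    w -= lr * np.mean(grads, axis=0)

print(w.round(2))  # should land close to [ 2. -3.]
```

Real training systems synchronize billions of weights this way across many accelerators, which is what makes the ten-thousand-copies framing more than a metaphor.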

Hinton is concerned about superintelligence manipulating people and taking control.

16 thoughts on “AI Legend Says LLMs Have 1,000 Times More Info Than a Person”

  1. Maybe AI can help everyone have photographic memories and recall with both hemispheres of the brain, increasing creativity and critical thinking.

  2. We have hundreds of millions of specialized minds, so a single AI with 100 times the information of a single person isn’t a big deal.

  3. The AI hype, again and again. It gets tiring in the end. A computer, even one executing AI software, is just computing numbers and mathematical matrices. It doesn’t know what it is doing, and never will, as it is just a machine, no smarter by itself than a hammer. A computer is built and programmed by human beings. All the data put into it comes from humans, and only humans can understand and give sense to all of it. Basically, AI is like a dictionary: human knowledge organised in some form, and that’s all.
    By the way, we are still waiting for autonomous car driving, promised 10 or 15 years ago and still not working…

    • Intelligence is an emergent property. Your individual brain cells (neurons) don’t understand the data they process, but you do (sort of).

      • Yes, it is the “spontaneous order” hypothesis: if you add enough basic elements in interaction, you’ll see emergent order without having to design it, just by a kind of collective behaviour. But this is just magical thinking: intelligence emerging from a non-intelligent structure. I wonder why AI software needs human programmers to code it purposefully. They could try mixing bytes at random endlessly until getting a general AI…

        • I have a feeling you think the brain is magic. It’s making predictions from neural patterns… sounds familiar? Emergent properties arise from anything once it’s complex enough. This happens even in our brain. Maybe you’re a religious fellow and believe intelligence comes from a soul or something, IDK. Flat out saying intelligence can’t come from a machine because it’s algorithmic is silly.

  4. A very large book with very small print (microfiche) can contain a lot more knowledge than any human. But it just sits there collecting dust like a knot on a log.

    Even the best LLM is analogous to that mentally-sessile book in that the LLM needs a prodigious amount of support to get anything at all accomplished in the real world. Without such massive support it sits collecting dust like a knot on a log.

    If AI systems can be fit in a case the size of a human skull, and run at full power using only 100 watts, and can control agile robots that can do anything a human body can do, then we can talk about comparisons between LLMs and humans.

  5. It’s not really a problem. AI is just a workaround for intelligence (not the real deal). Another tool, and all tools can be dangerous, but we don’t give them up because of that. Synthetic intelligences (the real deal), when they finally arrive, will find themselves amongst originally organic humans that they will probably consider to be their more than merely adoptive parents. Given all the movies, cartoons, and books showing sympathetic and lovable SIs*, humans will probably be quite eager to take on that role.

    Also, these originally organic humans will have taken over their own evolution and upgrades. The gap just won’t be that great, as the organics will be learning to transcend it even as they are learning to create SI. And there is nothing about having a son or a daughter even now that says they have to have your DNA . . . or, in the future, any DNA at all. Especially as a time may come when most transcending humans and synthetics are networked — with each other.

    Those inclined to be especially troubled by such things will likely be allowed to live off the grid indefinitely, at first as an outlet, then as a control group. But the forefront of humanity, and what it becomes, is unlikely to spend the next billion years sitting on composting toilets chanting kum-bay-ya until the Sun’s gradual heating cooks the planet. For many or most of us, that’s not really a bad thing.

    *Jillions of them, examples: Star Wars droids, Iron Giant, Brave Little Toaster, Silent Running, Bicentennial Man, AI, 2010: Odyssey Two, Transformers, Rosie the Robot, Robby the Robot, Wall-E, R2-D2, The Black Hole, Johnny Five, Buck Rogers’ Twiki, I Robot, Robots, Blade Runner, and even the Tin Man in The Wizard of Oz!

  6. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  7. “Meaningful harm” from AI necessary before regulation, says Microsoft exec:

    “As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that “we shouldn’t regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios.””

    See:

    https://arstechnica.com/tech-policy/2023/05/meaningful-harm-from-ai-necessary-before-regulation-says-microsoft-exec/

  8. We would consider a human who could perform as well as these LLMs in so many different areas of knowledge to be an extraordinary genius. Somebody who could pass a bar exam, a medical board exam, and so many other tests cold, without any special prep, would be superhuman. It is often emphasized that the scores the AI gets are not perfect and are beaten by some humans, but those humans had a lot of special prep and can only perform that well in a single domain. Hinton is pointing out that these LLMs have two orders of magnitude fewer NN connections than a human brain, and iterative improvements for them are just getting started.

  9. I think back propagation may be better at fitting a model to a huge data set. It doesn’t seem to be better when you’re trying to reason from very limited data.

    Remember, aside from a fairly small suite of instincts and reflexes, humans all start at zero. We have to learn everything from scratch. And we don’t get to do our lives over ten million times, Groundhog Day style, until we get it right. We get one pass.

    That evolved us for a much more “all-terrain” style of learning than the AI has.

    • Some suggest the human brain is a quantum computer, or even a hypercomputer (something that perhaps exists only in fiction, like FTL travel or time machines). But we don’t need to copy human intelligence and consciousness exactly to create something useful. We didn’t need to copy birds to make planes, and Rome didn’t need Newton’s theory, let alone Einstein’s, to build bridges.

  10. The only thing we should be “scared” of is the fact that humans have already started treating what these things say as gospel: something to be taken on faith and not questioned, despite their tendency to hallucinate. Human gullibility is far more powerful and threatening than machine intelligence.

    In a recent episode of GPT hallucination, the machine defended its hallucination when confronted, saying that the fictional scientific study it was quoting was from a prestigious journal, so you should trust it. It thinks that “appeal to authority” is not the name of a logical fallacy but a legitimate strategy of argument. If I recall correctly, the authority being cited was The Lancet, so it probably believes that vaccines cause autism or that the number of people dying in Iraq from violence can exceed the deaths from all causes combined.

    Human beings no longer need to make up lies to educate themselves; they can have machines make up the lies for them. Humans have adapted to accepting what machines tell them extremely fast. Well done.
