ChatGPT is a Glorified Tape Recorder

Physicist and science writer Michio Kaku has said that large language model tools like OpenAI’s ChatGPT are nothing more than glorified tape recorders. His argument is that the models take in terabytes of information and then selectively return parts of it based on the questions they are asked.

Large language models (LLMs) are the new wave of artificial intelligence: they train neural networks on large amounts of human-generated text with the goal of producing new text and knowledge. In an interview with CNN anchor Fareed Zakaria yesterday (August 14), Kaku said A.I. applications like chatbots can benefit society and increase productivity.

At the same time, Kaku argues, AI chatbots cannot discern true from false. They can give correct answers on a test, but there is no model of truth inside the systems; they only reproduce patterns from their training data.
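To make the claim concrete, here is a minimal, illustrative sketch of the loop at the heart of an LLM: predict a probability distribution over the next token, sample from it, append, and repeat. The hard-coded `fake_model` table is a stand-in of our own invention; a real model is a trained neural network over a vocabulary of tens of thousands of tokens. Notice that nothing in the loop checks whether the output is true; it only follows learned statistics.

```python
# Illustrative generation loop: sample the next token, append, repeat.
# fake_model is a hypothetical stand-in for a trained neural network.
import random

def fake_model(context):
    """Maps recent tokens to a next-token distribution (hard-coded here)."""
    table = {
        ("the",): {"cat": 0.5, "dog": 0.5},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("the", "dog"): {"ran": 0.6, "barked": 0.4},
    }
    return table.get(tuple(context[-2:]), {"<end>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = fake_model(tokens)                        # distribution over next token
        words, weights = zip(*probs.items())
        nxt = random.choices(words, weights=weights)[0]   # sample; no truth check anywhere
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))   # e.g. "the cat sat" -- varies run to run
```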

17 thoughts on “ChatGPT is a Glorified Tape Recorder”

  1. In the same way that the phrase “Mediterranean diet” is a glorified letter-by-letter carbon copy of the words “olive oil.” In fact it isn’t: the model extracts abstract concepts and records those. What it stores is a completely different thing from the original source, a lower-dimensional space, even though it learned concepts like “an abstract combination of foods” by noticing that those foods often appear together.

  2. The recent comment by physicist Michio Kaku, describing large language models as “glorified tape recorders,” has sparked a conversation that extends beyond the realm of artificial intelligence. It touches on the profound questions of consciousness, sentience, and the very nature of intelligence.

    To dismiss AI models as mere “tape recorders” is to overlook the intricate algorithms, neural networks, and computational processes that underpin these systems. While it’s true that AI models like ChatGPT rely on vast amounts of data to generate responses, the way they process, analyze, and interpret this information is far from simplistic.

    Comparing AI to human consciousness is a complex endeavor. While we have a deep understanding of how AI works, the human brain remains a mystery in many aspects. Some argue that the human brain operates differently from AI, but without concrete evidence, such statements can be seen as purely speculative.

    Generalized statements about AI and human consciousness can lead to misunderstandings and oversimplifications. As we continue to explore the frontiers of technology and neuroscience, a cautious and evidence-based approach is essential.

    While it’s tempting to draw definitive conclusions, the reality is far more nuanced. Embracing the complexity of both AI and human intelligence, and recognizing the limitations of our current understanding, can lead to a more informed and enriching dialogue.

  3. All learning is essentially the same thing. Rather than just recording audio onto a magnetic tape, the human mind records light, sounds, smells, and other senses onto a “neurological hard drive”. That data feeds our interpretations of actions, to which we attribute “individuality”. It’s really all the same mechanism, although AI language training has only had a few years of practice, whereas the human brain has had thousands, possibly millions, of years of practice.

    AI has some mind-blowing potential if we curate the data we provide it responsibly.

  4. A rather harsh sound bite. It’s really more like a data mesh where the links encapsulate the frequency of occurrence of adjacent words (a toy sketch of this idea follows the replies to this comment).

    The problem is that the data set they used to determine the occurrence frequency was generated by the output of an infinite number of monkeys.

    The bigger problem is that now the models are training on each other’s output.

    Admiral Grace Hopper’s law is still in effect: garbage in, garbage out.

    • Ya, my thoughts exactly… To Michio Kaku, I would say “tell me you don’t understand how AI works without telling me you don’t understand how AI works.”

      He’s a great physicist, but that doesn’t make him an expert in other fields…
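    As a toy sketch of the “data mesh” framing above (our own construction, not how LLMs actually work internally): count how often each word follows each other word in a corpus, then walk the links weighted by those counts. It also makes the garbage-in, garbage-out point concrete, since the walk can only echo whatever corpus it was fed.

```python
# Toy "data mesh": links weighted by how often adjacent words co-occur.
# Real LLMs use learned neural representations, not raw bigram counts;
# this only illustrates the intuition described in the comment above.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

links = defaultdict(Counter)          # word -> counts of following words
for a, b in zip(corpus, corpus[1:]):
    links[a][b] += 1                  # garbage in here means garbage out below

def walk(word, steps=6):
    out = [word]
    for _ in range(steps):
        nxt = links.get(word)
        if not nxt:                   # dead end: no recorded followers
            break
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

print(walk("the"))                    # e.g. "the cat sat on the rug"
```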

  5. If doing something fundamentally different from feeding back things learned, in a different order depending on the prompt, is what it takes to be more than a glorified tape recorder, then humans don’t meet the standard either.

  6. Large language models are, essentially, an effort to create a real-world instance of Searle’s “Chinese room” thought experiment: a system completely lacking in thought or internal models, which just has SO MANY appropriate responses recorded, modifiable by rote, that it looks like intelligence to an outside viewer.

    But it’s more of a Chinese broom closet, because it’s simply impossible to create a genuinely comprehensive set of responses. The search space is too freaking large (see the back-of-the-envelope figure after this comment). Terabytes of largely redundant training data aren’t even a start on what you’d really need.

    So you get a shallow imitation of intelligence that falls apart as soon as you probe it. But it’s still pretty useful for a lot of applications, because, deep dark secret:

    Most of what we do doesn’t require genuine intelligence.

    Intelligence isn’t the basis of most of our behavior; it’s too computationally expensive and slow. Intelligence is an exception-handling routine for humans, not our usual state.

    That’s why we want artificial intelligence in the first place, isn’t it? Being intelligent is hard! It’s tiring. It requires lots of effort. And we’re not even very good at it, most of the time.
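    A back-of-the-envelope number for “the search space is too freaking large” (the vocabulary size is an assumed, ballpark figure in the range of modern tokenizers): with a vocabulary of $|V| = 50{,}000$ tokens and responses only $L = 20$ tokens long, the number of distinct possible responses is

$$
|V|^{L} = 50{,}000^{20} \approx 9.5 \times 10^{93},
$$

    far beyond anything a comprehensive lookup of canned answers could cover.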

  7. Well, the use of a fixed, pre-trained matrix of fragments of word associations is KIND of like a tape recorder. But a tape recorder doesn’t sample from the entire “tape” and statistically choose what speech to “play back” based on a large short-term memory context.

    And there are some subtleties to how the recent context is converted and kept, as opposed to re-scanning the whole text context for every new query. I’ve heard some experts mention that there is some kind of caching of processed context from one query to the next (see the sketch after this comment), and obviously something different is going on with the new models that can handle 100K-token contexts.

    So maybe it’s more like a tape-recorder-ified mind?
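    For the curious, here is a minimal single-head attention sketch of the “caching of processed context” idea mentioned above (commonly called a KV cache). The toy dimensions and the single head are assumptions for illustration, not any specific model’s design: each token’s key/value vectors are computed once and stored, so each new token attends over the cache instead of re-encoding the whole context.

```python
# Minimal KV-cache sketch: toy sizes, one attention head, no claims about
# any specific model's internals.
import numpy as np

D = 8                                    # embedding width (toy size)
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))

k_cache, v_cache = [], []                # grows by one entry per token seen

def step(x):
    """Process one new token embedding x, reusing the cached context."""
    k_cache.append(x @ Wk)               # this token's key, computed once
    v_cache.append(x @ Wv)               # this token's value, computed once
    q = x @ Wq
    K = np.stack(k_cache)                # (tokens_so_far, D)
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(D)          # attend over every cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over the cache
    return weights @ V                   # context-mixed representation

for _ in range(5):                       # feed five toy "tokens"
    out = step(rng.normal(size=D))
print(out.shape, "-- cache now holds", len(k_cache), "tokens")
```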

  8. Well, the thing is, I receive and read dozens of business e-mails each day.

    Some of the word goo that fills my inbox is obviously generated using AI by someone who doesn’t want to put the work in. They don’t understand how bad it makes them look. Business is all about relationships and effective communication.

    Chatbots are only slightly effective, but still cause frustration.

    AI written content, at least right now, stinks.

  9. Ah, nope.

    The things adapt their input to create ad hoc responses that match the semantics of the problem they are presented with.

    Interacting with them is not like doing a web search at all.

    Kaku would do better to stick to his pop-sci and eccentric-futurologist persona.

  10. He is a bit too dismissive. It’s wildly different from a “tape recorder”; that’s like saying the Internet is a glorified cave painting.

    These things are (currently) simulations of intelligence rather than full intelligences, but it’s as easy to underestimate how powerful that is as it is to overestimate their capabilities.
