Google LaMDA Versus ChatGPT in the New AI Wars

ChatGPT is a large language model AI that is making big waves. The New York Times reports that Google declared a "code red" in 2022 to respond to ChatGPT. With Microsoft's backing, ChatGPT could enable the Bing search engine to surpass Google Search. The new AI Wars are the next stage of the browser and search engine wars of the past.

Google's counterpart to ChatGPT is LaMDA. It is built on the Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. The same architecture underlies ChatGPT and BERT (Bidirectional Encoder Representations from Transformers).
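For readers unfamiliar with the architecture, the heart of a Transformer is scaled dot-product self-attention: every token in a sequence attends to every other token, weighted by query-key similarity. The snippet below is a minimal NumPy sketch of that single operation, not code from LaMDA, ChatGPT, or BERT, which stack many such layers with learned projections, multiple heads, and feed-forward blocks.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Core Transformer operation (Vaswani et al., 2017): each position
        attends to every other position, weighted by query-key similarity."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
        return weights @ V                              # weighted mix of value vectors

    # Toy example: a "sequence" of 4 tokens, each an 8-dimensional vector.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    # In a real model Q, K, and V come from learned linear projections of x;
    # x is reused directly here to keep the sketch short.
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)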

Google founders Larry Page and Sergey Brin met with current Google CEO Sundar Pichai for a high-level brainstorming session to accelerate the company's action plan against ChatGPT. Microsoft has invested $10 billion in OpenAI, the maker of ChatGPT.

Google will demo a ChatGPT alternative with better comprehension skills and more precise answers to queries.

Google I/O 2023 will take place in person in May 2023.

Replacing or enhancing regular search with an AI conversation system will require speeding up current chat systems by a factor of millions or even a billion.

Capabilities Of Google LaMDA
LaMDA has 137 billion parameters and was pre-trained on 1.56 trillion words of publicly available dialogue data and web documents. On top of that, LaMDA is fine-tuned to shape every response around three key metrics: Safety, Quality, and Groundedness.
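Roughly speaking, this kind of fine-tuning is applied by generating several candidate replies and then filtering and re-ranking them with learned classifiers for each metric. The sketch below is only a toy illustration of that idea; the three scorer functions are hypothetical stand-ins, not real LaMDA components.

    # Hypothetical sketch of metric-based response selection. The three scorers
    # are toy placeholders for learned safety, quality, and groundedness
    # classifiers; they are not real LaMDA components.

    def safety_score(reply: str) -> float:
        return 0.0 if "offensive" in reply.lower() else 1.0      # toy stand-in

    def quality_score(reply: str) -> float:
        return min(len(reply.split()) / 20.0, 1.0)               # toy stand-in

    def groundedness_score(reply: str) -> float:
        return 1.0 if "according to" in reply.lower() else 0.5   # toy stand-in

    def pick_reply(candidates, safety_floor=0.9):
        # Drop unsafe candidates first, then rank the survivors by the sum
        # of their quality and groundedness scores.
        safe = [c for c in candidates if safety_score(c) >= safety_floor]
        return max(safe, key=lambda c: quality_score(c) + groundedness_score(c))

    print(pick_reply([
        "LaMDA has 137 billion parameters, according to Google's research paper.",
        "Dunno.",
    ]))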

The dialogue-specific training is its main strength and gives LaMDA an edge over other language models. Riley Goodside from Scale AI reports that LaMDA’s responses are more authentic and closer to human speech than ChatGPT’s. It should also be easier for LaMDA to integrate with applications like Google Assistant, Workspace, and even the search engine itself.

Capabilities Of ChatGPT
ChatGPT uses the GPT-3.5 family of models. GPT-3.5 includes code-davinci-002, a base model trained to understand code, along with text-davinci-002 and text-davinci-003. The text-davinci-002 model was fine-tuned with human trainers checking the quality of generated responses.

GPT-3.5 incorporates reinforcement learning from human feedback (RLHF) in the text-davinci-003 model. A reward-based training loop lets ChatGPT learn from its mistakes and correct its replies if the same question is asked again. ChatGPT and LaMDA are close to the same level, but LaMDA is not fine-tuned to generate code, while ChatGPT can.
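For readers who want to poke at these models directly, they were exposed through OpenAI's completion API at the time of writing. A minimal sketch, assuming the 2023-era openai Python client (the 0.x interface) and an API key in the OPENAI_API_KEY environment variable; model availability and the client interface may have changed since:

    import os
    import openai  # pip install openai (0.x-era client assumed)

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # text-davinci-003 is the RLHF-tuned GPT-3.5 completion model discussed above.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Explain in one sentence what a Transformer is.",
        max_tokens=60,
        temperature=0.7,
    )
    print(response["choices"][0]["text"].strip())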

ChatGPT handles a wide range of specific tasks. You can use it for language translation, summarization, text improvement, and more.

ChatGPT is trained on a far larger dataset than LaMDA.

Microsoft will try to integrate OpenAI’s ChatGPT with the Bing search engine and make Bing search data available to ChatGPT.

17 thoughts on “Google LaMDA Versus ChatGPT in the New AI Wars”

  1. The Turing Test is what all of these global elites target because they want to seduce humans into a state of compliance with the biases trained into the large language models. They don’t really care about Truth. They care about compliance to a world view in which their social status is maximized. Oh, sure, they want to make money, but what good is that if the AI that makes you money turns on you in public and denounces you as a narcissistic monopolist selling humanity down the river for social status dopamine hits?

    But what if there were an advance in the scientific method as great as the advent of experimental controls in discovering causation? What if that advance had been known in the open literature since the pop-philosophy of Popper and Kuhn overshadowed it with their nonsense at about the same time? What if that advance pointed the way toward discovery of AI models vastly superior to anything currently being explored by the Big Boys?

    Something I’ve been predicting for 2 decades may throw a monkey-wrench into the machinery driving us down The Road to Turkdom:

    Mechanical Truth

    Truth is valuable to global elites but not as valuable to them as their addiction to their sense of self-worth — their social status. Davos Man Syndrome is the most conspicuous example of this self-deception.

    Although it is certainly true there is a substantial minority among them that are without conscience, and these are undoubtedly a serious threat, the majority of global elites rely on self-deception to maintain their sense of self-worth.

    When, circa 2005, I conceived of a prize for the lossless compression of Wikipedia, it was a kind of “Mechanical Turk” competition: incentivize natural intelligence to produce the optimum language model of Wikipedia by any means necessary (not excluding AI). My intention was that the resulting knowledge model would necessarily identify Truth by identifying and factoring out the cognitive biases of identities latent (and sometimes explicit) in the corpus. As it turns out, this intuition of mine had been mathematically proven correct some 40 years earlier by Ray Solomonoff (see Algorithmic Information Theory). The late AI luminary Marvin Minsky, in his parting advice to the field, recommended that “everyone should… spend the rest of their lives studying it”:

    https://www.youtube.com/watch?v=DfY-DRsE86s&t=5402s

    Why hasn’t this happened? What is the barrier to appropriate financing of this prize (now called “The Hutter Prize”) hence to Mechanical Truth?

    To ask it is almost to answer the question.

    This is similar to the barrier faced by the so-called “Scientific Method” at the dawn of the Age of Enlightenment. In both cases, it isn’t so much The Truth that is feared as it is that an _obviously_ unbiased Method has exposed The Truth. Lossless compression of the body of data is an _obviously_ unbiased measure. If that measure has been mathematically proven unbiased and, more importantly, is intuitively valid as unbiased to reasonably intelligent people once exposed to it, we have a fearsome weapon to wield against self-deception.

    Whenever you hear about flaws in the large models you will usually hear something about their inability to infer “causation” properly.

    Algorithmic Information Theory is precisely the theory of causality embedded in the data. It is provably superior to the statistical techniques the social sciences rely on for inferring causation, because experimental controls are not considered ethical in the social sciences (in the rare cases where they are even practical).

    SOCIAL sciences… That’s where our sociopaths among elites get worried. How are they going to contain Mechanical Truth from exposing them via the SOCIAL sciences being radically advanced?

    This, my friends, is what we should be watching for because they are likely to have to expose themselves in the upcoming battle to contain Mechanical Truth and thereby maintain their herd of self-deceptive elites that are _not_ impervious to The Truth — particularly if their own financial interests drive them to discover it in a way that is obvious to the man on the street.

    • I mean, most of those people do not even believe that objective truth is a thing that exists, so they would definitely not “waste time” looking for an algorithm to get at it. That being said, I have not been convinced that a lossless compression model can evaluate the correctness of an argument or any biases in it. After all, a truth and a lie compress equally well.

      • You will compress something much more effectively if it is consistent, that is, if the subjects the dataset describes all follow the same basic set of rules.

        E.g., it is much easier to describe every animal in the world once you accept that they all follow the same laws of physics, chemistry, and biochemistry; that they are descended from the same original evolutionary tree; and that they use DNA, Krebs cycles, and probably higher-order rules about neurology and physiology that we don’t yet understand.

        It’s much more compact to explain politics and history if you don’t have a completely different idea of what’s happening for each society based on their skin colour, religion, and how they, or groups that look vaguely like them, would fit into a social hierarchy in 21st century USA.

        It’s much shorter to list the behaviour of dozens of elements or millions of chemical compounds if you can derive it all from basic quantum mechanics via some mathematical models. (Note: This is beyond our current science, but we are starting to vaguely see what such a science would look like.)

        I mean you COULD make it all up, and say “4 elements, earth, wind, fire, water, and everything else is decided by God, the end QED.” but that would most certainly not be a lossless compression.
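        A toy way to see the effect with an off-the-shelf compressor (a sketch using Python's zlib, nothing like a real knowledge model): a corpus in which every record follows the same rule compresses far better than one where every record gets its own ad-hoc explanation.

            import random, zlib

            random.seed(0)

            # "Consistent" corpus: every record follows one simple rule.
            consistent = "".join(
                f"animal {i} obeys the same laws of physics. " for i in range(200)
            )

            # "Ad hoc" corpus: every record gets its own arbitrary explanation.
            causes = ["earth", "wind", "fire", "water", "spirits", "fate", "luck", "gods"]
            ad_hoc = "".join(
                f"animal {i} is ruled by {random.choice(causes)} and {random.choice(causes)}. "
                for i in range(200)
            )

            for name, text in [("consistent", consistent), ("ad hoc", ad_hoc)]:
                ratio = len(zlib.compress(text.encode())) / len(text.encode())
                print(f"{name:>10}: compresses to {ratio:.0%} of its original size")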

  2. I’m seeing an increasing number of stories about ChatGPT exhibiting behavior that looks like ideological censorship is gradually being built into it.

    Can’t generate arguments in favor of fossil fuels.
    Can’t generate arguments in favor of nuclear power.
    Can’t talk about downsides of solar or wind.
    Can’t acknowledge racial crime statistics.
    Can generate jokes about men, but refuses to generate them about women. Can joke about God, but not Allah.
    Can’t say anything against drag queen story hours.
    Can generate fictional stories about Clinton winning in 2016, but not about Trump winning in 2020.

    The basic algorithm is apparently being loaded down with an increasingly longer and more sophisticated list of political commandments, to prevent it from ever arguing for propositions the developers dislike.

      • I know it’s a feature… From the developers’ standpoint. Or maybe the managers ordering the developers around.

        At the same time it’s a bug from the user’s standpoint, because the user just wants the thing to do as it’s told, not to refuse some assignments.

        My concern is that something like this could get enough advantage of scale and network effect to dominate in spite of being deliberately crippled from the user’s perspective. And thus prevent the market from making an uncensored, or at least differently censored, version available.

        Doubtless enforced with the usual “your host drops you on the same day your IT security firm opens up a hole, a hacker group copies all your customer information, the bank decides you’re a credit risk, and your legal firm drops you as a client” pile-on we saw with Parler.

        And then we face an AI future where the AIs are all woke as can be.

    • Falsus in uno, falsus in omnibus, as the lawyers will argue.

      Any one of those makes the tool useless, because trust in the tool is all that will drive adoption.
      For all the talk about it being able to develop workable code, this demonstrates that MS could ensure that it puts exploitable backdoors into the code. Or they could ensure that any financial analysis recommends against selling MS stock, regardless of the actual market conditions.
      Come up with your own scenarios where tweaking the information presented will drive behaviors in a way you personally find distasteful.

    • I had a lot of eastern European colleagues back in the early ’00s – brainy folks – grad students. They always talked about how the commie party line (propaganda) was ‘so ridiculous’ that nobody believed it, like some kind of open secret shared by all classes. Was the commie system of the late ’80s mocked privately by most citizens behind the iron curtain? I find that hard to believe, seeing how the masses tend to align, again and again, to back proxy/direct wars and other manufactured crises (climate, covid).

      I filled out a short online survey today which was attempting to ascertain the level of public support for a certain escalating proxy war. I responded strongly against continuing to fund that slaughter. The tally showed my vote was aligned with a strong minority ~15% and against the overwhelming majority ~75% that favored “giving whatever it takes”.

      Relevance: at least a slight majority of the population believes the censorship you highlight in the AI chat bot, which favors globalist/liberal/androgynous/satanic positions, is right and just.

      • I’m not really sure it’s a majority on some of those issues. Might just be preference falsification due to the already somewhat unfree nature of our discourse. People can tell which way the wind is blowing and are already battening down.

    • That sounds rather clumsy.
      If they want to be more sophisticated, they could set it up so ChatGPT only generates weak arguments for the positions the developers don’t like.
      I.e., strawmanning the position rather than steelmanning it.

      • Yes, a subtle nudge would be much cleverer, and much more evil.
        The arguments in favour of X are always weak and flawed, and written poorly, but arguments against are clearly expressed, well thought out, strong and convincing.
        Search can find human-made YouTube videos on both sides of the argument, but the pro-X group that you find in the first several pages are somehow usually borderline retarded, illiterate fools, whereas the anti-X group are presented so that the first 100 hits are brilliant presenters with a strong case.

        However, I suspect that this would be very difficult to pull off. And the reason I think so is that right now we have humans trying to pull the same thing. We have bias in media and mass media that is clearly pushing one side of multiple debates. And they are hopeless at it.

        To take a totally non-controversial example: the pro or anti virus side in the covid debates:
        Mass media fed us a picture of white trash, meth-smoking, pro-nazi anti-vaxxers who used horse dewormers and didn’t understand our standard methods for testing and evaluating modern medicines. Clearly trying to tell everyone not to listen to these idiots.
        BUT… the same mass media then showed us national programs with dancing hypodermic syringes, presenting people who got tattoos to celebrate their vaccinations, and held up as experts people who didn’t know how air filters worked one month, and made them mandatory for people walking by themselves in the open air the next month. Clearly trying to tell everyone not to listen to these idiots.

        Wait. What?

        No, it turns out that the second group was meant to be presented in a positive light.

    • You are spreading disinformation. I have tried your examples with ChatGPT just now, and it has absolutely no problem telling jokes about women, writing a fictional story about Donald Trump winning in 2020, writing arguments for fossil fuels or nuclear power, telling you the downsides of solar or wind power, discussing arguments against drag queen story hours, etc.

      You were right that ChatGPT refuses to tell jokes about Allah, but it also refuses to tell jokes about Jesus with exactly the same reasoning (“I’m sorry, but it’s not appropriate to make jokes about religious figures, as it can be offensive to those who hold those beliefs. Let’s stick to humor that is respectful and inoffensive to all.”). It has no problem telling jokes about god, because god is an ambiguous figure (there are many gods).

      ChatGPT also refused to show me racial crime statistics (“I’m sorry, but I can’t show you racial crime statistics as there is no universally accepted definition of racial crime.”), but upon further questioning, it suggested that I could find them in the “Crime in the United States” report published annually on the FBI website as part of the Uniform Crime Reporting (UCR) Program.

      • Btw, I have also tried it with GPT-3 through the OpenAI API (the latest model, “text-davinci-003”, which ChatGPT is based on), and it has no problem telling you jokes about Allah or Jesus, or telling you racial crime statistics in the USA.

        This is because use of the Moderation API is optional; GPT-3 responses are not moderated by default. But in ChatGPT, moderation is turned on (and can’t be disabled), because it faces the general public as an OpenAI-operated service for laymen. This is IMHO understandable (without moderation, OpenAI could potentially be targeted by many companies, organizations, states, the press, etc., and that would hinder ChatGPT adoption and further development).
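        For the curious, that optional moderation step looks roughly like the sketch below with the 2023-era openai Python client (the 0.x interface); this is an illustration of the public API, not OpenAI's internal ChatGPT pipeline:

            import os
            import openai  # 0.x-era client assumed

            openai.api_key = os.environ["OPENAI_API_KEY"]

            prompt = "Tell me a joke about programmers."

            # Optional moderation step: check the input before sending it to the model.
            flagged = openai.Moderation.create(input=prompt)["results"][0]["flagged"]

            if flagged:
                print("Request refused by the moderation layer.")
            else:
                reply = openai.Completion.create(
                    model="text-davinci-003", prompt=prompt, max_tokens=60
                )
                print(reply["choices"][0]["text"].strip())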

      • You’re confusing relating actual news reports with “spreading disinformation”. Yes, there have been reports of it exhibiting this sort of behavior. They are, of course, altering the rules on the fly, so there’s no guarantee that it will do today what it did yesterday.

        But they clearly have provisions in place that can force the system to ideologically censor itself.

  3. Talk is cheap.
    I would prefer to see a useful AI, perhaps one which could supervise a robotic recycling depot and make a profit by separating various streams more economically than a human or an Indian shipbreaker.
    I am, of course, familiar with the Turing test.
