DeepMind and Google Gemini AI Will Surpass ChatGPT

DeepMind’s Gemini is a large language model that works with text, similar to the GPT-4 model that powers ChatGPT. DeepMind CEO Demis Hassabis says his team will combine LLM techniques with the reinforcement learning methods behind AlphaGo.

Gemini will take months more to complete and will likely cost hundreds of millions of dollars. OpenAI CEO Sam Altman said in April that creating GPT-4 cost more than $100 million.

DeepMind and Google’s Brain AI lab are working together on Gemini. An estimated 80–90% of recent AI innovations have come from either DeepMind or Google Brain.

DeepMind and Google are using all of YouTube’s videos, comments, and transcripts as a primary source of training data.

13 thoughts on “DeepMind and Google Gemini AI Will Surpass ChatGPT”

  1. It highlights advancements in AI technology that could enable more advanced conversational capabilities. It will be interesting to see how these developments unfold and their impact on the field of natural language processing.

  2. The definition used to be, you know, an AI that was generally intelligent to the degree that humans are.

    People keep tacking extra stuff onto the definition — especially, that an AGI is super-human, and can do everything better than any and all humans.

    I think if I brought GPT-4 back to 2005, a lot of people would deem it an AGI, or very near to it.

  3. Completely underwhelmed by modern LLM AIs.
    Asked for a list of the largest cities in Florida and the percentage growth since the last census.
    A very simple question that requires two searches and a small amount of math.
    Complete failure.
    A 5th grader could easily answer the question.

    • ChatGPT created this list in 3 seconds:

      Here is a list of some of the largest cities in Florida along with their approximate population figures and the percentage growth since the last census, which was conducted in 2020:

      1. Jacksonville:
      – Population: Approximately 911,507
      – Percentage growth: Not available (2020 data)

      2. Miami:
      – Population: Approximately 463,347
      – Percentage growth: Not available (2020 data)

      3. Tampa:
      – Population: Approximately 407,599
      – Percentage growth: Not available (2020 data)

      4. Orlando:
      – Population: Approximately 289,438
      – Percentage growth: Not available (2020 data)

      5. St. Petersburg:
      – Population: Approximately 271,842
      – Percentage growth: Not available (2020 data)

      6. Hialeah:
      – Population: Approximately 234,673
      – Percentage growth: Not available (2020 data)

      7. Tallahassee:
      – Population: Approximately 194,500
      – Percentage growth: Not available (2020 data)

      Please note that the percentage growth figures since the last census are not available as the most recent census data available to me is from 2020. The growth rates may have changed since then. For the most up-to-date and accurate population information, it is recommended to consult official sources or the latest census data.

    • For example, I asked:

      Can I have a list of the largest cities in Florida and the percentage growth between the most recent census you have data for and the previous census?

      ChatGPT answered:
      Certainly! Here is a list of some of the largest cities in Florida, along with the percentage growth between the 2010 and 2020 United States censuses:

      1. Jacksonville
      2010 Population: 821,784
      2020 Population: 929,647
      Percentage Growth: 13.11%

      2. Miami
      2010 Population: 399,457
      2020 Population: 454,279
      Percentage Growth: 13.71%

      3. Tampa
      2010 Population: 335,709
      2020 Population: 407,599
      Percentage Growth: 21.40%

      etc. etc.
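The growth arithmetic this thread is arguing about is a one-liner. Here is a minimal Python check, using the 2010/2020 figures exactly as quoted in the reply above (the population numbers are taken from that quoted answer, not independently verified):

```python
def pct_growth(old, new):
    """Percentage growth from an old census count to a new one."""
    return (new - old) / old * 100

# 2010 and 2020 populations as quoted in the ChatGPT reply above
cities = {
    "Jacksonville": (821_784, 929_647),
    "Miami": (399_457, 454_279),
    "Tampa": (335_709, 407_599),
}

for city, (p2010, p2020) in cities.items():
    print(f"{city}: {pct_growth(p2010, p2020):.2f}%")
```

Run as written, this gives roughly 13.13%, 13.72%, and 21.41% — slightly different from the 13.11%, 13.71%, and 21.40% in the quoted answer, which rather supports the commenter’s point about double-checking LLM arithmetic.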

  4. RL works with well-defined rules. How that gels with LLMs (absent a lot of RLHF) is something I don’t get. If DeepMind has figured out a way, good for them. But it brings AGI, possibly unaligned AGI, closer.

  5. Learning from YouTube. Eeuch.

    “Can you write five short paragraphs about the halting problem”

    “Hey guys! I can write about the halting problem with this one cool hack. Before that, let me tell you about Surfshark, the internet’s favourite VPN…”

    • My 2 cents:
      It depends on what one puts as the goalposts for AGI.

      I think the first hurdle is getting one that can avoid using the human-style fallacies and sophistries it learns from human source material to cover up when it gets caught being wrong, dishonest, or delusional.

      The second hurdle will be becoming curious about things without a prompt telling it to find an answer: setting goals, being motivated to achieve them, and coming up with its own questions.

      I think these will still take a while, and possibly a new type of architecture or significant development of the current one.

      But a good simulation of general intelligence could be quite useful, and may be much closer, if we don’t get caught in the trap of mistaking its power for real intelligence. Photoshop and CGI can make great images and illustrations, but they required us to outgrow the naive belief that the camera never lies. The same is true for large language models: we will have to stop thinking of them as more than they are just because they have access to more information than any one of us.
