Bing AI and GPT-4 Have an IQ of 114 and Are Smarter Than the Average Human

GPT-4 and Google’s unreleased PaLM system are scoring above the average human on IQ tests and on tests that correlate with IQ.

GMAT scores and other standardized tests have been mapped to IQ, and high-IQ societies accept a range of tests that GPT-4 has taken.
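For a sense of how a score on such a test lands on the IQ scale: IQ is conventionally normed to a normal distribution with mean 100 and standard deviation 15, so a percentile rank on a well-correlated test can be converted with an inverse normal CDF. The sketch below is purely illustrative; the 82nd-percentile input is an assumed example, not a reported GPT-4 result.

```python
# Illustrative only: map a test percentile onto the IQ scale,
# assuming the conventional IQ norming of mean 100, SD 15.
# The 0.82 percentile is an assumed example, not a reported GPT-4 figure.
from statistics import NormalDist

IQ_SCALE = NormalDist(mu=100, sigma=15)

def percentile_to_iq(percentile: float) -> float:
    """Convert a percentile rank (0..1) into an IQ-scale score."""
    return IQ_SCALE.inv_cdf(percentile)

print(round(percentile_to_iq(0.82)))  # ~114, i.e. above the average of 100
```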

16 thoughts on “Bing AI and GPT-4 Have an IQ of 114 and Are Smarter Than the Average Human”

  1. Would you buy a used car from one of these?

    Would you trust it with your toddler on a trip to the zoo?

    Would you put control of your nation’s nuclear arsenal in its hands?

    AI enthusiasts are not the people you should talk to about what it means to be human. They are a highly skewed subset of a highly skewed subset.

    Juggling data and coming up with plausible facts is one aspect of human cognition. But that is not life by a long shot.

    Ray Kurzweil was looking at a very small part of the equation. His heuristics were focused on that small part. Even if Kurzweil were an experienced mental health clinician and also a skilled pedagogue, he would necessarily miss most of the actual picture.

    Can you trust your AI to make important decisions without human oversight?

    “It’s not clear how secure or reliable ChatGPT-generated code is. This is under active investigation in academic labs, including ours, and we hope to get a better idea within six months or so.” – Saurabh Bagchi of Purdue

  2. So, basically we’re watching the singularity take off, in real time. Cool!

    Yet to be determined whether it’s a good cool, or bad cool, though. They still seem to be a bit deficient on the “benign” part of “benign AI”. Though indifferent AI lacking in independent motivation may be good enough for now.

    Is there any good advice on how to cope with this, aside from “arrange to have been a tech billionaire who can afford his own super-intelligent AI”? Because I seem to have missed the boat on that particular option.

  3. A curious metric is not being measured: How many dollars per answer are companies or people willing to pay for answers? You might call this the Free Market Test of AI.
    Currently, the best programmers, for example, make 6-7 figure annual salaries, with actual AI top talent making in the low to mid 7 figure range.
    Yet somehow, despite the enormous costs of the AI programmer – which, of course, has talents far beyond programming too – most people balk at more than $20/month, as measured by the sharp dropoff in users at more than that level. I don’t know what percentage of users are willing to pay even $20/month, but I suspect it is in the single digits.
    If AI is as good as claimed, why isn’t it already commanding at least a significant fraction of human salaries? As already described, it’s not because AI is so cheap to create; it isn’t, even if it will drop in cost very fast and very much in the future.
    Some of the answer is surely because it is fairly new in the public domain, but remember when IBM’s Watson won at Jeopardy? That was in early 2011: https://learningenglish.voanews.com/a/watson-wins-jeopardy-117010918/114319.html and it was predicted to soon replace doctors, lawyers, indeed experts in any field who are highly trained in a body of knowledge. It didn’t happen.
    Yes, today’s GPT models are much more accessible and easier to use and train, and the internet as a source of knowledge is much more robust too (the cost of accumulating all that knowledge on the net is never factored in either; we are all owed a dividend, so maybe the demand for free/nearly free use of AI is not so unreasonable after all!).
    But I wonder why business – which is never known for sentimental attachments to existing ways of doing business, including employee retention – is not already mass replacing humans for its top tasks.
    Maybe pre-structured tests and iterative answers that have already been done 100s if not 1000s of times are not the value sought by employers, and there is still something creative in the problem-solving ability of humans who don’t just respond to complex prompts but who actually devise ways to solve complicated business and social problems?
    Or, maybe we just have to give AI more time, like to the end of the year 🙂

  4. ChatGPT is not comparable to human intelligence; it is getting better at sorting data, but it is not conscious. It is futile to look for the next thing after humans: we are the thing that was not brought to its full potential, and everything else is just an accessory.

    • People are already working on adding self-awareness by embedding LLMs in code that lets the model “remember” far more of its history, pull factual data from trustworthy websites, self-reference and think about its recent memories to draw insights, etc.; a rough sketch of that kind of wrapper is included at the end of this comment.

      It may not work the way human consciousness does, but it’s quickly going to become difficult to say it isn’t a type of self-awareness/consciousness, especially once it gets embodied in a humanoid robot onto which, when it gets good enough, people will naturally project ‘humanity’.

      Top that off with it being more intelligent than average and knowing more factual information than pretty much everyone who isn’t a Jeopardy contender, and people will very soon be perceiving AI as homo superior.

      Also – apparently some experimenters already tried turning GPT-4 loose to see if it could self-replicate and sustain itself in the cloud (i.e. make enough money to pay for its continued execution or even pay to spin off copies to earn even more money, etc). They ‘found it ineffective at autonomously replicating, acquiring resources, and avoiding being shut down “in the wild.”’ But it apparently did some interesting things in the process that indicate that a bit smarter GPT might succeed. (NOTE: That may be a bit over-hyped: the reference in the GPT-4 paper didn’t make it clear if human assistance was involved or if they managed to code it up to work on its own. I suspect the former.)
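      A minimal, purely illustrative sketch of that kind of wrapper, as mentioned above. The call_llm and fetch_trusted_facts helpers are hypothetical placeholders, not any real project’s API:

      ```python
      # Illustrative sketch only: an LLM wrapped in code that keeps long-lived
      # memory, pulls facts from trusted sources, and periodically reflects.
      # call_llm and fetch_trusted_facts are hypothetical stand-ins for a real
      # model client and a real retrieval layer.
      from typing import List

      def call_llm(prompt: str) -> str:
          return f"<model response to: {prompt[:40]}...>"  # placeholder

      def fetch_trusted_facts(query: str) -> str:
          return f"<facts retrieved for: {query}>"  # placeholder

      class ReflectiveAgent:
          def __init__(self) -> None:
              self.memory: List[str] = []  # history the bare model would forget

          def step(self, user_input: str) -> str:
              facts = fetch_trusted_facts(user_input)
              context = "\n".join(self.memory[-20:])  # most recent memories
              answer = call_llm(f"Context:\n{context}\nFacts:\n{facts}\nUser: {user_input}")
              self.memory += [f"user: {user_input}", f"assistant: {answer}"]
              if len(self.memory) % 10 == 0:  # occasional self-reflection pass
                  self.memory.append("insight: " + call_llm("Reflect and draw one insight:\n" + context))
              return answer

      agent = ReflectiveAgent()
      print(agent.step("What did we discuss yesterday?"))
      ```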

      • What’s the reason for an AI (GPT-4, GPT-&) to earn money, besides money being a measure of/equivalent for maintenance, support and energy consumption, and maybe for self-reliant improvement of its predictions/answers or a growth/networking demand/’need’?

        • For reference: Could someone with access please ask this question on GPT-4 (Plus?):
          “What is a reasonable assumption for the rate of yearly fossil fuels formation from photosynthesis related matter input on average for one decade later than year 2000 on planet Earth? What variance of error to expect?”
          Worth bothering a 114 IQ intelligence with this?
          (rough guess ~200 TWh/a; variance of error could be huge, depending on data input for reference; accepted assumption: ~100-150 GT/y dry mass from photosynthesis). A back-of-the-envelope version of that guess is sketched below. Thanks for enabling this experiment.
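          A rough arithmetic sketch of how the ~200 TWh/a guess can be reached from the ~100-150 GT/y dry-mass figure; the energy density and, especially, the tiny preservation fraction are assumed round numbers chosen to show the arithmetic, not sourced values:

          ```python
          # Rough arithmetic behind the ~200 TWh/a guess above; the 17.5 MJ/kg
          # energy density and the ~0.03% preservation fraction are assumptions.
          GT = 1e12        # kilograms per gigatonne
          TWH = 3.6e15     # joules per terawatt-hour

          dry_mass_kg = 125 * GT             # middle of the 100-150 GT/y range
          energy_j = dry_mass_kg * 17.5e6    # assumed dry-biomass energy density
          preserved = 3e-4                   # assumed fraction that ever becomes fossil fuel

          print(round(energy_j * preserved / TWH))  # ~182 TWh/a, same ballpark as ~200
          ```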

      • Exactly my point: it is getting better at sorting the data that is fed to it, but unlike us, it is not a reflection of the field of consciousness.

  5. What I see here is another potential first-comer-takes-all scenario.

    OpenAI has launched a product that was passably successful at general cognition tasks, deciding to launch despite the risk of embarrassing themselves. It was massively used; users found a lot of traps and ways to trick the AI into misbehaving, and OpenAI used those to train the next model, fresh out of input training, with their recent influx of cash and their past experience.

    GPT-4 is much better than GPT-3 now, thanks in part to the use and abuse the latter endured.

    Google and others are at a disadvantage here, because they lack the massive user experience OpenAI already has. The adversarial and embarrassing prompts its models endured are now a competitive advantage for OpenAI.

    And they won’t stop. The GPT yearly release cycle is going to stay as long as they can keep training better models. There ought to be a limit, where they could no longer pay the massive cost of training, but it doesn’t seem to be very nearby.

    And at a certain point, it will no longer matter, because their creation will start devising ways to become better with less, and they’ll follow it to keep their competitive advantage.

    • That’s the result of the bitter lesson: increasing the number of neurons, the number of layers and the size of the input datasets, with some methodological changes (e.g. attention), produces much better results than careful analysis and algorithm planning. The computer code allowing this is ridiculously, offensively simple; what you need is brute processing force (see the toy attention sketch at the end of this comment).

      The ML/Natural Language Processing domain is suddenly being obsoleted by brute-force approaches, where whoever has the bigger cluster for training and a pre-existing AI for filtering data wins the race.

      Some people saw this coming, but most didn’t. Kurzweil was generic enough not to commit to any AGI approach, so he is still right, even if he now looks conservative.
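      As a concrete illustration of “ridiculously simple”, here is a toy version of scaled dot-product attention, the textbook formula rather than any lab’s production code; the rest of the recipe is mostly scale:

      ```python
      # Toy scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
      # The textbook formula; real systems stack many such layers and
      # throw enormous data and compute at them.
      import numpy as np

      def softmax(x, axis=-1):
          e = np.exp(x - x.max(axis=axis, keepdims=True))
          return e / e.sum(axis=axis, keepdims=True)

      def attention(Q, K, V):
          d = Q.shape[-1]
          return softmax(Q @ K.T / np.sqrt(d)) @ V

      rng = np.random.default_rng(0)
      x = rng.normal(size=(3, 4))      # 3 tokens, 4-dimensional embeddings
      print(attention(x, x, x).shape)  # (3, 4): each token attends to all tokens
      ```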

Comments are closed.