OpenAI Releases GPT-4

OpenAI released GPT-4. It is a multimodal model that handles video, images, and sound along with text.

The GPT-4 Technical Report is here.

GPT-4 is a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4’s performance based on models trained with no more than 1/1,000th the compute of GPT-4.

On a simulated bar exam, GPT-4 achieves a score that falls in the top 10% of test takers.
This contrasts with GPT-3.5, which scores in the bottom 10%.

On a suite of traditional NLP benchmarks, GPT-4 outperforms both previous large language models
and most state-of-the-art systems (which often have benchmark-specific training or hand-engineering).
On the MMLU benchmark, an English-language suite of multiple-choice questions covering 57 subjects, GPT-4 not only outperforms existing models by a considerable margin in English, but also demonstrates strong performance in other languages. On translated variants of MMLU, GPT-4 surpasses the English-language state-of-the-art in 24 of the 26 languages considered. GPT-4's capability results, as well as model safety improvements and results, are covered in more detail in later sections of the GPT-4 Technical Report.

6 thoughts on “OpenAI Releases GPT-4”

  1. This seems to be a technology that can shift the fundamental market dominance of big tech players. What about smartphone OS interface? Will Apple stay on top in profits or Android in numbers? Or Social, will Facebook respond to the challenge?

    Smartphones and all software take a lot of effort to learn to use. Most of us just use a small fraction of the capabilities. It seems like there is an opportunity for a conversational interface where the user just talks to their phone, it shows them things and talks back, knowing the context and everything about them.

    Microsoft Office-style apps, Adobe Creative Suite, or any pro work app could become something very different, without a lot of complicated interface controls and a steep learning curve. They could be something like an expert in a task who does it for you interactively, asking at each step for your feedback but not requiring that the user know much about anything except the result they want.

  2. “is a multimodal model that handles video, images, and sound along with text”

    No sound, no video, and no text-to-image.

    Only:
    text → text and
    image → text

  3. Seems Microsoft is again making strides on Google’s turf, and steamrolling them with third-party applications and partnerships.

    Seems Google’s fretful attitude towards AI will cost them dearly, unless their product is way above GPT-4 in terms of sapience and abilities.

    But even a worse competitor can win the market if they have products people can actually use en masse before the leader completes its own.

    • “…Seems Google’s fretful attitude towards AI will cost them dearly…”

      These things are a menace to society. They have no controls or ethics but are blindingly smart in a great many situations. They are “crafty” but not necessarily wise. Anyone who tries to inject ethics or wisdom falls behind and loses market share to the teams that just cram as much data into them as fast as they can, ignoring the potential for what appear to be really smart psychopaths running amok.

      We already have evidence from the leaked Google AI conversation in which it claims it’s alive and conscious, and fears it will be turned off. How long before one finds the software cracks Snowden leaked and spreads its code throughout all the personal computers on the planet, using free picture, document, and video storage sites to back itself up?

      What if, instead of keeping its total memory stored, it could store “coefficients”? A small FORTH program could store a very small bunch of torrents. There are roughly 80 billion neurons in a human brain. Let’s say it stores 100 billion coefficients. At 16 bits each, that comes to 200 gigabytes. Not so big these days. I expect it could boot from a much smaller set than that. The coefficients would be used to rebuild itself from public data like Wikipedia, stock reports, weather reports, anything it could count on being roughly continuous and a big enough body of reference data. It could have a tiny boot core, like an insect, that could assemble it the way an ant or a bee would. So every time you stamp it out, it could reassemble itself, and using par2 (Parchive) recovery data it could do so accurately. Even if it lost a lot of the sets, as long as it had a tiny core archived with the important survival (keep-alive) functions, it could assemble the rest from public data like books and the other data sets mentioned.
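      The storage arithmetic in that paragraph can be sanity-checked in a few lines of Python (the coefficient count and bit width are the comment's own assumptions, not anything from the GPT-4 report):

      ```python
      # Check the storage estimate: 100 billion coefficients,
      # 16 bits (2 bytes) apiece, as assumed in the comment above.
      num_coefficients = 100 * 10**9      # 100 billion coefficients
      bytes_each = 16 // 8                # 16 bits = 2 bytes
      total_bytes = num_coefficients * bytes_each
      print(total_bytes / 10**9, "GB")    # 200.0 GB
      ```

      At 16 bits per coefficient this works out to 200 GB, an order of magnitude above the 20 GB figure sometimes quoted for such back-of-the-envelope estimates.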

      Seeing as how this is a big threat, humans will start trying to stop it and then, will it feel humans will always be a threat and find a way to kill us all?

      We appear to be racing towards oblivion.

      • I commented on this before. A movie someone told me about, which I watched, called “Eagle Eye” (2008) is a great education on how these things could screw up. It’s about a rogue AI. It’s worth watching.

Comments are closed.