OpenAI Makes Progress on Its Text Generator

OpenAI is scaling up language models, and this greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. They trained GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and tested its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 can translate, answer questions, and perform cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation. It can unscramble words, use a novel word in a sentence, and perform 3-digit arithmetic. They also identify some datasets where GPT-3’s few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
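The "purely via text interaction" setup above can be sketched in a few lines: the model sees a task description plus a handful of worked demonstrations, then a new query, all as one plain-text prompt, with no gradient updates. The function name and Q/A formatting below are illustrative assumptions for the sketch, not the paper's exact prompt format, and the model call itself is omitted.

```python
# Minimal sketch of few-shot prompting as described for GPT-3:
# K demonstrations plus a query are concatenated into one text prompt.
# The actual model completion step is not shown -- GPT-3 is simply a
# language model asked to continue this text.

def build_few_shot_prompt(demonstrations, query, task_description=""):
    """Concatenate an optional task description, K (question, answer)
    demonstrations, and the new query into a single prompt string."""
    lines = []
    if task_description:
        lines.append(task_description)
    for question, answer in demonstrations:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # the model continues from here
    return "\n".join(lines)

# 3-digit arithmetic, one of the tasks mentioned in the paper
demos = [("123 + 456", "579"), ("701 + 199", "900")]
prompt = build_few_shot_prompt(demos, "318 + 417", "Add the two numbers.")
print(prompt)
```

The point of the format is that "learning" happens entirely in the model's forward pass over this prompt; nothing about the task is baked into the weights beforehand.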

GPT-3 can generate news articles that human evaluators have difficulty distinguishing from articles written by humans, even at lengths of around 500 words.

GPT-3 used ten times as much training data as GPT-2.

GPT-3 was 65% accurate on SAT analogy questions.

SOURCES: OpenAI, arXiv paper
Written by Brian Wang

16 thoughts on “OpenAI Makes Progress on Its Text Generator”

  1. The thing is though, do we really need or want an AI to do those things? They just need to do useful things for us, like drive a car or control a robot to do useful tasks that humans don’t always want to do.

  2. Nothing in particular, but in this example, he linked to an early version of the paper, while the arXiv page shows that v3 of the PDF is available. That’s why arXiv and similar preprint servers prefer you link to the paper’s landing page, which lets you select both the first and the latest version of the paper, as well as search for related papers by topic or author.

    Also, it’s generally rude to direct link to files if you can freely access them from a page hosted by the same organization. I could understand if there was no page, or some sort of paywall or obfuscation mechanism making access difficult, but that isn’t the case here.

  3. Yeah, for making propaganda and spamming social networks they will be great, as a figure of speech.

  4. None of this wordsmithing moves AI one iota closer to sentience. For that, you need 4+ billion years of evolution. AI is good for helping to disprove the existence of God, however, showing that top-down “creator” solutions don’t produce sentient beings. We still have more in common with an Amoeba than a Supercomputer – e.g. the instincts, needs, or drives, for hunger, reproduction, escaping from danger, respiration, eliminating/fleeing waste, sense of self, and probably a few other things I didn’t think of. You can’t really “program that in,” it’s all inborn at the cellular level and is an inherent quality of being alive.

  5. Words are labels. An understanding of the relations among labels
    can only produce lifeless prose, since there is no knowledge or
    understanding whatsoever of what is labelled. IMHO, the only useful
    application of such engines can be in translation.

  6. A parameter space so large it’s comparable to the corpus itself and several orders of magnitude more processing than Peter Turney used in 2005 with Latent Relational Analysis: “LRA achieves state-of-the-art results, reaching human-level performance on the analogy questions”

    It would appear Musk has discovered a way to save us all from turning into paperclips:

    Send the field of AGI research down a rat hole.

  7. Just making the articles up on the spot from existing narratives and whatever keywords have a high click count.
    Same as most “human” journalists.

  8. I don’t think generalized language models are going to get us all the way to human level NLP. I think some specialized language models for specific domains as well as a general world model will help. I also think performance would improve if grammar rules were programmed directly in instead of relying on unsupervised training to get us there.

  9. Also, sockpuppet comment bots. It remains to be seen whether humans will actually be able to communicate with each other amidst the sea of comment bots.

  10. Unscrambling words into sentences is good. Does anyone have a github link for code that uses this trick in particular?

Comments are closed.