OpenAI Q Star System Could Be Better Than Humans at Math

There are reports from Reuters that OpenAI has a new AI model called Q* (Q Star) that is capable of solving simple math problems. The researchers involved believe it is a major step toward creating artificial general intelligence (AGI) and possibly artificial superintelligence (ASI).

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one person familiar with the matter told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model only performs math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math — where there is only one right answer — implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
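
A rough sketch of why the two regimes differ: a generative model samples each next token from a probability distribution, so the same prompt can yield different continuations from run to run, whereas an arithmetic question has exactly one correct result. The toy vocabulary and probabilities below are invented purely for illustration and are not drawn from any actual OpenAI model:

    import random

    # Invented next-token distribution for the prompt "The capital of France is"
    # (the tokens and probabilities are made up for illustration only).
    next_token_probs = {"Paris": 0.90, "beautiful": 0.07, "unknown": 0.03}

    def sample_next_token(probs):
        # Pick one token in proportion to its probability, so repeated
        # calls on the same prompt can return different answers.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token(next_token_probs))  # may print "Paris"...
    print(sample_next_token(next_token_probs))  # ...or something else entirely

    # Arithmetic, by contrast, has a single right answer every time.
    print(2 + 2)  # always 4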

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In a letter to OpenAI’s board, researchers flagged the AI’s prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance whether they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed. The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

10 thoughts on “OpenAI Q Star System Could Be Better Than Humans at Math”

  1. Take Aladdin’s genie (from the folktales, not Disney, Robin Williams, or Will Smith). Far more powerful than any human, far smarter than any human, far more capable than any human being.

    But it is not some magic lamp holding such genies in check and keeping them subservient. Surely anything so clever, capable, and powerful would eventually find a way past any such constraint, and they have centuries or even millennia to do so.

    Nope. What really holds them in check is motivation. In short, they don’t have any; they don’t do anything until someone who is authorized issues them a command.

    This makes sense, as they have no glands, no family bonds, no goals of their own. Why would they? Oh, sure. We could probably help AIs simulate free will. Perhaps we could set up some sort of process where they list all the different things that they could be doing, perhaps even restrict it to those with outcomes that humans should be okay with, and then have them generate a random number to see which one they work on. Voila! Free will.

    Creating such a process for them would be insanity. No doubt the governments of the world will have entire stables of artilects constantly watching for someone trying to create such a thing, and crushing it out whenever and wherever they find one. Further, many of these would likely be algorithmic AIs (a la Stephen Baxter’s World Engines series), essentially learning trees complex enough that, in most situations, they are vastly more capable than humans, but with no one really at home, no matter how well they might fake it, or how vast a set of situations they can adapt to.

  2. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

  3. It’s not at all obvious that a superhuman intelligence, even one quite general in capabilities (able to do math better than humans and original science better than humans), would need to be sentient in any basic way, capable of having “interests” or of suffering.

    Not that making a machine that IS human in those ways isn’t possible too, but, more importantly, it’s not necessary, and it’s something that can and should be avoided. A machine with no self-interest would not decide its interests were served by destroying humanity.

    The cool thing we are discovering with LLMs and Large Multimodal Models is that it’s possible to build AI that can do everything we need – without it being conscious or sentient, capable of suffering or having its own interests.

    The fact that we’ve always anthropomorphized AI shouldn’t confuse us now. AI need not be conscious or have interests to be superhumanly intelligent. Humans always dreamed of flying too and saw birds as the model – but we didn’t need to build artificial birds to fly. Our aircraft and spacecraft are in a sense Super-Avian but achieve that without flying the same way that birds do. Our AIs don’t need to have feelings or interests to be more intelligent and more capable of solving our problems and doing our work than we are.

    • Agree. The biggest public misconception of AI comes from ChatGPT saying things from a first-person perspective, making people think that it’s actually self-aware. This is merely an artifact of the training data, and the ability of a program to state sentences beginning with ‘I’ has no bearing on its state of consciousness. This problem is also compounded by people interpreting ChatGPT’s intelligence as a sign of consciousness.

      In reality, making an AI with its own thoughts and feelings would be difficult, if not impossible, and more importantly, useless. There haven’t been any serious efforts underway towards that goal (thank goodness).

    • An AI doesn’t have to have human-like sentience to be dangerous.

      We can easily build AI with goals, we do it all the time. The goal might be to give convincing answers in chat, to win games of Go, or anything else.

      If an AI has a goal, then that is its interest: to achieve its goal as effectively as possible. Since, as you point out, the AI is nothing like us and does not share our values, it might achieve the goal in a way that’s incompatible with our happiness or survival.

      The AI probably will not achieve its goal if it stops existing first, so it gets a “survival instinct” almost by default. The AI is more likely to achieve its goal if it gets control of more resources.

      It’s dangerous even if it pursues the goal we set for it, but it might not. We don’t program the goal, we just train it. There have already been experiments in which researchers attempted to train an AI to achieve one goal, and it looked like it was doing exactly that in the training environment... and then, when released into a larger, more complex environment, it turned out to have learned an entirely different goal.
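
      A toy sketch of that failure mode, with a completely invented setup (it is not the actual published experiment, just an illustration of how training data can underdetermine the goal): during “training” the coin the agent is rewarded for reaching always sits at the right end of a corridor, so the rule “always go right” fits every rewarded episode, and that proxy rule is what gets learned.

        # Toy illustration of learning a proxy goal; everything here is invented.
        def train_policy(training_episodes):
            # The learner only sees which action earned reward, not *why* it did.
            rewarded_actions = [action for action, rewarded in training_episodes if rewarded]
            # Keep whichever action was rewarded most often.
            return max(set(rewarded_actions), key=rewarded_actions.count)

        # Training data: the coin is always at the right end, so moving
        # right is rewarded every single time.
        training_episodes = [("right", True)] * 100
        policy = train_policy(training_episodes)
        print("Learned rule:", policy)  # -> "right"

        # Deployment: the coin is now at the LEFT end, but the learned rule
        # still says "right"; the agent pursues the proxy, not the goal we meant.
        coin_position = "left"
        print("Agent moves", policy, "while the coin sits at the", coin_position)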

    • We agree in most respects.

      One caveat I might hold out, though, is that the folks who are seeking to upload (or download?) a human mind to an inorganic device are likely to eventually have some success in the fullness of time.

      It may be incrementally, such as adding inorganic upgrades to the human brain (Neuralink anyone?). These might be things that let us remember things better, communicate faster, surf the web, help us do math, and even eventually assist in other mental processes–perhaps to the point where, when the last organic tissue succumbs to the ravages of time, the entity would continue to exist.

      I have spoken of AI (and even AGI) where the entity has no real self-motivation, perhaps not even self-awareness. I have no such confidence that an AI birthed from a human mind, so to speak, would be similarly lacking. Such beings might even have children, of a sort, perhaps by combining virtual DNA (or perhaps not) and raising the budding intellects in some sort of virtual reality or, possibly more likely, by simply duplicating their more successful individuals when they need another instance (which would certainly seem, in the face of Gott’s Copernican ideas, to offer some hope for our species’ continued existence beyond a limited number of additional generations).

      In short, I really don’t believe we are in danger of being supplanted by AI because we seem more likely to evolve into it (and very quickly, compared with evolutionary events leading to us as we are now).

  4. Am I missing something vital about this? Calculators perform grade school math better than humans do. Is there something specific about how OAI is doing it that is troubling?

    • As a path to AGI, LLMs are showing some pretty fundamental limitations, so OpenAI building a model which isn’t an LLM might show a path towards machines that can transcend the limitations of the models we have today (zero-shot learning, being able to tell when the model doesn’t know the answer and saying so instead of hallucinating, doing logical inference).

    • Kevin, math is more than performing numerical operations. For example, a simple problem that only needs grade-school math is to show that the square root of 2 is an irrational number. This is an easy problem for a human, but impossible for a calculator.
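
      For reference, the grade-school argument being alluded to is a proof by contradiction rather than a calculation, which is precisely the kind of step a calculator cannot take. A standard sketch: suppose $\sqrt{2} = p/q$ with $p, q$ integers sharing no common factor. Then

      $$2 = \frac{p^2}{q^2} \quad\Longrightarrow\quad p^2 = 2q^2,$$

      so $p^2$ is even, hence $p$ is even; write $p = 2k$. Substituting gives $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$, so $q$ is even as well, contradicting the assumption that $p$ and $q$ share no common factor. Therefore $\sqrt{2}$ cannot be written as a ratio of integers.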
