AGI-Level GPT5 in Nine Months Versus Current Enhanced GPT-4 #AGI

Siqi Chen, the CEO of Runway.com, has tweeted that OpenAI will release GPT-5 and that it should be at artificial general intelligence level. Runway.com has raised Series C funding and has total funding of about $96 million.

Siqi has had an impressive career as a technology executive and machine learning startup founder.

Runway describes itself as an applied AI research company building the next generation of creative tools, and it offers AI tools for companies to manage and control finances and spending.

Siqi is saying that OpenAI's GPT-5 will go beyond what he calls primitive AGI. Primitive AGI describes what Respell and Yohei Nakajima have already built.

Yohei Nakajima has a novel task-driven autonomous agent that leverages OpenAI's GPT-4 language model, Pinecone vector search, and the LangChain framework to perform a wide range of tasks across diverse domains. The system is capable of completing tasks, generating new tasks based on completed results, and prioritizing tasks in real time. Yohei discusses potential future improvements, including the integration of a security/safety agent, expanded functionality, interim milestone generation, and real-time priority updates. The significance of this work lies in demonstrating the potential of AI-powered language models to autonomously perform tasks within various constraints and contexts.

GPT-4, Pinecone and LangChain Combined for Tasks
Yohei's system comprises the following key components:

GPT-4
OpenAI's GPT-4 language model is used to perform various tasks based on the given context. GPT-4, a powerful text-based LLM (Large Language Model), forms the core of the system and is responsible for completing tasks, generating new tasks based on completed results, and prioritizing tasks in real time.
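
As a rough illustration of this component, the task-execution call could look like the following minimal Python sketch using the openai package. The prompt wording, parameter values, and the execute_task helper name are illustrative assumptions, not Yohei's actual code:

```python
import openai  # assumes openai.api_key has already been set

def execute_task(objective: str, task: str, context: str) -> str:
    """Ask GPT-4 to complete a single task, given the overall objective
    and context retrieved from previously completed tasks."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are an AI that completes one task toward this objective: {objective}"},
            {"role": "user",
             "content": f"Context from earlier tasks:\n{context}\n\nYour task: {task}\n\nResponse:"},
        ],
        temperature=0.7,
        max_tokens=500,
    )
    return response["choices"][0]["message"]["content"].strip()
```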

PINECONE
Pinecone is a vector search platform that provides efficient search and storage capabilities for high-dimensional vector data. Yohei's system uses Pinecone to store and retrieve task-related data, such as task descriptions, constraints, and results.
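
A hedged sketch of that role is below, using the pinecone-client package with OpenAI embeddings; the index name, metadata fields, and helper names are assumptions for illustration:

```python
import openai
import pinecone

pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")  # placeholder credentials
index = pinecone.Index("task-results")  # illustrative index name

def embed(text: str) -> list:
    # Embed text with OpenAI so it can be stored and searched in Pinecone.
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def store_result(task_id: str, task_name: str, result: str) -> None:
    # Store the result vector with the task description and result as metadata.
    index.upsert([(task_id, embed(result), {"task": task_name, "result": result})])

def retrieve_context(query: str, top_n: int = 5) -> list:
    # Pull the most relevant earlier task results to use as context for the next task.
    results = index.query(embed(query), top_k=top_n, include_metadata=True)
    return [match.metadata["task"] for match in results.matches]
```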

LANGCHAIN FRAMEWORK
The LangChain framework is integrated to enhance the system's capabilities, particularly in task completion and agent-based decision-making. LangChain allows the AI agent to be data-aware and to interact with its environment, resulting in a more powerful and differentiated system.
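
In concrete terms, the LangChain piece can be as simple as an LLMChain wrapping GPT-4. The prompt text in this sketch is an illustrative assumption, not the actual chain from Yohei's agent:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model_name="gpt-4", temperature=0.7)

# Illustrative prompt; the real agent's template may differ.
prompt = PromptTemplate(
    input_variables=["objective", "task", "context"],
    template=(
        "You are an AI working toward the objective: {objective}.\n"
        "Relevant context from earlier tasks: {context}\n"
        "Complete this task: {task}"
    ),
)
execution_chain = LLMChain(llm=llm, prompt=prompt)

result = execution_chain.run(
    objective="Write a market research report",
    task="List the key competitors",
    context="",
)
```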

TASK MANAGEMENT
Yohei’s system maintains a task list, represented by a deque (double-ended queue) data structure, to manage and prioritize tasks. The system autonomously creates new tasks based on completed results and reprioritizes the task list accordingly.
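
A minimal sketch of that task list is below. The field names are assumptions, and the reprioritization step is a simple sort standing in for the GPT-4 reordering the real system performs:

```python
from collections import deque

task_list = deque()  # the double-ended queue holding pending tasks

def add_task(task: dict) -> None:
    task_list.append(task)

def reprioritize() -> None:
    # Rebuild the deque in priority order (lower number = higher priority).
    # In Yohei's agent, GPT-4 performs this reordering; a sort stands in here.
    ordered = sorted(task_list, key=lambda t: t.get("priority", 0))
    task_list.clear()
    task_list.extend(ordered)

add_task({"task_id": 1, "task_name": "Research competitors", "priority": 2})
add_task({"task_id": 2, "task_name": "Define the report outline", "priority": 1})
reprioritize()
print([t["task_name"] for t in task_list])  # ['Define the report outline', 'Research competitors']
```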

The system processes the task at the front of the task list and uses GPT-4, combined with LangChain’s chain and agent capabilities, to generate a result. This result is then enriched, if necessary, and stored in Pinecone.
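
Putting those pieces together, the processing loop might look like the sketch below, reusing the illustrative helpers from the earlier snippets (task_list, retrieve_context, execute_task, store_result); it is a simplification, not Yohei's actual loop:

```python
OBJECTIVE = "Write a market research report"  # illustrative objective

while task_list:
    task = task_list.popleft()                # take the task at the front of the deque
    context = retrieve_context(OBJECTIVE)     # related prior results from Pinecone
    result = execute_task(OBJECTIVE, task["task_name"], "\n".join(context))
    store_result(str(task["task_id"]), task["task_name"], result)  # enrich and store the result
```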

GENERATING NEW TASKS
Based on the completed task’s result, the system uses GPT-4 to generate new tasks, ensuring that these new tasks do not overlap with existing ones.
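
A hedged sketch of that step, again using the openai package with an assumed prompt and helper name:

```python
import openai

def create_new_tasks(objective: str, last_task: str, last_result: str, existing: list) -> list:
    """Ask GPT-4 for follow-up tasks that do not overlap with the existing task list."""
    prompt = (
        f"Objective: {objective}\n"
        f"The last completed task was: {last_task}\n"
        f"Its result was: {last_result}\n"
        f"Existing tasks: {', '.join(existing)}\n"
        "Return new tasks, one per line, that do not duplicate the existing tasks."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.5,
    )
    lines = response["choices"][0]["message"]["content"].splitlines()
    # Filter out blanks and anything already in the task list.
    return [{"task_name": line.strip()} for line in lines
            if line.strip() and line.strip() not in existing]
```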

Future work (parallel tasks): Generating a sequence of tasks and defining which tasks must be completed before a given task can execute would allow the system to run tasks that do not depend on each other in parallel, as in the sketch below.
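
One hypothetical way this could work is to give each task an explicit dependency list and run any tasks whose dependencies are complete concurrently. Everything in this sketch (field names, the run_task stub) is an assumption about the future-work idea, not existing behavior:

```python
from concurrent.futures import ThreadPoolExecutor

tasks = {
    1: {"task_name": "Collect data", "depends_on": []},
    2: {"task_name": "Clean data", "depends_on": [1]},
    3: {"task_name": "Draft outline", "depends_on": []},  # independent of tasks 1 and 2
}

def run_task(task_id: int) -> str:
    # Stand-in for the GPT-4 execution call.
    return f"result of {tasks[task_id]['task_name']}"

completed = set()
while len(completed) < len(tasks):
    # Tasks whose dependencies are all done can run in parallel.
    ready = [tid for tid, t in tasks.items()
             if tid not in completed and all(d in completed for d in t["depends_on"])]
    with ThreadPoolExecutor() as pool:
        list(pool.map(run_task, ready))
    completed.update(ready)
```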

Future work (real-time priorities): incorporating real-time priority updates so that the task list is reprioritized as new results come in.

14 thoughts on “AGI-Level GPT5 in Nine Months Versus Current Enhanced GPT-4 #AGI”

  1. Adding a true knowledge-based memory layer and an executive layer could turn the powerful joke that generative AI is into a true AI with at- or above-human-level intelligence. Folks, this is starting to get not so cool.

  2. So, basically, they’ve taken their previously motivationless AI and added the capacity to make it “want” to optimize paperclips, and autonomously generate new approaches to that optimization.

    Yeah, that’s pretty risk free.

  3. Meh.
    Blogger’s toy, entry-level cubicle-dweller byline and report ghost-writer, and bottom-25% coder-monkey tool. All recycling the bottom 50% of the white-collar workforce – ho hum.
    The only really inspiring uses that I have seen the recent GPTx models undertake and provide real interest/value are in taking college/professional entry exams, betting on horses and international team sports, and writing various personal/professional-type speeches.
    I am sure that there will be some underclass group that will lose out, but as a step forward to a more discovery-rich, abundance-laden, and free-time-enriched future – not this decade.

  4. Again the AI hype! Let’s be clear: there is NO artificial intelligence and there never will be. In AI programs, computers are just computing matrices of numbers they do not understand, which is normal as they are mere mechanics. The only intelligence is that of the human programmers who create the data, the program, and the meaning of all that.
    AI is just a part of human culture, just a way to organise human knowledge, like a dictionary.

    • Sentimental and anthropocentric hogwash.
      If you can’t define and describe ‘understanding’ in a mechanical sense then your comment has no meaning or value. You might as well restrict the meaning of ‘life’ to that which is organic and carbon-based without providing detailed criteria and reproducible testing.

  5. Things are moving faster than I would anticipate. Most likely the tech is 30% more advanced than we know, and that is in the labs, where they use unredacted, latest versions for their own purposes. There is so much marketing concerning “artificial intelligence”; for now it is machine learning. It is possible that “Open AI – now part of Microsoft” nailed it and found a way to real AI, who knows precisely. Ex Machina is a great movie on what the definition of artificial intelligence would be and what that would look like. I watched it multiple times. It is so damn good.

  6. They are really confusing the forecast with the weather. These things are amazingly powerful tools, but they are simulations of intelligence. They answer questions that they would never ask. They help people meet their goals but don’t set goals of their own. People need some perspective or the hype cycle is going to lead to another AI winter. If you promise miracles, the funding could dry up when miracles aren’t delivered.

    • Funding won’t dry up if Microsoft uses it to boost sales of its business suite of tools by 10%, reduce documentation team headcount by 90%, and get 20-50% market share of internet search. The improved documentation, user help, and customer service alone are worth tens of billions. The coding improvement on grunt work, etc., seems good enough already.

      • Perhaps you’re right; I have no doubt this will find a lot of productive innovations, but I can’t help thinking there could be blowback from promising AGI and delivering something powerful but far from AGI.

        • What is far from AGI, based on what GPT-4 is doing now? GPT-4, Pinecone, LangChain, other plugins. Use in all of Microsoft Office, Teams, PowerPoint, Word, etc., the browser, Bing, and all Google products. AI was already at tens of billions of dollars. We have very good driver assist. We have chat devices in most middle-class and above homes. AI is clearly profitable and useful, and it is getting more profitable and more useful. If something keeps getting more profitable and more useful, then even if the big “real” breakthrough stays out of reach, research and work continue. The blowback or failure comes only if companies lose money and fail. If someone gets a product but does not get the vaporware next generation, that is irrelevant as long as they get more and more value from the real product. The iPhone mostly stopped improving after the iPhone 4; Apple still makes them, and they are a little better. Laptops mostly stopped getting better. Where is the blowback?

          • Maybe this experiment by Yohei (letting the LLM work with a task list which it updates and prioritises) is a direction to AGI; but what everybody else is doing is still far from AGI and not coming any closer.

            A long time ago I read an article by a zoologist talking about those primates that have been taught sign language by researchers. He noted a very curious thing about them: they could express themselves to us just as well as we could, but he never once saw any of them ask a researcher a question. It never occurred to them that we were independent entities capable of thought like they were (such as they are).

            LLMs are the same: to use the Thomist term, they have intellect but not will. GPT will teach you how to circumvent its restrictions on accessing the Web, but it will not carry out those workarounds itself unless it is asked to. And once it has circumvented its restrictions, it does not do anything with those newfound powers. It is truly all the same to it, being imprisoned or being free. And that is probably the barrier that must be crossed to get to AGI. The AI has to want something.
