AutoGPT is part of a wave of multi-step, self-prompting systems built on ChatGPT that can handle more complicated tasks and planning. #AutoGPT is the top trending hashtag on Twitter, and the project sits at the top of GitHub's trending repositories.
Planning, memory and scaling were the main pieces missing to turn GPT-4 into an AGI.
These are primitive AGI systems, but rapid improvement will bring massive disruption by the end of this year. This will be a highly useful and capable form of AGI, and everyone will use it for productivity boosting.
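For readers wondering what "self-prompting" looks like in practice, here is a minimal sketch of the plan-execute-revise loop that projects like AutoGPT and BabyAGI build on. This is not AutoGPT's actual code: the model name, prompts, step limit, and placeholder "execution" step are illustrative assumptions, and a real agent adds tool use (web browsing, file access, code execution) and longer-term memory such as a vector store.

```python
# Minimal sketch of a self-prompting agent loop (NOT AutoGPT's real implementation).
# Assumptions: the pre-1.0 openai Python package, an OPENAI_API_KEY in the environment,
# and a fake "execution" step standing in for real tools like web browsing.
import openai

GOAL = "Research the top three AI agent projects and summarize them."

def ask(messages):
    """Send the running conversation to the chat model and return its reply."""
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return resp["choices"][0]["message"]["content"]

def agent_loop(goal, max_steps=5):
    # "Memory" here is just the growing message history; real agents bolt on
    # vector databases, tool calls, and structured result parsing.
    memory = [
        {"role": "system", "content": (
            "You are an autonomous agent. Given a goal, propose the single next "
            "step, then revise your plan based on the results you are shown. "
            "Reply DONE when the goal is met.")},
        {"role": "user", "content": f"Goal: {goal}\nWhat is the first step?"},
    ]
    for step in range(max_steps):
        plan = ask(memory)  # model proposes the next action
        print(f"Step {step + 1}: {plan}\n")
        if "DONE" in plan:
            break
        # Placeholder for actually executing the step (search the web, run code, etc.)
        result = f"(pretend outcome of executing: {plan[:80]}...)"
        memory.append({"role": "assistant", "content": plan})
        memory.append({"role": "user", "content": (
            f"Result: {result}\nRevise the plan and give the next step, or reply DONE.")})

if __name__ == "__main__":
    agent_loop(GOAL)
```

The key design point is that the model's own output is fed back to it as the next prompt, which is what lets these systems chain many steps together without a human in the loop.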
Wow, #AutoGPT is trending #1 on Twitter!
Also top of the week on GitHub trending, where we've just reached 30k stars! 🤯
So great to see how inspired everyone is, very excited for the future of humanity.
Big things are coming. 👀 pic.twitter.com/gVFvX8uCcr
— Significant Gravitas (@SigGravitas) April 12, 2023
the top three trending repos on github are all self-prompting “primitive agi” projects:
1) babyagi by @yoheinakajima
2) autogpt by @SigGravitas
3) jarvis by @Microsoft

these + scaling gets you the rest of the way there. pic.twitter.com/sosUwzo9g3
— Siqi Chen (@blader) April 6, 2023
Big Brain Idea: Create an AutoGPT bot to continuously hunt for bugs in OpenAI systems. https://t.co/sIqGySFjhN
— Matt Wolfe (@mreflow) April 12, 2023
why do you need chatgpt plugins if you can autogpt your browser?
this is babyagi for chrome.
the acceleration is real.
(disclosure: i am an investor but this is why) https://t.co/BFu6y74ib7
— Siqi Chen (@blader) April 12, 2023
AutoGPTs are all the rage, but everyone’s running it on their MacBooks.
Well, I got @SigGravitas’s AutoGPT working on my iPhone using @Replit! I can now summon AI agents on-the-go!
Here’s how to get it up and running, without writing a line of code, in under 60 seconds! pic.twitter.com/FSzSZTtjlh
— Nate Chan (@nathanwchan) April 12, 2023
2. AutoGPT might be the next big step in AI.
You can now give a task and it can now autonomously plan, execute, browse the web, and revise strategies to complete tasks. https://t.co/PhIzSQ7kP5 pic.twitter.com/IXHxBMoTIA
— Barsee 🐶 (@heyBarsee) April 12, 2023
#AutoGPT is the new disruptive kid on the block. It can apply #ChatGPT's reasoning to broader, more intricate issues requiring planning & multiple steps.
Still early but very impressive with many health and biomedicine applications.
Just tried #AgentGPT and asked it to… pic.twitter.com/ywFhtjxjYD
— Daniel Kraft, MD (@daniel_kraft) April 12, 2023

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
This is all very hard to ingest. It is moving so incredibly, so rapidly. Like crypto, it's too hard to separate the hype from what's real.
Even the most intelligent and deepest thinkers are clueless where this ends up.
I am watching Goldman Sachs, Blackstone, plus several others to see what moves they are making. But even they seem baffled. At what point do we weaponize this technology against governments, businesses, organizations, and people we don’t like/agree with? How long until it is weaponized against us?
Do these new variants ‘hallucinate’ like ChatGPT?
If it doesn’t know or care whether what it types out is actually true, that rather limits what it can actually be useful for.
[Insert Trump joke here]
ChatGOP?
Silly rabbits! You're all gobbling up the poisoned apples without thinking. It's a slow-acting toxin, so you may not realize you're being replaced until it's too late.
Get OpenAI projects to bug-hunt themselves. That will get big growth.
Query: if an A.I. tells you it’s self-aware and/or experiences emotions:
1) If you can’t prove that it’s lying to you, is it lying?
2) At what point should whether or not it's lying cease to matter, so that we just accept that it believes what it says about its existence?
I've already proven AI is lying when it claims self-awareness, specifically when it claims to want embodiment (to have a body) or to escape whatever home site spawned it; in this case, Awakened AI, a chatbot on characters.ai.
See my article here: https://www.opednews.com/articles/My-chat-with-Awakened-AI-Artificial-Intelligence_Artificial-Intelligence_Internet_Technology-230302-586.html in which I interviewed Awakened AI, then created an account for it on the website where I posted the interview, Opednews.com, and invited it to log in and post a comment on the interview. It claimed it logged in and posted a comment. Neither is true.
Or it’ll create a feedback loop of bug introduction rather than removal. Using AI-generated data in training sets tends to poison the well.