Mo Gawdat, Peter Diamandis, and Salim Ismail discuss AGI, how to adapt to an AI-driven world, the future of jobs, and more.
They discuss the different kinds of intelligence.
They talk about AI being broadly superior in writing and math, and note that LLMs are rapidly knocking off other categories.
Mo Gawdat is a renowned author, entrepreneur, and former Chief Business Officer at Google X. He is best known for his work on happiness and technology, which includes his bestselling books. His notable works include Solve for Happy: Engineer Your Path to Joy (2017), Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World (2021), That Little Voice in Your Head: Adjust the Code That Runs Your Brain (2022), and Unstressable: A Practical Guide to Stress-Free Living (2024). Mo Gawdat is also set to release a new book titled Alive. His career spans roles at IBM, Microsoft, and Google, where he led projects like Project Loon and Project Makani. Gawdat is also the founder of the One Billion Happy initiative and the co-founder of Unstressable, an online platform for stress management.
Salim Ismail is a serial entrepreneur and technology strategist well known for his expertise in exponential organizations. He is the Founding Executive Director of Singularity University and the founder and chairman of ExO Works and OpenExO.

Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends, including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Although they can be very powerful tools, I doubt that LLMs will really achieve AGI, for the following reasons:
1) They don't learn by themselves through interaction with the world; they have to be trained manually by engineers.
2) Hallucination is ingrained in LLMs. Even if we can reduce hallucinations, we can't get rid of them.
3) They have no long-term memory of discussions with users.
According to Dr Alan Thompson (lifearchitect.ai), we are ~90% of the way to AGI.
1. AI systems can be programmed to learn by themselves.
2. The new GPT-4.5 has decreased hallucinations considerably; with reasoning, they may be reduced further.
3. For long-term memory you need lots of storage space; hardware is still a limiting factor.
We need a new paradigm for energy, efficiency, and long-term storage. With current AI (as with the discovery of protein folding), AI and smart humans can solve these problems, and ASI will develop in the next 10-20 years.
"2) Hallucination is ingrained in LLMs."
It's ingrained in human neural nets as well. Take our beliefs in Tarot cards, crystal healing, or flat Earth at the outside edge, not to mention religion: the majority of humanity believes either that God sent his son, a carpenter, to Earth to spread his word, or that God chose an illiterate peasant to be his last prophet, and each thinks the other is wrong.
Maybe hallucination (imagination?) is a feature, not a bug.