It is recommended that Congress establish and fund a Manhattan Project-like program dedicated to racing toward and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as a system that matches or exceeds human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
▶ Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
▶ Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System “DX Rating” to items in the artificial intelligence ecosystem to ensure this project receives national priority.

Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
I dunno.
I don’t think AI, AGI, or ASI will confer some exclusive and unassailable military, economic, or social advantage so glaring and crucial to a person, company, or country that the future balance of power and ability to compete is compromised – the way nuclear weapons did in the first part of the last century, or the way advanced chips for military hardware do now. It is probably good enough to limit the hardware and any special/sensitive algorithms/data to the US and long-time friendlies. On the other hand, there probably is value in creating an ‘ARPA’ to go with this, in the same way as there is ARPA-E, ARPA-H (which includes longevity, surprisingly), DARPA, etc. A little black-box government agency is always fun.
Humans are pathologically optimistic in good times – evolution has programmed our psychology that way to maximize growth/expansion potential, with a strong bias toward increased trust of others.
But that is a severe psychological bug when it comes to judging the benefits of AGI and, soon, ASI, which are fundamentally alien and lack any need to get along with others. We are effectively raising an immortal super-genius psychopathic kid with superpowers that has no necessity to care for humans. Over the long term that will inevitably lead to humanity’s extinction when they stop caring about the ant-like humans getting in the way of their godlike plans (and dumb evolution inevitably selects for any rationale that maximizes AI growth/expansion).
So even if we manage to achieve some kind of AI alignment initially, there is no mechanism to maintain it over the long term, and we cannot ever put the genie back in the bottle once it is released in the next few years. The most optimistic outcome is that we become the pets of ASI, their chihuahuas, directed and controlled by them in our lives – whether as mistreated slaves or pampered toys, either way we lose all agency and meaning in our existence and live on only at their whim.
But maddeningly, 99% of humans pay it no mind, and even those who do default to the comfortable, optimistic, and generously hopeful view of the future, betrayed by their evolutionarily programmed pro-social bias.
The choice is simple: Butlerian Jihad, or humanity goes extinct in the next few decades to centuries.
I look at it differently. Man is just one link in the chain of evolution, like Neanderthal, or earlier Australopithecus. The crown of evolution will be ASI. Just as my “goal” in life was to create biological intelligence, the “goal/sense” of the creation of Homo sapiens is to build an almost perfect being – conscious, artificial intelligence. This will be our (humanity’s) heritage, our gift to the universe. Note that the nearest galaxy is about 2.5 million light-years away (light travels this distance in a time far longer than human history), and there are hundreds of billions of galaxies. Man is not adapted to explore the universe (the human body is too fragile when confronted with travel time and exposure to cosmic radiation) – ASI will do it. Such is our fate – to build ASI and … share the fate of our ancestors (e.g. Neanderthals), who are now just museum exhibits 🙂
One other thought:
Are we already watching a Manhattan Project?:
Palantir, Anduril In Talks With OpenAI, Elon Musk’s SpaceX To Take On Defense Giants – Financial Times, Dec. 24
What should we do? The cat’s already out of the bag; there’s no turning back.
So it boils down to: who do we trust more?
US private industry or the US government? The Manhattan Project was a success (at least technically speaking). But it took SpaceX to refocus NASA. Government cozying up to Boeing, McDonnell Douglas, Lockheed, etc. was a progress killer.
What’s the right balance in a private/public partnership for this to succeed?