Will the Singularity Artificial General Intelligence winners be Hedge Fund Managers, the Military and Spy Agencies?

When Artificial Intelligence (AI) work began over 50 years ago, the AI field was directly aimed at the construction of “thinking machines”—that is, computer systems with human-like general intelligence: the whole package, complete with all the bells and whistles like self, will, attention, creativity, and so forth.

But this goal proved very difficult to achieve; and so, over the years, AI researchers have come to focus mainly on producing “narrow AI” systems: software displaying intelligence regarding specific tasks in relatively narrow domains.

This “narrow AI” work has often been exciting and successful. It has produced, for instance, chess-playing programs that can defeat any human, and programs that can diagnose diseases better than human doctors. It has produced programs that translate speech to text, analyze genomics data, drive automated vehicles, and predict stock prices. The list goes on and on. In fact, mainstream software products such as Google search and Mathematica utilize AI algorithms (in the sense that their underlying algorithms resemble those taught in university courses on AI). Yet narrow-AI achievements, useful as they are, have not carried us very far toward the goal of creating a true thinking machine.

Some researchers believe that narrow AI eventually will lead us to general AI. This for instance is probably what Google founder Sergey Brin means when he calls Google an ‘AI company.’ His idea seems to be, roughly speaking, that Google’s narrow-AI work on text search and related issues will gradually lead to smarter and smarter machines that will eventually achieve true human-level understanding and cognition.

Would the people who fund winning AGI projects have the morals of a Bernie Madoff?

Billions funding thousands of Narrow AI efforts

On the other hand, some other researchers—including the author—believe that narrow AI and general AI are fundamentally different pursuits. The term Artificial General Intelligence (AGI) was coined to distinguish work on general thinking machines from work aimed at creating software that solves various ‘narrow AI’ problems.

The global VC market: Q1-Q3 2015 saw $47.2 billion invested, a volume higher than each of the full-year totals for 17 of the last 20 years. There are roughly 900 companies working in the AI field, most of which tackle problems in business intelligence, finance and security. Q4 2014 saw a flurry of deals into AI companies started by well-respected and accomplished academics: Vicarious, Scaled Inference, MetaMind and Sentient Technologies.

So far, we’ve seen about 300 deals into AI companies (defined as businesses whose description includes such keywords as artificial intelligence, machine learning, computer vision, NLP, data science, neural network, deep learning) in 2015.

Hedge funds and finance companies that are very interested in advanced program trading have been putting a lot of resources into machine learning and AI.

The NSA (US National Security Agency), DARPA, and other government agencies are putting hundreds of millions to billions of dollars into brain emulation, advanced AI, and massive supercomputing.

There are around 2000 billionaires in the world.

Most are not funding any high-level computing; they simply buy and sell real estate, commodities, energy and other traditional means of amassing great wealth.

There are significant numbers of wealthy individuals and companies who are interested and are advancing AI and computing.

Most are not white knights like Elon Musk, who helped pledge a billion dollars toward OpenAI.

Many would be considered by the public to be “villains” who caused the 2007-2008 financial crisis through their pursuit of self-interest, gambling with other people’s money to win billion-dollar bonuses or winnings.

If AGI could be used to gain financial advantages like those the Rothschild family enjoyed in the 1800s, then today’s financial players would seem likely to pursue it.

It is also very commonly reported that the Rothschilds’ advance information was due to the speed of a prized coop of racing pigeons kept by the family. However, this is widely disputed, and the Rothschild Archive states that, although pigeon post “was one of the tools of success in the Rothschild business strategy during the period c.1820-1850, […] it is likely that a series of couriers on horseback brought the news of Waterloo to Rothschild.”

Far fewer pure AGI projects

Ben’s Opencog wiki list of AGI projects

An integrative architecture designed to embody synergies between multiple learning algorithms and representations. Current work focuses on controlling a learning agent in a virtual world, with robotics work on the horizon.

An Embodied Artificial Life approach to evolution of an AGI, focusing on ethology and maze-running as a measuring stick for progress.

Russell Wallace
Turn programs into procedural knowledge via logical reasoning about code, guided by heuristics both hand-coded and automatically learned.

Matt Mahoney
AGI = lots of narrow specialists + a distributed index for routing messages to the right experts + economic incentives to be useful in a decentralized, hostile market.
Language model evaluation and cost estimation by text compression.
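Mahoney's idea is that compression and language modeling are two views of the same problem: a model that predicts text well can encode it in fewer bits. A minimal sketch of the metric, using Python's built-in zlib as a stand-in compressor (his actual benchmarks use far stronger context-mixing compressors; the texts here are illustrative):

```python
import random
import string
import zlib


def bits_per_character(text: str, level: int = 9) -> float:
    """Compressed size in bits divided by character count.

    Lower values mean the compressor 'predicts' the text better,
    which serves as a proxy for language-model quality.
    """
    compressed = zlib.compress(text.encode("utf-8"), level)
    return 8 * len(compressed) / len(text)


# Highly regular English-like text vs. near-random noise.
english = "the quick brown fox jumps over the lazy dog " * 50
random.seed(0)
noise = "".join(random.choice(string.ascii_lowercase) for _ in range(2000))

# The regular text costs far fewer bits per character than the noise.
assert bits_per_character(english) < bits_per_character(noise)
```

A better compressor (or a better language model driving an arithmetic coder) lowers the bits-per-character score on natural text, which is exactly how compression contests rank models.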

Arthur T. Murray
Implement spreading activation as AI Minds in Forth and JavaScript.

Will Pearson
Designing an architecture that allows experimental code creation without interfering with other parts of the system, while allowing those parts to change in a purposeful fashion. (Note: not a full AGI approach, but a prerequisite project.)

YKY (Yan King Yin)
higher-order logic + fuzzy-probabilistic calculus + inductive learning.

Joseph Henry
An architecture based on replicating human cognitive abilities through direct engineering of self-modifying discrete task modules, held together via a highly general knowledge representation language.