If we had general intelligence at the level of a rat then the Singularity would be very near

The head of Facebook’s AI, Yann LeCun, says AI still has a long, long way to go before it approaches anything near the intelligence of a baby, or even an animal.

LeCun thinks the keys are:
* machines that can build, through learning, their own internal models of the world, so they can simulate the world faster than real time
* virtual assistants, which he expects to be the next big thing. Current assistants are entirely scripted, with a tree of possible things they can tell you. So that makes the creation of bots really tedious, expensive, and brittle, though they work in certain situations like customer care. The next step will be systems with a little more learning in them, and that is one of the things his team is working on at Facebook: a machine that reads a long text and then answers any questions related to it would be a useful function.
* common sense, the step beyond virtual assistants, when machines have the same background knowledge as a person. We are not going to get that unless we can find some way of getting machines to learn how the world works by observation: just watching videos or just reading books. That is the critical scientific and technological challenge over the next few years. LeCun calls it predictive learning; some people call it unsupervised learning.
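As a toy illustration of the scripted assistants LeCun contrasts with learned systems, here is a minimal hand-authored response tree in Python. The tree contents and function names are hypothetical, purely for illustration; the point is that every branch must be written by hand, which is why the approach is tedious and brittle:

```python
# A minimal sketch of a fully scripted bot: a hand-built decision tree
# of responses. Any input the authors did not anticipate breaks it.
TREE = {
    "prompt": "Is your device powered on?",
    "yes": {
        "prompt": "Is it connected to the network?",
        "yes": {"prompt": "Please contact support."},
        "no": {"prompt": "Check the cable and try again."},
    },
    "no": {"prompt": "Press the power button."},
}

def respond(node, answers):
    """Walk the scripted tree with a sequence of yes/no answers."""
    for ans in answers:
        if ans not in node:  # unanticipated input: the bot has no branch
            return "Sorry, I can't help with that."
        node = node[ans]
    return node["prompt"]

print(respond(TREE, ["yes", "no"]))  # Check the cable and try again.
print(respond(TREE, ["maybe"]))     # Sorry, I can't help with that.
```

Adding a single new capability means authoring a whole new subtree, whereas a learned system of the kind LeCun describes would answer from the text it has read.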

Facebook’s AI team did try to solve Go, but DeepMind and Google built the unbeatable Go systems.
DeepMind’s team believes that deep learning and neural nets can be extended to become artificial general intelligence.

PathNet is DeepMind’s step toward super neural networks for artificial general intelligence.

Where are we with brain technology and AGI?

Launched in 2012, the Green Brain Project aims to create the first accurate computer model of a honey bee brain and run it on a UAV. In 2015, the team used the bee brain simulation to fly a quadcopter. Bees have 960,000 neurons and 1 billion synapses.

* House mouse: 71 million neurons and 1 trillion synapses
* Brown rat: 200 million neurons
* Cat: 760 million neurons
* Pig: 2.2 billion neurons
* Rhesus macaque: 6.8 billion neurons
* Human: 86 billion neurons and 150 trillion synapses

If we had a general intelligence with the capabilities of a rat, it would already be roughly three times beyond the goals of the European Human Brain Project with its mouse models, and reaching human level would then be a matter of scaling the neuron count and the solution by roughly 430 times.
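The scaling factors follow directly from the neuron counts listed above:

```python
# Sanity check of the scaling claims, using the neuron counts in this article.
mouse = 71_000_000
rat = 200_000_000
human = 86_000_000_000

print(round(rat / mouse, 1))   # 2.8  -> a rat is ~3x the HBP's mouse models
print(round(human / rat))      # 430  -> human scale is ~430x a rat-level system
```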

In 2016, Braintree founder Bryan Johnson invested $100 million into Kernel, a company with the sole purpose of building hardware and software to augment human intelligence. Johnson believes that over time he will need to raise $1 billion to execute a series of product milestones, which he thinks will take 7 to 10 years.

Elon Musk has invested in Neuralink to develop high-resolution brain-computer interfaces.

The Human Brain Project (HBP) is a large ten-year scientific research project (1 billion euros) that aims to build a collaborative ICT-based scientific research infrastructure to allow researchers across Europe to advance knowledge in the fields of neuroscience, computing, and brain-related medicine.

The project initially focused on mouse brain models. After a 2015 review, the project is being relaunched with more of a focus on tools for brain research.

Here is the Blue Brain EU project website.

– Sept 2017 – Successful Neuromodulation of Neural Microcircuits NM² Conference prompts future collaborations. At the end of September, the Blue Brain Project concluded a stimulating, interactive and highly collaborative Neuromodulation of Neural Microcircuits NM² Conference. A global line-up of renowned speakers and more than one hundred attendees from across the different Neuromodulation communities ensured a cross-pollination of experience and expertise throughout the three-day Conference.

The BBP’s current digital reconstructions are first drafts, to be refined in future releases. The fact that they are detailed means they are “data ready”: it is easy to incorporate data from new experiments as they become available. The BBP will dedicate significant effort to this task. Current BBP reconstructions omit many features of neural anatomy and physiology that are known to play an important role in brain function. Future BBP work will enrich the reconstructions with models of the neuro-vascular glia system, neuromodulation, different forms of plasticity and gap junctions, and couple them to neurorobotics systems, enabling in silico studies of perception, cognition and behavior.

A second major effort will be dedicated to reconstructions and simulations on a larger scale than neural microcircuitry. The Blue Brain team is already working with communities in the Human Brain Project and beyond, to build digital reconstructions of whole brain regions (somatosensory cortex, hippocampus, cerebellum, basal ganglia) and eventually the whole mouse brain. This work will prepare the way for reconstructions of the human brain, on different scales and with different levels of detail.

Finally, a very large part of BBP activity is dedicated to engineering: developing and operating the software tools, the workflows and the supercomputing capabilities required to digitally reconstruct and simulate the brain and to analyse and visualize the results.

DeepMind published a paper, PathNet: Evolution Channels Gradient Descent in Super Neural Networks. From the abstract:

For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forward and backward passes of the backpropagation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning: fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).
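The evolutionary part of the algorithm can be sketched in a few lines. The following is a hypothetical toy, not the paper’s implementation: there is no actual neural network training, and the fitness function is a stand-in for measured task performance. It only illustrates the tournament-selection-over-pathways idea the abstract describes:

```python
import random

LAYERS, MODULES = 3, 10  # L layers, M candidate modules per layer
# Toy "optimal" modules per layer; in real PathNet, fitness comes from
# training and evaluating the modules a pathway selects.
TARGET = [{1, 4}, {0, 7}, {3, 9}]

def random_path():
    # a pathway picks a small subset (here 2) of modules per layer
    return [set(random.sample(range(MODULES), 2)) for _ in range(LAYERS)]

def fitness(path):
    # stand-in for task performance: overlap with the hidden target modules
    return sum(len(p & t) for p, t in zip(path, TARGET))

def mutate(path, rate=0.2):
    new = []
    for layer in path:
        layer = set(layer)
        for m in list(layer):
            if random.random() < rate:       # occasionally swap a module
                layer.discard(m)
                layer.add(random.randrange(MODULES))
        new.append(layer)
    return new

def tournament_step(population):
    # binary tournament: the loser's pathway is overwritten by a
    # mutated copy of the winner's pathway
    a, b = random.sample(range(len(population)), 2)
    if fitness(population[a]) < fitness(population[b]):
        a, b = b, a
    population[b] = mutate(population[a])

random.seed(0)
pop = [random_path() for _ in range(20)]
for _ in range(2000):
    tournament_step(pop)
best = max(pop, key=fitness)
print(fitness(best))  # typically close to the maximum of 6
```

In the real system each tournament round also trains the parameters along the competing pathways with backpropagation, and transfer to a new task works by freezing the winning path and evolving fresh paths around it.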

Narrow AI – self-driving cars and drones

Nvidia chief executive Jensen Huang recently said artificial intelligence would enable fully automated cars within four years. In October 2017, Nvidia and partner development companies announced the Drive PX Pegasus system, based on two Xavier CPU/GPU devices and two post-Volta generation GPUs. The companies stated the third-generation Drive PX system would be capable of Level 5 autonomous driving, with a total of 320 TOPS of AI computational power and a 500-watt TDP.

Pegasus can handle 320 trillion operations per second, roughly a 13-fold increase over the calculating power of the current PX 2 line. Pegasus will be available in the middle of 2018. Millions of cars and trucks will use these and even more powerful systems over the next few years.
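The 13-fold figure checks out arithmetically, assuming the commonly cited ~24 TOPS rating for the Drive PX 2 (an assumption not stated in this article):

```python
# Rough consistency check of the TOPS figures above.
pegasus_tops = 320
px2_tops = 24  # assumed rating for the current Drive PX 2

print(round(pegasus_tops / px2_tops, 1))  # 13.3, matching the ~13-fold claim
```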

Flood funding

China’s government and companies like Alibaba are and will be investing many billions to tens of billions of dollars a year into Artificial Intelligence and quantum computing. Google, Facebook, Microsoft, IBM and others are also investing many billions into AI and quantum computing.

2 thoughts on “If we had general intelligence at the level of a rat then the Singularity would be very near”

  1. “So that makes the creation of bots really tedious, expensive, and brittle, though they work in certain situations like customer care.”

    Like HP’s bots, where the last step in the tree is always “Try reformatting the hard drive”, followed by the customer with a bad graphics card hanging up and buying one on their own dime. From HP’s perspective, that’s perfection!

    For most purposes, we don’t need general intelligence better than a smart insect. That would be enough to automate all industrial processes.

    What we really need for the other applications isn’t artificial intelligence, but instead effective intelligence amplifiers, so that humans can transcend their current intellectual limitations, while remaining in control of what’s going on.

  2. I have no doubt that they will get there, it is just a matter of time. Every day, here and elsewhere, I read about developments that are cheaper, more powerful, more efficient, more capable. We don’t need the full capability of the human brain to have a serious impact on our lives; Nvidia’s Pegasus at 50 W and $50 would change the world, so maybe the 5th – 6th gen? While I am concerned about our future jobs, it is going to be an interesting time.

Comments are closed.