Novamente is working on Artificial General Intelligence (AGI), and the diagram above shows its architecture.
More recent AGI work is the OpenCog project.
In 2009, OpenCog presented "AGI Preschool: A Framework for Evaluating Early-Stage Human-like AGIs."
The toddler is the second of five AGI stages.
Memristor Patents and Current Developments
Williams' solid-state memristors can be combined into devices called crossbar latches, which could replace transistors in future computers, taking up a much smaller area.
They can also be fashioned into non-volatile solid-state memory, which would allow greater data density than hard drives with access times potentially similar to DRAM, replacing both components. HP prototyped a crossbar latch memory using the devices that can fit 100 gigabits in a square centimeter. HP has reported that its version of the memristor is about one-tenth the speed of DRAM.
Patents related to memristors appear to include applications in programmable logic, signal processing, neural networks, and control systems.
Pattern matching, classification/machine learning, and sensors and sensor processing are important parts of AGI, as seen in the diagram above.
Applications of Memristor Crossbars for Pattern Recognition and Robotics
Blaise Mouttet received U.S. Patent 7,459,933 (December 2, 2008), which includes various patent claims to using two-terminal hysteretic resistance materials for image processing and pattern recognition.
As opposed to storing data, attempting to recreate the basic logic functions used in digital logic, or creating a neural net, the present invention proposes an implementation of a crossbar array as a component of a system which, given a first set of analog or digital input signals, can transform the signals into a second set of analog or digital output signals based on preprogrammed values stored in the crossbar array. Employing particular modifications of the crossbar array structure allows for preprogrammed impedance values (Zij) to uniquely determine transfer function coefficients (Tij) for the crossbar and, when combined with specific input and output circuitry, creates a physical device capable of performing a linear transformation of the input signals into output signals. This introduces a new level of parallel processing and adaptability to signal processing applicable to wave function generation, control systems, signal filtering, communications, and pattern recognition.
If a 10×10 scanning probe input/output array is provided, and given the parameters described in the patent, 100×10,000 = 1,000,000 programmable regions are possible. However, for a 1000 micron × 1000 micron area, corresponding to the area covered by the 10×10 scanning probe array with 100 micron interspacing, 10^10 intersection points of the nanowires may be achieved, with 10^10/25 programmable regions.
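The core idea in the patent excerpt above is that a crossbar array computes a linear transformation in analog: programmed crosspoint values act as transfer-function coefficients, and every output is formed in parallel. A minimal software sketch of that mapping (the conductance values and sizes here are illustrative assumptions, not from the patent):

```python
# Minimal sketch (not the patented circuit): a crossbar array performing a
# linear transformation. Each crosspoint stores a programmed conductance
# G[i][j]; output j is the sum of the inputs weighted by column j of G,
# i.e. a matrix-vector product that the physical array computes in parallel.

def crossbar_transform(G, v_in):
    """Return output currents I_out[j] = sum_i v_in[i] * G[i][j]."""
    rows, cols = len(G), len(G[0])
    return [sum(v_in[i] * G[i][j] for i in range(rows)) for j in range(cols)]

# Example: a 2x3 crossbar programmed with arbitrary conductances (siemens).
G = [[0.5, 0.1, 0.0],
     [0.2, 0.3, 0.4]]
v_in = [1.0, 2.0]                    # input voltages (volts)
print(crossbar_transform(G, v_in))   # approximately [0.9, 0.7, 0.8]
```

Because the multiply-accumulate happens at every crosspoint simultaneously, the array's throughput grows with its size rather than with a clock rate, which is the parallelism the patent text emphasizes.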
Experimental demonstration of associative memory with memristive neural networks. Associative memory is the ability to correlate different memories to the same fact or event.
Circuit elements with memory: memristors, memcapacitors and meminductors
We extend the notion of memristive systems to capacitive and inductive elements, namely capacitors and inductors whose properties depend on the state and history of the system. All these elements show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. These elements and their combination in circuits open up new functionalities in electronics and they are likely to find applications in neuromorphic devices to simulate learning, adaptive and spontaneous behavior.
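The pinched hysteresis loop mentioned in the abstract can be reproduced with a simple simulation. This sketch assumes an HP-style linear dopant-drift model with illustrative constants; it is not the device model from the paper:

```python
import math

# Memristor driven by a sinusoidal voltage, using an assumed linear
# dopant-drift model: memristance interpolates between R_on and R_off as the
# normalized state w drifts with the current. The i-v trace forms a pinched
# hysteresis loop: whenever v = 0, i = 0 as well.

R_on, R_off = 100.0, 16e3      # limiting resistances (ohms)
k = 1e4                         # drift factor mu_v * R_on / D**2 (assumed)
w = 0.1                         # normalized dopant boundary position, 0..1
dt = 1e-5
trace = []
for n in range(20000):
    t = n * dt
    v = math.sin(2 * math.pi * 50 * t)        # 50 Hz, 1 V amplitude
    M = R_on * w + R_off * (1 - w)            # instantaneous memristance
    i = v / M
    w = min(1.0, max(0.0, w + k * i * dt))    # state drifts with charge
    trace.append((v, i))

# Pinch check: current is (near) zero whenever the voltage is (near) zero.
print(max(abs(i) for v, i in trace if abs(v) < 1e-3))
```

Because i = v/M at every instant, the loop is pinched at the origin regardless of the state history, which is the "fingerprint" the authors use to define all three memory elements.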
Apart from the obvious use of these devices in non-volatile memories, several applications can already be envisioned for these systems, especially in neuromorphic devices to simulate learning, adaptive and spontaneous behavior. For instance, the identification of memristive behavior in primitive organisms such as amoebas opens up the possibility of relating physiological processes that occur in cells with the theory of memory devices presented here. Along similar lines, one could envision simple models that identify memory mechanisms in neurons and use these memory devices to build such models in the laboratory. Due to their versatility (including analog functionalities), the combined operation of these memory devices in electronic circuits is still largely unexplored, and we hope our work will motivate experimental and theoretical investigations in this direction.
Speculation on Billions of Memristors per Square Centimeter
If there are billions of memristor elements, then massive gigapixel and terapixel images could be rapidly processed. With new metamaterial-enhanced resolution at the nanometer level, images of DNA could be taken for nearly instant gene sequencing, or images of someone's blood for detection of disease or illness.
We would be able to image at the nanometer scale and also process and understand what we are measuring at massive volumes.
There is evidence that a synapse is a memristor. Being able to make many billions of memristors could be an easier way to enable hardware brain emulation.
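If a synapse behaves like a memristor, then spike-timing-dependent plasticity (STDP) maps naturally onto conductance changes: the weight strengthens when a pre-synaptic spike precedes a post-synaptic one, and weakens otherwise. A hedged sketch of that update rule (all constants are illustrative assumptions):

```python
import math

# STDP-style weight update: the change depends exponentially on the time
# difference between pre- and post-synaptic spikes. In a memristive synapse
# this change would be a shift in device conductance.

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.05, tau=20.0):
    dt = t_post - t_pre                          # ms; positive = pre before post
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)        # potentiation
    else:
        w -= a_minus * math.exp(dt / tau)        # depression
    return min(1.0, max(0.0, w))                 # conductance stays bounded

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)      # causal pair -> strengthens
print(w)                                         # slightly above 0.5
```

The appeal of a memristive synapse is that this update happens for free in the device physics, without a separate memory read-modify-write cycle.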
Quantum computer machine learning
The global optimization approach using the quadratic objective function yields the best results. The accuracy is only increased by less than 10% relative to AdaBoost, but this is accomplished with a reduction of more than 50% in the number of switched-on weak classifiers. This improvement is obtained even before using the quantum computing devices.
The competitive performance of bit-constrained classifiers suggests that training benefits from being treated as an integer program. This has a twofold implication. First, this is good news for hardware-constrained implementations such as cell phones, sensor networks, or early quantum chips with small numbers of qubits. Second, this renders the training problem manifestly NP-hard, thus further motivating the application of quantum algorithms that may generate better approximate solutions than are classically available.

Our next steps will be to investigate the advantages that global optimization with AQC hardware offers for our problem instances. We plan to use the next generation of D-Wave chips with 128 qubits. This will involve adjusting our implementation to additional engineering constraints of the existing AQC hardware, such as a sparse connectivity graph among the qubits. Employing AQC during the training phase has the significant benefit that once the optimal set of weights is computed, those weights can be utilized by an entirely classical processor.

In this work we only considered fixed dictionaries of weak classifiers. An important generalization that remains to be studied is to apply this framework to adaptive dictionaries. We conclude with the remark that our finding that bit-constrained learning has good generalization properties may have implications for studying plasticity in the nervous system, where it is still an unresolved problem how a synapse can store information reliably over a long period of time.
Video of adiabatic quantum computer for binary classification.
Quantum computers and memristors could blow past the performance of conventional transistors.
Technology-based pattern recognition and categorization could rapidly become many orders of magnitude better than human capabilities and far beyond current levels. Integration into a self-improving AGI might remain challenging even with these new capabilities, but by themselves super pattern recognition and superfast categorization would have a large impact. They would be powerful tools for accelerating scientific and technological progress, and could become part of daily use once costs come down. Further progress in nanotechnology (even short of full molecular nanotechnology) could bring costs down and increase capabilities.