Question: You have a plan to build an "optimal scientist". What do you mean by that?
Answer: An optimal scientist excels at exploring and then better understanding the world and what can be done in it. Human scientists are suboptimal and limited in many ways. I’d like to build an artificial one smarter than myself (my colleagues claim that should be easy) who will then build an even smarter one, and so on. This seems to be the most efficient way of using and multiplying my own little bit of creativity.
Our Formal Theory of Curiosity & Creativity & Fun already specifies a theoretically optimal, mathematically rigorous method for learning a repertoire of problem solving skills that serve to acquire information about an initially unknown environment. But to improve our current artificial scientists (and artists) we still need to find practically optimal ways of dealing with a finite amount of available computational power.
Excerpts from Jürgen Schmidhuber's hilarious talk at the Singularity Summit 2009 (NYC), on his Algorithmic Theory of Beauty & Curiosity & Creativity which explains Science & Art & Humor, and on Optimal Universal Problem Solvers & Gödel Machines & Artificial Intelligence as a Formal Science.
Overview sites with more information and scientific papers:
Theory of curiosity & creativity - how to build artificial scientists and artists:
Optimal Universal Artificial Intelligence
Source code of machine learning algorithms
Home page for Juergen Schmidhuber
Formal Theory of Fun & Creativity: Banquet Talk in the historic Palau de la Música Catalana for the Joint Conferences ECML / PKDD 2010, Barcelona, at videolectures.net
Question: You believe that by 2028 computers will have computing power equivalent to that of a human brain. Can a sufficiently powerful digital computer mimic all of the processes and activities of a brain?
Answer: It would be surprising if such a computer could not mimic all of the brain's processes, since there is no evidence that neurons engage in activities that cannot be mimicked by digital logic processes.
Question: Some AI critics claim that the brain's processes are not suited to classical digital computation.
Answer: There is simply no evidence to support such claims. All available evidence indicates that pattern recognition, planning, and reward maximization through decision making are computable processes, given sufficient computational power.
Question: What is the "New AI" developed at the Swiss AI Lab IDSIA?
Answer: Most traditional artificial intelligence (AI) systems of the past decades are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal algorithms for prediction, search, inductive inference based on Occam's razor, general problem solving, universal decision making, and reward optimization for agents embedded in unknown environments of a very general type. That’s the New AI: AI as a Formal Science. Heuristics come and go – theorems are for eternity. More: http://www.idsia.ch/~juergen/ai.html
Question: Traditional neural networks have serious limitations. To what extent can recurrent neural networks (RNNs) overcome these limitations?
Answer: Traditional neural networks, also known as feedforward neural networks, are the simplest type of neural network: information flows in only one direction, forward. The human brain, however, is a recurrent neural net (RNN): a network of neurons with feedback connections, essentially a general computer. It can learn many behaviors / sequence processing tasks / algorithms / programs that are not learnable by traditional machine learning methods. These capabilities explain the rapidly growing interest in artificial RNNs for technical applications: general computers which can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than feedforward networks and other adaptive approaches. Our “Long Short-Term Memory” RNNs have recently given state-of-the-art results in time series prediction, adaptive robotics and control, connected handwriting recognition http://www.idsia.ch/~juergen/handwriting.html , and other sequence learning problems. More: http://www.idsia.ch/~juergen/rnn.html
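The feedforward/recurrent distinction above can be sketched in a few lines of code. This is a toy tanh-RNN with arbitrary random weights, purely illustrative; it is not the Long Short-Term Memory architecture mentioned in the answer, and no training is shown.

```python
import numpy as np

# Toy illustration of the difference between a feedforward step (no
# memory) and a recurrent step (hidden state carries the history).
# All sizes and weights are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W_in = rng.standard_normal((n_hidden, n_in)) * 0.5
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.5

def feedforward_step(x):
    # Output depends only on the current input x.
    return np.tanh(W_in @ x)

def rnn_run(sequence):
    # Feedback connections: the hidden state h summarizes the whole
    # input history, so the net can map sequences to sequences.
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(h.copy())
    return outputs

seq = [rng.standard_normal(n_in) for _ in range(5)]
outs = rnn_run(seq)
```

Feeding the same input vector twice gives the same feedforward output both times, but different recurrent hidden states, because the second step also sees the state left behind by the first.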
Question: So our brains are also RNNs?
Answer: Yes, although we do not understand all their details, and currently they are still clearly more complex than the artificial RNNs we are using. A human brain incorporates about 100 trillion synapses, which presumably are trainable parameters. Our current artificial RNNs only have about half a million such parameters. We are constrained by the limitations of current hardware. But every decade our hardware capabilities increase by a factor of 100-1000. That is, within a couple of decades we should have artificial RNNs whose computational power exceeds that of human brains.
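The scaling argument above is easy to check on the back of an envelope, using only the figures given in the answer (10^14 synapses, roughly 5 * 10^5 RNN parameters, a hardware factor of 100-1000 per decade):

```python
import math

# Back-of-envelope check of the scaling claim, using only figures
# quoted in the text. Parameter count is a crude proxy for
# "computational power"; this ignores algorithmic differences.

brain_synapses = 1e14
rnn_parameters = 5e5
gap = brain_synapses / rnn_parameters  # factor of 2e8 to close

# Number of decades to close the gap at each growth rate:
decades_optimistic = math.log(gap, 1000)   # ~2.8 decades at 1000x/decade
decades_conservative = math.log(gap, 100)  # ~4.2 decades at 100x/decade
```

At the optimistic end of the quoted growth range this lands near "a couple of decades", consistent with the answer; at the conservative end it stretches to roughly four.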
Question: You oversee the CogBotLab in Munich and the IDSIA Robot Lab. How are they different from other robot labs?
Answer: They focus on robots that learn. By contrast, many other robotics labs focus on pre-programmed robots that solve clearly defined practical tasks but do not learn from trial and error and other types of experience.
Question: Speaking of robotics, how important is embodiment for AGI learning?
Answer: It is essential. The general problem of AI is about embedded agents capable of interacting with their environment: robots. IDSIA’s recent optimality results for agents embedded in initially unknown worlds precisely address this general case.
Question: How close are we to implementing a Gödel machine for a learning robot?
Answer: The Gödel machine formalizes Good's informal remarks (1965) on an "intelligence explosion" through self-improving "super-intelligences". It is a self-referential universal problem solver that interacts with its environment and simultaneously searches for a program that can rewrite its own software in a theoretically optimal way. But it must first find a mathematical proof that the rewrite will indeed improve its performance, given some user-defined performance measure defining the goal to be achieved. (We may initialize the Gödel machine by my former postdoc Hutter's asymptotically fastest algorithm for all well-defined problems, such that it will be at least asymptotically optimal even before the first self-rewrite.) Currently one of my postdocs at IDSIA is working on a first Gödel machine implementation. How long will it take to transfer this type of research to a real robot? I hesitate to make bold predictions – let’s proceed incrementally.
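The key policy described above (a self-rewrite is adopted only once it is certified to improve a user-defined performance measure) can be caricatured in a few lines. This toy is emphatically not a Gödel machine: where the real construction demands a machine-verified proof of improvement, the sketch substitutes exhaustive evaluation on a finite problem set, which only works because the set is finite. The problem, solvers, and measure are all invented for illustration.

```python
# Toy stand-in for proof-gated self-improvement: a candidate rewrite
# of the solver replaces the current one only if a check certifies
# improvement under a user-defined performance measure. A real Gödel
# machine requires a proof; here exhaustive evaluation suffices
# because the problem set is finite.

problems = list(range(1, 20))

def is_prime(n):
    # Trusted reference used by the performance measure.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def performance(solver):
    # User-defined measure: number of problems answered correctly.
    return sum(solver(n) == is_prime(n) for n in problems)

def naive_solver(n):
    # Initial software: always guesses "not prime".
    return False

def better_solver(n):
    # Candidate self-rewrite: correct (slow) trial division.
    return n > 1 and all(n % d for d in range(2, n))

current = naive_solver
for candidate in [better_solver]:
    # The "proof obligation": adopt only with certified improvement.
    if performance(candidate) > performance(current):
        current = candidate
```

After the loop, the machine runs the rewritten solver, which answers every problem in the set correctly.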
Question: How do you understand sentience?
Answer: Consciousness and sentience may be viewed as simple by-products of problem solving and data compression. As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and compressing the data histories we are observing. If the predictor / compressor is an artificial RNN, it will create feature hierarchies, lower-level neurons corresponding to simple feature detectors similar to those found in human brains, higher-layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings or “symbols” for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. Self-consciousness may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e.g., a neural activity pattern) representing itself. Whenever this representation is actively used, say, by activating the corresponding neurons through new incoming sensory inputs or otherwise, the agent could be called self-aware or conscious. No need to see this as a mysterious process – it’s just a natural by-product of compressing the observation history by efficiently encoding frequent observations.
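The compression-progress idea underlying this answer (and the Formal Theory of Fun & Creativity mentioned earlier) can be illustrated with a deliberately tiny predictor. The intrinsic "fun" reward at each step is the *improvement* of the predictor on the data, not the raw prediction error. The running-mean predictor and the data stream below are illustrative choices, not part of the formal theory.

```python
# Sketch of compression progress as intrinsic reward: reward each
# observation by how much the adaptive predictor improved on it,
# i.e., error before learning minus error after learning.
# A running mean stands in for the predictor/compressor.

def curiosity_rewards(stream):
    mean, n = 0.0, 0
    rewards = []
    for x in stream:
        before = abs(x - mean)          # prediction error before learning
        n += 1
        mean += (x - mean) / n          # predictor adapts to the data
        after = abs(x - mean)           # prediction error after learning
        rewards.append(before - after)  # compression progress = reward
    return rewards

# A perfectly regular stream is interesting exactly once: the first
# observation teaches the predictor everything, after which there is
# no further progress and the reward drops to zero (boredom).
boring = curiosity_rewards([5.0] * 10)
```

Note the qualitative behavior: neither fully predictable data (no progress left) nor pure noise (no learnable regularity) yields sustained reward; only data whose regularities are still being discovered does.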
Question: Once a Gödel Machine is operational, how long before superintelligence emerges?
Answer: I would personally be quite surprised if it took more than a few decades from now for superintelligences to emerge. We should have the necessary computing power to match the human brain within a few decades or so. Will the most appropriate self-improving software lag far behind? I don’t think so. But I’ve been wrong before.