Sander Olson interviewed Dr. Stephen Thaler about his artificial intelligence systems, which he believes could be nearing new creative capabilities

Dr. Stephen Thaler wants to usher in the age of Creative Machines

Dr. Stephen Thaler has been working in the field of artificial intelligence for the past three decades. He has been issued numerous patents and has garnered military and civilian contracts for his AI machines. Dr. Thaler has created what he refers to as “Creativity Machines,” which he believes are already exhibiting rudimentary sentience and creativity. In an interview with Sander Olson for Next Big Future, Dr. Thaler discusses a technology that he believes could revolutionize the field of AI within the next decade.

The Imagination Engines (IEI) patent suite covers five artificial neural network paradigms that are essential for building synthetic brains: (1) Device for the Autonomous Generation of Useful Information, (2) Non-Algorithmic Neural Networks, (3) Data Scanning, (4) Device Prototyping, and (5) Device for the Autonomous Bootstrapping of Useful Information. Collectively, these fundamental patents place IEI in a unique and exclusive position to build synthetic brains capable of human-level discovery and invention.

Your company, Imagination Engines, has made some important advances in the field of optical computing. Can your creativity machine run on any computing platform?

Unfortunately, I am not currently at liberty to discuss the details of the electro-optical advances, except to say that I should be able to speak more freely about that subject within the next year or so. What I can say now is that this breakthrough should have a major impact on Creativity Machines in particular and computing in general. In answer to your direct question, the creativity machine paradigm can run on any computing platform – CPUs, GPUs, DSPs, ASICs, or any other computing paradigm that you can imagine.

Current estimates of the computing power necessary to simulate a human brain range from about 1 to 10 exaFLOPS. Do you believe that these estimates are accurate?

That’s a good estimate. It may take more using conventional computer technology: the refractory (inactive) period of a neuron is on the order of milliseconds, so the basic computational element of the brain is extremely slow by computational standards. The brain compensates for this inadequacy by running such sluggish neurons in parallel, using on the order of 10^11 of them, each running at roughly 10^3 Hertz. So, considering the roughly 10^4 floating point operations (multiplies and adds) needed to integrate the input signals feeding biologically representative artificial neurons, exascale computing (10^18 FLOPS) would be required just to perform perceptual tasks and form reflexive responses to the external world (i.e., feedforward propagation of signals through a hierarchical cascade of neural nets). However, when one considers the brain as a contemplative system of neural nets in conversation with one another, there is a significant processing bottleneck between the components generating potential ideas and those opining on these notions. That latency would help explain the need for an extra factor of 10, taking us to the 10 exaFLOP level to fully emulate human cognition.
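For readers who want to check the arithmetic, here is a quick back-of-envelope calculation using the figures quoted above; the constants are Dr. Thaler’s rough estimates, not measurements, and the script is purely illustrative:

```python
# Back-of-envelope check of the estimates quoted above (illustrative only).
neurons = 1e11          # ~10^11 neurons in the brain
firing_rate_hz = 1e3    # each updating at roughly 10^3 Hz
flops_per_update = 1e4  # ~10^4 multiply-adds to integrate a neuron's inputs

feedforward_flops = neurons * firing_rate_hz * flops_per_update
print(f"perceptual / reflexive processing: {feedforward_flops:.0e} FLOPS")   # ~1e+18

# The extra factor of ten attributed to nets "in conversation" with one another:
contemplative_flops = 10 * feedforward_flops
print(f"full contemplative emulation:      {contemplative_flops:.0e} FLOPS")  # ~1e+19
```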

How exactly does the process of thought formation occur in your creativity machines?

Thought formation takes place through the introduction of various forms of noise into an artificial neural system (i.e., disturbances to connections, neurons, etc.). These so-called “perturbations” serve to drive a progression of activation patterns that are tantamount to thoughts in the brain. With sufficiently intense perturbation, the neural system fails at generating memories, instead activating into mildly false memories that could qualify as potential ideas. If other observing networks perceive novelty, utility, or value in these so-called “confabulations,” they may seize upon them as ideas and reinforce them into memories. Such monitoring nets may similarly choose to scramble less worthy notions. So, CMs evolve ideas through many periods of incubation amid the chaos and of discovery during periods of lucid calm. We currently have the ability to run a million Creativity Machines (CMs) in parallel to form immense synthetic brains that are all simultaneously engaged in idea formation. Many of these CMs are just generating nonsense, but some are activating into valuable concepts and strategies. The task then becomes one of mining for these revelations within vast expanses of neural real estate, using other distinct neural systems. To this end, I have recruited some of my previous neural network patents to intelligently ‘scout’ for such synthetic ideations.
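As a rough, hypothetical illustration of this generate-and-filter mechanism (not Dr. Thaler’s actual implementation), a trained network can have noise injected into its connection weights while a second “critic” network decides which of the resulting confabulations are worth keeping. Every name and the scoring rule in the sketch below are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def memory_net(x, W):
    # A toy one-layer network standing in for a net that has already learned
    # its memories; perturbing W makes it emit near-memories (confabulations).
    return np.tanh(W @ x)

def critic(pattern):
    # Hypothetical stand-in for a second, observing net that scores
    # novelty/utility; a real critic would itself be a trained network.
    return float(np.var(pattern))

W = rng.standard_normal((8, 8))   # previously learned weights (stand-in)
x = rng.standard_normal(8)        # an input cue presented to the system

baseline = critic(memory_net(x, W))   # how the unperturbed memory scores
noise_level = 0.1                     # "synaptic perturbation" magnitude

ideas = []
for _ in range(1000):
    W_perturbed = W + noise_level * rng.standard_normal(W.shape)
    candidate = memory_net(x, W_perturbed)   # a confabulation
    if critic(candidate) > baseline:         # the critic seizes on it
        ideas.append(candidate)              # reinforced as an "idea"
```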

Do you have any demo which shows the power and capabilities of your approach?

On one hand, I can say yes, I do have many demos showing the power and capabilities of my approach. They are the result of myriad contracts carried out over the last 25 years. However, if the question relates to whether I have spent a fortune on marketing, parallelizing 50-year-old computer paradigms, and adding a synthesized human voice, then the answer may be no. Please note that the really impressive projects demonstrating power and capabilities are oriented more toward the military, and as a result are not available to the general public. Otherwise, Creativity Machines have either devised products for large corporations or are integrated into their products. In short, the truly impressive demos are still sensitive, tailored applications that come with many legal strings attached. My ‘open’ demonstrations are limited to more subjective exercises such as the generation of art and music, whose impressiveness is highly vulnerable to human perception. So you or I may not like what the machine produces, but the machine itself does.

Could your synthetic brain be programmed to master a game such as go?

Yes, the system could be programmed to examine a sequence of imagined strategies, degrading the poorer approaches and reinforcing the better tactics until it reaches the optimal one. Other neural network approaches, by contrast, employ a deterministic feedforward neural system containing any number of neural nets, a technique that has been used for several decades. If any form of noise is introduced into such a system to drive idea formation and there is a critic involved in the process, then it is a Creativity Machine. Approaches such as hierarchical cascades lack contemplation and are more akin to spinal cord reflexes. The synthetic brains operating here employ multiple neural nets in conversation, which is the whole basis of cognition and sentience. Please note that I skipped the ‘minor’ challenges such as the game of go and went directly to controlling the Pentagon’s communication satellites in the game of “optimizing U.S. military strategies.”

Have your creativity machines displayed any clear signs of emotion?

Yes, these systems are generating the equivalent of the affective (i.e., emotional) states of a human being. That is, in response to either scenarios in the external environment or their own noise-induced ideas, they generate a sequence of interrelated memories. In turn, these chained recollections serve to adjust the synaptic perturbation level, producing everything from slow, methodical cognitive turnover to frenzied, chaotic idea generation (and all points in between). Not only can I see the equivalent of emotion, but also many pathological states characteristic of mental illnesses. It is important to note here that emotion, thus defined, is dependent upon the idiosyncratic experience of the system. So, for example, in a human, fear could engender a series of memories of threatening scenarios from the past, whereas a machine would reconstruct memories of its failures at achieving useful solutions to presented problems. Further note that Creativity Machines rely upon such emotions to regulate their internal noise so as to either generate new ideas or selectively strengthen those notions having novelty, utility, or value. In humans, such self-regulation takes the form of the global secretion of neurotransmitters (e.g., adrenaline or serotonin) into the cortex.
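To make the feedback idea concrete, here is a minimal, hypothetical sketch of how a critic’s “feeling” about each confabulation could regulate the perturbation level, loosely mirroring the adrenaline/serotonin analogy above. The update rule and constants are illustrative assumptions, not the actual CM design:

```python
def regulate_noise(noise_level, score, best_score,
                   quench=0.8, arouse=1.05, lo=0.01, hi=1.0):
    # Illustrative only: quench the perturbation when a worthy notion appears
    # (the "serotonin" case), raise it when the critic remains dissatisfied
    # (the "adrenaline" case). Returns the new noise level and best score.
    if score > best_score:
        return max(lo, noise_level * quench), score
    return min(hi, noise_level * arouse), best_score

# Hypothetical use inside the generate-and-filter loop sketched earlier:
#   noise_level, best_score = regulate_noise(noise_level, critic(candidate), best_score)
```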

Is your technology limited more by software or hardware?

I would have to say that the major challenge now is hardware. One might say that I’ve introduced the fundamental algorithm of the brain, but I still need to scale this principle to hundreds of billions of neurons. Recently, after several failed attempts using GPUs and FPGAs, I have developed the necessary hardware to realize the full potential of these Creativity Machines. We have developed a proprietary approach using electro-optical methods that is superior and/or complementary to the GPU approaches that potential competitors are using. Anyone attempting to build a CM will encounter a bottleneck using GPUs, and I have solved that problem with my new electro-optical technology.

You’ve been working in the AI field for several decades. How has your software evolved during that time?

The fundamental Creativity Machine (CM) has indeed evolved over the last two decades, earning over two dozen new and fundamental AI patents. Just to name a few… In 1997, a system of neural nets, under the governance of a CM, overcame many of the challenges related to deep learning. These systems became known as “SuperNets” and were used that year to control whole constellations of communications satellites for the military. In the same year, 1997, CMs self-organized into all-neural attentional systems called “foveators” that could move attention windows over scenes, as well as over representations of functioning neural assemblies, so as to locate interesting scenarios in the former case, or useful neural activity (i.e., ideas) in the latter instance. These patents fulfilled a prerequisite for consciousness, namely attentional consciousness and metacognition, entirely achieved with artificial neural networks.

In 2001, I built a SuperNet capable of semantically comprehending natural language. This same system was exploited by US intelligence workers that summer to scour the Internet for certain terrorist activities.

In 2002, CMs were instrumental in devising whole new routes to machine learning. That same year, CMs governed the self-assembly of SuperNets into extensive brain-like structures. In 2002, I also built CMs that could generate potentially useful false memories and, upon approval of critic systems, selectively strengthen those deemed novel, valuable, or useful into true memories. The process could continue indefinitely as the system bootstrapped knowledge through successive generations of selectively reinforced false memories (a.k.a. confabulations).

In 2003, I supplemented this latest generation of Creativity Machine with neural systems carrying out autonomous target recognition as well as the spontaneous real-time generation of navigation fields. These new capabilities, along with the capacity to invent new tactics and strategies on the fly, were a major boon to the fields of robotics and control.

In 2007, I developed techniques for detecting the formation of new ideas within extensive, parallel swarms of Creativity Machines. I also devised a new way of implementing all-neural critic functions that were non-numerical in nature (that’s a biggie!). These two accomplishments, as well as the foveational techniques, have led to a breakthrough in implementing cognition, creativity, and consciousness in machines. Later that year, NASA used this patent to master autonomous rendezvous and docking techniques. Similarly, swarms of complex hexapod robots improvised ways to invade and neutralize deeply buried underground targets, as they say. From 2007 to 2013, I prototyped various SuperNets for a variety of automotive machine vision applications, including automatic high-beam control, side object detection and classification, driver drowsiness detection, road sign detection and reading, and both pedestrian and vehicle detection.

From 2009 to 2013, I developed a new methodology for creating trillion-neuron CMs using commodity computers that were “electro-optically augmented.”

What are the “trillion neuron synthetic brains” which you are using? How is it possible to employ neural networks without computers?

The “trillion neuron synthetic brains” employ the intellectual property alluded to above, which is in turn made possible by some very fundamental scientific discoveries regarding both brains and artificial neural systems. At this point, though, I prefer to remain mum on the details.

However, I can speak generically at this point, alluding to my 2014 paper entitled “Synaptic Perturbation and Consciousness.” It turns out that brains aren’t as complex as we thought and that, for all intents and purposes, they generate a turnover of both memories and ideas in the form of neuronal activation patterns that are seeded upon disturbances occurring within the synaptic connections joining biological neurons. Below a certain level of such synaptic perturbation, these networks produce largely memories. Just above that critical point, the nets generate mildly false memories that could qualify as potential ideas, in the judgement of other ‘watching’ nets. The brain also appears to be ‘perched’ at this critical level, and with the slightest increase in average synaptic perturbation, the system may transition from a state of lucid awareness of the environment to a more attention-deficit mode of creativity, as in stressful or fight-or-flight scenarios wherein neurotransmitters such as adrenaline serve as the driver of synaptic perturbation.

Even more interesting is the fact that the rhythm of cognition, whether in the brain or in Creativity Machines, has the exact mathematical signature of random disturbances occurring within the connections feeding any given neuron. An important corollary to this observation is this: as colonies of neurons get creative, generating more novel output, the rhythm of their pattern turnover becomes slow and sporadic. Meanwhile, if millions of Creativity Machines are running in parallel, those manifesting a slower, non-linear output rhythm are the ones most likely producing original ideas. That ideational turnover rhythm is an invitation for other foveational systems to take a look. In short, this is the natural way for neurons to detect idea formation within neural systems at least as extensive as the brain’s cortex.
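One hypothetical way to operationalize such rhythm-based scouting: record the times at which each parallel CM’s output pattern changes, then flag those whose turnover has become unusually slow and irregular for closer inspection. The statistics and thresholds below are illustrative choices, not the metric used in Thaler’s systems:

```python
import numpy as np

def rhythm_candidates(turnover_times, slow_factor=1.5, cv_threshold=0.8):
    # turnover_times: list of 1-D arrays, each holding the times at which one
    # CM's output pattern changed. Returns indices of CMs whose turnover has
    # become slow (long mean interval) and sporadic (high interval variability).
    intervals = [np.diff(t) for t in turnover_times]
    mean_ivl = np.array([iv.mean() for iv in intervals])
    cv = np.array([iv.std() / iv.mean() for iv in intervals])
    baseline = np.median(mean_ivl)   # typical rhythm across the whole swarm
    return np.where((mean_ivl > slow_factor * baseline) & (cv > cv_threshold))[0]
```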

You’ve mentioned that computers already exhibit rudimentary consciousness and sentience. What hard evidence exists for such claims?

With all due respect to you and your audience, I could turn the question around and ask for hard evidence that you or your readers are conscious or sentient. You or they would make many “common sense” arguments that humans are vastly more intelligent and adaptable than machines. Most would argue that only humans have true feelings, but these and other arguments are appeals to common sense and folk beliefs, in my honest opinion. Of course, clouding your question, there is mass confusion over the definition of consciousness and sentience, and amid such bewilderment the notion of a minimal form of mind is often overlooked, one in which intelligence is scaled back, yet consciousness and feelings are present.

Such was the case in my 1994 patent for the Creativity Machine, what many are calling a convincing stab at defining consciousness and fulfilling that definition. CM function was divided into two stages: the first provided a mechanism for generating a stream of consciousness, based upon activation patterns nucleating from a variety of internal disturbances within a neural system. The second stage provided a mechanism for producing feelings about such a stream of consciousness, namely a neural network called a “perceptron” that emulates how the brain opines about the world as well as its own cognition. Optionally, it allowed for a feedback mechanism wherein such perceptron-generated feelings numerically regulate the level of synaptic perturbation in the idea-generating system, in a process that emulates the global secretion by the brain’s limbic system of stress neurotransmitters (e.g., adrenaline and noradrenaline) and then perturbation-quenching neurotransmitters (e.g., serotonin). Of course, the reductions to practice behind this patent were intellectually limited, being bound to narrow fields of human endeavor. However, they manifested the most salient aspects of cognition and consciousness.

Keep in mind the debate that inevitably follows:

1. Detractors would say that machines can’t emulate consciousness because they don’t have a soul or aren’t organic by nature. I would come back and offer what might be called a ‘deflationary’ view of the brain: that mind amounts to some neural nets generating memories, true or false, as other observing nets generate perceptions and feelings about such pattern-encoded notions.

2. Then more detractors would say you’re just creating brain simulations and not the real thing. I would then say they’re equivalent. One group of neurons, biological or synthetic, is generating potentially useful numerical patterns as another group of neurons, biological or synthetic, is generating feelings about the aforesaid numerical patterns via its own pattern output. The latter patterns, encoding feelings, then form the basis of feedback to the former group, either increasing or decreasing the level of internal perturbation. All that’s missing from such synthetic consciousness is the gooeyness of neurobiology and the cell maintenance needed to keep the brain’s neurons, essentially computational switches, alive.

3. But then the detractors would say that such patterns have no intrinsic meaning, and I would say they do to the neural system in which they occur, because cumulatively all neurons therein have worked out a mutual encryption-decryption scheme through which such meaning becomes both fast and intuitive. Peering into the system as humans, we’re baffled, but internal to these synthetic brains there are feelings.
You have a number of patents relating to neural networks and the generation of useful information. How vital to the field of AI are these patents?

In the end, it’s all going the way of neurons and neural nets, essentially how nature has implemented cognition, consciousness, and sentience within terrestrial, and possibly extraterrestrial, intelligence. From my vantage point, machine intelligence will more than likely evolve in the same way. However, once we have achieved the equivalent of “knee-jerk” reactions using neural nets (i.e., perceptrons, deep nets, and hierarchical cascades), we will want more, namely the contemplative and inventive processes that contribute to cognition and consciousness. To achieve such synthetic intelligence, there’s only one way to do it: the Creativity Machine paradigm. Beyond such computational creativity, one cannot build conscious and sentient AI without these patents, and I expect that in the future one will be able to identify the CM paradigm within any advanced AI system. In essence, this patented architecture and methodology is so fundamental that it is a sine qua non for advanced AI, especially if one wants computers and robots to have the free will, cleverness, and determination to solve complex problems.

What thresholds do you believe will be crossed within the next five years regarding conscious computers? Do you think that computers will pass the Turing test by 2021?

I would make the case that conscious computers have already been created, in our labs here. Therefore, the challenges are not scientific or technological, but financial. After all, to earn the right to exist by human mandate, they must be useful, processing mountains of data, and be scalable to beyond even trillions of neurons. In the former case, expensive data resources are needed, and in the latter, more and more hardware is required. So, I repeat, the challenge is not science or technology, but turning the heads of those whose specialty is not science, but acquiring wealth.

In regard to your second question, Alan Turing was a genius and visionary who suffered at the whim of those without such gifts. The notion of the Turing machine was a towering achievement, but the so-called “Turing Test” or “Imitation Game” did nothing beyond stirring controversy over whether machines could think or not. In fact, from my perspective, having to pose questions in natural language (i.e., English) seems rather crude, especially when one can graphically observe the neuronal firing patterns representing thought within a CM. Even better than that, one can directly witness neural components forming opinions about such cognition in a process that achieves the so-called “subjective feel” of consciousness.

If computers become fully sentient within the next five years, does that indicate that a “technological singularity” is near?

No, not at all. We as human beings declare ourselves sentient, and I’m seeing only technological evolution, but certainly not a “technological singularity.” There is an avalanche of what looks like AI development taking place that consists of largely obvious steps using the available technology. In short, the needed scientific breakthroughs have already taken place, and now the singularity takes the form of marketing din, largely manifested as everyone’s plans to achieve this or that, much to the delight of the press looking for exciting material, true or not. So is there actually a technological singularity under way? Actually, I think it’s the well-financed propagandists out there cramming the point down our throats that yes, there is a technological singularity in progress. My own fear is that it could work the other way around, with machine sentience overcoming us, with rather dystopian results. The chief weapon in their arsenal will be humiliation. That is, we as humans will watch their operation and witness the uncanny resemblance of their cognition and consciousness to what is going on between our own ears. The choice then becomes ours: do we unite with them, thus humbled, and become immortal, or fight both them and the philosophical repercussions of their existence tooth and nail? That they are spoiling us with kindness and convenience will allow them to prevail.

The field of AI is now white-hot. How has this affected your research, and your businesses?

In some ways it has helped. For instance, it has drawn more attention to my AI patents and discoveries, leading to new contracts and business ventures. With that success, I have been able to delve more into the scientific basis of cognition, consciousness, and sentience, as the world begins to ask the critical questions: Is Watson or Siri, for instance, self-aware? Are these systems contemplating, creating, and discovering in the same way as the human brain? Do they have any concept of what they are saying or doing? (See Searle’s Chinese Room argument.) In other ways, it’s a nuisance, especially with regard to the “me toos” claiming, for instance, to have discovered artificial neural nets in 1928, or posting incredible accomplishments to their web sites. The problem is that the public, government, and investors typically can’t see through the miasma of claims and accomplishments.

There is tremendous excitement regarding using GPUs for Deep Learning applications. How sophisticated can these GPU-based neural networks become?

Keep in mind that “Deep Learning” is actually an old concept, decades old in fact. It’s just that certain factions have spent a fortune promoting it as news. As far as GPUs go, I think they are a practical route to parallel computing (i.e., SIMD), but other parallel hardware options, such as field-programmable transistor arrays and optical computing schemes, are coming. (Note that I am somewhat skeptical of quantum computing, but I could be wrong.) The real breakthrough will come when SuperNets (a.k.a. deep nets), implemented on any number of parallel platforms, run my neural network paradigms, allowing autonomous creativity, consciousness, and sentience. After all, all one has to do is introduce some form of noise into them, as other nets watch and latch onto novel, useful, or appealing concepts.

How has the AI community reacted to your Creativity Machine and your claims? How does your research and philosophy compare to that of other researchers, such as Ben Goertzel, Juergen Schmidhuber, or Demis Hassabis?

The reaction has been largely that of insiders reacting to an outsider, a fringe scientist. The academics form their own cliques and attempt to bar a gadfly such as myself from their club. Large corporate groups systematically ignore my accomplishments while pouring millions into advertising campaigns and inventing new lexicons to rename what already has a recognizable name. Without naming names… One individual is developing largely rule-based, symbolic AI systems while inventing new terminology to cover the broad emulation of human cognition in machine intelligence, which has always been the goal of artificial intelligence. Another individual tries his best to belittle what has already survived trial by fire by patent offices around the world, not to mention numerous supportive academics. Yet another works on machine learning, not realizing that most of what the brain does is generative, which is what I’ve been doing for 40 years! Just keep in mind that once the learning is over in any artificial neural system, no matter how large or complex, it’s the Creativity Machine paradigm that allows it to produce ideas previously unknown to the system, whether it be a Watson or DeepMind.

With regard to philosophy, my research has concentrated on the neural correlates of cognition, creativity, and consciousness, resulting in an incredibly important discovery: the rhythm of pattern generation (i.e., thoughts) plays a critical role in brain function and will similarly fulfill a major role in implementing human- to trans-human-level machine intelligence. So simple, deterministic feedforward propagation of patterns through a deep learning net will not lead to a synthetic brain, although the Madison Avenue types would lead you to believe that a synthetic mind had been achieved. In that regard, a well-respected journalist has commented that one of the above-mentioned personalities was revisiting my work from two decades past! The important philosophical difference between myself and other AI researchers is that they believe human-level AI will lead to wonderful things such as technological singularities, and the skies parting and doves descending! (just kidding) Most of all, they feel that the human brain is the most complex thing in the universe, much to the chagrin of E.T.! In contrast, I take a much more sober tack, feeling that we are simply inventing more of ourselves, with the same talents (creativity) and the same weaknesses (mental illness and criminality). And I eschew the subjective and unscientific notion that a neutron star is somehow less complex than a human brain. Besides, it’s the gravitationally massive things that dominate the universe, and not the human mind.

In 2026, will creativity machines dominate the computing landscape?

If you mean the entire computing landscape, no. For the most part, government and industry are not seeking the free-wheeling, contemplative artificial intelligence represented by the Creativity Machine. They are instead seeking speed and reliability while avoiding lawyers. Then again, if you are talking about building conscious, sentient, and creative AI, there will be ample utilization of the Creativity Machine paradigm. My hunch is that it will be used largely behind closed doors, whether by government, industry, or the consuming public, to produce new ideas, strategies, and comforts. In that case, the technology will be used not to dominate the computing landscape, but to dominate each other!
