Pre-building capability for an AI system

The concept of innateness is rarely discussed in the context of artificial intelligence. When it is discussed, or hinted at, it is often in the context of trying to reduce the amount of innate machinery in a given system. Gary Marcus considers as a test case a recent series of papers by Silver et al. (2017a) on AlphaGo and its successors, which have been presented as an argument that “even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance”, “starting tabula rasa.”

Marcus argues that these claims are overstated, for multiple reasons. He closes by arguing that artificial intelligence needs greater attention to innateness, and he points to some proposals about what that innateness might look like.

One of the oldest debates in intellectual history revolves around the somewhat nebulous concept of innateness. How much of the human mind is built-in, and how much of it is constructed by experience?

Virtually all modern observers would concede that genes and experience work together; it is “nature and nurture”, not “nature versus nurture”. No nativist, for instance, would doubt that we are also born with specific biological machinery that allows us to learn. Chomsky’s Language Acquisition Device should be viewed precisely as an innate learning mechanism, and nativists such as Pinker, Peter Marler (Marler, 2004) and Marcus himself (Marcus, 2004) have frequently argued for a view in which a significant part of a creature’s innate armamentarium consists not of specific knowledge but of learning mechanisms, a form of innateness that enables learning.

There is ample reason to believe that humans and many other creatures are born with significant amounts of innate machinery. The guiding question of Marcus’s paper is whether artificially intelligent systems ought similarly to be endowed with significant amounts of innate machinery, or whether, in virtue of the powerful learning systems that have recently been developed, it might suffice for such systems to work in a more bottom-up, tabula rasa fashion.

Gary Marcus’s list of innate machinery candidates

• Representations of objects
• Structured, algebraic representations
• Operations over variables
• A type-token distinction
• A capacity to represent sets, locations, paths, trajectories, obstacles and enduring individuals
• A way of representing the affordances of objects
• Spatiotemporal contiguity
• Causality
• Translational invariance (see the sketch after this list)
• Capacity for cost-benefit analysis
• A representation of time
• Intentionality (in the sense of inferring the intentions of others)
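
To make one item on this list concrete: translational invariance is the textbook case of a property that is typically wired into a network’s architecture rather than learned. The sketch below assumes PyTorch, and the layer sizes and inputs are illustrative, not taken from Marcus’s paper. Because a convolution applies the same weights at every position, shifting the input simply shifts the output, with no training involved.

```python
# Minimal sketch: translational equivariance as an innate architectural prior.
# Assumes PyTorch; layer sizes and inputs are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3,
                 padding=1, bias=False)   # same weights at every position

x = torch.randn(1, 1, 8, 8)                  # a random 8x8 "image"
x_shifted = torch.roll(x, shifts=2, dims=3)  # same image, moved 2 px right

y = conv(x)                 # note: conv is untrained, its weights are random
y_shifted = conv(x_shifted)

# Away from the borders, the response to the shifted image is exactly the
# shifted response to the original. Nothing was learned from data; the
# property is built into the architecture, which is what "innate" means here.
print(torch.allclose(y[..., 1:5], y_shifted[..., 3:7], atol=1e-6))  # True
```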

arXiv – Innateness, AlphaZero, and Artificial Intelligence, Gary Marcus (2018)

2 thoughts on “Pre-building capability for an AI system”

  1. The question is, where does an innate mechanism for learning end and where do learned heuristic behaviors that aid in further learning begin?

  2. It feels like nature shows us that you can make behaviors as innate as they need to be. How long it takes a human versus a deer to learn to walk and run comes to mind. To me that implies some arbitrary level of pre-structuring for the networks themselves, potentially even structured to the point of being effectively a hard copy of a partially trained neural network. If that is the case, an example for a robot might be that it gets a baseline pre-trained network for movement, just up to the point where it only has to do the final finessing of the irregularities in the actuators.

    Some things on his list sound absolute, like you’d make the best you could and it would be a rigid component utilized by other networks, thereby avoiding the cost of having that behavior naturally learned.
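
The frozen-core idea in this comment corresponds to standard transfer learning: ship the robot with a pre-trained movement network, freeze it as the rigid innate component, and train only a small head to finesse that particular robot’s actuators. Below is a minimal sketch assuming PyTorch; every name, shape, and data tensor here is hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of "innate" pre-training plus per-robot fine-tuning.
import torch
import torch.nn as nn

# Stand-in for the partially trained movement network shipped with every robot.
innate_motor_core = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
)
for p in innate_motor_core.parameters():
    p.requires_grad = False        # the rigid component: never updated

# The small learned part: calibrating 8 actuator commands for this one robot.
actuator_head = nn.Linear(64, 8)
optimizer = torch.optim.Adam(actuator_head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy calibration data standing in for sensor readings and target commands.
sensors = torch.randn(32, 16)
targets = torch.randn(32, 8)

for step in range(100):
    with torch.no_grad():          # the frozen core acts as a fixed feature map
        features = innate_motor_core(sensors)
    commands = actuator_head(features)
    loss = loss_fn(commands, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Only the head’s parameters move during this loop, so the cost of learning the core behavior is paid once, offline, rather than by every individual robot.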
