Some of the founders and leading lights in the fields of artificial intelligence and cognitive science called for a return to the style of research that marked the early years of the field, one driven more by curiosity than by narrow applications.
This will be a difficult thing to achieve because success with narrow applications can generate a lot of money.
Patrick Winston, director of MIT’s Artificial Intelligence Laboratory from 1972 to 1997, blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the “mechanistic balkanization” of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms.
Winston said he believes researchers should instead focus on those things that make humans distinct from other primates, or even what made them distinct from Neanderthals. Once researchers think they have identified the things that make humans unique, he said, they should develop computational models of these properties, implementing them in real systems so they can discover the gaps in their models, and refine them as needed. Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: “Once you have stories, you have the kind of creativity that makes the species different from any other.”
Sydney Brenner, who deciphered the three-letter DNA code with Francis Crick and teased out the complete neural structure of the worm C. elegans at the cellular level, said researchers should instead refocus on higher-level problems. He used the analogy of someone taking a picture with a smartphone: no one today would bother to give a transistor-level description of such an action; it’s much more useful to discuss the process in terms of higher-level subsystems and software.