All the pieces are finally in place for realizing the Enlightenment project of a rigorous natural science of human nature, thanks to the recent maturation and integration of evolutionary biology, computer science, cognitive experimentation, neuroscience, hunter-gatherer studies, and allied fields. The core of this project, what we have called evolutionary psychology, is the discovery and creation of high-resolution maps of the circuit logic of each of the hundreds of evolved programs that together make up the species-typical architecture of the human brain/mind (e.g., programs for exchange, object mechanics, alliance detection, vision, kin recognition, statistical induction, disease avoidance, etc.). Like any other rapidly progressing natural science, this field will generate many applications over the next several decades.

In particular, the success of this project will be pivotal for the scientific program of building a useful broad artificial intelligence that approaches or exceeds human intelligence. To begin with, for any such nonhuman intelligence to communicate with humans or respect human values, that is, for it to understand what we mean by a question or want by a request, it will have to be equipped with accurate models of the representational and motivational programs that inhabit human minds.

More importantly, evolutionary psychology holds the key to understanding and overcoming the scientific difficulties inherent in AI design. Since the first attempts to build artificial minds seventy years ago, computer scientists have felt on the verge of building a generally intelligent machine. Somehow this goal, like the horizon, keeps retreating as it is approached. However, the human cognitive architecture is a generally intelligent machine, indeed the only known broad intelligence in existence. As such, it offers us a working prototype of computational intelligence, to the extent that we work at mapping its native codes.
Reverse engineering the human computational architecture and uncovering its design principles provides key insights into how natural selection achieved broad computational intelligence and, if taken seriously, can unleash real progress in the strong-AI program.
Results from the first phase of evolutionary research already suggest a series of new approaches. For example, it has traditionally seemed self-evident to AI researchers that general intelligence must be achieved by algorithms that are general-purpose and operate across all contents. Yet the recurrent design principle emerging from this research is that the natural intelligences (inference engines) found in humans operate, with few exceptions, by being specialized. Natural selection breaks off small but biologically important fragments of the universe (predator-prey interactions, color, social exchange, physical causality, alliances, genetic kinship, etc.) and engineers distinct problem-solving methods for each, ending up with specialized logics that look bizarre by traditional standards but are fiendishly well-engineered to solve their respective problem types. Evolution tailors computational hacks that work brilliantly by exploiting relationships that exist only in its particular fragment of the universe (the geometry of parallax gives vision a depth cue; an infant nursed by your mother is your genetic sibling; two solid objects cannot occupy the same space). These specialized inferential systems are dramatically smarter than general reasoning because natural selection equipped them with radical shortcuts that bypass the endless possibilities and combinatorial explosion that general problem-solving methods choke on. They can be better than rational because they are not limited to using only those inferential strategies that can be applied to all problems.
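The parallax example can be made concrete. A minimal Python sketch (the camera parameters below are hypothetical, not from the talk) shows how a stereo-depth shortcut exploits a geometric regularity that holds only in its narrow fragment of the world, sidestepping general inference entirely:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a point from binocular disparity (pinhole stereo model).

    The regularity Z = f * B / d holds only for this fragment of the
    universe (two rigid viewpoints looking at the same static point),
    which is exactly what lets a specialized mechanism skip open-ended
    general reasoning about distance.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 6.5 cm baseline, 10 px disparity
print(round(depth_from_disparity(700.0, 0.065, 10.0), 2))  # 4.55 (meters)
```

The function is useless outside stereo vision, and that is the point: its power comes from assuming the domain's structure rather than deriving it.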
This suggests that an alternative and more fruitful road to engineering a broad artificial intelligence lies not in searching for algorithms that manifest general intelligence, but in aggregating, combining, and integrating specialized intelligences (supplemented with general computational strategies). This mosaic approach to AI becomes (more) general not by supercharging a single general algorithm but by adding specializations that, individually and especially in combination, cover more and more of the universe that is of interest to us.
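As a toy illustration of this mosaic approach (the class, domain names, and solvers below are invented for illustration, not taken from the talk), a Python sketch might route each problem to a registered specialist and fall back on a slow general-purpose strategy only when no specialist covers the domain:

```python
from typing import Any, Callable

class MosaicMind:
    """Toy sketch of the mosaic approach: coverage grows by adding
    specialists, not by supercharging one general algorithm."""

    def __init__(self, general_fallback: Callable[[Any], Any]) -> None:
        self.specialists: dict[str, Callable[[Any], Any]] = {}
        self.general_fallback = general_fallback

    def register(self, domain: str, solver: Callable[[Any], Any]) -> None:
        """Adding a specialist widens the system's competence."""
        self.specialists[domain] = solver

    def solve(self, domain: str, problem: Any) -> Any:
        # Route to a domain specialist when one exists; otherwise fall
        # back to the general-purpose strategy.
        return self.specialists.get(domain, self.general_fallback)(problem)

mind = MosaicMind(general_fallback=lambda p: ("general", p))
mind.register("depth", lambda disparity: 700.0 * 0.065 / disparity)
mind.register("kinship", lambda cue: cue == "nursed-by-my-mother")

print(mind.solve("kinship", "nursed-by-my-mother"))  # True
print(mind.solve("navigation", "maze"))              # ('general', 'maze')
```

The dispatch here is trivially keyed by a domain label; a real system would need the much harder machinery of recognizing which specialist a raw input belongs to, which the sketch deliberately leaves out.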
The talk covers the kinship index, welfare trade-offs, a sexual-value estimator, and the nonconscious computation of the kinship index.
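A hedged sketch of how a kinship-index estimator might combine cues of the kind the talk mentions. The two cues (maternal perinatal association and childhood coresidence) come from the evolutionary-psychology literature, but the weights and the 18-year cap below are illustrative assumptions, not the talk's actual parameters:

```python
def kinship_index(saw_mother_nursing: bool,
                  coresidence_years: float) -> float:
    """Illustrative kinship-index estimate in [0, 1] from two cues.

    Seeing your mother nurse an infant is treated as a near-certain
    sibling cue that dominates; otherwise confidence grows with years
    of shared childhood residence (a weaker, duration-graded cue).
    """
    if saw_mother_nursing:
        return 1.0  # strong cue dominates: treat as full sibling
    # Weak cue: scale by an assumed 18-year childhood, capped at 1.0
    return min(max(coresidence_years, 0.0) / 18.0, 1.0)

print(kinship_index(True, 0.0))    # 1.0
print(kinship_index(False, 9.0))   # 0.5
```

This mirrors the "radically reduced information" theme: the estimator never inspects genes, only cheap ancestral proxies that correlated with genetic relatedness.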
Natural intelligence comes up with brilliant hacks that use radically reduced information to enable effective decisions and determinations.
The slides are dense, the talk favors bigger words than necessary and is very difficult to follow, and the delivery is mostly monotone.
The talk proposes looking at many narrow AIs rather than one general algorithm.
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.