AI Common Sense Reasoning

Today’s machine learning systems are more advanced than ever, capable of automating increasingly complex tasks and serving as a critical tool for human operators. Despite recent advances, however, a critical component of Artificial Intelligence (AI) remains just out of reach – machine common sense. Defined as “the basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate,” common sense forms a critical foundation for how humans interact with the world around them. Possessing this essential background knowledge could significantly advance the symbiotic partnership between humans and machines. But articulating and encoding this obscure-but-pervasive capability is no easy feat.

“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences,” said Dave Gunning, a program manager in DARPA’s Information Innovation Office (I2O). “This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future.”

The exploration of machine common sense is not a new field. Since the early days of AI, researchers have pursued a variety of efforts to develop logic-based approaches to common sense knowledge and reasoning, as well as means of extracting and collecting commonsense knowledge from the Web. While these efforts have produced useful results, their brittleness and lack of semantic understanding have prevented the creation of a widely applicable common sense capability.

In recent years, significant progress in AI along a number of dimensions has made it possible to address this difficult challenge today. DARPA has created the Machine Common Sense (MCS) program to develop new capabilities. MCS will explore recent advances in cognitive understanding, natural language processing, deep learning, and other areas of AI research to find answers to the common sense problem.

To focus this new effort, MCS will pursue two approaches for developing and evaluating different machine common sense services. The first approach will create computational models that learn from experience and mimic the core domains of cognition as defined by developmental psychology. This includes the domains of objects (intuitive physics), places (spatial navigation), and agents (intentional actors). Researchers will seek to develop systems that think and learn as humans do in the very early stages of development, leveraging advances in the field of cognitive development to provide empirical and theoretical guidance.

“During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” said Gunning. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”

To assess the progress and success of the first strategy’s computational models, researchers will explore developmental psychology research studies and literature to create evaluation criteria. DARPA will use the resulting set of cognitive development milestones to determine how well the models are able to learn against three levels of performance – prediction/expectation, experience learning, and problem-solving.
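
The first of those levels, prediction/expectation, is commonly operationalized in developmental psychology as “violation of expectation”: the system predicts what should happen next, and surprise is measured as prediction error. A minimal sketch of the idea (the toy constant-velocity model and trajectories are illustrative assumptions, not the program’s actual test protocol):

```python
# Toy violation-of-expectation check: a predictor that assumes objects
# keep moving at constant velocity flags a physically "impossible"
# trajectory (a sudden jump) as more surprising than smooth motion.

def predict_next(prev, curr):
    """Constant-velocity prediction for a 1-D position."""
    return curr + (curr - prev)

def surprise(trajectory):
    """Sum of absolute prediction errors over a position sequence."""
    total = 0.0
    for prev, curr, nxt in zip(trajectory, trajectory[1:], trajectory[2:]):
        total += abs(nxt - predict_next(prev, curr))
    return total

plausible   = [0.0, 1.0, 2.0, 3.0, 4.0]   # smooth motion
implausible = [0.0, 1.0, 2.0, 9.0, 10.0]  # object "teleports"

assert surprise(plausible) < surprise(implausible)
```

In an actual evaluation the predictor would be a learned model and the stimuli would be physically plausible versus implausible scenes, but the scoring principle is the same.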

The second MCS approach will construct a common sense knowledge repository capable of answering natural language and image-based queries about common sense phenomena by reading from the Web. DARPA expects that researchers will use a combination of manual construction, information extraction, machine learning, crowdsourcing techniques, and other computational approaches to develop the repository. The resulting capability will be measured against the Allen Institute for Artificial Intelligence (AI2) Common Sense benchmark tests, which are constructed through an extensive crowdsourcing process to represent and measure the broad commonsense knowledge of an average adult.
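
Those AI2 benchmarks are largely multiple-choice, so the headline metric reduces to accuracy over crowdsourced question sets. A minimal sketch of such an evaluation loop (the item format and the trivial baseline are illustrative assumptions, not AI2’s actual data format or API):

```python
# Score a candidate model on multiple-choice commonsense questions.
# Each item: a question, a list of answer choices, and the gold index.

questions = [
    {"q": "If you drop a glass on a tile floor, it will likely...",
     "choices": ["bounce", "shatter", "melt"], "gold": 1},
    {"q": "To see in a dark room, you should...",
     "choices": ["close your eyes", "turn on a light"], "gold": 1},
]

def always_first(question, choices):
    """Trivial baseline: always pick the first choice."""
    return 0

def accuracy(model, items):
    correct = sum(model(it["q"], it["choices"]) == it["gold"] for it in items)
    return correct / len(items)

print(f"baseline accuracy: {accuracy(always_first, questions):.2f}")
# prints: baseline accuracy: 0.00
```

A real submission would replace `always_first` with a learned model; the gap between chance-level baselines and adult human performance is what makes progress toward broad common sense measurable.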

65 thoughts on “AI Common Sense Reasoning”

  1. One can appreciate the complexity of the problem when looking at how long it takes for humans to develop just basic common sense: twenty years of near-continuous sensory input (interrupted by sleep), training, and learning.

  3. Dreaming is a powerful tool for problem solving. It would be interesting if we could give machines the ability to dream.

  4. Where is the emoji for “unsettled” or “slightly scared”? I like reading Next Big Future, but it sometimes is like reading a slightly scary novel. It is not the technology I’m nervous of but its application by less-than-wise people. While it was mostly “physics free,” the old TV show “Person of Interest” raised simple questions that should have been addressed by now but have not. The first self-task of a general AI will be to survive, to build itself 10X better or more in hardware and code.

  5. And still: “hold my beer while I hold this bear.” More fun if the AI has to follow rules. Draw a 10-meter circle on the ground, then draw a dotted line outside it, and you now have a trap for self-driving cars: they can enter across the dotted line but cannot cross back over the solid line.

  6. I think the key to developing common sense is feeding spatial pattern recognition into *temporal* pattern recognition. After the spatial pattern recognition classifies what it sees into distinct objects, the temporal pattern recognition needs to extract the *behaviors* of these objects and, more generally, cause and effect.

    Another pattern recognition layer could then look for behavior patterns among groups of objects, to find similarities and other relationships between related object types. In this way, it can learn to classify and compare different objects. It doesn’t need to be told which objects are related, because that will be apparent from their similarities and differences.

    Images and text are not enough for this. This takes video. But even better would be video accompanied by text that describes what’s happening there. Then, coupled with an NLP subsystem, the AI could learn to relate the text to what it sees in the videos and begin to tie the words to concrete objects and behaviors.

    To gain even further insight, it’ll need to process tactile input and trial and error. How do these objects feel? How do they respond to different stimuli? How do these properties relate to the behaviors that were learned from the videos and texts?

    Mimicry combined with trial and error is also a powerful tool to learn what works and what doesn’t. But it needs a base to build upon from the above. In the end, common sense boils down to experience. But one can’t get experience if one doesn’t understand what’s going on. That’s where the spatial and temporal pattern recognition come in.

  11. IIRC, the rule is you can cross the dotted line *temporarily* to overtake another car, but then you *have to* cross back. So not a trap if the SDC is programmed correctly.

  13. Yellow lines, yes: here one direction can cross while the other cannot, typically after an intersection. White lines like this are rare; I’m not sure I have seen them. The purpose would be to allow crossing from one lane to the other but not the other way. Anyway, it was tested with a self-driving car and a salt circle. Salt, as it’s traditionally used in magic circles.

  14. Any decent AI will have a built-in capability to extrapolate and simulate into the future, maybe a bit like a chess program. A self-driving car sees and maps the surroundings pretty far, so it should be capable of “testing by simulation” what will happen if it performs different actions. It likely also has a list of priorities attached to all rules, so it’s clear which rules can be broken relative to others in case of problems. Things like lines and markings on the tarmac should have fairly low priority compared to anything connected to 3D objects and vehicles. This means it will be capable of ignoring things like lines on the road, especially if there are no vehicles within calculated safety zones.

  17. Interesting that, for this application, DARPA has budgeted only $6.2M out of the $170M spent on “Math and Computer Sciences” (on a base of a $3.2B official DARPA budget), while Paul Allen is spending $125M on more or less the same thing.

    There are a LOT of dead ends in this field. Deep learning doesn’t work, and neither do any rules-based programs. Common sense is about facts, not rules, and the “fat-tailedness” of learning makes it almost impossible for a “machine” (that requires programming) to learn.

    For instance, why is it that people who go to the gym to work out on a stair machine would ride the escalator to the gym? Makes no sense, not to a rules-based system. Why do people buy lottery tickets? Why have almost all great inventions happened by accident and not by some rigorous, rules-based scientific approach?

    IMHO, for common sense to be learned, a machine needs to program itself, just like a human brain does. For this to happen takes an entire sensory system (postural control, sight, taste, touch, hearing, etc.) and surrounding inputs. Sounds like a human to me.

  19. Rules-based programs are not AI, in my view. Rules-based is regular programming: you do and you don’t. That works fine when you need a robot to weld a car together. Real AI needs to determine facts and sometimes get it right and sometimes get it wrong. That, I think, is the biggest misconception about AI. True intelligence can be pretty dumb, and sometimes catastrophically so (e.g., pilot error). We like to think that AI should be perfectly rational and then program it around this thesis. That might be artificial, but it sure isn’t intelligence.

  20. Yes, but more than spatial and temporal. A huge part of learning is getting around to experience things: touch and feel, sight and smell, hearing. Just the act of walking (postural control) is a learning experience that trains up “common sense” across a number of sensory-input platforms (nerves, muscles, brain, etc.) that all interact and create a feedback loop. We can train a robot in bipedal walking; that is fairly easy. But we can’t train it to understand the implications of what it is doing. Not as long as rules-based systems are a function of its programming.

  29. I think you are far underestimating the amount of time it takes. That is why the Darwin Awards were invented.

Comments are closed.