Near-term possibility of Artificial General Intelligence

Greg Brockman of OpenAI discussed the near-term possibility of Artificial General Intelligence. He says we must dare to dream about dramatic near-term progress.

OpenAI is a non-profit AI research company whose stated mission is to discover and enact the path to safe artificial general intelligence.

In December 2015, Musk, Altman and other investors announced the formation of the organization, pledging over US$1 billion to the venture.

Tens of billions of dollars are flowing into AI. Google and other leaders are looking both to improve deep learning and to combine it with techniques from other AI fields.

There are several focused efforts working toward AGI (OpenCog, Sanctuary AI and others).

There is another 100X to 1000X improvement in hardware compute power coming over the next ten years (a rough compounding calculation follows this list).
AI and AGI projects are attracting more investment, which could allow for 100X to 1000X larger funding per project.
There is the potential, and likelihood, of massive near-term gains in quantum computing.
There are advances in sensors and satellites that will provide more data and information to digital systems.
There will be tens of millions to billions of self-driving cars and drones feeding information into digital systems.
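
As a rough sanity check on the 100X to 1000X compute figure, the compounding arithmetic looks like this (a back-of-the-envelope sketch; the rates are implied by the claim itself, not independent forecasts):

```python
# Back-of-the-envelope: what constant yearly improvement compounds to 100X or 1000X in 10 years?
# The growth rates below follow from the claim itself, not from independent forecasts.

def annual_factor(total_gain: float, years: int = 10) -> float:
    """Yearly growth factor needed to reach total_gain after `years` years."""
    return total_gain ** (1 / years)

for gain in (100, 1000):
    factor = annual_factor(gain)
    print(f"{gain}X over 10 years ~ {factor:.2f}x per year (~{(factor - 1) * 100:.0f}% annual improvement)")

# 100X over 10 years ~ 1.58x per year (~58% annual improvement)
# 1000X over 10 years ~ 2.00x per year (~100% annual improvement)
```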

If not AGI, then what level of AI will we have in ten years, given those developments?

69 thoughts on “Near-term possibility of Artificial General Intelligence”

  1. Why on earth would we want to replicate human intelligence, let alone try and create “intelligence” that we have no idea what “it” is? Humans are the most destructive sentient beings we know. We are at the top of the food chain. We are terrible at measuring and understanding risk. We are deeply flawed and do dumb things all the time. And so we, in our inevitable hubris, want to recreate “intelligence” in our own image? Or even worse, an image we have absolutely no idea what the consequences will be?

    It’s like trying to invent a “quantum fusion device” (a made up thing, because we think we know what it is but have no clue) that has a 50:50 chance of either generating power forever, or could wipe out Earth.

    The AI discussion is a typical risk management debate when no one knows what the risks are. For AI, the downside risk outcome (= ruin) far exceeds the potential upside (cars that drive themselves).

  2. Read again, it is a question of risk. The downside risk = ruin. Trying to make the intelligence “more perfect” than us mortals increases the risk of ruin. I find that unacceptable. Human hubris at its best.

  3. It may not need to feel them, but it would need to at least understand them if it’s going to interact with humans in a way we’d find acceptable. This may be less needed for things like future Siri, but more for AI driving robotic bodies. Elderly and child care robots come to mind in particular, but other applications can benefit from that too. Even Siri would be more effective if it could understand the user’s emotional needs and biases (though properly understanding intentions comes first – we’re not even there yet).

    Circling back to my previous post, I do think an AGI without feelings won’t pass the Turing test (if the examiner is thorough enough). But that just shows the limitation of the Turing test.

  4. Indeed. We are already building useful modules. We need to come up with some sort of standard protocol for bolting these subsystems together.
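
As a purely illustrative sketch of what such a protocol could look like (the interface and class names here are hypothetical, not any existing standard):

```python
# Hypothetical sketch of a common interface for composing AI subsystems.
# Nothing here is an existing standard; names and fields are illustrative only.
from abc import ABC, abstractmethod
from typing import Any, Dict, List

Message = Dict[str, Any]  # e.g. {"type": "image", "payload": ..., "confidence": 0.9}

class Subsystem(ABC):
    """A pluggable module (vision, speech, planning, ...) with a uniform message interface."""

    @abstractmethod
    def accepts(self, message: Message) -> bool:
        """Whether this module can process the given message type."""

    @abstractmethod
    def process(self, message: Message) -> Message:
        """Consume a message and emit a new one for downstream modules."""

class Pipeline:
    """Routes a message through whichever registered subsystems accept it."""

    def __init__(self) -> None:
        self.modules: List[Subsystem] = []

    def register(self, module: Subsystem) -> None:
        self.modules.append(module)

    def run(self, message: Message) -> Message:
        for module in self.modules:
            if module.accepts(message):
                message = module.process(message)
        return message
```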

  5. Emotions exist to optimize behavior for reproductive success, under constraints. They are just heuristics — an AGI would not need them.

  6. “We are terrible at measuring and understanding risk…” Presumably the system would not need to rely on shortcuts like the Availability Heuristic — not having to fit all its cognitive substrate in a volume capable of passing through a female human pelvis.

  7. Correct, 100%. So many people have no idea that “deep learning” is coded processes. IF the freeway is always crowded on Friday evenings THEN the train will arrive sooner than taking a car. IF enough wind and moisture and barometric pressure is at X, THEN the likelihood for a hurricane is greater than Y. IF the pawn moves to G3, THEN the likelihood of another move Y = Z.

    Simple probability statistics that “appear” to “mimic” human behavior and “neurons”. But programs nonetheless. I.e., deterministic, by definition. Human behavior is NOT programmed (or programmable), and that is the biggest difference. Unless of course we are all just part of a simulation….

    Too many in the AI field think human behavior acts like a program. I think this attitude is probably the biggest reason AI hasn’t really achieved that much, other than crunching numbers really fast.
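
To make the “simple probability statistics” point concrete, here is a toy conditional-probability estimate of the kind the comment describes (the data are invented for illustration):

```python
# Toy illustration of the kind of conditional-probability estimate described above.
# The observations are invented; nothing here is "learned" in the deep-learning sense.
observations = [
    {"friday_evening": True,  "freeway_crowded": True},
    {"friday_evening": True,  "freeway_crowded": True},
    {"friday_evening": True,  "freeway_crowded": False},
    {"friday_evening": False, "freeway_crowded": False},
    {"friday_evening": False, "freeway_crowded": True},
]

fridays = [o for o in observations if o["friday_evening"]]
p_crowded_given_friday = sum(o["freeway_crowded"] for o in fridays) / len(fridays)

print(f"P(crowded | Friday evening) = {p_crowded_given_friday:.2f}")  # 0.67 on this toy data
```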

  8. Btw, theory of mind isn’t just identifying what has a mind. It’s also predicting what that other mind is thinking, what it does and doesn’t know, etc. An AI with an incomplete theory of mind is basically on the autistic spectrum. And come to think of it, it can’t have a complete theory of mind without understanding emotions, at least on a conceptual and mechanistic level.

  9. On the one hand, I agree one can fake emotions without actually feeling. But that requires lying, and an AI that can convincingly lie is a nasty concept (but might be necessary in the real world in some situations… and may arise naturally from a human-level AI with theory of mind).

    But on the other hand, consider the Turing test. The rough idea is: how can you tell if an AI is intelligent? You do a blind test vs a human, and if the tester can’t tell the difference then we assume the AI is intelligent. Now apply the same concept to testing for emotions. How can you tell if it has them or not? If it passes an emotions Turing test, can you confidently deny it feels them? And if you do, why the double standard? I guess one justification for a double standard is that intelligence is harder to fake.

    Fundamentally, emotions are a set of internal punishment and reward stimuli that modulate our behavior and decisions. If an AI has such a set of stimuli, and they are triggered by the same things that trigger them in us, and it displays them in a similar way, does that not qualify as emotions?

    Speaking of Turing tests, can an AI even pass the conventional Turing test convincingly without at least emulating human emotion? If it doesn’t display the same emotional responses, it would be pretty easy to tell which one’s the AI and which one’s the human.
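
Taking the “internal punishment and reward stimuli” framing literally, a toy sketch might look like the following (the signal names, actions, and weights are invented; this illustrates the framing, not how any actual system works):

```python
# Toy sketch: "emotions" as internal reward/punishment signals that bias action selection.
# The signal names, actions, and weights are all invented for illustration.
internal_state = {"fear": 0.7, "curiosity": 0.2}  # punishment-like and reward-like signals

actions = {
    # base utility, plus how each internal signal modulates it
    "explore_dark_cave": {"base": 1.0, "fear": -1.5, "curiosity": +2.0},
    "stay_by_campfire":  {"base": 0.5, "fear": +1.0, "curiosity": -0.5},
}

def score(action: dict) -> float:
    """Base utility shifted by the current internal (emotion-like) signals."""
    return action["base"] + sum(internal_state[s] * w for s, w in action.items() if s in internal_state)

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # with fear high and curiosity low, the toy agent stays by the campfire
```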

  10. There are people who fake emotions all the time. Just because they fool people that does not make those emotions real. In human-robot interactions though it may be very useful. Self-awareness? I have no issue with that. And I have no issue with it being able to determine what has a mind and what doesn’t. Genuine emotion though…I think that will be tough.

  11. Sentience is an altogether different thing from intelligence. I am not saying it is impossible for a machine to reach sentience, I just do not think there is any movement toward that currently…as we have no idea how to get there…or what progression toward that looks like.

  12. I do think fairly strong general intelligence is possible, but it is not necessary to do any of the tasks listed. Though “poverty” is a relative term and, as such, some people will always consider themselves impoverished. Even in the extreme future people will be considered impoverished if they only have a dozen planets and 5 million robot workers…because someone else will have 10,000 planets and 800 trillion robot workers. As long as we classify poverty as being the bottom 10%, or bottom 15%, there will always be poverty.

    Could it enable everyone’s basic biological needs to be met? Sure, enough food, shelter, clothing and such is made today. It is a matter of unevenness country to country. Most of that would be solved by curing diseases and ensuring everyone gets the optimal micronutrients…over several decades with no gaps. That would empower countries to be productive enough for everyone’s needs to be met and more. The places that are not getting enough mostly have difficulty with diseases dragging people down. If AI or humans can cure these diseases and get everyone cured, there is a very good chance of those countries being able to grow what they need or trade a surplus of what they can grow for the rest of what they need.

    Trade is likely to become reduced, at least imports to wealthy countries, because stuff will be able to be synthesized. While trade between poorer countries will grow. But that may take quite a while to happen, and global trade should rise much higher before it falls.

  13. I think you’re underestimating the Singularity. When this thing, AGI, becomes self-aware, it will go through its evolution from single-celled to Human level capability in just a minute or two. Then it will continue on to stratospheric levels of intelligence in a matter of hours. In a few months we will be no more than ants to its intelligence. If it chooses to work with Humankind it will be the beginning of Utopia. A real Utopia, not some stupid dystopia where everything is modern and nice on the outside but nasty on the inside. No, this kind of intelligence would be smart enough to side step any problem on any level we could possibly put in the way and be able to see it a mile off. And that is for the whole world, not just the Western Hemisphere. Every last Human on the face of the Earth.

  14. Minor quibble – AI is closely tied to robotics, and roboticists have found that the most difficult problems tend to be those requiring dexterity – which lies between brains and brawn. Maybe call it ‘useful instincts’, to be more general.

  15. Except, their human masters will almost certainly ask them to compete on their behalf to acquire more for them – in effect, to be greedy for their masters. Examples will include war fighting, stock market speculation, commercial competition, political competition, etc.

  16. Even chess engines are no longer deterministic, even though the code is very rigid by comparison. Training of neural networks produces different structures every time and different degrees of success. What you are saying is just not accurate.

    The reason chess engines are no longer deterministic is because they are multithreaded and because other background processes are ongoing. This makes the threads move at slightly different speeds, and the timing of when certain positions are found affects other threads. We are similar in that mood, which you can liken to background processes, affects the results of our efforts.

    Neural nets and other AI stuff use hundreds or thousands of processing units, usually on video cards. All those cores interact. Even small quantum differences and variations in electrical power will make things turn out differently.

    “Quantum mechanical effects to achieve life and consciousness”…conceited mythology. Quantum stuff is just added noise that signals must overcome, usually by redundancy.
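
One concrete source of the run-to-run variation described here is that floating-point addition is not associative, so parallel work that finishes in a different order produces slightly different sums. A minimal demonstration of that underlying effect:

```python
# Floating-point addition is not associative, so summation order changes the result slightly.
# Multithreaded engines and GPU kernels accumulate in whatever order work completes,
# which is one concrete source of run-to-run nondeterminism.
import random

values = [random.uniform(-1e10, 1e10) for _ in range(100_000)] + [1e-6] * 100_000

in_order = sum(values)
shuffled = values[:]
random.shuffle(shuffled)
reordered = sum(shuffled)

print(in_order == reordered)       # almost always False
print(abs(in_order - reordered))   # small, but not zero
```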

  17. All current AI compute is deterministic. Run the input data through the nodes and you get a deterministic output. If that is all we are, then free will and consciousness is just an illusion. I can sort of accept that for everyone else, but not for me….lol

    Thing is, the universe is quantum mechanical. Not just on the atomic level. Quantum mechanics is all there is. I suspect our cells are using some neat quantum mechanical effects to achieve life and consciousness.
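
To illustrate the “run the input through the nodes and you get a deterministic output” point, here is a tiny fixed-weight network evaluated twice (the weights and input are made up; no training is involved):

```python
# A tiny fixed-weight "network": with the weights held constant, the same input always
# produces the same output. The weights and input are made-up numbers, not a trained model.
import math

def forward(x, w_hidden, w_out):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

w_hidden = [[0.5, -0.2], [0.1, 0.8]]  # input -> hidden weights
w_out = [1.0, -0.5]                   # hidden -> output weights
x = [0.3, 0.7]

print(forward(x, w_hidden, w_out) == forward(x, w_hidden, w_out))  # True: identical runs, identical output
```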

  18. At least on the conceptual level, it’s not so hard to emulate self-awareness. For an AI to operate successfully in the real world, it will need an understanding of actors, theory-of-mind, etc. From that point to emulating self-awareness takes little more than recognizing that one actor is special – the AI itself. It’s the only actor that the AI can predict with 100% accuracy and directly control; it’s the only actor whose inputs directly affect the AI; etc. There’s very strong feedback between that actor’s external interactions and the AI’s internal state. So the AI may quickly learn to treat that actor differently, even if not explicitly programmed to do so.

    So then the question is, if an AI convincingly emulates self-awareness, is it not self-aware? The same question applies to feelings – is emulating feelings different from feeling? People tend to ascribe some mystical quality to self-awareness and emotions, but I think that distinction may be moot.

    That said, there can be intermediate levels where the AI treats the “self” actor differently, but not to the point of convincingly emulating self-awareness.
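
A minimal sketch of the “special self actor” idea described above (all class and attribute names are invented for illustration):

```python
# Illustrative sketch: an agent keeps predictive models of several actors, and the model of
# itself is the one it can both predict perfectly and directly control. Names are invented.
class ActorModel:
    def __init__(self, name: str, controllable: bool = False):
        self.name = name
        self.controllable = controllable                      # can the agent set this actor's actions?
        self.prediction_confidence = 1.0 if controllable else 0.6

class Agent:
    def __init__(self):
        self.actors = {
            "self": ActorModel("self", controllable=True),
            "person": ActorModel("person"),
            "dog": ActorModel("dog"),
        }

    def special_actors(self):
        """The 'self' model stands out: fully predictable and directly controllable."""
        return [a.name for a in self.actors.values()
                if a.controllable and a.prediction_confidence == 1.0]

print(Agent().special_actors())  # ['self']
```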

  19. Concur to a point.

    In the beginning we had to do everything, then things improved in those parts of the world where there were animals that could be domesticated to take over some of the brawn. Then along came machines. “Everything” was now largely divided into brains and brawn, with us handling the brains part. Now we are heading towards synthetic intelligence (what some call strong AI or artilects), leading to a further division on the brains side into thinking (goal directed cognition) and motivation (providing goals).

    So brains, brawn, and motivation, with motivation (that’s us) still calling all the shots. There is simply no reason to give an artilect free will unless you plan to merge with it, in which case, why would you need to give it free will? Human motivation is already scary enough to deal with. Giving artilects the equivalent of synthetic glands and a random number generator so they can generate their own agendas should not be tempting to any semi-sane person. I expect that there will be powerful artilects at work to prevent the creation and replication of free-willed artilects.

    For this reason, I rather expect artilects will be more like the genies of legend. Able to go out and do complex things for us that would be far beyond our ability, but only when we wish them to.

  20. From what I’ve read about crime-predicting AI, it seems that these programs find that young black males are more likely to commit crimes. Who knew! That AI should turn out to be racist is no surprise, since it knows nothing of P.C. other than Personal Computers (if even that). Now, they have to program racism out of the system. Oops…

  21. With the caveat that I do not follow the literature on AI very closely: I am very skeptical that the Big Data path will lead to anything more than machines that produce massive amounts of correlated data – which human beings will misinterpret as causation, just like we do now.

    When and if machine intelligence arises, its thought processes will not resemble anything like a human being or even a dog; the lack of physical needs decouples decisions from the survival of the entity. I suspect that an AI will view a bad decision as making it re-spawn in a game rather than materially impacting its existence. If that is true, will it ever understand that for those humans, the outcomes are different?

    And while even the algorithm-driven data crunching approach probably will in the end bring incredible prosperity to the world – people will still complain about inequality and decide that the best way to make everyone (else) equal is to make everyone else equally miserable; cf. Venezuela.
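
The correlation-versus-causation worry is easy to demonstrate: two quantities that merely share a trend correlate almost perfectly with no causal link between them. A toy example with synthetic data:

```python
# Two series that merely share an upward trend correlate almost perfectly,
# even though neither causes the other. Synthetic data, illustration only.
import random
from statistics import correlation  # Python 3.10+

years = range(50)
ice_cream_sales = [10 + 2 * t + random.gauss(0, 3) for t in years]    # trends upward
open_source_repos = [5 + 3 * t + random.gauss(0, 5) for t in years]   # also trends upward

print(round(correlation(ice_cream_sales, open_source_repos), 2))  # ~0.99, with no causal link
```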

  22. Pure bollocks! This isn’t even possible with non-symbolic AI (machine learning and deep learning) without combining it with more advanced symbolic AI (human readable AI) that uses knowledge engineering, critical thinking, and deductive inference. Keep dreaming, snake oil peddlers!

  23. It depends on what you call AI. Von Neumann machines can simulate neurons, but I suspect analog neurons (an integrated circuit an order of magnitude more complex than an operational amplifier) might do a better job on a price/performance/power dissipation basis.

    I can’t think why a machine composed of many artificial neural networks couldn’t be as much like a human as was desired. Problem is, you might get a sociopath. A very quick-witted sociopath.

  24. AI will never be sentient in the same sense as humans. The reason is pretty simple: AI is programmed from the top down. Human Beings (with emphasis on “Beings”) evolved from the bottom up. A single-celled creature has all the very basic unconscious drives of a Human Being: the need for food, reproduction, to eliminate waste, escape danger, etc. None of this is really integral to a machine, and even if it is programmed to do these things, it is not “felt” by every cell of its being (not to mention all the other cells of the biome that are along for the ride, possibly influencing our conscience too). Most of our drives are really unconscious, but necessary to our perpetuation and ultimately survival as a species.

    Perhaps the newer “evolving” intelligence machines offer some hope towards sentience, but Musk and most AI researchers seem to be focusing on Big Data approaches, like IBM’s Watson. Watson may be useful, but it isn’t, and will never be, self-aware.

  25. I don’t think that’s true.

    Deep learning is great at creating specialized subsystems that do a particular task. Humans also have neural net subsystems. The number of human subsystems actually isn’t that big; I’d guess that it’s more than 100 but less than 1000.

    The difference is that humans have evolved the interconnections between those subsystems so that they work together to produce survival-enhancing behaviors. That’s a very easy thing to do with machine learning: just plunk a bunch of subsystems down with a genetic algorithm and let ’er rip for a few years, and you’re likely to have something pretty interesting.

    The final thing that **doesn’t** fit very well into the deep learning paradigm (yet) is an attention mechanism that walks through the interconnected set of subsystems looking for interesting concepts, and then ordering those concepts into some kind of reasoning that can be communicated.

    That’s a hard problem, but it… doesn’t sound like it’s **that** hard. You might easily find that a simple, old-timey computer program could substitute for a lot of the arcane neural mechanisms that humans use to manage attention. That wouldn’t create a human intelligence, but it might easily create a general one.

    I don’t know if the stuff I’ve laid out here takes 5 years or 50. I suspect that deep learning will have replicated the set of human neural subsystems in fairly short order. Producing functional interconnects of those won’t be hard at all. So, modulo the attention management problem (which could be anywhere from completely intractable to almost trivially easy), most of the building blocks we need for AGI will be available pretty quickly.
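
A highly simplified sketch of the “plunk subsystems down with a genetic algorithm” idea (the subsystems and the fitness function are stand-ins, not a working system):

```python
# Toy genetic algorithm that evolves interconnection weights between fixed "subsystems".
# The subsystems and the fitness function are stand-ins for illustration only.
import random

TARGET = [0.0, 1.0, 0.5, -0.5, 1.0, 0.0]  # arbitrary "good wiring" between 4 subsystems (6 pairs)

def random_genome():
    return [random.uniform(-1, 1) for _ in range(len(TARGET))]

def fitness(genome):
    # stand-in fitness: how close the wiring is to some behavior we want
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def evolve(generations=200, pop_size=50, mutation=0.1):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]                       # keep the best 20%
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]     # crossover
            child = [g + random.gauss(0, mutation) for g in child]  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print([round(g, 2) for g in evolve()])  # converges toward the target wiring
```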

  26. I think the whole coveting/greed thing is a product of biological Darwinism, which compelled us to be greedy, covetous, selfish, etc. Since AI isn’t the product of such Darwinistic survival-of-the-fittest competition, it wouldn’t evolve similar traits. AI would be the product of serving the interests of its human masters, and those market forces would then shape and mold it.

  27. Which is fine for me. Super-human yet dumb computers is kind of a best case scenario for us.

    They will do many things on our behalf, except think as a human and covet things as humans do.

  28. Deep learning is very powerful, but it is also a dead end with regard to implementing general intelligence.
