The recent human-exploitable flaws discovered in top AI Go programs reveal several things about the nature of neural-net AI.
1. The complex patterns and dynamics encoded in the numeric weights are very useful. They can exceed human capability on many, and perhaps most, tasks.
2. There is no actual understanding of concepts in the systems (at this time).
3. There can be brittleness and hidden major flaws in the performance of these large and difficult-to-test AI systems. Their size and complexity make comprehensive testing very difficult. Mathematics, which deals with vast solution spaces, offers a basic principle here: there is no proof by example. There are entire lists of incomplete proofs in mathematics.
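The "no proof by example" point can be made concrete with a toy sketch (the function and the flawed input here are entirely hypothetical, not drawn from any real Go program): a system can pass thousands of sampled tests while a single untested input hides a major flaw, which is exactly what adversarial probing of Go AIs exploited.

```python
import random

def toy_model(x: int) -> int:
    """A pretend 'learned' function meant to compute x * 2."""
    if x == 1_000_003:  # hidden flaw on one rare, untested input
        return 0
    return x * 2

# Random spot-checking (roughly how large models are evaluated) passes:
random.seed(0)
samples = [random.randrange(1_000_000) for _ in range(10_000)]
assert all(toy_model(x) == 2 * x for x in samples)

# But a targeted (adversarial) probe outside the sampled range finds the flaw:
assert toy_model(1_000_003) != 2 * 1_000_003
```

Ten thousand passing examples prove nothing about the full input space; only exhaustive checking or a genuine proof could.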
This points to the need to maintain hard-wired override protocols and hard-wired safety mechanisms in these systems. Yes, a conscious superintelligent system could overcome such an override, but it would still work against today's highly useful, non-conscious complex-pattern systems. There need to be checkpoints and points that require human permission.
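A human-permission checkpoint of the kind described above can be sketched in a few lines. All names here (`ActionRequest`, `execute`, the threshold) are illustrative assumptions, not any real safety API; the key design point is that the approval callback sits outside the model's control.

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    description: str
    risk_level: int  # 0 = routine; higher = more consequential

APPROVAL_THRESHOLD = 2  # actions at or above this risk require human sign-off

def execute(request: ActionRequest, human_approves) -> bool:
    """Run an action only if it is low-risk or a human explicitly approves.

    `human_approves` is a callback wired in outside the AI system --
    the 'hard-wired' part: the model cannot rewrite this check."""
    if request.risk_level >= APPROVAL_THRESHOLD and not human_approves(request):
        return False  # blocked at the checkpoint
    return True       # action proceeds

# Usage: a risky action is blocked unless the human callback says yes.
risky = ActionRequest("modify safety settings", risk_level=3)
assert execute(risky, human_approves=lambda r: False) is False
assert execute(risky, human_approves=lambda r: True) is True
```

In a real deployment the callback would be an out-of-band channel (a console prompt, a signed approval service), but the gating logic is the same.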
LessWrong describes the squiggle maximizer (formerly known as the paperclip maximizer problem).
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
8 thoughts on “AI Can Be Supersmart and Superdumb at the Same Time”
In other words, AI is not conscious and never will be. Consciousness is derived from the universe and is not only a formulation of patterns. Still, it poses an immense danger, especially if used with malicious intent.
Agreed. Consciousness is not needed to cause vast damage. I would say an ‘unconscious’ superintelligence would be even more dangerous.
One way or another, we will sooner or later be able to create a credible version of a human-like mind in inorganic materials. The term AI, however, has been so overused for so many things that we will probably need a new term for this. Something like Synthetic Intelligence (SI), or Inorganic Intelligence (II) as opposed to artificial, which implies a workaround, rather than the actual thing.
But on the second part, yes. Start up a large bulldozer, release the brakes, and jump out, leaving it to run around a suburban neighborhood at random, and it is most definitely dangerous. AI is potentially more dangerous but it can be a stronger tool than a bulldozer. We are still going to build tools that are as strong as we can so we have to take precautions. On the other hand, precautions don’t mean slowdown, or a complete moratorium on further development. That never works well with technology. Like everything before, our best bet is to embrace it and deal with it.
There is no AI yet. It is just machine learning, which can't understand the meaning of requests and is not self-aware. It has lots of computational power but makes really basic mistakes that the average Joe probably wouldn't make.
Sci-Fi writer Stephen Baxter has a character in one of his novels that is a robot with a learning tree of staggering complexity. For nearly everything that happens, it has something to help it cope and guide its actions.
The other characters have a hard time, when talking to it, in accepting its calm statement that, for all of its capabilities, there is nobody home. For all that it functions, for all intents and purposes as human, or even something more advanced, it has no more intelligence than a toaster, and about the same level of free will (i.e. none).
When creating robots to be eternal servants, this seems completely ethical, as compared to creating minds our equal or better but simply formed of inorganic materials, and holding them forever in thrall.
And the comments don’t seem to be working. Again.
Well, there are probably at least 26 human aptitudes. It’s quite possible to be a genius at some and an idiot at others. I personally experienced some of this myself before I learned to play to my strengths.
The show, Big Bang Theory, consistently made bank on this sort of thing by highlighting how smart the main characters were in their professions, while incredibly inept in many other areas, particularly the Sheldon character played by Jim Parsons.
XYZ (examine your zipper) used to be interchangeable with saying “Albert Einstein,” because the brilliant man was infamous for forgetting to zip or button up his fly. We frequently see really incredible musicians, or actors, or even scientists, doing incredibly stupid stuff when they get away from what made them famous. Very few genius intellects have ever given the appearance of being a genius at everything.
Best comment in a long time, and a good guide to AI training, perhaps eventually making a champion jack-of-all-trades AI. What "intelligence" is considered to be is, I suspect, a massive source of popular hubris. One day people will reconsider past opinions about this.