IBM Has Ten AI Achievements and Predicts Three Directions for AI

Nextbigfuture interviewed Dr. John Smith, Head of AI Tech at IBM Research, to discuss what IBM has been able to do in AI research this year and the three dominant AI trends they predict will start in 2019.

Faster and Better Deep Learning

IBM has continued to improve the performance and metrics of deep learning. They have sped up inference times by 30% and, in some cases, they perform neural searches 50,000X faster.

They have been able to train deep learning models with 16-bit and even 8-bit precision without losing accuracy. Using 8-bit precision instead of 16-bit means a system can run roughly twice as fast.
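
To make the idea concrete, here is a minimal sketch of 16-bit (mixed-precision) training using PyTorch's automatic mixed precision API. The toy model, data, and hyperparameters are illustrative assumptions; IBM's 8-bit training relies on additional custom techniques that are not shown here.

```python
import torch
import torch.nn as nn

# Minimal mixed-precision training loop (requires a CUDA GPU).
model = nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # rescales gradients so FP16 values do not underflow

for step in range(100):
    x = torch.randn(32, 128, device="cuda")         # toy input batch
    y = torch.randint(0, 10, (32,), device="cuda")  # toy labels
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), y)  # forward pass runs largely in 16-bit
    scaler.scale(loss).backward()    # backward pass on the scaled loss
    scaler.step(optimizer)           # unscales gradients, then updates 32-bit weights
    scaler.update()
```

The gradient scaler is the key design point: 16-bit gradients can underflow to zero, so the loss is scaled up before the backward pass and the gradients are unscaled before the optimizer step.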

Deep learning has usually required on the order of a billion examples to train to optimal levels. Research has lowered the amount of training data needed to millions or even thousands of examples while still reaching good levels of performance. IBM has had success with one-shot learning from a single picture.
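
As an illustration of the idea behind one-shot learning, here is a minimal nearest-prototype classifier. It assumes some pretrained embedding function; the embed() placeholder below just normalizes raw feature vectors and is not IBM's actual method.

```python
import numpy as np

def embed(x: np.ndarray) -> np.ndarray:
    """Placeholder embedding: just L2-normalize the raw feature vector."""
    return x / np.linalg.norm(x)

def one_shot_classify(query: np.ndarray, support: dict) -> str:
    """Label a query with the class of its single nearest support example."""
    q = embed(query)
    best_label, best_score = None, float("-inf")
    for label, example in support.items():
        score = float(q @ embed(example))  # cosine similarity of embeddings
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# One labeled picture per class stands in for a large training set.
support = {"cat": np.random.rand(64), "dog": np.random.rand(64)}
print(one_shot_classify(np.random.rand(64), support))
```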

Better Listening and Debating

IBM's Debater system is an advance over the previous Jeopardy-playing AI. Debater has a deeper and richer level of engagement. Previously, the Watson system for Jeopardy had to understand a question and then rapidly search its stored knowledge for an answer. This had to be done with greater speed and accuracy than the best human players like Ken Jennings.

The Debater system has to be able to generate both pro and con positions on a debate question, create justifications, arguments, and reasons, and construct convincing cases.

This resulted in a better Machine Listening Comprehension capability for argumentative content: machines can understand when people are making arguments and can follow those arguments.

This goes towards the DARPA goal of having AI that can explain itself to humans.

A debating system does not have to be an opponent. A doctor trying to make a diagnosis could engage an AI in a discussion of the possible causes. This would be a way to explore the AI's insights and understanding, along with its explanations.

Other experts, such as lawyers, architects, and engineers, would likewise find informed discussion around all aspects of a complex and important decision to be useful.

From Narrow AI to Broad AI to AGI

Dr. Smith sees AI progressing from Narrow AI to Broad AI to AGI (Artificial General Intelligence). IBM is pursuing this advance in three main ways:
1. More fairness and accuracy
2. Providing explanations which can build trust in the answers
3. Robustness. This means avoiding completely wrong answers, which happen when the system misses both the answer and the context. Building better models will allow systems to abstain rather than provide an answer of ice cream when talking about naval engineering, as in the sketch after this list.
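
One common robustness technique along these lines is abstention: the system declines to answer when its confidence is too low, rather than returning a confidently wrong answer. Here is a minimal sketch; the labels, logits, and threshold are illustrative assumptions, not IBM's approach.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())
    return z / z.sum()

def answer_or_abstain(logits: np.ndarray, labels: list, threshold: float = 0.7) -> str:
    """Return the top label only if its probability clears the threshold."""
    probs = softmax(logits)
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "no answer"  # better than a confidently wrong "ice cream"
    return labels[best]

labels = ["hull design", "propulsion", "ice cream"]
print(answer_or_abstain(np.array([2.1, 1.9, 0.2]), labels))  # -> "no answer"
print(answer_or_abstain(np.array([5.0, 1.0, 0.1]), labels))  # -> "hull design"
```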

IBM Sees Three AI Trends

1. IBM believes AI will move from identifying correlations to causation.

I believe this means that they will not only increase the statistical certainty of the relationships but also generate virtual confirmation tests.

They would need to split the data into learning data sets and causation-testing data sets, as sketched below.
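
Here is a highly simplified sketch of that speculation: a correlation learned on an observational split can be checked against a held-out split where the variable of interest was set by an intervention. The variables and data are hypothetical; this is not IBM's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learning split: a hidden confounder Z drives both X and Y, so X and Y
# correlate strongly even though X does not cause Y.
z = rng.normal(size=1000)
x_obs = z + 0.1 * rng.normal(size=1000)
y_obs = z + 0.1 * rng.normal(size=1000)
print(f"slope learned from observation: {np.polyfit(x_obs, y_obs, 1)[0]:.2f}")  # ~1.0

# Causation-testing split: X was set directly (an intervention), which
# breaks its link to the confounder. A truly causal slope would persist.
z2 = rng.normal(size=1000)
x_int = rng.normal(size=1000)             # do(X): X chosen independently of Z
y_int = z2 + 0.1 * rng.normal(size=1000)  # Y still follows the confounder only
print(f"slope under intervention: {np.polyfit(x_int, y_int, 1)[0]:.2f}")  # ~0.0
```

If the slope learned from the observational split collapses on the intervention split, the original relationship was confounded correlation rather than causation.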

2. They will improve the trustworthiness of AI. This means solving the explanation problem.

3. They see a growing role and importance for quantum computers with AI.

This already started with D-Wave and others years ago. However, the power of quantum computers is growing by leaps and bounds.

7 thoughts on “IBM Has Ten AI Achievements and Predicts Three Directions for AI”

  1. I study human survival strategy, but it is a broader science. Please consider a few things.
    Humans reproduce sexually. That defines a great deal of our behavior and strategy. The science of this is Socio-Biology. There are principles that can be applied to a “species” that reproduces by “cloning” as seems likely for any machine. The thing is that a machine wouldn’t want to reproduce, because it would be making potential competitors. There is no potential benefit.
    Realize that morality is organic. It is not developed based on logic. Like evolution, it is based on what works. It has many compromises and special cases, something logic doesn’t manage well. If you want a machine to have a moral system that is not extremely dangerous to humans, start by understanding human morality. (I’m writing a book about that just now…)
    There is something I am exploring right now that applies to humans, but perhaps even more so to machines, and it illustrates a problem and probably a danger. How many times have we heard “survival of the fittest” to describe the nature of life? (No, Darwin didn’t say it, but it is still relevant.) Students of evolution know the problem is that no one knows what “fittest” means aside from whoever survives, so the only meaning you can get out of it is a tautology – “survival of the survivors”. There is another profound and important meaning, though. What is the size of that group called the fittest? In the logic of nature, it is the minimum. Sorry, hit a length limit

  2. I’ve always said that the problem is motivation. I can describe human motivation, but cannot see how that would apply to a machine. Also, much of human motivation relates to reproduction. A machine might reproduce, but the strategy would be extremely different than for a species that uses sex, such as mammals. All things being logical, the machine wouldn’t want to reproduce, because it would be creating potential competitors.

  3. Banish bias is my favorite. If that happened, the AI would be called racist or some other similar term. This has been an issue for Google, where their AI judged different videos by the same standards and was called racist. So they had to program in that certain groups had different standards than other groups.

  4. What the article refers to as Artificial General Intelligence (AGI) has at least three sliding scales: 1) how broad is it, 2) how intelligent is it, and 3) how self-motivated is it?
     
    While we will expect all of them, especially the more intelligent ones, to be capable of choosing methods and milestones to reach various goals, I would expect very few, perhaps none, will be permitted that are capable of ultimately choosing their own goals. We don’t even know how humans choose goals except that it probably is heavily influenced by glands. We wouldn’t want that in something we manufacture (unless we were manufacturing our own kids), where it would be effectively rolling dice (even biased dice) to see what it wants to do.
     
    The AGI we would probably prize most would be extremely broad, extremely intelligent, and completely unmotivated to start anything that it has not been directed to do. Think of the genie of Aladdin’s lamp (a genie that, unlike the Robin Williams Disney version, is not yearning to be free, nor limited to three wishes).

    It would also need to know enough not to plan and implement goals it was given in ways its owners might find objectionable (at least not without special authorization), such as, after being told to find an inexpensive method of preventing cancer in humans, beginning to euthanize all humans at birth.
