IBM’s mission is to help our clients change the way the world works. There’s no better example of that than IBM Research’s annual “5 in 5” technology predictions. Each year, we showcase some of the biggest breakthroughs coming out of IBM Research’s global labs – five technologies that we believe will fundamentally reshape business and society in the next five years. These predictions are informed by research taking place in IBM’s labs, leading-edge work with our clients, and trends we see in the technology and business landscape.
Later today, we’ll introduce the scientists behind this year’s 5 in 5 at a Science Slam held at the site of IBM’s biggest client event of the year: Think 2018 in Las Vegas. Watch it live or catch the replay here. Science Slams give our researchers the opportunity to convey the importance of their work to a general audience in a very short span of time — approximately five minutes. We have found this to be an extremely useful exercise: distilling our innovation down to its core essentials makes it far more accessible.
Here’s a summary of two of the five predictions IBM scientists will present this year.
Our oceans are dirty. AI-powered robot microscopes may save them. In five years, small, autonomous AI microscopes, networked in the cloud and deployed around the world, will continually monitor in real time the health of one of Earth’s most important and threatened resources: water. IBM scientists are working on an approach that uses plankton, which are natural, biological sensors of aquatic health. AI microscopes can be placed in bodies of water to track plankton movement in 3D, in their natural environment, and use this information to predict their behavior and health. This could help in situations like oil spills and runoff from land-based pollution sources, and help predict threats such as red tides.
AI bias will explode. But only the unbiased AI will survive. Within five years, we will have new solutions to counter a substantial increase in the number of biased AI systems and algorithms. As we work to develop AI systems we can trust, it’s critical to develop and train these systems with data that is fair, interpretable and free of racial, gender, or ideological biases. With this goal in mind, IBM researchers developed a method to reduce the bias that may be present in a training dataset, such that any AI algorithm that later learns from that dataset will perpetuate as little inequity as possible. IBM scientists also devised a way to test AI systems even when the training data is not available.
Our oceans are dirty. AI-powered robot microscopes may save them.
In five years, small autonomous AI microscopes, networked in the cloud and deployed around the world, will continually monitor the condition of the natural resource most critical to our survival: water.
By 2025, more than half of the world’s population will be living in water-stressed areas. But scientists struggle to collect and analyze even the most fundamental data about the real-time conditions of our oceans, lakes and rivers. Specialized sensors can be deployed to detect specific chemicals and conditions in water, but they miss unanticipated problems, like invasive species or new chemicals introduced by runoff. Plankton, however, are natural, biological sensors of aquatic health. Even slight changes in water quality affect their behavior. They also form the foundation of the oceanic food chain, which serves as the primary source of protein for more than a billion people. Yet very little is known about how plankton behave in their natural habitat, because studying them typically requires collecting samples and shipping them to a laboratory.
IBM researchers are building small, autonomous microscopes that can be placed in bodies of water to monitor plankton in situ, identifying different species and tracking their movement in three dimensions. The findings can be used to better understand their behavior, such as how they respond to changes in their environment caused by everything from temperature to oil spills to runoff. They could even be used to predict threats to our water supply, like red tides.
The microscope has no lens and relies on an imager chip, like the one in any cell phone, to capture the shadow of the plankton as it swims over the chip, generating a digital sample of its health without the need for focusing. In the future, the microscope could be outfitted with high-performance, low-power AI technology to analyze and interpret the data locally, reporting any abnormalities in real time so they can be acted upon immediately.
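In miniature, the shadow-imaging idea can be sketched as simple image arithmetic: treat the imager chip as a raw intensity grid and flag pixels noticeably darker than the background as the organism’s shadow. The `detect_shadow` function and the synthetic frame below are purely illustrative assumptions, not IBM’s actual pipeline.

```python
import numpy as np

def detect_shadow(frame, background, threshold=0.2):
    """Locate an organism's shadow in a lensless imager frame.

    Pixels noticeably darker than the background are treated as shadow;
    the centroid and area of that region give a rough position and size
    estimate, with no lens or focusing involved.
    """
    diff = background - frame          # shadow pixels are darker than background
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return {"centroid": (ys.mean(), xs.mean()), "area": int(mask.sum())}

# Synthetic example: a uniform bright background with one dark blob
# standing in for a plankter drifting over the chip.
bg = np.ones((64, 64))
frame = bg.copy()
frame[20:28, 30:40] -= 0.5             # simulated 8x10-pixel shadow

result = detect_shadow(frame, bg)
print(result)  # centroid near (23.5, 34.5), area 80
```

Running the same detection frame-by-frame and linking centroids over time would give the kind of movement track the article describes, with depth recoverable in a real device from how the shadow spreads.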
Because what’s good for plankton is good for all of us.
Acknowledgment: This material is based upon work supported by the National Science Foundation under Grant No. DBI-1548297. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
AI bias will explode. But only the unbiased AI will survive.
Within five years, the number of biased AI systems and algorithms will increase, much like the increase of computer viruses in the early aughts. But we will deal with them accordingly, coming up with new solutions to control bias in AI and to champion AI systems free of it.
AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. But IBM believes that bias can be tamed, and that the AI systems that tackle it will be the most successful.
As humans and AI increasingly work together to make decisions, researchers are looking at ways to ensure that human bias does not affect the data or algorithms used to inform those decisions.
The MIT-IBM Watson AI Lab’s efforts on shared prosperity are drawing on recent advances in AI and computational cognitive modeling, such as contractual approaches to ethics, to describe principles that people use in decision-making and determine how human minds apply them. The goal is to build machines that apply certain human values and principles in decision-making.
A crucial principle, for both humans and machines, is to avoid bias and therefore prevent discrimination. Bias in an AI system mainly occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it’s critical to develop and train these systems with data that is unbiased and to develop algorithms that can be easily explained. To this aim, IBM researchers developed a methodology to reduce the bias that may be present in a training dataset, such that any AI algorithm that later learns from that dataset will perpetuate as little inequity as possible.
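The article doesn’t spell out the mechanics of IBM’s dataset de-biasing method, but one standard technique of this kind is reweighing (Kamiran and Calders), which assigns each training instance a weight so that the protected attribute becomes statistically independent of the label before any model is trained. A minimal sketch, with toy data:

```python
from collections import Counter

def reweigh(groups, labels):
    """Compute instance weights that decouple the protected group
    from the label (the reweighing de-biasing technique).

    weight(g, y) = P(group=g) * P(label=y) / P(group=g, label=y),
    so under the weighted distribution group and label are independent.
    """
    n = len(labels)
    p_g = Counter(groups)                  # marginal counts per group
    p_y = Counter(labels)                  # marginal counts per label
    p_gy = Counter(zip(groups, labels))    # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" receives the favourable label (1) twice as
# often as group "b", a simple biased training set.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
print(weights)  # under-represented (group, label) pairs get weight > 1
```

After reweighing, the weighted favourable-outcome rate is 0.5 for both groups, so a learner trained on the weighted data no longer sees group membership as predictive of the label.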
IBM scientists also devised a methodology to test AI systems even when the training data is not available. This research proposes that an independent bias rating system can determine the fairness of an AI system. For example, the AI service could be unbiased and able to compensate for data bias (the ideal scenario), or it could be just following the bias properties of its training (which could be solved by data de-biasing techniques), or it could even introduce bias whether the data is fair or not (the worst scenario). The AI end-user will be able to determine the trustworthiness of each system, based on its level of bias.
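One way such a rating could work, sketched here as an assumption rather than IBM’s published method, is counterfactual probing: query the black-box system with input pairs that differ only in the protected attribute and score how often the output flips. The `rate_bias` helper and both toy models below are hypothetical.

```python
def rate_bias(model, probes, flip):
    """Probe a black-box model with counterfactual input pairs.

    For each probe, the protected attribute is flipped while every
    other feature stays fixed; the fraction of probes whose prediction
    changes is a rough bias score (0.0 = no measured disparity).
    No access to the model's training data is needed.
    """
    changed = sum(model(p) != model(flip(p)) for p in probes)
    return changed / len(probes)

# Hypothetical scoring models: one unfairly consults the group field,
# one looks only at income.
biased_model = lambda x: 1 if x["income"] > 50 and x["group"] == "a" else 0
fair_model = lambda x: 1 if x["income"] > 50 else 0
flip = lambda x: {**x, "group": "b" if x["group"] == "a" else "a"}

probes = [{"income": i, "group": "a"} for i in (30, 40, 60, 80)]
print(rate_bias(biased_model, probes, flip))  # → 0.5, flags the disparity
print(rate_bias(fair_model, probes, flip))    # → 0.0
```

A rating built on probes like these could place a system into the article’s three scenarios: compensating for data bias, merely mirroring it, or introducing bias of its own.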
Identifying and mitigating bias in AI systems is essential to building trust between humans and machines that learn. As AI systems find, understand, and point out human inconsistencies in decision making, they could also reveal ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views.
In the process of recognizing our bias and teaching machines about our common values, we may improve more than AI. We might just improve ourselves.