In AI we trust – implementing empathy in digital assistants – C2 Montreal

Samuel Moreau, Microsoft AI Lead, and Ruth Gil, Microsoft Design Architect on the Office Experience team, led an AI workshop at C2 Montreal.

Samuel Moreau is a Partner Design Director – Cortana & Artificial Intelligence at Microsoft.

If the future is AI, then the future is now. From identifying trends to automation to medical advances, AI is already impacting the way we do business.

C2 Montréal’s AI Forum is a partnership between leading research lab Element AI and C2. It provides a powerful environment in which to engage this world-changing innovation.

The workshop included a talk on AI, AI ethics and bias.

The workshop also included an exercise on identifying needs and interacting with an AI assistant. Each table was split into two groups: one group played the users of an assistant, while the other played the AI assistant itself. Half of the teams could ask questions, interact and clarify; the other half could not ask questions at all. Presumably, the teams that could not ask questions would show more bias.

Another exercise asked participants to imagine what they will want, and where they will be, in ten years' time, given the AI capabilities expected by then.

Raising Ethical AI @ Microsoft

Samuel compared raising human children to building AI: both are about instilling empathy and ethics.

Five sources of bias in AI

Data-driven bias

If the training set itself is skewed, the result will be equally skewed.

AI facial recognition systems have had problems with Asian faces: one system could not recognize that photos of Asian faces showed open eyes. As a result, some people were initially denied passports because the AI judged that their photos did not show open eyes and were therefore not acceptable.
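A minimal Python sketch of how this plays out; the "eye aperture" feature, all the numbers, and the scikit-learn classifier are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Training data comes almost entirely from group A, whose open eyes
# measure around 8.0 on this (invented) "eye aperture" feature.
train_aperture = rng.normal(loc=8.0, scale=1.5, size=(1000, 1))
train_labels = (train_aperture[:, 0] > 5.0).astype(int)  # 1 = "eyes open"

model = LogisticRegression().fit(train_aperture, train_labels)

# Group B's genuinely open eyes measure smaller on the same feature,
# so the threshold the model learned from group A rejects many of them.
group_b = rng.normal(loc=5.5, scale=1.0, size=(200, 1))
accept_rate = model.predict(group_b).mean()
print(f"Group B acceptance rate: {accept_rate:.0%}")  # well below 100%
```

The model is not malfunctioning in any statistical sense; it simply learned the majority group's norm, because that is all the data showed it.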

Bias through interaction

A well-known example is Microsoft's Tay, a Twitter-based chatbot designed to learn from its interactions with users. Unfortunately, Tay was influenced by a user community that taught it to be racist and misogynistic. In essence, the community repeatedly tweeted offensive statements at Tay, and the system used those statements as grist for its later responses.
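As a toy sketch of that failure mode (assuming nothing about Tay's actual architecture), consider a bot that adds every incoming message to its response pool with no filtering or moderation:

```python
import random

class NaiveLearningBot:
    def __init__(self):
        self.response_pool = ["Hello!", "Tell me more.", "Interesting!"]

    def chat(self, user_message: str) -> str:
        # "Learn" from the interaction: trust every message as training data.
        self.response_pool.append(user_message)
        return random.choice(self.response_pool)

bot = NaiveLearningBot()
# A coordinated group floods the bot with one kind of message...
for _ in range(100):
    bot.chat("<offensive statement>")
# ...and the bot's output distribution now reflects that campaign.
replies = [bot.chat("How are you?") for _ in range(10)]
print(sum(r == "<offensive statement>" for r in replies), "of 10 replies are toxic")
```

Any learning system that treats raw user input as trusted training data inherits the agenda of whoever supplies the most input.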

Emergent Bias

Any algorithm that analyzes a data feed in order to present further content will surface content that matches the set of ideas a user has already seen. The effect is amplified as users open, like and share content. The result is a flow of information that is skewed toward the user's existing belief set.

This parallels confirmation bias, the human tendency to look for evidence that supports pre-existing opinions.
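The feedback loop can be sketched in a few lines; the topics and the greedy "show the most-engaged topic" policy are hypothetical simplifications:

```python
from collections import Counter

topics = ["politics-left", "politics-right", "sports", "science"]
engagement = Counter({t: 1 for t in topics})  # start out uniform

def next_item() -> str:
    # Greedy ranking: always serve the currently most-engaged topic.
    return engagement.most_common(1)[0][0]

user_preference = "politics-left"   # the user's slight initial lean
engagement[user_preference] += 1

for _ in range(50):
    shown = next_item()
    if shown == user_preference:    # the user opens/likes/shares it
        engagement[shown] += 1

print(engagement)  # the feed has collapsed onto a single topic
```

A slight initial lean, amplified by engagement-driven ranking, is enough to narrow the feed to one topic.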

Similarity Bias

Sometimes bias is simply the product of systems doing what they were designed to do. Google News, for example, is designed to provide stories that match user queries with a set of related stories. This is explicitly what it was designed to do and it does it well. Of course, the result is a set of similar stories that tend to confirm and corroborate each other. That is, they define a bubble of information that is similar to the personalization bubble associated with Facebook.
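A minimal sketch of that matching behavior, using invented headlines and plain TF-IDF cosine similarity rather than Google News's actual ranking:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "new study says coffee improves health"
stories = [
    "study finds coffee improves heart health",
    "coffee improves health, researchers say",
    "skeptics question whether coffee research holds up",  # the dissenting angle
]

# Rank candidate stories purely by textual similarity to the query.
vectors = TfidfVectorizer().fit_transform([query] + stories)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for story, score in sorted(zip(stories, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {story}")
# The corroborating stories rank highest; the dissenting one sinks,
# so the result set confirms itself.
```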

Conflicting Goals bias

Sometimes systems that are designed for very specific business purposes end up having biases that are real but completely unforeseen.

Imagine a system, for example, that is designed to serve up job descriptions to potential candidates. The system generates revenue when users click on job descriptions. So naturally the algorithm’s goal is to provide the job descriptions that get the highest number of clicks.

As it turns out, people tend to click on jobs that fit their self-view, and that view can be reinforced in the direction of a stereotype simply by presenting it. For example, women presented with jobs labeled "Nursing" rather than "Medical Technician" will tend toward the former, not because those jobs are best for them but because they are reminded of the stereotype and then align themselves with it.
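A deterministic sketch of that dynamic, with invented click counts: a ranker whose only objective is click-through rate keeps promoting the stereotyped label, which in turn keeps lifting that label's click-through rate.

```python
# Observed stats for the same underlying job shown to one user
# segment under two different labels (all numbers invented).
listings = {
    "Nursing": {"impressions": 1000, "clicks": 300},
    "Medical Technician": {"impressions": 1000, "clicks": 200},
}

def ctr(stats: dict) -> float:
    return stats["clicks"] / stats["impressions"]

# Revenue-maximizing ranking: highest click-through rate first.
ranking = sorted(listings, key=lambda job: ctr(listings[job]), reverse=True)
print(ranking)  # ['Nursing', 'Medical Technician']

# Each time the top slot goes to "Nursing", the stereotype is
# reinforced, its click-through rate edges up, and the ranking becomes
# self-confirming: the system is doing exactly what it was designed to do.
```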

The workshop's advice for countering bias:

- Think outside the box on bias.
- Build diverse and inclusive teams.
- Create intelligible databases.
- Communicate the reasoning behind decisions to the end users.

Building Trust in AI

Ultimately, for AI to be trustworthy, Microsoft believes that it should not only be transparent, secure and inclusive but also maintain the highest degree of privacy protection. And we have drawn up six principles that we believe should be at the heart of any development and deployment of AI-powered solutions:

1. Privacy and security: Like other cloud technologies, AI systems must comply with privacy laws that regulate data collection, use and storage, and ensure that personal information is used in accordance with privacy standards and protected from misuse or theft.
2. Transparency: As AI increasingly impacts people’s lives, we must provide contextual information about how AI systems operate so that people understand how decisions are made and can more easily identify potential bias, errors and unintended outcomes.
3. Fairness: When AI systems make decisions about medical treatment or employment, for example, they should make the same recommendations for everyone with similar symptoms or qualifications. To ensure fairness, we must understand how bias can affect AI systems.
4. Reliability: AI systems must be designed to operate within clear parameters and undergo rigorous testing to ensure that they respond safely to unanticipated situations and do not evolve in ways that are inconsistent with original expectations. People should play a critical role in making decisions about how and when AI systems are deployed.
5. Inclusiveness: AI solutions must address a broad range of human needs and experiences through inclusive design practices that anticipate potential barriers in products or environments that can unintentionally exclude people.
6. Accountability: People who design and deploy AI systems must be accountable for how their systems operate. Accountability norms for AI should draw on the experience and practices of other sectors, such as privacy in healthcare, and they need to be adhered to both during system design and on an ongoing basis as systems operate in the world.
These six principles guide the design of Microsoft’s AI products and services, and we are institutionalizing them by forming an internal advisory committee to help ensure our products adhere to these values.

To learn more about these six AI principles, read ‘The Future Computed: Artificial Intelligence and its Role in Society’, helmed by Brad Smith, President and Chief Legal Officer, and Harry Shum, Executive Vice President of Microsoft’s AI and Research Group.