Ethics in AI – Google Announces External AI Advisory Council #emTechDigital

Google’s VP Kent Walker announced that Google will form an external AI advisory council. Its members come from multiple continents and many disciplines. Google will publish a release with the names and details of the council members.

Gideon Lichfield of MIT Technology Review asked who determines what is appropriate for Google’s AI projects. The answer is that Google still decides internally what is appropriate.

Google has processes for consultation, but currently there is no formal business process by which Google cedes final decisions to an external party. Today, Google cedes final decisions only to the laws and governments of the jurisdictions in which it operates.

On the issue of ethics in AI, Google says they will get there when they get there.

Google has 300 people working on fairness in its AI systems.

Google is tracking metrics on these issues, and there has been some modest progress.
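
As a rough illustration of the kind of fairness metric such a team might track (this is a generic textbook measure, not anything Google has confirmed using), demographic parity difference compares a model’s positive-prediction rates across groups:

from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    # Largest gap in positive-prediction rate between any two groups.
    # predictions: iterable of 0/1 model outputs
    # groups: iterable of group labels, aligned with predictions
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: group "a" is approved 75% of the time, group "b" only 25%.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, grps))  # 0.5

A value near 0 means the model grants positive outcomes at similar rates across groups; tracking numbers like this over time is one way progress on fairness can be measured.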

There are three talks on Ethics in AI at EmTech Digital 2019 and a panel discussion.

The field of AI is at a critical juncture. How can technologists, business leaders, educators, and regulators act now to promote responsible use of intelligent systems?

Kent Walker is Senior Vice President of Global Affairs at Google. His talk is entitled:
“From Policy to Implementation: Establishing Principles of Responsible AI”

Kent discusses responsible innovation. Google is one of the first companies to have worked on AI in a major commercial way; Google Search, Google Translate, and many other products use AI.

AI is being used to diagnose eye disease and heart disease.

Google developed Duplex to synthesize natural-sounding speech. This will help people with disabilities, among others. Duplex will identify itself as a digital assistant.

There needs to be meaningful public conversations about AI.

Google has a five-point plan:

  • They have published Google’s AI principles.
  • Google applies those principles internally.
  • They have best practices for researchers.
  • They will build an external AI advisory council whose members come from multiple continents and many disciplines.

Google has 7 AI principles, and there are 4 areas of AI in which Google will not work.

Below is the specific text of Google’s AI principles.

1. Be socially beneficial. 

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.  

AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases.  We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.  We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.

5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
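
One well-known example of an architecture-level privacy safeguard (the principle text does not name a specific technique; differential privacy is offered here only as an illustration) is adding calibrated noise to aggregate statistics so that no individual record can be inferred. A minimal sketch:

import random

def dp_count(true_count, epsilon=1.0):
    # Differentially private count: add Laplace(1/epsilon) noise.
    # A counting query has sensitivity 1 (one person's data changes it
    # by at most 1), so Laplace noise with scale 1/epsilon yields
    # epsilon-differential privacy for that single query.
    # The difference of two iid exponentials with rate epsilon is
    # Laplace-distributed with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Example: release a noisy aggregate instead of the exact user count.
print(dp_count(1000, epsilon=0.5))

Smaller epsilon means more noise and stronger privacy; the exact count is never released directly.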

6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.

We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.  

7. Be made available for uses that accord with these principles.  

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

  • Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
  • Nature and uniqueness: whether we are making available technology that is unique or more generally available
  • Scale: whether the use of this technology will have significant impact
  • Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions

AI applications we will not pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm.  Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google moved forward on AI lip reading. They felt it was good for person-to-person communication and would help the hearing impaired, and that it did not do much to help authoritarian governments invade privacy.

Brendan McCord is the Senior Advisor for AI, US Department of Defense; EVP, Tulco LLC. Brendan is talking about “Purpose-Driven AI: Responsibly Addressing Our Big Problems.”

We are in the middle of an AI revolution. We can now build societal-scale AI systems that affect all of society.

Brendan noted the geopolitical problem that would arise if all US tech companies abstained from AI work for military applications.

Brendan gave a talk a couple of years ago on AI and security.

Rashida Richardson is the Director of Policy Research, AI Now Institute. She is talking about “Emerging Policy Approaches to AI Adoption.”

Rashida cited many problems with racial bias and language in AI systems. There are problems with the data used to train AI. We need to confront racial discrimination in society now so that less of it is carried forward into AI.

Rashida was particularly concerned about the use of facial recognition systems by the police.

SOURCE – Live Reporting by Brian Wang, Nextbigfuture at EmTech Digital 2019, Google published AI principles
