Responsible AI 2020: Expectations For The Year Ahead




WHAT?

 Responsible AI is a framework for bringing many critical practices together. It
focuses on ensuring the ethical, transparent and accountable use of AI
technologies in a manner consistent with user expectations, organizational
values, and societal laws and norms.

 The development of AI is creating new opportunities to improve the lives of
people around the world, from business to healthcare to education. It is also
raising new questions about the best way to build fairness, interpretability,
privacy, and security into these systems.
CHALLENGES

 In 2020, enabling the responsible application of AI technologies is one of the
field’s foremost challenges as AI transitions from research to practice.
Increasingly, researchers and practitioners from disparate disciplines are
highlighting the ethical and legal challenges posed by the use of AI in many
current and future real-world applications.
 Additionally, academic, government, and industry leaders are calling for
technology creators to ensure that AI is used only in ways that benefit
humankind and to build responsibility into the foundations of the technology.
Overcoming these challenges

 In 2020, best practices and open-source tools are emerging to support the
responsible development and deployment of AI-driven systems.

 To address these challenges effectively, organizations should understand the
risks that come with AI and take them fully into account in system design and
deployment.
Common Factors for Establishing
Responsible AI
 Governance — the underpinnings for responsible AI point to the need for end-to-
end enterprise governance. At a high level, this means identifying accountability;
determining how AI aligns with business strategy; finding which business
processes could be modified to improve results; putting controls in place to track
performance and locate problems; and deciding whether the results are consistent
and reproducible.
 Ethics and regulation — the primary goal is to help organizations develop AI that
is ethical and compliant with relevant regulations.
 Explainability — provide a vehicle for AI-driven decisions to be interpretable and
easily explainable to those who are affected by them.
 Security — help organizations develop AI systems that are safe to use.
 Bias — address issues of bias and fairness so that organizations can develop AI
systems that mitigate unwanted bias and reach decisions whose fairness is
clearly communicated.
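The bias factor above can be made concrete with a simple measurement. As an illustrative sketch only (the source names no specific metric or tool, so the function and the choice of "demographic parity difference" are assumptions), the gap in positive-outcome rates between groups is one common way to quantify the kind of unwanted bias an organization would want to track:

```python
# Illustrative sketch, not from the source: "demographic parity difference"
# is one simple fairness metric — the gap in positive-outcome rates
# between demographic groups.

def demographic_parity_difference(outcomes, groups):
    """Return the absolute gap in positive-outcome rates between groups.

    outcomes: list of 0/1 model decisions
    groups:   list of group labels (e.g. "A"/"B"), same length as outcomes
    """
    rates = {}
    for g in set(groups):
        member_outcomes = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(member_outcomes) / len(member_outcomes)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical data: group A is approved 3/4 of the time, group B 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A value near 0 means the two groups receive positive decisions at similar rates; reporting such a number alongside decisions is one way to make fairness "well-communicated," as the list above calls for.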
FACTS

 Microsoft, for example, has publicized its approach to responsible AI with six
ethical principles to guide the development and use of AI with human beings
taking center stage — fairness, reliability & safety, privacy & security,
inclusiveness, transparency, and accountability. The company also has
developed guidelines for responsible bots, principles for building
conversational bots that create confidence in a company’s products and
services. Microsoft’s Office of Responsible AI is tasked with putting Microsoft’s
principles into practice.
 In addition, Google’s public statement on its responsible AI practices indicates
the company is addressing new questions about the best way to build fairness,
interpretability, privacy, and security into its systems.
 Elon Musk, for instance, is calling for regulation of organizations developing
advanced AI, including his own companies — the Tesla and SpaceX
head tweeted on Feb. 17, 2020: “All orgs developing advanced AI should be
regulated, including Tesla.”
Limiting AI Applications

 There has been much public debate centered on the use of facial recognition
software which is powered by deep learning (specifically, convolutional neural
networks). In 2020, we’re seeing various levels of government, law
enforcement agencies, and universities limit the use of facial recognition out
of concern that it could introduce economic, racial and gender bias.
 For example, this concern has prompted new federal policies such as
the Facial Recognition Technology Warrant Act of 2019 (S.2878). If it becomes
law, it would require federal officials to get a warrant if they’re going to use
facial recognition technology to attempt to track a specific person’s public
movements for more than 72 hours.
Responsible AI Tools

 AI Global offers the Responsible AI Portal, an authoritative repository
combining reports, standards, models, government policies, open data sets,
and open-source software to help navigate the AI landscape.
 Element AI produces a timely podcast series “The AI Element” that focuses on
exploring the biggest issues and toughest questions around trust and adoption
of AI.
 In addition, PwC’s Responsible AI Toolkit is a suite of customizable
frameworks, tools, and processes designed to help harness the power of AI in
an ethical and responsible manner — from strategy through execution.
