Responsible AI 2020: Expectations For The Year Ahead
Microsoft, for example, has publicized its approach to responsible AI with six
ethical principles to guide the development and use of AI with human beings
taking center stage — fairness, reliability & safety, privacy & security,
inclusiveness, transparency, and accountability. The company also has
developed guidelines for responsible bots, principles for building
conversational bots that create confidence in a company’s products and
services. Microsoft’s Office of Responsible AI is tasked with putting Microsoft’s
principles into practice.
In addition, Google’s public statement on its responsible AI practices indicates
the company is addressing new questions about how best to develop and deploy AI
responsibly.
Elon Musk, for instance, is calling for regulation of organizations developing
advanced AI, including his own companies. The Tesla and SpaceX
head tweeted on Feb. 17, 2020, “All orgs developing advanced AI should be
regulated, including Tesla.”
Limiting AI Applications
There has been much public debate centered on the use of facial recognition
software, which is powered by deep learning (specifically, convolutional neural
networks). In 2020, we’re seeing various levels of government, law
enforcement agencies, and universities limit the use of facial recognition out
of concern that it could introduce economic, racial and gender bias.
For example, this concern has prompted new federal legislation such as
the Facial Recognition Technology Warrant Act of 2019 (S.2878). If it becomes
law, it would require federal officials to obtain a warrant before using
facial recognition technology to track a specific person’s public
movements for more than 72 hours.
Responsible AI Tools