From the course: Ethics and Law in Data Analytics
Bias and legal challenges
- The Executive Office of President Barack Obama issued a report in 2016 that identifies the opportunities and challenges of big data, the laws that protect civil rights and guard against discrimination, and some best practices for organizations in the data industry. This report really highlights two of the core themes we identified in module one: the tension between law and technology, and the tension between individual rights and organizational rights. In this video, I will share some challenges and highlight some principles of law that are relevant. In later videos, we will look at how these challenges and laws show up in four distinct areas: consumer rights, employee rights, student rights, and criminal justice. And finally, in a separate video, we will review the best practices suggested by the U.S. government to alleviate and eliminate bias.

The White House Big Data Report identified two challenges to promoting fairness and overcoming bias: first, challenges related to the data inputs used in algorithms, and second, challenges related to the design of the algorithmic systems themselves.

For the first challenge, the concern relates to our earlier discussion of inclusion and exclusion. The decision to use certain inputs and not others in an algorithm can, in fact, result in discrimination. For example, in an algorithm designed to determine the fastest route between points A and B, the architect of the system might include information about roads but not bike routes or public transportation. This negatively impacts those who do not own a vehicle. Similarly, where the data inputs do not accurately reflect a population, the system can reach conclusions that favor certain groups over others. For example, in the fastest-route problem, if speed data is collected only from those who own smartphones, then the system's results may be more accurate for wealthier areas, where smartphone concentrations are higher, and less accurate for poorer areas, where smartphone concentrations are lower. This otherwise neutral collection of data can have a disparate, or disproportionate, impact on socially and economically disadvantaged people. This is discrimination.

We're going to hear from Nathan about the second challenge, the design of the system itself, but basically, the issue here is that the technical processes involved in describing, diagnosing, and predicting human preferences and behaviors are completely hidden from view. There's no transparency. And because we can't see the process, we can't determine whether there's been bias or discrimination. The owner of the algorithmic system has a confidential trade secret right in that system. It is not required to disclose it, so we can never really tell what's happening. The owner, whether an organization or a business, has what we call intellectual property rights, which are here in tension with the individual's right to be free from discrimination.
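To make the smartphone example concrete, here is a minimal sketch in Python. Every number in it (the true speed, the measurement noise, the penetration rates, the neighborhood names) is a hypothetical assumption for illustration, not a figure from the report. It shows how a neutral-seeming collection rule, gathering speed reports only from smartphone owners, yields noisier estimates for areas where smartphone ownership is low.

```python
# Sketch: identical collection rule, unequal result quality.
# Speed reports come only from smartphone owners, so neighborhoods
# with low smartphone penetration contribute fewer, noisier samples.
# All constants below are hypothetical, for illustration only.

import random
import statistics

random.seed(42)

TRUE_SPEED_MPH = 30     # assumed true average speed, same everywhere
REPORT_NOISE_SD = 8     # assumed per-report measurement noise
POPULATION = 200        # assumed drivers per neighborhood

# Hypothetical smartphone penetration rates by neighborhood.
penetration = {"wealthy_area": 0.90, "low_income_area": 0.30}

for area, rate in penetration.items():
    # Only smartphone owners contribute speed reports.
    n_reports = int(POPULATION * rate)
    reports = [random.gauss(TRUE_SPEED_MPH, REPORT_NOISE_SD)
               for _ in range(n_reports)]
    estimate = statistics.mean(reports)
    stderr = statistics.stdev(reports) / (n_reports ** 0.5)
    print(f"{area}: n={n_reports}, "
          f"estimate={estimate:.1f} mph (+/- {stderr:.1f})")
```

Running the sketch, the low-income area's estimate rests on a third as many reports, so its standard error is noticeably larger. The collection rule is identical everywhere, yet the quality of the output is not, which is the disparate impact the report describes.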