Human Rights in the Age of Platforms
Ebook · 603 pages · 7 hours

About this ebook

Scholars from across law and internet and media studies examine the human rights implications of today's platform society.

Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today's platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies' human rights responsibilities and content regulation.

Contributors
Anja Bechmann, Fernando Bermejo, Agnès Callamard, Mikkel Flyverbom, Rikke Frank Jørgensen, Molly K. Land, Tarlach McGonagle, Jens-Erik Mai, Joris van Hoboken, Glen Whelan, Jillian C. York, Shoshana Zuboff, Ethan Zuckerman

Open access edition published with generous support from Knowledge Unlatched and the Danish Council for Independent Research.

Language: English
Publisher: The MIT Press
Release date: Nov 19, 2019
ISBN: 9780262353953
Author

David Kaye

David Kaye is clinical professor of law and director of the International Justice Clinic at the University of California, Irvine. He is a member of the Council on Foreign Relations and served as the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression from 2014 to 2020. His articles have appeared in publications such as The Washington Post, the Los Angeles Times, The New York Times, Slate, and Foreign Affairs. He lives in Los Angeles, CA.



    Human Rights in the Age of Platforms

    Information Policy Series

    Edited by Sandra Braman

    A complete list of the books in the Information Policy series appears at the back of this book.

    Human Rights in the Age of Platforms

    Edited by Rikke Frank Jørgensen

    Foreword by David Kaye

    The MIT Press

    Cambridge, Massachusetts

    London, England

    © 2019 Massachusetts Institute of Technology

    All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

    This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 (CC-BY-NC 4.0) International License.

    The Open Access edition of this book was published with generous support from Knowledge Unlatched and the Danish Council for Independent Research.

    Library of Congress Cataloging-in-Publication Data

    Names: Jørgensen, Rikke Frank, editor.

    Title: Human rights in the age of platforms / edited by Rikke Frank Jørgensen.

    Description: Cambridge, MA : The MIT Press, [2019] | Series: Information policy | Includes bibliographical references and index.

    Identifiers: LCCN 2018049349 | ISBN 9780262039055 (hardcover : alk. paper)

    Subjects: LCSH: Human rights. | Information society. | Information technology--Moral and ethical aspects.

    Classification: LCC JC571 .H7695266 2019 | DDC 323--dc23 LC record available at https://lccn.loc.gov/2018049349

    10 9 8 7 6 5 4 3 2 1

    ISBN: 978-0-262-03905-5

    Retail e-ISBN: 978-0-262-35395-3

    Library e-ISBN: 978-0-262-35394-6

    MITP e-ISBN: 978-0-262-35393-9


    Contents

    Series Editor’s Introduction

    Foreword by David Kaye

    Acknowledgments

    Introduction

    I. Datafication

    1. We Make Them Dance: Surveillance Capitalism, the Rise of Instrumentarian Power, and the Threat to Human Rights

    Shoshana Zuboff

    2. Digital Transformations, Informed Realities, and Human Conduct

    Mikkel Flyverbom and Glen Whelan

    3. Data as Humans: Representation, Accountability, and Equality in Big Data

    Anja Bechmann

    4. Situating Personal Information: Privacy in the Algorithmic Age

    Jens-Erik Mai

    II. Platforms

    5. Online Advertising as a Shaper of Public Communication

    Fernando Bermejo

    6. Moderating the Public Sphere

    Jillian C. York and Ethan Zuckerman

    7. Rights Talk: In the Kingdom of Online Giants

    Rikke Frank Jørgensen

    III. Regulation

    8. The Human Rights Obligations of Non-State Actors

    Agnès Callamard

    9. The Council of Europe and Internet Intermediaries: A Case Study of Tentative Posturing

    Tarlach McGonagle

    10. The Privacy Disconnect

    Joris van Hoboken

    11. Regulating Private Harms Online: Content Regulation under Human Rights Law

    Molly K. Land

    Contributors

    Index

    Series Editor’s Introduction

    Sandra Braman

    One can sign away one’s constitutional rights by contract, though historically that has been allowed only when there were plenty of other options. One could choose, for example, to sign a contract forbidding engaging in public political speech, even on one’s personal time, in order to work for a telephone company concerned about being welcome in every home in the community—in the past, though, under conditions in which if that were not acceptable there were plenty of other jobs available on a par in terms of skills required, pay, a visible career path, and so on. You had a choice.

    In contrast, already by the turn of the century, comparative analysis of the terms of service and acceptable use agreements, the contracts we sign with Internet service providers (ISPs) and platforms by clicking through on them, found the terms of these contracts across providers were converging. And they were doing so in ways destructive of the human rights that are core to most constitutions and constitution-like foundations of national law in protections for civil liberties (see Advantage ISP). US constitutional law, for example, forbids the use of language in laws or regulations that is vague (reasonable adults may not agree on its meaning) or overbroad (covering far more activity and types of communication than is the intended target of a particular law or regulation). Both types of language are not only rife in, but characteristic of, terms of service agreements. This convergence of the provisions of terms of service means that, on the Internet, there has been nowhere else, effectively, to go, if offered a contract you considered abusive of human rights. The subject addressed by this book, on threats to human rights from private sector entities in the online environment, could not be more important.

    Theories of free speech typically focus on one problem: how to maximize the possibilities for rich and diverse public discourse about shared matters of public concern under conditions in which there may be threats to those rights from governments. As Edwin C. Baker and others have pointed out, though, with the commercial broadcasting that has dominated the globe ever since the liberalization waves of the late twentieth century, a second problem has to be solved at the same time: in economists’ terms, a second market had to be served—advertisers. Thinking about free speech in a two-sided market rather than an environment conceived to serve only one market, comprised of the needs of citizens and citizenship, makes analyses more complex. And, importantly, it inverts the relationship of the problem to policy making. Historically, thinking and practice with respect to protecting free speech have been focused on preventing the government from inappropriately affecting the speech environment in what we might think of as a single market problem. When the problem involves a two-sided market, though, the question becomes how the government can best intervene, using laws and policy, to support the public speech environment and help it thrive.

    What we have now, as is pointed out in Human Rights in the Age of Platforms, is a third class of problem—those created by multisided markets. With this, the challenge for policy makers of all types (whether public sector or private, organizational or individual) is that the problem becomes more complex by yet another order of magnitude. There is a second challenge to human rights in cyberspace when framed in economic terms, as well. The information economy in which we now live is, so to speak, an expanding universe. The economic domain is itself growing by commodifying types of information and informational interactions that had not previously been treated as something that can be bought and sold. This way of conceptualizing the information economy was introduced by political economists in the 1970s as the second of the four ways of conceptualizing the information economy that have appeared since the 1960s, all simultaneously in use today theoretically, rhetorically, and operationally. (The first to appear was an approach that understood the information economy as one in which everything operates as it always had, but industries in the information sector had become proportionately more important than those in other economic sectors. Later, approaches appeared that focused on transformations in the nature of economic processes themselves—emphasizing cooperation and coordination for long-term economic success in addition to competition—[often referred to as the network economy] and, in the twenty-first century, appreciation of the ways in which representation has replaced empirical data as the foundation of economic decision-making [an approach in which the information economy is called a representational economy].)

    By the 1990s, there were consulting firms and business schools with advice about just how to take advantage of informational opportunities to make profit from this expansion of the economic universe. The intellectual capital movement of that era developed alternative accounting schemes for these new forms of value, and the industrial classification codes so fundamental to the accounting systems of importance for regulation as well as financial purposes were revised in that era as well for the same reasons; in the US this meant replacing the Standard Industrial Classification (SIC) codes with the North American Industry Classification System (NAICS), while internationally these were transformations that took place within the International Standard Industrial Classification (ISIC) code system. This same insight into what makes the information economy different from the industrial economy was also a driving force behind the formation of the World Trade Organization (WTO) and the development of associated treaties, such as the General Agreement on Trade in Services (GATS), which for the first time incorporated trade in services into international trade agreements. (The prize for the best definition of services for this purpose still goes to The Economist, which defined it in 1984 as anything that can be bought and sold that cannot be dropped on your foot.)

    What all of this means for human rights is that the proportion of our lifeworlds, of what we all do on a daily basis with our friends, colleagues, neighbors, allies, and fellow citizens, for which human rights abuses present threats, is growing. The emphasis here is not on the egregious examples of extraordinary situations, but on the normal, whether that is the normal as we are coming to accept it or the normal as we would prefer it to be. We live, that is, in an expanding universe of possible human rights issues that might arise in association with our ordinary use of digital technologies or because these technologies are embedded in our habitual or expected contexts.

    Spending several days at a meeting of the Internet Engineering Task Force in November 2017 was humbling in this regard. A growing number of those involved in this group, which is responsible for the always ongoing effort of Internet design, are working on the problem of inserting explicit attention to human rights issues formally into the processes through which a proposed protocol for the Internet becomes the official protocol. Spending several days in sessions under the guidance of members of this group who were sophisticated both regarding the technologies involved and the processes of the organization made clear that the problem of privacy was a whack-a-mole problem, appearing in a high percentage of conversations, each devoted to a specific technical issue, each within its own working group and topical problem track. With every new technological development, new privacy problems appear. From the human rights side, the problem may be a lack of comprehension of the technical possibilities and constraints of the systems to which critiques and demands for protection are being addressed.

    Human Rights in the Age of Platforms can serve as a primer for all of us. In the gifted intellectual and editorial hands of Rikke Frank Jørgensen, these authors make visible the human rights problems specific to those environments controlled by the private sector (essentially all of them) rather than in the geopolitical and legal terms that have dominated the human rights discourses of the past. The book provides, in essence, an environmental approach in that the cases addressed range across the various facets of our lives. They bring to bear theories and insights from multiple disciplines and, for many, life experience working on human rights issues on the ground.

    It is not an encouraging time to be thinking about human rights, whether in the offline or online environment. But it is encouraging to have such thoughtful scholars, thinkers, and practitioners, as this foundational work offers, to help us understand the fundamental human rights issues of our era as we seek to develop the means to address them.

    Foreword

    David Kaye

    UN Special Rapporteur on Freedom of Expression

    On the shelf beside my desk rest a number of recent and already dog-eared books about the digital age: Consent of the Networked by Rebecca MacKinnon, The Attention Merchants by Tim Wu, China’s Contested Internet by Guobin Yang, Twitter and Tear Gas by Zeynep Tufekci, The Net Delusion by Evgeny Morozov, Weapons of Math Destruction by Cathy O’Neill, and Dragnet Nation by Julia Angwin. Stacked nearby are countless nongovernmental organization reports and academic studies about the ways in which the Internet is affecting the enjoyment of human rights, with titles like Tainted Leaks (Citizen Lab), Online and on All Fronts (Human Rights Watch), Let the Mob Do the Job (Association for Progressive Communications), Troops, Trolls and Troublemakers (Oxford Internet Institute), and ¿Quién defiende tus datos? (Derechos Digitales).

    What connects these disparate publications? Apart from all having a focus on the individual’s experience in the digital age, not a single one tells a hopeful story about personal autonomy, freedom of expression, security, or privacy online. Not one of these publications highlights the ways in which the Internet has opened broad avenues of communication among cultures, permitted the sharing of information and ideas across borders, and offered vast expanses of knowledge that can be traversed from link to link and thread to thread online. Some of them focus on the repression of governments that criminalize expression online or conduct surveillance of their citizens and others. Some drill down into the ways in which private companies govern quasi-public space, share information with governments seeking access to their networks, or simply give the false impression of privacy or security in the shadow of what Peter Swire has called The Golden Age of Surveillance.

    Are there stories about private actors expanding or simply protecting human rights? Of course, they do exist. Indeed, the story that dominated about twenty years of public discourse, from about 1990 to 2010, was the story of private innovation breaking through old barriers of distance to develop technologies that have created and then forever altered the information society. Those stories are still told, ones about atheists in religious societies using the Internet for connection, sexual minorities going online to gain knowledge about health and well-being, and critics and dissenters using the tools of social media to share information and organize for protest.

    The truth is, the books on my shelf and the publications in my in-box reflect changes in the way most stakeholders now think about the Internet. According to an increasingly dominant narrative, the Internet is a place of darkness, danger, propaganda, misogyny, harassment, and incitement, which private actors are doing little to ameliorate. (Where do you read these complaints? On the Internet!) Worldwide, people are worried, legislators are energized, and the gears of regulation have been engaged. An era of Internet laissez-faire is over or at least coming to a close. To be sure, repressive governments have been imposing costs on private actors in digital space for many years, especially those actors—such as telecommunications and Internet service providers—subject to licensing rules as a condition of participation in a local market. Many perform a kind of regulation by denying entry into markets; blocking, filtering, or throttling digital traffic; providing beneficial network access to friends and limiting that access to critics; and performing other tricks of the digital censor.

    But the regulatory buzz is not limited to the repressive. Some rules—such as those pertaining to intellectual property, like the Digital Millennium Copyright Act in the United States—have been in place for decades, giving some private actors the power to shape in often very problematic ways the nature of expression and creation online. Recent years have shown deepening interest in regulation, as governments are eager to gain some measure of control over Internet space in an era of digital distress. European institutions are in the lead, developing regulatory models that may be replicated worldwide. The European Court of Justice has taken on personal reputational control with the right to be forgotten (or the right to erasure), outsourcing its implementation to Google. The European Court of Human Rights has danced around the possibility of intermediary liability for third-party expression. The United States and Europe have been in deep negotiations over the future of privacy ever since the collapse of the Safe Harbour standards in the treatment of personal data of Europeans. The European Union has imposed a code of conduct for social media companies and search engines to follow in the context of extremist and terrorist content, and it seems poised at the time of this writing to enter into the fraught space of disinformation and propaganda, so-called fake or junk news.

    Amid the calls for regulation in democratic societies and the acts of government repression elsewhere, there is one undeniable fact about the digital age: at the center is the private company. Whether it’s the telco providing digital access, or the social media company providing space for conversation, or one of any number of other actors in sibling industries, private companies in the digital age exercise enormous control. They connect users and providers of information and ideas. They sell user data and user attention. They moderate (or regulate) user speech. They cooperate with or resist government demands. In short, they often are either the governors of space visited by billions or the mediators between the individual and the government. This is a massive role and, depending on how you see it, a vital responsibility. Just whose responsibility is subject to debate.

    This volume, a collection of studies by some of the leading thinkers at the nexus of private action and public regulation in the digital age, introduces the most difficult legal and policy questions of the digital age. It presents theoretical insights about the transformations brought about by private actors. It offers specific examples of private power that implicates the rights of individual users. It provides legal frameworks for all stakeholders to think through the problems of human rights protection in an environment so dominated by private companies. All of this the volume does without either the hysteria of the moment’s particular crises or, at the other end of the spectrum, a jargony disconnection from the experience of real human beings.

    The real challenge for the next generation of legislators and regulators, particularly those of good faith operating in democratic societies, is to shape new laws that meet two conditions: First, at a minimum, they must promote and protect everyone’s rights, such as the right to seek, receive, and impart information and ideas of all kinds, regardless of frontiers and through any media as provided by Article 19 of the International Covenant on Civil and Political Rights. They must be compliant with international human rights norms, protecting users who enjoy rights. Second, law must protect users—and society as a whole—from the harms caused by the special features of the digital age. That is easier said than done, perhaps, but the preservation of the original vision of the Internet should be at the top of all stakeholders’ agendas moving forward. This volume guides us toward that goal.

    Acknowledgments

    I would like to thank the people who made this book possible: first and foremost, the contributors to the book, who gathered at the Danish Institute for Human Rights in January 2017 to share their initial drafts and worked together to shape the overall direction of the book. A note of thanks goes to colleagues at the Danish Human Rights Institute, Marc Bagge Pedersen, Karen Lønne Christensen, and Emilia Piechowska, who have provided crucial practical and editorial assistance; and Anja Møller Pedersen for her contribution to the authors’ workshop.

    I am also indebted to the MIT Press for their kind and professional assistance all the way from idea to final book, to the anonymous reviewers, and to series editor Sandra Braman for her support and substantive input ever since the idea of the book first materialized in 2016.

    I am grateful for the generous support from Knowledge Unlatched and the Danish Council for Independent Research, which enabled the open access edition of this book. It is my hope that it will benefit scholars and human rights practitioners around the globe.

    Introduction

    Rikke Frank Jørgensen

    This book is concerned with the human rights implications of the social web.¹ Companies such as Google, Facebook, Apple, Microsoft, Twitter, LinkedIn, and Yahoo! play an increasingly important role as managers of services and platforms that effectively shape the norms and boundaries for how users may form and express opinions, encounter information, debate, disagree, mobilize, and retain a sense of privacy. The technical affordances, user contracts, and governing practices of these services and platforms have significant consequences for the level of human rights protection, both in terms of the opportunities they offer and the potential harm they can cause.

    Whereas part of public life and discourse was also embedded in commercial structures in the pre-Internet era, the current situation is different in scope and character. The commercial press that is often referred to as the backbone of the Fourth Estate was supplemented by a broad range of civic activities and deliberations (Elkin-Koren and Weinstock Netanel 2002, vii). Moreover, in contrast to today’s technology giants, the commercial press was guided by media law and relatively clear expectations as to the role of the press in society, meaning an explicit and regulated (although imperfect on many counts) role in relation to public deliberation and participation.

    In contrast to this, the platforms and services that make up the social web are based on the double logic of public participation and commercial interest (Gillespie 2010). Arguably, over the past twenty years, these companies have facilitated a revolution in access to information and communication and have had a transformative impact on individuals’ ability to express, assemble, mobilize, inform, learn, educate, and so on around the globe. At the same time, the ability of states to compel action by the companies has put the human rights implications of their practices increasingly high on the international agenda (Sullivan 2016, 7). Most recently, concern has been raised as to the democratic implications of having a group of relatively few and powerful companies moderate and govern what is effectively the greatest expansion of access to information in history (Kaye 2016). Despite the civic-minded narratives used to describe their services (Jørgensen 2017b; Moore 2016), the companies ultimately answer to shareholders rather than the public interest, and especially Google’s and Facebook’s business practices have increasingly been under scrutiny in the public debate.

    The revenue model of the widely used platforms implies that the expressions, discussions, queries, searches, and controversies that make up people’s social life in the online domain form part of a personal information economy (Elmer 2004). Advertising is no longer simply the dominant way to pay for information and culture (Lewis 2016), as has long been the case within old media, but has taken on a new dimension in that an unprecedented amount of social interaction is used to control markets. Whereas data was previously considered a byproduct of interactions with media, major Internet companies have become data firms, deriving their wealth from the abilities to harvest, analyze, and use personal data rather than from user activity proper (van Dijck and Poell 2013, 9). The data mining of personal information is paradoxical, as there is no demand or preference for it among consumers, yet it is accepted as a kind of cultural tax that allows users to avoid paying directly for the services provided (Lewis 2016, 95). Scholars have cautioned that these current practices represent a largely uncontested new expression of power (Zuboff 2015) that has severe impacts on human agency and on democracy more broadly, as elaborated by Zuboff in this volume. As these new practices permeate our economies, social interactions, and intimate selves, there is an urgent need for an understanding of their relationship with human rights.

    Human rights are a set of legally codified norms that apply to all human beings, irrespective of national borders. International human rights law lays down obligations of governments to act in certain ways or to refrain from certain acts, in order to promote and protect human rights of individuals or groups.² As such, it governs the relationship between the individual and the state, but it does not directly govern the activities of the private sector, although the state has an obligation to protect individuals against human rights harms in the realm of private parties.

    In recent years there have been a variety of initiatives that provide guidance to companies to ensure compliance with human rights, most notably the Guiding Principles on Business and Human Rights, adopted by the UN Human Rights Council in 2011 (UNGPs; United Nations Human Rights Council 2011). According to these Guiding Principles, any business entity has a responsibility to respect human rights, and as part of this, to carry out human rights due diligence, which requires companies to identify, assess, address, and report on their human rights impacts. Moreover, the Guiding Principles state that businesses should be prepared to communicate how they address their human rights impacts externally, particularly when concerns are raised by or on behalf of affected stakeholders.

    The commonly stated claim that human rights apply online as they do offline fails to recognize that in a domain dominated by privately owned platforms and services, individuals’ ability to enjoy their human rights is closely related to whether states have decided to encode them into national regulation applicable to companies and/or the willingness of companies to undertake human rights due diligence. In Europe, for example, the former is the case with online privacy rights, which enjoy protection under the new EU General Data Protection Regulation (GDPR) irrespective of whether the data processing is carried out by a public institution or a private company.³

    In order to address the interdisciplinary nature, scope, and complexity of these questions, the book is organized into three parts. The first is a theoretical and conceptual part that highlights areas in which datafication⁴ and the social web have implications for the protection of human rights. The second is a more practice-oriented part that explores examples of platform governance and rulemaking, and the third is a legal part that discusses human rights under pressure, focusing in particular on the right to freedom of expression and privacy, but also addressing human rights and standards related to equality and nondiscrimination, participation, transparency, access to remedies, and the rule of law. The ultimate goal of the book is to contribute to a more robust system of human rights protection in a domain largely facilitated by corporate actors. While the cases and examples used are for the most part focused on a European and US context, the challenges this book addresses are global by nature as is most clearly illustrated in the chapters by Callamard and York and Zuckerman.

    Before introducing the chapters in more detail, I will outline some of the debates and literature that have served as inspiration for this book, most notably discourses on the platform society and its democratic implications. As part of this, I will briefly introduce the broad field of human rights and technology, as well as the human rights and business framework, in order to situate the specific conceptualization of this book and the human rights questions it is concerned with.

    The Platform Society

    In recent years, the notion of platform has become the prevailing way to describe the services and revenue model that make up the social web (Helmond 2015, 5). The defining characteristic of these platforms is not that they create objects of consumption but rather that they create the world within which such objects can exist (Lazzarato 2004, 188). In short, the platforms give us our horizons, or our sense of the possible (Langlois et al. 2009, 430). Via integrating buttons (like, tweet, etc.), the platforms expand beyond single services to the extent that the platform logic is visible and present across the entire web. The code and policies of the platform impose specific boundaries on social acts, and as such, the platform allows a certain predefined kind of social engagement (see the chapter by Flyverbom and Whelan in this volume). For example, you can like and have friends, but not a list of enemies. Further, the platforms’ economic interest in gathering user data implies that one cannot study a single layer but must acknowledge the intimate relationship between the technical affordances and the underlying economic interests.

    Arguably, the corporate logic, algorithms, and informational architectures of major platforms now play a central role in providing the very material means of existence of online publics. These combined elements regulate the coming into being of a public by imposing specific possibilities and limitations on user activity (Langlois et al. 2009, 417). Effectively, these platforms construct the conditions for public participation on the web. This key role prompts us to seek an understanding of their combined articulation of code and economic interests and how this logic defines the conditions and premises for online participation—in short, the paradox that exists between tools used to facilitate and free communication and the opacity and complexity of an architecture governed by the economies of data mining (ibid., 420). The economies of data mining redefine relations of power, not merely by selling user attention but by tapping into the everyday life of users and refashioning it from within, guided by commercial norms such as the presumed value to advertisers (Langlois and Elmer 2013, 4). This power perspective has also been highlighted in recent software studies, albeit from a different perspective, focusing on the interests that algorithms afford and serve in their specific manifestation (Bucher 2012), and thus how these algorithms rule (Gillespie 2014, 168). Yet scholarship has only recently begun to grapple with the broader societal implications of having technology companies define the boundaries and conditions for online social life and a networked public sphere. In addition, there is an increasing awareness of the difficulty for researchers in studying the technical, economic, and political priorities that guide major platforms due to their largely inaccessible, complex, and black-boxed architecture (Langlois et al. 2009, 416). While major platforms effectively influence whether the notion of a public sphere for democratic dialogue can be sustained into the future (Mansell 2015), we have limited knowledge of how they operate and limited means of holding them accountable to fundamental rights and freedoms.

    From a regulatory perspective, the companies that control the major platforms for information search, social networking, and public discourse of all kinds squeeze themselves between traditional news companies and their two customer segments, the audience and the advertisers (Latzer et al. 2014, 18). They benefit from substantial economies of scale and a scope of operation that enables them to exploit enormous information assets (Mansell 2015, 20), while their global character detaches them from the close structural coupling between the systems of law and politics that is the paradigm of the nation-state (Graber 2016, 22). While the companies often frame themselves as neutral conduits for traffic and hosts for content creators, they have the power to influence which ideas are easily located and how boundaries for public discourse are set, as elaborated by York and Zuckerman in this volume. The capacity of these companies to screen out desirable content without the user’s knowledge is as significant as their capacity to screen out undesirable content. Citizens cannot choose to view what they are not aware of or to protest the absence of content that they cannot discover (Mansell 2015, 24). In short, the regulatory challenge does not concern only cases in which the companies exercise direct editorial control over content. At a more fundamental level, it is about whether their practices shape the user’s online experience in ways that are inconsistent with human rights standards relating to rights of expression, public participation, nondiscrimination, media plurality, privacy, and so forth. When their gatekeeping efforts diminish the quality or variety of content accessed by citizens, result in discriminatory treatment, or lead to unwanted surveillance, there is a prima facie case for policy oversight (ibid., 3). We shall return to this point below when addressing the human rights responsibilities of these companies.

    Private Control, Public Values

    Since Habermas’s seminal work on transformations of the public sphere, various aspects of commercialization have been raised and widely elaborated in relation to the increasing power of private media corporations over public discourse, not least concerning their economic and institutional configurations (Verstraeten 2007, 78). Since public spaces relate to general principles of democracy as locations where dissent and affirmation become visible (Staeheli and Mitchell 2007, 1), their configurations and modalities of ownership, regulation, and governance greatly impact individuals’ means of participating in online public life. Oldenburg’s (1997) original work on The Great Good Place (or the third place), for example, considers the role of physical space in democratic culture and the conflict between these spaces and the commercial imperative that informs the contemporary design of cities and communities. By contrast, the commercial aspects of the online public sphere are a less researched topic, although this has begun to change as scholarship increasingly examines how the political economy of online platforms affects social practices and public discourse, and what kind of public sphere may develop as a result (Gillespie 2010, 2018; Goldberg 2011; Mansell 2015).

    Arguably, the major platforms of the social web have developed an incredibly successful revenue model based on the collection of users’ personal data, preferences, and behavior. The platforms facilitate communications within society, while also harnessing communication in an effort to monetize it (Langlois and Elmer 2013, 2). Corporate social media platforms constantly enact these double articulations: while on the surface they seem to promote unfettered communication, they work in their back end of data processing and analysis to transform and translate acts of communication into valuable data (ibid., 6). Since the harnessing of personal information is at the core of this revenue model, it calls for a reconsideration of both “personal” and “information” in order to adequately protect users’ online privacy, as discussed extensively by Mai in this volume.

    On a legal level, the harnessing of personal information implies the organized activity of exchange, supported by the legal infrastructure of private-property-plus-free-contract (Radin 2002, 4). The value of personal information has been debated in a series of Facebook-commissioned reports on how to sustainably maximize the contribution that personal data makes to the economy, to society, and to individuals (Ctrl-Shift 2015, 3). It is also the topic of the annual PIE (Personal Information Economy) conferences, held by Ctrl-Shift.⁵ The first report explains how mass customization is enabled by information about specific things and people. Today’s practices, whether they drive the production of a coupon or a digital advertisement, employ data analysts and complex computational power to analyze data from a multitude of devices and target ads with optimal efficiency, relevance, and personalization (ibid., 9). As noted in the report, the personal information economy has given rise to a number of concerns, such as the lack of a reasonable mechanism of consent, a sense of creepiness, fears of manipulation by algorithms, and unaccountable concentrations of data power (ibid., 15). At its core, the revenue model profiles users in order to segment customers for the purpose of targeted advertising, as addressed in the chapters by Zuboff and Bermejo in this volume. A user’s search activities, for example, may result in referrals to content properties through a variety of intermediary sharing arrangements that support targeted marketing and cross-selling (Mansell 2015, 20). The economic turn in Internet-related literature is also exemplified in the work of Christian Fuchs and others (Fuchs 2015; Fuchs and Sandoval 2014), who interrogate the economic logics of the social web and argue that user activity such as the production and sharing of content is exploited labor because it contributes to the production of surplus value by data-mining companies.

    In the legal literature, it has been emphasized that the mantra of personalization blurs the distinction between citizens and consumers and swaps free opinion formation for free choice of commodities (Graber 2016, 7). Since freedom in a democratic society presupposes the ability to form preferences after exposure to a sufficient amount of information (Sunstein 2007, 45), personalization risks replacing a diverse, independent, and unpredictable public discourse with the satisfaction of private preferences based on previous choices (a similar concern is found in Zuckerman 2013). In addition, there are increasing concerns about the shift in decision-making power from humans to algorithms (Pasquale 2015) and the democratic implications of this shift, as addressed by Bechmann in this volume. In contrast to written law, which is interpreted by authorized humans in order to take effect on a person, code is largely self-executing and implies minimal scope for interpretation (Graber 2016, 18). While this topic is receiving increasing attention (Council of Europe Committee of Experts on Internet Intermediaries 2017), there is still limited scholarship addressing the human rights and rule-of-law implications of having algorithms regulate social behavior in ways that are largely invisible and inaccessible to the individuals affected.

    In sum, while recognizing the more optimistic accounts of the networked public sphere and its potential for public participation (Benkler 2006; Benkler et al. 2015; Castells 2009), this book is inspired by literature that is concerned with the democratic implications of having an online domain governed by a relatively small group of powerful technology companies and informed by the personal information economy.

    Human Rights and Technology Literature

    Scholarship related to human rights and technology is scattered across different disciplines, ranging from international law and Internet governance to media and communication studies. The interlinkage between technology and human rights started to surface on the international policy agenda during the first World Summit on the Information Society (WSIS) in 2003 and 2005 (Best, Wilson, and Maclay 2004; Jørgensen 2006). The WSIS brought together policy makers, activists, and scholars from a range of disciplines concerned with the normative foundations of the information society. The interrelation between technology and human rights was still very new at this point, and far from obvious to anyone besides a small group of committed activists and scholars. However, in the fifteen years since WSIS, a large number of books, surveys, and norm-setting documents have been produced, as we shall see below.

    The human rights and technology literature includes a growing body of standard-setting literature that supports ongoing efforts to establish norms for human rights protection in the online domain. The Council of Europe’s Committee of Ministers, for example, has since 2003 issued more than 50 recommendations and declarations that apply a human rights lens to a specific area of concern in the online domain, such as search engines, social media platforms, blocking and filtering, net neutrality, Internet intermediaries, big data, Internet user rights, transborder flow of information, and so forth.⁶ The Council of Europe efforts in this field are elaborated in McGonagle’s chapter in this volume. Also, the Organization for Security and Co-operation in Europe (OSCE) has produced a number of guidebooks, although more narrowly related to online freedom of expression, such as Media Freedom on the Internet: An OSCE Guidebook (Akdeniz 2016), and the UN Human Rights Council has since 2012 adopted a number of resolutions that reaffirm the protection of human rights online.⁷ Further, the UN Special Rapporteur on Freedom of Expression has produced a number of important reports that have been widely used as benchmarks for understanding and applying freedom-of-expression standards in the online domain, most recently reports on freedom of expression, states, and the private sector in the digital age (Kaye 2016) and on the regulation of user-generated online content (Kaye 2018).⁸ In 2015, the first UN Special Rapporteur on Privacy was appointed and contributed work that maps out the normative baseline for protecting privacy in an online context (Cannataci 2016). Scholars and activists have also contributed to norm setting by serving to translate human rights to an online context. One example is the Internet Rights and Principles Coalition, which since 2008 has been active in promoting rights-based principles for Internet governance at the global Internet Governance Forum (IGF) as well as regional IGFs and related events. The coalition has produced a number of resources, including the Charter of Human Rights and Principles for the Internet, translated into twenty-five languages. Scholarly contributions include Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights (Redeker, Gill, and Gasser 2018).

    Another subdivision of literature is the vast number of empirically grounded studies that illustrate how technology practice and policy may pose threats to the
