If . . . Then
Algorithmic Power and Politics
Oxford Studies in Digital Politics
Series Editor: Andrew Chadwick, Professor of Political Communication in the Centre
for Research in Communication and Culture and the Department of Social Sciences,
Loughborough University
TAINA BUCHER
Oxford University Press is a department of the University of Oxford.
It furthers the University’s objective of excellence in research,
scholarship, and education by publishing worldwide. Oxford is a registered
trade mark of Oxford University Press in the UK and certain other
countries.
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.
© Oxford University Press 2018
All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
without the prior permission in writing of Oxford University Press,
or as expressly permitted by law, by license, or under terms agreed with
the appropriate reproduction rights organization. Inquiries concerning
reproduction outside the scope of the above should be sent to the
Rights Department, Oxford University Press, at the address above.
You must not circulate this work in any other form
and you must impose this same condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Names: Bucher, Taina, author.
Title: If . . . then : algorithmic power and politics / Taina Bucher.
Description: New York : Oxford University Press, [2018] | Includes
bibliographical references and index. |
Identifiers: LCCN 2017054909 (print) | LCCN 2018008562 (ebook) |
ISBN 9780190493042 (Updf) | ISBN 9780190493059 (Epub) |
ISBN 9780190493035 (pbk. : alk. paper) | ISBN 9780190493028 (hardcover : alk. paper)
Subjects: LCSH: Information technology—Social aspects. | Information society—
Social aspects. | Algorithms—Social aspects. | Big data—Social aspects. | Artificial
intelligence—Social aspects.
Classification: LCC HM851 (ebook) | LCC HM851 .B798 2018 (print) |
DDC 303.48/33—dc23
LC record available at https://lccn.loc.gov/2017054909
1 3 5 7 9 8 6 4 2
Paperback printed by WebCom, Inc., Canada
Hardback printed by Bridgeport National Bindery, Inc., United States of America
Contents
Acknowledgments vii
Notes 161
Bibliography 175
Index 195
Acknowledgments
I enjoyed writing this book. The initial idea behind this book started taking shape as
a dissertation at the University of Oslo, but soon evolved into something else entirely,
as most things do. I therefore owe a sense of gratitude toward all the efforts
I put into my PhD project. Memories of past struggles greatly lessened the burden
of undertaking and writing my first book. The encouragement and the generosity
of series editor Andrew Chadwick and editor Angela Chnapko at Oxford
University Press were also huge contributions to that end, as well as fundamental in
developing this project. Thanks to both of you for believing it would make a
valuable contribution to the Oxford Studies in Digital Politics series. The book is
a product of various encounters with people, things, and places. It was written in
the libraries, offices, homes and cafés of Copenhagen, Oslo, Berlin, and New York.
Writing allowed me to explore these places in new ways. I’d also like to acknowledge
the sunlight, coffee, sounds, views, connectivity, and solitude that these places
helpfully offered. The University of Copenhagen has provided a rich academic
community, and I am grateful to all my colleagues at the Centre for
Communication and Computing for the intellectual discussions and support.
Thanks to Shannon Mattern for hosting me at the New School in New York for my
sabbatical. A number of people have provided valuable feedback on the book as
it emerged: Anne-Britt Gran, Michael Veale, Angèle Christin, Ganaele Langlois,
and Fenwick McKelvey. Thanks for your insightful comments. My appreciation
also goes to all people involved in the editorial process, copyediting,
transcription services, and to all the anonymous referees whose work and
critical remarks have greatly improved the final product. There are countless other
scholars and students met at conferences and seminars to thank as well for their
keen interest in my work and their astute suggestions. I hope a collective word of
thanks will be accepted. This book also benefited from insightful interviews and
conversations with media leaders, producers, and social media users. I am grateful to
all the people who generously agreed to be interviewed and for giving up their time to
help me understand the world of algorithms a bit better.
Fragments of this text have been previously published, but are all freshly
milled here. Chapter 4 takes parts from “Want to be on the top?” (New Media &
Society). Chapter 5 builds on pieces from “The algorithmic imaginary” (Information,
Communication & Society) and chapter 6 has adapted sections from “Machines
don’t have instincts” (New Media & Society). Probably there are many more
fragments to be acknowledged, but the boundaries of a text are never easily
delineated. Chapters 5 and 6 also benefited from financial support from the
Digitization and Diversity project funded by the Research Council of Norway.
Thanks to friends and family for their patience and support, and especially to my
mom who never forgot to mention that there is more to life than writing a
book. More than anyone I am grateful to Georg Kjøll for his unconditional love,
superb editorial skills, music playlists, daily home-cooked dinners, and
companionship; you make living and writing so much fun each and every day.
If . . . Then
Introduction
Programmed Sociality
lets users “connect with friends and the world around you.”1 As media scholar
José van Dijck has argued, “Social media are inevitably automated systems that
engineer and manipulate connections” (2013: 12). By the same token, Netflix is not
a website that lets users “see what’s next and watch anytime, cancel anytime.”2 It can’t
be seen as a neutral platform that merely queries its vast database about a user’s
request to show the movies they explicitly want to watch. Relying on vast
amounts of data, Netflix algorithms are used to analyze patterns in people’s
taste, to recommend more of the same. Popularity is not only a quantifiable
measure that helps companies such as Facebook and Netflix to determine
relevant content. User input and the patterns emerging from it are turned into a
means of production. What we see is no longer what we get. What we get is what
we did and that is what we see. When Netflix suggests we watch House of
Cards, it is largely a matter of consumers getting back their own processed data.
When the show was released in 2013 it quickly became a cornerstone for data-driven
programming, the idea that successful business decisions are driven by
big data analytics.
Of course, there is nothing inherently wrong with this form of data-driven media
production. After all, it seems, many people enjoy watching the show. The
interesting and potentially troubling question is how reliance on data and predictive
analytics might funnel cultural production in particular directions, how
individual social media platforms code and brand specific niches of everyday life (van
Dijck, 2013: 22). Starting from the basic question of how software is shaping the
conditions of everyday life, this book sets out to explore the contours and
implications of the question itself. In what ways can we say that software, and more
specifically algorithms, shape everyday life and networked communication? What,
indeed, are algorithms and why should we care about their possible shaping
effects to begin with?
Let us quickly return to my rainy November day. While chatting with friends
on Facebook about the pre-Christmas dinner, two rather specific ads appeared on
the right-hand side of my Facebook news feed. One was for a hotel in Lisbon,
where I was going to travel for the conference, and the other was for a party
dress. How did Facebook know about my upcoming trip, or that I had just
bought my mother a Christmas gift from that shop? My musings were only
briefly interrupted by one of my friends asking me about a concert I had recently
been to. She had seen a picture I posted on Facebook from the concert a few
days earlier. My other friend wondered, why hadn’t she seen the picture? After
all, as she remarked, she checks her Facebook feed all the time. While these
connections might be coincidental, their effects are not incidental. They matter
because they affect our encounters with the world and how we relate to each other.
While seeing ads for party dresses in a festive season might not appear strange, nor
missing a picture posted from a concert for that matter, such programmed forms of
sociality are not inconsequential. These moments are mediated, augmented,
produced, and governed by networked systems powered by software and
algorithms. Understood as the coded instructions that a computer needs to follow
to perform a given task, algorithms are deployed to make
decisions, to sort and make meaningfully visible the vast amount of data
produced and available on the Web. Viewed together, these moments tell the
story of how our lives are networked and connected. They hint at the
fundamental question of who or what has power to set the conditions for what
can be seen and known with whatever possible effects. To address this
important question, this book proposes to consider the power and politics of
software and algorithms that condense and construct the conditions for the
intelligible and sensible in our current media environment.
The ideas of power and politics I have in mind are both very broad, yet quite specific.
For one, this book is not going to argue that algorithms have power. Sure, algorithms
are powerful, but the ways in which this statement holds true cannot simply
be understood by looking at the coded instructions telling the machine what to
do. Drawing on the French philosopher Michel Foucault’s (1982; 1977)
understanding of power as exercised, relational and productive, I intend to show
how the notion of “algorithmic power” implies much more than the specific
algorithm ranking, say, a news feed. What I am going to argue is that the notion of
algorithmic power may not even be about the algorithm, in the more technical
sense of the term. Power always takes on many forms, including not only the
ways in which it is exercised through computable instructions, but also through
the claims made over algorithms. As such, we might say that algorithmic systems
embody an ensemble of strategies, where power is immanent to the field of action
and situation in question. Furthermore, following Foucault, power helps to
produce certain forms of acting and knowing, ultimately pointing to the need for
examining power through the kinds of encounters and orientations algorithmic
systems seem to be generative of.
Neither are the “politics” of this book about politics with a capital P. I will not
be discussing parliamentary politics, elections, campaigns, or political
communication in the strictest sense. Rather, politics is understood in more general
terms, as ways of world-making—the practices and capacities entailed in ordering
and arranging different ways of being in the world. Drawing on insights from Science
and Technology Studies (STS), politics here is more about the making of certain
realities than taking reality for granted (Mol, 2002; Moser, 2008; Law, 2002).
In chapter 2 I will describe this form of politics of the real, of what gets to be in
the world in terms of an “ontological politics” (Mol, 2002). In ranking, classifying,
sorting, predicting, and processing data, algorithms are political in the sense
that they help to make the world appear in certain ways rather than others.
Speaking of algorithmic politics in this sense, then, refers to the idea that realities
are never given but brought into being and actualized in and through
algorithmic systems. In analyzing power and politics, we need to be attentive to
the way in which some realities are always strengthened while others are
weakened, and to recognize the vital role of non-humans in co-creating these
ways of being in the world. If . . . Then argues that algorithmic power and politics
is neither about algorithms determining how the social world is fabricated nor
about what algorithms do per se. Rather it is about how
and when different aspects of algorithms and the algorithmic become available to
specific actors, under what circumstance, and who or what gets to be part of how
algorithms are defined.
Programmed Sociality
Increasingly, we have come to rely on algorithms as programmable decision-makers
to manage, curate, and organize the massive amounts of information and data
available on the Web and to do so in a meaningful way. Yet, the nature and
implications of such arrangements are far from clear. What exactly is it that
algorithms “do” and what are the constitutive conditions necessary for them to do
what they do? How are algorithms enlisted as part of situated practices, and how
do they operate in different settings? How can we develop a productive and
critical inquiry of algorithms without reducing it to a question of humans versus
the machine?
Let’s begin a tentative answer with a conceptual understanding of how software
induces, augments, supports, and produces sociality. Here, I suggest the concept
of programmed sociality as a helpful heuristic device. Through this we might
study algorithmic power and politics as emerging through the specific
programmed arrangements of social media platforms, and the activities that are
allowed to take place within those arrangements. Facebook and other software
systems support and shape sociality in ways that are specific to the architecture and
material substrate of the medium in question. To do justice to the concept of
programmed sociality, it is important to highlight that it does not lead us down a
pathway of technological de- terminism. In using the term “programmed,” I draw
on computer scientist John von Neumann’s notion of “program,” for which the term
“to program” means to “assem- ble” and to “organize” (Grier, 1996: 52). This is
crucial, as it frames software and algorithms as dynamic and performative rather
than as fixed and static entities. Regarding “sociality,” I refer to the concept of
how different actors belong together and relate to each other. That is, sociality
implies the ways in which entities (both human and non-human) are associated
and gathered together, enabling interaction between the entities concerned
(Latour, 2005). To be concerned with programmed sociality is to be interested in
how actors are articulated in and through computational means of assembling
and organizing, which always already embody certain norms and values about
the social world. To exemplify how algorithmic media prescribe certain norms,
values, and practices, let me describe how programmed sociality plays out in the
specific context of Facebook, by focusing on friendships as a particularly
pertinent form of being together online.
As Facebook has become an integral part of everyday life, providing a venue for
friendships to unfold and be maintained, it is easy to forget just how involved
Facebook is in what we often just take to be interpersonal relationships. Everything
from setting up a profile and connecting with other users to maintaining a
network
of friends entails an intimate relation with the software underlying the platform
itself. As van Dijck has pointed out, “what is important to understand about social
network sites is how they activate relational impulses” (2012: 161). It is
important to understand that relationships are activated online, but also how and
when they are activated: by whom, for what purpose, and according to which
mechanisms.
With nearly two billion users, many of whom have been members of the platform
for many years, most people have long forgotten what it felt like to become a
member, how they became the friend that Facebook wanted them to be. Upon
first registering with the site, the user is instantly faced with the imperative to add
friends. Once a user chooses to set up an account, he is immediately prompted to start
filling in the personal profile template. Users’ identities need to be defined within
a fixed set of standards to be compatible with the algorithmic logic driving social
software systems. If users could freely choose what they wish to say about
themselves, there would be no real comparable or compatible data for the
algorithms to process and work with. Without this orderly existence as part of the
databases, our connections would not make much sense. After all, “data structures
and algorithms are two halves of the ontology of the world according to a
computer” (Manovich, 2001: 84). Being part of databases means more than
simply belonging to a collection of data. It means being part of an ordered space,
encoded according to a common scheme (Dourish, 2014). As Tarleton Gillespie
(2014) points out, data always need to be readied before an algorithm can
process them. Categorization is a powerful mechanism in making data
algorithm-ready. “What the categories are, what belongs in a category, and who
decides how to implement these categories in practice, are all powerful assertions
about how things are and are supposed to be” (Bowker and Star, in Gillespie, 2014:
171). The template provided by Facebook upon signing up constitutes only one of
many forms of categorization that help make the data algorithm-ready. The
politics of categorization becomes most pertinent in questions concerning
inclusion and exclusion. The recurring conflicts over breastfeeding images
and Facebook’s nudity-detection systems—comprising both algorithms and
human managers—represent a particularly long-lasting debate over censorship and
platform policies (Arthur, 2012). The politics of categorization is not just a matter of
restricting breastfeeding images but one that fundamentally links database
architecture and algorithmic operations to subjectification.
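To make the point about fixed standards more concrete, a minimal sketch of what such a profile template might look like from a data perspective is given below. The field names and values are purely illustrative inventions, not Facebook’s actual schema; the point is simply that once every user is encoded according to the same scheme, profiles become comparable and therefore processable by algorithms.

    from dataclasses import dataclass

    # A purely hypothetical profile template: every user must be expressed
    # in the same fixed fields for their data to be comparable.
    @dataclass
    class Profile:
        user_id: int
        gender: str        # chosen from a predefined list, not free text
        birth_year: int
        hometown: str
        interests: tuple   # free expression funnelled into discrete, countable items

    alice = Profile(1, "female", 1985, "Oslo", ("running", "jazz"))
    bob = Profile(2, "male", 1990, "Berlin", ("jazz", "cooking"))

    # Because both users are encoded in the same scheme, an algorithm can
    # compute something across them, for instance shared interests:
    print(set(alice.interests) & set(bob.interests))  # {'jazz'}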
To understand how sociality is programmed—that is, how friendships are
programmatically organized and shaped, let us consider the ways in which the platform
simulates existing notions of friendship. As theorists of friendship have argued,
shared activity and history are important aspects of considering someone a friend
(Helm, 2010). Simulating and augmenting the notion of a shared history,
Facebook provides several tools and techniques dedicated to supporting memory.
As poorly connected or unengaged users pose a threat to the platform’s
conditions of existence, programming reasons for engagement constitutes a key
rationale from the point of view of platforms. On Facebook, connecting users
to potential friends
provides the first step in ensuring a loyal user base, because friendships commit.
Functioning as a memory device, algorithms and software features do not merely
help users find friends from their past, they play an important part in maintaining
and cultivating friendships, once formed. As such, a variety of features prompt users
to take certain relational actions, the most well-known being that of notifying users
about a friend’s birthday. While simulating the gesture of phatic communication
represented in congratulating someone on his or her birthday, the birthday-reminder
feature comes with an added benefit: The birthday feature is the most
basic way of making users return to the platform, by providing a concrete suggestion
for a communicative action to be performed. As I’ve described elsewhere, platforms
like Facebook want users to feel invested in their relationships, so they are continually
coming up with new features and functionalities that remind them of
their social “obligations” as friends (Bucher, 2013).
While the traditional notion of friendship highlights the voluntary and durational
aspects of becoming friends and becoming friends anew (Allan, 1989), the
software, one may claim, functions as a suggestive force encouraging users to connect
and engage with the people in ways that are afforded by and benefit the platform.
From the point of view of critical political economy, sociality and connectivity are
resources that fuel the development of new business models. Platforms do not activate
relational impulses in an effort to be nice. Ultimately, someone benefits
financially from users’ online activities. This is, of course, a familiar story and
one that many scholars have already told in illuminating and engaging ways (see
Andrejevic, 2013; Couldry, 2012; Fuchs, 2012; Gehl, 2014; Mansell, 2012; van
Dijck, 2013). From the perspective of companies like Facebook and Google, but
also from the perspective of legacy news media organizations (discussed in chapter
6), algorithms are ultimately folded into promises of profit and business models.
In this sense, a “good” and well-functioning algorithm is one that creates value, one
that makes better and more efficient predictions, and one that ultimately makes
people engage and return to the platform or news site time and again. The
question then becomes: What are the ways in which a platform sparks enough
curiosity, desire, and interest in users for them to return?
The subtle ways of software can, for example, be seen in the manner in which
Facebook reminds and (re)introduces users to each other. When browsing through
my Facebook news feed it is almost as if the algorithm is saying, “Someone
you haven’t spoken to in five years just liked this,” or, “This person whom you
haven’t heard from in ages, suddenly seems to be up to something fun.”
Somehow, I am nudged into thinking that these updates are important, that I
should pay attention to them, that they are newsworthy. Rather than meaning
that friendships on Facebook are less than voluntary, my claim is that the ways
in which we relate to each other as “friends” are highly mediated and conditioned
by algorithmic systems. People we do not necessarily think about, people we might
not remember, or people we might not even consider friends continue to show up
on our personalized news feeds.
So, in that sense, it does feel as if there is only a select group of friends I
interact with on the social network, while I’ve practically forgotten about
the hundreds of others I have on there. An example of this is a friend from
high school, who liked one of my posts a few weeks back. I’d totally
forgotten she was even on Facebook until she liked it and we started
chatting.
an important role in deciding who gets to be seen and heard and whose voices are
considered less important. Programmed sociality, then, is political in the sense that
it is ordered, governed, and shaped in and through software and algorithms. If we
want to consider everyday life in the algorithmic media landscape, we need to pay
attention to the ways in which many of the things we think of as societal—including
friendship—may be expressed, mediated and shaped in technological designs and
how these designs, in turn, shape our social values. As we will see throughout the
book, such considerations, however, do not stop with values in design, but exceed
the purely technical (whatever that is taken to mean) in important ways.
A key argument of this book is that the power and politics of algorithms stems
from how algorithmic systems shape people’s encounters and orientations in the
world. At the same time, I claim that this shaping power cannot be reduced to
code. Specifically, I argue for an understanding of algorithmic power that hinges
on the principle of relational materialism, the idea that algorithms “are no mere
props for performance but parts and parcel of hybrid assemblages endowed with
diffused personhood and relational agency” (Vannini, 2015: 5). Thus, it is
important to acknowledge that while we start with the question of how software
and algorithms shape sociality by looking at materiality in the more conventional
sense as “properties of a technology,” the answer cannot be found in these
properties alone, but rather in the ways in which programmed sociality is realized
as a function of code, people, and context.
Computable Friendships
The concept of friendship provides an apt example for the understanding of
programmed sociality and algorithmic life, because it shows the discrepancies between
our common-sense notions of friendship and the ways in which friendship
becomes embedded in and modeled by the algorithmic infrastructures.
Friendship is deeply rooted in the human condition as a fundamental aspect of
being together with other people, one that is always already contested across
cultural and historical contexts. Traditionally, friendship has been thought of as
an exclusive social relation, a private and intimate relation between two
persons (Aristotle, 2002; Derrida, 2005; Hays, 1988). For this reason, true
friendship has been regarded as something that one cannot have with many
people at the same time, simply because it requires time to build, nurture, and
maintain. Compared to Aristotle’s conception of friendship as something rather
precious that one cannot have with many people at once (Aristotle, 2002),
Facebook seems to promote the completely opposite idea.
The way the platform puts friendships at the center of a business model is no
coincidence, of course, and is probably one of the core reasons Facebook has
evolved into an unprecedented media company during the past decade. In a patent
application filed by Facebook concerning the People You May Know (PYMK) feature,
no doubt is left as to the value of friendships for Facebook: “Social
networking systems value user connections because better-connected users tend
to increase their use of the social networking system, thus increasing user
engagement and corresponding increase in, for example, advertising
opportunities” (Schultz et al., 2014). Software intervenes in friendships by
suggesting, augmenting, or encouraging certain actions or relational impulses.
Furthermore, software is already implicated in the ways in which the platform
imagines and performs friendships. Contrary to the notion that “friendship
clearly exists as a relation between individuals” (Webb, 2003: 138),
friendship on Facebook exists as a relation between multiple actors, between
humans and non-humans alike. As Facebook exemplifies in another patent
document:
[T]he term friend need not require that members actually be friends in
real life (which would generally be the case when one of the members is
a business or other entity); it simply implies a connection in the social
network. (Kendall and Zhou, 2010: 2)
The disconnect between how members usually understand friendship and the
ways in which Facebook “understands” friendship becomes obvious in the quote
above. According to Facebook, a user can be “friends” with a Facebook page, a
song, a movie, a business, and so on. While it might seem strange to consider
a movie a friend, this conception of friendship derives from the network model of
the Web in which users and movies are considered “nodes” in the network and the
relationship that exists between them an “edge” or, indeed, a friend.
Indeed, “the terms ‘user’ and ‘friend’ depend on the frame of reference” (Chen et
al. 2014). It is exactly the different and sometimes conflicting frames of reference
that are of interest in this book. A core contention is that media platforms and
their underlying software and infrastructures contain an important frame of
reference for understanding sociality and connectivity today. If we accept that
software can have a frame of reference, a way of seeing and organizing the world,
then what does it mean to be a friend on Facebook or, more precisely, what are
friends for, if seen from the perspective of the platform?
Facebook friendships are, above all, computable. In an age of algorithmic
media, the term algorithmic, used as an adjective, suggests that even friendships
are now subject to “mechanisms that introduce and privilege quantification,
proceduralization, and automation” (Gillespie, 2016a: 27). Measuring the
performance of individuals and organizations is nothing new, though. As
sociologists Espeland and Sauder (2007) suggest, social measurements and rankings
have become a key driver for modern societies during the past couple of decades.
According to philosopher Ian Hacking, “society became statistical” through the
“enumeration of people and their habits” (1990: 1). Hacking connects the
emergence of a statistical society to
the idea of “making up people,” meaning that classifications used to describe people
influence the forms of experience that are possible for them, but also how the effects
on people, in turn, change the classifications:
The systematic collection of data about people has affected not only the
ways in which we conceive of a society, but also the ways in which we
describe our neighbour. It has profoundly transformed what we choose to
do, who we try to be, and what we think of ourselves. (1990: 3)
Above all, top friends are used to prioritize information associated with them above
others. Top friends are made-up people insofar as “those kinds of people would not
have existed, as a kind of people, until they had been so classified, organized and
taxed” (Hacking, 2007: 288). The subset of algorithmic top friends can be seen as
a new category of people, emerging in the age of programmed sociality and
algorithmic life. There are many more. As the notion of top friends shows,
computable friendships hinge on measuring and evaluating users in order to
be able to determine their friendship status. While friendships have always been
qualitatively determined, as the notion of “best friend” suggests, the extent to
which Facebook now quantifiably produces and classifies friendships works to
dehumanize sociality itself by encouraging an empty form of competitiveness. Like
most social media platforms, Facebook measures social impact, reputation, and
influence through the creation of composite numbers that function as a score
(Gerlitz & Lury, 2014: 175). The score is typically used to feed rankings or
enhance predictions. The computing of friendships is no different. In another
patent application Facebook engineers suggest that the value of friendship is not
confined to users but also serves an essential role in sustaining the social networking
system itself. As Schultz et al. (2014) suggest, better-connected users tend to
increase their use, thereby increasing advertising opportunities. A so-called
friendship value is not only computed to determine the probability of two users
“friending” but also to make decisions as to whether to show a specific
advertising unit to a user. The higher the score, the “better” Facebook deems the
friendship to be, increasing the likelihood of using the connection to help
promote products. The value of a friendship is produced as a composite
number based on a “friendship score, sending score, receiving score or some
combination of the scores as determined by value computation engine” (Schultz
et al., 2014). According to Schultz et al., “the sending and receiving scores reflect the
potential increase in the user’s continued active utilization of the social networking
system due to a given connection” (2014: 2). From a computational perspective,
friendships are nothing more than an equation geared toward maximizing
engagement with the platform.
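Read as a computation, the patent’s description can be paraphrased in a few lines of code. The sketch below is a loose illustration rather than Facebook’s implementation: the weights, the example values, and the function name are invented; only the general idea of combining a friendship, sending, and receiving score into a single composite value comes from Schultz et al. (2014).

    # Illustrative only: the weights and example scores are invented for this
    # sketch and are not taken from the patent or from any real system.
    def friendship_value(friendship_score, sending_score, receiving_score,
                         weights=(0.5, 0.25, 0.25)):
        """Combine component scores into one composite 'value' of a connection."""
        w_f, w_s, w_r = weights
        return w_f * friendship_score + w_s * sending_score + w_r * receiving_score

    # A higher composite value would make the connection more likely to be used,
    # for instance to decide whether to show a friend-related advertising unit.
    value = friendship_value(friendship_score=0.8, sending_score=0.4, receiving_score=0.6)
    print(round(value, 2))  # 0.65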
Far from loving the friend for the friend’s own sake, which would be exemplary
of the Aristotelian notion of virtue ethics and friendship, Facebook “wants”
friendships to happen in order to increase engagement with the social network,
ultimately serving revenue purposes. The quantification and metrification of friendship
are not merely part of how connections are computed by Facebook’s algorithmic
infrastructure but increasingly make up the visuals of social networking systems
through the pervasive display of numbers on the graphical user interface. With
more than 3 billion likes and comments posted on Facebook every day, users are
both expressing their sentiments and engagement and reminded and made aware
of these actions and affects through the visual traces thereof. As the software
artist Ben Grosser notes, “A significant component of Facebook’s interface is its
revealed enumerations of these ‘likes,’ comments and more” (Grosser 2014:
1). Grosser questions whether people would add as many friends if they were not
constantly confronted with how many they have or whether people would “like” as
many ads if they were not always told how many others have liked them before
them. Grosser’s artwork The Demetricator is a software plugin that removes all
metrics from the Facebook interface and critically examines these questions.
According to Grosser, Facebook draws on people’s “deeply ingrained ‘desire for
more’ compelling people to reimagine friendship as a quantitative space, and
pushing us to watch the metric as our guide” (Grosser, 2014). The pervasive
enumeration of everything on the user interface function as a rhetorical device,
teaching users that more is better. More is also necessary if we consider the
operational logics of the platform. The drive toward more is evident when
considering the value of friendship, given that more friends increases the
likelihood of engaging with the site. Friends are suggested based on mutual
friends, but also on factors such as low activity or few friends. The idea is that, by
suggesting friends with low activity level, Facebook “can enable those users to
likely have more friends as a result of being suggested [. . .] and thereby likely
increasing the candidate user’s engagement level with the social networking
system” (Wang et al., 2012: 6). Friendships, then, are variously made and
maintained by humans and non-humans alike. The specifics of how friendships
are susceptible to computation is immeasurable in and of itself. The purpose,
however, of explicating the term “programmed sociality” as core to
understanding algorithmic life is to draw attention to software and computational
infrastructure as conditions of possi- bility for sociality in digital media.
Guilt by Association
Whereas “friend” in the sociological sense signifies a voluntary relationship that
serves a wide range of emotional and social aims, “friends” as seen from the
perspective of the platform are highly valuable data carriers that can be utilized for
a variety of reasons. One of the core principles underlying networked media is that
information is derived as much from the edges (connections) as it is from the nodes
(users, businesses, objects). This means that users do not just provide data about
themselves when they fill out profiles, like things, or comment on posts; in doing so,
they simultaneously reveal things about the people and things they are interacting
with. If data are missing from a user’s personal profile or that user is not as engaged
as the platforms would prefer, the best way to extract more information about the user is
through his or her various connections. From a platform perspective, friends are
in the data delivery business. This becomes particularly evident when looking at
the patent documents by Facebook describing techniques related to advertising.
For example, given the insufficient personal information provided by a
particular member, ads are tailored and targeted based on friends. Facebook calls this
“guilt by association” (Kendall & Zhou, 2010: 2). While the authors of the patent
document acknowledge that they are “giving credence to an old adage,” the
word “guilt” is worth pondering. Guilt evokes notions of responsibility,
autonomy, and accountability. Who or what is responsible for the content
shown on Facebook, and who might be held accountable? While it might be
easier to understand how users’ own actions determine what content they see
online, it seems more difficult to come to terms with the notion that users also
play a crucial role in determining what their friends see on their feeds. Guilt by
association, as Facebook uses the term, implies that users are made “complicit” in
their friends’ ad targeting, which seems highly problematic. While it is now
commonplace to say users are the product, not the media platforms they are
using, the extent to which users are used to promote content and products—
often, without their explicit knowledge—is unprecedented in the age of
algorithmic media. If the classical notion of friendship is political in the sense that
it assumes gendered hierarchy through the notion of brotherhood (Derrida,
2005), the politics of algorithms suggests hierarchies of a different sort—of what is
“best,” “top,” “hot,” “relevant,” and “most interesting.”
When examining the contours and current state of algorithmic life, it is important
to understand the mechanisms through which algorithmic media’s shaping of
sociality is deeply intertwined with power and politics. This book is not just about
highlighting the role of algorithms as a core governing principle underlying
most online media platforms today, but also about showing how algorithms
always already invoke and implicate users, culture, practice, ownership, ethics,
imaginaries, and affect. It means that talking about algorithms implies asking
questions about how and when users are implicated in developing and maintaining
algorithmic logics, as well as asking questions about governance, about who owns
the data, and about the ends to which it is put. While friends have always been valuable for
advertisers, algorithms seem to lessen the autonomy and intentionality of people
by turning everything they do into a potential data point for the targeting of ads and
news feed content. Such is the network logic, which users cannot escape. For
neoliberalism, “friendship is inimical to capital, and as such, like everything else, it
is under attack” (Cutterham, 2013: 41). Moreover, as Melissa Gregg holds,
“‘friendship’ is labour in the sense that it involves constant attention and
cultivation, the rewards of which include improved standing and greater
opportunity” (2007: 5). As Langlois and Elmer point out, “social media seek to
mine life itself ” (2013: 4). That is, social media platforms “do much more than just
sell users’ attention to advertisers: they actually help identify the very strategies
through which attention can be fully harnessed” (Langlois & Elmer, 2013: 4).
Algorithms are key to this end. If we want to understand the ways in which
power and politics are enacted in and through contemporary media, we need to
look more closely at the ways in which information, culture, and social life are being
processed and rendered intelligible. In this book, I set out to do so.
Examining algorithmic media, and the ways in which life is increasingly
affected by algorithmic processing, means acknowledging how algorithms are
not static things but, rather, evolving, dynamic, and relational processes hinging on a
complex set of actors, both humans and nonhumans. Programmed sociality
implies that social relations such as friendships are not merely transposed onto a
platform like Facebook but are more fundamentally transduced. The concept of
transduction names the process whereby a particular domain is constantly
undergoing change, or individuation, as a consequence of being in touch or
touched by something else (Mackenzie, 2002; Simondon, 1992). Rather than
maintaining an interest in what friendship is, transduction and the related term
“technicity” help to account for how domains such as friendship come into being
because of sociomaterial entanglements. Using Facebook to access a social
network transduces or modulates how a person connects with friends. When
using Facebook, the technicity of friendship unfolds as conjunctions between
users and algorithms (e.g., the PYMK feature), coded objects (e.g., shared video),
and infrastructure (e.g., protocols and networks). As Kitchin and Dodge point out:
“This power to affect change is not deterministic but is contingent and relational,
the product of the conjunction between code and people” (2005: 178).
Transduction and technicity become useful analytical devices in exemplifying the
concept of programmed sociality as they point toward the ways in which software
has the capacity to produce and instantiate modalities of friendship, specific to
the environment in which it operates. The productive power of technology, as
signified by the concept of technicity, does not operate in isolation or as a
unidirectional force. Algorithms and software, in this view, do not determine what
friendships are in any absolute or fixed sense. Rather, technicity usefully emphasizes
the ways in which algorithms are entities that fundamentally hinge on people’s
practices and interaction, in order to be realized and developed in the first place. Taking
such a perspective allows us to see friendship and other instances of programmed
sociality as emerging sociomaterial accomplishments.
Back to the rainy November day introduced at the beginning of the chapter: The
question of what categories were used to determine the specific ads or the
content of my and my friends’ news feeds persists. Was it my clicking behavior,
my age and gender, pages that I have liked, the cookies set by the online design
store where I bought the gift for my mom, my friends’ clicking behavior, the
friends of my friend, everything or nothing of the above? Whatever the exact
reason might be, online spaces are always computed according to underlying
assumptions, norms, and values. Although we simply do not know and have no
way of knowing how exactly our data and the algorithmic processing are shaping
our experiences online, a critical perspective on sociotechnical systems, along
with personal encounters and experiences with algorithmic forms of
connectivity and sociality, might help to
connections among individual pieces of data are increasingly being delegated to
various forms of algorithms. The question raised in this chapter is, how does this
kind of algorithmic intervention into people’s information-sharing practices take
place? What are the principles and logics of Facebook’s algorithmic form of
editing the news feed? What implications do these algorithmic processes have for
users of the platform? Through an analysis of the algorithmic logics structuring
the flow of information and communication on Facebook’s news feed, I argue
that the regime of visibility constructed imposes a perceived “threat of
invisibility” on the part of the participatory subject. As a result, I reverse
Foucault’s notion of surveillance as a form of permanent visibility, arguing that
participatory subjectivity is not constituted through the imposed threat of an
all-seeing vision machine, but by the constant possibility of disappearing and
becoming obsolete. The intention is not so much to offer a definite account of
the role played by Facebook in capturing the world in code, but to open avenues
for reflection on the new conditions through which in/visibility is constructed by
algorithms online.
Chapter 5 considers the barely perceived transitions in power that occur when
algorithms and people meet, by considering how social media users perceive and
experience the algorithms they encounter. While it is important to interrogate the
operational logics of algorithms on an infrastructural level, materiality is only half
the story. To do the topic of algorithmic power and politics full justice, there is
a need to understand how people make sense of and experience the algorithms
with which they persistently interact. Technical systems and infrastructure alone
do not affect use. Users’ perceived knowledge of how the systems work might be just
as significant. The questions raised in chapter 5 are: How do social media users
imagine algorithms, and to what extent does their perception and knowledge affect
their use of social media platforms? The chapter reports findings from an
exploratory study of 35 social media users who were asked about their perceptions of
and experiences with algorithms online. The chapter examines the specific
situations in which users notice algorithms and start reflecting and talking about
them. Focusing on a few of these user-reported situations, the chapter shows how
users respond to and orient themselves differently toward algorithms. Moving
beyond a call for intensified code literacy, I argue that these personal algorithm
stories provide important insight into the ways in which algorithms are currently
imagined and understood, and how users negotiate and resist algorithms in their
everyday life.
Chapter 6 looks at how algorithms acquire the capacity to disturb and to compose
new sensibilities as part of situated practices, particularly in terms of how they
become invested with certain political and moral capacities. While the previous
chapters considered how algorithms are publicly imagined and how they work to
produce impressions of engagement, chapter 6 looks at how algorithms materialize
in the institutional setting of the news media. More specifically, in this
chapter we continue to consider how algorithms are not just matters of fact
but also an
A Beginning
HISTORIES AND TECHNICALITIES
Let’s take a step back. What exactly do (most) people mean when they invoke the
term “algorithm”? According to standard computer science definitions, an algorithm
is a set of instructions for solving a problem or completing a task following
a carefully planned sequential order (Knuth, 1998). Although the algorithm is a
key concept in computer science, its history dates back to the medieval notion of
“algorism,” understood as a way of performing calculations with natural numbers.
Algorism goes as far back as the 9th century when the Persian astronomer
and mathematician Abdullah Muhammad bin Musa al-Khwarizmi (circa 780–
850) was indirectly responsible for coining the term. When his scripts were
translated into Latin in the 12th century, his name was rendered as
“Algorithmi.”1 These scripts described the basic methods of arithmetic in the Hindu-
Arabic numeral system, which much later formed the basic operations of computer
processors (Miyazaki, 2012). As Wolfgang Thomas describes, many paragraphs of
the scripts translated from al-Khwarizmi’s text “Computing with the Indian
Numbers” started with the phrase “Dixit Algorizmi,” where algorithm referred to
a “process of symbol manipulation”
(2015: 31). During the 17th century, the German philosopher Gottfried Wilhelm
Leibniz (1646–1716) added a new dimension to symbolic computation by developing
“the vision of calculating truths (true statements)” (Miyazaki, 2012).2 As far
as the prehistory of algorithms goes, Leibniz provided the groundwork for what later
became known as Boolean algebra by “arithmetizing logic” and using if/then
conditionals for calculating truth (Thomas, 2015). In Thomas’ account, the origin can
be traced to Boole (1847) and his theory of Boolean algebra and continues with
Frege’s 1879 introduction of a formal language in which mathematical statements
could be expressed (see Thomas, 2015). For many historians, however, the
history of algorithms proper starts with David Hilbert and what is called the
Entscheidungsproblem, the challenge of whether an algorithm exists for deciding
the universal validity or satisfiability of a given logical formula (Sommaruga &
Strahm, 2015: xi). In 1936, Alan Turing famously solved this problem in the
negative by reducing the problem to a notion of symbolic computation, which
later became known as the Turing machine. Although Turing himself scarcely
made direct reference to the term “algorithm,” contemporary mathematicians
such as Alonzo Church (1903–1995) or Stephen C. Kleene (1909–1994) did
(Miyazaki, 2012). In tracing the historical origins of the term “algorithm” and
its role in the history of computing, Shintaro Miyazaki (2012) suggests that
“algorithm” first entered common usage in the 1960s with the rise of scientific
computation and higher-level programming languages such as Algol 58 and its
derivatives. By the mid 20th century, then, “an algorithm was understood to be a
set of defined steps that if followed in the correct order will computationally
process input (instructions and/or data) to produce a desired outcome” (Kitchin,
2017: 16).
Perhaps the most common way to define an algorithm is to describe it as a recipe,
understood as a step-by-step guide that prescribes how to obtain a certain goal,
given specific parameters. Understood as a procedure or method for processing
data, the algorithm as recipe would be analogous to the operational logic for making
a cake out of flour, water, sugar, and eggs. Without the specific instructions for
how to mix the eggs and flour or when to add the sugar or water, for instance, these
ingredients would remain just that. For someone who has never baked a cake,
step-by-step instructions would be critical if they wanted to bake one. For any
computational process to be operational, the algorithm must be rigorously
defined, that is, specified in such a way that it applies in all possible
circumstances. A program will execute a certain section of code only if certain
conditions are met. Otherwise, it takes an alternative route, which implies that
particular future circumstances are al- ready anticipated by the conditional
construct of the “if . . . then statement” upon which most algorithms depend. The
“if . . . then statement” is the most basic of all control flow statements, tasked with
telling a program to execute a particular section of code only if the condition is
deemed “true.” However, in order to be able to test and compute a “false”
condition, the “if . . . then” statement needs to include an “else” statement,
which essentially provides a secondary path of execution. In other
words, while the “if . . . then” statement acts only when its condition evaluates to “true,”
the “if . . . then . . . else” construct is also able to execute an alternate pathway when it does not.3
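The construct is easy to illustrate. The snippet below is a generic sketch in Python (any language would do); the feed-ranking scenario and the threshold are invented simply to echo the book’s examples.

    # A minimal "if . . . then . . . else" construct: the first branch runs only if
    # the condition evaluates to true; the else branch provides the alternative path,
    # anticipating the circumstance in which the condition does not hold.
    engagement_score = 0.3  # hypothetical value, e.g. predicted interest in a post

    if engagement_score > 0.5:
        print("show the post near the top of the feed")
    else:
        print("push the post further down the feed")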
An algorithm essentially indicates what should happen when, a principle that
programmers call “flow of control,” which is implemented in source code or
pseudocode. Programmers usually control the flow by specifying certain procedures
and parameters through a programming language. In principle, the algorithm is
“independent of programming languages and independent of the machines that
execute the programs” (Goffey, 2008: 15). The same type of instructions can be
written in the languages C, C#, or Python and still be the same algorithm. This
makes the concept of the “algorithm” particularly powerful, given that what an
algorithm signifies is an inherent assumption in all software design about order,
sequence, and sorting. The actual steps are what is important, not the wording
per se.
Designing an algorithm to perform a certain task implies a simplification of the
problem at hand. From an engineering perspective, the specific operation of an
algorithm depends largely on technical considerations, including efficiency,
processing time, and reduction of memory load—but also on the elegance of the code
written (Fuller, 2008; Knuth, 1984).4 The operation of algorithms depends on a
variety of other elements—most fundamentally, on data structures.5
Tellingly, Niklaus Wirth’s (1985) pioneering work on “structured programming” is
entitled Algorithms + Data Structures = Programs. To be actually operational,
algorithms work in tandem not only with data structures but also with a whole
assemblage of elements, including data types, databases, compilers, hardware,
CPU, and so forth.6 Explaining why he starts his book with data structures and
not algorithms, Wirth writes: “One has an intuitive feeling that data precede
algorithms: You must have some objects before you can perform operations on
them” (2004: 7). Furthermore, Wirth’s book title suggests that algorithms and
programs are not the same. A program, or software, is more than its algorithms.
Algorithms alone do not make software computable. In order to compute, the
source code must be transformed into an executable file, which is not a one-step
process. The transformation usually follows multiple steps, which include
translation as well as the involvement of other software programs, such as
compilers and linkers. The “object file” created by the compiler is an
intermediate form and not itself directly executable. In order for a program to be
executed, another device called a linker must combine several object files into a
functional program of executable (or “.exe”) files.7 Software is quite literally the
gathering or assembling of different code files into a single “executable.” While
software might appear to be a single entity, it is fundamentally layered and dependent
on a myriad of different relations and devices in order to function. To compute
information effectively, then, algorithms are based on particular representations and
structures of data. Despite the mutual interdependence of algorithms and data
structures, “we can treat the two as analytically distinct” (Gillespie, 2014: 169).
Given that my primary interest lies in the operations performed on data—what
Wirth defines as algorithms—and the social and cultural implications those operations
have, the focus in this book will almost exclusively be on algorithms.
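Wirth’s point about the interdependence of algorithms and data structures can be made concrete with a small, generic example of my own (not Wirth’s): binary search only works because the underlying data structure, a list, is kept in sorted order; the same values stored in arbitrary order would defeat the algorithm.

    from bisect import bisect_left

    # The algorithm (binary search) presupposes a particular data structure:
    # a list whose elements are already sorted.
    years = [1990, 1994, 2001, 2008, 2013, 2018]

    def contains(sorted_list, value):
        """Binary search: repeatedly halve the interval in which the value can sit."""
        i = bisect_left(sorted_list, value)
        return i < len(sorted_list) and sorted_list[i] == value

    print(contains(years, 2013))  # True
    print(contains(years, 2000))  # False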
While no agreed-upon definition of algorithm exists, a few aspects have been
described as important characteristics. For Donald Knuth (1998), who has
written one of the most important multivolume works on computer
programming, algorithms have five broadly defined properties: finiteness,
definiteness, input, output, and effectiveness. What is generally asked of an algorithm
is that it produce a correct output and use resources efficiently (Cormen, 2013).
From a technical standpoint, creating an algorithm is about breaking the problem
down as efficiently as possible, which implies a careful planning of the steps to be
taken and their sequence.
Take the problem of sorting, which is one of the most common tasks
algorithms are deployed to solve. A given sorting problem may have many
solutions; the algorithm that eventually gets applied is but one possible solution.
In other words, an algorithm is a manifestation of a proposed solution. Just as
there are multiple ways of sorting a bookshelf in some well-defined order (for
example, alphabetically by the author’s surname, by genre, or even by the color of
the book jacket), different sorting algorithms (e.g., selection sort, merge sort, or
quicksort) can be applied to the same task. Anyone who has
ever tried to arrange a bookshelf according to the color of the book jacket will
probably be able to understand how this specific organizational logic might have
an aesthetically pleasing effect but also come with the added practical challenge of
finding a particular book by a certain author (unless you have an excellent color
memory). This is to say that algorithms, understood as forms of organizational
logic, come with specific affordances that both enable and constrain. To use a
well-known insight from science and technol- ogy studies (STS), such orderings
are never neutral. Algorithms come with certain assumptions and values about the
world on which they are acting. Compared to the archetypical example used in STS
about the inherent politics of engineering and urban planning in the example of
Robert Moses’ low-hanging bridges described by Langdon Winner (1986), the
politics and values of algorithms are a really blatant example of that same logic as
they explicitly decide, order, and filter the world in specific ways. Why some
people still think of algorithms as neutral, given they are a relatively easy
example of an STS value-laden technology, is a different question altogether.
Ultimately, algorithms and the models used have consequences. In the example of the bookshelf, a color-coding scheme might, for instance, make me less inclined to read books by the same author, simply because the scheme never prompted me to think or decide in terms of genre or author in the first place.
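Because different orderings of the same items simply follow from different sort keys, the point can be made concrete in a few lines of code. The following sketch is purely illustrative; the book records and field names are invented, not taken from the text:

books = [
    {"author": "Woolf", "genre": "fiction", "jacket_color": "green"},
    {"author": "Arendt", "genre": "philosophy", "jacket_color": "red"},
    {"author": "Borges", "genre": "fiction", "jacket_color": "blue"},
]

# Three valid solutions to the same sorting problem, each encoding a
# different organizational logic for the same shelf of books.
by_author = sorted(books, key=lambda b: b["author"])                  # alphabetical by surname
by_genre = sorted(books, key=lambda b: (b["genre"], b["author"]))     # grouped by genre
by_color = sorted(books, key=lambda b: b["jacket_color"])             # pleasing to look at, hard to search by author

Each ordering answers the same instruction ("sort the shelf"), yet each affords different practices of finding and reading, which is the sense in which no ordering is neutral.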
1. Get as much data as you can and make sure it is of highest quality
2. Distill your data into signals that will be maximally predictive—a process called feature-engineering
3. Once you have the most awesome data and tools for feature engineering, keep raising the capacity of your algorithms. (Candela, 2016)
Machine learning, then, is about using data to make models that have certain features. Feature engineering, or the process of extracting and selecting the most important features from the data, is arguably one of the most important aspects of machine learning. While feature extraction is usually performed manually, recent advances in deep learning now embed automatic feature engineering into the modeling process itself (Farias et al., 2016). If the algorithm operates on badly drawn features, the results will be poor, no matter how excellent the algorithm is.
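As a purely illustrative sketch of what distilling raw data into “signals” can look like in practice, consider turning a hypothetical interaction log into two candidate features for a relevance model; the log format, field names, and feature choices below are invented for illustration:

from collections import defaultdict

# Hypothetical interaction log: (user, item, action, seconds_spent).
log = [
    ("ana", "post_1", "click", 5),
    ("ana", "post_2", "click", 90),
    ("ana", "post_2", "click", 60),
    ("ben", "post_1", "click", 2),
]

clicks = defaultdict(int)   # Feature 1: how often a user clicked an item.
dwell = defaultdict(int)    # Feature 2: how long a user spent with an item.

for user, item, action, seconds in log:
    if action == "click":
        clicks[(user, item)] += 1
        dwell[(user, item)] += seconds

# The same raw data yields different notions of importance: by click
# frequency the gap between post_1 and post_2 is small for ana; by dwell
# time, post_2 clearly dominates.

Which of these features a team chooses to optimize already embodies a judgment about what “relevance” should mean, a point the next paragraphs develop.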
How would feature engineering work in calculating such an abstract notion as
relevance? One option might be to consider the frequency and types of content on
which a specific user has clicked. However, we might also imagine a scenario in which
“most important” is not about the frequency of clicks but, rather, the time
spent watching or reading certain content. The point here is that the problem of determining what is “most important” depends on the data and the outcome
you want to optimize. As Gillespie suggests, “All is in the service of the model’s
understanding of the data and what it represents, and in service of the model’s
goal and how it has been formalized” (2016a: 20). The understanding of data and
what it represents, then, is not merely a matter of a machine that learns but also of
humans who specify the states and outcomes in which they are interested in the first
place.10 In the case
deployed. One of the simplest and fastest learning algorithms for this purpose is the
nearest neighbor (Domingos, 2015: 179). In order to determine whether a new image contains a face, the nearest neighbor algorithm works by finding the image most similar to it in Facebook’s entire database of labeled photos. “If” that most similar image contains a face, “then” the new image is presumed to contain a face as well. This approach is used, for example, in Facebook’s auto-tagging functionality. Given Facebook’s vast
data pool, the company’s engineers are able to train algorithms to detect faces in
ways that most photo-sharing sites cannot. Facebook’s artificial intelligence team
has recently developed a system called DeepFace, which is supposedly able to
identify faces at a 97.25% accuracy level, which is just slightly worse than the
average human score of 97.53% (Taigman et al., 2014). This system uses what is
called “deep learning,” a technique that currently constitutes one of the leading and
most advanced machine learning frameworks available. Deep learning is based on
neural networks, a model most widely used in image and speech recognition.
Modeled to emulate the way in which the human brain works, neural networks
“use different layers of mathematical processing to make ever more sense of the information they are fed” (Condliffe, 2015).15 In terms of image recognition, a system powered by neural networks, for example, would analyze pixel
brightness on one layer, shapes and edges through another layer, actual content
and image features on a third layer, and so on. Using different algorithms on every
layer to process the information, the system would gain a more fine-grained
understanding of the image.16
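The nearest neighbor logic described above can be sketched in a few lines. The feature vectors, labels, and decision rule here are toy values for illustration only, not Facebook's actual representation or pipeline:

def nearest_neighbor_label(query, labeled_examples):
    """Return the label of the stored example closest to the query vector."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labeled_examples, key=lambda ex: distance(ex["features"], query))
    return closest["label"]

# Hypothetical database of labeled images, reduced to toy feature vectors.
database = [
    {"features": [0.9, 0.8, 0.1], "label": "face"},
    {"features": [0.1, 0.2, 0.9], "label": "no_face"},
]

new_image = [0.85, 0.75, 0.2]
print(nearest_neighbor_label(new_image, database))  # prints "face"

If the most similar stored example is labeled "face", then the new image inherits that label: the if/then at work in the passage above.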
As recent cases of image recognition failures have shown, machine learning is not
without its problems, and there is much to be learned from cases in which
machine learning goes awry. For example, in May 2015, Google was accused of
having a “racist” algorithm when its newly launched Photos app tagged two black
people in a photograph as “gorillas” (Barr, 2015). A similar incident happened when
the photo-sharing service Flickr, powered by Yahoo’s neural network, labeled a
person as an “ape” due to the color of his skin. While incidents like these are
perfect examples of how the results of machine learning can be very problematic,
they are also good examples of how the media, which often neglect to explain why certain inferences were made, frequently report on algorithms, thus catering to an often shallow understanding of how these systems actually work. My hope is to show how we might better understand the power and politics of algorithms if we take a more holistic approach, acknowledging the complexity and multiplicity of algorithms, machine learning, and big data. Incidents such as
these failed image-recognition tasks point to a core concern about dealing with
algorithms, namely, the question of agency, responsibility, and accountability to which
I shall return in the next chapter. Suffice it to mention at this point the importance
of the training data as another element for understanding the possibilities and
limits of machine learning. As Barocas and Selbst point out, “what a model
learns depends on the examples to which it has been exposed” (2016: 10). This
raises an important question with regard to accusations of a “racist” algorithm:
What was the initial data set that Google
used to train its image recognition algorithms, and whose faces did it depict?
What generalizable signals/patterns was it picking up on?
Understood as sets of instructions that direct the computer to perform a specific
task, algorithms are essentially used to control the flow of actions and future events.
Sometimes, the flow of events is more or less known in advance, as in the case of an algorithm sorting a list into alphabetical order. More often than not in the world of machine learning, however, the outcome of events remains uncertain. Machine learning algorithms reduce this uncertainty by making predictions about the likelihoods of outcomes. Put differently, machine learning is about updating the estimated probability of some event happening as new information arrives. This is also the principle of Bayes’ theorem: the “simple rule for updating your degree of belief in a hypothesis when you receive new evidence” (Domingos, 2015: 144).17
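For readers who want the rule itself, the standard textbook formulation (not a quotation from this book) is

P(H | E) = P(E | H) P(H) / P(E)

where P(H) is the prior degree of belief in hypothesis H and P(H | E) the updated belief once evidence E has been observed. With invented numbers: if 1% of users are assumed to have the flu, and flu-related searches are nine times more likely among them than among everyone else, observing such a search raises the estimated probability of flu from 1% to roughly 8%.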
Following Wendy Chun, then, we might understand an algorithm as a “strategy,
or a plan of action—based on interactions with unfolding events” (2011: 126). This implies that algorithms do not simply change with the event but are always in
becoming since events are not static but unfolding. In the case of algorithmically
driven sites such as Facebook, users are crucial to the development and
maintenance of the underlying coding systems as they constantly feed the
system with new data. As Mukund Narasimhan, a software engineer at
Facebook, tellingly suggests: “Everything in Facebook is a work in progress.” The
models Facebook uses to design the system are evolving because the data is
changing. This means that the exact ways in which the algorithms work are also constantly tweaked by employees because everything else changes (Narasimhan, 2011).
Algorithms do not simply change with the event; they also have the ability to
change the event. In the era of big data and data mining, algorithms have the
ability performatively to change the way events unfold or, at the very least,
change their interpretation. A good example of this is the failure of Google Flu Trends, which persistently overestimated flu prevalence between 2011 and 2013. Often heralded as the prime example of what big data could do, Google Flu Trends was designed to predict outbreaks of flu before they happened, based on mining data from Google’s vast troves of search queries. According to Google, they “found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms” (Walsh, 2014). However, in February 2013, Flu Trends made headlines because it turned out that it had predicted “more than double the proportion of doctor visits for influenza-like illness than the Centers for Disease Control and Prevention,” whose surveillance data had served as the de facto benchmark until then (Lazer et al., 2014: 1203). The fact that the system was dependent upon Google’s search algorithms played a significant part in skewing the results for flu trends. By recommending
search terms to its users through its autocomplete feature, Google itself was
producing the conditions it was trying to merely describe and predict. As
Walsh (2014) put it: “If the data isn’t reflecting the world, how can it predict what
will happen?”
studies (Fuller, 2008). During the past decade or so, social scientists and humani-
ties scholars have called for an expanded understanding of code that extends signif-
icantly beyond its technical definitions (Berry, 2011; Fuller, 2008; Kitchin &
Dodge, 2011; Mackenzie, 2006). Software studies can be understood as a cultural
studies approach to the “stuff of software” (Fuller, 2008), where what counts
as “stuff” remains relatively open to interpretation. This conceptual openness is, perhaps, what distinguishes software studies from computer science approaches
since what counts as software is seen as a “shifting nexus of relations, forms and
practices” (Mackenzie, 2006: 19). This is not to say that scholars interested in
software as a cultural object of study dismiss the importance of materiality or
technical operations, quite the contrary. While calling for an expanded understanding of the meaning and significance of software as part of everyday
life, scholars within software studies also maintain a high level of sensibility
toward the technical and functional dimensions of software. In his book Protocol,
Alexander Galloway makes the case that “it is not only worthwhile, but also
necessary to have a technical as well as theoretical understanding of any given technology” (2004: xiii). In order to understand power relationships at play in
the “control society,” Galloway argues that it is crucial to start with the questions
of how technology (in this case, protocols such as HTTP) works and who it works
for. In fact, not addressing the technical details of software or algorithms as part of a
sociological or critical inquiry is seen as problematic (Rieder, 2017: 101). While
media scholars differ in their assessment of how necessary coding skills are to a
critical understanding of software and algorithms, some technical knowledge is
clearly desirable.19
If an expanded understanding of software and algorithms does not necessarily
mean discarding technical details from analysis, what does it mean? One
view, inspired by actor network theory and similar perspectives, takes software and
algorithms to be complex technical systems that cannot merely be described as technological alone because their ontological status remains unclear. As Mackenzie
argues, software has a “variable ontology,” suggesting “that the essential nature of the
entity is unstable” (2006: 96). The variable ontology of software means that
“questions of when and where it is social or technical, material or semiotic cannot be
conclusively answered” (2006: 96). Similarly, Gillespie suggests that “‘Algorithm’
may, in fact, serve as an abbreviation for the sociotechnical assemblage that
includes algorithm, model, target goal, data, training data, application, hardware—
and connect it all to a broader social endeavour” (2016a: 22). Moreover, as Seaver
points out, “algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning,
swapping out parts and experimenting with new arrangements” (2013: 10).
Another way of answering the question of what algorithms are beyond their
technical definition is to see them as inscriptions of certain ideologies or particular
ways of world-making (Goodman, 1985). For example, scholars have analyzed the
algorithmic logic underpinning search engines and high-frequency trading in
terms of how they can be said to
the historical and cultural contexts of algorithms intersect with the social history of
calculation and ordering of various types, including the history and politics of sta-
tistical reasoning and large numbers (Desrosieres & Naish, 2002; Foucault, 2007;
Hacking, 2006; Power, 2004); practices of quantification, numbering and valua-
tion (Callon & Law, 2005; Espeland & Stevens, 2008; Verran, 2001); the cultural
logic of rankings and ratings (Espeland & Sauder, 2007; Sauder & Espeland,
2009); and ideas of critical accounting and auditing (Power, 1999; Strathern,
2000b). Moreover, contemporary concerns about the power of algorithms can
also be seen as an “extension of worries about Taylorism and the automation of
industrial labor” (Gillespie, 2016a: 27) or “the century long exercise by media
industries to identify (and often quantify) what’s popular” (Gillespie, 2016b).
Clearly, algorithms need to be understood as part of a historical lineage. Indeed,
we might say that the “avalanche of numbers” (Hacking, 1991, 2015), which
occurred as nation-states started to classify and count their populations in the 19th
century, forms a general backdrop for an understanding of algorithms in the era of
big data. This specific historical lineage, however, if we were to carry it all the
way through, would also require us to travel a complex and disjunctive route via
the social history of census, punch cards, bureaucratization and the rise of
industrial society and factory work, wartime machinery and the rise of
computers, databases and the automated management of populations, before
arriving at the contemporary milieu of “making up people” (Hacking, 1999)
through data mining and machine learning techniques. This is the genealogy that
tells the story of managing populations—most notably theorized by Michel
Foucault in his Collège de France lectures (Foucault, 2007, 2008). As Foucault
(2007: 138) points out, statistics means etymologically “knowledge of the state.”
The task for those who govern, then, is to determine what one needs to know in
order to govern most effectively and how that knowledge is to be organized. If the
collection of data through the numbering practices of statistics and the corporate
data mining of recent years makes algorithmic operations possible, the
algorithms themselves (understood in the technical sense) give shape to
otherwise meaningless data. While the significant power and potential of big data (the quantity of information produced by people, things, and their
interactions) cannot be denied, its value derives not from the data themselves but
from the ways in which they have been brought together into new forms of
meaningfulness by the associational infra- structure of the respective software
systems in which algorithms play a key role.
Power of Algorithms
The starting premise for the book is the observation that algorithms have become a
key site of power in the contemporary mediascape. During the past decade or so,
social scientists and humanities scholars have started to explore and describe the
increased presence of algorithms in social life and the new modes of power that
these new “generative rules” entail (Lash, 2007; Beer, 2009; Cheney-Lippold, 2011;
Gillespie, 2014; Diakopoulos, 2015). As Scott Lash argues, “[a] society of ubiquitous
media means a society in which power is increasingly in the algorithm” (2007:
71). To say that algorithms are powerful, have power, animate, exert, or produce
power needs some qualification and explanation. The concept of power is one of
the most contested and important terms there is. While it is difficult to provide a short
answer to the fundamental question of what power is, Michel Foucault’s
scholarship provides one of the most comprehensive avenues for a nuanced and multifaceted understanding of the term. For Foucault, power was never just one thing but, fundamentally, about different forms of relations. Throughout his career, Foucault changed and modified his ideas of what power is, coining analytically distinct conceptions of power sensitive to specific contexts,
institutions, objects of knowledge, and political thought.21 Generally speaking,
Foucault identified three levels of power relations, which he termed strategic
games between liberties, domination, and governmental technologies. As
Foucault elaborates in an interview:
Like the immanent form of power described by Foucault, Scott Lash sees new
forms of capitalist power as “power through the algorithm” (Lemke, 2012: 17,
emphasis mine). This is a form of power that works from below, not a power-over as it were. As such it becomes indistinguishable from life itself by sifting into “the capillaries of society” (Lash, 2007: 61). In his seminal article on algorithmic
power in the age of social media, Beer builds on Lash’s notion of “post-
hegemonic” forms of power to argue that the algorithms underpinning social
media platforms “have the capacity to shape social and cultural formations and
impact directly on individual lives” (2009: 994). Drawing on the example of
Last.fm, a music-sharing platform, Beer argues that the power of the algorithm
can be seen in the ways that the platform provides users with their own taste-specific online radio station. Power, in other words, stems from the algorithm’s
capacity to “shape auditory and cultural experiences” (2009: 996). This notion of
power through the algorithm does not treat power as hierarchical or one-
directional but, rather, as immanent to life itself. Algorithmic power in this sense
can be understood as a force, energy, or capacity of sorts (Lash, 2007). This way
of understanding power as immanent is, perhaps, most famously reflected in
Foucault’s notion of power as an omnipresent feature of modern society in which
power relations are not seen as repressive but as productive (Lemke, 2012:
19).22
Others have argued that algorithms have an intrinsic power to regulate social lives
through their “autonomous decision-making” capacity (Diakopoulos, 2015: 400).
Here, power seems to be located in the mechanics of the algorithm. According to
Nick Diakopoulos, algorithmic power stems from the “atomic decisions that algorithms make, including prioritization, classification, association, and filtering” (2015: 400, emphasis in the original). As such, algorithms exert power by making decisions about the ways in which information is presented, organized, and indicated as
being important. As filtering devices, algorithms make decisions about what in-
formation to include and exclude, constituting a new form of gatekeeping
(Bozdag, 2013; Helberger et al., 2015; Napoli, 2015).23 In contrast to the rhetoric
surrounding the early days of the Web, which often heralded the Internet’s potential
for doing away with hierarchy by giving everyone a voice, scholars now worry that
algorithms are assuming gatekeeping roles that have a significant effect on the way
public opin- ion is formed ( Just & Latzer, 2016). Worries about algorithms
diminishing the democratic potential of the public sphere—for example, by
creating filter bubbles (Pariser, 2011; Zuiderveen et al., 2016) or manipulating
what information is shown to the public in the first place (Tufekci, 2015)—are
frequently framed in more traditional terms as forms of domination or power
seen as hierarchical and top-down. In this sense, algorithms are seen as having
power over somebody or something. As Frank Pasquale writes in his recent book
The Black Box Society, search engines and social networks, by way of their
capacity to include, exclude, and rank, have “the power to ensure that certain
public impressions become permanent, while others remain fleeting” (2015: 14).
In Foucault’s terms, this would be power seen as a form
of political, social and economic domination, where one entity prevents another
from seeing or doing something. As Pasquale puts it, “we have given the
search sector an almost unimaginable power to determine what we see, where
we spend, how we perceive” (2015: 98). However, framing algorithms as having
power over someone or something risks losing sight of the human decision-
making processes and programming that precede any algorithmic operation. When Pasquale notes that we have given power to the search sector, he does not simply mean the algorithms running Google. Yet, there is a tendency to use
algorithm as a placeholder for a much more distributed form of power. As I argue in
this book, critical inquiry into algorithmic forms of power and politics should
always extend beyond any claim of algorithms having power. As I will elaborate in the
next chapter, the question of whom or what power most obviously belongs to cannot
be conclusively answered. Instead, what can be examined are algorithms in
practice, the places and situations through which algorithms are made present
and take on a life of their own.
Claims about algorithmic decision-making, which are often accompanied by
subsequent calls for greater accountability and regulation (Diakopoulos, 2015;
Pasquale, 2015), are part of a much longer continuum of concerns about the ways in
which technology can be said to have politics. According to this view, technology
is never neutral but always already embedded with certain biases, values, and assump-
tions. At least since Winner’s (1986) influential arguments about Moses’ low-
hanging bridges, scholars have followed suit in advocating the necessity to consider
the politics of artifacts, particularly by attending to the values in design and
the moral import of those design choices.24 Introna and Wood (2004), for
example, argue that facial recognition systems are political in the sense that these
algorithmic forms of surveillance carry certain assumptions about designated
risks in the very design of the system. As Introna and Wood explain, the politics is
mostly implicit, “part of a mundane process of trying to solve practical problems”
(2004: 179). Every artifact implicating human beings, including algorithms, always carries certain assumptions and values about how the world works. Take the
design of an ATM: “if you are blind, in a wheelchair, have problem remembering,
or are unable to enter a PIN, because of disability, then your interest in accessing
your account can be excluded by the ATM design” (2004: 179). The point is not
that designers of ATMs consciously or deliberately exclude people in
wheelchairs from using an ATM but that the ways in which systems are designed
always involve certain exclusionary practices that sometimes appear to be a more or
less coherent and intentional strategy despite nobody “authoring” it as such (Introna & Nissenbaum, 2000).
The question of intentionality—whether someone explicitly authors an artifact
to function in a particular way or whether its functioning is emergent—gets even
more complicated with respect to machine learning algorithms. These algorithms
are able to improve performance over time based on feedback. In addition, machine
learning algorithms “create an internal computer model of a given phenomenon
that can be generalized to apply to new, never-before-seen examples of that
phenomenon”
The most common problem in algorithm design is that the new data turns out
not to match the training data in some consequential way [. . .]
Phenomena emerge that the training data simply did not include and
could not have anticipated [. . .] something important was overlooked as
irrelevant, or was scrubbed from the training data in preparation for the
development of the algorithm. (2016a: 21)
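A deliberately contrived sketch can make this point about training data concrete: using the same nearest-neighbor logic sketched earlier, a model exposed only to certain kinds of examples forces anything new into the categories it already knows. The data and labels below are invented for illustration:

# Hypothetical training set, reduced to toy feature vectors; note that no
# example of a third category exists anywhere in the training data.
training = [
    {"features": [0.9, 0.1], "label": "cat"},
    {"features": [0.8, 0.2], "label": "cat"},
    {"features": [0.1, 0.9], "label": "car"},
]

def predict(query):
    """Label a new example with the label of its nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: distance(ex["features"], query))["label"]

# A genuinely new kind of object has no label of its own available, so it is
# forced into whichever known class happens to be nearest.
print(predict([0.7, 0.3]))  # prints "cat", however inappropriate that label is

Nothing in the procedure is malicious; the failure is inherited from what the training data did and did not contain, which is the sense in which “what a model learns depends on the examples to which it has been exposed” (Barocas and Selbst, 2016: 10).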
Indeed, as Introna and Wood contend, it seems “we cannot with any degree of certainty separate the purely social from the purely technical, cause from effect, designer from user, winners from losers, and so on” (2004: 180). This does not mean, however, that taking a “value in design” approach in the study of machine learning algorithms is not a viable option. To the degree that algorithms are, in fact, capable of labeling black people as “gorillas” or of scoring black people as predominantly high-risk for committing a future crime (Angwin et al., 2016), they ought to be scrutinized for the kinds of values and assumptions underlying their function. While many scholars and policymakers have worried about the discriminatory capabilities of algorithms, calling for more transparency on the part of the companies involved is only part of the solution. What many of the “algorithms gone awry” cases suggest is the importance of seeing the results as reflections of more fundamental societal biases and prejudices. This is to say that, if the machine that is supposed to compute the likelihood of future crimes is fed statistical data tainted by centuries of racial bias materialized in police reports, arrests, urban planning, and the juridico-political systems, it would be misleading only to talk about the power of algorithms in producing such risk assessments. Instead of locating predictive power in the algorithm (narrowly defined), we may think of algorithms as what Foucault calls governmental technologies (Lemke, 2001; Miller & Rose, 1990).
Foucault used the notions of government and governmentality to analyze the forms
of power and politics involved in the shaping and structuring of people’s fields of
possibility. While Foucault used the terms government and governmentality some-
what interchangeably, especially in his later writings, “government” broadly refers to
the “conduct of conduct,” whereas governmentality refers to the modes of
thought, or rationalities, underlying the conduct of conduct (Foucault, 1982;
2007; 2010; Lemke, 2001; Rose, 1999).25 As Lemke remarks, “the problematic of
government redirects Foucault’s analytics of power” (2012: 17). It is a notion of
power that moves beyond relations of consent and force and, instead, sees power
relations in the double
sense of the term “conduct” as both a form of leading and “as a way of
behaving within a more or less open field of possibilities” (Foucault, 1982: 789).
Government is about “the right disposition of things” (Foucault, 2007: 134),
which concerns “men in their relationships, bonds, and complex involvements
with” various things such as resources, environment, habits, ways of acting and
thinking (Foucault, 2007: 134).26 By way of example, Foucault asks what it would
mean to govern a ship:
It involves, of course, being responsible for the sailors, but also taking care
of the vessel and the cargo; governing a ship also involves taking winds,
reefs, storms, and bad weather into account. What characterizes govern-
ment of a ship is the practice of establishing relations between the
sailors, the vessel, which must be safeguarded, the cargo, which must be
brought to port, and their relations with all those eventualities like
winds, reefs, storms and so on. (Foucault, 2007: 135)
of government to direct the flow of information and the practices of users “to this
or that region or activity” (Foucault, 2007: 141).
To conceive of algorithms as technologies of government in the Foucauldian
sense, however, is not to restrict analysis of algorithms to the material
domain. As Lemke writes, “an analytics of government operates with a concept of
technology that includes not only material but also symbolic devices” (2012: 30). Discourses and narratives are “not reduced to pure semiotic propositions; instead they are regarded as performative practices” and part of
the mechanisms and techniques through which the conduct of conduct is
shaped. Throughout the book, the different ways in which algorithms operate as
technologies of govern- ment will be both explicitly and implicitly addressed. In
chapter 4, Facebook’s news feed algorithm is explicitly framed as an
architectural form that arranges things (relationships between users and objects)
in a “right disposition.” Chapter 5 highlights the ways in which narratives and
beliefs about algorithms constitute a double articulation, shaping both the
conduct of individuals and the algorithm through feedback loops. Chapter 6,
meanwhile, considers how algorithms are used to govern journalism in a digital
age, looking at how algorithms not only enact a particular understanding of journalism but also occupy the minds of news professionals, making them act and react in certain ways.
complex, as Bogost puts it), not-so-sloppy descriptions are called for. When
platforms such as Twitter and Instagram publicly announce that they are
introducing an “algorithmic timeline” or when the media are repeatedly
reporting on incidents of “algorithmic discrimination” and bias, there is also an
increasing need to examine not just what the term means but, more importantly,
how and when “algorithms” are put to use as particularly useful signifiers and
for whose benefit. In the way Katherine Hayles (2005: 17) sees computation
as connoting “far more than the digital computer” and “not limited to digital
manipulations or binary code,” this book sees algorithms as much more than
simply step-by-step instructions telling the machine what to do.29 When a London
bookstore labels a bookshelf containing employees’ book recommendations their
“human algorithm” or when the BBC starts a radio show by announcing that
“humans beat robots,” there is clearly something about the notion of an algorithm that informs the way we think and talk about contemporary culture and society, something that is not strictly confined to computers.30
Algorithms, I want to suggest, constitute something of a cultural logic in
that they are “much more than coded instruction,” drifting into the ways in which
people think and talk about everything from the economy to knowledge
production to culture. Algorithms exist on many scales, ranging from the operationality of software to society at large. However, in this book, I am less interested in asserting what algorithms are, as if they possess some essence that
can be clearly delineated. What is of interest is the “ontological politics” of
algorithms, the sense that “conditions of possibility are not given” but shaped in
and through situated practices (Mol, 1999: 75). Algorithms are seen as multiple.
This is to say that there is no one way in which algorithms exist as a singular
object. By positing the multiple nature of algorithms, the intention is to take their
manyfoldedness seriously.31
Drawing on Mol’s work in The Body Multiple (2002) in which she argues
that there is no essence to what the disease atherosclerosis is, only ever different versions that are enacted in particular settings, I argue that algorithms
never materialize in one way only. This does not mean, however, that there cannot be a
singular object called an algorithm or that algorithms are nothing but relations.
Like the diseases studied by Mol or the aircraft studied by John Law, algorithms
oscillate between multiplicity and singularity. As Law puts it: “Yes, atherosclerosis.
Yes, alcoholic liver disease. Yes, a water pump [. . .] And yes, an aircraft. All of these are
more or less singular but also more or less plural” (2002a: 33). Similarly, we might
say: Yes, a neural network. Yes, a code written in C++. Yes, PageRank. Yes, a
conversation about how we organize our lives. All of which are more or less
singular, more or less plural. In contrast to Mol and Law, however, this book is not
about a specific object such as one specific aircraft (Law is specifically concerned
with the military aircraft TSR2), disease (i.e., anemia), or algorithm (i.e.,
PageRank). The singularity and multiplicity in question are the result of analytical
cuts. Sometimes, the algorithm might appear as a singular entity—for example, in
chapter 4 on the Facebook news feed algorithm.
At other times, the algorithm is less identifiable; yet, the question of what and
where it is looms large. What is at stake, then, in addressing the ontological
politics of algorithms is not so much an understanding of what exactly the
algorithm is or the moments in which it acts (although this is important, too) but, rather, the moments in which algorithms are enacted and made to matter as part of specific contexts and situations.
3
Neither Black nor Box
(Un)knowing Algorithms
Pasquale’s The Black Box Society, Dewandre (2015) instructively connects the recent
calls for greater algorithmic transparency to what feminist scholar Susan H. Williams
has called the “Enlightenment Vision.” For Williams, “the liberal model of auton-
omy and the Cartesian model of truth are deeply connected. The autonomous lib-
eral is the Cartesian knower” (2004: 232). According to the Enlightenment Vision,
transparency is what makes rationality, autonomy, and control possible.
When something is hidden, the Enlightenment impetus says we must reveal it
because knowing leads to greater control.1 But what if the power believed to
emanate from algorithms is not easily accessible simply because the idea of origins and sources of action that comes with the Cartesian assumption of causality is problematic to begin with? By thinking around the cusp of sensibility and
knowledge, I take up in this chapter the challenge of formulating an
epistemological stance on algorithms that is committed to the notion of algorithm
as multiple introduced in the previous chapter. That is, how to know algorithms
when the algorithm is both multiple, “concealed behind a veil of a code,” and
seemingly “impenetrable”?
In this chapter, I use the concept of the black box as a heuristic device to
discuss the nature of algorithms in contemporary media platforms and how we, as
scholars and social actors interested in them, might attend to algorithms despite, or
even because of, their seemingly secret nature. Moving beyond the notion that
algorithms are black boxes, this chapter asks, instead, what is at stake in framing
algorithms in this way and what such a framing might possibly distract us from
asking. The questions that I want to pose in going forward have to do with the
limits of using the black box as a functional analogy to algorithms. To what extent
are algorithms usefully considered as black boxes? What could we possibly see if efforts to illuminate algorithms are directed not at the source code or details of their
encoded instructions but elsewhere, and what would this elsewhere be? The
chapter unfolds as follows: First, I address the trope of algorithms as black boxes,
arguing that algorithms are neither as black nor as boxed as they are sometimes
made out to be. Next, I unpack this claim more by conceptualizing algorithms in
terms of a relational ontology. This implies a shift in focus from the question of
what algorithms are to what they do. The argument is made that, in order to address
the power and politics of algorithms, questions concerning the agency of
algorithms should be focused not on where agency is located but when. Finally, I
point to three methodological tactics around which to orient ways of making
sense of algorithms.
Black Box
THE PROBLEMATIC OF THE UNKNOWN
The concept of the black box has become a catch-all for all the things we
(seemingly) cannot know. Referring to an opaque technical device about which
only the inputs and outputs are known, the figure of the black box is linked to the
history of
secrecy—to trade secrets, state secrets, and military secrets (Galison, 2004:
231).2 The black box is an object whose inner functioning cannot be known—at
least not by observation, since the blackness of the box obscures vision.
Historically, the black box refers, quite literally, to a physical black box that
contained war machinery and radar equipment during World War II. In tracing the
genealogy of the black box, von Hilgers (2011) describes how the black box initially
referred to a “black” box that had been sent from the British to the Americans as
part of the so-called Tizard Mission, which sought technical assistance for the
development of new technologies for the war effort. This black box, which was
sent to the radiation lab at MIT, contained another black box, the Magnetron.
During wartime, crucial technologies had to be made opaque in case they fell into
enemy hands. Conversely, if confronted with an enemy’s black box, one would
have to assume that the box might contain a self-destruct device, making it
dangerous to open. As a consequence, what emerged was a culture of secrecy or
what Galison (1994) has termed “radar philosophy,” a model of thought that
paved the way for the emergence of cybernetics and the analysis and design of
complex “man-machine” systems. The black box readily became a metaphor for the
secret, hidden, and unknown. In everyday parlance, everything from the brain to
markets to nation-states is now conceptualized as a black box. Algorithms are no
different.
When algorithms are conceptualized as black boxes, they are simultaneously
rendered a problem of the unknown.3 As unknowns, algorithms do not simply
signify lack of knowledge or information. The black box notion points to a
more specific type of unknown. What the pervasive discourses on transparency
and accountability surrounding algorithms and trade secrecy suggest is that
algorithms are considered knowable known unknowns (Roberts, 2012)—that is,
something that, given the right resources, might be knowable in principle. All that is
needed, according to popular discourse, is to find a way of opening up the black
box. Indeed, a key mantra in science and technology studies, “opening up the
black box,” implies disentangling the complexities and work that goes into
making a technical device appear stable and singular.4 The impetus for opening
up the black box can also be seen in calls for greater transparency and
accountability characteristic of the “audit society” (Power, 1999). In a climate of
auditing, organizations are increasingly asked to be transparent about their dealings
and ways of operating. Universities, for example, are asked to produce more and
more paper trails, including assessment records, numbers of research outputs,
and lists of funding received. As Marilyn Strathern (2000a) puts it, axiomatic
value is given to increased information. Today, scholars have extended the notion
of auditing to the field of algorithms, arguing for the need to conduct audit studies
of algorithms in order to detect and combat forms of algorithmic discrimination
(Sandvig et al., 2014). While such efforts are certainly admirable, what I want to
examine critically as part of this chapter is precisely the hope of intelligibility
attached to calls for more transparency. There are many ways of knowing
algorithms (broadly understood) besides opening the black box and
reading the exact coded instructions that tell the machine what to do. Indeed, while
some things are “fundamentally not discoverable” (von Hilgers, 2011: 42), the wide-
spread notion of algorithms as black boxes constitutes something of a red herring—
that is, a piece of information that distracts from other (perhaps, more pressing)
questions and issues to be addressed. The metaphor of the black box is often too
readily used as a way of critiquing algorithms without critically scrutinizing the met-
aphor itself. What is gained and what is lost when we draw on the metaphor of
the black box to describe algorithms? To what extent does the metaphor work at
all?
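One way to picture the audit studies mentioned above (Sandvig et al., 2014) is as systematic probing of inputs and outputs without any access to the internals. The scoring function below is a stand-in for any opaque system and is entirely hypothetical, as are the attribute names and values:

def opaque_score(application):
    """Stand-in for a proprietary scoring system; the auditor can observe
    only what goes in and what comes out."""
    score = 600 + 2 * application["income_thousands"]
    if application["zip_code"] in {"00001", "00002"}:  # placeholder ZIP codes
        score -= 150
    return score

# Audit tactic: hold everything constant except one attribute, then compare.
base = {"income_thousands": 50, "zip_code": "99999"}
probe = {"income_thousands": 50, "zip_code": "00001"}

print(opaque_score(base), opaque_score(probe))  # prints 700 550

A systematic gap between otherwise identical inputs suggests that the varied attribute matters to the system. The auditor never opens the box; the inference rests entirely on paired inputs and outputs, which is also why such audits can show that something is going on without explaining why.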
To say that something is a black box may not simply be a statement of
facts. As I will discuss in this chapter, declaring something a black box may serve
many different functions. Unlike the Socratic tradition, which sees the unknown
as a fundamental prerequisite to wisdom, the black box renders the unknown an epistemological problem. The unknown—including the black box—is deemed problematic because it obscures vision and, ultimately, undermines the Enlightenment imperative sapere aude: “ ‘dare to know,’ ‘have the courage, the
audacity, to know’” (Foucault, 2010: 306). For Kant, the Enlightenment philosopher
par excellence, not knowing is characterized by immaturity, the notion that people
blindly accept someone else’s authority to lead (Foucault, 2010: 305). Alas, if
something is willfully obscured, the task for any enlightened mind would be to
find ways of rendering it visible. Critics of the Enlightenment vision have often
raised flags against the notion of exposing or decoding inner workings, as if there were
a kernel of truth just waiting to be revealed by some rational and mature mind (a
mind that is quite often seen as specifically male) (Chun, 2011; Foucault, 1980;
Harding, 1996). In the Kantian tradition, the audacity to know is not just
explicitly linked to rationalism but to the quest for the conditions under which
true knowledge is possible. On the face of it, then, black boxes threaten the very
possibility of knowing the truth.
While threatening access to a seemingly underlying truth, the very concept of
the black box also denotes a device that releases rational subjects from their
obligation, as Kant would see it, to find a “way out” of their immaturity (Foucault,
2010: 305). As Callon and Latour suggest, “[a] black box contains that which no
longer needs to be reconsidered” (1981: 285). In discussions on technical or
commercial black boxes and transparency, one of the arguments often raised in
defense of the black box is the necessity to keep details closed and obscured. In writing
about trade secrets, historian of science Peter Galison (2004) makes the point
that secrecy is legitimized as a form of “antiepistemology,” knowledge that must be covered and obscured in order to protect a commercial formula or
the like.5 Indeed, it seems that the entire figure of the black box is premised on
the notion of antiepistemology. Without secrecy, systems would cease to work properly. From a more technical standpoint, making the inner workings obscure helps to deter attempts at gaming the system. As Kroll et al. write,
“Secrecy discourages strategic behavior by participants in the system and prevents
violations of legal restrictions on disclosure of data” (2016: 16). Finally, from an
engineering point of view, concealing or
UNKNOWING ALGORITHMS
While it is true that proprietary algorithms are hard to know, it does not make them
unknowable. Perhaps paradoxically, I want to suggest that, while algorithms are not
unknowable, the first step in knowing algorithms is to un-know them. By unknow-
ing, I mean something akin to making the familiar slightly more unfamiliar. In the
previous chapter, we saw how algorithms mean different things to different
stakeholders and people from different disciplines. This is not to say that a
computer scientist needs to dismiss her knowledge of algorithms or that social
scientists should somehow dismiss their ways of seeing algorithms as objects of
social concern. “Unknowing” also does not imply “blackboxing” the black box even more. Rather, “unknowing” means seeing differently, looking elsewhere, or not even looking at all. As much as calls for transparency attempt to make the object of concern more visible, visibility, too, may conceal. As Strathern notes, “there is nothing innocent about making the invisible visible” (2000a: 309).
Too much information may blind us from seeing more clearly and, ultimately,
from understanding (Strathern, 2000a; Tsoukas, 1997). Unknowing does not
foreclose knowledge but challenges it. In this sense, the kind of unknowing I have
in mind can be likened to Bataille’s notion of “nonknowledge” as “a form of excess
that challenges both our thinking and our ethics” (Yusoff, 2009: 1014). For
Bataille, nonknowledge is not something that must be eradicated but embraced
as an enriching experience (1986, 2004).7 Baudrillard develops the distinction
between “obscenity” and “seduction” in response to Bataille’s thinking about the knowledge/nonknowledge divide, stating how “nonknowledge is the seductive and magical aspect of knowledge” (Juergenson & Rey, 2012: 290). In other words, we might think of “unknowing algorithms” as a form of distancing or a
form of engaging with the seductive qualities of algorithms that cannot always
be explained in fully rational terms. On a practical level, unknowing algorithms
may simply imply opening the black box of one’s own assumptions about
knowing algorithms. For a computer scientist, this may imply knowing more
about the social dynamics impinging on the data that algorithms are designed to
process. Conversely, for social scientists and humanities scholars, it might imply
knowing more about how computers and algorithms make decisions (Kroll et
al., 2016: 14). On a more theoretical and conceptual level, unknowing
algorithms implies confronting the limits of the metaphor of the black box itself.
Grappling with what Karin Knorr-Cetina calls negative knowledge,
the task is to identify the limits and imperfections of the black box metaphor.
For Knorr-Cetina:
THE EVENTFULNESS OF ALGORITHMS
Theoretically, this ontological shift is indebted to diverse but interrelated per-
spectives, including actor-network theory and post-ANT (Latour, 2005; Mol,
1999; Law, 1999), process-relational philosophy (Deleuze & Guattari, 1987;
Simondon, 2011; Whitehead, 1978), agential realism (Barad, 2003; 2007),
and new materialism (Bennett et al., 2010; Braidotti, 2006).9 While these
perspectives do not represent a homogeneous style of thought or a single
theoretical position, even among the thinkers referenced here as belonging to a
specific category, they all, in one way or another, emphasize a relational
ontology and the extension of power and agency to a heterogeneity of actors,
including non-humans or the more-than-human.10
In arguing for an understanding of algorithms as eventful, I am drawing on
a Whiteheadian sensibility that puts emphasis on processes of becoming rather
than being. For Whitehead (1978), actual entities or actual occasions (such as
algorithms) are combinations of heterogeneous elements (or what he calls
prehensions).11 Actual entities are only knowable in their becoming as opposed to
their being. As Whitehead suggests, “how an actual entity becomes constitutes
what that actual entity is. Its ‘being’ is constituted by its ‘becoming’. This is the
‘principle’ of process” (1978: 23). This view of entities as processes breaks with the
more traditional view of entities as substance and essence. Indeed, “‘actual entities’
are the final real things of which the world is made up. There is no going behind
actual entities to find anything more real” (Whitehead, 1978: 18). This has
important consequences for the analytical treatment of algorithms, since there is
nothing “more real” behind the ways in which they actualize to form novel forms of
togetherness.12 It is not enough simply to state that algorithms are eventful,
understood as constituents that co-become. Analytical choices have to be made as
to which relations and which actors to include in the study of actual entities. As
Mike Michael suggests, the value of studying processes or events “rests not so much
on their empirical ‘accuracy’ as on their capacity to produce ‘orderings and
disorderings’ out of which certain actualities (such as practices, discourses and
politics) emerge” (2004: 9, 19).13 What this means for an understanding of
algorithms is a shift in attention away from questions of what algorithms are to what
they do as part of specific situations.
At the most basic level, algorithms do things by virtue of embodying a command
structure (Goffey, 2008: 17). For the programmer, algorithms solve
computational problems by processing an input toward an output. For users,
algorithms primarily do things by virtue of assistance—that is, they help users
find something they are searching for, direct attention to the “most important”
content, organize information in a meaningful way, provide and limit access to information, or make recommendations and suggestions for what to watch or buy. The doing of algorithms can also be seen in the various ways they shape experience and make people feel a certain way—for example, in how they
animate feelings of frustration, curiosity, or joy
(see chapter 5). As Introna suggests, “the doing of algorithms is not simply the
execution of instructions (determined by the programmers)”; algorithms “also enact the objects they are supposed to reflect or express” (2016: 4). The notion of performativity that Introna draws on here posits that algorithms, by virtue of
expressing something, also have the power to act upon that world.14 When
algorithms become part of people’s everyday lives, incorporated into financial
markets or entangled in knowledge production, they do something to those
domains.
What algorithms do in these cases, however, cannot simply be understood by
opening up the black box, as it were. This is not because, as Winner (1993) suggests,
we risk finding it empty when we open the box; but, rather, as Latour (1999) re-
minds us, all black boxes are black boxes because they obscure the networks and
assemblages they assume and were constituted by. For Latour, all scientific
and technical work is made invisible by its own success through a process of
blackbox- ing. In a much-cited example of an overhead projector breaking down,
Latour sug- gests that the black box reveals itself as what it really is—not a
stable thing but, rather, an assemblage of many interrelated parts (1999: 183). When
a machine runs smoothly, nobody pays much attention, and the actors and work
required to make it run smoothly disappear from view (Latour, 1999: 34). For
Latour, the black box ultimately hides its constitution and character as a
network, while blackboxing refers to the process in which practices become
reified. If the metaphor of the black box is too readily used as a way of critiquing
algorithms, Latour’s notion of blackboxing reminds us that we might want to scrutinize critically the ways in which algorithms become.
A core tenet of a relational ontology is the principle of relational materialism,
the idea that “objects are no mere props for performance but parts and parcel of
hybrid assemblages endowed with diffused personhood and relational agency”
(Vannini, 2015: 5). Concepts such as sociotechnical and sociomateriality are often
used to express the idea of a radical symmetry between human and
nonhuman actors.15 According to this view, the social and technical are not seen
as separate entities that can be considered independently of each other. The
social and technical are always already engaging in symbiotic relationships
organized in networks, assemblages, or hybrids.16 What is important on a
relational account is that the enactive powers of new assemblages or composite entities cannot merely be reduced to their constituent parts. Rather, these new composite entities are able to “produce new territorial organisations, new behaviours, new expressions, new actors and new realities” (Müller, 2015: 29). This agential force is, perhaps, most explicitly expressed in the concept of assemblage or, more specifically, the French term agencement. As Callon points out,
“agencement has the same root as agency: agencements are arrangements
endowed with the capacity of acting in different ways depending on their
configuration” (2007: 320).
These discussions, of course, raise the tricky question of how we should think
about agency in the first place. After all, it is not for nothing that agency has been
called “the most difficult problem there is in philosophy” (Latour, 2005: 51).
If algorithms are multiple and part of hybrid assemblages or even hybrid assemblages
themselves, then where is agency located? Who or what is acting when we say
that algorithms do this or that? Although scholars committed to a relational
ontology may differ in terms of the ontological status they ascribe to entities and
relations, the general answer would be to see agency as distributed.17 For a theory
of the agential capacities of algorithms, adopting a relational view implies
discarding any “neatly ordered flow of agency” (Introna, 2016: 9). As Karen Barad
puts it, agency is not an attribute that someone or something may possess but, rather,
a name for the process of the ongoing reconfiguration of the world (2003: 818). In
a similar vein, actor-network theory sees agency as a mediated achievement, brought about through forging associations (Müller, 2015: 30). Anything—whether human or nonhuman—can potentially forge an association. As Barad
emphasizes, “agency is not aligned with human intentionality or subjectivity”
(2003: 826). According to Latour: “any thing that does modify a state of affairs
by making a difference is an actor”; one needs simply to ask whether something
“makes a difference in the course of some other agent’s action or not” (2005: 71).
At the core of a relational ontology lies the importance of acknowledging the
relationality and agential capacities of nonhumans. Perhaps more so than any other concept, the notion of assemblage has served as a way to account for the ways in which relations are assembled for different purposes. Deleuze and Parnet view assemblage as a “multiplicity which is made up of many heterogeneous terms and which establishes liaisons, relations between them,” where the only unity “is that of co-functioning” (2007: 69). This notion of co-functioning usefully describes
“how different agents within the assemblage may possess different resources and
capacities to act” (Anderson et al., 2012: 181). Viewed in these terms, the agency of
algorithms cannot be located in the algorithm as such but in the “ever-changing
outcome of its enactment” (Passoth et al., 2012: 4).
The implications of viewing agency as distributed are far from trivial. When we
hear that “algorithms discriminate” (Miller, 2015) or that “discrimination is
baked into algorithms” (Kirchner, 2015), it may easily be understood as saying
algorithms possess the agency to discriminate. While cases such as the Google
“gorilla” incident (see chapter 2), Amazon’s exclusion of predominantly black ZIP codes from their same-day deliveries, or Google’s display of ads for arrest records when distinctively black names are searched (Sweeney, 2013) leave much to be desired in terms of algorithmic fairness and performance, the question of who or what is actually discriminating in these cases is not as straightforward to answer as the media headlines seem to suggest.
Take the controversy over Facebook’s trending feature. In May 2016,
Facebook hit the news (again) after it became clear that their trending feature was
not, in fact, “the result of a neutral, objective algorithm” but, partly, the
accomplishment of human curation and oversight.18 Facebook had employed
journalism graduates to keep checks on algorithmically produced trending topics,
approving the topics and
writing headlines to describe them. The problem was that the human editors em-
ployed to oversee the trending topics happened to lean to the political left, and this,
according to the news stories, could be seen in the kinds of stories that were
made to “trend.” As the Gizmodo article first reported, “In other words,
Facebook’s news section operates like a traditional newsroom, reflecting the biases of
its workers and the institutional imperatives of the corporation” (Nunez, 2016).
Shortly after the story broke, Tom Stocky, a Facebook executive, wrote that “there
are rigorous guidelines in place for the review team to ensure consistency and
neutrality” (Stocky, 2016). The incident also prompted a letter from the US
Senate to Mark Zuckerberg, demanding more transparency about how
Facebook operates. The letter, signed by Republican Senator John Thune, asked
Facebook to elaborate on questions such as: “Have Facebook news curators in
fact manipulated the content of the Trending Topics section” and “what steps will
Facebook take to hold the responsible individuals accountable?” As Thune later told reporters, any level of subjectivity associated with the trending topics would indeed serve “to mislead the American public” (Corasaniti & Isaac, 2016).
What remained puzzling throughout the ordeal was the apparent lack of vocabu-
lary available to talk about what it is that algorithms do or are even capable of doing,
as exemplified in the repeated attribution of bias either to the algorithm or to the
humans involved. Words such as bias, neutrality, manipulation, and subjectivity
abound, making the controversy one of locating agency in the right place. The prevailing sense in the discourse surrounding the event seemed to be that
Facebook should not claim to use algorithms to make decisions when, in fact,
humans make the decisions. Of course, what was being slightly overlooked in all
of this was the fact that algorithms are always already made, maintained, and
sustained by humans. Yet, if only the responsible people could be held accountable, the
story went, it would make it easier to control or regulate such “manipulations” and “subjective orderings” in the future. From a relational perspective, however,
determining the origin of action as if it belonged to one source only would be
misleading. After all, as Latour puts it, “to use the word ‘actor’ means that it’s
never clear who and what is acting when we act since an actor on stage is never
alone in acting” (2005: 46). The stage in this case was clearly composed of a
myriad of participants, including journalism graduates, professional culture,
political beliefs, work guidelines, the trending product team, Facebook executives
and management, algorithms, users, news agencies, and so on.
So, then, what about the supposed bias of algorithmic processes? As John
Naughton, professor of the public understanding of technology, writes in an op-ed
in the Guardian, bias, or human values, is embedded in algorithms right from the
beginning simply because engineers are humans:
Any algorithm that has to make choices has criteria that are specified by its
designers. And those criteria are expressions of human values. Engineers
may think they are ‘neutral’, but long experience has shown us they are
babes in the woods of politics, economics and ideology. (Naughton, 2016)
When Is an Algorithm?
The case of Facebook’s trending controversy and other instances of “humanizing
algorithms” do not merely call the source of agency into question but suggest that
the more politically poignant question to ask is when agency is located, on whose
behalf and for what purpose? The real controversy was not the fact that Facebook
employs journalism graduates to intervene in an algorithmic decision-making process by adding their editorial human judgments of newsworthiness to the mix. The
controversy lies in the selective enactment of human and nonhuman agency. What became apparent in the public reaction to the Facebook trending controversy was the fact that
algorithms only matter sometimes. This, I want to suggest, constitutes an important
dimension of thinking around the politics of algorithms—politics not as what
algorithms do per se but how and under what circumstances different aspects of algorithms and the algorithmic are made available—or unavailable—to specific
actors in particular settings.19 By shifting attention away from the proper source of
action toward the practices of mobilizing sources of action in specific
circumstances, we might be able to understand both how and when algorithms
come to matter (the mattering of algorithms is taken up again in chapter 6).20 Why is
it that, sometimes, an algorithm is blamed for discrimination when, on a similar but
different occasion, a human is “accused” of bias? Why is it contentious that
Facebook employs journalism graduates in the process of curating the trending feature when, in the context of news organizations, the opposite seems to be considered problematic? Put differently, why is it OK for Facebook to use
algorithms when doing the same thing is considered problematic for a news
organization? These and similar questions cannot simply be answered by
adhering to a vocabulary of essences. Algorithms are not given; they are not either
mathematical expressions or expressions of human intent but emerge as situated,
ongoing accomplishments. That is, they emerge as more or less
technical/nonhuman or more or less social/human because of what else they are
related to. In the case of the Facebook trending topic incident, the algorithm shifted its configuration as a result of the controversy. In the face of widespread
accusations against the subjective biases of the human editors, Facebook decided to
fire the 26 journalism graduates contracted to edit and write short descriptions
for the trending topics module. In a bid to reduce bias, Facebook announced,
instead, that they would replace them with robots. While still keeping humans in
the loop, “a more algorithmically driven process,” according to Facebook, would
“allow our team to make fewer individual decisions about topics” (Facebook, 2016).
What is of
interest in this case is not whether algorithms or humans govern Facebook’s trending topic but how the different categories were enlisted and made relevant in the
context of the controversy. That is, the “algorithm” emerged as objective and neutral
while humans were seen as subjective and biased. The point is that there is nothing
inherently neutral about algorithms or biased about humans; these descriptive markers emerge from particular contexts and practices. It could have been otherwise.21
From the perspective of relational materialism, the questions that matter the
most are “not philosophical in character, but political” (Mol, 2013: 381).
Returning to the notion of ontological politics introduced at the end of the
previous chapter, engaging with the question of how algorithms come to matter
in contemporary society is not about trying to define what they are or at what points
they act but, rather, about questioning the ways in which they are enacted and
come together to make different versions of reality. I think what the many
contrasting examples of controversies involving the algorithm-human
continuum show is how algorithms are not inherently good or bad, neutral or
biased, but are made to appear in one way or the other, depending on a whole range
of different factors, interests, stakeholders, strategies and, indeed, politics. The
term “ontological politics” is precisely meant to highlight how realities are
never given but shaped and emerge through interactions.22 Instead of determining who acts (or discriminates, for that matter), the interesting question is
what and when an actor becomes in the particular ways in which the entity is active
(Passoth et al., 2012: 4).
By shifting attention away from asking what and where agency is to when
agency is and to whom agency belongs in specific situations, we may begin to see
how the notion of algorithms as black boxes may not just be an ontological and
epistemological claim but, ultimately, a political one as well. As I said in the
beginning of this chapter, positioning something as a black box serves different
functions. The black box of the algorithm is not simply an unknown but, in many
cases, constitutes what Linsey McGoey (2012) has called a strategic unknown,
understood as the strategic harnessing of ignorance. As McGoey asserts, strategic
unknowns highlight the ways in which “cultivating ignorance is often more
advantageous, both institutionally and personally, than cultivating knowledge” (2012:
555). In the case of disaster management, for example, experts’ claims of ignorance
soften any alleged responsibility for the disaster or scandal in question. By
mobilizing unknowns strategically, organizations and individuals can insist that
detection or prior knowledge was impossible. These forms of ignorance are
frequently mobilized in discourses on algorithms and software as well.
In fact, it seems that machine learning and the field of artificial intelligence often
appear as entire fields of strategic unknowns. At the most fundamental level, the
operations of machine-learning algorithms seem to preclude any form of
certainty. Because the machine learns “on its own” without being explicitly
programmed to do so, there is no way of knowing what exactly caused a certain
outcome. As Dourish reflects on the advent of machine learning: “During my
years of computer science
Indeed, in the context of machine learning, the decisional rule emerges “from the specific data under analysis, in ways that no human can explain” (Kroll et al., 2016:
6). While this fundamental uncertainty may be disgruntling to those interested in
knowing algorithms, it can also be used to restore what McGoey calls knowledge
alibis, “the ability to defend one’s ignorance by mobilizing the ignorance of
higher-placed experts” (2012: 563-564). As McGoey writes:
What we need to remember is that all of these layers and structures are built by
people in teams, and so the ability for one person to understand everything is
very much the same challenge as understanding how the University of
Copenhagen works. Yet at the same time teams can build heuristic
understanding of their algorithmic systems, feelings of what might cause it to break or what is causing the bottleneck, which allow them to work and do their
job.23
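To make the earlier point concrete, that a machine-learned decision rule is induced from the data rather than written down in advance, consider a minimal sketch in Python. Everything in it is invented for illustration (the tiny data set, the features, the learning rate); it stands in for no actual production system:

# A minimal, hypothetical illustration: the decision rule (the weights) is
# never written down by the programmer; it emerges from the training data.

training_data = [
    # (feature vector, label): e.g., (clicks, shares) -> engaging (1) or not (0)
    ((5.0, 2.0), 1),
    ((0.5, 0.1), 0),
    ((4.0, 3.5), 1),
    ((1.0, 0.0), 0),
]

weights = [0.0, 0.0]   # the "rule" starts out empty
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # repeated passes over the data
    for (x1, x2), label in training_data:
        prediction = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = label - prediction       # adjust only when the rule is wrong
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print("learned rule:", weights, bias)    # induced from data, not hand-coded

Even in this toy case, the final weights are a by-product of the particular examples fed to the loop; with a different data set, a different rule would emerge.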
The ignorance of higher-placed experts, however, should not deter us from knowing differently. Now, if the exact configuration of the algorithmic logic cannot be easily traced—say, for example, by examining what the machine learned at different layers in a neural net—it should not keep us from interrogating the
oddity itself, particularly since the allusion to ignorance often comes in handy for
the platforms themselves. As Christian Sandvig (2015) suggests, “platform
providers often encourage the notion that their algorithms operate without any
human intervention, and that they are not designed but rather ‘discovered’ or
invented as the logical pinnacle of science and engineering research in the area.” When things do not go exactly as planned and platforms are accused of censorship,
discrimination or bias, the algorithm is often conveniently enlisted as a strategic
unknown. Of course, as Callon and Latour aptly remind us, “black boxes never
remain fully closed or properly fastened [ . . . ] but macro-actors can do as if they
were closed and dark” (1981: 285). It is certainly odd that a particular purchase is
deemed suspicious, or that a credit card company cannot explain why an
algorithm came to this or that conclusion. However, the knowledge of an
algorithmic event does not so much hinge on its causes as it does on the
capacity to produce certain “orderings and disorderings” (Michael, 2004). As an
event whose identity is uncertain, there is an opportunity to ask new questions.
In the case of the credit card company, the question might not necessarily be
phrased in terms of why the algorithm came to a certain conclusion but what that
conclusion suggests about the kinds of realities that are emerging because people
are using algorithms and what these algorithmic practices do to various people.
In terms of transparency, questions also need to be asked of functions and expectations. When the Norwegian newspaper Aftenposten and Norway’s Prime Minister
Erna Solberg called out Facebook in September 2016, accusing its algorithms
of censoring the iconic Pulitzer Prize image “Terror of War,” they did so because
they expected the algorithm to behave in a specific way. An algorithm, they
suggested, should have been able to distinguish between an award-winning iconic
image and “normal nudity.” Not only was this incident, to which I shall return later in
chapter 6, yet another example of the differential enactment of agency as
exemplified in the question of whether the fault was with the algorithms or the
humans: It also shows that, sometimes, what there is to know about algorithms
may not be about the algorithm itself but, rather, about our own limits of
understanding. Should we, perhaps, not just worry about algorithms being wrong
but also ask whether they do what they are supposed to do? Why was Facebook called
out on censoring an image of a naked girl
when, in fact, this is what is expected of them? Knowing algorithms, I want to suggest, may be just as much about interrogating “negative knowledge” in Knorr-Cetina’s sense as it is about trying to peel the layers of a neural network or getting
access to the actual source code. In other words, when trying to know algorithms we
also have to take into account what things interfere with our knowing, what we
are not interested in, what we do not want to know and why.
What the figure of the black box conceals is not just the inner workings of algorithms but also the ways in which the unknown can be used strategically as a resource to maintain control or deny liability in certain situations. In suggesting, as
I have done, that the black box metaphor constitutes a red herring of sorts,
the metaphor itself becomes a strategic unknown, enabling knowledge to be
deflected and obscured. What needs to be critically scrutinized, then, is not
necessarily the hidden content of the box but the very political and social
practices that help sustain the notion of algorithms as black boxes. The question is not simply whether we can know algorithms but when the realm of their
intelligibility is made more or less probable. That is, when are algorithms framed
as unknowns, for whom and for what purpose?
inaccessible—that is, until we find the leaks, cracks, and ruptures that allow us to
see into them.
The first step in knowing algorithms is not to regard the “impossibility of
seeing inside the black box” as an epistemological limit that interrupts any “futile
attempts at knowledge acquisition” (von Hilgers, 2011: 43). As Ashby recognized: “In our daily lives we are confronted at every turn with systems whose
internal mechanisms are not fully open to inspection, and which must be treated
by the methods appropriate to the Black Box” (1999: 86). Because opacity,
secrecy, and invisibility are not epistemic anomalies but a basic condition of
human life, the black box is not something to be feared but something that “corresponds to new insights” (von Hilgers, 2011: 32). When confronted with a
black box, the appropriate task for the experimenter, as Ashby saw it, is not
necessarily to know exactly what is inside the box but to ask, instead, which
properties can actually be discovered and which remain undiscoverable
(Ashby, 1999). Because what matters is not the thing but what it does, the
cybernetic lead does not ask us to reveal the exact content of the box but to
experiment with its inputs and outputs.
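As a playful illustration of this cybernetic stance, one can treat an unknown system as a function whose internals are off-limits and simply record how the outputs vary with the inputs. The probe below is a minimal sketch; the hidden function merely stands in for whatever system is being studied and is, of course, invented here:

# A minimal sketch of Ashby's stance: treat the system as an opaque function,
# vary the inputs, and tabulate the observed input-output behavior.
# The "hidden" system below is invented purely for illustration.

def hidden_system(x):
    # Pretend we cannot read this body; we only see what comes out.
    return (x * 3 + 1) % 7

def probe(system, inputs):
    """Record observed input-output pairs without inspecting the system."""
    return {x: system(x) for x in inputs}

observations = probe(hidden_system, range(10))
print(observations)
# From the table alone one can begin to describe regularities (the outputs
# repeat in a cycle of seven) without ever opening the box.

Nothing more than inputs, outputs, and patience is required; which properties remain undiscoverable is itself part of what the exercise reveals.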
What can be discovered and described about algorithms differs from context
to context with varying degrees of access and availability of information.
However, even in the case of seemingly closed and hidden systems such as
Facebook or Google, there are plenty of things that can be known. Speculative
experimentation and playing around with algorithms to figure out how they work
are not just reserved for hackers, gamers, spammers and search engine optimizers (SEO). In the spirit of reverse engineering—“the process of extracting
the knowledge or design blueprints from anything man-made” (Eilam, 2005: 3)—we might want to approach algorithms from the question of how they work and their general “operational logics” (Wardrip-Fruin, 2009). There are
already some very instructive examples of “reverse engineering” algorithms
within the domain of journalism and journalism studies (Angwin et al., 2016;
Diakopoulos, 2014) and in related academic calls for “algorithm audits” (Hamilton et al., 2014; Sandvig et al., 2014).
Mythologizing the workings of machines, however, does not help, nor should we
think of algorithmic logics as somehow more hidden and black-boxed than
the human mind. While we cannot ask the algorithm in the same way we may
ask humans about their beliefs and values, we may indeed attempt to find other ways
of making it “speak.” Similar to the way ethnographers map people’s values and
beliefs, I think of mapping the operational logics of algorithms in terms of
technography.24 As Latour puts it, “specific tricks have to be invented to make
them [technology] talk, that is, to offer descriptions of themselves, to produce
scripts of what they are making others—humans or non-humans—do” (2005:
79). Technography, as I use the term, is a way of describing and observing the
workings of technology in order to examine the interplay between a diverse set of
actors (both human and nonhuman). While the ethnographer seeks to
understand culture primarily through the meanings attached to the world by
people, the technographic inquiry starts by
asking what algorithms are suggestive of. Although he does not use the term himself, I think Bernhard Rieder’s (2017) way of scrutinizing diverse and general algorithmic techniques could be thought of as an attempt to describe the
“worldviews of algorithms” I have in mind when I use the term “technography.”
Rieder offers a particularly instructive account of how different technical
logics—in this case, the Bayes classifier—entail certain values and assumptions
that inevitably have consequences for the operational logics of specific systems.
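One way to see how a technical logic of this kind carries assumptions is simply to write one out. The toy Naive Bayes classifier below is a generic textbook sketch, not Rieder’s analysis or any platform’s code; the training examples are invented, and its built-in choices (word counts treated as independent, prior probabilities taken from the frequencies of the training set, add-one smoothing for unseen words) are precisely the kind of embedded assumptions a technographic reading would interrogate:

import math
from collections import Counter, defaultdict

# Toy training set, invented for illustration: short texts labeled by class.
training = [
    ("win money prize now", "spam"),
    ("meeting schedule for tomorrow", "ham"),
    ("claim your free prize", "spam"),
    ("lunch tomorrow with the team", "ham"),
]

word_counts = defaultdict(Counter)   # word frequencies per class
class_counts = Counter()             # how often each class appears
vocabulary = set()
for text, label in training:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocabulary.add(word)

def classify(text):
    scores = {}
    for label in class_counts:
        # Assumption 1: the prior is simply the class frequency in the data.
        log_prob = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Assumption 2: words are treated as independent given the class
            # (the "naive" part), with add-one smoothing for unseen words.
            count = word_counts[label][word] + 1
            log_prob += math.log(count / (total + len(vocabulary)))
        scores[label] = log_prob
    return max(scores, key=scores.get)

print(classify("free prize tomorrow"))

Each of these choices is defensible, but none of them is neutral, which is exactly the point of reading such techniques closely.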
Following Ashby’s cybernetic lead, what is at stake in a technographic inquiry
is not to reveal some hidden truth about the exact workings of software or to
unveil the precise formula of an algorithm. Instead, the aim is to develop a critical understanding of the mechanisms and operational logic of software. As
Galloway states, the question is “how it works” and “who it works for” (2004:
xiii). Just as the ethnographer observes, takes notes, and asks people about their
beliefs and values, Ashby’s observer and the technographer describe what they see
and what they think they see. The researcher confronted with the black box algorithm
does not necessarily need to know much about code or programming (although it is certainly an advantage). As Ashby points out, “no skill is called for! We are
assuming, remember, that nothing is known about the Box” (1999: 89).
In general, we might say that a good way to start is by confronting known unknowns in terms of one’s own limits of knowledge. What, for example, is there to
know about key concepts in computer science, mathematics, or social sciences
that would be useful for understanding a specific algorithmic context? Then, we
also have the option of tracing the many “semiotic systems that cluster around
technical artefacts and ensembles” (Mackenzie, 2002: 211): patent applications
and similar documents that detail and lay out technical specifications, press
releases, conference papers on machine learning techniques, recorded
documents from developer and engineering conferences, company briefs, media
reports, blog posts, Facebook’s IPO filing, and so on. Finally, we might want to
experiment with systems as best we can or even code them ourselves. As the next
chapter will show, a technography of algorithms need not imply elaborate
experimentation or detailed technical knowledge but, above all, a readiness to
engage in unknown knowns, seeing the black box not as an epistemological obstacle
but as a playful challenge that can be described in some ways (but not all).
a field of further inquiry. For example, Nick Couldry and his colleagues (2016) recently proposed the notion of “social analytics” as the phenomenological study of
how social actors use “analytics” to reflect upon and adjust their online presence. As
they see it, a “social analytics approach makes a distinctively qualitative contribution
to the expansion of sociological methods in a digital age” (Couldry et al., 2016: 120).
In a world in which large-scale data analysis takes on a greater hold, it is
important to investigate the ways in which actors themselves engage with increased
quantification. In Schütz’s (1946) terms, we might think of such actors as “well-informed citizens” who seek to “arrive at reasonably founded opinions” about their
whereabouts in a fundamentally uncertain world.
A phenomenological approach to algorithms, then, is concerned with excavating
the meaning-making capacities that emerge as people have “strange encounters”
(Ahmed, 2000) with algorithms. As chapter 5 will show, people make sense of algorithms despite not knowing exactly what they are or how they work. When the outcomes of algorithmic processing do not feel right, surprise, or come across as strangely amusing, people who find themselves affected by these outcomes start to turn their awareness toward algorithmic mechanisms and evaluate them. As with
the social analytics described by Couldry et al. (2016), when modes of appearance
or senses of identity are at stake, actors may reflect at length on how to influence
such operational logics; and, in doing so, they performatively participate in
changing the algorithmic models themselves, a key reason it is important to study
actors’ own experiences of the affective landscape of algorithms.
task to study when the algorithm becomes particularly pertinent and to challenge
the conceptual separation between, for example, the social and the technical.
Interrogating the configuration of algorithms begins with tracing out its specific cultural, historical, and political appearances and the practices through which these
appearances come into being (Suchman, 2004). In this chapter, I have described
this process as one of moving from a question of where the agency of the algorithm is
located to when it is mobilized, by whom and for what purpose.
What is at stake here, then, is not the black box as a thing but, rather, the process
of “blackboxing,” of making it appear as if there is a stable box in the first place. If the
metaphor of the black box is used too readily as a way of critiquing
algorithms, Latour’s notion of blackboxing reminds us that we might want to
scrutinize critically the ways different actors have a vested interest in figuring
the algorithm as a black box in the first place. As discussed in this chapter, the alleged
unknowability of algorithms is not always seen as problematic. It can also be
strategically used to cultivate an ignorance that is sometimes more advantageous
to the actors involved than knowing. By referring to the notion of strategic
unknowns as part of this third methodological option, the intention is to point out
the deeply political work that goes into the figurations of algorithms as particular
epistemic phenomena. In chapter 6, I turn to these questions anew by looking at
how algorithms variously become enlisted, imagined and configured as part of
journalistic practices and products.
Concluding Remarks
Let this be the general conclusion: For every epistemological challenge the seemingly black-boxed algorithm poses, another productive methodological route may
open. The complex and messy nature of social reality is not the problem. Just
as algorithms constitute but one specific solution to a computational problem,
we cannot expect a single answer to the problem of how to know algorithms.
Borrowing from Law, “one thing is sure: if we want to think about the messes of
reality at all then we’re going to have to teach ourselves to think, to practice, to
relate, and to know in new ways” (2004a: 2). In this chapter the black box was
used as a heuristic device to deal with this mess—not by making the world less
messy, but by redirecting attention to the messiness that the notion of the black
box helps to hide.
Not to be taken as a definitive, exhaustive list of well-meant advice, I
offered three steps to consider when researching algorithms. First, do not regard
the “impossibility of seeing inside the black box” as an epistemological limit that interrupts any “futile attempts at knowledge acquisition” (von Hilgers, 2011: 43).
Ask instead what parts can and cannot be known and how, in each particular case,
you may find ways to make the algorithm talk. Second, instead of expecting the
truth to come out from behind the curtain or to lie there in the box just waiting
for our hands to take
the lid off, take those beliefs, values, and imaginings that the algorithm solicits as a point of departure. Third, keep in mind that the black box is not as seamless as it
may seem. Various actors and stakeholders once composed black boxes in a
specific historical context for a specific purpose. Importantly, they evolve, have
histories, change, and affect and are affected by what they are articulated to.
While we often talk about algorithms as if they were single stable artifacts, they
are boxed precisely to appear that way.
4
Life at the Top
Engineering Participation
Over a decade ago, a geeky Harvard computer science undergraduate, together with
a couple of friends, founded the world’s largest social networking site to date.
With a claimed 1.4 billion daily active users, 25,105 employees, and a market
value of
$530 billion as of December 2017, Facebook is not just a site through which users
can “stay connected with friends and family.”1 Facebook is also a multibillion dollar
business, engineering company, sales operation, news site, computational platform,
and infrastructure. As the feature film The Social Network (2010) nicely shows,
Facebook is also a myth, an entrepreneurial dream, a geeky fairy tale. Far from just a
Hollywood story with its founder Mark Zuckerberg in the leading role, Facebook
is a carefully crafted brand, which over the past decade has been marketed as a
technology company comprising a young, vibrant, and creative class of geeks and hackers. Its mission: to make the world more open and connected.
Indeed, research suggests that, in many parts of the world, “Facebook is
the Internet.” As one recent news article reported, “Millions of Facebook users
have no idea they’re using the Internet” (Mirani, 2015). When people in countries
such as Indonesia, Nigeria, Brazil, and India were surveyed, many reported they did
not use the Internet while, at the same time, indicating that they were avid users of
Facebook. Despite some media reports about a teenage flight from Facebook to
other social media platforms such as Snapchat, the sheer scale and volume of
active users and content shared via Facebook remains historically
unprecedented. In the American context, Facebook remains the most popular social
media site with a large user base that continues to be very active. According to a Pew
Research report on social media usage for 2014, fully 70% of Facebook users
engage with the site on a daily basis, a significant increase from the 63% who
did so in 2013 (Duggan et al., 2015). It is probably no exaggeration to state
that, for many, Facebook has transformed the ways in which they live their lives—
the way they communicate and coordinate with their friends and family, receive and
read news, find jobs and are fired from jobs. Facebook is constitutive of what
Deuze calls media life, shaping “the ways in which
we experience, make sense of and act upon the world” (2012: 5). As I argue in this book, our lived experiences are not just permeated by media in general but, increasingly and more specifically, by the principle of the algorithm. Algorithmic
life is a form of life whose fabric is interwoven by algorithms—of what can
and cannot be adapted, translated, or incorporated into algorithmic expressions
and logics. Facebook, as we will see, sits right at the center of this algorithmic
fabric, making intelligible and valuable that which can be quantified, aggregated,
and proceduralized. This fabric is the theme of this chapter, and the question I
explore is, what kind of togetherness does software invent? If, as van Dijck (2012)
suggests, Facebook is wrapping relationships in code by configuring connections
algorithmically, the question is in what ways.
To examine this, I discuss the case of Facebook’s news feed, focusing on the
ways in which the feed is fundamentally governed and organized by an
algorithmic logic. If Facebook is the computational platform that allows other things to be built on top of it, an architectural model for communication, then the
news feed is the actualization of this model as a communication channel, a designed space in which communicative action can take place. Despite its purported
discourse of openness, the Facebook news feed differs in fundamental ways from
the ideal public sphere envisioned by Jürgen Habermas and other political theorists based on the idea of mutual deliberation and argumentation. While the
intention is not to suggest compatibility between the communicative space of the
news feed and the public sphere model, it is not without its warrants. The news
feed, I want to suggest, is political insofar as it is exercising a position of
governance. Here, I am not referring to politics in the conventional notion of parliamentary politics. Rather, politics is understood in a much broader sense as
the encounter and conflict between different ways of being in the world (see also
chapter 1). As the French philosopher Jacques Rancière (2004) has theorized, politics concerns the reconfiguration of the distribution of the sensible. In this
sense, politics is productive of “the set of horizons and modalities of what is
visible and audible as well as what can be said, thought, made, or done”
(Rancière, 2004: 85). It is a way of informing experience that has no proper
place. In this sense, power and politics are not limited to parliamentary governments or ruling bodies. Whereas political theorists such as Habermas considered politics mainly as discursive forms of power produced through arguments
in an informal public sphere (Flynn, 2004), I propound a view on politics that is
sensitive to its material dimensions as well. In his seminal studies of the Long Island bridges and the mechanical tomato harvester, Winner (1986) argued that
technological arrangements impose a social order prior to their specific use. If
taken to the realm of algorithms, we may start to consider them as political
devices in the sense that they represent certain design decisions about how the
world is to be ordered. After all, as Mackenzie suggests, algorithms “select and
reinforce one ordering at the expense of others” (Mackenzie, 2006: 44). It is the
work of ordering and distributing the sensible that I am concerned with in this
chapter.
social media users and news practitioners’ oral histories respectively, this
chapter leans toward different aspects of materiality—to spaces and
technology. Next, I draw on Michel Foucault’s architectural diagram of power
introduced in Discipline and Punish (1977) to establish an analytical framework
through which the news feed can be understood as an example of how sociality is
programmed in algorithmic media.
sayings that pervade not only the company’s discourse but also its physical setting and workplace. The word “hack” dominates the facades of one of Facebook’s
two office buildings in Menlo Park, and the office spaces are “tattooed with
slogans that inculcate through repetition the value of speed and iterative
improvement: ‘Move Fast and Break Things;’ ‘This Journey Is 1% Finished;’
‘Done Is Better than Perfect’” (Fattal, 2012: 940). After moving headquarters in
December 2011, Facebook is now even located at “the Hacker Way,” and
Zuckerberg’s office faces onto “Hacker Square.” In a fascinating NYT article,
journalist Quentin Hardy (2014) details the way in which the interiors of the
office spaces at Facebook have been carefully designed to reflect the company’s
culture of openness and change: Open office plans, of course, where meetings
often happen on the fly, where no one has secrets, and common areas that
without warning may have the couches replaced in order to create a space of
perpetual change. “Similarly, design changes to Facebook’s home page are
known as ‘moving the furniture around’ ” (Hardy, 2014). The hacking culture is
such an integral part of Facebook that it has become the principal way in which
the company now presents itself, internally as well as externally.
However, the hacker identity was not always a part of Facebook’s culture. The
external hacker brand first had to be created. In 2008, cultural and employment
branding manager Molly Graham was hired to create and tell the company’s story.
Graham describes how the defining moment for Facebook’s identity came with a
blog post by Paul Buchheit, who joined Facebook from FriendFeed in 2009.
The post reframed hacking as “applied philosophy,” a way of thinking and acting
that “is much bigger and more important than clever bits of code in a computer”
(Buchheit, 2009). According to Graham, “When Paul wrote that post, he wrote
the story of Facebook’s DNA” (Firstround, 2014).
Facebook’s DNA is not just externally communicated but internally communi-
cated and “installed” into every new Facebook employee, as one of Facebook’s engi-
neers put it in a blog post (Hamilton, 2010). Newly hired Facebook engineers
have to go through a six-week boot camp in which they learn almost everything about
the software stack and are exposed to the breadth of code base and the core tools
for engineering. Moreover, as Andrew Bosworth—the inventor of the Facebook
boot camp and a long-time Facebook engineer—puts it, the boot camp is a
training ground for the camper to be “indoctrinated culturally.”4 New hires are told
from the very beginning to “be bold and fearless,” an ethos that is at the heart of
Facebook engineering.5 Just as the company and code base are always changing,
so are their engineers expected to change. Whatever they do, Facebook engineers
are continuously told to move fast. In terms of the boot camp, moving fast means
starting to work on actual code right away, and there are expectations of publishing
code to the live site within their first week. Moving fast is also an important part of
Facebook hackathons, another cornerstone of Facebook’s hacker culture.
Hackathons are organized at Facebook every six weeks for engineers to conceive
of new products in a
motivations intact by eliminating one more decision from his life, it also says something about the ways in which the CEO of the world’s largest social networking
thing about the ways in which the CEO of the world’s largest social networking
site thinks about human autonomy and decision-making. Similarly, perpetuating a
work environment in which moving furniture is part of the everyday life of
Facebook engineers or tearing down walls for meetings to happen anywhere at any time is much more than a design decision made by an interior architect. They
are the physical manifestations of the “under construction” ethos that permeates the
culture of engineering at Facebook. As John Tenanes, who oversees Facebook’s buildings as its director of real estate, tellingly says about the physical layout of Facebook’s headquarters: “It’s designed to change thinking” (Hardy, 2014).
The engineering of the engineer at which Tenanes hints clearly challenges the
idea that artefacts are socially constructed, as if the social was already a
given. Engineers do not simply engineer artefacts as if their intentions and ideas were
the results of discursive and deliberate processes. These deliberations are always already imbued in materiality, whether it is the architecture of the headquarters, a piece of clothing, or the infrastructure of the workplace. Just as
the cultural dimension of computing matters, so does the material dimension of
the culture of computing. It becomes apparent that thinking about the power of
software and algorithms in terms of cause and determination will not do. While
STS scholars have long argued for the necessary entanglement of materiality,
practice, and politics (Gillespie, Boczkowski, & Foot, 2014: 4), there is a tendency still among communication scholars to frame technology as an
outcome of culture. As Leah Lievrouw suggests, communication scholars only
secondarily, if at all, consider material artefacts and devices to have anything
resembling power (2014: 24). Indebted to the STS ideal of co-determination and
the mutual shaping of technology and cultural practices, while at the same time explicitly opposing the preference for the social/cultural side of this
sociotechnical duality, in this chapter I seek to attend more closely to the physical
design and configuration of the (im)material architecture of the news feed.
Echoing the idea of co-production or the dynamic relationship between the material
and social, power is no longer seen as an abstract “force” or institutional “structure”
but, rather, as instantiated and observable in the physical forms of social practices,
relations, and material objects and artefacts (Lievrouw, 2014: 31).
by making a difference in how social formations and relations are formed and informed. Setting out to explore power through software and algorithms, indeed,
seems to be a daunting and, perhaps, even impossible task. However, it is not my
intention to investigate power as a totalizing force that can be located and described
once and for all. As discussed in previous chapters, algorithms are never powerful in
one way only but power always signifies the capacity to produce new realities. What
we need, therefore, are diagrams or maps of power, tracing how and when power
operates and functions in specific settings and through specific means.7
Following Deleuze in his reading of Foucault and the assertion that “every
society has its diagram(s)” (2006: 31), what is at stake here is thus a
diagrammatics of algorithms, understood as the cartography of strategies of
power. Here, Foucault’s analysis of the architectural, technological, and conceptual
nature of diagrams provides a useful methodological and analytical framework for
examining Facebook’s news feed.
Foucault’s architectural model of power usefully highlights the ways in which
spaces are “designed to make things seeable, and seeable in a specific way” (Rajchman, 1988). It offers a useful avenue to analyze algorithms in terms of architectural structuring, in which to be embedded technically means having the power “to incite, to induce, to seduce, to make easy or difficult, to enlarge or limit, to make more or less probable” (Foucault quoted in Deleuze, 2006: 59). For a mapping of
power, Foucault’s notion of government and governmentality also offers
elaborate concepts for the kinds of architectural shaping that he first described
in Discipline and Punish (1977), as it points to the multiple processes,
measurements, calculations, and techniques at play in organizing and arranging
sociality. Analytically, this implies a focus on the ways in which algorithms arrange and organize things, their specific techniques and procedures, and the mechanisms
used in processes of individual and collective individuation.8
I argue in this chapter for the importance of revisiting the idea of the technical
and architectural organization of power as proposed in the writings of Foucault,
by highlighting an analytics of visibility. Becoming visible, or being granted
visibility, is a highly contested game of power in which the media play a crucial
role. While Foucault did not connect his theory of visibility specifically to the media,
the framework he developed in Discipline and Punish helps illuminate the ways in
which the media participate in configuring the visible as oscillating between what
can and should be seen and what should not and cannot be seen, between who can
and cannot see whom. Examining new modalities of visibility, thus, becomes a
question of how and when something is made visible rather than what is made
visible, through which specific politics of arrangement, architecture, and designs.
Although Foucault’s writings focused more on the “how of power,” I want to
suggest that the how is inextricably linked to the when of algorithms as discussed in
the previous chapter. If the how of power are “those practices, techniques, and
procedures that give it effect” (Townley, 1993: 520), then the when, in this case,
refers to the aspects that are rendered important or unimportant to the
effectuating of these procedures at different
times. In this chapter, I investigate the notion of mediated and constructed visibility through a close reading of the news feed and its underlying algorithmic operational logic. I argue that Foucault’s idea of an architecturally constructed regime of visibility as exemplified in the figure of the Panopticon makes for a useful analytical and conceptual framework for understanding the ways in which the sensible
is governed in social networking sites.9 The intention is not so much to offer a definite
account of the role played by Facebook in capturing the world in code but to open
avenues for reflection on the new conditions through which visibility is constructed
by algorithms online.
Posts that you see in your News Feed are meant to keep you connected
to the people and things that you care about, starting with your friends
and family. Posts that you see first are influenced by your connections
and activity on Facebook. The number of comments, likes, and
reactions a post receives and what kind of story it is (example: photo,
video, status update) can also make it more likely to appear higher up
in your News Feed. (Facebook, 2018)
so that the feed most users believed to represent every update from every friend in a
real-time stream, in fact, became an edited one, much like “top news.” What
caused the most controversy was the fact that this change in default occurred
without Facebook notifying its users about it. The option to change the default
was tucked away at the bottom of a drop-down menu, barely noticeable. Users
who did notice, however, quickly started to warn other users, pointing to the fact that
these changes basically meant that people were not seeing everything that they
should be seeing (Hull, 2011).
The revelation that the “most recent” feed was filtered is just one of many controversies surrounding the features and functionalities of the Facebook platform
that the company has faced throughout the years. Besides representing yet another
case of shaky privacy settings, it clearly points toward a certain disconnect between
user expectations and reality. As we will discuss in much more detail in the next
chapter, user expectations and perceptions about how technology works or ought
to work may affect their media practices just as much as the actual working of
that technology. As with the example above of users getting a sense of not seeing
everything they should be seeing, the aforementioned revelations about the
Facebook emotion experiment continue to show how important it is to hold software companies such as Facebook accountable. In late June 2014, the news broke
that Facebook had “tinkered” with users’ emotions by manipulating the news feeds
of over half a million randomly selected users, changing the number of
positive and negative posts they saw (Goel, 2014). Facebook researchers had
published the findings from the experiment in an academic paper. The experiment
itself had been carried out for one week in January 2012, one and a half years
prior to the public outcry engendered through the mass media. Notwithstanding
the apparently small effects obtained, the experiment unleashed a huge
controversy about algorithms, human subject research, and regulation (Meyer,
2014). What was most surprising about the media frenzy following the academic
publication was not so much the actual experiments described in the article but the
widespread public surprise in the face of this news. Like the public perception of
what a “most recent” feed ought to be, the Facebook emotion experiment
revealed that there are many persistent beliefs about how technology works and
should work. As Gillespie (2014) put it in a blog post:
There certainly are many, many Facebook users who still don’t know
they’re receiving a curated subset of their friends’ posts, despite the fact
that this has been true, and “known,” for some time. But it’s more than that.
Many users know that they get some subset of their friends’ posts,
but don’t understand the criteria at work. Many know, but do not think
about it much as they use Facebook in any particular moment. Many
know, and think they understand the criteria, but are mistaken. Just
because we live with Facebook’s algorithm doesn’t mean we fully
understand it.
Indeed, we are only at the beginning of figuring out what it means to live in a society
increasingly mediated and governed by algorithms. While it has been “known” for
some time that Facebook uses algorithms to select the stories shown at the top of the
news feed, it is important not to forget how recent this time span really is. Moreover,
what exactly this knowledge amounts to varies greatly. While we might never know
for sure how the technology—the software and algorithms—of Facebook work in
detail, there are enough ways in which we might know its operational principles
and logics. As I argued in the previous chapter, algorithms are never so black or
so boxed that they do not allow for critical scrutiny of their functions,
assumptions, and embedded values. The cybernetic lead does not ask us to reveal
the exact content of the box but to experiment with its inputs and outputs. In the
following, I will walk you through some of the most important criteria the
Facebook algorithm uses to serve people the “most interesting” content on their news
feeds. The materials I use to present some of the operational logic of Facebook’s algorithmic workings are drawn from the technical specifications of specific inventions that pertain to the news feed as disclosed and discussed in white papers, computer science conference papers, Facebook patent applications, and
recorded talks given by a Facebook engineer. Moreover, I draw on various
news reports and blog posts from the technology press, most notably from
publications such as Wired, The Atlantic, Techcrunch, Mashable, the New York
Times, and the Guardian. These documents have been read and coded for the
technological mechanisms entailed by the news feed. Technical specifications
entailed in patent applications, in particular, contain much important information
about how these systems work and are imagined by their inventors. Like any piece
of discourse, patent applications also have to be read carefully and within the
realms of what they are produced to achieve as rhetorical devices in a particular
commercial context. To test some of the operational logics described in these
documents, I also report on a technography of my own news feed. What, then,
are the principles and logics of Facebook’s algorithmic form of editing the
news feed?
assumption that users are not equally connected to their friends. Some
friends “count more” than others. The friends that count more are those with
whom a user interacts on a frequent basis or a more “intimate” level—say, by
communicating with a friend via “chat” rather than on the “wall.” The news feed
algorithm is also geared toward highlighting certain types of edges while
downgrading others in which the type of interaction becomes a decisive factor.
Chatting with someone on “Facebook Chat” presumably counts more than “liking”
his or her post. Through tests, Facebook found that, “when people see more text
status updates on Facebook they write more status updates themselves”—
particularly, when they were written by friends as opposed to pages (Turitzin,
2014). As product manager for news feed ranking Chris Turitzin (2014) says,
“Because of this, we showed people more text status updates in their news feed.”
Clearly, the type of content that is more likely to generate an interaction or
prompt users to take action is made more visible and given priority. As Facebook
engineers Andrew Bosworth—who is often cited as the inventor of news feed—
and Chris Cox wrote in a patent document, the system determines an overall
affinity for past, present, and future content based on one or more user activities
and relationships:
Some user interactions may be assigned a higher weight and/or rating than
other user interactions, which may affect the overall affinity. For
example, if a user emails another user, the weight or the rating for the
interaction may be higher than if the user simply looks at the profile page for
the other user. (Bosworth and Cox, 2013: 6)
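To make the logic of differently weighted interactions more concrete, here is a deliberately simplified sketch. The interaction types, weights, and activity log are invented; they are not Facebook’s actual values or code, only a way of showing how weighting interactions differently yields an ordering of friends and, by extension, of their posts:

# A hypothetical sketch of weighted interactions producing an "affinity" score.
# The weights are invented; the point is only that some interactions count
# more than others and that the sums produce a ranking.

INTERACTION_WEIGHTS = {
    "chat_message": 5.0,   # assumed to signal a closer tie than a public "like"
    "comment": 3.0,
    "like": 1.0,
    "profile_view": 0.5,
}

activity_log = [            # invented log of (friend, interaction type)
    ("alice", "chat_message"), ("alice", "comment"), ("alice", "like"),
    ("bob", "like"), ("bob", "profile_view"),
    ("carol", "comment"), ("carol", "like"), ("carol", "like"),
]

affinity = {}
for friend, interaction in activity_log:
    affinity[friend] = affinity.get(friend, 0.0) + INTERACTION_WEIGHTS[interaction]

# Friends ordered by affinity; in a scheme like this, posts from
# higher-affinity friends would be the ones most likely to surface.
for friend, score in sorted(affinity.items(), key=lambda kv: kv[1], reverse=True):
    print(friend, score)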
There is a certain circular logic embedded in the algorithm. In order for you to
like or comment on a friend’s photo or status update, they have to be visible to
you in the first place. Any time a user interacts with an edge, it increases his or
her affinity toward the edge creator. For instance, we can assume that
comments outweigh “likes” as they require more individual effort. The system
assigns ratings and weights to the activities performed by the user and the
relationships associated with the activities. Many different variables may be used
to assign weight to edges, including the time since the information was accessed,
the frequency of access, the relationship to the person about whom
information was accessed, the relationship to a person sharing common interests
in the information accessed, the relationship with the actual information accessed
(Bosworth and Cox, 2013: 5). In deciding what stories to show users first,
Facebook also prioritizes certain types of content over others. For example,
content items related to so-called life events such as weddings, the birth of a
child, getting a new job, or moving across the country may be “prioritized in a
ranking of news feed stories selectively provided to users to ensure that the most
relevant information is consumed first” (Luu, 2013: 2). The weight given to
certain types of edges, moreover, is likely to depend on the internal incentives
that Facebook may have at any given point in time. If the objective for Facebook
trending topics, showing people stories about things that are trending as soon as they
occur (Owens and Vickrey, 2014). As is the case with Twitter’s trending topics,
the notion of what a trend is may not be as straightforward as one might first
think. As Gillespie (2011) has argued, “the algorithms that define what is ‘trending’ or
what is ‘hot’ or what is ‘most popular’ are not simple measures, they are carefully
designed to capture something the site providers want to capture.” A trend is not
a simple measure but a most specific one, defined as a topic that spikes, that exceeds single clusters of interconnected users, and that favors new content over retweets and new terms over those already trending (Gillespie, 2011). Although
Facebook generally conceives of a trend as an object, topic, or event that is popular at
a specific moment in time, the ways in which a trend is computationally defined are much more complex.15 Stories posted on Facebook about a topic that is currently trending (for example, about a national sports event or connected to a
specific hashtag) are more likely to appear higher up in news feed. In addition, as
Owens and Vickrey (2014) point out, deciding what to show on the news feed is
not merely a measure of the total amount of previous engagement, as in the total
number of likes, but also depends on when people choose to like, comment, and
share. Moreover, timeliness is not simply a matter of current events. The time
may be right to show a story even many days after it was originally posted. Facebook
calls this practice story-bumping, meaning that stories people did not scroll down
far enough to see are resurfaced, based on measures of engagement and other
factors. The ordinality of kairos can be seen in the ways in which the news feed
algorithm is clearly geared toward providing stories that are sensitive to a specific
time and context. As Zuckerberg et al. explain in a news feed–related patent
application:
For example, a user planning a trip may be very interested in news of other
users who have traveled recently, in news of trips identified as events by
other users, and in travel information, and then be much less interested
in these relationships, events, objects, or categories or subcategories
thereof upon his return. Thus, items of media content associated with
another user who has traveled recently may receive a large weighting
relative to other items of media, and the weighting will decay steeply so
that the weighting is low by the time of the user’s return. (Zuckerberg et al., 2012: 5)
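The “steep decay” described in the patent excerpt can be pictured with a simple time-decay function. The exponential form and the half-life below are assumptions made only for illustration; the patent does not disclose an actual formula:

# A hypothetical illustration of time decay: an item's weight falls off
# exponentially with the hours elapsed since the triggering event.
# The half-life value is invented, not taken from any Facebook document.

HALF_LIFE_HOURS = 48.0

def decayed_weight(base_weight, hours_since_event):
    """Halve the weight every HALF_LIFE_HOURS hours."""
    return base_weight * 0.5 ** (hours_since_event / HALF_LIFE_HOURS)

for hours in (0, 24, 48, 96, 240):
    print(hours, round(decayed_weight(10.0, hours), 2))
# By ten days (240 hours) the weight has dropped to a small fraction of its
# starting value, "low by the time of the user's return."

Combined with the affinity and content-type weights sketched above, a decay term of this kind is one plausible way of making timeliness count without it being the only thing that counts.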
Affinity, weight, and time are as dynamic as the news feed itself. Because the feed is constantly changed and fine-tuned, there is no clear-cut way in which a certain state of it can easily be discerned. Like the prospect of moving furniture at the Facebook
headquarters, the news feed offers users a slightly differently designed space to
hang out in every time they log in. While it is unlikely that the sofas at the
headquarters are really moved that often or to radically different areas of the
building, the prospect of continuous change is what is important. The culture of
not knowing how the office may look when entering the building each day or the
uncertainties Facebook users face
when logging in are not that different. The main difference is that users may
never really know what they are exposed to, even after entering the digital building
that is Facebook. A moving sofa is easily detectable; an A/B test is not. Users will
never know exactly what they are witness to or how the world appearing on their
screens came into existence. Living in an algorithmically mediated environment means
learning to live in the moment and not to expect to step into the same river twice.16
It also means acknowledging that talking about EdgeRank, the news feed
algorithm, or whatever one now wants to call it, means talking about a general
underlying logic, not its exact mathematical formula.17
In general, the system is geared toward maximizing user engagement and will
present what it deems to be the most interesting content in the hope that the user
will be more likely to “take actions” (Juan and Hua, 2012). The anticipatory logic inherent in the functioning of these algorithms is not primarily geared toward confirming some pre-existing cultural logic but, rather, toward a mode of government that has the capacity to take into account the probability of subjects’ actions.
After all, “the predictions could then be used to encourage more user interaction”
(Juan and Hua, 2012: 2). By looking at the ways in which the specificities of the Facebook platform, exemplified here through the Facebook algorithm, enable and constrain ways of becoming visible online, we can begin to rethink
regimes of visibility that hinge on and operate through algorithmic architectures.
In doing so, in this chapter I expand on Foucault’s idea of “panopticism” as it
provides a useful and still highly relevant analytic for understanding the ways in
which visibility is technologically structured.
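The phrase “the predictions could then be used to encourage more user interaction” points to a ranking step driven by an estimated probability of interaction. The sketch below is a generic illustration of that idea rather than Facebook’s model: the features and weights are invented, and a logistic function simply turns a score into a probability used to order candidate stories:

import math

# A generic, hypothetical sketch of ranking stories by predicted probability
# of interaction. Weights and candidate stories are invented for illustration.

WEIGHTS = {"affinity": 0.8, "past_clicks_on_author": 0.5, "is_photo": 0.3}
BIAS = -2.0

def interaction_probability(features):
    """Logistic score: higher means the user is predicted more likely to act."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

candidate_stories = [
    {"id": "s1", "affinity": 4.0, "past_clicks_on_author": 2.0, "is_photo": 1.0},
    {"id": "s2", "affinity": 1.0, "past_clicks_on_author": 0.0, "is_photo": 0.0},
    {"id": "s3", "affinity": 2.5, "past_clicks_on_author": 1.0, "is_photo": 1.0},
]

ranked = sorted(
    candidate_stories,
    key=lambda s: interaction_probability({k: v for k, v in s.items() if k != "id"}),
    reverse=True,
)
for story in ranked:
    print(story["id"])
# Stories predicted to elicit interaction rise to the top, which is exactly
# where the "threat of invisibility" discussed below makes itself felt.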
PANOPTICISM
Foucault operates with two basic notions of how things are made visible or
shown, exemplified in his notion of the spectacle and surveillance. It is not just a
matter of what is seen in a given historical context, but what can be seen and how the
realm of the seeable and sayable is constructed in order to make a particular
regime of
Hence the major effect of the Panopticon: to induce in the inmate a state
of conscious and permanent visibility that assures the automatic functioning of power. So to arrange things that the surveillance is permanent in
its effects, even if it is discontinuous in its action; that the perfection of
power should tend to render its actual exercise unnecessary; that this
architectural apparatus should be a machine for creating and sustaining a
power relation independent of the person who exercises it; in short,
that the inmates should be caught up in a power situation of which they
are themselves the bearers. (1977: 201)
THREAT OF INVISIBILITY
The mode of visibility at play in Facebook, as exemplified by the news feed algorithm, differs from that of disciplinary societies in one particularly interesting
way. The technical architecture of the Panopticon makes sure that the uncertainty
felt by the threat of permanent visibility is inscribed in the subject, who
subsequently adjusts his/her behavior. While one of the premises of the
panoptic diagram pertains to an even distribution of visibility in which each
individual is subjected to the same level of possible inspection, the news feed does
not treat individuals equally. There is no perceivable centralized inspector who
monitors and casts everybody under the same permanent gaze. In Facebook, there
is not so much a “threat of visibility” as there is a “threat of invisibility” that seems
to govern the actions of its subjects. The problem is not the possibility of
constantly being observed but the possibility of constantly disappearing, of not
being considered important enough. In order to appear, to become visible, one
needs to follow a certain platform logic embedded in the architecture of
Facebook.
The threat of invisibility should be understood both literally and symbolically.
Whereas the architectural form of the Panopticon installs a regime of
visibility whereby “one is totally seen, without ever seeing” (Foucault, 1977:
202), the algorithmic arrangements in Facebook install visibility in a much more unstable fashion: One is never totally seen or particularly deprived of a
seeing capacity. As is the case in the Panopticon, the individual Facebook
users can be said to occupy equally confined spaces. Like the carefully and
equally designed prison cells, user profiles represent templates that “provide
fixed positions and permit circulation”
(Foucault, 1977: 148). Just as with the specific machines (i.e., military, prisons, hospitals) described by Foucault, it is not the actual individual that counts in Facebook. This is why spaces are designed in such a way as to make individuals interchangeable. The generic template structure of Facebook’s user profiles provides not so much a space for specific individuals as a space that makes the structured organization of individuals’ data easier and more manageable. The system, then, does not particularly care for the individual user as much as it thrives on the decomposition and
recomposition of the data that users provide. However, whereas the architecture
of the Panopticon makes all inmates equally subject to permanent visibility, the
Facebook algorithm does not treat subjects equally; it prioritizes some above others.
Whereas visibility, as a consequence of the panoptic arrangement, is abundant
and experienced more like a threat imposed from outside powers, visibility in the
Facebook system arguably works the opposite way. The algorithmic architecture of
the news feed algorithm does not automatically impose visibility on all subjects.
Visibility is not something ubiquitous but rather something scarce. As
content, users, and activities on the Facebook platform grow and expand, visibility
becomes even scarcer. According to Brian Boland (2014), who leads the Ads
Product Marketing team at Facebook, this means that “competition in news feed
is increasing” and that it is “becoming harder for any story to gain exposure in
news feed.”
In order to see how many posts actually made it to the news feed, I conducted an experiment in what could be considered a process of “reversed engineering.”18 Recall how it was claimed in the previous chapter that the first step in knowing algorithms is not to regard the impossibility of seeing inside the black box as an epistemological limit. Instead, what I termed a “technographic inquiry” starts by asking what algorithms are suggestive of by observing the outcomes of algorithmic procedures as indicative of their “point of view.” Over the course of several months (March–September 2011), I used my own personal Facebook
profile to compare the contents of the “top news” to that of the “most recent.” The
most intensive investigation took place during April 2011 when I did the
comparison a couple of times a week, took screenshots of the entire top news
feeds, and manually counted the posts in the most recent feeds. I took the oldest
story published in the top news and compared it to the number of stories
published in the most recent feed up to the same time stamp. On a randomly
selected day in April 2011, this amounted to 280 stories/updates published on
the most recent feed as opposed to 45 posts appearing in the top news feed within
the same timeframe. At first glance, only 16% of the possible stories seem to
have made it to the top news. As time decay is one of the major known factors of
the Facebook news feed algorithm, it is safe to assume that there is a higher
probability of making it into the top news the closer to “real time” the story is
published. In fact, my experiment showed that, if a story was published within
the last three hours, the chance of getting onto the top news was between 40 and
50%. In addition to selecting from the total number of updates generated by
friends from the real-time stream, top news also displays its own tailored news
stories that are not
displayed in the most-recent feed. I call these stories “communication stories,” as they
make a story out of two friends’ recent communicative interaction.19 Communication
stories in Facebook typically take the form of “X commented on Y’s photo” or
“X likes Y’s link.” Taking these tailored stories into the equation, a better estimate
for the 45/280 ratio would be a mere 12% chance of getting in the top news. Of
all the 45 stories published, 17 were communication stories. What most of these
17 communication stories had in common was a high degree of interaction. For
example, a typical story would say: “Anna commented on Claire’s photo” along with
“11 people like this” and “View all 14 comments.” Not only does Facebook tailor
specific stories for the top news feed, but these stories also receive a significant
amount of visibility as opposed to other types of edges. Presumably, the emphasis on
quantification is supposed to simulate the impression of high activity in order to
lower the threshold for participation on the platform.
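As a minimal illustration of how these tallies translate into the percentages reported above, the following sketch reproduces the basic ratio from the figures for that April day. It is a reconstruction for illustration, not the author's actual worksheet.

```python
most_recent_count = 280      # stories/updates counted in the "most recent" feed
top_news_count = 45          # posts shown in "top news" over the same period

inclusion_rate = top_news_count / most_recent_count
print(f"Share of stories surfacing in top news: {inclusion_rate:.0%}")  # ~16%

# Discounting the 17 tailored "communication stories" from the numerator is what
# pushes the chapter's estimate down further, toward the lower figure cited above.
```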
A different top news sample from the experiment reveals the following: Of 42
posts displayed, only three of the stories published by my “friends” came without
any form of interaction by others (that is, without any likes or comments). My top
news was filled with stories that obviously signify engagement and interaction.
Although the distribution of the specific types of stories published varied over the
course of the six months in which I systematically studied the news feed, stories
without significant interaction seemed to be filtered out. The fact that there were
almost no stories by friends that made it to the top news without any form of
engagement by others strengthens the impression of an algorithmic bias toward
making those stories that signify engagement more visible than those that do not.
No matter how meticulous the counting and comparing between the two feeds may
be, the exact percentage of stories making it to the top news remains largely obscure.
However, my initial experiments showed what Facebook only later confirmed—
that, on average, Facebook users only see a small percentage of what they could have
been seeing. Of the “1,500+ stories a person might see whenever they log onto
Facebook, news feed displays approximately 300” (Boland, 2014). Making it onto
the news feed has become a competitive advantage, especially for an increasing
number of business pages. On Facebook, becoming visible is to be selected by the
algorithm. Inscribed in the algorithmic logic of the news feed is the idea that visibility functions as a reward rather than as punishment, as is the case with Foucault’s
notion of panopticism. This is even truer today than when the data reported on in
this chapter were collected. In the first half of 2011, the top news feed primarily
showed a mixture of stories from friends and pages, giving prominence to cluster
stories or what I have called communication stories to provide the impression of
constant activity and engagement. While stories of the type “X commented on Y’s
photo” are still shown in the news feed, most of these stories are now shown as they
happen and displayed as part of the much smaller Ticker feed. Talking about
changes to the news feed since its conception in 2006, Chris Cox, vice president
for products at Facebook, remarked that the most noticeable changes have come
from what
framing the environments upon which they work (Mackenzie, 2007). We may say
that Facebook’s news feed algorithm, acting as a gatekeeper of user-generated content, demarcates visibility as something that cannot be taken for granted. The uncertainty connected to the level of visibility and the constant possibility of “disappearing” in relation to the “variable ontology” of software frames visibility as something exclusive. Becoming visible on the news feed is, thus,
constructed as something to which to aspire rather than to feel threatened by.
PARTICIPATORY SUBJECTIVITY
The threat of invisibility on Facebook, then, is not merely a symbolic phenomenon but is also, quite literally, real. While the regime of visibility created by Facebook
may differ from the one Foucault described in terms of surveillance, understood
as imposing a state of permanent visibility, discipline is still part of the new
diagrammatic mechanisms. While it has become commonplace to argue for the transition from a disciplinary society into a control society after the post-industrial fact described by Deleuze (1992), I do not see a necessary contradiction between
the disciplinary diagram and software-mediated spaces. Discipline simply refers to a
diagram that operates by making the subject the “principle of (its) own subjection”
(Foucault, 1977: 203). Discipline denotes a type of power that economizes its
functioning by making subjects responsible for their own behavior. This is still very
much the case with the logic of feedback loops perpetuated by contemporary
machine learning techniques. Contrary to the notion that we have moved away
from a disciplinary society to a control society as the “Deleuzian turn” suggests, I
agree with Mark Kelly (2015) in that many of the aspects ascribed to control
societies were already covered by Foucault in his notion of discipline. For
Foucault, “Discipline ‘makes’ individuals; it is the specific technique of a power
that regards individuals both as objects and as instruments of its exercise”
(Foucault, 1977: 170). It imposes a particular conduct on a particular human
multiplicity (Deleuze, 2006: 29). It is important here to highlight that Foucault
developed the notion of disciplinary power in order to account for the duality of power and subjectivation—effectuated by “training” subjects to think and
behave in certain ways and, thus, to become the principle of their own regulation
of conduct. Through the means of correct training, subjects are governed so as to
reach their full potentiality as useful individuals (Foucault, 1977: 212). Foucault
identified three techniques of correct training: hierarchical observation, normalizing judgment, and the examination. In this sense, we could say that the
news feed algorithm exercises a form of disciplinary power since discipline, in
Foucault’s words, “fixes; it arrests or regulates movements; it clears up confusion; it
dissipates compact groupings of individuals wandering about the country in unpredictable ways; it establishes calculated distributions” (1977: 219).
For Facebook, a useful individual is the one who participates, communicates,
and interacts. The participatory subject evidently produced by the algorithmic
certainly a space that allows for participation, the software suggests that some forms
of participation are more desirable than others, whereby desirability can be mapped
in the specific mechanisms of visibility that I have suggested throughout this chapter.
Conclusion
Given Facebook’s 1.28 billion daily active users, algorithmic practices, techniques, and procedures play a powerful role in deciding what gets shown, to whom,
and in what ways. Algorithms are never neutral but reflect the values and cultural
assumptions of the people who write them. This does not mean that we can
simply understand algorithms by asking programmers. While ethnographic work exploring how algorithms are created may tell us something about what goes into the
ways an algorithm functions, programmers are no more equipped than other
people to talk explicitly about their biases, values, or cultural assumptions. Instead,
this chapter reveals how an understanding of the power and politics of
algorithms can be approached by considering their sociomateriality. Guided by
the question of how the news feed works, and for what possible purpose, I argued for a
critical, architectural reading of algorithms. Through a close reading of publicly
available documents describing the workings of the Facebook news feed as well as
systematic observations and analysis of my own news feed, several aspects were
revealed about the ways in which sociality and subjectivity are imagined by the platform and how they are systematically embedded and engineered into algorithmic arrangements and mechanics. Despite the emphasis on the material politics of algorithms in this chapter, materiality is never decoupled or distinct
from the domain of the social. With much recent focus on the algorithm as an
object of analysis within the social sciences and hu- manities, there has been a
certain tendency to imbue too much causal power to the algorithm as if it were a
single, easily discernible thing. What has become apparent is that there is an
urgent need to historicize algorithms, to put algorithms into broader contexts
of production and consumption. At the same time, cultural theorists have to
grapple with the inevitable ephemerality of social media algorithms. In this chapter, I
have tried to engage with this historicity by considering the cultural beliefs and
values held by creators of software and by being explicit about the dates and times
of data capture. This is to say that we need to acknowledge that the news feed and
its algorithm(s) emerge from somewhere; evolve in particular ways; are talked
about in certain terms; mystified; reflect certain beliefs and assumptions; work to
accomplish certain technical, economic, and cultural goals; stabilize temporarily,
then change again; and so on. It also means that tapping into the algorithmic
assemblage will only ever be a tapping into a momentary tracing and stabilization of
a series of events.
Indeed, the analysis of the news feed algorithms presented in this chapter reveals
the “variable ontology” characteristic of software (Mackenzie, 2006). While it
might be a stretch to claim that Mark Zuckerberg’s love of gray T-shirts is an algorithmic metaphor, the piece of clothing does reveal something about the values and
beliefs he holds. In spite of restricted access to Facebook engineers, we still
find plenty of traces of how platforms operate, how their CEOs and managers
think and talk, how code is made and work organized, the ways in which culture
matters, how culture gets branded, and the values, norms, and cultural
assumptions held by the people involved. These traces include, but are not limited to, the principle of moving furniture, the notion that the physical layout
of Facebook’s headquarters is designed to change the thinking of Facebook
employees, the technical specifications described in patent documents, the
engineering rhetoric perpetuated at developer conferences, media reports, and the user interface. The study shows how algorithms are ontogenetic simply because the
problems that require a solution continuously change. While the problem of
showing the most “interesting” content remains, what constitutes interestingness
or relevance depends on the given context. Facebook is never finished. Whether
we are talking about the physical refurbishing of sofas or the news feed ranking
algorithm, the object of study is inherently dynamic, changing, and mutable. In an
age of machine learning in which the models used to predict user behavior
constantly change as a result of increasing user data, it may prove par- ticularly
challenging to capture the underlying logic of how the system works. If, as
Foucault suggested, disciplinary power is effectuated by means of training subjects
to reach their full potentiality as useful individuals, what we see emerging now is
a certain disciplining of the algorithm by means of training the machine to learn
correctly from a corpus of known data. What we are still only at the beginning of understanding, let alone addressing in the first place, is what “correct training” means and which techniques are put into place to ensure that algorithms reach their full potentiality as useful machines.
Many of the characteristics associated with disciplinary power described by
Foucault, such as the function of enclosure, the creation of self-control, and
the training of human multiplicity, are apt characterizations of the kind of
enclosed ar- chitecture of Facebook and its subtle demands for participation and
interaction. However, if we follow Foucault in his understanding of surveillance
as a form of “permanent visibility,” then this notion fails to capture the
algorithmic logic of creating modalities of visibility that are not permanent but
temporary, that are not equally imposed on everyone and oscillate between
appearing and disappearing. While it is true that Foucault described
organizations of power within a somewhat fixed technological and architectural
form, the idea that architectural plans structurally impose visibility does not seem to come into conflict with the unstable and changing arrangements characteristic of new
media. Spaces, both digital and physical, are always delimitated in one way or
another. That does not mean they cannot also expand or change as new elements
come along. As Foucault sees it, discipline is centripetal, security centrifugal. This is
to say that discipline functions to isolate a space or determine a segment, whereas
the apparatuses of security “have the constant
tendency to expand” by having new elements integrated all the time (Foucault,
2007: 67). Security does not supersede discipline, just as discipline does not
replace sovereignty but rather complements and adds to it (Foucault, 2007: 22).
With reference to Foucault’s concept of panopticism, my aim in this chapter
has been to argue for the usefulness of applying an analytics of visibility to
(im)material architectures. Following Foucault’s assertion that “the Panopticon
must be understood as a generalizable model of functioning; a way of defining
power in terms of the everyday life of men” (1977: 205), a diagrammatic
understanding of the news feed algorithm provides a constructive entry point for
investigating how different regimes of visibilities materialize. Taking the material
dimensions of social media seriously reveals how politics and power operate in
the technical infrastructure of these platforms. In Foucault’s terms, the news feed
can be understood as a form of government in which the right disposition of
things is arranged to lead to a suitable end (2007: 96). Algorithms, it was
argued, are key to the disposition of things, or what Rancière would call the distribution of the sensible. What the operational logic of the news feed ranking
algorithm reveals is a reversal of Foucault’s notion of surveillance. Participatory
subjectivity is not constituted through the imposed threat of an all-seeing vision
machine but, rather, by the constant possibility of disappearing and becoming obsolete.
5
Affective Landscapes
Everyday Encounters with Algorithms
I have been a Twitter user for more than ten years now, but I still do not know
how to use it. Tweeting does not come naturally to me, and I would not really be
able to explain what the platform is for. However, like most users, I think of
Twitter as a real-time micro-blogging platform from which you can tweet
small snippets of thoughts, pieces of information, or links to a group of followers.
For many, the real-time reverse chronological feed is the most defining aspect of Twitter’s user experience. It, therefore, felt somewhat transgressive when Twitter announced in early 2016 that it would introduce an “algorithmic timeline”
to replace its iconic real-time feed. It did not come as a surprise that the news that
Twitter was preparing to change this defining feature completely would cause a
massive public outcry. Like any big platform change nowadays, enraged users
gathered around a hashtag to protest. For days and weeks, #RIPTwitter
demonstrated the dismay people felt at the arrival of the algorithm. “In case you
never hear from me again, you can thank Twitter’s genius new algorithm system . . .
#RIPTwitter.”1 “The main thing I love about Twitter is that it does not use an
algorithm. Sounds like that’s changing. #RIPTwitter.”2 “Every time I hear the
word ‘algorithm’ my heart breaks. #RIPTwitter.”3 I want to start this chapter with
the case of #RIPTwitter, not because the user protests surrounding platforms are
somehow exceptional, but because of the emerging ordinariness of algorithms
that these tweets and many more like them seem to convey. As Christian Sandvig
(2015) notes, in the years 2006–2013, “There has been a five-fold increase in the
number of times the word ‘algorithm’ appeared in the major newspapers of the
world.” This number has likely just exploded since 2013, with the news media
now routinely reporting on algorithms. While many people are still not aware of
the extent to which algorithms are curating their online experiences (Eslami et al.,
2015), we might nevertheless—or precisely because of it—sense the emergence
of a new “structure of feeling” (Williams, 1977) surrounding algorithms as
evident in various emerging discursive impulses surrounding the algorithm.4
The starting premise for this chapter is that algorithms are becoming habitual
and part of a felt presence as people inhabit mediated spaces. This is not to say
that algorithms are necessarily consciously present to people but that they have
the capacity to call forth “something that feels like something” (Stewart,
2007: 74). Sometimes, this something may take the form of an event such as the
#RIPTwitter protest, a moment when “collective sensibilities seem to pulse in plain
sight” (2007: 74). Most of the time, however, algorithmic impulses become barely
noticeable in the mundane moments of everyday life. Recall, for example, the
targeted ads for the Lisbon hotel recommendation or the party dress that seemed
to creep up on me while I was surfing the Web (as described in chapter 1), or the
oddity of having your credit card company put a security ban on you without being
able to explain why (as mentioned in chapter 3). In moments like these,
algorithms make themselves known even though they often remain more or less
elusive. They become strangely tangible in their capacity to create certain
affective impulses, statements, protests, sensations and feelings of anger,
confusion or joy.
In this chapter, we move away from the disciplinary power of algorithms
explored in the previous chapter toward the “micropolitics” of power imbued in
the affective and phenomenological dimensions of algorithms. By
“micropolitics,” I mean something akin to Foucault’s (1978) notion of
microphysics as a way to pay attention to the generative potential of specific
everyday practices and techniques. More specifically, micropolitics “refers to the
barely perceived transitions in power that occur in and through situated
encounters” and the idea that “different qualities of encounter do different things”
(Bissell, 2016: 397). This is not a kind of power and politics that is necessarily
repressive, discriminatory, or hierarchical but, rather, productive in the sense that it
produces certain capacities to do and sense things. What I am interested in is the
question of how algorithms and people meet, and what these encounters make
possible or restrict? The notion of affect becomes im- portant in terms of
describing what is at stake in thinking about the barely notice- able and more
sensible dimensions of algorithms as a key to understanding the power and
politics of algorithmic life. Without going much further into the notion of “affect”
(many books have been dedicated to this concept alone), suffice it, at this point, to
say that “affect” describes the domains of “the more-than or less-than rational” in life,
including “mood, passion, emotion, intensity, and feeling” (Anderson, 2006:
734).5 I specifically focus on the ways in which algorithms have the capacity
to affect and be affected (Deleuze & Guattari, 1987) by looking at how users
perceive and make sense of algorithms as part of their everyday lives.
The point, however, is not to assess the extent to which people actually feel algorithms as such but to highlight experience and affective encounters as valid forms of
knowledge of algorithms (see chapter 3 for a discussion on the phenomenological
approach to algorithms). The argument is made that algorithms do not just do
things to people; people also do things to algorithms. The word to is important as
it extends more classic accounts of media use and arguments of audience power. It
is
not just that people do things with the help of algorithms. By using algorithms,
they also do things to them—modulate and reconfigure them in both
discursive and material ways. As we discussed in chapter 2, the machine-learning
algorithms proliferating on social media platforms are never set in stone. These algorithms are continuously molded, shaped, and developed in response to user input. If the previous chapter made a case for analyzing algorithms as governmental techniques directed at the “right disposition of things” through rankings and weights as architectural forms of power, this chapter considers the “productive power” of cultural imaginaries and the significant role users have in reconfiguring the algorithmic spaces that they themselves inhabit.
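To indicate, in schematic form, what it means for such algorithms to be continuously molded by user input, the following sketch stages a toy feedback loop in which clicks become training signal and the updated scores shape what is offered next. It is an invented illustration, not any platform's actual system; the names, scores, and learning rule are assumptions made for the example.

```python
from collections import defaultdict

preferences = defaultdict(lambda: 0.5)   # the model's current guess per topic

def show_and_learn(topic, clicked, learning_rate=0.2):
    """Fold one user reaction back into the per-topic score."""
    target = 1.0 if clicked else 0.0
    preferences[topic] += learning_rate * (target - preferences[topic])

def next_topics(candidates, k=3):
    """Pick what to surface next, favoring topics the loop has reinforced."""
    return sorted(candidates, key=lambda t: preferences[t], reverse=True)[:k]

# One turn of the loop: reacting reshapes what is offered on the next turn.
show_and_learn("celebrity gossip", clicked=True)
show_and_learn("local politics", clicked=False)
print(next_topics(["celebrity gossip", "local politics", "music"]))
```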
Encountering Algorithms
Consider the following scene: Michael presses the post button and waits.
Normally, it should not take longer than five minutes before the “likes” and
“comments” start ticking in. Nothing happens. As an independent musician,
Michael has to find his own ways to spread the word about his music and make sure
he reaches an audience. Facebook seems like the perfect platform for self-promotion. Michael thinks he has gotten better at playing “Facebook’s game,” as
he calls it. For example: “Statuses do better based on what night you post it, the
words you choose to use, and how much buzz it initially builds.” He knows from
previous experience that, “if the status doesn’t build buzz (likes, comments,
shares) within the first ten minutes or so, it immediately starts moving down the
news feed and eventually gets lost.” Michael has just released a new album and
needs to get the word out. He picks what seems to him the perfect day of the week,
carefully crafts the words of the update, deliberately uses phrases such as “wow!” and “this is amazing!” to make the post more visible. Or so he thinks. But nothing
happens, and the post eventually disappears, no downloads, just some scattered
“likes.”6
One more: Rachel is an avid Facebook user, but lately she thinks “the Facebook
algorithm is destroying friendships.”7 She’s been a Facebook member almost
since the very beginning and has over 700 friends. She considers herself a fairly
sociable and outgoing person, who likes to post regular status updates and
frequently com- ments on other people’s posts. Then, a few weeks back, she
saw a post by an old friend from high school. Rachel had totally forgotten they
were even Facebook friends. She had not seen this old friend in ages, and, all of a
sudden, she “pops up in her news feed.” Rachel is curious—how much has she
been missing out on? What’s been going on in her friend’s life that Facebook decided
not to broadcast on her own feed? “I’m constantly taken aback by all the info and
people Facebook hides from my feed on a daily basis,” Rachel says. “So, in that sense,
it does feel as if there is only a select group of friends I interact with on the social
network, while I’ve practically forgotten about the hundreds of others I have out
there.”8
More than simply describing the strange feelings of media users, these scenes
describe some of the many mundane moments in which people variously encounter
the algorithmic realities and principles underlying contemporary media
platforms. For 21-year-old Michael, the Facebook algorithm has become an
important part of his everyday life. As a musician, he depends on the Facebook
platform to promote his music, connect with fans and followers, and engage in
a community of like-minded artists in L.A., where he lives. Michael describes his
encounters with the algorithm as a game of popularity that hinges on the number
of likes and comments his updates are able to attract. In many ways, Michael’s
story is reminiscent of the “threat of invisibility” described in the previous
chapter. To counteract this threat, Michael has developed strategies and tactics
that he thinks will guarantee him a better chance at retaining attention and
getting noticed. Based on previous experi- ence and videos about the underlying
logic of the news feed, Michael has developed theories about how the algorithm
works. What is frustrating, he says, is when it turns out not to be true. For Rachel,
the Facebook algorithm is not subject to the same kind of frustration as it is for
Michael. Rachel, a 24-year-old journalist based in New York City, does not “depend”
on the algorithm for her own livelihood in the way that Michael does. As a
journalist, however, she has noticed that there seems to be an increasing number
of news reports on algorithms lately. An article in the Washington Post dealt
with the Facebook algorithm; and, ever since, Rachel has been trying to find
signs of it in her own news feed. When an old friend from high school appeared
in her feed seemingly from nowhere and even “liked” one of Rachel’s posts,
she finally sensed the algorithm at work. With respect to forgetting people, Rachel
says, it feels like the algorithm is trying to make decisions on her behalf,
ultimately influencing the way in which she conducts her friendships. The
question going forward is how we might want to think about such everyday encounters with algorithms and what, if anything, they might tell us about the ways in which algorithms shape our experiences.
Asking about people’s encounters with algorithms seems to suggest a conscious
meeting of some sort or that people are already aware of and have a grasp of
what algorithms are. As existing research has shown, this is far from always the case (Eslami et al., 2015; Rader & Gray, 2015). In many ways, the question of
how people attend to algorithms is akin to the question of how people start to make
sense of that which is not readily available to the senses, which has always been an
important area of research in the history of science.9 Many scientific and
technological innovations are not visible as such. Just think of nanotechnology,
RFID, or sensor technologies of various kinds. Like algorithms, they are
embedded in computers, pieces of clothing, and other more visible interfaces.
Dealing with uncertainties and unknowns is part of what it means to be a
human; it is part of what it means to live toward a future, what it means to be
in love, or what it means to understand things such as physics and climate change.
When people are faced with the hidden and uncertain aspects of life, they tend to
construct what has variously been described as “mental
models” (Schutz, 1970; Kempton, 1986), “interpretive frames” (Bartunek & Moch,
1987), or simply “folk theories” (Hirschfeld, 2001; Johnson-Laird & Oatley, 1992;
Johnson, 1993). To interact with technologies, people have to make sense of them,
and they do so in a variety of ways—for example, through visualizations,
analogies to more familiar domains, or by the use of metaphors. Media discourses,
cultural artefacts, stories, anecdotes, and shared cultural beliefs can “provide
people with explanatory power and guide behavior surrounding use of a particular
technology” (Poole et al., 2008). The increased ubiquity and popularity of
depicting unfamiliar phenomena such as artificial intelligence and robotics in films
and popular imagery have arguably made these phenomena more available to
public reasoning and debate (see Suchman, 2007).10 This is no different when it
comes to the algorithm, which is rarely encountered in its mathematical or
technical form. When people encounter or “see” the algorithm, it is typically in the
form of simplified images of a formula (e.g., EdgeRank formula) or a newspaper
story about a recent controversy, as evident, for example, in the censorship accusations against Facebook and its “algorithm” removing the iconic “Terror of War” image, or through other means of cultural mediation such as films and artworks of various kinds.
Unsurprisingly, perhaps, algorithms are difficult to represent or culturally mediate, even for computer scientists.11 However, as Sandvig (2015) notes, in an attempt
to make the algorithm more intelligible to the public we are seeing an epistemic
development in which “algorithms now have their own public relations.” That is,
beyond the technical or educational imagery of algorithms characteristic of
earlier times, algorithms materialize in different representational settings. Algorithms
have become objects of marketing, represented in commercialized images of mathematical formulas and circulated by various third-party marketing consultants,
technology blogs, and other trade press publications. The process of making
algorithms a product of public relations is not so different from what Charles
Bazerman (2002) calls the production of “representational resting points.” In
examining the cultural history of Edison’s electric light, Bazerman argues that,
before the invention “could create its own new world of experience and
meaning,” it was crucial to develop accepted representations of electric light as
stable points of reference (2002: 320).12 What exactly these representational
resting points are with regards to algorithms and machine learning is still up for
grabs, but simplified mathematical formulas or cyborg figures are certainly part of
it. Though the formula may portray algorithms as objects of science and neutral
conduits, this is arguably not what people experience in their day-to-day life. As the
stories of Michael and Rachel suggest, what people experience is not the
mathematical recipe but, rather, the moods, affects, and sensations of which
algorithms are generative. The notion of mental models, then, only brings us so
far—not because the theories that people construct about unfamiliar technologies
and hidden processes often prove to be inaccurate (Adams & Sasse, 1999; Wash,
2010), but because it does not necessarily matter whether they are correct or
not.
scenes, situations, episodes, and interruptions that give rise to the felt presence of
algorithms in everyday life. By “scenographic,” I mean a methodological
sensibility akin to that of writers and scholars such as Lauren Berlant and Kathleen
Stewart, both of whom attend to cases or scenes of the ordinary as a way of making
affective processes more palpable. For Stewart (2007), scenes are method and
genre, a way of writing about ordinary affects as a series of happenings that,
taken together, form a story about what it feels like to live in contemporary
America. As McCormack writes, the scene or situation “becomes a way of gathering
the sense of worlds that matter while also posing the question of how the force of
these worlds might become part of their stories” (2015: 93). A situation, as Berlant
defines it, “is a state of things in which something that will perhaps matter is
unfolding amid the usual activity of life. It is a state of animated and animating
suspension,” one “that forces itself on consciousness, that produces a sense of the
emergence of something in the present” (2011: 5). Algorithms, I suggest, may be
productive of such emerging presences and “accessed” via the ways in which they
make people feel and the stories that they tell about those encounters. This is closely
related to the notion of affect but not exactly the same. As Papacharissi (2015)
points out, affect is what permits feelings to be felt, it is the movement that may
lead to a particular feeling. Affect, she suggests, can be thought of as the “rhythm
of our pace as we walk” (2015: 21). For example, a fast-paced rhythm may
lead to and amplify feelings of stress; a slow and light-paced rhythm may make us
calm. In the context of this study, the question is how we might think of the force
of movement that algorithms create, and how it prompts a “reason to react,” as
Stewart puts it (2007: 16).
Taking tweets about algorithms written by seemingly ordinary people as my
starting point, I was interested in getting a better sense of the kinds of situations and
circumstances that had prompted people to react and respond to the ways of the algorithm.14 During the two years of data collection, I regularly searched Twitter for
keywords and combinations of keywords, including: “Facebook algorithm,” “algorithm AND Facebook,” “Netflix algorithm,” “algorithm AND Netflix,” and so
forth.15 I queried Twitter every few weeks and manually scrolled down the stream of
tweets and took screenshots of the ones that seemed to be more personal rather
than marketing-oriented. Using a research profile I had set up on Twitter, I occasionally contacted people who had recently tweeted about algorithms to
ask whether they would be willing to answer a few questions related to that
tweet. These tweets in- cluded statements such as: “I’m offended by the awful
taste Twitter’s algorithm thinks I have,” “Facebook, perhaps there isn’t an
algorithm that can capture human experience,” “It feels like Spotify’s algorithm can
read my mind—it’s both awesome and creepy,” “The Netflix algorithm must be
drunk,” “The algorithm on Facebook confuses me. It seems to think I should lose
weight, am broke, pregnant and single.” Altogether, I engaged in asynchronous
email conversations and synchronous computer-mediated chats with 25
individuals, based on questions concerning those tweets.16 During the spring of
2016, I conducted an additional ten semistructured
interviews with long-time social media users in their early twenties about their media
use practices and how they viewed the role of algorithms in those interactions.17
The answers were first coded for information about the kinds of situations and
circumstances that provided the informants with a “reason to react.” As feminist philosopher Sara Ahmed suggests of strange encounters, “To be affected by something is to
evaluate that thing. Evaluations are expressed in how bodies turn towards things. To
give value to things is to shape what is near us” (Ahmed, 2010: 31). This became
clear in the process of coding the interviews and email transcripts as well. Thus, in a
second iteration of coding, all transcripts were analyzed in terms of how they
were either explicitly or implicitly evaluating and making sense of the algorithms,
the extent to which their awareness of the algorithm affected their use of the
platforms in question, what kinds of tactics and strategies they developed in
response to the algorithm (if any), and the kinds of issues and concerns they
voiced about the algorithms. While these transcripts would not stand up in a
court of law, they should be seen as records of happenings that are “diffuse yet
palpable gatherings of force becoming sensed in scenes of the ordinary” (McCormack, 2015: 97). What is of importance, then, is not whether the stories
that people tell are representative of how algorithms “really are” (if that is even
possible). Instead, personal algorithm stories may attest to the ideas and
perceptions people have about what algorithms are and how they function and,
perhaps more importantly, their imaginings about what algorithms should be and
the expectations that people form about them.
“Clicking consciously” addresses how algorithms or, more precisely, beliefs about
how algorithms work trigger particular responses and ways of being on social media
platforms. People develop various strategies and tactics to make themselves
“algorithmically recognizable” (Gillespie, 2016) or unrecognizable, for that
matter. What the data reveal is how being affected by algorithms is not simply a
matter of passive observation or subtle awareness but of actionable consequences
and movement, of reaction and creativity.
While Shannon thinks that the Taylor Swift incident is amusing, she often finds
targeted ads to be “slightly offensive as they make assumptions about me,
which I don’t like to think are true.” Such is the work of “profiling machines”
(Elmer, 2004) that seek to produce a sense of identity through detailed consumer
profiles, which are geared toward anticipating future needs. Based on statistical
inferences and inductive reasoning, profiling algorithms do “not necessarily
have any rational grounds and can lead to irrational stereotyping and
discrimination” (de Vries, 2010: 80). Still, the question of whether “correct”
identifications are being constructed may be beside the point. As de Vries (2010)
argues, misidentification is not simply a mismatch or something that should be
considered inappropriate. Misidentifications may also give some leeway for thinking
about how identity construction is experienced.
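A deliberately crude sketch can indicate how such inductive profiling produces confident-looking but potentially groundless identifications. The categories, signals, and weights below are invented for illustration; the point is only that a handful of behavioral traces can be converted into a demographic "bucket" that may badly misdescribe the person behind them.

```python
SIGNALS = {
    "running shoes": {"sporty": 0.6},
    "baby monitor": {"new parent": 0.8},
    "makeup": {"female": 0.5},
    "power tools": {"male": 0.5},
}

def profile(purchase_history):
    """Accumulate crude per-bucket scores from purchases and rank the guesses."""
    scores = {}
    for item in purchase_history:
        for bucket, weight in SIGNALS.get(item, {}).items():
            scores[bucket] = scores.get(bucket, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A handful of purchases yields contradictory, stereotyped guesses about one person.
print(profile(["makeup", "power tools", "running shoes"]))
```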
Experiencing algorithmic landscapes is as much about what the algorithm does
in terms of drawing and making certain connections as it is about people’s personal
engagements. To draw on Ingold (1993), a particular landscape owes its
character to the experiences it affords to the ones that spend time there—to their
observa- tions and expectations. For Robin, the ways in which Amazon seems
constantly to be confused by her gender is not just amusing and confusing but
seems to make her transgender subject position simultaneously more real and
unreal. As she explains:
Robin has been thinking about “how much a company is willing to go out on a
limb on things like this,” whether a company like Amazon would actually be willing to
try and categorize people according to all possible demographic buckets, not just
the ones that do not seem to endanger their profits:
the negative reaction that comes from a false positive might outweigh any
potential sales.24
What seems to bother Robin is not so much the fact that the apparatus is unable
to categorize her but the apparent heteronormative assumptions that seem to be
reflected in the ways in which these systems work. As a person in transition, her
queer subject position is reflected in the ways in which the profiling machines
do not demarcate a clear, obvious space for her. A similar issue came up in my
interview with Hayden, who is also transgender and uses social media primarily
to read transition blogs on Tumblr and Instagram. He is not particularly
impressed by the way in which algorithms seem to only put “two and two
together” when “the way things work in society can be quite the opposite.” If
this is how algorithms work, by simplifying human thoughts and identity, Hayden thinks social media platforms are moving in the wrong direction. As he tellingly
suggests, “people aren’t a math problem.”25
Algorithmic mismatches may be experienced as problematic when they are no longer perceived as “innocent” but turn into a more permanent confrontation
that feels inescapable and more intrusive. For Lena, the Facebook algorithm is at
odds with how she identifies in life. She is no longer the person she used to be.
Yet, she feels constantly trapped by her past. She no longer hangs out with the
same crowd. She does not share their interests. She has moved across the country
from a small rural town in Texas to New York City, where she now goes to grad
school. She identifies strongly as liberal and, of course, votes for the Democrats,
she says. Yet, all the updates Lena sees on her news feed seem to stem from
Republican voters and people who are only interested in celebrity gossip. This
obvious disconnect between who she identifies as being and how Facebook seems to
have her figured out annoys her. She is annoyed that her “own social network is so out
of sync” with her interests and beliefs. Although she speculates that her inability to
escape her past could be a result of adding most of her FB “friends” when she
was attending high school in rural Texas and not adding as many since she
moved to New York City, Lena thinks the algorithm “must be incorrectly assessing
my social networking desires.”26 While “real” life allows the past to be the past,
algorithmic systems make it difficult to “move on.” As Beer (2013) notes, the
archive records and works to shape memory by defining what is relatable and
retrievable. What happens when the world algorithms create is not in sync (to
use Lena’s own words) with how people experience themselves in the present?
To what extent do existing social networking profiles remain forever muddied by
past lives and experiences?
Memory is a powerful thing. It lives in the flash of a second and the duration of a lifetime; it emerges in lifelike dreams and the drowsiness of mornings. Memories
can be recalled at will or willfully withdrawn. A memory is an event that connects
us to the past and makes it possible to project a path for the future. Memory is an
I was tweeting about Facebook’s recent “Year in Review” fiasco, where the
algorithm they created was generating reviews that included some terrible
memories—deaths of children, divorces, etc. My point in the tweet was
that, while algorithms might (or might not) be a good tool for effectively
serving up the right ad to the right user, they might not be the best way
to create emotional content or connect on a human level.28
The incident that Albert cites, and which prompted him to speak out about
the limits of algorithms, is the story of Eric Meyer and the “cruel algorithm”
serving up a picture of his recently deceased daughter as part of Facebook’s “year
in review” feature. For Eric Meyer and others experiencing these unwanted
flashes of painful memories, algorithmically curated and repackaged content does not
just materialize in helpful birthday reminders. It also lingers as insensitive
reminders of the tragedies of life. Emerging at odd moments, these digitally
mediated memories make you “raise your head in surprise or alarm at the uncanny sensation of a half-known influence” (Stewart, 2007: 60). More than anything,
however, the incident made explicit for Albert “the most obvious thing about
algorithms—they’re just machines.” What was obviously missing, says Albert, was
the “human judgment that says, ‘You know, this guy probably doesn’t want to be
reminded that his daughter died this year, so even though the post got tons of
attention, I’ll leave it out.’” Indeed, algorithmic systems “judge individuals against a set of contrasting norms, without human discretion intervening or altering
decisions” (Beer, 2013: 77).
Similarly, Ryan, a 21-year-old avid YouTube user, thinks that mechanical ways
of trying to solve human problems and desires are where the algorithm hits its
barrier. The problem with algorithms, Ryan suggests, is “that they are just based
on trends and past behavior. It can’t predict for you in the future, it just assumes
you are going to repeat everything you’ve done in the past.” Algorithms do not
take into account that “people are unpredictable and like to try new things all the
time,” he says.29 This is why Ryan loves people who put considerable effort into
their YouTube channels—particularly, the semifamous YouTube celebrities he
follows, who explicitly ask fans to contribute to their next videos. For Ryan, these
YouTube celebrities are almost like “human algorithms,” making sure they give
their audience what they want. Ryan thinks they are “more accurate than an
automatic algorithm, because computer algorithms don’t really understand human
preferences or traits like actual humans do.” This is precisely what Hayden seems
to touch on when he claims that people are not a math problem: People are
more complicated than an equation,
more complex and unpredictable than what can be broken down into a few steps
of instructions for a computer. In fact, Hayden thinks algorithms are fundamentally not that smart. They were not able, for example, to parse the fact that
“views” means beautiful scenery, while it also happens to be the name of the
rapper Drake’s newest album. So, when Hayden uses the hashtag “views” for all
his recent images of sunsets and landscape pictures on Instagram, he is not
interested in following a bunch of Drake fans, which an algorithm apparently
suggested. What became apparent in many of the participants’ responses, then,
is not just how algorithms are capable of misjudging humans but how they
might not even be able to judge humans at all.
platforms such as Facebook have a tendency to make the same people or the
same type of content visible time and again. Lucas, a 25-year-old quality
assurance engineer, says he spends a lot of his spare time on social media.
Lately, he has noticed that there is absolutely no new content popping up on his
Facebook feed.
The same 5-6 stories were showing up at the top of my feed across a 4-hour
period. I have a fairly large social graph on Facebook (1,000+ “friends”), so
to see absolutely no new content show up agitated me, so I tweeted
about my feelings on the algorithm.30
Lucas thinks that “Facebook’s filtering consistently degrades what he prefers in his online social experience.” Like Lena, who seems to be caught in a Republican loop
on Facebook, Lucas feels that the platform is holding back important information.
The problem for the participants lies not necessarily with filtering in and of itself but
with the feeling of not knowing what they could have known. This is not simply
the fear of missing out but of having “your own content” and relationships controlled
or even censored by a platform, as some participants suggest.
Going back to Michael and Rachel’s stories, their frustration with Facebook
stems in part from the feeling of having someone else decide on their behalf
the conditions under which their contributions are either made visible or
silenced. Indeed, as Amber, another participant, tells me, “I am feeling annoyed that the algorithm decides for me [. . .] it is so unpredictable and strange.”31
Nora, a 21-year-old Canadian public policy student, thinks there is “definitely
something strange going on with the Facebook algorithm.”32 In a series of tweets,
Nora speculates about the underlying mechanisms of Facebook, noting that some
of her posts seem to get far more “likes” and comments than others without any
apparent reason. The number of “likes,” “shares,” and “comments” fuels the popularity game supported by social media platforms, in which algorithms feed off the social disposition toward interaction. Nora worries that the popularity bias of social media algorithms potentially diminishes what people get to see. Specifically, she worries that the algorithmic bias toward “likes” and
“shares” makes viral videos such as the “ice bucket challenge” much more
prominent, hiding more important but less “liked” current events such as the
racial conflicts in Ferguson. As Nora says, “I don’t like having an algorithm or
editor or curator or whatever controlling too much of what I say—if they’re trying
to go for more ‘trending topics,’ will my posts on ‘non-trending topics’ get shuttered
away?”33
The notion that algorithms contribute to making the popular even more popular
is not just apparent in the participants’ understanding of algorithms. It is very much
evident in the public discourses surrounding algorithms as well. Let us for a
moment return to the #RIPTwitter incident mentioned at the beginning of this
chapter and consider some of the tweets that contained this hashtag: “Algorithms are
a form of censorship. I look to Twitter for unfiltered news in real time.”34 “One of
the great
rewards of being an adult is deciding ON YOUR OWN who (and what) you should
be interested in.”35 “Think about it, with an algorithm you might never see a
tweet from specific people ever again. RIP Twitter.”36 “Replacing the
chronological time line with an algorithm based feed. Turning Twitter into a
popularity contest. No thanks.”37 In my interview with Ryan, he brings up
similar issues and worries. He thinks it would be “a big mistake” if Twitter started
using algorithms on a large scale, as “it will obviously favor media and links and
videos and photos and stuff like that, and it will not favor text posts, which is a
majority of what the average user is posting on that platform.” Hayden voiced a
similar concern about the possibility that Instagram would change into an
algorithmically filtered feed. For Hayden, the “Instagram algorithm” would
merely make the “Instagram famous” even more famous since they would
“automatically pop up on the top, and you won’t see your friends who might have
fewer followers.”38 Like Hayden, the users who gathered around the hashtag
#InstagramAlgorithm only two months after Twitter announced its algorithmically
curated feed voiced their worries about the platform turning into a popularity
contest—for example: “Meh about the Instagram algorithm change. Popularity
contests were never my thing.”39 “Why is everything a popularity contest on social
media? Why @snapchat is the healthiest platform for your self esteem.
#InstagramAlgorithm.”40 “Why does @instagram think that this is a good idea? It’s
‘insta’gram not ‘popular’gram #RIPInstagram.”41 “The new @instagram algorithm is
the end for small timers like me. I’m not winning any popularity contests against big
names.”42 These tweets are worth quoting at length not because they are necessarily
representative of the general sentiments of the hashtag publics that form around
these specific events but because of the ways in which they show how algorithms are
variously imagined and the expectations people have of them. What these sentiments show is how algorithms are imagined as control mechanisms and gatekeepers
that work to enforce existing power structures.
However, algorithms are not always pictured as “bad” or controlling. The same
person may talk about algorithms as problematic in the context of one platform but
not seem to have a problem with a similar mechanism on another platform. Some
algorithms are regarded as useful, while others are seen as obtrusive. For
example, while Nora is worried about the Facebook algorithm controlling too
much of what she sees, she says that, “with regards to Google, I think that is an
algorithm I would actually place my faith in.”43 When I probe her about her
reasons for having faith in Google but not in Facebook, she says she is
struggling to find an exact reason. Perhaps, it is because Google is search, she
speculates, or because Facebook somehow
feels much more intrusive because the things they are intruding on is really
personal things that really could mess someone up [. . .] Whereas Google,
it is just a search engine. When you go to Google you already have an
idea of what to find.44
[T]his simple algorithm that Twitter has is that it shows you briefly that,
while you’ve been away, these are the major things that has happened in
the feed [. . .] I think the Twitter algorithm is very nice, because it is very
simple, and it doesn’t control that very much, it just helps you a little
bit.45
When asked to expand on what he means by the notion of “helpful algorithms,” Ryan brings up Netflix, his favorite platform. To him, the Netflix algorithm is
like Facebook’s algorithm and Twitter’s algorithm: Like Facebook because it
“shows what is popular right now,” and like Twitter in that it focuses on “your own
customized feed without any other interference” but still tries to help you find
new content “based on what I have watched.” Similarly, Melissa, a 22-year-old
Canadian college student, thinks that Netflix gives you the “greatest control,
because it will pull specifically from what you yourself are doing. They will not pull from what your friends are doing, or what you are looking at.”46 While all of the platforms try to find content that is relevant to users, there is a sense in which it
makes a difference whether the user feels in control by setting the parameters
themselves and the algorithm is merely helping along, as opposed to a situation in
which the algorithm seems to determine relevancy more or less on its own or, at
least, without any obvious explicit input from users. In many cases,
personalization is seen as an asset if the platforms get it right but an intrusion if
the popular seems to get in the way of the personal. Melissa links the demand for
personalized content to her generation’s desire to feel special. This is how she
puts it:
You are always looking for something that makes you special. And that is
one reason why Starbucks’ business model was so successful. Because
giving the cup with your name on it makes you feel like an individual in the
sea of people. And I think the online platforms try to do a similar thing
by offering you things they think you want the most. And it makes you
feel like it is almost like the media platforms are looking out for you.
They try to give you the best experience you can have on that platform.
And some succeed and some not so much. But I find the personalization
much more
gratifying for the user than having the same standard experience as everybody else.47
Just as the platforms themselves struggle to find the right balance between ranking based on a global notion of relevance and/or a more personal model, users also seem to oscillate between a desire for popular content on the one hand and personalized content on the other. More importantly, what these different responses show is that algorithms are not just factual but also normative in the sense that, both explicitly and implicitly, people have an understanding of how things ought to be, that is, how algorithms should perform.
In order to “correct” the algorithm, Ryan says he either stops clicking on anything
for a while or just starts clicking on as many other more-related videos as he can.
Similarly, Rachel, the woman who tweeted about Facebook destroying
friendships, says that:
I find myself “clicking consciously” every day. This is going to sound crazy,
but I remember the one time I accidentally clicked on a former high school
classmate’s profile and cursed under my breath: “damn it, now I’m gonna
see her stuff 24/7.”
Finding oneself caught in a temporary bubble because of the way one clicks and likes seemed to constitute a rather common experience of being in algorithmic spaces. As the comments of Rachel and Ryan suggest, either you take charge of the clicks or the clicks will catch you up in a perpetual feedback loop.
Clicking consciously, however, is not just a defense strategy of sorts. It
emerges quite prominently in a much more proactive sense as well. Take Kate,
a former school teacher who now runs a Facebook page for parents in her
neighborhood. For Kate, it is vital that her posts are seen and circulated as widely
as possible. In order to secure “maximum reach,” as she puts it, Kate takes great care to post “consciously,” which, in her case, means using multiple pictures instead of one, always trying to choose the right words, and picking the right time of the day. As a
page owner, Kate says, “I have completely changed how I share information to
make it work best for the algorithm.”48 In a similar vein, Nora tailors her updates
in specific ways to fit the Facebook algorithm. Like Michael, Nora has been
observing the working of the news feed for a while, taking notes of what seems to
work and what does not. As she puts it, “If I post things and they receive no likes
within the first 5 minutes, or very sparse likes (1–2 in the first minute), then they’ll
drop off and not get many comments or likes at all.” Over the years, Nora has
developed different strategies for making her “posts more frequently recognized”
by the algorithm.49 These strategies include posting at a certain time (“usually
around late evening on a weekday that’s not Friday”), structuring the post in
specific ways, making sure that other people are not shown in her profile pictures
(otherwise, they are “likely to get fewer likes”) and making sure to avoid or
include certain keywords in her updates.
As Gillespie (2016) has usefully pointed out, adapting online behavior to social media platforms and their operational logics can be seen as a form of optimization, whereby content producers make their posts “algorithmically recognizable.” When users wait until a certain day of the week and for a particular time of day, use multiple pictures instead of one, carefully choose their words, and deliberately use positive-sounding phrasing, they are not just strategically updating their social media profiles or hoping to be seen by others. Reminiscent of Gillespie’s (2014) argument about using hashtags for optimizing the algorithm, the personal algorithm stories shared as part of this study suggest that many of the participants are redesigning their expressions in order to be better recognized and distributed by the algorithm. Like Kate, Ryan thinks the underlying logic of social media matters a lot. As he says, “how social media works really, really impacts what I post and how I post it.” For example, posting “life events” on Facebook or using certain hashtags on Twitter and Instagram was seen by some of the participants as a particularly effective means of increasing the probability that the algorithm will make them visible.
Of course, people have always developed strategic efforts to be seen and recognized by information intermediaries. As Gillespie writes, “sending press releases to news organizations, staging events with the visual impact that television craves, or making spokespeople and soundbites available in ways convenient to journalists”
have long been media strategies for visibility (2016: 2). Users have in a sense always
tried to reverse engineer or playfully engage with the seemingly impenetrable logics
of media institutions and powerbrokers. However, in this day and age, the logic
that people are trying to crack is not about human psychology but machine
processing. Part of this task involves figuring out how these platforms function.
For example, posting “life events” on Facebook or using certain hashtags on Twitter
and Instagram were seen by some of the participants as particularly effective means
for increasing the probability of being made visible by the algorithm. According to Zoe, a 23-year-old self-employed Canadian designer, it is not just about knowing which hashtags to use in order to get noticed but also about knowing how different platforms treat and value them in the first place.50 For an up-and-coming designer like Zoe, who depends on reaching an audience (much like Michael), being unable to navigate certain platform conventions could be detrimental to business.
Tumblr and Instagram, for instance, have different hashtag conventions, and mastering them is not just about knowing which hashtags are the most popular. Take a
seemingly small technical detail such as whether or not a platform allows typing
spaces in hashtags. Zoe says it took her a while to figure out that cross-posting
images between the blogging platform Tumblr and Instagram failed to attract
new followers or interest simply because the hashtags would not synchronize in
the right way. While the hashtags worked on Tumblr, they turned into gibberish
on Instagram. Zoe speculates that this may have cost her valuable opportunities
to be noticed due to the algorithm’s unwillingness to feature her profile on other
users’ “explore” section of Instagram.
Using the right hashtags in the right way will increase your chances of becoming one
of those suggested profiles, Zoe says.
Unlike a traditional public relations person or strategic communicator, the majority of people in this study did not tailor their messages for business reasons but often did so for what they thought their friends and family would like to see. Particularly striking was the notion of not wanting to burden people with unnecessary postings, that is, of not wanting to add to their friends’ information overload; and the social norms around accepted sharing behavior were frequently mentioned in this regard. For Ryan, catering to what he thinks his network is interested in seeing goes hand-in-hand with knowing how platforms work. Because he knows that a photo album on Facebook “is much more likely to appear on someone’s feed than a text status update,” he makes sure that the photos he posts are worthy of other people’s attention. He will only share a photo album with “thoughtful, funny captions” because he knows people are probably going to see it, given the algorithm’s perceived preference for photo albums. In a similar vein, Ryan only rarely “likes” things on social media platforms because, as he says, “people notice.” If you “like” too many things, the “likes” become meaningless. So, Ryan tries to “save those for actual important times.” This way, when people see he liked something, it actually means he really enjoyed it. “I save my likes for interesting articles or funny videos that are posted by major commercial pages that already have a million followers or likes,”
But, with Facebook, there are so many things going on there, so I have to
plan out strategically what I am saying, how I am posting it, when I post
it, all that stuff. Just to make sure that everyone I want to see it
actually sees that.51
The notion of strategically planning what to say is just another side of “clicking consciously,” a way of being oriented toward the operational logics of platforms. For Ryan, the problem with Twitter rolling out an algorithmic logic on a much greater scale than before is not so much about the algorithm in and of itself but the consequences it entails for practical engagement with the platform. So, if Twitter were to “start with an algorithm, you really have to start thinking about what you’re posting,” Ryan suggests.
While users often need to play along with the algorithm if they want to get noticed, there are also the inverse efforts of not making oneself “algorithmically recognizable.” Indeed, as the personal algorithm stories attest, avoiding or undermining the algorithm requires people to pay attention to the workings of the platform. Louis, a respondent from the Philippines, says he is deeply fascinated by the Facebook algorithm; yet, he thinks of it as a trap. For Louis, users have various options to counteract the logic of algorithms. Indeed, as Hill argues, “resistance cannot merely be about opting out, but about participating in unpredictable ways” (2012: 121). As Louis sees it, “privacy online does not really exist. So, why not just confuse those who are actually looking at your intimate information? That way, it misleads them.”52 Others, too, reported engaging in activities of data obfuscation, whether explicitly or implicitly. Lena has been trying to “manipulate content” with which she interacts in order to “control the suggestions” Facebook gives her, while Jessa attempted to confuse the algorithm by liking contradictory things.
Whether or not people try to make themselves algorithmically recognizable, the responses suggest that playful learning on the part of users should be
THE ALGORITHMIC IMAGINARY
Algorithms seize the social imaginary through the various affective encounters of which they are part. As the many different personal algorithm stories analyzed in this chapter attest, algorithms are generative of different experiences, moods, and sensations. The different scenes and situations can be understood as forming part of what might be called an algorithmic imaginary—ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations, in turn, make possible. While there is no way of knowing for sure how algorithms work, the personal algorithm stories illuminate how knowing algorithms might involve other kinds of registers than code. Indeed, “stories help to make sense of events, set forth truth claims, define problems, and establish solutions and strategies” (Ainsworth & Hardy, 2012: 1696). Stories account for events and experiences in ways that help actors make sense of what is going on in specific, situated encounters. Personal algorithm stories illuminate what Charles Taylor (2004) has termed “the social imaginary,” understood as the way in which people imagine their social reality. The imaginary, then, is to be understood in a generative and productive sense as something that enables the identification and engagement with one’s lived presence and socio-material surroundings.
However, my use of the imaginary is not about the social imaginary per se. The algorithmic imaginary is not necessarily a form of “common understanding that makes possible common practices” (Taylor, 2004: 23), nor is it necessarily about the ways in which algorithms, for instance, can be generative of an “imagined community” (Anderson, 1983) or a “calculated public” (Gillespie, 2014). While these notions could certainly be part of an “algorithmic imaginary,” I want to use the term here to suggest something slightly different but no less related. I have no reason or desire to claim, for example, that people’s personal algorithm stories are part of a more commonly held cultural belief about algorithms in general. While they might
be shared or overlap in important ways and be part of larger debates and discourses about algorithms in society, they are also imaginaries that emerge out of individual and habitual practices of being in algorithmically mediated spaces. In other parts of the book, I have discussed the notion of programmed sociality as the ways in which software and algorithms support and shape sociality and lived experience that are specific to the architecture and material substrate of the platforms in question. In this sense, algorithms may, indeed, have the power to generate forms of being together that are reminiscent of Benedict Anderson’s notion of imagined communities. As Anderson famously declared, a nation “is an imagined political community—and imagined as both inherently limited and sovereign” (Anderson, 1983: 15). Importantly, as Strauss points out, Anderson emphasized the role of media, particularly newspapers and books, in “creating a reader’s sense of being part of a larger community of other assumed readers” (Strauss, 2006: 329). The algorithmic imaginary that I want to address in this chapter, however, is not a public assembled by the algorithm but the algorithm assembled, in part, by the public. In other words, the algorithmic imaginary emerges in the public’s beliefs, experiences, and expectations of what an algorithm is and should be.54
Using the notion of the imaginary to describe what the encounters between people and algorithms generate is not to suggest that people’s experiences of algorithms are somehow illusionary. Quite the opposite, they are “real.”55 Algorithms are not just abstract computational processes; they also have the power to enact material realities by shaping social life to various degrees (Beer, 2013; Kitchin & Dodge, 2011). When Rachel finds herself “clicking consciously” every day to influence what will subsequently show up in her news feed, the algorithm is not merely an abstract “unreal” thing that she thinks about but something that influences the ways in which she uses Facebook. Similarly, Lucas says his awareness of the Facebook algorithm has affected not just how he posts but also how he responds to others. As Lucas explains:
Lucas’ willingness to go out of his way to like his friends’ posts and enhance their
visibility echoes some of the findings in recent work on social media surveillance.
As Trottier and Lyon (2012) have shown, Facebook users engage in “collaborative identity construction,” augmenting each other’s visibility through practices of tagging, commenting, and liking.
Users’ perceptions of what the algorithm is and how it works shape their orientation toward it. Several of the participants reported having changed their information-sharing behavior “to make it work best for the algorithm,” as Kate put it. The
Conclusion
Everyday encounters with algorithms constitute an important site for analyzing the power and politics of algorithms in a way that does not necessarily assume a top-down or macroscopic view of power. Instead, in this chapter I have focused on the micropolitics of powers that emerge in the lived affects of people as they encounter algorithmic forces in their online surroundings. Micropolitics becomes a useful concept for thinking about what these encounters generate. As events, these situated encounters “might have powerful consequences through the way that [they transform] relations of enablement and constraint” (Bissell, 2016: 397). It is not just that algorithms may constrain or enable people’s capacities to do things—for example, to be seen and heard online, to permit feelings to be felt or to prompt reactions and evaluations about one’s lived environments. The barely perceived transitions that occur in situated encounters are by no means unidirectional; people certainly constrain or enable the algorithm’s capacity to do things as well.
The algorithmic body—particularly, the one driven by machine learning—is characterized by its capacity to change as a result of an encounter. Just as people learn to navigate their whereabouts, algorithms learn and adapt to their surroundings as well. While we might not be able to ask the algorithm about its capacity to affect and be affected, its outcomes can certainly be read as traces of how it evaluates its encounters with the social world. It is important to understand the affective encounters that people have with algorithms not as isolated events that are somehow outside the algorithm but as part of the same mode of existence. When people experience the Facebook algorithm as controlling, it is not that the actual news feed algorithm is powerful in the restrictive sense. In its perceived restrictive functionality, the algorithm also becomes generative of various evaluations, sensations, and moods.
If power is to be understood relationally, then we must think about the encounters between algorithms and people as both adversarial and supportive. Power, Foucault reminds us, “traverses and produces things, it induces pleasure, forms of knowledge, produces discourse. It needs to be considered as a productive network, which runs through the whole social body, much more than as a negative instance whose function is repression” (1980: 119). More than anything, then, power is a
“People are beginning to believe what they want.” Such was the explanation given when the Macquarie Dictionary named “fake news” its word of the year for 2016. Similarly, the Oxford English Dictionary named “post-truth” word of the year 2016, suggesting: “Objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.” Questions about the status of truth, news, and facts have acquired renewed prominence in the wake of the complex political developments of 2016, particularly with regard to the possible consequences of a misinformed public in the aftermath of the US presidential election. But the arrival of “fake news” and “post-truth” into the popular vocabulary does not merely describe the notion that beliefs and emotions trump facts and truth.1 These concepts are symptomatic of a time when algorithms have emerged as a particularly prominent matter of concern in public discourse. When debates about fake news and whether we live in a post-truth era emerged, they were already part of a much longer and ongoing discussion about the powerful role that social media platforms have in circulating news and information.
In 2016 alone, Facebook was involved in at least three major public controversies surrounding its alleged editorial responsibility. I briefly discussed the first controversy, which centered on the human role of editing what was believed to be a fully automated process for determining trending news, in chapter 3. When the news about Facebook employing human editors to curate its “Trending Topics” section first broke in May 2016, the public response was one of outrage. After all, the algorithm was believed to be neutral, free from human intervention and subjective bias. Through these controversies, among other things, the world learned that algorithms do not function apart from humans. Part of the controversy centered on a belief that humans were interfering with the supposed neutrality of the algorithm. The matter was made worse when it turned out that the humans involved held left-leaning political inclinations. The story became one in which politically biased humans deliberately edited out conservative news, preventing it from making it into the Trending Topics section.
and politics have to do with the circumstances of making them matter in specific ways and for specific purposes. In this chapter I continue this discussion by examining how algorithms are “problematized” in the current media landscape. In writing about the history of sexuality, Foucault described how sexual behavior was “problematized, becoming an object of concern, an element for reflection, and a material for stylization” (2012: 23–24). Similarly, this chapter takes the news industry as a case in point to address the circumstances under which algorithms are rendered more or less problematic, and how their power and politics can be understood by examining the conditions under which algorithms become objects of concern.
As we will see, the news industry constitutes a place in which the emergence of computation, machine learning, and data science has wide-reaching and persistent consequences. Although computerization has long been part of the news industry, the use of algorithms as an integral part of news work is a relatively recent phenomenon in this context. As I claimed in chapter 2, algorithms do not exist in a vacuum. Nor can their power and politics simply be framed in a top-down manner, for example, as something exercising power over another actor or object. Algorithms are always built and embedded into the lived world, at the level of institutional practice, individual behavior, and human experience. In chapters 2 and 3 I referred to the notion of ontological politics to denote how reality is never a given, but rather shaped and emerging through particular interactions. The purpose of this chapter is to provide more empirical evidence for the claim that algorithms are imbued with such ontological politics.
So far in this book, I have shown that algorithms matter in a variety of ways: in their capacity to govern participation on platforms, distribute information flow, embed values in design, reflect existing societal biases and help reinforce them by means of automation and feedback loops, and in their powers to make people feel and act in specific ways. As I have argued throughout, understanding the ontological politics of algorithms cannot simply be done by looking at the materiality of code, as if formulas have a deterministic force or causal relationship to what they are supposed to compute. As for the notion that we need to take the materiality of technology seriously, I already take this for granted.2 While algorithms can be said to possess what Jane Bennett calls “thing-power” (2004: 348), a capacity to assert themselves that cannot be reduced to human subjectivity, here I want to focus on the specific circumstances of their material-discursive enactment—or mattering—in the context of the news industry.
How exactly are algorithms enacted to shape the realities of contemporary journalism, and when do algorithms come to matter in the first place? To speak of matter, “mattering” (Law, 2004b), and “material-discursive practices” (Barad, 2003; Orlikowski & Scott, 2015) is to be interested in the ways in which algorithms are materialized in different manners, made out and carved into existence at the intersection of technology, institutional practice, and discourse. Drawing on the process-relational framework that was introduced in chapter 3, what I am interested in
here are the processes through which algorithms are demarcated as an entity and made separate from other things. What an algorithm does and the impact it may have is not tied to its materiality alone, but also to the ways in which it enters discursively into accounts, and makes things happen as part of situated practices. This eventfulness of algorithms is thus inextricably linked to the question of when algorithms matter, understood as the ways in which the algorithm emerges as an object of knowledge and concern, defining and being defined by those who are touched by it. Matter and mattering are key terms in this regard, describing an ongoing process of becoming, whereby different actors and elements are brought together and aligned in ways that shape certain realities (Barad, 2003; Mol, 2002). To understand the realities that have allowed controversies surrounding “fake news,” Facebook’s trending topics, and its censorship of an iconic photograph to flourish, we need to first understand how algorithms come to matter at the intersection of journalism, social media platforms, and institutional practice. When, then, do algorithms matter, and for whom do they matter?
Hansen’s worry is that the world’s most powerful editor, the algorithm, and the people controlling it—Mark Zuckerberg and his associates—do not operate according to the same principles and professional ethics as the “independent press.” This worry concerns the power struggle over editorial responsibility, and the question of who should have a right to claim it. On the one side, we have the independent press, whose task it is to inform the public and expose people to different views. On the other side, we find the curated filter bubbles created by social media algorithms
that relentlessly “trap users in a world of news that only affirms their beliefs” (Sullivan, 2016). However, this reduces a complex issue to a false dichotomy of being either for or against the algorithm. Facebook does not connect users and provide them a platform due to an altruistic, benevolent motivation. Nor do Aftenposten and the news industry rally against Facebook because of mere anger over the removal of a photograph. And even though he partly construes such a black-and-white narrative himself, Hansen says in the interview that the problem is not the algorithm per se: “I of course use algorithms myself, and my ambitions are to use them on a much higher level than we do today.”
While algorithms in many ways have become an entrenched part of the logic and functioning of today’s news media, their status and purpose are still very much up for debate and contestation. In this chapter I look more closely at the content and circumstance of these debates by considering how key actors in the news media, such as Hansen, make sense of and negotiate algorithms in the institutional context of journalism. To understand the power and politics of algorithms in the contemporary media landscape, I argue, it is vital to see algorithms as neither given nor stable objects, but rather made and unmade in material-discursive practices. Sometimes algorithms are rendered important, while at other times they are deemed insignificant. Knowing more about processes of mattering, I suggest, enables us to understand the multiple realities of algorithms, and how these relate and coexist.
Although the many recent controversies surrounding algorithms certainly call into question what algorithms do to the current informational landscape, what news is and ought to be, and what it means to have editorial responsibility, algorithms are already an integral part of journalism and journalistic practices. As is evident from much of the current literature on the intersection of technology and journalism (e.g., Anderson, 2013; Coddington, 2015; Diakopoulos, 2015; Dörr, 2016; Lewis, 2015), algorithms, robots, and data analytics are now part of the standard repertoire of how news outlets organize, produce, and disseminate new content.3 However, to posit these changes and developments as new forces changing journalism is to miss the point. From the telegraph and the typewriter to databases, spreadsheets, and the personal computer, various forms of technology have for centuries been used to support and augment news work (Boczkowski, 2004; Pavlik, 2000; Weiss & Domingo, 2010). Indeed, computers have been part of journalistic practice at least since the mainframe computers of the 1960s, which also gave rise to the tradition of computer-assisted reporting (CAR). In his seminal book Precision Journalism, Philip Meyer ([1973] 2002) describes how computers and statistics would be the most effective tools by which journalists could strengthen journalistic objectivity and transform journalism more broadly.4 In order to fulfill the “real” goal of journalism, which in Meyer’s view was to reveal social injustices, journalists would have to work with statistical software packages, database programs, and spreadsheets to find stories in the data (Meyer, 2002: 86; Parasie & Dagiral, 2013).
Today, these ideas are no longer mere predictions of a distant future. The computerization and algorithmization of news and news work have increasingly become the norm. While many of the core tools identified by Meyer remain the same, what is novel is the scale and extent to which computational tools and processes are incorporated into news work.
If the digitization of news once meant making analogue content digitally available, it now signifies a series of processes that go far beyond simple online newspaper distribution. The digitization and computerization of news can be seen at every stage of the production cycle: in how journalists use computational tools to collect and produce news stories, in the way systematized tagging and metadata practices are used to archive content, in the implementation of digital paywalls to ensure revenue and access to valuable user data, and in the algorithmic presentation and curation of news.
Let’s at this point recall the common understanding of an algorithm as a set of instructions for solving a problem. What are the problems that algorithms are supposed to help solve in this context? One obvious problem for journalism is the “challenge to the economic viability of newspapers triggered by the digital revolution in publishing and news distribution” (Alexander, 2015: 10). Sociologist Jeffrey Alexander observes how many “leading journalistic institutions in the West have experienced great economic upheaval, cutting staff, and undergoing deep, often radical reorganization—in efforts to meet the digital challenge” (2015: 10). These cuts and reorganizations have also been felt acutely in the Nordic context, where many newspapers, including Aftenposten, are currently experiencing a dramatic reduction in staff—primarily those working with the print edition. As Alexander notes, the cultural mantra “information wants to be free” that came along with the Internet has become an economic headache for journalism today.5 Declining advertising and the rise of blogs and social media platforms as fierce market competitors have contributed significantly to the economic crisis in journalism in recent years. Rasmus Kleis Nielsen notes how over “half of all digital advertising revenues globally go to Google, Facebook” (2016: 86), and the stigma that was once attached to paywalls continues to be an obstacle for their economic and cultural feasibility (Alexander et al., 2016).
The rise of computational journalism, here broadly understood as news entangled with algorithmic practices, comes at a time of crisis not only for the newspaper industry and journalism in general, but also in terms of broader social and economic unrest. Not unlike the rhetoric surrounding the rise of CAR in the 1960s, the discourses of computational and algorithmic journalism are characterized by a continuum of utopian and dystopian visions. Algorithms, data science, and everything that comes with them are distinguished by a great degree of techno-optimism and promise, but also marked by a notable skepticism and worries about machines taking over jobs. On the optimistic side, computational journalism is talked about as a way of upgrading and equipping journalism for the demands of the 21st century
(Hamilton & Turner, 2009; Cohen et al., 2011). As Lewis and Usher note, “many argue that computational journalism will both lead to better investigative journalism and create new forms of engagement with audiences” (2013: 606). The sentiment here is that journalism needs “to adapt to technological and economic changes in order to survive” (Peters & Broersma, 2013: 2). More than just a celebration of technical possibilities, many see these digital transformations as a changing condition of news to which one simply needs to adapt in order to stay in the game.
But as with all technical change and innovation, discussions surrounding the computerization of newsrooms and the digitization of news have typically been accompanied by worries about the changing status quo. The emergence of social media, for example, fostered discussions around the potential replacement of journalists and other news workers by bloggers (Lowrey, 2006; Matheson, 2004; Rosen, 2005) or non-professionals engaging in citizen journalism (Goode, 2009; Rosenberry & St. John, 2010). These debates and worries have been replaced by similar discussions and worries about the rise of so-called robot journalism (Clerwall, 2014). Companies such as Narrative Science or Automated Insights, which train computers to write news stories using automated algorithmic processes, have spurred much discussion about the sustainability of journalism as an enduring profession and the vulnerability of the human work force in the age of the purportedly “smart machine” (Carlson, 2015; van Dalen, 2012). Without committing to news robots being either a threat or a salvation, it is clear from the now burgeoning literature on what Anderson calls the “shaggy, emerging beast” of computational journalism (2013: 1017; see also Rodgers, 2015: 13) that algorithms are here to stay. The question, then, is not whether algorithms play an important part in journalism and news work, but in what way this role is playing out in practice, how it is accounted for and made relevant, and when it, indeed, comes to matter to specific actors in a given setting.
Beginning with the premise that discrete units of analysis are not given but
made, we need to ask how any object of analysis—human or nonhuman
or combination of the two—is called out as separate from the more
extended networks of which it is a part. (Suchman, 2007: 283)
Finally, there are two things to note with regard to this being a Scandinavian case study, as opposed to an investigation with a more global focus. In the way in which leading staff from select Nordic news organizations describe the coming together of journalism and algorithms, I believe that these interviewees’ strategic roles and, in many cases, their positions as experts beyond the Scandinavian context, provide a valuable and convincing account of how, where, and when algorithms have come to matter.
Also, in emphasizing the notion of enactment, I reject the notion of context as an explanatory factor. This may seem somewhat counterintuitive at first, given that I have been talking about “the particular context” of news media as my starting point. However, as Steve Woolgar and Javier Lezaun point out, there is a crucial difference between what ethnomethodologists call “context of action” and “context in action”—“between context as an explanatory resource available exclusively to the analyst, and the context as an emergent property of interaction available to its participants” (2013: 324). Although algorithms need to be understood in context (in this case journalism), the meaning of algorithms cannot be reduced to context either. Just as news media professionals’ talk about algorithms cannot be understood separately from the context of journalism, journalism itself needs to be understood as the contingent upshot of material-discursive practices.9 The interviews provide rich and important data about journalism’s current condition, especially as it relates to the Scandinavian countries and to algorithms. Yet, this chapter is not about journalism or specific news organizations per se. To invoke the way I put it at the end of chapter 2: “Yes, journalism. Yes, Scandinavia. Yes, technology. Yes, economic challenges. Yes, social media platforms. Yes, algorithms. Yes, to all of the above, and no one in particular.” The story is more about the relationship between the elements and the new realities that emanate from that, rather than any one of these in isolation.
news, by way of making sure the right content reaches the right persons. When algorithms are articulated as having potential, what is generally emphasized are the various possibilities and promises that they seem to entail. For many of the interviewees, algorithms promise to reduce the cost of production, make it more efficient, automate tedious and repetitive tasks, provide valuable insights into users and readers, help predict future preferences, help find links and correlations in the data, and present the news in new, personalized ways.
Interestingly, there seems to have been progress over time. When I first contacted people in the news media to talk about algorithms and their role in journalism, they seemed less articulate about the potential of algorithms than they do today. While the early days of new technologies are always accompanied by a certain level of skepticism, particularly in fields ingrained with strong professional values and cultures such as journalism, the interviews conducted in 2016 suggest a degree of normalization with regard to algorithms in news. This gradual normalization of algorithms is also reflected in the rate at which journalism conferences, industry seminars, panel discussions, and interest organizations put algorithms on the critical agenda. At the same time, news media professionals seem less apprehensive about algorithms than they were only a few years ago. As Hans Martin Cramer, former product developer at Aftenposten and now product manager at Schibsted Media Platform, noted when I interviewed him for the second time in August 2016, the algorithms themselves are not a sufficient explanation for this change in attitude. What changed since we first spoke in early 2014 was not so much the actual implementation of algorithms in the news (although things had certainly been happening) but, rather, the gradual normalization of discourses surrounding algorithms.10 “In addition,” he pointed out, “we are faced with a much more challenging economic situation, compared to just two years ago” (Interview 1, August 2016). As Cramer sees it, the media industry needs to move beyond the question of whether algorithms or robots will diminish or save the news. What is at stake is whether the new, emerging forms of journalism can be made sustainable from an economic point of view. Despite a great deal of talk about algorithms, the last few years have shown, Cramer suggested, that there are no easy solutions for developing algorithmic news production and dissemination: “it is either too complicated or too challenging.”
Yet, Scandinavian news outlets have long been at the forefront of digital innovation. Many of the managers and key staffers I talked to work for some of Scandinavia’s largest news organizations and publishing houses, including Schibsted, Polaris Media, and JP/Politikens Hus. As described by the Reuters Institute for the Study of Journalism in their annual Digital News Report 2016, these news outlets have a reputation for digital innovation in content and business models. Or, as the Columbia Journalism Review recently wrote: “Leave it to the Scandinavians to be open to change and innovation, even in the ever-traditional world of newspaper journalism” (Garcia, 2017). For Espen Sundve, vice president of product management at Schibsted, news organizations essentially find themselves at an important crossroads:
Either they make themselves even more dependent on platforms like Facebook “that dictate the editorial and business rules without claiming any editorial and financial accountability for independent journalism,” or they need to reinvent themselves by dictating the terms of what and whom algorithms are being used to serve (Sundve, 2017).
For Sundve, the algorithmic potential lies in making journalism more personal, which is not to be mistaken for what we usually think of in terms of personalization. In my interview with him, he worries that, by using a concept such as personalization, which primarily belongs to the world of technology, one may too easily lose sight of what is important from a journalistic point of view. Though “journalism can be a lot of things [. . .] algorithms are primarily there to complement the human understanding of it,” Sundve says (Interview 7, May 2016). A central theme in Sundve’s discussion of journalism and the role of algorithms is the fundamental difference between news organizations and technology companies such as Facebook and Google. He likes to compare journalism and algorithms by talking about both of them as optimization tasks. It helps, he says, when talking to programmers in the newsrooms to speak about journalism in these terms. Essentially, journalism is about “closing the gap between what people know and should know but also want to know,” he adds. In principle, this is an optimization task that can be solved both technically and editorially, but “What distinguishes us from a technology company is that we include that which we think you should know” (Interview 7, May 2016, emphasis added). As with Hansen and his critique of Zuckerberg’s “Californian code,” Sundve’s description of the difference between journalism and technology companies is essentially about legitimizing and sustaining the role of news media as powerful channels of information and communication. Where social media platforms fail, journalism succeeds.
Faced with the algorithm’s arrival, news media professionals have a choice: either develop a proactive stance, or reactively adapt to the new technological landscape. As is evident in most of the interviews, simply dismissing algorithms is no longer an option. As we will see later in this chapter, one of the most pervasive storylines when it comes to algorithms in news concerns their potential threat to the informed democratic public. Yet, there are limits to how sustainable these arguments about so-called filter bubbles and echo chambers are (a topic to which we will return shortly), as the news organizations themselves are increasingly using algorithms to present the news. The challenge, therefore, is to balance critique with actual practice, to make algorithms good, as it were, and to engage in practices of making them matter in the right way. In talking to different news media professionals, it became clear how the algorithm in part derives its value from what is already valued. That is, algorithms are judged, made sense of, and explained with reference to existing journalistic values and professional ethics: algorithms and objectivity, algorithms and newsworthiness, algorithms and informed citizens, and the list goes on.
METRIC POWER
As with any commercial business, what is of course greatly valued is profit. The algorithmic potential is less about the specific code than it is about the economic promise that algorithms provide, and their perceived capacity to act on the data in new ways. If algorithms promise to reduce production and circulation costs, there might not be much to argue against. As Stig Jakobsen, editor-in-chief at the local Norwegian newspaper iTromsø, told me when I interviewed him in May 2016, the choice of implementing an algorithm to personalize the “mobile-front” of the newspaper was rather easy: “If you find something that gives you 28% more traffic, as we have done, all you can do is say thank you” (Interview 2, May 2016).11 We may call this the pragmatic route. Here quantification and the measurement of traffic are the new language of success, readily supported by algorithms that allow for seemingly objective decision making. Increased access to user data implies new insights and new possibilities for action. Jakobsen, who came to the local newspaper in mid-2015 after having spent much of his career in the magazine business as founder and editor-in-chief of some of the biggest Scandinavian lifestyle magazines, says real-time analytics and algorithms promise to support more informed journalistic decisions. In the magazine business, as in journalism more generally, decisions were often taken purely on the basis of a “gut feeling.” In times of economic hardship, however, a “gut feeling” is too risky, while metrics and numbers can potentially save you many unnecessary missteps. Though Jakobsen is enthusiastic about the seemingly objective promise of algorithms to deliver more relevant news to the right readers, he is careful not to ascribe too much power to algorithms as such. The algorithm is merely a tool that can be used to make sure the right news reaches the right person, Jakobsen suggests: “If you are not interested in Justin Bieber, the algorithm will not recommend that article for you” (Interview 2, May 2016).
The power of metrics to influence editorial decisions, however, is not new (Anderson, 2011; Domingo, 2008; Vu, 2014). As journalism scholar Chris W. Anderson (2011) describes, based on his ethnographic research in Philadelphia newsrooms, new measurement and web traffic tools have contributed to the emergence of a “generative, creative audience.” Anderson found that “Editors were clearly adjusting their understanding of ‘what counted’ as a good news story based on the quantified behavior of website readership” (2011: 562). For Jens Nicolaisen, digital director at the Danish newspaper Jyllands-Posten, the extent to which big data and new computational tools provide actionable insights is one of the most promising avenues for the news business. As Nicolaisen sees it, the widespread use of analytics and metrics inside newsrooms has resulted in something of a power shift in journalism more broadly. Previously, the news was largely produced from the inside out, he says, without too much thought dedicated to what readers actually wanted to read. This lack of journalistic responsiveness to audience desires was most famously observed by Herbert Gans in his seminal work Deciding What’s News (1979). Gans notes that he was surprised to find how little knowledge about the actual audience
existed within newsrooms: “Although they had a vague image of the audience, they paid little attention to it” (Gans, 1979: 229). Today, however, Nicolaisen observes, value creation increasingly starts from the outside—with what users actually click on, what they say they are interested in, and what they share and talk about in social media (Interview 3, February 2015).
Ingeborg Volan, then director of innovation and newsroom development at Norway’s oldest newspaper Adresseavisen, shares this sentiment, suggesting that the landscape of journalism has changed profoundly with regard to digital media. Traditional newspapers were constrained by their technological affordances: “You simply could not print personalized newspapers for every subscriber,” says Volan. These technological constraints are now largely gone, and with them, an obvious rationale for producing the same paper for everyone. After all, Volan says, “Not all of our readers are alike. People’s interests vary a lot, and they read different things” (Interview 4, August 2016). This does not mean, however, that Volan thinks journalism should simply cater to people’s tastes and likes. “I don’t think anyone would want that,” she says when I ask her to elaborate on the power of users and readers to shape the news. It is not that power has shifted from editors to users—or to algorithms and data analytics, for that matter. Although algorithms and social media platforms certainly challenge the privileged position news media organizations have historically occupied, they also provide a renewed opportunity for reaching people. As Volan sees it, it is not an either/or, but rather a question of careful realignment.
BETTER WORKFLOWS
The interviews also suggest that the potential of algorithms is strongly linked to their role in organizing news work, particularly with regard to being perceived as strategic change agents. From a managerial point of view, algorithms offer an opportunity for speeding up the production cycle, making journalistic practices more efficient and news more consumer-oriented. It has, for example, become a fairly common practice for journalists to track their performance by monitoring how well their articles are doing in terms of readership and distribution. At the Danish newspaper Politiken, journalists and front desk editors are equipped with dashboards that help monitor and analyze how well specific articles and stories are doing. There are a few staple software systems that most organizations tend to use, including Chartbeat for tracking user behavior on site in real time, and Comscore or Google Analytics for analyzing those data. Often elements from these “off-the-shelf” products are combined and adapted to the specific needs and interests of the organization in custom-made software solutions. As Anders Emil Møller, then director of digital development at Politiken, explains, dashboards and analytics software are important working tools that provide journalists with a better understanding of their work processes. These tools may even give them a sense of success and agency, given that
they are able to better understand when an article is doing particularly well or what they may have to change to increase performance. A system like Chartbeat is used to monitor how much time a reader spends on a particular article and where he or she might move next. In this sense, software systems and dashboards become epistemic tools, enabling an understanding of journalists’ own performance, or the behavior and interests of readers. In addition, they are used to realize the expectations that exist at the managerial level about what constitutes good news work. When asked whether the many different metrics and tracking mechanisms might possibly drain journalists in their work, Møller says these systems are merely there to help and support journalists to do the best they can (Interview 5, March 2014). Møller suggests that these systems may even support journalists in promoting their own work. For example, tracking systems are able to give journalists leverage to argue for a more prominent position on the front page, based on the metrics provided. In this sense, the dashboards and the metrics supplied by different analytic tools become imbued with discursive power used to negotiate with the front desk.
Yet, algorithms are primarily directed at the consumer, not so much at practitioners, says Aske Johan Koppel Stræde, the new director of digital development at Politiken, when I interview him in September 2016. This is where Stræde locates the algorithmic potential: in helping journalists find, contextualize, and frame the news in innovative ways. Rather than setting the agenda, algorithms can be used to provide journalists with new tools to “make it easier to set the agenda,” Stræde suggests. Whether these tools mine Twitter for important topics or track what people are talking about on Facebook, algorithms can be used productively to support journalistic practices, many of the informants suggest. During the interviews, I was struck by how it appeared easier to talk about algorithms in a positive light when they were invoked as human helpers, rather than as automated gatekeepers of sorts. Understood as less confrontational, invoking algorithms as tools for making the editorial workflow more efficient can be a way of saying yes to algorithms without compromising on the democratic function of journalism. As we will see in the next section, the conversations around algorithms tend to shift when they are more directly linked to the values and institutional role of journalism. Journalists often see themselves as using algorithms to serve publics with better news, and the managers tend to defend this public function of journalism as well.
While algorithms are variously regarded as helpful tools that support news workers in accomplishing their tasks and making the production cycle more efficient, they are not merely supporting journalists but also slowly changing the very condition of news work itself. Algorithms help to make journalism in new ways, by creating new genres, practices, and understandings of what news and news work are, and what they ought to be. As described in the burgeoning literature on digital and algorithmic journalism, algorithms are now used to find relevant stories on social media platforms (Thurman et al., 2016), create catchy headlines (van Dalen, 2012), use data sets to generate stories (Karlsen & Stavelin, 2014), and aggregate and recommend
news stories (Just & Latzer, 2017). In many of the news organizations studied, large investments go into experimenting with new computational tools to make news work more effective. At Schibsted, Cramer explains, one problem is that journalists often write in a form that is too long, too complicated, and too intricate, without really taking the user’s perspective into consideration (Interview 1, August 2016). There’s potential, as Cramer sees it, in the way algorithms can be used to “atomize” the news. Such an approach, which is also known as “structured journalism,” works by treating every bit of content as a separate block of data that can be reassembled into a new form, depending on what a specific reader already knows.12 As Cramer points out, by simply treating every sentence, word, or paragraph as a separate “atom,” newsrooms may use the power of algorithms and databases to produce news stories “on the go” and avoid unnecessary overlaps. Perhaps it is time to rethink the genre of the traditional news article itself, Cramer speculates, referring to a study conducted by Norway’s leading tabloid and most-read newspaper, VG, which showed an overwhelming overlap in the content produced by their journalists on a daily basis. More than simply writing articles like journalists have always done, atomizing the news essentially implies a personalization of the storytelling process itself, a process that is fundamentally powered by algorithms and structured data.
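To make the idea of atomization a little more concrete, the following is a minimal, purely illustrative sketch of how such “atoms” of structured content might be stored and reassembled for an individual reader. It is not drawn from Schibsted’s actual systems; every name, field, and ordering rule here is a hypothetical assumption.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """One reusable unit of content in a hypothetical structured-journalism setup."""
    atom_id: str
    text: str
    topic: str
    kind: str  # e.g., "background", "update", "quote" (assumed labels)

def assemble_story(atoms, topic, already_seen):
    """Reassemble a story for one reader: skip atoms the reader has already seen,
    and place fresh updates before background material."""
    relevant = [a for a in atoms if a.topic == topic and a.atom_id not in already_seen]
    ordered = sorted(relevant, key=lambda a: 0 if a.kind == "update" else 1)
    return "\n\n".join(a.text for a in ordered)

# Example: a returning reader who already knows the background only gets the update.
atoms = [
    Atom("bg-1", "The merger talks began in March.", "merger", "background"),
    Atom("up-7", "The deal was approved by regulators today.", "merger", "update"),
]
print(assemble_story(atoms, topic="merger", already_seen={"bg-1"}))
```

The point of the sketch is simply that, once content is stored as discrete, tagged atoms rather than finished articles, the assembly of a story can be delegated to a routine that takes the individual reader into account.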
For Sundve, whose rhetorical mission at Schibsted seems in part to be about establishing “the true purpose of journalism” vis-à-vis technology companies, atomization, however, represents much more than a new computational technique. In the ongoing battle for attention, revenue, and control over informational flows, content is king. If what social media platforms do is merely to distribute content, creating original and personal content may just be the news media industry’s “greatest weapon in the fight for relevance” (Sundve, 2017). More important, perhaps, is the editorial accountability that comes with creating journalistic content. In order to succeed in keeping a distinct and accountable editorial voice, Sundve thinks it is crucial to develop new ways in which algorithms and people work together (Interview 7, May 2016).
To this end, it might be instructive to turn our attention to the Schibsted-owned Swedish mobile news app Omni as an example of the algorithmic potential found in organizing the editorial workflow in new ways. Relying on a combination of developers, editors, and a news-ranking algorithm, Omni seeks to provide users with what editor-in-chief Markus Gustafsson describes as a “totally new way of experiencing news” (Interview 8, April 2014). Gustafsson, who came from the position of managing editor at Sweden’s leading tabloid newspaper Aftonbladet before co-founding Omni with Ian Vännman, says it all started with a mission to create something like the Swedish version of the Huffington Post. But instead of copying an existing news service, Omni developed into a unique service of its own. Rather than creating original content, Omni aggregates news from other sites and provides manually written summaries of the news while linking to the original sources. Readers may also follow certain topics and personalize the mix of news. At the time of my visit to
the Omni headquarters in Stockholm in March 2015, about ten editors worked in
shifts to handpick the news. I spent a week on site, observing the editors' day-to-day work, conducting informal interviews with some of them, and also
conducting more formal interviews with designers, developers, and editors
connected to Omni and the Schibsted media house (Interviews 9–16). The editors,
who all come from a background as reporters and journalists before starting at Omni,
do not write their own stories or work like reporters in a traditional sense. Instead,
the work involves picking interesting news items from a content delivery system
in order to make them "algorithm-ready." That is, editors summarize existing stories
using a custom-built content management system—for example, by giving the story a
new headline, providing a short written summary, linking it up to the original
sources, and finding a suitable image to go with it.
A key part of making a story algorithm-ready (which, in this case, refers to preparing the news items in such a way that they can be processed and,
subsequently, ranked by an algorithm) is setting news and lifetime values for
each story. Before editors can publish the edited stories, they have to assign
each and every one a unique news value on a 5-point scale and a lifetime
value on a 3-point scale. As Gustafsson explains, these values act as tokens of
relevance and newsworthiness, and are subsequently fed into an algorithm that,
ultimately, “decides exactly where and how to show the news” (Interview 8, April
2014). The algorithm ranks every story based on a combination of news and
lifetime value, the reader’s personalized settings, time decay, and the relative
weights predefined for each major news category (i.e., foreign news is weighted slightly higher than sports). The news and lifetime values that editors assign,
Gustafsson suggests, are “subjective values” based on journalistic know-how and gut
feeling (Interview 9, March 2015). The algorithm’s job, then, is to process the
subjective values in an objective and predictable way. While the news-ranking
algorithm is rather "banal and simple," and the human efforts involved are
“crucial and decisive,” Gustafsson is clear when he suggests that they “couldn’t
have done it without the algorithm,” as “it does, in fact, influence quite a lot”
(Interview 9, March 2015, and notes from field observations). There’s algorithmic
potential in providing a better user experience but, perhaps, “first and foremost
in making the work flow more efficient and time-saving” (Interview 9, March
2015).
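Gustafsson does not spell out the formula itself, but the ingredients he mentions (editor-set news values, lifetime values, readers' settings, time decay, and section weights) can be combined in a toy scoring function. The sketch below is mine, not Omni's: the specific weights, the exponential decay, and the way the signals are multiplied together are assumptions made purely for illustration.

    import math
    import time

    # Toy front-page scoring function built from the signals the Omni editors
    # describe: an editor-set news value (1-5), a lifetime value (1-3) that governs
    # time decay, a per-section weight, and the reader's own settings. The numbers
    # and the formula are assumptions for illustration, not Omni's actual algorithm.

    SECTION_WEIGHTS = {"foreign": 1.2, "domestic": 1.0, "sports": 0.8}
    LIFETIME_HOURS = {1: 3, 2: 12, 3: 48}  # how long each lifetime value keeps a story fresh

    def score(story, reader_boosts, now=None):
        now = now or time.time()
        age_hours = (now - story["published"]) / 3600.0
        decay = math.exp(-age_hours / LIFETIME_HOURS[story["lifetime_value"]])
        boost = reader_boosts.get(story["section"], 1.0)  # reader's personalized settings
        return story["news_value"] * SECTION_WEIGHTS[story["section"]] * decay * boost

    stories = [
        {"title": "Election results", "news_value": 5, "lifetime_value": 3,
         "section": "foreign", "published": time.time() - 2 * 3600},
        {"title": "Cup final recap", "news_value": 4, "lifetime_value": 1,
         "section": "sports", "published": time.time() - 6 * 3600},
    ]

    reader_boosts = {"sports": 1.5}  # this reader follows sports
    for story in sorted(stories, key=lambda s: score(s, reader_boosts), reverse=True):
        print(round(score(story, reader_boosts), 2), story["title"])

The point of the sketch is not the numbers but the division of labor it encodes: editors supply the "subjective" values, while the algorithm applies them in a predictable and repeatable way.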
The interviews reveal how algorithms are deemed particularly problematic when
they threaten to diminish the journalistic mission of ensuring an informed public.
Moreover, algorithms become problematic when they threaten to automate too
much—especially when they compromise people’s jobs, or when they become
unruly and can no longer easily be controlled, starting to challenge the fundamental
touchstones of editorial responsibility. Algorithms are not just problematic in terms
of the various concerns and challenges to which they point: They are also conceived
of as problematic at different times and circumstances. The notion of the
problematic algorithm appears, for example, during moments of negotiation and breakdown, or when it becomes permeated with people's worries and
concerns.
Incorporating algorithms as agents in the newsroom is not without its
challenges. Most news executives I talked to describe a variety of obstacles and
trials encountered in the process of adding an algorithm to the mix. When
algorithms are incorporated into the editorial workflow, the process is often
characterized by much debate and tweaking in the beginning, before the algorithm
gradually slips into the background and becomes, at least temporarily, stabilized.
Many news outlets have dedicated teams of programmers and data journalists who
experiment with new tools and techniques on an ongoing basis; however, when
organizations instigate more pivotal and far-reaching technological changes, the
process is often more top-down. A few dedicated people, often digital project
managers and editors, make decisions that are subsequently rolled out into
different sections and parts of the newsroom. First, you need to figure out what
you want the algorithm to do. While some news organizations spend considerable
time on concept development, others, such as iTromsø, release the code to see
what happens. Even in cases where algorithms are released more quickly, the
process of implementation implies subjecting the algorithm to a series of trials
and questions. As Jakobsen says of the process at iTromsø, “first, we had to ask
ourselves, should we ask our readers for permission” and to what degree should the
algorithm be opt-in or opt-out? (Interview 2, May 2016). When implementing an
algorithmically generated newsfeed for the first time, they decided not to ask their
readers for permission explicitly but rather to “just go ahead and ask for forgiveness
later.” When I asked Jakobsen how often they think about or tweak the algorithm
now, he replied, “not so much anymore.” In the beginning, “we tweaked it every day”
until it became more stable and less problematic, he says. While nobody seemed to
complain about the opt-in solution, there were other issues that had to be dealt with,
Jakobsen adds. It proved increasingly difficult to find the same article twice because
the algorithm was constantly showing new items and personalizing content. To
combat the disappearing archive that comes with personalization, iTromsø
implemented a section called “my reading history,” where you could easily find
old articles again. They also implemented an opt-out button at the bottom of the
mobile interface to make it easier for readers to go back to a non-personalized
feed. One of the things that is far more difficult to handle algorithmically,
however, is knowing when to show certain content. In setting the parameters for
personalization, having
a sense of “right time” seems crucial (see also chapter 4). Like the annoyingly
reappearing Lisbon hotel ads that plagued my Facebook news feed in the
introductory chapter, Jakobsen uses a similar example to explain how an
algorithm can provide stupid output. In order to prevent the algorithm from "showing you hotels in Ljubljana 14 years after you've been there," you need to make sure to give your algorithm a sensible lifetime, Jakobsen says (Interview 2,
May 2016).
The decision to start using an algorithm in the first place is not always as clear-cut
as one might think. Christian Gillinger, who works as head of social media at
Sveriges Radio, says of the process of developing the concept for a socially generated
top list to be shown on the front page of the public service broadcaster's website
that “all of a sudden we discovered that what we were doing was creating an
algorithm" (Interview 17, May 2016). The social media team had been given the
task of finding new ways of promoting socially engaging content. They started
down the obvious route, looking at how well certain programs did in terms of
likes, shares, and comments. However, they quickly realized that the list would merely
reflect and be skewed toward the most popular programs. As Gillinger says, there
is nothing particularly social about merely looking at the number of likes and
shares for a particular program. If you are a small program, say a Kurdish program with 2000 listeners in total, and you still manage to mobilize 1900 people to share or like your posts, then you are really doing a good job in terms of being
social, he adds. Initially, “we didn’t know that what we were developing was an
algorithm, but we knew we could not leave it up to the different news sections” to
define the social; “it had to be done with the help of mathematics.” Because “an
algorithm is just a mathematical formula," the critical task is to decide what
the algorithm is supposed to reflect (Interview 17, May 2016, emphasis
added). For Gillinger, making decisions about what the algorithm is supposed to
reflect is critically dependent on the people involved in actually developing the
algorithmic logic. While algorithms are often accused of foreclosing diversity in
terms of access to content, the diversity of people designing the algorithms is much
less discussed. The more people who think about algorithms and help develop
them inside news organizations, the better, Gillinger argues, suggesting that
algorithms require diversity in the teams that create and are responsible for them.
While the algorithm reveals itself as problematic in the initial phases of development and implementation, it may not always solve problems, and it may also create new ones. Sometimes, algorithms simply personalize too much, or they turn out
not to automate things as much as anticipated. The news-ranking algorithm at the
center of Omni’s editorial infrastructure does not merely support the editors’ work-
flow; the editors also have to support the algorithm in various ways. As the field
of infrastructure studies has taught us, infrastructures require ongoing human
commitment and maintenance work (Bowker et al., 2010; Star & Ruhleder,
1996). Infrastructures can be understood as enabling resources; yet, in order to
function properly, they are dependent on “human elements, such as work practices,
individual
habits, and organizational culture" (Plantin et al., 2016: 4). Initially, the idea of assigning the function of curating the front page at Omni to an algorithm was to reduce
the need for manual labor. As “Frank,” one of the editors working at Omni, explained
to me while I sat next to him observing him at work, the idea is that editors only
have to set the news and lifetime value once based on their journalistic know-how; and, then, the algorithm would take care of the rest (Interview 12, March
2015).13 Despite the relative simplicity of the algorithm at work, it turned out
that not only did the algorithm not eliminate the need for humans in the workplace, it
also, somewhat paradoxically, seemed to put new tasks on the table. As was
described earlier, part of the work that editors do at Omni is to assign news and
lifetime values to each news story, so that the news-ranking algorithm is able to
process it accordingly. Based on these values and a few other signals (i.e., user’s
explicit news settings and different weights assigned to different news sections),
the algorithm works to compute the right order on the front page and the right
length of time a news story is allowed to remain in that position.
Although the algorithm is programmed to curate the front page, the editors need
to check on the algorithm’s performance continuously. As “Emma” (another editor)
explained, they need to keep track of how the stories actually appear on the front
page and the extent to which the front page reflects the front pages of other major
newspapers (Interview 13, March 2015). In other words, editors cannot simply
assign their news and lifetime values and trust that the algorithm will do its part
of the job correctly. It needs to be checked constantly. It is not just that editors
need to check the actual front page to see how their stories are displayed. They also
need to make sure that the collective product—Omni’s front page—comes
together nicely. In order for the front page to be displayed in the desired way,
editors sometimes have to change the initial values given to a news item after it
has already been published. This practice, more commonly referred to as
“sedating the algorithm,” is by no means uncontroversial among the news
professionals at Omni. Having to "correct" the algorithm by manually refurbishing
the front page seems to go against the purpose of employing an algorithm in the
first place. As Gustafsson and several of the editors told me, the whole idea of
assigning values is premised on the notion that the values represent more or less
objective and stable journalistic criteria of newsworthiness. For example,
changing a story that would, in reality, only merit a news value of 3 into a 5,
which normally indicates breaking news of major importance, may not just feel
counterproductive. It can also complicate the workflow itself. There is a
certain sense in which the initial values cannot be taken as a given, and that you
as an editor have to be prepared to change and adjust your own valuation practice
to accommodate the practices of your colleagues and whatever story they are
working on, in order to achieve the final product that is the front page.
While everyone at Omni seems to agree that “sedating the algorithm” may not be
ideal, there is also the sense in which these unintended consequences become an
opportunity to engage in an ongoing conversation about what news should be and
how the editorial news flow should ideally be organized. At Omni, the meaning ascribed to algorithms had less to do with the algorithm in the technical sense and
more to do with the algorithm as an infrastructural element. Despite describing the
algorithm as a "labor-saving device," for "Frank," the algorithm seems to be merely
one among many other elements that allow him to work in the way that he does
— “without anyone telling you what to do” and with a great deal of responsibility
and freedom (Interview 12, March 2015). The unintended consequences of
algorithms notwithstanding, the same consequences may on a different occasion
be turned into a welcome argument in favor of using algorithms. The need to
check up on the algorithm’s performance as evident from the editorial workflow at
Omni becomes at the same time an argument in support of human labor and know-how.
data and algorithms should not be exaggerated. “An algorithm is just a form of
programmed logic," Larsen replies when I ask what kind of algorithms VG is using.
“Big data or not,” Larsen continues, “we do a lot of interesting things with small data,
too. Whether the data set is big or not is not that important.” At the end of the day,
“the most important working tool you have is your journalistic instinct”
(Interview 20, March 2014). Even at a news organization such as Omni, which
fundamentally relies on algorithmic intermediaries, the computer’s inability to infer
contextual information or understand irony means that "there is no chance that
algorithms will ever be able to replace the human heart and brain in knowing what
is best for the readers” (Interview 8, April 2014). This juxtaposition of human
judgment and computational mechanisms does not just reveal a way of thinking
about algorithms as mechanical and inferior; it also speaks to the desire of
maintaining journalists’ professional en- vironment. As Larsen suggests when I ask
about the prospects for automating the curation and presentation of the front
page at VG, “We have thought about it, but the answer is no.”
As do many of the other informants, Larsen emphasizes the "democratic" function of news and the journalistic values. While he is open to the possibility of
using algorithms to automate some limited and carefully controlled parts of the newspaper, he insists on the fundamental role of human editors in deciding
what readers should know. The newspaper’s role, in Larsen’s view, is not to serve
content based on individuals’ past clicking behavior. If something important just
happened in the world, “it is our role to inform people about it and say ‘this is
important, this is what you need to understand’ ” (Interview 20, March 2014). Key
to this public information function is maintaining people's trust in the
newspaper as a credible source. The editors and managers talk about their brand
name and the importance of having a distinct editorial line that people recognize.
For some, the advent of an algorithmic logic in journalism seems to threaten the
unique and recognizable editorial line that people have come to know. As Møller
says when I ask about the prospect of using algorithms to personalize Politiken’s
front page: “We want Politiken to remain a credible media outlet. The best way to
accomplish that is through our own prioritizations, not through something
automatic.” People come to Politiken because they know what “we stand for,”
Møller adds. He explains, “One of our greatest values as a newspaper is that it is
created by people with a specific worldview” (Interview 5, March 2014).
The informants’ concerns about the damaging effects of algorithms are familiar
ones. Over the past decade, concerns about the algorithmic creation of so-called
filter bubbles and echo chambers—the idea that users only get served more of the
same content based on their own clicks—have regularly been voiced (Pariser, 2011).
These discourses constitute an important backdrop against which the informants
express their own concerns. There was a sense in which the linking of journalism
and the computational almost required informants to reflect upon the perceived
dangers of algorithmic personalization. While Jens Nicolaisen, digital director at
Jyllandsposten, says it has been “absolutely fantastic to discover the strength in having
something 100% algorithmically controlled,” he was quick to point out the
danger of compromising the public ideal of journalism by creating filter bubbles.
Similarly, Møller says that filter bubbles are exactly what they want to avoid. The
question for many of the news organizations is how to solve this conundrum. On one
hand, there is an increasing need to cater to the specific interests of individual
readers. On the other hand, as Møller contends, “If everything were based on
algorithms, we would be worried that you were missing some of the other things
we would like to tell you about” (Interview 5, March 2014).
There is a general wariness among editors and managers of granting too much
agency to algorithms. What really matters is how algorithms can be used to augment
and support journalists in accomplishing their work. Jørgen Frøland, project manager for personalization at Polaris Media and developer of the algorithm at iTromsø, thinks that much of the fear and concern surrounding algorithms stems
from the fact that many people still do not understand what they really are. In most
instances, Frøland points out, “algorithms are just plain stupid” (Interview 19,
August 2016). It is not an either/or but a matter of using technology for the things
that technology does well and people for what they know best. For example,
Frøland adds, algorithms cannot make decisions based on common sense. For
Ingeborg Volan (who works in the Polaris Media–owned regional newspaper
Adresseavisen) the relative dumbness of algorithms means that their role in
journalism should not be considered too problematic. She draws on a familiar argument voiced in debates about robot journalism: the algorithm is not a threat to the job security of journalists but something that has the potential to
make the job more exciting by “relieving journalists of some of their most tedious
tasks” (Interview 4, August 2016). Because “good journalism is about asking critical
questions, being able to judge the trustworthiness of sources, and to go out into
the world and talk to people,” Volan does not think that algorithms pose a very
serious threat to the journalistic profession.
The words people choose to talk about algorithms matter. Ola Henriksson, web
editor and project manager at Sweden’s national newspaper Svenska
Dagbladet, thinks that the word “algorithm” is too confusing and abstract. When I
ask what the word “algorithm” means to him, he says “mathematics” and
“calculations” (Interview 18, June 2016). However, this is not what journalism is really
about, he says, suggesting that the term itself may do more harm than good. The
Schibsted-owned Svenska Dagbladet became the first newspaper to implement the
algorithmic infrastructure and valuation practices developed by Omni. Rather than
talking about an algorithm, the Omni-inspired approach became more widely
known as an “editor-controlled algorithm.” I asked Henriksson whether he thinks
calling it editor-controlled makes a difference. “Yes, for some, it actually seemed to
make a difference,” he replied. The notion of “editor-control” helped ease people’s
minds, and assure them that they would still play an important role, despite the
emergence of an algorithmic media landscape.
I believe that you need to control and own your own technology. Google
and Facebook would be thrilled for us to join their ad platforms. But to
be a publisher is also about integrity: To own the way you publish, when
you publish and where you publish is really important moving
forward. (Rodrigues, 2017)
Let’s pause for a moment and consider what the word “integrity” is doing in
the above account and how it works to make the algorithm matter in the right
way. In light of the many recent controversies surrounding the Facebook
algorithm, especially regarding the discussions about the extent to which
Facebook should assume editorial responsibility, the news media find themselves
in the welcome position of highlighting and strengthening their institutional
legitimacy. For Espen Sundve at Schibsted, the prospect of becoming too
absorbed with Facebook’s news feed algorithms is one of his biggest concerns
about the future of news. Although Sundve has Facebook’s publishing platform
“Instant Articles” in mind when he talks about the risk of being engulfed by the
Facebook algorithm, these concerns have ramifications far beyond its financial
accountability. For news professionals, the problematic algorithm does not
merely manifest itself inside news organizations but, perhaps more importantly,
in terms of the external algorithms governed by Facebook and Google. Sundve
says of the Facebook news feed that, “from a democratic perspective,” it is very
problematic to "give away your editorial position to someone who clearly does
not take any editorial responsibility” (Interview 7, May 2016).
Becoming too dependent upon algorithms that do not reflect the right values is
risky. As Sundve says, technology companies may attempt to give people what they
want, but journalism is primarily about providing people with information they
should know and taking responsibility for that. Giving people information about
what they should know, Sundve adds, also implies giving people the opposite
view of an issue. “If you are a vaccination opponent, you should also be exposed
to facts on the other side of the spectrum and not just find 50 other people
supporting your views on your Facebook news feed,” Sundve says (Interview 7,
May 2016). One way to avoid filter bubbles, many informants suggest, is to think more
carefully about the design of algorithms. It is not that algorithms always create bubbles
and echo chambers. The question is what a specific algorithm is actually optimized
for. As Gillinger sees it, there is no necessary opposition between algorithms and
diversity—“it all depends on what it is that you want your algorithm to reflect.” It is
entirely possible to optimize for clicks if that is what you want; “but, in my world, that
is a badly written algorithm," Gillinger says (Interview 17, May 2016). While
critics worry about Facebook’s alleged power to create filter bubbles and
misinformed publics, there is also the sense in which the democratic crisis can be
resolved by writing better and journalistically sound algorithms.
Algorithms as Problematization
Algorithms do not merely solve or create new problems, as the interviews make perfectly clear. They also need to be understood as "problematization" devices
that work to question the accepted boundaries and definitions of journalism
itself. It is here that we see most clearly the eventfulness of algorithms at play. As
I argued in chapter 3, the notion of event allows for an analytical position that is
not so much interested in establishing the “empirical accuracy” (Michael, 2004) of
what the algorithm is or what it means, but which instead looks at how
algorithms have the capacity to produce new orderings or disorderings.14 As
Mackenzie points out, “event thinking” has helped scholars “think about the
contingency of new formations or assemblages without 'front-loading' a
particular ontological commitment” (2005: 388). In talking to news media
professionals and observing the ways in which algorithms have emerged as an
object of concern within journalism, it seems to me that the power and politics of
algorithms are manifested in the way they prob- lematize accepted boundaries. Paul
Rabinow says of an event that it makes “things work in a different manner and
produces and instantiates new capacities. A form/ event makes many other
things more or less suddenly conceivable” (1999: 180). Just as “events
problematize classifications, practices, things” (Rabinow, 2009: 67), an algorithm, I
suggest, “puts into question or problematises an existing set of boundaries or
limits within” journalism (Mackenzie, 2005: 388). In going forward, the question is
not just what happens to particular domains when they introduce an algorithm, but
how algorithms as events make it “possible to feel, perceive, act or know
differently” (2005: 388).
In the previous sections, I identified at least two distinct ways in which the understanding of algorithms is realized in Scandinavian newsrooms: as forms of potentiality and as something that is articulated as more or less problematic. In this final part of the chapter, I suggest that the notions of potentiality and the problematic do not necessarily stand in opposition to one another, but
rather should be thought of as a “relational play of truth and falsehood” (Rabinow,
2009: 19). That is, when algorithms become contested, when they fluctuate
between discourses of promise and closure, achieve recognition outside of
accepted boundaries, and become imbued with certain capacities for action,
they emerge as forms of "problematization." A problematization, Foucault
writes:
Does not mean the representation of a pre-existent object nor the creation
through discourse of an object that did not exist. It is the ensemble of
discursive and nondiscursive practices that make something enter into
the play of true and false and constitute it as an object of thought
(whether in the form of moral reflection, scientific knowledge, political
analysis, etc.). (1988: 257)
media environment show how social media do not replace but, rather,
supplement and challenge more established forms of media and media use
(Chadwick, 2013; Hermida et al., 2012; Kleis Nielsen & Schrøder, 2014). As
Nicolaisen puts it, news media and social media seem to have entered into a marriage
of convenience of sorts. In the “hybrid media system,” the news information cycle is
no longer defined by the mass media alone but needs to be understood, instead, as
an assemblage composed of many different media forms, actors, and interests in
mutual relations of co-depen- dence (Chadwick, 2013). While we know that many
news media sites attract a fair share of web traffic from social media—in
particular, from Facebook (Kleis Nielsen & Schrøder, 2014), the notion of
hybridization would also suggest that Facebook is dependent on the news media.
Indeed, news is crucial for Facebook. People seem to share less of their personal
lives on platforms like Facebook and Twitter than they used to (Griffith, 2016)
and interact more with news shared by professional content producers. According
to Will Cathcart (2016), who is vice-president of product at Facebook, 600
million people see news stories on Facebook on a weekly basis. Moreover, the
company claims that a big part of why people come to Facebook in the first place is
to catch up on the news and talk with their friends about it. As personal updates
seem to be in decline, Facebook clearly counts on the news media and external content producers to help attract those much-needed clicks, likes, and engagement metrics that fuel its advertising business.
The hybrid media system extends well beyond web traffic and "likes," encompassing the operational logics of platforms and the players involved. Similar to how
users reorient their social media behavior to accommodate the algorithmic logics of
platforms, as was discussed in the previous chapter, the news professionals in
this chapter note how algorithmic logics increasingly also inform journalistic
practices. Ingeborg Volan at Adresseavisen observes how journalists in many
ways have to orient themselves toward the algorithmic logic of external players
like Facebook. Volan emphasizes how vital it has become to be good at playing
Facebook’s game. Becoming more attuned to the algorithmic logic of Facebook is
now a necessary ingredient in reaching as many existing and new readers as
possible. This does not mean, however, that you simply “dismiss all your
journalistic principles and just follow Facebook blindly,” says Volan. The question
is whether you can find ways of “gaming Facebook’s algorithms to serve your
own journalistic purpose.” The tricky part, Volan adds, is finding good ways of
adhering to the “things we know Facebook will reward but without compromising
journalistic principles.” If you want your au- dience to come to your own site to
watch a particular video, you do not put the whole video up on Facebook. What
you do, instead, is share a “teaser” on Facebook, which directs people to your
own site, according to Volan. However, producing videos also makes sense from
the perspective of the operational logics of platforms. Given that Facebook’s
algorithms privilege videos over text, Volan suggests, they make sure to publish
more videos on Facebook in order to be prioritized by the algorithm (Interview
4, August 2016).
The case of Omni further underscores the fact that the journalistic practice in the
social media age cannot be reduced to making the news “algorithm ready,” blindly
following the dictations of Facebook. Here, it becomes apparent that algorithms
also problematize the practice and meaning of doing journalism. In contrast to
previous research on journalists' responses to new technologies in the
newsroom, which is “nearly unanimous in concluding that journalists react
defensively in the face of such boundary intrusions on their professional turf”
(Lewis & Usher, 2016: 15), many of the editors working at Omni see new
technologies as part of the attraction of working there in the first place.
Although, of all the organizations studied, Omni has, perhaps, gone furthest in
making the algorithm an essential component of the editorial workflow itself, we
can conclude that the algorithm by no means deskills the workforce, as critics
often worry. Instead, the inclusion of an algorithm into the workflow repositions
and displaces the work in new ways. As the notion of problematization is meant to
highlight, one cannot look at algorithms as something new and detachable that has
been added to a preexisting journalistic domain. Such a notion falsely
presupposes that algorithms and journalism are discrete and freestanding
domains that suddenly have been forced to interact. Conceived instead as
“eventful,” algorithms are defined by their capacity to produce new
environments. This alternative view is much more nuanced analytically, providing
greater explana- tory force to the cases I have been considering in this chapter. As
a consequence, it renders the question of whether algorithms are responsible for
deskilling the workforce superfluous. Instead, it urges us to ask how forms of
algorithmic interventions are productive of new journalistic realities, and to what
effect. Simply claiming that algorithms make people redundant will not do. While
it seems safe to say that algo- rithms do something that, in some cases, may imply
reducing the need for human labor, we should be wary of claims that simply end
there.
What, then, do algorithms problematize? In the remainder of this chapter, I will
focus on two possible answers, aware of there being no exhaustive explanation for a
question like this. First, algorithms contest and transform how journalism is performed. As the interviews suggest, algorithms do not eliminate the need for
human judgment and know-how in news work; they displace, redistribute, and
shape new ways of being a news worker. Similarly, seeing machines as lacking
instincts and, therefore, ill-suited to replace journalists and their “gut feelings” is
too simple a view. What journalists may have a "gut feeling" about expands and
transforms as well. The field observations and conversations I had with staff at Omni
revealed how editors in particular have developed an instinct for how the
machine works. As “Clare” tellingly suggests, it is not just a matter of having
an instinct for news at Omni, it is also about “developing a feeling for the
algorithm” (Interview 11, March 2015). In contrast to much recent discourse on
computational journalism, which hinges on the notion of “computational
thinking,” the case of Omni suggests that not all “computational thinking”
requires journalists to learn to think more like computer scientists. Whereas the
notion of “computational thinking,” as originally
cannot necessarily be found in the algorithm, but rather, as it was also suggested
in chapter 3, is more a question of when the agency of algorithms is mobilized
and on whose behalf.
Concluding Remarks
Algorithms are not simply algorithms, as it were. Their ontology is up for grabs. Let’s
briefly return to the notion of fake news introduced at the beginning of the chapter.
While Facebook should not be blamed for creating a misinformed public, or for
determining the outcome of elections, the debate over fake news and post-truth underscores the general point I make in this chapter about the power and politics of the algorithmic media landscape. If we want to understand the ways in which algorithms matter, we need to pay attention to the ways in which they are made to matter, as well as the ways in which they are made to matter differently in different situations and for different purposes. Going beyond viewing algorithms as problem solvers, as the standard computer science definition would have it,
or creating new problems and concerns, a trope on which the social science
perspective on algorithms tends to focus, I have made a case for attending to the
eventfulness of algorithms that constitutes them as particular forms of
problematization. Investigating the ways in which algorithms become relevant, I
discussed how relations of sameness or difference are enacted on given
occasions, with given discursive and material consequences.
As I showed, the “truth” of algorithms is “played” out in material-discursive
practices. Constitutive of what Foucault called "games of truth," the narratives of news media professionals show how the mattering of algorithms is predicated on establishing certain norms and values of what is desirable/problematic, good/bad, true/false. Similar to how truth games were set up in the organization of medical knowledge and the designation of madness in Foucault's writings (1988; 1997), the truth games surrounding algorithms have to be connected to a whole series of socioeconomic processes and professional practice. As Foucault makes sure to
highlight, a game “is not a game in the sense of an amusement,” but rather, “a set
of rules by which truth is produced” so that something can be “considered valid or
invalid, winning or losing” (Foucault & Rabinow, 1997: 297). These rules,
however, are never static. While implicit and explicit "rules," such as journalistic objectivity and professional ethics endure, the meaning of these "rules" changes. In an algorithmic age, news media are not merely adapting to the presence of algorithms or simply using algorithmic systems as part of journalistic practices. What it means to be a journalist and publisher changes, as do definitions
of what news and newsworthiness is and should be. News selection only
moderately falls to a “gut feeling.” The “hard facts” of metrics, traffic data and new
media logics help transform feeling from gut to byte. The way news professionals
talk about developing “a feeling for the algorithm,” or “gaming the
I began this book by asking readers to consider the following scenario: Copenhagen
on a rainy November day, where a semester is about to finish. The festive season
is around the corner, but it’s otherwise a normal day like any other, filled with
the habits of everyday life. Many of these involve media, such as checking
Facebook, reading online news, searching Google for information, writing emails,
tweeting a link, buying a Christmas present from Amazon, watching an episode of
“House of Cards” on Netflix. While there is nothing particularly eye-opening
about these moments, that was exactly the point. It describes life lived with, in
and through the media. Or more precisely, a life fundamentally intertwined and
entangled with algorithmic media of all sorts. As I am writing this concluding
chapter it so happens to be a remarkably similar day in November, albeit two
years later.
Of course, nothing ever stays exactly the same. This time around it is sunny,
not raining, and I have much less time to check Facebook or watch Netflix, as I
am trying to finish this book. The platforms and algorithms have changed too. While
we might not always notice, the calculative devices of contemporary media
constantly work to give us more of what we seemingly want. Attuned to users’
clicks, shares, and likes, algorithms are constantly updated, revised, and tweaked to
make the flow of information seem more relevant and timely. As Wendy Chun
suggests, “New media live and die by the update: the end of the update, the
end of the object” (2016: 2).1 This condition of the perpetual update means that I am
never guaranteed to see the same content on Facebook or Twitter as my neighbor
or friend, due to network logics and algorithmic processes of personalization. Every
day, or even sev- eral times a day, users are implicitly confronted with the question of
“what will I see this time”?
It used to be that one could expect a certain sense of chronological order to the
ways in which information was presented online. Today, chronos seems increasingly
to have been replaced by kairos, the right or opportune time to say and do some-
thing. As evident in news feeds of all kinds, time is no longer about the linear,
continuous flow, but rather about the very punctuation of linear time itself. As it was
argued in chapter 4, kairos is what constitutes the temporal regime of algorithmic
media. After all, as Facebook tellingly suggests, the goal of the news feed “is to de-
liver the right content to the right people at the right time so they don’t miss
the stories that are important to them” (Backstrom, 2013). In the past few years,
many more media platforms have followed suit. Instagram and Twitter are only
some of the more prominent platforms that have started to punctuate the flow of
the “real- time” feed by highlighting “right time” content instead. The previous
chapter shows how the logic of “the right content, to the right people, at the
right time” is in- creasingly becoming part of traditional news media as well.
Faced with economic hardship, news media are reorienting themselves to the new
digital realities. One consequence is the way news organizations adapt to the
algorithmic media landscape by deploying algorithms to produce, distribute and
present news in ways that readers are already familiar with from being in social
media spaces. But even more significance is carried by the fact that traditional
news media are becoming algorithmically attuned.
This brings me to a third difference between November two years ago and
today. Socially and culturally speaking, algorithms have emerged as a social and
cultural im- aginary of sorts. Algorithms are seemingly “everywhere.” They have
become sites for cultural and social production. As objects of news stories,
academic papers, and con- ferences, as well as the focal point of public
controversies, popular discourse, cultural production, and affective encounters,
algorithms are producing calculative results. And while algorithms are oriented
toward us, we, too, are increasingly becoming ori- ented toward the algorithm. In
chapter 5, I showed how the algorithmic output of social media becomes
culturally meaningful, as seen in the ways that people form opinions about specific
systems and act strategically around them.2 Algorithms are not just making their
mark on culture and society; to a certain extent they have become culture. As Roberge
and Melançon suggest “algorithms do cultural things, and they are increasingly active
in producing meaning and interpreting culture” (2017: 318).
However, algorithms are not simply means of interpreting culture; they are also productive of culture, understood in terms of the practices they engender. Hallinan and
Striphas claim "engineers now speak with unprecedented authority on the subject, suffusing culture with assumptions, agendas, and understandings consistent with their disciplines" (2016: 119). Although engineers and computer scientists are perceived to
be in a privileged position to hold opinions about the term “algorithm,” we are at the
same time seeing an emerging trend where “ordinary” people and institutions are
speaking and thinking about algorithms. They do so by permeating algorithms
with assumptions, agendas, and understandings that are consistent with their
specific backgrounds and life worlds. For someone who encounters the Facebook
algorithm as a suppressor and regulator of content, the algorithm gets suffused with
agendas and assumptions that may not be shared by someone else who
encounters similar types of posts but in a different situation.
Technicity
How are we to think of the capacity that algorithms have to bring new realities
into being? At the start of the book, I suggested a specific approach to the
question of “how algorithms shape life in the contemporary media landscape,” by
looking at how platforms like Facebook condition and support specific forms of
sociality in ways that are specific to the architecture and material substrate of the
medium in question. I described how Facebook encodes friendship in ways that
essentially support the circuit of profit, and introduced the notion of programmed
sociality. At the same time, in chapter 1 I warned against taking this notion of
programmed sociality to credit for technological determinism. Software and
algorithms do not simply operate in isola- tion or exercise power in any
unidirectional way. Rather, their capacity to produce sociality always already
occurs in relation to other elements, and as part of an assem- blage through which
these elements take on their meaning in the first place.
This is the technicity of algorithms, understood as the capacity they have to
make things happen as part of a co-constitutive milieu of relations. For the French
philoso- pher Gilbert Simondon, who introduced the concepts of transduction and
technicity as useful frames to account for the productive power of technical
objects, humans and machines are mutually related, where the technical object
always already appears as a “theatre of a number of relationships of
reciprocal causality” (1980: 22). Technicity, then, pertains not so much to what
the technical objects are, but rather to the “forces that it exercises on other beings
as well as in and through the new virtu- alities, and hence realities, it brings into
being” (Hoel & van der Tuin, 2013: 190). This understanding comes close to the
Foucauldian manner in which power was per- ceived in chapter 2. For Foucault, power
pertains precisely to its exercise and force, to the “way in which certain actions
modify others” (1982: 788). For an understanding of algorithms, this implies
resisting reading algorithms as either technology or cul- ture. Rather one is
prompted to think of algorithms as “dramas”—or “theatre” of relationships in
Simondon’s terms—that algorithmic mediations bring about. The concept of
technicity, then, offers a way of thinking about the productive power of algorithms
that doesn’t rely on the attribution of power to some fixed characteristics or stable
artefact devoid of human agency. To speak of the technicity of algorithms is to
emphasize how algorithms do not possess power in and of themselves, but how
power unfolds as an “ontological force” (Hoel & van der Tuin, 2013).
Let’s briefly consider the examples of Amazon book recommendations and Twitter’s
“While you were away” feature. The technicity here should be understood as the
co- evolving conditions that emerge from the algorithm’s dynamic functioning. An
analy- sis of an event such as the recommendation of a book, or having one’s fear of
missing out catered to by a promise of a peek into what happened “when you
were away,” should be grounded in an “understanding of the nature of machines, of
their mutual relationships and their relationships with man, and of the values
involved in these
relationships” (Simondon, 1980: 6). Though I have not made any claims about
“the nature” of algorithms, I propose that algorithmic modes of existence are
grounded in the relational drama of diverse environments. In the process of
recommending books, Amazon’s “collaborative filtering” algorithms create conditions
through which certain books are brought to user attention while others remain
elusive. Collaborative filtering is based on the assumption that customers who share some preferences would also share others. Rather than matching users with similar customers, Amazon uses "item-to-item" collaborative filtering to recommend similar books or other items, and this calculated chain of similarity relies heavily on user input. Thinking of the power of algorithms in terms of their technicity leaves no
one off the agential hook, so to speak. The nature of the machine always needs to
be understood in relation to other machines and "man," in Simondon's terms. On
Amazon, we buy books (often several at the same time), put them into wish lists,
search for items, browse the platform, write reviews, and provide ratings. On
Twitter, we tweet, browse, spend time, build non- reciprocal networks of
relationships, reply, and retweet. We are already implicated. According to Barad,
“we are responsible for the world in which we live not because it is an arbitrary
construction of our choosing, but because it is sedimented out of par- ticular
practices that we have a role in shaping” (2007: 203). This means that we cannot
treat the power of algorithms as a one-way street. If power refers to the way in which
certain actions modify others, our actions count too. The tweets we post, items we
purchase, and likes we hand out all factor in to the sum total. As we saw in chap- ter
4, actions such as these modify what becomes visible to us and to our networks.
This also means that the roles of designers, developers, and other decision-makers
are crucial. Just because algorithms learn and amplify existing societal biases, in-
equalities or stereotypes, it does not mean that they cannot be corrected for. On the
contrary, algorithm developers can compensate for the bias in datasets, and compa-
nies do make choices about when to intervene in correcting certain algorithmic out-
comes.3 When algorithms end up as biased toward certain groups, these
instances can be identified and handled computationally. As was discussed in chapter
6, using algorithms in journalism does not automatically entail the creation of filter
bubbles, contrary to what is often claimed. As one of the interviewees suggested, “If
you want your algorithm to reflect diversity that’s what you have to design for.” What’s
impor- tant is how designers, developers and decision makers think about what
algorithms should be optimized for, and what possible consequences this may have for
different social groups.
Orientations
Agency is not all or nothing. To say “the algorithm did it” will not do. Nor is it
an option to assign responsibility to others, just because there are always other
entities involved. As with the controversies surrounding algorithms discussed in
the book,
updates, videos, comments), and time. In general terms, algorithms do not control
what users say or how they behave. Rather, algorithms shape how users come to
speak and what actions are made possible to begin with.5 In the terms of
Foucault, algorithms can be seen as instantiating a particular form of
“government,” in terms of “the way in which the conduct of individuals or of
groups might be directed [. . .] To govern, in this sense, is to structure the
possible field of action of others” (Foucault, 1982: 790).
While algorithms do not determine how people behave, they shape an environ-
ment in which certain subject positions are made more real and available to us.
As Sara Ahmed suggests in her book Queer Phenomenology, we are “directed in
some ways more than others” (2006: 15). Her book takes up the question of what it
means to be oriented, that is, to become attuned in certain ways. Ahmed
instructively asks: “What difference does it make ‘what’ we are oriented toward?”
(2006: 1). While Ahmed talks about human orientations mainly in terms of how
social relations are arranged spatially, algorithms are certainly also oriented
toward users. Algorithms, particularly in the age of machine learning, need us,
depend on us, and thrive on us. In the midst of ongoing, fervent arguments
about Facebook disseminating “fake news,” Facebook worries about “data-
dirtying” practices, whereby users willingly provide misinformation (Ctrl-Shift,
2015). As chapter 5 shows, these worries are far from unwarranted. People do not
necessarily post what is on their mind (to para- phrase Facebook’s status update
prompt) or what is actually happening (to use Twitter’s expression). Instead,
people may post much more strategically, in attempts to make themselves more or
less “algorithmically recognizable” (Gillespie, 2017).
Ahmed suggests that “orientations matter,” because they shape how the world co-
heres around us. That is, orientations matter because this is “how certain things come
to be significant” as part of situated encounters (Ahmed, 2010: 235). Moreover,
Ahmed writes, orientations matter in that it “affects how subjects and objects materi-
alize and come to take shape in the way that they do” (ibid.). As I have suggested,
algorithms are not just oriented toward us, we are increasingly becoming
oriented toward algorithms and algorithmic systems as well. In chapter 5 I show
how we are developing our own sense of sociality and sense of self in, through, and
around algo- rithms. There is an interesting relational drama at play when a single
person gets served an ad for engagement rings or a middle-aged woman gets
suggested recom- mendations for wrinkle cream. Was it an algorithm or merely
the person’s clicking behavior that was responsible for these recommendations
popping up, and to what degree does it matter? Can one simply attribute agency to
the machine, and laugh off the stereotypical ways of depicting personal relations
inscribed into the design of algorithms? What would it mean to take
responsibility in this case?
I suggest in chapter 5 that the attribution of agency is realized in the
encounters that people have with what they perceive as algorithmic phenomena.
Depending on specific situated encounters, algorithms are perceived as creepy,
helpful, disturbing, intrusive, and so on. The notion of the “algorithmic imaginary”
was introduced to
denote ways of thinking about what algorithms are, what they should be, how
they function, and what these imaginations, in turn, make possible. While
algorithms may produce certain conditions for action, these conditions are not
necessarily attributable to algorithms in a purely technical sense. Rather, how and
when people perceive algorithms may sometimes matter even more. Thus, when
looking at what people do online, there is no way of telling what made them act in
certain ways. The notion of the algorithmic imaginary suggests that what the
algorithm is may not always be about the specific instructions telling the computer
what to do, but about the imaginations and perceptions that people form. It
follows from the logic of ma- chine learning, that what people “do in anticipation of
algorithms,” as Gillespie sug- gests, “tells us a great deal about what algorithms do
in return” (2017: 75). In terms of accountability and responsibility, then, we are
all implicated, machines and humans alike.
Boundary-making Practices
From a strictly technical perspective, an algorithm can be defined as a step-by-step procedure for solving a problem in a finite number of steps. Throughout this
book, I have argued that algorithms are much more besides this narrow, technical
concept. Algorithms do not only instruct, they mean something, and often they
mean differ- ent and conflicting things. Algorithms manifest as objects of social
concern, as they become inscribed into the fabric of everyday life. Algorithms,
moreover, help to shape the ways in which we come to know others and
ourselves.
If the textbook definition of algorithm is but one version implicated in the manyfoldedness of the term, then where and when besides code do algorithms manifest?
As I attest in this book, algorithms exist in the conversations of academics, in
media portrayals, public controversies (i.e., debates over “fake news” on Facebook),
in people’s perceptions and imaginations, in representational resting points (i.e.,
simplified images of formulas), in metaphors (i.e., the notion of algorithm as recipe),
as part of films and popular imagery, stories, and professional practices. In an expanded
understanding of the term, algorithms are seen as events. Without front-loading
an ontological commitment with regard to algorithms, the notion of the event helps
direct attention to the ways in which algorithms make other things suddenly
conceivable.
Who or what is made accountable and responsible for algorithmic outcomes
depends on how and where the boundaries are drawn. Disentangling the
ontological politics of algorithms requires “remembering that boundaries between
humans and machines are not naturally given but constructed, in particular
historical ways and with particular social and material consequences”
(Suchman, 2007: 11). Again, there are at least two sides to this. What counts as
an algorithm, as well as when it counts, are not given. Following Suchman, we
need to pay attention to the
“boundary work through which a given entity is delineated as such” (2007: 283).
As we have seen, what counts as an algorithm may vary greatly. Yet, most people
would probably have little problem acknowledging that the computer science
text- book definition of an algorithm, at least partly, constitutes the
phenomenon in question. With this book, I hope to have made people more
susceptible to the idea that algorithms exist and manifest on scales and levels that go
far beyond the narrow, traditional definition.
Clearly, the question of what counts as an algorithm is highly site-specific, and even when algorithms are assumed they may count differently. In terms of professional practice, the interviews presented in chapter 6 reveal the extent to which algorithms perpetuate designers’ values, beliefs, and assumptions. I also showed that they reflect the different institutional and organizational settings in which algorithms linger. News media professionals frequently contrast the “good” journalistic algorithm to the “inferior” algorithms of social media platforms. News editors and managers speak of the importance of coding the “right” journalistic values into systems, equating the “bad” algorithm with systems that merely give people more of what they already have. While it might be tempting to take sides, to say that one algorithmic system is better than the other, we must be wary not only of the politics embedded in design but also of the politics of making boundaries. As was suggested in chapter 6, what constitutes a “problematic algorithm” in one setting (i.e., the Facebook algorithm challenging the livelihood of journalism) may simply be rendered unproblematic and desirable in another (i.e., Facebook itself touting the algorithms as merely helping people get closer to their friends). Problematic values in design aside, it is also important to point out that there are no “innocent” ways of knowing or talking about algorithms.
Returning to what was the starting point of the book, we are now in a better position to think through some of the more general claims that were made at the outset. The question was raised as to how algorithms are shaping the conditions of everyday life. One answer is that the materiality of algorithms, the values and assumptions designed into the technological properties of algorithmic systems, governs sociality in specific ways. A claim was made to the effect that algorithms “program” ways of being together, producing the conditions through which people come to speak and connect online. Now we are hopefully in a better position to see how questioning the ways in which algorithms shape life also requires us to question how life shapes algorithms. Through the continuous collection of user data, algorithmic systems reach a higher level of flexibility and responsiveness: the relations between self and others change continuously. What I want to suggest, then, is that the notion of programmed sociality at play in the algorithmic media landscape is one in which what is at stake is the ongoing actualization of “becoming together” in ways that temporarily stabilize, rather than the result of pre-programmed forms of being. The way we are together can only be thought of in terms of becoming, where the “we” implied by sociality refers not to any specifics or essence, but to the “we” as relation.
Notes
Chapter 1
Chapter 2
1. http://cs-exhibitions.uni-klu.ac.at/index.php?id=193
2. According to Hromkovič, Leibniz was the first philosopher who conceptualized mathematics as an “instrument to automatize the intellectual work of humans. One expresses a part of reality as a mathematical model, and then one calculates using arithmetic” (2015: 274).
3. Not all algorithms depend on “if . . . then” statements. Other control structures include “while”
or “for” loops.
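To make the contrast concrete for readers without a programming background, a minimal sketch in Python (my own illustration, not drawn from any system discussed in this book) might look as follows:

    # An "if . . . then" statement: the instruction runs only when the condition holds.
    temperature = 31
    if temperature > 30:
        print("turn on the fan")

    # A "for" loop: the instruction repeats once for every item in a collection.
    for name in ["Ada", "Alan", "Grace"]:
        print("hello,", name)

    # A "while" loop: the instruction repeats for as long as the condition remains true.
    countdown = 3
    while countdown > 0:
        print(countdown)
        countdown -= 1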
4. Elegance, as first proposed by Donald Knuth in Literate Programming (1984), can be
measured by four criteria: the leanness of the code, the clarity with which the problem is
defined, the sparseness of the use of resources such as time and processor cycles, and
implementation in the most suitable language on the most suitable system for its
execution.
5. Some of the most basic data structures include the array, the record, the set, and the sequence,
of which array is probably the most widely used. More complicated structures include: lists,
rings, trees, and graphs (Wirth, 1985: 13).
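By way of illustration only (a sketch of my own in Python, rather than the Pascal-flavored terminology Wirth had in mind), these basic structures have familiar counterparts in most programming languages:

    # Array/sequence: an ordered collection addressed by position.
    grades = [4, 5, 3, 5]

    # Record: a bundle of named fields describing a single entity.
    student = {"name": "Ada", "year": 1985, "grades": grades}

    # Set: an unordered collection of unique members.
    courses = {"informatics", "media studies"}

    # More complicated structures such as trees and graphs are typically built
    # from the simpler ones, e.g., a graph as a mapping from each node to a
    # list of its neighbors.
    graph = {"A": ["B", "C"], "B": ["C"], "C": []}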
6. For more on the history, cultures, and technical aspects of databases, see Bucher (2016),
Codd (1970), Driscoll (2012), Dourish (2014), Manovich (1999), and Wade &
Chamberlin (2012).
7. In a program written in the programming language C, the different steps can usefully be illustrated by the program’s file-naming convention. The source code file ends in “.c,” the object code ends in “.obj,” and the executable files end in “.exe.”
8. While it is beyond the scope of this chapter to lay out how the extremely complex fields
of cognitive science or children’s cognitive development understand human learning, it
may suffice to say that “many machine-learning researchers take inspiration from these
fields” (Domingos, 2015: 204).
9. A fourth category of machine learning that is usually listed is reinforcement learning, which
is about learning in situ as the system interacts with a dynamic environment, for example, in
the case of self-driving cars.
10. For a good overview of the technical details on how data mining and machine learning work,
see Barocas & Selbst (2016). They provide an excellent description of the different processes
and steps in data mining, including defining the target variable, labeling and collecting the
training data, selecting features, and making decisions on the basis of the resulting
model.
11. Probabilistic models or classifiers called Naive Bayes usually power spam filters. For more on
the history and working of Naive Bayes classifiers, see Domingos (2015: 151–153) or Rieder
(2017). According to Domingos, it may well be the most widely used learner at Google.
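To give a flavor of how such a classifier works, consider the following heavily simplified sketch (my own illustration in Python; the word probabilities are invented, whereas a real filter would estimate them from large collections of labeled mail):

    # Toy probabilities of seeing a word in spam and in legitimate mail.
    p_word_given_spam = {"viagra": 0.8, "meeting": 0.05, "free": 0.6}
    p_word_given_ham = {"viagra": 0.01, "meeting": 0.4, "free": 0.2}
    p_spam, p_ham = 0.4, 0.6  # prior probabilities of the two classes

    def spam_score(words):
        # The "naive" assumption: words are treated as independent given the class,
        # so their likelihoods simply multiply.
        spam, ham = p_spam, p_ham
        for w in words:
            spam *= p_word_given_spam.get(w, 0.1)
            ham *= p_word_given_ham.get(w, 0.1)
        return spam / (spam + ham)  # posterior probability that the mail is spam

    print(spam_score(["free", "viagra"]))  # high score: likely spam
    print(spam_score(["meeting"]))         # low score: likely legitimate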
12. See Barocas & Selbst (2016: 9) on creditworthiness.
13. Rules of thumb, as well as systematic searches over the performance of different parameter values on subsets of the data (cross-validation), are also often used.
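For readers curious what this looks like in practice, a minimal sketch (my own illustration, using the widely used scikit-learn library and one of its bundled example datasets; the parameter values are arbitrary) might read:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)  # stand-in dataset

    # Each candidate value of the parameter is scored by cross-validation:
    # the data are split into five subsets, and the model is repeatedly
    # trained on four of them and tested on the one held out.
    for depth in [1, 2, 3, 5, 8]:
        scores = cross_val_score(DecisionTreeClassifier(max_depth=depth), X, y, cv=5)
        print(depth, round(scores.mean(), 3))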
14. Numbers based on consulting Facebook’s newsroom on December 31, 2014: http://newsroom.fb.com/products.
15. Neural networks in machine learning do not work exactly like the brain but are simply
inspired by the ways in which the brain is thought to learn. The first formal model of a
neuron was proposed by Warren McCulloch and Walter Pitts in 1943. It was not until
1957, when Frank Rosenblatt pioneered neural nets with his conception of the perceptron,
that neurons were thought to be able to learn to recognize simple patterns in images. For
more on the early history of neural nets, see Minsky & Papert (1969) and M. Olazaran
(1996). For a literature overview of the field of neural networks and deep learning, see, for
example, J. Schmidhuber (2015).
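To convey the basic intuition behind Rosenblatt's perceptron, here is a minimal sketch of the classic learning rule (my own illustration in Python, using a trivially small example rather than images):

    # Training data: inputs and the desired output (here, the logical AND of two values).
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = [0.0, 0.0], 0.0

    for _ in range(10):  # sweep over the training data a few times
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            # Perceptron rule: nudge the weights in the direction that reduces the error.
            weights[0] += 0.1 * error * x1
            weights[1] += 0.1 * error * x2
            bias += 0.1 * error

    print(weights, bias)  # weights that now separate the two classes of inputs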
16. For more on how Google image recognition works using deep learning and neural networks,
see http://googleresearch.blogspot.dk/2015/06/inceptionism-going-deeper-into-neural.html.
17. Bayes’ theorem sits at the heart of statistics and machine learning, though what a Bayesian
means by probabilities may differ from the ways in which statisticians use the concept.
As Domingos (2015) explains, statisticians usually follow a much stricter “frequentist”
interpretation of probability.
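For reference, the theorem itself is compact. In standard notation,

    P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}

that is, the probability of a hypothesis A (say, that a given email is spam) given the evidence B (say, the words the email contains) is obtained by weighting the probability of the evidence under the hypothesis by the prior probability of the hypothesis. The disagreement Domingos points to concerns what such probabilities are taken to mean, not the formula itself.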
18. For the most comprehensive bibliography on critical algorithm studies, put together by
Tarleton Gillespie and Nick Seaver, see: https://socialmediacollective.org/reading-lists/
critical-algorithm-studies (last accessed, May 2, 2016).
19. The need for coding skills is one of the most discussed topics in software studies. While
it certainly helps to know the principles of writing code, some syntax of a programming
language, and how the computer works, a sociologist of algorithms needs to know how to
code no more than a television scholar needs to know mechanical engineering.
20. On the notion of “algorithmic culture,” see also Galloway (2006b); Kushner (2013).
21. For an overview of Foucault’s concepts of power, see Lemke, 2012; Faubion, 1994; Rabinow,
1994. Foucault’s public lectures (held at the Collège de France as part of his appointment as
chair of the “history of systems of thought” between 1970 and his death in 1984) provide a
comprehensive survey of his various conceptualizations of power, including the notions of
disciplinary power (Foucault, 2015), biopower (Foucault, 2007, 2008), and pastoral power
(Foucault, 2007). Foucault’s text on The Subject and Power (1982) provides a classic state-
ment of how he understood the notion of power and how it can be studied.
22. Foucault’s ideas of omnipresent power, however, have a much longer historical philosophical
tradition going back, at least, to the philosopher Baruch Spinoza (1632–1677) who is known
for his metaphysics of substance monism, the view that everything is a “mode” of one onto-
logical substance (God or Nature). Monism is typically contrasted with Cartesian dualism or
the view that the world is made of two fundamental categories of things or principles.
For Spinoza, modes (or subsets of the substance) are always in the process of entering into
relations with other modes (i.e., humans, animals, things, etc.). As Jane Bennett puts it,
Spinozan nature is “a place wherein bodies strive to enhance their power of activity by
forging alliances with other bodies in their vicinity” (2004: 353). Foucault’s way of
understanding power as omnipresent and relational bears resemblance to Spinoza’s
monism and his concept of immanent causality; see Juniper & Jose, 2008; Deleuze,
1988.
23. Gatekeeping is the idea that information flows from senders to receivers through
various “gates.” Originally conceived by the social psychologist Kurt Lewin in 1947, the idea
of gate- keeping became crucial to journalism studies and information science as a way to
explain the
process of editing information. In the domain of communication, the gatekeeping concept is
usually attributed to Lewin’s research assistant David Manning White, who in 1950 studied
how “Mr. Gates,” a wire editor in a small US newspaper, based his selection of news on some
highly subjective criteria (Thurman, 2015). Ever since, the notion of gatekeeping has been
used to designate the editorial decision-making process in journalism and media.
24. Within scholarship on computational systems, previous research emphasizing a “values in
design” perspective includes discussions about the politics of search engines (Introna &
Nissenbaum, 2000; Zimmer, 2008), cookies (Elmer, 2004), knowledge infrastructures
(Knobel & Bowker, 2011; Bowker & Star, 2000), and game design (Flanagan & Nissenbaum,
2014). “Values in design” approaches typically hold that technology raises political concerns
not just in the way it functions but also because the ways in which it works often seem to be
at odds with people’s expectations (Introna & Nissenbaum, 2000: 178).
25. For a good discussion of the concept of government and governmentality, see
Michel Senellart’s overview in Foucault, 2007: 499–507.
26. For his conceptualization of government in his Collège de France lectures, Foucault
draws heavily on the anti-Machiavellian writer Guillaume de La Perrière and his work
Le Miroir politique, contenant diverses manières de gouverner (1555). Despite characterizing
La Perrière’s writing as boring compared to that of Machiavelli, Foucault sees great merit in
La Perrière’s way of conceptualizing government as being concerned with the relationship
between men and things (with an emphasis on relationship).
27. As Bröckling et al. point out, such technical means may, for instance, include social engineering strategies embedded in various machines, medial networks, and recording and visualization systems (2011: 12).
28. Foucault’s thinking on subjectivation and power relations changed somewhat throughout his
career. Whereas his earlier work conceptualized subjects as “docile bodies” shaped by disci-
plinary processes of power, his later works focused on what he termed “technologies of self,”
whereby subjects are seen as much more self-sufficient in terms of shaping their conditions
of living.
29. David Golumbia (2009) makes a similar argument that computation can be understood as
an “ideology that informs our thinking not just about computers, but about economic
and social trends.”
30. http://www.bbc.co.uk/programmes/b0523m9r
31. As Law & Singleton usefully break the notion of the multiple down: “it is most unlikely that
whatever we are looking at is one thing at all. So, for instance, a ‘natural’ reality such as foot-
and-mouth disease is not just seen differently by vets, virologists and epidemiologists
(though indeed it is). It is actually a different thing in veterinary, virological and epidemio-
logical practice. It is made or done to be different in these different practices. It is a multiple
reality” (2014: 384).
Chapter 3
1. See Wendy H.K. Chun (2011) for more on how the Enlightenment vision connects to
software.
2. Secrecy has long been a topic of concern for disciplines such as sociology (most notably, in
the works of Simmel, 1906), anthropology (Bellman, 1984), and philosophy (Derrida &
Ferraris, 2001). On the relations between secrecy and transparency, see Birchall (2011).
3. The unknown here is simply understood as lack of knowledge or information. For more on
how the unknown can be conceptualized further, see, for example, the extant literature on
the notion of ignorance (Gross, 2007; McGoey, 2012; Roberts, 2012; Smithson, 2012).
Scholars distinguish among different types of ignorance (or lack of knowledge). For exam-
ple, between known unknowns and unknown unknowns, where the former
“denotes knowledge of what is known about the limits of knowledge; there are certain
things that we
know that we do not know” while the latter “refers to a total lack of knowledge”
(Roberts, 2012: 217).
4. For one of the first discussions on black boxes within science and technology studies, see
Callon and Latour (1981), where they conceptualize power as the ability to sit on top of
black boxes. For a critical account of “opening up the black box,” see Winner (1993). A quick
Google Scholar search revealed that over 18,000 articles have been published containing the
phrase “opening the black box,” including the black boxes of finance, nanotechnology,
soil microbial diversity, aid effectiveness, and media effects.
5. Galison exemplifies his notion of antiepistemology by way of one of the most mythical
examples of trade secrets, the Coca-Cola formula. As Galison writes, quoting security
expert Quist: “the recipe for Coca-Cola Classic has been kept a secret for over one
hundred years. It is said that only two Coca-Cola company executives know that recipe
[which] is in a safe deposit box in Atlanta, which may be opened only by vote of the
company’s board of directors . . . We probably would not know if a national security secret
was as well-kept as the secret of Coca-Cola” (2004: 239).
6. Legal debates over algorithmic regulation have largely centered on free speech jurisprudence
under the First Amendment. Pasquale and Bracha (2008), for example, argue not only that
search engines need to be regulated but, more specifically, their ability to structure search
results needs to be regulated, and that the First Amendment does not encompass search
engine results. Others disagree, seeing search engines as fully protected by the First
Amendment (Volokh & Falk, 2011).
7. Bataille’s notion of “non-savoir” has been translated as both unknowing (Bataille &
Michelson, 1986) and nonknowledge (Bataille, 2004).
8. At Twitter, they describe their mode of platform development in terms of a “culture of exper-
imentation”: https://blog.twitter.com/2015/the-what-and-why-of-product-experimentation-
at-twitter-0.
9. There are many resources providing an overview of what a relational materialism entails, its
key thinkers and core tenets—for example, Bennett, Cheah, Orlie, Grosz, Coole, &
Frost (2010) for an overview on new materialism; Thrift (2007) for an introduction to many
of the core issues at stake in a relational ontology; and Latour (2005) for an introduction to
actor-network theory. Scholarly debates about these issues also proliferate in special issues of academic journals and within specific academic disciplines (i.e., information systems, education, human geography) that seem to have adapted to a particular strand of relational materialism (i.e., agential realism, new materialism), often with regard to specific concepts (i.e., sociomateriality, material-discursive, assemblage) and thinkers (i.e., Barad,
Whitehead, Latour).
10. As Lemke (2015) notes, the term “more-than-human” was coined by Braun and Whatmore
(2010: xx). Braun and Whatmore use it as the preferred term over “posthuman.” These ap-
proaches also emphasize terms such as practice, performance, movement, process, entangle-
ment, relation, materiality and the nonhuman.
11. Prehension is a key concept in Whitehead’s metaphysics and refers to the elements, includ-
ing energies, emotion, purpose, causation, and valuation, that combine (or what Whitehead
refers to as concrescence) to produce actual entities (Michael, 2004).
12. For Whitehead, actual entities or occasions become concrete through a process he calls con-
crescence, that is, the “production of novel togetherness” (1978: 21). This insistence on
the becoming or thickening of an actual entity from a multiplicity of possibilities (or
potentiality) has had an enormous influence on Deleuze’s philosophy of the virtual.
13. Lucy Suchman (2007: 283–286) addresses the methodological necessity of making arbi-
trary analytical cuts in the network. The boundaries drawn and cuts made are never just
naturally occurring but constructed for analytical and political purposes.
14. Derived from Austin’s (1975) account of the performative function of language as having the
power to enact that which it merely seems to state, the notion of performativity has
been widely used in critical theory (Butler, 2011), STS (Callon, 1998; Pickering, 1995),
and the
sociology of finance (MacKenzie, 2008) as a way to depict the world not as an already exist-
ing state of affairs but, rather, as a doing—“an incessant and repeated action of some sort”
(Butler, 1990: 112). The notion of performativity has also played a central role in critical
discussions of software, code and algorithms (see Galloway, 2006a; Hayles, 2005; Introna,
2016; Mackenzie, 2005; Mackenzie & Vurdubakis, 2011).
15. Some scholars would argue that the sociotechnical and sociomateriality are not the same but
denote different things. The concept of sociomateriality has been particularly important to
research in information systems and organizational studies. For a discussion of what
these terms mean and how they might be distinguished, see Leonardi, Nardi & Kallinikos
(2013).
16. As with the terms sociotechnical and sociomaterial, the concepts of network,
assemblage, and hybrids have their own intellectual histories that do not necessarily overlap
entirely. While they all denote a composite term that implies some form of co-constitution
between humans and the more than human, the term hybrid relates to actor-network theory
and the work of Bruno Latour in particular, whereas assemblage is more of a Deleuzian
concept (although applied and used by thinkers affiliated with actor-network theory such
as Michael Callon). For an understanding of hybrids, see Latour’s example of the citizen-gun
or gun-citizen that he describes on several occasions but, most explicitly, in Latour
(1994). Whereas hybrids refer to the relations between heterogeneous entities, the notion
of assemblage also points to the process of assembling, not merely to the existence of a
composite entity. In Deleuze and Guattari’s account, an assemblage is not just a thing but also
an ongoing organizing of multiplicities (see Deleuze & Guattari, 1987). Assemblage
should not just be understood as a gathering of subjectivities and technical objects but,
rather, in the sense of its original French meaning of agencement—a process of
assembling rather than a static arrangement (see Packer & Wiley, 2013; Callon, 2007). For
a good overview of the similarities and differences between actor-network theory and the
concept of assemblages, see Mü ller (2015).
17. See DeLanda (2006) for a discussion on these terms and how they differ. The main ontolog-
ical difference can be found between those who focus on “relations of interiority” and those
who maintain a focus on “relations of exteriority.” The former notion claims that nothing
exists outside of relations, whereas the latter contends that the parts of an assemblage can
have intrinsic qualities outside of its associations (Müller, 2015: 31). For Karen Barad,
an influential philosopher committed to a relational ontology (or agential realism as she calls
it), “relata do not preexist relations” (2003: 815). An agential realist account emphasizes entanglements not as intertwining of separate entities, but rather as relata-within-
phenomena that emerge through specific intra-actions (ibid.). As Barad puts it, “why do
we think that the existence of relations requires relata?” (2003: 812). If Barad’s ontological
commitments con- stitute one side of the spectrum, most theorists committed to a relational
ontology are either less explicit about their metaphysical claims or formulate a theory that
attempts to account for the relative autonomy of entities outside of their relations. In
contrast to Barad, Deleuze explicitly states that “relations are external to their terms” (Deleuze
and Parnet, 2007: 55), meaning that “a relation may change without the terms changing” (see DeLanda, 2006: 11).
18. The US Senate Commerce Committee sent a letter to Facebook CEO Mark Zuckerberg,
looking for answers on its trending topics section. In the letter, Senator John Thune, chairman
of the committee, accused Facebook of presenting the feature as the result of an
objective algorithm while, in reality, human involvement made it much more
“subjective.”
19. Here, I am borrowing from Bloomfield et al. (2010: 420), who revisit the notion of affor-
dances and urge researchers to consider the question of when an affordance is.
20. Lucy Suchman makes a similar argument in her book Human-Machine Reconfigurations in
which she turns away from questions of “whether humans and machines are the same or dif-
ferent to how and when the categories of human or machine become relevant, how relations
of sameness or difference between them are enacted on particular occasions, and with what
discursive and material consequences” (2007: 2).
21. As the continuing saga of Facebook’s trending topic shows, the figuration of the human-
machine relationship is only ever temporarily stabilized. Only two days after Facebook
announced that it would replace the human writers with robots, the algorithm started to
highlight fake news as part of the trending topic. Without human oversight, the robots were
not able to detect that a story featuring the Fox News anchor Megyn Kelly wasn’t true. This,
of course, caused much outcry again, proving to critics that humans were needed after all.
I am including this in a note because of its obvious neverending character. At the time of
writing, Facebook might have deployed more humans again to counter the robots gone awry,
while in two months’ time it might just as well be the other way around.
22. While many scholars who subscribe to a performative notion of ontology claim that reality
emerges not just in interactions but intra-actions, referencing Barad (2007), I am not com-
mitted to making what is essentially a very strong ontological claim. Rather than committing
to a view that claims nothing exists outside or external to a relation, which is what Barad’s
notion of intra-action implies, my view is more in line with Deleuze’s notion of assemblage,
which assumes a certain autonomy of the terms it relates. See Hein (2016) for a discussion
of how compatible Barad and Deleuze are.
23. Thanks to Michael Veale for pointing out these distinctions.
24. Technography has previously been described as an “ethnography of technology” (Kien,
2008), a research strategy aimed at uncovering the constructed nature of technoculture
(Vannini & Vannini, 2008), a way to explore artefacts in use ( Jansen & Vellema, 2011), or a
method “to tease out the congealed social relations embodied in technology” (Woolgar,
1998: 444). Moreover, the notion of technography has been used to describe the “biography
of things” (Kahn, 2004) or simply as a synonym for a “technical drawing,” for example,
within the field of architecture (Ridgway, 2016). My understanding comes closest to that of
Woolgar (1998) and of Vannini and Vannini (2008), who describe technography as a general
attitude and research strategy aimed at examining the structural aspects of complex techno-
cultural layers. However, the way in which I use the term differs from the notion of technog-
raphy in Woolgar and in Vannini and Vannini in that the descriptions of technology do
not necessarily involve a description of situated practices of production and implementation
in particular settings. The question is, rather, what can we know about the workings of
algorithms without necessarily asking humans or studying particular social settings as an ethnog-
ethnog- rapher would do?
Chapter 4
6. Throughout the company’s history, Facebook has been accused of violating the privacy of its
users. There is a rich existing body of literature concerning Facebook and privacy. See,
for example, boyd (2008); Debatin, et al. (2009); Hargittai (2010); Tufekci (2008).
7. In Gilles Deleuze’s (2006) reading of Foucault, the concept of diagram signifies the function or
operationality of power. Deleuze suggests that Foucault is a new cartographer, someone
intent on mapping out the relations between forces, to show how power produces new
realities.
8. The dimension of subjectivity is key in the writings of Foucault (especially in the later ones).
Foucault used the concept of governmentality to analyze everything from the production of
orderly and compliant “docile bodies” through pastoral guidance techniques (see Foucault,
1978) to the emergence of liberalism in which notions of freedom are produced so as to
replace external regulation by inner production (Bröckling et al., 2011: 5).
9. Many scholars have argued that Bentham’s Panopticon and Foucault’s adaptation of it in
terms of disciplinary power provides a limited model for understanding contemporary forms
of surveillance and power (e.g., Boyne, 2000; Green, 1999; Haggerty & Ericson, 2000). One
of the more pervasive arguments used against the notion of disciplinary power is what we
may call the Deleuzian turn in surveillance studies (Lyon, 2006). In a short piece called the
Postscript on the Societies of Control, Deleuze argues that discipline has been superseded by
something he calls “control.” Despite the brevity of Deleuze’s essay (only a few pages),
the fact that he explicitly connects control to code, automation and networked technology probably accounts for the main reason scholars find the notion of control to be more useful. However, as Mark G.E. Kelly (2015) points out, many of the arguments voiced against Foucault’s notion of discipline have failed to recognize that “discipline is control.” It is not the case, as Deleuze suggested, that “we are in the midst of a general breakdown of all confinements” (1992: 178). Prisons and schools still persist. Nor is it the case that discipline means confinement in the way that Foucault introduced the notion. As Kelly
(2015) suggests, networked technologies such as CCTV and GPS are not evidence that
discipline has been surpassed by control, but rather complete the idea of disciplinary power
as discipline is essentially about the infinite management of bodies. Others have in turn
argued that Foucault moved away from notions of discipline in his later work, focusing
instead on notions of biopower, security, and governmentality. Yet, in Security, Territory,
Population, Foucault makes sure not to posit a historical break between discipline and
mechanisms of security when he writes: “So, there is not a series of successive elements, the
appearance of the new causing the earlier ones to disappear. There is not the legal age, the
disciplinary age, and then the age of security. Mechanisms of security do not replace
disciplinary mechanisms, which would have replaced juridico-legal mechanisms” (2007: 22).
While I can certainly understand the appeal of moving away from a notion of discipline
toward a more fluid understanding of control as proposed by Deleuze or the notion of an
expansive power indicative of Foucault’s security apparatuses, I don’t think we have to
choose. It is not an either/or; not discipline or control; not discipline or security. I concur
with Foucault when he says that discipline and security are not opposites but part of the
same attempts at managing and organizing social spaces. For these reasons I still think
the notion of disciplinary power holds great merit for an understanding of the ordering
mechanisms of algorithms.
10. https://newsroom.fb.com/news/2013/08/announcing-news-feed-fyi-a-series-of-blogs-on-news-feed-ranking.
11. For more information on the news feed disclosure, see the video broadcasts of the technical
sessions of the 2010 Facebook developers conference, especially the session called “Focus on
the feed” in which Facebook engineers Ari Steinberg and Ruchi Sanghvi talk about the
details of EdgeRank: http://www.livestream.com/f8techniques/video?clipId=pla_5219ce25-53c6-402d-8eff-f3f8f7a5b510
12. While the term “Facebook algorithm” may be somewhat misleading in terms of giving
the impression that there is one single Facebook algorithm as opposed to many different
algorithms accomplishing different tasks and working in tandem, my reference to the
algorithm
in this singular way should be read as an act of convenience on my part. Facebook algorithm
is shorter and more commonly used in everyday discourse than EdgeRank or the longer and
more precise “news feed ranking algorithm.”
13. As exemplified in a Slate article about the Facebook news feed: “It doesn’t just predict
whether you’ll actually hit the like button on a post based on your past behavior. It
also predicts whether you’ll click, comment, share, or hide it, or even mark it as spam. It will
predict each of these outcomes, and others, with a certain degree of confidence, then
combine them all to produce a single relevancy score that’s specific to both you and that
post” (Oremus, 2016).
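Purely as a schematic illustration of the logic the article describes (this is not Facebook's code; the action probabilities and weights below are invented placeholders), the combination step might be sketched as follows:

    # Hypothetical predictions for one user and one post: the estimated
    # probability that the user will take each action on the post.
    predictions = {"like": 0.30, "comment": 0.05, "share": 0.02,
                   "click": 0.40, "hide": 0.01, "mark_spam": 0.001}

    # Invented weights expressing how much each predicted action counts;
    # negative signals such as hiding or reporting pull the score down.
    weights = {"like": 1.0, "comment": 4.0, "share": 6.0,
               "click": 0.5, "hide": -20.0, "mark_spam": -100.0}

    # A single relevancy score, specific to this user-post pair, used for ranking.
    relevancy_score = sum(predictions[a] * weights[a] for a in predictions)
    print(relevancy_score)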
14. Kairos refers to the ancient Greek notion of the opportune or the right time to do or say some-
thing. For more details on the concept of kairos, see, for example, Boer (2013), or Smith
(1969).
15. For a detailed description on how trends are defined and algorithmically processed in
Facebook, see Li et al. (2013).
16. Daniel Ek, who is the founder of the music streaming site Spotify, which entered into a close
partnership with Facebook in 2011, tellingly suggests: “We’re not in the music space—we’re
in the moment space” (Seabrook, 2014).
17. These are just some of the changes made to the news feed as documented by the Facebook
News Feed FYI blog (https://newsroom.fb.com/news/category/news-feed-fyi): May 2017,
reducing the amount of posts and ads that link to “low-quality web page experiences” by
using artificial intelligence to “identify those [web pages] that contain little substantive
content and have a large number of disruptive, shocking or malicious ads.” January 2017,
emphasizing “authentic” content by looking at “signals personal to you, such as how close you
are to the person or Page posting.” August 2016, reducing the ranking of updates that classify
as having clickbait-like headlines. June 2016, posts from friends and family to get top priority
followed by posts that “inform” and posts that “entertain.” March 2016, prioritizing live
videos. June 2015, tweaking the algorithm to take more account of user actions on videos
such as turning on sound or making the video full screen as indications of heightened
interest. June 2015, taking into account time spent on stories as indicated by users’ scrolling
behavior. April 2015, making posts from friends rank even higher and relaxing previous
rules about not showing multiple posts in a row from the same source. September 2014,
prioritizing more timely stories, for example as indicated by two people talking about the
same issues (i.e., a TV series or a football game). Timeliness, moreover, is indicated by when
people choose to like or comment on a post. If more people like a post right after it was
published but not a few hours later, the algorithm takes it as an indication that it was more
relevant at the time it was posted. What’s interesting to note about the blog posts
documenting changes is that the posts from 2017 more readily mention artificial
intelligence as part of the procedures and techniques for making the news feed more
relevant.
18. See chapter 3 for a discussion of the possibilities and challenges of applying a reverse engi-
neering ethos to algorithms.
19. Facebook alternately calls these types of stories aggregate or cluster stories. See Luu (2013)
for a description of cluster stories.
20. The most recent feed used to have a counter next to it indicating the number of new stories
that had been posted and published by a user’s Facebook connections since the last time they
had checked the feed. The counter only went as far as 300 new posts, so any number above
the limit would just be indicated as “+300.” Usually, my most recent feed would reach
this limit only after one or two days of not checking.
21. In September 2011, I did this kind of comparison between my “top news” feed before
and after checking the most recent feed a couple of times; and, every time, there was a
considerable change in the stories displayed between the first and second time I
checked.
Chapter 5
easy categorization. A computer, she found, means more to people than merely a
physical thing. It is also a metaphysical thing that influences how people think of
themselves and others (Turkle, 1984: 16).
10. Film, art and other cultural mediations also lend themselves to the analysis of unknown and
hidden aspects of technoculture, an approach that is often used in humanities-oriented
media studies that seek to address the experiential dimensions of technologies. See, for ex-
ample, Hansen (2004) on the phenomenology of new media or, more recently, Hansen
(2012) on the micro perception of code and ubiquitous computing. Recently, Lisa Parks and
others have also called for a critical study of media infrastructure that focuses on making the
materiality of infrastructure more visible by paying attention to images, art and film (Parks,
2007; Parks & Starosielski, 2015; Sandvig, 2013). For an understanding of
computers through art and literature, see, for example, Hayles (2010).
11. Depicting algorithms has always been a representational challenge whether it is flowcharts,
diagrams, visualizations, introductory computer science textbooks, or commercial
depictions (Sandvig, 2015). See also A. Galloway (2011) on the limits of representation
in data science.
12. For more on the role of popular imagery as representational practice, see Kirby (2011).
13. Both the names and the exact wording of the tweets have been slightly changed to protect the
privacy of the participants in the study reported on in this chapter.
14. Although the term “ordinary people” is both contested and highly ambiguous, the prefix “or-
dinary” is simply used to denote the aim of the study, which was to try and talk to people who
are neither computer specialists nor social media marketers but, on the face of it, have
no specialist knowledge of or any obvious vested interest in algorithms.
15. I conducted similar searches for the following platforms and services: Twitter,
Instagram, YouTube, Amazon, Google, OkCupid, Tinder, and Spotify. While users tweet
about algorithms in connection with all of these platforms, most of the searches have been
conducted on Facebook and algorithms, Netflix and algorithms, and Twitter and
algorithms because they were the platforms that seemed to solicit the most consistent
streams of user responses. The main period of data collection regarding the Twitter searches
took place during a nine-month period stretching from October 2014 through June 2015.
16. All 25 participants are pseudonymized, whereas their real age, country of residence, and oc-
cupation are disclosed. Australia: Steven (24, graphic designer). Canada: Jolene (22, fashion
blogger), Nora (20, student), Larry (23, works in television), Anthony (64, art professor),
Richard (41, manual laborer), Alex (age unknown, occupation unknown). Norway: Sarah
(33, biologist). Philippines: Louis (20s, former student, current occupation unknown).
United Kingdom: Jacob (38, on leave from a university degree). United States: Amber
(25, student), Kayla (23, student), Michael (21, Musician), Rachel (24, journalist), Jessa
(20s, journalist), Lucas (25, quality assurance engineer), Shannon (45, career
counselor), Lena (20s, graduate student), Chris (20, student), Albert (42, works in
advertising), Kate (36, former school teacher), Nancy (age unknown, public policy
associate), Caitlyn (30s, teacher), Robyn (age unknown, graphic designer). Unknown
location, age, and occupation: Tom, John.
17. The participants were recruited from the personal Facebook network of Nora, one of the
participants in the Twitter study (see note 16). It was Nora’s own interest in the subject
matter and offer to distribute a request for interview participants via her own personal
Facebook account that helped me recruit new participants. Together, we designed the
recruitment request for participants, but phrased it in the way Nora would normally speak.
Nora’s status update asked people to contact her privately if they “would be willing to talk
to a researcher about your experience with social media, and how it shapes the type of
information that you get.” Out of the 12 people expressing an interest, 10 ended up
engaging in email conversations and face-to-face interviews over Skype during May and
June 2016. Once again, the names provided are pseudonyms to protect the privacy of the
participants. Due to Nora’s
physical locality in a university town in Canada, many of the participants recruited come
from a similar geographical and socioeconomic background. They are all in their early twen-
ties, Canadian, engaged in college education, and relatively active on social media (however,
none are involved in computer science or similar subjects that would entail knowledge
of algorithms).
18. Paraphrased from Jessa’s tweet, October 25, 2014.
19. Multiple conversations over email, exchanged between October 27 and November 7, 2014.
20. Paraphrased from Kayla’s tweet, September 28, 2014.
21. Based on email interview, October 1, 2014, and online chat on October 2, 2014.
22. Paraphrased from Shannon’s tweet published November 17, 2014.
23. Email interview, November 19, 2014.
24. Based on multiple emails exchanged between June 12 and 15, 2015.
25. Interview over Skype, June 2, 2016.
26. Based on an email interview, December 4, 2014, and Lena’s tweet, published November 24,
2014.
27. Paraphrased from Albert’s tweet published December 29, 2014.
28. Email interview, December 30, 2014.
29. Interview over Skype, May 29, 2016.
30. Based on multiple emails exchanged on November 5, 2014.
31. Based on multiple emails exchanged with Amber on October 1–2, 2014.
32. Paraphrased from Nora’s tweet published October 2, 2014.
33. Based on multiple emails exchanged between October 6 and 12, 2014.
34. @RealityTC, February 6, 2016.
35. @RobLowe, February 6, 2016.
36. @Timcast, February 6, 2016.
37. @Polystatos, February 6, 2016.
38. Interview over Skype, June 2, 2016.
39. @etteluap74, March 28, 2016.
40. @CarynWaechter, March 28, 2016.
41. @Monica_xoxx, March 28, 2016.
42. @MrBergerud, March 28, 2016.
43. Interview over Skype, April 10, 2016.
44. Interview with Nora over Skype, April 10, 2016.
45. Interview over Skype, May 29, 2016.
46. Interview over Skype, May 27, 2016.
47. Interview with Melissa over Skype, May 27, 2016.
48. Email interview, February 12, 2015.
49. Email interview, October 12, 2014.
50. Interview over Skype, May 31, 2016.
51. Interview over Skype, May 29, 2016.
52. Email interview, January 19, 2015.
53. Within cognitive psychology, the notion of playful learning has been posited as a core feature
of how children learn and develop cognitive skills. Most notably, perhaps, Jean Piaget (2013)
formulated a series of developmental stages of play that correspond to the successive stages
of cognitive development. Through explorations of an object, the child can obtain the in-
formation necessary to navigate functionally in the world of unknowns.
54. In this sense, the algorithmic imaginary bears some resemblance to Cornelius Castoriadis’
(1987) notion of the social imaginary—not in terms of seeing it as a culture’s ethos but
in terms of its epistemological assumptions. Castoriadis thinks “knowing a society means
reconstituting the world of its social imaginary significations” (Peet, 2000: 1220).
Similarly, we might think of knowing algorithms through the meaning-making and
sense-making processes of various actors.
55. I am well aware that talking about the illusionary and the real in the context of the imaginary
may create associations with the psychoanalytical theories of Jacques Lacan. However, I am
using these terms in a much more pragmatic, everyday sense. For more on Lacan’s notion of
the imaginary, the symbolic and the real, see Lacan & Fink (2002).
56. Based on multiple emails exchanged on November 5, 2014.
Chapter 6
1. The term “fake news” should be understood as encompassing many related phenomena, which taken together refer to various forms of mis- and disinformation, ranging from deliberately misleading content, through parody, to news that is ideologically opposed (see Tambini, 2017).
2. Although the material turn has long been underway in the social sciences and humanities,
I am well aware that many other (often neighboring) fields and areas of studies have not yet
fully acknowledged the power of things or adopted a material sensibility. Within the field of
journalism studies, the assertion that “materiality matters” is still a relatively novel
claim. Although journalism scholars have talked about the role of technology and
computers in news-making for decades, “a renewed interest for materiality has
emerged in the last few years” (De Maeyer, 2016: 461). For more on the material turn in
journalism studies and the role of nonhumans in journalistic practice and products, see, for
example, Anderson (2013), De Maeyer (2016), Steensen (2016).
3. Scholars have used a variety of different but interrelated concepts to describe this movement
toward the algorithmic processing of digital and digitized data, including computational
journalism (Diakopoulos, 2015; Karlsen & Stavelin, 2014), data journalism (Fink &
Anderson, 2015), data-driven journalism (Parasie, 2015), robot journalism (Carlson, 2015),
journalism as programming (Parasie & Dagiral, 2013), and algorithmic journalism (Dörr,
2016). The precise label chosen to describe the intersection between journalism and com-
puting depends, in part, on the specific area of work that is being emphasized (Karlsen &
Stavelin, 2014). For example, the terms data journalism and data-driven journalism are most
often used to describe an emerging form of storytelling in which journalists combine
data analysis with new visualization techniques (Appelgren & Nygren, 2014). Broadly
conceived, computational journalism refers to “finding, telling, and disseminating news
stories with, by, or about algorithms” (Diakopoulos & Koliska, 2016: 2), and serves as an
umbrella term for all kinds of “algorithmic, social scientific, and mathematical forms of
newswork” (Anderson, 2013: 1005).
4. Precision journalism was conceived of as “the application of social and behavioral science
research methods to the practice of journalism” (Meyer, 2002: 2). Computers and software,
Meyer foresaw, would be essential for a new kind of journalism.
5. Although Jeffrey Alexander writes “information will be free,” I assume he means “informa-
tion wants to be free” as this is the iconic phrase attributed to Stewart Brand. Interestingly
the first part of the mantra is usually overlooked: “On the one hand information wants to be
expensive, because it’s so valuable. The right information in the right place just changes
your life. On the other hand, information wants to be free, because the cost of getting it
out is getting lower and lower all the time. So you have these two fighting against each
other” (Doctorow, 2010).
6. The Scandinavian news media context is characterized by what Hallin and Mancini (2004)
describe as the “democratic corporatist” model. According to this model, media are
marked by a unique “coexistence” that brings together “the polarized pluralist tendency
toward more partisan, opinion-oriented journalism, and the liberal tendency toward a
more commercialized, news-driven journalism—often assumed to be ‘naturally’ opposed”
(Benson et al., 2012: 22). For more on the Scandinavian context in general, see Syvertsen
et al. (2014).
Chapter 7
1. Wendy Chun (2016) argues that it is through habits that new media become embedded in our
lives. It is not the push to the future characteristic of Big Data that structures lives in the age
of smartphones and “new” media, but rather habitual forms of updating, streaming, sharing
etc.
2. See also Gillespie (2017) and Grimmelmann (2008).
3. A widely used example of platform intervention is how Google responded differently in two
cases of “Google bombing,” the practice of manipulating Google’s algorithms to get a
piece of content to the top of the search results for a given topic. While Google chose to
intervene and to manually “clean up” its search results when a group of people started a
racist campaign against Michelle Obama in 2009 (to make search results for her name
yield images of a monkey), the company did not intervene when in the wake of the 2011
terrorist attacks in Norway a campaign was launched urging people to upload pictures
of dog droppings, tagging them with the terrorist’s name.
4. Claims about heterogeneity and relationality do not automatically imply symmetry or equal-
ity. Despite the long-time insistence on symmetries within studies of science and technology,
realities are seen as enacted and made through practices. As Pickering suggests with regard
to humans and nonhumans, “semiotically, these things can be made equivalent; in practice
they are not” (Pickering, 1995: 15). How agency and power manifest in particular practices
is therefore an empirical question.
5. Here I am borrowing from Ganaele Langlois, who writes about meaning machines and how
they do “not tightly control what users say, but rather how they come to speak” (2013:
103).
6. In Karen Barad (2007) and Donna Haraway’s (2004) terms, we might also say that the
“response-ability” is shared. Accordingly, responsibility and accountability are not so much
about taking charge, as it is about the ability to respond to others.
Bibliography
Adams, A., & Sasse, M. A. (1999). Users are not the enemy. Communications of the ACM, 42(12),
40–46.
Adams, P., & Pai, C. C. F. (2013). U.S. Patent No. 20130179271 A1 (“Grouping and Ordering
Advertising Units Based on User Activity”). Retrieved from http://www.google.tl/patents/
US20130179271
Agger, B. (2012). Oversharing: Presentations of self in the internet age. New York, NY: Routledge.
Ahmed, S. (2000). Strange encounters: Embodied others in post-coloniality. London, England:
Routledge.
Ahmed, S. (2006). Queer phenomenology: Orientations, objects, others. Durham, NC: Duke University
Press.
Ahmed, S. (2010). Orientations matter. In J. Bennett, P. Cheah, M. A. Orlie & E. Grosz (Eds.) New materialisms: Ontology, agency, and politics (pp. 234–257). Durham, NC: Duke University Press.
Ainsworth, S., & Hardy, C. (2012). Subjects of inquiry: Statistics, stories, and the production of
knowledge. Organization Studies, 33(12), 1693–1714.
Alexander, J. C. (2015). The crisis of journalism reconsidered: Cultural power. Fudan Journal of the
Humanities and Social Sciences, 8(1), 9–31.
Alexander, J. C., Breese, E. B., & Luengo, M. (2016). The crisis of journalism reconsidered. Cambridge,
England: Cambridge University Press.
Allan, G. (1989). Friendship: Developing a sociological perspective. New York, NY: Harvester
Wheatsheaf.
Altheide, D. L., & Snow, R. P. (1979). Media logic. Beverly Hills, CA: Sage.
Amatriain, X. (2013). Big & personal: Data and models behind Netflix recommendations. Paper
presented at the Proceedings of the 2nd International Workshop on Big Data, Streams and
Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications,
August 11, Chicago, IL.
Amoore, L. (2009). Algorithmic war: Everyday geographies of the War on Terror. Antipode, 41(1),
49–69.
Amoore, L. (2013). The politics of possibility: Risk and security beyond probability. Durham, NC: Duke
University Press.
Ananny, M. (2016). Toward an ethics of algorithms convening, observation, probability,
and timeliness. Science, Technology & Human Values, 41(1), 93–117.
Anderson, B. (1983). Imagined communities: Reflections on the origin and spread of nationalism.
London, England: Verso Books.
Anderson, B. (2006). Becoming and being hopeful: towards a theory of affect. Environment and
Planning D: society and space, 24(5), 733–752.
Anderson, B., Kearnes, M., McFarlane, C., & Swanton, D. (2012). On assemblages and geography.
Dialogues in human geography, 2(2), 171–189.
Anderson, C. (2013). Towards a sociology of computational and algorithmic journalism. New Media
& Society, 15(7), 1005–1021.
Anderson, C. W. (2011). Between creative and quantified audiences: Web metrics and changing
patterns of newswork in local US newsrooms. Journalism, 12(5), 550–566.
Andrejevic, M. (2013). Infoglut: How too much information is changing the way we think and know.
New York, NY: Routledge.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. Pro Publica. Retrieved
from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Appelgren, E., & Nygren, G. (2014). Data journalism in Sweden: Introducing new methods and
genres of journalism into “old” organizations. Digital Journalism, 2(3), 394–405.
Aristotle. (2002). The Nicomachean ethics (S. Broadie & C. Rowe, Trans.). Oxford, England:
Oxford University Press.
Arribas-Ayllon, M., & Walkerdine, V. (2008). Foucauldian discourse analysis. In C. Willig &
W. Stainton-Rogers (Eds.) The Sage handbook of qualitative research in psychology (pp. 91–
108). London, England: Sage.
Arthur, C. (2012). Facebook’s nudity and violence guidelines are laid bare. The Guardian. Retrieved
from http://www.theguardian.com/technology/2012/feb/21/facebook-nudity-violence-
censorship-guidelines.
Asdal, K., & Moser, I. (2012). Experiments in context and contexting. Science, Technology, & Human
Values, 37(4), 291–306.
Ashby, W. R. (1999). An introduction to cybernetics. London, England: Chapman & Hall Ltd.
Austin, J. L. (1975). How to do things with words. Oxford, England: Oxford University Press.
Backstrom, L. S. (2013). News Feed FYI: A window into News Feed. Facebook Business.
Retrieved from https://www.facebook.com/business/news/News-Feed-FYI-A-Window-Into-
News-Feed
Badiou, A. (2005). Being and Event. London, England: Continuum.
Baki, B. (2015). Badiou’s being and event and the mathematics of set theory. London, England:
Bloomsbury Publishing.
Balkin, J. M. (2016). Information fiduciaries and the first amendment. UC Davis Law Review, 49(4), 1183–1234.
Barad, K. (1998). Getting real: Technoscientific practices and the materialization of reality.
Differences: a journal of feminist cultural studies, 10(2), 87–91.
Barad, K. (2003). Posthumanist performativity: Toward an understanding of how matter comes to
matter. Signs, 28(3), 801–831.
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter
and meaning. Durham, NC: Duke University Press.
Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. Available at SSRN 2477899.
Barr, A. (2015). Google mistakenly tags black people as gorillas, showing limits of algorithms.
Retrieved from http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tags-black-people-
as-gorillas-showing-limits-of-algorithms/
Bartunek, J. M., & Moch, M. K. (1987). First-order, second-order, and third-order change
and organization development interventions: A cognitive approach. The Journal of
Applied Behavioral Science, 23(4), 483–500.
Bataille, G. (2004). The unfinished system of nonknowledge. Minneapolis: University of Minnesota
Press.
Bataille, G., & Michelson, A. (1986). Un-knowing: laughter and tears. October, 36, 89–102.
Bazerman, C. (2002). The languages of Edison’s light. Cambridge, Mass.: MIT Press.
Beckett, C. (2017). “Fake news”: The best thing that’s happened to journalism. Retrieved from
blogs.lse.ac.uk/polis/2017/03/11/fake-news-the-best-thing-thats-happened-to-journalism/
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological
unconscious. New media & society, 11(6), 985–1002.
Beer, D. (2013). Popular culture and new media: The politics of circulation. Basingstoke, England:
Palgrave Macmillan.
Bellman, B. L. (1984). The language of secrecy. Symbols and metaphors in Poro ritual. Newark, NJ:
Rutgers University Press.
Benjamin, S. M. (2013). Algorithms and speech. University of Pennsylvania Law Review, 161, 1445.
Bennett, J. (2004). The force of things steps toward an ecology of matter. Political Theory, 32(3),
347–372.
Bennett, J., Cheah, P., Orlie, M. A., Grosz, E., Coole, D., & Frost, S. (2010). New
materialisms: Ontology, agency, and politics. Durham, NC: Duke University Press.
Benson, R., Blach-Ørsten, M., Powers, M., Willig, I., & Zambrano, S. V. (2012). Media
systems online and off: Comparing the form of news in the United States, Denmark,
and France. Journal of communication, 62(1), 21–38.
Berker, T., Hartmann, M., & Punie, Y. (2005). Domestication of media and technology. Maidenhead,
England: McGraw-Hill Education.
Berlant, L. G. (2011). Cruel optimism. Durham, NC: Duke University Press.
Bernauer, J. W., & Rasmussen, D. M. (1988). The final Foucault. Cambridge, MA: MIT Press.
Berry, D. M. (2011). The philosophy of software. London, England: Palgrave Macmillan.
Bijker, W., & Law, J. (1994). Shaping technology/building society: Studies in sociotechnical change.
Cambridge, MA: MIT Press.
Bijker, W. E., Hughes, T. P., Pinch, T., & Douglas, D. G. (2012). The social construction of technological
systems: New directions in the sociology and history of technology. Cambridge, MA: MIT Press.
Birchall, C. (2011). Introduction to “secrecy and transparency”: The politics of opacity and
openness. Theory, Culture & Society, 28(7–8), 7–25.
Bissell, D. (2016). Micropolitics of mobility: Public transport commuting and everyday encounters
with forces of enablement and constraint. Annals of the American Association of Geographers,
106(2), 394–403.
Bloomberg (n.d.). “Room Full of Ninjas: Inside Facebook’s Bootcamp.” Retrieved from https://www.bloomberg.com/video/inside-facebook-s-engineering-bootcamp-i8gg~WZFR7ywI4nlSgJJKw.html/
Bloomfield, B. P., Latham, Y., & Vurdubakis, T. (2010). Bodies, technologies and action possibilities:
When is an affordance? Sociology, 44(3), 415–433. doi:10.1177/0038038510362469
Boczkowski, P. J. (2004). The processes of adopting multimedia and interactivity in three online
newsrooms. Journal of Communication, 54(2), 197–213.
Boer, R. (2013). Revolution in the event: the problem of Kairos. Theory, Culture & Society, 30(2),
116–134.
Bogost, I. (2015). The cathedral of computation. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/
Boland, B. (2014). Organic reach on Facebook: Your questions answered. Facebook for business.
Retrieved from https://www.facebook.com/business/news/Organic-Reach-on-Facebook
Bosworth, A., & Cox, C. (2013). U.S. Patent No. 8405094 B2 (“Providing a newsfeed based on user
affinity for entities and monitored actions in a social network environment”). Retrieved from
https://www.google.com/patents/US8402094
Bowker, G. C., & Star, S. L. (2000). Sorting things out: Classification and its consequences. Cambridge,
MA: MIT Press.
Bowker, G. C., Baker, K., Millerand, F., & Ribes, D. (2010). Toward information
infrastructure studies: Ways of knowing in a networked environment. In J. Hunsinger, L.
Klastrup, M. Allen (Eds.) International handbook of internet research (pp. 97–117). New
York, NY: Springer.
boyd, d. (2008). Facebook’s privacy trainwreck. Convergence: The International Journal of Research
into New Media Technologies, 14(1), 13–20.
Boyne, R. (2000). Post-panopticism. Economy and Society, 29(2), 285–307.
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information
Technology, 15(3), 209–227.
Braidotti, R. (2006). Posthuman, all too human towards a new process ontology. Theory, Culture &
Society, 23(7–8), 197–208.
Brandom, R. (2016). Leaked documents show how Facebook editors were told to run Trending Topics. The Verge. Retrieved from http://www.theverge.com/2016/5/12/11665298/facebook-trending-news-topics-human-editors-bias
Braun, B., & Whatmore, S. J. (2010). The stuff of politics: An introduction. In B. Braun, S. Whatmore, & I. Stengers (Eds.), Political matter: Technoscience, democracy, and public life (pp. ix–xl). Minneapolis: University of Minnesota Press.
Braverman, I. (2014). Governing the wild: Databases, algorithms, and population models
as biopolitics. Surveillance & Society, 12(1), 15.
Bröckling, U., Krasmann, S., & Lemke, T. (2011). From Foucault's lectures at the Collège de France to studies of governmentality. In U. Bröckling, S. Krasmann, & T. Lemke (Eds.), Governmentality: Current issues and future challenges (pp. 1–33). New York, NY: Routledge.
Bucher, T. (2013). The friendship assemblage: Investigating programmed sociality on Facebook. Television & New Media, 14(6), 479–493.
Bucher, T. (2016). Database. In B. K. Jensen & R. T. Craig (Eds.), The international encyclopaedia of
communication theory and philosophy (pp. 489–496). Chichester, England: Wiley-Blackwell.
Buchheit, P. (2009). Applied philosophy, a.k.a. "hacking." Paul Buchheit. Retrieved from http://paulbuchheit.blogspot.com/2009_10_01_archive.html
Burrell, J. (2016). How the machine “thinks”: Understanding opacity in machine learning
algorithms. Big Data & Society, 3(1), 2053951715622512.
Butler, J. (1990). Gender trouble: Feminism and the subversion of identity. London: Routledge.
Butler, J. (2011). Bodies that matter: On the discursive limits of sex. London: Taylor & Francis.
Callon, M. (1998). The laws of the markets. Oxford, England: Blackwell.
Callon, M. (2007). What does it mean to say that economics is performative? In D. Mackenzie,
F. Muniesa & L. Siu (Eds.) Do economists make markets? (pp. 311–357). Princeton, NJ:
Princeton University Press.
Callon, M., & Latour, B. (1981). Unscrewing the big Leviathan: How actors macro-structure reality
and how sociologists help them to do so. In K. Knorr-Cetina & A. V. Cicourel (Eds.) Advances
in social theory and methodology: Toward an integration of micro-and macro-sociologies (pp. 277–
303). London, England: Routledge.
Callon, M., & Law, J. (2005). On qualculation, agency, and otherness. Environment and Planning D:
Society and Space, 23(5), 717–733.
Callon, M., & Muniesa, F. (2005). Peripheral vision: Economic markets as calculative collective devices. Organization Studies, 26(8), 1229–1250.
Candela. (2016). Session with Joaquin Quiñonero Candela. Quora. Retrieved from https://www
.quora.com/session/Joaquin-Quiñonero-Candela/1
Carlson, M. (2015). The robotic reporter: Automated journalism and the redefinition of labor,
compositional forms, and journalistic authority. Digital Journalism, 3(3), 416–431.
Castoriadis, C. (1987). The imaginary institution of society. Cambridge, MA: MIT Press.
Cathcart, W. (2016). Creating value for news publishers and readers on Facebook. Facebook for
developers. Retrieved from: https://developers.facebook.com/videos/f8–2016/creating-value-
for-news-publishers-and-readers-on-facebook/
Chadwick, A. (2013). The hybrid media system: Politics and power. New York, NY: Oxford University
Press.
Chaykowski, K. (2016, September 14). Facebook news feed head: Trending topics is “better” without
human editors. Forbes. Retrieved from http://www.forbes.com/sites/kathleenchaykowski/
2016/09/14/facebooks-head-of-news-feed-trending-topics-is-better-without-human-
editors-despite-fake-news/#73e85459243f
Chen, D. Y., Grewal, E. B., Mao, Z., Moreno, D., Sidhu, K. S., & Thibodeau, A. (2014). U.S. Patent
No. 8856248 B2 (“Methods and systems for optimizing engagement with a social network”).
Washington, DC: U.S. Patent and Trademark Office.
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of
control. Theory, Culture & Society, 28(6), 164–181.
Cheney-Lippold, J. (2016). Jus algoritmi: How the National Security Agency remade citizenship.
International Journal of Communication, 10, 22.
Christian, B. (2012). The A/B test: Inside the technology that’s changing the rules of business.
Wired. Retrieved from http://www.wired.com/2012/04/ff_abtesting/
Chun, W. H. K. (2011). Programmed visions: Software and memory. Cambridge, MA: MIT Press.
Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. Cambridge, MA: MIT Press.
Citron, D. K., & Pasquale, F. A. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1–33.
Clerwall, C. (2014). Enter the robot journalist: Users’ perceptions of automated content. Journalism
Practice, 8(5).
Clough, P. T., & Halley, J. (Eds.). (2007). The affective turn: Theorizing the social. Durham, NC: Duke
University Press.
Codd, E. F. (1970). A relational model of data for large shared data banks. Communications of
the ACM, 13(6), 377–387.
Coddington, M. (2015). Clarifying journalism’s quantitative turn: A typology for evaluating data
journalism, computational journalism, and computer-assisted reporting. Digital Journalism,
3(3), 331–348.
Cohen, J. E. (2016). The regulatory state in the information age. Theoretical Inquiries in Law, 17(2).
Cohen, S., Hamilton, J. T., & Turner, F. (2011). Computational journalism. Communications of the
ACM, 54(10), 66–71.
Coleman, G. (2014). Hacker, hoaxer, whistleblower, spy: The many faces of Anonymous. London,
England: Verso.
Condliffe, J. (2015). You're using neural networks every day online—Here's how they work. Gizmodo. Retrieved from http://gizmodo.com/youre-using-neural-networks-every-day-online-heres-h-1711616296
Cooren, F., Fairhurst, G., & Huët, R. (2012). Why matter always matters in (organizational)
communication. In P. M. Leonardi, B. A. Nardi, & J. Kallinikos (Eds.), Materiality
and organizing: Social interaction in a technological world (pp. 296–314). Oxford, England:
Oxford University Press.
Corasaniti, N., & Isaac, M. (2016). Senator demands answers from Facebook on claims of "trending" list bias. New York Times. Retrieved from http://www.nytimes.com/2016/05/11/technology/facebook-thune-conservative.html?_r=0
Corbin, J., & Strauss, A. (2008). Basics of qualitative research. London: Sage.
Cormen, T. H. (2013). Algorithms unlocked. Cambridge, MA: MIT Press.
Cotter, C. (2010). News talk: Investigating the language of journalism. Cambridge, England: Cambridge University Press.
Couldry, N. (2012). Media, society, world: Social theory and digital media practice. Cambridge,
England: Polity.
Couldry, N., Fotopoulou, A., & Dickens, L. (2016). Real social analytics: A contribution towards a phenomenology of a digital world. The British Journal of Sociology, 67(1), 118–137.
Ctrl-Shift (2015). The data driven economy: Toward sustainable growth. Report commissioned
by Facebook.
Cutterham, T. (2013). Just friends. Retrieved from http://thenewinquiry.com/essays/just-friends/
Davidson, J., Liebald, B., Liu, J., Nandy, P., & Van Vleet, T. (2010). The YouTube video recommendation
system. Paper presented at the Proceedings of the fourth ACM conference on Recommender
systems. Barcelona, Spain.
De Maeyer, J. (2016). Adopting a "material sensibility" in journalism studies. In T. Witschge, C. W. Anderson, D. Domingo, & A. Hermida (Eds.), The SAGE handbook of digital journalism (pp. 460–476). London, England: Sage.
De Vries, K. (2010). Identity, profiling algorithms and a world of ambient intelligence. Ethics and
Information Technology, 12(1), 71–85.
Debatin, B., Lovejoy, J. P., Horn, A. K., & Hughes, B. N. (2009). Facebook and online privacy:
Attitudes, behaviors, and unintended consequences. Journal of Computer-Mediated
Communication, 15(1), 83–108.
DeLanda, M. (2006). A new philosophy of society: Assemblage theory and social complexity. London, England: Bloomsbury.
Deleuze, G. (1988). Spinoza: Practical philosophy. San Francisco, CA: City Lights Books.
Deleuze, G. (1990). Expressionism in philosophy: Spinoza. New York, NY: Zone Books.
Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7.
Deleuze, G. (2006). Foucault. London, England: Routledge.
Deleuze, G., & Guattari, F. (1987). A thousand plateaus. Minneapolis: University of Minnesota
Press.
Deleuze, G., & Parnet, C. (2007). Dialogues II. New York, NY: Columbia University Press.
Derrida, J. (2005). Politics of friendship (Vol. 5). New York, NY: Verso.
Derrida, J., & Ferraris, M. (2001). A taste for the secret (G. Donis, Trans.). Cambridge, England: Polity.
Desrosières, A., & Naish, C. (2002). The politics of large numbers: A history of statistical reasoning.
Cambridge, MA: Harvard University Press.
Deuze, M. (2012). Media life. Cambridge, England: Polity.
Dewandre, N. (2015). The human condition and the black box society. Boundary 2. Retrieved from
https://www.boundary2.org/2015/12/dewandre-on-pascal/#authorbio
Diakopoulos, N. (2014). Algorithmic accountability reporting: On the investigation of black boxes.
Tow Center for Digital Journalism, Columbia University, New York, NY.
Diakopoulos, N. (2015). Algorithmic Accountability: Journalistic investigation of
computational power structures. Digital Journalism, 3(3), 398–415.
Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital
Journalism, 5(7), 809–828.
Doctorow, C. (2010). Saying information wants to be free does more harm than good. Retrieved
from https://www.theguardian.com/technology/2010/may/18/information-wants-to-be-free
Domingo, D. (2008). Interactivity in the daily routines of online newsrooms: Dealing with an
uncomfortable myth. Journal of Computer-Mediated Communication, 13(3), 680–704.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine will
remake our world. New York, NY: Basic Books.
Dörr, K. N. (2016). Mapping the field of algorithmic journalism. Digital Journalism, 4(6), 700–722.
Dourish, P. (2014). NoSQL: The shifting materialities of database technology. Computational
Culture (4).
Driscoll, K. (2012). From punched cards to “big data”: A social history of database populism.
communication +1, 1(1), 4.
Duggan, M., Ellison, N., Lampe, C., Lenhart, A., & Madden, M. (2015). Social media update 2014.
Retrieved from Pew Research Center: http://www.pewinternet.org/files/2015/01/PI_
SocialMediaUpdate20144.pdf
Eilam, E. (2005). Reversing: Secrets of reverse engineering. Indianapolis, IN: John Wiley & Sons.
Elmer, G. (2004). Profiling machines: Mapping the personal information economy. Cambridge, MA:
MIT Press.
Ensmenger, N. (2012). The digital construction of technology: Rethinking the history of computers
in society. Technology and Culture, 53(4), 753–776.
Eslami, M., Rickman, A., Vaccaro, K., Aleyasen, A., Vuong, A., Karahalios, K., . . . Sandvig, C. (2015).
“I always assumed that I wasn’t really that close to [her]”: Reasoning about invisible algorithms in
the news feed. Paper presented at the Proceedings of the 33rd Annual SIGCHI Conference
on Human Factors in Computing Systems, New York, NY.
Espeland, W. N., & Sauder, M. (2007). Rankings and reactivity: How public measures recreate
social worlds. American Journal of Sociology, 113(1), 1–40.
Espeland, W. N., & Stevens, M. L. (2008). A sociology of quantification. European Journal of
Sociology, 49(03), 401–436.
Foucault, M., & Rabinow, P. (1997). Ethics: Subjectivity and truth. The essential works of Michel Foucault, 1954–1984 (Vol. 1). New York, NY: The New Press.
Fraser, M. (2006). Event. Theory, Culture & Society, 23(2–3), 129–132.
Fuchs, C. (2012). The political economy of privacy on Facebook. Television & New Media, 13(2), 139–
159.
Fuller, M. (2008). Software studies: A lexicon. Cambridge, MA: MIT Press.
Galison, P. (1994). The ontology of the enemy: Norbert Wiener and the cybernetic vision. Critical
Inquiry, 21(1), 228–266.
Galison, P. (2004). Removing knowledge. Critical Inquiry, 31(1), 229–243.
Galloway, A. R. (2004). Protocol: How control exists after decentralization. Cambridge, MA: MIT Press.
Galloway, A. R. (2006a). Language wants to be overlooked: On software and ideology. Journal of
Visual Culture, 5(3), 315–331.
Galloway, A. R. (2006b). Gaming: Essays on algorithmic culture. Minneapolis: University of
Minnesota Press.
Galloway, A. (2011). Are some things unrepresentable? Theory, Culture & Society, 28(7–8), 85–102.
Gans, H. J. (1979). Deciding what's news: A study of CBS Evening News, NBC Nightly News, Newsweek, and Time. Evanston, IL: Northwestern University Press.
Garcia, M. (2017). In quest for homepage engagement, newsrooms turn to dreaded “A” word.
Retrieved from https://www.cjr.org/analysis/news-algorithm-homepage.php
Ge, H. (2013). News Feed FYI: More Relevant Ads in News Feed. Retrieved from https://
newsroom.fb.com/news/2013/09/news-feed-fyi-more-relevant-ads-in-news-feed/
Gehl, R. W. (2014). Reverse engineering social media. Philadelphia, PA: Temple University Press.
Gerlitz, C., & Helmond, A. (2013). The like economy: Social buttons and the data-intensive web.
New media & society, 15(8), 1348–1365.
Gerlitz, C., & Lury, C. (2014). Social media and self-evaluating assemblages: on numbers, orderings
and values. Distinktion: Scandinavian Journal of Social Theory, 15(2), 174–188.
Gibson, E. J. (1988). Exploratory behavior in the development of perceiving, acting, and the
acquiring of knowledge. Annual Review of Psychology, 39(1), 1–42.
Gillespie, T. (2011). Can an algorithm be wrong? Twitter Trends, the specter of censorship, and our
faith in the algorithms around us. Culture Digitally. Retrieved from http://culturedigitally.
org/2011/10/can-an-algorithm-be-wrong/
Gillespie, T. (2014). The Relevance of Algorithms. In T. Gillespie, P. Boczkowski, & K. Foot
(Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194).
Cambridge, MA: MIT Press.
Gillespie, T. (2016a). Algorithm. In B. Peters (Ed.), Digital keywords: A vocabulary of information
society and culture (pp. 18–30). Princeton, NJ: Princeton University Press.
Gillespie, T. (2016b). #trendingistrending: when algorithms become culture. In R. Seyfert &
J. Roberge (Eds.) Algorithmic cultures: Essays on meaning, performance and new technologies
(pp. 52–75). New York, NY: Routledge.
Gillespie, T. (2017). Algorithmically recognizable: Santorum’s Google problem, and
Google’s Santorum problem. Information, Communication & Society, 20(1), 63–80.
Gillespie, T., Boczkowski, P., & Foot, K. (2014). Media technologies: Essays on communication, materiality, and society. Cambridge, MA: MIT Press.
Goel, V. (2014). Facebook tinkers with users' emotions in news feed experiment, stirring outcry. New York Times. Retrieved from http://www.nytimes.com/2014/06/30/technology/facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.html
Goffey, A. (2008). Algorithm. In M. Fuller (Ed.), Software studies: A lexicon. Cambridge, MA: MIT Press.
Golumbia, D. (2009). The cultural logic of computation. Cambridge, MA: Harvard University Press.
Gomez-Uribe, C. A., & Hunt, N. (2015). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 13.
Goode, L. (2009). Social news, citizen journalism and democracy. New Media & Society, 11(8),
1287–1305.
Hansen, M. B. (2004). New philosophy for new media. Cambridge, MA: MIT Press.
Hansen, M. B. (2012). Bodies in code: Interfaces with digital media. New York, NY: Routledge.
Haraway, D. (1991). Simians, cyborgs, and women: The reinvention of nature. New York, NY: Routledge.
Haraway, D. J. (2004). The Haraway reader. East Sussex, England: Psychology Press.
Hardin, R. (2003). If it rained knowledge. Philosophy of the Social Sciences, 33(1), 3–24.
Harding, S. (1996). Feminism, science, and the anti-enlightenment critiques. In A. Garry &
M. Pearsall (Eds.) Women, knowledge, and reality: Explorations in feminist philosophy (pp. 298–
320). New York, NY: Routledge.
Hardy, Q. (2014). The Monuments of Tech. New York Times. Retrieved from http://www.nytimes
.com/2014/03/02/technology/the-monuments-of-tech.html
Hargittai, E. (2010). Facebook privacy settings: Who cares? First Monday, 15(8).
Hayles, N. K. (2005). My mother was a computer: Digital subjects and literary texts. Chicago, IL:
University of Chicago Press.
Hays, R. B. (1988). Friendship. In S. Duck, D. F. Hay, S. E. Hobfoll, W. Ickes, & B. M. Montgomery
(Eds.), Handbook of personal relationships: Theory, research and interventions (pp. 391–408).
Oxford, England: John Wiley.
Heath, A. (2015). Spotify is getting unbelievably good at picking music—here’s an inside look at
how. Tech Insider. Retrieved from http://www.techinsider.io/inside-spotify-and-the-future-
of-music-streaming
Hecht-Nielsen, R. (1988). Neurocomputing: Picking the human brain. IEEE Spectrum, 25(3), 36–41.
Hein, S. F. (2016). The new materialism in qualitative inquiry: How compatible are the philosophies of Barad and Deleuze? Cultural Studies ↔ Critical Methodologies, 16(2), 132–140.
Helberger, N., Kleinen-von Königslöw, K., & van der Noll, R. (2015). Regulating the new information
intermediaries as gatekeepers of information diversity. Info, 17(6), 50–71.
Helm, B. W. (2010). Love, friendship, and the self: Intimacy, identification, and the social nature of
persons. Cambridge, England: Cambridge University Press.
Hermida, A., Fletcher, F., Korell, D., & Logan, D. (2012). Share, like, recommend: Decoding the
social media news consumer. Journalism Studies, 13(5–6), 815–824.
Hill, D. W. (2012). Jean-François Lyotard and the inhumanity of internet surveillance. In C. Fuchs, K. Boersma, A. Albrechtslund, & M. Sandoval (Eds.), Internet and surveillance: The challenges of Web 2.0 and social media (pp. 106–123). New York, NY: Routledge.
Hirsch, E., & Silverstone, R. (2003). Consuming technologies: Media and information in domestic
spaces. London, England: Routledge.
Hirschfeld, L. A. (2001). On a folk theory of society: Children, evolution, and mental representations
of social groups. Personality and Social Psychology Review, 5(2), 107–117.
Hoel, A. S., & Van der Tuin, I. (2013). The ontological force of technicity: Reading Cassirer and
Simondon diffractively. Philosophy & Technology, 26(2), 187–202.
Hromkovič, J. (2015). Alan Turing and the foundation of computer science. In G. Sommaruga & T. Strahm (Eds.), Turing's revolution (pp. 273–281). Cham, Switzerland: Springer.
Hull, M. (2011). Facebook changes mean that you are not seeing everything that you should be seeing. Retrieved from http://www.facebook.com/notes/mark-hull/please-read-facebook-changes-mean-that-you-are-not-seeing-everything-that-you-sh/10150089908123789.
Hutchby, I. (2001). Technologies, texts and affordances. Sociology, 35(2), 441–456.
Ingold, T. (1993). The temporality of the landscape. World Archaeology, 25(2), 152–174.
Ingold, T. (2000). The perception of the environment: Essays on livelihood, dwelling and skill. East Sussex, England: Psychology Press.
Ingold, T. (2011). Being alive: Essays on movement, knowledge and description. London, England:
Routledge.
Introna, L., & Wood, D. (2004). Picturing algorithmic surveillance: The politics of facial recognition
systems. Surveillance & Society, 2(2/3).
Introna, L. D. (2011). The enframing of code: Agency, originality and the plagiarist. Theory, Culture & Society, 28(6), 113–141.
Kitchin, R., & Dodge, M. (2005). Code and the transduction of space. Annals of the Association of
American Geographers, 95(1), 162–180.
Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life. Cambridge, MA: MIT
Press.
Kittler, F. A. (1999). Gramophone, film, typewriter. Stanford, CA: Stanford University Press.
Klein, E. (2016). Facebook is going to get more politically biased, not less. Vox. Retrieved from
http://www.vox.com/2016/5/13/11661156/facebook-political-bias
Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical
research. ACM Computing Surveys (CSUR), 12(1), 61–110.
Knight, W. (2015, September 22). The hit charade. MIT Technology Review. Retrieved from https://
www.technologyreview.com/s/541471/the-hit-charade/
Knobel, C., & Bowker, G. C. (2011). Values in design. Communications of the ACM, 54(7), 26–28.
Knorr-Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge, MA: Harvard University Press.
Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97–111.
Knuth, D. E. (1998). The art of computer programming: Sorting and searching (Vol. 3). London,
England: Pearson Education.
Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2016).
Accountable algorithms. University of Pennsylvania Law Review, 165, 633–706.
Kushner, S. (2013). The freelance translation machine: Algorithmic culture and the
invisible industry. New Media & Society, 15(8), 1241–1258.
Lacan, J., & Fink, B. (2002). Ecrits: A selection. New York, NY: WW Norton.
LaFrance, A. (2015). Not even the people who write algorithms really know how they work. The
Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2015/09/not-
even-the-people-who-write-algorithms-really-know-how-they-work/406099/
Lamb, R., & Kling, R. (2003). Reconceptualizing users as social actors in information systems
research. MIS Quarterly, 27(2), 197–236.
Langlois, G. (2014). Meaning in the age of social media. New York, NY: Palgrave Macmillan.
Langlois, G., & Elmer, G. (2013). The research politics of social media platforms. Culture Machine, 14, 1–17.
Lash, S. (2007). Power after hegemony: Cultural studies in mutation? Theory, Culture & Society, 24(3), 55–78.
Latour, B. (1994). On technical mediation. Common Knowledge, 3(2), 29–64.
Latour, B. (1999). Pandora’s hope: Essays on the reality of science studies. Cambridge, MA: Harvard
University Press.
Latour, B. (2004). Why has critique run out of steam? From matters of fact to matters of concern.
Critical Inquiry, 30(2), 225–248.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford, England:
Oxford University Press.
Law, J. (1999). After ANT: Complexity, naming and topology. The Sociological Review, 47(S1), 1–14.
Law, J. (2002). Aircraft stories: Decentering the object in technoscience. Durham, NC: Duke University Press.
Law, J. (2004a). After method: Mess in social science research. London, England: Routledge.
Law, J. (2004b). Matter-ing: Or how might STS contribute? Centre for Science Studies, Lancaster University. Draft available at http://www.heterogeneities.net/publications/Law2009TheGreer-BushTest.pdf, accessed on December 5th, 2010, 1–11.
Law, J., & Singleton, V. (2014). ANT, multiplicity and policy. Critical policy studies, 8(4), 379–396.
Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google flu: Traps in big
data analysis. Science, 343(6176), 1203–1205.
Lemke, T. (2001). “The birth of bio-politics”: Michel Foucault’s lecture at the Collège de France on
neo-liberal governmentality. Economy and Society, 30(2), 190–207.
Lemke, T. (2012). Foucault, governmentality, and critique. Boulder, CO: Paradigm.
Lemke, T. (2015). New materialisms: Foucault and the ‘government of things’. Theory, Culture &
Society, 32(4), 3–25.
Lenglet, M. (2011). Conflicting codes and codings: How algorithmic trading is reshaping
financial regulation. Theory, Culture & Society, 28(6), 44–66.
Leonardi, P. M., Nardi, B. A., & Kallinikos, J. (2012). Materiality and organizing: Social interaction in
a technological world. Oxford, England: Oxford University Press.
Levy, S. (2010). Hackers. Sebastopol, CA: O’Reilly Media.
Lewis, S. C. (2015). Journalism in an era of big data: Cases, concepts, and critiques. Digital
Journalism, 3(3), 321–330.
Lewis, S. C., & Usher, N. (2013). Open source and journalism: Toward new frameworks for
imagining news innovation. Media, Culture & Society, 35(5), 602–619.
Lewis, S. C., & Usher, N. (2016). Trading zones, boundary objects, and the pursuit of
news innovation: A case study of journalists and programmers. Convergence, 22(5), 543–
560.
Lewis, S. C., & Westlund, O. (2015). Big data and journalism: Epistemology, expertise, economics, and ethics. Digital Journalism, 3(3), 447–466.
Li, J., Green, B., & Backstrom, L. S. (2013). U.S. Patent No. 9384243 B2 ("Real-time trend detection in a social network"). Washington, DC: U.S. Patent and Trademark Office.
Lievrouw, L. (2014). Materiality and media in communication and technology studies: An
unfinished project. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays
on communication, materiality, and society (pp. 21–51). Cambridge, MA: MIT Press.
Linden, C. G. (2017). Decades of automation in the newsroom: Why are there still so many jobs
in journalism? Digital Journalism, 5(2), 123–140.
Lowrey, W. (2006). Mapping the journalism–blogging relationship. Journalism, 7(4), 477–500.
Luu, F. (2013). U.S. Patent No. 20130332523 A1 (“Providing a multi-column newsfeed of content on a
social networking system”). Retrieved from https://www.google.com/patents/US20130332523
Lyon, D. (2006). Theorizing surveillance. London, England: Routledge.
Mackenzie, A. (2002). Transductions: Bodies and machines at speed. London, England: Continuum.
Mackenzie, A. (2005). Problematising the technological: the object as event? Social Epistemology,
19(4), 381–399.
Mackenzie, A. (2006). Cutting code: Software and sociality. New York, NY: Peter Lang.
Mackenzie, A. (2007). Protocols and the irreducible traces of embodiment: The Viterbi algorithm and the mosaic of machine time. In R. Hassan & R. E. Purser (Eds.), 24/7: Time and temporality in the network society (pp. 89–106). Stanford, CA: Stanford University Press.
Mackenzie, A. (2015). The production of prediction: What does machine learning want? European
Journal of Cultural Studies, 18(4–5), 429–445.
Mackenzie, A., & Vurdubakis, T. (2011). Codes and codings in crisis: Signification, performativity and excess. Theory, Culture & Society, 28(6), 3–23.
MacKenzie, D. (2008). Material markets: How economic agents are constructed. Oxford, England:
Oxford University Press.
Mager, A. (2012). Algorithmic ideology: How capitalist society shapes search engines. Information,
Communication & Society, 15(5), 769–787.
Mahoney, M. S. (1988). The history of computing in the history of technology. Annals of the History
of Computing, 10(2), 113–125.
Manovich, L. (1999). Database as symbolic form. Convergence: The International Journal of Research
into New Media Technologies, 5(2), 80–99.
Manovich, L. (2001). The language of new media. Cambridge MA: MIT Press.
Mansell, R. (2012). Imagining the internet: Communication, innovation, and governance. Oxford,
England: Oxford University Press.
Marres, N. (2012). Material participation: technology, the environment and everyday publics.
Basingstoke, England: Palgrave Macmillan.
Marres, N. (2013). Why political ontology must be experimentalized: On eco-show homes as
devices of participation. Social Studies of Science, 43(3), 417–443.
Marwick, A. E. (2013). Status update: Celebrity, publicity, and branding in the social media age. New
Haven, CT: Yale University Press.
Massumi, B. (1995). The autonomy of affect. Cultural Critique (31), 83–109.
Matheson, D. (2004). Weblogs and the epistemology of the news: some trends in online journalism.
New Media & Society, 6(4), 443–468.
Maturana, H. R., & Varela, F. J. (1987). The tree of knowledge: The biological roots of human
understanding. Boston, MA: Shambhala Publications.
McCormack, D. P. (2015). Devices for doing atmospheric things. In P. Vannini (Ed.), Non-
Representational methodologies. Re-Envisioning research. New York, NY: Routledge.
McGoey, L. (2012). The logic of strategic ignorance. The British Journal of Sociology, 63(3), 533–576.
McKelvey, F. (2014). Algorithmic media need democratic methods: Why publics matter. Canadian
Journal of Communication, 39(4).
McLuhan, M. (1994). Understanding media: The extensions of man. Cambridge, MA: MIT Press.
Mehra, S. K. (2015). Antitrust and the robo-seller: Competition in the time of algorithms. Minnesota Law Review, 100, 1323.
Meyer, P. (2002). Precision journalism: A reporter’s introduction to social science methods. Lanham,
MD: Rowman & Littlefield.
Meyer, R. (2014). Everything we know about Facebook's secret mood manipulation experiment. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
Meyrowitz, J. (1994). Medium theory. In S. Crowley & D. Mitchell (Eds.) Communication Theory
Today (pp. 50–77). Stanford, CA: Stanford University Press.
Michael, M. (2004). On making data social: Heterogeneity in sociological practice. Qualitative
Research, 4(1), 5–23.
Miller, C. H. (2015, July 9). When algorithms discriminate. New York Times. Retrieved from http://
www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html?_r=0
Miller, P., & Rose, N. (1990). Governing economic life. Economy and Society, 19(1), 1–31.
Minsky, M., & Papert, S. (1969). Perceptrons. Cambridge, MA: MIT Press.
Mirani, L. (2015). Millions of Facebook users have no idea they're using the internet. Quartz. Retrieved from http://qz.com/333313/milliions-of-facebook-users-have-no-idea-theyre-using-the-internet
Miyazaki, S. (2012). Algorhythmics: Understanding micro-temporality in computational cultures.
Computational Culture (2). Retrieved from: http://computationalculture.net/algorhythmics-
understanding-micro-temporality-in-computational-cultures/
Mol, A. (1999). Ontological politics. A word and some questions. The Sociological Review, 47(S1), 74–89.
Mol, A. (2002). The body multiple: Ontology in medical practice. Durham, NC: Duke University Press.
Mol, A. (2013). Mind your plate! The ontonorms of Dutch dieting. Social Studies of Science, 43(3),
379–396.
Morley, D. (2003). Television, audiences and cultural studies. London, England: Routledge.
Morley, D., & Silverstone, R. (1990). Domestic communication—technologies and meanings.
Media, Culture & Society, 12(1), 31–55.
Moser, I. (2008). Making Alzheimer’s disease matter. Enacting, interfering and doing politics of
nature. Geoforum, 39(1), 98–110.
Müller, M. (2015). Assemblages and actor-networks: Rethinking socio-material power, politics and space. Geography Compass, 9(1), 27–41.
Murray, J. (2012). Cybernetic principles of learning. In N. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 901–904). Boston, MA: Springer.
Napoli, P. M. (2014). On automation in media industries: Integrating algorithmic media production
into media industries scholarship. Media Industries, 1(1). Retrieved from: https://quod.lib.
umich.edu/m/mij/15031809.0001.107/—on-automation-in-media-industries-integrating-al
gorithmic?rgn=main;view=fulltext
Napoli, P. M. (2015). Social media and the public interest: Governance of news platforms in
the realm of individual and algorithmic gatekeepers. Telecommunications Policy, 39(9),
751–760.
Narasimhan, M. (2011). Extending the graph tech talk. Retrieved from http://www.facebook.com/
video/video.php?v=10150231980165469
Naughton, J. (2016). Here is the news—but only if Facebook thinks you need to know. The Guardian.
Retrieved from http://www.theguardian.com/commentisfree/2016/may/15/facebook-instant-
articles-news-publishers-feeding-the-beast
Newton, C. (2016). Here’s how Twitter’s new algorithmic timeline is going to work. The
Verge. Retrieved from http://www.theverge.com/2016/2/6/10927874/twitter-algorithmic-
timeline
Neyland, D., & Möllers, N. (2017). Algorithmic IF . . . THEN rules and the conditions and consequences of power. Information, Communication & Society, 20(1), 45–62.
Nielsen, R. K. (2016). The many crises of Western journalism: A comparative analysis of economic
crises, professional crises, and crises of confidence. In J. C. Alexander, E. B. Breese, & M. Luengo
(Eds.), The crisis of journalism reconsidered (pp. 77–97). Cambridge, England: Cambridge
University Press.
Nielsen, R. K., & Schrøder, K. C. (2014). The relative importance of social media for accessing,
finding, and engaging with news: An eight-country cross-media comparison. Digital Journalism,
2(4), 472–489.
Nunez, M. (2016). Former Facebook workers: We routinely suppressed Conservative news. Gizmodo.
Retrieved from http://gizmodo.com/former-facebook-workers-we-routinely-suppressed-
conser-1775461006
Obama, B. (2009). Freedom of Information Act. Retrieved from https://www.usitc.gov/secretary/
foia/documents/FOIA_TheWhiteHouse.pdf
Ohlheiser, A. (2016). Three days after removing human editors, Facebook is already trending fake
news. The Washington Post. Retrieved from https://www.washingtonpost.com/news/the-
intersect/wp/2016/08/29/a-fake-headline-about-megyn-kelly-was-trending-on-facebook/
?utm_term=.d7e4d9b9bf9a
Olazaran, M. (1996). A sociological study of the official history of the perceptrons controversy.
Social Studies of Science, 26(3), 611–659.
Oremus, W. (2016). Who controls your Facebook feed. Slate. Retrieved from http://www.slate
.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_
works.html
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology
in organizations. Organization Science, 3(3), 398–427.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for studying
technology in organizations. Organization Science, 11(4), 404–428.
Orlikowski, W. J., & Gash, D. C. (1994). Technological frames: Making sense of information
technology in organizations. ACM Transactions on Information Systems (TOIS), 12(2),
174–207.
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work and organization. The Academy of Management Annals, 2(1), 433–474.
Orlikowski, W. J., & Scott, S. V. (2015). Exploring material-discursive practices. Journal
of Management Studies, 52(5), 697–705.
Owens, E., & Vickrey, D. (2014). News feed FYI: Showing more timely stories from friends and
pages. Facebook News Feed FYI. Retrieved from https://newsroom.fb.com/news/2014/09/
news-feed-fyi-showing-more-timely-stories-from-friends-and-pages/
Packer, J., & Wiley, S. B. C. (2013). Communication matters: Materialist approaches to media, mobility
and networks. New York, NY: Routledge.
Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. New York, NY: Oxford
University Press.
Ridgway, S. (2016). Architectural projects of Marco Frascari: The pleasure of a demonstration. London,
England: Routledge.
Rieder, B. (2017). Scrutinizing an algorithmic technique: The Bayes classifier as interested reading
of reality. Information, Communication & Society, 20(1), 100–117.
Roberge, J., & Melançon, L. (2017). Being the King Kong of algorithmic culture is a tough job after all: Google's regimes of justification and the meanings of Glass. Convergence: The International Journal of Research into New Media Technologies, 23(3), 306–324.
Roberts, J. (2012). Organizational ignorance: Towards a managerial perspective on the unknown.
Management Learning, 44(3), 215–236.
Rodgers, S. (2015). Foreign objects? Web content management systems, journalistic cultures and
the ontology of software. Journalism, 16(1), 10–26.
Rodrigues, F. (2017). Meet the Swedish newspaper editor who put an algorithm in charge of his
homepage. Retrieved from http://www.storybench.org/meet-swedish-newspaper-editor-
put-algorithm-charge-homepage/
Rose, N. (1999). Powers of freedom: Reframing political thought. Cambridge, England:
Cambridge University Press.
Rosen, J. (2005). Bloggers vs. journalists is over. Retrieved from http://archive.pressthink.
org/2005/01/21/berk_essy.html
Rosenberry, J., & St John, B. (2010). Public journalism 2.0: The promise and reality of a citizen engaged
press. London, England: Routledge.
Rubinstein, D. Y., Vickrey, D., Cathcart, R. W., Backstrom, L. S., & Thibaux, R. J. (2016). Diversity
enforcement on a social networking system newsfeed: Google Patents.
Sandvig, C. (2013). The internet as an infrastructure. In The Oxford handbook of internet studies
(pp. 86–108). Oxford, England: Oxford University Press.
Sandvig, C. (2015). Seeing the sort: The aesthetic and industrial defense of “the algorithm.” Journal of
the New Media Caucus. Retrieved from http://median.newmediacaucus.org/art-infrastructures-
information/seeing-the-sort-the-aesthetic-and-industrial-defense-of-the-algorithm/
Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). Auditing algorithms: Research
methods for detecting discrimination on internet platforms. Presented at Data and
Discrimination: Converting Critical Concerns into Productive Inquiry. May 22, Seattle, USA.
Sauder, M., & Espeland, W. N. (2009). The discipline of rankings: Tight coupling and organizational
change. American Sociological Review, 74(1), 63–82.
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117.
Schubert, C. (2012). Distributed sleeping and breathing: On the agency of means in medical work.
In J. Passoth, B. Peuker & M. Schillmeier (Eds.) Agency without actors: New approaches to
collective action (pp. 113–129). Abingdon, England: Routledge.
Schultz, I. (2007). The journalistic gut feeling: Journalistic doxa, news habitus and orthodox news
values. Journalism Practice, 1(2), 190–207.
Schultz, A. P., Piepgrass, B., Weng, C. C., Ferrante, D., Verma, D., Martinazzi, P., Alison, T., & Mao, Z. (2014). U.S. Patent No. 20140114774 A1 ("Methods and systems for determining use and content of pymk based on value model"). Washington, DC: U.S. Patent and Trademark Office.
Schütz, A. (1946). The well-informed citizen: An essay on the social distribution of knowledge. Social Research, 13(4), 463–478.
Schutz, A. (1970). Alfred Schutz on phenomenology and social relations. Chicago, IL: University of
Chicago Press.
Seabrook, J. (2014). Revenue Streams. The New Yorker. Retrieved from https://www.newyorker
.com/magazine/2014/11/24/revenue-streams
Seaver, N. (2013). Knowing algorithms. Media in Transition, 8, 1–12.
Sedgwick, E. K., & Frank, A. (1995). Shame in the cybernetic fold: Reading Silvan Tomkins.
Critical Inquiry, 21(2), 496–522.
Simmel, G. (1906). The sociology of secrecy and of secret societies. The American Journal of
Sociology, 11(4), 441–498.
Wardrip-Fruin, N. (2009). Expressive processing: Digital fictions, computer games, and software studies.
Cambridge, MA: MIT Press.
Wash, R. (2010). Folk models of home computer security. Paper presented at the Proceedings of the
Sixth Symposium on Usable Privacy and Security, July 14–16, Redmond, WA.
Webb, D. (2003). On friendship: Derrida, Foucault, and the practice of becoming. Research in
Phenomenology, 33, 119–140.
Weiss, A. S., & Domingo, D. (2010). Innovation processes in online newsrooms as actor-
networks and communities of practice. New Media & Society, 12(7), 1156–1171.
Welch, B., & Zhang, X. (2014). News Feed FYI: Showing better videos. News Feed FYI. Retrieved
from http://newsroom.fb.com/news/2014/06/news-feed-fyi-showing-better-videos/
Weltevrede, E., Helmond, A., & Gerlitz, C. (2014). The politics of real-time: A device perspective on
social media platforms and search engines. Theory, Culture & Society, 31(6), 125–150.
Whitehead, A. N. (1978). Process and reality: An essay in cosmology (D. R. Griffin & D. W. Sherburne, Eds.). New York, NY: Free Press.
Wiener, N. (1948). Cybernetics: Control and communication in the animal and the machine. New
York, NY: Technology Press & Wiley.
Wilf, E. (2013). Toward an anthropology of computer-mediated, algorithmic forms of sociality.
Current Anthropology, 54(6), 716–739.
Williams, R. (1977). Marxism and literature. Oxford, England: Oxford University Press.
Williams, R. (1985). Keywords: A vocabulary of culture and society. Oxford, England: Oxford
University Press.
Williams, R. (2005). Television: Technology and cultural form. New York, NY: Routledge.
Williams, S. (2004). Truth, autonomy, and speech: Feminist theory and the first amendment. New
York: New York University Press.
Williamson, B. (2015). Governing software: networks, databases and algorithmic power in the
digital governance of public education. Learning, Media and Technology, 40(1), 83–105.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. Chicago,
IL: University of Chicago Press.
Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the
philosophy of technology. Science, Technology, and Human Values, 18(3): 362–378.
Wirth, N. (1985). Algorithms + data structures = programs. Englewood Cliffs, NJ: Prentice Hall.
Woolgar, S. (1998). A new theory of innovation? Prometheus, 16(4), 441–452.
Woolgar, S., & Lezaun, J. (2013). The wrong bin bag: A turn to ontology in science and technology
studies? Social Studies of Science, 43(3), 321–340.
WSJ. (2015, October 25). Tim Cook talks TV, cars, watches and more. Wall Street Journal.
Retrieved from http://www.wsj.com/articles/tim-cook-talks-tv-cars-watches-and-more-
1445911369
Yusoff, K. (2009). Excess, catastrophe, and climate change. Environment and Planning D: Society and
Space, 27(6), 1010–1029.
Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology & Human Values, 41(1), 3–16.
Zimmer, M. (2008). The externalities of search 2.0: The emerging privacy threats when the drive for the perfect search engine meets Web 2.0. First Monday, 13(3). Retrieved from http://firstmonday.org/article/view/2136/1944
Zuckerberg, M. (2014). “Today is Facebook’s 10th anniversary”. Retrieved from https://www
.facebook.com/zuck/posts/10101250930776491
Zuckerberg, M., Bosworth, A., Cox, C., Sanghvi, R., & Cahill, M. (2012). U.S. Patent No. 8171128 ("Communicating a newsfeed of media content based on a member's interactions in a social network environment"). Retrieved from https://www.google.com/patents/US8171128
Zuiderveen Borgesius, F. J., Trilling, D., Moeller, J., Bodó, B., De Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? Internet Policy Review. Journal on Internet Regulation, 5(1). Retrieved from https://policyreview.info/articles/analysis/should-we-worry-about-filter-bubbles
Index
A/B testing, 48, 82, 127
accountability, 13, 27, 35, 41–42, 43, 47, 52, 76, 129, 133, 141, 155, 157
actor-network theory, 30, 49, 51
Adresseavisen, 125, 131, 140, 144
affect, 13, 14, 17, 62–65, 93, 94–96, 100, 116–17
  see also encounters with algorithms
affordances, 23, 131
Aftenposten, 58, 119, 121, 122, 123, 125, 128, 152
Aftonbladet, 133
agency, 16, 27, 42, 49, 50–51, 55, 56, 147, 151, 155, 156
Ahmed, Sara, 100, 156
Algol 58, 21
algorism, 20–21
algorithmic configurations, 16, 50, 55, 58, 63–64, 72, 143, 146, 151, 155–56
algorithmic culture, 31–32, 150
algorithmic decision-making, 34, 35, 41, 46, 53, 55
algorithmic discrimination, 29–30, 36, 39, 43, 45–46, 51, 55, 56, 102 see also bias; “racist” algorithms
algorithmic imaginary, 113–16, 117, 156–57
algorithmic life, 12, 13, 67, 149–59
  see also orientations; technicity
algorithmic power and politics, 3–4, 8, 13–14, 15, 18, 19, 20, 23, 27, 34, 35, 41, 55, 119–20, 146–47, 151, 155, 159 see also encounters with algorithms; gatekeeping; government and governmentality; multiplicity of algorithms; ontological politics
  in contemporary media landscape, 16, 18, 20, 32–40, 122
  diagrammatics of algorithms, 72–74
  micropolitics of power, 94, 116
algorithmic timeline, 39, 93, 112
al-Khwarizmi, Abdullah Muhammad bin Musa, 20–21
Amazon, 31, 51, 101, 102–3, 149, 153–54
Anderson, Benedict, 51, 94, 113, 114
Anderson, C.W., 29, 122, 124, 130
antiepistemology, 44
Apple Music, 54
architectural model of power, 73, 92, 95
architecture, 83, 84, 85, 89, 91, 114
Aristotle, 8, 11
artificial intelligence, 27, 56, 97, 151
Ashby, Ross, An Introduction to Cybernetics, 59–62
assemblages, 8, 22, 30, 50–51, 64, 82, 142, 144, 152, 155
auditing, 43, 45, 60
automation, 9, 25, 31, 32
autonomy, 13, 42, 72
Barad, Karen, 16, 49, 51, 120, 121, 125, 146, 154, 155
Barocas, Solon, 25–27, 53
Bayes’ theorem, 28, 61
Beer, David, 29, 33, 34, 103, 104, 114
Bennett, Jane, 49, 120
Berlingske, 143
bias, 35, 36, 39, 41, 45, 52–53, 55–56, 58, 86, 90, 106, 118, 120, 152, 154 see also algorithmic discrimination; “racist” algorithms
big data, 2, 25, 27, 28, 29, 32, 54, 130, 139, 150
black boxes, 16, 41, 42–47, 50, 56, 58, 59–60, 61, 64, 65
  cybernetic notion, 43, 60, 61, 62, 77
  opening up, 43–44, 45–46, 47, 50
blogging, 111, 123, 124
Bogost, Ian, 38–39
Boland, Brian, 85–86
Bosworth, Andrew, 70, 79
boundary-making practices, 146, 148, 152, 155, 157–59
business models, 6, 8, 48, 108, 128
calculation, 20, 32, 140
Callon, Michael, 32, 44, 50, 58, 143
categorization, 5, 10, 14–15, 102, 103, 115, 155
Tizard Mission, 43
Top-N Video ranker, 47
transduction, 14, 153
transparency, 41–42, 43, 45–46, 58
truth games, 147
Tumblr, 103, 111
Twitter, 39, 47, 81, 99, 106–7, 108, 110–112, 132, 144, 149, 150, 156
  hashtags, 93, 98, 106, 110
  #RIPTwitter, 93, 94, 98, 106, 107
unknowing algorithms, 41, 46–47, 59
unknown knowns, 61–63
unknowns, 42, 43–47, 44, 56, 62, 96 see also known unknowns; strategic unknowns
Usher, Nikki, 124, 145
values, 4, 8, 35, 52, 60, 71, 90, 91, 134
  in algorithms, 23, 36, 52–53, 60, 61, 77, 90, 117, 158
  in design, 35, 36, 68, 120, 158
  journalistic see under journalism
van Dijck, José, 2, 5–7, 67, 105
Verdens Gang (VG), 125, 133, 138–39
visibility, 10, 16–17, 73–74, 82, 88, 92, 100, 159
  media strategies for, 110–11
  panopticism, 82–84
  threat of invisibility, 17, 84–88, 89, 96
Von Hilgers, Philipp, 43, 44, 60, 64
“when” of algorithms, 55–59, 151–52
“while you were away” feature, 153–54
Whitehead, Alfred North, 16, 49
Williams, Raymond, 71, 93, 117
Winner, Langdon, 23, 35, 50, 67
Wired, 48, 77
Wirth, Niklaus, 22–23
Wood, David, 29, 35, 36, 151
Yahoo, 27
YouTube, 47, 48, 104, 109
Zuckerberg, Mark, 52, 66, 69, 70, 71–72, 81, 91, 119, 121, 129, 152 see also Facebook