Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace
By Ronald Deibert, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain
3.5/5
About this ebook
Internet filtering, censorship of Web content, and online surveillance are increasing in scale, scope, and sophistication around the world, in democratic countries as well as in authoritarian states. The first generation of Internet controls consisted largely of building firewalls at key Internet gateways; China's famous “Great Firewall of China” is one of the first national Internet filtering systems. Today the new tools for Internet controls that are emerging go beyond mere denial of information. These new techniques, which aim to normalize (or even legalize) Internet control, include targeted viruses and the strategically timed deployment of distributed denial-of-service (DDoS) attacks, surveillance at key points of the Internet's infrastructure, take-down notices, stringent terms of usage policies, and national information shaping strategies. Access Controlled reports on this new normative terrain. The book, a project from the OpenNet Initiative (ONI), a collaboration of the Citizen Lab at the University of Toronto's Munk Centre for International Studies, Harvard's Berkman Center for Internet and Society, and the SecDev Group, offers six substantial chapters that analyze Internet control in both Western and Eastern Europe and a section of shorter regional reports and country profiles drawn from material gathered by the ONI around the world through a combination of technical interrogation and field research methods.
Reviews for Access Controlled
3 ratings, 1 review
Book preview
Access Controlled - Ronald Deibert
Part I Access Controlled: Theory and Analysis
1 Beyond Denial
Introducing Next-Generation Information Access Controls
Ronald Deibert and Rafal Rohozinski
Introduction
It is hard to imagine the world before the Internet. A generation of digital natives has grown up with ubiquitous connectivity, where neither borders nor language seems a barrier to communication.¹ And yet, less than 20 years ago the global information environment was a much more controlled and regulated space, organized around sovereign states. Throughout much of modern history, governments have wrestled with the tensions between the relentless drive to build new technologies and the unpredictable and often counterproductive consequences that flow from them for their power and authority.² No less a historical figure than Stalin captured this tension between the quest for modernity and the compulsion to control. When presented with a proposal to build a modern telephone system for the new Soviet state, he reportedly replied, “I can imagine no greater instrument of counterrevolution in our time.”
The rise of the Internet coincided with a major set of political upheavals that culminated with the collapse of the Soviet Union and communist bloc. In the euphoria that ensued, the idea of technological redemption, inevitable democratization, and for some, the end of history, coalesced into a popular ideology that equated technology with empowerment. This idea was far from new. Indeed, the telegraph, electrical lighting, and telephony all emerged at similarly transformational historical junctures, leading to a long pedigree of speculation regarding the democratizing role of technology in social and political change.³
There is no doubt that the Internet has unleashed a wide-ranging and globally significant shift in communications—a shift that has led to the empowerment of individuals and nonstate actors on an unprecedented scale. At times, the Internet seems uncontrollable, a constantly evolving and dynamic virtual realm, reshaped continuously by a growing number of users at edge points of the network. But Newtonian physics is as relevant in politics and cyberspace as it is in the physical realm. Just as with previous technological developments, as the Internet has grown in political significance, an architecture of control—through technology, regulation, norms, and political calculus—has emerged to shape a new geopolitical information landscape.
In 2008, the OpenNet Initiative (ONI) published its first global study—Access Denied: The Practice and Policy of Global Internet Filtering⁴—which documented how states are seeking to establish borders in cyberspace. Our snapshot of 41 countries discovered that states were busy constructing defensive perimeters to deny access to unwanted content. For the most part, these methods consisted of building firewalls at key Internet choke points. The People’s Republic of China was among the first to adopt national filtering systems at the backbone of the country’s Internet—popularly known as the “Great Firewall of China”—and it has become a paradigm of Internet censorship ever since. “Chinese-style” filtering—as we call it here—represents the first generation of Internet control techniques.
In Chinese-style filtering, lists of Internet protocol (IP) addresses, keywords, and/or domains are programmed into routers or software packages that are situated at key Internet choke points, typically at international gateways or among major Internet service providers (ISPs).⁵ Requests that are made for any information contained in the block lists are denied for citizens living within those jurisdictions. The latter can happen in a variety of ways, with greater and lesser degrees of transparency, but it is almost always static, fixed in time, and relatively easy to discern using the methods developed over time by the OpenNet Initiative’s researchers (see box on ONI’s methodology). Moreover, determined Internet users can circumvent them with relative ease.
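To make the mechanics concrete, here is a minimal illustrative sketch of this kind of first-generation, list-based filtering as the chapter describes it: a gateway checks each request’s destination IP address, domain, and URL against static block lists and drops anything that matches. The sketch is ours, not drawn from any deployed system, and every list entry in it is a hypothetical placeholder.

```python
# Minimal sketch of first-generation, blocklist-based filtering at a gateway.
# All list entries below are hypothetical placeholders, not real block lists.

BLOCKED_IPS = {"203.0.113.7"}              # reserved documentation address
BLOCKED_DOMAINS = {"banned-news.example"}  # hypothetical domain
BLOCKED_KEYWORDS = {"forbidden-topic"}     # hypothetical keyword

def is_denied(dest_ip: str, domain: str, url: str) -> bool:
    """Return True if a router or proxy applying static lists would drop the request."""
    if dest_ip in BLOCKED_IPS:
        return True
    # Match the exact domain or any parent domain (e.g., www.banned-news.example).
    labels = domain.lower().split(".")
    if any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels))):
        return True
    # Keyword filtering typically inspects the requested URL or query string.
    return any(keyword in url.lower() for keyword in BLOCKED_KEYWORDS)

print(is_denied("198.51.100.20", "www.banned-news.example", "/politics"))  # True
print(is_denied("198.51.100.20", "open-site.example", "/weather"))         # False
```

Because the lists are static and sit at a handful of choke points, this style of control is exactly what repeated, comparative testing can detect, and what a determined user can route around.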
Not all countries have been as forthright with their rationale for filtering Internet content as China. Our research for Access Denied also found coyness on the part of many states to admit seeking to control Internet content. In many cases, denial of access occurred extralegally, or under the guise of opaque national security laws. Often, ISPs were simply asked or told to block access to specific content without any reference to existing law. Other times, blockages were difficult to distinguish from network errors or other technical problems, like denial-of-service attacks, but seemed suspiciously connected to political events. Many of the countries listed in our first report denied that they were in fact blocking access to Internet content or had any connection to attacks on services. We saw these events as anomalies insofar as they did not fit the paradigm of Chinese-style filtering and largely eluded the methodologies that we had developed to test for Internet censorship.⁶
We have subsequently come to learn that these anomalies were, in fact, emerging norms. Since our research for Access Denied was conducted, a sea change has occurred in the policies and practices of Internet controls. States no longer fear pariah status by openly declaring their intent to regulate and control cyberspace. The convenient rubric of terrorism, child pornography, and cyber security has contributed to a growing expectation that states should enforce order in cyberspace, including policing unwanted content. Paradoxically, advanced democratic states within the Organization for Security and Cooperation in Europe (OSCE)—including members of the European Union (EU)—are (perhaps unintentionally) leading the way toward the establishment of a global norm around filtering of political content with the introduction of proposals to censor hate speech and militant Islamic content on the Internet. This follows already existing measures in the UK, Canada, and elsewhere aimed at eliminating access to child pornography. Recently and amid great controversy, Australia announced plans to create a nationwide filtering system for Internet connectivity in that country. Although the proposal has ultimately languished, it shows the extent of this growing norm. No longer is consideration of state-sanctioned Internet censorship confined to authoritarian regimes or hidden from public view. Internet censorship is becoming a global norm.
Box 1.1
The ONI employs a unique “fusion” methodology that combines field investigations, technical reconnaissance, and data mining, fusion, analysis, and visualization. Our aim is to uncover evidence of Internet content filtering in countries under investigation. The ONI’s tests consist of running special software programs within countries under investigation that connect back to databases containing lists of thousands of URLs, IPs, and keywords. The lists are broken down into two categories: global lists include URLs, IPs, and keywords that are tested in every country, and which help us make general comparisons of accessibility across countries. Global lists also provide a “snapshot” of accessibility to content typically blocked by filtering software programs, and can help us understand whether particular software programs are being used in a specific context. Local lists are unique to each country and are usually made up of content in local languages. These are high-impact URLs, IPs, and keywords, meaning content that is likely to be, or has been reported to have been, targeted for filtering. Our aim is to run tests on each of the main ISPs in a country over an extended period of time—typically at least two weeks on at least two occasions. Our access depends very much on our in-country testers, and for security and other reasons we are not always able to perform comprehensive tests, meaning in some cases we have only partial results on which to base inferences. Our specially designed software checks access both within the country and from one or more control locations simultaneously. Anomalies are analyzed and determinations are made as to whether a site is accessible or not, and if not, how the inaccessibility occurs. In some instances, block pages—Web sites that explicitly confirm blocking—are returned in response to requests for banned content. In other instances, connections are simply broken. In some cases, special filtering software is employed, while in others routers are manually configured to block.
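For readers who want a sense of what the comparative step looks like in practice, the following sketch shows one way an in-country measurement and a control-location measurement might be compared for a single URL. It is our own simplified illustration, not the ONI’s actual tooling; the record fields and thresholds are assumptions made for the example.

```python
# Simplified illustration of comparing an in-country measurement against a
# control-location measurement for one URL. The record format and thresholds
# are assumptions for this sketch, not the ONI's actual software.

from dataclasses import dataclass

@dataclass
class Measurement:
    url: str
    status: int        # HTTP status code, or 0 if the connection failed
    body_bytes: int    # size of the response body received
    block_page: bool   # True if a known block-page signature was detected

def classify(in_country: Measurement, control: Measurement) -> str:
    """Label a URL as accessible, blocked, or anomalous (needing analyst review)."""
    if in_country.block_page:
        return "blocked: explicit block page returned"
    if in_country.status == 0 and control.status != 0:
        return "anomalous: connection failed only from the in-country vantage point"
    if (in_country.status == control.status
            and abs(in_country.body_bytes - control.body_bytes) < 512):
        return "accessible"
    return "anomalous: responses differ between vantage points"

print(classify(Measurement("http://example.org/news", 0, 0, False),
               Measurement("http://example.org/news", 200, 48211, False)))
```

In real testing, each URL would be measured repeatedly, across ISPs and over time, before any such label is treated as evidence of filtering.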
At the same time, states have also become more cognizant of the strategic importance of cyberspace (of which the Internet is an important constituent component). Cyberspace has become militarized. The clever use of the Internet by insurgents and militants in Iraq and other parts of the Middle East, the significance of the Internet in conflicts such as the 2008 Russia-Georgia war, and revelations concerning large-scale cyber-espionage networks⁷ have emphasized the impact of cyberspace on the sweat and muscle aspects of war fighting and on geopolitical competition among states and nonstate actors. Reflecting on these recent incidents, many states’ armed forces and policymakers have engaged in a fundamental rethinking of assumptions about the importance of the informational domain to conflict and competition. As a consequence, states are now openly pursuing a cyber arms race, with leading powers such as the United States, Russia, and China unashamedly making their intentions clear in doctrines for military engagement in cyberspace. The quest for information control is now beyond denial.
The present volume aims to document, analyze, and explore these emerging next-generation techniques, what they mean for relationships between citizens and states, and how they will shape cyberspace as a domain for civic interaction into the future. The title of our volume—Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace—suggests how the center of gravity of practices aimed at managing cyberspace has shifted subtly from policies and practices aimed at denying access to content to methods that seek to normalize control and the exercise of power in cyberspace through a variety of means.
This volume differs from its predecessors in two ways. First, our focus is primarily on the 56 countries that make up the OSCE. This is a deliberate choice, as many of the legal mechanisms that legitimate control over cyberspace, and its militarization, are led by the advanced democratic countries of Europe and North America. Likewise, many of the more innovative laws and techniques used to silence voices in cyberspace are emerging from the postcommunist countries of the Commonwealth of Independent States (CIS). In this respect, the industrialized North is establishing norms that are only too readily propagated and adopted by repressive and authoritarian regimes elsewhere.
Second, Access Controlled focuses on the new generations of Internet controls that go beyond mere denial of information. Whereas Chinese-style national filtering schemes represent the first generation of Internet filtering, second- and third-generation techniques are more subtle, flexible, and even offensive in character. These next-generation techniques employ legal regulations to supplement or legitimize technical filtering measures, extralegal or covert practices (including offensive methods), and the outsourcing or privatizing of controls to third parties to restrict what type of information can be posted, hosted, accessed, or communicated online. Examples of next-generation techniques include the infiltration and exploitation of computer systems by targeted viruses and the employment of distributed denial-of-service (DDoS) attacks, surveillance at key choke points of the Internet’s infrastructure, legal takedown notices, stifling terms-of-usage policies, and national information-shaping strategies, all of which are highlighted in one way or another in the chapters that follow. Although these measures may have the same aim as Chinese-style filtering, they reflect a maturation of methods resulting from a growing colonization of cyberspace by states and other actors. They emerge from a desire to shape and influence, as much as to tightly control, national and global populations that are increasingly reliant on cyberspace as their main source of information. These next-generation controls raise important and sometimes troubling public policy issues—particularly for the relationship between citizens and states.
Chapter Overview
Second- and third-generation controls are carefully defined in our subsequent chapter in this volume, “Control and Subversion in Russian Cyberspace.” Second-generation controls create a legal and normative environment and technical capabilities that enable actors to deny access to information resources as and when needed, while reducing the possibility of blowback or discovery. These controls have an overt and a covert track. The overt track aims to legalize content controls by specifying the conditions under which access can be denied. Instruments here include the doctrine of information security as well as the application of existing laws, such as slander and defamation, to the online environment. The covert track establishes procedures and technical capabilities that allow content controls to be applied “just in time,” when the information being targeted has the highest value (e.g., during elections or public demonstrations), and to be applied in ways that assure plausible deniability.
Third-generation controls take a highly sophisticated, multidimensional approach to enhancing state control over national cyberspace and building capabilities for competing in informational space with potential adversaries and competitors. The key characteristic of third-generation controls is that the focus is less on denying access than on successfully competing with potential threats through effective counterinformation campaigns that overwhelm, discredit, or demoralize opponents. Third-generation controls also focus on the active use of surveillance and data mining as means to confuse and entrap opponents.
We argue that while the countries of the CIS are often seen as lagging behind Europe, North America, and the technological tigers of Asia, they may be leaders in the development of next-generation controls. Some of the first, and most elaborate, forms of just-in-time blocking, terms-of-usage policies, surveillance, and legal takedown notices occurred among the countries of the CIS over the last several years. Examining that region in detail may give us insight into the future of information controls elsewhere.
Computer network attacks and exploitation—what we called “just-in-time” blocking in Access Denied—are perhaps the starkest of examples of next-generation techniques. Computer network attacks describe the range of controls that target and “take down” strategically important sources of information or services at key moments in time through computer-based information attacks. Although there are several tactics that can be employed within this rubric—deliberate tampering with domain name servers, virus and Trojan horse insertion, and even brute physical attacks—the most common is the use of DDoS attacks. These attacks flood a server with illegitimate requests for information from multiple sources—usually from so-called “zombie” computers that are infected and employed as part of a botnet.
The ONI has monitored an increasing number of just-in-time blocking incidents using DDoS attacks, going back to our first encounter with the phenomenon during the Kyrgyzstan parliamentary elections of 2005. In that episode, the Web sites of opposition newspapers came under a debilitating attack that left them unable to communicate during the critical period leading up to and during the Kyrgyz election.⁸ Since the Kyrgyz case, DDoS attacks have featured prominently in the dispute between Russia and Estonia in May 2007, during the Russia-Georgia conflict of 2008, and in numerous cases involving the Web sites of human rights and political opposition groups.
These tactics are particularly difficult to monitor using traditional ONI methods because of their temporary and fleeting duration, and because their perpetrators can disguise their involvement through distribution and anonymity. Today, organized criminal networks operate commercial botnets with significant powers of disruption. Perpetrators can simply contract out a DDoS attack and benefit from the convenience of an electronic assault that from the outside may look like a random attack or a series of unfortunate network errors. Attributing such attacks to their source is difficult because the vectors are distributed and the transactions are carried out through criminal activity and illicit shadow markets. Although much of what the ONI has observed in terms of computer network attacks and just-in-time blocking has occurred in the developing world, it is noteworthy that the military use of botnets is being debated in NATO countries and elsewhere.⁹ The prospect of an arms race in cyberspace looms large.
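Detection, as opposed to attribution, can be approximated with very simple traffic accounting. The sketch below is a hedged illustration rather than a description of any real monitoring system: it flags minutes in a Web server’s access log where both the request volume and the number of distinct source addresses jump together, which is the crude signature of a flood from a distributed botnet. The log format and thresholds are invented for the example.

```python
# Defensive illustration only: flag DDoS-like surges in a Web server access
# log by counting requests and distinct source IPs per minute. The log format
# and thresholds are invented for this sketch.

from collections import defaultdict
from datetime import datetime

def flag_surges(log_lines, request_threshold=5000, source_threshold=1000):
    """Yield (minute, requests, distinct_sources) for minutes that look like a flood."""
    hits_per_minute = defaultdict(int)
    sources_per_minute = defaultdict(set)
    for line in log_lines:
        # Assumed line format: "2008-08-09T14:03:12 203.0.113.7 GET /index.html"
        timestamp, source_ip, _method, _path = line.split()
        minute = datetime.fromisoformat(timestamp).replace(second=0)
        hits_per_minute[minute] += 1
        sources_per_minute[minute].add(source_ip)
    for minute in sorted(hits_per_minute):
        if (hits_per_minute[minute] > request_threshold
                and len(sources_per_minute[minute]) > source_threshold):
            yield minute, hits_per_minute[minute], len(sources_per_minute[minute])
```

Even a monitor like this only establishes that a surge happened and when; as the chapter notes, tying the surge to a sponsor requires evidence that rarely appears in the packets themselves.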
Among many countries in the industrialized world, a major impetus to filter is the desire to control access to information relating to the sexual exploitation of children, otherwise known as child pornography. In almost all countries, possession and distribution of child pornography is illegal. In some countries, laws have been enacted to restrict distribution of child pornography online. In some countries, private ISPs have entered into voluntary arrangements to filter access to lists of child pornographic material, while in others entire nationwide filtering schemes have been proposed. In all cases, the proposals have been the subject of considerable public debate and controversy. Although only a few very extreme minority groups, such as libertarians, question the right to access child pornography, many have raised questions about the transparency of the processes being followed or the mechanisms put in place for oversight and review. For the ONI, for example, merely testing for access to this material is prohibited, because a simple connection to such a site would constitute a crime in most jurisdictions. This situation leaves many researchers in a quandary as to how to verify that lists are accurate and do not contain collateral filtering problems or categorization mistakes common to filtering software. Nart Villeneuve’s chapter provides a historical overview of online child pornography controls and examines the range of policy responses that have been employed. As Villeneuve explains, many governments have adopted national filtering policies rather than developing international information-sharing arrangements that would involve police cooperation and the removal of information at its source.
Another example of next-generation information controls prominent among the countries of the OSCE is the extensive use and application of surveillance. As Hal Roberts and John Palfrey outline in their chapter, surveillance can happen at numerous points throughout the infrastructure of cyberspace and can be collected by a variety of public and private actors who have access to those choke points. States’ intelligence and law enforcement agencies are increasingly extracting precious information flows through the installation of permanent eavesdropping equipment at key Internet choke points, such as Internet exchanges, ISPs, or major international peering facilities, and combining such information with new tools of reconnaissance drawn from data sources such as CCTVs, satellite imagery, and powerful systems of geo-locational mapping. To be sure, electronic surveillance is nothing new, having a long history shrouded in secrecy. Throughout the cold war, both superpowers assembled globe-spanning electronic surveillance systems that operated in the most highly classified realms. However, today’s surveillance systems are much more extensive and penetrating, and are legitimized by permissive antiterror legislation that removes many previous operational constraints. They are also increasingly operated and controlled not by the state but by private actors. As with just-in-time blocking, surveillance eludes the ONI’s methods and is generally quite difficult to monitor using technical means. It is, however, a very powerful force of information control and can create a stifling climate of self-censorship.
Another control beyond denial that is profiled in Access Controlled relates to the growing and widespread prevalence of cyberspace as a communications environment, and the ways in which third-party intermediaries, including private companies and public institutions, host, service, and ultimately control that environment. At one point in time, it might have been fair to characterize cyberspace as largely a separate and distinct realm—something people “enter into” when they turn on their computers or play video games. Today, however, with always-on portable devices that are fully connected to the Internet, and much of society’s transactions mediated through information and communication technologies—including business, work, government, and play—cyberspace is not so much a distinct realm as it is the very environment we inhabit. Our lives have been digitally disassembled, disaggregated, and dispersed into multiple digital domains. Our “private” information now traverses cables and spectrum owned and operated by numerous private and public institutions located in numerous legal jurisdictions. The same is true of government and business information. It is hosted on servers, each of which may have unique terms-of-service, data-retention, and use policies. Depending on the territorial jurisdiction in which they are located, they may be subject to the pressures of law enforcement and intelligence to turn over that information, either overtly or covertly. And they are subject to a bewildering variety of local, national, and international laws, some of which may conflict.
Issues of censorship that involve terms-of-use policies, takedown notices, and other commercial compliance and service issues are taken up in both the Ethan Zuckerman and Colin Maclay chapters. Zuckerman outlines some of the ways in which competitive market forces can create unintended consequences leading to censorship by ISPs and online service providers (OSPs). Unwilling or afraid to bear the burden of legal and other costs of hosting controversial information, ISPs and OSPs may simply err on the side of caution, leading to a situation where the spaces for hosting content deemed objectionable anywhere are progressively winnowed. As much of what happens online today, from e-mail to documentation to chats, flows through or otherwise depends on these large “cloud” services managed by private companies, such a chilling effect could have profound consequences for freedom of speech and access to information.
Maclay’s chapter focuses on issues of accountability and transparency around OSPs and ISPs that operate or provide services in jurisdictions where Internet censorship takes place. In many countries, Internet companies are either pressured or legally compelled to censor their services or turn over user data, with search engines being among the most commonly affected. In China, for example, major search engine companies all filter their search results, and at least one has turned over personal data to Chinese authorities, resulting in arrests. These practices have garnered significant controversy, particularly in the United States, where the largest of them—Microsoft, Yahoo!, Google—are based. In an effort to forestall legislation that would restrict their investment practices abroad, these companies have entered into a self-regulation pact, called the Global Network Initiative, which Maclay analyzes and discusses. Given that much of cyberspace is operated by the private sector, such self-regulation pacts may become a more common feature of cyberspace governance, as, undoubtedly, will the policing of Internet content controls.
Conclusion
The trends and findings analyzed in Access Controlled reveal a rapidly emerging normative terrain that should be of concern to policymakers, advocacy and rights networks, and academics. Given the strategic importance of the OSCE, in terms of relative military capabilities, wealth, and diplomatic influence, the norms emerging from this region are bound to have unintended consequences all over the world. Understanding those impacts will be of paramount importance for Internet governance at all levels in years to come.
Probably the most important norm is the “security first” orientation toward Internet governance, driven in part by the fear of terrorism and in part by concerns about protecting vulnerable populations (particularly children) from exploitation. Across the OSCE, communities of practice in law enforcement, intelligence, and the private sector are working, often in uncoordinated, discrete, but like-minded ways, leading to a normalization of Internet surveillance and censorship across all sectors of cyberspace. It is perhaps ironic that these norms, so antithetical to basic rights and freedoms, are being propagated from many countries that just over a decade ago were responsible for the expansion of liberal democratic principles and market capitalism across the globe. And yet upon closer consideration such trends conform to what have been called the “governmentality” practices that generally characterize these societies, as techniques of control become progressively more refined, technologically rigorous, and bureaucratically complex. Although not “socially sinister,” as David Lyon puts it, what he calls “everyday surveillance” has routinized itself into ordinary life in so many myriad ways that it has become the taken-for-granted context within which modern industrialized society operates.¹⁰ The security-first norm around Internet governance can be seen, therefore, as but another manifestation of these wider developments. Internet censorship and surveillance—once largely confined to authoritarian regimes—are now fast becoming the global norm.
But there is a second characteristic of this newly emerging normative terrain that is unique to cyberspace and the speed with which such changes are being wrought, in particular to the long-standing pillars of modern citizen-state relations. The “social contract” that has set the basic framework for citizen-state relations in the modern industrialized period has been shaped by decades of technological and social change and institutional innovations. One must be careful, therefore, about ascribing to contemporary events unique and epochal challenges. However, the way in which citizen-state relations are being upset in a very compressed time frame is worth noting, and may be comparable only to what happened at the height of the industrial revolution itself. In such a context of rapid technological and social change, the margin for error and unintended consequence around laws and regulations is enormous, as path dependencies open up around fast-moving developments that only in hindsight can be identified as such.
The salience of such impacts can be seen in the practices surrounding the distributed ownership infrastructure of cyberspace. Today, people’s everyday lives are mediated not only through the state per se, but dispersed through clouds of digital-electronic telecommunications owned and operated by private entities. Each of these clouds—often spanning multiple national jurisdictions—represents a potential, and often actual, locus of private authority. As shown throughout each of the chapters in this volume, the decisions they make on when to retain, filter, monitor, and share the information they control (and with whom) are increasingly having important political ramifications for citizens the world over. The normative terrain outlined in Access Controlled thus offers a compelling example of the privatization of authority.
Perhaps the most important unintended consequences may come from new conflicts and offensive operations documented in this volume. The growing acceptance of the militarization of cyberspace, by states and by third-party actors, risks significant blowbacks as these techniques—once hidden from view or confined to marginalized contexts—become an entrenched characteristic of global relations. Societies around the world—none more so than those of the OSCE—are heavily dependent on globally networked technologies. They have been locked in and interpenetrated by a digital web of their own spinning.¹¹ And so from a rational perspective, an arms race in cyberspace is to no one’s advantage; a collapse of one information infrastructure would undoubtedly affect others—perhaps even the perpetrator. But as so often is the case in the competitive dynamics of world politics, the logic of security dilemmas can easily overwhelm and entrap rational decision-making processes. Today, governments are responding to the threats of cyberwar not by pursuing norms of mutual restraint but by endorsing new techniques of offensive operations, including outsourcing to third-party actors and criminal organizations.
Last, this newly emerging normative terrain of next-generation Internet controls presents major challenges to monitoring organizations, including the ONI itself. The technical investigations that informed our country studies and that are reported on here represent a methodology born of the need to monitor first-generation technical filtering techniques. If the trends identified in Access Controlled are accurate, then these first-generation filtering techniques may be gradually superseded by a variety of next-generation controls that are more subtle and fluid and deeply integrated into social relations rather than fixed at specific choke points. This possibility suggests that the ONI itself must now respond with a new suite of methodologies if it hopes to remain relevant to the challenges of cyberspace governance that lie ahead.
Notes
1. John Palfrey and Urs Gasser, Born Digital: Understanding the First Generation of Digital Natives (New York: Basic Books, 2008).
2. Unpredictable consequences of technological change is a theme explored in Ronald J. Deibert, Parchment, Printing and Hypermedia: Modes of Communication in World Order Transformation (New York: Columbia University Press, 1997).
3. Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers (New York: Berkley Books, 1998).
4. Ronald J. Deibert, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain, eds., Access Denied: The Practice and Policy of Global Internet Filtering (Cambridge, MA: MIT Press, 2008).
5. Steven J. Murdoch and Ross Anderson, “Tools and Technology of Internet Filtering,” in Access Denied: The Practice and Policy of Global Internet Filtering, ed. Ronald J. Deibert, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain (Cambridge, MA: MIT Press, 2008), 57–72.
6. Ronald J. Deibert and Rafal Rohozinski, “Good for Liberty, Bad for Security? Global Civil Society and the Securitization of the Internet,” in Access Denied: The Practice and Policy of Global Internet Filtering, ed. Ronald J. Deibert, John Palfrey, Rafal Rohozinski, and Jonathan Zittrain (Cambridge, MA: MIT Press, 2008), 123–149.
7. Information Warfare Monitor, “Tracking GhostNet: Investigating a Cyber Espionage Network,” Citizen Lab/the SecDev Group, March 29, 2009, http://www.tracking-ghost.net.
8. OpenNet Initiative, “Special Report: Kyrgyzstan, Election Monitoring in Kyrgyzstan,” April 15, 2005, http://www.opennetinitiative.net/special/kg/.
9. For example, see Col. Charles W. Williamson III, “Carpet Bombing in Cyberspace: Why America Needs a Military Botnet,” Armed Forces Journal (May 2008), http://www.armedforcesjournal.com/2008/05/3375884.
10. David Lyon, Surveillance Society: Monitoring Everyday Life (Buckingham, UK: Open University Press, 2001).
11. Ronald J. Deibert, “Network Power,” in The Political Economy of a Changing Global Order, 2nd ed., ed. Richard Stubbs and Geoffrey Underhill (New York: Oxford University Press, 1999).
2 Control and Subversion in Russian Cyberspace
Ronald Deibert and Rafal Rohozinski
Introduction
It has become a truism to link censorship in cyberspace to the practices of authoritarian regimes. Around the world, the most repressive governments—China, Burma, North Korea, Cuba, Saudi Arabia—are the ones that erect digital firewalls that restrict citizens’ access to information, filter political content, and stymie freedom of speech online. When we turn to the countries of the former Soviet Union—Russia and the Commonwealth of Independent States (CIS)—we should expect no different. The Economist index of democracy paints a bleak picture of political freedoms in the CIS (see Table 2.1; numbers represent the country’s rank in the world).¹ Only two countries, Ukraine and Moldova, rank as flawed democracies, with the remaining 10 countries of the region described as either hybrid regimes or authoritarian.
Throughout the CIS, this creeping authoritarianism is evident in just about every facet of social and political life. Independent media are stifled, journalists intimidated, and opposition parties and civil society groups harassed and subject to a variety of suffocating regulations. And yet, in spite of this increasingly constrained environment, the Internet remains accessible and relatively free from filtering. The ONI has tested extensively throughout the CIS region, far deeper and more regularly in fact than in any other region in the world. To date we have documented traditional “Chinese-style” Internet filtering—the deliberate and static blocking of Internet content and services by state sanction—only in Uzbekistan and Turkmenistan. For the rest of the region, while connectivity may be poor and unreliable, and suffer from the usual rent-seeking distortions found in other developing country environments, the same basic content is available there as in the most open country contexts.
In our chapter, we explore this seeming disjuncture between authoritarianism in the CIS and the relative freedom enjoyed in Russian cyberspace, commonly known as RUNET. We argue that attempts to regulate and impose controls over cyberspace in the CIS are not necessarily absent (as ONI testing results may suggest) but are different than in other regions of the world. We hypothesize that CIS control strategies have evolved several generations ahead of those used in other regions of the world (including China and the Middle East). In RUNET, control strategies tend to be more subtle and sophisticated and designed to shape and affect when and how information is received by users, rather than denying access outright.
Table 2.1
INDEX OF DEMOCRACY
Source: The Economist Intelligence Unit, “The Economist Intelligence Unit’s Index of Democracy 2008,” 2008, http://graphics.eiu.com/PDF/Democracy%20Index%202008.pdf.
One reason for this difference may be the prior experiences of governments and opposition groups in the region. State authorities are aware of the Internet’s potential for mobilizing opposition and protest that goes far beyond the nature of content that can be downloaded from Web sites, chat rooms, and blogs. These technologies have the potential to enable regime change, as demonstrated by the eponymous color revolutions in Ukraine, Georgia, and Kyrgyzstan. By the same token, state actors have also come to recognize that these technologies make opposition movements vulnerable, and that disruption, intimidation, and disinformation can also cause these movements to fragment and fail. The failure of opposition movements in Belarus and Azerbaijan to ignite a wider social mobilization, along with the role that targeted information controls played in fragmenting and limiting the effectiveness of these movements, also points to the possible trajectory in which controls aimed at Russian cyberspace may be moving.
Our chapter unfolds in several steps. We begin by describing some of the unique characteristics of the “hidden” information revolution that has taken place in Russian cyberspace since the end of the cold war. Contrary to widespread perceptions outside of the region, Russian cyberspace is a thriving and dynamic space, vital to economics, society, and politics. Second, we outline three generations of cyberspace controls that emerge from the research conducted by the ONI in this region. First-generation controls—so-called Chinese-style filtering—are unpopular and infrequently applied. While instances of filtering have been identified in just about all CIS countries, wide-scale national filtering is only pursued as a matter of state policy in two of the CIS states. Rather, information control seems to be exercised by way of more subtle, hidden, and temporally specific forms of denial. These controls can involve legal and normative pressures and regulations designed to inculcate an environment of self-censorship. Others, like denial-of-service attacks, result in Web sites and services becoming unavailable, often during times of heightened political activity. Still others, like mass blogging by political activists on opposition Web sites, cannot be characterized as an attack per se, although the outcome of silencing these Web sites is as effective as traditional filtering (if not more so).
These second- and third-generation controls are increasingly widespread, and they are elusive to traditional ONI testing methods. They are difficult to measure and often require in-depth fieldwork to verify. Consequently, many of the examples in this chapter are based on field investigations carried out by our ONI regional partners where technical testing was used to establish the characteristics of controls, rather than measure the extent of them. We hypothesize that, although these next-generation controls emerged in the CIS, they may in fact be increasingly practiced elsewhere. In the next section of the chapter we turn our lens beyond the CIS to find examples of second- and third-generation controls.
We conclude by arguing that, contrary to initial expectations, first-generation filtering techniques may become increasingly rare outside of a few select content categories, raising serious public policy issues around accountability and transparency of information controls in cyberspace. The future of cyberspace controls, we argue, can be found in RUNET.
RUNET
On July 6, 2006, Russian President Vladimir Putin fielded questions from the Internet at an event organized by the leading Russian Web portal Yandex.² It was the first time a Russian leader directly engaged and interacted with an Internet audience. The event itself made few headlines in the international media, but in Russia it marked an important milestone. The Internet had graduated to the mainstream of Russian politics and was being treated by the highest levels of state authority as equal in importance to television, radio, and newspapers. The question put to President Putin by the Internet audience also revealed a sense of the informal, irreverent culture of Russian cyberspace. Over 5,640 netizens wrote in to ask when the President first had sex. More surprising, perhaps, was that Putin replied.³
The rise of the Internet to the center of Russian culture and politics remains poorly understood and insufficiently studied. With the end of the cold war and the demise of the USSR, Russia and the CIS entered into a long period of decline. Economies stagnated, political systems languished, and the pillars of superpower status—military capacities and advanced scientific and technological potential—rapidly ebbed away. Overnight, the CIS became less relevant and dynamic. The precipitously declining population rates in the Slavic heartland, a wholesale free-for-all of mafiya-led privatization, growing impoverishment, and failing public infrastructure all made the distant promise of a knowledge revolution led by information technologies seem highly improbable.
Moreover, the prospects for Russia and the CIS keeping up with the Internet and telecom boom of the late 1990s and early 2000s seemed, for many, a distant reality. By the time the USSR finally collapsed in 1991, it had the lowest teledensity of any industrialized country. Its capacity for scientific development, particularly in the field of PCs (which the USSR had failed to develop) and computer networking (which was based on reverse-engineered systems pirated from European countries), was weak to nonexistent. Moreover, Russian seemed to be a declining culture and language as newly independent CIS countries adopted national languages and scripts, and preferred to send their youth to study at Western institutions. In almost every major indicator of economic progress, political reform, scientific research, and telecommunications capacity, the countries of the CIS seemed headed for the dustheap of history. Not surprisingly, scholarly and policy interest in the effects and impact of the information revolution in the CIS waned, as attention focused on the rising behemoths in Asia (particularly China and India) and the need and potential of bridging the digital divide in Africa and the Middle East. And yet, during the last decade the CIS has undergone a largely unnoticed information revolution. Between 2000 and 2008 the Russian portion of cyberspace, or RUNET, which encompasses the countries of the CIS, grew at an average rate of 7,208 percent, or over five times the rate of the next fastest region (the Middle East) and 15 times faster than Asia (see Table 2.2).
More than 55 million people are online in the CIS, and Russia is now the ninth-largest Internet country in terms of its percentage of world users, just ahead of South Korea.⁴ By the latest official estimates, 38 million Russians, or a third of the population of the Russian Federation, are connected, with over 60 percent of those surfing the Internet from home on broadband connections. And these figures may be low. Russian cyberspace also embraces the global Russian diaspora that, through successive waves of emigration, is estimated at above 27 million worldwide. Many Russian émigrés reside in developed countries but tend to live online in the RUNET. Statistics to back this claim are methodologically problematic, but anecdotal evidence suggests that this is the case. The popular free mail service mail.ru, for example, boasts over 50 million user accounts, suggesting that the number of inhabitants of Russian cyberspace may be significantly above the 57 million users resident in the CIS. And these figures are set to rise—dramatically. By official predictions, Russia’s Internet population is set to double to over 80 million users by 2012.⁵
Table 2.2
PROFILE OF INTERNET USE, PENETRATION, AND GROWTH IN THE CIS
Source: Miniwatts Marketing Group, Internet World Statistics, 2009, http://www.internetworldstats.com.
Paradoxically, the very Russianness of the RUNET may have contributed to hiding this “cyber revolution.” Unlike much of the Internet, which remains dominated by English and dependent on popular applications and services that are provided by U.S.-based companies (such as Google, Yahoo, and Hotmail), RUNET is a self-contained linguistic and cultural environment with well-developed and highly popular search engines, Web portals, social network sites, and free e-mail services. These sites and services are modeled on services available in the United States and the English-speaking world but are completely separate, independent, and only available in Russian.⁶ In a recent ranking of Internet search engines, the Russian Web portal Yandex was one of only three non-English portals to make the top ten, beaten out only by Baidu (China) and NHN (Korea), both of which have much larger absolute user bases.⁷ Within RUNET, Russian search engines dominate, with Yandex (often called the Google of Russia) beating out Google with 70 percent of the market (Google has between 18 and 20 percent).⁸
The RUNET is also increasingly central to politics. Elections across the CIS are now fought online, as the Internet has eclipsed all the mass media in terms of its reach, readership, and especially in the degree of free speech and opportunity to mobilize that it provides. By 2008, Yandex could claim a readership larger than that of the popular mainstream newspapers Izvestia, Komsomolskaya Pravda, and Moskovsky Komsomolets combined.⁹ The Russian-language blogosphere—which currently makes up 3 percent of the world’s 3.1 million blogs—grows by more than 7,000 new blogs per day.¹⁰ There are currently more Russian-language blogs than there are French, German, or Portuguese, and only marginally fewer than Spanish,¹¹ which is spoken by a larger percentage of the world population.¹²
This shift has been fueled as much by the growing state control over the traditional mass media as it has been by the draw of what the new online environment has to offer. Well-known journalists, commentators, and political figures have all turned to the RUNET as the off-line environment suffers through more severe restrictions and sanctions. Across the CIS, especially in the increasingly authoritarian countries of Uzbekistan, Belarus, and Kazakhstan, the RUNET has become the last and only refuge of public debate. Given its rapid ascent to the popular mainstream, it is paradoxical—and certainly a puzzle—that RUNET has eluded filtering controls of the kind imposed by China on its Internet in all but a few countries. In the next section, we explore why that is the case.
Next-Generation Information Controls in the CIS
Although RUNET is a wild hive of buzzing online activity, it is not completely unregulated. Since its emergence in the early 1990s, RUNET has been subject to a variety of controls. Some controls have been commercial in motivation and represent crude attempts to use formal authority to create what amounts to a monopoly over secure communications and a means to seek rents.¹³ This form of control has not been unique to RUNET and has extended to every other facet of post-Soviet life, from car registration through to the supply of gasoline, as an aspect of the great scramble to prihvatizatsia public assets that occurred during the early to mid 1990s.¹⁴ Other controls have emerged from a legal system inherited from the Soviet era, which criminalized activities without necessarily seeking prosecution, except selectively. These forms of control effectively form the rules of the game for all informal networks. Their emergence in the virtual online world of the RUNET is transparent and natural.
But during the late 1990s, and especially following the color revolutions that swept through the CIS region, states began to think seriously about the security implications of RUNET, and in particular its potential to enable mobilization of mass social unrest. The first attempts at formally controlling cyberspace were legal, beginning with legislation enabling surveillance (SORM-II),¹⁵ and later in 2001 with the publication of Russia’s Doctrine of Information Security. While the doctrine addressed mass media and did not focus on RUNET specifically, it declared the information sphere to be a vital national asset that required state protection and policing. The doctrine used strong language to describe the state’s right to guide the development of this space, as well as its responsibility to ensure that information space respects “the stability of the constitutional order, sovereignty, and the territorial integrity of Russia, political, economic and social stability, the unconditional ensuring of legality, law and order, and the development of equal and mutually beneficial international cooperation.”¹⁶
The intent of the doctrine was as much international as it was domestic, establishing demarcated borders in cyberspace, at least in principle. The international intent of the doctrine appears to have been driven by a growing concern that Russia was falling behind its major adversaries in developing a military capability in cyberspace; efforts by countries such as the United States, China, India, and others to develop covert computer network attack capabilities risked creating a strategic imbalance.¹⁷ Domestically, the doctrine was aimed at the use of the Internet by militant groups to conduct information operations, specifically the Chechen insurgency. Within a few years, most other CIS countries had followed suit, adopting variations of the Russian doctrine.
ONI Tests for Internet Controls in RUNET
The controls outlined previously are qualitatively different from the usual types of controls for which the ONI tests. Establishing empirical evidence of the effects of policies like SORM and the Doctrine of Information Security is challenging, since their application is largely contextual, their impact at times almost metaphysical. Such controls do not yield a technological “fingerprint” in the way that a filtering system blocking access to Internet content does. However, they may be just as effective, if not more so, in achieving the same outcomes. In its 2007 study of the policy and practice of Internet filtering, the ONI found that substantial and pervasive attempts to technically filter content on RUNET did not begin until 2004, and even then were isolated to Turkmenistan and Uzbekistan, with lesser attempts at filtering found in most other CIS countries (see Table 2.3).¹⁸
These findings have remained consistent in more recent rounds of ONI tests. And yet persistent anecdotal reports, as well as special monitoring efforts mounted by the ONI, reveal that in the majority of CIS countries information denial and access shaping are occurring on a significant scale, especially around critical events such as elections. The ONI carried out a number of special investigations, including monitoring efforts during the 2005 parliamentary elections in Kyrgyzstan¹⁹ and the March 2006 Belarus presidential elections.²⁰ These efforts yielded the first technically verified evidence that the RUNET was being deliberately tampered with to achieve a political effect.
The results obtained by ONI in the CIS are unique, and they differ significantly from the results obtained in ONI’s global survey. They demonstrate that information controls in the CIS have developed in different ways and using different techniques than those found in other areas of the world. They suggest a much more sophisticated approach to managing networks through denial that is highly selective and event based, and that shapes access to the sources of information and means of communication in a manner that could plausibly be explained by errant technical failures or other random network effects. In the following sections, we define the three different generations of cyberspace controls and provide examples for each from our research in the CIS region. The three generations of controls are also summarized in Table 2.4.
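To make concrete what this kind of testing involves at a technical level, the following is a minimal sketch, not the ONI’s actual toolkit: each URL on a hypothetical test list is checked for DNS resolution and HTTP response, and the outcome is logged so that a run from an in-country vantage point can later be compared with a control run made elsewhere. The URL list, output file, and field names are illustrative assumptions.

```python
# Minimal sketch of an accessibility probe (illustrative; not the ONI's actual tools).
# Each URL is checked for DNS resolution and HTTP response; outcomes are written to a
# CSV file so that an in-country run can later be compared against a control run.
import csv
import socket
import urllib.request
from urllib.error import HTTPError, URLError
from urllib.parse import urlparse

URL_LIST = [                      # hypothetical test list
    "http://news.example.org/",
    "http://opposition.example.net/",
]

def probe(url: str) -> dict:
    result = {"url": url, "resolved_ip": "", "http_status": "", "error": ""}
    host = urlparse(url).hostname or ""
    try:
        result["resolved_ip"] = socket.gethostbyname(host)   # does the name resolve at all?
    except socket.gaierror as exc:
        result["error"] = f"dns-failure: {exc}"
        return result
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            result["http_status"] = str(resp.status)          # e.g., 200 for an ordinary page
    except HTTPError as exc:
        result["http_status"] = str(exc.code)                 # explicit block pages often return 403
        result["error"] = f"http-error: {exc.code}"
    except (URLError, OSError) as exc:
        result["error"] = f"connect-failure: {exc}"           # resets/timeouts typical of inline filtering
    return result

if __name__ == "__main__":
    with open("probe_results.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["url", "resolved_ip", "http_status", "error"])
        writer.writeheader()
        for u in URL_LIST:
            writer.writerow(probe(u))
```

Results collected this way mean little in isolation; the comparison between vantage points, and across repeated runs over time, is what allows deliberate interference to be separated from ordinary outages.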
Table 2.3 Summary results for ONI testing for Internet filtering, 2007–2008
First-Generation Controls
First-generation controls focus on denying access to specific Internet resources by directly blocking access to servers, domains, keywords, and IP addresses. This type of filtering is typically achieved by the use of specialized software or by implementing instructions manually into routers at key Internet choke points. First-generation filtering is found throughout the world, in particular among authoritarian countries, and is the phenomenon targeted for monitoring by the ONI’s methodology. In some countries, compliance with first-generation filtering is checked manually by security forces, who physically police cybercafés and ISPs.
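The logic behind such blocking is simple, and a short sketch can make it concrete. The example below uses entirely hypothetical blocklist entries (the domains, the 203.0.113.0/24 documentation prefix, and the keyword are placeholders) to show how a filtering proxy or a manually configured rule might deny a request when its hostname, its resolved IP address, or the URL itself matches a blocked domain, network prefix, or keyword.

```python
# Illustrative first-generation filtering logic with hypothetical blocklist entries.
# A request is denied if its hostname, its resolved IP address, or the URL itself
# matches a blocked domain, network prefix, or keyword.
import ipaddress
import socket
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"blocked-example.org"}                      # hypothetical domain rule
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]    # documentation-only prefix
BLOCKED_KEYWORDS = {"banned-topic"}                            # hypothetical keyword rule

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # 1. Domain rule: match the host itself or any of its parent domains.
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # 2. IP rule: resolve the host and test the address against blocked prefixes.
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
        if any(addr in net for net in BLOCKED_NETWORKS):
            return True
    except (socket.gaierror, ValueError):
        pass                                                    # unresolved hosts fall through
    # 3. Keyword rule: crude substring match against the full URL.
    return any(kw in url.lower() for kw in BLOCKED_KEYWORDS)

print(is_blocked("http://blocked-example.org/news"))   # True: domain rule
print(is_blocked("http://open-example.net/page"))      # False, unless its IP or URL matches a rule
```

The same three rule types, implemented in router access lists or commercial filtering appliances rather than in a script, account for most of the first-generation blocking the ONI observes.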
In the CIS, first-generation controls are practiced on a wide scale only in Uzbekistan and Turkmenistan. In Uzbekistan, a special department of the SNB (the successor to the KGB) monitors the Internet and develops block lists, which are then conveyed to individual ISPs, who in turn implement blocking of the specified resources or domain names. The filtering is universal across all ISPs, and the SNB spot-checks ISPs for compliance. In Turkmenistan, filtering is centralized on the country’s sole ISP (operated by Turkmentelekom), and access is heavily filtered. Up until late 2007, Internet access in Turkmenistan was also severely restricted and expensive, limiting its reach and impact.
Table 2.4
1. The legal and normative environment for information control includes the following:
a. Compelling Internet sites to register with authorities and using noncompliance as grounds for filtering illegal content.
b. Strict criteria pertaining to what is acceptable within the national media space, leading to the de-registration of sites that do not comply.
c. Expanded use of defamation, slander, and veracity laws to deter bloggers and independent media from posting material critical of the government or specific government officials.
d. Invoking national security concerns, especially at times of civic unrest, as the justification for blocking specific Internet content and services.
e. A legal regime for Internet surveillance.
2. Computer network attacks (CNA) have been used by both Azeri and Armenian hackers in an ongoing series of attacks. It is unclear whether these are the actions of individual hackers or whether these groups receive tacit or direct support from the state. The attacks are directed against the Web sites of the opposing country and so are not a content control mechanism.
3. The DDoS attacks were outsourced to commercial black hat hackers in Ukraine. The party ordering the attacks is unknown, but suspicion falls on rogue elements inside the security services.
A second practice associated with first-generation blocking is the policing and surveillance of Internet cafés. In Uzbekistan, SNB officers monitor Internet cafés, often enlisting café owners to notify them of individual users who try to access banned sites. Many Uzbek Internet cafés now openly post notices warning that viewing illegal sites is subject to fine and arrest. On several occasions, ONI researchers have manually verified this surveillance.
Second-Generation Controls
Second-generation controls aim to create a legal and normative environment, and the technical capabilities, that enable state actors to deny access to information resources as and when needed, while reducing the possibility of blowback or discovery. Second-generation controls have an overt and a covert track. The overt track aims to legalize content controls by specifying the conditions under which access can be denied. Instruments here include the doctrine of information security as well as the application of existing laws, such as those on slander and defamation, to the online environment. The covert track establishes procedures and technical capabilities that allow content controls to be applied “just in time,” when the information being targeted has the highest value (e.g., during elections or public demonstrations), and to be applied in ways that assure plausible deniability.
The legal mechanisms used in the overt track vary from country to country, but most share the characteristic of establishing a form of double jeopardy for RUNET users: requirements are framed so that compliance creates grounds for prosecution, while noncompliance establishes a legal basis for sanction.
The following are among the more common legal mechanisms being applied:
Compelling Internet sites to register with authorities, using noncompliance as grounds for taking down or filtering illegal content and possibly revoking service providers’ licenses. This tack is effectively used in Kazakhstan and Belarus, and it is currently being considered in Russia. The mechanism is particularly effective because it creates multiple disincentives for potential Web site owners, who must go through the hassle of registering with the authorities, which in turn leaves them open to legal sanction should their site be deemed to be carrying illegal content. It also creates double jeopardy for international content providers (such as the BBC, CNN, and others) and raises the question of whether they should register their services locally. In practice, the registration requirement applies to them so long as their audience is local, and a failure to comply leaves open the option of filtering their content for noncompliance with local registration requirements. On the other hand, registering would make the content they carry subject to local laws, which may deem that content unacceptable or slanderous and could lead to legally sanctioned filtering.
Strict criteria pertaining to what is acceptable within the national media space, leading to the de-registration of sites that do not comply. In Kazakhstan, opposition Web sites or Web sites carrying material critical of the government are regularly de-registered from the national domain. This includes a large number of opposition sites and, notably, the Borat Web site, ostensibly because the owners of the site were not resident in Kazakhstan as required by the Kazakh domain authority. In Belarus, the popular portal tut.by refused to put up banners advertising opposition Web sites, possibly for fear of reprisals (although those fears were not made explicit).²¹
Expanded use of defamation, slander, and veracity laws to deter bloggers and independent media from posting material critical of the government or specific government officials, however benign (including humor). In Belarus, slander laws were used to prosecute the owner of a Web site that posted cartoons of the president. In both Belarus and Uzbekistan, the law on mass media requires that reporting pass an “objectivity test”: journalists and editors are held responsible for the veracity of publications and postings, leading to a high degree of self-censorship. In Kazakhstan, there are several cases of opposition and independent media Web sites being suspended for providing links to publications about corruption among senior state officials and the president.
Invoking national security concerns, especially at times of civic unrest, as the justification for blocking specific Internet content and services. Most recently, this justification was invoked in Armenia when the opposition demonstrations that followed the February 2008 presidential elections turned violent, leading to the death or injury of several dozen protesters. President Kocharian declared a 20-day state of emergency, which also led to the de-registration of popular Armenian political and news sites, including a site carrying the Armenian-language BBC service, and to the filtering of YouTube (ostensibly because footage of the rioting had allegedly been posted to the popular video-sharing site).²² Similar filtering occurred during the Russian-Georgian crisis of 2008, when Georgia ordered ISPs to block access to Russian media. The blocks had the unintended consequence of creating panic in Tbilisi, as some Georgians perceived them as a signal of an impending Russian invasion of the capital.
The technical capabilities typical of second-generation controls are calibrated to effect “just-in-time” or event-based denial of selected content or services.²³ These techniques can be difficult to verify, as they can be made to look like technical errors. One of the more common techniques involves formal and informal requests to ISPs. Providers in the CIS are under constant pressure to comply with government requests or face any number of possible sanctions if they do not, from visits by the taxation police to the revocation of their licenses. Such pressures make them vulnerable to requests from the authorities, especially those conveyed informally. In Russia, top-level ISPs are in the hands of large telecommunications companies, such as TransTeleKom and Rostelecom, with strong ties to the government. These providers appear responsive to informal requests to make certain content inaccessible, particularly when the information could prove embarrassing to the government or its officials. In one such case, the popular Russian site Kompromat.ru, known for publishing documents and photographs of corrupt or illegal practices (roughly analogous to the Web site wikileaks.com), was de-registered or filtered by several top-level ISPs (including TransTeleKom and Rostelecom). Service was later restored, and the blocking of the site was deemed accidental. Nonetheless, the Web site was inaccessible throughout the February 2008 Russian presidential poll.²⁴ Similar incidents have been documented in Azerbaijan, where Web sites critical of President Ilham Aliyev were filtered by ISPs, apparently at the request of the security department of the office of the president.²⁵ A similar dynamic is found in Kazakhstan, where a number of Web sites are inaccessible on a regular basis, with no official reason ever being given.²⁶
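Distinguishing this kind of event-based denial from ordinary technical failure depends largely on sustained, timestamped observation. The following is a minimal monitoring sketch, with a hypothetical stand-in URL and an arbitrary polling interval, of the sort of probe that can show an outage window coinciding with an election or other critical event rather than occurring at random.

```python
# Minimal event-based monitoring sketch: poll a single (hypothetical) site at a fixed
# interval and print timestamped outcomes, so that any outage window can later be
# compared against the timeline of an election or other critical event.
import time
from datetime import datetime, timezone
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

TARGET = "http://monitored-site.example.org/"   # hypothetical stand-in URL
INTERVAL_SECONDS = 300                          # probe every five minutes

def probe_once(url: str) -> str:
    try:
        with urlopen(url, timeout=10) as resp:
            return f"ok status={resp.status}"
    except HTTPError as exc:
        return f"http-error {exc.code}"          # the server answered with an error or block page
    except (URLError, OSError) as exc:
        return f"unreachable ({exc})"            # DNS failure, connection reset, or timeout

if __name__ == "__main__":
    while True:
        stamp = datetime.now(timezone.utc).isoformat()
        print(f"{stamp} {TARGET} {probe_once(TARGET)}", flush=True)
        time.sleep(INTERVAL_SECONDS)
```

A log of this kind, run from several vantage points, is what allowed cases like the Kompromat.ru outage to be dated against the electoral calendar rather than dismissed as routine failure.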
Other, less subtle but nonetheless effective technical means include shutting down Internet access, as well as selected telecommunications services such as cell phone service and especially the short message service (SMS). Temporary outages of Internet and SMS services were employed by the Belarusian authorities during the March 2006 presidential elections as a means to limit the opposition’s ability to launch street demonstrations of the type that precipitated the color revolutions in Ukraine, Georgia, and Kyrgyzstan. At first, the authorities denied that any interruptions had taken place; later they attributed the failures to technical reasons.²⁷ Similar instances were reported (although not verified) to have occurred during the 2007 elections in Azerbaijan.
Second-generation techniques also make extensive use of computer network attacks, especially distributed denial-of-service (DDoS) attacks, which can overwhelm ISPs and selected sites and which make tracking down the perpetrators difficult, since the attacks themselves are sold and engineered by black hat hackers and can be ordered by anyone. Such attacks were used extensively during the 2005 Kyrgyz parliamentary elections that precipitated the Tulip Revolution.²⁸ They were also used during the 2006 Belarus elections against opposition political and news sites. In 2008, presidential and parliamentary elections in many parts of the region saw the significant use of DDoS attacks against the Web sites of major opposition leaders as well as prominent human rights groups. More recently, computer network attacks have been conducted by state-sanctioned “patriotic hackers” who act as vigilantes in cyberspace. A Russian hacker who admitted that FSB officers had encouraged him repeatedly brought down the pro-Chechen Web site Kavkaz Center.²⁹ There is strong suspicion that the May 2007 DDoS attacks that brought down most of Estonia’s networks were the work of state-sanctioned “patriotic hackers” responding to unofficial calls from the FSB to “punish” Estonia over the removal of a monument to Soviet soldiers in Tallinn. Such attacks were also a prominent feature of the Russian-Georgian crisis of 2008. Several prominent investigations have been undertaken to determine attribution in this case, including an ongoing one by the ONI’s sister project, the Information Warfare Monitor, and to date no definitive evidence has been found linking the attacks to the Russian security forces.
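From the perspective of a targeted site, such attacks typically first appear as an unexplained surge in request volume. The sketch below is illustrative only: it assumes a hypothetical access log in common log format and an arbitrary per-minute baseline, and it simply counts requests per minute and reports the minutes that exceed the threshold, the crude volumetric signature of a DDoS, rather than reconstructing any incident described here.

```python
# Illustrative volumetric check for a DDoS-style surge: count requests per minute in a
# (hypothetical) common-log-format access log and report minutes above a baseline.
import re
from collections import Counter

LOG_PATH = "access.log"          # hypothetical log location
THRESHOLD_PER_MINUTE = 5000      # arbitrary baseline; a real operator would tune this

# Matches lines like: 203.0.113.5 - - [19/Mar/2006:14:02:31 +0200] "GET / HTTP/1.1" 200 512
LINE_RE = re.compile(r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2}')

def minutes_over_threshold(path: str) -> list[tuple[str, int]]:
    per_minute = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = LINE_RE.match(line)
            if match:
                per_minute[match.group(2)] += 1   # bucket by timestamp truncated to the minute
    return [(minute, count) for minute, count in sorted(per_minute.items())
            if count > THRESHOLD_PER_MINUTE]

if __name__ == "__main__":
    for minute, count in minutes_over_threshold(LOG_PATH):
        print(f"{minute}: {count} requests (above the {THRESHOLD_PER_MINUTE}/min baseline)")
```

Because the surge is distributed across many compromised source addresses, counts of this kind show that an attack occurred far more readily than they show who ordered it, which is precisely why attribution remains contested.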
Third-Generation Controls
Unlike the first two generations of content controls, third-generation controls take a highly sophisticated, multidimensional approach to enhancing state control over national cyberspace and to building capabilities for competing in information space with potential adversaries and competitors. The key characteristic of third-generation controls is that the focus is less on denying access than on successfully competing with potential threats through effective counterinformation campaigns that overwhelm, discredit, or demoralize opponents. Third-generation controls also involve the active use of surveillance and data mining as means to confuse and entrap opponents.
Third-generation controls include enhancing jurisdiction over national cyberspace and expanding the powers of state surveillance, including the warrantless monitoring of Internet users and usage. In 2008, Russia expanded the powers previously established under SORM-II, which obliged ISPs to purchase and install equipment permitting local FSB offices to monitor the Internet activity of specific users. The new legislation makes it possible to monitor all Internet traffic and personal usage without specific warrants. The legislation effectively brings into the open covert powers that were previously assigned to FAPSI, with the twist of transferring to the ISPs the entire cost of installing the necessary equipment. The SORM-II law was widely used as a model for similar legislation in other CIS countries, and it is expected that the new law will likewise become a standard in the CIS. Although it is difficult to verify the use of surveillance in specific instances, inferences can be drawn from particular examples. In July 2008, a Moldovan court ordered the seizure of the personal computers of 12 individuals for allegedly posting critical comments against the governing party. They were accused of illegally inciting people to overthrow the constitutional order and of threatening the stability and territorial integrity of the Republic of Moldova. It is unknown how the authorities obtained the names of these people, but some suggest that an ISP provided them with the IP addresses of the users.³⁰
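The technical inference implied here is straightforward. The sketch below uses an entirely invented session log and subscriber identifiers to show the kind of lookup an ISP can perform when asked who was using a given IP address at a given time: the address and timestamp are matched against connection records of the sort kept by RADIUS or DHCP servers.

```python
# Illustrative IP-to-subscriber lookup over an invented session log: given an address
# and a timestamp, find which subscriber held that address at that moment, as an ISP
# could do against RADIUS or DHCP connection records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SessionRecord:
    subscriber_id: str     # invented identifier, standing in for account details
    ip_address: str
    start: datetime
    end: datetime

# Entirely hypothetical session log; a real ISP would query a database of such records.
SESSIONS = [
    SessionRecord("subscriber-001", "192.0.2.10",
                  datetime(2008, 7, 1, 9, 0), datetime(2008, 7, 1, 17, 0)),
    SessionRecord("subscriber-002", "192.0.2.10",
                  datetime(2008, 7, 1, 17, 5), datetime(2008, 7, 1, 23, 0)),
]

def subscriber_for(ip: str, when: datetime) -> str | None:
    """Return the subscriber holding the given IP address at the given time, if any."""
    for record in SESSIONS:
        if record.ip_address == ip and record.start <= when <= record.end:
            return record.subscriber_id
    return None

print(subscriber_for("192.0.2.10", datetime(2008, 7, 1, 18, 30)))   # subscriber-002
```

The point of the sketch is that no sophisticated traffic analysis is required: once an ISP retains session records and is willing (or compelled) to query them, linking online speech to a named subscriber is a routine database lookup.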
Several CIS countries are also pursuing the creation of national cyberzones. Countries such as Kazakhstan, Tajikistan, and Russia are investing heavily in expanding Internet access to schools. These institutions are being tied to special Internet connections, which limit access only to resources found in the national Internet domain. These “national zones” are popular among some Tajik and Kazakh ISPs because they allow the ISPs to provide low-cost connectivity, as traffic is essentially limited to the national segment. In 2007, Russian authorities floated the idea of creating a separate Cyrillic cyberzone, with its own domain space and addressing scheme. National cyberzones