Citizen Lab - Not by Technical Means Alone: The Multidisciplinary Challenge  of Studying Information Controls - 2013

Abstract

The study of information controls is a multidisciplinary challenge. Technical measurements are essential to such a study, but they do not provide insight into why regimes enact controls or what those controls’ social and political effects might be. Investigating these questions requires that researchers pay attention to ideas, values, and power relations. Interpreting technical data using contextual knowledge and social science methods can lead to greater insights into information controls than either technical or social science approaches alone. The OpenNet Initiative has been developing a mixed-methods approach to the study of information controls since 2003.

Introduction

Information controls can be conceptualized as actions conducted in and through the Internet and other information and communication technologies (ICTs). Such controls seek to deny (as with Internet filtering), disrupt (as in distributed denial-of-service, or DDoS, attacks), or monitor (as with surveillance) information for political ends. Here, we examine national-level, state-mandated Internet filtering, but the arguments we raise apply to other information controls and technologies as well.

Technical measurements are essential for determining the prevalence and operation of information controls such as Internet filtering. However, alone, such measurements are insufficient for determining why regimes decide to enact controls and the political, social, and economic impacts of these decisions.

To gain a holistic understanding of information controls, we must study both technical processes and the underlying political, legal, and economic systems behind them. Multiple actors in these systems seek to assert agendas and exercise power, including states (military, law enforcement, and intelligence agencies), inter-governmental organizations, and the private sector. These actors have different positions of influence within technical, political, and legal systems that affect their motivations and actions, as well as the resulting consequences.

At its core, the study of information controls is a study of the ideas, values, and interests that motivate actors, and the power relations among those actors. The Internet is intimately and inseparably connected to social relations and is thus grounded in contexts, from its physical configuration — which is specific to each country — to its political, social, and military uses. Studying information controls’ technical operation and the political and social context behind them is an inherently multidisciplinary exercise.

In 2003, the inter-university OpenNet Initiative (ONI; https://opennet.net) launched with the mission of empirically documenting national-level Internet censorship through a mixed-methods approach that combines technical measurements with fieldwork and legal and policy analysis. At the time, only a few countries filtered the Internet. Since 2003, the ONI has tested for Internet filtering in 74 countries and found that 42 of them — including both authoritarian and democratic regimes — implement some level of filtering. Internet censorship is quickly becoming a global norm. The spread and dynamic character of information controls make the need for evidence-based multidisciplinary research on these practices increasingly important. Here, we present the ONI approach via several case studies and discuss methodological challenges and recommendations for the field moving forward.

Mixed-Methods Approach

Despite the global increase in Internet censorship, multidisciplinary studies have been limited. Technical studies have focused on specific countries (such as China) and filtering technologies.1,2,3 Studies of global Internet filtering have used PlanetLab (www.planet-lab.org), which offers limited vantage points into countries of interest and tests from academic networks that might not represent average national-level connectivity.4,5 In the social sciences, particularly political science and international relations, empirical studies on information controls and the Internet’s impact on global affairs are growing, but they seldom use technical methods. This slow adoption is unsurprising; disciplinary boundaries are deeply entrenched in the social sciences, and incentives to explore unconventional methods, especially ones that require specialized skills, are low. Social scientists are more comfortable focusing on social variables: norms, rules, institutions, and behaviors. Although these variables are universally relevant to social science, for information controls research they should be paired with technical methods, including network measurements.

Studying information controls requires skills and perspectives from multiple disciplines, including computer science (especially network measurement and security), law, political science, sociology, anthropology, and regional studies. Gaining proficiency in all of these fields is difficult for any scholar or research group. We attempted to bridge these areas through a multidisciplinary collaboration. The ONI started as a partnership between the University of Toronto, Harvard University, and the University of Cambridge, bringing together researchers from political science, law, and computer science. Beyond these core institutions, the ONI helped form and continues to support two regional networks, OpenNet Asia (http://opennet-asia.net) and OpenNet Eurasia. Fieldwork conducted by local and regional experts from our research network has been a central component of our approach. The practice and policy of information control can vary widely among countries. Contextual knowledge from researchers who live in the countries of interest, speak the local language, and understand the cultural and political subtleties is indispensable.

Our methods and tools for measuring Internet filtering have evolved gradually over the past 10 years. Early efforts used publicly available proxies and dial-up access to document filtering in China.6 A later approach (which continues today) is client-based, in-country testing. This approach uses client software, written in Python in a client-server model, that we distribute to researchers. The client attempts to access a predefined list of URLs simultaneously in the country of interest (the “field”) and in a control network (the “lab”). In our tests, the lab connection is the University of Toronto network, which doesn’t filter the type of content we test for. Once testing is complete, we compress the results and transfer them to a server for analysis. We collect several data points for each URL access attempt: HTTP headers and status code, IP address, page body, and, in some cases, traceroutes and packet captures. A combined process of automated and manual analysis helps us identify differences in the results returned between the field and lab and isolate filtering instances. Because attempts to access websites from different geographic locations can return different data points for innocuous reasons (such as a domain resolving to different IP addresses for load balancing, or content displaying in different languages depending on where a request originates), we must often manually inspect the results.
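
This field/lab comparison lends itself to a short illustration. The sketch below is ours, not the actual ONI client; the function names and record fields are hypothetical, but it records the same data points described above and flags field/lab differences for manual review.

    # Minimal sketch of ONI-style field/lab comparison (hypothetical
    # helpers, not the actual ONI client). Each probe records the data
    # points named in the text: resolved IP, HTTP status, headers, and
    # a hash of the page body.
    import hashlib
    import socket
    import urllib.request
    from urllib.parse import urlparse

    def probe(url, timeout=15):
        record = {"url": url, "ip": None, "status": None,
                  "headers": None, "body_sha1": None, "error": None}
        try:
            record["ip"] = socket.gethostbyname(urlparse(url).hostname)
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                record["status"] = resp.status
                record["headers"] = dict(resp.getheaders())
                record["body_sha1"] = hashlib.sha1(resp.read()).hexdigest()
        except Exception as exc:  # timeout, reset, DNS failure, ...
            record["error"] = repr(exc)
        return record

    def flag_differences(field, lab):
        # Differences can be innocuous (load balancing, geo-localized
        # content), so flagged URLs go to manual review rather than
        # being declared filtered outright.
        flags = []
        if field["error"] and not lab["error"]:
            flags.append("accessible in lab but not in field")
        if field["status"] != lab["status"]:
            flags.append("different HTTP status")
        if field["ip"] != lab["ip"]:
            flags.append("different IP (could be load balancing)")
        if field["body_sha1"] != lab["body_sha1"]:
            flags.append("different page body (possible block page)")
        return flags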

Internet censorship research involves ethical considerations, particularly when we employ client-based testing, which requires openly accessing numerous potentially sensitive websites in quick succession. This method can pose security concerns for users depending on the location. Because our goal is to reproduce and document an average Internet user’s experience in the target country, the client doesn’t use censorship circumvention or anonymity techniques when conducting tests. Before testing takes place, we hold an informed consent meeting to clearly explain the risks of participating in the research. The decision about where to test is driven by safety and practicality concerns. Often, countries with the potential for interesting data are considered too dangerous for client-based testing. For example, due to security concerns, we did not run client tests during Syria’s recent conflict, or in certain countries (such as Cuba or North Korea) at all.

Internet filtering measurements are only as good as the data sample being tested. ONI testing typically uses two lists of URLs as its sample: a global list and a local list. The global list comprises a range of internationally relevant and popular websites, predominantly in English, such as international news sites (CNN, BBC, and so on) and social networking platforms (Facebook and Twitter). It also includes content that is regularly filtered, such as pornography and gambling sites. This list acts as a baseline sample that allows for cross-country and cross-temporal comparison. Regional experts compile local lists for each country using material specific to the local political, cultural, and linguistic context. These lists can include URLs of local independent media, oppositional political and social movements, or religious organizations unique to the country or region. The lists also contain URLs that have been reported to be blocked or have content likely to be targeted in that country. These lists do not attempt to enumerate every website that a country might be filtering, but they can provide a snapshot of filtered content’s breadth, depth, and focus.

Before testing occurs, gaining knowledge about the testing environment, including a country’s Internet market and infrastructure, can help determine significant network vantage points. Understanding a country’s regulatory environment can provide insight into how it implements information controls, legally and extra-legally, and how ISPs might differ in implementing filtering.

Timing in testing is also important. Authorities might enact or alter information controls in response to events on the ground. Because our method relies on in-country clients and human analysis, resource constraints require that we schedule testing strategically. Local experts can identify periods in which information might be disrupted, such as elections or sensitive anniversaries, and provide context for why events might trigger controls.

Case Studies

The intentions and motivations of authorities who mandate censorship aren’t readily apparent from technical measurements alone. Filtering might be motivated by time-sensitive political events, and can be implemented in a nontransparent manner for political reasons. In other cases, decisions to filter content might come from a desire to protect domestic economic interests. Filtering can also come with unintended consequences when the type of content filtered and the jurisdiction where it’s blocked are not the censors’ intended targets.

In the following cases, we illustrate how a mixed-methods approach can ground technical filtering measurements in the political, economic, and social context in which authorities apply them.

Political Motivations

Although technical measurements can determine what’s censored and how that censorship is implemented, they can’t easily answer the question of why content is censored. Understanding what motivates censorship can provide valuable insight into measurements while informing research methods.

Political events. Information controls are highly dynamic and can be triggered or adjusted in response to events on the ground. We call this practice just-in-time blocking (JITB), which refers to the denial of access to information during key moments when the information might have the greatest impact, such as during elections, periods of civil unrest, and sensitive political anniversaries.

The most dramatic implementation of JITB is the complete shutdown of national connectivity, as was seen recently during mass demonstrations in the Middle East and North Africa (MENA).7 In these extreme cases, we can see the disruption via traffic monitoring, while the political event’s prominence makes the context obvious. In other cases, the disruption might be subtle and implemented only for a short period. For example, ONI research during the 2005 Kyrgyzstan parliamentary elections and 2006 Belarus presidential elections found evidence of DDoS attacks against opposition media and intermittent website inaccessibility.8 In these cases, attribution is difficult; attacks such as DDoS provide plausible deniability.

Recurring events (for example, sensitive anniversaries) or scheduled events (such as elections) let us trace patterns of information controls enacted in response to those events. Because our client-based testing relies on users in-country, continuous monitoring isn’t feasible, and knowing which events might trigger information controls is highly valuable. However, even in countries with aggressive information controls and records of increased controls during sensitive events, anticipating which events will lead to JITB can be difficult.

In 2011, we collaborated with the BBC to analyze a pilot project it conducted to provide Web proxy services that would deliver content in China and Iran, where BBC services have been consistently blocked.9 We monitored usage of Psiphon (the proxy service used by the BBC; see http://psiphon.ca) and tested for Internet filtering daily before, during, and after two sensitive anniversaries: those of the 1989 Tiananmen Square protest and the disputed 2009 Iranian presidential election. These anniversaries’ sensitivity and past evidence that the respective regimes targeted information controls around the anniversary dates led us to hypothesize that authorities would increase controls around the events. However, our hypothesis wasn’t confirmed — we observed little variance in blocking and no secondary reports of increased blocking. We also didn’t see the expected increase in Psiphon node blocking. However, several unforeseen events in China did appear to trigger a censorship increase. Rumors surrounding the death of former president Jiang Zemin and public discontent following a fatal train collision in Wenzhou were correlated with an increase in the blocking of BBC’s proxies and other reports of censorship. Other studies have similarly shown Chinese authorities quickly responding to controversial news stories with increased censorship of related content.10 This case shows that predicting changes in information control is difficult, and that unforeseen events can rapidly influence how authorities target content. Measurement methods that are technically agile, can adapt to events, and are informed by a richer understanding of the local context through local experts can help reduce this uncertainty.

Filtering transparency. The degree to which censors acknowledge that filtering is occurring and inform users about what content is filtered can vary significantly among countries and ISPs. Many states apply Internet filtering openly, with explicit block pages that notify users why content is blocked and in some cases offer channels for appeal. Others apply filtering using methods that make websites appear inaccessible due to network errors, with no acknowledgment that access has been restricted and no remedies offered. Interestingly, in some cases, authorities apply filtering transparently to certain types of content and covertly to others.

Although determining filtering transparency is a relatively straightforward technical question, knowing what motivates censors to make filtering more or less transparent requires understanding the environment in which such filtering takes place. States might filter transparently to be perceived as upholding certain social values, as seen among MENA countries that block access to pornography or material deemed blasphemous. Other states might wish to retain plausible deniability to accusations that they block sites of opposition political groups, and thus might block using methods that mimic technical errors.

Yemen’s filtering practices illustrate this complexity. ONI testing in Yemen found that some content, including pornography and LGBT content, is blocked with an explicit page outlining why and offering an option to have this blocking reassessed (see https://opennet.net/research/profiles/yemen). However, other websites — particularly those containing critical political content, which Yemen’s constitution ostensibly protects — have been consistently blocked through TCP reset packet injection. This method is not transparent to average users and would be difficult to distinguish from routine network issues. State-run ISPs in Yemen have denied that they block these political sites, instead attributing their inaccessibility to technical error; covert blocking of political content offers the government plausible deniability.
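
A crude way to distinguish injected resets from ordinary failures is to watch how a connection dies. The sketch below is a hypothetical illustration, not an ONI tool: it treats a TCP connection that completes its handshake but is reset immediately after the HTTP request as a signal worth investigating, since genuine outages more often fail at connect time or simply time out. A real determination would still require packet captures and control measurements.

    # Heuristic probe for RST-injection-style blocking (illustrative only).
    import socket

    def probe_for_reset(host, path="/", port=80, timeout=10):
        request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                   "Connection: close\r\n\r\n")
        sock = socket.create_connection((host, port), timeout=timeout)
        try:
            sock.sendall(request.encode("ascii"))
            data = sock.recv(4096)  # first chunk of the response, if any
            return "response received" if data else "closed cleanly"
        except ConnectionResetError:
            # Handshake succeeded but the request drew a RST: consistent
            # with (though not proof of) reset packet injection.
            return "reset after request: possible RST injection"
        except socket.timeout:
            return "timeout: indistinguishable from routine network issues"
        finally:
            sock.close()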

Other countries might similarly vary in how openly they filter content and how closely such filtering aligns with the country’s stated motivations for censorship. Vietnam, for example, has historically claimed that its information controls aim to limit access to pornography (see https://opennet.net/blog/2012/09/update-threats-freedom-expression-online-vietnam). However, Vietnam extensively blocks critical political and human rights content through DNS tampering. Similarly, the Ethiopian government has previously denied blocking sensitive content, despite our findings that it blocks political blogs and opposition parties’ websites (see https://opennet.net/blog/2012/11/update-information-controls-ethiopia). As these examples show, national-level filtering systems that authorities justify to block specific content (such as pornography) can be extended through “mission creep” to include other sensitive material in unaccountable and nontransparent ways.11
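
DNS tampering of the kind described for Vietnam can often be surfaced by comparing answers from the default in-country resolver with those from an outside resolver. A minimal sketch, assuming the third-party dnspython package; the external resolver choice is illustrative, and divergence still needs manual review because CDNs legitimately return location-dependent answers.

    # Compare A records from the system resolver and an external one.
    # Divergence (NXDOMAIN or a block-page IP locally, a normal answer
    # externally) hints at DNS tampering but is not proof by itself.
    import dns.resolver  # pip install dnspython

    def compare_dns(domain, external_ns="8.8.8.8"):
        local = dns.resolver.Resolver()            # system defaults
        remote = dns.resolver.Resolver(configure=False)
        remote.nameservers = [external_ns]

        def answers(resolver):
            try:
                return sorted(r.address for r in resolver.resolve(domain, "A"))
            except Exception as exc:  # NXDOMAIN, timeout, refused, ...
                return ["error: " + exc.__class__.__name__]

        local_a, remote_a = answers(local), answers(remote)
        return {"domain": domain, "local": local_a,
                "remote": remote_a, "divergent": local_a != remote_a}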

Economic Motivations

Economic factors also help determine what authorities censor and how they apply that censorship. In countries with strict censorship regimes, the ability to offer unfettered access can provide significant competitive advantage or encourage investment in a region. Conversely, targeting particular services for filtering while letting others operate unfiltered can protect domestic economic interests from competition. Economic considerations might also affect the choice of filtering methods.

ONI research in Uzbekistan has documented significant variation in Internet filtering across ISPs.12 Although many ISPs tested consistently filtered a wide range of content, others provided unfiltered access. The technical data alone couldn’t explain this result. Contextual fieldwork determined that some commercial ISPs had close ties with the president’s inner circle, which might have helped them resist pressure to implement filtering. This relationship let the ISPs engage in economic rent-seeking, in which they used their political connections to gain a competitive advantage by offering unfettered access.

Other instances show how economic interests shape how ISPs apply information controls. Until 2008, one United Arab Emirates (UAE) ISP, Du, didn’t filter, whereas Etisalat, the country’s other major ISP, filtered extensively.13 As in Uzbekistan, this variation was motivated by economic interests. Du serves most customers in the UAE’s economic free zones, and was set up to encourage the development of technology and media sectors. The provision of unfettered access was an incentive to attract investment.

Conversely, some online services might be filtered to protect commercial interests. Countries including the UAE and Ethiopia filter access to, and have passed regulations restricting the use of, VoIP services such as Skype to protect the interests of national telecommunications companies, a major source of revenue for the state.

The decision to implement a particular filtering method might also be influenced as much by cost considerations as by technical concerns. States can implement some filtering methods, such as IP blocking, on standard network equipment. Other methods, such as TCP reset packet injection, are more technically complex and require more sophisticated systems.

Unintended Consequences

In some instances, states might apply filtering in a way that blocks content not intentionally targeted for filtering, or affects jurisdictions outside of where the filtering is implemented. Such cases can be difficult to identify from technical measurement alone.

Upstream filtering. The Internet’s borderless nature complicates research into national-level information controls. Internet filtering, particularly where it isn’t implemented transparently, can have cross-jurisdictional effects that aren’t immediately apparent.

We can see this complexity in upstream filtering, in which filtering that originates in one jurisdiction ends up applied to users in a separate jurisdiction. If ISPs connect to the broader Internet through peers that filter traffic, this filtering could be passed on to users. In some cases, an underdeveloped telecommunications system might limit a country’s wider Internet access to just a few foreign providers, who might pass on their filtering practices. Russia, for example, has long been an important peer to neighboring former Soviet states and has extended filtering practices beyond its borders. The ONI has documented upstream filtering in Kyrgyzstan, Uzbekistan, and Georgia (see https://opennet.net/regions/commonwealth-independent-states-cis).

In a recent example, we found that filtering applied by ISPs in India was restricting content for users of Omani ISP Omantel.14 Through publicly available proxies and in-country, client-based testing, we collected data on blocked URLs in Oman, a country with a long history of Internet filtering. Although our results showed that users attempting to access blocked content received several block pages, one in particular wasn’t consistent with past filtering that ISPs in Oman had employed. Rather, it matched a block page issued by India’s Department of Telecommunications. Filtered websites with this block page included multimedia sharing sites dedicated to Indian culture and entertainment. Furthermore, Omantel has a traffic peering arrangement with India-based ISP Bharti Airtel (ASNs AS8529 and AS9498), and traceroutes of attempts to access the blocked content from Oman confirmed that the traffic passed through Bharti Airtel. We found that the filtering resulted from a broad Indian court decision that sought to limit the distribution of a recently released film.

Omani users were thus subject to filtering implemented for domestic purposes within India. These users had limited means of accessing content that might not have violated Omani regulations, did not consent to the blocking, and had little recourse for challenging the censorship.
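
Whether traffic to a blocked site transits a particular upstream network can be approximated with a traceroute plus an IP-to-ASN lookup. A rough sketch, assuming a Unix traceroute binary and Team Cymru’s public whois service; the suspect ASNs are the Bharti Airtel networks named above, and the helper functions are hypothetical.

    # Map traceroute hops to origin ASNs via Team Cymru's whois service,
    # then flag hops announced by the upstream networks of interest.
    import re
    import socket
    import subprocess

    SUSPECT_ASNS = {"8529", "9498"}  # Bharti Airtel, per the case above

    def traceroute_ips(host):
        out = subprocess.run(["traceroute", "-n", host],
                             capture_output=True, text=True).stdout
        # Crude: grabs every IPv4 address, including the destination.
        return re.findall(r"\d+\.\d+\.\d+\.\d+", out)

    def asn_for(ip):
        # whois.cymru.com returns a header line, then "AS | IP | name".
        with socket.create_connection(("whois.cymru.com", 43), timeout=10) as s:
            s.sendall(f"begin\n{ip}\nend\n".encode())
            reply = b""
            while chunk := s.recv(4096):
                reply += chunk
        return reply.decode().strip().splitlines()[-1].split("|")[0].strip()

    def transits_suspect_network(host):
        hops = [(ip, asn_for(ip)) for ip in traceroute_ips(host)]
        return [(ip, asn) for ip, asn in hops if asn in SUSPECT_ASNS]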

Collateral filtering. ISPs often implement Internet filtering in ways that can unintentionally block content. Ineffectively applied filtering can inadvertently block access to an entire domain even when the censor was targeting only a single URL. IP blocking can restrict access to thousands of websites hosted on a single server when only one was targeted. Commercial filtering lists that miscategorize websites can restrict access to those that do not contain the type of content censors might want to block. We refer to such overblocking as collateral filtering, or the inadvertent blocking of content that is a byproduct of crude or ineffectively applied filtering systems.
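
The mechanics of IP-based overblocking are easy to demonstrate: because a single server can host thousands of sites, resolving a set of candidate domains and grouping them by address shows how many would share the fate of one blocked IP. A minimal sketch with a hypothetical helper:

    # Group domains by resolved IP: if a censor blocks one address,
    # every co-hosted domain in that group becomes collateral damage.
    import socket
    from collections import defaultdict

    def cohosted(domains):
        groups = defaultdict(list)
        for d in domains:
            try:
                groups[socket.gethostbyname(d)].append(d)
            except socket.gaierror:
                pass  # unresolvable; skip
        return {ip: ds for ip, ds in groups.items() if len(ds) > 1}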

The idea of collateral filtering implies that some content is blocked because censors target it, whereas other content is filtered as a side effect. However, the distinction between these two categories is rarely self-evident from technical data alone. We must understand what type of content censors are trying to block — a challenging determination that requires knowledge of the domestic political and social context.

Collateral filtering can occur from keyword blocking, in which censors block content containing particular keywords regardless of context. Our research in Syria demonstrated such blocking’s effects, and illustrated how we can refine testing methods if we understand the censoring regime’s targets. Syrian authorities have acknowledged targeting Israeli websites, letting us focus research on enumerating this filtering’s scope and depth. Past research has also documented the country’s extensive filtering of censorship circumvention tools. Data gathered from Syria demonstrated that all tested content containing the keywords “Israel” or “proxy” in the URL was blocked, a crude filtering method that likely resulted in significant collateral filtering.
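
Keyword-triggered filtering of this kind can be isolated by probing URLs that differ only in whether they contain the suspect string. A toy sketch with a hypothetical helper; the baseline URL must be known to be otherwise reachable from the test network, and the variant path may 404, since what matters is whether any HTTP response comes back at all.

    # Probe whether a keyword in the URL path is itself the trigger by
    # requesting a known-accessible site with and without the keyword.
    import urllib.error
    import urllib.request

    def keyword_triggers_block(base_url, keyword, timeout=15):
        variant = base_url.rstrip("/") + "/" + keyword

        def reachable(url):
            try:
                urllib.request.urlopen(url, timeout=timeout)
                return True
            except urllib.error.HTTPError:
                return True   # server answered (e.g., 404): not blocked
            except Exception:
                return False  # reset, timeout, DNS failure: possibly blocked

        return reachable(base_url) and not reachable(variant)

    # Example: keyword_triggers_block("http://example.com", "proxy")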

Similarly, our research in Yemen has indicated that the ISP YemenNet blocks access to all websites with the .il domain suffix, such as Israeli government and defense forces websites. However, several seemingly innocuous sites also ended up blocked, including that of an Italian airline selling flights to Israel, and that of an Israeli yoga studio. This content was filtered using nontransparent methods, in contrast to the transparent methods used to filter other social content.

Methodological Challenges

Using a mixed-methods approach to study information controls can help us pinpoint which technical measurements to use and add valuable context for interpreting the intent of a regime. However, challenges remain. In our work, we have wrestled with perennial difficulties in data collection, analysis, and interpretation that are general challenges for multidisciplinary research on information controls.

Any Internet censorship measurement study will encounter the seemingly simple but actually complicated questions of determining what content to test, which networks to access, and when to target testing.

Determining what Web content to test for Internet filtering is challenging, both in creating content lists and in maintaining them over time. Keeping lists current, testing relevant content, and avoiding deprecated URLs is a logistical challenge when testing in more than 70 countries over 10 years. To create and maintain these lists, our project relies on a large network of researchers who differ in their focus and expertise. Also, although keeping testing lists responsive to environmental changes increases the relevancy of their content, it can complicate efforts to measure a consistent dataset across time and countries and, consequently, can make fine-grained longitudinal analysis difficult.

Network access points can be accessed in various ways, including remote access (such as public proxies), distributed infrastructures (for example, PlanetLab), or client-based approaches. Each of these methods has benefits and limitations. Public proxies and PlanetLab enable continuous automated measurements, but they are available in only some countries and might not represent an average connection in a country, possibly introducing bias. Client-based testing can ensure a representative connection, but we might not have access to users in countries of interest or to particular ISPs. In some cases, the potential safety risks to users are substantial; moreover, ethical and legal considerations can restrict testing.

Our testing method relies heavily on users for testing and human analysts for compiling testing lists and reviewing results. These conditions make continuous testing infeasible and require that we identify ad hoc triggers for targeting tests. Clearly, sensitive events are potentially good indicators of when information controls might be enacted. However, as our BBC study showed, predicting which events will trigger controls is never straightforward.

A holistic view of information controls combines technical and contextual data and iterative analysis. However, this analysis is often constrained by data availability. In some cases, technical data clearly showing a blocking event or other control might not be easily paired with contextual data that reveals the intentions and motivations of the authorities implementing it. Policies regarding information controls might be kept secret, and the public justification for controls can run counter to empirical data on their operation. Contextual anecdotes about controls derived from interviews, media reports, or document leaks, on the other hand, can be difficult to verify with technical data due to access restrictions.

The study of information controls is becoming an increasingly challenging but important area as states ramp up cyber-security and related policies. As controls increase in prevalence and include more sophisticated and at times even offensive measures, the need for multidisciplinary research into their practice and impact is vital. Disciplinary divides continue to hinder progress. In the social sciences, incentives for adopting technical methods relevant to information controls are low. Although the study of technology’s social impact is more deeply entrenched in technical fields such as social informatics and human-computer interaction, these fields are less literate in social science theories that can help explain information control dynamics. We have tried to overcome disciplinary divides through large collaborative projects. However, collaborative research is costly, time-consuming, and administratively complex, particularly if researchers in multiple national locations are involved.

Addressing these divides will require a concentrated effort from technical and social science communities. Earlier education in theories and methods from disparate fields could provide students with deeper skill sets and the ability to communicate across disciplines. Researchers from technical and social sciences working on information controls should stand as a community and demonstrate the need for funding opportunities, publication venues, workshops, and conferences that encourage multidisciplinary collaborations and knowledge sharing in the area. Through education and dialogue, the study of information controls can mature and hopefully have greater effects on the Internet’s future direction.

References

  1. Anonymous, “The Collateral Damage of Internet Censorship by DNS Injection,” ACM SIGCOMM Computer Communication Rev., vol. 42, no. 3, 2012, pp. 21–27.
  2. R. Clayton, S. Murdoch, and R. Watson, “Ignoring the Great Firewall of China,” Privacy Enhancing Technologies, Springer, 2006, pp. 20–35; www.cl.cam.ac.uk/~rnc1/ignoring.pdf.
  3. X. Xu, Z. Mao, and J. Halderman, “Internet Censorship in China: Where Does the Filtering Occur?” Passive and Active Measurement, Springer, 2011, pp. 133–142; http://web.eecs.umich.edu/~zmao/Papers/china-censorship-pam11.pdf.
  4. A. Sfakianakis et al., “CensMon: A Web Censorship Monitor,” Proc. 1st Usenix Workshop Free and Open Communication on the Internet (FOCI 11), Usenix Assoc., 2011; http://static.usenix.org/event/foci11/tech/final_files/Sfakianakis.pdf.
  5. J. Verkamp and M. Gupta, “Inferring Mechanics of Web Censorship around the World,” Proc. 2nd Usenix Workshop Free and Open Communication on the Internet (FOCI 12), Usenix Assoc., 2012; www.usenix.org/conference/foci12/inferring-mechanics-web-censorship-around-world.
  6. J. Zittrain and B. Edelman, “Empirical Analysis of Internet Filtering in China,” IEEE Internet Computing, vol. 7, no. 2, 2003, pp. 70–77; http://cyber.law.harvard.edu/filtering/china/.
  7. A. Dainotti et al., “Analysis of Country-Wide Internet Outages Caused by Censorship,” Proc. 2011 ACM SIGCOMM Conf. Internet Measurement (IMC 11), ACM, 2011, pp. 1–18; www.caida.org/publications/papers/2011/outages_censorship/outages_censorship.pdf.
  8. “The Internet and Elections: The 2006 Presidential Election in Belarus,” OpenNet Initiative, 2006; http://opennet.net/sites/opennet.net/files/ONI_Belarus_Country_Study.pdf.
  9. “Casting a Wider Net: Lessons Learned in Delivering BBC Content on the Censored Internet,” Canada Centre for Global Security Studies, 11 Oct. 2011; http://munkschool.utoronto.ca/downloads/casting.pdf.
  10. N. Aase et al., “Whiskey, Weed, and Wukan on the World Wide Web,” Proc. 2nd Usenix Workshop Free and Open Communication on the Internet (FOCI 12), Usenix Assoc., 2012; www.usenix.org/system/files/conference/foci12/foci12-final17.pdf.
  11. N. Villeneuve, “The Filtering Matrix,” First Monday, vol. 11, no. 2, 2006; http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1307/1227.
  12. “Internet Filtering in Uzbekistan in 2006–2007,” OpenNet Initiative, 2007; http://opennet.net/studies/uzbekistan2007.
  13. H. Noman, “Dubai Free Zone No Longer Has Filter-Free Internet Access,” OpenNet Initiative, 18 Apr. 2008; http://opennet.net/blog/2008/04/dubai-free-zone-no-longer-has-filter-free-internet-access.
  14. “Routing Gone Wild: Documenting Upstream Filtering in Oman via India,” Citizen Lab, 12 July 2012; https://citizenlab.org/2012/07/routing-gone-wild.

Deibert, Ronald - Authoritarianism Goes Global: Cyberspace Under Siege - 201507

Deibert, Ronald, "Authoritarianism Goes Global: Cyberspace Under Siege," Journal of Democracy, Volume 26, Number 3, July 2015, pp. 64-78.

December 2014 marked the fourth anniversary of the Arab Spring. Beginning in December 2010, Arab peoples seized the attention of the world by taking to the Internet and the streets to press for change. They toppled regimes once thought immovable, including that of Egyptian dictator Hosni Mubarak. Four years later, not only is Cairo’s Tahrir Square empty of protesters, but the Egyptian army is back in charge. Invoking the familiar mantras of anti-terrorism and cyber-security, Egypt’s new president, General Abdel Fattah al-Sisi, has imposed a suite of information controls.1 Bloggers have been arrested and websites blocked; suspicions of mass surveillance cluster around an ominous-sounding new “High Council of Cyber Crime.” The very technologies that many heralded as “tools of liberation” four years ago are now being used to stifle dissent and squeeze civil society. The aftermath of the Arab Spring is looking more like a cold winter, and a potent example of resurgent authoritarianism in cyberspace.

Authoritarianism means state constraints on legitimate democratic political participation, rule by emotion and fear, repression of civil society, and the concentration of executive power in the hands of an unaccountable elite. At its most extreme, it encompasses totalitarian states such as North Korea, but it also includes a large number of weak states and “competitive authoritarian” regimes.2 Once assumed to be incompatible with today’s fast-paced media environment, authoritarian systems of rule are showing not only resilience, but a capacity for resurgence. Far from being made obsolete by the Internet, authoritarian regimes are now actively shaping cyberspace to their own strategic advantage. This shaping includes technological, legal, extralegal, and other targeted information controls. It also includes regional and bilateral cooperation, the promotion of international norms friendly to authoritarianism, and the sharing of “best” practices and technologies.

The development of several generations of information controls has resulted in a tightening grip on cyberspace within sovereign territorial boundaries. A major impetus behind these controls is the growing imperative to implement cyber-security and anti-terror measures, which often have the effect of strengthening the state at the expense of human rights and civil society. In the short term, the disclosures by Edward Snowden concerning surveillance carried out by the U.S. National Security Agency (NSA) and its allies must also be cited as a factor that has contributed, even if unintentionally, to the authoritarian resurgence.

Liberal democrats have wrung their hands a good deal lately as they have watched authoritarian regimes use international organizations to promote norms that favor domestic information controls. Yet events in regional, bilateral, and other contexts where authoritarians learn from and cooperate with one another have mattered even more. Moreover, with regard to surveillance, censorship, and targeted digital espionage, commercial developments and their spin-offs have been key. Any thinking about how best to counter resurgent authoritarianism in cyberspace must reckon with this reality.

Mention authoritarian controls over cyberspace, and people often think of major Internet disruptions such as Egypt’s shutdown in late January and early February 2011, or China’s so-called Great Firewall. These are noteworthy, to be sure, but they do not capture the full gamut of cyberspace controls. Over time, authoritarians have developed an arsenal that extends from technical measures, laws, policies, and regulations, to more covert and offensive techniques such as targeted malware attacks and campaigns to co-opt social media. Subtler and thus more likely to be effective than blunt-force tactics such as shutdowns, these measures reveal a considerable degree of learning. Cyberspace authoritarianism, in other words, has evolved over at least three generations of information controls.3

First-generation controls tend to be “defensive,” and involve erecting national cyber-borders that limit citizens’ access to information from abroad. The archetypal example is the Great Firewall of China, a system for filtering keywords and URLs to control what computer users within the country can see on the Internet. Although few countries have matched the Great Firewall (Iran, Pakistan, Saudi Arabia, Bahrain, Yemen, and Vietnam have come the closest), first-generation controls are common. Indeed, Internet filtering of one sort or another is now normal even in democracies.

Where countries vary is in terms of the content targeted for blocking and the transparency of filtering practices. Some countries, including Canada, the United Kingdom, and the United States, block content related to the sexual exploitation of children as well as content that infringes copyrights. Other countries focus primarily on guarding religious sensitivities. Since September 2012, Pakistan has been blocking all of YouTube over a video, titled “Innocence of Muslims,” that Pakistani authorities deem blasphemous.4 A growing number of countries are blocking access to political and security-related content, especially content posted by opposition and human-rights groups, insurgents, “extremists,” or “terrorists.” Those last two terms are in quotation marks because in some places, such as the Gulf states, they are defined so broadly that content that in most other countries would fall within the bounds of legitimate expression is blocked.

National-level Internet filtering is notoriously crude. Errors and inconsistencies are common. One Citizen Lab study found that Blue Coat (U.S.-made software widely used to automate national filtering systems) mistakenly blocked hundreds of non-pornographic websites.5 Another Citizen Lab study found that Oman residents were blocked from a Bollywood-related website not because it was banned in Oman, but because of upstream filtering in India, the pass-through country for a portion of Oman’s Internet traffic.6 In Indonesia, Internet-censorship rules are applied at the level of Internet Service Providers (ISPs). The country has more than three hundred of these; what you can see online has much to do with which one you use.7

As censorship extends into social media and applications, inconsistencies bloom, as is famously the case in China. In some countries, a user cannot see the filtering, which displays as a “network error.” Although relatively easy to bypass and document,8 first-generation controls have won enough acceptance to have opened the door to more expansive measures.

Second-generation controls are best thought of as deepening and extending information controls into society through laws, regulations, or requirements that force the private sector to do the state’s bidding by policing privately owned and operated networks according to the state’s demands. Second-generation controls can now be found in every region of the world, and their number is growing. Turkey is passing new laws, on the pretext of protecting national security and fighting cyber-crime, that will expand wiretapping and other surveillance and detention powers while allowing the state to censor websites without a court order. Ethiopia charged six bloggers from the Zone 9 group and three independent journalists with terrorism and treason after they covered political issues. Thailand is considering new cyber-crime laws that would grant authorities the right to access emails, telephone records, computers, and postal mail without needing prior court approval. Under reimposed martial law, Egypt has tightened regulations on demonstrations and arrested prominent bloggers, including Arab Spring icon Alaa Abd El Fattah. Saudi blogger Raif Badawi is looking at ten years in jail and 950 remaining lashes (he received the first fifty lashes in January 2015) for criticizing Saudi clerics online. Tunisia passed broad reforms after the Arab Spring, but even there a blogger has been arrested under an obscure older law for “defaming the military” and “insulting military commanders” on Facebook. Between 2008 and March 2015 (when the Supreme Court struck it down), India had a law that banned “menacing” or “offensive” social-media posts. In 2012, Renu Srinivasan of Mumbai found herself arrested merely for hitting the “like” button below a friend’s Facebook post. In Singapore, blogger and LGBT activist Alex Au was fined in March 2015 for criticizing how a pair of court cases was handled.

Second-generation controls also include various forms of “baked-in” surveillance, censorship, and “backdoor” functionalities that governments, wielding their licensing authority, require manufacturers and service providers to build into their products. Under new anti-terrorism laws, Beijing recently announced that it would require companies offering services in China to turn over encryption keys for state inspection and build into all systems backdoors open to police and security agencies. Existing regulations already require social-media companies to surveil and censor their own networks. Citizen Lab has documented that many chat applications popular in China come pre-configured with censorship and surveillance capabilities.9 For many years, the Russian government has required telecommunications companies and ISPs to be “SORM-compliant” — SORM is the Russian acronym for the surveillance system that directs copies of all electronic communications to local security offices for archiving and inspection. In like fashion, India’s Central Monitoring System gives the government direct access to the country’s telecommunications networks. Agents can listen in on broadband phone calls, SMS messages, and email traffic, while all call-data records are archived and analyzed. In Indonesia, where BlackBerry smartphones remain popular, the government has repeatedly pressured Canada-based BlackBerry Limited to comply with “lawful-access” demands, even threatening to ban the company’s services unless BlackBerry agreed to host data on servers in the country. Similar demands have come from India, Saudi Arabia, and the United Arab Emirates. The company has even agreed to bring Indian technicians to Canada for special surveillance training.10

Also spreading are new laws that ban security and anonymizing tools, including software that permits users to bypass first-generation blocks. Iran has arrested those who distribute circumvention tools, and it has throttled Internet traffic to frustrate users trying to connect to popular circumvention and anonymizer tools such as Psiphon and Tor. Belarus and Russia have both recently proposed making Tor and similar tools illegal. China has banned virtual private networks (VPNs) nationwide — the latest in a long line of such bans — despite the difficulties that this causes for business. Pakistan has banned encryption since 2011, although its widespread use in financial and other communications inside the country suggests that enforcement is lax. The United Arab Emirates has banned VPNs, and police there have stressed that individuals caught using them may be charged with violating the country’s harsh cyber-crime laws.

Second-generation controls include finer-grained registration and identification requirements that tie people to specific accounts or devices, or even require citizens to obtain government permission before using the Internet. Pakistan has outlawed the sale of prepaid SIM cards and demands that all citizens register their SIM cards using biometric identification technology. The Thai military junta has extended such registration rules to cover free WiFi accounts as well. China has imposed real-name registration policies on Internet and social-media accounts, and companies have dutifully deleted tens of thousands of accounts that could not be authenticated. Chinese users must also commit to respect the seven “baselines,” including “laws and regulations, the Socialist system, the national interest, citizens’ lawful rights and interests, public order, morals, and the veracity of information.”11

By expanding the reach of laws and broad regulations, second-generation controls narrow the space left free for civil society, and subject the once “wild frontier” of the Internet to growing regulation. While enforcement may be uneven, in country after country these laws hang like dark clouds over civil society, creating a climate of uncertainty and fear.

Authoritarians on the Offensive

Third-generation controls are the hardest to document, but may be the most effective. They involve surveillance, targeted espionage, and other types of covert disruptions in cyberspace. While first-generation controls are defensive and second-generation controls probe deeper into society, third-generation controls are offensive. The best known of these are the targeted cyber-espionage campaigns that emanate from China. Although Chinese spying on businesses and governments draws most of the news reports, Beijing uses the same tactics to target human-rights, pro-democracy, and independence movements outside China. A recent four-year comparative study by Citizen Lab and ten participating NGOs found that those groups suffered the same persistent China-based digital attacks as governments and Fortune 500 companies.12 The study also found that targeted espionage campaigns can have severe consequences, including disruptions of civil society and threats to liberty. At the very least, persistent cyber-espionage attacks breed self-censorship and undermine the networking advantages that civil society might otherwise reap from digital media. Another Citizen Lab report found that China has employed a new attack tool, called “The Great Cannon,” which can redirect the website requests of unwitting foreign users into denial-of-service attacks or replace web requests with malicious software.13

While other states may not be able to match China’s cyber-espionage or online-attack capabilities, they do have options. Some might buy off-the-shelf espionage “solutions” from Western companies such as the United Kingdom’s Gamma Group or Italy’s Hacking Team — each of which Citizen Lab research has linked to dozens of authoritarian-government clients.14 In Syria, which is currently the site of a multi-sided, no-holds-barred regional war, security services and extremist groups such as ISIS are borrowing cyber-criminals’ targeted-attack techniques, downloading crude but effective tradecraft from open sources and then using it to infiltrate opposition groups, often with deadly results.15 The capacity to mount targeted digital attacks is proving particularly attractive to regimes that face persistent insurgencies, popular protests, or other standing security challenges. As these techniques become more widely used and known, they create a chilling effect: Even without particular evidence, activists may avoid digital communication for fear that they are being monitored.

Third-generation controls also include efforts to aim crowd-sourced antagonism at political foes. Governments recruit “electronic armies” that can use the very social media employed by popular opposition movements to discredit and intimidate those who dare to criticize the state.16 Such online swarms are meant to make orchestrated denunciations of opponents look like spontaneous popular expressions. If the activities of its electronic armies come under legal question or result in excesses, a regime can hide behind “plausible deniability.” Examples of pro-government e-warriors include Venezuela’s Chavista “communicational guerrillas,” the Egyptian Cyber Army, the pro-Assad Syrian Electronic Army, the pro-Putin bloggers of Russia, Kenya’s “director of digital media” Dennis Itumbi plus his bloggers, Saudi Arabia’s anti-pornography “ethical hackers,” and China’s notorious “fifty-centers,” so called because they are allegedly paid that much for each pro-government comment or status update they post.

Other guises under which third-generation controls may travel include not only targeted attacks on Internet users but wholesale disruptions of cyberspace. Typically scheduled to cluster before and during major political events such as elections, anniversaries, and public demonstrations, “just-in-time” disruptions can be as severe as total Internet blackouts. More common, however, are selective disruptions. In Tajikistan, SMS services went down for several days leading up to planned opposition rallies in October 2014. The government blamed technical errors; others saw the hand of the state at work.17 Pakistan blocked all mobile services in its capital, Islamabad, for part of the day on 23 March 2015 in order to shield national-day parades from improvised explosive devices.18 During the 2014 pro-democracy demonstrations in Hong Kong, China closed access to the photo-sharing site Instagram. Telecommunications companies in the Democratic Republic of Congo were ordered to shut down all mobile and SMS communications in response to anti-government protests. Bangladesh ordered a ban on the popular smartphone messaging application Viber in January 2015, after it was linked to demonstrations.

To these three generations, we might add a fourth. This comes in the form of a more assertive authoritarianism at the international level. For years, governments that favor greater sovereign control over cyberspace have sought to assert their preferences—despite at times stiff resistance—in forums such as the International Telecommunication Union (ITU), the Internet Governance Forum (IGF), the United Nations (UN), and the Internet Corporation for Assigned Names and Numbers (ICANN).19 Although there is no simple division of “camps,” observers tend to group countries broadly into those that prefer a more open Internet and a limited role for states and those that prefer a state-led form of governance, probably under UN auspices.

The United States, the United Kingdom, Europe, and the Asian democracies line up most often behind openness, while China, Iran, Russia, Saudi Arabia, and various other non-democracies fall into the latter group. A large number of emerging-market countries, led by Brazil, India, and Indonesia, are “swing states” that can go either way. Battle lines between these opposing views were becoming sharper around the time of the December 2012 World Conference on International Telecommunications (WCIT) in Dubai—an event that many worried would mark the fall of Internet governance into UN (and thus state) hands. But the WCIT process stalled, and lobbying by the United States and its allies (plus Internet companies such as Google) played a role in preventing fears of a state-dominated Internet from coming true.

If recent proposals on international cyber-security submitted to the UN by China, Russia, and their allies tell us anything, future rounds of the cyber-governance forums may be less straightforward than what transpired at Dubai. In January 2015, the Beijing- and Moscow-led Shanghai Cooperation Organization (SCO) submitted a draft “International Code of Conduct for Information Security” to the UN. This document reaffirms many of the same principles as the ill-fated WCIT Treaty, including greater state control over cyber-space.

Such proposals will surely raise the ire of those in the “Internet freedom” camp, who will then marshal their resources to lobby against their adoption. But will wins for Internet freedom in high-level international venues (assuming that such wins are in the cards) do anything to stop local and regional trends toward greater government control of the online world? Writing their preferred language into international statements may please Internet-freedom advocates, but what if such language merely serves to gloss over a ground-level reality of more rather than less state cyber-authority?

It is important to understand the driving forces behind resurgent authoritarianism in cyberspace if we are to comprehend fully the challenges ahead, the broader prospects facing human rights and democracy promotion worldwide, and the reasons to suspect that the authoritarian resurgence in cyberspace will continue.

A major driver of this resurgence has been and likely will continue to be the growing impetus worldwide to adopt cyber-security and anti-terror policies. As societies come to depend ever more heavily on networked digital information, keeping it secure has become an ever-higher state priority. Data breaches and cyber-espionage attacks — including massive thefts of intellectual property — are growing in number. While the cyber-security realm is replete with self-serving rhetoric and threat inflation, the sum total of concerns means that dealing with cyber-crime has now become an unavoidable state imperative. For example, the U.S. intelligence community’s official 2015 “Worldwide Threat Assessment” put cyber-attacks first on the list of dangers to U.S. national security.20

It is crucial to note how laws and policies in the area of cyber-security are combining and interacting with those in the anti-terror realm. Violent extremists have been active online at least since the early days of al-Qaeda several decades ago. More recently, the rise of the Islamic State and its gruesome use of social media for publicity and recruitment have spurred a new sense of urgency. The Islamic State atrocities recorded in viral beheading videos are joined by (to list a few) terror attacks such as the Mumbai assault in India (November 2008); the Boston Marathon bombings (April 2013); the Westgate Mall shootings in Kenya (September 2013); the Ottawa Parliament shooting (October 2014); the Charlie Hebdo and related attacks in Paris (January 2015); repeated deadly assaults on Shia mosques in Pakistan (most recently in February 2015); and the depredations of Nigeria’s Boko Haram.

Horrors such as these underline the value of being able to identify, in timely fashion amid the wilderness of cyberspace, those bent on violence before they strike. The interest of public-safety officials in data-mining and other high-tech surveillance and analytical techniques is natural and understandable. But as expansive laws are rapidly passed and state-security services (alongside the private companies that work for and with them) garner vast new powers and resources, checks and balances that protect civil liberties and guard against the abuse of power can be easily forgotten. The adoption by liberal democracies of sweeping cyber-crime and anti-terror measures without checks and balances cannot help but lend legitimacy and normative support to similar steps taken by authoritarian states. The headlong rush to guard against extremism and terrorism worldwide, in other words, could end up providing the biggest boost to resurgent authoritarianism.

Regional Security Cooperation as a Factor

While international cyberspace conferences attract attention, regional security forums, where much practical cyber-security coordination actually happens, are often overlooked. These forums are focused sites of learning and norm promotion where ideas, technologies, and “best” practices are exchanged. Even countries that are otherwise rivals can and do agree and cooperate within them.

The SCO, to name one prominent regional group, boasts a well-developed normative framework that calls upon its member states to combat the “three evils” of terrorism, separatism, and extremism. The upshot has been information controls designed to bolster regime stability against opposition groups and the claims of restive ethnic minorities. The SCO recently held joint military exercises in order to teach its forces how to counter Internet-enabled opposition of the sort that elsewhere has led to “color revolutions.” The Chinese official who directs the SCO’s “Regional Anti-Terrorist Structure” (RATS) told the UN Counter-Terrorism Committee that RATS had “collected and distributed to its Member States intelligence information regarding the use of the Internet by terrorist groups active in the region to promote their ideas.” 21

Such information may include intelligence on individuals involved in what international human-rights law considers legitimate political expression. Another Eurasian regional security organization in which Russia plays a leading role, the Collective Security Treaty Organization (CSTO), has announced that it will be creating an “international center to combat cyber threats.” 22 Both the SCO and the CSTO are venues where commercial platforms for both mass and targeted surveillance are sold, shared, and exchanged. The telecommunications systems and ISPs in each of the five Central Asian republics are all “SORM-compliant” — ready to copy all data routinely to security services, just as in Russia. The SCO and CSTO typically deliberate behind closed doors and publish little in English, meaning that much of what they do escapes the attention of Western observers and civil society groups.

The regional cyber-security coordination undertaken by the Gulf Cooperation Council (GCC) offers another example. In 2014, the GCC approved a long-awaited plan to form a joint police force, with headquarters in Abu Dhabi. While the fights against drug dealing and money laundering are to be among the tasks of this Gulf Interpol, the new force will also have the mission of battling cyber-crime. In the Gulf monarchies, however, online offenses are defined broadly and include posting items that can be taken as critical of royal persons, ruling families, or the Muslim religion. These kingdoms and emirates have long records of suppressing dissent and even arresting one another’s political opponents. Whatever its other law-enforcement functions, the GCC version of Interpol is all too likely to become a regional tool for suppressing protest and rooting out expressions of discontent.

“Flying under the radar,” with little flash, few reporters taking notice, and lots of closed meetings carried on in local languages by like-minded officials from neighboring authoritarian states, organizations concerned with regional governance and security attract far less attention than UN conferences that seem poised to unleash dramatic Web takeovers which may never materialize. Yet it is in these obscure regional corners that the key norms of cyberspace controls may be taking shape and taking hold.

The Cyber-security Market as a Factor

A third driving factor has to do with the rapid growth of digital connectivity in the global South and among the populations of authoritarian regimes, weak states, and flawed democracies. In Indonesia the number of Internet users increases each month by a stunning 800,000. In 2000, Nigeria had fewer than a quarter-million Internet users; today, it has 68 million. The Internet-penetration rate in Cambodia rose a staggering 414 percent from January 2014 to January 2015 alone. By the end of 2014, the number of mobile-connected devices exceeded the number of people on Earth. Cisco Systems estimates that by 2019, there will be nearly 1.5 mobile devices per living human. The same report predicts that the steepest rates of growth in mobile-data traffic will be found in the Middle East and Africa. 23

Booming digital technology is good for economic growth, but it also creates security and governance pressure points that authoritarian regimes can squeeze. We have seen how social media and the like can mobilize masses of people instantly on behalf of various causes (pro-democratic ones included). Yet many of the very same technologies can also be used as tools of control. Mobile devices, with their portability, low cost, and light physical-infrastructure requirements, are how citizens in the developing world connect. These handheld marvels allow people to do a wealth of things that they could hardly have dreamt of doing before. Yet all mobile devices and their dozens of installed applications emit reams of highly detailed information about people’s movements, social relationships, habits, and even thoughts — data that sophisticated agencies can use in any number of ways to spy, to track, to manipulate, to deceive, to extort, to influence, and to target.

The market for digital spyware described earlier needs to be seen not only as a source of material and technology for the countries that demand such tools, but as an active shaper of those countries’ preferences, practices, and policies. This is not to say that companies dictate to policy makers what governments should do. Rather, companies and the services that they offer can open up possibilities for solutions, be they deep-packet inspection, content filtering, cellphone tracking, “big-data” analytics, or targeted spyware. SkyLock, a cellphone-tracking solution sold by Verint Systems of Melville, New York, purports to offer governments “a cost-effective, new approach to obtaining global location information concerning known targets.” Company brochures obtained by the Washington Post include “screen shots of maps depicting location tracking in what appears to be Mexico, Nigeria, South Africa, Brazil, Congo, the United Arab Emirates, Zimbabwe, and several other countries.” 24

Large industry trade fairs where these systems are sold are also crucial sites for learning and information exchange. The best known of these, the Intelligence Support Systems (ISS) events, are run by TeleStrategies, Incorporated, of McLean, Virginia. Dubbed the “Wiretappers’ Ball” by critics, ISS events are exclusive conventions with registration fees high enough to exclude most attendees other than governments and their agencies. As one recent study noted, ISS serves to connect registrants with surveillance-technology vendors, and provides training in the latest industry practices and equipment. 25 The March 2014 ISS event in Dubai featured one session on “Mobile Location, Surveillance and Signal Intercept Product Training” and another that promised to teach attendees how to achieve “unrivaled attack capabilities and total resistance to detection, quarantine and removal by any endpoint security technology.” 26 Major corporate vendors of lawful-access, targeted-surveillance, and data-analytic solutions are fixtures at ISS meetings and use them to gather clients.

As cyber-security demands grow, so will this market. Authoritarian policy makers looking to channel industrial development and employment opportunities into paths that reinforce state control can be expected to support local innovation. Already, schools of engineering, computer science, and data processing are widely seen in the developing world as viable paths to employment and economic sustainability, and within those fields cyber-security is now a major driving force. In Malaysia, for example, the British defense contractor BAE Systems agreed to underwrite a degree-granting academic program in cyber-security in partial fulfillment of its “defense offsets” obligation. 27 India’s new “National Cyber Security Policy” lays out an ambitious strategy for training a new generation of experts in, among other things, the fine points of “ethical hacking.” The goal is to give India an electronic army of high-tech specialists a half-million strong. In a world where “Big Brother” and “Big Data” share so many of the same needs, the political economy of cyber-security must be singled out as a major driver of resurgent authoritarianism in cyberspace.

Edward Snowden as a Factor

Since June 2013, barely a month has gone by without new revelations concerning U.S. and allied spying—revelations that flow from the disclosures made by former NSA contractor Edward Snowden. The disclosures fill in the picture of a remarkable effort to marshal extraordinary capacities for information control across the entire spectrum of cyberspace. The Snowden revelations will continue to fuel an important public debate about the proper balance to be struck between liberty and security.

While the value of Snowden’s disclosures in helping to start a long-needed discussion is undeniable, the revelations have also had unintended consequences for resurgent authoritarianism in cyberspace. First, they have served to deflect attention away from authoritarian-regime cyber-espionage campaigns such as China’s. Before Snowden fled to Hong Kong, U.S. diplomacy was taking an aggressive stand against cyber-espionage. Individuals in the pay of the Chinese military and allegedly linked to Chinese cyber-espionage were finding themselves under indictment. Since Snowden, the pressure on China has eased. Beijing, Moscow, and others have found it easy to complain loudly about a double standard supposedly favoring the United States while they rationalize their own actions as “normal” great-power behavior and congratulate themselves for correcting the imbalance that they say has beset cyberspace for too long.

Second, the disclosures have created an atmosphere of suspicion around Western governments’ intentions and raised questions about the legitimacy of the “Internet Freedom” agenda backed by the United States and its allies. Since the Snowden disclosures—revealing top-secret exploitation and disruption programs that in some respects are indistinguishable from those that Washington and its allies have routinely condemned — the rhetoric of the Internet Freedom coalition has rung rather hollow. In February 2015, it even came out that British, Canadian, and U.S. signals-intelligence agencies had been “piggybacking” on China-based cyber-espionage campaigns—stealing data from Chinese hackers who had not properly secured their own command-and-control networks. 28

Third, the disclosures have opened up foreign investment opportunities for IT companies that used to run afoul of national-security concerns. Before Snowden, rumors of hidden “backdoors” in Chinese-made technology such as Huawei routers put a damper on that company’s sales. Then it came out that the United States and allied governments had been compelling (legally or otherwise) U.S.-based tech companies to do precisely what many had feared China was doing—namely, installing secret backdoors. So now Western companies have a “Huawei” problem of their own, and Huawei no longer looks so bad.

In the longer term, the Snowden disclosures may have the salutary effect of educating a large number of citizens about mass surveillance. In the nearer term, however, the revelations have handed countries other than the United States and its allies an opportunity for the self-interested promotion of local IT wares under the convenient rhetorical guise of striking a blow for “technological sovereignty” and bypassing U.S. information controls.

There was a time when authoritarian regimes seemed like slow-footed, technologically challenged dinosaurs that the Information Age was sure to put on a path toward ultimate extinction. That time is no more—these regimes have proven themselves surprisingly (and dismayingly) light-footed and adaptable. National-level information controls are now deeply entrenched and growing. Authoritarian regimes are becoming more active and assertive, sharing norms, technologies, and “best” practices with one another as they look to shape cyberspace in ways that legitimize their national interests and domestic goals.

Sadly, prospects for halting these trends anytime soon look bleak. As resurgent authoritarianism in cyberspace increases, civil society will struggle: A web of ever more fine-grained information controls tightens the grip of unaccountable elites. Given the comprehensive range of information controls outlined here, and their interlocking sources deep within societies, economies, and political systems, it is clear that an equally comprehensive approach to the problem is required. Those who seek to promote human rights and democracy through cyberspace will err gravely if they stick to high-profile “Internet Freedom” conferences or investments in “secure apps” and digital training. No amount of rhetoric or technological development alone will solve a problem whose roots run this deep and cut across the borders of so many regions and countries.

What we need is a patient, multi-pronged, and well-grounded approach across numerous spheres, with engagement in a variety of venues. Researchers, investigative journalists, and others must learn to pay more attention to developments in regional security settings and obscure trade fairs. The long-term goal should be to open these venues to greater civil society participation and public accountability so that considerations of human rights and privacy are at least raised, even if not immediately respected.

The private sector now gathers and retains staggering mountains of data about countless millions of people. It is no longer enough for states to conduct themselves according to the principles of transparency, accountability, and oversight that democracy prizes; the companies that own and operate cyberspace — and that often come under tremendous pressure from states — must do so as well. Export controls and “smart sanctions” that target rights-offending technologies without infringing on academic freedom can play a role. A highly distributed, independent, and powerful system of cyberspace verification should be built on a global scale that monitors for rights violations, dual-use technologies, targeted malware attacks, and privacy breaches. A model for such a system might be found in traditional arms-control verification regimes such as the one administered by the Organization for the Prohibition of Chemical Weapons. Or it might come from the research of academic groups such as Citizen Lab, or the setup of national computer emergency-response teams (CERTs) once these are freed from their current subordination to parochial national-security concerns. 29 However it is ultimately constituted, there needs to be a system for monitoring cyberspace rights and freedoms that is globally distributed and independent of governments and the private sector.

Finally, we need models of cyberspace security that can show us how to prevent disruptions or threats to life and property without sacrificing liberties and rights. Internet-freedom advocates must reckon with the realization that a free, open, and secure cyberspace will materialize only within a framework of democratic oversight, public accountability, transparent checks and balances, and the rule of law. For individuals living under authoritarianism’s heavy hand, achieving such lofty goals must sound like a distant dream. Yet for those who reside in affluent countries, especially ones where these principles have lost ground to anti-terror measures and mass-surveillance programs, fighting for them should loom as an urgent priority and a practically achievable first step on the road to remediation.

  1. Sam Kimball, “After the Arab Spring, Surveillance in Egypt Intensifies,” Intercept, 9 March 2015, https://firstlook.org/theintercept/2015/03/09/arab-spring-surveillance-egypt-intensifies.
  2. Steven Levitsky and Lucan A. Way, “The Rise of Competitive Authoritarianism,” Journal of Democracy 13 (April 2002): 51–65.
  3. Ronald Deibert and Rafal Rohozinski, “Beyond Denial: Introducing Next Generation Information Access Controls,” http://access.opennet.net/wp-content/uploads/2011/12/accesscontrolled-chapter-1.pdf. Note that the “generations” of controls are not assumed to be strictly chronological: Governments can skip generations, and several generations can exist together. Rather, they are a useful heuristic device for understanding the evolution of information controls.
  4. “YouTube to Remain Blocked ‘Indefinitely’ in Pakistan: Officials,” Dawn (Islamabad), 8 February 2015, http://www.dawn.com/news/1162139.
  5. Bennett Haselton, “Blue Coat Errors: Sites Miscategorized as ‘Pornography,’” Citizen Lab, 10 March 2014, https://citizenlab.org/2014/03/blue-coat-errors-sites-miscategorized-pornography.
  6. “Routing Gone Wild: Documenting Upstream Filtering in Oman via India,” Citizen Lab, 12 July 2012, https://citizenlab.org/2012/07/routing-gone-wild.
  7. “IGF 2013: Islands of Control, Islands of Resistance: Monitoring the 2013 Indonesian IGF (Foreword),” Citizen Lab, 20 January 2014, http://www.citizenlab.org/briefs/29-igf-indonesia.pdf.
  8. Masashi Crete-Nishihata, Ronald J. Deibert, and Adam Senft, “Not by Technical Means Alone: The Multidisciplinary Challenge of Studying Information Controls,” IEEE Internet Computing 17 (May–June 2013): 34–41.
  9. See https://china-chats.net.
  10. Amol Sharma, “RIM Facility Helps India in Surveillance Efforts,” Wall Street Journal, 28 October 2011.
  11. Rogier Creemers, “New Internet Rules Reflect China’s ‘Intent to Target Individuals Online,’” Deutsche Welle, 2 March 2015.
  12. Citizen Lab, “Communities @ Risk: Targeted Digital Threats Against Civil Society,” 11 November 2014, https://targetedthreats.net.
  13. Bill Marczak et al., “China’s Great Cannon,” Citizen Lab, 10 April 2015, https://citizenlab.org/2015/04/chinas-great-cannon.
  14. “For Their Eyes Only: The Commercialization of Digital Spying,” Citizen Lab, 30 April 2013, https://citizenlab.org/2013/04/for-their-eyes-only-2.
  15. “Malware Attack Targeting Syrian ISIS Critics,” Citizen Lab, 18 December 2014,  https://citizenlab.org/2014/12/malware-attack-targeting-syrian-isis-critics.
  16. Seva Gunitzky, “Corrupting the Cyber-Commons: Social Media as a Tool of Autocratic Stability,” Perspectives on Politics 13 (March 2015): 42–54.
  17. RFE/RL Tajik Service, “SMS Services Down in Tajikistan After Protest Calls,” Radio Free Europe/Radio Liberty, 10 October 2014, http://www.rferl.org/content/tajikistan-sms-internet-group-24-quvatov-phone-message-blockage-dushanbe/26630390.html.
  18. See “No Mobile Phone Services on March 23 in Islamabad,” Daily Capital (Islamabad), 22 March 2015, http://dailycapital.pk/mobile-phone-services-to-remain-blocked-on-march-23.
  19. Ronald J. Deibert and Masashi Crete-Nishihata, “Global Governance and the Spread of Cyberspace Controls,” Global Governance 18 (2012): 339–61, http://citizenlab.org/cybernorms2012/governance.pdf.
  20. See James R. Clapper, “Statement for the Record Worldwide Threat Assessment of the US Intelligence Community,” Senate Armed Services Committee, 26 February 2015, http://www.dni.gov/files/documents/Unclassified_2015_ATA_SFR_-_SASC_FINAL.pdf.
  21. See “Counter-Terrorism Committee Welcomes Close Cooperation with the Regional Anti-Terrorist Structure of the Shanghai Cooperation Organization,” 24 October 2014, www.un.org/en/sc/ctc/news/2014-10-24_cted_shanghaicoop.html.
  22. See Joshua Kucera, “SCO, CSTO Increasing Efforts Against Internet Threats,” The Bug Pit, 16 June 2014, http://www.eurasianet.org/node/68591.
  23. See Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update 2014–2019,” white paper, 3 February 2015, http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.html.
  24. Craig Timberg, “For Sale: Systems That Can Secretly Track Where Cellphone Users Go Around the Globe,” Washington Post, 24 August 2014.
  25. Collin Anderson, “Monitoring the Lines: Sanctions and Human Rights Policy Considerations of TeleStrategies ISS World Seminars,” 31 July 2014, http://cda.io/notes/monitoring-the-lines.
  26. Anderson, “Monitoring the Lines.”
  27. See Jon Grevatt, “BAE Systems Announces Funding of Malaysian Cyber Degree Programme,” IHS Jane’s 360, 5 March 2015, http://www.janes.com/article/49778/bae-systems-announces-funding-of-malaysian-cyber-degree-programme.
  28. Colin Freeze, “Canadian Agencies Use Data Stolen by Foreign Hackers, Memo Reveals,” Globe and Mail (Toronto), 6 February 2015.
  29. For one proposal along these lines, see Duncan Hollis and Tim Maurer, “A Red Cross for Cyberspace,” Time, 18 February 2015.

Deibert, Ronald - The Geopolitics of Cyberspace After Snowden - 2015

“The aims of the Internet economy and those of state security converge around the same functional needs: collecting, monitoring, and analyzing as much data as possible.”

For several years now, it seems that not a day has gone by without a new revelation about the perils of cyberspace: the networks of Fortune 500 companies breached; cyberespionage campaigns uncovered; shadowy hacker groups infiltrating prominent websites and posting extremist propaganda. But the biggest shock came in June 2013 with the first of an apparently endless stream of riveting disclosures from former US National Security Agency (NSA) contractor Edward Snowden. These alarming revelations have served to refocus the world’s attention, aiming the spotlight not at cunning cyber activists or sinister data thieves, but rather at the world’s most powerful signals intelligence agencies: the NSA, Britain’s Government Communications Headquarters (GCHQ), and their allies.

The public is captivated by these disclosures, partly because of the way in which they have been released, but mostly because cyberspace is so essential to all of us. We are in the midst of what might be the most profound communications revolution in all of human history. Within the span of a few decades, society has become completely dependent on the digital information and communication technologies (ICTs) that infuse our lives. Our homes, our jobs, our social networks—the fundamental pillars of our existence—now demand immediate access to these technologies.

With so much at stake, it should not be surprising that cyberspace has become heavily contested. What was originally designed as a small-scale but robust information-sharing network for advanced university research has exploded into the information infrastructure for the entire planet. Its emergence has unsettled institutions and upset the traditional order of things, while simultaneously contributing to a revolution in economics, a path to extraordinary wealth for Internet entrepreneurs, and new forms of social mobilization. These contrasting outcomes have set off a desperate scramble, as stakeholders with competing interests attempt to shape cyberspace to their advantage. There is a geopolitical battle taking place over the future of cyberspace, similar to those previously fought over land, sea, air, and space.

Three major trends have been increasingly shaping cyberspace: the big data explosion, the growing power and influence of the state, and the demographic shift to the global South. While these trends preceded the Snowden disclosures, his leaks have served to alter them somewhat, by intensifying and in some cases redirecting the focus of the conflicts over the Internet. This essay will identify several focal points where the outcomes of these contests are likely to be most critical to the future of cyberspace.

Big Data

Before discussing the implications of cyberspace, we need to first understand its characteristics: What is unique about the ICT environment that surrounds us? There have been many extraordinary inventions that revolutionized communications throughout human history: the alphabet, the printing press, the telegraph, radio, and television all come to mind. But arguably the most far-reaching in its effects is the creation and development of social media, mobile connectivity, and cloud computing—referred to in shorthand as “big data.” Although these three technological systems are different in many ways, they share one very important characteristic: a vast and rapidly growing volume of personal information, shared (usually voluntarily) with entities separate from the individuals to whom the information applies. Most of those entities are privately owned companies, often headquartered in political jurisdictions other than the one in which the individual providing the information lives (a critical point that will be further examined below).

We are, in essence, turning our lives inside out. Data that used to be stored in our filing cabinets, on our desktop computers, or even in our minds are now routinely stored on equipment maintained by private companies spread across the globe. The data we entrust to them include much that we are conscious of and deliberate about—websites visited, e-mails sent, texts received, images posted—but also a great deal of which we are unaware.

For example, a typical mobile phone, even when not in use, emits a pulse every few seconds as a beacon to the nearest WiFi router or cellphone tower. Within that beacon is an extraordinary amount of information about the phone and its owner (known as “metadata”), including make and model, the user’s name, and geographic location. And that is just the mobile device itself. Most users have within their devices several dozen applications (more than 50 billion apps have been downloaded from Apple’s iTunes store for social networking, fitness, health, games, music, shopping, banking, travel, even tracking sleep patterns), each of which typically gives itself permission to extract data about the user and the device. Some applications take the practice of data extraction several bold steps further, by requesting access to geolocation information, photo albums, contacts, or even the ability to turn on the device’s camera and microphone.
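
To make the mechanics concrete, the kind of passive “beacon” described above can be observed with a few lines of code. The following is a minimal sketch in Python using the scapy packet library; it assumes a wireless interface already placed in monitor mode (the interface name wlan0mon is hypothetical), and prints the hardware address and requested network name broadcast by nearby devices.

```python
# A minimal sketch, assuming a wireless interface already in monitor
# mode; "wlan0mon" is a hypothetical interface name.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq, Dot11Elt

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # the device's hardware (MAC) address
        elt = pkt.getlayer(Dot11Elt)
        ssid = elt.info.decode(errors="replace") if elt else ""
        # These two fields alone are enough to recognize and follow
        # a particular device across locations.
        print(f"device {mac} probing for network {ssid!r}")

sniff(iface="wlan0mon", prn=handle, store=False)
```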

We leave behind a trail of digital “exhaust” wherever we go. Data related to our personal lives are compounded by the numerous and growing Internet-connected sensors that permeate our technological environment. The term “Internet of Things” refers to the approximately 15 billion devices (phones, computers, cars, refrigerators, dishwashers, watches, even eyeglasses) that now connect to the Internet and to each other, producing trillions of ever-expanding data points. These data points create an ethereal layer of digital exhaust that circles the globe, forming, in essence, a digital stratosphere.

Given the virtual characteristics of the digital experience, it may be easy to overlook the material properties of communication technologies. But physical geography is an essential component of cyberspace: Where technology is located is as important as what it is. While our Internet activities may seem a kind of ephemeral and private adventure, they are in fact embedded in a complex infrastructure (material, logistical, and regulatory) that in many cases crosses several borders. We assume that the data we create, manipulate, and distribute are in our possession. But in actuality, they are transported to us via signals and waves, through cables and wires, from distant servers that may or may not be housed in our own political jurisdiction. It is actual matter we are dealing with when we go online, and that matters—a lot. The data that follow us around, that track our lives and habits, do not disappear; they live in the servers of the companies that own and operate the infrastructure. What is done with this information is a decision for those companies to make. The details are buried in their rarely read terms of service, or, increasingly, in special laws, requirements, or policies laid down by the governments in whose jurisdictions they operate.

Big State

The Internet started out as an isolated experiment largely separate from government. In the early days, most governments had no Internet policy, and those that did took a deliberately laissez-faire approach. Early Internet enthusiasts mistakenly understood this lack of policy engagement as a property unique to the technology. Some even went so far as to predict that the Internet would bring about the end of organized government altogether. Over time, however, state involvement has expanded, resulting in an increasing number of Internet-related laws, regulations, standards, and practices. In hindsight, this was inevitable. Anything that permeates our lives so thoroughly naturally introduces externalities—side effects of industrial or commercial activity—that then require the establishment of government policy. But as history demonstrates, linear progress is always punctuated by specific events—and for cyberspace, that event was 9/11.

We continue to live in the wake of 9/11. The events of that day in 2001 profoundly shaped many aspects of society. But no greater impact can be found than the changes it brought to cyberspace governance and security, specifically with respect to the role and influence of governments. One immediate impact was the acceleration of a change in threat perception that had been building for years.

During the Cold War, and largely throughout the modern period (roughly the eighteenth century onward), the primary threat for most governments was “interstate” based. In this paradigm, the state’s foremost concern is a cross-border invasion or attack—the idea that another country’s military could use force and violence in order to gain control. After the Cold War, and especially since 9/11, the concern has shifted to a different threat paradigm: that a violent attack could be executed by a small extremist group, or even a single human being who could blow himself or herself up in a crowded mall, hijack an airliner, or hack into critical infrastructure. Threats are now dispersed across all of society, regardless of national borders. As a result, the focus of the state’s security gaze has become omni-directional.

Accompanying this altered threat perception are legal and cultural changes, particularly in reaction to what was widely perceived as the reason for the 9/11 catastrophe in the first place: a “failure to connect the dots.” The imperative shifted from the micro to the macro. Now, it is not enough to simply look for a needle in the haystack. As General Keith Alexander (former head of the NSA and the US Cyber Command) said, it is now necessary to collect “the entire haystack.” Rapidly, new laws have been introduced that substantially broaden the reach of law enforcement and intelligence agencies, the most notable of them being the Patriot Act in the United States—although many other countries have followed suit.

This imperative to “collect it all” has focused government attention squarely on the private sector, which owns and operates most of cyberspace. States began to apply pressure on companies to act as a proxy for government controls—policing their own networks for content deemed illegal, suspicious, or a threat to national security. Thanks to the Snowden disclosures, we now have a much clearer picture of how this pressure manifests itself. Some companies have been paid fees to collude, such as Cable and Wireless (now owned by Vodafone), which was paid tens of millions of pounds by the GCHQ to install surveillance equipment on its networks. Other companies have been subjected to formal or informal pressures, such as court orders, national security letters, the withholding of operating licenses, or even appeals to patriotism. Still others became the targets of computer exploitation, such as US-based Google, whose back-end data infrastructure was secretly hacked into by the NSA.

This manner of government pressure on the private sector illustrates the importance of the physical geography of cyberspace. Of course, many of the corporations that own and operate the infrastructure—companies like Facebook, Microsoft, Twitter, Apple, and Google—are headquartered in the United States. They are subject to US national security law and, as a consequence, allow the government to benefit from a distinct home-field advantage in its attempt to “collect it all.” And that it does—a staggering volume, as it turns out. One top-secret NSA slide from the Snowden disclosures reveals that by 2011, the United States (with the cooperation of the private sector) was collecting and archiving about 15 billion Internet metadata records every single day. Contrary to the expectations of early Internet enthusiasts, the US government’s approach to cyberspace—and by extension that of many other governments as well—has been anything but laissez-faire in the post-9/11 era. While cyberspace may have been born largely in the absence of states, as it has matured states have become an inescapable and dominant presence.

Domain Domination

After 9/11, there was also a shift in US military thinking that profoundly affected cyberspace. The definition of cyberspace as a single “domain”— equal to land, sea, air, and space—was formalized in the early 2000s, leading to the imperative to dominate and rule this domain; to develop offensive capabilities to fight and win wars within cyberspace. A Rubicon was crossed with the Stuxnet virus, which sabotaged Iranian nuclear enrichment facilities. Reportedly engineered jointly by the United States and Israel, the Stuxnet attack was the first de facto act of war carried out entirely through cyberspace. As is often the case in international security dynamics, as one country reframes its objectives and builds up its capabilities, other countries follow suit. Dozens of governments now have within their armed forces dedicated “cyber commands” or their equivalents.

The race to build capabilities also has a ripple effect on industry, as the private sector positions itself to reap the rewards of major cyber-related defense contracts. The imperatives of mass surveillance and preparations for cyberwarfare across the globe have reoriented the defense industrial base.

It is noteworthy in this regard how the big data explosion and the growing power and influence of the state are together generating a political-economic dynamic. The aims of the Internet economy and those of state security converge around the same functional needs: collecting, monitoring, and analyzing as much data as possible. Not surprisingly, many of the same firms service both segments. For example, companies that market facial recognition systems find their products being employed by Facebook on the one hand and the Central Intelligence Agency on the other.

As private individuals who live, work, and play in the cyber realm, we provide the seeds that are then cultivated, harvested, and delivered to market by a massive machine, fueled by the twin engines of corporate and national security needs. The confluence of these two major trends is creating extraordinary tensions in state-society relations, particularly around privacy. But perhaps the most important implications relate to the fact that the market for the cybersecurity industrial complex knows no boundaries—an ominous reality in light of the shifting demographics of cyberspace.

Southern Shift

While the “what” of cyberspace is critical, the “who” is equally important. There is a major demographic shift happening today that is easily overlooked, especially by users in the West, where the technology originates. The vast majority of Internet users now live in the global South. Of the 6 billion mobile devices in circulation, over 4 billion are located in the developing world. In 2001, 8 of every 100 citizens in developing nations owned a mobile subscription. That number has now jumped to 80. In Indonesia, the number of Internet users increases each month by a stunning 800,000. Nigeria had 200,000 Internet users in 2000; today, it has 68 million.

Remarkably, some of the fastest growing online populations are emerging in countries with weak governmental structures or corrupt, autocratic, or authoritarian regimes. Others are developing in zones of conflict, or in countries that have only recently gone through difficult transitions to democracy. Some of the fastest growth rates are in “failed” states, or in countries riven by ethnic rivalries or challenged by religious differences and sensitivities, such as Nigeria, India, Pakistan, Indonesia, and Thailand. Many of these countries do not have long-standing democratic traditions, and therefore lack proper systems of accountability to guard against abuses of power. In some, corruption is rampant, or the military has disproportionate influence.

Consider the relationship between cyberspace and authoritarian rule. We used to mock authoritarian regimes as slow-footed, technologically challenged dinosaurs that would be inevitably weeded out by the information age. The reality has proved more nuanced and complex. These regimes are proving much more adaptable than expected. National-level Internet controls on content and access to information in these countries are now a growing norm. Indeed, some are beginning to affect the very technology itself, rather than vice versa.

In China (the country with the world’s most Internet users), “foreign” social media like Facebook, Google, and Twitter are banned in favor of nationally based, more easily controlled alternatives. For example, WeChat, owned by China-based parent company Tencent, is presently the fifth-largest Internet company in the world after Google, Amazon, Alibaba, and eBay; as of August 2014 it had 438 million active users (70 million outside China) and a public valuation of over $400 billion. China’s popular chat applications and social media are required to police the country’s networks with regard to politically sensitive content, and some even have hidden censorship and surveillance functionality “baked” into their software. Interestingly, some of WeChat’s users outside China began experiencing the same type of content filtering as users inside China, an issue that Tencent claimed was due to a software bug (which it promptly fixed). But the implication of such extraterritorial applications of national-level controls is certainly worth further scrutiny, particularly as China-based companies begin to expand their service offerings in other countries and regions.
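
The “baked-in” client-side censorship described above is mechanically simple, which is part of why it can follow users across borders. The sketch below is a hypothetical illustration: the blocklist terms and the transmit callable are invented for this example, not taken from any real application. The client checks each outgoing message against a list shipped inside the software and silently drops matches.

```python
# Hypothetical illustration of client-side keyword censorship: the
# blocklist terms and the transmit callable are invented, not taken
# from any real application.
BUILTIN_BLOCKLIST = {"example banned phrase", "another banned phrase"}

def send_message(text: str, transmit) -> bool:
    lowered = text.lower()
    if any(term in lowered for term in BUILTIN_BLOCKLIST):
        return False  # message silently suppressed on the client
    transmit(text)
    return True

# Usage: send_message("hello", transmit=network_send) transmits and
# returns True; a message containing a blocked term returns False and
# never leaves the device.
```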

It is important to understand the historical context in which this rapid growth is occurring. Unlike the early adopters of the Internet in the West, citizens in the developing world are plugging in and connecting after the Snowden disclosures, and with the model of the NSA in the public domain. They are coming online with cybersecurity at the top of the international agenda, and fierce international competition emerging throughout cyberspace, from the submarine cables to social media. Political leaders in these countries have at their disposal a vast arsenal of products, services, and tools that provide their regimes with highly sophisticated forms of information control. At the same time, their populations are becoming more savvy about using digital media for political mobilization and protest.

While the digital innovations that we take advantage of daily have their origins in high-tech libertarian and free-market hubs like Silicon Valley, the future of cyberspace innovation will be in the global South. Inevitably, the assumptions, preferences, cultures, and controls that characterize that part of the world will come to define cyberspace as much as those of the early entrepreneurs of the information age did in its first two decades.

Who Rules?

Cyberspace is a complex technological environment that spans numerous industries, governments, and regions. As a consequence, there is no single forum or international organization for cyberspace. Instead, governance is spread across numerous smaller regimes, standard-setting forums, and technical organizations, from the regional to the global. In the early days, Internet governance was largely informal and led by non-state actors, especially engineers. But over time, governments have become heavily involved, leading to more politicized struggles at international meetings.

Although there is no simple division of camps, observers tend to group countries into those that prefer a more open Internet and a tightly restricted role for governments versus those that prefer a more centralized and state-led form of governance, preferably through the auspices of the United Nations. The United States, the United Kingdom, other European nations, and Asian democracies are typically grouped in the former, with China, Russia, Iran, Saudi Arabia, and other nondemocratic countries grouped in the latter. A large number of emerging market economies, led by Brazil, India, and Indonesia, are seen as “swing states” that could go either way.

Prior to the Snowden disclosures, the battle lines between these opposing views were becoming sharply drawn—especially around the December 2012 World Conference on International Telecommunications (WCIT), where many feared Internet governance would fall into UN (and thus more state-controlled) hands. But the WCIT process stalled, and those fears never materialized, in part because of successful lobbying by the United States and its allies, and by Internet companies like Google. After the Snowden disclosures, however, the legitimacy and credibility of the “Internet freedom” camp have been considerably weakened, and there are renewed concerns about the future of cyberspace governance.

Meanwhile, less noticed but arguably more effective have been lower-level forms of Internet governance, particularly in regional security forums and standards-setting organizations. For example, Russia, China, and numerous Central Asian states, as well as observer countries like Iran, have been coordinating their Internet security policies through the Shanghai Cooperation Organization (SCO). Recently, the SCO held military exercises designed to counter Internet-enabled opposition of the sort that participated in the “color revolutions” in former Soviet states. Governments that prefer a tightly controlled Internet are engaging in partnerships, sharing best practices, and jointly developing information control platforms through forums like the SCO. While many casual Internet observers ruminate over the prospect of a UN takeover of the Internet that may never materialize, the most important norms around cyberspace controls could be taking hold beneath the spotlight and at the regional level.

Technological Sovereignty

Closely related to the questions surrounding cyberspace governance at the international level are issues of domestic-level Internet controls, and concerns over “technological sovereignty.” This area is one where the reactions to the Snowden disclosures have been most palpably felt in the short term, as countries react to what they see as the US “home-field advantage” (though not always in ways that are straightforward). Included among the leaked details of NSA- and GCHQ-led operations to exploit the global communications infrastructure are numerous accounts of specific actions to compromise state networks, or even the handheld devices of government officials—most notoriously, the hacking of German Chancellor Angela Merkel’s personal cellphone and the targeting of Brazilian government officials’ classified communications. But the vast scope of US-led exploitation of global cyberspace, from the code to the undersea cables and everything in between, has set off shockwaves of indignation and loud calls for immediate responses to restore “technological sovereignty.”

For example, Brazil has spearheaded a project to lay a new submarine cable linking South America directly to Europe, thus bypassing the United States. Meanwhile, many European politicians have argued that contracts with US-based companies that may be secretly colluding with the NSA should be cancelled and replaced with contracts for domestic industry to implement regional and/or nationally autonomous data-routing policies—arguments that European industry has excitedly supported. It is sometimes difficult to unravel whether such measures are genuinely designed to protect citizens, or are really just another form of national industrial protectionism, or both. Largely obscured beneath the heated rhetoric and underlying self-interest, however, are serious questions about whether any of the measures proposed would have any more than a negligible impact when it comes to actually protecting the confidentiality and integrity of communications. As the Snowden disclosures reveal, the NSA and GCHQ have proved to be remarkably adept at exploiting traffic, no matter where it is based, by a variety of means.

A more troubling concern is that such measures may end up unintentionally legitimizing national cyberspace controls, particularly for developing countries, “swing states,” and emerging markets. Pointing to the Snowden disclosures and the fear of NSA-led surveillance can be useful for regimes looking to subject companies and citizens to a variety of information controls, from censorship to surveillance. Whereas policy makers previously might have had concerns about being cast as pariahs or infringers on human rights, they now have a convenient excuse supported by European and other governments’ reactions.

Spyware Bazaar

One byproduct of the huge growth in military and intelligence spending on cyber-security has been the fueling of a global market for sophisticated surveillance and other security tools. States that do not have an in-house operation on the level of the NSA can now buy advanced capabilities directly from private contractors. These tools are proving particularly attractive to many regimes that face ongoing insurgencies and other security challenges, as well as persistent popular protests. Since the advertised end uses of these products and services include many legitimate needs, such as network traffic management or the lawful interception of data, it is difficult to prevent abuses, and hard even for the companies themselves to know to what ends their products and services might ultimately be directed. Many therefore employ the term “dual-use” to describe such tools.

Research by the University of Toronto’s Citizen Lab from 2012 to 2014 has uncovered numerous cases of human rights activists targeted by advanced digital spyware manufactured by Western companies. Once implanted on a target’s device, this spyware can extract files and contacts, send emails and text messages, turn on the microphone and camera, and track the location of the user. If these were isolated incidents, perhaps we could write them off as anomalies. But the Citizen Lab’s international scan of the command and control servers of these products—the computers used to send instructions to infected devices—has produced disturbing evidence of a global market that knows no boundaries. Citizen Lab researchers found one product, FinSpy, marketed by a UK company, Gamma Group, in a total of 25 countries—some with dubious human rights records, such as Bahrain, Bangladesh, Ethiopia, Qatar, and Turkmenistan. A subsequent Citizen Lab report found that 21 governments are current or former users of a spyware product sold by an Italian company called Hacking Team, including 9 that received the lowest ranking, “authoritarian,” in the Economist’s 2012 Democracy Index.
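
The scanning technique mentioned above (probing servers and matching their responses against distinctive fingerprints) can be illustrated schematically. The following Python sketch is hypothetical: the addresses, header value, and body hash are invented placeholders; real research fingerprints are derived from analysis of actual spyware samples and are not reproduced here.

```python
# Hypothetical sketch of response fingerprinting: the addresses, the
# expected header value, and the body hash are invented placeholders,
# not real indicators.
import hashlib
import requests
import urllib3

urllib3.disable_warnings()  # self-signed certificates are common here

CANDIDATES = ["203.0.113.10", "198.51.100.7"]  # example addresses
KNOWN_BODY_HASH = "d41d8cd98f00b204e9800998ecf8427e"  # placeholder

for ip in CANDIDATES:
    try:
        r = requests.get(f"https://{ip}/", timeout=5, verify=False)
    except requests.RequestException:
        continue
    body_hash = hashlib.md5(r.content).hexdigest()
    if body_hash == KNOWN_BODY_HASH and r.headers.get("Server") == "Apache":
        print(f"{ip}: response matches the known fingerprint")
```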

Meanwhile, a 2014 Privacy International report on surveillance in Central Asia says many of the countries in the region have implemented far-reaching surveillance systems at the base of their telecommunications networks, using advanced US and Israeli equipment, and supported by Russian intelligence training. Products that provide advanced deep packet inspection (the capability to inspect data packets in detail as they flow through networks), content filtering, social network mining, cellphone tracking, and even computer attack targeting are being developed by Western firms and marketed worldwide to regimes seeking to limit democratic participation, isolate and identify opposition, and infiltrate meddlesome adversaries abroad.
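
Deep packet inspection differs from ordinary routing in that it examines payloads, not just headers. The following minimal Python sketch (again using scapy, with a hypothetical interface name and an invented blocklist) shows the core idea; commercial DPI systems do the same thing at line rate in dedicated hardware.

```python
# Minimal illustration of the deep-packet-inspection idea: inspect
# payload contents, not just headers. The interface name and the
# blocklist terms are hypothetical.
from scapy.all import sniff, TCP, Raw

BLOCKLIST = [b"example-banned-term", b"example-banned-site.org"]

def inspect(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for term in BLOCKLIST:
            if term in payload:
                print(f"flagged {pkt[TCP].sport}->{pkt[TCP].dport}: {term!r}")

sniff(iface="eth0", prn=inspect, store=False)
```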

Pushing Back

The picture of the cyberspace landscape painted above is admittedly quite bleak, and therefore one-sided. The contests over cyberspace are multidimensional and include many groups and individuals pushing for technologies, laws, and norms that support free speech, privacy, and access to information. Here, too, the Snowden disclosures have had an animating effect, raising awareness of risks and spurring on change. Whereas vague concerns about widespread digital spying were voiced by a minority and sometimes trivialized before Snowden’s disclosures, now those fears have been given real substance and credibility, and surveillance is increasingly seen as a practical risk that requires some kind of remediation.

The Snowden disclosures have had a particularly salient impact on the private sector, the Internet engineering community, and civil society. The revelations have left many US companies in a public relations nightmare, with their trust weakened and lucrative contracts in jeopardy. In response, companies are pushing back. It is now standard for many telecommunications and social media companies to issue transparency reports about government requests to remove information from websites or share user data with authorities. US-based Internet companies have even sued the government over gag orders that bar them from disclosing information on the nature and number of requests for user information. Others, including Google, Microsoft, Apple, Facebook, and WhatsApp, have implemented end-to-end encryption.
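
The end-to-end principle mentioned above is worth spelling out: when messages are encrypted to the recipient's key on the sender's own device, the company relaying them holds only ciphertext and has nothing meaningful to hand over. A minimal sketch using the PyNaCl library:

```python
# A minimal sketch of the end-to-end principle using PyNaCl: only the
# two endpoints hold keys, so the relaying service sees ciphertext.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key...
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# ...the provider relays `ciphertext` without being able to read it...

# ...and only Bob, holding his private key, can decrypt.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```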

Internet engineers have reacted strongly to revelations showing that the NSA and its allies have subverted their security standards-setting processes. They are redoubling efforts to secure communications networks wholesale as a way to shield all users from mass surveillance, regardless of who is doing the spying. Among civil society groups that depend on an open cyberspace, the Snowden disclosures have helped trigger a burgeoning social movement around digital-security tool development and training, as well as more advanced research on the nature and impacts of information controls.

Wild Card

The cyberspace environment in which we live and on which we depend has never been more in flux. Tensions are mounting in several key areas, including Internet governance, mass and targeted surveillance, and military rivalry. The original promise of the Internet as a forum for free exchange of information is at risk. We are at a historical fork in the road: Decisions could take us down one path where cyberspace continues to evolve into a global commons, empowering individuals through access to information and freedom of speech and association, or down another path where this ideal meets its eventual demise. Securing cyberspace in ways that encourage freedom, while limiting controls and surveillance, is going to be a serious challenge.

Trends toward militarization and greater state control were already accelerating before the Snowden disclosures, and seem unlikely to abate in the near future. However, the leaks have thrown a wild card into the mix, creating opportunities for alternative approaches emphasizing human rights, corporate social responsibility, norms of mutual restraint, cyberspace arms control, and the rule of law. Whether such measures will be enough to stem the tide of territorialized controls remains to be seen. What is certain, however, is that a debate over the future of cyberspace will be a prominent feature of world politics for many years to come.

Deibert, Ronald - Shutting the Backdoor: The Perils of National Security and Digital Surveillance Programs - 2013

Shutting the Backdoor: The Perils of National Security and Digital Surveillance Programs - Ronald Deibert - October 2013

EXECUTIVE SUMMARY

As governments have sought to monitor digital communications for security purposes, the “backdoor” paradigm has become the predominant approach. Strictly speaking, backdoors refer to special methods of bypassing normal authentication procedures to secretly access computing systems. But here the concept is used in a broader sense to describe a range of policies and practices whereby governments compel, or otherwise secure, the cooperation of private sector companies in providing access to data they control. The backdoor paradigm is not only a concern for political reasons, particularly civil liberties; it is also bad for digital security and foreign policy. Law enforcement and intelligence agencies should instead seek to work within the information-rich world of Big Data that surrounds us, with any access to private communications made exceptional and strictly controlled by lawful procedures and oversight mechanisms.

“If a surveillance capability were quietly added into the core of the Internet and an attempt was made to keep it a secret, in some respects this would be antithetical to the overarching philosophy upon which the Internet was built” (Tom Cross, Black Hat 2010).

Introduction

The Internet and associated digital media have deeply penetrated all of society. We now live in the world of the “Internet of things”: by some estimates, there are now 10 billion Internet-connected devices on the planet, networked together via common protocols. This common space is not only a forum for communications; it is a major repository of data about each and every one of us: our daily habits, movements, social relationships, and even private thoughts. For government agencies whose mission is to enforce the law or gather intelligence, cyberspace has become an extraordinary asset and an object of security in its own right. As governments have ramped up digital surveillance programs, they have had to turn to the private sector, which controls not only the familiar services we all use (Google, Facebook, Microsoft), but the vast majority of the physical infrastructure of cyberspace as well—the cell phone base stations, undersea fibre optic cables, Internet Exchange Points, and routers.

One of the ways governments have approached the private sector has been to compel, legally or informally, the development of special modifications to private companies’ technical systems to enable access to data, what I call the “backdoor” paradigm. Strictly speaking, backdoors refer to special methods of bypassing normal authentication procedures to secretly access computing systems. But I use the phrase in a broader sense, to refer to the paradigm of state-directed modification of, and intrusions into, communications infrastructure and services for security purposes. Examples of the backdoor paradigm range from lawful intercept mechanisms coded directly into software, to “splitters” that surreptitiously fork copies of data streams to alternative destinations, to other, informal means of data sharing that may go on between the private sector and security services. The backdoor paradigm is not new, but the Edward Snowden/NSA revelations have cast a spotlight on these practices and underscored how deeply entrenched they have become in the post-9/11 era of Big Data surveillance. In what follows I lay out several concerns about the backdoor paradigm, foremost among them the ways in which, in the name of security, backdoors actually contribute to greater insecurities down the road. Shutting the backdoor is, therefore, an urgent public policy matter for all liberal democratic countries. Rather than sacrifice cyberspace at the altar of security, law enforcement and intelligence agencies should be encouraged to develop alternative modes of data collection strictly within the framework of the rule of law.
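To make the “splitter” idea concrete, here is a minimal software sketch of what is, in practice, usually an optical tap on a fibre link: every byte is relayed to its intended destination while a copy is quietly forked to a second endpoint. This is my own illustration, not anything described in the leaked documents, and the addresses are hypothetical placeholders.

```python
import socket

# Illustrative sketch only: a software analogue of an optical "splitter."
# Each chunk a client sends is forwarded to its real destination and
# simultaneously copied to a hypothetical "collector" endpoint.
def split_stream(client: socket.socket,
                 upstream_addr=("203.0.113.10", 443),   # placeholder
                 collector_addr=("192.0.2.50", 9000)) -> None:  # placeholder
    upstream = socket.create_connection(upstream_addr)
    collector = socket.create_connection(collector_addr)
    while chunk := client.recv(4096):
        upstream.sendall(chunk)    # the traffic's intended path
        collector.sendall(chunk)   # the surreptitious copy
    upstream.close()
    collector.close()
```

The point of the sketch is that the endpoints never see anything unusual: the forwarded stream is byte-for-byte identical to what they would have received without the tap.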

History of Backdoors

Built-in backdoors on communications equipment for law enforcement and intelligence agencies are not new. It is often said that espionage is the second oldest profession, and as long as governments have been engaged in espionage they have looked to assert control through the means of communication. In the early 20th century, for example, telegraphic messages processed by Western Union and other companies, including foreign diplomatic communications, were copied and secretly given to US authorities, much as Verizon has reportedly done today. Codenamed “Shamrock,” the program required the companies to hand over to the NSA, on a daily basis, copies of all of the cables sent to, from, or through the US.[1] During the Cold War, a variety of signals intelligence collection mechanisms, from specially equipped naval vessels to huge land-based antennas, intercepted stray radar and radio transmissions, even those that bounced off the surface of the moon.

The rise of the Internet and digital technology presented a new challenge to signals intelligence, but also a special opportunity. Digital media flowing through fibre optic cables do not unintentionally leak into the atmosphere the way radio transmissions do, making signals interception a more difficult task. At the same time, huge and exponentially growing volumes of digital data, processed and archived through the Internet’s physical points of control (e.g., switches, gateways, exchange points), presented irresistible targets for data mining and analysis, leading the government to seek access through cooperation with the companies that own and operate the infrastructure. Then, as now, the government sought to ensure access to data by deliberately weakening one type of security in the name of another. Occasionally, these efforts trickled into the public domain. During the 1990s, there was a cantankerous “crypto debate” over US government proposals to control the export of strong encryption to foreign jurisdictions, in order to weaken the security of adversaries and make them more open to US surveillance. Critics at the time questioned the wisdom of such controls, arguing that restricting cryptography would end up hurting the US itself. As Bruce Schneier explains:

[t]he government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan ‘Don’t buy the U.S. stuff—it’s lousy.’

The US Communications Assistance for Law Enforcement Act (CALEA), passed in 1994 and still in effect, requires telecommunications carriers to install technical capabilities for lawful surveillance into all equipment manufactured or sold in the US. Although the law applies to telecommunications carriers, US government officials have pushed to expand it to cover VOIP and other broadband digital services. During the “Clipper Chip” debate of the 1990s, the US government attempted to mandate the insertion of special chips into telecommunications equipment that would allow the NSA to eavesdrop on voice traffic using an escrowed encryption key. While critics of the Clipper Chip managed to scuttle the proposal, the same basic backdoor philosophy has continued to drive government surveillance efforts up to and including the present time. 9/11 gave added impetus to these approaches, with a perceived “failure to connect the dots” driving ambitious plans, and a thriving new market, to gather as much information from as many discrete information sources as possible and then mine and analyze it.[2]
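The key-escrow logic behind the Clipper Chip is easy to sketch. The toy example below is my own illustration, using the Python cryptography library rather than Clipper’s actual Skipjack/LEAF design: each message carries its session key wrapped under a key held by an escrow agency, so the agency can decrypt later without the sender’s cooperation.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Toy key-escrow sketch (not the actual Clipper design): the escrow
# agency holds a master key; every message ships with its session key
# wrapped under that master key.
escrow_key = Fernet.generate_key()          # held by the escrow agency
escrow = Fernet(escrow_key)

def encrypt_with_escrow(plaintext: bytes):
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(plaintext)
    leaf = escrow.encrypt(session_key)      # a "law enforcement access field"
    return ciphertext, leaf

def agency_decrypt(ciphertext: bytes, leaf: bytes) -> bytes:
    session_key = escrow.decrypt(leaf)      # recover the wrapped session key
    return Fernet(session_key).decrypt(ciphertext)
```

The sketch also shows the structural weakness critics seized on: whoever holds, steals, or compels the escrow key can decrypt everything, forever.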

The Snowden Revelations and the Backdoor Paradigm

Edward Snowden’s detailed leaks have opened a chasm into the otherwise secretive world of intelligence practices, revealing widespread monitoring of both US and international communications by the US NSA and some of its allied agencies, like the UK’s GCHQ. Among other revelations, the leaked documents have provided details on elaborate programs involving the cooperation of some of the Internet’s largest companies, like Google, Microsoft, Facebook, and Yahoo!, as well as major tier 1 telecommunications companies, like Verizon and Global Crossing. Although the exact details are in dispute, the program codenamed PRISM appears to have involved a hiving off of customer data onto special servers to give law enforcement and intelligence agencies direct access.[3] In the case of leaks involving Verizon, the company supplied the NSA with metadata on all domestic and international telephone call records.[4] Metadata might best be described as everything but the content of the communications: the endpoints of the transmission, the length and time of the call, and so on. Other leaks have shown how foreign-owned telecommunication companies seeking to operate in the United States are approached by US legal officials, called “Team Telecom,” who may request similar data sharing arrangements, such as compliance with CALEA or archiving of data on US soil for national security investigations.[5]
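To see why “everything but the content” is still deeply revealing, consider the rough shape of a single call record. This is a hypothetical sketch of my own, with invented field names, not a schema from the leaked order:

```python
from dataclasses import dataclass

# Illustrative only: the approximate shape of a telephony metadata
# record as described in the Verizon reporting.
@dataclass
class CallRecord:
    originating_number: str
    terminating_number: str
    start_time: str          # ISO 8601 timestamp
    duration_seconds: int
    cell_tower_id: str       # coarse location of the handset
    imei: str                # device identifier
    # deliberately absent: any field containing what was actually said
```

Aggregated over months, records like these map a person’s location history and social graph without ever touching the audio of a single call.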

In the case of leaks concerning Microsoft, the company reportedly helped the NSA work around the encryption protecting its email, cloud, and VOIP services, including providing access to videoconference streams over Skype without users’ knowledge.[6]

Other leaked documents have shed light on a top secret program, codenamed “Bullrun,” whose objective is to weaken encryption standards worldwide, including by encouraging companies to insert deliberate vulnerabilities, known only to the NSA, into their encryption algorithms. Given that Canada’s own signals intelligence agency, the Communications Security Establishment of Canada (CSEC), has a long-standing special relationship with the NSA, it should come as no surprise that Canada has developed similar programs. According to recent Globe and Mail reports, “for nearly two decades, Ottawa officials have told telecommunications companies that one of the conditions of obtaining a licence to use wireless spectrum is to provide government with the capability to bug the devices that use the spectrum.”[7] Documents obtained by The Globe also reveal that as part of these requirements, Ottawa has demanded companies scramble encryption in such a way that it can be accessed by Canada’s law enforcement agencies.

The leaks have touched off vigorous public policy and security debates in the US and abroad. The scope and scale of surveillance highlighted in the leaked documents (which are themselves probably only a partial peek) call into question recent law enforcement lobbying efforts claiming that the FBI was at risk of “going dark” because of new communication technologies.[8] In light of Snowden’s revelations about wholesale surveillance programs, as well as the abundance of information users willingly give out in the public domain about themselves, their friends, interests, political preferences, and so on, the assertions seem ludicrous. The controversy has also raised questions about the degree and effectiveness of oversight and public accountability for backdoor arrangements. While it is true that three branches of US government approved the programs associated with the Snowden leaks, the deliberations of the court that oversees them, the FISC, are themselves secret.[9] Moreover, it appears that many elected officials were either ignorant of, or deliberately misled by government officials about, the true scope and scale of the programs in question.[10] The leaks have also raised questions about oversight of similar programs operating outside the United States, such as those of the UK’s GCHQ or Canada’s CSEC, both of which have arguably far less independent scrutiny of their activities.[11] While European countries are subject to more stringent privacy protections, some of their signals intelligence programs have even less rigorous oversight than what appears to be the case in the UK and Canada, and certainly less than in the United States. Backdoors into massive and detailed databases of private information, such as appear to have been constructed in the US and allied countries, call for an urgent debate about proper checks and balances around national security in the world of Big Data.

Quite apart from these concerns about privacy and the potential abuse of unchecked power is an additional concern about the security implications of backdoors. Building backdoors into devices and infrastructure may be useful to law enforcement and intelligence agencies, but it also provides a built-in vulnerability for those who would seek to exploit it, and in doing so actually contributes to insecurity for the whole of society that depends on that infrastructure. In 2013, a team of twenty computer security researchers issued a report published by the US-based Center for Democracy and Technology, arguing that “mandating wiretap capabilities in endpoints poses serious security risks,” and that building “intercept functionality into … products is unwise and will be ineffective, with the result being serious consequences for the economic well-being and national security of the United States.”[12] As noted security expert Bruce Schneier points out with respect to the FBI’s claims, in terms that could be generalized to other security services:

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.[13]

MIT’s Susan Landau contributes an additional argument to these security concerns, which she labels the “time factor.”[14] CALEA mandated that telecommunications equipment be built with law enforcement access capabilities, and it applied retroactively to older technologies as well. But it never factored in that the backdoors could become vulnerable over time, as attack capabilities progressed and knowledge of the backdoors spread. These concerns were amply demonstrated in a 2010 Black Hat presentation by IBM researcher Tom Cross on security vulnerabilities in the lawful intercept capabilities of Cisco routing products (which were based on standards set by the European Telecommunications Standards Institute).[15] Cross showed how the lawful intercept interface could be exploited to enter the systems and eavesdrop on communications without leaving a trace.[16]

There have been several real-life demonstrations of these types of vulnerabilities that should give us pause. Recent political scandals in Greece and Italy, in which prominent officials and business people had their phones secretly tapped and the information used for purposes of blackmail and slander, were enabled by poorly designed lawful access backdoors in cell phone infrastructure.[17] Shodan, a search engine that indexes Internet-connected devices, has vividly demonstrated how many systems, some of them critical infrastructure, can be reached from the wider Internet.[18] Some of these access points have emerged because of vulnerabilities in the code, but in other cases backdoors were built in and then forgotten about, or poorly maintained, by the engineers. In one case, the Canadian company RuggedCom, whose industrial grade software is used to control everything from power utilities to military plants and traffic control systems, had a backdoor hard-coded with a single username (“factory”) and a password automatically generated from the MAC address of the device, which itself could easily be found using Shodan.[19] In 2008, Citizen Lab researchers discovered that the Chinese version of the popular VOIP product Skype (called TOM-Skype) had been coded with a special surveillance system such that whenever certain keywords were typed into the chat client, data would be sent to a server in mainland China (presumably to be shared with China’s security services).[20] Upon further investigation, it was discovered that the server on which the chat messages were stored was not password protected, allowing for the download of millions of personal chats, many of which included credit card numbers, business transactions, and other private information.[21]
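To illustrate why a scheme like that is so dangerous, here is a hypothetical sketch (my own invention, not RuggedCom’s actual algorithm) of a password derived deterministically from a MAC address. Once the derivation leaks, every device with a visible MAC is open:

```python
import hashlib

# Hypothetical sketch, NOT RuggedCom's actual algorithm: a "factory"
# password derived deterministically from a device's MAC address. Once
# the scheme leaks, anyone who can read a MAC address (e.g., from a
# banner indexed by Shodan) can compute the password for every device.
def factory_password(mac: str) -> str:
    normalized = mac.lower().replace(":", "").replace("-", "")
    return hashlib.md5(normalized.encode()).hexdigest()[:8]

print(factory_password("00:0A:DC:12:34:56"))  # identical for every attacker
```

There is no patch short of new firmware: the “credential” is a pure function of public information, so rotating it is impossible.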

By definition, backdoors engineered for lawful interception are engineered vulnerabilities by a different name. In these, and likely numerous other undiscovered cases, those vulnerabilities offer a direct point of access for exploitation. Building insecurities into the communications infrastructure that surrounds us may be a shortcut for law enforcement and intelligence, but is it one worth taking, given the vulnerabilities it creates for all of society?

Not only are backdoors a concern for civil liberties and infrastructure security, they also set a bad precedent for, and help legitimize, the very same practices abroad. The TOM-Skype example may look amateur in comparison to programs like the NSA’s PRISM, but it is a local variation on a common theme. No doubt one implication of Snowden’s revelations will be to spur numerous national efforts to regain control of information infrastructures through national competitors to Google, Verizon, and the other companies implicated, not to mention the development of national signals intelligence programs that attempt to duplicate the US model.[22] Even before the revelations, numerous companies faced complex and, at times, frustrating national “lawful access” requests from newly emerging markets. Many countries of the global South lack even basic safeguards and accountability mechanisms around the operations of security services, and their demands on the private sector could contribute to serious human rights violations and other forms of repression. For example, India, the United Arab Emirates, Saudi Arabia, and other countries have all demanded that Canada’s BlackBerry put in place lawful interception and monitoring capabilities, compliance with which could begin to affect the nature of the BlackBerry product itself. Although the company rarely discloses anything about these agreements, a spokesperson admitted that it had developed a monitoring system for its consumer devices for the government of India and would even train technicians in interception back in Canada.[23] (In India, the government requires all telecommunication companies to make data available to security agencies without a warrant or other basic safeguards.) What was once a signature feature of the BlackBerry product, uncrackable encryption, has been compromised in the name of opening up new markets.

Alternatives to Backdoors

If the backdoor paradigm is questionable on civil liberties, economics, security, and foreign policy grounds, what are the alternatives? As long as we live in a dangerous world with real threats, intelligence agencies are going to be an important element of liberal democratic government. Likewise, without agencies to enforce the law, the very basis for the exercise of human rights, including privacy, would quickly diminish.

Elsewhere, I have argued that civil society needs to develop a cyber security strategy, and part of that strategy must involve a response to the backdoor paradigm on security grounds. Rather than building in insecurities by design, a better strategy would emphasize the opposite: encouraging the widespread use of state-of-the-art encryption systems, as well as the adoption of standards such as HTTPS by default and two-factor authentication. Another component of such a strategy would be to emphasize the security benefits of open source software, which provides greater assurance that companies have not built in special backdoor privileges by design, and that customers get what they sign off on in their terms of service. In line with this approach would be regulations around deletion of stored data and proposals like the “right to be forgotten,” which, although typically seen in a rights-based framework, have security implications insofar as they restrict the copies of stored data that might be exploited for nefarious purposes.
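Two-factor authentication, for instance, adds a second proof of identity that a stolen password alone cannot satisfy. Below is a minimal sketch of the standard mechanism (RFC 6238, the algorithm behind most authenticator apps), using only the Python standard library; the secret is a common test value, not a real credential:

```python
import base64, hashlib, hmac, struct, time

# Minimal RFC 6238 time-based one-time password (TOTP), built on the
# RFC 4226 HMAC-based OTP: the code is an HMAC of the current 30-second
# time step, truncated to six digits.
def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g., a 6-digit code valid for ~30 seconds
```

Because the shared secret never crosses the wire, an eavesdropper who captures one code cannot reuse it once the time step rolls over.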

As for legitimate lawful access requests, the group of experts involved in the CDT study argue that “law enforcement’s use of passive interception and targeted vulnerability exploitation tools creates fewer security risks for nontargets and critical infrastructure than do design mandates for wiretap interfaces.”[24] In a world of Big Data, in which so much of our information is routinely given away as part of our daily life, law enforcement and intelligence agencies need to find ways to work within this universe as it exists, rather than drill holes in it from the inside out in ways that undermine confidence and create additional risks for all of society. Those lawful access provisions that are still required should be infrequent and strictly controlled, with rigorous oversight and public accountability. Direct wholesale tapping of entire services should be eliminated. Not only will this protect civil liberties and prevent the concentration of power in unchecked hands, it will ensure that we are not doing more to undermine our own security in an overzealous surveillance quest.

Conclusion

The communications environment we have created around us is complex and expanding, and the security issues surrounding it are serious and important. The backdoor paradigm is symptomatic of a larger trend that privileges intelligence and security agencies, builds borders around cyberspace, and undermines checks and balances on concentrations of power. The backdoor paradigm is not only bad for civil liberties, it is bad for security, and it sets dangerous precedents by legitimizing practices abroad that we ostensibly oppose. Law enforcement and intelligence agencies are necessary and important to liberal democracy, but there is more than one way for them to go about their missions. In the world of Big Data, in which so much personal information is readily available, new methods of “connecting the dots” must be explored other than those that drill holes into our communications infrastructure. Government surveillance practices need radical re-thinking, beginning first and foremost with a reinforcement of basic checks and balances that have, for too long, been sidestepped in the name of security.

[1] See James Bamford, Body of Secrets: Anatomy of the Ultra-Secret National Security Agency (New York: Random House, 2001), p. 133.

[2] A good historical resource for all of the above is Shane Harris, The Watchers: The Rise of America’s Surveillance State (New York: Penguin, 2010).

[3] http://www.guardian.co.uk/world/2013/jun/06/us-tech-giants-nsa-data; also see “How Prism Works,” http://ashkansoltani.org/2013/06/14/prism-solving-for-x.

[4] http://www.guardian.co.uk/world/2013/jun/06/nsa-phone-records-verizon-court-order.

[5] http://www.guardian.co.uk/world/2013/jul/12/telstra-deal-america-government-spying and http://articles.washingtonpost.com/2013-07-06/business/40406049_1_u-s-access-global-crossing-surveillance-requests.

[6] http://www.guardian.co.uk/world/2013/jul/11/microsoft-nsa-collaboration-user-data.

[7] Colin Freeze and Rita Trichur, “Wireless firms agree to give Ottawa ability to monitor calls, phone data,” The Globe and Mail (September 16, 2013).

[8] See the testimony of Robert S. Mueller, Director, FBI, before the Committee on the Judiciary, US Senate, June 19, 2013, http://www.judiciary.senate.gov/pdf/6-19-13MuellerTestimony.pdf, and Hearing on “Going Dark: Lawful Electronic Surveillance in the Face of New Technologies,” http://judiciary.house.gov/hearings/hear_02172011.html.

[9] See Mark Rumold and David Sobel, “Government Says Secret Court Opinion on Law Underlying PRISM Program Needs to Stay Secret,” EFF (June 7, 2013), https://www.eff.org/deeplinks/2013/06/government-says-secret-court-opinion-law-underlying-prism-program-needs-stay.

[10] Greg Miller, “Misinformation on classified NSA programs includes statements by senior US officials,” Washington Post (June 30, 2013), http://www.washingtonpost.com/world/national-security/misinformation-on-classified-nsa-programs-includes-statements-by-senior-us-officials/2013/06/30/7b5103a2-e028-11e2-b2d4-ea6d8f477a01_print.html.

[11] http://www.independent.co.uk/news/uk/home-news/gchq-spying-programme-spy-watchdog-is-understaffed-and-totally-ineffective-8708231.html and http://www.thestar.com/news/world/2013/06/10/us_online_surveillance_canadian_cyber_expert_weighs_in.html.

[12] https://www.cdt.org/files/pdfs/CALEAII-techreport.pdf.

[13] “The Problems with CALEA-II,” Schneier on Security, June 4, 2013, http://www.schneier.com/blog/archives/2013/06/the_problems_wi_3.html.

[14] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2028152.

[15] Tom Cross, “Exploiting Lawful Intercept to Wiretap the Internet,” BlackHat DC 2010, http://www.blackhat.com/presentations/bh-dc-10/Cross_Tom/BlackHat-DC-2010-Cross-Attacking-LawfulI-Intercept-wp.pdf. Notably, the presentation was possible because Cisco is one of the few companies to openly publish its lawful intercept architecture for peer review; without that transparency, Cross’s analysis could not have been done.

[16] http://judiciary.house.gov/hearings/printers/112th/112-59_64581.PDF, p. 23.

[17] See Vassilis Prevelakis and Diomidis Spinellis, “The Athens Affair,” IEEE Spectrum (June 29, 2007), http://spectrum.ieee.org/telecom/security/the-athens-affair.

[18] http://arstechnica.com/security/2012/10/backdoor-in-computer-controls-opens-critical-infrastructure-to-hackers.

[19] http://lists.grok.org.uk/pipermail/full-disclosure/2012-April/086652.html.

[20] Information Warfare Monitor and ONI Asia, “Breaching trust: An analysis of surveillance and security practices on China’s TOM-Skype platform,” 2008, and Jedidiah Crandall et al., “Chat program censorship and surveillance in China: Tracking TOM-Skype and Sina UC,” First Monday, June 2013, http://firstmonday.org/ojs/index.php/fm/article/view/4628/3727.

[21] The China-based espionage attacks aimed at Google and other companies, revealed in January 2010 as Operation Aurora, may have been directed at accessing Google’s database of court-ordered surveillance targets, and possibly even the forked databases that enable FBI access to Google data under the “PRISM” program. Not enough is known about the attacks or the PRISM program to say for certain. See http://articles.washingtonpost.com/2013-05-20/world/39385755_1_chinese-hackers-court-orders-fbi.

[22] See Ron Deibert, “Why NSA Spying Scares the World,” CNN Opinion (June 12, 2013), http://www.cnn.com/2013/06/12/opinion/deibert-nsa-surveillance.

[23] http://timesofindia.indiatimes.com/tech/tech-news/telecom/Government-BlackBerry-dispute-ends/articleshow/20998679.cms.

[24] “Going Bright: Wiretapping without Weakening Communications Infrastructure,” p. 63.

Citizen Lab may give Canadian government a pass when it comes to the subversion of our human rights

Last week I woke up a little bit grumpy and snarked at Ronald Deibert, the Director of the Citizen Lab at the Munk School of Global Affairs, University of Toronto.

Deibert had tweeted that a Canadian foreign policy official had mentioned to the United Nations General Assembly that "The University of Toronto's Citizen Lab has undertaken leading-edge advanced research in developments that impact the openness and security of the Internet and that pose threats to human rights [at 22:15 of video]."

While I have tremendous respect for Deibert's efforts to expose the subversion of human rights by technological means in other countries, I have been struck by his reticence when it comes to investigating or critiquing the mass surveillance of Canadians. And so we had a brief exchange:

Allen, P - Deibert, R - Twitter 20151217
Deibert, R - Allen, P - Twitter 20151217

In a future post, I will examine Deibert's claim that he's "done plenty" to attend to Canada's malpractice and that "the record speaks for itself."

Deibert, Ronald - Now we know Ottawa can snoop on any Canadian. What are we going to do? - The Globe and Mail 20140130

Since June, 2013, a steady stream of revelations from Edward Snowden has shed light on a vast U.S.-led surveillance system. While there have been several important Canadian-related revelations, none has raised clear issues of potential unlawfulness. That is, until now.

A “Top Secret” presentation obtained by the CBC from the Snowden cache, which I reviewed in detail, outlines the indiscriminate and bulk collection and analysis of Canadian communications data by the Communications Security Establishment of Canada (CSEC). Assuming the documents are legitimate, it is difficult not to reach the conclusion that these activities constitute a clear violation of CSEC’s mandates and almost certainly of the Charter of Rights and Freedoms.

The CSEC presentation describes ubiquitous surveillance programs clearly directed at Canadians, involving data associated with Canadian airports, hotels, wi-fi cafes, enterprises and other domestic locations. The presentation outlines the challenges of discerning specific internet addresses and IDs associated with users within the universe of bulk data, paying special attention to the movement of people through airports. It outlines the results of experiments undertaken at a medium-sized city airport, possibly Calgary or Halifax, along with observations at “other domestic airports,” “hotels in many cities” and “mobile gateways in many cities.” Observations are made with detailed graphs of specific patterns of communications, noting differences in how individuals communicate upon arrival and during departure, how long they spend in transit lounges, wi-fi cafes, hotels and even places of work. The objective, the presentation says, is to separate the “needle from the haystack” – the haystack being, of course, all of us.
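What might such an analytic look like in practice? The following is my own crude reconstruction of the kind of cross-site correlation the slides describe, not CSEC’s actual code; the site labels and window are hypothetical:

```python
from collections import defaultdict

# Illustrative only: given bulk sightings of (device_id, site, unix_ts),
# find devices seen at an airport hotspot and then again at other sites
# within a follow-on window -- the "needle from the haystack" analytic.
def follow_on_sightings(sightings, airport="airport-wifi", window_hours=72):
    by_device = defaultdict(list)
    for device_id, site, ts in sightings:
        by_device[device_id].append((ts, site))
    trails = {}
    for device_id, events in by_device.items():
        events.sort()
        airport_times = [ts for ts, site in events if site == airport]
        if not airport_times:
            continue
        t0 = airport_times[0]
        later = [(ts, site) for ts, site in events
                 if site != airport and 0 < ts - t0 <= window_hours * 3600]
        if later:
            trails[device_id] = later   # hotels, cafes, workplaces...
    return trails
```

Notice that the sketch needs nothing but device identifiers and timestamps: no content, no warrant-triggering “communication” in the government’s narrow sense, yet it reconstructs where a person went and when.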

The presentation specifies that at least some of the bulk data from these locations was obtained through the cooperation of what’s only described as a “Canadian Special Source,” which is likely a Canadian telecommunications provider. If so, such revelations would make a mockery of Canadian carriers advertising their services as a “safe haven” from the snooping U.S. National Security Agency. From an accountability and oversight point of view, moving data hosting from the United States to Canada is like moving from a dimly lit cave to a pitch-black tunnel at the back of the cave.

What does this mean for Canadians? When you go to the airport and flip open your phone to get your flight status, the government could have a record. When you check into your hotel and log on to the Internet, there’s another data point that could be collected. When you surf the Web at the local cafe hotspot, the spies could be watching. Even if you’re just going about your usual routine at your place of work, they may be following your communications trail.

Ingenious? Yes. Audacious? Yes. Unlawful? Time for the courts to decide. With regard to recent revelations, Canadian government officials have strenuously denied doing what is clearly described in this presentation. On 19 September 2013, CSEC chief John Forster was quoted by The Globe and Mail as saying, “CSEC does not direct its activities at Canadians and is prohibited by law from doing so.” In response to a lawsuit launched by the British Columbia Civil Liberties Association against the Government of Canada, CSEC admitted that there “may be circumstances in which incidental interception of private communications or information about Canadians will occur.” Only in Orwell-speak would what is contained in these presentations be described as “incidental” or “not directed at Canadians.” Then again, an Orwellian society is what we are in danger of becoming.

The revelations require an immediate response. They throw into sharp relief the obvious inadequacy of the existing “oversight” mechanism, which operates entirely within the security tent. They cast into doubt all government statements made about the limits of such programs. They raise the alarming prospect that Canada’s intelligence agencies may be routinely obtaining data on Canadian citizens – including revealing personal data – from private companies, on the basis of a unilateral and highly dubious definition of “metadata” (the information sent by cellphones and mobile devices describing their location, numbers called and so on) as somehow not being “communications.” Such operations go well beyond invasions of privacy; the potential for the abuse of unchecked power contained here is practically limitless.

We live in a world of Big Data and the Internet of Things, our lives turned inside out. We leave a vast digital trail of intimately revealing metadata around us wherever we go. Allowing the state to have access to all of it is incompatible with a free and democratic society. The question now for Canadians to collectively address is what are we going to do about it?