Citizen Lab - Not by Technical Means Alone: The Multidisciplinary Challenge of Studying Information Controls - 2013

Abstract

The study of information controls is a multidisciplinary challenge. Technical measurements are essential to such a study, but they do not provide insight into why regimes enact controls or what those controls’ social and political effects might be. Investigating these questions requires that researchers pay attention to ideas, values, and power relations. Interpreting technical data using contextual knowledge and social science methods can lead to greater insights into information controls than either technical or social science approaches alone. The OpenNet Initiative has been developing a mixed-methods approach to the study of information controls since 2003.

Introduction

Information controls can be conceptualized as actions conducted in and through the Internet and other information and communication technologies (ICTs). Such controls seek to deny (as with Internet filtering), disrupt (as in distributed denial-of-service, or DDoS, attacks), or monitor (such as surveillance) information for political ends. Here, we examine national-level, state-mandated Internet filtering, but the arguments we raise apply to other information controls and technologies as well.

Technical measurements are essential for determining the prevalence and operation of information controls such as Internet filtering. However, alone, such measurements are insufficient for determining why regimes decide to enact controls and the political, social, and economic impacts of these decisions.

To gain a holistic understanding of information controls, we must study both technical processes and the underlying political, legal, and economic systems behind them. Multiple actors in these systems seek to assert agendas and exercise power, including states (military, law enforcement, and intelligence agencies), inter-governmental organizations, and the private sector. These actors have different positions of influence within technical, political, and legal systems that affect their motivations and actions, as well as the resulting consequences.

At its core, the study of information controls is a study of the ideas, values, and interests that motivate actors, and the power relations among those actors. The Internet is intimately and inseparably connected to social relations and is thus grounded in contexts, from its physical configuration — which is specific to each country — to its political, social, and military uses. Studying information controls’ technical operation and the political and social context behind them is an inherently multidisciplinary exercise.

In 2003, the inter-university OpenNet Initiative (ONI; https://opennet.net) launched with the mission of empirically documenting national-level Internet censorship through a mixed-methods approach that combines technical measurements with fieldwork and legal and policy analysis. At the time, only a few countries filtered the Internet. Since 2003, the ONI has tested for Internet filtering in 74 countries and found that 42 of them — including both authoritarian and democratic regimes — implement some level of filtering. Internet censorship is quickly becoming a global norm. The spread and dynamic character of information controls makes the need for evidence-based multidisciplinary research on these practices increasingly important. Here, we present the ONI approach via several case studies and discuss methodological challenges and recommendations for the field moving forward.

Mixed-Methods Approach

Despite the global increase in Internet censorship, multidisciplinary studies have been limited. Technical studies have focused on specific countries (such as China) and filtering technologies.1,2,3 Studies of global Internet filtering have used PlanetLab (www.planet-lab.org), which has limited vantage points into countries of interest and tests academic networks, which might not represent average national-level connectivity.4,5 In the social sciences, particularly political science and international relations, empirical studies on information controls and the Internet’s impact on global affairs are growing, but they seldom use technical methods. This slow adoption is unsurprising; disciplinary boundaries are deeply entrenched in the social sciences, and incentives to explore unconventional methods, especially ones that require specialized skills, are low. Social scientists are more comfortable focusing on social variables: norms, rules, institutions, and behaviors. Although these variables are universally relevant to social science, for information controls research they should be paired with technical methods, including network measurements.

Studying information controls requires skills and perspectives from multiple disciplines, including computer science (especially network measurement and security), law, political science, sociology, anthropology, and regional studies. Gaining proficiency in all of these fields is difficult for any scholar or research group. We attempted to bridge these areas through a multidisciplinary collaboration. The ONI started as a partnership between the University of Toronto, Harvard University, and the University of Cambridge, bringing together researchers from political science, law, and computer science. Beyond these core institutions, the ONI helped form and continues to support two regional networks, OpenNet Asia (http://opennet-asia.net) and OpenNet Eurasia. Fieldwork conducted by local and regional experts from our research network has been a central component of our approach. The practice and policy of information control can vary widely among countries. Contextual knowledge from researchers who live in the countries of interest, speak the local language, and understand the cultural and political subtleties is indispensable.

Our methods and tools for measuring Internet filtering have evolved gradually over the past 10 years. Early efforts used publicly available proxies and dial-up access to document filtering in China.6 A later approach (which continues today) is client-based, in-country testing. This approach uses software written in Python in a client-server model; the client is distributed to researchers and attempts to access a predefined list of URLs simultaneously in the country of interest (the “field”) and in a control network (the “lab”). In our tests, the lab connection is the University of Toronto network, which doesn’t filter the type of content we test for. Once testing is complete, we compress the results and transfer them to a server for analysis. We collect several data points for each URL access attempt: HTTP headers and status code, IP address, page body, and, in some cases, trace routes and packet captures. A combined process of automated and manual analysis helps us identify differences between the field and lab results and isolate filtering instances. Because attempts to access websites from different geographic locations can return different data points for innocuous reasons (such as a domain resolving to different IP addresses for load balancing, or content displaying in different languages depending on where a request originates), we must often manually inspect the results.
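The following is a minimal sketch, in Python, of the kind of record such a client might collect for each URL access attempt (status code, headers, resolved IP address, and page body). It is an illustration only, not the ONI client; the field names, error handling, and sample URLs are our own.

    # Minimal sketch of client-style URL testing: fetch each URL and record the
    # data points described above. Illustrative only, not the ONI client.
    import json
    import socket
    from urllib.parse import urlparse

    import requests


    def fetch(url, timeout=15):
        """Attempt to fetch a URL and record basic observable data points."""
        record = {"url": url}
        try:
            host = urlparse(url).hostname
            record["ip"] = socket.gethostbyname(host)   # resolved IP address
            resp = requests.get(url, timeout=timeout)
            record["status"] = resp.status_code         # HTTP status code
            record["headers"] = dict(resp.headers)      # HTTP headers
            record["body_len"] = len(resp.text)         # size of page body
        except Exception as exc:                        # DNS failure, reset, timeout, ...
            record["error"] = repr(exc)
        return record


    if __name__ == "__main__":
        # In practice the same list is run in the field and in the lab, and the
        # two result sets are then compared record by record.
        urls = ["http://www.bbc.com/", "http://example.com/"]
        print(json.dumps([fetch(u) for u in urls], indent=2))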

Internet censorship research involves ethical considerations, particularly when we employ client-based testing, which requires openly accessing numerous potentially sensitive websites in quick succession. This method can pose security concerns for users depending on the location. Because our goal is to reproduce and document an average Internet user’s experience in the target country, the client doesn’t use censorship circumvention or anonymity techniques when conducting tests. Before testing takes place, we hold an informed consent meeting to clearly explain the risks of participating in the research. The decision about where to test is driven by safety and practicality concerns. Often, countries with the potential for interesting data are considered too dangerous for client-based testing. For example, due to security concerns, we did not run client tests during Syria’s recent conflict, or in certain countries (such as Cuba or North Korea) at all.

Internet filtering measurements are only as good as the data sample being tested. ONI testing typically uses two lists of URLs as its sample: a global list and a local list. The global list comprises a range of internationally relevant and popular websites, predominantly in English, such as international news sites (CNN, BBC, and so on) and social networking platforms (Facebook and Twitter). It also includes content that is regularly filtered, such as pornography and gambling sites. This list acts as a baseline sample that allows for cross-country and cross-temporal comparison. Regional experts compile local lists for each country using material specific to the local political, cultural, and linguistic context. These lists can include URLs of local independent media, oppositional political and social movements, or religious organizations unique to the country or region. The lists also contain URLs that have been reported to be blocked or have content likely to be targeted in that country. These lists do not attempt to enumerate every website that a country might be filtering, but they can provide a snapshot into filtered content’s breadth, depth, and focus.
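As a purely hypothetical illustration of how such a per-country sample could be assembled, the sketch below merges a shared global baseline with a country-specific local list. Apart from CNN and Facebook, which appear on the global list described above, the URLs and category labels are placeholders rather than actual ONI list entries.

    # Hypothetical structure for assembling a per-country testing sample from a
    # shared global baseline list and a country-specific local list.
    GLOBAL_LIST = [
        {"url": "http://www.cnn.com/", "category": "international news"},
        {"url": "http://www.facebook.com/", "category": "social networking"},
        {"url": "http://gambling.example.com/", "category": "gambling"},  # placeholder
    ]

    LOCAL_LISTS = {
        "YE": [  # Yemen: placeholder entry for locally relevant content
            {"url": "http://opposition-news.example.ye/", "category": "political opposition"},
        ],
    }


    def testing_sample(country_code):
        """Merge the global baseline with the local list for one country."""
        return GLOBAL_LIST + LOCAL_LISTS.get(country_code, [])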

Before testing occurs, gaining knowledge about the testing environment, including a country’s Internet market and infrastructure, can help determine significant network vantage points. Understanding a country’s regulatory environment can provide insight into how it implements information controls, legally and extra-legally, and how ISPs might differ in implementing filtering.

Timing in testing is also important. Authorities might enact or alter information controls in response to events on the ground. Because our testing method employs client-based testing and analysis, resource constraints require that we schedule testing strategically. Local experts can identify periods in which information might be disrupted, such as elections or sensitive anniversaries, and provide context for why events might trigger controls.

Case Studies

The intentions and motivations of authorities who mandate censorship aren’t readily apparent from technical measurements alone. Filtering might be motivated by time-sensitive political events, and can be implemented in a nontransparent manner for political reasons. In other cases, decisions to filter content might come from a desire to protect domestic economic interests. Filtering can also come with unintended consequences when the type of content filtered and the jurisdiction where it’s blocked are not the censors’ intended targets.

In the following cases, we illustrate how a mixed-methods approach can ground technical filtering measurements in the political, economic, and social context in which authorities apply them.

Political Motivations

Although technical measurements can determine what’s censored and how that censorship is implemented, they can’t easily answer the question of why content is censored. Understanding what motivates censorship can provide valuable insight into measurements while informing research methods.

Political events. Information controls are highly dynamic and can be triggered or adjusted in response to events on the ground. We call this practice just-in-time blocking (JITB), which refers to the denial of access to information during key moments when the information might have the greatest impact, such as during elections, periods of civil unrest, and sensitive political anniversaries.

The most dramatic implementation of JITB is the complete shutdown of national connectivity, as was seen recently during mass demonstrations in the Middle East and North Africa (MENA).7 In these extreme cases, we can see the disruption via traffic monitoring, while the political event’s prominence makes the context obvious. In other cases, the disruption might be subtle and implemented only for a short period. For example, ONI research during the 2005 Kyrgyzstan parliamentary elections and 2006 Belarus presidential elections found evidence of DDoS attacks against opposition media, and intermittent website inaccessibility.8 In these cases, attribution is difficult to assess; attacks such as DDoS provide plausible deniability.

Recurring events (for example, sensitive anniversaries) or scheduled events (such as elections) let us trace patterns of information controls enacted in response to those events. Because our client-based testing relies on users in-country, continuous monitoring isn’t feasible, and knowing which events might trigger information controls is highly valuable. However, even in countries with aggressive information controls and records of increased controls during sensitive events, anticipating those that will lead to JITB can be difficult.

In 2011, we collaborated with the BBC to analyze a pilot project it conducted to provide Web proxy services that would deliver content in China and Iran, where BBC services have been consistently blocked.9 We monitored usage of Psiphon (the proxy service used by the BBC; see http://psiphon.ca) and tested for Internet filtering daily before, during, and after two sensitive anniversaries: the 1989 Tiananmen Square protest and the 2009 disputed Iranian presidential elections. These anniversaries’ sensitivity and past evidence that the respective regimes targeted information controls around the anniversary dates led us to hypothesize that authorities would increase controls around the events. However, our hypothesis wasn’t confirmed — we observed little variance in blocking and no secondary reports of increased blocking. We also didn’t see the expected increase in Psiphon node blocking. However, several unforeseen events in China did appear to trigger a censorship increase. Rumors surrounding the death of former president Jiang Zemin and public discontent following a fatal train collision in Wenzhou were correlated with an increase in the blocking of BBC’s proxies and other reports of censorship. Other studies have similarly shown Chinese authorities quickly responding to controversial news stories with increased censorship of related content.10 This case shows that predicting changes in information control is difficult, and that unforeseen events can rapidly influence how authorities target content. Measurement methods that are technically agile, can adapt to events, and are informed by a richer understanding of the local context through local experts can help reduce this uncertainty.

Filtering transparency. The degree to which censors acknowledge that filtering is occurring and inform users about what content is filtered can vary significantly among countries and ISPs. Many states apply Internet filtering openly, with explicit block pages that notify users why content is blocked and in some cases offer channels for appeal. Others apply filtering using methods that make websites appear inaccessible due to network errors, with no acknowledgment that access has been restricted and no remedies offered. Interestingly, in some cases, authorities apply filtering transparently to certain types of content and covertly to others.

Although determining filtering transparency is a relatively straightforward technical question, knowing what motivates censors to make filtering more or less transparent requires understanding the environment in which such filtering takes place. States might filter transparently to be perceived as upholding certain social values, as seen among MENA countries that block access to pornography or material deemed blasphemous. Other states might wish to retain plausible deniability to accusations that they block sites of opposition political groups, and thus might block using methods that mimic technical errors.

Yemen’s filtering practices illustrate this complexity. ONI testing in Yemen found that some content, including pornography and LGBT content, is blocked with an explicit page outlining why and offering an option to have this blocking reassessed (see https://opennet.net/research/profiles/yemen). However, other websites — particularly those containing critical political content, which Yemen’s constitution ostensibly protects — have been consistently blocked through TCP reset packet injection. This method is not transparent to average users and would be difficult to distinguish from routine network issues. State-run ISPs in Yemen have denied that they block these political sites, instead attributing their inaccessibility to technical error; covert blocking of political content offers the government plausible deniability.
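As a rough illustration of how analysis can separate these cases, the sketch below classifies a failed access attempt as an explicit block page, a connection error consistent with (but not proof of) reset injection, or a timeout. The block-page fingerprint strings are placeholders; real block pages must be fingerprinted per country and ISP.

    # Sketch: classify an access attempt to distinguish transparent blocking
    # (an explicit block page) from covert methods that resemble network errors.
    # The block-page marker strings below are placeholders.
    import requests

    BLOCK_PAGE_MARKERS = ["access to this site is blocked", "site has been blocked"]


    def classify(url, timeout=15):
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.exceptions.ConnectionError as exc:
            # A reset connection is consistent with, but does not prove,
            # TCP reset packet injection.
            return "connection error / possible reset: %r" % exc
        except requests.exceptions.Timeout:
            return "timeout"
        body = resp.text.lower()
        if any(marker in body for marker in BLOCK_PAGE_MARKERS):
            return "explicit block page (transparent filtering)"
        return "page returned (HTTP %d)" % resp.status_code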

Other countries might similarly vary in how openly they filter content and how closely such filtering aligns with the country’s stated motivations for censorship. Vietnam, for example, has historically claimed that its information controls aim to limit access to pornography (see https://opennet.net/blog/2012/09/update-threats-freedom-expression-online-vietnam). However, Vietnam extensively blocks critical political and human rights content through DNS tampering. Similarly, the Ethiopian government has previously denied blocking sensitive content, despite our findings that it blocks political blogs and opposition parties’ websites (see https://opennet.net/blog/2012/11/update-information-controls-ethiopia). As these examples show, national-level filtering systems that authorities justify to block specific content (such as pornography) can be extended through “mission creep” to include other sensitive material in unaccountable and nontransparent ways.11
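One common way to check for DNS tampering of this kind is to compare the answers returned by an in-country resolver with those from a control resolver. The sketch below, which assumes the dnspython package, shows the idea; the resolver addresses are placeholders, and differing answers are only a lead, since CDNs and geo-targeted DNS also produce legitimate differences.

    # Sketch: compare DNS answers from an in-country resolver against a control
    # resolver to look for tampering. Resolver addresses are placeholders.
    # Requires dnspython (>= 2.0 for resolver.resolve).
    import dns.resolver


    def resolve_with(nameserver, domain):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            return sorted(rr.address for rr in resolver.resolve(domain, "A"))
        except Exception as exc:
            return ["error: %r" % exc]


    def compare(domain, field_ns, lab_ns="8.8.8.8"):
        field = resolve_with(field_ns, domain)
        lab = resolve_with(lab_ns, domain)
        # A mismatch is a lead for manual review, not proof of tampering.
        return {"domain": domain, "field": field, "lab": lab, "differs": field != lab}


    if __name__ == "__main__":
        print(compare("example.org", field_ns="203.0.113.53"))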

Economic Motivations

Economic factors also help determine what authorities censor and how they apply that censorship. In countries with strict censorship regimes, the ability to offer unfettered access can provide significant competitive advantage or encourage investment in a region. Conversely, targeting particular services for filtering while letting others operate unfiltered can protect domestic economic interests from competition. Economic considerations might also affect the choice of filtering methods.

ONI research in Uzbekistan has documented significant variation in Internet filtering across ISPs.12 Although many ISPs tested consistently filtered a wide range of content, others provided unfiltered access. The technical data alone couldn’t explain this result. Contextual fieldwork determined that some commercial ISPs had close ties with the president’s inner circle, which might have helped them resist pressure to implement filtering. This relationship let the ISPs engage in economic rent-seeking, in which they used their political connections to gain a competitive advantage by offering unfettered access.

Other instances show how economic interests shape how ISPs apply information controls. Until 2008, one United Arab Emirates (UAE) ISP, Du, didn’t filter, whereas Etisalat, the country’s other major ISP, filtered extensively.13 As in Uzbekistan, this variation was motivated by economic interests. Du serves most customers in the UAE’s economic free zones, and was set up to encourage the development of technology and media sectors. The provision of unfettered access was an incentive to attract investment.

Conversely, some online services might be filtered to protect commercial interests. Countries including the UAE and Ethiopia filter access to, and have passed regulations restricting the use of, VoIP services such as Skype to protect the interests of national telecommunications companies, a major source of revenue for the state.

The decision to implement a particular filtering method might also be influenced by cost considerations as much as technical concerns. States can implement some filtering methods, such as IP blocking, on standard network equipment. Other methods, such as TCP reset packet injection, are more technically complex and require systems that are more sophisticated.

Unintended Consequences

In some instances, states might apply filtering in a way that blocks content not intentionally targeted for filtering, or affects jurisdictions outside of where the filtering is implemented. Such cases can be difficult to identify from technical measurement alone.

Upstream filtering. The Internet’s borderless nature complicates research into national-level information controls. Internet filtering, particularly where it isn’t implemented transparently, can have cross-jurisdictional effects that aren’t immediately apparent.

We can see this complexity in upstream filtering, in which filtering that originates in one jurisdiction ends up applied to users in a separate jurisdiction. If ISPs connect to the broader Internet through peers that filter traffic, this filtering could be passed on to users. In some cases, an underdeveloped telecommunications system might limit a country’s wider Internet access to just a few foreign providers, who might pass on their filtering practices. Russia, for example, has long been an important peer to neighboring former Soviet states and has extended filtering practices beyond its borders. The ONI has documented upstream filtering in Kyrgyzstan, Uzbekistan, and Georgia (see https://opennet.net/regions/commonwealth-independent-states-cis).

In a recent example, we found that filtering applied by ISPs in India was restricting content for users of Omani ISP Omantel.14 Through publicly available proxies and in-country, client-based testing, we collected data on blocked URLs in Oman, a country with a long history of Internet filtering. Although our results showed that users attempting to access blocked content received several block pages, one in particular wasn’t consistent with past filtering that ISPs in Oman had employed. Rather, it matched a block page issued by India’s Department of Telecommunications. Filtered websites with this block page included multimedia sharing sites dedicated to Indian culture and entertainment. Furthermore, Omantel (AS8529) has a traffic peering arrangement with India-based ISP Bharti Airtel (AS9498), and trace routes of attempts to access the blocked content from Oman confirmed that the traffic passed through Bharti Airtel. We found that the filtering resulted from a broad Indian court decision that sought to limit the distribution of a recently released film.

Omani users were thus subject to filtering implemented for domestic purposes within India. These users had limited means of accessing content that might not have violated Omani regulations, did not consent to the blocking, and had little recourse for challenging the censorship.
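Detecting where such upstream filtering enters the network is largely a matter of mapping the hops toward an affected site to the networks that own them. The sketch below illustrates the general technique using the system traceroute command and the ipwhois package; the parsing is deliberately simplified, and this is an illustration of the approach, not the analysis pipeline used in the Oman study.

    # Sketch: map hops toward a blocked site to ASNs to see whether traffic
    # transits a foreign upstream provider. Uses the system `traceroute`
    # command and the ipwhois package; parsing is simplified.
    import re
    import subprocess

    from ipwhois import IPWhois


    def traceroute_ips(host):
        out = subprocess.run(["traceroute", "-n", host],
                             capture_output=True, text=True, check=False).stdout
        # Grab the first IPv4 address on each hop line (rough parsing).
        return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, flags=re.M)


    def asn_path(host):
        path = []
        for ip in traceroute_ips(host):
            try:
                info = IPWhois(ip).lookup_rdap(depth=0)
                path.append((ip, info.get("asn"), info.get("asn_description")))
            except Exception:
                path.append((ip, None, "private or unresolvable"))
        return path


    if __name__ == "__main__":
        for hop in asn_path("example.com"):
            print(hop)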

Collateral filtering. ISPs often implement Internet filtering in ways that can unintentionally block content. Ineffectively applied filtering can inadvertently block access to an entire domain even when the censor was targeting only a single URL. IP blocking can restrict access to thousands of websites hosted on a single server when only one was targeted. Commercial filtering lists that miscategorize websites can restrict access to those that do not contain the type of content censors might want to block. We refer to such overblocking as collateral filtering, or the inadvertent blocking of content that is a byproduct of crude or ineffectively applied filtering systems.
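One rough way to gauge the potential scale of this kind of collateral filtering is to group a list of test domains by resolved IP address: domains that share an address with a blocked site would be swept up together by IP blocking. The sketch below is illustrative only.

    # Sketch: estimate potential collateral damage of IP blocking by grouping
    # test domains by resolved address; domains sharing an IP with a blocked
    # site would be blocked along with it.
    import socket
    from collections import defaultdict


    def group_by_ip(domains):
        groups = defaultdict(list)
        for domain in domains:
            try:
                groups[socket.gethostbyname(domain)].append(domain)
            except socket.gaierror:
                groups["unresolved"].append(domain)
        return groups


    if __name__ == "__main__":
        for ip, hosted in group_by_ip(["example.com", "example.org"]).items():
            if len(hosted) > 1:
                print(ip, "is shared by", hosted)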

The idea of collateral filtering implies that some content is blocked because censors target it, whereas other content is filtered as a side effect. However, the distinction between these two categories is rarely self-evident from technical data alone. We must understand what type of content censors are trying to block — a challenging determination that requires knowledge of the domestic political and social context.

Collateral filtering can occur from keyword blocking, in which censors block content containing particular keywords regardless of context. Our research in Syria demonstrated such blocking’s effects, and illustrated how we can redefine testing methods if we understand the censoring regime’s targets. Syrian authorities have acknowledged targeting Israeli websites, letting us focus research on enumerating this filtering’s scope and depth. Past research has also documented the country’s extensive filtering of censorship circumvention tools. Data gathered from Syria has demonstrated that all content tested that contained the keywords “Israel” or “proxy” in the URL was blocked, a crude filtering method that likely resulted in significant collateral filtering.
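A simple consistency check for this kind of hypothesis is to ask whether blocking in the results tracks the presence of the keywords in the URL rather than anything about the page itself. The sketch below assumes a list of result records with url and blocked fields produced by earlier testing; the sample entries are illustrative.

    # Sketch: check whether observed blocking is consistent with crude URL
    # keyword matching (here, the keywords documented for Syria). `results`
    # is assumed to be a list of dicts with "url" and "blocked" fields.
    KEYWORDS = ["israel", "proxy"]


    def keyword_consistency(results, keywords=KEYWORDS):
        hits = [r for r in results if any(k in r["url"].lower() for k in keywords)]
        blocked = [r for r in hits if r["blocked"]]
        # If nearly every keyword-matching URL is blocked regardless of its
        # content, keyword filtering (and collateral blocking of innocuous
        # sites) is a plausible explanation.
        return len(blocked), len(hits)


    if __name__ == "__main__":
        sample = [
            {"url": "http://www.mfa.gov.il/", "blocked": True},           # illustrative
            {"url": "http://proxy-tools.example.com/", "blocked": True},  # illustrative
            {"url": "http://news.example.com/", "blocked": False},        # illustrative
        ]
        print("keyword-matching URLs blocked: %d of %d" % keyword_consistency(sample))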

Similarly, our research in Yemen has indicated that the ISP YemenNet blocks access to all websites with the .il domain suffix, such as Israeli government and defense forces websites. However, several seemingly innocuous sites also ended up blocked, including that of an Italian airline selling flights to Israel, and that of an Israeli yoga studio. This content was filtered using nontransparent methods, in contrast to the transparent methods used to filter other social content.

Methodological Challenges

Using a mixed-methods approach to study information controls can help us pinpoint which technical measurements to use and add valuable context for interpreting the intent of a regime. However, challenges remain. In our work, we have wrestled with perennial difficulties in data collection, analysis, and interpretation that are general challenges for multidisciplinary research on information controls.

Any Internet censorship measurement study will encounter the seemingly simple but actually complicated questions of determining what content to test, which networks to access, and when to target testing.

Determining what Web content to use to test Internet filtering is challenging in terms of both creating and maintaining content lists over time. Keeping lists current, testing relevant content, and avoiding deprecated URLs is a logistical challenge when testing in more than 70 countries over 10 years. To create and maintain these lists, our project relies on a large network of researchers who differ in their focus and expertise. Also, although keeping testing lists responsive to environmental changes increases the relevancy of their content, it can complicate efforts to measure a consistent dataset across time and countries and, consequently, can make fine-grained longitudinal analysis difficult.
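One small practical aid, sketched below, is to flag likely-deprecated URLs from the lab (unfiltered) side before a field test so that stale entries can be reviewed or replaced; this is an illustration, not part of the ONI toolchain.

    # Sketch: flag likely-deprecated URLs from the lab side before field
    # testing, so that stale list entries can be reviewed or replaced.
    import requests


    def stale_urls(urls, timeout=15):
        stale = []
        for url in urls:
            try:
                resp = requests.get(url, timeout=timeout)
                if resp.status_code >= 400:
                    stale.append((url, resp.status_code))
            except requests.RequestException as exc:
                stale.append((url, repr(exc)))
        return stale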

Networks can be accessed in various ways, including remote access (such as public proxies), distributed infrastructures (for example, PlanetLab), and client-based approaches. Each of these methods has benefits and limitations. Public proxies and PlanetLab enable continuous automated measurements, but they are limited in which countries they can reach and might not represent an average connection in a country, possibly introducing bias. Client-based testing can ensure a representative connection, but we might not have access to users in countries of interest or to particular ISPs. In some cases, the potential safety risks to users are substantial; moreover, ethical and legal considerations can restrict testing.

Our testing method relies heavily on users for testing and human analysts for compiling testing lists and reviewing results. These conditions make continuous testing infeasible and require that we identify ad hoc triggers for targeting tests. Clearly, sensitive events are potentially good indicators of when information controls might be enacted. However, as our BBC study showed, predicting which events will trigger controls is never straightforward.

A holistic view of information controls combines technical and contextual data and iterative analysis. However, this analysis is often constrained by data availability. In some cases, technical data clearly showing a blocking event or other control might not be easily paired with contextual data that reveals the intentions and motivations of the authorities implementing it. Policies regarding information controls might be kept secret, and the public justification for controls can run counter to empirical data on their operation. Contextual anecdotes about controls derived from interviews, media reports, or document leaks, on the other hand, can be difficult to verify with technical data due to access restrictions.

The study of information controls is becoming an increasingly challenging but important area as states ramp up cyber-security and related policies. As controls increase in prevalence and include more sophisticated and at times even offensive measures, the need for multidisciplinary research into their practice and impact is vital. Disciplinary divides continue to hinder progress. In the social sciences, incentives for adopting technical methods relevant to information controls are low. Although the study of technology’s social impact is more deeply entrenched in technical fields such as social informatics and human-computer interaction, these fields are less literate in social science theories that can help explain information control dynamics. We have tried to overcome disciplinary divides through large collaborative projects. However, collaborative research is costly, time-consuming, and administratively complex, particularly if researchers in multiple national locations are involved.

Addressing these divides will require a concentrated effort from technical and social science communities. Earlier education in theories and methods from disparate fields could provide students with deeper skill sets and the ability to communicate across disciplines. Researchers from technical and social sciences working on information controls should stand as a community and demonstrate the need for funding opportunities, publication venues, workshops, and conferences that encourage multidisciplinary collaborations and knowledge sharing in the area. Through education and dialogue, the study of information controls can mature and hopefully have greater effects on the Internet’s future direction.

References

  1. Anonymous, “The Collateral Damage of Internet Censorship by DNS Injection,” ACM SIGCOMM Computer Communication Rev., vol. 42, no. 3, 2012, pp. 21–27.
  2. R. Clayton, S. Murdoch, and R. Watson, “Ignoring the Great Firewall of China,” Privacy Enhancing Technologies, Springer, 2006, pp. 20–35; www.cl.cam.ac.uk/~rnc1/ignoring.pdf.
  3. X. Xu, Z. Mao, and J. Halderman, “Internet Censorship in China: Where Does the Filtering Occur?” Passive and Active Measurement, Springer, 2011, pp. 133–142; http://web.eecs.umich.edu/~zmao/Papers/china-censorship-pam11.pdf.
  4. A. Sfakianakis et al., “CensMon: A Web Censorship Monitor,” Proc. 1st Usenix Workshop Free and Open Communication on the Internet (FOCI 11), Usenix Assoc., 2011; http://static.usenix.org/event/foci11/tech/final_files/Sfakianakis.pdf.
  5. J. Verkamp and M. Gupta, “Inferring Mechanics of Web Censorship around the World,” Proc. 2nd Usenix Workshop Free and Open Communication on the Internet (FOCI 12), Usenix Assoc., 2012; www.usenix.org/conference/foci12/inferring-mechanics-web-censorship-around-world.
  6. J. Zittrain and B. Edelman, “Empirical Analysis of Internet Filtering in China,” IEEE Internet Computing, vol. 7, no. 2, 2003, pp. 70–77; http://cyber.law.harvard.edu/filtering/china/.
  7. A. Dainotti et al., “Analysis of Country-Wide Internet Outages Caused by Censorship,” Proc. 2011 ACM SIGCOMM Conf. Internet Measurement (IMC 11), ACM, 2011, pp. 1–18; www.caida.org/publications/papers/2011/outages_censorship/outages_censorship.pdf.
  8. “The Internet and Elections: The 2006 Presidential Election in Belarus,” OpenNet Initiative, 2006; http://opennet.net/sites/opennet.net/files/ONI_Belarus_Country_Study.pdf.
  9. “Casting a Wider Net: Lessons Learned in Delivering BBC Content on the Censored Internet,” Canada Centre for Global Security Studies, 11 Oct. 2011; http://munkschool.utoronto.ca/downloads/casting.pdf.
  10. N. Aase et al., “Whiskey, Weed, and Wukan on the World Wide Web,” Proc. 2nd Usenix Workshop Free and Open Communication on the Internet (FOCI 12), Usenix Assoc., 2012; www.usenix.org/system/files/conference/foci12/foci12-final17.pdf.
  11. N. Villeneuve, “The Filtering Matrix,” First Monday, vol. 11, no. 2, 2006; http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1307/1227.
  12. “Internet Filtering in Uzbekistan in 2006–2007,” OpenNet Initiative, 2007; http://opennet.net/studies/uzbekistan2007.
  13. H. Noman, “Dubai Free Zone No Longer Has Filter-Free Internet Access,” OpenNet Initiative, 18 Apr. 2008; http://opennet.net/blog/2008/04/dubai-free-zone-no-longer-has-filter-free-internet-access.
  14. “Routing Gone Wild: Documenting Upstream Filtering in Oman via India,” Citizen Lab, 12 July 2012; https://citizenlab.org/2012/07/routing-gone-wild.