Category Archives: GCHQ

Stanistreet, Michelle - The government is using terrorism as an excuse to spy on journalists - The Guardian 20160314

The investigatory powers bill – or ‘snoopers’ charter’ – endangers press freedom, and offers no protection for sources or whistleblowers

The investigatory powers bill, which will receive its second reading in parliament on Tuesday, contains a range of surveillance powers available to the security services, police and other public bodies.

The first draft last year raised alarm bells, including amongst the three cross-party parliamentary committees that voiced serious concerns. Yet it took the Home Office just two weeks to cobble together this re-draft that in no way resolves the bill’s serious flaws.

Farcically, concerns about privacy have been addressed by inserting one word into a heading. Part one of the investigatory powers bill was called “general protections” and is now called “general privacy protections”. This is how the government has responded to the parliamentary intelligence committee recommendation that “privacy protections should form the backbone of the draft legislation”.
The NUJ has been campaigning for improved laws and protections since a police report in 2014 revealed the Sun’s political editor’s mobile phone records and call data from the newsdesk had been seized in secret by police. When the British state has a total disregard for the protection of sources and whistleblowers then there are severe consequences for all journalists and press freedom.

Not least will be the impact on journalists’ safety. Reporters who work in dangerous environments – in a war zone or when investigating organised crime – are already targeted. Being seen as agents of the state, or a conduit to information about their sources, will make their work fraught with greater dangers. The independence of journalists, and the very notion of press freedom, is something that is critical to our collective safety and credibility.

In its essence, this bill is exploiting public concerns about terrorism and national security as an excuse to spy on journalists.

One section of the bill allows for “equipment interference”, enabling the authorities to access computers and electronic equipment. This interference includes hacking computers to gain access to passwords, documents, emails, diaries, contacts, pictures, chat logs and location records. Microphones or webcams could be turned on and items stored could be altered or deleted. Under the bill, journalists have no right to challenge this type of surveillance – in fact it is highly unlikely they would ever find out it has happened.

If journalists don’t know their work and their sources are being compromised then it becomes practically impossible to uphold our ethical principle to protect sources and whistleblowers.

The NUJ has a long and proud record of defending members from having to identify their sources, including backing a legal case in 1996 that established the journalists’ “right to silence” in European law.

That’s only possible if there is a transparent mechanism to challenge such demands for information on sources. If the state can get its hands on that information without a journalist ever knowing, how can we support the countless individuals who are brave enough to blow the whistle on information they believe the public needs to know about?

In 2015 a report by the Interception of Communications Commissioner’s Office revealed that 19 police forces had made 608 applications for communications data to find journalistic sources over a three-year period. Applications made and considered in secret.

In response the previous culture secretary, Sajid Javid, said “journalism is not terrorism” and the government promised to introduce safeguards. Despite a series of interim measures being put in place, none of the key ones are in this new bill. The cross-party parliamentary joint committee said that the “protection for journalistic privilege should be fully addressed by way of substantive provisions on the face of the bill”. Yet this government has turned its face and ignored the recommendation.

The bill also introduces legislative anomalies – there is no adherence to standards already established in legislation such as the Police and Criminal Evidence Act 1984 and Terrorism Act 2000. The bill contains no requirement to notify a journalist, media organisation or their legal representatives when the authorities intend to put journalists under surveillance or hack into their electronic equipment. There is no right to challenge or appeal and the entire process takes place in secret. The oversight measures do not involve any media experts who can advocate on behalf of journalists and press freedom. This is an outrageous abuse of press freedom in the UK.

The NUJ is not alone in having grave concerns about this latest version of the bill – we are joined by others in the media industry, trade unions, legal experts, and privacy and human rights campaigners. These extremely intrusive and unnecessary surveillance powers trample over the very principles of journalism and will be a death knell for whistleblowers of the future. There are a growing number of politicians waking up to the dangers in this bill and we hope others will think hard before they cast their vote on Tuesday.

The snooper’s charter shows the government’s total contempt for privacy - The Guardian 20160301

This cyber-surveillance bill will have a monumental impact on the human rights of the British people

The government proposed a fundamental shift in the relationship between citizens, the internet and the state in its 300-page draft investigatory powers bill. Under the law, now christened the snooper’s charter, almost every digital communication and movement would be logged by telecommunications companies, intercepted by intelligence agencies and subject to scrutiny. But when the government introduced the bill into parliament on Tuesday, it demonstrated not only its disregard for privacy but its contempt for that other key pillar of democracy: proper parliamentary scrutiny.

The bill contains some of the most intrusive surveillance powers imaginable, including some that are not currently found in any other country in the world. Cyber security is to be sacrificed at the altar of “national security”: government hacking would become legal, bulk datasets collected and mined, and encrypted services subject to state restrictions.

It will come as little surprise to many Britons that the government has contempt for privacy. In the last decade we have seen the roll-out of mandatory data retention and the secret expansion of digital surveillance, revealed only because of the actions of an American whistleblower. Prior to the publication of the draft bill, in November 2015, it had been 15 years since parliament had modernised its surveillance powers, and an overhaul of police and intelligence authorities with regards to the internet was sorely needed. Yet suddenly, with the publication of the draft text, the government decided that time was of the essence. When a joint committee was appointed to scrutinise the bill, it had 52 working days to consider the oral evidence of 59 people and 1,500 pages of written submissions. Two other committees also conducted rapid reviews of the dense legislation during the three short months between the publication of the Draft Bill and its introduction into Parliament.

All three found the draft bill, at best, problematic; the Intelligence and Security Committee (ISC) issued a scathing critique, which cut particularly deep given the committee’s historically close relationship with the security services. The ISC found the lack of emphasis on privacy protections “surprising” and cautioned the government against using terrorist attacks as an excuse to override civil liberties. It recommended that the government do away with invasive powers such as bulk hacking and introduce a new section of the bill specifically dedicated to protecting privacy.

Other voices, vital to the democratic debate, also criticised the Draft Bill: human rights organisations and civil society argued that by legitimising “bulk interception” powers the bill would open the door for indiscriminate and disproportionate surveillance; key industry players such as Facebook, Google and Yahoo! spoke out against provisions that could be used to weaken security and undermine encryption, and cautioned that Britain’s attempt to exercise extraterritorial jurisdiction on US companies could legitimise the “lawless and heavy-handed practice[s]” of other less democratic nations. The Law Society and Bar Council raised the prospect that the draft bill would undermine the confidentiality of legal communications, and the National Union of Journalists raised the chilling effect on media and the risk posed to the protection of journalists’ sources.

In a final attempt to talk sense into the government, on Tuesday morning more than 100 MPs, experts and organisations published a letter calling on the government to take full account of the extensive criticism offered by the committees, and to refrain from rushing the bill through parliament. Hours later the home secretary introduced the investigatory powers bill into the Commons. A second reading is expected on 14 March and a final vote by the end of April.

If the rapid process of scrutiny isn’t a sufficiently clear demonstration of the government’s derision, its response to the committees’ recommendations brings the Home Office’s contempt for the democratic process into sharp relief. A scant handful of minor critiques have been reflected. There has been no response to the widespread criticism of the restricted powers granted to judicial commissioners. In response to demands by the joint committee and the ISC for demonstrable, evidence-based justifications for vastly intrusive bulk powers, the Home Office has provided general case studies with few details, unaccompanied by any sense of the scale and effect of such measures. The ISC’s advice that the government withdraw authorities related to bulk equipment interference powers and bulk dataset acquisition has been ignored.

With deeply regrettable flippancy, the Home Office has responded to the ISC’s recommendation that the draft legislation contain “an entirely new part dedicated to overarching privacy protections [to ensure that] privacy is an integral part of the legislation rather than an add-on” by adding one word to the bill – the word “privacy” to the title of part one, previously “general protections”.

Should the bill be brought into law, its impact on the human rights of the British people would be monumental. The government has shown an audacious disregard for these consequences. That privacy will be eroded as a result of a process that flouts democratic tenets serves only to add insult to injury. It is not only democracy that the government has treated with contempt but the British public.

GCHQ Director: One Warrant Can Be Used to Hack a Whole Intelligence Agency - Motherboard 20160209

The UK’s intelligence agencies may soon get their hacking powers on a stronger legal footing. But a new report questions why certain warrants designed to hack multiple computers at once are even necessary, when their more targeted equivalents are arguably just as broad.

On Tuesday, the UK's Intelligence and Security Committee of Parliament published its report on the draft Investigatory Powers Bill, a proposed piece of surveillance legislation. The Committee was told that so-called “targeted” hacking warrants were so broad that they could be used to gather information on an entire foreign intelligence agency, raising concerns about what “bulk” warrants are designed for.

If passed into law, the bill will force internet service providers to store the browsing history of their customers for 12 months. It will also update how some of the intelligence agencies' use of “equipment interference” (EI)—the UK government's term for hacking—is handled, and introduce the idea of “targeted” and “bulk” EI warrants.


"It is possible that bulk activity might capture data and information about UK persons"

At the moment, equipment interference by the intelligence agencies is governed under the Intelligence Services Act 1994, but the draft bill is the first time that hacking warrants have been separated into “targeted” and “bulk” variants.

Only security and intelligence agencies would be able to apply for a bulk EI warrant, not law enforcement, and they could only be used to intentionally target systems abroad, according to a government-issued fact sheet.

“Bulk EI facilitates target discovery, it helps to join up the dots between fragments of information that may be of intelligence interest,” the fact sheet continues, keeping its description of the power incredibly vague. “It is possible that bulk activity might capture data and information about UK persons, for instance if they are associated with a subject of interest.”

But the Intelligence and Security Committee—a parliamentary body tasked with examining the policy, administration and finances of the UK's intelligence agencies—is concerned that bulk EI warrants are largely superfluous, because targeted warrants are already exceptionally wide in scope.

“Despite the name, a Targeted EI warrant is not limited to an individual piece of equipment, but can relate to all equipment where there is a common link between multiple people, locations or organisations,” the report from the Committee reads.

Robert Hannigan, the director of GCHQ, told the Committee that, hypothetically, a targeted EI warrant could encompass an entire hostile foreign intelligence service.

“It is therefore unclear what a 'bulk' EI warrant is intended to cover, and how it differs from a 'targeted' EI warrant,” the report continues.

Indeed, Hannigan conceded that “the dividing line between a large-scale targeted EI and bulk is not an exact one.” This evidence was provided in an oral session to the Committee on November 26, 2015, but the transcript is not public.

The Committee writes that the intelligence agencies appeared to suggest that the provision for a bulk EI warrant may be desired for “future-proofing,” but no specific examples of what such a warrant might cover were provided by the agencies, despite the very broad and intrusive powers it would grant.

“The Committee is therefore not convinced as to the requirement for [bulk warrants],” the report reads.

A technical reading of the “HIMR Data Mining Research Problem Book” - Conspicuous Chatter 20160203

Boing Boing just released a classified GCHQ document that was meant to act as the September 2011 guide to open research problems in Data Mining. The intended audience, the Heilbronn Institute for Mathematical Research (HIMR), is part of the University of Bristol and is composed of mathematicians who spend half their time working on classified problems with GCHQ.

First off, a quick perusal of the actual publication record of the HIMR makes for sad reading for GCHQ: it seems that very little research on data mining was actually performed between 2011 and 2014, despite this pitch. I guess this is what you get when you try to make pure mathematicians solve core computer science problems.

However, the document presents one of the clearest explanations of GCHQ’s operations and their scale at the time; as well as a very interesting list of open problems, along with salient examples.

Overall, reading this document very much resembles reading the needs of any other organization with big data, struggling to process it to get any value. The constraints under which they operate (see below), and in particular the limitation to O(n log n) total storage and O(1) processing per edge event, are serious ones; but of course this applies only to un-selected traffic. So the 5,000 or so Tor nodes probably would have a little more space and processing allocated to them, and so would known botnets, I presume.
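To make that constraint concrete, here is a toy sketch (my own construction, not anything from the document) of what processing an edge stream under roughly these limits looks like: per-vertex counters plus a fixed-size reservoir sample of edges, with O(1) expected work per edge event.

```python
import random

class SemiStreamingSketch:
    """One-pass graph sketch: O(n) per-vertex state plus a fixed-size
    reservoir of sampled edges; O(1) expected work per edge event."""

    def __init__(self, reservoir_size=1000):
        self.degree = {}          # per-vertex counter (O(n) state)
        self.reservoir = []       # uniform sample of edges seen so far
        self.k = reservoir_size
        self.edges_seen = 0

    def observe(self, u, v):
        # O(1) per edge: bump two counters, maybe swap into the reservoir.
        self.degree[u] = self.degree.get(u, 0) + 1
        self.degree[v] = self.degree.get(v, 0) + 1
        self.edges_seen += 1
        if len(self.reservoir) < self.k:
            self.reservoir.append((u, v))
        else:
            j = random.randrange(self.edges_seen)
            if j < self.k:
                self.reservoir[j] = (u, v)

    def top_talkers(self, m=10):
        # Heavy vertices still surface even though most edges are discarded.
        return sorted(self.degree, key=self.degree.get, reverse=True)[:m]
```

Under this regime a busy selector still surfaces in the counters even though almost all individual edges are thrown away, which is the point of the semi-streaming paradigm the document describes.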

Secondly, there is clear evidence that timing information is both recognized as being key to correlating events and streams, and that it is being recorded and stored at increasing granularity. There is no smoking gun as of 2011 to say they casually de-anonymize Tor circuits, but the writing is on the wall for the onion routing system: GCHQ in 2011 had all the ingredients needed to trace Tor circuits. It would take extraordinary incompetence not to have refined these traffic analysis techniques in the past five years. The Tor project would do well not to underestimate GCHQ’s capabilities on this point.
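To illustrate why even coarse timing is enough to correlate streams, here is a toy flow-matching score (entirely my own illustration, not a technique taken from the document): bin two timestamp streams into windows and compare the per-window counts.

```python
from collections import Counter
from math import sqrt

def correlation_score(times_a, times_b, window=1.0):
    """Toy traffic-correlation score: bin two packet-timestamp streams
    into coarse time windows and return the cosine similarity of the
    per-window counts. Two taps on the same flow score near 1.0 even
    when a small network latency shifts every timestamp."""
    bins_a = Counter(int(t // window) for t in times_a)
    bins_b = Counter(int(t // window) for t in times_b)
    dot = sum(bins_a[k] * bins_b.get(k, 0) for k in bins_a)
    norm_a = sqrt(sum(v * v for v in bins_a.values()))
    norm_b = sqrt(sum(v * v for v in bins_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

With one-second windows, an exit-side stream delayed by tens of milliseconds scores close to 1.0 against its entry-side twin, while an unrelated stream scores near zero. Real correlation attacks are far more refined, but the point about coarse granularity being sufficient stands.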

Thirdly, one should wonder why we have waited three years for such clear documents to finally emerge from the Snowden revelations. If these had been published first, instead of the obscure, misleading and very uninformative slides, it would have saved a lot of time, and might even have engaged the public a bit more than bad PowerPoint did.

Some interesting points in the document in order:

  • It turns out that GCHQ has a written innovation strategy, reference [I75]. If someone has a copy it would be great to see it, and also understand where the ACE program fits in it.
  • GCHQ at the time relied heavily on two families of technologies: Hadoop, for bulk processing of raw collected data (6 months of retention for meta-data, apparently), and IBM Streams (DISTILLERY) for stream, real-time, processing. A lot of the discussion, and of the open problems, relates to the fact that bulk collection can only provide a limited window of visibility, and intelligence-related selection and processing has to happen within this window. Hence the interest in processing streaming data.
  • Section 2 is probably the clearest explanation of how modern SIGINT works. I would like to congratulate the (anonymous) author, and will be setting it as the key reading for my PETS class. It summarizes well what countless crappy articles on the Snowden leaks have struggled to piece together. I wish journalists had just released this first, and skipped the ugly slides.
  • The intro provides a hint at the scale of cable interception operations as of 2011. It seems that 200 “bearers” of 10 Gigabit / sec were being collected at any one time; it is made clear that many more sources were available to switch to depending on need. That is about 2 Terabit / sec, across 3 sites (Cheltenham, Bude, and LECKWITH).
  • Section 2.1.2 explains that a lot (the majority) of data is discarded very early on, by special hardware performing simple matching on internet packets. I presume this is to filter out bulk downloads (from CDNs), known sources of spam, youtube videos, etc.
  • The same section (2.1.2) also explains that all meta-data is pulled from the bearer, and provides an interpretation of what meta-data is.
  • Finally (2.1.2) there is a hint at indexing databases (Query focused databases / QFD) that are specialized to store meta-data, such as IP traffic flow data, for quick retrieval based on selectors (like IP addresses).
  • Section 2.1.3 explains the problem of “target development”, namely when no good known selectors exist for a target, and it is the job of the analyst to find them through either contact chaining or modus-operandi (MO) matching. It is a very instructive section, and it is the technical justification underpinning a lot of the mass surveillance going on.
  • The cute example chosen to illustrate it (Page 12, end of 2.1.3): Apparently GCHQ developed many of those techniques to spy on foreign delegations during the 2009 G20 meeting. Welcome to London!
  • Section 2.2.2 provides a glimpse at the cybersecurity doctrine and world-view at GCHQ, already in 2011. In particular, there is a vision that CESG will act as a network security service for the nation, blocking attacks at the “firewalls”, and doing attribution (as if the attacks will be coming “from outside”). GCHQ would then counter-attack the hostile sources, or simply use the material they intercepted from others (4th party collection, the euphemism goes).
  • Section 2.2.3 provides a glimpse of the difficulties of running implants on compromised machines: something that is openly admitted. Apparently ex-filtrating traffic and establishing command-and-control with implants is susceptible to passive SIGINT, both a problem and an opportunity.
  • Section 3 and beyond describes research challenges that are very similar to those of any other large organization or research group: the difficulty of creating labelled data sets for training machine learning models; the challenges of working on partial or streaming data; the need for succinct representations of data structures; and the problem of inferring “information flow”, namely chains of communications that are related to each other.
  • It seems the technique of choice when it comes to machine learning is Random Decision Forests. Good choice; I also prefer it to others. They have an in-house innovation: they weight the outputs of each decision tree. (Something that is sometimes called gradual learning in the open literature, I believe.)
  • Steganography detection seems to be a highlight: however there is no explanation if steganography is a real problem they encountered in the field, or if it was an easy dataset to generate synthetically.
  • Section 4 deals with research problems of “Information Flow in Graphs”. This is the problem of associating multiple related connections together, including across types of channels, detecting botnet command-and-control nodes, and also tracing Tor connections. Tracing Tor nodes is in fact a stated problem, with a stated solution.
  • Highlights include the simple “remit” algorithm developed by Detica (page 26, Sect. 4.2.1); PRIME TIME, which looks at chains of length 2; and finally HIDDEN OTTER, which specifically targets Tor and botnets. (Apparently an internal group codenamed ICTR-NE developed it.)
  • Section 4.2.2 looks at communications association through temporal correlation: one more piece of evidence that timing analysis, at a coarse scale, is on the cards for mining associations. What is cute is the example used is how to detect all GCHQ employees: they are the ones with phones not active between 9am and 5pm when they are at work.
  • Besides these, they are interested in change / anomaly detection (4.4.1), the spread of information (such as extremist material), etc. Not that dissimilar from, say, an analysis Facebook would perform.
  • Section 5 poses problems relating to algorithms on streaming graph data. It provides a definition of the tolerable costs of analysis algorithms (the semi-streaming paradigm): for a graph of n vertices (nodes), they can store a bit of information per vertex, but not all edges, or even process all edges. So they have O(n log n) storage in total and can only do O(1) processing per event / edge. That could be distilled into a set of security assumptions.
  • Sections 5.2.2 / 5.2.3 contain an interesting discussion about relaxations of cliques, and also point out that very popular nodes (the pizza delivery line) are probably noise and should be discarded.
  • As of 2011 (sect 5.2.4) it was an open problem how many hops of contact chaining are required. This is set as an open problem, but the book states that analysts usually use 2 hops from targets. Note that the only other plausible numbers are 3, 4, and 5, since after 6 you have probably included nearly everyone in the world. So it is not that exciting a problem, and one cannot blame the pure mathematicians for not tackling it.
  • Section 5.5.1 asks the question on whether there is an approximation of the correlation matrix, to avoid storing and processing an n x n matrix. It generally seems that matching identifiers with identifiers is big business.
  • Section 6 poses problems relating to the processing, data mining, and analysis of “expiring” graphs, namely graphs with edges that disappear after a deadline. This is again related to the constraint that storage for bulk un-selected data is limited.
  • In section 6.3.2 the semi-streaming model, where only O(n log n) storage in total is allowed and O(1) processing per incoming event / edge, is reiterated.
  • Appendix A deals with models of academic engagement. I have to say it is very enlightened: it recognizes the value of openly publishing the research, after some sanitization. Nice.
  • Appendix B and C discuss the technical details and size of the IBM Streams and Hadoop clusters. Section D presents the production clusters (652 nodes, total 5216 cores, and 32 GB memory for each node).
  • Section E discusses the legalities of using intercepted data for research, and, bless them, they do try to provide some logging and a human rights justification (a bulk authorization for research purposes).
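The contact chaining described in sections 2.1.3 and 5.2.4 can be made concrete with a minimal sketch (the graph and selector names here are invented for illustration): a 2-hop expansion from a seed selector is just a breadth-first search truncated at a fixed depth.

```python
from collections import deque

def contact_chain(graph, seed, max_hops=2):
    """Bounded BFS: return every identifier reachable from `seed`
    within `max_hops` communication links, with its hop distance."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue  # do not expand beyond the hop limit
        for neighbour in graph.get(node, ()):
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist
```

The 2-hop default the problem book mentions keeps the result set small; as noted above, by 6 hops the chain would sweep in almost everyone.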

Heilbronn Institute for Mathematical Research - Data mining research problem book

Excerpt from Snowden document, published by Boing Boing - 20160202

Ways of working

This section gives a few thoughts on ways of working. The aim is to build on the positive culture already established in the Institute’s crypt work. HIMR researchers are given considerable freedom to work in whatever way suits them best, but we hope these ideas will provide a good starting-point.

A.1 Five-eyes collaboration

As on the crypt side, we hope that UKUSA collaboration will be a foundation-stone of the data mining effort at HIMR. This problem book is full of links to related research being carried out by our five-eyes partners, and researchers are very strongly urged to pursue collaborative angles wherever possible—above all, to get to know the people working on the same problems and build direct relationships. Researchers are encouraged to attend and present at community-wide conferences (principally SANAR and ACE), as funding and opportunity allows. We hope that informal short visits to and from HIMR will also be a normal part of data mining life. HIMR has a tradition of holding short workshops to focus intensively on particular topics, where possible with participation from experts across the five eyes community. Frequently these are held during university vacations, to allow our cleared academic consultants to take part. Each summer, HIMR hosts a SWAMP: a two-month long extended workshop on (traditionally) two topics of high importance, similar to the SCAMPs organized by IDA. We hope that HIMR researchers will feel inspired to suggest possible data mining sub-topics for future SWAMPs.

A.2 Knowledge sharing

Inevitably, there is a formal side to reporting results: technical papers, conference talks, code handed over to corporate processing, and so on. But informal dissemination of ideas, results, progress, set-backs and mistakes is also extremely valuable. This is especially true at HIMR, for several reasons.

  • There is a high turnover of people, and it is important that a researcher’s ideas (even the half-baked ones) don’t leave with him or her.
  • Academic consultants form an important part of the research effort: they may only have access to classified spaces a few times a year for a few days at a time, so being able to catch up quickly with what’s happened since their last visit is crucial to help them make the most of their time working with us.
  • HIMR is physically detached from the rest of GCHQ, and it’s important to have as many channels of communication as possible—preferably bidirectional!—so that this detachment doesn’t become isolation. The same goes even more so for second party partners as well.

In HIMR’s METEOR SHOWER work, knowledge sharing is now primarily accomplished through two compartmented wikis hosted by CCR Princeton. For data mining, there should be more flexibility, since almost none of the methods and results produced will be ECI, and in fact they will usually be STRAP1 or lower. Paradoxically, however, the fact that work can be more widely shared can mean that there is less of a feeling of a community of interest with whom one particularly aims to share it: witness the fact that there is no shining model of data mining knowledge sharing elsewhere in the community for HIMR to copy!

We suggest that as far as possible, data miners at HIMR build up a set of pages on GCWiki (which can then be read and edited by all five-eyes partners) in a similar way to how crypt research is recorded on the CCR wikis. They can then encourage contacts at GCHQ and elsewhere to watch, edit and comment on relevant pages. In particular, the practice of holding regular bull sessions [10] and taking live wiki notes during them is highly recommended. If any researchers feel so inclined, GCBlog and the other collaborative tools on GCWeb are available, and quite suitable for all STRAP1 work. For informal communications with people from MCR and ICTR, there is a chat-room called himr_dm: anyone involved in the HIMR data mining effort can keep this open in the background day by day. There is also a distillery room that is sadly under-used: in principle, it discusses SPL and the corporate DISTILLERY installations. For any STRAP2 work that comes along, there are currently no good collaborative options: creating an email distribution list would be one possibility.

A.3 Academic engagement

The first test for HIMR’s classified work must be its applicability and usefulness for SIGINT, but given that constraint, GCHQ is keen to encourage HIMR researchers to build relationships and collaborate with academic data miners, and publish their results in the open literature. Of course, security and policy will impose some red lines on what exactly is possible, but the basic principle is that when it comes to data mining, SIGINT data is sensitive, but generally applicable techniques used to analyse that data often are not. Just about everyone nowadays, whether they are in academia, industry or government, has to deal with big data, and by and large they all want to do the same things to it: count it, classify it and cluster it. If researchers develop a new technique that can be published in an open journal once references to SIGINT are excised, and after doing a small amount of extra work to collect results from applying it to an open source dataset too, then this should be a win-win situation: the researcher adds to his or her publication tally, and HIMR builds a reputation for data mining excellence.

Of course, there may be occasions when publication is not appropriate, for example where a problem comes from a very specific SIGINT situation with no plausible unclassified analogy. Day-to-day contact with the Deputy Director at HIMR should flag up cases like this early on. There are also cases where we feel we have an algorithmic advantage over the outside that is worth trying to maintain, and this can be further complicated if equity from other partners is involved, or if a technique brings in ideas from areas like crypt where strict secrecy is the norm. The Deputy Director should be consulted before discussing anything that might be classified in a non-secure setting: he or she can further refer the question to Ops Policy if necessary.

[10] Informal meetings at blackboards where people briefly describe work they have been doing and problems they have encountered, with accompanying discussion from others in the room. The rules: people who wish to speak bid the number of minutes they need (including time for questions). Talks are ordered from low to high bid, with ties broken arbitrarily. You can ask questions at any time. You can leave at any time. If you manage to take the chalk from the speaker, you can give the talk.

Doctorow, Cory - Snowden intelligence docs reveal UK spooks' malware checklist - Boing Boing 20160202

Boing Boing is proud to publish two original documents disclosed by Edward Snowden, in connection with "Sherlock Holmes and the Adventure of the Extraordinary Rendition," a short story written for Laura Poitras's Astro Noise exhibition, which runs at NYC's Whitney Museum of American Art from Feb 5 to May 1, 2016.

“I’d tell you, but I’d have to kill you.” This is what I shout at the TV (or the YouTube window) whenever I see a surveillance boss explain why none of his methods, or his mission, can be subjected to scrutiny. I write about surveillance, counter-surveillance, and civil liberties, and have spent a fair bit of time in company with both the grunts and the generals of the surveillance industry, and I can always tell when one of these moments is coming up: the flinty-eyed look of someone about to play Jason Bourne.

The stories we tell ourselves are the secret pivots on which our lives turn. So when Laura Poitras approached me to write a piece for the Astro Noise book -- to accompany her show at the Whitney -- and offered me access to the Snowden archive for the purpose, I jumped at the opportunity.

Fortuitously, the Astro Noise offer coincided perfectly with another offer, from Laurie King and Leslie Klinger. Laurie is a bestselling Holmes writer; Les is the lawyer who won the lawsuit that put Sherlock Holmes in the public domain, firmly and unequivocally. Since their legal victory, they've been putting together unauthorized Sherlock anthologies, and did I want to write one for "Echoes of Holmes," the next one in line?

The two projects coincided perfectly. Holmes, after all, is the master of HUMINT (human intelligence): the business of following people around, getting information from snitches, dressing up in putty noses and fake beards... Meanwhile, his smarter brother Mycroft is a corpulent, sedentary presence in the stories, the master of SIGINT (signals intelligence), a node through which all the intelligence of the nation flows, waiting to be pieced together by Mycroft and his enormous intellect. The Mycroft-Sherlock dynamic perfectly embodies the fraternal rivalry between SIGINT and HUMINT: Sherlock chases all around town dressed like an old beggar woman or in some similar ruse, catches his man and hands him over to Scotland Yard, and then reports in to Mycroft, who interrupts him before he can get a word out, arching an eyebrow and saying, "I expect you found that it was the Bohemian stable-hand all along, working for those American Freemasons who were after the Sultan's pearls, was it not?"

In 2014, I watched Jennifer Gibson from the eminent prisoners’ rights group Reprieve talking about her group's project to conduct a census of those killed by US drone strikes in Yemen and Pakistan. The CIA conducts these strikes, using SIGINT to identify mobile phones belonging to likely targets and dispatch killer drones to annihilate anything in their vicinity. As former NSA and CIA director Michael Hayden once confessed: "We kill people based on metadata."

But the CIA does not specialize in SIGINT (that's the NSA's job). For most of its existence, the CIA was known as a HUMINT agency, the masters of disguise and infiltration.

That was the old CIA. The new CIA is just another SIGINT agency. Signals intelligence isn’t just an intelligence methodology; it’s a great business. SIGINT means huge procurements -- servers, administrators, electricity, data-centers, cooling -- while HUMINT involves sending a lot of your friends into harm's way, potentially never to return.

We are indeed in the “golden age of SIGINT”. Despite security services' claims that terrorists are "going dark" with unbreakable encryption, the spooks have done much to wiretap the whole Internet.

The UK spy agency GCHQ really tipped their hand when they called their flagship surveillance program "Mastering the Internet." Not "Mastering Cybercrime," not "Mastering Our Enemies." Mastering the *Internet* -- the very same Internet that everyone uses, from the UK's allies in the Five Eyes nations to the UK Parliament to Britons themselves. Similarly, a cursory glance at the logo for the NSA’s Special Source Operations -- the fiber-tapping specialists at the NSA -- tells the whole story.

These mass surveillance programs would likely not have withstood public scrutiny. If the NSA’s decision to launch SSO had been attended by a nightly news broadcast featuring that logo, it would have been laughed out of the room. The program depended on the NSA telling its story to itself, and not to the rest of us. The dotcom boom would have been a very different affair if the major legislative debate of the day had been over whether to allow the surveillance agencies of Western governments to monitor all the fiber cables, and harvest every click and keystroke they can legally lay claim to, parcel it into arbitrary categories like “metadata” and “content” to decide what to retain indefinitely, and to run unaccountable algorithms on that data to ascribe secret guilt.

As a result, the entire surveillance project has been undertaken in secrecy, within the bubble of people who already think that surveillance is the answer to virtually any question. The surveillance industry is a mushroom, grown in dark places, and it has sent out spores into every corner of the Internet, which have sprouted their own surveillance regimes. While this was happening, something important was happening to the Internet: as William Gibson wrote in 2007's "Spook Country," "cyberspace is everting" -- turning inside out. Computers aren’t just the things in our bags or in the trunks of our cars. Today, our cars are computers. This is why Volkswagen was able to design a car that sensed when it was undergoing regulatory inspection and changed its behavior to sneak through tests. Our implanted defibrillators are computers, which is why Dick Cheney had the wireless interface turned off on his defibrillator prior to its implantation. Everything is a networked computer.

Those networked devices are an attack surface that is available to the NSA and GCHQ's adversaries -- primarily other governments, as well as non-government actors with political ambitions -- and to garden variety criminals. Blackmailers, voyeurs, identity thieves and antisocial trolls routinely seize control over innocents' computers and attack them in every conceivable way. Like the CIA and its drones, they often don't know who their victims are: they find an exploit, write a script to find as many potential victims as possible, and harvest them.

For those who are high-value targets, this lurking insecurity is even more of a risk -- witness the recent takeover of the personal email accounts of US Director of National Intelligence James Clapper by a group of self-described teenagers who previously took over CIA Director John Brennan's email account.

This is the moment when the security services could shine. We need cyber defense and we need it badly. But for the security services to shine, they'd have to spend all their time patching up the leaky boat of networked security, while their major project for a decade and more has been to discover weaknesses in the network and its end-points and expand them, adding vulnerabilities that they can weaponize against their adversaries -- leaving these vulnerabilities wide open for their adversaries to use in attacking us.

The NSA and GCHQ have weaponized flaws in router operating systems, rather than telling the vendors about these flaws, leaving the world’s electronic infrastructure vulnerable to attack by the NSA and GCHQ’s adversaries. Our spies hack core routers and their adversaries' infrastructure, but they have made themselves reliant upon the continuing fragility and insecurity of the architectures common to enemy and ally alike, when they could have been making us all more secure by figuring out how to harden them.

The mission of making it as hard as possible for the enemy to attack us is in irreconcilable tension with the mission of making it as easy as possible for our security services to attack their adversaries.

There isn't a Bad Guy Internet and a Good Guy Internet. There's no Bad Guy Operating System and Good Guy Operating System. When GCHQ discovers something breakable in a computer system that Iranians depend upon, they've also discovered something amiss that Britons rely upon. GCHQ can't keep that gap in Iran's armor intact without leaving an equally large gap open in our own armor.

For my Sherlock story, I wanted to explore what it means to have a security methodology that was all attack, and precious little defense, particularly one that proceeded in secret, without any accountability or even argument from people who thought you were doing it all wrong.


The Documents

Though I reviewed dozens of unpublished documents from the Snowden archive in writing my story, I relied upon three documents, two of which we are releasing today.

First, there's the crux of my Sherlock story, drawn from a March 2010 GCHQ document titled "What's the worst that could happen?" marked "TOP SECRET STRAP 1." This is a kind of checklist for spies who are seeking permission to infect their adversaries' computers or networks with malicious software.

It's a surprising document in many regards. The first thing that caught my eye about it is the quality of the prose. Most of the GCHQ documents I've reviewed read like they were written by management consultants, dry and anodyne in a way that makes even the famously tortured prose of the military seem juicy by comparison. The story the authors of those documents are telling themselves is called something like, “Serious grownups, doing serious work, seriously.”

"What's the worst..." reads like the transcript of a lecture by a fascinating and seasoned mentor, someone who's seen all the pitfalls and wants to help you, their protege, navigate this tricky piece of the intel business without shooting yourself in the foot.

It even tells a kind of story: we have partners who help us with our malware implantation. Are they going to help us with that business in the future if their names get splashed all over the papers? Remember, there are clever people like you working for foreign governments -- they're going to try and catch us out! Imagine what might happen if one of our good friends got blamed for what we did -- or blamed us for it! Let's not forget the exploits themselves: our brilliant researchers quietly beaver away, finding the defects that the best and the brightest programmers at, say, Apple and Microsoft have left behind in their code: if you get caught, the companies will patch the vulnerabilities and we will lose the use of them forever.

On it goes in this vein, for three pages, until the very last point:

“Who will have direct access to the data resulting from the operation and do we have any control over this? Could anyone take action on it without our agreement, eg could we be enabling the US to conduct a detention op which we would not consider permissible?”

That's where the whole thing comes to something of a screeching halt. We're not talking about Tom Clancy net-wars fantasies anymore -- now we're into the realm of something that must haunt every man and woman of good will and integrity who works in the spy agencies: the possibility that a colleague or ally, operating without oversight or consequence, might descend into barbarism based on something you did.

Reading this, I thought of the Canadian officials who incorrectly told US authorities that Maher Arar, a Canadian citizen of Syrian origin, was connected to Al Qaeda.

Arar was detained by the United States Immigration and Naturalization Service (INS) during a stopover in New York on his way home from a family vacation in Tunis. The Americans, acting on incomplete intelligence from the Royal Canadian Mounted Police (RCMP), deported Arar to Syria, a country he had not visited since his move to Canada, and which does not permit the renunciation of citizenship.

Arar claims he was tortured during his imprisonment, which lasted almost a year, and bombarded with questions from his torturers that seemed to originate with the US security services. Finally, the Syrian government decided that Arar was innocent of any terrorist connections and let him go home to Canada. The US authorities refused to participate in the hearings on the Arar affair, and the DHS has kept his family on the no-fly list.


Why did Syrian officials let him go? "Why shouldn't we leave him to go? We thought that would be a gesture of good will towards Canada, which is a friendly nation. For Syria, second, we could not substantiate any of the allegations against him." He added that the Syrian government now considers Arar completely innocent.

Is this what the unnamed author of this good-natured GCHQ document meant by "a detention op which we would not consider permissible?" The Canadian intelligence services apparently told their US counterparts early on that they'd been mistaken about Arar, but when a service operates with impunity, in secret, it gets to steamroller on, without letting facts get in the way, refusing to acknowledge its errors.

The security services are a system with a powerful accelerator and inadequate brakes. They’ve rebranded “terrorism” as an existential risk to civilization (rather than a lurid type of crime). The War on Terror is a lock that opens all doors. As innumerable DEA agents have discovered, the hint that the drug-runner you’re chasing may be funding terror is a talisman that clears away red tape, checks and balances, and oversight.

The story of terrorism is that it must be stopped at all costs, that there are no limits when it comes to the capture and punishment of terrorists. The story of people under suspicion of terrorism, therefore, is the story of people to whom no mercy is due, and of whom all cunning must be assumed.

Within the security apparatus, identification as a potential terrorist is a life sentence, a “FAIR GAME” sign taped to the back of your shirt, until you successfully negotiate a Kafkaesque thicket of secretive procedures and kangaroo courts. What story must the author of this document have been telling themselves when they wrote that final clause, imagining a colleague telling himself the DIE HARD story and using GCHQ’s data to mark someone as fair game for the rest of their life?

Holmes stories are perfectly suited to this kind of problem. From "A Scandal in Bohemia" to "A Study in Scarlet," to "The Man With the Twisted Lip," Holmes's clients often present at his doorstep wracked with guilt or anxiety about the consequences of their actions. Often as not, Holmes's solution to their problems involves not just unraveling the mystery, but presenting a clever way for the moral question to be resolved as well.

The next document is the "HIMR Data Mining Research Problem Book," a fascinating scholarly paper on the methods by which the massive data-streams from the deep fiber taps can be parsed out into identifiable, individual parcels, combining data from home computers, phones, and work computers.

It was written by researchers from the Heilbronn Institute for Mathematical Research in Bristol, a "partnership between the UK Government Communications Headquarters and the University of Bristol." Staff spend half their time working on public research; the other half is given over to secret projects for the government.

The Problem Book is a foundational document in the Snowden archive, written in clear prose that makes few assumptions about the reader’s existing knowledge. It likewise makes few ethical assertions about its work, striking a kind of academic posture in which something is "good" if it does some task efficiently, regardless of the task. It spells out the boundaries on what is and is not "metadata" without critical scrutiny, and dryly observes that "cyber" is a talisman -- reminiscent of "terrorist" -- that can be used to conjure up operating capital, even when all the other government agencies are having their budgets cut.

The UK government has recognized the critical importance of cyber to our strategic position: in the Comprehensive Spending Review of 2010, it allocated a significant amount of new money to cyber, at a time when almost everything else was cut. Much of this investment will be entrusted to GCHQ, and in return it is imperative for us to use that money for the UK’s advantage.

Some of the problems in this book look at ways of leveraging GCHQ’s passive SIGINT capabilities to give us a cyber edge, but researchers should always be on the look-out for opportunities to advance the cyber agenda.

The story the Problem Book tells is of scholars who’ve been tasked with a chewy problem: sieving usable intelligence out of the firehoses that GCHQ has arrogated to itself with its fiber optic taps.

Somewhere in that data, they are told, must be signatures that uniquely identify terrorists. It’s a Big Data problem, and the Problem Book, dating to 2010, is very much a creature of the first rush of Big Data hype.

For the researchers, the problem is that their adversaries are no longer identifiable by their national affiliation. The UK government can’t keep on top of its enemies by identifying the bad countries and then spying on their officials, spies and military. Now the bad guys could be anyone. The nation-state problem was figuring out how to spy on your enemies. The new problem is figuring out which people to spy on.

"It is important to bear in mind that other states (..) are not bound by the same legal framework and ideas of necessity and proportionality that we impose on ourselves. Moreover, there are many other malicious actors in cyberspace, including criminals and hackers (sometimes motivated by ideology, sometimes just doing it for fun, and sometimes tied more or less closely to a nation state). We certainly cannot ignore these non-state actors".

The problem with this is that once you accept this framing, and note the happy coincidence that your paymasters just happen to have found a way to spy on everyone, the conclusion is obvious: just mine all of the data, from everyone to everyone, and use an algorithm to figure out who’s guilty.

The bad guys have a modus operandi, as anyone who’s watched a cop show knows. Find the MO, turn it into a data fingerprint, and you can just sort the firehose’s output into "terrorist-ish" and "unterrorist-ish."

Once you accept this premise, then it’s equally obvious that the whole methodology has to be kept from scrutiny. If you’re depending on three ”tells” as indicators of terrorist planning, the terrorists will figure out how to plan their attacks without doing those three things.

This even has a name: Goodhart's law. "When a measure becomes a target, it ceases to be a good measure." Google started out by gauging a web page’s importance by counting the number of links they could find to it. This worked well before they told people what they were doing. Once getting a page ranked by Google became important, unscrupulous people set up dummy sites (“link-farms”) with lots of links pointing at their pages.
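Goodhart's dynamic is easy to make concrete. Here is a minimal, purely illustrative Python sketch (all page names invented): a naive "importance = inbound links" ranking works fine on an organic link graph, then collapses the moment a link farm targets the metric:

```python
# Illustrative only: naive "importance = inbound links" ranking,
# and how a link farm games it once the metric becomes a target.

def rank_by_inlinks(links):
    """links: iterable of (source, target) pairs; returns pages
    sorted by inbound-link count, most 'important' first."""
    counts = {}
    for _src, target in links:
        counts[target] = counts.get(target, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Organic web: real pages citing a genuinely useful page.
organic = [("blog", "encyclopedia"), ("forum", "encyclopedia"),
           ("news", "encyclopedia"), ("blog", "forum")]
print(rank_by_inlinks(organic)[0])   # the useful page ranks first

# Goodhart's law in action: a spammer stands up 50 dummy sites
# whose only purpose is to point at "spam-page".
farm = [("dummy%d" % i, "spam-page") for i in range(50)]
print(rank_by_inlinks(organic + farm)[0])   # now the spam page ranks first
```

The same logic applies to terrorist "tells": once the measure is known, gaming it is cheaper than satisfying it honestly.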

The San Bernardino shootings re-opened the discussion on this problem. When small groups of people independently plan atrocities that don’t require complicated or unusual steps to plan and set up, what kind of data massaging will surface them before it’s too late?

Much of the paper deals with supervised machine learning, a significant area of research and dispute today. Machine learning is used in "predictive policing" systems to send cops to neighborhoods where crime is predicted to be ripening, allegedly without bias. In reality, of course, the training data for these systems comes from the human-directed activity of the police before the system was set up. If the police stop-and-frisk all the brown people they find in poor neighborhoods, then that's where they'll find most of the crime. Feed those arrest records to a supervised machine-learning algorithm and ask it where the crime will be and it will send your officers back to the places where they're already focusing their efforts: in other words, "predictive policing" is great at predicting what the police will do, but has dubious utility in predicting crime itself.
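The feedback loop can be shown with a toy simulation (neighborhood names, rates, and patrol split all invented): two districts with identical underlying crime rates, a patrol history biased toward one of them, and a "predictor" that simply follows past arrest counts:

```python
# Toy simulation of the predictive-policing feedback loop.
# Both neighborhoods have the SAME true crime rate; only the
# patrol history differs.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1            # identical everywhere
neighborhoods = ["northside", "southside"]

def patrol(allocation, days=1000):
    """Arrests only happen where officers are sent: crime is
    observed in proportion to patrol presence, not incidence."""
    arrests = {n: 0 for n in neighborhoods}
    for _ in range(days):
        where = random.choices(
            neighborhoods,
            weights=[allocation[n] for n in neighborhoods])[0]
        if random.random() < TRUE_CRIME_RATE:
            arrests[where] += 1
    return arrests

# Historical bias: 90% of past patrols went to southside.
history = patrol({"northside": 1, "southside": 9})

# The "predictive" model: allocate tomorrow's patrols to wherever
# past arrest counts are highest.
prediction = max(history, key=history.get)
print(history, "->", prediction)
```

With the biased history, the arrest counts, and therefore the "prediction", concentrate in southside, even though the underlying crime rate is the same everywhere: the model predicts policing, not crime.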

The part of the document I was most interested in was the section on reading and making sense of network graphs. They are the kind of thing you’d use in a PowerPoint slide when you want to represent an abstraction like "the Internet". Network graphs tell you a lot about the structures of organizations, about the relative power relationships between them. If the boss usually communicates with their top lieutenants after being contacted by a trusted advisor, then getting to that advisor is a great way to move the whole organization, whether you're a spy or a sales rep.

The ability of data-miners to walk the social and network graphs of their targets, to trace the "information cascades" (that is, to watch who takes orders from whom) and to spot anomalies in the network and zero in on them, is an important piece of the debate on "going dark." If spies can look at who talks to whom, and when, and deduce organizational structure and upcoming actions, then the ability to read the content of messages -- which may be masked by cryptography -- is hardly the make-or-break for fighting their adversaries.
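What that looks like in practice can be sketched in a few lines (the contact log here is entirely invented): given nothing but who-called-whom metadata, simple degree counting already surfaces the hub of an organization, with no message content needed:

```python
# Illustrative only: inferring organizational structure from
# bare call metadata -- no content, just (caller, callee) pairs.
from collections import Counter

# Hypothetical call log.
calls = [
    ("advisor", "boss"), ("boss", "lieutenant1"),
    ("boss", "lieutenant2"), ("boss", "lieutenant3"),
    ("lieutenant1", "courier"), ("lieutenant2", "courier"),
    ("advisor", "lawyer"),
]

# Degree: how many calls each node participates in.
contacts = Counter()
for a, b in calls:
    contacts[a] += 1
    contacts[b] += 1

hub, degree = contacts.most_common(1)[0]
print(hub, degree)   # the structural centre of the network
```

Even this crude measure picks out the boss; real traffic analysis layers on timing, direction, and cascade ordering, which is exactly why "we only collect metadata" is such a thin reassurance.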

This is crucial to the debate on surveillance. In the 1990s, there was a seminal debate over whether to prohibit civilian access to working cryptography, a debate that was won decisively for the side of unfettered access to privacy tools. Today, that debate has been renewed. David Cameron was re-elected to the UK Prime Minister's office after promising to ban strong crypto, and the UK government has just introduced a proposed cryptographic standard designed to be broken by spies.

The rubric for these measures is that spies have lost the ability to listen in on their targets, and with it, their ability to thwart attacks. But as the casebook demonstrates, a spy's-eye view on the Internet affords enormous insight into the activities of whole populations -- including high-value terrorism suspects.

The Problem Book sets up the Mycroftian counterpoint to Sherlock's human intelligence -- human and humane, focused on the particulars of each person in his stories.

Sherlock describes Mycroft as an all-knowing savant:

The conclusions of every department are passed to him, and he is the central exchange, the clearinghouse, which makes out the balance. All other men are specialists, but his specialism is omniscience.

While Sherlock is energized by his intellectual curiosity, his final actions are governed by moral consequences and empathy. Mycroft functions with the moral vacuum of software: tell him to identify anomalies and he'll do it, regardless of why he's been asked or what happens next. Mycroft is a Big Data algorithm in human form.

The final document I relied upon in the story is one we won't be publishing today: an intercepted transcript of a jihadi chat room. This document isn't being released because there were many people in that chat room, having what they thought was an off-the-record conversation with their friends. Though some of them were espousing extreme ideology, mostly they were doing exactly what my friends and I did when I was a teenager: mouthing off, talking about our love lives, telling dirty jokes, talking big.

These kids were funny, rude, silly, and sweet -- they were lovelorn and fighting with their parents. I went to school with kids like these. I was one of them. If you were to judge me and my friends based on our conversations like these, it would be difficult to tell us apart from these children. We all talked a big game, we all fretted about military adventurism, we all cursed the generals who decided that civilian losses are acceptable in the pursuit of their personal goals. I still curse those generals, for whatever it's worth. I read reams of these chat transcripts and I am mystified at their value to national security. These children hold some foolish beliefs, but they're not engaged in anything more sinister than big talk and trash talk.

Most people -- including most people like these kids -- are not terrorists. You can tell, because we're not all dead. An indiscriminate surveillance dragnet will harvest far more big talkers than bad guys. Mass surveillance is a recipe for creating an endless stream of Arars, and each Arar serves as inspiration for more junior jihadis.

In my fiction, I've always tried to link together real world subjects of social and technological interest with storytelling that tries to get into the way that the coming changes will make us feel. Many readers have accused me of predicting the future because I've written stories about mass surveillance and whistleblowers.

But the truth is that before Snowden, there was WikiLeaks and Chelsea Manning, and Bill Binney and Thomas Drake before them, and Mark Klein before them. Mass surveillance has been an open secret since the first George W. Bush administration, and informed speculation about where it was going was more a matter of paying attention to the newspaper than peering into a crystal ball.

Writing a Sherlock Holmes story from unpublished leaks was a novel experience, though, one that tied together my activist, journalist and fiction writing practices in a way that was both challenging and invigorating. In some ways, it represented a constraint, because once I had the nitty-gritty details of surveillance to hand, I couldn't make up new ones to suit the story. But it was also tremendous freedom, because the mass surveillance regimes of the NSA and GCHQ are so obviously ill-considered and prone to disastrous error that the story practically writes itself.

I worry about "cybersecurity," I really do. I know that kids can do crazy things. But in the absence of accountability and independent scrutiny, the security services have turned cyberspace into a battleground where they lob weapons at one another over our heads, and we don't get a say in the matter. Long after this round of the war on terror is behind us, we'll still be contending with increasingly small computers woven into our lives in increasingly intimate, life-or-death ways. The parochial needs of spies and the corporations that supply them mustn't trump the need for a resilient electronic nervous system for the twenty-first century.

Astro Noise: A Survival Guide for Living Under Total Surveillance, edited by Laura Poitras, features my story "Sherlock Holmes and the Adventure of the Extraordinary Rendition," as well as contributions from Dave Eggers, Ai Weiwei, former Guantanamo Bay detainee Lakhdar Boumediene, Kate Crawford, and Edward Snowden.

The Astro Noise exhibition is on at New York City's Whitney Museum from February 5 to May 1, 2016.

Standing to sue for breach of privacy - When might state surveillance pose a "real and immediate risk"?

McGinty, Kevin - Massachusetts Court: Patients Have Standing to Sue for Data Breach Based on Data Exposure Alone - 20160105

A Massachusetts Superior Court judge held that a plaintiff has standing to sue for money damages based on the mere exposure of plaintiff’s private information in an alleged data breach. The court concluded that the plaintiff had pleaded a “real and immediate risk” of injury despite failing to allege that any unauthorized persons had even seen or accessed that information.  The Massachusetts decision adopts a more relaxed approach to standing than has generally been followed in the federal courts.  The holding, however, may not have broad applicability outside of Massachusetts state court, and does not eliminate potential obstacles to proving the claims asserted.

In Walker et al v. Boston Medical Center Corp., No. 2015-1733-BLS 1 (Mass. Super. Ct. Nov. 19, 2015), plaintiffs alleged that Boston Medical Center Corp. (“BMC”) notified them that their medical records “were inadvertently made accessible to the public through an independent medical record transcription service’s online site.”  Although BMC did not know how long the information had been vulnerable to access by unauthorized individuals, BMC notified the plaintiffs by letter that it had no reason to suspect that any patient data had been misused as a result of the breach.  Plaintiffs do not allege that any unauthorized persons actually viewed, accessed or misused their private information.  Plaintiffs seek to recover money damages under a host of statutory and common law theories.

BMC moved to dismiss for lack of standing. A robust line of federal authority, following the Supreme Court’s decision in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), holds that alleging mere exposure of private data, without any resulting harm or injury, is insufficient to establish standing to sue for money damages in federal court.  Without citing to or distinguishing these federal cases, the Massachusetts court denied BMC’s motion to dismiss, reasoning that pleading a “real and immediate risk” of injury was sufficient for a plaintiff to demonstrate standing.  Although the Walker plaintiffs did not allege that their medical records had been accessed, or their personal information used, by any unauthorized person, the court’s holding indicates that the mere exposure of patient data to potential access by unauthorized persons may still adequately plead an injury.  In this case, the plaintiffs alleged facts that, if true, “suggest[ed] a real risk of harm from the data breach at BMC” (internal quotations omitted) because BMC’s letter notifying the plaintiffs of the data breach supported an inference that “plaintiffs’ medical records were available to the public on the internet for some period of time and that there is a serious risk of disclosure.”  Based on this inference, the court found it was reasonable to draw the further inference that the records “either were accessed or likely to be accessed by an unauthorized person.”  This “general allegation of injury from the data breach” was sufficient to demonstrate standing.

This decision is significant for several reasons. First, Walker represents a comparatively lax approach to standing, in which alleging the mere exposure of information with the potential for access and misuse by unauthorized persons pleads sufficient injury to establish standing and survive a motion to dismiss.  In contrast, in Clapper, the U.S. Supreme Court held that plaintiffs who alleged that the National Security Agency (“NSA”) actually had access to their private telephone and email conversations through its surveillance program still lacked Article III standing to sue based on the theory that their communications would be obtained at some future point.  In other words, the threat of future injury was insufficient to support Article III standing even where access, not just exposure, to private information was actually alleged.  133 S. Ct. 1138, 1143 (2013).

Walker’s adoption of the relaxed “real risk of harm” standard for establishing standing in a data breach claim also leaves in question whether there may be real, meaningful differences in standing doctrine between the federal courts and Massachusetts’ Trial Court.  While the federal courts are subject to the constitutional restrictions of Article III’s “case or controversy” requirement, Massachusetts’ highest court has suggested in other cases that standing doctrine in state courts is not so exacting: “State courts…are not burdened by” the federal courts’ “same jurisdictional concerns and, consequently, may determine, particularly when class actions are involved, that concerns other than standing in its most technical sense may take precedence.” Weld v. Glaxo Wellcome Inc., 434 Mass. 1, 88-89 (2001).  Given this comparatively lax application of standing doctrine in Massachusetts state courts, Walker’s holding may not actually move the needle much and may have limited force beyond Massachusetts Superior Court.

As the Walker case proceeds through discovery, the parties will have the opportunity to build a full record demonstrating the actual breadth of the exposure, if any, resulting from the data breach, and whether, and to what extent, the breach posed a risk of harm to the plaintiffs, including the likelihood of any nefarious use of the plaintiffs’ personal information. Accordingly, any longer-lasting principles that develop out of this case may have to await further proceedings to establish what, if any, harm resulted from the breach.

Deibert, Ronald - The Geopolitics of Cyberspace After Snowden - 2015

“The aims of the Internet economy and those of state security converge around the same functional needs: collecting, monitoring, and analyzing as much data as possible.”

For several years now, it seems that not a day has gone by without a new revelation about the perils of cyberspace: the networks of Fortune 500 companies breached; cyberespionage campaigns uncovered; shadowy hacker groups infiltrating prominent websites and posting extremist propaganda. But the biggest shock came in June 2013 with the first of an apparently endless stream of riveting disclosures from former US National Security Agency (NSA) contractor Edward Snowden. These alarming revelations have served to refocus the world’s attention, aiming the spotlight not at cunning cyber activists or sinister data thieves, but rather at the world’s most powerful signals intelligence agencies: the NSA, Britain’s Government Communications Headquarters (GCHQ), and their allies.

The public is captivated by these disclosures, partly because of the way in which they have been released, but mostly because cyberspace is so essential to all of us. We are in the midst of what might be the most profound communications evolution in all of human history. Within the span of a few decades, society has become completely dependent on the digital information and communication technologies (ICTs) that infuse our lives. Our homes, our jobs, our social networks—the fundamental pillars of our existence—now demand immediate access to these technologies.

With so much at stake, it should not be surprising that cyberspace has become heavily contested. What was originally designed as a small-scale but robust information-sharing network for advanced university research has exploded into the information infrastructure for the entire planet. Its emergence has unsettled institutions and upset the traditional order of things, while simultaneously contributing to a revolution in economics, a path to extraordinary wealth for Internet entrepreneurs, and new forms of social mobilization. These contrasting outcomes have set off a desperate scramble, as stakeholders with competing interests attempt to shape cyberspace to their advantage. There is a geopolitical battle taking place over the future of cyberspace, similar to those previously fought over land, sea, air, and space.

Three major trends have been increasingly shaping cyberspace: the big data explosion, the growing power and influence of the state, and the demographic shift to the global South. While these trends preceded the Snowden disclosures, his leaks have served to alter them somewhat, by intensifying and in some cases redirecting the focus of the conflicts over the Internet. This essay will identify several focal points where the outcomes of these contests are likely to be most critical to the future of cyberspace.

Big Data

Before discussing the implications of cyberspace, we need to first understand its characteristics: What is unique about the ICT environment that surrounds us? There have been many extraordinary inventions that revolutionized communications throughout human history: the alphabet, the printing press, the telegraph, radio, and television all come to mind. But arguably the most far-reaching in its effects is the creation and development of social media, mobile connectivity, and cloud computing—referred to in shorthand as “big data.” Although these three technological systems are different in many ways, they share one very important characteristic: a vast and rapidly growing volume of personal information, shared (usually voluntarily) with entities separate from the individuals to whom the information applies. Most of those entities are privately owned companies, often headquartered in political jurisdictions other than the one in which the individual providing the information lives (a critical point that will be further examined below).

We are, in essence, turning our lives inside out. Data that used to be stored in our filing cabinets, on our desktop computers, or even in our minds are now routinely stored on equipment maintained by private companies spread across the globe. The data we entrust to them include what we are conscious of and deliberate about—websites visited, e-mails sent, texts received, images posted—but also much of which we are unaware.

For example, a typical mobile phone, even when not in use, emits a pulse every few seconds as a beacon to the nearest WiFi router or cellphone tower. Within that beacon is an extraordinary amount of information about the phone and its owner (known as “metadata”), including make and model, the user’s name, and geographic location. And that is just the mobile device itself. Most users have within their devices several dozen applications (more than 50 billion apps have been downloaded from Apple’s iTunes store for social networking, fitness, health, games, music, shopping, banking, travel, even tracking sleep patterns), each of which typically gives itself permission to extract data about the user and the device. Some applications take the practice of data extraction several bold steps further, by requesting access to geolocation information, photo albums, contacts, or even the ability to turn on the device’s camera and microphone.

We leave behind a trail of digital “exhaust” wherever we go. Data related to our personal lives are compounded by the numerous and growing Internet-connected sensors that permeate our technological environment. The term “Internet of Things” refers to the approximately 15 billion devices (phones, computers, cars, refrigerators, dishwashers, watches, even eyeglasses) that now connect to the Internet and to each other, producing trillions of ever-expanding data points. These data points create an ethereal layer of digital exhaust that circles the globe, forming, in essence, a digital stratosphere.

Given the virtual characteristics of the digital experience, it may be easy to overlook the material properties of communication technologies. But physical geography is an essential component of cyberspace: Where technology is located is as important as what it is. While our Internet activities may seem a kind of ephemeral and private adventure, they are in fact embedded in a complex infrastructure (material, logistical, and regulatory) that in many cases crosses several borders. We assume that the data we create, manipulate, and distribute are in our possession. But in actuality, they are transported to us via signals and waves, through cables and wires, from distant servers that may or may not be housed in our own political jurisdiction. It is actual matter we are dealing with when we go online, and that matters—a lot. The data that follow us around, that track our lives and habits, do not disappear; they live in the servers of the companies that own and operate the infrastructure. What is done with this information is a decision for those companies to make. The details are buried in their rarely read terms of service, or, increasingly, in special laws, requirements, or policies laid down by the governments in whose jurisdictions they operate.

Big State

The Internet started out as an isolated experiment largely separate from government. In the early days, most governments had no Internet policy, and those that did took a deliberately laissez-faire approach. Early Internet enthusiasts mistakenly understood this lack of policy engagement as a property unique to the technology. Some even went so far as to predict that the Internet would bring about the end of organized government altogether. Over time, however, state involvement has expanded, resulting in an increasing number of Internet-related laws, regulations, standards, and practices. In hindsight, this was inevitable. Anything that permeates our lives so thoroughly naturally introduces externalities—side effects of industrial or commercial activity—that then require the establishment of government policy. But as history demonstrates, linear progress is always punctuated by specific events—and for cyberspace, that event was 9/11.

We continue to live in the wake of 9/11. The events of that day in 2001 profoundly shaped many aspects of society. But no greater impact can be found than the changes it brought to cyberspace governance and security, specifically with respect to the role and influence of governments. One immediate impact was the acceleration of a change in threat perception that had been building for years.

During the Cold War, and largely throughout the modern period (roughly the eighteenth century onward), the primary threat for most governments was “interstate” based. In this paradigm, the state’s foremost concern is a cross-border invasion or attack—the idea that another country’s military could use force and violence in order to gain control. After the Cold War, and especially since 9/11, the concern has shifted to a different threat paradigm: that a violent attack could be executed by a small extremist group, or even a single human being who could blow himself or herself up in a crowded mall, hijack an airliner, or hack into critical infrastructure. Threats are now dispersed across all of society, regardless of national borders. As a result, the focus of the state’s security gaze has become omni-directional.

Accompanying this altered threat perception are legal and cultural changes, particularly in reaction to what was widely perceived as the reason for the 9/11 catastrophe in the first place: a “failure to connect the dots.” The imperative shifted from the micro to the macro. Now, it is not enough to simply look for a needle in the haystack. As General Keith Alexander (former head of the NSA and the US Cyber Command) said, it is now necessary to collect “the entire haystack.” Rapidly, new laws have been introduced that substantially broaden the reach of law enforcement and intelligence agencies, the most notable of them being the Patriot Act in the United States—although many other countries have followed suit.

This imperative to “collect it all” has focused government attention squarely on the private sector, which owns and operates most of cyberspace. States began to apply pressure on companies to act as a proxy for government controls—policing their own networks for content deemed illegal, suspicious, or a threat to national security. Thanks to the Snowden disclosures, we now have a much clearer picture of how this pressure manifests itself. Some companies have been paid fees to collude, such as Cable and Wireless (now owned by Vodafone), which was paid tens of millions of pounds by the GCHQ to install surveillance equipment on its networks. Other companies have been subjected to formal or informal pressures, such as court orders, national security letters, the withholding of operating licenses, or even appeals to patriotism. Still others became the targets of computer exploitation, such as US-based Google, whose back-end data infrastructure was secretly hacked into by the NSA.

This manner of government pressure on the private sector illustrates the importance of the physical geography of cyberspace. Of course, many of the corporations that own and operate the infrastructure—companies like Facebook, Microsoft, Twitter, Apple, and Google—are headquartered in the United States. They are subject to US national security law and, as a consequence, allow the government to benefit from a distinct home-field advantage in its attempt to “collect it all.” And that it does—a staggering volume, as it turns out. One top-secret NSA slide from the Snowden disclosures reveals that by 2011, the United States (with the cooperation of the private sector) was collecting and archiving about 15 billion Internet metadata records every single day. Contrary to the expectations of early Internet enthusiasts, the US government’s approach to cyberspace—and by extension that of many other governments as well—has been anything but laissez-faire in the post-9/11 era. While cyberspace may have been born largely in the absence of states, as it has matured states have become an inescapable and dominant presence.

Domain Domination

After 9/11, there was also a shift in US military thinking that profoundly affected cyberspace. The definition of cyberspace as a single “domain”— equal to land, sea, air, and space—was formalized in the early 2000s, leading to the imperative to dominate and rule this domain; to develop offensive capabilities to fight and win wars within cyberspace. A Rubicon was crossed with the Stuxnet virus, which sabotaged Iranian nuclear enrichment facilities. Reportedly engineered jointly by the United States and Israel, the Stuxnet attack was the first de facto act of war carried out entirely through cyberspace. As is often the case in international security dynamics, as one country reframes its objectives and builds up its capabilities, other countries follow suit. Dozens of governments now have within their armed forces dedicated “cyber commands” or their equivalents.

The race to build capabilities also has a ripple effect on industry, as the private sector positions itself to reap the rewards of major cyber-related defense contracts. The imperatives of mass surveillance and preparations for cyberwarfare across the globe have reoriented the defense industrial base.

It is noteworthy in this regard how the big data explosion and the growing power and influence of the state are together generating a political-economic dynamic. The aims of the Internet economy and those of state security converge around the same functional needs: collecting, monitoring, and analyzing as much data as possible. Not surprisingly, many of the same firms service both segments. For example, companies that market facial recognition systems find their products being employed by Facebook on the one hand and the Central Intelligence Agency on the other.

As private individuals who live, work, and play in the cyber realm, we provide the seeds that are then cultivated, harvested, and delivered to market by a massive machine, fueled by the twin engines of corporate and national security needs. The confluence of these two major trends is creating extraordinary tensions in state-society relations, particularly around privacy. But perhaps the most important implications relate to the fact that the market for the cybersecurity industrial complex knows no boundaries—an ominous reality in light of the shifting demographics of cyberspace.

Southern Shift

While the “what” of cyberspace is critical, the “who” is equally important. There is a major demographic shift happening today that is easily overlooked, especially by users in the West, where the technology originates. The vast majority of Internet users now live in the global South. Of the 6 billion mobile devices in circulation, over 4 billion are located in the developing world. In 2001, 8 of every 100 citizens in developing nations owned a mobile subscription. That number has now jumped to 80. In Indonesia, the number of Internet users increases each month by a stunning 800,000. Nigeria had 200,000 Internet users in 2000; today, it has 68 million.

Remarkably, some of the fastest growing online populations are emerging in countries with weak governmental structures or corrupt, autocratic, or authoritarian regimes. Others are developing in zones of conflict, or in countries that have only recently gone through difficult transitions to democracy. Some of the fastest growth rates are in “failed” states, or in countries riven by ethnic rivalries or challenged by religious differences and sensitivities, such as Nigeria, India, Pakistan, Indonesia, and Thailand. Many of these countries do not have long-standing democratic traditions, and therefore lack proper systems of accountability to guard against abuses of power. In some, corruption is rampant, or the military has disproportionate influence.

Consider the relationship between cyberspace and authoritarian rule. We used to mock authoritarian regimes as slow-footed, technologically challenged dinosaurs that would be inevitably weeded out by the information age. The reality has proved more nuanced and complex. These regimes are proving much more adaptable than expected. National-level Internet controls on content and access to information in these countries are now a growing norm. Indeed, some are beginning to affect the very technology itself, rather than vice versa.

In China (the country with the world’s most Internet users), “foreign” social media like Facebook, Google, and Twitter are banned in favor of nationally based, more easily controlled alternatives. For example, WeChat, owned by the China-based parent company Tencent (presently the fifth-largest Internet company in the world after Google, Amazon, Alibaba, and eBay), had 438 million active users as of August 2014 (70 million of them outside China) and a public valuation of over $400 billion. China’s popular chat applications and social media are required to police the country’s networks with regard to politically sensitive content, and some even have hidden censorship and surveillance functionality “baked” into their software. Interestingly, some of WeChat’s users outside China began experiencing the same type of content filtering as users inside China, an issue that Tencent claimed was due to a software bug (which it promptly fixed). But the implication of such extraterritorial applications of national-level controls is certainly worth further scrutiny, particularly as China-based companies begin to expand their service offerings in other countries and regions.

It is important to understand the historical context in which this rapid growth is occurring. Unlike the early adopters of the Internet in the West, citizens in the developing world are plugging in and connecting after the Snowden disclosures, and with the model of the NSA in the public domain. They are coming online with cybersecurity at the top of the international agenda, and fierce international competition emerging throughout cyberspace, from the submarine cables to social media. Political leaders in these countries have at their disposal a vast arsenal of products, services, and tools that provide their regimes with highly sophisticated forms of information control. At the same time, their populations are becoming more savvy about using digital media for political mobilization and protest.

While the digital innovations that we take advantage of daily have their origins in high-tech libertarian and free-market hubs like Silicon Valley, the future of cyberspace innovation will be in the global South. Inevitably, the assumptions, preferences, cultures, and controls that characterize that part of the world will come to define cyberspace as much as those of the early entrepreneurs of the information age did in its first two decades.

Who Rules?

Cyberspace is a complex technological environment that spans numerous industries, governments, and regions. As a consequence, there is no single forum or international organization for cyberspace. Instead, governance is spread across numerous small regimes, standard-setting forums, and technical organizations, from the regional to the global. In the early days, Internet governance was largely informal and led by non-state actors, especially engineers. But over time, governments have become heavily involved, leading to more politicized struggles at international meetings.

Although there is no simple division of camps, observers tend to group countries into those that prefer a more open Internet and a tightly restricted role for governments versus those that prefer a more centralized and state-led form of governance, preferably through the auspices of the United Nations. The United States, the United Kingdom, other European nations, and Asian democracies are typically grouped in the former, with China, Russia, Iran, Saudi Arabia, and other nondemocratic countries grouped in the latter. A large number of emerging market economies, led by Brazil, India, and Indonesia, are seen as “swing states” that could go either way.

Prior to the Snowden disclosures, the battle lines between these opposing views were becoming quite acute—especially around the December 2012 World Conference on International Telecommunications (WCIT), where many feared Internet governance would fall into UN (and thus more state-controlled) hands. But the WCIT process stalled, and those fears never materialized, in part because of successful lobbying by the United States and its allies, and by Internet companies like Google. After the Snowden disclosures, however, the legitimacy and credibility of the “Internet freedom” camp have been considerably weakened, and there are renewed concerns about the future of cyberspace governance.

Meanwhile, less noticed but arguably more effective have been lower-level forms of Internet governance, particularly in regional security forums and standards-setting organizations. For example, Russia, China, and numerous Central Asian states, as well as observer countries like Iran, have been coordinating their Internet security policies through the Shanghai Cooperation Organization (SCO). Recently, the SCO held military exercises designed to counter Internet-enabled opposition of the sort that participated in the “color revolutions” in former Soviet states. Governments that prefer a tightly controlled Internet are engaging in partnerships, sharing best practices, and jointly developing information control platforms through forums like the SCO. While many casual Internet observers ruminate over the prospect of a UN takeover of the Internet that may never materialize, the most important norms around cyberspace controls could be taking hold beneath the spotlight and at the regional level.

Technological Sovereignty

Closely related to the questions surrounding cyberspace governance at the international level are issues of domestic-level Internet controls, and concerns over “technological sovereignty.” This area is one where the reactions to the Snowden disclosures have been most palpably felt in the short term, as countries react to what they see as the US “home-field advantage” (though not always in ways that are straightforward). Included among the leaked details of US- and GCHQ-led operations to exploit the global communications infrastructure are numerous accounts of specific actions to compromise state networks, or even the handheld devices of government officials—most notoriously, the hacking of German Chancellor Angela Merkel’s personal cellphone and the targeting of Brazilian government officials’ classified communications. But the vast scope of US-led exploitation of global cyberspace, from the code to the undersea cables and everything in between, has set off shockwaves of indignation and loud calls to take immediate responses to restore “technological sovereignty.”

For example, Brazil has spearheaded a project to lay a new submarine cable linking South America directly to Europe, thus bypassing the United States. Meanwhile, many European politicians have argued that contracts with US-based companies that may be secretly colluding with the NSA should be cancelled and replaced with contracts for domestic industry to implement regional and/or nationally autonomous data- routing policies—arguments that European industry has excitedly supported. It is sometimes difficult to unravel whether such measures are genuinely designed to protect citizens, or are really just another form of national industrial protectionism, or both. Largely obscured beneath the heated rhetoric and underlying self-interest, however, are serious questions about whether any of the measures proposed would have any more than a negligible impact when it comes to actually protecting the confidentiality and integrity of communications. As the Snowden disclosures reveal, the NSA and GCHQ have proved to be remarkably adept at exploiting traffic, no matter where it is based, by a variety of means.

A more troubling concern is that such measures may end up unintentionally legitimizing national cyberspace controls, particularly for developing countries, “swing states,” and emerging markets. Pointing to the Snowden disclosures and the fear of NSA-led surveillance can be useful for regimes looking to subject companies and citizens to a variety of information controls, from censorship to surveillance. Whereas policy makers previously might have had concerns about being cast as pariahs or infringers on human rights, they now have a convenient excuse supported by European and other governments’ reactions.

Spyware Bazaar

One byproduct of the huge growth in military and intelligence spending on cyber-security has been the fueling of a global market for sophisticated surveillance and other security tools. States that do not have an in-house operation on the level of the NSA can now buy advanced capabilities directly from private contractors. These tools are proving particularly attractive to many regimes that face ongoing insurgencies and other security challenges, as well as persistent popular protests. Since the advertised end uses of these products and services include many legitimate needs, such as network traffic management or the lawful interception of data, it is difficult to prevent abuses, and hard even for the companies themselves to know to what ends their products and services might ultimately be directed. Many therefore employ the term “dual-use” to describe such tools.

Research by the University of Toronto’s Citizen Lab from 2012 to 2014 has uncovered numerous cases of human rights activists targeted by advanced digital spyware manufactured by Western companies. Once implanted on a target’s device, this spyware can extract files and contacts, send emails and text messages, turn on the microphone and camera, and track the location of the user. If these were isolated incidents, perhaps we could write them off as anomalies. But the Citizen Lab’s international scan of the command and control servers of these products—the computers used to send instructions to infected devices—has produced disturbing evidence of a global market that knows no boundaries. Citizen Lab researchers found one product, FinSpy, marketed by a UK company, Gamma Group, in a total of 25 countries—some with dubious human rights records, such as Bahrain, Bangladesh, Ethiopia, Qatar, and Turkmenistan. A subsequent Citizen Lab report found that 21 governments are current or former users of a spyware product sold by an Italian company called Hacking Team, including 9 that received the lowest ranking, “authoritarian,” in the Economist’s 2012 Democracy Index.

Meanwhile, a 2014 Privacy International report on surveillance in Central Asia says many of the countries in the region have implemented far-reaching surveillance systems at the base of their telecommunications networks, using advanced US and Israeli equipment, and supported by Russian intelligence training. Products that provide advanced deep packet inspection (the capability to inspect data packets in detail as they flow through networks), content filtering, social network mining, cellphone tracking, and even computer attack targeting are being developed by Western firms and marketed worldwide to regimes seeking to limit democratic participation, isolate and identify opposition, and infiltrate meddlesome adversaries abroad.

Pushing Back

The picture of the cyberspace landscape painted above is admittedly quite bleak, and therefore one-sided. The contests over cyberspace are multidimensional and include many groups and individuals pushing for technologies, laws, and norms that support free speech, privacy, and access to information. Here, too, the Snowden disclosures have had an animating effect, raising awareness of risks and spurring on change. Whereas vague concerns about widespread digital spying were voiced by a minority and sometimes trivialized before Snowden’s disclosures, now those fears have been given real substance and credibility, and surveillance is increasingly seen as a practical risk that requires some kind of remediation.

The Snowden disclosures have had a particularly salient impact on the private sector, the Internet engineering community, and civil society. The revelations have left many US companies in a public relations nightmare, with their trust weakened and lucrative contracts in jeopardy. In response, companies are pushing back. It is now standard for many telecommunications and social media companies to issue transparency reports about government requests to remove information from websites or share user data with authorities. US-based Internet companies have even sued the government over gag orders that bar them from disclosing information on the nature and number of requests for user information. Others, including Google, Microsoft, Apple, Facebook, and WhatsApp, have implemented end-to-end encryption.

Internet engineers have reacted strongly to revelations showing that the NSA and its allies have subverted their security standards-setting processes. They are redoubling efforts to secure communications networks wholesale as a way to shield all users from mass surveillance, regardless of who is doing the spying. Among civil society groups that depend on an open cyberspace, the Snowden disclosures have helped trigger a burgeoning social movement around digital-security tool development and training, as well as more advanced research on the nature and impacts of information controls.

Wild Card

The cyberspace environment in which we live and on which we depend has never been more in flux. Tensions are mounting in several key areas, including Internet governance, mass and targeted surveillance, and military rivalry. The original promise of the Internet as a forum for free exchange of information is at risk. We are at a historical fork in the road: Decisions could take us down one path where cyberspace continues to evolve into a global commons, empowering individuals through access to information and freedom of speech and association, or down another path where this ideal meets its eventual demise. Securing cyberspace in ways that encourage freedom, while limiting controls and surveillance, is going to be a serious challenge.

Trends toward militarization and greater state control were already accelerating before the Snowden disclosures, and seem unlikely to abate in the near future. However, the leaks have thrown a wild card into the mix, creating opportunities for alternative approaches emphasizing human rights, corporate social responsibility, norms of mutual restraint, cyberspace arms control, and the rule of law. Whether such measures will be enough to stem the tide of territorialized controls remains to be seen. What is certain, however, is that a debate over the future of cyberspace will be a prominent feature of world politics for many years to come.

David Miranda in fresh challenge over Heathrow detention - The Guardian 20151208

David Miranda, the partner of the former Guardian journalist Glenn Greenwald, has launched a fresh appeal challenging the legality of his detention under counter-terrorism powers for nine hours at Heathrow airport in 2013.

The hearing at the court of appeal in London is an attempt to overturn an earlier decision by a lower court that holding him under schedule 7 of the Terrorism Act 2000 was lawful.

About 60,000 people a year are held in such controversial port stops. The Home Office has argued that border controls exist to check on travellers where there is insufficient information to justify an arrest.

Miranda’s first legal challenge was supported by the Guardian. This court of appeal challenge is funded by First Look Media, which publishes the online magazine the Intercept. The organisation said the appeal had been brought to defend freedom of expression and journalists’ rights.

When Miranda was stopped in August 2013, he was carrying encrypted files containing journalistic material derived from the US National Security Agency whistleblower Edward Snowden, his lawyer told the appeal court.

Matthew Ryder QC said: “Snowden, whatever you may think of him, provided information which has been of immense public importance. In this case we are talking about journalism of unusually high quality.”

The previous court had erred in its decision, Ryder said, because it had misinterpreted the law on proportionality and the detention was incompatible with Miranda’s rights to privacy and freedom of expression under the European convention on human rights.

Last year three high court judges dismissed the challenge brought by Miranda, accepting that his detention and the seizure of computer material were “an indirect interference with press freedom” but saying this was justified by legitimate and “very pressing” interests of national security.

The three judges – Lord Justice Laws, Mr Justice Ouseley and Mr Justice Openshaw – concluded that Miranda’s detention at Heathrow was lawful, proportionate and did not breach European human rights protections of freedom of expression.

Miranda was stopped in transit between Berlin and Rio de Janeiro after meeting the film-maker Laura Poitras, who had been involved in making disclosures based on documents leaked by Snowden.

Miranda was carrying encrypted files, including an external hard drive containing 58,000 highly classified UK intelligence documents, “in order to assist the journalistic activity of Greenwald”. The Guardian made his travel reservations and paid for the trip.

The high court judgment said the seized material included personal information that would allow staff to be identified, including those deployed overseas.

The court of appeal was read a message sent on 16 August 2013 by MI5 to Det Supt Stockley of the Metropolitan police’s counter-terrorism command (SO15).

The memorandum was headed: “National security justification for proposed operational actions around … David Miranda”.

It said: “We strongly assess that Miranda is carrying items which will assist in Greenwald releasing more of the NSA and GCHQ material we judge to be in Greenwald’s possession … Our main objectives against David Miranda are to understand the nature of any material he is carrying, mitigate the risks to national security that this material poses.”

It added: “We are requesting that you exercise your powers to carry out a ports stop against David Miranda … There is a substantial risk that David Miranda holds material which would be severely damaging to UK national security interests. Snowden holds a large volume of GCHQ material, which, if released, would have serious consequences for GCHQ’s collection capabilities, as well as broader SIA operational activities.”

SIA is believed to refer to the secret intelligence agencies.

The following day, before Miranda arrived on 18 August, acting Det Insp Woodford at Heathrow was concerned that the port circulation sheet had not confirmed that a schedule 7 stop had been requested, the court was told.

Woodford was eventually sent further details, and the port circulation sheet received had additions to its intelligence summary section. It stated: “We assess that Miranda is knowingly carrying material, the release of which would endanger people’s lives.”

Ryder told the court: “The Security Service purpose was not a schedule 7 purpose. Wanting to get material off somebody and no more is not a [legitimate] schedule 7 purpose.”

The focus of a schedule 7 port stop, he maintained, should be whether or not the person targeted is preparing acts of terrorism. If the law was interpreted as the high court decided, Ryder said it “would mean that terrorism could be committed by acts that do not intend to incite violence or endanger life”.

He added: “It would mean that terrorism can be committed by acts that are themselves entirely lawful and … can be entirely lawful.”

Factsheets re: Investigatory Powers Bill - Big Brother Watch 20151202

The draft Investigatory Powers Bill has been published. At almost 300 pages long it outlines the future of surveillance powers in the UK.

Because the proposals have the potential to impact on each and every one of us, it is important we all know and understand what powers the Government wants the police and spies to have.

To help make sense of what has been published we have created special edition Big Brother Watch Privacy Factsheets which outline each power, what it is, how it can be used and what concerns there may be.

We encourage you to take a moment to have a read and get a sense of what is being proposed. Feel free to print them off, share them on social media or with friends and family.

If you find yourself with worries or concerns please write to your Member of Parliament or to the Joint Committee of MPs and Lords who are scrutinising the proposals. You don’t have long to make your voice heard, only until the 21st of December.

Benjamin Franklin once wrote that there were two certainties in life: death and taxes. The proposals in this draft Bill will add another certainty: surveillance. With your support we have the opportunity to prevent that from happening.