Category Archives: Personal Security

Canada Is Considering Spying on Kids to Stop Cyberbullying - Vice 20160426

Cyberbullying is simply awful, and its consequences can be utterly horrific. Canadians have known this all too well since 17-year-old Rehtaeh Parsons’ suicide in 2013, after photos of her alleged rape circulated online.

It’s only human to want to put a stop to it. But is it worth spying on kids?

To wit, the Canadian government is looking for a person or organization to “conduct an evaluation of an innovative cyberbullying prevention or intervention initiative” in a “sample of school-aged children and youth,” according to a tender notice published by Public Safety Canada last week.

Although nothing has been finalized, the government will consider letting the organization spy on kids’ digital communications to do it, Barry McKenna, the Public Safety procurement consultant in charge of the tender, told me.

“The tender doesn’t preclude or necessarily require digital monitoring,” said McKenna. “But there are certainly products on the market that do that, and I would guess that that kind of intervention would be one of interest.”

The school board overseeing the school used in the study would have to sign off on digital surveillance of kids, McKenna said, and so would Public Safety. McKenna would not disclose whether any person or organization has responded to the tender yet. The government has budgeted $60,000 for the program, the notice states.

“Any use by government of technology to scan the internet and read somebody’s communications obviously raises privacy issues,” said David Fraser, a Canadian privacy lawyer consulting on a new cyberbullying law for Nova Scotia. “Fewer privacy issues if it’s following an intervention and it’s targeted,” he continued, “way more if they’re trying to single out kids in Canada and assess what they’re saying.”

“What we’ve seen come out of Public Safety and most law enforcement agencies is a pretty un-nuanced, heavy-handed, over the top model,” Fraser added. Nova Scotia’s previous cyberbullying law, passed in the wake of Parsons’ suicide, was ruled unconstitutional and struck down for being too broad and infringing on people’s civil rights.

If the Public Safety study ends up taking a more blanket approach to monitoring kids instead of targeting surveillance after an incident, it could also risk undermining communication between kids and their teachers or parents, according to US Cyberbullying Research Center co-director Sameer Hinduja.

“Installing tracking apps undermines any sort of open-minded communication [that] youth-serving adults might have with these kids, because you’re tracking them surreptitiously,” said Hinduja. “Kids, as they get older, want more privacy and freedom. It’s natural—you want it, and I want it.”

This isn’t the first time somebody has considered surveillance as a solution to the complex social issue of kids being absolutely horrific to each other, and it likely won’t be the last. In 2013, The LA Times noted that the Glendale Unified School District in Southern California reportedly paid a firm $40,000 to monitor kids’ social media accounts to combat bullying. The move raised the ire of privacy advocates in the US then, too.

The point, according to Hinduja, is that bullying isn’t a uniquely digital problem. You don’t solve bullying forever by putting a teacher in every hallway, and you don’t fix crime by putting a cop on every corner.

“Cyberbullying isn’t a technological problem,” said Hinduja. “You can’t blame the apps, the smartphones, or the internet. Instead, cyberbullying is rooted in other issues that everyone has been dealing with since the beginning of time: adolescent development, kids learning to manage their problems, and dealing with stress.”

Snowden, Edward - The last lighthouse: Free software in dark times - 20160319


Enjoy my transcript of Edward Snowden's keynote address The last lighthouse: Free software in dark times, delivered to Libre Planet 2016 on March 19, 2016. Pre-release video recording available at

[John Sullivan]: We’re good? Well, welcome to Libre Planet 2016! You made it! You’re here! Well, not everybody made it, so we are streaming this event live. “Hello” to everybody watching along at home, too. Thank you for bearing with us, as we get things started here this morning. There are a lot of moving parts happening in this opening keynote, and we are doing it with all free software [audience cheers]. We’re really pushing the envelope here, and so there’s inevitably going to be some hang-ups, but we’ve been improving this process year after year, and documenting it, so that other conferences that are themed around free software and computer user freedom can hopefully use the same systems that we are [audience cheers] and practice what we want to preach. So, my name’s John Sullivan. I’m the Executive Director at the Free Software Foundation. This is always one of my favourite moments of the year, to start this conference off, but I’m especially excited about this year. We’ve had a [JS makes scary quotes] “Yuge” year, starting with our thirtieth anniversary in October, and continuing on to what is obviously our largest Libre Planet ever, and our biggest bang to start off the event, for sure. Let’s see: how many FSF members are here? Awesome! That’s amazing! Thank you, and I hope that the rest of you will consider becoming members by the end of the conference. You can join at Members and individual donors fund over eighty percent of the FSF’s work, including putting on this event, as well as our advocacy for computer user freedom, and development of free software. I’m really happy to have this event at MIT, again, where so much of the free software movement started, and I want to thank our partners at the Student Information Processing Board - SIPB - for partnering with us, to make this happen. It’s really nice to see free software values continuing to be a strong part of the MIT community. [Applause] Yes, thank you.
I have a few important announcements and reminders about the rest of the conference. First thing is, we have a safe space policy, that’s on page three of your program. Please read it and help us make this event a welcoming environment for absolutely everybody. If there are any issues that come up, please feel free to find me, or Georgia Young. The Information Desk will always know where we are and Georgia has her hand up in the back. Second of all, there is a party tonight at Elephant and Castle near the Downtown Crossing subway station. I hope you will join us there. We will provide some complimentary refreshments and continue conversations that get started during the conference today. We are streaming, as I mentioned, with all free software. The party, though, will not be streamed. [Laughter] We have a few program changes to announce [provides details of changes to conference program] [03:41] After the conference is over tomorrow, there will be a rally, at which people will try to convince the W3C not to make a terrible mistake by endorsing DRM as a recommended extension to HTML 5. And that will be happening outside the Stata Center at 6:45 tomorrow night. Zak Rogoff at the Information Desk will have information for people who want to participate. That’s after the conference is concluded. Finally, please join us on IRC at the #libreplanet channel on Freenode, both to communicate with people that are watching from home, and also just to have some back channel conversation about everything that’s happening. So, we have an amazing start to this year’s conference, with Daniel Kahn Gillmor and Edward Snowden. Daniel is a technologist with the ACLU’s Speech, Privacy, and Technology Project and a free software developer. He’s a Free Software Foundation member, thank you, a member of Debian, and a contributor to many free software programs, especially in the security layer many of us rely on.
He participates in standards organizations, like the IETF, with an eye to preserving and improving civil liberties and civil rights through our shared infrastructure. Edward Snowden is a former intelligence officer, who served in the CIA, NSA, and DIA for nearly a decade as a subject-matter expert on technology and cyber security. In 2013, he revealed the NSA was unconstitutionally seizing the private records of billions of individuals who’d not been suspected of any wrong-doing, resulting in the largest debate about reforms of US surveillance policies since 1978. And I want to take the chance to say “Thank you” for also inspiring us, at the Free Software Foundation, to redouble our efforts to promote user security and privacy through the use of free software programs like GnuPG, if you’ve seen our guide at that was inspired by the actions that Snowden took and the conversation that that started. I would love to say more about how all this relates to free software, but I think I will leave that to our speakers this morning, while they have a conversation entitled “The last lighthouse: free software in dark times.” We started a little bit late. We are cancelling the break after this, so the next session will begin in this room immediately after this one concludes. So, we should have the full amount of time, so, thank you everybody. [JS gestures to DKG]

[06:47] DKG: So, I’m going to go ahead and bring Ed in, hopefully. Let’s see. [Snowden appears 06:54] Ed, can you hear us? [Extended applause and cheers]

[07:00] Edward Snowden: Hello, Boston! Thank you. Wow!

DKG: Ed, you can’t see it, but people seem to be standing, right now.

ES: Thank you. Thank you. Wow! Thank you so much. Please, if I could say one thing. When we were introduced, the thing that always surprises me is that people say, you know, “Thank you” to me. But this is an extraordinary event, for me personally, because I get to say, “Thank you” to you. So many people forget – maybe people haven’t seen Citizenfour, for example, the documentary where they actually had the camera in the room when the NSA revelations were happening – but if you watch closely in the credits, they thank a number of FOSS projects, including Debian, Tails, Tor, GnuPG, and so on and so forth. And that’s because what happened in 2013 would not have been possible without free software. I did not use Windows machines when I was in my operational phase, because I couldn’t trust them – not because I knew that there was a particular backdoor or anything like that – but because I couldn’t be sure. Now, this ambiguity - this fear - this risk - that creates this atmosphere of uncertainty that surrounds all of us - is one of the central problems that we in the security space – in the software space, in general – the connection space of the Internet - in the way that we relate to one another – whether it’s in politics, or law, or technology – find really difficult to dispel. You don’t know if it’s true. You don’t know if it’s fact or not. Some critics of the revelations and what happened say, “Yeah, ah, we all knew that. Everyone knew that was happening. We figured that out.” And the difference is, many of us suspected – technologists suspected – specialists suspected – but we didn’t know. We knew it was possible. We didn’t know it was actually happening. Now, we know. And, now, we can start to make changes. We can integrate these threats into our threat model.
We can adjust the way that we not just vote – not just the way we think about the issues – but the way that we develop, direct, and steer the software and systems that we all rely upon, every day, that surround us invisibly in every space. Even people whose lives don’t touch the Internet - people who still have to go to the hospital - people who still may have a record of purchasing something at this location or that – somebody who spends money through banks – people who purchase something in a store – all of these things touch systems upon which we must all rely, but increasingly cannot trust - because we have that same Windows problem. Now, since 2013, I think everyone in the audience – this isn’t going to be controversial for you – would agree that Windows isn’t exactly moving in the right direction. They may be putting forth new exploit mitigations, making things a little more difficult for buffer overflows and things like that, including ASLR, and everything like that, which is great, but at the same time they’re putting out an operating system like Windows 10, that is so contrary to user interests, where rather than the operating system working for you, you work for the operating system, you work for the manufacturer. This is not something that benefits society, this is not something that benefits the user, this is something that benefits the corporation. Now, that’s not to say “All corporations are evil.” That’s not to say, “I’m against private enterprise” or that you should be.
We need to have systems of business, to be able to develop things, to sell things, to trade and engage with each other, to connect and for [inaudible] – but sometimes corporations are on our side, sometimes corporations do stand up for the public interest - as, right now, Apple is challenging the FBI, which is asking to basically smother the security of every American device, service, and product that’s developed here, and ultimately around the world, while it’s still in its crib. We should not have to rely on them. And this talk today, I hope, is about where we’re at in the world, and thinking - for everyone in the audience - not about what people say, not, you know, this fact or this authority, but about what you believe, what you think, is the right way to move forward.

[12:26] DKG: So, I wanted to touch on that, on the questions around the security of free software and the security of non-free software as well. The Apple case is an interesting one, because it is a chance for us to, I think, continue to move the conversation forward about what protections are actually offered to users. There are a lot of situations here where people are saying, “Well, the Apple phones are more secure because they’ve got this lock-down.” And I’d be curious to hear your take on, how do we respond to that? What are the trade-offs here, between the lock-down on Apple devices and the other possibilities - on hardware that’s maybe not so locked down?

[13:15] ES: A lot of people have difficulty distinguishing between related concepts – one of which is security, the other of which is control. Now, a lot of politicians have [inaudible] those issues, and have said this is a conversation about where we draw the line between our rights and our security, or between liberty and surveillance, or whatever. But that really misses the point. And this is the central issue in that sort of Apple walled-garden approach. Apple does produce some pretty reliable exploit protections. Does that mean it’s a secure device? Well, they can push updates at any time, that they sign in an arbitrary way, that can completely change the functionality of the device. Now, many trust them currently not to abuse that, and we’ve got at least some indication that they haven’t, which is a positive thing. But the question is, Is the device truly secure when you have no idea what it’s doing? And this is the problem with proprietary software. This is the problem with closed-source ecosystems, that are increasingly popular today. This is also the problem even with some open systems - or the more open systems, like the Android space - where security updates are just a complete, comprehensive, fractured disaster. [ES and audience laugh] I don’t mean to go too far, but I’m sure you guys have heard this stuff. So nobody’s going to go stand up with a question, and then read me a speech about why this is wonderful. Um. But the challenge here is, Are there alternatives, right? And we know, from the Free Software Movement, that there are. You will notice in this talk, as the moderator introduced, that there’s no Google logo up here [ES points over his shoulder] for like the first time. Not the first time ever - I have done many talks on FOSS stuff - but never a full FOSS stack. Right now this is a complete stack, that’s completely free and open-source.
And this is important, because what we do in our spaces, where we are a little more technical - we are a little more specialist - we can put up with more inconvenience - we develop the platforms, the capabilities, the strategies, that can then be ported over to benefit the entire class of communities that are less technical and, in many cases, simply cannot afford or access proprietary software in the traditional market-driven ways. Now, this is critical, because some of the most brilliant people I know, particularly Linux contributors, and so on and so forth, got their start - not because they necessarily believed in the ideology - but because they couldn’t afford licenses for all this different software, and they hadn’t yet developed the technical sophistication to realize that they could just pirate everything. Now, this [audience and Snowden laugh] … this is actually a beneficial thing, and something I want everyone in the room to watch out for, right. Look for these people. This community that we have, that we’re building, that does so much for some people, has to grow, because we can’t compete with Apple, we can’t compete with Google directly, in the field of resources. Where we can eventually compete is head-count and heart-count. We can compete on the ground of ideology, because ours is better [audience and Snowden laugh; audience applauds] … but we also have to focus on recruitment, on bringing people in, and helping them learn, right. Everybody got started somewhere. I did not start on Debian. I did not start on Linux. I was an MCSE, right, I was a Microsoft guy, ‘til eventually I saw the light. This doesn’t mean that you cast off … this doesn’t mean that you can’t use any proprietary software. I know Richard Stallman’s probably at the back and he’s waving his finger. [audience and Snowden laugh] But we’ve got to recognize that it’s a radical position to say that you can’t engage with proprietary software at all.
That’s not to say it’s without merit. The world needs radicals. We need lessons. We need leaders. We need figures who can pull us in the direction of trying new things, of expanding things, and recognizing that in a world where our visibility into the operation of our devices – whether it’s a washing machine, a fridge, or the phone in your pocket – is something that increasingly you have no idea what is going on, or, even if you want, you have no control over, short of exploiting it and trying to get /root, and then doing it on your own. That’s a fundamentally dangerous thing. And that’s why I call it the last lighthouse, right. The people in this room – whether you’re more on the radical side or more on the mainstream side – you’re blazing a trail, you’re recognizing solutions, and going “Look, we can deal with the software problem. We can do our best, but we recognize it’s a challenge.” But there are more problems that are coming, and we’re going to need more people, who are going to solve them. Everybody’s talking about the difficulties of software trust, but we really need to start thinking about hardware trust, right. There are distributions and projects like this - the Qubes project, researchers like Invisible Things Lab, Joanna Rutkowska, and others who are really focusing on these things, as well as many people in the Free Software Foundation. And we need to think about the world where – alright – maybe the FBI didn’t get a backdoor in the iPhone. But maybe it doesn’t matter, because they got the chip fabs. Maybe they already do. We need to think about a world where the infrastructure is something that we will never control. We will never be able to put the commercial pressure on telecommunications providers to make them resist the government, who they have to beg for regulatory licenses to actually operate the business. But what we can do is layer our own systems on top of their infrastructure. Think about things like the Tor project.
Tor’s incredible. I use Tor every day. I rely on Tor. I used it during the actual NSA work I did as well. And so many people around the world do. But Tor is showing its age, right. No project lasts forever. And we have to constantly be focused, we have to constantly be refreshing ourselves, and we need to look at where the opportunities are, and where the risks are. I should pass it back to Dan, because I’ve just rambled for, like, twenty minutes. [audience laughs]

[19:55] DKG: Well, I think what you’re saying about how do we bring more people to the movement is really important. So I, I mean, I’ll say I came to free software for the technical excellence and I stayed for the freedom, right [audience and Snowden laugh] I came to free software at a time when Debian was an operating system that you could just install and automatically update and it worked. That didn’t exist elsewhere. I used to have Windows systems, where I was wiping the machine and re-installing every two months. [Snowden, audience and DKG laugh] and, I think a couple of people raised their hands, people have been there. So, you know, come for the technical excellence, and as I learned about the control that I ended up actually having and understanding what was going on in the systems … that became the reason that I stayed. It didn’t matter, as the other systems sort of caught up, and realized “Oh, well, we can actually do automated updates. Microsoft has a system update thing that they do.” So, I’m wondering if you have other ideas about maybe what are ways that we can expand our community, and what are ways we can sustain our community as we grow. I think maybe that’s a question for everyone in this room. But I’d be curious to know if you have any particular ideas or suggestions. Not everyone who comes to the community is going to be geeky enough to want to know what code is running on their refrigerator. But in ten years everybody’s refrigerator is going to be running code, and so how do we, like, how do we make sure that that message gets out? That people can be proud to have a free software fridge [audience and Snowden laugh] without being a free software hacker. What are ways that we can expand the community?

[21:42] ES: Well, one of the main ways is, we’ve got to be better, right. If you have a free software fridge, it’s got to be better, it’s got to be more interesting, it’s got to be more capable, it’s got to be more fun than the proprietary equivalent. And the fact that, in many cases, it’s free is a big selling point. But beyond that – beyond the actual competitive strategy – we need to think about, as you said, the community strategy. And I don’t like [inaudible] for authority - especially from big talking heads on the wall – but, I would say, that everybody in the room should take a minute to think about their part in it, what they believe in, what they value, and how you can protect that, and how you can pass that to people who come after you, right. ‘Cause you can’t wait until your death bed, you know, like at eighty, to make this happen. It’s something that has to be a life-long practice, particularly in the context of organizing, particularly in the context of growing a group, particularly a group of belief. I would say, everybody in the room should make a task for themselves: this year, bring five people into the free software community. Now, that seems really difficult. But when you think, you know, well alright, at any level - whether they just sign up for a membership when they donate, whether they do a basic commit on some Git somewhere, even if it’s just changing something cosmetic, making something a little bit more user-friendly. Even if it’s just a pull-request or a fork or branch that they’re using only for themselves, …

[23:10] DKG: Or a bug report.

ES: … or a bug report, even better. It’s important, because what we’re trying to do is, we’re trying to expose people to the language of empowerment, right. And that’s what this is really about. This gets back to the whole thing from before, whether it’s privacy versus security, or security versus privacy. It’s not about privacy versus security, because when you’re more secure, you have more privacy; when you have more privacy, you’re a lot more secure as well. This is really about power, right. When we look at how these programs have actually been used in a surveillance context, it’s not just against terrorists, right. The GCHQ was using NSA systems to intercept the emails of journalists. They spied on Amnesty International. They spied on other human rights NGOs. In the United States, we used our capabilities to spy on UNICEF, the children’s fund, right, for the UN. And this was not the only time. When we looked at their actual statistics, we saw they abused their powers or broke the law 2,776 times in a single calendar year. Now, this is a problem for a lot of reasons, not least of which is the fact that no one was ever charged, right, no one was prosecuted, because they didn’t want to reveal the fact that these programs existed. But when we talk about what this means for people, right, ultimately it gets into that world of - Are you controlling your life? Are you controlling the things around you? Do they work for you? Or do they work for someone else? And this language of empowerment is something, I think, that underlies everything that your organization has been doing, not just in the defense of liberty sense, or the “free as in kittens” sense, [audience and Snowden laugh] but the idea that, look, right, we’re no longer passive in our relationship with our devices.

[25:03] DKG: Yeah, so when I think about the devices that we need to have some level of control over – I mentioned the refrigerator earlier, but, you know, increasingly we’re dealing with things like cars that have proprietary systems with over-the-air updates. [Snowden laughs] More and more of our lives, our intimate conversations, are mediated through these devices, and so it’s interesting for me to think about how we approach this ecosystem. Maybe we actually now do have fully free computers, thanks to people in this room – we actually have, you know, laptops that are free from pretty much the BIOS upwards, including coreboot – but as more things become computerized, how do we make sure that people’s cars don’t themselves become surveillance devices, and how do we make sure that the little pocket computers that everyone carries around aren’t surveillance devices for people? And so I think one of the things that points to is that, as a community that cares about user empowerment – this is freedom zero, right, the freedom to use these tools the way you want to use them – we have to, I think, make outreach also to communities with shared values. And you mentioned open hardware communities, people who are building tools that maybe we can have some level of control over, in the face of a bunch of other pieces of hardware that are under someone else’s control. But there are additional communities that we need, I think, to also reach out to, to make sure that this message – you know, that surveillance is a power dynamic, and that control over your devices can actually provide people with some level of autonomy – gets through. And that means that we need to have more outreach to, I mean, to think about what’s going on, on the network stack itself. I mean, this is something I’ve focused on.
If the protocols that we use are implemented in free software, but the network protocols themselves are very leaky, that doesn’t actually provide people with what they want. And it’s not very easy for people to come along and change a protocol, if it’s a communications protocol. So I think we need to look at the network standards, we need to look at regulatory standards – so I’m happy, I’m hoping there are lawyers in the room, I suspect there are, well, there’s a couple of people raising both hands. [Snowden and audience laugh] So, that kind of outreach - can we have regulatory guidance that says, “If you’re going to put a vehicle on the road, it needs to be running free software”? I mean, that’s a super-radical position today. Can we make that not a radical position? How can we make that outreach into the communities of non-geeks, to make sure that these messages about power and control, which are central to our lives in a heavily technologically-mediated society, actually are addressed in all of the places where they arise? I don’t know if you have other particular places where you can imagine outreach, Ed, a community to ally with?

[28:25] ES: You hit a big point with the network problem. That gets back into the fact that we can’t control the telecom providers, you know, we’re very vulnerable to them. If you wanted to compress the story of 2013 to its – leaving politics aside, right, leaving the big democratic question of the fact that politicians were telling us this wasn’t happening, intelligence officials were giving sworn testimony saying this wasn’t happening, when it obviously was – and we focus just on the technical impact, and we want to compress it to a central point - it would be that the provider is hostile. The network path is hostile. And we need to think about mitigations for that. Now, we need to think about also where all the providers are, what they are, and how they can be replaced. Now, open hardware is one of the big challenges here. We’ve seen some advances, like the Novena laptop. We’ve seen some other things, like Purism, and many others that I haven’t named directly. But there’s a large question here, where if we can’t control the telecommunications provider, if we can’t control the chip fabs, right, how can we assert things? Well, the first solution was, Encrypt everything. And this is an important first step, right. It doesn’t solve the metadata problem, but it’s an important first step. The next step is, Tunnel everything, right. And then the step beyond that is, Mix everything, so we can smudge the metadata and it’s hard to tell where things went. Now, there are still theoretical problems with a global passive adversary, timing attacks, and what not, but you make this more expensive, and less practical, with each step we go beyond, and then there’s somebody in this room, who likely has the idea that none of the rest of us have, on how to really solve this. And this is what we need. Also in the hardware space.
Is it possible, that rather than getting these very specialized chips - that say, I do exactly that, I have exactly this instruction set, and I’m inflexible - we realize that, because we’re bumping up against the limits of physical die shrinks at this point, we could reach a point where maybe we start changing our architecture a little bit more radically? We have flexible chips, things that are like FPGAs for everything. And instead of getting a hyper-specialized chip, instead we get a hyper-capable chip that can simply be used in any arbitrary manner, and this community shares masks of our own design, that are logical masks, rather than physical masks, for designing and directing how they work. [pauses] There’s another question here that I actually don’t know a lot about, but I think, Daniel, you’ve done some research on this - when we get into the actual toolchaining, right, how do we build and program devices and things like that? For myself, I’m not a developer full-time. That was never my focus. And there’s this question - we’ve seen sort of attacks, including in, like, the NSA documents, the XcodeGhost type thing, where an adversary, an arbitrary adversary, will target a developer, right, and rather than poison a specific binary, rather than trying to steal their signing key or something like that, or in addition to stealing their signing key, they’ll actually go after the compiler. They’ll actually go after their toolchains. Or, on the network, they’ll start tracking people, and the activities of developers, even if they start working pseudonymously, because they’ll look at their toolchains, they’ll look at, Is there some cruft? Is there some [inaudible] is there some artefact? Is there some string that constantly repeats in their work? Is there some variable that’s unique to them and their work, that identifies them, even if they’re under [inaudible] How do we head this off?
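The layered mitigations Snowden describes a little earlier – encrypt everything, tunnel everything, mix everything – are essentially the onion-routing idea behind Tor: each hop can remove only its own layer, so no single hop sees both the sender and the plaintext. Here is a minimal toy illustration of the wrapping step; the message, hop count, and XOR one-time pads are all invented for illustration, and XOR pads stand in for real ciphers (do not use this construction for anything real):

```python
# Toy sketch of onion-style layering: encrypt once per hop,
# then peel one layer per hop. XOR with a random one-time pad
# stands in for real per-hop encryption (illustration only).
import secrets

def wrap(layer_key: bytes, payload: bytes) -> bytes:
    """Apply (or remove) one hop's layer by XOR-ing with its pad."""
    return bytes(b ^ k for b, k in zip(payload, layer_key))

message = b"meet at the lighthouse"
hops = [secrets.token_bytes(len(message)) for _ in range(3)]

# Sender wraps one layer per hop.
onion = message
for key in hops:
    onion = wrap(key, onion)

# Each hop peels exactly one layer; only after the last does
# the plaintext reappear.
peeled = onion
for key in reversed(hops):
    peeled = wrap(key, peeled)

assert peeled == message  # round-trip recovers the original
```

Each intermediate value is unreadable without the remaining pads, which is the sense in which mixing "smudges" who sent what to whom.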

[32:08] DKG: Right, this is, like, one level past the I Hunt Sysadmins slide, right this is the I Hunt Developers slide, and I would hope that the free software developers in this room care about that issue, right. I mean, I certainly, I know that, as a free software developer, lots of people take their responsibilities seriously. You don’t want to release bad code – sometimes, occasionally, some people maybe make some mistakes, some bugs [Snowden laughs] but we take the responsibility seriously. We want to fix the bugs that we make. But what if your toolchain is corrupted? What if you do get targeted? If you’re maintaining a piece of core infrastructure, like many people in this room probably are, how do you ensure that a target, a targeted attack on you doesn’t become an attack against all of your user base? I think we actually, what’s great is we actually have people working on this problem. I know there’s a talk later today, or tomorrow rather, about reproducible builds, which is an opportunity to make sure we get, you can go from, I’m not going to give the talk in five minutes here [Snowden laughs] I’m just going to give an outline. You should definitely check it out. But the goal is you can go from your source code through your toolchain and get a reproducible, like byte-for-byte identical value. And so that way, as a software developer, you know that any attack against your toolchain doesn’t matter, as long as the tools, as long as you’re publishing the source code that you mean to publish, your users can rely on tools that are built by many different builders, that will all produce the same result, and they can verify that they’re getting the same thing from each party. 
We’re not there yet, because our tools are still kind of, kind of crufty, they build in some arbitrary things, but we’re making great strides towards making non-reproducibility itself something we can detect, and stamp out as a new class of bugs, that we can rule out, and that gives us a leg up also against the proprietary software community, where they can’t simply do that, if they don’t have the source code even visible, they have no way of saying, “Look, this is the human intent, the human-readable intent, and here’s the binaries that come out, that other people can check.” So reproducible builds is one path to that kind of trust, and I think there are probably others, and I hope people are actively thinking about that. The other way that I’ve heard this framed is, “Do you want to be the person who gets made an offer that you can’t refuse, right?” [Snowden laughs] If you’re a free software developer, and you’re publishing your source code, people can see what you publish, and they can say “Hey, did you really mean to do this?” But if you’re just distributing binaries, or you’re distributing your source code next to binaries, and your binaries are compromised, anybody who is looking at your source, at your disk, will say “Well, the disks all look clean,” and yet your binaries could be compromised. So, personally, as a free software developer, I don’t want to be in that position. I don’t want to be giving anybody any binaries. I want to be able to give them changes that are human-readable. So, we’re running a little bit low on time, and I want to make sure that if people have questions, they get a chance to ask questions. There’s a couple other things that I’d love to talk with you about, but if people have questions I’m going to ask that you come down and line up here at the mic, if you have a question. A couple of people are starting. 
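The byte-for-byte property described above can be illustrated with a small, self-contained sketch. This is not from the talk: it uses Python’s own bytecode compiler as a stand-in for a real toolchain, and the file and directory names are invented. The point is that once timestamp-style variation is pinned (here via hash-based invalidation), two independent builds of the same source produce identical bytes that any user can verify:

```python
import hashlib
import pathlib
import py_compile

# Hypothetical source file that two independent "builders" will compile.
src = pathlib.Path("hello.py")
src.write_text('print("hello")\n')

digests = []
for build_dir in ("build1", "build2"):
    out = pathlib.Path(build_dir) / "hello.pyc"
    out.parent.mkdir(exist_ok=True)
    # CHECKED_HASH embeds a hash of the source instead of its mtime,
    # removing the timestamp that normally makes .pyc outputs differ
    # from build to build.
    py_compile.compile(
        str(src),
        cfile=str(out),
        invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
    )
    digests.append(hashlib.sha256(out.read_bytes()).hexdigest())

# Byte-for-byte identical outputs: a user who rebuilds from the
# published source can detect a tampered binary or toolchain, because
# the tampered artifact's hash will not match everyone else's.
assert digests[0] == digests[1]
print("reproducible:", digests[0][:16])
```

Real toolchains apply the same idea by pinning the sources of variation that leak into binaries, such as timestamps, build paths, and locale.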
But before the questions start, I’m just going to lead off with one, which is, have you got any ideas for how we address the, you mentioned the Android lack of security updates, how do we address, any ideas or suggestions for how we address the stability versus legacy compatibility versus actually security updates quandary? [Snowden smiles]

[35:44] ES: So, this is, like, [Snowden laughs]

DKG: In one minute … it’s easy.

ES: If I could solve this, I’d have an easier time getting the Nobel prize, right. [audience and Snowden laugh] But the challenge here is that there’s a real impact to supporting legacy. Everybody knows this. But users don’t accept so well that there’s a gigantic security impact, that makes it actually unethical to support really out-of-date versions, to keep people anchored back in that support chain. Because it’s not just a question about versioning, it’s not just a question about stable, right. Stable is important, but increasingly we’re finding out the pace of adversary offensive research is so fast that if our update cycles are not at least relevant to that attack speed, we’re actually endangering people. And by being one of the users who’s out there on a mailing list [Snowden gestures mockingly] “Oh, this breaks functionality blah-blah-blah for my floppy-disk driver in my virtual machine.” It’s like, “Yo! Stop using a floppy disk in your virtual machine!” [audience laughs]

[37:03] DKG: So, we’ve got a queue of questions. I want to make sure that we get to them. I might need to repeat the mic, repeat the questions. I’m not sure whether you’ll hear them. Go ahead.

[37:13] Question #1: Hi, my name is Curtis Glavin. Thanks for taking my question. I was wondering, should a priority for the free software community be developing a kind of killer app for privacy, security, and a lot of these ideas that we all care about, that could gain, that could gain widespread adoption and transform public opinion, mainstream public opinion on these issues? And, if so, what could such a kind of killer app be, and how could the community build it? Thanks.

[37:54] ES: Absolutely. I mean, we start to see some of these things happening, particularly in the [inaudible] space, where we’re on ecosystems, where we have less control over, or they’re starting to put apps there. Now, can we create a competing stack? And, more importantly, as you say, the first is capability, because that’s what people care about, that’s what the user cares about, is capability. We see things like Signal, that are starting to try to tackle this, in the messaging space, right. But Signal’s not perfect, right. Signal has weaknesses. It leaks metadata, like telephonic contact, your address list, and things like that. And we have to figure out, are there ways that we can change collaboration? Now, here’s the big one, right. Living in exile kind of gives you an interesting perspective on different ways that people interact. One of the big ones is the fact, look, there’s a warrant against me, right. If I was trying to speak with you at MIT, there’d be like an FBI raid and paddy wagons outside. But because of technology, here I am, and it’s all FOSS. But that’s only the beginning, because there are other alternative functions out there. We’re trying to compete. We’re trying to replicate. We’re trying to distinguish. Can we get there first? Now, one of the big technologies - the disruptive technologies - that’s out there today, that’s coming out this year, is obviously the VR stuff that is starting to take off. We’ve got the Oculus Rift, we’ve got the HTC Vive, and, of course, there’ll be many different versions of this. Can we take the hardware and create our own applications for addressing the remote-work problem, right? Can you create a virtual workspace, a virtual meeting space, a virtual social space, that’s arbitrary, where you and your peers can engage? They can look over your shoulder, as if you were sitting in the same office, and see your terminal visibly in front of you in this virtual space, without regard to your physical location? 
Now, I’m sure there are commercial providers out there, proprietary actors out there, who are trying to create this. You know, Facebook would be completely negligent if they weren’t trying to do it. But if we can get there first, and we can do it securely, we can do something that Facebook simply can’t. Their business model does not permit them to provide privacy. We can. We can do the same thing. We can do it better. We can do it faster. And if we do, it will change the world, and it will change the politics of every person in every country, because now you’ll have a safe space - everywhere, anywhere, always. [applause]

[40:37] Question #2: Hi. My name is Sasha Costanza-Chock. Thank you so much. You mentioned a couple times in your comments, sort of nodded to the idea, that we abandon the infrastructure space and we build on top of, you know, on top of existing infrastructure. And I wonder if you could just make a couple comments about the communities that are trying to do DIY and community-controlled infrastructure? So, there are projects like OpenBTS. There are community organizations like Rhizomatica, that’s building free phone and internet networks in rural Mexico. There are projects like the Red Hook Initiative that’s training people how to build community-controlled wireless infrastructure in Brooklyn. There are projects like Detroit’s Digital Stewards that are doing the same thing in Detroit. And all over. There are people sort of bubbling up around the edges to do community infrastructure. And I wonder if you could comment a little bit more on, Yes, these things are longshots, but maybe we shouldn’t abandon this space, the imaginary space of a possible liberatory future where we do own our infrastructure as well?

[41:40] ES: I agree - and actually if you could stay at the mic there for just one second – because that is, that’s a powerful idea. Now, I have less familiarity with that. I’m not going to try to BS anybody. Nobody’s an expert in everything, right. I’m not as familiar with community infrastructure projects. When I think about that, I think about OpenWrt, DD-WRT, and so on. But that level, where we’re actually talking about, you know, knitting together meshnets, or small-scale cell networks, that’s awesome, and we should do more about it. I think we will have the most success, personally, where we’re leap-frogging over technologies, being more mobile, more agile, where we don’t have the same kind of sunk infrastructure costs, because, ultimately, infrastructure is what can be targeted by the adversary – whether it’s a criminal group, whether it’s a government. If we have things invested in boxes and spaces, those are things that a police car can drive up to. That’s not to say they’re not, that’s not the case. But if I could just ask you briefly to comment on that, since you do have more familiarity, and maybe everybody in the room could benefit from it - What do you see as the way forward in the next space of communications fabric? [another questioner comes to the mic] I was actually asking him a follow-up. But that’s fine, let’s just have the next question.

[43:04] Question 3: Hi, this one may be a partial regurgitation of the last one. Daniel Gnoutcheff, sysadmin, Software Freedom Law [Center]. Oh, my goodness, sorry. Moving on. So, one of the responses I’ve seen to revelations of global surveillance is the rise of self-hosting projects, such as FreedomBox, that are trying to provide people with tools to move their data out of the cloud, so to speak, and into personal devices sitting in their own homes. Do you believe that these sorts of tools, such as FreedomBox, provide a reasonable defense against global surveillance? And what would your advice be to FreedomBox and similar projects?

[43:52] ES: Yeah, absolutely. So this is one of the critical cases where community infrastructure, like that open infrastructure, can actually be really valuable, even if it’s not global - in fact, especially if it’s not global. In my experience, so I worked at the NSA, right, actually with the tools of mass surveillance. I had XKEYSCORE at my desk every morning I went in. I had scheduled tasks that were just pumping out sort of all of my different targets, all of their activities around the global internet, including just general activities on subnets that I found interesting, right. If I wanted to see anybody in an entire subnet – they just sent a certain pattern of ping - I could get that, it would be just there waiting for me. It’s easy to do. If you can write RegEx you can use XKEYSCORE. And you don’t even need to do that, but more advanced people do. Everybody else was just typing in “Bad guy at” [audience laughs] but the idea here is that even mass surveillance has limits, right - and that’s the size and granularity of their sensor mesh, right. They have to compromise or co-opt a telecommunications network. They have to hack a router, implant it, and then put a targeting interdiction on that, to go “Are any of my interesting selectors - IP addresses, emails, classes of information, fingerprints of activity, anything like that - passing this? Then I’ll add it to, sort of, my bucket, that will come back as results.” And what this means is, that for ordinary people, for dissidents, for activists, for people who want to maintain their privacy, the fewer hops that you cross, the lower, the more local your network - particularly if you’re outside of, sort of, these big telecommunications spaces - the safer you are, because you can sort of live in those gaps between the sensor networks, and never be seen.

[45:48] DKG: So, unfortunately we’re running low on time here. We’ve got less than five minutes left. So maybe we can take one last question.

ES: Sure.

DKG: Sorry, I know there are people in the queue, but …

[46:00] Question 4 [a young man]: Hello. I wanted to ask. What is someone my age able to do, who is like in middle school or high school, to kind of help out?

[46:10] ES: First thing is care. If you care, you’ll learn. If you learn [applause] It’s not meant to be pat. A lot of people don’t care. And it’s not that they don’t care because they’ve looked at it, they understand it, and they go “It doesn’t matter.” It’s because everybody’s only got so many minutes in the day, right. There’s a contest for attention. There’s a contest for mind-share. And we can only be a specialist or an expert in so many things. If this is already interesting to you, right, you’re already on the right track. And you can do a lot of good. You can develop tools that will change lives, and maybe even save them. The key is to learn. The key is to develop capability, and to actually apply it. It’s not enough to simply care about something – that’s the start. It’s not enough to simply believe in something – that’s the next step. You actually have to stand for something. You have to invest in something. And you have to be willing to risk something to make change happen. [applause]

[47:35] DKG: So … sorry. We’ve got a bunch of other talks lined up today. And we don’t want to end up blocking them. But Ed, thank you for joining us. We really appreciate it.

ES: It’s my pleasure. Thank you so much. Enjoy the conference!

Crockford, Kade - Keep Fear Alive - The bald-eagle boondoggle of the terror wars - The Baffler 20160311


“If you’re submitting budget proposals for a law enforcement agency, for an intelligence agency, you’re not going to submit the proposal that ‘We won the war on terror and everything’s great,’ cuz the first thing that’s gonna happen is your budget’s gonna be cut in half. You know, it’s my opposite of Jesse Jackson’s ‘Keep Hope Alive’—it’s ‘Keep Fear Alive.’ Keep it alive.”
—Thomas Fuentes, former assistant director, FBI Office of International Operations

Can we imagine a free and peaceful country? A civil society that recognizes rights and security as complementary forces, rather than polar opposites? Terrorist attacks frighten us, as they are designed to. But when terrorism strikes the United States, we’re never urged to ponder the most enduring fallout from any such attack: our own government’s prosecution of the Terror Wars.

This failure generates all sorts of accompanying moral confusion. We cast ourselves as good, but our actions show that we are not. We rack up a numbing litany of decidedly uncivil abuses of basic human rights: global kidnapping and torture operations, gulags in which teenagers have grown into adulthood under “indefinite detention,” the overthrow of the Iraqi and Libyan governments, borderless execution-by-drone campaigns, discriminatory domestic police practices, dragnet surveillance, and countless other acts of state impunity.

The way we process the potential cognitive dissonance between our professed ideals and our actual behavior under the banner of freedom’s supposed defense is simply to ignore things as they really are.

They hate us for our freedom, screech the bald-eagle memes, and so we must solemnly fight on. But what, beneath the official rhetoric of permanent fear, explains the collective inability of the national security overlords to imagine a future of peace?

Incentives, for one thing. In a perverse but now familiar pattern, what we have come to call “intelligence failures” produce zero humility, and no promise of future remedies, among those charged with guarding us. Instead, a new array of national security demands circulate, which are always rapidly met. In America, the gray-haired representatives of the permanent security state say their number one responsibility is to protect us, but when they fail to do so, they go on television and growl. To take but one recent example, former defense secretary Donald Rumsfeld appeared before the morally bankrupt pundit panel on MSNBC’s Morning Joe to explain that intractable ethnic, tribal, and religious conflict has riven the Middle East for more than a century—the United States, and the West at large, were mere hapless bystanders in this long-running saga of civilizational decay. This sniveling performance came, mind you, just days after Politico reported that, while choreographing the run-up to the 2003 invasion of Iraq, Rumsfeld had quietly buried a report from the Joint Chiefs of Staff indicating that military intelligence officials had almost no persuasive evidence that Saddam Hussein was maintaining a serious WMD program. Even after being forced to resign in embarrassment over the botched Iraq invasion a decade ago, Rumsfeld continues to cast himself as an earnestly out-manned casualty of Oriental cunning and backbiting while an indulgent clutch of cable talking heads nods just as earnestly along.

And the same refrain echoes throughout the echelons of the national security state. Self-assured and aloof as the affluenza boy, the FBI, CIA, and NSA fuck up, and then immediately apply for a frenzied transfer of ever more money, power, and data in order to do more of what they’re already doing. Nearly fifteen years after the “Global War on Terror” began, the national security state is a trillion-dollar business. And with the latest, greatest, worst-ever terrorist threat always on the horizon, business is sure to keep booming.

The paradox produces a deep-state ouroboros: Successful terrorist attacks against the West do not provoke accountability reviews or congressional investigations designed to truly understand or correct the errors of the secret state. On the contrary, arrogant spies and fearful politicians exploit the attacks to cement and expand their authority. This permits them, in turn, to continue encroaching on the liberties they profess to defend. We hear solemn pledges to collect yet more information, to develop “back doors” to decrypt private communications, to keep better track of Muslims on visas, send more weapons to unnamed “rebel groups,” drop more cluster bombs. Habeas corpus, due process, equal protection, freedom of speech, and human rights be damned. And nearly all the leaders in both major political parties play along, like obliging extras on a Morning Joe panel. The only real disagreement between Republican and Democratic politicians on the national stage is how quickly we should dispose of our civil liberties. Do we torch the Bill of Rights à la Donald Trump and Dick Cheney, or apply a scalpel, Obama-style?

Safety Last

Both Democrats and Republicans justify Terror War abuses by telling the public, either directly or indirectly, that our national security hangs in the balance. But national security is not the same as public safety. And more: the things the government has done in the name of preserving national security—from invading Iraq to putting every man named Mohammed on a special list—actually undermine our public safety.

That’s because, as David Talbot demonstrates in The Devil’s Chessboard, his revelatory Allen Dulles biography and devastating portrait of a CIA run amok, national security centers on “national interests,” which translates, in the brand of Cold War realpolitik that Dulles pioneered, into the preferred policy agendas of powerful corporations.

Public safety, on the other hand, is concerned with whether you live or die, and how. Any serious effort at public safety requires a harm-reduction approach acknowledging straight out that no government program can foreclose the possibility of terroristic violence. The national security apparatus, by contrast, grows powerful in direct proportion to the perceived strength of the terrorist (or in yesterday’s language, the Communist) threat—and requires that you fear this threat so hysterically that you release your grip on reason. Reason tells you government cannot protect us from every bad thing that happens. But the endlessly repeated national security meme pretends otherwise, though the world consistently proves it wrong.

When it comes to state action, the most important distinction between what’s good for public safety (i.e., your health) and what’s good for national security (i.e., the health of the empire, markets, and prominent corporations) resides in the concept of the criminal predicate. This means, simply, that an agent of the government must have some reasonable cause to believe you are involved with a crime before launching an investigation into your life. When the criminal predicate forms the basis for state action, police and spies are required to focus on people they have reason to believe are up to no good. Without the criminal predicate, police and spies are free to monitor whomever they want. Police action that bypasses the criminal predicate focuses instead on people and communities that threaten power—regardless of whether those challenges to power are fully legal and legitimate.

We can see the results of this neglect everywhere the national security state has set up shop. Across the United States right now, government actors and private contractors paid with public funds are monitoring the activities of dissidents organizing to end police brutality and the war on drugs, Israeli apartheid and colonization in Palestine, U.S. wars in the Middle East, and Big Oil’s assault on our physical environment. In the name of fighting terrorism, Congress created the Department of Homeland Security, which gave state and local law enforcement billions of dollars to integrate police departments into the national intelligence architecture. As a result, we now have nearly a million cops acting as surrogates for the FBI. But as countless studies have shown, the “fusion centers” and intelligence operations that have metastasized under post-9/11 authorities do nothing to avert the terror threat. Instead, they’ve targeted dissidents for surveillance, obsessive documentation, and even covert infiltration. When government actors charged with protecting us use their substantial power and resources to track and disrupt Black Lives Matter and Earth First! activists, they are not securing our liberties; they’re putting them in mortal peril.

Things weren’t always like this. Once upon a time, America’s power structure was stripped naked. When the nation saw the grotesque security cancer that had besieged the body politic in the decades after World War II (just as Harry Truman had warned it would) the country’s elected leadership reasserted control, placing handcuffs on the wrists of the security agencies. This democratic counterattack on the national security state not only erected a set of explicit protocols to shield Americans from unconstitutional domestic political policing, but also advanced public safety.

Mission Creeps

As late as the 1970s, the FBI was still universally thought to be a reputable organization in mainstream America. The dominant narrative held that J. Edgar Hoover’s capable agents, who had to meet his strict height, weight, and dress code requirements, were clean-cut, straight-laced men who followed the rules. Of course, anyone involved with the social movements of that age—anti-war, Communist, Black Power, American Indian, Puerto Rican Independence—knew a very different FBI, but they had no evidence to prove what they could see and feel all around them. And since this was the madcap 1970s, the disparity between the FBI’s glossy reputation as honest crusaders and its actual dirty fixation on criminalizing the exercise of domestic liberties drove a Pennsylvania college physics professor and anti-war activist named William Davidon to take an extraordinary action. On the night of the Muhammad Ali vs. Joe Frazier fight of March 8, 1971, Davidon and some friends broke into an FBI office in Media, Pennsylvania. They stole every paper file they could get their hands on. In communiqués to the press, to which they attached some of the most explosive of the Hoover files, they called themselves the Citizens’ Commission to Investigate the FBI.

When Davidon and his merry band of robbers broke into the FBI office, they blew the lid off of decades of secret—and sometimes deadly—police activity that targeted Black and Brown liberation organizers in the name of fighting the Soviet red menace. According to Noam Chomsky, the Citizens’ Commission concluded that the vast majority of the files at the FBI’s Media, Pennsylvania, office concerned political spying rather than criminal matters. Of the investigative files, only 16 percent dealt with crimes. The rest described FBI surveillance of political organizations and activists—overwhelmingly of the left-leaning variety—and Vietnam War draft resisters. As Chomsky wrote, “in the case of a secret terrorist organization such as the FBI,” it was impossible to know whether these Pennsylvania figures were representative of the FBI’s national mandate. But for Bill Davidon and millions of Americans—including many in Congress who were none too pleased with the disclosures—these files shattered Hoover’s image as a just-the-facts G-man. They proved that the FBI was not a decent organization dedicated to upholding the rule of law and protecting the United States from foreign communist threats, but rather a domestic political police primarily concerned with preserving the racist, sexist, imperialist status quo.

In a cascade of subsequent transparency efforts, journalists, activists, and members of Congress all probed the darker areas of the national security state, uncovering assassination plots against foreign leaders, dragnet surveillance programs, and political espionage targeting American dissidents under the secret counterintelligence program known as COINTELPRO. Not since the birth of the U.S. deep state, with the 1947 passage of the National Security Act, had the activities of the CIA, FBI, or NSA been so publicly or thoroughly examined and contested.

Subsequent reforms included the implementation of new attorney general’s guidelines for domestic investigations, which, for the first time in U.S. history, required FBI agents to suspect someone of a crime before investigating them. Under the 1976 Levi guidelines, named for their author, Ford attorney general Edward Levi, the FBI could open a full domestic security investigation against someone only if its agents had “specific and articulable facts giving reason to believe that an individual or group is or may be engaged in activities which involve the use of force or violence.” The criminal predicate was now engraved in the foundations of the American security state—and the Levi rules prompted a democratic revolution in law enforcement and intelligence circles. It would take decades and three thousand dead Americans for the spies to win back their old Hoover-era sense of indomitable mission—and their investigative MO of boundless impunity.

False Flags

In the years following the 9/11 attacks, the Bush administration began Hoovering up our private records in powerful, secret dragnets. When we finally learned about the warrantless wiretapping program in 2005, it was a national scandal. But just as important, and much less discussed, was the abolition of Levi’s assertion of the criminal predicate. So-called domestic terrorism investigations would be treated principally as intelligence or espionage cases—not criminal ones. This shift has had profound, if almost universally ignored, implications.

Michael German, an FBI agent for sixteen years working undercover in white supremacist organizations to identify and arrest terrorists, saw firsthand what the undoing of the 1970s intelligence reforms meant for the FBI. And German argues, persuasively, that the eradication of the criminal predicate didn’t just put Americans at risk of COINTELPRO 2.0. It also threatened public safety. The First and Fourth Amendments, which protect, respectively, our rights to speech and association and our right to privacy, don’t just create the conditions for political freedom; they also help law enforcement focus, laser-like, on people who have the intent, the means, and the plans to harm the rest of us.

Think of it like this, German told me: You’re an FBI agent tasked with infiltrating a radical organization that promotes violence as a means of achieving its political goals—the Ku Klux Klan, for example. KKK members say horrible and disgusting things. But saying disgusting things isn’t against the law; nor, as numerous studies have shown, is it a reliable predictor of whether the speaker will commit an act of political violence. When surrounded by white supremacists constantly spouting hate speech, a law enforcement officer has to block it out. If he investigates people based on their rhetoric, his investigations will lead nowhere. After all, almost no white supremacist seriously intending to carry out a terrorist attack is all that likely to broadcast that intent in public. (Besides, have you noticed how many Americans routinely say disgusting things?)

Today, more than a decade after it shrugged off the Levi guidelines, the FBI conducts mass surveillance directed at the domestic population. But dragnet surveillance, however much it protects “national security,” doesn’t increase public safety, as two blue-ribbon presidential studies have in recent years concluded. Indeed, the Boston bombings, the Paris attacks, and the San Bernardino and Planned Parenthood shootings have all made the same basic point in the cold language of death. The national security state has an eye on everyone, including the people FBI director James Comey refers to as “the bad guys.” But despite its seeming omniscience, the Bureau does not stop those people from killing the rest of us in places where we are vulnerable.

The curious case of Boston Marathon bomber Tamerlan Tsarnaev demonstrates the strange consequences of sidelining criminal investigations for national security needs. In 2011, about eighteen months before the bombings, Tsarnaev’s best friend and two other men were murdered in a grisly suburban scene in Waltham, Massachusetts—their throats slashed, marijuana sprinkled on their mutilated corpses. These murders were never solved. But days after the marathon bombings, law enforcement leaked that they had forensic and cellphone location evidence tying Tamerlan Tsarnaev to those unsolved crimes. Not one of the costly post-9/11 surveillance programs based on suspicionless, warrantless monitoring stopped Tsarnaev from blowing up the marathon. But if the police leaks were correct in assigning him responsibility for the 2011 murders, plain old detective work likely would have.

If security agencies truly want to stop terrorism, they should eliminate all domestic monitoring that targets people who are not suspected of crimes. This would allow agents to redirect space and resources now devoted to targeting Muslims and dissidents into serious investigations of people actually known to be dangerous. It’s the only reasonable answer to the befuddling question: Why is it that so many of these terrorists succeed in killing people even though their names are on government lists of dangerous men?

After the terrorist attacks in November, the French government obtained greater emergency powers in the name of protecting a fearful public. Besides using those powers to round up hundreds of Muslims without evidence or judicial oversight, French authorities also put at least twenty-four climate activists on house arrest ahead of the Paris Climate Change Conference—an approach to squashing dissent that didn’t exactly scream liberté, and had nothing to do with political violence. As with the Boston Marathon and countless other attacks on Western targets, the men who attacked the Bataclan were known to intelligence agencies. In May 2015, months before the attacks in Paris, French authorities gained sweeping new surveillance powers authorizing them to monitor the private communications of suspected terrorists without judicial approval. The expanded surveillance didn’t protect the people of Paris. In France, as in the United States, the devolution of democratic law enforcement practice has opened up space that’s filled with political spying and methods of dragnet monitoring that enable social and political control. This is not only a boondoggle for unaccountable administrators of mass surveillance; it also obstructs the kind of painstaking detective work that might have prevented the attacks on the Bataclan and the marathon.

Our imperial government won’t ever admit this, but we must recognize that the best method for stopping terrorism before it strikes is to stop engaging in it on a grand scale. Terrorist attacks are the price we pay for maintaining a global empire—for killing a million Iraqis in a war based on lies, for which we have never apologized or made reparations, and for continuing to flood the Middle East with weapons. No biometrics program, no database, no algorithm, no airport security system will protect us from ourselves.

The Feds Have Let the Cyber World Burn. Let’s Put the Fire Out - Wired 20160301

IMAGINE, ALL ACROSS America, our homes and businesses regularly going up in flames. Firefighters would be deployed en masse to stop the fires from spreading. Law enforcement would hunt the arsonists and bring them to justice. Engineers would learn to design structures far more resistant to flames.

Our government would respond to this obvious national emergency with force, competence, and leadership. And it certainly would not persecute the firefighters.

Yet the United States government is doing just that as it struggles with a choice: Necessary security for all versus the desired insecurity of some. No less integral to civilization at this point than the roofs over our heads, the information technology that connects us to one another is increasingly connecting hackers to our daily lives. Every month, more devices go online: Cars, thermostats, and baby monitors, all troublingly exposed.

A hospital is forced to pay a ransom to keep treating patients. A small business goes under, its entire payroll account emptied in a weekend. Millions of consumers lose access to their credit cards over Christmas. Multi-billion dollar international corporations watch their digital infrastructure burn to the ground in the blink of an eye, perhaps as extortion, perhaps for fun. A quarter million citizens of Ukraine have their power disrupted by hackers. And all those who ever sought the trust and confidence of our government must now fear identity theft for the rest of their lives.

A Fireproof Future

But there is hope. Our technology companies, literally the most valuable in the world, have made dramatic strides toward building devices that cannot be hacked. If your iPhone is stolen, it is unlikely that the thief will be apprehended. But he will access no emails, view no photos, take no money, steal no secrets—not from you, not from your employer. There will be no breach to report, no loss to incur, no job to lose. You were protected from risk, and nothing was asked of you but a passcode or thumbprint.

Strong cybersecurity delivers the digital world that does not burn.

Instead of helping put out fires, though, the FBI is “concerned.” A world where not everything can be hacked is a world where it can’t necessarily hack everything. And so, in a case where the FBI has enjoyed almost complete cooperation with Apple, it is demanding more: The engineering authority to require a “backdoor,” making the extraction of data from any device trivial, and setting the dangerous precedent that the government can turn any or all of the technology in our lives against us.

The FBI’s argument against Apple seems almost reasonable at first glance. There’s an extraordinary crime, there’s a secret we as a society want. Why not hack this one device, just this once? Because it’s not just this once. There are other cases in the courts where the government is asking for access to iPhones, but the real point is precedent. The problem is that for every one device we want to hack, there are tens of thousands we need to protect. Do we leave every device vulnerable just so the next one can be hacked?

As a lifelong hacker committed to protecting the Internet—I found a core vulnerability in the Internet’s design, which led to what became its largest synchronized fix ever—I can tell you that we are suffering the largest crime wave in human history, and it is built on a foundation of failed cybersecurity.

The FBI’s actions against Apple seek to maintain and enshrine this cracked foundation. Apple CEO Tim Cook is fighting back, and our nation must support him.

The moral, economic, strategic, and technical leadership of the United States is at stake here. If Americans are not allowed to repair cybersecurity, somebody else will, and the damage to our interests will be incalculable and self-inflicted. Whoever masters making a secure digital world not just possible, but practical, will own the next Silicon Valley. There are at least 865 products from 55 countries with encryption, the vast majority from outside the US. Our companies have the head start in this coming space race. But there is a small team at Apple that just became an enormous liability for their company. They did world-class work to protect you, and now untold billions are at stake. If only they hadn’t done quite so good a job, or left a couple convenient flaws. If only their managers hadn’t hired people quite so passionate. As it happens, Cook is standing up for his team.

But Cook, and the enormous resources at his disposal, cannot be everywhere. By trying to set the precedent that it’s OK for the government to intentionally undermine Internet security, the FBI has placed all of America’s cybersecurity engineers on notice: Don’t do too good a job now. Let it burn, or we’ll burn you.

Our Nation Is Capable of So Much More

We must repair the Internet. Too much is broken and taking years or even decades to fix. Our failures are not for lack of trying, but they might be for lack of staffing. I am a proud member of what might be called the Internet’s community of “volunteer firefighters,” but there is something to be said for professionals in numbers with infrastructure and a mandate. Our society doesn’t have just “The Guy Who Works On Cancer;” we build institutes. So let’s find and fix these flaws, faster and better. Let’s collaborate, systematically, comprehensively. Engineers should have the data, based on real world experimentation, about how to build the future securely, and practically.

Millions are learning to code; how do we ensure that the next generation of innovation is not even more fragile than this one? There are solutions that work, but are impractical. There are solutions that are practical, but do not work. Chief information security officers are flooded with noise regarding magic solutions that will fix all their problems. It’s not all snake oil.

A “CyberUL”, similar to the system that tells us which hoverboards might set our homes on fire (apparently all of them), would be helpful. It could allow us to understand what security technologies to invest in, and what systems need protection. Even when it comes to the bugs we’re already finding, nobody quite knows the global severity of a particular flaw. Who’s at risk? What should we prioritize?

The FBI publishes crime statistics for a reason.

For these efforts to be credible, there will need to be a bureaucratic firewall in place. Those defending and repairing the Internet must be separated from those with offensive cyber missions, no matter how legitimate. “Dual missions” (playing defense and offense, fixing infrastructure one day and exploiting it the next) are a lie, and everybody knows it.

It has been said that our nation needs a Manhattan Project for cybersecurity. What we need is a project to protect Manhattan, and San Francisco, and Seattle, and Chicago. Each of these cities suffered enormous fires once upon a time (Manhattan three times!). Our nation came together and fixed that. These very cities are guaranteed to be under cyberattack tomorrow. We can protect them, but only if we back Tim Cook in his profound belief that the Internet is not secure enough.

How to Tap Your Network and See Everything That Happens On It - Life Hacker 20141022


Your home network is your fortress. Inside it lies tons of valuable information—unencrypted files, personal, private data, and perhaps most importantly, computers that can be hijacked and used for any purpose. Let's talk about how you can, with the power of evil, sniff around your home network to make sure you don't have any uninvited guests.

In this post, we'll show you how to map out your network, take a peek under the covers to see who's talking to what, and uncover devices or processes that may be sucking down bandwidth. In short: You'll be able to recognize the signs that something on your network is compromised. We'll assume you're familiar with some networking basics, like how to find your router's list of devices and what a MAC address is. If not, head over to our Know Your Network night school to brush up first.

Before we go any further, though, we should issue a warning: Use these powers for good, and only run these tools and commands on hardware or networks you own or manage. Your friendly neighborhood IT department wouldn't like you port scanning or sniffing packets on the corporate network, and neither would all the people at your local coffee shop. As with every evil week post, the point is to teach you how it's done so you can do it yourself and protect yourself—not exploit others.

Step One: Make a Network Map

Before you even log onto your computer, write down what you think you know. Start with a sheet of paper and jot down all of your connected devices. That includes things like smart TVs, set-top boxes, laptops and computers, tablets and phones, or any other device that might be connected to your network. If it helps, draw a map of your home, complete with rooms. Then write down every device and where it lives. You may be surprised by exactly how many devices you have connected to the internet at the same time.

Network admins and engineers will recognize this step—it's the first step in exploring any network you're not familiar with. Do an inventory of the devices on it, identify them, and then see if the reality matches up with what you expect. If (or when) it doesn't, you'll be able to quickly eliminate what you do know from what you don't know. You may be tempted to just log in to your router and look at its status page to see what's connected, but don’t do that yet. Unless you can identify everything on your network by its IP and MAC address, you'll just get a big list of stuff—one that includes any intruders or freeloaders. Take a physical inventory first, then move on to the digital one.
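That elimination step (separating what you know from what you don't) boils down to a set comparison: list the hardware addresses from your paper map, then diff them against what your scans report later. A toy sketch in Python, where the MAC addresses are placeholders you'd substitute with your own:

```python
# Compare your hand-written inventory against what a scan reports.
# All MAC addresses below are made-up placeholders.
expected = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "aa:bb:cc:00:00:03"}
found    = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "de:ad:be:ef:00:01"}

print("On your map but not seen (maybe just powered off):", expected - found)
print("On the network but not on your map (investigate!):", found - expected)
```

Anything in the second set is exactly the "big list of stuff" problem solved: an address you can't account for.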

Step Two: Probe Your Network to See Who's On It

Once you have a physical map of your network and a list of all of your trusted devices, it's time to go digging. Log in to your router and check its list of connected devices. That'll give you a basic list of names, IP addresses, and MAC addresses. Remember, though, your router's device list may or may not show you everything. It should, but some routers only show devices that got their IP address from the router. Either way, keep that list to the side—it's good, but we want more information.

Next, we're going to turn to our old friend nmap. For those unfamiliar, nmap is a cross-platform, open source network scanning tool that can find the devices on your network, along with a ton of detail on each one. You can see the operating system in use, IP and MAC addresses, and even open ports and services. Download nmap here, check out these install guides to set it up, and follow these instructions to discover hosts on your home network.

In my case, I installed and ran it from the command line (if you want a graphical interface, Zenmap usually comes with the installer), then told nmap to scan the IP range I'm using for my home network. It found most of the active devices on my home network, excluding a few I have some enhanced security on (although those were discoverable too with some of nmap's commands, which you can find in the link above).
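If you're curious what part of a scan like that does under the hood, here's a minimal sketch of a TCP "connect scan" in Python. It checks only a handful of ports on a single host; nmap does vastly more (host discovery, OS detection, service fingerprinting), so treat this as an illustration, not a replacement—and, as always, only point it at machines you own:

```python
import socket

def probe_ports(host, ports, timeout=0.5):
    """Toy TCP connect scan: return the subset of `ports` that accept
    a connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Probe a few common service ports on your own machine.
print(probe_ports("127.0.0.1", [22, 80, 443, 8080]))
```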

Compare nmap's list with your router's list. You should see the same things (unless something you wrote down earlier is powered off now). If you see something on your router that nmap didn't turn up, try using nmap against that IP address directly. Then, based on what you know, look at the information nmap found about the device. If it's claiming to be an Apple TV, it probably shouldn't have services like http running, for example. If it looks strange, probe it specifically for more information, like I did in the screenshot above. I noticed one of my machines was rejecting ping requests, which made nmap skip over it. I told nmap to probe it anyway, and sure enough, it responded.

Nmap is an extremely powerful tool, but it's not the easiest to use. If you're a little gun shy, you have some other options. Angry IP Scanner is another cross-platform utility that has a good-looking and easy-to-use interface that will give you a lot of the same information. Previously mentioned Who Is On My Wi-Fi is a Windows utility that offers similar features and can be set to scan in the background in case someone comes online when you're not watching. Wireless Network Watcher, again for Windows, is another utility we've mentioned with a nice interface that, despite its name, isn't limited to wireless networks.

Step Three: Sniff Around and See Who Everyone Is Talking To

By now, you should have a list of devices you know and trust, and a list of devices that you've found connected to your network. With luck, you're finished here, and everything either matches up or is self-explanatory (like a TV that's currently turned off, for example). However, if you see any actors you don't recognize, services running that don't correspond to the device (Why is my Roku running postgresql?), or something else feels off, it's time to do a little sniffing. Packet sniffing, that is.

When two computers communicate, either on your network or across the internet, they send bits of information called "packets" to one another. Put together, those packets create complex data streams that make up the videos we watch or the documents we download. Packet sniffing is the process of capturing and examining those bits of information to see where they go and what they contain. To do this, we'll need Wireshark. It's a cross-platform network monitoring tool that we used to do a little packet sniffing in our guide to sniffing out passwords and cookies. In this case, we'll be using it in a similar manner, but our goal isn't to capture anything specific, just to monitor what types of traffic are going around the network. To do this, you'll need to run Wireshark over Wi-Fi, in "promiscuous mode." That means it's not just looking for packets heading to or from your computer, it's out to collect any packets it can see on your network.

Once installed, open Wireshark and select your Wi-Fi adapter. Click "options" next to it, and as you see in the video above (courtesy of the folks over at Hak5), you can select "promiscuous mode" for that adapter. Once you have, you can start capturing packets. When you start the capture, you're going to get a lot of information. Luckily, Wireshark anticipates this, and makes it easy to filter.

Since we're just looking to see what the suspicious actors on your network are doing, make sure the system in question is online. Go ahead and capture a few minutes' worth of traffic for starters. Then you can filter that traffic based on the IP address of that device using Wireshark's built-in filters. Doing this gives you a quick view of who that IP address is talking to, and what information they're sending back and forth. You can right-click on any of those packets to inspect it, follow the conversation between both ends, and filter the whole capture by IP or conversation. For more, How-To Geek has a detailed guide on Wireshark filtering. You may not know what you're looking at, but that's where a little sleuthing comes in.
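The filtering step above can be modeled in miniature. This sketch mimics what a Wireshark display filter such as `ip.addr == 192.168.1.23` does, keeping only packets where the suspect device is one endpoint (all addresses here are made-up examples, not from the article):

```python
# Parsed packet metadata, as Wireshark would show it (made-up sample data).
packets = [
    {"src": "192.168.1.23", "dst": "93.184.216.34", "port": 443},
    {"src": "192.168.1.50", "dst": "192.168.1.1",   "port": 53},
    {"src": "203.0.113.9",  "dst": "192.168.1.23",  "port": 6667},
]

# Keep only packets where the suspect device is the source or destination.
suspect = "192.168.1.23"
conversation = [p for p in packets if suspect in (p["src"], p["dst"])]
for p in conversation:
    print(p["src"], "->", p["dst"], "port", p["port"])
```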

If you see that suspicious computer talking to a strange IP address, use the nslookup command (in the command prompt in Windows, or in a terminal in OS X or Linux) to get its hostname. That can tell you a lot about the location or type of network your computer is connecting to. Wireshark also tells you the ports being used, so Google the port number and see what applications use it. If, for example, you have a computer connecting to a strange hostname over ports often used for IRC or file transfer, you may have an intruder. Of course, if you find the device is connecting to reputable services over commonly used ports for things like email or HTTP/HTTPS, you may have just stumbled on a tablet your roommate never told you he owned, or someone next door stealing your Wi-Fi. Either way, you'll have the data required to figure it out on your own.
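As a rough illustration, the same reverse lookup that `nslookup` performs is available from Python's standard library. This sketch simply wraps it; whether it returns anything depends on your resolver and whether the address has a reverse (PTR) record:

```python
import socket

def reverse_lookup(ip):
    """Return the hostname behind an IP address, roughly what
    `nslookup <ip>` reports, or None when no reverse record exists."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except OSError:
        return None

print(reverse_lookup("127.0.0.1"))
```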

Step Four: Play the Long Game and Log Your Captures

Of course, not every bad actor on your network will be online and leeching away while you're looking for them. Up to this point, we've taught you how to check for connected devices, scan them to identify who they really are, and then sniff a little of their traffic to make sure it's all above board. However, what do you do if the suspicious computer is doing its dirty work at night when you're sleeping, or someone's leeching your Wi-Fi when you're at work all day and not around to check?

There are a couple of ways to address this. For one, the Who Is On My Wi-Fi application we mentioned earlier can run in the background on your Windows computer and keep an eye on who's connecting and when. It can ping you when you're not looking at it, and let you know when someone's connected to your network, which is a nice touch. You can leave it running on a computer at home, and then when you wake up or come home from work, see what happened while you weren't looking.

Your next option is to check your router's logging capabilities. Buried deep in your router's troubleshooting or security options is usually a tab dedicated to logging. How much you can log and what kind of information varies by router, but you can see in the screenshot above I can log incoming IP, destination port number, outgoing IP or URL filtered by the device on my network, internal IP address and their MAC address, and which devices on my network have checked in with the router via DHCP for their IP address (and, by proxy, which have not.) It's pretty robust, and the longer you leave the logs running, the more information you can capture.

Custom firmwares like DD-WRT and Tomato (both of which we've shown you how to install) allow you to monitor and log bandwidth and connected devices for as long as you want, and can even dump that information to a text file that you can sift through later. Depending on how you have your router set up, it can even email that file to you regularly or drop it on an external hard drive or NAS. Either way, using your router's oft-ignored logging features is a great way to see if, for example, after midnight and everyone's gone to bed, your gaming PC suddenly starts crunching and transmitting a lot of outbound data, or you have a regular leech who likes to hop on your Wi-Fi and start downloading torrents at odd hours.
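Once the logs are in a text file, even a few lines of scripting can surface the odd-hours pattern described above. A sketch, assuming a made-up log format (real DD-WRT/Tomato dumps vary by router, so adjust the parsing to match yours):

```python
from datetime import datetime

# Sample router log lines in an invented "timestamp src -> dst:port size" format.
log_lines = [
    "2014-10-20 02:13:44 192.168.1.40 -> 198.51.100.7:51413 82 MB",
    "2014-10-20 09:02:10 192.168.1.12 -> 172.217.4.78:443 3 MB",
]

# Flag anything logged between midnight and 5 AM.
odd_hours = []
for line in log_lines:
    stamp = datetime.strptime(" ".join(line.split()[:2]), "%Y-%m-%d %H:%M:%S")
    if stamp.hour < 5:
        odd_hours.append(line)

for line in odd_hours:
    print("Odd-hours traffic:", line)
```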

Your final option, and kind of the nuclear option at that, is to just let Wireshark capture for hours—or days. It's not unheard of, and many network administrators do it when they're really analyzing strange network behavior. It's a great way to pin down bad actors or chatty devices. However, it does require leaving a computer on for ages, constantly sniffing packets on your network, capturing everything that goes across it, and those logs can take up a good bit of space. You can trim things down by filtering captures by IP or type of traffic, but if you're not sure what you're looking for, you'll have a lot of data to sift through when you're looking at a capture over even a few hours. Still, it will definitely tell you everything you need to know.

In all of these cases, once you have enough data logged, you'll be able to find out who's using your network, when, and if their device matches up with the network map you made earlier.

Step Five: Lock Your Network Down

If you've followed along to here, you've identified the devices that should be able to connect to your home network, the ones that actually connect, identified the differences, and hopefully figured out if there are any bad actors, unexpected devices, or leeches hanging around. Now all you have to do is deal with them, and surprisingly, that's the easy part.

Wi-Fi leeches will get the boot as soon as you lock down your router. Before you do anything else, change your router's password, and turn off WPS if it's turned on. If someone's managed to log directly into your router, you don't want to change other things only to have them log in and regain access. Make sure you use a good, strong password that's difficult to brute-force. Then, check for firmware updates. If your leech has made use of an exploit or vulnerability in your router's firmware, this will keep them out—assuming that exploit's been patched, of course. Finally, make sure your wireless security mode is set to WPA2 (because WPA and WEP are very easy to crack) and change your Wi-Fi password to another good, long password that can't be brute-forced. Then, the only devices that should be able to reconnect are ones you give the new password to.

That should take care of anyone leeching your Wi-Fi and doing all their downloading on your network instead of theirs. It'll help with wired security, too. If you can, you should also take a few additional wireless security steps, like turning off remote administration, disabling UPnP, and of course, seeing if your router supports Tomato or DD-WRT.

For bad actors on your wired computers, you have some hunting to do. If it's actually a physical device, it should have a direct connection to your router. Start tracing cables and talking to your roommates or family to see what's up. Worst case, you can always log back onto your router and block that suspicious IP address entirely. The owner of that set-top box or quietly-plugged in computer will come running pretty quickly when it stops working.

The bigger worry here though, is compromised computers. A desktop that's been hijacked and joined to a botnet for overnight Bitcoin mining, for example, or a machine infected with malware that calls home and sends your personal information to who-knows-where, can be bad. Once you narrow your search to specific computers, it's time to root out where the problem lies on each machine. If you're really worried, take the security engineer's approach to the problem: Once your machines are owned, they're no longer trustworthy. Blow them away, reinstall, and restore from your backups. (You do have backups of your data, don't you?) Just make sure you keep an eye on the PC afterwards—you don't want to restore from an infected backup and start the process all over again.

If you're willing to roll up your sleeves, you can grab yourself a solid antivirus utility and an antimalware on-demand scanner (yes, you'll need both), and try to clean the computer in question. If you saw traffic for a specific type of application, check whether it's malware or just a badly behaved app someone installed. Keep scanning until everything turns up clean, and keep checking the traffic from that computer to make sure everything's okay.

We've only really scratched the surface here when it comes to network monitoring and security. There are tons of specific tools and methods that experts use to secure their networks, but these steps will work for you if you're the network admin for your home and family.

Rooting out suspicious devices or leeches on your network can be a long process, one that requires sleuthing and vigilance. Still, we're not trying to drum up paranoia. Odds are you won't find anything out of the ordinary, and those slow downloads or crappy Wi-Fi speeds are something else entirely. Even so, it's good to know how to probe a network and what to do if you find something unfamiliar. Just remember to use your powers for good.

Snowden's Chronicler Reveals Her Own Life Under Surveillance - Wired 20160204

Laura Poitras has a talent for disappearing. In her early documentaries like My Country, My Country and The Oath, her camera seems to float invisibly in rooms where subjects carry on intimate conversations as if they’re not being observed. Even in Citizenfour, the Oscar-winning film that tracks her personal journey from first contact with Edward Snowden to releasing his top secret NSA leaks to the world, she rarely offers a word of narration. She appears in that film exactly once, caught as if by accident in the mirror of Snowden’s Hong Kong hotel room.

Now, with the opening of her multi-media solo exhibit, Astro Noise, at New York’s Whitney Museum of American Art this week, Snowden’s chronicler has finally turned her lens onto herself. And she’s given us a glimpse into one of the darkest stretches of her life, when she wasn’t yet the revelator of modern American surveillance but instead its target.

The exhibit is vast and unsettling, ranging from films to documents that can be viewed only through wooden slits to a video expanse of Yemeni sky which visitors are invited to lie beneath. But the most personal parts of the show are documents that lay bare how excruciating life was for Poitras as a target of government surveillance—and how her subsequent paranoia made her the ideal collaborator in Snowden’s mission to expose America’s surveillance state. First, she’s installed a wall of papers that she received in response to an ongoing Freedom of Information lawsuit the Electronic Frontier Foundation filed on her behalf against the FBI. The documents definitively show why Poitras was tracked and repeatedly searched at the US border for years, and even that she was the subject of a grand jury investigation. And second, a book she’s publishing to accompany the exhibit includes her journal from the height of that surveillance, recording her first-person experience of becoming a spying subject, along with her inner monologue as she first corresponded with the secret NSA leaker she then knew only as “Citizenfour.”

Poitras says she initially intended to use only a few quotes from her journal in that book. But as she was transcribing it, she “realized that it was a primary source document about navigating a certain reality,” she says. The finished book, which includes a biographical piece by Guantanamo detainee Lakhdar Boumediene, a photo collection from Ai Weiwei, and a short essay by Snowden on using radio waves from stars to generate random data for encryption, is subtitled “A Survival Guide for Living Under Total Surveillance.” It will be published widely on February 23.

“I’ve asked people for a long time to reveal a lot in my films,” Poitras says. But telling her own story, even in limited glimpses, “provides a concrete example of how the process works we don’t usually see.”

That process, for Poitras, is the experience of being unwittingly ingested into the American surveillance system.

On the Government’s Radar
Poitras has long suspected that her targeting began after she filmed an Iraqi family in Baghdad for the documentary My Country, My Country. Now she’s sure, because the documents released by her Freedom of Information Act request prove it. During a 2004 ambush by Iraqi insurgents in which an American soldier died and several others were injured, she came out onto the roof of the family’s home to film them as they watched events unfolding on the street below. She shot for a total of eight minutes and 16 seconds. The resulting footage, which she shows in the Whitney exhibit, reveals nothing related to either American or insurgent military positions.

“Those eight minutes changed my life, though I didn’t know it at the time,” she says in an audio narration that plays around the documents in her exhibition. “After returning to the United States I was placed on a government watchlist and detained and searched every time I crossed the US border. It took me ten years to find out why.”

A Whitney Museum visitor looking at a selection of Poitras’ FOIAed documents framed in a collection of light boxes. ANDY GREENBERG
The heavily redacted documents show that the US Army Criminal Investigation Command requested in 2006 that the FBI investigate Poitras as a possible “U.S. media representative … involved with anti-coalition forces.” According to the FBI file, a member of the Oregon National Guard serving in Iraq identified Poitras and “a local [Iraqi] leader”—the father of the family that would become the subject of her film. The soldier, whose name was redacted, questioned Poitras at the time, and reported that she “became significantly nervous” and denied filming from the roof. He later told the Army investigators that he “strongly believed”—but without apparent evidence—“POITRAS had prior knowledge of the ambush and had the means to report it to U.S. Forces; however, she purposely did not report it so she could film the attack for her documentary.”

One page shown in the Whitney exhibit reveals that the New York field office of the FBI was tracking Poitras’ home addresses, and Poitras believes the reference to a “detective” working with the FBI indicates the New York Police Department may have also been involved. By 2007, the documents reveal that there was a grand jury investigation proceeding on whether to indict her for unnamed crimes—multiple subpoenas sought information about her from redacted sources. (Poitras says that the twelve pages she published in the Whitney exhibition are only a selection of 800 documents she’s received in her FOIA lawsuit, which is ongoing.)

Being Constantly Watched

Private as ever, Poitras declined to detail to WIRED exactly how she experienced that federal investigation in the years that followed. But flash forward to late 2012, and the surveillance targeting Poitras had transformed her into a nervous wreck. In the book, she shares a diary she kept during her time living in Berlin, in which she describes feeling constantly watched, entirely robbed of privacy. “I haven’t written in over a year for fear these words are not private,” are the journal’s first words. “That nothing in my life can be kept private.”

She sleeps badly, plagued with nightmares about the American government. She reads Cory Doctorow’s Homeland and re-reads 1984, finding too many parallels with her own life. She notes her computer glitching and “going pink” during her interviews with NSA whistleblower William Binney, and that it tells her its hard drive is full despite seeming to have 16 gigabytes free. Eventually she moves to a new apartment that she attempts to keep “off the radar” by avoiding all cell phones and only accessing the Internet over the anonymity software Tor.

When Snowden contacts her in January of 2013, Poitras has lived with the specter of spying long enough that she initially wonders if he might be part of a plan to entrap her or her contacts like Julian Assange or Jacob Appelbaum, an activist and Tor developer. “Is C4 a trap?” she asks herself, using an abbreviation of Snowden’s codename. “Will he put me in prison?”

Even once she decides he’s a legitimate source, the pressure threatens to overwhelm her. The stress becomes visceral: She writes that she feels like she’s “underwater” and that she can hear the blood rushing through her body. “I am battling with my nervous system,” she writes. “It doesn’t let me rest or sleep. Eye twitches, clenched throat, and now literally waiting to be raided.”

Finally she decides to meet Snowden and to publish his top secret leaks, despite her fears of the risks both to him and to herself. Both the journal and the documents she obtained from the government show how her own targeting helped to galvanize her resolve to expose the apparatus of surveillance. “He is prepared for the consequences of the disclosure,” she writes, then admits: “I really don’t want to become the story.”

In the end, Poitras has not only escaped the arrest or indictment she feared, but has become a kind of privacy folk hero: Her work has helped to noticeably shift the world’s view of government spying, led to legislation, and won both a Pulitzer and an Academy Award. But if her ultimate fear was to “become the story,” her latest revelations show that’s a fate she can no longer escape–and one she’s come to accept.

Poitras’ Astro Noise exhibit runs from February 5 until May 1 at the Whitney Museum of American Art, and the accompanying book will be published on February 23.

Lyon, David - The Snowden Stakes: Challenges for understanding surveillance today - 2015

The drip-feed disclosures about state surveillance following Edward Snowden’s dramatic departure from his NSA contractor, Booz Allen, carrying over one million revealing files, angered some and prompted serious heart-searching in others. One challenge falls to those who engage in Surveillance Studies. Three kinds of issues present themselves: One, research disregard: responses to the revelations show a surprising lack of understanding of the complex, large-scale, multi-faceted panoply of surveillance that has been constructed over the past 40 years or so, which includes but is far from exhausted by state surveillance itself. Two, research deficits: a number of crucial areas require much more research. These include the role of physical conduits, including fibre-optic cables, within circuits of power; of global networks of security and intelligence professionals; and of the minutiae of everyday social media practices. Three, research direction: the kinds of surveillance that have developed over several decades are heavily dependent on the digital—and, increasingly, on so-called Big Data—but also extend beyond it. However, if there is a key issue raised by the Snowden revelations, it is the future of the internet. Information and its central conduits have become an unprecedented arena of political struggle, centred on surveillance and privacy. And those concepts themselves require rethinking.


“Nineteen Eighty-Four is an important book but we should not bind ourselves to the limits of the author’s imagination. Time has shown that the world is much more unpredictable and dangerous than that.”
- Edward Snowden, July 2014.


The disclosures about mass surveillance, provided by Edward Snowden, offer extensive insights into the inner workings of the National Security Agency (NSA). One of the first things that featured in news accounts was that so-called mass surveillance is carried out on ‘US persons’ as well as foreigners and that those ‘foreigners’ may include close allies. While some details are tantalizingly patchy, for the most part the sheer volume of files and the range of areas to which they refer are nothing short of mind-boggling. And although the drip-feed disclosures began in June 2013, they continue to be released, with the result that any commentary is open to further modification.

Moreover, the impact of Snowden’s whistleblowing leaks is now being felt more profoundly at a national policy level, in 2015, in more than one context. First, the US Freedom Act, passed on June 2, 2015, restored in modified form some aspects of the post-9/11 Patriot Act but, crucially, restricted the bulk collection of telephone metadata of American citizens. Second, on June 11, a major government-commissioned report on counter-terrorism measures, A Question of Trust, by David Anderson, called for curbs on the UK’s GCHQ. In particular, it is highly critical of the existing system of oversight of intelligence agencies. Neither of these would have been possible without Snowden.

Both what may be learned from the disclosed documents and what may be seen of their direct impacts provide the basis for some serious re-thinking of some assumptions about surveillance in the 21st century. To take one prominent example, the very term ‘surveillance’ may require some new qualification. What is known about NSA practices raises questions about the supposed clear distinction between ‘mass’ and ‘targeted’ surveillance, and the wholesale use of ‘metadata’ foregrounds long-standing debates about how to define ‘personal data’ (or ‘personally identifiable information’). What goes for the ‘subject’ of surveillance applies to ‘privacy’ as well. Each requires some serious rethinking.

On these questions, themselves seen as controversial by defenders of the NSA’s practices, there is little settled opinion as yet. If data are sought on a ‘mass’ basis, from wide swathes of a given population, with a view to identifying algorithmically through correlations who might be a ‘person of interest,’ the point at which ‘mass’ becomes ‘targeted’ surveillance is at best an indeterminate threshold. And if the kind of data obtained in the first instance are in fact metadata—such as IP address, duration of call, and which contacts were reached—then they comprise just the kinds of information that a private detective might seek: who spoke to whom, when, and for how long? Despite protestations to the contrary, it is hard to deny that such metadata are highly ‘personal,’ especially now that the US Freedom Act explicitly limits such collection.

That the activities of the NSA and its sister agencies around the world are controversial is made abundantly clear by government efforts in more than one country to use the term ‘bulk collection’ of data rather than ‘mass surveillance.' 2 In a case in 2000, the European Court of Human Rights concluded that even the storing of data relating to the “private life” of an individual falls within the application of Article 8.1 of the European Convention on Human Rights (Bowden cited by Greenwald 2015). But the debates over this are fierce in countries such as the UK and US. This article argues that gathering and analyzing metadata, along with the content of communications, is best thought of as ‘mass’ surveillance, even though, as noted above, locating ‘suspects’ is still the main aim.

Surveillance Studies, the multi-disciplinary field of research dedicated to understanding in context contemporary practices such as monitoring, tracking and identification, is well positioned to respond to the new challenges raised by the Snowden files. However, the case made here is that, while some challenges are direct ones, to our grasp of substantive aspects of surveillance processes, others are indirect. While no claim is made about the exhaustiveness of the analysis that follows, it does suggest that Surveillance Studies can make significant contributions to considering each kind of challenge.

Snowden’s own comments about Orwell point in this direction, too. Given that, for many people, the spectre of Big Brother is still the one that fuels the imagination regarding mass surveillance, there is a need to place Orwell’s dystopic and cautionary tale in context. For Snowden, this is primarily a technological matter; “quaint” microphones hidden in bushes and the telescreen that can observe us have given way to mobile webcams and network microphones in cell-phones. But while Orwell cannot be blamed for not foreseeing the consequences of the so-called information revolution, it is also worth recalling that, like Max Weber or Hannah Arendt, 3 Orwell saw surveillance as in part an outcome of a relentless rationality expressed in bureaucratic procedures. That constraining cultural condition undoubtedly helps to explain why surveillance is in one sense self-augmenting. But more than that is needed to indicate in particular what difference is made by the digital.
Snowden’s conviction is that due to surveillance, today’s “…world is much more unpredictable and dangerous” than Orwell could have guessed. This too represents a genuine challenge from Snowden, not only to upgrade our grasp of new technology, but also place any and all technological systems in their social, political-economic and cultural context. The use of metadata, for example, is no mere outcome of technological potential, such as the exponential expansion of storage power, but of specific approaches to risk management in security industries and of consumer clustering in marketing, each of which has risen to prominence in contexts where globalization—understood as neo-liberalism—holds sway.

In what follows, three kinds of challenge are identified and discussed. The first, ‘research disregard,’ is in a sense historical: why did the revelations provoke such shocked and outraged responses, as if this were the first we had heard of very large-scale surveillance in the early 21st or even late 20th century? The second has more to do with substantive and current challenges emerging from the revelations themselves; I label it ‘research deficit.’ I indicate some areas that require some serious reappraisal in our understanding of surveillance today. The third, ‘research direction,’ points rather to the future, suggesting that the larger context of the Snowden revelations is the fate of the internet. Surveillance should never be thought of as a discrete dimension of the modern world. Today, it cannot be understood without investigating information and its current conduit, the internet. In a coda, I return to the issues of how to rethink ‘surveillance’ and ‘privacy’ for today.

These, then, are the Snowden stakes. The revelations have rightly remained buoyant in the headlines, just because so much is “at stake,” not merely for Surveillance Studies or the future of the internet, but more significantly, for privacy, human rights, civil liberties, freedom and justice.

Research disregard

The Snowden revelations continue to make headline news and several major diplomatic events have been sparked by them. Angela Merkel, Germany’s Chancellor, and Dilma Rousseff, the Brazilian president, for example, say they were shocked to discover that their cell-phone conversations had been monitored. 4 As well, individual populations outside the US have reacted negatively on finding that the NSA has been active in unexpected ways within their national territory. In Canada, for example, it was disclosed that the NSA had set up shop in the capital, Ottawa, in order to monitor the G8 and G20 summits in June 2010 (Weston, Greenwald and Gallagher 2013).

Broadly speaking, at least three elements of surveillance practices became strikingly evident during 2013 and since. One, governments engage in mass surveillance on their own citizens. The NSA works closely with its ‘Five Eyes’ partners in Australia, Canada, New Zealand and the UK, but their activities are also mirrored in many other countries. Two, corporations share their ‘own’ data supplies with government, to mutual benefit. This happens as internet companies in particular, knowingly or not, collude with government to provide personal data. Three, ordinary citizens also participate through their online interactions—especially in social media—and cell-phone use. Without necessarily being aware of it, we all feed data to the NSA and its cognate agencies, just by contacting others electronically (Lyon 2013).

However huge the revelations, though, it has to be said that there was little that was completely new about the three surveillance elements mentioned here. Granted, the massive import of the Snowden disclosures lay in the substantial store of clear evidence pointing to the present and ongoing reality of mass surveillance, and this was undoubtedly new. When the news first broke in The Guardian on June 5 2013, several factors were startling. Verizon, the telecom giant, was required to hand over to the NSA information on all calls within the USA and between the USA and other countries between April and July of that year. Secret domestic spying on an astounding scale was happening under President Obama (Greenwald 2013). But the international outcry against the realities of mass surveillance now revealed gave the impression that citizens were quite unaware and unprepared for what they now were hearing.

This suggests that surveillance was not really on the radar of most ordinary citizens. But still, to those engaged in examining surveillance and in proposing legal, technical and policy responses, the sense of unawareness may have come as something of a disappointment; it is easy to over-estimate the reception of our own work. Also, most responses worry about the assault on privacy, construed as a personal (that is, individual) matter, which shows little understanding of the ways that surveillance also operates as social sorting, targeting population groups before individuals, or of how privacy speaks to questions of human rights and social justice as well. The main exception to the individualizing focus on privacy is among those whose concern is that communications privacy has been egregiously violated, which prompts weighty questions about trust in particular.

The popular and media debate over Snowden has focused all-too-frequently on state surveillance primarily as a threat to individuals, except where the challenge to a free and open internet has been recognized. Yet the evidence shows that arbitrary power is used against all citizens when mass surveillance is practiced. As a number of advocates have argued for some time (Regan 1995; Bennett and Raab 2006; Steeves 2009), privacy is not only an individual matter. Surveillance and privacy can each be considered along a spectrum of relationships, from the monad to the multitude. By definition, mass surveillance means that anyone and everyone can be caught in the surveillance net and the larger the scale of surveillance, the more likely it is that false positives will emerge in the quest for ‘persons of interest.’ These questions are pursued below.

Despite two decades of growth in Surveillance Studies there seems to be little public understanding of surveillance as it is practiced today. The sorts of practices uncovered by Snowden are ones that have a long history, not only in the annals of intelligence gathering and national security agencies, but in spheres from policing to public administration to consumer marketing. This should be salutary for those engaged in the academic study of surveillance and indeed for any who care about freedom, democracy and justice in the 21st century (for a no-holds-barred critique see Giroux 2014). It is worth briefly reviewing that development.

In the 1980s those interested in the study of surveillance were concerned primarily with state surveillance on the one hand (e.g. Burnham 1983; Campbell and Connor 1986) and workplace surveillance on the other (e.g. Webster and Robins 1986; Zuboff 1988). More broadly, surveillance in the service of ‘social control’ was discussed in relation to policing and the management of offenders (e.g. Cohen 1985; Marx 1988), and this dimension was already merging, in part, with questions of ‘national security.’ However, research on consumer surveillance—and its links with systems of public administration—was also available at this time (see the pioneering work of Rule 1974), but consumer surveillance would not be recognized as part of mainstream surveillance developments until the 1990s (see, prominently, Gandy 1993). Without exception, these authors stressed the impact of computerization on the ways that these existing forms of surveillance, including public video cameras, would develop.

By the 1990s, however, the term ‘surveillance society’ was in much more general use as a term that indicated the ways that what once seemed to be restricted to the activities of government, policing or employment was spilling over into everyday life (Lyon 2001). This term in no way minimized the importance of state surveillance but did indicate that systemic surveillance of many kinds could be expected simply as a result of conducting one’s daily affairs. Increasingly, surveillance became visible through ubiquitous cameras in public streets and locations such as shopping malls, the use of credit cards and, progressively, loyalty cards, plus, in some rudimentary ways, through online interactions that expanded after the development of the World Wide Web in 1994 and the subsequent commercialization of the internet, from 1995.

During the early 2000s, two events occurred that were to shape the direction of surveillance decisively, although the potential connections between them were not made public until 2010. One was the attacks of September 2001 (‘9/11’), followed by the Madrid train attack and the London bombings of ‘7/7’, whose aftermath hugely boosted security-related surveillance, at least in the global north. Interestingly, the activities of the quickly-formed Department of Homeland Security took some cues from ‘Customer Relationship Management’ (CRM) in their quest for ‘Total Information Awareness’ (TIA) (Lyon 2003: 92f). The other was the definitive appearance of social media, symbolized by the launch of Facebook in 2004, which quickly established itself as a mainstream dimension of the internet, simultaneously facilitating new levels of consumer surveillance (not to mention social surveillance, Marwick 2012; Trottier 2012), now based on self-expressed preferences and tastes. By President Obama’s inauguration in 2009 the DHS had developed a Social Networking Monitoring Center to check for ‘items of interest’ (Lynch 2010).

In a sense, then, the Snowden disclosures may be functioning as a wake-up call to publics still unaware that the day of mass surveillance of ordinary citizens had already dawned. If it was not already clear, after 9/11 the ‘national security’ rationale for intensified surveillance (Ball and Webster 2003) became prominent and with it the use of data analytics (now generally referred to as ‘Big Data,’ Lyon 2014a). The TIA depended on a very large-scale database using “new algorithms for mining, combining and refining data” 5 that included bank machine use, credit card trails, internet cookies, medical files—anything, indeed, that might yield correlations indicating meaningful relationships between records. These, the Snowden files show, are among just the methods used by the NSA in its surveillance both domestic and foreign.

Without doubt, Snowden is right to raise issues of privacy, civil liberties—including freedom of expression, communication and assembly—and human rights in relation to what his findings have exposed about the NSA and its cognate agencies around the world. But what many studies of surveillance over the past two decades have shown is that deeper questions are raised that challenge many conventional assumptions about contemporary societies, their actual forms of power, their politics and their democratic institutions and processes. As the above analysis shows, this is not only a question of electronically enhanced bureaucratic power bearing down upon hapless citizens. It also has to do with how those citizens engage with the everyday, in communication, interaction and exchange, much of which occurs using digital devices. Arguably, then, it is also a matter of a surveillance culture (Lyon 2014b) in which an increasing proportion of the world’s population lives and to which, for a number of reasons, many have become inured.

As well as the more fundamental societal-cultural questions raised by the Snowden findings, the key issues of contemporary surveillance may also be discerned through considering some major trends that have become increasingly evident in the past decade or so (and in the following section, we explore how some of these intersect with three central Snowden-specific questions). In addition to the sheer exponential growth of surveillance, as it has increasingly become a basic mode of organizational practice, several other significant trends may be identified (for more on this, see Bennett et al. 2014; Brown 2010).

As mentioned earlier, security is becoming a key driver of surveillance, not only at the ‘national’ level but also in general types of policing, urban security and in workplaces, transit systems and schools (Taylor 2013). This is, of course, a key issue and one fraught with basic problems of definition, which also relates to its status as a widely-used political rationale for a range of controversial measures. The kind of ‘national security’ that prompts increased surveillance arguably has little in common with the kinds of ‘security’—from things like famine, fear, even for freedom—that many might think would benefit their communities and families. Moreover, in practice, many current attempts to procure national security seem to jeopardize the civil liberties and human rights basic to democratic practice (see Zedner 2009).

At the same time, it must be acknowledged that not only ‘security’ but also some much more mundane motifs are significant in the development of surveillance today. One is ‘efficiency,’ which encourages the use of cost-cutting policies and technology-intensive solutions; the other is ‘convenience,’ which dominates much of the appeal of marketers to consumers. Under such very ordinary and unremarkable motifs surveillance expands apace, as evidence-producing technologies (as Josh Lauer calls them) are adopted for reasons that are routine and everyday.

‘Security,’ on the other hand, is still supreme among these ‘drivers.’ For philosopher Giorgio Agamben, the security motif seen behind contemporary surveillance may be trumping not only democracy but politics itself (Agamben 2013) and this insight may at least serve as a theorem to be explored. At the same time, this trend must be seen alongside another, the intertwining—and in some respects integration—of public and private agencies. The governmental and the corporate have always worked closely together in modern times but the idea that they inhabit essentially different spheres, with different mandates, is currently unraveling. As Snowden revealed, telephone companies such as Verizon and internet companies such as Microsoft work in tandem with state agencies such as the NSA, in ways that have yet to be fully understood.

Several other important trends also deserve mention, if only to flag their significance (they are discussed in Bennett et al. 2014). Mobile and location-based surveillance is expanding, which means that the time-and-space coordinates of our lives are increasingly monitored. Surveillance is more and more embedded in everyday environments such as buildings, vehicles and homes. Machines recognize individual owners and users through card-swiping or voice-activation. The human body is itself the source of surveillance data, with DNA records, fingerprinting and facial recognition coming to be viewed as reliable means of identification and verification. Moreover, all these trends are rapidly being globalized, which is in itself a surveillance trend of some import. As mentioned above, social surveillance via networking sites is rising, a topic we return to below. And in all this, it becomes steadily more difficult to know what exactly counts as ‘personal data.’ Vehicle licence plates, presence in group photos posted on social media and, of course, metadata make definition difficult.

All the above stand as challenges to Surveillance Studies in particular, and to any and all citizens of contemporary liberal democracies in general. There are, however, some more specific questions to which I now draw attention. These are areas in which, after Snowden, we are obliged to say that current surveillance research simply does not yet know enough.

Research deficit

If the historical problem is the apparent disregard of research about surveillance, permitting a sense of surprise rather than sober expectation, then the contemporary problem is that current research has yet to catch up with some vital surveillance developments. In each case—digital infrastructures, professional networks and social media practices—the difficulty of identifying the object of research is compounded by misleading language, dubious assumptions and inadequate theory. There is no conspiracy here, just an analytic fog that has to clear before the contours of each situation can be seen more sharply.

The first issue is one that may be most dramatically seen in relation to cloud computing (fog again?) and the electronic transfer of data from place to place. The metaphor of the cloud originated in diagrams intended to demonstrate how information is moved around (Mosco 2014: 77). The impression given—and reinforced through cloud marketing—is that somehow data flits weightlessly through the ether when in fact the actual conduits are fibre-optic cables. There is a geographical and material element to the cloud that belies the benign, fluffy, floating image. That material-geographic element is crucial to power configurations. Part of this has to do with the leading role of the US, through the NSA. As Andrew Clement shows, data files sent from the University of Toronto to the Ontario government (a few city blocks away, also in Toronto) actually travel down fibre-optic cables in a “boomerang” pattern to US data-handling interchanges before reaching their destination back in Canada (Clement 2013). They thus travel through a quite different data regime than Canada’s. But new power configurations are also generated by the capacity to tap into digital data, which depends on cooperation between participating countries in order to obtain general views of the operation of the internet.

NSA programs use such cables to collect (Upstream, Quantuminsert—see also commercial versions of such hacking programs 6 ) and to intercept (Tempora) data. Interceptors are placed strategically along the cable routes, a practice undertaken by many countries, as Snowden’s work shows, and through security agreements with private companies such as Global Crossing much of the world’s fibre-optic cable is accessible to the US (Timberg and Nakashima 2013). More targeted surveillance occurs using systems like XKeyscore, which is linked to the PRISM program. XKeyscore also stores material in data caches spread around the world in specific locations (see map in Bennett et al. 2014: 113). PRISM, in turn, depends on consumer data obtained from internet companies through social media and cloud platforms (such as Dropbox; see Bauman et al. 2014: 123).

The second issue is that it is hard to pin down exactly who is conducting surveillance. Although the term ‘state’ surveillance is common in everyday parlance, those who stand in for ‘state’ employees are many and varied, and this follows from the point above about the blurring between public and private sectors. Snowden’s own position before his departure with the documents illustrates this. He worked for Booz Allen Hamilton, whose expertise was subcontracted to the NSA. Didier Bigo (e.g. 2008; see also Ball and Snider 2013; Bauman et al. 2014: 124-131; Lyon and Topak 2013) has for some time drawn attention to the ways in which “security professionals” now form an international network, operating in different countries but with extensive cooperation. These are intelligence agents, technical experts, police (both public and private), advisers and others whose immediate genesis lies in post-9/11 international antiterrorism cooperation but has now expanded into a clearly discernible network of some considerable influence.

Importantly, older distinctions break down as this network of “unease managers” (as Bigo calls them) develops. They connect public and private agencies, internal and external security, national and international interests and so on. This development grows alongside the digitization of security and surveillance such that, paradoxically, ‘national’ security is no longer ‘national’ in “…its acquisition or even analysis, of data…” which helps to blur “…the lines of what is national as well as the boundaries between law enforcement and intelligence” (Bauman et al. 2014: 125). This issue is related to the one mentioned above, about the uncertainty of who actually carries out surveillance, although the further point here is that a loose affiliation of professional organizations can be identified. They work together, learning from each other and developing their own protocols, rationales and surveillance practices.

As the examples from the US show, similar surveillance practices occur across the board, whether in the DHS, CIA, FBI or the NSA (or, for that matter, in the UK’s GCHQ or Canada’s CSEC). These ‘acronym’ policing and intelligence organizations also rely on similar subcontracting organizations that also display similar technical, statistical and political-economic activities (see Ball and Snider 2013). Both policing and intelligence agencies have military connections that also influence their practices, and the traffic is two-way: information handling is crucial to each, such that policing becomes more data-heavy (Haggerty and Ericson 1997) and also more inflected by military method (Brodeur 2010). In all cases it is also clear that such organizations do not just react to perceived threats to national security or to criminal acts. They actively construct the target populations and refine the rationales for so doing. This is where the commercial connections with technology corporations also become centrally significant, in conjunction with government actors. Policy influences and is influenced by the corporate and technical approaches and practices. At an organizational and network level, then, relationships are manifold and complex.

The third ‘research deficit’ question has to do with the tissues connecting these organizational networks and their practices with the subjects of surveillance or, more properly, with target populations. The internet, and above all social media, are crucial here, although cell-phone use is another linked dimension of the same question. It is important to recall that social media is a 21st century phenomenon of only very recent provenance. Yet it has grown at an astonishing speed and with amazing global reach such that it is now one of the dominant aspects of internet use. While much significant social research has occurred in this area—particularly with the help of units such as the Oxford Internet Institute in the UK or the Pew ‘Internet and American Life’ program—understanding how social media users operate in relation to practices and concepts relating to surveillance and privacy is still very much an infant subfield and a vital research priority (see, e.g. Fuchs 2014; Marwick 2013; Trottier 2012).

In a longer historical frame, it might seem strange that social media users would freely permit personal details to be widely and promiscuously circulated online, making themselves vulnerable to intense surveillance both by the corporations that seek their data for marketing purposes and by policing and intelligence agencies. Such willing compliance would surely have puzzled and bothered an Orwell, attuned as he was to the use of new technologies to procure popular subservience to the state. But there is a strong sense in which today’s situation is decidedly post-Orwellian. It is not merely that the technologies of surveillance have been hugely upgraded, but that surveillance practices are now common to all organizations, amounting to surveillance “regimes” (Giroux 2014: 7) and, as I noted above, a surveillance culture. In such a culture, surveillance is not only a form of entertainment but also something encountered in everyday life, and something in which many knowingly and actively engage. Lives are lived, in part, online.

The research question that presents itself here is what long-term impact the Snowden revelations and their aftermath will have in informing, and perhaps reorienting, the practices of social media users. This involves careful analysis of how users themselves perceive the situations in which they find themselves and the practices they pursue online. For instance, Pew researchers found that social media users are unwilling to discuss Snowden online—and offline too—preferring safer environments such as the dinner table for such conversations. 7 This is also a challenge for policy and advocacy research that is willing to go beyond conventional understandings of surveillance and, especially, privacy (see e.g. Cohen 2012).

This also involves fresh investigations of the potential of internet communication for questioning and resisting forms of surveillance deemed excessive, unnecessary, or illegal. On the one hand, numerous internet-related NGOs and lobby and pressure groups have formed a disparate social movement to demand accountability for, and transparency about, the surveillance practices exposed by Snowden. 8 On the other, everyday engagement of users with social media may be reflexively informed by growing knowledge of how surveillance works in the world after Snowden. Concepts such as “exposure” (Ball 2009) find new critical import for understanding how, how much, and under what circumstances users reveal personal data to others.

These questions lead into a more general inquiry about the future of internet-related surveillance research, which, I argue in the next section, has risen in significance to stand today as a key area—in the sense that it informs many other areas—in surveillance research.

Research direction

Internet freedom—the ability to use the network without institutional constraints, social or state control, and pervasive fear—is central to the fulfillment of [its] promise. Converting the internet into a system of surveillance thus guts it of its core potential.
(Glenn Greenwald 2014: 6)

Any field of study, including that of surveillance, is obliged to review from time to time the main force-fields that shape its object of analysis. Today the internet is bound up with surveillance at many levels and thus deserves special attention. This section argues that the research direction for Surveillance Studies should be strongly inflected by issues of information and the internet. The kinds of surveillance that have developed over several decades are heavily dependent on the digital—and, increasingly, on what is now labelled Big Data—but also extend beyond it. As Greenwald indicates, the Snowden revelations raise the future of the internet as a key issue. While it is true that modern societies have been ‘information societies’—and thus ‘surveillance societies’—from their inception (Lyon 2005), today information and its central conduits have become an unprecedented arena of political struggle, centred on surveillance. This suggests that both analytically, in terms of research directions, and politically, in terms of practice and policy, the internet and surveillance are bound in a mutually informing relation.

The use of the internet for surveillance is not new, but its scope has never been greater. For many, such as Greenwald and Snowden himself, this is a great betrayal of the initial wave of optimism about its democratic potential with which the internet was born. The hoped-for human benefit pre-dated the commercialization of the internet, but versions of it were also woven into many corporate aspirations in Silicon Valley and elsewhere from the 1990s onwards. Some popular and prescient writers such as Ithiel de Sola Pool (1983) foresaw the development of what we now call the internet, arguing that it would be a key carrier of technological freedom. He insisted that free speech would become a vital issue. How regulation and access were organized would determine whether or not the new communications would enhance democracy as the political platform and the printing press had done before.

What happened to those utopian dreams of the ‘information revolution’ pundits of the 1980s? After all, they had correctly noted the emancipatory and democratizing possibilities offered by the new technologies. But Ithiel de Sola Pool and others of his ilk perhaps paid insufficient attention to the already-existing political economy informing information technologies—not to mention the over-arching cultural belief in the power of Technology. Together, these oversights led to a failure to note that the new technologies might be deemed efficacious despite evidence to the contrary, and to see the flaws in analyses that treat knowledge as a new and independent factor of production. Following Karl Polanyi (1944, 2001), one might think of informational knowledge as in fact a ‘fictitious commodity’: cut off from its social origins in creative labour to become an ‘independent’ form in expert systems or virtual services (Hayles 1999), integrated into an economic system of general commodification where profit is the bottom line, and allocated through the market, where reciprocity and social justice have little or no say (Jessop 2007; also Schiller 1988). The commodification of the internet in 1995 was a critical moment in the more general development of information as a fictitious commodity.

However, De Sola Pool’s thirty-year-old comments on freedom of speech came home dramatically as documents were released by Snowden. By this time matters had become polarized. Right on the heels of the news about the NSA’s access to Verizon telephone subscriber data came the disclosures about the PRISM program that directly implicated major internet companies such as Microsoft, Yahoo!, Google and Facebook. Urgent exchanges occurred, some of which involved some puzzlement on the part of the companies: yes, they had parted with some data, but the revelations seemed to suggest that far greater quantities were involved than they had themselves authorized. As it transpired, beyond the FISA-authorized access to data held by internet companies, the NSA had also found ways to intercept upstream in the data flow, using systems such as Muscular, developed by the NSA along with its Five Eyes partner, the UK’s GCHQ (Gellman and Soltani 2013).

As Steven Levy noted in a Wired article (Levy 2014), the Snowden ‘revelations’ exposed a “…seemingly irresolvable conflict. While Silicon Valley must be transparent in many regards, spy agencies operate under a cloak of obfuscation.”

Snowden’s findings shone a spotlight on an issue of which internet companies had been all too aware for some years. Companies such as Google, Yahoo! and Twitter had struggled to hold off government attempts, through the Foreign Intelligence Surveillance Act (FISA) court, to oblige them to hand over customer data. To their credit, the companies seem to have tried to ward off such efforts, 9 but the combination of government power and the fact that the companies also hold government contracts compromised the struggle somewhat. PRISM focused the fight, but the secrecy surrounding the NSA has made it very difficult to know exactly what is happening. They are fighting in a fog. This also presents problems for those trying to research corporate-government surveillance relations.

The details of the ongoing controversies and battles may be found on various sites 10 but the theme that unites them is surveillance and the future of the internet. This has several implications for analysis and action.

One important outcome is that those studying surveillance have come to realize that those researching communications have much to offer. From Oscar Gandy’s or Joseph Turow’s pioneering work in consumer surveillance to Mark Andrejevic’s or Alice Marwick’s explorations of online surveillance (Gandy 1993, 2012; Turow 2012; Andrejevic 2007, 2013; Marwick 2013), not to mention ongoing work on communications surveillance itself, the connections are clear. Those whose surveillance background is in criminology or public policy in particular may need to strengthen their analyses by examining more closely how the internet intersects with their understanding of surveillance. Equally, those grappling with questions about internet surveillance would do well to look at the literatures of surveillance—and of privacy—more broadly conceived (e.g. Raab and Goold 2011).

A second area is to explore further the analytical possibilities of considering information as a fictitious commodity. One could argue, for example, that with the strong push towards so-called Big Data, the severing of connections between information and its social roots is now even more pronounced. In N. Katherine Hayles’ analysis, information “loses its body” from the time of the 1950s Macy Conferences on communication theory onward. But I would argue that now so-called personal data progressively loses its ‘person’ (Lyon 2014a). When data gathered for commercial (marketing) purposes—which already stretch the links between data and individuals—are then resignified for security goals, quite new social and legal problems appear (Amoore 2014). All too often, inappropriate talk of ‘raw data’ gives the impression that these are harmless technical means of connecting the dots through algorithms. The practices and politics of algorithms are profound but scarcely explored (but see e.g. Kitchin 2014; Morozov 2013).

A third area of concern has to do with the politics of the internet in the era of ‘mass’ surveillance. In an obvious sense, this has been a key aspect of the Snowden controversies from the outset. Governments, including the US Administration, have been obliged to respond to the continuing debates over state power and its entwinement with commercial networks, especially internet companies (see e.g. Clarke et al. 2014). But the politics of internet surveillance is also a strong current running through the internet companies themselves—they have had to distance themselves from the NSA while at the same time acknowledging that they do cooperate extensively with government. Alongside these areas of turbulence is the active resistance of numerous NGOs who are engaged with both the civil liberties and privacy dimensions of mass surveillance and, again, the future of the internet itself. The new coalitions that have formed since Snowden, between EPIC, EFF and ACLU in the US, for instance, or under the banner of OpenMedia in Canada, are making waves in fresh ways and building creatively towards consensus on each new Snowden revelation. Could this be the more concerted response to surveillance that Colin Bennett concluded was still lacking when he published his 2008 book, The Privacy Advocates?

The future of the internet still hangs in the balance as the revelations about mass surveillance continue. As Ron Deibert indicates in Black Code (2013), broad issues of enclosure, secrecy and the arms race are all implicated here. And as Jonathan Zittrain (2009) reminds us, from a different standpoint, the internet has never had a golden age. The problems as well as the potential were built-in from the outset. Analysis of the spread of surveillance has never been more significant, from the threats to individual people to the consequences for war and peace, wealth and poverty, on a global level.

Coda: Snowden, surveillance and privacy

This article has surveyed some of the most striking implications of what, thanks to Snowden, we now know about ‘national security’ surveillance in the early 21st century. The historical question is why, when the surveillance society was already so well developed, Snowden’s ‘revelations’ were read in the media as a complete surprise. The current question asks what key aspects of today’s surveillance require new forms of analysis, along with policy and political response. The future question considers what the internet has now come to signify, and how it might be reclaimed for its original promise, given that it is the key site for surveillance practices at several levels.

At the outset, however, we noted that the Snowden revelations raise questions about the very language commonly used to discuss the monitoring and tracking of daily life and responses to these practices: surveillance and privacy. Concepts are always contested, some more than others. And definitions are always difficult because they reveal the time, place and cultural assumptions of their origins. Again, these questions have been raised before, but perhaps never so sharply as in relation to the post-Snowden scene. Once, the distinction between targeted and mass surveillance seemed fairly clear. No more. The lines blur with traffic between the two; is the person or the profile being surveilled? Once, privacy was construed primarily as a matter relating to the interests, or rights, of a specific identifiable individual. No more.

When profiling is ‘anticipatory’ and hunches about a possible ‘nexus’ to terrorism are the basis of suspicion, how exactly does privacy address this?

It has been argued here that the kinds of surveillance highlighted by the Snowden revelations are, on the one hand, information-intensive, often relating to the internet, and on the other, ‘national security’-oriented. The concept of ‘security’ also requires problematizing in this context, which is yet another task for the multi-disciplinary research that today is patently urgent. As with surveillance or privacy, defining security is difficult, especially under present conditions, where ‘national’ security has been elevated to a top priority by many governments. It is a highly contested concept (Zedner 2009), often erroneously supposed to be in conflict 11 with claims to a right to privacy or to civil liberties. Much more nuanced understandings of security are required if the term is to retain any connection with the desires, aspirations and indeed well-being of everyday citizens. And these must be considered in relation to the other concepts—surveillance and privacy—affected by the ‘Snowden stakes’ and discussed here (see Raab 2014; Bigo [2008] 2012; Lyon 2015).

The Snowden stakes are many and varied and differ from country to country. But this complexity should not be allowed to obscure the fact that in all cases those stakes are high. The disclosures challenge some taken-for-granted assumptions and expose real gaps in current knowledge. But this is not only a matter for those engaged in surveillance research, from whatever discipline; it is a multi-disciplinary enterprise involving not only the social sciences but investigative journalists and computer professionals as well. 12 At stake, in particular, is the future of the internet and of digital communications in general. This article has tried to convey the magnitude of this challenge and to hint at ways of describing and analyzing it that do not conform to some of the dangerously dominant assumptions currently in circulation. But the stakes are even larger: they include the very character and possibilities of politics, democracy and social justice in a time of post-Orwellian big data surveillance.


Agamben, Giorgio. 2013. For a theory of destituent power. Chronos. Available at:

Amoore, Louise. 2014. Security and the claim to privacy. International Political Sociology 8(1) 108-112.

Andrejevic, Mark. 2013. Infoglut: How too much information is changing the way we think and know. London: Routledge.

Andrejevic, Mark. 2007. iSpy: Surveillance and Power in the Interactive Era. Lawrence: University of Kansas Press.

Ball, Kirstie S. and Laureen Snider, eds. 2013. The Surveillance-Industrial Complex: A Political Economy of Surveillance. London and New York: Routledge.

Ball, Kirstie S. 2009. Exposure: exploring the subject of surveillance. Information, Communication & Society 12 (5): 639-657.

Ball, Kirstie S. and Frank Webster, eds. 2003. The Intensification of Surveillance. London: Pluto Press.

Bauman, Zygmunt, Didier Bigo, Paulo Esteves, Elspeth Guild, Vivienne Jabri, David Lyon and R.B. Walker. 2014. After Snowden: Rethinking the impact of surveillance. International Political Sociology 8 (2): 121-144.

Bennett, Colin J. 2008. The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge, MA: MIT Press.

Bennett, Colin J. and Charles D. Raab. 2006. The Governance of Privacy: Policy Instruments in Global Perspective. Cambridge, MA: MIT Press.

Bennett, Colin J., Kevin D. Haggerty, David Lyon and Valerie Steeves, eds. 2014. Transparent Lives: Surveillance in Canada. Edmonton AB: Athabasca University Press. Also available at:

Bigo, Didier. 2008. Globalized (in)security: The field and the banopticon. In Didier Bigo and Aanastassia Tsouskala, eds. Terror, Insecurity and Liberty. London and New York: Routledge.

Bigo, Didier. 2012 [2008]. International Political Sociology. In Security Studies: An Introduction, ed. P. Williams, 116-128. Abingdon: Routledge.

Brodeur, Jean-Paul. 2010. The Policing Web. Oxford and New York: Oxford University Press.

Brown, Ian. 2010. The challenges to European data protection laws and principles. Working paper #1 of the Directorate General Justice, Freedom and Security. Available at:

Burnham, David. 1983. The Rise of the Computer State. New York: Vintage.

Campbell, Duncan and Steve Connor. 1986. On the Record: Surveillance, Computers and Privacy. London: Michael Joseph.

Clarke, Richard, Michael Morrell, Geoffrey Stone, Cass Sunstein and Peter Swire. 2014. The NSA Report: Liberty and Security in a Changing World. Princeton, NJ and Oxford: Princeton University Press.

Clement, Andrew. 2013. IXmaps – Tracking your personal data through the NSA’s warrantless wiretapping sites. IEEE International Symposium on Technology and Society. (IEEE Xplore Digital Library: 216-223, doi:

Cohen, Julie. 2012. Configuring the Networked Self. New Haven, CN: Yale University Press.

Cohen, Stanley. 1985. Visions of Social Control. Cambridge: Polity Press.

Dandeker, Christopher. 1990. Surveillance Power and Modernity. Cambridge: Polity.

De Sola Pool, Ithiel. 1983. Technologies of Freedom: On Free Speech in an Electronic Age. Cambridge, MA: The Belknap Press.

Deibert, Ronald. J. 2013. Black Code: Surveillance, Privacy and the Dark Side of the Internet. Toronto: Signal (McClelland & Stewart).

Fuchs, Christian. 2014. Social Media: A Critical Introduction. London: Sage.

Gandy, Oscar. 2012. Coming to Terms with Chance: Engaging Rational Discrimination and Cumulative Disadvantage. London: Ashgate.

Gandy, Oscar. 1993. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview Press.

Gellman, Barton, and Ashkan Soltani. 2013. NSA infiltrates links to Yahoo, Google data centers worldwide, Snowden documents say. The Washington Post. October 30. Available at:

Giroux, Henry. 2014. Totalitarian paranoia in the post-Orwellian surveillance state. Cultural Studies. Online May 14. Available at:

Greenwald, Glenn. 2015. The Orwellian re-branding of mass surveillance as merely ‘bulk collection.’ The Intercept March 13. At

Greenwald, Glenn. 2014. No Place to Hide: Edward Snowden, the NSA, and the US Surveillance State. New York: Metropolitan Books; Toronto: McClelland and Stewart.

Greenwald, Glenn. 2013. NSA collecting phone records of millions of Verizon customers daily. The Guardian. June 5. At

Haggerty, Kevin D. and Richard V. Ericson. 1997. Policing the Risk Society. Toronto: University of Toronto Press.

Hayles, N. Katherine. 1999. How we became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Jessop, Bob. 2007. Knowledge as a fictitious commodity: insights and limits of a Polanyian analysis. In Reading Karl Polanyi for the 21st Century: Market Economy as a Political Project, eds A. Bugra and K. Agartan, 115-134. Basingstoke: Palgrave.

Kitchin, Rob. 2014. The Data Revolution: Big Data, Open Data, Data Infrastructures and their Consequences. London: Sage.

Lauer, Josh. 2012. Surveillance history and the history of new media: evidence-producing technologies. New Media and Society 14 (4) 566-582.

Levy, Steven. 2014. How the NSA almost killed the internet. Wired. July 1. Available at:

Lynch, Jennifer. 2010. New FOIA documents reveal DHS social media monitoring during Obama inauguration. Available at:

Lyon, David. 2015 (forthcoming). Surveillance after Snowden. Cambridge: Polity.

Lyon, David. 2014a. Surveillance, Snowden and Big Data: Capacities, Consequences, Critique. Big Data & Society 1 (1). Available at:

Lyon, David. 2014b. The emerging surveillance culture. In Media, Surveillance and Identity, eds André Jansson and Miyase Christensen. New York: Peter Lang.

Lyon, David. 2013. Can citizens roll back silent army of watchers? The Toronto Star. September 23. Available at:

Lyon, David and Özgün Topak. 2013. Promoting global identification: corporations, IGOs and ID card systems. In The Surveillance-Industrial Complex: A Political Economy of Surveillance, eds Kirstie S. Ball and Laureen Snider, 27-43. London and New York: Routledge.

Lyon, David. 2005. A sociology of information. In The Sage Handbook of Sociology, eds Craig Calhoun, Chris Rojek and Bryan Turner. London and New York: Sage.

Lyon, David. 2003. Surveillance after September 11. Cambridge: Polity.

Lyon, David. 2001. Surveillance Society: Monitoring Everyday Life. Buckingham: Open University Press.

Morozov, Evgeny. 2013. The real privacy problem. MIT Technology Review. October 22. Available at:

Marquez, Xavier. 2012. Spaces of appearance and spaces of surveillance. Polity 44: 6-31.

Marwick, Alice. 2013. Status Update: Celebrity, Publicity and Branding in the Social Media Age. New Haven, CT: Yale University Press.

Marwick, Alice. 2012. The public domain: Surveillance in everyday life. Surveillance & Society 9 (4): 378-393.

Marx, Gary. 1988. Undercover: Police Surveillance in America. Berkeley, CA: University of California Press.

Mosco, Vincent. 2014. To the Cloud: Big Data in a Turbulent World. Boulder, CO and London: Paradigm Publishers.

Polanyi, Karl. 2001. The Great Transformation: The Political and Economic Origins of Our Time, 2nd ed. Foreword by Joseph E. Stiglitz. Boston: Beacon Press.

Polanyi, Karl. 1944. The Great Transformation. New York: Farrar and Rinehart.

Raab, Charles. 2014. Privacy as a Security Value. In Jon Bing: En Hyllest / A Tribute, eds Dag Wiese Schartum, Lee Bygrave and Anne Gunn Berge Bekken, 39-58. Oslo: Gyldendal.

Raab, Charles. 2013. Studying surveillance: the contribution of political science? Political Insight. October 29. Available at:

Raab, Charles and Benjamin Goold. 2011. Protecting Information Privacy. Equality and Human Rights Commission Research Report 69. Available at:

Regan, Priscilla. 2009 (1995). Legislating Privacy: Technology, Social Values and Public Policy. Durham, NC: University of North Carolina Press.

Rule, James. 1974. Private Lives, Public Surveillance: Social Control in the Computer Age. New York: Schocken Books.

Schiller, Dan. 1988. How to Think about Information. In The Political Economy of Information, eds V. Mosco and J. Wasko, 27-44. Madison: University of Wisconsin Press.

Steeves, Valerie. 2009. Reclaiming the social value of privacy. In Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Age, eds Ian Kerr, Carol Lucock and Valerie Steeves. New York: Oxford University Press.

Taylor, Emmeline. 2013. Surveillance Schools: Security, Discipline and Control in Contemporary Education. London: Macmillan.

Timberg, Craig and Ellen Nakashima. 2013. Agreements with private companies protect access to cables’ data for surveillance. The Washington Post, July 6. Available at:

Trottier, Daniel. 2012. Social Media as Surveillance. London: Ashgate.

Turow, Joseph. 2012. The Daily You: How the New Advertising Industry is Defining your Identity and your Worth. New Haven, CT: Yale University Press.

Webster, Frank, and Kevin Robins. 1986. Information Technology: A Luddite Analysis. NJ: Ablex.

Weston, Greg, Glenn Greenwald and Ryan Gallagher. 2013. New Snowden docs show US spied during G20 in Toronto. CBC News, Nov 27. Available at:

Zedner, Lucia. 2009. Security. New York and London: Routledge.

Zittrain, Jonathan. 2009. The Future of the Internet. New Haven, CT: Yale University Press. 

Zuboff, Shoshana. 1988. In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.

  2. In May 2015 the US Court of Appeals ruled that the NSA’s bulk collection of phone metadata had never been legal.
  3. Weber and Arendt have much to say about what is now known as surveillance, in relation to maintaining bureaucratic records on individuals (Weber) or how power is generated in “spaces of appearance” (Arendt). See e.g. Dandeker 1990; Marquez 2012.
  4. It is symptomatic of today’s celebrity culture, of course, that surveillance of high-profile public figures garners much more mass media interest than the mass surveillance of ordinary—in this case German—citizens. At the same time, it is not only the NSA that spies on others’ leaders: Germany has also kept tabs on prominent Americans such as John Kerry and Hillary Clinton.  See
  8. Coalitions against mass surveillance have engaged in several concerted global events since Snowden’s disclosures began. See e.g.  The "transparency" question, however, is in tension with the legitimate but limited need for secrecy within intelligence agencies. Research could fruitfully be brought to bear on this vexed question.
  9. Centre for Democracy and Technology, September 2014: ‘Yahoo v. U.S. PRISM documents,’ available at
  10. See  e.g. , or
  11. The phrase “…finding a balance between privacy and security” is routinely intoned by governments and media alike but it is at best vacuous and at worst a cloak for undermining the one to bolster the other.
  12. See e.g. the call from political scientist Charles D. Raab (2013).

Deibert, Ronald - Authoritarianism Goes Global: Cyberspace Under siege - 201507

Deibert, Ronald, "Authoritarianism Goes Global: Cyberspace Under siege," Journal of Democracy, Volume 26, Number 3, July 2015, pp. 64-78.

December 2014 marked the fourth anniversary of the Arab Spring. Beginning in December 2010, Arab peoples seized the attention of the world by taking to the Internet and the streets to press for change. They toppled regimes once thought immovable, including that of Egyptian dictator Hosni Mubarak. Four years later, not only is Cairo’s Tahrir Square empty of protesters, but the Egyptian army is back in charge. Invoking the familiar mantras of anti-terrorism and cyber-security, Egypt’s new president, General Abdel Fattah al-Sisi, has imposed a suite of information controls. 1 Bloggers have been arrested and websites blocked; suspicions of mass surveillance cluster around an ominous-sounding new “High Council of Cyber Crime.” The very technologies that many heralded as “tools of liberation” four years ago are now being used to stifle dissent and squeeze civil society. The aftermath of the Arab Spring is looking more like a cold winter, and a potent example of resurgent authoritarianism in cyberspace.

Authoritarianism means state constraints on legitimate democratic political participation, rule by emotion and fear, repression of civil society, and the concentration of executive power in the hands of an unaccountable elite. At its most extreme, it encompasses totalitarian states such as North Korea, but it also includes a large number of weak states and “competitive authoritarian” regimes. 2 Once assumed to be incompatible with today’s fast-paced media environment, authoritarian systems of rule are showing not only resilience, but a capacity for resurgence. Far from being made obsolete by the Internet, authoritarian regimes are now actively shaping cyberspace to their own strategic advantage. This shaping includes technological, legal, extralegal, and other targeted information controls. It also includes regional and bilateral cooperation, the promotion of international norms friendly to authoritarianism, and the sharing of “best” practices and technologies.

The development of several generations of information controls has resulted in a tightening grip on cyberspace within sovereign territorial boundaries. A major impetus behind these controls is the growing imperative to implement cyber-security and anti-terror measures, which often have the effect of strengthening the state at the expense of human rights and civil society. In the short term, the disclosures by Edward Snowden concerning surveillance carried out by the U.S. National Security Agency (NSA) and its allies must also be cited as a factor that has contributed, even if unintentionally, to the authoritarian resurgence.

Liberal democrats have wrung their hands a good deal lately as they have watched authoritarian regimes use international organizations to promote norms that favor domestic information controls. Yet events in regional, bilateral, and other contexts where authoritarians learn from and cooperate with one another have mattered even more. Moreover, with regard to surveillance, censorship, and targeted digital espionage, commercial developments and their spin-offs have been key. Any thinking about how best to counter resurgent authoritarianism in cyberspace must reckon with this reality.

Mention authoritarian controls over cyberspace, and people often think of major Internet disruptions such as Egypt’s shutdown in late January and early February 2011, or China’s so-called Great Firewall. These are noteworthy, to be sure, but they do not capture the full gamut of cyberspace controls. Over time, authoritarians have developed an arsenal that extends from technical measures, laws, policies, and regulations, to more covert and offensive techniques such as targeted malware attacks and campaigns to co-opt social media. Subtler and thus more likely to be effective than blunt-force tactics such as shutdowns, these measures reveal a considerable degree of learning. Cyberspace authoritarianism, in other words, has evolved over at least three generations of information controls. 3

First-generation controls tend to be “defensive,” and involve erecting national cyber-borders that limit citizens’ access to information from abroad. The archetypal example is the Great Firewall of China, a system for filtering keywords and URLs to control what computer users within the country can see on the Internet. Although few countries have matched the Great Firewall (Iran, Pakistan, Saudi Arabia, Bahrain, Yemen, and Vietnam have come the closest), first-generation controls are common. Indeed, Internet filtering of one sort or another is now normal even in democracies.

Where countries vary is in terms of the content targeted for blocking and the transparency of filtering practices. Some countries, including Canada, the United Kingdom, and the United States, block content related to the sexual exploitation of children as well as content that infringes copyrights. Other countries focus primarily on guarding religious sensitivities. Since September 2012, Pakistan has been blocking all of YouTube over a video, titled “Innocence of Muslims,” that Pakistani authorities deem blasphemous. 4 A growing number of countries are blocking access to political and security-related content, especially content posted by opposition and human-rights groups, insurgents, “extremists,” or “terrorists.” Those last two terms are in quotation marks because in some places, such as the Gulf states, they are defined so broadly that content is blocked which in most other countries would fall within the bounds of legitimate expression.


National-level Internet filtering is notoriously crude, and errors and inconsistencies are common. One Citizen Lab study found that Blue Coat (filtering software from a U.S. company that is widely used to automate national filtering systems) mistakenly blocked hundreds of non-pornographic websites. 5 Another Citizen Lab study found that Oman residents were blocked from a Bollywood-related website not because it was banned in Oman, but because of upstream filtering in India, the pass-through country for a portion of Oman’s Internet traffic. 6 In Indonesia, Internet-censorship rules are applied at the level of Internet Service Providers (ISPs). The country has more than three hundred of these; what users can see online depends greatly on which one they use. 7

As censorship extends into social media and applications, inconsistencies bloom, as is famously the case in China. In some countries, a user cannot see the filtering, which displays as a “network error.” Although relatively easy to bypass and document, 8 first-generation controls have won enough acceptance to have opened the door to more expansive measures.

Second-generation controls are best thought of as deepening and extending information controls into society through laws, regulations, or requirements that force the private sector to do the state’s bidding by policing privately owned and operated networks according to the state’s demands. Second-generation controls can now be found in every region of the world, and their number is growing. Turkey is passing new laws, on the pretext of protecting national security and fighting cyber-crime, that will expand wiretapping and other surveillance and detention powers while allowing the state to censor websites without a court order. Ethiopia charged six bloggers from the Zone 9 group and three independent journalists with terrorism and treason after they covered political issues. Thailand is considering new cyber-crime laws that would grant authorities the right to access emails, telephone records, computers, and postal mail without needing prior court approval. Under reimposed martial law, Egypt has tightened regulations on demonstrations and arrested prominent bloggers, including Arab Spring icon Alaa Abd El Fattah. Saudi blogger Raif Badawi is looking at ten years in jail and 950 remaining lashes (he received the first fifty lashes in January 2015) for criticizing Saudi clerics online. Tunisia passed broad reforms after the Arab Spring, but even there a blogger has been arrested under an obscure older law for “defaming the military” and “insulting military commanders” on Facebook. Between 2008 and March 2015 (when the Supreme Court struck it down), India had a law that banned “menacing” or “offensive” social-media posts. In 2012, Renu Srinavasan of Mumbai found herself arrested merely for hitting the “like” button below a friend’s Facebook post. In Singapore, blogger and LGBT activist Alex Au was fined in March 2015 for criticizing how a pair of court cases was handled.

Second-generation controls also include various forms of “baked-in” surveillance, censorship, and “backdoor” functionalities that governments, wielding their licensing authority, require manufacturers and service providers to build into their products. Under new anti-terrorism laws, Beijing recently announced that it would require companies offering services in China to turn over encryption keys for state inspection and build into all systems backdoors open to police and security agencies. Existing regulations already require social-media companies to monitor and censor their own networks. Citizen Lab has documented that many chat applications popular in China come pre-configured with censorship and surveillance capabilities. 9 For many years, the Russian government has required telecommunications companies and ISPs to be “SORM-compliant” — SORM is the Russian acronym for the surveillance system that directs copies of all electronic communications to local security offices for archiving and inspection. In like fashion, India’s Central Monitoring System gives the government direct access to the country’s telecommunications networks. Agents can listen in on broadband phone calls, SMS messages, and email traffic, while all call-data records are archived and analyzed. In Indonesia, where BlackBerry smartphones remain popular, the government has repeatedly pressured Canada-based BlackBerry Limited to comply with “lawful-access” demands, even threatening to ban the company’s services unless BlackBerry agreed to host data on servers in the country. Similar demands have come from India, Saudi Arabia, and the United Arab Emirates. The company has even agreed to bring Indian technicians to Canada for special surveillance training. 10

Also spreading are new laws that ban security and anonymizing tools, including software that permits users to bypass first-generation blocks. Iran has arrested those who distribute circumvention tools, and it has throttled Internet traffic to frustrate users trying to connect to popular circumvention and anonymizer tools such as Psiphon and Tor. Belarus and Russia have both recently proposed making Tor and similar tools illegal. China has banned virtual private networks (VPNs) nationwide — the latest in a long line of such bans—despite the difficulties that this causes for business. Pakistan has banned encryption since 2011, although its widespread use in financial and other communications inside the country suggests that enforcement is lax. The United Arab Emirates has banned VPNs, and police there have stressed that individuals caught using them may be charged with violating the country’s harsh cyber-crime laws.

Second-generation controls include finer-grained registration and identification requirements that tie people to specific accounts or devices, or even require citizens to obtain government permission before using the Internet. Pakistan has outlawed the sale of prepaid SIM cards and demands that all citizens register their SIM cards using biometric identification technology. The Thai military junta has extended such registration rules to cover free WiFi accounts as well. China has imposed real-name registration policies on Internet and social-media accounts, and companies have dutifully deleted tens of thousands of accounts that could not be authenticated. Chinese users must also commit to respect the seven “baselines,” including “laws and regulations, the Socialist system, the national interest, citizens’ lawful rights and interests, public order, morals, and the veracity of information.” 11

By expanding the reach of laws and broad regulations, second-generation controls narrow the space left free for civil society, and subject the once “wild frontier” of the Internet to growing regulation. While enforcement may be uneven, in country after country these laws hang like dark clouds over civil society, creating a climate of uncertainty and fear.

Authoritarians on the Offensive

Third-generation controls are the hardest to document, but may be the most effective. They involve surveillance, targeted espionage, and other types of covert disruptions in cyberspace. While first-generation controls are defensive and second-generation controls probe deeper into society, third-generation controls are offensive. The best known of these are the targeted cyber-espionage campaigns that emanate from China. Although Chinese spying on businesses and governments draws most of the news reports, Beijing uses the same tactics to target human-rights, pro-democracy, and independence movements outside China. A recent four-year comparative study by Citizen Lab and ten participating NGOs found that those groups suffered the same persistent China-based digital attacks as governments and Fortune 500 companies. 12 The study also found that targeted espionage campaigns can have severe consequences including disruptions of civil society and threats to liberty. At the very least, persistent cyber-espionage attacks breed self-censorship and undermine the networking advantages that civil society might otherwise reap from digital media. Another Citizen Lab report found that China has employed a new attack tool, called “The Great Cannon,” which can redirect the website requests of unwitting foreign users into denial-of-service attacks or replace web requests with malicious software. 13

While other states may not be able to match China’s cyber-espionage or online-attack capabilities, they do have options. Some might buy off-the-shelf espionage “solutions” from Western companies such as the United Kingdom’s Gamma Group or Italy’s Hacking Team — each of which Citizen Lab research has linked to dozens of authoritarian-government clients. 14 In Syria, which is currently the site of a multi-sided, no-holds-barred regional war, security services and extremist groups such as ISIS are borrowing cyber-criminals’ targeted-attack techniques, downloading crude but effective trade-craft from open sources and then using it to infiltrate opposition groups, often with deadly results. 15 The capacity to mount targeted digital attacks is proving particularly attractive to regimes that face persistent insurgencies, popular protests, or other standing security challenges. As these techniques become more widely used and known, they create a chilling effect: Even without particular evidence, activists may avoid digital communication for fear that they are being monitored.

Third-generation controls also include efforts to aim crowd-sourced antagonism at political foes. Governments recruit “electronic armies” that can use the very social media employed by popular opposition movements to discredit and intimidate those who dare to criticize the state. 16 Such online swarms are meant to make orchestrated denunciations of opponents look like spontaneous popular expressions. If the activities of its electronic armies come under legal question or result in excesses, a regime can hide behind “plausible deniability.” Examples of pro-government e-warriors include Venezuela’s Chavista “communicational guerrillas,” the Egyptian Cyber Army, the pro-Assad Syrian Electronic Army, the pro-Putin bloggers of Russia, Kenya’s “director of digital media” Dennis Itumbi plus his bloggers, Saudi Arabia’s anti-pornography “ethical hackers,” and China’s notorious “fifty-centers,” so called because they are allegedly paid that much for each pro-government comment or status update they post.

Other guises under which third-generation controls may travel include not only targeted attacks on Internet users but wholesale disruptions of cyberspace. Typically scheduled to cluster before and during major political events such as elections, anniversaries, and public demonstrations, “just-in-time” disruptions can be as severe as total Internet blackouts. More common, however, are selective disruptions. In Tajikistan, SMS services went down for several days leading up to planned opposition rallies in October 2014. The government blamed technical errors; others saw the hand of the state at work. 17 Pakistan blocked all mobile services in its capital, Islamabad, for part of the day on 23 March 2015 in order to shield national-day parades from improvised explosive devices. 18 During the 2014 pro-democracy demonstrations in Hong Kong, China closed access to the photo-sharing site Instagram. Telecommunications companies in the Democratic Republic of Congo were ordered to shut down all mobile and SMS communications in response to anti-government protests. Bangladesh ordered a ban on the popular smartphone messaging application Viber in January 2015, after it was linked to demonstrations.

To these three generations, we might add a fourth. This comes in the form of a more assertive authoritarianism at the international level. For years, governments that favor greater sovereign control over cyberspace have sought to assert their preferences—despite at times stiff resistance—in forums such as the International Telecommunication Union (ITU), the Internet Governance Forum (IGF), the United Nations (UN), and the Internet Corporation for Assigned Names and Numbers (ICANN). 19 Although there is no simple division of “camps,” observers tend to group countries broadly into those that prefer a more open Internet and a limited role for states and those that prefer a state-led form of governance, probably under UN auspices.

The United States, the United Kingdom, Europe, and the Asian democracies line up most often behind openness, while China, Iran, Russia, Saudi Arabia, and various other non-democracies fall into the latter group. A large number of emerging-market countries, led by Brazil, India, and Indonesia, are “swing states” that can go either way. Battle lines between these opposing views were becoming sharper around the time of the December 2012 World Congress on Information Technology (WCIT) in Dubai—an event that many worried would mark the fall of Internet governance into UN (and thus state) hands. But the WCIT process stalled, and lobbying by the United States and its allies (plus Internet companies such as Google) played a role in preventing fears of a state-dominated Internet from coming true.

If recent proposals on international cyber-security submitted to the UN by China, Russia, and their allies tell us anything, future rounds of the cyber-governance forums may be less straightforward than what transpired at Dubai. In January 2015, the Beijing- and Moscow-led Shanghai Cooperation Organization (SCO) submitted a draft “International Code of Conduct for Information Security” to the UN. This document reaffirms many of the same principles as the ill-fated WCIT Treaty, including greater state control over cyber-space.

Such proposals will surely raise the ire of those in the “Internet freedom” camp, who will then marshal their resources to lobby against their adoption. But will wins for Internet freedom in high-level international venues (assuming that such wins are in the cards) do anything to stop local and regional trends toward greater government control of the online world? Writing their preferred language into international statements may please Internet-freedom advocates, but what if such language merely serves to gloss over a ground-level reality of more rather than less state cyber-authority?

It is important to understand the driving forces behind resurgent authoritarianism in cyberspace if we are to comprehend fully the challenges ahead, the broader prospects facing human rights and democracy promotion worldwide, and the reasons to suspect that the authoritarian resurgence in cyberspace will continue.

A major driver of this resurgence has been and likely will continue to be the growing impetus worldwide to adopt cyber-security and anti-terror policies. As societies come to depend ever more heavily on networked digital information, keeping it secure has become an ever-higher state priority. Data breaches and cyber-espionage attacks — including massive thefts of intellectual property — are growing in number. While the cyber-security realm is replete with self-serving rhetoric and threat inflation, the sum total of concerns means that dealing with cyber-crime has now become an unavoidable state imperative. For example, the U.S. intelligence community’s official 2015 “Worldwide Threat Assessment” put cyber-attacks first on the list of dangers to U.S. national security. 20

It is crucial to note how laws and policies in the area of cyber-security are combining and interacting with those in the anti-terror realm. Violent extremists have been active online at least since the early days of al-Qaeda several decades ago. More recently, the rise of the Islamic State and its gruesome use of social media for publicity and recruitment have spurred a new sense of urgency. The Islamic State atrocities recorded in viral beheading videos are joined by (to list a few) terror attacks such as the Mumbai assault in India (November 2008); the Boston Marathon bombings (April 2013); the Westgate Mall shootings in Kenya (September 2013); the Ottawa Parliament shooting (October 2014); the Charlie Hebdo and related attacks in Paris (January 2015); repeated deadly assaults on Shia mosques in Pakistan (most recently in February 2015); and the depredations of Nigeria’s Boko Haram.

Horrors such as these underline the value of being able to identify, in timely fashion amid the wilderness of cyberspace, those bent on violence before they strike. The interest of public-safety officials in data-mining and other high-tech surveillance and analytical techniques is natural and understandable. But as expansive laws are rapidly passed and state-security services (alongside the private companies that work for and with them) garner vast new powers and resources, checks and balances that protect civil liberties and guard against the abuse of power can be easily forgotten. The adoption by liberal democracies of sweeping cyber-crime and anti-terror measures without checks and balances cannot help but lend legitimacy and normative support to similar steps taken by authoritarian states. The headlong rush to guard against extremism and terrorism worldwide, in other words, could end up providing the biggest boost to resurgent authoritarianism.

Regional Security Cooperation as a Factor

While international cyberspace conferences attract attention, often overlooked are regional security forums. The latter are the places where cyber-security coordination happens. They are focused sites of learning and norm promotion where ideas, technologies, and “best” practices are exchanged. Even countries that are otherwise rivals can and do agree and cooperate within the context of such security forums.

The SCO, to name one prominent regional group, boasts a well-developed normative framework that calls upon its member states to combat the “three evils” of terrorism, separatism, and extremism. The upshot has been information controls designed to bolster regime stability against opposition groups and the claims of restive ethnic minorities. The SCO recently held joint military exercises in order to teach its forces how to counter Internet-enabled opposition of the sort that elsewhere has led to “color revolutions.” The Chinese official who directs the SCO’s “Regional Anti-Terrorist Structure” (RATS) told the UN Counter-Terrorism Committee that RATS had “collected and distributed to its Member States intelligence information regarding the use of the Internet by terrorist groups active in the region to promote their ideas.” 21

Such information may include intelligence on individuals involved in what international human-rights law considers legitimate political expression. Another Eurasian regional security organization in which Russia plays a leading role, the Collective Security Treaty Organization (CSTO), has announced that it will be creating an “international center to combat cyber threats.” 22 Both the SCO and the CSTO are venues where commercial platforms for both mass and targeted surveillance are sold, shared, and exchanged. The telecommunications systems and ISPs in each of the five Central Asian republics are all “SORM-compliant” — ready to copy all data routinely to security services, just as in Russia. The SCO and CSTO typically carry out most of their deliberations behind closed doors and publish little in English, meaning that much of what they do escapes the attention of Western observers and civil society groups.

The regional cyber-security coordination undertaken by the Gulf Cooperation Council (GCC) offers another example. In 2014, the GCC approved a long-awaited plan to form a joint police force, with headquarters in Abu Dhabi. While the fights against drug dealing and money laundering are to be among the tasks of this Gulf Interpol, the new force will also have the mission of battling cyber-crime. In the Gulf monarchies, however, online offenses are defined broadly and include posting items that can be taken as critical of royal persons, ruling families, or the Muslim religion. These kingdoms and emirates have long records of suppressing dissent and even arresting one another’s political opponents. Whatever its other law-enforcement functions, the GCC version of Interpol is all too likely to become a regional tool for suppressing protest and rooting out expressions of discontent.

“Flying under the radar,” with little flash, few reporters taking notice, and lots of closed meetings carried on in local languages by like-minded officials from neighboring authoritarian states, organizations concerned with regional governance and security attract far less attention than UN conferences that seem poised to unleash dramatic Web takeovers which may never materialize. Yet it is in these obscure regional corners that the key norms of cyberspace controls may be taking shape and taking hold.

The Cyber-security Market as a Factor

A third driving factor has to do with the rapid growth of digital connectivity in the global South and among the populations of authoritarian regimes, weak states, and flawed democracies. In Indonesia the number of Internet users increases each month by a stunning 800,000. In 2000, Nigeria had fewer than a quarter-million Internet users; today, it has 68 million. The Internet-penetration rate in Cambodia rose a staggering 414 percent from January 2014 to January 2015 alone. By the end of 2014, the number of mobile-connected devices exceeded the number of people on Earth. Cisco Systems estimates that by 2019, there will be nearly 1.5 mobile devices per living human. The same report predicts that the steepest rates of growth in mobile-data traffic will be found in the Middle East and Africa. 23

Booming digital technology is good for economic growth, but it also creates security and governance pressure points that authoritarian regimes can squeeze. We have seen how social media and the like can mobilize masses of people instantly on behalf of various causes (pro-democratic ones included). Yet many of the very same technologies can also be used as tools of control. Mobile devices, with their portability, low cost, and light physical-infrastructure requirements, are how citizens in the developing world connect. These handheld marvels allow people to do a wealth of things that they could hardly have dreamt of doing before. Yet all mobile devices and their dozens of installed applications emit reams of highly detailed information about people’s movements, social relationships, habits, and even thoughts — data that sophisticated agencies can use in any number of ways to spy, to track, to manipulate, to deceive, to extort, to influence, and to target.

The market for digital spyware described earlier needs to be seen not only as a source of material and technology for countries that demand them, but as an active shaper of those countries’ preferences, practices, and policies. This is not to say that companies are telling policy makers what governments should do. Rather, companies and the services that they offer can open up possibilities for solutions, be they deep-packet inspection, content filtering, cellphone tracking, “big-data” analytics, or targeted spyware. SkyLock, a cellphone-tracking solution sold by Verint Systems of Melville, New York, purports to offer governments “a cost-effective, new approach to obtaining global location information concerning known targets.” Company brochures obtained by the Washington Post include “screen shots of maps depicting location tracking in what appears to be Mexico, Nigeria, South Africa, Brazil, Congo, the United Arab Emirates, Zimbabwe, and several other countries.” 24

Large industry trade fairs where these systems are sold are also crucial sites for learning and information exchange. The best known of these, the Intelligence Support Systems (ISS) events, are run by TeleStrategies, Incorporated, of McLean, Virginia. Dubbed the “Wiretappers’ Ball” by critics, ISS events are exclusive conventions with registration fees high enough to exclude most attendees other than governments and their agencies. As one recent study noted, ISS serves to connect registrants with surveillance-technology vendors, and provides training in the latest industry practices and equipment. 25 The March 2014 ISS event in Dubai featured one session on “Mobile Location, Surveillance and Signal Intercept Product Training” and another that promised to teach attendees how to achieve “unrivaled attack capabilities and total resistance to detection, quarantine and removal by any endpoint security technology.” 26 Major corporate vendors of lawful-access, targeted-surveillance, and data-analytic solutions are fixtures at ISS meetings and use them to gather clients.

As cyber-security demands grow, so will this market. Authoritarian policy makers looking to channel industrial development and employment opportunities into paths that reinforce state control can be expected to support local innovation. Already, schools of engineering, computer science, and data-processing are widely seen in the developing world as viable paths to employment and economic sustainability, and within those fields cyber-security is now a major driving force. In Malaysia, for example, the British defense contractor BAE Systems agreed to underwrite a degree-granting academic program in cyber-security in partial fulfillment of its “defense offsets” obligation. 27 India’s new “National Cyber Security Policy” lays out an ambitious strategy for training a new generation of experts in, among other things, the fine points of “ethical hacking.” The goal is to give India an electronic army of high-tech specialists a half-million strong. In a world where “Big Brother” and “Big Data” share so many of the same needs, the political economy of cyber-security must be singled out as a major driver of resurgent authoritarianism in cyberspace.

Edward Snowden as a Factor

Since June 2013, barely a month has gone by without new revelations concerning U.S. and allied spying—revelations that flow from the disclosures made by former NSA contractor Edward Snowden. The disclosures fill in the picture of a remarkable effort to marshal extraordinary capacities for information control across the entire spectrum of cyberspace. The Snowden revelations will continue to fuel an important public debate about the proper balance to be struck between liberty and security.

While the value of Snowden’s disclosures in helping to start a long-needed discussion is undeniable, the revelations have also had unintended consequences for resurgent authoritarianism in cyberspace. First, they have served to deflect attention away from authoritarian-regime cyber-espionage campaigns such as China’s. Before Snowden fled to Hong Kong, U.S. diplomacy was taking an aggressive stand against cyber-espionage. Individuals in the pay of the Chinese military and allegedly linked to Chinese cyber-espionage were finding themselves under indictment. Since Snowden, the pressure on China has eased. Beijing, Moscow, and others have found it easy to complain loudly about a double standard supposedly favoring the United States while they rationalize their own actions as “normal” great-power behavior and congratulate themselves for correcting the imbalance that they say has beset cyberspace for too long.

Second, the disclosures have created an atmosphere of suspicion around Western governments’ intentions and raised questions about the legitimacy of the “Internet Freedom” agenda backed by the United States and its allies. Since the Snowden disclosures—revealing top-secret exploitation and disruption programs that in some respects are indistinguishable from those that Washington and its allies have routinely condemned — the rhetoric of the Internet Freedom coalition has rung rather hollow. In February 2015, it even came out that British, Canadian, and U.S. signals-intelligence agencies had been “piggybacking” on China-based cyber-espionage campaigns—stealing data from Chinese hackers who had not properly secured their own command-and-control networks. 28

Third, the disclosures have opened up foreign investment opportunities for IT companies that used to run afoul of national-security concerns. Before Snowden, rumors of hidden “backdoors” in Chinese-made technology such as Huawei routers put a damper on that company’s sales. Then it came out that the United States and allied governments had been compelling (legally or otherwise) U.S.-based tech companies to do precisely what many had feared China was doing—namely, installing secret backdoors. So now Western companies have a “Huawei” problem of their own, and Huawei no longer looks so bad.

In the longer term, the Snowden disclosures may have the salutary effect of educating a large number of citizens about mass surveillance. In the nearer term, however, the revelations have handed countries other than the United States and its allies an opportunity for the self-interested promotion of local IT wares under the convenient rhetorical guise of striking a blow for “technological sovereignty” and bypassing U.S. information controls.

There was a time when authoritarian regimes seemed like slow-footed, technologically challenged dinosaurs whom the Information Age was sure to put on a path toward ultimate extinction. That time is no more—these regimes have proven themselves surprisingly (and dismayingly) light-footed and adaptable. National-level information controls are now deeply entrenched and growing. Authoritarian regimes are becoming more active and assertive, sharing norms, technologies, and “best” practices with one another as they look to shape cyberspace in ways that legitimize their national interests and domestic goals.

Sadly, prospects for halting these trends anytime soon look bleak. As resurgent authoritarianism in cyberspace increases, civil society will struggle: A web of ever more fine-grained information controls tightens the grip of unaccountable elites. Given the comprehensive range of information controls outlined here, and their interlocking sources deep within societies, economies, and political systems, it is clear that an equally comprehensive approach to the problem is required. Those who seek to promote human rights and democracy through cyberspace will err gravely if they stick to high-profile “Internet Freedom” conferences or investments in “secure apps” and digital training. No amount of rhetoric or technological development alone will solve a problem whose roots run this deep and cut across the borders of so many regions and countries.

What we need is a patient, multi-pronged, and well-grounded approach across numerous spheres, with engagement in a variety of venues. Researchers, investigative journalists, and others must learn to pay more attention to developments in regional security settings and obscure trade fairs. The long-term goal should be to open these venues to greater civil society participation and public accountability so that considerations of human rights and privacy are at least raised, even if not immediately respected.

The private sector now gathers and retains staggering mountains of data about countless millions of people. It is no longer enough for states to conduct themselves according to the principles of transparency, accountability, and oversight that democracy prizes; the companies that own and operate cyberspace — and that often come under tremendous pressure from states — must do so as well. Export controls and “smart sanctions” that target rights-offending technologies without infringing on academic freedom can play a role. A highly distributed, independent, and powerful system of cyberspace verification should be built on a global scale to monitor for rights violations, dual-use technologies, targeted malware attacks, and privacy breaches. A model for such a system might be found in traditional arms-control verification regimes such as the one administered by the Organization for the Prohibition of Chemical Weapons. Or it might come from the research of academic groups such as Citizen Lab, or the setup of national computer emergency-response teams (CERTs) once these are freed from their current subordination to parochial national-security concerns. 29 However it is ultimately constituted, there needs to be a system for monitoring cyberspace rights and freedoms that is globally distributed and independent of governments and the private sector.

Finally, we need models of cyberspace security that can show us how to prevent disruptions or threats to life and property without sacrificing liberties and rights. Internet-freedom advocates must reckon with the realization that a free, open, and secure cyberspace will materialize only within a framework of democratic oversight, public accountability, transparent checks and balances, and the rule of law. For individuals living under authoritarianism’s heavy hand, achieving such lofty goals must sound like a distant dream. Yet for those who reside in affluent countries, especially ones where these principles have lost ground to anti-terror measures and mass-surveillance programs, fighting for them should loom as an urgent priority and a practically achievable first step on the road to remediation.


  1. Sam Kimball, “After the Arab Spring, Surveillance in Egypt Intensifies,” Intercept, 9 March 2015, egypt-intensifies.
  2. Steven Levitsky and Lucan A. Way, “The Rise of Competitive Authoritarianism,” Journal of Democracy 13 (April 2002): 51–65.
  3. Ronald Deibert and Rafal Rohozinski, “Beyond Denial: Introducing Next Generation Information Access Controls.” Note that the “generations” of controls are not assumed to be strictly chronological: Governments can skip generations, and several generations can exist together. Rather, they are a useful heuristic device for understanding the evolution of information controls.
  4. “YouTube to Remain Blocked ‘Indefinitely’ in Pakistan: Officials,” Dawn (Islamabad), 8 February 2015,
  5. Bennett Haselton, “Blue Coat Errors: Sites Miscategorized as ‘Pornography,’” Citizen Lab, 10 March 2014,
  6. “Routing Gone Wild: Documenting Upstream Filtering in Oman via India,” Citizen Lab, 12 July 2012,
  7. “IGF 2013: Islands of Control, Island of Resistance: Monitoring the 2013 Indonesian IGF (Foreword),” Citizen Lab, 20 January 2014,
  8. Masashi Crete-Nishihata, Ronald J. Deibert, and Adam Senft, “Not by Technical Means Alone: The Multidisciplinary Challenge of Studying Information Controls,” IEEE Internet Computing 17 (May–June 2013): 34–41.
  9. See
  10. Amol Sharma, “RIM Facility Helps India in Surveillance Efforts,” Wall Street Journal, 28 October 2011.
  11. Rogier Creemers, “New Internet Rules Reflect China’s ‘Intent to Target Individuals Online,’” Deutsche Welle, 2 March 2015.
  12. Citizen Lab, “Communities @ Risk: Targeted Digital Threats Against Civil Society,” 11 November 2014.
  13. Bill Marczak et al., “China’s Great Cannon,” Citizen Lab, 10 April 2015,
  14. “For Their Eyes Only: The Commercialization of Digital Spying,” Citizen Lab, 30 April 2013.
  15. “Malware Attack Targeting Syrian ISIS Critics,” Citizen Lab, 18 December 2014,
  16. Seva Gunitzky, “Corrupting the Cyber-Commons: Social Media as a Tool of Autocratic Stability,” Perspectives on Politics 13 (March 2015): 42–54.
  17. RFE/RL Tajik Service, “SMS Services Down in Tajikistan After Protest Calls,” Radio Free Europe/Radio Liberty, 10 October 2014,
  18. See “No Mobile Phone Services on March 23 in Islamabad,” Daily Capital (Islamabad), 22 March 2015,
  19. Ronald J. Deibert and Masashi Crete-Nishihata, “Global Governance and the Spread of Cyberspace Controls,” Global Governance 18 (2012): 339–61, http://citizenlab.org/cybernorms2012/governance.pdf.
  20. See James R. Clapper, “Statement for the Record: Worldwide Threat Assessment of the US Intelligence Community,” Senate Armed Services Committee, 26 February 2015,
  21. See “Counter-Terrorism Committee Welcomes Close Cooperation with the Regional Anti-Terrorist Structure of the Shanghai Cooperation Organization,” 24 October 2014,
  22. See Joshua Kucera, “SCO, CSTO Increasing Efforts Against Internet Threats,” The Bug Pit, 16 June 2014,
  23. See Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update 2014–2019,” white paper, 3 February 2015,
  24. Craig Timberg, “For Sale: Systems That Can Secretly Track Where Cellphone Users Go Around the Globe,” Washington Post, 24 August 2014.
  25. Collin Anderson, “Monitoring the Lines: Sanctions and Human Rights Policy Considerations of TeleStrategies ISS World Seminars,” 31 July 2014,
  26. Anderson, “Monitoring the Lines.”
  27. See Jon Grevatt, “BAE Systems Announces Funding of Malaysian Cyber Degree Programme,” IHS Jane’s 360, 5 March 2015,
  28. Colin Freeze, “Canadian Agencies Use Data Stolen by Foreign Hackers, Memo Reveals,” Globe and Mail (Toronto), 6 February 2015.
  29. For one proposal along these lines, see Duncan Hollis and Tim Maurer, “A Red Cross for Cyberspace,” Time, 18 February 2015.

Schneier, Bruce - Backdoors will not stop ISIS, but maybe outlawing general purpose computers will - DEFCON 23 20150807

Take away from Bruce Schneier - Q & A - DEFCON 23 20150807 - Backdoors will not stop ISIS, but maybe outlawing general purpose computers will

Questioner: I wanted to see your opinion on the backdoor that Obama wants.

Schneier: Which one does he want ... You know so, so I'm not sure Obama personally has an opinion here! [Laughter] ...

It's interesting. This is the same backdoor that the FBI has been wanting since the mid-90s. In the mid-90s we called it "The Crypto War" - now we call that "The First Crypto War." So Number Three - I'm done - is you guys; I only do two Crypto Wars per lifetime.

FBI Director Comey gave a really interesting talk - a Q & A at the Aspen Security Forum. Actually, I recommend listening to these talks. These are very high-level - mostly government - discussions about security, cyber security, national security - really interesting stuff. He was interviewed, I think, by Wolf Blitzer, who actually asked a great question - saying, what did he say, "This is kind of personal, but why don't you like the term 'lone-wolf terrorist'?" That was kind of funny.

He was talking about the "going dark" problem and the need for a backdoor, and this is the scenario he is worried about. And he's very explicit. It is an ISIS scenario. ISIS is a new kind of adversary, in the government's eyes, because of the way it uses social media. Unlike Al Qaeda, which was your normal terrorist organization that would recruit terrorists to go to Afghanistan, get trained, and come back, ISIS does it with Twitter. And this freaks the government out.

So, this story - and they swear up and down this happens - is that ISIS is really good at social media, at Twitter and YouTube and various other websites. They get people to talk to them, who are in the US - like you guys, except, you know, a little less socially-adept, and maybe kind of a little crazier, and a little, you know. But they find these marginal people, and they talk to them, and the FBI can monitor this. And "Go FBI! Rah-rah!" But then, they say "Go use this secure App." And then this radicalized American does, they talk more securely, and the FBI can't listen. And then this ... and then dot-dot-dot-explosion. [Laughter] So this is the scenario that the FBI is worried about - very explicitly. And they've used this story again and again. And they say "This is real. This is happening." OK? Now, ...

It's sort of interesting. If this is true, I mean, let's take it as read that it's true. The other phrase that they use, it's actually a new phrase, that I recommend, they talk about the time between "flash" to "bang". "Flash" is when they find the guy, "bang" is when the explosion happens. And that time is decreasing. So the FBI has to be able to monitor. So they are pissed off that things like iMessage and other Apps cannot be monitored, even if they get a warrant. And this really bugs them! "I have a warrant, dammit! Why can't I listen? I can get the metadata. I can't listen."

So, if you think about that as a scenario - and assume that it's true - it is not a scenario that any kind of mandatory backdoor solves. Because the problem isn't that the main security Apps are encrypted. The problem is, there exists one security App that is encrypted. Because the ISIS handler can say "Go download Signal. Go download Mujaheddin Secrets. Go download this random file encryption App I've just uploaded on GitHub ten minutes ago."

So the problem is not what he thinks it is. The problem is general-purpose computers. The problem is an international market in software.

So, I think the backdoor is a really bad idea for a whole bunch of reasons. I've written papers about this. But what I've come to realize in the past few weeks is - it's not going to solve the problem the FBI claims it has. And I think we need to start talking about that, because otherwise we're going to get some really bad policy.
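Schneier's point - that mandating backdoors in mainstream apps cannot work while anyone can write and distribute an encryption program - can be made concrete. The toy sketch below is not from the talk and is emphatically not audited cryptography; it simply shows that a serviceable stream cipher (an HMAC-SHA256 keystream XORed over the plaintext, with a random nonce) takes a page of standard-library Python. Any "random file encryption App uploaded to GitHub ten minutes ago" needs nothing more than this.

```python
import hashlib
import hmac
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by HMAC-hashing key, nonce, and a counter."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])


def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a fresh keystream; prepend the nonce."""
    nonce = os.urandom(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))


def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Recover the nonce, regenerate the keystream, and XOR it back out."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))
```

The design is deliberately minimal: no key exchange, no authentication tag, no protection against misuse. That is exactly the argument - even this throwaway sketch is opaque to a wiretap, and no backdoor mandate on commercial apps reaches it.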

Ahmed, Nafeez Mosaddeq - ISIS wants to destroy the 'grey zone'. Here's how we defend it - Open Democracy 20151116

After the Paris attacks, it is imperative that we safeguard this arena of co-existence, where people of all faith and none remain unified on the principles of common humanity.

Credit: Mauro Biani ( Manifesto. All rights reserved.
At the end of last year, as politicians and pundits cheered on coalition airstrikes in Syria, I wrote this:

“The war on ISIS has already been lost. As regional instability escalates predictably as a direct consequence of the US-UK led non-strategy, ISIS will become stronger, and reactionary terrorist violence against western targets will proliferate – in turn fuelling reactionary and militant responses from western foreign policy establishments.”

Less than a year later, 129 people have been confirmed dead, and 352 injured, from terrorist attacks in Paris.

‘Islamic State’ (ISIS) acolytes conducted a sophisticated operation involving three coordinated teams, striking multiple targets simultaneously, demonstrating a considerable degree of training and planning.

Yet the airstrikes that began last year had been justified by our leaders precisely on the pretext that they would be necessary to prevent ISIS from striking the west.

Although the attacks appear to have been triggered by the drone strike against ‘Jihadi John’, their sophistication reveals that preparations for the operation had been going on for months, at least.

ISIS, in other words, activated sleeper cells with a longstanding presence in France.

But the attacks in Paris must not be viewed in isolation.

So far, world governments have responded as if the ISIS attack came entirely out of the blue, “an act of war” in Hollande’s words, targeted “against France, against the values that we defend everywhere in the world, against what we are: a free country that means something to the whole planet.”

While there is truth to Hollande’s words, they are also misleading.

The Paris attacks have occurred on the tail-end of an escalating series of massacres.

On 22 May, an ISIS militant blew himself up at a mosque in Qatif, Saudi Arabia, killing 21 people.

On 20 July, a female suicide bomber killed 31 students in Suruc, a Turkish city close to the Syrian border.

On 13 August, an ISIS bomb detonated at a farmers’ market in an impoverished district of Baghdad killed 80 people, and wounded over 200.


On 2 September, an ISIS bombing killed 28 people and wounded 75 at a mosque in northern Sanaa, the capital of Yemen.

Another suicide bomb attack at Sanaa’s al-Balili mosque three weeks later took the lives of at least 25 people, and injured over 36.

On 10 October, ISIS suicide bombers in the Turkish capital, Ankara, killed 102, and wounded 400.

The night before the attacks on Paris, ISIS suicide bombers slaughtered 43 people in Beirut, and injured 250.

The following morning, an ISIS suicide bomber killed at least 19 people and injured 41 at a funeral in Baghdad.

Then, that evening, ISIS militants coordinated the attacks on Paris.

ISIS’s choice of targets reveals a range of ideological motives – sectarian targeting of minorities like Shi’as, Kurds and Yazidis; striking in the heart of Muslim regimes that have joined the anti-ISIS coalition; as well as demonstrating to western publics the punitive consequences of attacking ISIS by hitting them at their most vulnerable, in bars, restaurants and music venues.

The goal, of course, is to inflict trauma, fear, paranoia, suspicion, panic and terror – but there is a particularly twisted logic as part of this continuum of violence, which is to draw the western world into an apocalyptic civilizational Armageddon with ‘Islam.’

Islamic State flag. Wikimedia Commons. Public domain.
ISIS recognizes that it has only marginal support amongst Muslims around the world. The only way it can accelerate recruitment and strengthen its territorial ambitions is twofold: first, by demonstrating to Islamist jihadist networks that there is now only one credible terror game in town capable of pulling off spectacular terrorist attacks in the heart of the west; and second, by worsening conditions of life for Muslims all over the world so as to draw them into joining or supporting ISIS.

Both these goals depend on two constructs: the ‘crusader’ civilisation of the ‘kuffar’ (disbelievers) pitted against the authentic ‘Islamic’ utopia of ISIS.

In their own literature shortly after the Charlie Hebdo attacks, ISIS shamelessly drew on the late Osama bin Laden’s endorsement of the words of President George W. Bush, to justify this apocalyptic vision: “The world today is divided into two camps. Bush spoke the truth when he said, ‘either you are with us or you are with the terrorists.’ Meaning either you are with the crusade or you are with Islam.”

Continuing in its English-language magazine, Dabiq, ISIS forecasted the “extinction” of the “grey zone” between these two camps:

“One of the first matters renounced by the hypocrites abandoning the grayzone and fleeing to the camp of apostasy and kufr after the operations in Paris is the clear-cut obligation to kill those who mock the Messenger [Muhammad]. The evidences [religious justification based on Islamic sources] for this issue are so abundant and clear, and yet some apostates, who abandoned the grayzone, claimed that the operations in Paris contradicted the teachings of Islam!... There is no doubt that such deeds are apostasy, that those who publicly call to such deeds in the name of Islam and scholarship are from the du’āt (callers) to apostasy, and that there is great reward awaiting the Muslim in the Hereafter if he kills these apostate imāms…”

The strategy behind this call to “kill” apostate Muslims who reject ISIS is also laid out candidly: to terrorise western countries into genocidal violence against their own Muslim populations:

“The Muslims in the West will quickly find themselves between one of two choices, they either apostatize and adopt the kufrī [infidel] religion propagated by Bush, Obama, Blair, Cameron, Sarkozy, and Hollande in the name of Islam so as to live amongst the kuffār [infidels] without hardship, or they perform hijrah [emigrate] to the Islamic State and thereby escape persecution from the crusader governments and citizens... Muslims in the crusader countries will find themselves driven to abandon their homes for a place to live in the Khilāfah, as the crusaders increase persecution against Muslims living in Western lands so as to force them into a tolerable sect of apostasy in the name of 'Islam' before forcing them into blatant Christianity and democracy.”

While Hollande’s reactionary declaration of war is understandable, it falls into the ideological trap laid by ISIS. France’s new state of emergency grants the government extraordinary powers that effectively put an end to democratic accountability, and give law-enforcement and security agencies unaccountable authority to run amok.


This includes being able to enforce curfews, close public spaces, and even exert control of media. Authorities can now “prohibit passage of vehicles or people,” establish “protection or security zones, where people’s presence is regulated,” exclude from a public space “any person seeking to obstruct, in any way, the actions of the public authorities,” and detain anyone in their homes “whose activity appears dangerous for public security and order.”

The problem is in the open-ended way such vague precepts can be interpreted and executed. Obstructing “in any way” the actions of the state, or activity that “appears dangerous” for “security and order” could, crucially, be used to shut down public criticisms of the French government’s response to the Paris attacks.

Dissent against past or present French foreign and counter-terror policies can easily be construed as “dangerous” or obstructive to those policies. The language also perpetuates the Bush-era ‘with us or against us’ mantra, which ISIS sees as central to its agenda of fracturing what it calls the "grey zone."

Such a sweeping approach to countering ‘extremism’ – interpreted essentially as any ideological threat to the state – has already fuelled social polarization in Britain, where the ‘Prevent’ duty, for instance, is being used to police the thoughts of children as young as three years old.

Leaked government training documents reveal that the British government’s Prevent programme views political activism in general as a potential ‘extremist’ threat to the state’s hegemonic construct of ‘British values’, including environmental, animal rights and anti-nuclear campaigning. These measures are already going some way to fulfil ISIS’s objective of eroding the "grey zone" in the west.

According to Yahya Birt, an academic at the University of Leeds who is part of #EducationNotSurveillance – a national network of parents, teachers, educationalists, activists and academics – cases of unwarranted targeting of Muslim students under the Prevent duty are becoming legion.

“Muslim students are being profiled disproportionately under the government’s mandatory programme simply for displaying an interest in their own faith, or for holding political opinions critical of government foreign policy,” Birt told me. “Far from upholding democratic values, the programme is eroding them, and making perfectly normal, decent British Muslim citizens feel that they are under siege.”

Documented cases include a fifteen-year-old Muslim boy being questioned by police officers on his views about ISIS simply for wearing a ‘Free Palestine’ badge to school and handing out leaflets calling for sanctions on Israel.  The officers told him that he had “terrorist-like beliefs”, and warned him against speaking about his views in school. Another Muslim child was questioned after a classroom lesson about ISIS in which he aired his support for environmental activism.

In the US and France, similar programmes are also underway.

David Frum. Flickr/Policy Exchange. Some rights reserved.
Neoconservatives on both sides of the Atlantic, however, want more.

David Frum, a former Bush speechwriter turned senior editor at the Atlantic, took to Twitter to demand the forcible mass deportation of Arabs who had migrated to Europe over the past two years.

In Britain, Douglas Murray, a director at the Henry Jackson Society in London, told his fellow guests on BBC Sunday Morning Live that “any percentage of Muslims you like” in Britain are ISIS sympathisers.

In a blog in the Spectator the day before, he had claimed: “Islam is not a peaceful religion. No religion is, but Islam is especially not.” Though acknowledging that there are “many peaceful verses in the Quran which – luckily for us – the majority of Muslims live by,” Islam, he went on, is “by no means, only a religion of peace” and “this is the verifiable truth based on the texts.”

Murray’s conception of ‘Us’, it seems, does not include ‘Them’: “Muslims” who, he believes, are not ISIS suicide bombers purely because they follow their faith selectively.

Islamic theologians who specialize in those very texts unanimously disagree with Murray.

But that matters not, for Murray has previously endorsed the very same policies of forced mass expulsion of European Muslims advocated by Frum – not entirely surprising given that Murray praises Frum profusely in his book, Neoconservatism: Why We Need It (2006).

Pundits like Frum and Murray provide a critically powerful PR service for ISIS. Parading themselves as liberals seeking to defend western civilisation, they call for precisely what ISIS wants: the equation of ‘Islam’ with the ‘Islamic State’; the impossibility of ‘Islam’ co-existing peacefully with ‘the West’; the inherent threat posed by Muslims residing in the west due to their faith; and the need to therefore discriminate against and persecute Muslims in particular.


This sort of far-right sympathizing subsists in direct symbiosis with ISIS’s divisive ideology. It also serves to obscure the deeper, more uncomfortable reality that ISIS has emerged and thrived precisely in the context of the ‘war on terror.’

When Hollande declared that the Paris massacre constituted an “act of war”, he appeared to have forgotten that we have been engaged in perpetual war for the last decade and a half, with no end in sight.

The kneejerk relapse to the familiar is understandable – more surveillance, more airstrikes, more thought-policing. Yet it is merely a reversion to what we think we know, rather than a recognition that what we think we know has clearly failed.

It is psychologically easier to frame the Paris attacks as even further evidence of the unfathomable evil of ‘Them’, which must be even more ruthlessly flushed out by ‘Us’.

But the truth staring us in the face is that the Paris attacks offer incontrovertible proof that all our efforts at ruthlessly flushing out terror through mass surveillance, drone strikes, air strikes, ground troops, torture, rendition, the Prevent agenda, and so on and so forth, have produced the opposite result.

AC-130H Howitzer. Flickr/US Air Force. Some rights reserved.
What we really need is a fundamental re-assessment of everything we have done since 9/11, a full-on, formal international public inquiry into the abject failure of the ‘war on terror.’

The facts, which most pundits and politicians continue to avoid mentioning, speak for themselves. We have spent well over $5 trillion on waging the ‘war on terror’, not just in Iraq, Afghanistan and Pakistan, but across the Middle East and central Asia. Over that period, US State Department data shows that terror attacks have skyrocketed by 6,500 percent, while the number of casualties from terror attacks has increased by 4,500 percent.

2004 terrorism estimates from CIA figures.
Journalist Paul Gottinger, who analysed the data, noted that spikes in these figures coincided with military intervention: “…from 2007 to 2011 almost half of all the world’s terror took place in Iraq or Afghanistan – two countries being occupied by the US at the time.” And in 2014, he found, “74 percent of all terror-related casualties occurred in Iraq, Nigeria, Afghanistan, Pakistan, or Syria. Of these five, only Nigeria did not experience either US air strikes or a military occupation in that year.”

Simultaneously, even as the US-led anti-ISIS coalition has accelerated attacks on the group in Iraq and Syria, the group has only grown in power. Latest figures suggest the group now has some 80,000 fighters at least, up from last year’s estimates of around 20,000 to 31,500.

It would be naïve in the extreme, then, to pretend that the rise of ISIS has nothing to do with the string of failed or failing states that have been wrought in the region in the aftermath of such interventions, in Afghanistan, Iraq, Libya and Syria.

But more than that, there is the far more uncomfortable question of the regional geopolitics that continues to feed ISIS under the nose of coalition airstrikes.


The leading players in the anti-ISIS coalition, many of whom are Muslim regimes considered to be staunch allies of the west, have provided billions of dollars of funding and military support to the most extreme Islamist militants in Syria.

Saudi Arabia, Qatar, the UAE, Kuwait and Turkey played lead roles in funneling support to groups affiliated with al-Qaeda, including the ISIS precursors al-Qaeda in Iraq and al-Qaeda in Syria (Jabhat al-Nusra), in their western-backed bid to oust Bashar al-Assad.

Due to porous links between some Free Syrian Army (FSA) rebels, other Islamist groups like al-Nusra and Ahrar al-Sham, and ISIS, there have been prolific weapons transfers from ‘moderate’ to Islamist militant groups, to the extent that the German journalist Jürgen Todenhöfer, who spent 10 days inside the Islamic State, reported last year that ISIS is being “indirectly” armed by the west: “They buy the weapons that we give to the Free Syrian Army, so they get western weapons – they get French weapons… I saw German weapons, I saw American weapons.”

Meanwhile, it is not even clear whether our own allies in the anti-ISIS coalition have stopped funding the terror entity. In his testimony before the Senate Armed Services Committee in September 2014, General Martin Dempsey, then chairman of the US Joint Chiefs of Staff, was asked by Senator Lindsey Graham whether he knew of “any major Arab ally that embraces ISIL”. General Dempsey replied: “I know major Arab allies who fund them.”

Senator Graham, clearly taken aback by the blunt response, quickly attempted to play down the damning implications: “they were tried [sic] to beat Assad. I think they realise the folly of their ways. Let’s don’t [sic] taint the Mideast unfairly.”

Never mind that the most senior US military official confirms the Pentagon’s full awareness that its own allies in the anti-ISIS coalition are simultaneously “funding” ISIS while purportedly bombing the group.

Paris tribute. Demotix/David Pauwels. All rights reserved.
Such linkages between our geopolitical allies and the terrorists we are purportedly fighting in Iraq-Syria came to the fore when it emerged that Syrian passports discovered near the bodies of two of the suspected Paris attackers were fake.

Police sources in France had told Channel 4 News that the passports were likely forged in Turkey.

French officials now concede that one of the suicide bombers, Omar Ismail Mostefai, had been on a “watch list” as a “potential security threat” in 2010, and was known to have links with “radical Islam.”

But according to a Turkish official, Turkish intelligence had tipped off French authorities “twice” about Mostefai before the Paris attacks.

Earlier this year, the Turkish daily Today’s Zaman reported that “more than 100,000 fake Turkish passports” had been given to ISIS. Erdogan’s government, the newspaper added, “has been accused of supporting the terrorist organization by turning a blind eye to its militants crossing the border and even buying its oil… Based on a 2014 report, Sezgin Tanrıkulu, deputy chairman of the main opposition Republican People's Party (CHP) said that ISIL terrorists fighting in Syria have also been claimed to have been treated in hospitals in Turkey.”

But it is far worse than that. A senior western official familiar with a large cache of intelligence obtained this summer told the Guardian that “direct dealings between Turkish officials and ranking ISIS members was now ‘undeniable’”.


The same official confirmed that Turkey is not just supporting ISIS, but also other jihadist groups, including Ahrar al-Sham and Jabhat al-Nusra, al-Qaeda’s affiliate in Syria. “The distinctions they draw [with other opposition groups] are thin indeed,” said the official. “There is no doubt at all that they militarily cooperate with both.”

Turkey has played a key role in facilitating the life-blood of ISIS’s expansion: black market oil sales. Senior political and intelligence sources in Turkey, Iraq, and the Kurdistan Regional Government confirm that Turkish authorities have actively facilitated ISIS oil sales through the country.

ISIS, in other words, is state-sponsored – indeed, sponsored by purportedly western-friendly regimes in the Muslim world who are integral to the anti-ISIS coalition. Turkey, for instance, plays a central role in both the CIA and Pentagon-run rebel training and assistance programmes.

To what extent, then, did our unquestionable geopolitical alliance with Turkey, our unwavering commitment to empowering allies like Turkey to fund Islamist militants of their choice in Syria, contribute to the freedom of movement those militants used to execute the Paris operation?

Gare de Lyon. Flickr/Jon Siegel. Some rights reserved.
All this calls for a complete re-think of our approach to terrorism. We require, urgently, an international public inquiry into the colossal failure of the strategies deployed in the ‘war on terror.’

How has over $5 trillion succeeded only in permitting an extremist terror-state to conquer a third of Iraq and Syria, while carrying out a series of assaults on cities across the region and in the heart of Europe?

The re-assessment must accompany concrete measures, now.

First and foremost, our alliances with terror-sponsoring dictatorships across the Muslim world must end. All the talk of making difficult decisions is meaningless if we would rather sacrifice civil liberties than sacrifice profit-oriented investments in brutal autocracies like Saudi Arabia, which has exploited western dependence on its oil resources to export Islamist extremism around the world.

Addressing those alliances means taking decisive action to enforce punitive measures in terms of the financing of Islamist militants, the facilitation of black-market ISIS oil sales, and the export of narrow extremist ideologies. Without this, military experts can give as much lip-service to ‘draining the swamp’ as they like – it means nothing if we think draining it means using a few buckets to fling out the mud while our allies pour gallons back in.

Secondly, in Syria, efforts to find a political resolution to the conflict must ramp up. So far, neither the US nor Russia, driven by their own narrow geopolitical concerns, has done very much to destroy ISIS strongholds. The gung-ho entry of Russia into the conflict has only served to unify the most extreme jihadists and vindicate ISIS’s victim-baiting claim to be a ‘David’ fighting the ‘Goliath’ of a homogenous “kafir” (infidel) crusader-axis.

Every military escalation has been followed by a further escalation, because ISIS itself was incubated in the militarized nightmare of occupied Iraq and Assad-bombed Syria.

Thirdly, and relatedly, all military support to all actors in the Syria conflict must end. Western powers can pressurise their Gulf and Turkish state allies to end support to rebel groups, which is now so out of control that there is no longer any prospect of preventing such support from being diverted to ISIS; while Russia and Iran can withdraw their aid to Assad’s bankrupt regime. If Russia and France genuinely wish to avoid further blowback against their own citizens, they would throw their weight behind such measures with a view to forcing regional actors to come to the negotiating table.


Fourthly, it must be recognized that contrary to the exhortations of fanatics like Douglas Murray, talk of ‘solidarity’ is not merely empty sloganeering. The imperative now is for citizens around the world to work together to safeguard what ISIS calls the "grey zone" – the arena of co-existence where people of all faith and none remain unified on the simple principles of our common humanity. Despite the protestations of extremists, the reality is that the vast majority of secular humanists and religious believers accept and embrace this heritage of mutual acceptance.

But safeguarding the "grey zone" means more than bandying about the word ‘solidarity’ – it means enacting citizen-solidarity by firmly rejecting efforts by both ISIS and the far-right to exploit terrorism as a way to transform our societies into militarized police-states where dissent is demonized, the Other is feared, and mutual paranoia is the name of the game. That, in turn, means working together to advance and innovate the institutions, checks and balances, and accountability necessary to maintain and improve the framework of free, open and diverse societies.

It is not just ISIS that would benefit from a dangerous shift to the contrary.

Incumbent political elites keen to avoid accountability for a decade and a half of failure will use heightened public anxiety to push through more of the same. They will seek to avoid hard questions about past failures, while casting suspicion everywhere except the state itself, with a view to continuing business as usual. In a similar vein, the military-industrial complex, whose profits have come to depend symbiotically on perpetual war, wants to avoid awkward questions about lack of transparency and corrupt relationships with governments. It would much rather keep the trillion-dollar gravy train flowing out of the public purse.

It's time