Category Archives: Privacy

A Human Rights Response to Government Hacking - Access Now 201609


Recently we have seen several high-profile examples of governments hacking into consumer devices or accounts for law enforcement or national security purposes. Access Now has released a report in which we consider government hacking activity from the perspective of international human rights and conclude that, based upon its serious interference with the rights to privacy, free expression, and due process, there should be a presumptive prohibition on all government hacking. There has yet to be an international public conversation on the scope, impact, or human rights safeguards for government hacking. The public requires more transparency regarding how governments decide to employ hacking, and how and when hacking activity has had unanticipated impacts. Finally, we propose Ten Human Rights Safeguards for Government Hacking in pursuit of surveillance or intelligence gathering. The full report is available at:



We define hacking as the manipulation of software, data, a computer system, network, or other electronic device without the permission of the person or organization responsible for the device, data, or service, or of the person who is ultimately affected by the manipulation.

We consider government hacking in three categories based on the broad goal to be achieved:

  1. Messaging control: Hacking to control the message seen or heard by a particular target audience.
  2. Causing damage: Hacking to cause some degree of harm to any number of target entities.
  3. Surveillance or intelligence gathering: Hacking to compromise the target in order to obtain information, particularly on an ongoing basis.

All government hacking substantially interferes with human rights, including the rights to privacy and freedom of expression. While in many ways this interference may be similar to more traditional government activity, the nature of hacking creates new threats to human rights that are greater in both scale and scope. Hacking can provide access to protected information, whether stored or in transit, or even while it is being created or drafted. Exploits used in operations can act unpredictably, damaging hardware or software or infecting non-targets and compromising their information. Even when a particular hack is narrowly designed, it can have unexpected and unforeseen impacts.


Based on analysis of human rights law, we conclude that there must be a presumptive prohibition on all government hacking. In addition, we reason that more information about the history and the extent of government hacking is necessary to determine the full ramifications of the activity.

In the first two categories — messaging control and causing damage — we determine that this presumption cannot be overcome. However, we find that, with robust protections, it may be possible, though still not necessarily advisable, for the government to overcome the presumptive prohibition in the third category, government hacking for surveillance or intelligence gathering. We note that the circumstances under which it could be overcome are both limited and exceptional.

In the context of government hacking for surveillance, Access Now identifies Ten Human Rights Safeguards for Government Hacking, including vulnerability disclosure and oversight, that must both be implemented and complied with to meet that standard. Absent government compliance with all ten safeguards, the presumptive prohibition on hacking remains. In addition, the high threat that government hacking poses to other interests, defined in greater detail in our report, may (and probably should) necessitate additional limitations and prohibitions.

Government hacking threatens human rights embodied in international documents.

There should be a presumptive prohibition on all government hacking. In any instance where government hacking is for purposes of surveillance or intelligence-gathering, the following ten safeguards must all be in place and actually complied with in order for a government to successfully rebut that presumption.

Government hacking for the purposes of messaging control or causing damage cannot overcome this presumption.

1. Government hacking must be provided for by law, which is both clearly written and publicly available and which specifies the narrow circumstances in which it could be authorized. Government hacking must never occur with either a discriminatory purpose or effect;

2. Government actors must be able to clearly explain why hacking is the least invasive means for getting Protected Information in any case where it is to be authorized and must connect that necessity back to one of the statutory purposes provided. The necessity should be demonstrated for every type of Protected Information that is sought, which must be identified, and every user (and device) targeted. Indiscriminate, or mass, hacking must be prohibited;

3. Government hacking operations must never occur in perpetuity. Authorizations for government hacking must include a plan for concluding the operation. Government hacking operations must be narrowly designed to return only specific types of authorized information from specific targets and to not affect non-target users or broad categories of users. Protected Information returned outside of that for which hacking was necessary should be purged immediately;

4. Applications for government hacking must be sufficiently detailed and approved by a competent judicial authority who is legally and practically independent from the entity requesting the authorization and who has access to sufficient technical expertise to understand the full nature of the application and any likely collateral damage that may result. Hacking should never occur prior to authorization;

5. Government hacking must always provide actual notice to the target of the operation and, when practicable, also to all owners of devices or networks directly impacted by the tool or technique;

6. Agencies conducting government hacking should publish reports at least annually indicating the extent of government hacking operations, including at a minimum the users impacted, the devices impacted, the length of the operations, and any unexpected consequences of the operations;

7. Government hacking operations must never compel private entities to engage in activity that impacts their own products and services with the intention of undermining digital security;

8. If a government hacking operation exceeds the scope of its authorization, the agency in charge should report the extent of and reason for the overreach back to the judicial authority;

9. Extraterritorial government hacking should not occur absent authorization under principles of dual criminality;

10. Agencies conducting government hacking should not stockpile vulnerabilities and, instead, should disclose vulnerabilities either discovered or purchased unless circumstances weigh heavily against disclosure. Governments should release reports at least annually on the acquisition and disclosure of vulnerabilities.

In addition to these safeguards, which represent only what is necessary from a human rights perspective, the judicial authority authorizing hacking activity must consider the entire range of potential harm that could be caused by the operation, particularly the potential harm to cybersecurity as well as incidental harms that could be caused to other users or generally to any segment of the population.


In January, academic-turned-regulator Lorrie Cranor gave a presentation and provided the closing remarks at PrivacyCon, a Federal Trade Commission event intended to “inform policymaking with research,” as she put it. Cranor, the FTC’s chief technologist, neglected to mention that over half of the researchers who presented that day had received financial support from Google — hardly a neutral figure in the debate over privacy. Cranor herself got an “unrestricted gift” of roughly $350,000 from the company, according to her CV.

Virtually none of these ties were disclosed, so Google’s entanglements at PrivacyCon were not just extensive, they were also invisible. The internet powerhouse is keenly interested in influencing a lot of government activity, including antitrust regulation, telecommunications policy, copyright enforcement, online security, and trade pacts, and to advance that goal, has thrown around a lot of money in the nation’s capital. Ties to academia let Google attempt to sway power less directly, by giving money to university and graduate researchers whose work remains largely within academic circles — until it gains the audience of federal policymakers, as at PrivacyCon.

Some research at the event supported Google’s positions. An MIT economist who took Google money, for example, questioned whether the government needed to intervene to further regulate privacy when corporations are sometimes incentivized to do so themselves. Geoffrey Manne, the executive director of a Portland-based legal think tank that relies on funding from Google (and a former Microsoft employee), presented a paper saying that “we need to give some thought to self-help and reputation and competition as solutions” to privacy concerns “before [regulators start] to intervene.” (Manne did not return a request for comment.) Other research presented at PrivacyCon led to conclusions the company would likely dispute.

The problem with Google’s hidden links to the event is not that they should place researchers under automatic suspicion, but rather that the motives of corporate academic benefactors ought to always be suspect. Without prominent disclosure of corporate money in academia, it becomes hard for the consumers of research to raise important questions about its origins and framing.

Google declined to comment on the record for this article.

How Tech Money Flows to Privacy Scholars

Google’s ties to PrivacyCon are extensive enough to warrant interrogation. As a case study in how pervasive and well-concealed this type of influence has become, PrivacyCon is hard to beat.

Authors of a whopping 13 of the 19 papers presented at the conference, and 23 of the 41 speakers, have financial ties to Google. Only two papers included a disclosure of an ongoing or past financial connection to Google.

Other tech companies are also financially linked to speakers at the event. At least two presenters took money from Microsoft, while three others are affiliated with a university center funded by Amazon, Facebook, Google, Microsoft, and Twitter.

“Are we getting voices that have never received money from a company like Google?” — Paul Ohm, Georgetown

But Google’s corporate adversaries are helping to shine a spotlight on what their fellow travelers describe as Google’s particularly deep ties to academia. Those ties are a major focus of a new report from an entity called the Google Transparency Project, part of a charitable nonprofit known as the Campaign for Accountability. The Campaign for Accountability, in turn, receives major, undisclosed funding from Google nemesis and business software company Oracle, as well as from the Bill and Melinda Gates Foundation, which was set up by the co-founder and longtime CEO of Google rival Microsoft (the nonprofit says its funding sources have no bearing on its work to expose funding sources). The Intercept, meanwhile, operates with funding from eBay founder Pierre Omidyar. In other words, tech money even pervades the research into everything tech money pervades. But even accepting that, the report does highlight the extent to which Silicon Valley is widening its influence at the intersection of academia and government.

Take MIT professor Catherine Tucker, who in one PrivacyCon paper argued against proposed government regulations requiring genetic testing services to obtain a type of written permission known as “informed consent” from patients. Tucker added that such a requirement would deter patients from using the testing services and specifically cited one such service, 23andMe, a firm that Google has invested in repeatedly, most recently in October, and whose CEO is the ex-wife of Google co-founder Sergey Brin. Tucker did not disclose in the paper that she has received over $150,000 in grants from Google since 2009, plus another $49,000 from the Net Institute, a think tank funded in part by Google. Contacted by email, Tucker answered that she discloses “nearly two pages of grants from, and consulting work for, a variety of companies and other organizations, on my CV.”

Google has been appreciative of Tucker’s conference work. In a series of emails between Google and George Mason University law professor James Cooper for a 2012 policy conference, first reported by Salon, a Google representative went so far as to personally recommend the marketing professor as someone to invite.

Cooper did not return multiple requests for comment on this story. Reached for comment via email, Cranor replied that she lists “the funder(s) in the acknowledgments of the specific papers that their grant funded,” and that there “have also been press releases about most of the Google funding I received, so everything has been fully disclosed in multiple places.” Cranor added that “all of these grants are made to Carnegie Mellon University for use in my research,” and “I did not receive any of this money personally.” But it is surely worth noting that one of the press releases Cranor references says that “each funded project receives an individual Google sponsor to help develop the research direction and facilitate collaboration between Google and the research team.” Cranor did not reply when asked what role “an individual Google sponsor” has played in her research.

Nick Feamster, a Princeton professor, presented at PrivacyCon on internet-connected household objects and did not disclose that he’s received over $1.5 million in research support from Google. Over email, Feamster told The Intercept that any notion of a conflict “doesn’t even make any sense given the nature of the content we presented,” which included descriptions of security shortcomings in the Nest smart thermostat, owned by Google. “If they were really trying to exert the ‘influence’ that the [report] is trying to suggest, do you think they would have influenced us to do work that actually calls them out on bad privacy practices?”

Many other PrivacyCon speakers, like Omer Tene, an affiliate scholar at Stanford’s Center for Internet and Society, don’t seem ever to have received money from Google; rather, a department or organization they work for is funded in part by Google. On the CIS website, this is made plain:

We are fortunate to enjoy the support of individual and organizational donors, including generous support from Google, Inc. Like all donors to CIS, Google has agreed to provide funds as unrestricted gifts, for which there is no contractual agreement and no promised products, results, or deliverables. To avoid any conflict of interest, CIS avoids litigation if it involves Google. CIS does not accept corporate funding for its network neutrality-related work.

The CIS website also cites Microsoft as a funding source, along with the National Internet Alliance, a telecom lobbying group.

“Neither Google nor any of the other supporters has influenced my work,” Tene told me, referring to his long bibliography on personal data and online privacy.

But support at the institutional level may still influence individual behavior. Cooper, the George Mason staffer who reached out to Google for advice on a privacy conference, works as the director of the program on economics and privacy at the university’s Law and Economics Center, which has received at least $750,000 from Google, as well as additional funds from Amazon, AT&T, and Chinese internet giant Tencent. A 2015 report in Salon detailed close ties between Google and Cooper, including emails indicating that Google was shopping an op-ed written by Cooper to newspapers, and other messages in which Cooper asks Google for help crafting the content of a “symposium on dynamic competition and mergers.”

Cooper also wrote pro-Google academic papers, including this one for the George Mason Law Review entitled “Privacy and Antitrust: Underpants Gnomes, the First Amendment, and Subjectivity,” where he argues that privacy should not be included in any antitrust analysis. Cooper does not disclose Google’s funding of the [Law and Economics Center] in the article. Other pro-Google articles by Cooper, like this one from Main Justice, do include disclosure.

Cooper presented at this year’s PrivacyCon and did not disclose his relationship with Google. Cooper did not return a request for comment.

Among the PrivacyCon presenters who have benefited from non-Google generosity: Carnegie Mellon University’s Alessandro Acquisti and Columbia University’s Roxana Geambasu received $60,000 and $215,000 in Microsoft money, respectively, on top of financial ties to Google. Both co-authored and presented papers on the topic of targeted advertising. Acquisti’s papers, which did not disclose his funding sources, concluded that such marketing was not necessarily to the detriment of users. Geambasu (to her credit) produced data that contradicted Google’s claims about how targeting works and disclosed her financial relationship with the company. She also noted to The Intercept that “all my funding sources are listed in my resume,” located on her website.

The University of California, Berkeley’s International Computer Science Institute, which had an affiliated researcher presenting at PrivacyCon, counts not just on Google for its survival, but also Microsoft, Comcast, Cisco, Intel, and Samsung. Two PrivacyCon submissions came out of Columbia’s Data Science Institute, which relies on Yahoo and Cisco. The Center for Democracy and Technology — which employs one PrivacyCon presenter and co-organized a privacy conference with Princeton in May — is made possible not just by Google but also by an alphabet of startup and legacy tech money, according to IRS disclosures: Adobe, Airbnb, Amazon, AOL, Apple, all the way down to Twitter and Yahoo. Corporate gifts are often able to keep entire academic units functioning. The CMU CyLab, affiliated with four PrivacyCon presenters, is supported by Facebook, Symantec, and LG, among others.

Narrower Disclosure Standards in Academia

Contacted by The Intercept, academics who took money from tech companies and then spoke at PrivacyCon without disclosure provided responses ranging from flat denials to lengthy rationales. Some of the academics argued that just because their institution or organization keeps the lights on with Silicon Valley money doesn’t mean they’re beholden to, or even aware of, these benefactors. But it’s harder to imagine, say, an environmental studies department getting away with floating in Exxon money, or a cancer researcher bankrolled by Philip Morris. Like radon or noise pollution, invisible biases are something people both overstate and don’t take seriously enough — anyone with whom we disagree must be biased, and we’re loath to admit the possibility of our own.

Serge Egelman, the director of usable security and privacy at Berkeley’s International Computer Science Institute, argued that this is hardly an issue unique to Google:

I am a Google grant recipient, as are literally thousands of other researchers in computer science. Every year, like many other companies who have an interest in advancing basic research (e.g., Cisco, IBM, Comcast, Microsoft, Intel, etc.), Google posts grant solicitations. Grants are made as unrestricted gifts, meaning that Google has no control over how the money is used, and certainly cannot impose any restrictions over what is published or presented. These are grants made to institutions, and not individuals; this has no bearing on my personal income, but means that I can (partially) support a graduate student for a year. Corporate philanthropy is currently filling a gap created by dwindling government support for basic research (though only partially).

He also added that no matter who’s paying the bills, his research is independent and strongly in the public interest:

My own talk was on how Android apps gather sensitive data against users’ wishes, and the ways that the platform could be improved to better support users’ privacy preferences. All of my work in the privacy space is on protecting consumers’ privacy interests and this is the first time anyone has accused me of doing otherwise.

The list of people who spoke at PrivacyCon are some of the most active researchers in the privacy space. They come from the top universities in computer science, which is why it’s no surprise that their institutions have received research funding from many different sources, Google included. The question that you should be asking is, was the research that was presented in the public interest? I think the answer is a resounding yes.

Acquisti, the PrivacyCon presenter from CMU, is a professor affiliated with the university’s CyLab privacy think tank and shared a $400,000 Google gift in 2010 with FTC technologist Cranor and fellow CMU professor Norman Sadeh before submitting and presenting two PrivacyCon papers sans disclosure, plus one presentation that included a disclosure. When the gift was given, the New York Times observed that “it is presumably in Google’s interest to promote the development of privacy-handling tools that forestall federal regulation.” Over email, Acquisti argued that disclosure is only necessary when it applies to the specific funding for a body of work being published or presented. That is, once you’ve given the talk or published a paper, your obligation to mention its financing source ends: “It would be highly misleading and incorrect for an author to list in the acknowledgements of an article a funding source that did NOT in fact fund the research activities conducted for and presented in that article.” In fact, Acquisti said, “It would be nearly fraudulent”:

It would be like claiming that funds from a certain source were used to cover a study (e.g. pay for a lab experiment) while they were not; or it would be like claiming that a research proposal was submitted to (and approved by) some grant committee at some agency/institution, whereas in fact that institution never even knew or heard about that research. … This is such a basic tenet in academia.

This line of reasoning came up again and again as I spoke to privacy-oriented researchers and academics — that papers actually should not mention funding directed to the researcher for other projects, even when such disclosure could bear on a conflict of interest, and that, for better or for worse, this deeply narrow standard of disclosure is just the way it is. And besides, it’s not as if researchers who enjoy cash from Google are necessarily handing favors back, right?

According to Paul Ohm, a professor of law and director at Georgetown University’s Center on Privacy and Technology, that’s missing the point: The danger of corporate money isn’t just clear-cut corruption, but subconscious calculus among academics about their research topics and conclusions and invisible influence that funding might cause. Ohm said he continually worries about “the corrupting influence of corporate money in scholarship” among his peers.

“I think privacy law is so poorly defined,” Ohm told The Intercept, “and we have so few clear rules for the road, that people who practice in privacy law rely on academics more than they do in most areas of the law, because of that it really has become a corporate strategy to interact with academics a lot.”

It’s exactly this threat that a disclosure is meant to counter — not an admission of any wrongdoing but a warning that it’s possible the work in question was compromised in some way, however slight. A disclosure isn’t a stop sign so much as one suggesting caution. That’s why Ohm thinks it’s wise for PrivacyCon (and the infinite stream of other academic conferences) to err on the side of too much disclosure — he goes as far as to say organizers should consider donor source diversity in a “code of conduct.”

“Let’s try to make sure we have at least one voice on every panel that didn’t take money,” Ohm said. “Are we getting voices that have never received money from a company like Google?”

And ultimately, why not disclose? Egelman, from UC Berkeley’s International Computer Science Institute, told me he thought extra disclosures wouldn’t be a good way for “researchers [to] use valuable conference time.” Ohm disagrees: “I don’t think it’s difficult at the beginning of your talk to say, ‘I took funding in the broader research project of which this is a part.’” In other words: Have you taken money from Google? Are you presenting to a room filled with regulators on a topic about which you cannot speak without the existence of Google at least looming overhead? It would serve your audience — and you — to spend 10 seconds on a disclosure. “Disclosure would have been great,” Ohm said of PrivacyCon. “Recent disclosure would have been great. Disclosure of related funding would have been great.” Apparently, only two other researchers agreed.

Canada Is Considering Spying on Kids to Stop Cyberbullying - Vice 20160426


Cyberbullying is simply awful, and its consequences can be utterly horrific. Canadians have known this all too well since 17-year-old Rehtaeh Parsons’ suicide in 2013, after photos of her alleged rape circulated online.

It’s only human to want to put a stop to it. But is it worth spying on kids?

To that end, the Canadian government is looking for a person or organization to “conduct an evaluation of an innovative cyberbullying prevention or intervention initiative” in a “sample of school-aged children and youth,” according to a tender notice published by Public Safety Canada last week.

Although nothing has been finalized, the government will consider letting the organization spy on kids’ digital communications to do it, Barry McKenna, the Public Safety procurement consultant in charge of the tender, told me.

“The tender doesn’t preclude or necessarily require digital monitoring,” said McKenna. “But there are certainly products on the market that do that, and I would guess that that kind of intervention would be one of interest.”

The school board overseeing the school used in the study would have to sign off on digital surveillance of kids, McKenna said, and so would Public Safety. McKenna would not disclose whether any person or organization has responded to the tender yet. The government has budgeted $60,000 for the program, the notice states.

“Any use by government of technology to scan the internet and read somebody’s communications obviously raises privacy issues,” said David Fraser, a Canadian privacy lawyer consulting on a new cyberbullying law for Nova Scotia. “Fewer privacy issues if it’s following an intervention and it’s targeted,” he continued, “way more if they’re trying to single out kids in Canada and assess what they’re saying.”

“What we’ve seen come out of Public Safety and most law enforcement agencies is a pretty un-nuanced, heavy-handed, over the top model,” Fraser added. Nova Scotia’s previous cyberbullying law, passed in the wake of Parsons’ suicide, was ruled unconstitutional and struck down for being too broad and infringing on people’s civil rights.

If the Public Safety study ends up taking a more blanket approach to monitoring kids instead of targeting surveillance after an incident, it could also risk undermining communication between kids and their teachers or parents, according to US Cyberbullying Research Center co-director Sameer Hinduja.

“Installing tracking apps undermines any sort of open-minded communication [that] youth-serving adults might have with these kids, because you’re tracking them surreptitiously,” said Hinduja. “Kids, as they get older, want more privacy and freedom. It’s natural—you want it, and I want it.”

This isn’t the first time somebody has considered surveillance as a solution to the complex social issue of kids being absolutely horrific to each other, and it likely won’t be the last. In 2013, The LA Times noted that the Glendale Unified School District in Southern California reportedly paid a firm $40,000 to monitor kids’ social media accounts to combat bullying. The move raised the ire of privacy advocates in the US then, too.

The point, according to Hinduja, is that bullying isn’t a uniquely digital problem. You don’t solve bullying forever by putting a teacher in every hallway, and you don’t fix crime by putting a cop on every corner.

“Cyberbullying isn’t a technological problem,” said Hinduja. “You can’t blame the apps, the smartphones, or the internet. Instead, cyberbullying is rooted in other issues that everyone has been dealing with since the beginning of time: adolescent development, kids learning to manage their problems, and dealing with stress.”

Threatpost - Blackberry CEO defends lawful access principles, supports phone hack - 20160419


BlackBerry’s CEO made the company’s stance on lawful access requests clear this week, defending its decision to provide Canadian law enforcement with what it needed to decrypt communications between devices.

The company’s CEO, John Chen, penned a statement on Monday reiterating that one of BlackBerry’s core principles is customer privacy, but also acknowledging that BlackBerry stood by its “lawful access principles” in a recently publicized criminal investigation in which BlackBerry allegedly assisted law enforcement in retrieving data from a phone.

“We have long been clear in our stance that tech companies as good corporate citizens should comply with reasonable lawful access requests,” Chen said. Then, in a thinly veiled jab at Apple, Chen added, “I have stated before that we are indeed in a dark place when companies put their reputations above the greater good.”

Speculation around the inner workings of the case, which deals with a mafia-related murder in Montreal, has intensified over the last week following a Vice report on Thursday. According to the news outlet, the Royal Canadian Mounted Police (RCMP) – the country’s federal police force – successfully intercepted and decrypted over one million BlackBerry messages relating to the case between 2010 and 2012.

Reporters combed through thousands of court documents that strongly suggest that both BlackBerry and Rogers, a Canadian communications company, cooperated with law enforcement to do so. Particularly telling was a reference in the documents to a “decryption key” that deals with “BlackBerry interception.”

The RCMP oversees a server in Ottawa that “simulates a mobile device that receives a message intended for [the rightful recipient]” according to court filings. In another document, an affidavit, RCMP Sergeant Patrick Boismenu said the server is referred to by the RCMP as a “BlackBerry interception and processing system,” and that it “performs the decryption of the message using the appropriate decryption key.”

BlackBerry has long used a global encryption key – a PIN that it uses to decrypt messages – for its consumer devices.

It’s unclear how exactly the RCMP secured access to a BlackBerry decryption key, or for that matter whether it still has the key, but BlackBerry “facilitated the interception process,” according to court transcripts of testimony from RCMP inspector Mark Flynn.

Defense lawyers believe the technology the RCMP is using to target BlackBerry devices mimics a cell phone tower and can be manipulated to intercept devices and forward information to police. Largely known as Stingray tracking devices or International Mobile Subscriber Identity (IMSI) catchers, the RCMP refers to the devices as “mobile device identifiers” or “MDIs.” The Globe and Mail did a deep dive on the technology on Monday, noting the technology has been in use in Canada since 2011 and is capable of knocking people calling 911 offline.

If the RCMP is still in possession of the global key, it’s likely that Mounties could still use it to decrypt PIN-to-PIN communications on consumer devices.
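
The risk of a single shared key can be sketched in a few lines. The toy cipher below is purely illustrative and is not BlackBerry's actual construction (consumer PIN-to-PIN messaging reportedly used Triple DES); the point is only that when every device encrypts with the same global key, any party holding that key – the vendor or a police interception server alike – can read every message.

```python
import hashlib

# Toy stream cipher: every "consumer device" shares one global key, so an
# interception server holding that same key can decrypt all traffic.
GLOBAL_KEY = b"one-key-shared-by-every-consumer-device"

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key + nonce + counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation works both ways

# A sender's device encrypts a PIN-to-PIN message...
nonce = b"msg-0001"
ciphertext = encrypt(GLOBAL_KEY, nonce, b"meet at the usual place")

# ...and an interception server holding the same global key reads it.
recovered = decrypt(GLOBAL_KEY, nonce, ciphertext)
```

By contrast, a per-organization key (as with BES) or per-conversation keys negotiated end-to-end would leave nothing for a central server to hand over.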

While Chen didn’t get into specifics around his company’s move, he lauded it on Monday.

“Regarding BlackBerry’s assistance, I can reaffirm that we stood by our lawful access principles,” Chen wrote, further likening it to doing the right thing in a difficult situation and boasting that it helped lead to a “major criminal organization being dismantled.”

Privacy experts, by contrast, questioned Chen’s statement and wondered whether it could signal the beginning of the end for the company.

“I think Chen is traveling down a very dangerous path here,” Richard Morochove, a computer forensics investigator with Toronto-based computer consulting firm Morochove & Associates, said Tuesday on Canada’s Business News Network. “With this announcement he’s just pounded a big nail into BlackBerry’s coffin.”

BlackBerry uses a global key for its consumer devices, but Chen insists that the company’s BlackBerry Enterprise Server (BES) was not involved in the case and that messages sent from corporate BlackBerry phones cannot be decrypted.

“Our BES continues to be impenetrable – also without the ability for backdoor access – and is the most secure mobile platform for managing all mobile devices,” Chen wrote.

While that means that many of the company’s higher-end clientele, government workers and corporations, are protected, any consumers who own BlackBerry devices may have been, or could still be, open to spying by the Canadian police.

Chen’s position, of course, marks a stark contrast between BlackBerry and Apple, another company that has been waging its own battle with the government over granting access to customer information.

While Apple refused to break its own crypto to let the FBI bypass the iPhone’s encryption, it sounds like all law enforcement has to do to break into a BlackBerry is ask.

CIGI-Ipsos Global Survey on Internet Security and Trust - 2014


The CIGI-Ipsos Global Survey on Internet Security and Trust, undertaken by the Centre for International Governance Innovation (CIGI) and conducted by global research company Ipsos, reached 23,376 Internet users in 24 countries, and was carried out between October 7, 2014 and November 12, 2014.

The countries included: Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.

The survey found that:

  • 83% of users believe that affordable access to the Internet should be a basic human right;
  • nearly two-thirds (64%) of users are more concerned about their online privacy today than they were one year ago; and,
  • when given a choice of various governance sources to effectively run the world-wide Internet, a majority (57%) chose the multi-stakeholder option—a “combined body of technology companies, engineers, non-governmental organizations and institutions that represent the interests and will of ordinary citizens, and governments.”

The global Survey was developed to help support the work of the Global Commission on Internet Governance (GCIG). The GCIG, an initiative by CIGI and Chatham House, was established to articulate and advance a strategic vision for the future of Internet governance.

Survey Findings

Global survey in 24 countries involving 23,376 Internet users


  • 83% of users believe affordable access to the Internet should be a basic human right
  • 91% of users say the Internet is important for their future in terms of accessing important information and scientific knowledge
  • 87% of users say the Internet is important for their future in terms of personal enjoyment and recreation
  • 85% of users say the Internet is important for their future in terms of social communication
  • 83% of users say the Internet is important for their future in terms of free speech and political expression
  • 81% of users say the Internet is important for their future in terms of their economic future and livelihood



  • 64% of users are more concerned about their online privacy than they were one year ago
  • 8% of users say their government does a very good job of making sure the Internet is safe and secure
  • 36% of users believe private information on the Internet is very secure
  • 41% of users believe the chance of their personal information being compromised is so small that it's not worth worrying about
  • 37% of users share personal information with private companies online all the time and say "it’s no big deal"


  • 78% of users are concerned about a criminal hacking into their personal bank accounts
  • 77% of users are concerned about someone hacking into their online accounts and stealing personal information


  • 74% of users are concerned about companies monitoring online activities and then selling that information for commercial purposes


  • 72% of users are concerned about institutions in their country being cyber-attacked by a foreign government or terrorist organization


  • 64% of users are concerned about governments censoring the Internet
  • 62% of users are concerned about government agencies from other countries secretly monitoring their online activities
  • 61% of users are concerned about police or other government agencies from their own country secretly monitoring their online activities


  • 43% of users believe governments other than their own will restrict access to the Internet
  • 34% of users believe their government will restrict access to the Internet



  • 60% of users have heard about Edward Snowden
  • Of those aware of Edward Snowden, 39% have taken steps to protect their online privacy and security as a result of his revelations
  • Compared to one year ago, 43% of users now avoid certain websites and applications and 39% now change their passwords regularly


  • 73% of users want their online data and personal information to be physically stored on a secure server
  • 72% of users want their online data and personal information to be physically stored on a secure server in their own country


  • 57% of users would trust a combined body of technology companies, engineers, non-governmental organizations and institutions that represent the interests and will of ordinary citizens, and governments to play an important role in running the Internet
  • 54% would trust an international body of engineers and technical experts
  • 50% of users would trust the United Nations to play an important role in running the Internet
  • 49% of users would trust International technology companies to play an important role in running the Internet
  • 47% of users would trust their own government to play an important role in running the Internet
  • 36% of users would trust the United States to play an important role in running the Internet

CIGI-Ipsos Global Survey on Internet Security and Trust - Centre for International Governance Innovation 2016


The 2016 CIGI-Ipsos Global Survey on Internet Security and Trust, undertaken by the Centre for International Governance Innovation (CIGI) and conducted by global research company Ipsos, reached 24,143 Internet users in 24 countries, and was carried out between November 20, 2015 and December 4, 2015.

The countries included: Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.

The global Survey was developed to help support the work of the Global Commission on Internet Governance (GCIG). The GCIG, an initiative by CIGI and Chatham House, was established to articulate and advance a strategic vision for the future of Internet governance.

The Dark Net

The survey found that:

Seven in ten global citizens say the “dark net” should be shut down, while three in ten disagree, believing it should continue to exist. The question remains: why do so many global citizens believe the dark net should continue to exist, if it embodies the seedy underbelly of the Internet? The answer lies in the desire of global citizens to preserve the anonymity and benefits that are also a central part of the dark net.

  • 71% of global citizens agree the dark net should be shut down
  • 46% of global citizens trust that their activities on the Internet are not being censored
  • 38% of global citizens trust that their activities on the Internet are not being monitored
  • Only six in ten users say that government assurances that they are not being censored (59%) or monitored (58%) would make them trust the Internet more.


Privacy vs National Security

The survey found that:

Most global citizens favour enabling law enforcement to access private online conversations if they have valid national security reasons to do so, or if they are investigating an individual suspected of committing a crime. The survey also found that a majority of respondents do not want companies to develop technologies that would undermine law enforcement’s ability to access much needed data.

  • 70% of global citizens agree that law enforcement agencies should have a right to access the content of their citizens’ online communications for valid national security reasons, including 69% of Americans and 65% of Canadians who agree
  • 85% of global citizens agree that when someone is suspected of a crime governments should be able to find out who their suspects communicated with online, including 80% of Americans who agree
  • 63% of global citizens agree that companies should not develop technologies that prevent law enforcement from accessing the content of an individual's online conversations
  • 60% of Americans and 57% of Canadians agree with this statement.


National security interests override digital privacy: CIGI-Ipsos global survey of online citizens - CIGI 20160302


As new battles continue to emerge between national governments and private companies in the domains of national security and privacy, the results of a new survey, commissioned by the Centre for International Governance Innovation (CIGI) and conducted by global research company Ipsos across 24 countries find that most global citizens favour enabling law enforcement to access private online conversations if they have valid national security reasons to do so, or if they are investigating an individual suspected of committing a crime.

The study – titled the 2016 CIGI-Ipsos Global Survey on Internet Security and Trust – comes at a time when tech giant Apple is defying the F.B.I.’s orders to assist in accessing data stored in an iPhone owned by one of the two suspects who killed 14 people in San Bernardino, California, in December.

According to responses, most global citizens say law-enforcement agencies should have a right to access the online communications of their citizens (70%), especially those suspected of a crime (85%). As the Apple case unfolds today, 60% of Americans and 63% of internet users in 24 different countries think that companies should not develop technologies that prevent law enforcement from accessing the content of a user’s online data.

The survey of 24,143 users was conducted in 24 countries between the dates of November 20 and December 4, 2015 in: Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.

“The findings in this survey shine an important light on the nexus between trust, national security, and privacy in the increasingly dark and ungoverned space of the Internet,” said Fen Hampson, Director of CIGI’s Global Security & Politics Program & Co-Director of the Global Commission on Internet Governance. “Some of the most pressing challenges that the international community faces today live in this interconnection, and continue to illuminate the need for innovative governance solutions.”

The survey further found that, when someone is suspected of a crime, 85% of global citizens agree (49% strongly/37% somewhat) that governments should be able to find out who their suspects are communicating with online, including 80% of Americans who agree. Residents of Nigeria (95%) and Tunisia (93%) are most likely to agree with this position, while those in South Korea (67%) and Japan (70%) are by far the least likely to agree.


More contentious is the idea of whether companies should be allowed to develop technologies that prevent law enforcement from accessing the content of an individual’s online conversations. On this issue, 63% agree (26% strongly/36% somewhat) that companies should not develop this technology, including 60% of Americans. Those in China (74%) and India (74%) are most likely to agree, while only a minority of South Koreans (46%) believe companies should not do this.

“Public attention today is focused on national security and digital privacy. When it comes to national security, Americans and Canadians, as well as global citizens from 24 countries believe that digital privacy considerations come secondary to their own government’s pursuit of keeping their home country safe,” said Darrell Bricker, CEO of Ipsos Public Affairs & CIGI Senior Fellow.


On national security & trust:

  • 70% agree that law enforcement should have a right to access the content of citizens’ online communications for valid national security reasons.
  • Countries such as Tunisia (84%), Nigeria (82%), India (82%), Sweden (80%) and Great Britain (80%) are most likely to agree that law enforcement should have that right.
  • Seven in ten Americans (69%) and 65% of Canadians agree that law enforcement should have a right to access the content of citizens’ online communications for valid national security reasons.
  • 85% agree governments should be able to find out who their suspects communicated with online when suspected of a crime.

On national security & digital privacy:

  • 63% agree (26% strongly/36% somewhat) that companies should not develop technologies that prevent law enforcement from accessing the content of an individual’s online conversations, including 60% of Americans and 57% of Canadians.
  • Residents of North America (58%) are least likely to agree, while those living in the G-8 (61%), Middle East and Africa (63%), Asia Pacific (63%), Latin America (64%), Europe (64%) or BRIC (69%) countries are more likely to agree.


For more information and to see additional data collected as part of the CIGI-Ipsos Global Survey on Internet Security and Trust, please visit:

The Centre for International Governance Innovation (CIGI) is an independent, non-partisan think tank on international governance. Led by experienced practitioners and distinguished academics, CIGI supports research, forms networks, advances policy debate and generates ideas for multilateral governance improvements. Conducting an active agenda of research, events and publications, CIGI’s interdisciplinary work includes collaboration with policy, business and academic communities around the world. CIGI was founded in 2001 by Jim Balsillie, then co-CEO of Research In Motion (BlackBerry), and collaborates with and gratefully acknowledges support from a number of strategic partners, in particular the Government of Canada and the Government of Ontario. For more information, please visit

Mass surveillance silences minority opinions, according to study - The Washington Post 20160328


A new study shows that knowledge of government surveillance causes people to self-censor their dissenting opinions online. The research offers a sobering look at the oft-touted "democratizing" effect of social media and Internet access that bolsters minority opinion.

The study, published in Journalism and Mass Communication Quarterly, examined the effects of subtle reminders of mass surveillance on its subjects. The majority of participants reacted by suppressing opinions that they perceived to be in the minority. The research illustrates how widespread knowledge of government surveillance, revealed by whistleblower Edward Snowden in 2013, can silence participants’ dissenting opinions.

The “spiral of silence” is a well-researched phenomenon in which people suppress unpopular opinions to fit in and avoid social isolation. It has been looked at in the context of social media and the echo-chamber effect, in which we tailor our opinions to fit the online activity of our Facebook and Twitter friends. But this study adds a new layer by explicitly examining how government surveillance affects self-censorship.

Participants in the study were first surveyed about their political beliefs, personality traits and online activity, to create a psychological profile for each person. A random sample of participants was then subtly reminded of government surveillance, and everyone in the study was then shown a neutral, fictional headline stating that U.S. airstrikes had targeted the Islamic State in Iraq. Subjects were then asked a series of questions about their attitudes toward the hypothetical news event, such as how they think most Americans would feel about it and whether they would publicly voice their opinion on the topic. The majority of those primed with surveillance information were less likely to speak out about their more nonconformist ideas, including those assessed as less likely to self-censor based on their psychological profile.

Elizabeth Stoycheff, lead researcher of the study and assistant professor at Wayne State University, is disturbed by her findings.

“So many people I've talked with say they don't care about online surveillance because they don't break any laws and don't have anything to hide. And I find these rationales deeply troubling,” she said.

She said that participants who shared the “nothing to hide” belief, those who tended to support mass surveillance as necessary for national security, were the most likely to silence their minority opinions.

“The fact that the 'nothing to hide' individuals experience a significant chilling effect speaks to how online privacy is much bigger than the mere lawfulness of one's actions. It's about a fundamental human right to have control over one's self-presentation and image, in private, and now, in search histories and metadata,” she said.

Stoycheff is also concerned about the quietly oppressive behavior of self-censorship.

“It concerns me that surveillance seems to be enabling a culture of self-censorship because it further disenfranchises minority groups. And it is difficult to protect and extend the rights of these vulnerable populations when their voices aren't part of the discussion. Democracy thrives on a diversity of ideas, and self-censorship starves it,” she said. “Shifting this discussion so Americans understand that civil liberties are just as fundamental to the country's long-term well-being as thwarting very rare terrorist attacks is a necessary move.”

Stoycheff has written about the capacity of online sharing tools to inspire democratic change. But the results of this study have caused her views to change. "The adoption of surveillance techniques, by both the government and private sectors, undermines the Internet's ability to serve as a neutral platform for honest and open deliberation. It begins to strip away the Internet's ability to serve as a venue for all voices, instead catering only to the most dominant," she said. She received no outside funding for the research or publication of this study, she said.

Some related references

Glynn, C.J., Hayes, A.F. & Shanahan, J. (1997). “Perceived support for one’s opinions and willingness to speak out: A meta-analysis of survey studies on the ‘spiral of silence’,” Public Opinion Quarterly 61 (3): 452-463.

Glynn, C.J. & McLeod, J. (1984). “Public opinion du jour: An examination of the spiral of silence,” Public Opinion Quarterly 48 (4): 731-740.

Noelle-Neumann, E. (1984). The Spiral of Silence: Public Opinion – Our Social Skin. Chicago: University of Chicago Press.

Noelle-Neumann, E. (1991). “The theory of public opinion: The concept of the spiral of silence.” In J. A. Anderson (Ed.), Communication Yearbook 14, 256-287. Newbury Park, CA: Sage.

Simpson, C. (1996). “Elisabeth Noelle-Neumann’s ‘spiral of silence’ and the historical context of communication theory,” Journal of Communication 46 (3): 149-173.

Taylor, D.G. (1982). “Pluralistic ignorance and the spiral of silence: A formal analysis,” Public Opinion Quarterly 46 (3): 311-335. See also: Kennamer, J.D. (1990). “Self-serving biases in perceiving the opinions of others: Implications for the spiral of silence,” Communication Research 17 (3): 393-404; Yassin Ahmed Lashin (1984). Testing the spiral of silence hypothesis: Toward an integrated theory of public opinion. Unpublished dissertation, University of Illinois at Urbana-Champaign.

Snowden, Edward - The last lighthouse: Free software in dark times - 20160319


Enjoy my transcript of Edward Snowden's keynote address, The last lighthouse: Free software in dark times, delivered to Libre Planet 2016 on March 19, 2016. Pre-release video recording available at

[John Sullivan]: We’re good? Well, welcome to Libre Planet 2016! You made it! You’re here! Well, not everybody made it, so we are streaming this event live. “Hello” to everybody watching along at home, too. Thank you for bearing with us, as we get things started here this morning. There are a lot of moving parts happening in this opening keynote, and we are doing it with all free software [audience cheers]. We’re really pushing the envelope here, and so there’s inevitably going to be some hang-ups, but we’ve been improving this process year after year, and documenting it, so that other conferences that are themed around free software and computer user freedom can hopefully use the same systems that we are [audience cheers] and practice what we want to preach. So, my name’s John Sullivan. I’m the Executive Director at the Free Software Foundation. This is always one of my favourite moments of the year, to start this conference off, but I’m especially excited about this year. We’ve had a [JS makes scary quotes] “Yuge” year, starting with our thirtieth anniversary in October, and continuing on to what is obviously our largest Libre Planet ever, and our biggest bang to start off the event, for sure. Let’s see, how many FSF members are here? Awesome! That’s amazing! Thank you, and I hope that the rest of you will consider becoming members by the end of the conference. You can join at Members and individual donors fund over eighty percent of the FSF’s work, including putting on this event, as well as our advocacy for computer user freedom, and development of free software. I’m really happy to have this event at MIT, again, where so much of the free software movement started, and I want to thank our partners at the Student Information Processing Board - SIPB - for partnering with us, to make this happen. It’s really nice to see free software values continuing to be a strong part of the MIT community. [Applause] Yes, thank you.
I have a few important announcements and reminders about the rest of the conference. First thing is, we have a safe space policy, that’s on page three of your program. Please read it and help us make this event a welcoming environment for absolutely everybody. If there are any issues that come up, please feel free to find me, or Georgia Young. The Information Desk will always know where we are and Georgia has her hand up in the back. Second of all, there is a party tonight at Elephant and Castle near the Downtown Crossing subway station. I hope you will join us there. We will provide some complimentary refreshments and continue conversations that get started during the conference today. We are streaming, as I mentioned, with all free software. The party, though, will not be streamed. [Laughter] We have a few program changes to announce [provides details of changes to conference program] [03:41] After the conference is over tomorrow, there will be a rally, at which people will try to convince the W3C not to make a terrible mistake by endorsing DRM as a recommended extension to HTML 5. And that will be happening outside the Stata Center at 6:45 tomorrow night. Zak Rogoff at the Information Desk will have information for people who want to participate. That’s after the conference is concluded. Finally, please join us on IRC at #libreplanet channel on Freenode, both to communicate with people that are watching from home, and also just to have some back channel conversation about everything that’s happening. So, we have an amazing start to this year’s conference, with Daniel Kahn Gillmor and Edward Snowden. Daniel is a technologist with the ACLU’s Speech Privacy and Technology Project and a free software developer. He’s a Free Software Foundation member, thank you, a member of Debian, and a contributor to many free software programs, especially in the security layer many of us rely on.
He participates in standards organizations, like IETF, with an eye to preserving and improving civil liberties and civil rights through our shared infrastructure. Edward Snowden is a former intelligence officer, who served in the CIA, NSA, and DIA for nearly a decade as a subject-matter expert on technology and cyber security. In 2013, he revealed the NSA was unconstitutionally seizing the private records of billions of individuals who’d not been suspected of any wrong-doing, resulting in the largest debate about reforms of US surveillance policies since 1978. And I want to take the chance to say “Thank you” for also inspiring us, at the Free Software Foundation, to redouble our efforts to promote user security and privacy through the use of free software programs like GnuPG, if you’ve seen our guide at that was inspired by the actions that Snowden took and the conversation that that started. I would love to say more about how all this relates to free software, but I think I will leave that to our speakers this morning, while they have a conversation entitled “The last lighthouse: free software in dark times.” We started a little bit late. We are cancelling the break after this, so the next session will begin in this room immediately after this one concludes. So, we should have the full amount of time, so, thank you everybody. [JS gestures to DKG]

[06:47] DKG: So, I’m going to go ahead and bring Ed in, hopefully. Let’s see. [Snowden appears 06:54] Ed, can you hear us? [Extended applause and cheers]

[07:00] Edward Snowden: Hello, Boston! Thank you. Wow!

DKG: Ed, you can’t see it, but people seem to be standing, right now.

ES: Thank you. Thank you. Wow! Thank you so much. Please, if I could say one thing. When we were introduced, the thing that always surprises me is that people say, you know, “Thank you” to me. But this is an extraordinary event, for me personally, because I get to say, “Thank you” to you. So many people forget – maybe people haven’t seen Citizen Four, for example, the documentary where they actually had the camera in the room when the NSA revelations were happening – but if you watch closely in credits, they thank a number of FOSS projects, including Debian, Tails, Tor, GnuPG, and so on and so forth. And that’s because, what happened in 2013 would not have been possible without free software. I did not use Windows machines when I was in my operational phase, because I couldn’t trust them – not because I knew that there was a particular backdoor or anything like that – but because I couldn’t be sure. Now, this ambiguity - this fear - this risk - that sort of creates this atmosphere of uncertainty that surrounds all of us - is one of the central problems that we in the security space – in the software space, in general – the connection space of the Internet - in the way that we relate to one another – whether it’s in politics, or law, or technology – is this thing that really is difficult to dispel. You don’t know it’s true. You don’t know it’s fact or not. Some critics of sort of the revelations and what happened - they say “Yeah, ah, we all knew that. Everyone knew that was happening. We figured that out.” And the difference is, many of us suspected – technologists suspected – specialists suspected – but we didn’t know. We knew it was possible. We didn’t know it was actually happening. Now, we know. And, now, we can start to make changes. We can integrate these threats into our threat model. 
We can adjust the way that we not just vote – not just the way we think about the issues – but the way that we develop, direct, and steer the software and systems that we all rely upon, everyday, that surround us invisibly in every space. Even people whose lives don’t touch the Internet - people who still have to go to the hospital - people who still may have a record of purchasing something at this location or that – somebody who spends money through banks – people who purchase something in a store – all of these things touch systems upon which we must all rely, but increasingly cannot trust - because we have that same Windows problem. Now, since 2013, I think everyone in the audience – this isn’t going to be controversial for you – would agree that Windows isn’t exactly moving in the right direction. They may be putting forth sort of new exploit mitigations, making things a little more difficult for buffer overflows and things like that, including ASLR, and everything like that, which is great, but at the same time we’re putting out an operating system like Windows 10, that is so contrary to user interests, where rather than the operating system working for you, you work for the operating system, you work for the manufacturer. This is not something that benefits society, this is not something that benefits the user, this is something that benefits the corporation. Now, that’s not to say “All corporations are evil.” That’s not to say, “I’m against private enterprise” or that you should be. 
We need to have systems of business, to be able to develop things, to go sell things, to trade and engage with each other, to connect and for [inaudible] – but, while sometimes corporations are on our side, sometimes corporations do stand up for the public interest, as is right now, Apple challenging the FBI, who is asking to basically smother the security of every American device, service, and product, that’s developed here, and ultimately around the world, while it’s still in its crib. We should not have to rely on them. And this talk today, I hope, is about where we’re at in the world, and thinking - for everyone in the audience - not what people say, not, you know, this fact or this authority, but what you believe, what you think, is the right way to move forward.

[12:26] DKG: So, I wanted to touch on that, on the questions around the security of free software and the security of non-free software as well. The Apple case is an interesting one, because it is a chance for us to, I think, continue to move the conversation forward about what protections are actually offered to users. There’s a lot of situations here where people are saying, “Well, the Apple phones are more secure because they got this lock-down.” And I think, I’d be curious to hear your take on, how do we respond to that? What are the trade-offs here, between the lock-down on Apple devices and the other possibilities - on hardware that’s maybe not so locked down?

[13:15] ES: A lot of people have difficulty distinguishing between related concepts – one of which is security, the other of which is control. Now, a lot of politicians have [inaudible] those issues, and have said this is a conversation about where we draw the line, between our rights and our security, or between liberty and surveillance, or whatever. But that really misses the point. And this is the central issue in that sort of Apple walled-garden approach. Apple does produce some pretty reliable exploit protections. Does that mean it’s a secure device? Well, they can push updates at any time, that they sign in an arbitrary way, that can completely change the functionality of the device. Now, we trust them currently, because many trust them, not to abuse that, and we’ve got at least some indication that they haven’t, which is a positive thing. But the question is, Is the device truly secure when you have no idea what it’s doing? And this is the problem with proprietary software. This is the problem with closed-source ecosystems, that are increasingly popular today. This is also the problem even with some open systems - or the more open systems, like the Android space - where security updates are just a complete, comprehensive, fractured disaster. [ES and audience laugh]. I don’t mean to go too far, but I’m sure you guys have heard this stuff. So nobody’s going to go stand up with a question, and then read me a speech about why this is wonderful. Um. But the challenge here is, Are there alternatives, right? And we know, from the Free Software Movement, that there are. You will notice in this talk, as the moderator introduced, that there’s no Google logo up here [ES points over his shoulder] for like the first time. Not the first time ever - I have done many talks on FOSS stuff - but never a full FOSS stack. Right now this is a complete stack, that’s completely free and open-source. 
And this is important, because what we do in our spaces, where we are a little more technical - we are a little more specialist - we can put up with more inconvenience - we develop the platforms, the capabilities, the strategies, that can then be ported over to benefit the entire class of communities that are less technical and, in many cases, simply cannot afford or access proprietary software in the traditional market-driven ways. Now, this is critical, because some of the most brilliant people I know, particularly Linux contributors, and so on and so forth, got their start - not because they necessarily believed in the ideology - but because they couldn’t afford licenses for all this different software, and they hadn’t yet developed sort of the technical sophistication to realize that they could just pirate everything. Now, this [audience and Snowden laugh] … this is actually a beneficial thing, and something I want everyone in the room to watch out for, right. Look for these people. This community that we have, that we’re building, that does so much for some people, has to grow, because we can’t compete with Apple, we can’t compete with Google directly, in the field of resources. What we can, eventually, compete on is head-count and heart-count. We can compete on the ground of ideology because ours is better [audience and Snowden laugh; audience applauds] … but we also have to focus on recruitment, on bringing people in, and helping them learn, right. Everybody got started somewhere. I did not start on Debian. I did not start on Linux. I was an MCSE, right, I was a Microsoft guy, ‘til eventually I saw the light. This doesn’t mean that you cast off … this doesn’t mean that you can’t use any proprietary software. I know Richard Stallman’s probably at the back and he’s waving his finger. [audience and Snowden laugh] But we’ve got to recognize that it’s a radical position to say that you can’t engage with proprietary software at all. 
That’s not to say it’s without merit. The world needs radicals. We need lessons. We need leaders. We need figures who can pull us in the direction of trying new things, of expanding things, and recognizing that in a world where our visibility into the operation of our devices – whether it’s a washing machine, a fridge, or the phone in your pocket – is something that increasingly you have no idea what is going on, or, even if you want, you have no control over, short of exploiting it and trying to get /root, and then doing it on your own. That’s a fundamentally dangerous thing. And that’s why I call it the last lighthouse, right. The people in this room – whether you’re more on the radical side or more on the mainstream side – you’re blazing a trail, you’re recognizing solutions, and going “Look, we can deal with the software problem. We can do our best, but we recognize it’s a challenge.” But there are more problems that are coming, and we’re going to need more people, who are going to solve them. Everybody’s talking about the difficulties of software trust, but we really need to start thinking about hardware trust, right. There are distributions and projects like this - the Qubes project, researchers like Invisible Things Lab, Joanna Rutkowska, and others who are really focusing on these things, as well as many people in the Free Software Foundation. And we need to think about the world, where – alright, maybe the FBI – didn’t get a backdoor in the iPhone. But maybe it doesn’t matter, because they got the chip fabs. Maybe they already do. We need to think about a world where the infrastructure is something that we will never control. We will never be able to put the commercial pressure on telecommunications providers to make them watch the government, who they have to beg for regulatory licenses to actually operate the business. But what we can do is layer our own systems on top of their infrastructure. Think about things like the Tor project. 
Tor’s incredible. I use Tor every day. I rely on Tor. I used it during the actual NSA work I did as well. And so many people around the world do. But Tor is showing its age, right. No project lasts forever. And we have to constantly be focused, we have to constantly be refreshing ourselves, and we need to look at where the opportunities are, and where the risks are. I should pass it back to Dan, because I just rambled for, like, twenty minutes. [audience laughs]

[19:55] DKG: Well, I think, so what you’re saying about how do we bring more people to the movement, I think, is really important. So I, I mean, I’ll say I came to free software for the technical excellence and I stayed for the freedom, right [audience and Snowden laugh] I came to free software at a time when Debian was an operating system that you could just install and automatically update and it worked. That didn’t exist elsewhere. I used to have Windows systems, where I was wiping the machine and re-installing every two months. [Snowden, audience and DKG laugh] and, I think a couple of people raised their hands, people have been there. So, you know, come for the technical excellence, and as I learned about the control that I ended up actually having and understanding what was going on in the systems … that became the reason that I stayed. It didn’t matter, as the other systems sort of caught up, and realized “Oh, well, we can actually do automated updates. Microsoft has a system update thing that they do.” So, I’m wondering if you have other ideas about maybe what are ways that we can expand our community, and what are ways we can sustain our community as we grow. I think maybe that’s a question for everyone in this room. But I’d be curious to know if you have any particular ideas or suggestions. Not everyone who comes to the community is going to be geeky enough to want to know what code is running on their refrigerator. But in ten years everybody’s refrigerator is going to be running code, and so how do we, like, how do we make sure that that message gets out? That people can be proud to have a free software fridge [audience and Snowden laugh] without being a free software hacker. What are ways that we can expand the community?

[21:42] ES: Well, one of the main ways is, we got to be better, right. If you have a free software fridge, it’s got to be better, it’s got to be more interesting, it’s got to be more capable, it’s got to be more fun than the proprietary equivalent. And the fact that, in many cases, it’s free is a big selling point. But beyond that – beyond the actual competitive strategy – we need to think about, as you said, the community strategy. And I don’t like [inaudible] for authority - especially from big talking heads on the wall – but, I would say, that everybody in the room should take a minute to think about their part in it, what they believe in, what they value, and how you can protect that, and how you can pass that to people who come after you, right. ‘Cause you can’t wait until your death bed, you know, like eighty, to make this happen. It’s something that has to be life-long practice, particularly in the context of organizing, particularly in the context of growing a group, particularly a group of belief. I would say, everybody in the room should make a task for themselves, that this year, bring five people into the free software community. Now, that seems really difficult. But when you think, you know, well alright, at any level - whether they just sign up for a membership when they donate, whether they do a basic commit on some Git somewhere, even if it’s just changing something cosmetic, making something a little bit more user-friendly. Even if it’s just a pull-request or a fork or branch that they’re using only for themselves, …

[23:10] DKG: Or a bug report.

ES: … or a bug report, even better. It’s important, because what we’re trying to do is, we’re trying to expose people to the language of empowerment, right. And that’s what this is really about. This gets back to the whole thing from before, whether it’s privacy versus security, or security versus privacy. It’s not about privacy versus security, because when you’re more secure, you have more privacy; when you have more privacy, you’re a lot more secure as well. This is really about power, right. When we look at how these programs have actually been used in a surveillance context, it’s not just against terrorists, right. The GCHQ was using NSA systems to intercept the emails of journalists. They spied on Amnesty International. They spied on other human rights NGOs. In the United States, we used our capabilities to spy on UNICEF, the children’s fund, right, for the UN. And this was not the only time. When we looked at their actual statistics, we saw they abused their powers or broke the law 2,776 times in a single calendar year. Now, this is a problem for a lot of reasons, not least of which is the fact that no one was ever charged, right, no one was prosecuted, because they didn’t want to reveal the fact that these programs existed. But when we talk about what this means for people, right, ultimately it gets into that world of - Are you controlling your life? Are you controlling the things around you? Do they work for you? Or do they work for someone else? And this language of empowerment is something, I think, that underlies everything that your organization has been doing, not just in the defense of liberty sense, or the “free as in kittens” sense, [audience and Snowden laugh] but the idea that, look, right, we’re no longer passive in our relationship with our devices.

[25:03] DKG: Yeah, so when I think about the devices that we need to be, to have some level of control over, there’s, I mentioned the refrigerator earlier, but, you know, increasingly we’re dealing with things like cars that have proprietary systems with over-the-air updates. [Snowden laughs] We’re dealing with more and more of our lives, our intimate conversations are mediated through these devices, and so, it’s interesting for me to think about how do we, how do we approach an ecosystem where there seems to be, maybe we actually now do have fully free computers, thanks to people in this room, we actually have, you know, laptops that are free from pretty much the BIOS upwards, including coreboot, but how do we get, as more things become computerized, how do we get, how do we make sure that people’s cars aren’t, don’t themselves become surveillance devices, how do we make sure the little pocket computers that everyone carries around actually aren’t surveillance devices for people? And, so I think one of the things that points to is that, as a community that cares about user empowerment, which is, this is freedom zero, right, the freedom to use these tools the way you want to use them, we have to, I think, make outreach also to communities with shared values. And you mentioned open hardware communities, people who are building tools that maybe we can have some level of control over, in the face of a bunch of other pieces of hardware that are under control. But there’s additional communities that we need, I think, to also reach out to, to make sure that this, that this message of, you know, surveillance is this power dynamic, and we’re hoping that your control over your devices will actually provide people with some level of autonomy. And that means that we need to have more outreach to, I mean, to think about what’s going on, on the network stack itself. I mean, this is something I’ve focused on. 
If the protocols that we use are implemented in free software, but the network protocols are very leaky, that doesn’t actually provide people with what they want to do. It’s not very easy for people to come along and change the protocol, if it’s a communications protocol. So I think we need to look at the network standards, we need to look at regulatory standards, so I’m happy, I’m hoping there are lawyers in the room, I suspect there are, well, there’s a couple of people raising both hands. [Snowden and audience laugh] So, but that kind of outreach - can we have regulatory guidance that says, “If you’re going to put a vehicle on the road, it needs to be running free software”? I mean, that’s a super-radical position today. Can we make that not a radical position? How can we, how can we make that outreach into the communities of non-geeks to make sure that these messages about power and control, which are central to our lives in a heavily technologically-mediated society, actually are addressed in all of the places where they’re addressed? I don’t know if you have other particular places, where you can imagine outreach, Ed, a community to ally with?

[28:25] ES: You hit a big point with the network problem. That gets back into the fact that we can’t control the telecom providers, you know, we’re very vulnerable to them. If you wanted to compress the story of 2013 to its – leaving politics aside, right, leaving the big democratic question of the fact politicians were telling us this wasn’t happening, intelligence officials were giving sworn testimony saying this wasn’t happening, when it obviously was – and we focus just on the technical impact, and we want to compress it to a central point - it would be that the provider is hostile. The network path is hostile. And we need to think about mitigations for that. Now, we need to think about also where all the providers are, what they are, and how they can be replaced. Now, open hardware is one of the big challenges here. We’ve seen some advances like the Novena laptop. We’ve seen some other things like Purism, and many others that I haven’t named directly. But there’s a large question here, where if we can’t control the telecommunications provider, if we can’t control the chip fabs, right, how can we assert things? Well, the first solution was, Encrypt everything. And this is an important first step, right. It doesn’t solve the metadata problem, but it’s an important first step. The next step is Tunnel everything, right. And then the step beyond that is, Mix everything, so we can munge the metadata and it’s hard to tell where things went. Now, there’s still theoretical problems with a global passive adversary, timing attacks, and what not, but you make this more expensive, and less practical with each step we go beyond, and then there’s somebody in this room, who likely has the idea that none of the rest of us have, on how to really solve this. And this is what we need. Also in the hardware space. 
Is it possible, that rather than getting these very specialized chips, that’s exactly this - I do exactly that, I have exactly this instruction set, and I’m inflexible - we realize that because we’re sort of bumping up to the limits of physical die shrinks at this point, that we could reach a point that, maybe we start changing our architecture a little bit more radically. We have flexible chips, things that are like FPGAs for everything. And instead of getting a hyper-specialized chip, instead we get a hyper-capable chip that can simply be used in any arbitrary manner, and this community shares masks of our own design, that are logical masks, rather than physical masks, for designing and directing how they work. [pauses] There’s another question here that I actually don’t know a lot about, but I think Daniel you’ve done some research on this, is – when we get into the actual toolchaining, right, how do we build program devices and things like that? For myself, I’m not a developer full-time. That was never my focus. And there’s this question, we’ve seen sort of attacks, including in, like, the NSA documents, the XcodeGhost type thing, where an adversary, an arbitrary adversary, will target a developer, right, and rather than poison a specific binary, rather than trying to steal their signing key or something, like that, or in addition to stealing their signing key, they’ll actually go after the compiler. They’ll actually go after their toolchains. Or, on the network, they’ll start tracking people, and the activities of developers, even if they start working pseudonymously, because they’ll look at their toolchains, they’ll look at, Is there some cruft? Is there some [inaudible] is there some artefact? Is there some string that constantly repeats in their work? Is there some variable that’s unique to them and their work, that identifies them, even if they’re under [inaudible] How do we head this off?

[32:08] DKG: Right, this is, like, one level past the I Hunt Sysadmins slide, right this is the I Hunt Developers slide, and I would hope that the free software developers in this room care about that issue, right. I mean, I certainly, I know that, as a free software developer, lots of people take their responsibilities seriously. You don’t want to release bad code – sometimes, occasionally, some people maybe make some mistakes, some bugs [Snowden laughs] but we take the responsibility seriously. We want to fix the bugs that we make. But what if your toolchain is corrupted? What if you do get targeted? If you’re maintaining a piece of core infrastructure, like many people in this room probably are, how do you ensure that a target, a targeted attack on you doesn’t become an attack against all of your user base? I think we actually, what’s great is we actually have people working on this problem. I know there’s a talk later today, or tomorrow rather, about reproducible builds, which is an opportunity to make sure we get, you can go from, I’m not going to give the talk in five minutes here [Snowden laughs] I’m just going to give an outline. You should definitely check it out. But the goal is you can go from your source code through your toolchain and get a reproducible, like byte-for-byte identical value. And so that way, as a software developer, you know that any attack against your toolchain doesn’t matter, as long as the tools, as long as you’re publishing the source code that you mean to publish, your users can rely on tools that are built by many different builders, that will all produce the same result, and they can verify that they’re getting the same thing from each party. 
We’re not there yet, because our tools are still kind of, kind of crufty, they build in some arbitrary things, but we’re making great strides towards making non-reproducibility itself something we can detect, and stamp out as a new class of bugs, that we can rule out, and that gives us a leg up also against the proprietary software community, where they can’t simply do that, if they don’t have the source code even visible, they have no way of saying, “Look, this is the human intent, the human-readable intent, and here’s the binaries that come out, that other people can do.” So reproducible builds are one path to that kind of trust, and I think there are probably others, and I hope people are actively thinking about that. The other way that I’ve heard this framed is, “Do you want to be the person who gets made an offer that you can’t refuse, right?” [Snowden laughs] If you’re a free software developer, and you’re publishing your source code, people can see what you publish, and they can say “Hey, did you really mean to do this?” But if you’re just distributing binaries, or you’re distributing your source code next to binaries, and your binaries are compromised, anybody who is looking at your source, at your disk, will say “Well, the disks all look clean,” and then your binaries could be compromised. So, personally, as a free software developer, I don’t want to be in that position. I don’t want to be giving anybody any binaries. I want to be able to give them changes that are human-readable. So, we’re running a little bit low on time, and I want to make sure that if people have questions, they get a chance to ask questions. There’s a couple other things that I’d love to talk with you about, but if people have questions I’m going to ask that you come down and line up here at the mic, if you have a question. A couple of people are starting. 
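The byte-for-byte check described here can be sketched briefly. The snippet below is an editorial illustration in Python, not any project's actual tooling; the simulated build output is a made-up placeholder standing in for two independent builders' artifacts:

```python
import hashlib
import os
import tempfile

def artifact_digest(path, chunk_size=65536):
    """Hash a build artifact so independent builders can compare results."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate two builders producing output from the same source with a
# deterministic toolchain: the bytes are identical by construction.
build_output = b"\x7fELF placeholder bytes for a deterministic build"
paths = []
for builder in ("builder-a-", "builder-b-"):
    fd, path = tempfile.mkstemp(prefix=builder)
    with os.fdopen(fd, "wb") as f:
        f.write(build_output)
    paths.append(path)

digests = [artifact_digest(p) for p in paths]
print(digests[0] == digests[1])  # True: the builds match byte for byte
```

In practice the Reproducible Builds effort compares digests like these across many independent builders; any mismatch signals either non-determinism in the toolchain or a compromised builder, which is exactly the "offer you can't refuse" scenario this defends against.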
But before the questions start, I’m just going to lead off with one, which is, have you got any ideas for how we address the, you mentioned the Android lack of security updates, how do we address, any ideas or suggestions for how we address the stability versus legacy compatibility versus actually security updates quandary? [Snowden smiles]

[35:44] ES: So, this is, like, [Snowden laughs]

DKG: In one minute … it’s easy.

ES: If I could solve this, I’d have an easier time getting the Nobel prize, right. [audience and Snowden laugh] But the challenge here is that there’s a real impact to support for legacy. Everybody knows this. But the users don’t accept so well, that there’s a gigantic security impact, that makes it actually unethical to support really out-of-date versions, to keep people anchored back in that support chain. Because it’s not just a question about versioning, it’s not just a question about stable, right. Stable is important, but increasingly we’re finding out the pace of adversary offensive research is so fast that if our update cycles are not at least relevant to that attack speed that we’re actually endangering people. And by being one of the users who’s out there on a mailing list [Snowden gestures mockingly] “Oh, this breaks functionality blah-blah-blah for my floppy-disk driver in my virtual machine.” It’s like, “Yo! Stop using a floppy disk in your virtual machine!” [audience laughs]

[37:03] DKG: So, we’ve got a queue of questions. I want to make sure that we get to them. I might need to repeat the questions into the mic. I’m not sure whether you’ll hear them. Go ahead.

[37:13] Question #1: Hi, my name is Curtis Glavin. Thanks for taking my question. I was wondering, should a priority for the free software community be developing a kind of killer app for privacy, security, and a lot of these ideas that we all care about, that could gain, that could gain widespread adoption and transform public opinion, mainstream public opinion on these issues? And, if so, what could such a kind of killer app be, and how could the community build it? Thanks.

[37:54] ES: Absolutely. I mean, we start to see some of these things happening, particularly in the [inaudible] space, where we’re on ecosystems, where we have less control over, or they’re starting to put apps there. Now, can we create a competing stack? And, more importantly, as you say, the first is capability, because that’s what people care about, that’s what the user cares about, is capability. We see things like Signal, that are starting to try to tackle this, in the messaging space, right. But Signal’s not perfect, right. Signal has weaknesses. It leaks metadata, like telephonic contact, your address list, and things like that. And we have to figure out, are there ways that we can change collaboration? Now, here’s the big one, right. Living in exile kind of gives you an interesting perspective on different ways that people interact. One of the big ones is the fact, look, there’s a warrant against me, right. If I was trying to speak with you at MIT, there’d be like an FBI raid and paddy wagons outside. But because of technology, here I am, and it’s all FOSS. But that’s only the beginning, because there are other alternative functions out there. We’re trying to compete. We’re trying to replicate. We’re trying to distinguish. Can we get there first? Now, one of the big technologies - the disruptive technologies - that’s out there today, that’s coming out this year, are obviously the VR stuff that is starting to take off. We’ve got the Oculus Rift, we’ve got the HTC Vive, and, of course, there’ll be many different versions of this. Can we take the hardware and create our own applications for addressing the remote-work problem, right? Can you create a virtual workspace, a virtual meeting space, a virtual social space, that’s arbitrary, where you and your peers can engage? They can look over your shoulder, as if you were sitting in the same office, and see your terminal visibly in front of you in this virtual space, without regard to your physical location? 
Now, I’m sure, there are commercial providers out there, proprietary actors out there, who are trying to create this. You know, Facebook would be completely negligent if they weren’t trying to do it. But if we can get there first, and we can do it securely, we can do something that Facebook simply can’t. Their business model does not permit them to provide privacy. We can. We can do the same thing. We can do it better. We can do it faster. And if we do, it will change the world, and it will change the politics of every person in every country, because now you’ll have a safe space - everywhere, anywhere, always. [applause]

[40:37] Question #2: Hi. My name is Sascha Costanza-Chock. Thank you so much. You mentioned a couple times in your comments, sort of nodded to the idea, that we abandon the infrastructure space and we build on top of, you know, on top of existing infrastructure. And I wonder if you could just make a couple comments about the communities that are trying to do DIY and community-controlled infrastructure? So, there are projects like OpenBTS. There are community organizations like Rhizomatica, that’s building free phone and internet networks in rural Mexico. There are projects like the Red Hook Initiative that’s training people how to build community-controlled wireless infrastructure in Brooklyn. There are projects like Detroit’s Digital Stewards that are doing the same thing in Detroit. And all over. There are people sort of bubbling up around the edges to do community infrastructure. And I wonder if you could comment a little bit more on, Yes, these things are longshots, but maybe we shouldn’t abandon this space, the imaginary space of the possible liberatory future where we do own our infrastructure as well?

[41:40] ES: I agree - and actually if you could stay at the mic there for just one second – because that is, that’s a powerful idea. Now, I have less familiarity with that. I’m not going to try to BS anybody. Nobody’s an expert in everything, right. I’m not as familiar with community infrastructure projects. When I think about that, I think about OpenWrt, DD-WRT, and so on. But that level, where we’re actually talking about, you know, knitting together meshnets, or small-scale cell networks, that’s awesome, and we should do more about it. I think we will have the most success, personally, where we’re leap-frogging over technologies, be more mobile, more agile, and we don’t have the same kind of sunk infrastructure costs, because, ultimately, infrastructure is what can be targeted by the adversary – whether it’s a criminal group, whether it’s a government. If we have things invested in boxes and spaces, those are things that a police car can drive up to. That’s not to say they’re not worth doing – that’s not the case. But if I could just ask you briefly to comment on that, since you do have more familiarity, and maybe everybody in the room could benefit from it - What do you see as the way forward in the next space of communications fabric? [another questioner comes to the mic] I was actually asking him a follow-up. But that’s fine, let’s just have the next question.

[43:04] Question 3: Hi, this one may be a partial regurgitation of the last one. Daniel Gnoutcheff, Sysadmin, Software Freedom Law [Center]. Oh, my goodness, sorry. Moving on. So, one of the responses I’ve seen to revelations of global surveillance is the rise of self-hosting projects, such as Freedom Box, that are trying to provide people with tools to move their data out of the cloud, so to say, so to speak, and into personal devices sitting in their own homes. Do you believe that these sorts of tools, such as Freedom Box, provide a reasonable defense to global surveillance? And, what would your advice be to Freedom Box and similar projects?

[43:52] ES: Yeah, absolutely. So this is one of the critical things in the values, where community infrastructure, like that open infrastructure, can actually be really valuable, even if it’s not global - in fact, especially if it’s not global. In my experience, so I worked at the NSA, right, actually with the tools of mass surveillance. I had XKEYSCORE at my desk every morning I went in. I had scheduled tasks that were just pumping out sort of all of my different targets, all of their activities around the global internet, including just general activities on subnets that I found interesting, right. If I wanted to see anybody in an entire subnet – they just sent a certain pattern of ping - I could get that, it would be just there waiting for me. It’s easy to do. If you can write RegEx you can use XKEYSCORE. And you don’t even need to do that, but more advanced people do. Everybody else was just typing in “Bad guy at” [audience laughs] but the idea here is that even, even mass surveillance has limits, right - and that’s the size and granularity of their sensor mesh, right. They have to compromise or co-opt a telecommunications network. They have to hack a router, implant, and then put a targeting interdiction on that, to go “Are any of my interesting selectors - IP addresses, emails, classes of information, fingerprints of activity, anything like that - passing this?” Then I’ll add it to, sort of, my bucket, that will come back as results. And what this means is, that for ordinary people, for dissidents, for activists, for people who want to maintain their privacy, the fewer hops that you cross, the lower, the more local your network - particularly if you’re outside of, sort of, these big telecommunications spaces - the safer you are, because you can sort of live in those gaps between the sensor networks, and never be seen.
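The "if you can write RegEx you can use XKEYSCORE" remark can be made concrete with a toy sketch. To be clear, this is not XKEYSCORE's actual interface; it is a hypothetical Python illustration of matching a selector pattern against metadata records that cross a monitored link:

```python
import re

# Hypothetical metadata records, as a sensor on a monitored link might log them.
records = [
    {"src": "203.0.113.7", "email": "alice@example.org"},
    {"src": "198.51.100.9", "email": "bob@example.net"},
    {"src": "203.0.113.42", "email": "carol@example.org"},
]

# A "selector" is just a pattern; here, any source address in a subnet of interest.
selector = re.compile(r"^203\.0\.113\.")

# Everything matching the selector lands in the analyst's "bucket" of results.
hits = [r["email"] for r in records if selector.match(r["src"])]
print(hits)  # ['alice@example.org', 'carol@example.org']
```

The gap-in-the-sensor-mesh point follows directly: traffic that stays on a small local network and never crosses a monitored link never enters `records` in the first place.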

[45:48] DKG: So, unfortunately we’re running low on time here. We’ve got less than five minutes left. So maybe we can take one last question.

ES: Sure.

DKG: Sorry, I know there are people in the queue, but …

[46:00] Question 4 [a young man]: Hello. I wanted to ask. What is someone my age able to do, who is like in middle school or high school, to kind of help out?

[46:10] ES: First thing is care. If you care, you’ll learn. If you learn [applause] It’s not meant to be pat. A lot of people don’t care. And it’s not that they don’t care because they’ve looked at it, they understand it, and they go “It doesn’t matter.” It’s because everybody’s only got so many minutes in the day, right. There’s a contest for attention. There’s a contest for mind-share. And we can only be a specialist or an expert in so many things. If this is already interesting to you, right, you’re already on the right track. And you can do a lot of good. You can develop tools that will change lives, and maybe even save them. The key is to learn. The key is to develop capability, and to actually apply it. It’s not enough to simply care about something – that’s the start. It’s not enough to simply believe in something – that’s the next step. You actually have to stand for something. You have to invest in something. And you have to be willing to risk something to make change happen. [applause]

[47:35] DKG: So … sorry. We’ve got a bunch of other talks lined up today. And we don’t want to end up blocking them. But Ed, thank you for joining us. We really appreciate it.

ES: It’s my pleasure. Thank you so much. Enjoy the conference!

Our fragmenting Europe and DiEM's response - Open Democracy 20160319


A video edit from the 1st Session of the Democracy in Europe Movement 2025 (DiEM25) launch in Berlin, kick-starting the conversation that DiEM25 is facilitating to put the demos back into Europe’s democracy: how to address the various crises currently tearing Europe apart, and how to organise our political interventions in our localities, regions, countries and, of course, at the European level. (27 mins.)

Session / OUR FRAGMENTING EUROPE & DiEM’s RESPONSE: Re-conceptualising the centrifugal forces and multiplying divisions tearing the EU apart: refugees, borders & fences (Schengen), Fortress Europe (Frontex), inequality-poverty, chauvinism, nationalism, insecurity.

Introduction & Moderation: Srećko Horvat (DiEM, Croatia)
1. Baier, Walter (Transform! Network – Austria)
2. Buden, Boris (public intellectual / activist, Germany/Croatia)
3. Büllesbach, Daphne (European Alternatives / Germany)
4. Dehm, Diether (MP, Germany)
5. Fassin, Eric (Academic, France)
6. Guérot, Ulrike (European Democracy Lab, Germany)
7. Krempaska, Alena (Human Rights Institute, Slovakia)
8. Martin, Luis (Journalist, Spain)
9. Meddeb, Hind (Filmmaker/Journalist, France)
10. Morel Darleux, Corinne (Activist, Deputy for Rhône-Alpes, France)
11. Mezzadra, Sandro (Public Intellectual-Activist, Italy)
12. Negri, Toni (Public Intellectual-Activist, Italy)
13. Papastergiadis, Nikos (Academic, Australia-Greece)
14. Pisarello, Gerardo (First Deputy Mayor, City of Barcelona, Cataluña)
15. Richter, Angela (Theatre Director, Germany)
16. Sierakowski, Sławek (Krytyka Polityczna, Poland)
17. Tsomou, Margarita (Author-Journalist, Germany/Greece)
and Jacob Appelbaum, independent journalist, computer security researcher and core member of the Tor project.
(Video: Hind Meddeb)