Unprecedented and Unlawful: The NSA’s “Upstream” Surveillance - Just Security 20160919

The FISA Amendments Act of 2008 (FAA) — the statute the government uses to engage in warrantless surveillance of Americans’ international communications — is scheduled to expire in December 2017. In anticipation of the coming legislative debate over reauthorization, Congress has already begun to hold hearings. While Congress must address many problems with the government’s use of this law to surveil and investigate Americans, the government’s use of “Upstream” surveillance to search Internet traffic deserves special attention. Indeed, Congress has never engaged in a meaningful public debate about Upstream surveillance — but it should.

First disclosed as part of the Snowden revelations, Upstream surveillance involves the NSA’s bulk interception and searching of Americans’ international Internet communications — including emails, chats, and web-browsing traffic — as their communications travel the spine of the Internet between sender and receiver. If you send emails to friends abroad, message family members overseas, or browse websites hosted outside of the United States, the NSA has almost certainly searched through the contents of your communications — and it has done so without a warrant.

The executive branch contends that Upstream surveillance was authorized by the FAA; however, as others have noted, neither the text of the statute nor the legislative history support that claim. Moreover, as former Assistant Attorney General for National Security David Kris recently explained, Upstream raises “challenging” legal questions about the suspicionless searching of Americans’ Internet communications — questions that Congress must address before reauthorizing the FAA.

Because of how it operates, Upstream surveillance represents a new surveillance paradigm, one in which computers constantly scan our communications for information of interest to the government. As the legislative debate gets underway, it’s critical to frame the technological and legal issues that Congress and the public must consider — and to examine far more closely the less-intrusive alternatives available to the government.

Upstream Surveillance: An Overview

As we’ve learned from official government sources and media reports, Upstream surveillance consists of the mass copying and content-searching of Americans’ international Internet communications while those communications are in transit. The surveillance takes place on the Internet “backbone” — the network of high-capacity cables, switches, and routers that carry Americans’ domestic and international Internet communications. With the compelled assistance of telecommunications providers like AT&T and Verizon, the NSA has installed surveillance equipment at dozens of points along the Internet backbone, allowing the agency to copy and then search vast quantities of Internet traffic as those communications flow past.

The NSA is searching Americans’ international communications for what it calls “selectors.” Selectors are, in essence, keywords. Under the FAA, they are typically email addresses, phone numbers, or other identifiers associated with the government’s targets. While this might sound like a narrow category, the reality is much different, as Jennifer Granick and Jadzia Butler recently explained. That’s because the NSA can target any foreigner located outside the United States who is believed to possess “foreign intelligence information” — including journalists, human rights researchers, and attorneys, not just suspected terrorists or foreign spies. At last count, the NSA was targeting more than 94,000 people, organizations, and groups under the FAA.

In practice, that means the NSA is examining the contents of each communication for the presence of tens of thousands of different search terms that are of interest to the government. And that list continues to grow, as the NSA adds new targets and entirely new categories of selectors to Upstream surveillance. Whenever the NSA finds a communication that contains a “hit” for any one of its many selectors, it stores that communication for the agency’s long-term use and analysis — and it may share those communications with the FBI for use in criminal investigations.
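The mechanics described above can be illustrated with a deliberately simplified sketch. This is a toy model only, not the NSA's actual implementation, and the selector values are hypothetical; it shows just the core idea: every communication's content is examined against a list of selectors, and any "hit" is retained.

```python
# Toy model of selector-based content scanning. Hypothetical selectors only.
SELECTORS = {"target1@example.com", "+15550100", "target-org.example"}

def scan(communication: str) -> bool:
    """Return True if the communication's content mentions any selector."""
    return any(sel in communication for sel in SELECTORS)

def process(stream):
    """Retain only the communications that 'hit' on at least one selector."""
    return [msg for msg in stream if scan(msg)]

retained = process([
    "lunch tomorrow?",                        # no hit: not retained
    "please forward to target1@example.com",  # hit: retained for analysis
])
```

Note that even in this simplified form, deciding whether a message is a "hit" requires reading the full content of every message in the stream, including the ones ultimately discarded — which is the crux of the bulk-searching objection discussed below.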

“About” Surveillance

Observers, including the Privacy and Civil Liberties Oversight Board (PCLOB), have singled out one feature of this surveillance as especially controversial: what’s often called “about” surveillance. This term refers to the fact that the government is not only intercepting communications to and from its targets, but is systematically examining the communications of third parties in order to identify those that simply mention a targeted selector. (In other words, the NSA is searching for and collecting communications that are merely “about” its targets.)

“About” surveillance has little precedent. To use a non-digital comparison: It’s as if the NSA sent agents to the U.S. Postal Service’s major processing centers to engage in continuous searches of everyone’s international mail. The agents would open, copy, and read each letter, and would keep a copy of any letter that mentioned specific items of interest — despite the fact that the government had no reason to suspect the letter’s sender or recipient beforehand. In the same way, Upstream involves general searches of Americans’ international Internet communications.

Upstream Surveillance Is Bulk Searching

Although the government frequently contends otherwise, Upstream surveillance is a form of bulk surveillance. To put it plainly, the government is searching the contents of essentially everyone’s communications as they flow through the NSA’s surveillance devices, in order to determine which communications contain the information the NSA seeks. While the government has “targets,” its searches are not limited to those targets’ communications. Rather, in order to locate communications that are to, from, or “about” its targets, the government is first copying and searching Americans’ international communications in bulk.

There is no question that these searches are extraordinarily far-reaching. The leading treatise on national-security surveillance, co-authored by former Assistant Attorney General David Kris, explains that the “NSA’s machines scan the contents of all of the communications passing through the collection point, and the presence of the selector or other signature that justifies the collection is not known until after the scanning is complete.” Likewise, the Foreign Intelligence Surveillance Court (FISC) has made clear that the NSA is searching the full text of every communication flowing through the surveillance devices installed on certain international backbone links.

For technological reasons, Upstream surveillance — at least as it’s conducted today — necessarily ensnares vast quantities of communications. When an individual uses the Internet, whether to browse a webpage or send an email, his computer sends and receives information in the form of data “packets” that are transmitted separately across the Internet backbone. As Charlie Savage recently explained in Power Wars, “when an e-mail is transmitted over the Internet, it is broken apart like a puzzle. Each piece of the puzzle travels independently to a shared destination, where they converge and are reassembled. For this reason, interception equipment on a switch in the middle cannot grab only a target’s e-mail. Instead, the wiretapper has to make a copy of everything.” While the NSA may exclude certain types of irrelevant traffic — like Netflix videos — it can identify the communications it’s seeking only by copying and searching the remaining Internet traffic in bulk.
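Savage's puzzle analogy can be made concrete with a small sketch. This is illustrative only, not real IP networking: a message is split into independently routed packets, and packets from many unrelated flows interleave on a shared link, so equipment sitting mid-path cannot isolate one flow's content without first copying and inspecting everything passing through.

```python
import random

# Illustrative sketch, not real IP. A packet is (flow_id, offset, chunk).
def fragment(flow_id, message, size=8):
    """Split a message into independently routed packets."""
    return [(flow_id, i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets, flow_id):
    """Rebuild one flow's message from a mixed stream of packets."""
    frags = sorted(p for p in packets if p[0] == flow_id)
    return "".join(chunk for _, _, chunk in frags)

# Two unrelated flows share one backbone link; their packets interleave.
link = fragment("A", "hello from alice") + fragment("B", "unrelated traffic")
random.shuffle(link)  # transit order is unpredictable

# A mid-path tap must copy and inspect every packet on the link just to
# find flow A's fragments among the rest of the traffic.
assert reassemble(link, "A") == "hello from alice"
```

The `reassemble` step is the point: identifying which packets belong to the wanted flow requires examining all of them, which is why, as Savage puts it, "the wiretapper has to make a copy of everything."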

In court, the Department of Justice has resisted acknowledging the breadth of these bulk searches — preferring to say, euphemistically, that the NSA is “screening” or “filtering” communications. But it’s playing word games. The only way for the NSA to determine whether a communication contains one of its selectors is to search the contents of that communication. At scale, that means the NSA is searching the contents of trillions of Internet communications, without anything resembling a warrant.

Upstream Surveillance Is Unprecedented and Unlawful

Because it involves bulk searches, Upstream surveillance is very different from other forms of surveillance, and it should be debated with that in mind. As the PCLOB explained:

Nothing comparable is permitted as a legal matter or possible as a practical matter with respect to analogous but more traditional forms of communication. From a legal standpoint, under the Fourth Amendment the government may not, without a warrant, open and read letters sent through the mail in order to acquire those that contain particular information. Likewise, the government cannot listen to telephone conversations, without probable cause about one of the callers or about the telephone, in order to keep recordings of those conversations that contain particular content.

In short, the Fourth Amendment does not allow the government to conduct a general, suspicionless search in order to locate specific information or evidence. Instead, as the ACLU has explained at length elsewhere, the government is required to have probable cause — and a warrant — before it searches the contents of our communications. Upstream surveillance reverses this logic, using the end results of the NSA’s searches to justify the continuous, bulk review of Americans’ Internet traffic. The ODNI General Counsel has effectively called for rewriting the Fourth Amendment to permit these types of searches — which only underscores how novel and extreme the government’s legal theory really is.

Americans — and Congress — need to be concerned about what it means to have government computers monitoring our communications in real-time. As the PCLOB emphasized, one of the fundamental problems posed by Upstream surveillance is that “it permits the government to acquire communications exclusively between people about whom the government had no prior suspicion, or even knowledge of their existence, based entirely on what is contained within the contents of their communications.” David Kris highlighted a related problem, asking whether the government should be permitted to “review the contents of an unlimited number of e-mails from unrelated parties in its effort to find information ‘about’ the target.”

The PCLOB, in its report, expressed serious concern about Upstream surveillance, finding that the nature and breadth of this surveillance pushed it “close to the line” in terms of lawfulness. At the same time, however, the PCLOB expressed the view that “about” surveillance was unavoidable for technological reasons. While this is the subject for a separate post, that factual claim is doubtful. The NSA could, if it chose, do far more to isolate the communications of its targets based on metadata — such as email addressing information — rather than searching the entire contents of everyone’s communications using selectors. Indeed, “Next Generation Firewall” technology is capable of distinguishing metadata from content across many different types of communications. Moreover, the NSA has already shown that it can implement this capability on the Internet backbone — because its bulk Internet metadata program, which it operated for ten years, required very similar capabilities. Even with these modifications, significant questions about the lawfulness of the surveillance would remain; but there is no question that it would be more protective of Americans’ privacy than today’s Upstream surveillance.
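The metadata-versus-content distinction drawn above can be sketched in code. This is an illustrative contrast under assumed, hypothetical message fields and target accounts — not any actual NSA or firewall implementation: selecting traffic by addressing metadata touches only the envelope, while "about" collection must search every message body.

```python
# Hypothetical targeted account; messages are simple dicts for illustration.
TARGETS = {"target@example.org"}

def metadata_match(msg):
    """Less-intrusive approach: inspect only the envelope, never the body."""
    return msg["from"] in TARGETS or msg["to"] in TARGETS

def content_match(msg):
    """'About' collection: search the entire body for target identifiers."""
    return any(t in msg["body"] for t in TARGETS)

msgs = [
    {"from": "a@x.com", "to": "b@y.com", "body": "ask target@example.org"},
    {"from": "target@example.org", "to": "b@y.com", "body": "hi"},
]
```

Here metadata filtering selects only the second message, sent by the target; content searching sweeps in the first as well, even though neither its sender nor its recipient is a target — which is precisely the third-party problem with "about" surveillance.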

Between now and the sunset of the FAA in December 2017, it is crucial that Congress engage in an informed, public debate about whether it is constitutional — and whether it is prudent — to permit the executive branch to wield this incredibly invasive surveillance tool.

Editor’s note: The authors are staff attorneys with the ACLU’s National Security Project. Last year, the ACLU challenged Upstream surveillance on behalf of a broad group of educational, legal, human rights, and media organizations — including Wikimedia, the operator of one of the most-visited websites in the world — whose communications are swept up by this unprecedented dragnet. In October 2015, a federal district court in the District of Maryland held that the plaintiffs lacked “standing” to bring suit. The case is presently on appeal in the Fourth Circuit.

A Human Rights Response to Government Hacking - Access Now 201609

Excerpt.

Recently we have seen several high-profile examples of governments hacking into consumer devices or accounts for law enforcement or national security purposes. Access Now released a report where we consider government hacking activity from the perspective of international human rights and conclude that based upon its serious interference with the rights to privacy, free expression, and due process, there should be a presumptive prohibition on all government hacking. There has yet to be an international public conversation on the scope, impact, or human rights safeguards for government hacking. The public requires more transparency regarding how governments decide to employ hacking and how and when hacking activity has had unanticipated impacts. Finally, we propose Ten Human Rights Safeguards for Government Hacking in pursuit of surveillance or intelligence gathering. The full report is available at: www.accessnow.org/GovernmentHackingDoc

A HUMAN RIGHTS RESPONSE TO GOVERNMENT HACKING

WHAT IS GOVERNMENT HACKING?

We define hacking as the manipulation of software, data, a computer system, network, or other electronic device without the permission of the person or organization responsible for the device, data, or service, or of the person ultimately affected by the manipulation.

We consider government hacking in three categories based on the broad goal to be achieved:

  1. Messaging control: Hacking to control the message seen or heard by a particular target audience.
  2. Causing damage: Hacking to cause some degree of harm to one of any number of target entities.
  3. Commission of surveillance or intelligence gathering: Hacking to compromise the target in order to get information, particularly on an on-going basis.

All government hacking substantially interferes with human rights, including the right to privacy and freedom of expression. While in many ways this interference may be similar to more traditional government activity, the nature of hacking creates new threats to human rights that are greater in both scale and scope. Hacking can provide access to protected information, whether stored or in transit, or even while it is being created or drafted. Exploits used in operations can act unpredictably, damaging hardware or software or infecting non-targets and compromising their information. Even when a particular hack is narrowly designed, it can have unexpected and unforeseen impact.

HOW DOES GOVERNMENT HACKING IMPLICATE HUMAN RIGHTS?

Based on analysis of human rights law, we conclude that there must be a presumptive prohibition on all government hacking. In addition, we reason that more information about the history and the extent of government hacking is necessary to determine the full ramifications of the activity.

In the first two categories — messaging control and causing damage — we determine that this presumption cannot be overcome. However, we find that, with robust protections, it may be possible, though still not necessarily advisable, for the government to overcome the presumptive prohibition in the third category, government hacking for surveillance or intelligence gathering. We note that the circumstances under which it could be overcome are both limited and exceptional.

In the context of government hacking for surveillance, Access Now identifies Ten Human Rights Safeguards for Government Hacking, including vulnerability disclosure and oversight, that must both be implemented and complied with to meet that standard. Absent government compliance with all ten safeguards, the presumptive prohibition on hacking remains. In addition, the high threat that government hacking poses to other interests, defined in greater detail in our report, may (and probably should) necessitate additional limitations and prohibitions.

Government hacking threatens human rights embodied in international documents.

There should be a presumptive prohibition on all government hacking. In any instance where government hacking is for purposes of surveillance or intelligence-gathering, the following ten safeguards must all be in place and actually complied with in order for a government to successfully rebut that presumption.

Government hacking for the purposes of messaging control or causing damage cannot overcome this presumption.

1. Government hacking must be provided for by law, which is both clearly written and publicly available and which specifies the narrow circumstances in which it could be authorized. Government hacking must never occur with either a discriminatory purpose or effect;

2. Government actors must be able to clearly explain why hacking is the least invasive means for getting Protected Information in any case where it is to be authorized and must connect that necessity back to one of the statutory purposes provided. The necessity should be demonstrated for every type of Protected Information that is sought, which must be identified, and every user (and device) targeted. Indiscriminate, or mass, hacking must be prohibited;

3. Government hacking operations must never occur in perpetuity. Authorizations for government hacking must include a plan for concluding the operation. Government hacking operations must be narrowly designed to return only specific types of authorized information from specific targets and to not affect non-target users or broad categories of users. Protected Information returned outside of that for which hacking was necessary should be purged immediately;

4. Applications for government hacking must be sufficiently detailed and approved by a competent judicial authority who is legally and practically independent from the entity requesting the authorization and who has access to sufficient technical expertise to understand the full nature of the application and any likely collateral damage that may result. Hacking should never occur prior to authorization;

5. Government hacking must always provide actual notice to the target of the operation and, when practicable, also to all owners of devices or networks directly impacted by the tool or technique;

6. Agencies conducting government hacking should publish reports at least annually that indicate the extent of government hacking operations, including at a minimum the users impacted, the devices impacted, the length of the operations, and any unexpected consequences of each operation;

7. Government hacking operations must never compel private entities to engage in activity that impacts their own products and services with the intention of undermining digital security;

8. If a government hacking operation exceeds the scope of its authorization, the agency in charge of the operation should report the extent of and reason for the deviation back to the judicial authority;

9. Extraterritorial government hacking should not occur absent authorization under principles of dual criminality;

10. Agencies conducting government hacking should not stockpile vulnerabilities and, instead, should disclose vulnerabilities either discovered or purchased unless circumstances weigh heavily against disclosure. Governments should release reports at least annually on the acquisition and disclosure of vulnerabilities.

In addition to these safeguards, which represent only what is necessary from a human rights perspective, the judicial authority authorizing hacking activity must consider the entire range of potential harm that could be caused by the operation, particularly the potential harm to cybersecurity as well as incidental harms that could be caused to other users or generally to any segment of the population.

How Big Data Harms Poor Communities - The Atlantic 20160408

Surveillance and public-benefits programs gather large amounts of information on low-income people, feeding opaque algorithms that can trap them in poverty.

Big data can help solve problems that are too big for one person to wrap their head around. It’s helped businesses cut costs, cities plan new developments, intelligence agencies discover connections between terrorists, health officials predict outbreaks, and police forces get ahead of crime. Decision-makers are increasingly told to “listen to the data,” and make choices informed by the outputs of complex algorithms.

But when the data is about humans—especially those who lack a strong voice—those algorithms can become oppressive rather than liberating. For many poor people in the U.S., the data that’s gathered about them at every turn can obstruct attempts to escape poverty.

Low-income communities are among the most surveilled communities in America. And it’s not just the police that are watching, says Michele Gilman, a law professor at the University of Baltimore and a former civil-rights attorney at the Department of Justice. Public-benefits programs, child-welfare systems, and monitoring programs for domestic-abuse offenders all gather large amounts of data on their users, who are disproportionately poor.

In certain places, in order to qualify for public benefits like food stamps, applicants have to undergo fingerprinting and drug testing. Once people start receiving the benefits, officials regularly monitor them to see how they spend the money, and sometimes check in on them in their homes.

Data gathered from those sources can end up feeding back into police systems, leading to a cycle of surveillance. “It becomes part of these big-data information flows that most people aren’t aware they’re captured in, but that can have really concrete impacts on opportunities,” Gilman says.

Once an arrest crops up on a person’s record, for example, it becomes much more difficult for that person to find a job, secure a loan, or rent a home. And that’s not necessarily because loan officers or hiring managers pass over applicants with arrest records—computer systems that whittle down tall stacks of resumes or loan applications will often weed some out based on run-ins with the police.

When big-data systems make predictions that cut people off from meaningful opportunities like these, they can violate the legal principle of presumed innocence, according to Ian Kerr, a professor and researcher of ethics, law, and technology at the University of Ottawa.

Outside the court system, “innocent until proven guilty” is upheld by people’s due-process rights, Kerr says: “A right to be heard, a right to participate in one’s hearing, a right to know what information is collected about me, and a right to challenge that information.” But when opaque data-driven decision-making takes over—what Kerr calls “algorithmic justice”—some of those rights begin to erode.

As a part of her teaching, Gilman runs clinics with her students to help people erase harmful arrest records from their files. She told me about one client she worked with, a homeless African-American male who had been arrested 14 times. His arrests, she said, were “typical of someone who doesn’t have a permanent home”—loitering, for example—and none led to convictions. She helped him file the relevant paperwork and got the arrests expunged.

But getting arrests off a person’s record doesn’t always make a difference. When arrests are successfully expunged, they disappear from the relevant state’s publicly searchable records database. But errors and old information can persist in other databases even when officially corrected. If an arrest record has already been shared with a private data broker, for example, the broker probably won’t get notified once the record is changed.

In cases like these, states are nominally following fair-information principles. They’re allowing people to see information gathered about them, and to correct mistakes or update records. But if the data lives on after an update, Kerr said, and “there’s no way of having any input or oversight of its actual subsequent use—it’s almost as though you didn’t do it.”

The pitfalls of big data have caught the eye of the Federal Trade Commission, which hosted a workshop on the topic in September. Participants discussed how big-data analysis can include or exclude certain groups, according to a report based on the workshop. Some commenters warned that algorithms can deny people opportunities “based on the actions of others.” In one example, a credit-card company lowered some customers’ credit limits because other people who had shopped at the same stores had a history of late payments.

But when applied differently, other workshop participants noted, big data can be a boon to low-income communities. For example, some companies have compiled and analyzed publicly available data to calculate credit scores for people who previously did not have one. “Thus, consumers who may not have access to traditional credit, but, for instance, have a professional license, pay rent on time, or own a car, may be given better access to credit than they otherwise would have,” the report says.

There’s no question that algorithms can help humans make difficult decisions more efficiently and accurately. Big data has the power to improve lives, and it often does. But absent a human touch, its single-minded efficiency can further isolate groups that are already at society’s margins.

As encryption debate heats up, experts dissect Obama's surveillance policies - Daily Dot 20160408

When FBI Director James Comey told an audience at Kenyon College on Wednesday that Americans should reconsider the value of unbreakable encryption in a world of persistent threats, he was addressing a conflict far broader than whether his agency could unlock a suspect's iPhone. He was wading into a debate over the course of national-security law that has emerged as one of the central conflicts of post-9/11 America.

On Friday morning, in one of the final events of Kenyon's biennial political-science conference, a panel of experts discussed the national-security approaches of Presidents George W. Bush and Barack Obama; the relationship between federal laws and local police practices; and the rhetoric of officials, like Comey, who consistently push for broader government power.

Charlie Savage, a national-security reporter at the New York Times, opened the discussion by recounting a discussion he had had with Greg Craig, President Obama's first White House counsel, about Obama's decision to preserve—and in some cases expand—the far-reaching surveillance state he inherited from President Bush. As Craig explained it, Obama's lawyers heard from the leaders of the intelligence community that the government's programs were both necessary and legal, and they stopped there.

“They didn’t ask, ‘Is this American?’” Savage said. The Obama team, intent on rectifying the perceived lawlessness and rhetorical overreach of the Bush administration, focused on grounding everything the government did in the law—brushing aside many civil-liberties questions, including whether a program comported with American traditions of liberty.

In his remarks at Kenyon, Savage reiterated the argument he made in his 2015 book Power Wars, about the difference between rule-of-law and civil-liberties critiques of national-security policy. When Obama’s liberal critics accused him of acting like Bush on surveillance issues, they meant it in a civil-liberties context. Obama's officials, Savage said, rejected this criticism because they were looking at things through a rule-of-law prism—and in that context, they believed, they were nothing like the Bush officials, who championed controversial legal theories about the commander-in-chief being able to override statutes in the name of national security.

Jameel Jaffer, deputy legal director at the American Civil Liberties Union, took issue with Savage's framing and presented a different view of two ways to criticize national-security policy. Some people, he said, were concerned with how the Bush administration saw the relations between the branches of government (namely that Bush, as president, could trump Congress and the courts in national-security areas). Others were worried about how Bush's programs changed the relationship between government and citizenry.

People cared that Congress and courts weren’t involved in Bush's original warrantless-surveillance and military-detention programs, Jaffer said, but they cared more about the impact of those programs on their lives.

Jaffer's view was that the Obama administration “found statutory arguments to get to more or less the same place” as Bush on many national-security issues. Thus, he said, they could not be praised for caring more about the rule of law, per se, because, in his view, they simply construed the language of the laws to suit their policy goals.

When an administration essentially twists statutory language to permit it to do whatever it wants, Jaffer said, “the phrase ‘rule-of-law’ doesn’t fit comfortably with what you are actually doing.”

Chris Calabrese, vice president for policy at the Center for Democracy and Technology, agreed that Obama had “essentially ratified” Bush-era programs by declining to end them upon assuming office. What's more, Calabrese said, Obama's approval made the programs bipartisan, shielding them from many common political accusations while normalizing surveillance practices that, he said, would have appalled people had they foreseen them in 2002.

Calabrese also expanded the conversation to the state and local level. While the federal government develops technology like Stingray devices and policies like mass surveillance, local police often adopt these tools for their own work. Lawmakers, said Calabrese, must set the limits on these tools, because at the investigative level, police will always do the most they can do; that is, after all, their job.

This interplay between federal and local tactics can profoundly affect a citizen's relationship with her government. Calabrese described a technique called "parallel construction," in which a spy agency learns something incriminating about an American and tells a law-enforcement agency how to discover it in a "clean" way that will be admissible in court. Americans arrested for crimes discovered in this manner cannot contest the real methods used to discover them, because those exist within national agencies that are subject to different rules.

Julian Sanchez, a senior fellow at the libertarian Cato Institute, sharply criticized Comey's Wednesday night remarks about encryption and its effects on investigative practices.

Comey was “rhetorically really masterful,” he said, using measured language to urge people to accept the need for a new “balance” between individual rights and government demands. By casting this balanced approach as the only rational one, Sanchez said, Comey implicitly characterized the status quo—itself the result of decades of laws and exemptions—as “absolutist.”

As an example, Sanchez noted that Comey had mentioned the Communications Assistance for Law Enforcement Act of 1994, which required companies to be able to comply with wiretaps but specifically excluded situations where companies did not control the ability to decrypt communications. Instead of accepting that CALEA was the result of a political compromise, Comey characterized it and the resulting legal environment as an absolutist position in favor of privacy.

Sanchez urged the audience to worry about this argument, saying that, when policymakers who grow uncomfortable with current surveillance law describe it as unacceptable and in need of rebalancing, this produces a “ratcheting toward ever-greater surveillance.”

"Architectures are stickier than rules," Sanchez said. “The architecture we construct on the premise that the legal restrictions on it will inhibit its use will outlast those rules. The rules can change much faster than the architecture.”

Countries that Use Tor Most Are Either Highly Repressive or Highly Liberal - Motherboard 20160406


You might assume that people in the most oppressive regimes wouldn’t use the Tor anonymity network because of severe restrictions on technology or communication. On the other hand, you might think that people in the most liberal settings would have no immediate need for Tor. A new paper shows that Tor usage is in fact highest at both these tips of the political spectrum, peaking in the most oppressed and the most free countries around the world.

“There is evidence to suggest that at extreme levels of repression, Tor does provide a useful tool to people in those circumstances to do things that they otherwise would not be able to do,” Eric Jardine, research fellow at the Centre for International Governance Innovation (CIGI), a Canadian think-tank, told Motherboard in a phone call. Jardine is the author of the new paper, recently published in peer-reviewed journal New Media & Society.

Jardine analysed data from 157 countries, stretching from 2011 to 2013. That information included a rating for a country's political repression, derived from assessments made by US-based research group Freedom House, and metrics for Tor usage, sourced from the Tor Project's own figures.

Jardine included data for use of both Tor relays, which are nodes of the network users typically route their traffic through, and bridges, which are essentially non-public relays designed to be used in censorship-heavy countries that might block access to normal relays. He also considered a country's internet penetration rate, intellectual property rights regime, wealth, secondary education levels, and openness to foreign influences.
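The general shape of Jardine's multivariate approach can be illustrated with a toy regression. Everything below is synthetic: the variable names, values, and coefficients are invented for illustration and are not the paper's actual dataset or model. The sketch shows only the idea of estimating a repression coefficient while holding covariates such as internet penetration and wealth fixed.

```python
import numpy as np

# Toy illustration of a cross-country regression like Jardine's:
# regress Tor bridge users (per 100,000 Internet users) on political
# repression while controlling for covariates. All data are synthetic.
rng = np.random.default_rng(0)
n = 157  # number of countries in the study

repression = rng.uniform(2, 14, n)    # Freedom House-style score
internet_pen = rng.uniform(5, 95, n)  # percent of population online
gdp_pc = rng.uniform(1, 60, n)        # GDP per capita, in $1,000s

# Synthetic outcome: usage rises with repression, plus noise.
bridge_users = (35 * repression + 0.5 * internet_pen
                + 0.2 * gdp_pc + rng.normal(0, 20, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), repression, internet_pen, gdp_pc])
coef, *_ = np.linalg.lstsq(X, bridge_users, rcond=None)
print(f"estimated repression coefficient: {coef[1]:.1f}")
```

Including the covariates in the design matrix is what allows the repression coefficient to be read as an independent association, which is the sense in which the paper says repression "drives" Tor usage after "controlling for other relevant factors."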

“The results show that, controlling for other relevant factors, political repression does drive usage of the Tor network,” Jardine writes.

Bridges had the strongest association with political repression. “Moving from a country like Burkina Faso (political repression equals 8) to a country like Uzbekistan (political repression equals 14) results in an increase of around 212.58 Tor bridge users per 100,000 Internet users per year,” the paper reads.
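That reported figure spans a six-point jump in the repression score, so the implied per-point effect is easy to back out:

```python
# The paper's example compares Burkina Faso (repression = 8) with
# Uzbekistan (repression = 14) and reports 212.58 additional bridge
# users per 100,000 Internet users across that 6-point difference.
effect_total = 212.58
score_jump = 14 - 8
per_point = effect_total / score_jump
print(round(per_point, 2))  # 35.43 extra bridge users per point
```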

There was also a “statistically significant” relationship between a regime's political context and the use of Tor overall, Jardine adds.

This graph shows use of Tor bridges specifically (not relays) according to political repression. Image: Eric Jardine/New Media & Society

Interestingly, however, it's not just harsh regimes that have higher Tor usage. Countries on the lower end of the political repression spectrum also showed significant use. It was countries in the middle, ranked neither as strictly authoritarian regimes nor as free democracies, that had the lowest number of people connecting to Tor.

This might run counter to some people's intuition; wouldn’t liberal democracies have little need for Tor?

“But because it's dual-use, you start to see a different pattern,” Jardine said, meaning that Tor is not used only to circumvent censorship in oppressive regimes. The technology can also be used to protect privacy, or for criminal purposes. (It's worth remembering that the study looked at data largely before the fallout of Edward Snowden's June 2013 revelations.)

Why Tor usage peaks at the extremes of the political spectrum is less clear. Jardine hypothesises that it may be connected to a country's political need for such tools, such as circumventing censorship, but also the increased opportunity for their use—for example, in the US, Tor can be used easily without major consequence. Finding out the reasons for the trend is, however, beyond the scope of this study.

Tor, and the related technology of hidden services, can polarise discussions, with supporters often refusing to acknowledge criminal applications, and critics ignoring positive aspects. In a debate that is often overshadowed by emotions and feverish media coverage, having empirical data and analysis on the use of anonymity technology can only be beneficial.

Brussels terror attacks: Why ramping up online surveillance isn’t the answer - Ars Technica 20160402


I am in Brussels. And I am scared. Very scared… of the probable security backlash following last month’s terrorist attacks.

I don’t want to live in a city where everyone is viewed with suspicion by the authorities, because it won’t stop there. Because suspicion is infectious. When misappropriated and misdirected, that sort of suspicion can very easily become racism and prejudice—two of the key ingredients that led to the awful attacks on the morning of Tuesday, March 22.

ISIL is not only fighting a cultural war; it's fighting a media one. For that reason maybe we should really stop talking about it as though it was a “real” war. As though there were valiant warriors on both sides. As though those responsible for the Brussels bombings are anything more than common murderers, plain and simple. Truthfully, the only community the Brussels attackers belong to now is the criminal community.

It is high time to strip terrorists of their mystique. We must stop playing their game. Statistically, I am not any less safe today than I was on the Monday before the attacks. Yet if many politicians have their way, my activities will be monitored a great deal more.

Two days after the attacks, EU ministers met in Brussels at an emergency justice and home affairs council, and predictably demanded more access to our Internet histories, more powers to track people, and more ways to break into our private communications.

The European People's Party has reportedly said it wants personal data on everyone who takes a train to be stored. Meanwhile, a so-called Passenger Name Record is in the works for all airline passengers.

And, even before the terrorist attacks, Belgian officials were mulling the expansion of the country's data collection and retention laws. Never mind that the European Court of Justice and the Belgian Constitutional Court have ruled that data retention is illegal. My adopted country also plans new surveillance legislation that would allow intelligence agencies more freedom to eavesdrop on cross-border communications: “Hello Mum, nice to talk to you… and everyone else listening in.”

Turning leaky tap on secure apps

On a European level, the ePrivacy Directive is up for review this year, and there will be no prizes for predicting that secure online communications services, such as WhatsApp and Telegram, and even Viber, Skype, and Facebook Messenger could all be in the cross-hairs. Will there be anywhere left if you want to have a private conversation online?

Like anyone, I believe those who carried out the attacks in Brussels should be caught and brought to justice, but not at any cost. And certainly not at the cost to ordinary citizens’ freedom, and way of life.

That is even supposing these new measures would work to prevent future attacks: I’ve seen no evidence—and I've asked the question among many Brussels-based folk—to support that view. Does taking a plane or a train make you more likely to be a terrorist—sorry—murderer? Are overwhelmed police forces really able to cope with combing through that amount of data? Experience suggests that having access to the travel histories of everyone would have made little difference in the Brussels case. Europe’s security problem is not with too little information, but with too little sharing, and understanding of that information.

Just like physical security, increased surveillance powers generally don’t make us any safer. The reality is that when we see souped-up security guards everywhere we don’t feel more secure. Often the effect is the opposite. Security theatre isn’t even effective as good theatre.

I completely understand the desire to do something—anything!—after such a horrible atrocity. One of the most difficult emotions I had to cope with, as the horror unfolded, was feeling useless. But sometimes, especially when we are shocked, doing nothing is the better option.

Kneejerk reactions are almost always the most ill-thought-out ones. Four days before the attacks, the European Data Protection Supervisor, Giovanni Buttarelli, put out a press release saying that legislative proposals to fight cross-border crime, including terrorism, were too rushed and too weak to do the job anyway. Now, post-March 22, hasty decisions are even more likely.

That I am appalled at some of the reaction to the attacks is not a surprise. My view has always been that more “security,” more surveillance, and more data retention, not only won't work, but will undermine our rights. My opinion on this point is not new. What is new is that it has been tested. As someone who walks the streets past Maelbeek metro station every day, I feel I have a valid insight on what will and won't make me feel safe, and how much of my privacy I am willing to give up for it.

Existing Belgian powers didn't prevent these attacks

Here in Belgium, investigators already have the power to get a court order on telecoms operators to track a suspect’s SIM card down to the nearest phone tower location, as they did with Salah Abdeslam, the man suspected of being behind the Paris attacks. Of course terrorists must be expected to keep their phone with them at all times. Imagine if they learned sophisticated counter-intelligence techniques like, say, leaving it at home!

Yes, I have sat down and cried at what has happened in my city. I swallowed fear as I anxiously waited to hear from loved ones. I felt powerless, and grateful to strangers for support in the wake of the attacks. Today, I feel angry that cynical opportunists will twist this to their own ends.

In my ideal world there would be a moratorium on new security or surveillance laws for at least three months following a terror atrocity. That won’t happen because, as bad as knee-jerk reactions are, there are others who will have waited for just this sort of event to push their own agenda. That, as much as everything else that has happened in the days since the attacks on Brussels, makes me want to weep.

Predictably, even those further afield had an opinion: US presidential candidate Hillary Clinton said “we have to toughen our surveillance, our interception of communication.” Presumably “we” in this instance is those already with huge amounts of power, and control.

Meanwhile, would-be US president Donald Trump reportedly said he would use torture to combat terrorism. I read sensible, reasonable replies from decent people explaining why torture doesn’t work or is unreliable for gathering information. And—for a moment—this seems a reasonable conversation. Before I jolt back to myself and realise this is torture we are talking about. Surely any decent human being opposes it on principle. What point have we reached, where we are even in a position of discussing this?

I am well aware that collecting PNR data is not comparable to torture. And I am not opposed to proportionate and specific surveillance. My opposition, on principle, is to mass unjustified collection of personal information "just in case." Just in case of what? In case we're all closet terrorists?

Predictably, Europol Director Rob Wainwright blamed encryption. He told POLITICO: “Encrypted communication via the Internet, and smartphones are a part of the problems investigators face in these instances. We have to find a more constructive legislative solution for this problem of encryption.”

Since when is encryption a problem? Encryption is what allows us to use online banking, book holidays, and buy birthday presents. Using crypto tools doesn’t mean you are a terrorist. Weakening encryption will just create vulnerabilities that will be exploited by the very criminals and terrorists we want to stop. This is as good a reason as any to defend encryption.

But again I shake my head and wonder why we can’t just expect privacy on principle. I am frustrated and saddened that the default position of treating other humans as decent law-abiding folk, is changing to one where the assumption is we are all potential terrorists.

The terrorist’s weapon of choice is fear. When we fear them, they have won a battle. When we start to fear each other—the woman in a headscarf on the metro, the man with the large bag at the airport, the teenager with his hands in his pockets—they are one step closer to winning the war. Let’s not play into their hands.

Senator: let’s fix “third-party doctrine” that enabled NSA mass snooping - Ars Technica 20160403


Q&A: Ars sits down with Oregon's outspoken advocate of strong crypto, Sen. Ron Wyden.

This past week hundreds of lawyers, technologists, journalists, activists, and others from around the globe descended upon a university conference center to try to figure out the state of digital rights in 2016. The conference, appropriately dubbed "RightsCon," featured many notable speakers, including Edward Snowden via video-conference, but relatively few from those inside government.

Sen. Ron Wyden (D-Oregon), however, was an exception. On the first day of the conference, he gave an in-person speech, in which he argued for a "New Compact for Security and Liberty."

The Oregon senator is likely familiar to Ars readers: he’s been one of the most consistently critical voices of the expansion of government surveillance in recent years. We last spoke with him in October 2014 when he made the case that expanded active spying hurts the American economy. In December 2014, Wyden introduced the "Secure Data Act" in the United States Senate, which aims to shut down government-ordered backdoors into digital systems. However, that bill hasn’t even made it to committee yet, over a year later.

On Thursday, the day after his address, Wyden sat down with Ars at a downtown Peet’s Coffee, where we chatted in more detail about his proposal. What follows is a transcript of our conversation, lightly edited for clarity.

Ars: What does your compact mean in terms of new legislation? Because some of these items outlined in your speech, like the third-party doctrine, Congress doesn’t have the authority to overturn that.

A: Well, Congress could pass a law. But let’s begin at the beginning. What I wanted to do yesterday in this speech was to refocus the debate. More than anything else, that’s what the talk was about. I can tell you—and I don’t have an exact count—but my guess is that there have been thousands upon thousands of articles written in the last few months and they invariably start with the phrase: "In the ongoing debate between security and privacy, the following happened today... "
And I want to make clear that I don’t think that’s what the debate is all about. It is not about security versus privacy. In my view, this debate is about less security versus more security. My view is that at a time when millions of Americans have their life wrapped up in a smartphone—their medical records, their financial records, they might be tracking their child to make sure their child isn’t molested—strong encryption is the must-have go-to security tool for millions of Americans and the communities in which they live. So I want to re-focus the debate along those lines.

Are we to understand that what you're calling a compact will evolve into actual legislation?

Let’s take some of these devices one by one. As your readers know, for weeks now we’ve been told that there is going to be a Burr-Feinstein bill in the United States Senate that in fact would be a piece of legislation that would, in effect, mandate that a private company weaken the security of their products so they would be able to comply with a court order. The first thing that I want to do as part of our strategy is to block that legislation. And I’m going to argue that it should be blocked on the grounds that it will weaken the security of millions of Americans. The second thing that I want to do after we block that bill is pass affirmative legislation that I’ve introduced called the Secure Data Act, where we wouldn’t be talking about blocking legislation, but talking about affirmative action to ensure the security of the data of millions of Americans. So those would clearly be two steps that would be very relevant to today’s discussion.

Beyond that, with respect to the third-party doctrine. I think that when people enter into a private business relationship, they don’t expect that that’s going to be public. And particularly now in an age of digital services I think it’s important that that law be re-written: that law stems from a decision that’s decades old. And I’m encouraged that even people like Justice Sotomayor think it ought to be rewritten. So that’s the third area.

A fourth area would be that we’re more vigilant with respect to administrative actions that might be taken that again, instead of a win-win in which we’ll have more security and more liberty, there will be a lose-lose. Yesterday I talked about Rule 41, which is something that the Justice Department wants to do, where in effect, they could get one warrant and get access to scores and scores of computers outside that one jurisdiction. And I think that’s a mistake.
And finally I talked about the need for more talent. I take a position that challenges the intelligence agencies to adapt to new times. That’s why I went through the Miranda decision and how people thought "Oh my goodness, we’ll never get a confession!" Obviously law enforcement adapted to those new challenges. I think having talented people, some of whom have been in the room at RightsCon, would be a very good way to adapt. So those are, kind of, the four or five areas where a combination of elected officials who block unwise measures and affirmatively move to pass legislation to update our laws makes sense.

We at Ars struggle, as I think a lot of people struggle, not only to understand tools like PGP, but also to put them into practice and use them. For example, at Ars, I’m one of six people who have a publicly-listed PGP key. I’d be curious to find out from you what kinds of tools you use in your office, what kinds of tools are used in the Senate more generally, and what that experience has been like.

First of all, I think that those who are using a smartphone are counting on encryption. And that is a basic security measure. But for me, the important way to assess your question is that when legislators make policy, the big mistakes come when they are reacting, particularly when there has been a horrible tragedy and someone makes a knee-jerk reaction. When you get a chance to reflect on it, instead of what I call a win-win—security and liberty—too often you get a lose-lose. For example, if you weaken strong encryption, the first thing that’s going to happen is that people who seek encryption will go overseas, where there are hundreds of products and there’s even less control over them.

You’re somebody who pops up in the news a lot, talking about these issues of privacy. You obviously care a lot about them. I’d love to hear how you plan to convince your colleagues of the importance of these issues. I think it can be hard for people who aren't as steeped in these issues to wrap their brains around them. So I’d love to hear what that process has been like for you.

First of all, we’ve come quite a ways. Back when I started in 1996, I wrote the law that ensured that a website owner would not be held personally liable for something that was posted on the site. We wrote the digital signatures law and banned tax discrimination, for example, so that people who needed Internet access to get education and employment opportunities wouldn’t face problems. It’s been a pretty amazing ride since then. All the way to the time when the NSA overreached with respect to metadata. When we started, there were only a handful of us trying to rein in that overreach. By the time we were done, we had plenty of Republican votes and what had been a secret interpretation of the Patriot Act was gone. Education efforts take time.
One of my favorite accounts was that there was a law that came out of Intel Comm [Senate Select Committee on Intelligence] that passed 14-1 written by Sen. Feinstein (D-California) to deal with so-called overly broad leaks, and I knew the bill was a turkey from the very beginning, and I didn’t even know how bad it was. After it got out of committee, we had a chance to learn more about it, educate ourselves, we all talked about it, and by the time we were done, the senators who had written it didn’t want anything to do with it and we were able to get rid of it. So education efforts can take more time. But we’ve had a fair number of successes and of course nothing matches the campaign of SOPA and PIPA.

You talked about hiring more technologists. What would that look like in your mind?

Obviously I think it's very valuable for individual House and Senate offices to have a go-to person who is knowledgeable about the technology. I was talking yesterday mostly about government agencies like the FBI.

Would that involve hiring from the private sector and bringing them on in these types of cases [that involve cryptography]? Because obviously the FBI already has people...

I’d like them to be in a position to get leadership positions and permanent positions on the basis of their knowledge and expertise and the kinds of issues that people were talking about at RightsCon.

Are there any cases or issues that we in the public should be aware of in Oregon that maybe haven't hit the national stage yet?

Let’s put it this way: when the FBI said that they had been able to access the Apple San Bernardino phone, it was clear that was not the end of the debate. In fact, this debate is just starting. And we’ve heard about other jurisdictions that purportedly are looking at it. I’ve been very troubled and it will be something I'll be following up on. The FBI has said this just involves one phone. We’re talking about re-creating code, so it’s not about one. And then later the district attorney in New York talked about scores and scores of phones.

Is there anything that you'll be taking from your experience at RightsCon back to Oregon or to Washington?

I was hoping that it was a two-way street, and it was. My goal was to make sure that the people there who play these leadership roles in so many grassroots organizations, that they had a sense of what I as one elected official thought the challenge was all about. That’s why I said right at the beginning: I see our job as trying to convince politicians it’s not about security versus privacy, it’s about more security versus less security. And I think as we went we had a lot of good conversation. I think there was a lot of interest at RightsCon about what’s coming next—[people were asking] what does he think is coming next—and there was a lot of interest in Rule 41.

Last time we spoke, you’d mentioned that before the president leaves office that you wanted to play basketball with him. Is that going to happen?

It’d better happen soon! There are a few priorities for Oregon that I may see if I can get on the court. He’s been very gracious and he’s invited me multiple times and I think I indicated that I was saving it for something big that Oregon needs, and we’re heading into the home stretch.

Anything else that I didn’t ask you about that you’d like to add?

I think that this is going to be a very busy few months. People have asked what’s next, and we’re going to have some classified briefings, I assume, to try to figure out what the details are with respect to how the process went forward, with respect to accessing the data on the [San Bernardino] phone, and there are zero-day issues that we’re talking about and looking at.

Court considers when police need warrants to track suspects through cellphones - The Washington Post 20160323


A federal appeals court on Wednesday considered how easily investigators should be able to track criminal suspects through their cellphones, becoming the latest front in the debate over how to balance public-safety interests with digital privacy.

The issue before a full panel of the U.S. Court of Appeals for the 4th Circuit, which has jurisdiction over Maryland and Virginia, was whether law enforcement officials need search warrants to pull cellphone records to trace the long-term movements of suspects.

The case, argued in Richmond, arose after investigators in Maryland obtained seven months of phone records to map the movements of two men later convicted in armed robberies around Baltimore.


Almost immediately Wednesday, questions from the bench centered on whether location information from cellphones is any different than records of banking transactions or landline phone calls.

Defense attorney Meghan S. Skelton said the government had essentially tracked the defendants’ every move, equating cellphone location data to “dragnet surveillance.” Maryland U.S. Attorney Rod J. Rosenstein countered that the information gleaned from cell towers was imprecise, unobtrusive and created by the wireless provider — not the government.

A divided three-judge panel of the court ruled in August that accessing the location information without a warrant for an “extended period” is unconstitutional because it allows law enforcement to trace a person’s daily travels and activities across public and private spaces.

Two other federal appellate courts — in Florida and New Orleans — concluded that warrants are not necessary.


The full spectrum of opinions was on display Wednesday from the 15 judges in the spirited hour-long discussion.

Judge James A. Wynn Jr. expressed disbelief about the length of time — 221 days — that investigators had collected records for one suspect, Aaron Graham, to help place him near the scene of the robberies after his arrest.

“We all know where technology is going,” Wynn said. “They are going to be able to pinpoint your every move.”

Texting, calling, and checking email or the weather from a cellphone generally involves connecting with the closest communications tower. Wireless providers log and retain records showing which tower a phone used at the beginning and end of every call, and increasingly, for texts and data connections.
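As a hypothetical sketch of why such logs are revealing even without GPS-level precision: each record names only a tower, but towers have fixed, known locations, so a long run of records exposes routine. The tower IDs, labels, and records below are all invented for illustration.

```python
from collections import Counter

# Invented mapping from tower IDs to the areas they serve.
towers = {
    "T1": "near home",
    "T2": "near workplace",
    "T3": "near church",
}

# One invented record per connection: (day, hour, tower_id).
records = [
    (1, 7, "T1"), (1, 9, "T2"), (1, 18, "T1"),
    (2, 7, "T1"), (2, 9, "T2"), (2, 19, "T1"),
    (7, 10, "T3"),
]

# Even coarse tower data exposes routine: which area handles most
# connections during working hours reveals the likely workplace.
by_hour = Counter(towers[t] for _, h, t in records if 8 <= h <= 17)
print(by_hour.most_common(1)[0][0])  # prints "near workplace"
```

Over 221 days, as in the Graham case, this kind of aggregation sketches a person's daily travels across public and private spaces, which is the crux of the constitutional question before the court.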

Decades-old rules allow authorities to obtain business or “third party” records with a court order.

Judge Paul V. Niemeyer pointed out that authorities already could obtain — without a warrant — even more precise details about a person’s travels by pulling records from credit card purchases and highway E-ZPass transponders.

Judge J. Harvie Wilkinson III said the “third party” rules do not need to change because of technological advances, and that to do so would be “an audacious step” away from the court’s past practice.

Civil liberties groups and privacy advocates want investigators to have to get a warrant first for the tracking, because a warrant request requires that a judge be presented with information meeting the more rigorous standard of probable cause.

Judge Pamela A. Harris, the newest member of the court, noted that the Supreme Court has signaled that digital devices are different when it comes to 4th Amendment protections against unreasonable search and seizure.

Justices have expressed concern about privacy implications of technology such as cellphones that contain sensitive personal information, and are essentially appendages in people’s purses, pockets and hands.

If the full 4th Circuit upholds its panel’s decision, there would be a clear divide with the other courts — the type of split that often attracts the attention of the Supreme Court.

‘There is No Shame in Fear': Confronting Surveillance in Post-Revolution Egypt - Ad Vox 20160329


It has been a long time since technology and the Internet became integral to the social change movement and political activism in Egypt. When this began, the new medium looked very promising and exciting – it also looked somehow exclusive. My generation started to explore and experiment and treat the Internet as a tool that could enable us to organize, come together, be creative in a different way, express our thoughts and discover our peers’ ideas at a deeper level. Nothing determined what was allowed or not allowed. No approvals were needed. My fellow technologists and I started to utilize coding and our passion for free and open source technology and methodologies to develop solutions that addressed different needs of political activists and parties, human rights groups, media practitioners and youth.

Many topics and stories were introduced courageously through cyberspace in the form of different media – texts, videos, and images – addressing subjects such as torture, military corruption, minorities issues, sexual violence, economic problems and of course democracy issues. It gave us hope and it made things look possible to achieve.

Back then it was different. There were no big-data machines or service providers digging into our data and online behaviors and there were no algorithms shaping what we read and when. Fewer users meant fewer variations in opinions and more potential for conversations, and we saw very little extreme polarization.

New generations and actors joined the social activism movement after January 25, 2011. More citizens began participating in public spaces and joining online platforms. Different voices became more present, leading to notable changes in the dynamics between people and how interactions take place with different content. The notion of organizing, mobilizing and expression developed within the society, opening new possibilities for exploration and critique.

It is not only the activism scene that has changed over the past couple of years. The military has also become more present in public life, and a hybrid military-police state has become very active and firmly in control. At the same time, the state has developed a stronger grip on investors and on different media channels and newspapers. This was nothing new per se, but the state's control of the dominant public narrative, and of the mindset of the majority of Egyptians, led to a widespread practice of ignoring alternative accounts of what is happening here. Out of genuine fear, or in an effort to support the state, many Egyptians have turned a blind eye towards grave human rights violations, the deterioration of the economy and the crushing of basic liberties.

The state managed to control the flow of information and news across different media streams – except for the very few alternative online news and social platforms. And the situation is still the same: exposure of big news events, violations, corruption, military and police abuses, medical scandals – they all begin one way or another online. And this keeps testing new boundaries and pushing against predefined red lines, despite the intense political polarization and the development of restrictive laws and unfair trials.

Graffiti art of surveillance camera. Published and labeled for reuse on Pixabay.
Alongside this, the security sector became increasingly interested in listening to and watching what we say and do, and in identifying what “others” think. It also became interested in mapping our social and professional lives and networks. The capabilities for mass surveillance and targeted surveillance grew over time. Relationships with multinational companies producing surveillance technologies sprang up. Agencies abused their already-absolute power and built relationships with mobile and Internet service providers in the country to access users’ data and to surveil communications that pass through national infrastructure. There is no due process in any of this; it is now enough that an officer “wants to” have this information. Purchases of invasive hacking and targeted-surveillance software began – and the number of technical “infections” purchased by state agencies to target individuals’ data grew from tens to hundreds.

Of course, this element of the equation is not unique to Egypt. The security and intelligence community in the country are using the same set of justifications used everywhere: “we are fighting extremism”; “we are in a war on terrorism”; “you have nothing to hide”; “we are using it only with bad people.” And of course they are fascinated by the surveillance capabilities used by top intelligence agencies like those within the Five Eyes.

In March 2011, when revolutionaries raided the state security service headquarters in Cairo – a place notorious for torture and surveillance – many people found their own files and transcripts of their communications. Since that time and up to the present day, public awareness of these surveillance practices has grown gradually. Unfortunately, it is now a common joke that we are all under surveillance. However, everyday communication and organizational norms didn't change for most people – I think part of this had to do with the revolutionary energy and sense of anger during that period.

Since 2011, state media have normalized the practice of social surveillance – citizens watching each other's actions – while hawking hate speech towards anything different or “foreign” has become acceptable. New restrictive regulations are constantly being imposed, and the sense of being watched is gradually growing in the background, affecting the activism community, those involved in public change, and the media ecosystem.

It is now normal for individuals to think twice or more before deciding how to say something and when to say it and to calculate the consequences. Without realizing it, they are practicing what digital security researchers call threat-modeling, weighing the impact of their choices in public spheres as well as private ones.

The separation between the professional and the personal has also become very hard to navigate, as each impacts the other. We go through a wide range of emotions when it comes to engagement with social change. Losing many friends who are either in prison or had to leave the country makes it harder to do this kind of work, and leaves you less connected with your peers. Among those involved in the practice of informing others about what is happening, it has become a common expectation to be summoned by security forces, kidnapped, banned from travel, to have your office raided, or to receive a “call” from someone politely threatening you.

I have been involved in helping many individuals and institutions over the past years, both in assessing their threats and risks, and in helping them integrate proper measures to maintain their privacy and security. This has opened my eyes to how deeply the idea of a threat has changed over time, and to how our definition of what constitutes a problem keeps expanding. It is also obvious that sometimes our ability to make a proper guess or estimate is weaker, as there is not enough logical input – not enough variables to count on. The situation is very chaotic, always changing and often surprising.

It has also become obvious how fear and worry impact our ability to be creative, to continue our work and to plan in a proper manner. It is always a struggle between our beliefs and driving forces on one side, and the threats and fear we feel and experience each day on the other. I keep reminding myself to put aside all my fears, so that I may focus and think and continue. There is nothing wrong in feeling fear and nothing to be ashamed of – we are human. And it takes time and effort to convert that sense of fear into positive energy to continue and insist. In the long term, oppression and restrictions push us to be more creative and to do as much as possible, despite all the personal challenges we face.

In a dictatorship, as so many unjust things become normalized and accepted in our daily lives, the act of spreading information and informing others – however difficult – becomes an ever-more vital part of activism.