TECH MONEY LURKS BEHIND GOVERNMENT PRIVACY CONFERENCE - The Intercept 20160915

IN JANUARY, ACADEMIC-TURNED-REGULATOR Lorrie Cranor gave a presentation and provided the closing remarks at PrivacyCon, a Federal Trade Commission event intended to “inform policymaking with research,” as she put it. Cranor, the FTC’s chief technologist, neglected to mention that over half of the researchers who presented that day had received financial support from Google — hardly a neutral figure in the debate over privacy. Cranor herself got an “unrestricted gift” of roughly $350,000 from the company, according to her CV.

Virtually none of these ties were disclosed, so Google’s entanglements at PrivacyCon were not just extensive, they were also invisible. The internet powerhouse is keenly interested in influencing a lot of government activity, including antitrust regulation, telecommunications policy, copyright enforcement, online security, and trade pacts, and to advance that goal, has thrown around a lot of money in the nation’s capital. Ties to academia let Google attempt to sway power less directly, by giving money to university and graduate researchers whose work remains largely within academic circles — until it gains the audience of federal policymakers, as at PrivacyCon.

Some research at the event supported Google’s positions. An MIT economist who took Google money, for example, questioned whether the government needed to intervene to further regulate privacy when corporations are sometimes incentivized to do so themselves. Geoffrey Manne, a former Microsoft employee who is now the executive director of a Portland-based legal think tank that relies on funding from Google, presented a paper saying that “we need to give some thought to self-help and reputation and competition as solutions” to privacy concerns “before [regulators start] to intervene.” (Manne did not return a request for comment.) Other research presented at PrivacyCon led to conclusions the company would likely dispute.

The problem with Google’s hidden links to the event is not that they should place researchers under automatic suspicion, but rather that the motives of corporate academic benefactors ought always to be suspect. Without prominent disclosure of corporate money in academia, it becomes hard for the consumers of research to raise important questions about its origins and framing.

Google declined to comment on the record for this article.

How Tech Money Flows to Privacy Scholars

Google’s ties to PrivacyCon run deep enough to warrant interrogation. As a case study in how pervasive and well-concealed this type of influence has become, the event is hard to beat.

Authors of a whopping 13 of the 19 papers presented at the conference, and 23 of the 41 speakers, have financial ties to Google. Only two papers disclosed an ongoing or past financial connection to Google.

Other tech companies are also financially linked to speakers at the event. At least two presenters took money from Microsoft, while three others are affiliated with a university center funded by Amazon, Facebook, Google, Microsoft, and Twitter.

“Are we getting voices that have never received money from a company like Google?” — Paul Ohm, Georgetown

But Google’s corporate adversaries are helping to shine a spotlight on what their fellow travelers describe as Google’s particularly deep ties to academia. Those ties are a major focus of a new report from an entity called the Google Transparency Project, part of a charitable nonprofit known as the Campaign for Accountability. The Campaign for Accountability, in turn, receives major, undisclosed funding from Google nemesis and business software company Oracle, as well as from the Bill and Melinda Gates Foundation, which was set up by the co-founder and longtime CEO of Google rival Microsoft (the nonprofit says its funding sources have no bearing on its work to expose funding sources). The Intercept, meanwhile, operates with funding from eBay founder Pierre Omidyar. In other words, tech money even pervades the research into everything tech money pervades. But even accepting that, the report does highlight the extent to which Silicon Valley is widening its influence at the intersection of academia and government.

Take MIT professor Catherine Tucker, who in one PrivacyCon paper argued against proposed government regulations requiring genetic testing services to obtain a type of written permission known as “informed consent” from patients. Tucker added that such a requirement would deter patients from using the testing services and specifically cited one such service, 23andMe, a firm that Google has invested in repeatedly, most recently in October, and whose CEO is the ex-wife of Google co-founder Sergey Brin. Tucker did not disclose in the paper that she has received over $150,000 in grants from Google since 2009, plus another $49,000 from the NET Institute, a think tank funded in part by Google. Contacted by email, Tucker answered that she discloses “nearly two pages of grants from, and consulting work for, a variety of companies and other organizations, on my CV.”

Google has been appreciative of Tucker’s conference work. In a series of emails between Google and George Mason University law professor James Cooper for a 2012 policy conference, first reported by Salon, a Google representative went so far as to personally recommend the marketing professor as someone to invite.

Cooper did not return multiple requests for comment on this story. Reached for comment via email, Cranor replied that she lists “the funder(s) in the acknowledgments of the specific papers that their grant funded,” and that there “have also been press releases about most of the Google funding I received, so everything has been fully disclosed in multiple places.” Cranor added that “all of these grants are made to Carnegie Mellon University for use in my research,” and “I did not receive any of this money personally.” But it is surely worth noting that one of the press releases Cranor references says that “each funded project receives an individual Google sponsor to help develop the research direction and facilitate collaboration between Google and the research team.” Cranor did not reply when asked what role “an individual Google sponsor” has played in her research.

Nick Feamster, a Princeton professor, presented at PrivacyCon on internet-connected household objects and did not disclose that he’s received over $1.5 million in research support from Google. Over email, Feamster told The Intercept that any notion of a conflict “doesn’t even make any sense given the nature of the content we presented,” which included descriptions of security shortcomings in the Nest smart thermostat, owned by Google. “If they were really trying to exert the ‘influence’ that the [report] is trying to suggest, do you think they would have influenced us to do work that actually calls them out on bad privacy practices?”

Many other PrivacyCon speakers, like Omer Tene, an affiliate scholar at Stanford’s Center for Internet and Society, don’t seem ever to have received money from Google; rather, a department or organization they work for is funded in part by Google. On the CIS website, this is made plain:

We are fortunate to enjoy the support of individual and organizational donors, including generous support from Google, Inc. Like all donors to CIS, Google has agreed to provide funds as unrestricted gifts, for which there is no contractual agreement and no promised products, results, or deliverables. To avoid any conflict of interest, CIS avoids litigation if it involves Google. CIS does not accept corporate funding for its network neutrality-related work.

The CIS website also cites Microsoft as a funding source, along with the National Internet Alliance, a telecom lobbying group.

“Neither Google nor any of the other supporters has influenced my work,” Tene told me, referring to his long bibliography on personal data and online privacy.

But support at the institutional level may still influence individual behavior. Cooper, the George Mason staffer who reached out to Google for advice on a privacy conference in the emails described above, works as the director of the program on economics and privacy at the university’s Law and Economics Center, which has received at least $750,000 from Google, as well as additional funds from Amazon, AT&T, and Chinese internet giant Tencent. A 2015 report in Salon detailed close ties between Google and Cooper, including emails indicating that Google was shopping an op-ed written by Cooper to newspapers, and other messages in which Cooper asks Google for help crafting the content of a “symposium on dynamic competition and mergers”:

Cooper also wrote pro-Google academic papers, including this one for the George Mason Law Review entitled “Privacy and Antitrust: Underpants Gnomes, the First Amendment, and Subjectivity,” where he argues that privacy should not be included in any antitrust analysis. Cooper does not disclose Google’s funding of the [Law and Economics Center] in the article. Other pro-Google articles by Cooper, like this one from Main Justice, do include disclosure.

Cooper presented at this year’s PrivacyCon and did not disclose his relationship with Google. Cooper did not return a request for comment.

Among the PrivacyCon presenters who have benefited from non-Google generosity: Carnegie Mellon University’s Alessandro Acquisti and Columbia University’s Roxana Geambasu received $60,000 and $215,000 in Microsoft money, respectively, on top of financial ties to Google. Both co-authored and presented papers on the topic of targeted advertising. Acquisti’s papers, which did not disclose his funding sources, concluded that such marketing was not necessarily to the detriment of users. Geambasu (to her credit) produced data that contradicted Google’s claims about how targeting works and disclosed her financial relationship with the company. She also noted to The Intercept that “all my funding sources are listed in my resume,” located on her website.

The University of California, Berkeley’s International Computer Science Institute, which had an affiliated researcher presenting at PrivacyCon, counts for its survival not just on Google but also on Microsoft, Comcast, Cisco, Intel, and Samsung. Two PrivacyCon submissions came out of Columbia’s Data Science Institute, which relies on Yahoo and Cisco. The Center for Democracy and Technology — which employs one PrivacyCon presenter and co-organized a privacy conference with Princeton in May — is made possible not just by Google but also by an alphabet of startup and legacy tech money, according to IRS disclosures: Adobe, Airbnb, Amazon, AOL, Apple, all the way down to Twitter and Yahoo. Corporate gifts are often able to keep entire academic units functioning. The CMU CyLab, affiliated with four PrivacyCon presenters, is supported by Facebook, Symantec, and LG, among others.

Narrower Disclosure Standards in Academia

Contacted by The Intercept, academics who took money from tech companies and then spoke at PrivacyCon without disclosure provided responses ranging from flat denials to lengthy rationales. Some of the academics argued that just because their institution or organization keeps the lights on with Silicon Valley money doesn’t mean they’re beholden to or even aware of these benefactors. But it’s harder to imagine, say, an environmental studies department getting away with floating on Exxon money, or a cancer researcher bankrolled by Philip Morris. Like radon or noise pollution, invisible biases are something people both overstate and don’t take seriously enough — anyone with whom we disagree must be biased, and we’re loath to admit the possibility of our own.

Serge Egelman, the director of usable security and privacy at Berkeley’s International Computer Science Institute, argued that this is hardly an issue unique to Google:

I am a Google grant recipient, as are literally thousands of other researchers in computer science. Every year, like many other companies who have an interest in advancing basic research (e.g., Cisco, IBM, Comcast, Microsoft, Intel, etc.), Google posts grant solicitations. Grants are made as unrestricted gifts, meaning that Google has no control over how the money is used, and certainly cannot impose any restrictions over what is published or presented. These are grants made to institutions, and not individuals; this has no bearing on my personal income, but means that I can (partially) support a graduate student for a year. Corporate philanthropy is currently filling a gap created by dwindling government support for basic research (though only partially).

He also added that no matter who’s paying the bills, his research is independent and strongly in the public interest:

My own talk was on how Android apps gather sensitive data against users’ wishes, and the ways that the platform could be improved to better support users’ privacy preferences. All of my work in the privacy space is on protecting consumers’ privacy interests and this is the first time anyone has accused me of doing otherwise.

The list of people who spoke at PrivacyCon are some of the most active researchers in the privacy space. They come from the top universities in computer science, which is why it’s no surprise that their institutions have received research funding from many different sources, Google included. The question that you should be asking is, was the research that was presented in the public interest? I think the answer is a resounding yes.

Acquisti, the PrivacyCon presenter from CMU, is a professor affiliated with the university’s CyLab privacy think tank and shared a $400,000 Google gift in 2010 with FTC technologist Cranor and fellow CMU professor Norman Sadeh before submitting and presenting two PrivacyCon papers sans disclosure, plus one presentation that included a disclosure. When the gift was given, the New York Times observed that “it is presumably in Google’s interest to promote the development of privacy-handling tools that forestall federal regulation.” Over email, Acquisti argued that disclosure is only necessary when it applies to the specific funding for a body of work being published or presented. That is, once you’ve given the talk or published a paper, your obligation to mention its financing source ends: “It would be highly misleading and incorrect for an author to list in the acknowledgements of an article a funding source that did NOT in fact fund the research activities conducted for and presented in that article.” In fact, Acquisti said, “It would be nearly fraudulent”:

It would be like claiming that funds from a certain source were used to cover a study (e.g. pay for a lab experiment) while they were not; or it would be like claiming that a research proposal was submitted to (and approved by) some grant committee at some agency/institution, whereas in fact that institution never even knew or heard about that research. … This is such a basic tenet in academia.

This line of reasoning came up again and again as I spoke to privacy-oriented researchers and academics — that papers actually should not mention funding directed to the researcher for other projects, even when such disclosure could bear on a conflict of interest, and that, for better or for worse, this deeply narrow standard of disclosure is just the way it is. And besides, it’s not as if researchers who enjoy cash from Google are necessarily handing favors back, right?

According to Paul Ohm, a professor of law and director at Georgetown University’s Center on Privacy and Technology, that’s missing the point: The danger of corporate money isn’t just clear-cut corruption, but the subconscious calculus academics may make about their research topics and conclusions, and the invisible influence funding can exert. Ohm said he continually worries about “the corrupting influence of corporate money in scholarship” among his peers.

“I think privacy law is so poorly defined,” Ohm told The Intercept, “and we have so few clear rules for the road, that people who practice in privacy law rely on academics more than they do in most areas of the law, because of that it really has become a corporate strategy to interact with academics a lot.”

It’s exactly this threat that a disclosure is meant to counter — not an admission of any wrongdoing but a warning that it’s possible the work in question was compromised in some way, however slight. A disclosure isn’t a stop sign so much as one suggesting caution. That’s why Ohm thinks it’s wise for PrivacyCon (and the infinite stream of other academic conferences) to err on the side of too much disclosure — he goes as far as to say organizers should consider donor source diversity in a “code of conduct.”

“Let’s try to make sure we have at least one voice on every panel that didn’t take money,” Ohm said. “Are we getting voices that have never received money from a company like Google?” And ultimately, why not disclose? Egelman, from UC Berkeley’s International Computer Science Institute, told me he thought extra disclosures wouldn’t be a good way for “researchers [to] use valuable conference time.” Ohm disagrees: “I don’t think it’s difficult at the beginning of your talk to say, ‘I took funding in the broader research project of which this a part.’” In other words: Have you taken money from Google? Are you presenting to a room filled with regulators on a topic about which you cannot speak without the existence of Google at least looming overhead? It would serve your audience — and you — to spend 10 seconds on a disclosure. “Disclosure would have been great,” Ohm said of PrivacyCon. “Recent disclosure would have been great. Disclosure of related funding would have been great.” Apparently, only two other researchers agreed.

Obama's gift to Donald Trump: A policy of cracking down on journalists and their sources - The Intercept 20160406


ONE OF THE intellectual gargoyles that has crawled out of Donald Trump’s brain is the idea that we should “open up” libel laws to make it easier to punish the media for negative or unfair stories. Trump also wants top officials to sign nondisclosure agreements, so they never write memoirs that upset the boss. Trump is so disdainful of free speech that he has even vowed to use the Espionage Act to imprison anyone who says or leaks anything to the media that displeases him.

Actually, that last bit is made up; Trump hasn’t talked about the Espionage Act. Instead, the Obama administration has used the draconian 1917 law to prosecute more leakers and whistleblowers than all previous administrations combined. Under the cover of the Espionage Act and other laws, the administration has secretly obtained the emails and phone records of various reporters, and declared one of them — James Rosen of Fox News — a potential “co-conspirator” with his government source. Another reporter, James Risen of the New York Times, faced a jail sentence unless he revealed a government source (which he refused to do).

Obama has warned of the imminent perils of a Trump presidency, but on the key issue of freedom of the press, which is intimately tied to the ability of officials to talk to journalists, his own administration has established a dangerous precedent for Trump — or any future occupant of the Oval Office — to use one of the most punitive laws of the land against some of the most courageous and necessary people we have. One section of the Espionage Act even allows for the death penalty.

Obama’s gift to Trump was unintentionally highlighted in a speech the president delivered last week at a ceremony to honor the winner of the Toner Prize for Excellence in Journalism. Obama lamented the financial challenges facing the journalism industry and lauded the assembled reporters and editors for the hard and vital work they do. He made no mention of the ways in which his administration is making that job even harder, however, and that omission prompted the winner of the prize, ProPublica’s Alec MacGillis, to gently note, “That does not get him off the hook for his administration taking so long to respond to our FOIAs.”

Two years ago, Eben Moglen, a law professor at Columbia University, gave a series of lectures in which he discussed the idea of “fastening the procedures of totalitarianism on the substance of democratic society.” Moglen’s lectures were mostly concerned with surveillance by the National Security Agency — the title of his talks was “Snowden and the Future” — but his idea applies to other procedures the U.S. government has recently become fond of. Few are more important than targeting whistleblowers and journalists, and Obama has begun the fastening process.

The Release, a new film about Stephen Kim, directed by Stephen Maing.

It’s a maddening situation that becomes all the more maddening when you think of the lives of the leakers and whistleblowers the Department of Justice has ruined. I have previously written at length about two of them, Jeffrey Sterling of the CIA and Stephen Kim of the State Department. A new documentary about Kim, directed by Stephen Maing and released this week by Field of Vision, the film division of The Intercept, powerfully shows the personal hell of living under Obama’s crackdown. After serving a prison sentence for discussing a classified report on North Korea with Fox’s James Rosen, Kim now finds it impossible to return to his old life. Although he has advanced degrees from Harvard and Yale, he cannot get a foreign policy job because of the taint of being a convicted leaker. Kim now describes himself as “homeless, penniless, family-less,” and adds, “I cannot go back to what I was. That person is gone.”

Leakers and whistleblowers are not just categories of people — they are actual people with names and careers and children and lives that have been unjustly crushed. David Petraeus, the former four-star general and CIA director, leaked far more classified data to his biographer-girlfriend than Sterling or Kim or John Kiriakou, and lied to the FBI about it. Petraeus, however, was let off with a misdemeanor plea bargain, because if you are powerful you can do as you like. That deal is another gift to Trump or any menace-in-waiting. The president has set a precedent that says it’s okay to literally give a get-out-of-jail card to your friends.

Now that we live in the shadow of a political era that goes by two words — President Trump — it’s time for Obama to disavow the precedent he has set. The next time he gives a speech on the importance of journalism and free speech, he should admit he has made a terrible mistake and pardon the people who were wrongly prosecuted, including Manning, Kim, Sterling, Kiriakou, and Thomas Drake. He should ask their forgiveness. Obama does not have the power to stop us from electing a terrible president, but he can limit the damage that one can do.

McLaughlin, Jenna - Five Big Unanswered Questions About NSA’s Worldwide Spying - The Intercept 20160317


Nearly three years after NSA whistleblower Edward Snowden gave journalists his trove of documents on the intelligence community’s broad and powerful surveillance regime, the public is still missing some crucial, basic facts about how the operations work.

Surveillance researchers and privacy advocates published a report on Wednesday outlining what we do know, thanks to the period of discovery post-Snowden — and the overwhelming number of things we don’t.

The NSA’s domestic surveillance was understandably the initial focus of public debate. But that debate never really moved on to examine the NSA’s vastly bigger foreign operations.

“There has been relatively little public or congressional debate within the United States about the NSA’s overseas surveillance operations,” write Faiza Patel and Elizabeth Goitein, co-directors of the Brennan Center for Justice’s Liberty and National Security Program, and Amos Toh, legal adviser for David Kaye, the U.N. special rapporteur on the right to freedom of opinion and expression.

The central guidelines the NSA is supposed to follow while spying abroad are described in Executive Order 12333, issued by President Ronald Reagan in 1981, which the authors describe as “a black box.”

Just Security, a national security law blog, and the Brennan Center for Justice are co-hosting a panel on Capitol Hill on Thursday to discuss the policy; the NSA’s privacy and civil liberties officer, Rebecca Richards, will be present.

And the independent government watchdog, the Privacy and Civil Liberties Oversight Board, which has authored in-depth reports on other NSA programs, intends to publish a report on 12333 surveillance programs “this year,” according to spokesperson Jen Burita.

In the meantime, the authors of the report came up with a list of questions they say need to be answered to create an informed public debate.

1. How far does the law go?

The authors ask: How does the NSA actually interpret the law — most of which is public — and use it to justify its tactics? Are there any other laws governing overseas surveillance that are still hidden from public view?

When Congress discovered how the NSA was citing Section 215 of the Patriot Act as giving it the authority to vacuum up massive amounts of information about American telephone calls, many members were shocked. One of the Patriot Act’s original authors, Rep. Jim Sensenbrenner, R-Wis., has repeatedly said the NSA abused what was meant to be a narrow law.

“The public deserves to know how the agencies interpret their duties and obligations under the Constitution and international law,” the authors write.

2. Who’s watching the spies?

How can we know there’s proper oversight of the intelligence community, both internally and through Congress? Does Congress even know what it’s funding, especially when intelligence work is contracted out to the private sector?

Lawmakers have complained that they learned more about NSA spying from the media and Snowden than from classified hearings.

3. How much foreign spying ends up in domestic courts?

The authors wonder how evidence collected through foreign spying is used in court, and whether or not “targets” of the surveillance are told about the NSA’s search when that search finds data that can be used against them.

Officials told New York Times reporter Charlie Savage that “in practice … the government already avoids” introducing evidence obtained directly from 12333 intercepts “so as not to have to divulge the origins of the evidence in court.” “But the officials contend,” Savage wrote, “that defendants have no right to know if 12333 intercepts provided a tip from which investigators derived other evidence.”

4. How many words don’t mean what we think they mean?

Some of the report’s questions focus on the NSA’s use of language when it describes different programs. Though words like “collection” and “gathering” sound synonymous to us, the NSA could be using them differently, leading to misinterpretation of what the agency is actually doing. “Is the term ‘collection’ interpreted differently from the terms ‘interception,’ ‘gathering,’ and ‘acquisition’?” the authors ask.

5. Where does it end?

When the NSA says a search is “targeted,” could the agency still be sweeping up a lot of information? And not just about foreigners?

Does the agency use vague search terms like “ISIS” or “nuclear” when combing through communications, thereby grabbing up data from millions of innocent people simply discussing the news?

And how much American data is swept up, either on purpose or incidentally, when Americans talk with friends overseas, or their messages are routed through other countries due to the way the internet works?

“The fact that [12333 programs] are conducted abroad rather than at home makes little difference in an age where data and information flows are unconstrained by geography, and where the constitutional rights of Americans are just as easily compromised by operations in London as those in Los Angeles,” the authors write.

Upgrade your iPhone passcode to defeat the FBI's backdoor strategy - The Intercept 20160218


Yesterday, Apple CEO Tim Cook published an open letter opposing a court order to build the FBI a “backdoor” for the iPhone.

Cook wrote that the backdoor, which removes limitations on how often an attacker can incorrectly guess an iPhone passcode, would set a dangerous precedent and “would have the potential to unlock any iPhone in someone’s physical possession,” even though in this instance, the FBI is seeking to unlock a single iPhone belonging to one of the killers in a 14-victim mass shooting spree in San Bernardino, California, in December.

It’s true that ordering Apple to develop the backdoor will fundamentally undermine iPhone security, as Cook and other digital security advocates have argued. But it’s possible for individual iPhone users to protect themselves from government snooping by setting strong passcodes on their phones — passcodes the FBI would not be able to unlock even if it gets its iPhone backdoor.

The technical details of how the iPhone encrypts data, and how the FBI might circumvent this protection, are complex and convoluted, and are being thoroughly explored elsewhere on the internet. What I’m going to focus on here is how ordinary iPhone users can protect themselves.

The short version: If you’re worried about governments trying to access your phone, set your iPhone up with a random, 11-digit numeric passcode. What follows is an explanation of why that will protect you and how to actually do it.

If it sounds outlandish to worry about government agents trying to crack into your phone, consider that when you travel internationally, agents at the airport or other border crossings can seize, search, and temporarily retain your digital devices — even without any grounds for suspicion. And while a local police officer can’t search your iPhone without a warrant, cops have used their own digital devices to get search warrants within 15 minutes, as a Supreme Court opinion recently noted.

The most obvious way to try and crack into your iPhone, and what the FBI is trying to do in the San Bernardino case, is to simply run through every possible passcode until the correct one is discovered and the phone is unlocked. This is known as a “brute force” attack.

For example, let’s say you set a six-digit passcode on your iPhone. There are 10 possibilities for each digit in a numbers-based passcode, and so there are 10⁶, or 1 million, possible combinations for a six-digit passcode as a whole. It is trivial for a computer to generate all of these possible codes. The difficulty comes in trying to test them.

One obstacle to testing all possible passcodes is that the iPhone intentionally slows down after you guess wrong a few times. An attacker can try four incorrect passcodes before she’s forced to wait one minute. If she continues to guess wrong, the time delay increases to five minutes, 15 minutes, and finally one hour. There’s even a setting to erase all data on the iPhone after 10 wrong guesses.

This is where the FBI’s requested backdoor comes into play. The FBI is demanding that Apple create a special version of the iPhone’s operating system, iOS, that removes the time delays and ignores the data erasure setting. The FBI could install this malicious software on the San Bernardino killer’s iPhone, brute force the passcode, unlock the phone, and access all of its data. And that process could hypothetically be repeated on anyone else’s iPhone.

(There’s also speculation that the government could make Apple alter the operation of a piece of iPhone hardware known as the Secure Enclave; for the purposes of this article, I assume the protections offered by this hardware, which would slow an attacker down even more, are not in place.)

Even if the FBI gets its way and can clear away iPhone safeguards against passcode guessing, it faces another obstacle, one that should help keep it from cracking passcodes of, say, 11 digits: It can only test potential passcodes for your iPhone using the iPhone itself; the FBI can’t use a supercomputer or a cluster of iPhones to speed up the guessing process. That’s because iPhone models, at least as far back as May 2012, have come with a Unique ID (UID) embedded in the device hardware. Each iPhone has a different UID fused to the phone, and, by design, no one can read it and copy it to another computer. The iPhone can only be unlocked when the owner’s passcode is combined with the UID to derive an encryption key.
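
The principle is easy to sketch in code. To be clear, Apple’s actual key derivation runs inside the phone’s hardware and is not PBKDF2; the snippet below, with a made-up UID value, is only a rough illustration of the idea that the encryption key depends on a secret fused to the device, which is why every guess has to run on the phone itself:

import hashlib

def derive_key(passcode, device_uid):
    # Illustrative stand-in for Apple's hardware key derivation: the
    # result depends on both the passcode and the device-bound UID, and
    # the high iteration count makes each guess deliberately slow.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_uid, 200_000)

uid = bytes.fromhex("a3" * 32)  # hypothetical per-device secret, unreadable off-device
print(derive_key("79814206385", uid).hex())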

So the FBI is stuck using your iPhone to test passcodes. And it turns out that your iPhone is kind of slow at that: iPhones intentionally encrypt data in such a way that they must spend about 80 milliseconds doing the math needed to test a passcode, according to Apple. That limits them to testing 12.5 passcode guesses per second, which means that guessing a six-digit passcode would take, at most, just over 22 hours.

You can calculate the time for that task simply by dividing the 1 million possible six-digit passcodes by 12.5 guesses per second. That’s 80,000 seconds, or 1,333 minutes, or 22 hours. But the attacker doesn’t have to try each passcode; she can stop when she finds one that successfully unlocks the device. On average, it will only take 11 hours for that to happen.

But the FBI would be happy to spend mere hours cracking your iPhone. What if you use a longer passcode? Here’s how long the FBI would need (a short script that reproduces these estimates follows the list):

  • seven-digit passcodes will take up to 9.2 days, and on average 4.6 days, to crack
  • eight-digit passcodes will take up to three months, and on average 46 days, to crack
  • nine-digit passcodes will take up to 2.5 years, and on average 1.2 years, to crack
  • 10-digit passcodes will take up to 25 years, and on average 12.6 years, to crack
  • 11-digit passcodes will take up to 253 years, and on average 127 years, to crack
  • 12-digit passcodes will take up to 2,536 years, and on average 1,268 years, to crack
  • 13-digit passcodes will take up to 25,367 years, and on average 12,683 years, to crack
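
These estimates come from the same arithmetic as the six-digit case. The following minimal Python sketch reproduces them, assuming the fixed rate of 12.5 guesses per second and that the hypothetical backdoor has disabled both the escalating delays and the 10-guess erasure setting:

# Brute-force time estimates at 12.5 guesses/second (80 ms per guess),
# assuming the delays and erasure setting are disabled by the backdoor.
GUESSES_PER_SECOND = 12.5
SECONDS_PER_DAY = 60 * 60 * 24

for digits in range(6, 14):
    combinations = 10 ** digits                  # 10 choices per digit
    worst = combinations / GUESSES_PER_SECOND    # exhaust every code
    average = worst / 2                          # on average, stop halfway
    # divide the day counts by 365 to compare with the years quoted above
    print(f"{digits} digits: up to {worst / SECONDS_PER_DAY:,.1f} days, "
          f"on average {average / SECONDS_PER_DAY:,.1f} days")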

It’s important to note that these estimates only apply to truly random passcodes. If you choose a passcode by stringing together dates, phone numbers, social security numbers, or anything else that’s at all predictable, the attacker might try guessing those first, and might crack your 11-digit passcode in a very short amount of time. So make sure your passcode is random, even if this means it takes extra time to memorize it. (Memorizing that many digits might seem daunting, but if you’re older than, say, 29, there was probably a time when you memorized several phone numbers that you dialed on a regular basis.)

Nerd tip: If you’re using a Mac or Linux, you can securely generate a random 11-digit passcode by opening the Terminal app and typing this command (the %011d format keeps any leading zeros, so the output is always 11 digits long):

python -c 'from random import SystemRandom as r; print("%011d" % r().randint(0, 10**11-1))'

It’s also important to note that we’re assuming the FBI, or some other government agency, has not found a flaw in Apple’s security architecture that would allow them to test passcodes on their own computers or at a rate faster than 80 milliseconds per passcode.

Once you’ve created a new 11-digit passcode, you can start using it by opening the Settings app, selecting “Touch ID & Passcode,” and entering your old passcode if prompted. Then, if you have an existing passcode, select “Change passcode” and enter your old passcode. If you do not have an existing passcode, and are setting one for the first time, click “Turn passcode on.”

Then, in all cases, click “Passcode options,” select “Custom numeric code,” and then enter your new passcode.

Here are a few final tips to make this long-passcode thing work better:

  • Within the “Touch ID & Passcode” settings screen, make sure to turn on the Erase Data setting to erase all data on your iPhone after 10 failed passcode attempts.
  • Make sure you don’t forget your passcode, or you’ll lose access to all of the data on your iPhone.
  • Don’t use Touch ID to unlock your phone. Your attacker doesn’t need to guess your passcode if she can push your finger onto the home button to unlock it instead. (At least one court has ruled that while the police cannot compel you to disclose your passcode, they can compel you to use your fingerprint to unlock your smartphone.)
  • Don’t use iCloud backups. Your attacker doesn’t need to guess your passcode if she can get a copy of all the same data from Apple’s server, where it’s no longer protected by your passcode.
  • Do make local backups to your computer using iTunes, especially if you are worried about forgetting your iPhone passcode. You can encrypt the backups, too.

If you choose a strong passcode, the FBI shouldn’t be able to unlock your encrypted phone, even if it installs a backdoored version of iOS on it. Not unless it has hundreds of years to spare.

The Surveillance Engine: How the NSA Built Its Own Secret Google - 20140825


The National Security Agency is secretly providing data to nearly two dozen U.S. government agencies with a “Google-like” search engine built to share more than 850 billion records about phone calls, emails, cellphone locations, and internet chats, according to classified documents obtained by The Intercept.

The documents provide the first definitive evidence that the NSA has for years made massive amounts of surveillance data directly accessible to domestic law enforcement agencies. Planning documents for ICREACH, as the search engine is called, cite the Federal Bureau of Investigation and the Drug Enforcement Administration as key participants.

ICREACH contains information on the private communications of foreigners and, it appears, millions of records on American citizens who have not been accused of any wrongdoing. Details about its existence are contained in the archive of materials provided to The Intercept by NSA whistleblower Edward Snowden.

Earlier revelations sourced to the Snowden documents have exposed a multitude of NSA programs for collecting large volumes of communications. The NSA has acknowledged that it shares some of its collected data with domestic agencies like the FBI, but details about the method and scope of its sharing have remained shrouded in secrecy.


ICREACH has been accessible to more than 1,000 analysts at 23 U.S. government agencies that perform intelligence work, according to a 2010 memo. A planning document from 2007 lists the DEA, FBI, Central Intelligence Agency, and the Defense Intelligence Agency as core members. Information shared through ICREACH can be used to track people’s movements, map out their networks of associates, help predict future actions, and potentially reveal religious affiliations or political beliefs.

The creation of ICREACH represented a landmark moment in the history of classified U.S. government surveillance, according to the NSA documents.

“The ICREACH team delivered the first-ever wholesale sharing of communications metadata within the U.S. Intelligence Community,” noted a top-secret memo dated December 2007. “This team began over two years ago with a basic concept compelled by the IC’s increasing need for communications metadata and NSA’s ability to collect, process and store vast amounts of communications metadata related to worldwide intelligence targets.”

The search tool was designed to be the largest system for internally sharing secret surveillance records in the United States, capable of handling two to five billion new records every day, including more than 30 different kinds of metadata on emails, phone calls, faxes, internet chats, and text messages, as well as location information collected from cellphones. Metadata reveals information about a communication—such as the “to” and “from” parts of an email, and the time and date it was sent, or the phone numbers someone called and when they called—but not the content of the message or audio of the call.
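
To make that distinction concrete, here is a purely hypothetical sketch of what two such metadata records might look like. The field names are invented for illustration and do not come from the documents:

# Hypothetical metadata records: routing and timing survive, content does not.
email_event = {
    "type": "email",
    "from": "alice@example.org",
    "to": "bob@example.net",
    "sent": "2007-12-01T14:32:09Z",
    # no subject line, no body
}
call_event = {
    "type": "call",
    "caller": "+1-202-555-0142",
    "callee": "+44-20-7946-0871",
    "start": "2007-12-01T15:05:00Z",
    "duration_seconds": 312,
    # no audio
}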

ICREACH does not appear to have a direct relationship to the large NSA database, previously reported by The Guardian, that stores information on millions of ordinary Americans’ phone calls under Section 215 of the Patriot Act. Unlike the 215 database, which is accessible to a small number of NSA employees and can be searched only in terrorism-related investigations, ICREACH grants access to a vast pool of data that can be mined by analysts from across the intelligence community for “foreign intelligence”—a vague term that is far broader than counterterrorism.


Data available through ICREACH appears to be primarily derived from surveillance of foreigners’ communications, and planning documents show that it draws on a variety of different sources of data maintained by the NSA. Though one 2010 internal paper clearly calls it “the ICREACH database,” a U.S. official familiar with the system disputed that, telling The Intercept that while “it enables the sharing of certain foreign intelligence metadata,” ICREACH is “not a repository [and] does not store events or records.” Instead, it appears to provide analysts with the ability to perform a one-stop search of information from a wide variety of separate databases.

In a statement to The Intercept, the Office of the Director of National Intelligence confirmed that the system shares data that is swept up by programs authorized under Executive Order 12333, a controversial Reagan-era presidential directive that underpins several NSA bulk surveillance operations that monitor communications overseas. The 12333 surveillance takes place with no court oversight and has received minimal Congressional scrutiny because it is targeted at foreign, not domestic, communication networks. But the broad scale of 12333 surveillance means that some Americans’ communications get caught in the dragnet as they transit international cables or satellites—and documents contained in the Snowden archive indicate that ICREACH taps into some of that data.

Legal experts told The Intercept they were shocked to learn about the scale of the ICREACH system and are concerned that law enforcement authorities might use it for domestic investigations that are not related to terrorism.

“To me, this is extremely troublesome,” said Elizabeth Goitein, co-director of the Liberty and National Security Program at the New York University School of Law’s Brennan Center for Justice. “The myth that metadata is just a bunch of numbers and is not as revealing as actual communications content was exploded long ago—this is a trove of incredibly sensitive information.”

Brian Owsley, a federal magistrate judge between 2005 and 2013, said he was alarmed that traditional law enforcement agencies such as the FBI and the DEA were among those with access to the NSA’s surveillance troves.

“This is not something that I think the government should be doing,” said Owsley, an assistant professor of law at Indiana Tech Law School. “Perhaps if information is useful in a specific case, they can get judicial authority to provide it to another agency. But there shouldn’t be this buddy-buddy system back-and-forth.”

Jeffrey Anchukaitis, an ODNI spokesman, declined to comment on a series of questions from The Intercept about the size and scope of ICREACH, but said that sharing information had become “a pillar of the post-9/11 intelligence community” as part of an effort to prevent valuable intelligence from being “stove-piped in any single office or agency.”

Using ICREACH to query the surveillance data, “analysts can develop vital intelligence leads without requiring access to raw intelligence collected by other IC [Intelligence Community] agencies,” Anchukaitis said. “In the case of NSA, access to raw signals intelligence is strictly limited to those with the training and authority to handle it appropriately. The highest priority of the intelligence community is to work within the constraints of law to collect, analyze and understand information related to potential threats to our national security.”

One-Stop Shopping

The mastermind behind ICREACH was recently retired NSA director Gen. Keith Alexander, who outlined his vision for the system in a classified 2006 letter to then-Director of National Intelligence John Negroponte. The search tool, Alexander wrote, would “allow unprecedented volumes of communications metadata to be shared and analyzed,” opening up a “vast, rich source of information” for other agencies to exploit. By late 2007 the NSA reported to its employees that the system had gone live as a pilot program.

The NSA described ICREACH as a “one-stop shopping tool” for analyzing communications. The system would enable at least a 12-fold increase in the volume of metadata being shared between intelligence community agencies, the documents stated. Using ICREACH, the NSA planned to boost the amount of communications “events” it shared with other U.S. government agencies from 50 billion to more than 850 billion, bolstering an older top-secret data sharing system named CRISSCROSS/PROTON, which was launched in the 1990s and managed by the CIA.

To allow government agents to sift through the masses of records on ICREACH, engineers designed a simple “Google-like” search interface. This enabled analysts to run searches against particular “selectors” associated with a person of interest—such as an email address or phone number—and receive a page of results displaying, for instance, a list of phone calls made and received by a suspect over a month-long period. The documents suggest these results can be used to reveal the “social network” of the person of interest—in other words, those they communicate with, such as friends, family, and other associates.
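
Conceptually, a selector search is just a filter over a pool of metadata records like the hypothetical ones sketched earlier. The snippet below is illustrative only (the documents do not describe ICREACH’s internals at this level of detail), but it captures the idea of pulling every event that touches one phone number within a given window:

# Hypothetical sketch of a selector query over call-metadata records.
records = [
    {"caller": "+1-202-555-0142", "callee": "+44-20-7946-0871",
     "start": "2007-12-01T15:05:00Z", "duration_seconds": 312},
    {"caller": "+1-202-555-0199", "callee": "+1-202-555-0142",
     "start": "2007-12-14T09:12:44Z", "duration_seconds": 75},
]

def query_selector(records, selector, start, end):
    # ISO-8601 timestamps in a single format compare correctly as strings,
    # so no date parsing is needed for this illustration.
    return [r for r in records
            if selector in (r["caller"], r["callee"])
            and start <= r["start"] < end]

# Every call the selector made or received during December 2007:
for event in query_selector(records, "+1-202-555-0142",
                            "2007-12-01T00:00:00Z", "2008-01-01T00:00:00Z"):
    print(event)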


The purpose of ICREACH, projected initially to cost between $2.5 million and $4.5 million per year, was to allow government agents to comb through the NSA’s metadata troves to identify new leads for investigations, to predict potential future threats against the U.S., and to keep tabs on what the NSA calls “worldwide intelligence targets.”

However, the documents make clear that it is not only data about foreigners’ communications that are available on the system. Alexander’s memo states that “many millions of…minimized communications metadata records” would be available through ICREACH, a reference to the process of “minimization,” whereby identifying information—such as part of a phone number or email address—is removed so it is not visible to the analyst. NSA documents define minimization as “specific procedures to minimize the acquisition and retention [of] information concerning unconsenting U.S. persons”—making it a near certainty that ICREACH gives analysts access to millions of records about Americans. The “minimized” information can still be retained under NSA rules for up to five years and “unmasked” at any point during that period if it is ever deemed necessary for an investigation.

The Brennan Center’s Goitein said it appeared that with ICREACH, the government “drove a truck” through loopholes that allowed it to circumvent restrictions on retaining data about Americans. This raises a variety of legal and constitutional issues, according to Goitein, particularly if the data can be easily searched on a large scale by agencies like the FBI and DEA for their domestic investigations.

“The idea with minimization is that the government is basically supposed to pretend this information doesn’t exist, unless it falls under certain narrow categories,” Goitein said. “But functionally speaking, what we’re seeing here is that minimization means, ‘we’ll hold on to the data as long as we want to, and if we see anything that interests us then we can use it.'”

A key question, according to several experts consulted by The Intercept, is whether the FBI, DEA or other domestic agencies have used their access to ICREACH to secretly trigger investigations of Americans through a controversial process known as “parallel construction.”

Parallel construction involves law enforcement agents using information gleaned from covert surveillance, but later covering up their use of that data by creating a new evidence trail that excludes it. This hides the true origin of the investigation from defense lawyers and, on occasion, prosecutors and judges—which means the legality of the evidence that triggered the investigation cannot be challenged in court.

In practice, this could mean that a DEA agent identifies an individual he believes is involved in drug trafficking in the United States on the basis of information stored on ICREACH. The agent begins an investigation but pretends, in his records of the investigation, that the original tip did not come from the secret trove. Last year, Reuters first reported details of parallel construction based on NSA data, linking the practice to a unit known as the Special Operations Division, which Reuters said distributes tips from NSA intercepts and a DEA database known as DICE.

Tampa attorney James Felman, chair of the American Bar Association’s criminal justice section, told The Intercept that parallel construction is a “tremendously problematic” tactic because law enforcement agencies “must be honest with courts about where they are getting their information.” The ICREACH revelations, he said, “raise the question of whether parallel construction is present in more cases than we had thought. And if that’s true, it is deeply disturbing and disappointing.”

Anchukaitis, the ODNI spokesman, declined to say whether ICREACH has been used to aid domestic investigations, and he would not name all of the agencies with access to the data. “Access to information-sharing tools is restricted to users conducting foreign intelligence analysis who have the appropriate training to handle the data,” he said.

CIA headquarters in Langley, Virginia, 2001.

Project CRISSCROSS

The roots of ICREACH can be traced back more than two decades.

In the early 1990s, the CIA and the DEA embarked on a secret initiative called Project CRISSCROSS. The agencies built a database system to analyze phone billing records and phone directories, in order to identify links between intelligence targets and other persons of interest. At first, CRISSCROSS was used in Latin America and was “extremely successful” at identifying narcotics-related suspects. It stored only five kinds of metadata on phone calls: date, time, duration, called number, and calling number, according to an NSA memo.

The program rapidly grew in size and scope. By 1999, the NSA, the Defense Intelligence Agency, and the FBI had gained access to CRISSCROSS and were contributing information to it. As CRISSCROSS continued to expand, it was supplemented with a system called PROTON that enabled analysts to store and examine additional types of data. These included unique codes used to identify individual cellphones, location data, text messages, passport and flight records, visa application information, as well as excerpts culled from CIA intelligence reports.

An NSA memo noted that PROTON could identify people based on whether they behaved in a “similar manner to a specific target.” The memo also said the system “identifies correspondents in common with two or more targets, identifies potential new phone numbers when a target switches phones, and identifies networks of organizations based on communications within the group.” In July 2006, the NSA estimated that it was storing 149 billion phone records on PROTON.

According to the NSA documents, PROTON was used to track down “High Value Individuals” in the United States and Iraq, investigate front companies, and discover information about foreign government operatives. CRISSCROSS enabled major narcotics arrests and was integral to the CIA’s rendition program during the Bush Administration, which involved abducting terror suspects and flying them to secret “black site” prisons where they were brutally interrogated and sometimes tortured. One NSA document on the system, dated from July 2005, noted that the use of communications metadata “has been a contribution to virtually every successful rendition of suspects and often, the deciding factor.”

However, the NSA came to view CRISSCROSS/PROTON as insufficient, in part due to the aging standard of its technology. The intelligence community was sensitive to criticism that it had failed to share information that could potentially have helped prevent the 9/11 attacks, and it had been strongly criticized for intelligence failures before the invasion of Iraq in 2003. For the NSA, it was time to build a new and more advanced system to radically increase metadata sharing.

A New Standard

In 2006, NSA director Alexander drafted his secret proposal to then-Director of National Intelligence Negroponte.

Alexander laid out his vision for what he described as a “communications metadata coalition” that would be led by the NSA. His idea was to build a sophisticated new tool that would grant other federal agencies access to “more than 50 existing NSA/CSS metadata fields contained in trillions of records” and handle “many millions” of new minimized records every day—indicating that a large number of Americans’ communications would be included.

The NSA’s contributions to the ICREACH system, Alexander wrote, “would dwarf the volume of NSA’s present contributions to PROTON, as well as the input of all other [intelligence community] contributors.”

Alexander explained in the memo that NSA was already collecting “vast amounts of communications metadata” and was preparing to share some of it on a system called GLOBALREACH with its counterparts in the so-called Five Eyes surveillance alliance: the United Kingdom, Australia, Canada, and New Zealand.

ICREACH, he proposed, could be designed like GLOBALREACH and accessible only to U.S. agencies in the intelligence community, or IC.

A top-secret PowerPoint presentation from May 2007 illustrated how ICREACH would work—revealing its “Google-like” search interface and showing how the NSA planned to link it to the DEA, DIA, CIA, and the FBI. Each agency would access and input data through a secret data “broker”—a sort of digital letterbox—linked to the central NSA system. ICREACH, according to the presentation, would also receive metadata from the Five Eyes allies.

The aim was not necessarily for ICREACH to completely replace CRISSCROSS/PROTON, but rather to complement it. The NSA planned to use the new system to perform more advanced kinds of surveillance—such as “pattern of life analysis,” which involves monitoring who individuals communicate with and the places they visit over a period of several months, in order to observe their habits and predict future behavior.

The NSA agreed to train other U.S. government agencies to use ICREACH. Intelligence analysts could be “certified” for access to the massive database if they required access in support of a given mission, worked as an analyst within the U.S. intelligence community, and had top-secret security clearance. (According to the latest government figures, there are more than 1.2 million government employees and contractors with top-secret clearance.)

In November 2006, according to the documents, the Director of National Intelligence approved the proposal. ICREACH was rolled out as a test program by late 2007. It’s not clear when it became fully operational, but a September 2010 NSA memo referred to it as the primary tool for sharing data in the intelligence community. “ICREACH has been identified by the Office of the Director of National Intelligence as the U.S. Intelligence Community’s standard architecture for sharing communications metadata,” the memo states, adding that it provides “telephony metadata events” from the NSA and its Five Eyes partners “to over 1000 analysts across 23 U.S. Intelligence Community agencies.” It does not name all of the 23 agencies, however.

The limitations placed on analysts authorized to sift through the vast data troves are not outlined in the Snowden files, with only scant references to oversight mechanisms. According to the documents, searches performed by analysts are subject to auditing by the agencies for which they work. The documents also say the NSA would conduct random audits of the system to check for any government agents abusing their access to the data. The Intercept asked the NSA and the ODNI whether any analysts had been found to have conducted improper searches, but the agencies declined to comment.

While the NSA initially estimated making upwards of 850 billion records available on ICREACH, the documents indicate that target could have been surpassed, and that the number of personnel accessing the system may have increased since the 2010 reference to more than 1,000 analysts. The intelligence community’s top-secret “Black Budget” for 2013, also obtained by Snowden, shows that the NSA recently sought new funding to upgrade ICREACH to “provide IC analysts with access to a wider set of shareable data.”

In December last year, a surveillance review group appointed by President Obama recommended that as a general rule “the government should not be permitted to collect and store all mass, undigested, non-public personal information about individuals to enable future queries and data-mining for foreign intelligence purposes.” It also recommended that any information about United States persons should be “purged upon detection unless it either has foreign intelligence value or is necessary to prevent serious harm to others.”

Peter Swire, one of the five members of the review panel, told The Intercept he could not comment on whether the group was briefed on specific programs such as ICREACH, but noted that the review group raised concerns that “the need to share had gone too far among multiple agencies.”

Comey Calls on Tech Companies Offering End-to-End Encryption to Reconsider “Their Business Model” - The Intercept 20151209

FBI Director James Comey on Wednesday called for tech companies currently offering end-to-end encryption to reconsider their business model, and instead adopt encryption techniques that allow them to intercept and turn over communications to law enforcement when necessary.

End-to-end encryption, which is the state of the art in providing secure communications on the internet, has become increasingly common and desirable in the wake of NSA whistleblower Edward Snowden’s revelations about mass surveillance by the government.

Comey had previously argued that tech companies could somehow come up with a “solution” that allowed for government access but didn’t weaken security. Tech experts called this a “magic pony” and mocked him for his naivete.

Now, Comey said at a Senate Judiciary Committee hearing Wednesday morning, extensive conversations with tech companies have persuaded him that “it’s not a technical issue.”

“It is a business model question,” he said. “The question we have to ask is: Should they change their business model?”

Comey’s clear implication was that companies that think it’s a good business model to offer end-to-end encryption — or, like Apple, allow users to fully encrypt their iPhones — should roll those services back.

Comey and other government representatives have been pressuring companies like Apple and Google for many months in public hearings to find a way to provide law enforcement access to decrypted communications whenever there’s a lawful request. Deputy Attorney General Sally Quillian Yates said in a July hearing that some sort of mandate or legislation “may ultimately be necessary” to compel companies to comply, but insisted that wasn’t the DOJ’s desire. Now, there’s little pussyfooting about it.

“There are plenty of companies today that provide secure services to their customers and still comply with court orders,” Comey said. “There are plenty of folks who make good phones who are able to unlock them in response to a court order. In fact, the makers of phones that today can’t be unlocked, a year ago they could be unlocked.”

Comey indicated that these companies should be satisfied providing customers with encryption that allows for interception by the providers, who can then turn over the information to law enforcement.

Privacy experts say that the same holes in encryption that allow for authorized interception also allow for unauthorized interception — and therefore provide insufficient security.
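One way to see the experts’ point: with end-to-end encryption, the decryption key exists only on the endpoints, so the provider has nothing useful to hand over or to lose; once the provider retains a copy of the key for lawful interception, that stored key is equally useful to anyone who steals it. The sketch below uses the real `cryptography` library’s Fernet interface for symmetric encryption, but the escrow scenario itself is a simplified hypothetical, not any company’s actual design.

```python
# Simplified illustration of why provider-held keys are a single
# point of compromise. Uses the `cryptography` package (pip install
# cryptography); the scenario itself is hypothetical.
from cryptography.fernet import Fernet

# End-to-end model: the key exists only on the two endpoints.
endpoint_key = Fernet.generate_key()
ciphertext = Fernet(endpoint_key).encrypt(b"meet at noon")
# The provider relays `ciphertext` but holds no key: it cannot comply
# with an interception order, and a breach of the provider yields
# nothing readable.

# Provider-accessible model: the provider keeps a copy of the key so
# it can decrypt on lawful request.
escrowed_key = endpoint_key  # a copy retained server-side
print(Fernet(escrowed_key).decrypt(ciphertext))  # lawful access works
# ...but the same stored key serves anyone who obtains it: an insider,
# a foreign service, or a criminal who breaches the key store. The
# hole that enables authorized interception is the same hole that
# enables unauthorized interception.
```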

Comey called on customers, who he said are becoming more aware of the “dangers” of encryption, to “speak to” phone companies and insist they’ll “keep using [their] phones” if the companies stop offering the technology.

Comey acknowledged that encrypted apps would still exist. But, he said, encryption “by default” is the real problem. He told Sen. Mike Lee, R-Utah, that “I think there’s no way we solve this entire problem. … The sophisticated user could still find a way.”

That didn’t stop him from calling for an international standard for encryption technologies, however. Many popular encrypted applications are not U.S. based. Any action imposed on American companies would likely handicap them and lead customers to turn to overseas options.

“We have to remember limits of what we can do legislatively,” said Lee. “If we’re going to mandate that legislatively” — force companies to stop offering strong encryption — “it wouldn’t necessarily fix the problem,” he said.

For the first time, Comey made a specific allegation about encryption having interfered with an FBI terror investigation.

“In May, when two terrorists attempted to kill a whole lot of people in Garland, Texas, and were stopped by the action of great local law enforcement … that morning, before one of those terrorists left to try to commit mass murder, he exchanged 109 messages with an overseas terrorist. We have no idea what he said, because those messages were encrypted.”

“That is a big problem,” Comey said.

In the Garland case, however, the FBI had been tracking one of the would-be attackers for months, and had alerted local police that he might be headed to a controversial anti-Muslim exhibition. Yet FBI surveillance didn’t stop Elton Simpson; the Garland Police Department did. The local police never got the FBI’s email.

Comey did not request specific legislation to compel companies to abandon end-to-end encryption, but told Sen. Dianne Feinstein, D-Calif., that he would like to see all companies responding to lawful requests for data. Feinstein offered to pursue legislation herself, citing fear that her grandchildren might start communicating with terrorists over encrypted PlayStation systems.

Toward the end of the hearing, Comey seemed to contradict his earlier comments urging companies to reconsider their business models. “I don’t want to tell them how to do their business,” he said. Then, moments later, he added that “there are costs to being an American business — you can’t pollute.” The implication there was that American businesses might need to comply with new standards regardless of what the rest of the world does — as if providing end-to-end encryption to protect the average person’s communications is the same as destroying the environment.

Technologists, privacy advocates, and journalists reacted on Twitter with confusion and frustration.

Scope of Secretive FBI National Security Letters Revealed by First Lifted Gag Order - The Intercept 20151130

Fourteen years after the FBI began using national security letters to unilaterally and quietly demand records from Internet service providers, telephone companies, and financial institutions, one recipient, Nicholas Merrill, founder of a small Internet service provider, is finally free to talk about what it’s like to get one.

The FBI issues the letters, known as NSLs, without any judicial review whatsoever. And they come with a gag order.

But a federal District Court judge in New York ruled in September that the continuous ban on Merrill’s speech about the order was not justified, considering that the FBI’s investigation was long over and most details about the order were already openly available.

After waiting for 90 days to let the government appeal the decision — which it didn’t — the judge lifted the gag on Monday.

Merrill immediately released the FBI’s attachment to the national security letter it sent him 11 years ago, listing the kinds of information it wanted about a particular customer without getting a warrant.

One of the most striking revelations, Merrill said during a press teleconference, was that the FBI was requesting detailed cell site location information, meaning cellphone tracking records, under the heading of “radius log” information. Traditionally, a RADIUS log (named for the Remote Authentication Dial-In User Service protocol) records a user’s attempts to connect to a dial-up or DSL server, a sort of anachronism given the progress of technology.

“The notion that the government can collect cellphone location information — to turn your cellphone into a tracking device, just by signing a letter — is extremely troubling,” Merrill said.

The court ruling noted that the FBI is no longer requesting this type of information using NSLs, but wants to maintain the possibility of doing so in the future.

The question of whether law enforcement should be required to get a warrant before obtaining detailed cell site location information is currently being litigated in several federal District Courts, though the Supreme Court recently declined to hear a case on the issue.

And, according to Merrill, the FBI’s request for “any other information which you consider to be an electronic communication transactional record” also includes incredibly invasive things like a detailed list of all the web searches performed on a computer.

Merrill did not release the name of the person targeted by the investigation and the letter, though he is now legally allowed to do so, “for privacy reasons,” he said.

Otherwise, the newly disclosed list did not provide much new information about the FBI’s investigation practices — a big reason why the court chose to lift the gag order.

In the newly unredacted ruling, U.S. District Court Judge Victor Marrero wrote that the case “implicates serious issues, both with respect to the First Amendment and accountability of the government to the people.”

According to the Electronic Frontier Foundation, around 300,000 NSLs have been issued since 2001. By 2008, the Justice Department’s own inspector general had concluded that the FBI had been abusing its NSL powers, even after the bureau changed its policies in 2006.

“I feel vindicated today,” said Merrill. “But there’s a lot more work to be done.”
