Unprecedented and Unlawful: The NSA’s “Upstream” Surveillance - Just Security 20160919

The FISA Amendments Act of 2008 (FAA) — the statute the government uses to engage in warrantless surveillance of Americans’ international communications — is scheduled to expire in December 2017. In anticipation of the coming legislative debate over reauthorization, Congress has already begun to hold hearings. While Congress must address many problems with the government’s use of this law to surveil and investigate Americans, the government’s use of “Upstream” surveillance to search Internet traffic deserves special attention. Indeed, Congress has never engaged in a meaningful public debate about Upstream surveillance — but it should.

First disclosed as part of the Snowden revelations, Upstream surveillance involves the NSA’s bulk interception and searching of Americans’ international Internet communications — including emails, chats, and web-browsing traffic — as their communications travel the spine of the Internet between sender and receiver. If you send emails to friends abroad, message family members overseas, or browse websites hosted outside of the United States, the NSA has almost certainly searched through the contents of your communications — and it has done so without a warrant.

The executive branch contends that Upstream surveillance was authorized by the FAA; however, as others have noted, neither the text of the statute nor the legislative history support that claim. Moreover, as former Assistant Attorney General for National Security David Kris recently explained, Upstream raises “challenging” legal questions about the suspicionless searching of Americans’ Internet communications — questions that Congress must address before reauthorizing the FAA.

Because of how it operates, Upstream surveillance represents a new surveillance paradigm, one in which computers constantly scan our communications for information of interest to the government. As the legislative debate gets underway, it’s critical to frame the technological and legal issues that Congress and the public must consider — and to examine far more closely the less-intrusive alternatives available to the government.

Upstream Surveillance: An Overview

As we’ve learned from official government sources and media reports, Upstream surveillance consists of the mass copying and content-searching of Americans’ international Internet communications while those communications are in transit. The surveillance takes place on the Internet “backbone” — the network of high-capacity cables, switches, and routers that carry Americans’ domestic and international Internet communications.  With the compelled assistance of telecommunications providers like AT&T and Verizon, the NSA has installed surveillance equipment at dozens of points along the Internet backbone, allowing the agency to copy and then search vast quantities of Internet traffic as those communications flow past.

The NSA is searching Americans’ international communications for what it calls “selectors.” Selectors are, in essence, keywords. Under the FAA, they are typically email addresses, phone numbers, or other identifiers associated with the government’s targets. While this might sound like a narrow category, the reality is much different, as Jennifer Granick and Jadzia Butler recently explained. That’s because the NSA can target any foreigner located outside the United States who is believed to possess “foreign intelligence information” — including journalists, human rights researchers, and attorneys, not just suspected terrorists or foreign spies. At last count, the NSA was targeting more than 94,000 people, organizations, and groups under the FAA.

In practice, that means the NSA is examining the contents of each communication for the presence of tens of thousands of different search terms that are of interest to the government. And that list continues to grow, as the NSA adds new targets and entirely new categories of selectors to Upstream surveillance. Whenever the NSA finds a communication that contains a “hit” for any one of its many selectors, it stores that communication for the agency’s long-term use and analysis — and it may share those communications with the FBI for use in criminal investigations.
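The selector-matching logic described above can be sketched in a few lines. This is a hypothetical illustration only — the selector list and message format here are invented, not drawn from any actual NSA system — but it makes the key point concrete: matching a selector requires reading the entire content of every message scanned.

```python
# Hypothetical sketch of selector matching. The selectors and message
# below are illustrative, not drawn from any actual surveillance system.
SELECTORS = {"target@example.org", "+15551230000"}

def matching_selectors(message_text):
    """Return every selector that appears anywhere in the message.

    Note what this requires: scanning the full contents of every
    message, regardless of who sent or received it.
    """
    return {s for s in SELECTORS if s in message_text}

msg = "Forwarding the report I received from target@example.org yesterday."
hits = matching_selectors(msg)
if hits:
    retained = (msg, hits)  # a single "hit" means the whole message is stored
```

The sender and recipient of `msg` need not be targets at all; the message is retained simply because a selector appears in its body.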

“About” Surveillance

Observers, including the Privacy and Civil Liberties Oversight Board (PCLOB), have singled out one feature of this surveillance as especially controversial: what’s often called “about” surveillance. This term refers to the fact that the government is not only intercepting communications to and from its targets, but is systematically examining the communications of third parties in order to identify those that simply mention a targeted selector. (In other words, the NSA is searching for and collecting communications that are merely “about” its targets.)

“About” surveillance has little precedent. To use a non-digital comparison: It’s as if the NSA sent agents to the U.S. Postal Service’s major processing centers to engage in continuous searches of everyone’s international mail. The agents would open, copy, and read each letter, and would keep a copy of any letter that mentioned specific items of interest — despite the fact that the government had no reason to suspect the letter’s sender or recipient beforehand. In the same way, Upstream involves general searches of Americans’ international Internet communications.

Upstream Surveillance Is Bulk Searching

Although the government frequently contends otherwise, Upstream surveillance is a form of bulk surveillance. To put it plainly, the government is searching the contents of essentially everyone’s communications as they flow through the NSA’s surveillance devices, in order to determine which communications contain the information the NSA seeks. While the government has “targets,” its searches are not limited to those targets’ communications. Rather, in order to locate communications that are to, from, or “about” its targets, the government is first copying and searching Americans’ international communications in bulk.

There is no question that these searches are extraordinarily far-reaching. The leading treatise on national-security surveillance, co-authored by former Assistant Attorney General David Kris, explains that the “NSA’s machines scan the contents of all of the communications passing through the collection point, and the presence of the selector or other signature that justifies the collection is not known until after the scanning is complete.” Likewise, the Foreign Intelligence Surveillance Court (FISC) has made clear that the NSA is searching the full text of every communication flowing through the surveillance devices installed on certain international backbone links.

For technological reasons, Upstream surveillance — at least as it’s conducted today — necessarily ensnares vast quantities of communications. When an individual uses the Internet, whether to browse a webpage or send an email, his computer sends and receives information in the form of data “packets” that are transmitted separately across the Internet backbone. As Charlie Savage recently explained in Power Wars, “when an e-mail is transmitted over the Internet, it is broken apart like a puzzle. Each piece of the puzzle travels independently to a shared destination, where they converge and are reassembled. For this reason, interception equipment on a switch in the middle cannot grab only a target’s e-mail. Instead, the wiretapper has to make a copy of everything.” While the NSA may exclude certain types of irrelevant traffic — like Netflix videos — it can identify the communications it’s seeking only by copying and searching the remaining Internet traffic in bulk.
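Savage’s puzzle analogy can be rendered as a toy sketch: messages are split into packets, packets from unrelated senders interleave on the same backbone link, and a tap on that link cannot know which packets matter until it has copied and reassembled all of them. (Real IP fragmentation and TCP reassembly are far more involved; this is only a schematic.)

```python
from collections import defaultdict

def packetize(msg_id, text, size=8):
    # Split one message into numbered chunks, as a network stack would.
    return [(msg_id, seq, text[seq:seq + size]) for seq in range(0, len(text), size)]

# Packets from unrelated senders share the backbone link, interleaved.
wire = packetize("alice", "meet me at noon") + packetize("bob", "cat pictures attached")
wire.sort(key=lambda pkt: pkt[1])  # crude interleaving by sequence number

# A tap on the link must copy *every* packet and reassemble *every*
# message before it can tell which ones contain a selector.
streams = defaultdict(dict)
for msg_id, seq, chunk in wire:
    streams[msg_id][seq] = chunk

reassembled = {m: "".join(chunks[i] for i in sorted(chunks))
               for m, chunks in streams.items()}
# reassembled == {"alice": "meet me at noon", "bob": "cat pictures attached"}
```

Nothing in an individual packet reliably identifies whether the full message will contain a selector, which is why the wiretapper "has to make a copy of everything."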

In court, the Department of Justice has resisted acknowledging the breadth of these bulk searches — preferring to say, euphemistically, that the NSA is “screening” or “filtering” communications. But it’s playing word games. The only way for the NSA to determine whether a communication contains one of its selectors is to search the contents of that communication. At scale, that means the NSA is searching the contents of trillions of Internet communications, without anything resembling a warrant.

Upstream Surveillance Is Unprecedented and Unlawful

Because it involves bulk searches, Upstream surveillance is very different from other forms of surveillance, and it should be debated with that in mind. As the Privacy and Civil Liberties Oversight Board (PCLOB) explained:

Nothing comparable is permitted as a legal matter or possible as a practical matter with respect to analogous but more traditional forms of communication. From a legal standpoint, under the Fourth Amendment the government may not, without a warrant, open and read letters sent through the mail in order to acquire those that contain particular information. Likewise, the government cannot listen to telephone conversations, without probable cause about one of the callers or about the telephone, in order to keep recordings of those conversations that contain particular content.

In short, the Fourth Amendment does not allow the government to conduct a general, suspicionless search in order to locate specific information or evidence. Instead, as the ACLU has explained at length elsewhere, the government is required to have probable cause — and a warrant — before it searches the contents of our communications. Upstream surveillance reverses this logic, using the end results of the NSA’s searches to justify the continuous, bulk review of Americans’ Internet traffic. The ODNI General Counsel has effectively called for rewriting the Fourth Amendment to permit these types of searches — which only underscores how novel and extreme the government’s legal theory really is.

Americans — and Congress — need to be concerned about what it means to have government computers monitoring our communications in real-time. As the PCLOB emphasized, one of the fundamental problems posed by Upstream surveillance is that “it permits the government to acquire communications exclusively between people about whom the government had no prior suspicion, or even knowledge of their existence, based entirely on what is contained within the contents of their communications.” David Kris highlighted a related problem, asking whether the government should be permitted to “review the contents of an unlimited number of e-mails from unrelated parties in its effort to find information ‘about’ the target.”

The PCLOB, in its report, expressed serious concern about Upstream surveillance, finding that the nature and breadth of this surveillance pushed it “close to the line” in terms of lawfulness. At the same time, however, the PCLOB expressed the view that “about” surveillance was unavoidable for technological reasons. While this is the subject for a separate post, that factual claim is doubtful. The NSA could, if it chose, do far more to isolate the communications of its targets based on metadata — such as email addressing information — rather than searching the entire contents of everyone’s communications using selectors. Indeed, “Next Generation Firewall” technology is capable of distinguishing metadata from content across many different types of communications. Moreover, the NSA has already shown that it can implement this capability on the Internet backbone — because its bulk Internet metadata program, which it operated for ten years, required very similar capabilities. Even with these modifications, significant questions about the lawfulness of the surveillance would remain; but there is no question that it would be more protective of Americans’ privacy than today’s Upstream surveillance.
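The metadata-based alternative described above can be sketched as follows. The message format and target accounts are hypothetical, but the contrast is the point: a metadata filter examines only addressing fields, while today’s content scanning also reads every message body.

```python
# Hypothetical contrast between metadata-only targeting and full
# content scanning. Message format and targets are illustrative only.
TARGET_ACCOUNTS = {"target@example.org"}

def metadata_match(message):
    # Looks only at who the message is to and from...
    return bool(TARGET_ACCOUNTS & {message["sender"], message["recipient"]})

def content_match(message):
    # ...whereas content scanning also reads the full body for selectors.
    return metadata_match(message) or any(s in message["body"] for s in TARGET_ACCOUNTS)

bystander = {"sender": "a@x.com", "recipient": "b@y.com",
             "body": "They mentioned target@example.org in the article."}
# A metadata filter passes over this bystander message entirely;
# content scanning flags and retains it.
```

Under the metadata approach, the bystander communication above — sent between two people who are not targets — would never be collected at all.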

Between now and the sunset of the FAA in December 2017, it is crucial that Congress engage in an informed, public debate about whether it is constitutional — and whether it is prudent — to permit the executive branch to wield this incredibly invasive surveillance tool.

Editor’s note: The authors are staff attorneys with the ACLU’s National Security Project. Last year, the ACLU challenged Upstream surveillance on behalf of a broad group of educational, legal, human rights, and media organizations — including Wikimedia, the operator of one of the most-visited websites in the world — whose communications are swept up by this unprecedented dragnet. In October 2015, a federal district court in the District of Maryland held that the plaintiffs lacked “standing” to bring suit. The case is presently on appeal in the Fourth Circuit.

Sanchez, Julian - Apple vs. FBI: “Just This Once”? - Just Security 20160223

I wrote about the FBI’s attempt to force Apple to write an iPhone hacking tool for the bureau over at Time last week — and go read that if you’re getting caught up on the case — but we’ve had some added developments over the weekend worth noting. Apple has explained its position in a bit more detail, while the Justice Department filed a motion to compel Apple’s compliance and FBI Director James Comey penned a brief blog post at Lawfare arguing that the Bureau isn’t looking to “set a precedent” or “send a message” or “break anyone’s encryption or set a master key loose on the land” — only provide justice to victims of a horrific shooting. That’s a message the government’s lawyers seek to hammer home at some length in the motion to compel: They don’t want some master key that could be used to unlock any phone. They just want a little bit of code that’s uniquely tethered to this one device and wouldn’t work on any others, which Apple is free to keep in its own headquarters and erase after use. So is Tim Cook just fearmongering when he claims this would require them to create a more generally exploitable tool?

Not if you understand what the realistic consequences of the government’s request are. First, as iOS security expert Jonathan Zdziarski observes, even if we’re thinking exclusively about this case, standard forensic practice would require the code of any forensic tool Apple produces to be preserved at least for as long as it might be needed as evidence in court. Maybe that’s not such a big deal: Source code to enable brute force attacks on iOS is already out there in the form of frameworks like MobileKeyBag — it’s not of much use unless you can get the iPhone processor to actually run it, which requires either exploiting a flaw in the secure boot chain (which is how “jailbreaking” works) or having code signed with Apple’s private key. If you can do the former without wiping the key material you need on the device, this is largely moot, so the additional risk here comes from the existence of that signed code — and, in the longer term, of a process for routinely signing such code.

DOJ wants to downplay that risk: Apple, they say, can ensure the signed custom boot ROM they want to load is designed to work only on this one specific device. Understand, however, that the way you’d do this isn’t really by building a “key” that only works in one lock. You have to design a skeleton key, then effectively cripple it so it won’t work in any other locks. But this creates a new attack surface for an adversary who’s able to obtain one of these device-specific pieces of software. Previously, you had to attack the authentication process on the phone to get your own code, unsigned by Apple, to load. With this signed code in hand, you’ve got the potentially much easier task of just circumventing the part of it that prevents it from running on other devices. The simplest ways of doing this would be relatively easy to get around. You could, for instance, write it to check some specific hardware ID number and stop loading unless it matches what you’ve coded. But someone with physical access to a device could feed it false information, “spoofing” the ID from the device the software was built to run on. Writing the code is the easy part — they can probably just tweak and sign tools already in the wild. Guaranteeing that a crippled one-device key can’t be un-crippled and turned back into a skeleton key is the harder part. There are more complicated and sophisticated methods Apple might be able to use to tether their hacking tool to a specific device, which would be more difficult to circumvent — we are admittedly bumping into the limits of my technical understanding of Apple’s security architecture here — but then we run into the problem of whether it scales securely.
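The crippled-skeleton-key problem can be made concrete with a toy model. Everything here is hypothetical — none of these identifiers or checks reflect Apple’s actual design — but it shows why a simple hardware-ID guard is weaker than it sounds: the unlocking capability itself is general-purpose, and only a guard clause binds it to one phone. An attacker who controls what the code reads as the hardware ID defeats the binding.

```python
# Toy model of a "one-device" signed tool: the unlock capability is
# general; only a guard clause ties it to one phone. All identifiers
# here are hypothetical and do not reflect Apple's real architecture.
AUTHORIZED_DEVICE_ID = "F99D8E-THIS-ONE-PHONE"

def read_hardware_id(device):
    # Real firmware would query a burned-in identifier, but an attacker
    # who controls the environment can make this return anything.
    return device["reported_id"]

def run_signed_tool(device):
    if read_hardware_id(device) != AUTHORIZED_DEVICE_ID:
        return "refused"
    return "unlocked"  # the general-purpose capability, usable anywhere

target  = {"reported_id": "F99D8E-THIS-ONE-PHONE"}
other   = {"reported_id": "SOME-OTHER-PHONE"}
spoofed = {"reported_id": "F99D8E-THIS-ONE-PHONE"}  # another phone lying about its ID
```

The tool refuses the `other` device but happily runs on the `spoofed` one: the "lock" lives in the guard clause, not in the key.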

Loudly as the Justice Department protests that this dispute is simply about one particular phone, that’s fairly clearly not the case. Forget other even more dangerous ways Apple could be compelled to use their private key and let’s stay focused on breaking iPhones for the moment. The Manhattan DA’s office alone has at least 175 iPhones that they’d like Apple to help them break into, and DOJ itself has 12 other ongoing lawsuits seeking access to iPhones. Realistically, if Apple loses here — and especially if they lose at the appellate level, which is where this is likely going given Apple’s decision to hire superstar lawyer Ted Olson for the case — they’re going to be fielding thousands of similar demands every year. As a practical matter, they’re going to need a team dedicated to developing, debugging, testing, customizing, and deploying the code used to brute force passcodes.

Now, when it comes to the Holy Grail of Apple’s security infrastructure — the private key — it’s almost certainly stored in secure vaults, on a Hardware Security Module that makes it difficult or impossible to copy the key itself off that dedicated hardware, and likely protected by elaborate procedures that have to be followed to authenticate major new software releases. If your adversaries realistically include, say, the Chinese and Russian intelligence services — and for Apple, you’d better believe it — guarding against exfiltration or misuse of that Holy Grail private key is a serious enough security problem on its own. Doing the same for a continuously updated and deployed hacking tool is likely to be hugely more difficult. As the company explains:

Apple would do our best to protect that key, but in a world where all of our data is under constant threat, it would be relentlessly attacked by hackers and cybercriminals. As recent attacks on the IRS systems and countless other data breaches have shown, no one is immune to cyberattacks.

The Justice Department might not intend to “set a master key loose on the land” — but the predictable consequence of mandating compliance with requests of this type will be to significantly increase the chance of exactly that occurring. And that’s an increased risk that every individual or enterprise customer relying on iOS devices to secure critical data will need to take into account.

Finally, it’s worth stressing the awkward position this puts Apple engineers in, and the contradictory incentives it generates. A loss for Apple here very quickly results in their being required to have a team of engineers in house dedicated to complying with requests to either hack phones or build and disseminate tools for government agencies to hack phones. Those may or may not be the same engineers responsible for designing and building security features for iOS devices in the first instance. As Comey notes, in support of his “just this once” argument, the hacking tool the FBI wants Apple to build here is “limited and its value increasingly obsolete because the technology continues to evolve.” Now, maybe that’s right — probably this exact attack doesn’t work, or at least not in the same way, on the next model of iPhone. But that sounds more like a bug than a feature.

Consider: Possibly the next iPhone simply eliminates Apple’s ability to assist in any way. But it’s hard to imagine a scenario where the designer and key-holder for a device designed to be used by normal humans can do literally nothing, at the margin, to assist an attacker. That means every improvement in device security involves a gamble: Maybe the cost of developing new ways to attack the newly hardened device becomes so high that the courts recognize it as an “undue burden” and start quashing (or declining to issue) All Writs Act orders to compel hacking assistance. Maybe. But Apple is a very large, very rich company, and much of the practical “burden” comes from the demands of complying securely and at scale. The government will surely continue arguing in future cases that the burden of complying just this one time is not so great for a huge tech company like Apple. (And, to quote The Smiths, they’ll never never do it again — of course they won’t; not until the next time.)

Under that scenario, engineers face an additional difficult tradeoff: Every design choice they make to improve device security entails, not only the foreseeable front-end costs of implementing it, but the unpredictable back-end costs of degrading that improved security, provided someone is able to think of a way Apple is uniquely situated to do so vis-à-vis any particular security measure. That makes it, in short, risky and potentially costly for the company to improve its own security. In an extreme scenario — think of the Lavabit case — the government may be pushed to demand more radical forms of assistance as Apple’s ability to comply is diminished. Because Apple rendered itself incapable of simply extracting data from the iPhone, the government has already ratcheted up its demands by asking the company to build a tool to enable a brute-force attack on the passcode. Late last year, for instance, Apple increased the length of the default numeric iPhone PIN from four to six digits, which radically increases the time required to mount a brute force attack of this kind against a numeric passcode — at least if they want to run the attack in house instead of providing exploit code to an outside party. Instead of simply asking whether new security measures are cost-effective to implement from a user’s perspective, Apple will need to evaluate whether they justify the additional cost of being required to attack those measures.
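The security bought by those two extra digits is simple arithmetic. Assuming, hypothetically, a fixed cost of roughly 80 milliseconds per guess — a figure commonly cited for iOS’s hardware-enforced key-derivation delay, used here only as an illustration — the worst-case search grows a hundredfold:

```python
# Back-of-the-envelope brute-force times for numeric passcodes,
# assuming a hypothetical fixed cost of ~80 ms per guess.
PER_GUESS_SECONDS = 0.080

def worst_case_hours(digits):
    guesses = 10 ** digits          # all possible numeric codes of that length
    return guesses * PER_GUESS_SECONDS / 3600

four_digit = worst_case_hours(4)   # about 0.22 hours, i.e. ~13 minutes
six_digit  = worst_case_hours(6)   # about 22 hours
```

Each additional digit multiplies the search space by ten, so a four-digit PIN falls in minutes while a six-digit PIN takes the better part of a day even under these generous assumptions — and escalation policies (delays or wipes after repeated failures) would slow a real attack far more.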

Little wonder, then, that Comey and the FBI keep stressing that they’re seeking very narrow and limited relief, in just this one case. If that were true, then unlikely as it is that any useful data will be recovered from this phone, it would seem awfully unreasonable for Apple not to offer its voluntary assistance, this one time. Once you realize that it’s very obviously not true, and consider even just the most immediate and unambiguous near-term consequences — leaving aside the prospect of tech companies more broadly being forced to sign other sorts of exploit code — it starts to look much more like the Justice Department is the one making unreasonable demands.

McCraw, David - National Security Letters and leak investigations - Just Security 20160120

Journalists were reminded again last week of how little legal protection actually exists when the federal government decides to investigate national security leaks.

In an ongoing Freedom of Information Act suit, the Freedom of the Press Foundation has sought the guidelines used by the Justice Department in deciding when federal agents can use National Security Letters to pursue information about reporters. DOJ recently produced documents in response to the suit. They confirm that the rules governing the use of NSLs in media leak cases remain classified. That undue secrecy cripples any real opportunity for public oversight of a process already encased in layers of secrecy.

DOJ’s position is a disappointment but hardly a surprise. In 2013, DOJ found itself in the center of a public storm when it was revealed that federal agents had secretly gathered telephone and email records from reporters at The Associated Press and Fox News during national security leak investigations. The investigators wanted the records to track down the government employees who had passed information to the press.

The AP and Fox revelations led to a series of meetings between Eric Holder and news organizations and ultimately to DOJ’s decision to revise its own guidelines for when subpoenas can be used to obtain reporters’ records. The guidelines were first implemented more than 40 years ago, and they recognize the chilling effect that subpoenas have on reporter-source relationships and therefore on the flow of information to the public. The guidelines are designed to make subpoenas targeting reporters a last resort, issued only after a high-level review within DOJ. Because there is no federal statutory privilege for journalists, and the courts have backed away from finding such a privilege in the First Amendment or common law, the guidelines are in fact an important bulwark against investigative overreaching by federal agents. The revised guidelines, released early in 2015, were intended to strengthen the protection afforded news gathering.

Fairly read, the revised guidelines do that — but even as the revisions were being hammered out in discussions between DOJ and representatives of the press, DOJ made clear that the guidelines would not apply to NSLs. It is a carve-out that cuts deeply.

It is not just that NSLs, typically used to obtain communication records from third parties, have none of the judicial oversight that attends to subpoenas. There is also the damaging impact of secrecy. An important element of the DOJ subpoena guidelines is providing news organizations with notice when their records are sought, subject to some specified exceptions. Notice gives an opportunity not only to make legal objections in court, but also to invite public scrutiny of government overreach. Jim Risen, the target of the long-running subpoena fight in the Jeffrey Sterling Espionage Act case, ultimately lost his legal challenges when federal prosecutors sought the identities of the confidential sources he used in reporting on the CIA’s deeply flawed efforts to undermine Iran’s nuclear capabilities. But there is no question that Risen’s ability to make his case to the public — to remind people of the importance of the confidential sources, of the chilling effect that subpoenas can have, and of his own commitment to go to jail to protect a source — played a role in the prosecutors’ belated decision to drop the subpoena after years of seeking the right to compel his testimony.

That the government has doubled down on secrecy by claiming the right to classify the rules that DOJ uses in deciding when NSLs can be employed to target reporters is made all the more troubling by documents obtained last year by The New York Times and its reporter Charlie Savage in another FOIA case. In a lawsuit that is ongoing in the Southern District of New York, we have sought to have DOJ declassify additional portions of the Inspector General reports on surveillance.

Responding to the suit, DOJ last year released thousands of pages of documents following both a classification and FOIA review. Included in the release was a redacted account from a 2010 DOJ Inspector General report of three leak investigations in which agents secretly obtained records of reporters in violation of DOJ rules (apparently using investigative tools other than NSLs). The three cases involved The Times, the Washington Post, and an unnamed third news entity. In each instance, records were obtained from cell phone providers or other communications companies.

The Inspector General’s language was unsparing: “[T]he FBI’s acquisition of these records constituted a complete breakdown in the required Department procedures.” There were, the Inspector General concluded, “serious lapses in training, supervision, and oversight [that] led to the abuses.” The IG report notes that the law enforcement investigators claimed to be unaware of the special approval requirements that were in place for subpoenaing reporters’ records, and the federal prosecutors said that “they did not correctly understand that the terminology used in the subpoenas or attachments could result in the acquisition of reporters’ records.”

While the takeaway might be that the rules do not matter if agents and prosecutors are going to disregard basic professional obligations, the cases also serve to remind that, for NSLs, internal rules are our only real hope for a check on abuse. And that is why having those rules subjected to public oversight is critical.

DOJ has said that the NSLs are “subject to an extensive oversight regime.” But it is impossible to know whether the kinds of procedures baked into the subpoena guidelines — for instance, a showing by the prosecutor of proven need and a lack of alternative ways of getting the information sought — are mirrored in NSL guidelines. If they are not, there is every incentive for investigators to look to NSLs and avoid the restrictions of the subpoena process. (In my experience, leak investigations overwhelmingly arise from reporting about national security, making them NSL-eligible.) But the classification of the NSL rules takes meaningful discussion of even such a threshold concern off the public agenda.

That level of secrecy is impossible to square with any fair notion of transparency or any realistic assessment of what national security requires. Whatever we may think of the DOJ subpoena guidelines, we at least know what they say, and we can argue about their adequacy and their deficiencies and how well they balance law enforcement needs and the protection of press freedom.

All of which raises an obvious question: Why is it that DOJ cannot provide the same level of transparency when it comes to the NSL guidelines?

Pfefferkorn, Riana - James Comey's default encryption bogeyman - 20160115

FBI Director James Comey recently told the Senate Judiciary Committee that encryption routinely poses a problem for law enforcement. He stated that encryption has “moved from being available [only] to the sophisticated bad guy to being the default. So it’s now affecting every criminal investigation that folks engage in.”

This assertion may reflect a shift in the Director’s approach to trying to convince lawmakers to regulate the commercial availability of strong encryption. To date, the principal argument has been that encryption interferes with counterterrorism efforts. Federal officials asking for legislative intervention, or seeking to shame companies into maintaining security architectures that would not interfere with surveillance, generally invoke the fear of terrorist attacks. Such attacks, or the threat of them, can provoke cooperation or legislative action that would otherwise be difficult to effectuate. In August, for example, the intelligence community’s top lawyer suggested that a terror attack could be exploited to turn legislative opinion against strong encryption. And Comey’s testimony last month raised the specter of ISIL. He and other members of the intelligence community immediately mounted a full-court press against strong crypto following the tragedies in Paris and San Bernardino, even before investigators could conclude whether encrypted communications or devices played any role in either attack.

Proponents of strong encryption have long been suspicious of the claim that encryption interferes with counterterrorism investigations. Terrorism is quite rare in the US and encryption has never yet been shown to have thwarted investigations into any terrorist attacks that have taken place on US soil. This includes the May 2015 shooting in Garland, Texas that Comey has invoked. Comey points to the fact that one shooter exchanged encrypted text messages with “an overseas terrorist” shortly before the attack, but the FBI had already been monitoring one of the perpetrators for years and warned local authorities about him before the shooting. Plus, the FBI’s powerful ability to collect (unencrypted) metadata is the reason Comey knows the shooter sent those text messages.

Comey may be starting to recognize that his rationale for weakening encryption needs to hit closer to home if he hopes to persuade lawmakers and the American public. To that end, it looks like he, along with Manhattan District Attorney Cyrus Vance, is ready to argue that ordinary criminals — the kind more likely to prey on the general population — are getting away because of encryption.

What crimes, then, are law enforcement officials invoking in their latest calls for weakening encryption? If encryption affects “every” criminal investigation as Comey claims, you’d think that law enforcement would encounter encryption in investigations of the crimes it spends the most time and money working on. If so, then the majority of cases in which law enforcement encounters encryption should be drug cases. Statistically, the War on Drugs, not the War on Terror, would likely be the principal context in which mandatory encryption access for law enforcement would be used.

However, law enforcement’s anti-crypto advocacy hasn’t been focused on the War on Drugs. Much like Comey’s invocation of ISIL, other law enforcement leaders have asserted that the worst of the worst are the beneficiaries of strong security, focusing on murderers and sex offenders. Vance’s recent whitepaper, which calls for federal legislation mandating law enforcement access to encrypted devices, claims that iPhone device encryption using iOS 8 (which Apple cannot bypass) stymied the execution of around 111 search warrants in the space of a year. According to the report, those cases involved offenses including “homicide, attempted murder, sexual abuse of a child, sex trafficking, assault, and robbery.”

Vance’s list (which may or may not be comprehensive) is surprising. There is little overlap between the types of crimes in which Vance claims Manhattan prosecutors encountered encryption and the crimes that local and state law enforcement probably deal with most frequently. According to a newly released FBI report, larceny, theft, assault, and drug offenses are the crimes most commonly reported by state and local law enforcement. Of those, only assault is on the Manhattan DA’s list. Drug crimes are not, even though drug arrests alone accounted for nearly a quarter of all arrests in Manhattan last year. By comparison, the other offenses on his list — homicide, robbery, sex crimes, and trafficking offenses — account for only a small fraction of reported crimes, according to the FBI report.

Not only are drug crimes common in the state and local context, they dominate the federal courts. Drug defendants are often arrested by local police but prosecuted federally (which might help account for their absence from Vance’s list). Drug offenses top the federal courts’ most recent 12-month report on the number of federal criminal defendants charged, by offense, which covers 17 offense categories. (The report doesn’t reflect investigations that are closed without a prosecution.) Similarly, the 2014 wiretap report, also issued by the federal courts, notes that a whopping 89 percent of all wiretaps (including 91 percent of federal wiretaps and 88 percent of state wiretaps) were for drug offenses. Homicide and assault (a combined category in the wiretap report) came in a distant second, at four percent. So one would expect that if there’s widespread use of encryption, it would proportionately impact drug crimes, with homicide, assault, and other cases far behind.

State and federal wiretap statistics, combined with federal prosecution statistics, demonstrate that drug offenses are very high on law enforcement’s agenda — even as homicide clearance rates languish. And according to the FBI crime statistics report, drug offenses are one of the most commonly reported types of crime.

As more and more people carry smartphones that are encrypted by default, encountering device encryption becomes more likely to affect investigations where the crime is both common and a top law enforcement priority. That means drug offenses — and yet they are absent from Vance’s list. If you have concerns about the War on Drugs — and many people do, because it is expensive, ineffectual, and disproportionately harms minorities, among other reasons — the War on Crypto is likely to make it worse.

We need more information about the facts underpinning the Manhattan DA’s report before we can say whether Vance has established a pressing law enforcement need for legislation. The report said that the office “was unable to execute” around 111 search warrants due to iOS 8 encryption. While 111 frustrated warrants may sound like a lot, that number doesn’t tell the full story. The report conspicuously fails to mention several important facts, such as whether prosecutors successfully pursued those cases using other evidence; the total number of search warrants issued for smartphones during the period cited; how many of those devices turned out to be encrypted; and of those, how many warrants were successfully executed nevertheless. If criminal investigations can succeed despite encryption, then device encryption’s detrimental impact on the public is marginal.

That’s already true for encryption of communications. The 2014 statistics for judicially authorized wiretaps (which collect the contents of unencrypted phone calls and text messages in transit) show almost no adverse impact from encryption. Officials encountered encryption in 22 state court wiretaps out of a total of 2,275 — a sharp drop from 2013, when states came across 41 encrypted phones — and were unable to recover plaintext in only two of the 22. For federal wiretaps, investigators encountered encryption in three wiretaps out of 1,279 total, of which two could not be decrypted.
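Put as rates, those figures are strikingly small. A quick back-of-the-envelope calculation — using only the counts cited above from the 2014 Wiretap Report, nothing else — makes the point:

```python
# Counts cited from the 2014 Wiretap Report (as quoted in the text above).
state_total, state_encrypted, state_unbroken = 2275, 22, 2
fed_total, fed_encrypted, fed_unbroken = 1279, 3, 2

def pct(part, whole):
    """Return part as a percentage of whole, rounded to two decimals."""
    return round(100 * part / whole, 2)

print(pct(state_encrypted, state_total))  # state wiretaps that hit encryption: 0.97
print(pct(state_unbroken, state_total))   # state wiretaps actually thwarted: 0.09
print(pct(fed_encrypted, fed_total))      # federal wiretaps that hit encryption: 0.23
print(pct(fed_unbroken, fed_total))       # federal wiretaps actually thwarted: 0.16
```

In other words, encryption defeated well under a fifth of one percent of wiretaps at either the state or federal level that year.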

When it comes to communications, Comey’s claim that encryption “affects every criminal investigation” is plainly an exaggeration. He and his colleagues have yet to show that the situation for devices is any different. So long as encryption has a negligible effect on law enforcement’s ability to do its job, proposals to regulate encryption amount to a “solution” for a problem that doesn’t exist.

In the end, it’s the War on Drugs and other routine criminal investigations, not counterterrorism or “worst of the worst” criminal cases, that stand to benefit the most if Director Comey gets his wish for guaranteed access to the data on Americans’ encrypted smartphones. Yet officials cannily highlight ISIL recruitment, sex trafficking, and murder to promote their demands for weaker crypto, obscuring the lack of evidence that strong crypto in fact poses a significant problem for them.

This post draws a number of inferences from imperfect information, because comprehensive data about device encryption’s impact on law enforcement are simply not available. We don’t have the full picture of how law enforcement and intelligence agencies seek to compel or persuade tech companies to decrypt information for them (and on what legal authority), influence encryption standards, cooperate to share tools for bypassing crypto, or investigate crime by other means, including hacking tools. I’m researching these issues as part of the Stanford Center for Internet and Society’s Crypto Policy Project, and maybe they’ll also be considered by the crypto commission Congress plans to convene.

As Director Comey himself recently said, “without information, every single conversation in this country about policing and reform and justice is uninformed, and that is a very bad place to be.” Those words apply with equal force to the national conversation about encryption and law enforcement.