Tag Archives: Tor

Canadian Librarians Must Be Ready to Fight the Feds on Running a Tor Node - Motherboard 20160316

Political dissidents and cyber criminals alike will soon be sending anonymous internet traffic through a library at Western University in Canada, thanks to a new “node” in the encrypted Tor network operated by staff there—the first to open at a library in the country.

In Canada, the legality of running a Tor node is essentially untested, making the high-profile, institutionally backed node at Western a potential target for the feds.

Tor is touted as a tool for people, such as journalists, to keep their browsing habits safe from spies and police. But more nefarious traffic, such as drug dealing or child pornography, also passes through the network. A small public library in New Hampshire began operating a Tor node last year, and faced pressure from the Department of Homeland Security to shut it down. The library resisted, and the node is still running.

"Frankly, in some ways, I would like to see them try"
“If any intelligence agency or law enforcement tries to intervene again, we will do the same thing that we did in New Hampshire: we will rally community support, we will get our very broad coalition of public interest organizations and luminary individuals, and amazing supporters, to support Western,” said Alison Macrina, director of the Library Freedom Project and adviser to the Tor project at Western.

“Frankly, in some ways, I would like to see them try,” she said.

Traffic going through Tor is encrypted and “hops” through three volunteer nodes—or relays—before reaching the regular web, thus staying relatively anonymous. At the moment, the Western node is running as a middle relay, which means that it operates as one of the three hops in the network, and is blind to the final destination of any traffic.

If the library were to switch its node to an “exit” (where Tor traffic finally enters the regular web), then information about where traffic is going could be known to Western—and that is what law enforcement would likely be interested in, Macrina said. She hopes that Western does make the switch, she added, because institutions are better suited to face legal pressure stemming from running a node than individuals. Staff from the Faculty of Information and Media Studies, the faculty at Western responsible for the node, could not be reached in time for comment.

“It's great news to see more libraries and universities running Tor nodes,” Ian Goldberg, a University of Waterloo professor and inventor of the popular OTR encryption protocol, who operates a Tor exit node at the school, wrote me in an email. Goldberg noted that a middle relay should have no issues, legally, although exit node operators often “get annoyed by people on the Internet contacting them to ask why they are attacking various websites, sending them [copyright] notices for sharing content (in the US), etc.”

Tor use has been raised in at least one criminal case involving child pornography in Canada. Toronto police also told Motherboard last year that the force has investigated people operating Tor exits in the past, particularly in cases involving child pornography. At the time, the Canadian Civil Liberties Association (CCLA) said they had “nothing to add” on the subject.

When asked if the CCLA would support Western staff if Canadian law enforcement pressured them to shut their node down, however, spokesperson Jonah Kanter said, “In principle we are in favour of tools that protect privacy and will continue to research how Tor nodes can help accomplish that.”

Macrina emphasized that if push came to shove, Western should expect the support of the CCLA and other civil rights organizations in Canada. If the feds come knocking, they may very well need it.

Norte, Jose Carlos - Advanced Tor Browser Fingerprinting - 20160306

The ability to communicate privately through the internet is very important for dissidents living under authoritarian regimes, for activists, and basically for everyone concerned about internet privacy.

While the Tor network itself provides a good level of privacy, making it difficult or even practically impossible to discover the real IP address of Tor users, this is by no means enough to protect users' privacy on the web. When browsing the web, your identity can be discovered using browser exploits, cookies, browser history, browser plugins, etc.

Tor Browser is a Firefox browser preconfigured and modified to protect the user's privacy and identity while browsing the web over Tor. Browser plugins are disabled, history and cache are not persistent, everything is erased after closing the browser, and so on.

The user fingerprinting problem

While preventing users' IP addresses from being disclosed is a key aspect of protecting their privacy, many other things need to be taken into consideration. Tor Browser is preconfigured to prevent many possible attacks on user privacy, not only those at the communications layer provided by Tor itself.

One common problem that Tor Browser tries to address is user fingerprinting. If a website is able to generate a unique fingerprint that identifies each user who visits the page, then it is possible to track the activity of that user over time, for example, to correlate visits by the user during an entire year, knowing that it's the same user.

Or even worse, it could be possible to identify the user if the fingerprint is the same in Tor Browser and in the normal browser used to browse the internet. It is very important for Tor Browser to prevent any attempt at fingerprinting the user.

In the past, many fingerprinting methods have been proposed and used, and Tor Browser has been updated with countermeasures. Examples include reading text sizes from a canvas element, screen dimensions, local time, and operating system information.

One famous example of browser fingerprinting was canvas fingerprinting. As of today, almost everything that can be used to identify the user has been disabled in Tor Browser.

UberCookie

During the last few weeks I have been able to fingerprint Tor Browser users in controlled environments, and I think it could be interesting to share the findings for further discussion and to help improve Tor Browser.

All the fingerprinting methods described here are based on JavaScript (enabled by default in Tor Browser as of today). I have created a quick and dirty PoC called UberCookie, available as a demo here:

Try ubercookie

Measuring time

One interesting fingerprinting countermeasure implemented in Tor Browser is that JavaScript's Date.getTime() (Unix time) is only updated every 100 ms, so you cannot measure events shorter than 100 ms. This prevents JavaScript inside a webpage from timing events precisely enough to fingerprint the user. Since some of the things I wanted to try needed better time accuracy than 100 ms, this was the first thing to bypass.
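To see why this matters, here is a small sketch. The clampedNow function is my own approximation of a clock quantized to 100 ms buckets, not Tor Browser's actual implementation:

```javascript
// My own approximation of a clock clamped to 100 ms buckets
// (not Tor Browser's actual code).
function clampedNow() {
  return Math.floor(Date.now() / 100) * 100;
}

// Two events a few milliseconds apart usually collapse into the same
// clamped timestamp, so their distance cannot be measured directly.
const a = clampedNow();
const b = clampedNow();
console.log(b - a); // almost always 0: the 100 ms buckets hide the gap
```

Any script limited to such a clock sees time only in 100 ms steps, which is why a finer time source has to be built by hand.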

There are many ways to measure times smaller than 100 ms using JavaScript in Tor Browser; some of them are obvious, others more interesting.

The first one I implemented simply increments a variable by 1 every millisecond using setInterval. Even if the precision is not truly at the millisecond level, it is far better than the 100 ms accuracy provided by Date.getTime().

Another way to measure time is to create a CSS3 animation configured with a 1 ms interval and listen for the animationiteration event.

However, the best accuracy I could achieve was with a setInterval counter incrementing inside a web worker.

Mouse wheel fingerprinting

The mouse wheel event in Tor Browser (and most browsers) leaks information about the underlying hardware used to scroll the webpage. The event reports the scrolled delta; if you are using a normal computer mouse with a scroll wheel, the delta is always three, but if you are using a trackpad, the deltas vary and depend on your trackpad and your usage patterns.

Another mouse wheel leak is the scroll speed, which is linked to the operating system's configuration and to the capabilities of the hardware itself.
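The wheel/trackpad distinction described above can be sketched with a small classifier. The function name and the labels are my own illustration, not the demo's code:

```javascript
// Hypothetical classifier for wheel deltas (names are my own).
// A conventional mouse wheel reports a constant delta (3 in Tor Browser),
// while trackpads report variable, often fractional deltas.
function classifyWheelDeltas(deltas) {
  const moving = deltas.filter(d => d !== 0);
  if (moving.length === 0) return "unknown";
  const allConstant = moving.every(d => d === moving[0]);
  return allConstant ? "wheel-mouse" : "trackpad-like";
}

// In a page, the deltas would be collected from the wheel event:
if (typeof document !== "undefined") {
  const deltas = [];
  document.addEventListener("wheel", e => deltas.push(e.deltaY));
}
```

Even this crude split already yields a bit of fingerprinting entropy; the actual delta values and their distribution leak considerably more.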

I have created a little experiment as a proof of concept, available here:

Mouse wheel information leak demo

This demo creates three graphs: one with the scrolling speed, one with the scrolling delta, and one with the number of times the user scrolled in the red box.

Mouse Speed fingerprinting

Another interesting fingerprint that could reveal some entropy is the speed of the mouse moving across the webpage. The speed of the mouse is controlled by the operating system and related to hardware, and it can be read using JavaScript if you can measure time using the strategies mentioned above.

It could also be interesting to measure the average mouse speed while the user is on the page moving the mouse.

CPU Benchmark fingerprinting

With the improved time accuracy provided by the setInterval counter inside a web worker, it is easy to create a CPU-intensive (or even memory-intensive) script and measure how long the user's browser takes to execute it.
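A benchmark along these lines can be sketched as follows. The workload and iteration count are arbitrary choices of mine; repeating the work many times keeps the elapsed time measurable even with a coarse clock:

```javascript
// Hypothetical CPU benchmark sketch (workload chosen arbitrarily).
// Timing a fixed CPU-bound task yields a machine-dependent number that
// differs between computers running the same browser build.
function cpuBenchmark(iterations) {
  const start = Date.now(); // coarse in Tor Browser; long workloads still register
  let acc = 0;
  for (let i = 1; i <= iterations; i++) {
    acc += Math.sqrt(i) * Math.sin(i); // arbitrary floating-point work
  }
  const elapsed = Date.now() - start;
  return { elapsed, acc }; // returning acc keeps the loop from being optimized away
}

const result = cpuBenchmark(2000000);
console.log(result.elapsed); // milliseconds; varies by CPU
```

The elapsed time on its own is only a few bits of entropy, but combined with the other vectors it helps distinguish otherwise similar users.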

I have done some tests with different computers and obtained completely different results, all of them using the same Tor Browser version.

getClientRects fingerprinting

The most interesting fingerprinting vector I found in Tor Browser is getClientRects. It is strange that reading back from a canvas has been prevented, but simply asking the browser's JavaScript API how a specific DOM element has been drawn on the screen has not been prevented or protected in any way.

getClientRects allows you to get the exact pixel position and size of the bounding box of a given DOM element. Depending on the resolution, font configuration, and lots of other factors, the results of getClientRects differ, allowing for a very quick and easy fingerprinting vector, even better than the canvas fingerprinting that has since been fixed.
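A sketch of how the rect values could be folded into a compact fingerprint; the FNV-1a-style hash and the function name are my own choices, not the PoC's, and the sample values are taken from the article's two computers:

```javascript
// Hypothetical sketch: fold a DOMRect's sub-pixel values into a short
// fingerprint. The FNV-1a-style hash is my own choice, not the PoC's.
function rectFingerprint(rect) {
  const s = [rect.x, rect.y, rect.width, rect.height].join(",");
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return h.toString(16);
}

// In a page: rectFingerprint(element.getClientRects()[0]);
// The sub-pixel values reported by the two computers in the article
// produce different fingerprints:
const rect1 = { x: 131.5, y: 462, width: 724, height: 19 };
const rect2 = { x: 159.51666259765625, y: 465.25,
                width: 664.6500244140625, height: 18.449996948242188 };
console.log(rectFingerprint(rect1), rectFingerprint(rect2));
```

Because font rendering and hinting produce fractional sizes that depend on the local configuration, the hash is stable for one machine but differs across machines.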

Example of getClientRects on the same page with same Tor Browser version on different computers:

Computer 1:

Computer 2:

As you can see, the results of getClientRects differ greatly between two computers using the same Tor Browser, on the same page and the same DOM element.

Results

An example of running ubercookie PoC in one computer (computer 1):

Client rects: {"x":131.5,"y":462,"width":724,"height":19,"top":462,"right":855.5,"bottom":481,"left":131.5}

scrolling milis: [2,2,0,3,0,1,0,2,3,0,0,3,1,2,2,1,2,1,4,4,35,2,1,3,0,1,0,3,0,1,0,3,0,1,0,3,1,0,3,1,3,0,1,3,2,4,4,8,44,4,1,4,4,405,2,3,2,1,3,1,3,57,2,0,2,2,0,2,2,4,60,2,0,2,2,0,2,2,6,54,2,2,2,0,2,1,4,8]

scrolling deltas: [3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

Biggest mouse step: 65

In a few seconds, the result of the CPU benchmark will appear, please wait...

CPU Mean: 3245

And the result of running it in a different computer (computer 2), same Tor browser version:

Client rects: {"x":159.51666259765625,"y":465.25,"width":664.6500244140625,"height":18.449996948242188,"top":465.25,"right":824.1666870117188,"bottom":483.6999969482422,"left":159.51666259765625}

scrolling milis: [0,3,0,2,2,2,2,0,3,0,2,1,2,2,1,3,1,1,4,1,2,1,1,3,1,2,2,3,2,5,3,3,5,3,0,0,2,0,2,0,1,1,0,2,0,3,2,1,1,3,1,3,2,3,1,3,2,2,2,2,0,2,3,2,2,2,244,0,2,1,2,1,3,2,0,2,0,1,2,1,0,2,0,3,1,0,2,1,1,1,2,1,1,1,1,1,1,2,2,1,2,2,2,2,1,4,2,2,2,2,2,4,2]

scrolling deltas: [3,0.975,1.65,1.5,1.725,2.25,2.775,2.4,3.15,3.375,3.975,3.675,4.35,4.95,5.625,5.55,5.25,5.25,4.2,6.3,9.975,13.95,7.575,6.9,2.85,5.925,8.85,0.9,4.425,3.675,4.725,2.625,2.4,5.475,2.625,3.675,5.4,5.775,7.275,6.975,8.175,9,8.475,3.45,2.475,2.25,0.6,1.8,11.1,8.4,8.475,8.1,7.5,6.375,8.175,4.95,4.8,4.275,3.525,3.375,1.125,2.7,2.175,1.95,1.65,1.2,1.05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

Biggest mouse step: 40

In a few seconds, the result of the CPU benchmark will appear, please wait...

CPU Mean: 4660.5

It is evident that the getClientRects results are completely different, providing an interesting fingerprinting vector. The scrolling speed (milis) is also different, and the scrolling deltas are very different because of hardware differences. The mouse of computer 1 is faster, as you can see in ‘biggest mouse step’. The CPU benchmark also gives different results, computer 1 being faster than computer 2.

Conclusion

It is easy to fingerprint users of Tor Browser, track their activity online, and correlate their visits to different pages. getClientRects provides a very interesting vector for fingerprinting Tor Browser users. The CPU benchmark and the mouse wheel and mouse speed methods provide even more information to distinguish between similar users.

This is What Tor Supporters Look Like: Cory Doctorow and Ben Wizner


Cory Doctorow and family

I've been using Tor for more than a decade. I travel all the time, and often find myself connected to manifestly untrustworthy networks -- from the nets at hacker conferences to the one the Chinese government provided for our use at a World Economic Forum event in Dalian. Tor is my assurance that I'm browsing safely, privately and anonymously. When I do investigative journalism work on national security subjects, my go-to first line of defense is Tor Browser.

That's why we at Boing Boing operate a high-speed, high-quality exit node. By the way, just this year we received two law enforcement requests for records relating to that node, and despite all the doomsaying about how the cops would punish you for operating an anonymizing tool, in both cases we sent polite letters explaining that we don't keep logs, and in both cases the cops returned a polite thanks and went away.

I donate to Tor, and I trust Tor, but even if I didn't trust 'em, I'd still use it. The great thing about free/open projects like Tor is that they're designed to work even if the people who make them don't agree with you or want what's best for you.

Ben Wizner
Ben Wizner, in thinking about the ways that Tor facilitates his work, is very clear: “It’s not an overstatement to say that secure technology such as Tor has made the ACLU’s work with Edward Snowden possible,” he says.

Like Laura Poitras, using encryption was a learning process for Wizner, facilitated by key teachers, the first of whom was Laura herself.

“I was someone who went through most of my life unaware of these tools,” he says. “Laura (Poitras) came to my office in 2011 and installed Adium for me. ‘This is how we are going to communicate,’ she said. ‘And this will help you communicate with the rest of the world as well.’”

Jacob Appelbaum, Chris Soghoian, Renata Avila and Daniel Kahn Gillmor were all instrumental to Wizner as he followed a similar learning curve to Poitras, quickly becoming familiar with Tor, PGP, Tails and Signal for many aspects of his work as Director of the ACLU’s Speech, Privacy and Technology Project. It was his next teacher that, as he says, “gets us to the heart of the story. Starting in July 2013 I had a need to be able to communicate securely with Edward Snowden.”

From the start, as Ben aided Snowden with legal advice, he learned from him as well.

“[I was…] dealing with someone who is a world-class security technologist and also an excellent and very patient teacher,” he says. “I was entering a mode of communication where he felt extremely at home and I did not. This was going to be the only means of communication for an unknown length of time and we needed to exchange critical information, get to know each other and build trust, all while I am hunting and pecking on this tiny burner keyboard. And I have learned over the months and years how profound and intimate a chat conversation can be.”

Somehow it worked, and worked so well, in fact, that meeting Snowden in person was a different experience for Wizner than he had expected.

“That was the surprising thing,” he says. “Even though we had gotten to know each other so well over so many hours of online conversation, I still had the expectation that our real relationship would begin when we met face to face. And yet it turned out to be a continuation rather than a new chapter.”

Wizner thinks often about the role that secure technology continues to play in both providing the foundation for their work together, and more broadly, in Ed’s continued participation in the larger dialogue around encryption.

“On one level, secure technology like Tor and Tails has allowed Ed to defeat exile in a really profound way. Physical isolation has been imposed, but Ed is able to continue communicating to larger audiences from wherever he is. All of the legal and strategic advice,” he adds, “that goes into making these opportunities available and accessible for him would not be possible without using secure communications tools like Tor.”

The attack that broke the Dark Web and how Tor plans to fix it - Fusion 20151130

Law enforcement has been complaining for years about the Web “going dark,” saying that encryption and privacy tools are frustrating their ability to track criminals online. But massive FBI operations over the last year that have busted ‘hidden sites’ used for the sale of drugs, hacking tools, and child pornography suggest the digital criminal world has gotten lighter, with law enforcement bragging that criminals can’t “hide in the shadows of the Dark Web anymore.” While mysterious about its tactics, law enforcement indicated that it had found a way to circumvent the tool on which these sites relied, a software called Tor. But criminals are not the only ones who rely on it.

Tor, or The Onion Router, is a browser that lets people use the Internet without being tracked and access hidden sites, as well as a software project that supports the ‘Dark Web,’ allowing websites (or “hidden services”) to be hosted in such a way that their location is impossible to determine. Last year, Tor suffered a large-scale attack that compromised the anonymity of its users over a period of at least six months. The attack was launched by academic researchers affiliated with Carnegie Mellon University whose motives remain murky because they now refuse to talk about it. In subsequent prosecutions of people who used Tor hidden services for criminal purposes, government lawyers have said evidence came from a “university-based research institute,” meaning that the academic exploration of the anonymity tool’s vulnerabilities may send some Tor users to prison.

Tor saw the attack coming, but failed to stop it.

A review of emails sent on Tor’s public list-serv reveals that Tor saw the attack coming, but failed to stop it. It raises questions about Tor’s ability to maintain the privacy of the 2 million people who use it every day—most of them activists, human rights workers, journalists, and security-minded computer users, not criminals—as well as how far academic researchers and law enforcement should go to undermine the privacy protections people seek online. In a phone interview last week, Tor chief architect Nick Mathewson explained for the first time exactly what happened and what Tor is doing to try to ensure it never happens again.

In February 2014, a developer named Sebastian “bastik” G.—who contributes to the maintenance of the anonymity network Tor in his free time—noticed something amiss with the backbone of the Dark Web.

Tor depends on a world-wide network of computers that mask users’ identities by encrypting their activity and bouncing it through a bunch of different stops on the way to its final destination; it’s like 100 people whispering secrets in gibberish to each other during a huge game of Telephone, so that it’s hard for an outsider to tell where a message started or where it ends. Tor relies on thousands of volunteers to run the servers that power the network, sometimes at great personal risk. Bastik saw that an internal monitoring program called “DocTor,” which scans the network for “hiccups,” was reporting that a ton of new computers from the same IP address were rapidly joining the network as new relay points.

Global daily Tor usage in 2012, from Tor's most recent annual report

Bastik sent an alarmed email to the Tor mailing list saying that it looked like someone was launching an attack: if a single party controls enough relay points, it could undo the anonymity of the network. It’s a phenomenon called a Sybil attack, named after a book about a woman with multiple personalities. It’s as if in that giant game of Telephone above, 40 of the 100 people were actually one person, making it more likely they’d figure out you were the one who told a terrible secret.

A Tor developer responded dismissively, saying he would loop back in a week and that Tor wasn’t overly concerned because they weren’t exit relays, which are the last stop in the game of whispers. Tor decided the relays didn’t pose a risk and ultimately did nothing to block them, a terrible mistake when it came to protecting the privacy of its users.

“I don’t think this is the best response we’ve ever done to an attack situation,” said Mathewson by phone.

Five months later, Michael McCord and Alexander Volynkin, two researchers at Pittsburgh-based Carnegie Mellon, announced that they had “broken” Tor and discovered a way to identify hundreds of thousands of users and find the true locations of thousands of ‘hidden’ websites.

The abstract from the Carnegie Mellon researchers' canceled Black Hat talk

“We know because we tested it, in the wild,” they bragged in the abstract for a security conference talk that was canceled shortly after it was announced. A Carnegie Mellon attorney told the Black Hat conference organizers that the talk relied on materials the university hadn’t approved for public release. The researchers refused to comment, saying questions should be directed to Carnegie Mellon’s Software Engineering Institute [SEI], the Department of Defense-funded center at which they were employed. The university refused to answer further questions about the project, or to say whether the information gathered was shared with law enforcement.

The attack was launched by academic researchers affiliated with Carnegie Mellon University whose motives remain murky because they now refuse to talk about it.

“We are not able to comment on Tor,” said SEI spokesperson Richard Lynch in an email this week.

But the answer seemed clear when, four months later, in November 2014, the FBI announced Operation Onymous (as in no longer Anonymous)—a global crackdown on the Dark Web that included the seizure of hidden websites and the arrest of dozens of Tor users involved in online drug markets. (Recent court documents citing a “university-based research institute” support the link.) And this year, in July, the crackdown continued with Operation Shrouded Horizon, in which a site for cyber-criminals called Darkode, which was hosted on Tor hidden services, was dismantled and hundreds of people around the world were arrested. The FBI said in the press release that the global case was led by its field office in Pittsburgh, where Carnegie Mellon is based. The FBI would not comment this week on whether Carnegie Mellon’s research had been used in its operations.

For as much as the Dark Web relies on Tor, it’s a rinky-dink operation.

Mathewson and Tor founder Roger Dingledine, who met at MIT, have spent the last decade building up and maintaining Tor, which was originally a Naval Research Lab project to protect government communications. Eighty percent of its $2.5 million budget still comes from governments, including funding from the U.S. Defense Department and the U.S. State Department. For as much as the Dark Web relies on Tor, it’s a rinky-dink operation. There are 22 full- and part-time paid employees dispersed around the world and about 50 volunteers and academics who contribute time and code (just 10 of them solidly dedicated to it currently, said Mathewson). Tor depends on academic researchers to identify ways to improve the technology and shore up vulnerabilities, so it regularly sees people running experiments on the network, most of which become papers like these.

“It’s fairly normal for researchers to do benign but shifty looking activities,” said Mathewson. “Activity in the past has looked suspicious at the time, but ultimately did stuff that helped advance our art.”

The publication of the Black Hat schedule online in May 2014 was the first notice Tor got about what Carnegie Mellon had been up to. Tor reached out to the CMU researchers Volynkin and McCord but were told they couldn’t say more because of “institutional confidentiality issues.”

As the summer progressed, Tor slowly began realizing just how devastating the CMU project was. On June 12, 2014, someone from the Black Hat program committee sent Mathewson a copy of the researchers’ paper, alarmed that the attack, which involved injecting signals into Tor protocol headers, might be actively affecting Tor. After reading the paper, Mathewson began working on a countermeasure.

“It didn’t occur to me that they would run the attack in the wild on random users,” said Mathewson. “The way the attack was structured, it was a bad attack for anyone to get away with it. Once detected, it was very easy to block. It didn’t seem to me like a deep threat.”

On June 23, 2014, Mathewson says the researchers sent Tor an email that described their attack, but with fewer details than were in the paper, omissions that would have made the attack harder to block.

Two weeks later, on July 4, Mathewson was in Paris for a Tor developers’ meeting, an event that happens twice a year so that Tor’s far-flung network of contributors and volunteers can meet each other and discuss pressing issues. More than fifty people gathered at Mozilla’s offices in the center of Paris. It was productive but exhausting, a week of intense conversation, coding, and late nights with Internet friends rarely seen in person. On the last night of the week, Mathewson got back to his hotel room late and began running a test of his defense code to see if his countermeasure would work.

“Around 1 or 2 a.m., I discovered I was under attack,” said Mathewson. “The hidden services I was visiting were sending a signal saying what I was connecting to.”

He was shocked and immediately concerned about the danger for users. “Everyone who worked on this, including me, were about to get on airplanes,” Mathewson said. “I contacted Roger [Dingledine] and as many core developers as I could find who were awake at that hour. Not many were. I reached out to everyone at different hotels and figured out the best, immediate defense.”

There were only a few developers Mathewson trusted enough to work on it. They were spread thin but got enough trusted Tor directory authorities online to block-list the relays and servers involved in the attack.

Dingledine emailed the CMU researchers asking, “Is that you?” From that point on, the researchers stopped responding to emails from Tor. Their work, as it’s understood, has been decried as a huge breach of research ethics.

By the end of July 2014, Tor had issued a new version of its software with fixes for the attack and published a blog post about what had happened. Tor’s staff still believed at that point that the researchers had simply designed a reckless experiment with no intent to out users. But as the months went by, and law enforcement announced more and more operations that involved “breaking” the Dark Web, Tor’s anger at Carnegie Mellon grew. This month, Tor claimed, based on conversations with people it believes to be credible, that the FBI paid Carnegie Mellon $1 million to hack its network—a claim that the FBI and the university deny.

“The allegation that we paid CMU $1 million is inaccurate,” said an FBI spokesperson.

In the abstract for their Black Hat talk, the researchers said the attack cost only $3,000—presumably the hosting costs for its relay nodes. Putting aside Tor’s claim that the government ordered the attack, once it was known that the researchers were sitting on top of a bunch of IP addresses associated with Dark Web activity, the government would certainly approach them for the evidence, which CMU could have handed over willingly or under legal pressure.

What the researchers gathered wouldn’t just be the IP addresses of child pornographers and drug dealers, but activists, human rights workers, whistleblowers, and other noncriminals simply trying to navigate the Web privately.

Whether and what they handed over exactly, we still don’t know. But what the researchers gathered wouldn’t just be the IP addresses of child pornographers and drug dealers, but presumably anyone who used Tor between January and July 2014, which would include activists and human rights workers communicating in repressive countries, whistleblowers trying to stay anonymous while providing revealing documents to journalists, and other noncriminals simply trying to navigate the Web privately. Journalist and documentary director Laura Poitras has said she couldn’t have made contact with Edward Snowden or made Citizenfour without Tor.

“There’s an argument that this attack hurts all of the bad users of Tor so it’s a good thing,” said Mathewson. “But this was not a targeted attack going after criminals. This was broad. They were injecting their signals into as much hidden services traffic as they could without determining whether it was legal or illegal.”

“Civil liberties are under attack if law enforcement believes it can circumvent the rules of evidence by outsourcing police work to universities,” wrote Dingledine in a Tor blog post, which also questioned whether Carnegie Mellon had gotten approval from an institutional review board, a process that exists to ensure that academics don’t harm human research subjects.

Theoretically, Tor could sue the university and the researchers for, essentially, hacking its network. Tor spokesperson Kate Krauss says Tor is in the early stages of figuring out what it’s going to do legally. “We’re evaluating our options in this area,” she said.

It’s the difference between studying epidemiology by looking at a virus in skin grafts and releasing the virus in the wild.

“This attack was done without any regard for user privacy,” said Mathewson. “It’s the difference between studying epidemiology by looking at a virus in skin grafts and releasing the virus in the wild. The responsible thing to do when you come up with an attack is to get it fixed, not to carry it out on random strangers. That crosses the line from security research into malicious behavior.”

So, the big question many security-minded people have been asking since this attack was revealed is, ‘Can you still trust Tor?’

Mathewson says Tor has made major changes to its operation to prevent this kind of attack from working again, starting with “not extending security researchers the benefit of the doubt on anything.” It now has a set, strict procedure for how to respond when it sees a bunch of servers join its network. It will remove them by default rather than taking a ‘wait and see if they do something weird’ approach.

We now have a ‘block first, ask questions later’ policy. - Tor chief architect Nick Mathewson

“We seriously revamped our code that scans the network for suspicious behavior,” said Mathewson. “We now have a ‘block first, ask questions later’ policy.”

A Tor server now needs to do more before it will be considered a reliable hidden services directory, said Mathewson. Those are the places in the Tor network that point people to otherwise “dark” sites not exposed to the open Web. Tor is also working on what Mathewson calls a “new cryptographic trick” that will allow a hidden services directory to send someone to a hidden site (which they identify with a .onion Web address) without the directory knowing where it’s sending them.

“We’ve been working on a revamp of the hidden services design over the last year,” said Mathewson. “The implementation is in progress but it’s not done.”

A larger problem is a lack of manpower at Tor; this attack succeeded because a concerning development didn’t get the attention it deserved. That is indicative of a wider problem in the security ecosystem: many of the critical tools we rely on for the privacy and security of our online activity are understaffed and underfunded. At the same time that Tor was under attack in 2014, a security researcher discovered the Heartbleed bug, a flaw that affected a large chunk of the Internet and stemmed from a mistake in OpenSSL, a codebase relied on by scores of Internet companies but supported at the time by just one full-time employee. Tor’s decentralized, crowdsourced model has strengths, but its tiny operation, with few full-time employees, has weaknesses as well—one of which was exploited here.

Tor recently launched a crowdfunding campaign to try to increase its number of individual funders so that it has more freedom in how it spends. “We are internally obsessed with getting more diverse with our funding and having unrestricted money,” said spokesperson Kate Krauss. “We want to solve problems as we see them as opposed to what an institutional funder is focused on.”

As for the question of ‘Can people trust Tor?’, Mathewson had a pragmatic response.

“There is no computer security program out there with 100% confidence that everything you do is going to be safe,” said Mathewson. “We can provide a high probability of safety and get better all the time. But no computer software ever written is able to provide absolute certainty. Have a back-up plan.”

FBI Refuses Motherboard's FOIA Request for Information About Attack on Tor - Motherboard 20151125

The FBI and Carnegie Mellon University (CMU) really don't want to talk about their relationship.

The FBI has decided to neither confirm nor deny the existence of any emails, documents, or contracts between the agency and university in response to a Freedom of Information Act (FOIA) request filed by Motherboard.

“Please be advised that it is the FBI's policy to neither confirm nor deny the existence of any records which would tend to indicate or reveal whether an individual or organization is supplying material or investigatory assistance to the FBI,” reads the response, dated November 19, 2015.

Motherboard filed the FOIA after it was revealed that a “university-based research institute” had provided the FBI with the IP addresses of dark web sites and Tor users between January and July 2014. Circumstantial evidence pointed to CMU, and specifically its Software Engineering Institute, being involved.

Those IP addresses led to the shutdown of Silk Road 2.0 and a number of other dark web sites, as well as the arrests of Brian Farrell, who is charged with conspiracy to distribute heroin, methamphetamine, and cocaine, and Gabriel Peterson-Siler, who is charged with possession of child pornography.

After Motherboard published its report, the Tor Project, the non-profit that maintains the Tor anonymity network, made an unsubstantiated claim that researchers from CMU had been paid at least $1 million to carry out an attack on Tor. An FBI spokesperson told Motherboard that this claim was inaccurate.

Then CMU finally broke its silence and released a very carefully worded statement implying that the institution had been subpoenaed for the IP addresses, rather than acting on any sort of contract.

Alexander Volynkin and Michael McCord, two researchers from CMU's Software Engineering Institute, were scheduled to give a talk at the Black Hat hacking conference in summer 2014. The researchers were going to reveal how a $3,000 piece of kit could unmask the IP addresses of Tor hidden services as well as their users. The talk, however, was abruptly canceled without explanation.

In the rejected FOIA request, Motherboard asked specifically for correspondence involving Volynkin and McCord.

“Acknowledging the FBI's liaison activities or investigatory techniques invites the risk of circumvention of federal law enforcement efforts,” the FBI's response continues. Previously, when asked for more details on this case, a CMU spokesperson told Motherboard: “We cannot comment on Tor.”

Motherboard will be appealing the FOIA decision.