Using Chrome with Tor on OS X

I’m living and traveling overseas. I want to have Tor as an option but I really just want to use it with Chrome — which I like a lot. My goal is to have the option to avoid national firewalls in countries that use them. I’ve generally used a SOCKS proxy over SSH in the past but it is good to have options. Plus, I have been reading Cory Doctorow’s Homeland (sequel to Little Brother), in which Tor is a prominent plot point, much like “Finux” (Linux) and “Ordo” (PGP/GPG) in Cryptonomicon.

I realize that Chrome sends information back to Google. I am even logged into Chrome, so this procedure isn’t hiding anything from them. Perhaps Chromium would be better. I’m not sure I want to constantly build from source every few weeks because Chromium is huge. These people have packaged vanilla Chromium plus Sparkle to update it. I may look into this in future.

The simplest way to use Tor for anonymized browsing is to download and install the Tor Browser Bundle. There are some aspects of this that I don’t find ideal — mostly I want to maintain Tor as part of my UNIX environment on OS X via MacPorts. I also like to have my hands in all the moving parts to learn how they work.

$ sudo port install tor

---> Updating database of binaries: 100.0%
---> Scanning binaries for linking errors: 100.0%
---> No broken files found.

$ tor
Mar 12 12:13:42.839 [notice] Tor v0.2.3.25 (git-17c24b3118224d65) running on Darwin.
Mar 12 12:13:42.840 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 12 12:13:42.840 [notice] Configuration file "/opt/local/etc/tor/torrc" not present, using reasonable defaults.
Mar 12 12:13:42.843 [notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we're using a Libevent library that says it's version 2.0.21-stable.
Mar 12 12:13:42.843 [notice] Initialized libevent version 2.0.21-stable using method kqueue. Good.
Mar 12 12:13:42.843 [notice] Opening Socks listener on 127.0.0.1:9050
Mar 12 12:13:42.000 [notice] Parsing GEOIP file /opt/local/share/tor/geoip.
Mar 12 12:13:42.000 [notice] This version of OpenSSL has a known-good EVP counter-mode implementation. Using it.
Mar 12 12:13:42.000 [notice] OpenSSL OpenSSL 1.0.1e 11 Feb 2013 looks like version 0.9.8m or later; I will try SSL_OP to enable renegotiation
Mar 12 12:13:43.000 [notice] Reloaded microdescriptor cache. Found 3239 descriptors.
Mar 12 12:13:43.000 [notice] We now have enough directory information to build circuits.
Mar 12 12:13:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Mar 12 12:13:44.000 [notice] Heartbeat: Tor's uptime is 0:00 hours, with 1 circuits open. I've sent 0 kB and received 0 kB.
Mar 12 12:13:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Mar 12 12:13:45.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Mar 12 12:13:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 12 12:13:48.000 [notice] Bootstrapped 100%: Done.
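
Before pointing anything at the proxy, a quick sanity check from the shell confirms that traffic through the SOCKS listener really exits via Tor. A minimal sketch using curl (assuming curl is available; --socks5-hostname makes curl resolve DNS through the proxy as well, and check.torproject.org is the Tor Project’s own test page):

$ curl --silent --socks5-hostname localhost:9050 https://check.torproject.org/ | grep -i congratulations

If the request went out through Tor, the page contains a “Congratulations” line; fetched directly, it does not.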

Tor creates a SOCKS proxy listening on localhost 9050. My first thought was to create an OS X network Location for Tor which configures all of my network interfaces to use SOCKS on localhost 9050.

Tor location

This does work, in the sense that applications which use the OS networking stack will switch to passing their traffic to SOCKS on localhost 9050, but it isn’t necessarily good enough for anonymizing with Tor because of the DNS leak problem. In particular, browsers, and specifically Chrome, do not send their DNS traffic to the SOCKS server by default. This weakens your anonymization by leaking unencrypted UDP DNS requests to your ISP, and it also breaks resolution of Tor hidden services on .onion domains.

I wanted to try to use Chrome with Tor, so this presented a problem. Poking around, I discovered a Chromium design document which describes how to force Chrome to send all of its traffic, including DNS, to a SOCKS server. It requires passing arguments to Chrome or Chromium when starting the app.

--proxy-server="socks5://myproxy:8080"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE myproxy"
In order to use this mechanism, you have to exit all Chrome/Chromium processes and launch a new process with the appropriate flags.
 

killall Google\ Chrome
sleep 1 # give processes a chance to exit before launching
open -a Google\ Chrome --args --proxy-server="socks5://localhost:9050" --host-resolver-rules="MAP * 0.0.0.0, EXCLUDE localhost"

A nifty feature of OS X is Automator, which can turn a script into an app via the Application document type. Start Automator, create a new Application document, add the “Run Shell Script” action, and paste in the script above. Automator will then let you save a .app file which can live in your Applications folder.


I saved this automation as “Google Chrome for Tor.app”. Launching “Google Chrome for Tor” will close all my sessions in Chrome and launch a new Chrome process tree configured as a SOCKS client on my local Tor proxy. Using the chrome://net-internals URL verifies that Chrome is talking to Tor and also sending all of its DNS requests through Tor.
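
To double-check the DNS side from outside the browser, one rough approach is to watch for DNS traffic leaving the machine while browsing. A sketch with tcpdump (assuming the active interface is en0; adjust for your setup):

$ sudo tcpdump -n -i en0 udp port 53

While browsing in the Tor-configured Chrome (with other network activity quiet), no queries for the sites you visit should appear here; if they do, DNS is leaking outside the SOCKS tunnel.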


Also, as an aside and a note to self: SSH can be used with Tor via netcat. This means the SSH tunnel passes through the Tor network, which is useful if SSH over TCP port 22 is blocked or monitored. It is bloody slow over my relatively slow-ish, high-ish latency connection in Africa; it reminds me of SSH over GPRS.
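
For reference, a sketch of that setup using the BSD netcat shipped with OS X, which can speak SOCKS itself (user and host are placeholders):

ssh -o ProxyCommand="nc -X 5 -x localhost:9050 %h %p" user@example.com

The -X 5 and -x flags tell nc to connect through the SOCKS 5 proxy on localhost 9050, and ssh expands %h and %p to the target host and port, so the whole SSH session rides through Tor instead of going out directly.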

 

 


Dogpile on RIM

RIM caved to pressure from Saudi Arabia and will be installing servers there that can be monitored by Saudi authorities. Now, India has given RIM until August 31 to make a similar concession or have service suspended. I’m sure RIM will capitulate in order to stay in business. This is an unhappy precedent.

India is apparently also threatening to shut down Google and Skype messaging services unless the Indian government has the ability to intercept and monitor traffic.

Clearly, we need ubiquitous, secure and easy-to-use peer-to-peer cryptography so that governments have no central actors to put pressure on. Maybe the solution is OpenPGP but it needs to be much easier for people to use.

Bruce Schneier: U.S. enables Chinese hacking of Google

Notable cryptographer and security expert Bruce Schneier has a new essay up at CNN.

In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts. This feature is what the Chinese hackers exploited to gain access.

This problem isn’t going away. Every year brings more Internet censorship and control, not just in countries like China and Iran but in the U.S., the U.K., Canada and other free countries, egged on by both law enforcement trying to catch terrorists, child pornographers and other criminals and by media companies trying to stop file sharers.

The problem is that such control makes us all less safe. Whether the eavesdroppers are the good guys or the bad guys, these systems put us all at greater risk. Communications systems that have no inherent eavesdropping capabilities are more secure than systems with those capabilities built in. And it’s bad civic hygiene to build technologies that could someday be used to facilitate a police state.

Read the entire article at CNN.com. This essay is a follow-up to a previous Schneier essay, “Technology Shouldn’t Give Big Brother a Head Start”.

 

Schneier is the designer of the Blowfish block cipher and a co-designer of Twofish, as well as the creator of the Solitaire cipher used in Neal Stephenson’s Cryptonomicon. Twofish was a finalist in NIST’s Advanced Encryption Standard (AES) competition but ultimately lost to Rijndael.

Who is reading your email?

Email has no privacy assurance whatsoever

Email is short for “electronic mail”. The implication is that this is a direct metaphorical equivalent for the familiar paper process, but it just ain’t so. One of the points of departure from user expectation is the concept of a sealed envelope.

When you put a paper letter in a paper envelope and seal it, there is an expectation of privacy because someone has to physically break the seal, which is difficult to do without obviously damaging the envelope. Email, by contrast, has no secure envelope. It is transferred over the Internet as plain text over TCP port 25.
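
To make that concrete, here is an illustrative sketch of a message transfer on the wire. The host, addresses and response strings are invented, but the exchange is ordinary SMTP, and every line of it, including the message body, crosses the network unencrypted:

$ nc mail.example.com 25
220 mail.example.com ESMTP
HELO client.example.org
250 mail.example.com
MAIL FROM:<alice@example.org>
250 Ok
RCPT TO:<bob@example.com>
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: quarterly numbers

Anyone on the path can read this.
.
250 Ok: queued
QUIT
221 Bye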

The implication is that anyone can read your mail without you realizing it. Most people think the only point of concern is an attacker capturing packets at a public WiFi hotspot to snoop your mail. The most obvious countermeasure is using SSL with webmail. That is a good countermeasure as far as it goes, but it only encrypts the communication between you and your post office. The transport of your message between post offices is still going to be in plain text and will likely pass through several routers on the Internet en route.

So what? Who can tap traffic on those routers? Perhaps some uber-hacker is siphoning off traffic for analysis, but man, that seems like a huge amount of data to sift through and not likely worth doing, right?

Unfortunately Deep Packet Inspection is now ubiquitous. ISPs and governments, including the USA, have widely deployed hardware to capture and mine data out of unencrypted data packets passing through the Internet. Effectively that means it is safe to assume that all of your email (and other traffic) is being captured and analyzed by one or more automated systems trying to determine if it matches patterns of “bad behavior”. Deep Packet Inspection, by the way, is the mechanism that ISPs use to provide “tiered service” and detect copyright violations.

Maybe you think this sounds OK because you aren’t doing anything wrong, but consider whether you trust all of the people who have access to this data to do no evil, trust these people and algorithms to never make mistakes and trust that they never lose control of your private data.

A Modest Proposal

RFC 2487 describes a cryptographic mechanism, similar to SSL for web sites, that places a strong cryptographic layer over the transfer of email via SMTP. This transport layer security (the STARTTLS extension) is widely supported by mail servers today and is the mechanism by which SMTP client programs like Microsoft Outlook, Mozilla Thunderbird and Apple Mail are able to communicate securely with an SMTP gateway. Many mail providers, like Gmail, already require (or at least offer) SMTP over TLS connections for clients sending mail.
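
Checking whether a particular mail server offers STARTTLS is easy with the OpenSSL command-line tool. A sketch (mail.example.com is a placeholder; port 25 is the server-to-server port, and the same check works against port 587 for client submission):

$ openssl s_client -starttls smtp -connect mail.example.com:25

If the server supports it, OpenSSL negotiates TLS right after the SMTP greeting and prints the certificate chain; if not, the handshake fails and you know mail to that server travels in the clear.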

RFC 2487 was published more than ten years ago. Most, if not all, contemporary SMTP gateways are capable of supporting secure SMTP over TLS. If all mail servers simply transferred all messages between each other with TLS encryption, then nobody could read the messages except the administrators of the destination and source post offices and the intended recipients. This could be phased in over a reasonable period of time.

  • SMTP in the clear becomes deprecated.
  • During the deprecation period, servers are configured to attempt TLS by default (a configuration sketch for this phase and the later mandatory phase follows this list).
  • After some time, administrators can configure warnings in the headers and the To: line, like “[UNSECURED]”, for mail that arrived in the clear. This is an important user-education step.
  • SMTP in the clear becomes a banned protocol and servers are forbidden from supporting it. (Testing and troubleshooting SMTP can still be done interactively by using OpenSSL as a wrapper in place of classical telnet.)
  • Servers refuse connections from servers that do not offer TLS, which causes the message to bounce back to the sender with a notice that their mail server is not secure.
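
To give a concrete sense of what the opportunistic and mandatory phases look like in practice, here is a minimal sketch for Postfix (assuming Postfix; smtp_tls_security_level and smtpd_tls_security_level are its standard outbound and inbound TLS settings, and the certificate paths are placeholders):

# /etc/postfix/main.cf
# Outbound: use TLS whenever the remote server offers it (opportunistic phase)
smtp_tls_security_level = may
# Inbound: advertise STARTTLS to connecting servers
smtpd_tls_security_level = may
smtpd_tls_cert_file = /etc/postfix/certs/mail.example.com.pem
smtpd_tls_key_file = /etc/postfix/certs/mail.example.com.key
# Once SMTP in the clear is banned, tighten both levels from "may" to "encrypt"
# so that unencrypted transfers are refused outright.

This maps directly onto the phases above: “may” is the attempt-TLS-by-default step, and switching to “encrypt” is the point at which cleartext connections start bouncing.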

Securing all SMTP traffic in a TLS envelope would go a long way toward restoring some reality to the baseline assumption that nobody is reading email in transit. The technology has been standardized since 1999. Why do we not have secure server-to-server mail transfer ten years later?

Facebook is utterly untrustworthy

Here are a few things to consider before putting any of your data into Facebook:

  1. Under the aegis of “we’re making some changes to give you more control,” Facebook is taking advantage of standard user click-through terms-of-service behavior to make your profile data public. (via Jason Calacanis)
  2. Whenever you take a Facebook quiz or use a Facebook plugin game, everything in your profile is available to the publisher of the quiz or game. Further, everything in the private profiles of all your friends is available to the widget publisher as well. The data collected by the publisher can be sold, resold or released in any way the publisher of the quiz or widget chooses. (via ACLU)
  3. The privacy controls in Facebook are deceptive and there is no way to opt out of sharing private data with Facebook apps. (via Electronic Frontier Foundation) Also, there is no screening process required for app developers. Anyone with a Facebook account can be an app developer.

Why would Facebook leak its users’ private data in this way? Well, they may be incompetent, but that is not a compelling argument since they have built the world’s largest social network. The other possibility is that they want to convert the data in their systems into money. The leaking of private profile data to app publishers makes Facebook a wonderful platform for targeted marketing. It is particularly insidious because your data can be leaked even if you yourself are very careful, as soon as any of your friends uses any Facebook app.

Similarly, Jason Calacanis points out that the more data that is public on Facebook, the more it can be indexed by Google, Bing and Yahoo! to drive search traffic to Facebook. That traffic is monetized by selling ads.

Facebook shows an astonishing disregard for the privacy of its users. It appears to believe that its membership is too stupid to notice or care about the way it is abusing their private data. This is amazing because the original value proposition of Facebook over MySpace was that Facebook had privacy controls. Clearly Facebook is not concerned with keeping its users’ data private. They are concerned with monetizing Facebook in advance of their IPO.

Perhaps it is time to send Facebook a message and delete your account.

Of course, you still have to trust Facebook to actually delete your data and they are utterly untrustworthy.
