2-Step Verification Code Generator for UNIX Terminal

I have been using a time-based one-time password (TOTP) generator on my phone with my cloud-based accounts at Google, Amazon AWS, GitHub, Microsoft — every service that supports it — for years now. I have over a dozen of these and dragging my phone out every time I need a 2-factor token is a real pain.

I spend a lot of my time working on a trusted computer and I want to be able to generate the TOTP codes easily from that without having to use my phone. I also want to have reasonable confidence that the system is secure. I put together an old-school Bourne Shell script that does the job:

  • My OTP keys are stored in a file that is encrypted with gnupg and only decrypted momentarily to generate the codes.
  • The encrypted key file can be synchronized between computers using an untrusted service like Dropbox or Google Drive, as long as the private GPG key is kept secure.
  • I’m using oathtool from oath-toolkit to generate the one-time code.

Pro Tip: Most sites don’t intend for you to have more than one token generating passwords. Their enrollment process typically involves scanning a QR Code to enroll a new private key into Google Authenticator or another OATH client. I always take a screenshot of these QR Codes and keep them stored in a safe place.

Code

Save this script as an executable file in your path such as /usr/local/bin/otp.

#!/bin/sh
scriptname=$(basename "$0")
if [ -z "$1" ]; then
    echo "Generate OATH TOTP Password"
    echo ""
    echo "Usage:"
    echo "  $scriptname google"
    echo ""
    echo "Configuration: $HOME/.otpkeys"
    echo "Format: name:key"
    echo ""
    echo "Preferably encrypt with gpg --armor to create .otpkeys.asc"
    echo "and then delete .otpkeys"
    echo ""
    echo "Optionally set the OTPKEYS_PATH environment variable"
    echo "with the path to a GPG-encrypted name:key file."
    exit
fi
if [ -z "$(which oathtool)" ]; then
    echo "oathtool not found in \$PATH"
    echo "try:"
    echo "MacPorts: port install oath-toolkit"
    echo "Debian: apt-get install oathtool"
    echo "Red Hat: yum install oathtool"
    exit 1
fi
if [ -z "$OTPKEYS_PATH" ]; then
    if [ -f "$HOME/.otpkeys.asc" ]; then
        otpkeys_path="$HOME/.otpkeys.asc"
    else
        otpkeys_path="$HOME/.otpkeys"
    fi
else
    otpkeys_path=$OTPKEYS_PATH
fi
if [ ! -f "$otpkeys_path" ]; then
    >&2 echo "You need to create $otpkeys_path"
    exit 1
fi
if [ "$otpkeys_path" = "$HOME/.otpkeys" ]; then
    red='\033[0;31m'
    NC='\033[0m' # No Color
    >&2 printf "${red}WARNING: unencrypted ~/.otpkeys\n"
    >&2 printf "do: gpg --encrypt --recipient your-email --armor ~/.otpkeys\n"
    >&2 printf "and then delete ~/.otpkeys\n"
    >&2 printf "${NC}\n"
    # look up the key by name and strip spaces from the base32 value
    otpkey=$(grep "^$1:" "$otpkeys_path" | cut -d":" -f 2 | sed "s/ //g")
else
    # decrypt momentarily, then look up the key by name
    otpkey=$(gpg --batch --decrypt "$otpkeys_path" 2> /dev/null | grep "^$1:" | cut -d":" -f 2 | sed "s/ //g")
fi
if [ -z "$otpkey" ]; then
    >&2 echo "$scriptname: TOTP key name not found"
    exit 1
fi
oathtool --totp -b "$otpkey"

In order to use my script you need to already have gnupg installed and configured with a private key.

You then need to create a plain-text file of name:key pairs. Think of it as an associative array or dictionary where the lookup key is a memorable name and the value is a base32-encoded OATH key.

Example

fake:ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===
also-fake:ORUGS4ZNNFZS2YLMONXS2ZTBNNSQ====

Encrypt this file of name and key associations with gpg in ASCII-armor format with yourself as the recipient and save the output as ~/.otpkeys.asc.

$ gpg --encrypt --armor --recipient you@your-email-address.com otpkeys
$ mv otpkeys.asc ~/.otpkeys.asc
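
Before deleting the plaintext otpkeys file, it is worth confirming that the encrypted copy round-trips. Decrypting it should print the same name:key pairs (output shown for the sample file above):

$ gpg --decrypt ~/.otpkeys.asc
fake:ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===
also-fake:ORUGS4ZNNFZS2YLMONXS2ZTBNNSQ====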

Now the script will start working. For example, generate a code for the “fake” key in the sample file (your result should be different as the time will be different):

$ otp fake
487036
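
If you would rather have the code on the clipboard than on the screen, pipe it to your clipboard tool (pbcopy on OS X; xclip or xsel on most Linux desktops):

$ otp fake | pbcopy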

Extracting keys from QR Codes

At this point you may be thinking, “OK, but how the hell do I get the OTP keys to encrypt into the .otpkeys file?”

The ZBar project includes a binary zbarimg which will extract the contents of a QR Code as text in your terminal. The OATH QR Codes contain a URL, and a portion of that is an obvious base32 string that is the key. On rare occasions you may need to pad the end of the string with ‘=’ to make it a valid base32 string that works with oathtool, because oathtool is picky; a quick sketch of doing that by hand follows.
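
Base32 strings must be a multiple of 8 characters long, so padding by hand is simple. Here is a minimal sketch in plain sh, using the unpadded “also-fake” key from the sample file above:

secret="ORUGS4ZNNFZS2YLMONXS2ZTBNNSQ"   # 28 characters, needs padding
while [ $(( ${#secret} % 8 )) -ne 0 ]; do
  secret="${secret}="                   # append '=' until length is a multiple of 8
done
echo "$secret"                          # ORUGS4ZNNFZS2YLMONXS2ZTBNNSQ====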

My favorite package manager for OS X, MacPorts, doesn’t have ZBar so I had to build it from source. Homebrew has a formula for zbar. If you are using Linux, it is probably already packaged for you. ZBar depends on ImageMagick to build; if you have ImageMagick and its library dependencies, ZBar should build for you. Clone the ZBar repo with git, check out the tag for the most recent release (currently “0.10”) and build it. Note that a plain git checkout doesn’t include the generated configure script, so you will probably need to run autoreconf first:

$ git clone git@github.com:Zbar/Zbar
$ cd Zbar
$ git checkout 0.10
$ autoreconf --install
$ ./configure
$ make
$ sudo make install

Once you have ZBar installed, you should have zbarimg in your path and you can use it to extract the otpauth URL from your QR Code screenshot.

$ zbarimg ~/Documents/personal/totp-fake-aws.png 
QR-Code:otpauth://totp/breiter@fake-aws?secret=ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===
scanned 1 barcode symbols from 1 images in 0 seconds
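
If you want just the secret rather than the whole otpauth URL, you can pull it out with sed. This is a quick sketch: zbarimg’s -q flag suppresses the summary line, and the pattern assumes an uppercase base32 secret:

$ zbarimg -q ~/Documents/personal/totp-fake-aws.png | sed 's/.*secret=\([A-Z2-7=]*\).*/\1/'
ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===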

I hope you already have screenshots of all your QR Codes; otherwise you will need to generate new OTP keys for all your services and take a screenshot of the QR Code this time.

Syncing OTP key file with other computers

The otp script looks for an environment variable, OTPKEYS_PATH. You can use this to move your otp key database to a location other than ~/.otpkeys.asc. For example, put it in Google Drive and point the otp script to it by setting OTPKEYS_PATH in ~/.bashrc.

#path to GPG-encrypted otp key database
export OTPKEYS_PATH=~/Google\ Drive/otpkeys.asc
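
You can test the new location without reloading ~/.bashrc by setting the variable inline for a single invocation:

$ OTPKEYS_PATH=~/Google\ Drive/otpkeys.asc otp fake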

Now you can generate your OTP codes from the terminal in your trusted computers whenever you need them and enjoy a respite from constantly dragging your phone out of your pocket.

Using Chrome with Tor on OS X

I’m living and traveling overseas. I want to have Tor as an option, but I really just want to use it with Chrome, which I like a lot. My goal is to have the option to avoid the national firewalls some countries use. I’ve generally used a SOCKS proxy over SSH in the past, but it is good to have options. Plus, I have been reading Cory Doctorow’s Homeland (the sequel to Little Brother), in which Tor is a prominent plot point, much like “Finux” (Linux) and “Ordo” (PGP/GPG) in Cryptonomicon.

I realize that Chrome sends information back to Google. I am even logged into Chrome, so this procedure isn’t hiding anything from them. Perhaps Chromium would be better. I’m not sure I want to constantly build from source every few weeks because Chromium is huge. These people have packaged vanilla Chromium plus Sparkle to update it. I may look into this in future.

The simplest way to use Tor for anonymized browsing is to download and install the Tor Browser Bundle. There are some aspects of this that I don’t find ideal — mostly I want to maintain Tor as part of my UNIX environment on OS X via MacPorts. I also like to have my hands in all the moving parts to learn how they work.

$ sudo port install tor

--->  Updating database of binaries: 100.0%
--->  Scanning binaries for linking errors: 100.0%
--->  No broken files found.

$ tor
Mar 12 12:13:42.839 [notice] Tor v0.2.3.25 (git-17c24b3118224d65) running on Darwin.
Mar 12 12:13:42.840 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 12 12:13:42.840 [notice] Configuration file "/opt/local/etc/tor/torrc" not present, using reasonable defaults.
Mar 12 12:13:42.843 [notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we're using a Libevent library that says it's version 2.0.21-stable.
Mar 12 12:13:42.843 [notice] Initialized libevent version 2.0.21-stable using method kqueue. Good.
Mar 12 12:13:42.843 [notice] Opening Socks listener on 127.0.0.1:9050
Mar 12 12:13:42.000 [notice] Parsing GEOIP file /opt/local/share/tor/geoip.
Mar 12 12:13:42.000 [notice] This version of OpenSSL has a known-good EVP counter-mode implementation. Using it.
Mar 12 12:13:42.000 [notice] OpenSSL OpenSSL 1.0.1e 11 Feb 2013 looks like version 0.9.8m or later; I will try SSL_OP to enable renegotiation
Mar 12 12:13:43.000 [notice] Reloaded microdescriptor cache. Found 3239 descriptors.
Mar 12 12:13:43.000 [notice] We now have enough directory information to build circuits.
Mar 12 12:13:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Mar 12 12:13:44.000 [notice] Heartbeat: Tor's uptime is 0:00 hours, with 1 circuits open. I've sent 0 kB and received 0 kB.
Mar 12 12:13:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Mar 12 12:13:45.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Mar 12 12:13:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 12 12:13:48.000 [notice] Bootstrapped 100%: Done.

Tor creates a SOCKS proxy listening on localhost 9050. My first thought was to create an OS X network Location for Tor which configures all of my network interfaces to use SOCKS on localhost 9050.
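
A quick way to confirm the proxy is working from the terminal, assuming a curl build with SOCKS support, is to fetch the Tor check page through it; the returned page should report that you are using Tor:

$ curl --silent --socks5-hostname localhost:9050 https://check.torproject.org/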


This does work, in that applications using the OS networking stack will pass their traffic to SOCKS on localhost 9050, but it isn’t necessarily good enough for anonymizing with Tor because of the DNS leaking problem. In particular, browsers (Chrome especially) don’t send their DNS queries to the SOCKS server by default. That undermines your anonymization by leaking unencrypted UDP DNS requests to your ISP, and it also interferes with resolving Tor services on .onion domains.

I wanted to use Chrome with Tor, so this presented a problem. Poking around, I discovered a Chromium design document describing the solution for forcing Chrome to send all traffic, including DNS, to a SOCKS server. It requires passing arguments to Chrome or Chromium when starting the app.

--proxy-server="socks5://myproxy:8080"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE myproxy"
In order to use this mechanism, you have to exit all Chrome/Chromium processes and launch a new process with the appropriate flags.
 

killall Google\ Chrome
sleep 1 # give processes a chance to exit before launching
open -a Google\ Chrome --args --proxy-server="socks5://localhost:9050" --host-resolver-rules="MAP * 0.0.0.0, EXCLUDE localhost"

A nifty feature of OS X is Automator, which can turn a script into an app via the Application document type. Start Automator and create a new Application document and add the “run a shell script” Action and paste in the script above. Automator will then allow you to save a .app file which can live in your Applications folder.


I saved this automation as “Google Chrome for Tor.app”. Launching “Google Chrome for Tor” will close all my sessions in Chrome and launch a new Chrome process tree configured as a SOCKS client on my local Tor proxy. Using the chrome://net-internals URL verifies that Chrome is talking to Tor and also sending all of its DNS requests through Tor.


Also, as an aside and note to self: SSH can be used with Tor via netcat. This means the SSH tunnel passes through the Tor network, which is useful if SSH over TCP 22 is blocked or monitored. It is bloody slow over my relatively slow, high-latency connection in Africa; it reminds me of SSH over GPRS.
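
A minimal sketch of the netcat approach, assuming the OpenBSD netcat that ships with OS X (its -X and -x flags select the SOCKS version and the proxy address; the host here is made up):

$ ssh -o ProxyCommand='nc -X 5 -x localhost:9050 %h %p' user@host.example.com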


SUA Deprecated in Windows 8

The POSIX subsystem in Windows is headed for a slow death march, again. Not many people realize that Windows NT had a POSIX subsystem from the beginning which was enriched along the way to run a fork of OpenBSD called Interix. Originally the POSIX subsystem was bundled with Windows NT 3.1 and was a barely useful POSIX.1 environment to meet DoD purchasing requirements. Later, it was removed from the Windows core distribution, re-implemented by an ISV and called OpenNT and then Interix. Interix was acquired by Microsoft and sold for a while before being distributed free of charge. Later it was bundled with Services for UNIX 3.0 and 3.5 before being re-integrated into the Windows distribution as the Subsystem for UNIX-based Applications (SUA). At one time Interix actually ran Hotmail during the migration from a FreeBSD to NT backend.

After passing back to the Windows team and being rebranded SUA, Interix languished. The original developers scattered to other projects like Monad/PowerShell, left Microsoft, or were never hired by Microsoft when it acquired the technology in the first place. Interix is maintained by a very small team at Microsoft India, and these guys are focused primarily on just keeping it working through kernel updates. In practice, the quality of the product has been in decline. At one point, for example, it shipped with most of the .so shared libraries corrupt, so that nothing that linked to those libraries would run. The toolkit that makes SUA useful lags many, many months behind the release of a new version of Windows, and Microsoft required “premium” client SKUs (Ultimate or Enterprise) or server SKUs to access the technology, which greatly limited its distribution. It is generally as unloved by the powers that be as anything could be, except perhaps IronPython and IronRuby, which have already been killed.

The Windows 8 M3 developer preview shows that the other shoe has dropped:

Subsystem for UNIX-based Applications [DEPRECATED]

Subsystem for UNIX-based Applications (SUA) is a source-compatibility subsystem for compiling and running custom UNIX-based applications and scripts on a computer running Windows operating system. WARNING: SUA is deprecated starting with this release and will be completely removed from the next release. You should begin planning now to employ alternate methods for any applications, code, or usage that depends on this feature.


One obvious reason to deprecate SUA is that loading the extra subsystem makes Windows take a noticeably longer time to boot. The architecture is very much at odds with the instant boot goals of Windows 8.

There have been a number of developments over the last few years that make Interix less compelling: FastCGI on IIS and an official PHP port from Zend, lots of dynamic languages with native Windows runtimes, MySQL and PostgreSQL for Windows, and C libraries like pthreads-win32 and MSYS have all made Interix less necessary. For perl-heads there is even Strawberry Perl, which is supposed to be a lot more CPAN-friendly than ActiveState perl. I think Hyper-V and PowerShell are the real strategic replacements for SUA, though. PowerShell integrates with COM and WMI and fits the object nature of Windows better than any POSIX shell could. Hyper-V lets you actually run your UNIX app on a supported Linux platform on Windows, which I’m sure smells much less MacGyver to CIOs than this weird Interix POSIX-on-Windows thing that nobody ever heard of.

From the time that Hyper-V officially supported RHEL with enlightened drivers and Jeffrey Snover decided that the new shell and automation for Windows would be based on .NET, pivoting to build Monad/PowerShell rather than putting KSH on every Windows machine, Interix’s days were numbered. Now it’s official: Interix will be gone from the world in about 11 years, when Windows 8 reaches end-of-life, but if you are smart you will jump ship now because this product will have the minimum life-support staff imaginable.

The Sad History of the Microsoft POSIX Subsystem

When Windows NT was first being developed, one of the goals was to keep the kernel separate from the programming interface. NT was originally intended to be the successor to OS/2, but Microsoft also wanted compatibility to run Windows 3.x applications and to meet 1980s-era DoD Orange Book and FIPS specifications to sell to the defense market. As a result, Windows NT was developed as a multiuser platform with sophisticated discretionary access control capabilities, and it was implemented as a hybrid microkernel with three userland environments:

  • Windows (Win32)
  • OS/2
  • POSIX

Microsoft had a falling out with IBM over Win32 and the NT project split from OS/2. The team’s focus shifted to Win32 so much that the Client-Server Runtime Subsystem (CSRSS) that hosts the Win32 API became mandatory, while the OS/2 and POSIX subsystems were never really completed; they shipped with the first five versions of Windows NT, through Windows 2000. The OS/2 subsystem could only run OS/2 1.0 command-line programs and had no Presentation Manager support. The POSIX subsystem supported the POSIX.1 spec but provided no shells or UNIX-like environment of any kind. With the success of Win32 in the form of Windows 95, development of the OS/2 and POSIX subsystems ceased. They were entirely dead and gone from Windows XP and Windows Server 2003.

Meanwhile, around 1996, Softway Systems developed a UNIX-to-Windows NT porting system called OpenNT. OpenNT was built on the NT POSIX subsystem but fleshed it out into a usable UNIX environment. This was at a time when UNIX systems were hugely expensive. Softway used OpenNT to re-target a number of UNIX applications for US Federal agencies onto Windows NT. In 1998, OpenNT was renamed Interix. Softway Systems also eventually built a full replacement for the NT POSIX subsystem in order to implement system calls that the Microsoft POSIX subsystem didn’t support and to develop a richer libc, a single-rooted view of the file system and a functional gcc.

Microsoft acquired Softway and the Interix platform in 1999. Initially Interix 2.2 was made available as a fairly expensive paid add-on to Windows NT 4 and Windows 2000. Later it was incorporated as a component of Services for UNIX 3.0 and 3.5 (SFU), and SFU was made free of charge. When Interix became free, Microsoft removed the X11 server component that was previously bundled with it because, in the wake of U.S. v. Microsoft, they did not want to defend lawsuits from the entrenched and expensive PC X Server industry; the X11 client libraries remained.

SFU/Interix 3.0 was released in early 2002, followed by SFU 3.5 less than two years later, and cool stuff got implemented: fast pthreads, fork(), setuid, ptys, and daemons with RC scripts, including inetd and sendmail. InteropSystems ported OpenSSH and developed a high-performance port of Apache using pthreads, along with proof-of-concept ports like GTK and GIMP among many others. Hotmail even ran on Interix. And enterprising people did cool things like a Linux ELF binary loader on top of Interix.

I got into this stuff and built and donated ports to the SFU/SUA community, including cadaver, ClamAV, GnuMP, libtool, NcFTP, neon, rxvt and GNU whois. My company sponsored the port of OpenSSH to Interix 6.0 for Vista SUA (because it broke backwards compatibility with Interix 3.5 binaries). We ran Interix on all of our workstations and servers. We used it for management, remote access and to interoperate with clients who used Solaris, Linux and OS X on various projects.

Slowly Going Off the Rails

With Windows Server 2003 R2 (and only R2), Interix became a core operating system component, rebranded as “Subsystem for UNIX-based Applications” (SUA). Around this time, the core development team was re-formed in India rather than Redmond, and some of the key Softway developers moved on to other projects like Monad (PowerShell) or left Microsoft. Interix for Windows Server 2003 R2 (aka Interix 5.2) was broken: it shipped with corrupt libraries and a number of new but flawed APIs, and it broke some previously stable APIs like select(). Also, related to the inclusion of Interix as an OS component, SP2 for Windows Server 2003 clobbers Interix 3.5 installations.

Things have been downhill from there. It’s not just that obvious things didn’t get implemented, like a fully-functional poll() or updating binutils and gcc to something reasonably modern; the software suffered from severe bitrot.

One of the consequences of including SUA as an OS component has been a bifurcation of the “subsystem” from the “tools”. The subsystem consists of just a few files: psxss.exe, psxss.dll, posix.exe and psxrun.exe. This implements the runtime and a terminal environment but nothing else, not even libc. In order to get shells, PTYs and usable programs, you have to install the “Utilities and SDK for UNIX-based Applications” (aka the tools), which is a sizable download. Apparently Microsoft has concerns about bundling GPL code onto the actual Windows media.

OK. This is a little weird but not a big deal, except that the development timeline of the tools is now completely out of whack with Windows releases. The tools for Vista were only available in beta when Vista went gold, and the version for Windows Server 2008 and Vista SP1 was not available until about a month after Vista SP1/Win2k8 was released. No tools were available at all when Windows 7 shipped in July 2009; they didn’t arrive until eight months later, in March 2010, and contained no new features.

To top things off, while SFU 3.5 ran on all versions of NT 5.x, SUA only runs on Windows Server and the Enterprise and Ultimate client editions. SUA is not available on Vista Business or Home, or on the Windows 7 Professional and Home editions.

Is Interix Dead?

For some reason Microsoft seems to be ambivalent about this technology. On the one hand they bring it into the core of the OS and make it a “premium” feature that only Enterprise and Ultimate customers get to use and on the other they pare back development to almost nothing.

Interix has been supported with support forums and a ports tree maintained by InteropSystems, collectively known as the SUA Community, which operates with supplemental funding from Microsoft. The /Tools ports tree is the source for key packages not provided by Microsoft, such as Bash, OpenSSH, BIND, cpio and a ton of libraries that Microsoft does not bundle. Microsoft has been increasingly reluctant to fund the SUA Community and has surveyed users on a number of occasions. The latest survey was very pointed and culminated with Microsoft cutting off funding and shuttering the SUA Community site on July 6th, 2010, but a few days later it was back online. I’m not sure how or why.

I have no inside knowledge but my gut says that Interix has lost internal support at Microsoft. It is being kept on life support because of loud complaints from important customers but it is going nowhere. I will be surprised if there is a Subsystem for UNIX-based Applications in Windows 8. I think the ambivalence is ultimately about an API war. At some level, the strategerizers have decided it is better not to dignify the UNIX API with support. I think the calculus is that people will still use Windows, and that it chokes off oxygen for UNIX-like systems if it takes a lot of extra work to write cross-platform code for Windows and UNIX; the premise being that you write for Windows first because that’s where the market is. Furthermore, in a lot of business cases what is needed is Linux support, or Red Hat Linux version X support, in order to run something. I think Microsoft realizes that it is hard for Interix to beat Linux, which is why SUSE and Red Hat Linux can be virtualized under Hyper-V.

I also believe that Microsoft sees C/C++ APIs as “legacy”. I think they want to build an OS that is verifiably secure and more reliable by being based on fully managed code. The enormous library of software built for the Windows API is a huge legacy problem to manage in migrating to such a system. Layering POSIX/UNIX on top of that makes it worse.

Whatever the reason, it seems pretty clear that Interix is dying.