2-Step Verification Code Generator for UNIX Terminal

I have been using a time-based one-time password (TOTP) generator on my phone with my cloud-based accounts at Google, Amazon AWS, GitHub, Microsoft — every service that supports it — for years now. I have over a dozen of these, and dragging my phone out every time I need a 2-factor token is a real pain.

I spend a lot of my time working on a trusted computer and I want to be able to generate the TOTP codes easily from that without having to use my phone. I also want to have reasonable confidence that the system is secure. I put together an old-school Bourne Shell script that does the job:

  • My OTP keys are stored in a file that is encrypted with gnupg and only decrypted momentarily to generate the codes.
  • The encrypted key file can be synchronized between computers using an untrusted service like Dropbox or Google Drive as long as the private GPG key is kept secure.
  • I’m using oathtool from oath-toolkit to generate the one-time code.

Pro Tip: Most sites don’t intend for you to have more than one token that generates passwords. Their enrollment process typically involves scanning a QR Code to enroll a new private key into Google Authenticator or another OATH client. I always take a screen shot of these QR Codes and keep them stored in a safe place.

Code

Save this script as an executable file in your path such as /usr/local/bin/otp.

#!/bin/sh
scriptname=`basename $0`
if [ -z "$1" ]; then
    echo "Generate OATH TOTP Password"
    echo ""
    echo "Usage:"
    echo " $scriptname google"
    echo ""
    echo "Configuration: $HOME/.otpkeys"
    echo "Format: name:key"
    echo ""
    echo "Preferably encrypt with gpg --armor to create .otpkeys.asc"
    echo "and then delete .otpkeys"
    echo ""
    echo "Optionally set the OTPKEYS_PATH environment variable"
    echo "with the path to a GPG-encrypted name:key file."
    exit
fi
if [ -z "$(which oathtool)" ]; then
    echo "oathtool not found in \$PATH"
    echo "try:"
    echo "MacPorts: port install oath-toolkit"
    echo "Debian: apt-get install oathtool"
    echo "Red Hat: yum install oathtool"
    exit 1
fi
if [ -z "$OTPKEYS_PATH" ]; then
    if [ -f "$HOME/.otpkeys.asc" ]; then
        otpkeys_path="$HOME/.otpkeys.asc"
    else
        otpkeys_path="$HOME/.otpkeys"
    fi
else
    otpkeys_path=$OTPKEYS_PATH
fi
if [ ! -f "$otpkeys_path" ]; then
    >&2 echo "You need to create $otpkeys_path"
    exit 1
fi
if [ "$otpkeys_path" = "$HOME/.otpkeys" ]; then
    red='\033[0;31m'
    NC='\033[0m' # No Color
    >&2 echo "${red}WARNING: unencrypted ~/.otpkeys"
    >&2 echo "do: gpg --encrypt --recipient your-email --armor ~/.otpkeys"
    >&2 echo "and then delete ~/.otpkeys"
    >&2 echo "${NC}"
    # plaintext key file: look up the key by name
    otpkey=`grep "^$1:" "$otpkeys_path" | cut -d":" -f 2 | sed "s/ //g"`
else
    # encrypted key file: decrypt to stdout, then look up the key by name
    otpkey=`gpg --batch --decrypt "$otpkeys_path" 2> /dev/null | grep "^$1:" | cut -d":" -f 2 | sed "s/ //g"`
fi
if [ -z "$otpkey" ]; then
    echo "$scriptname: TOTP key name not found"
    exit 1
fi
oathtool --totp -b "$otpkey"

In order to use my script you need to already have gnupg installed and configured with a private key.

You then need to create a plain text file that contains key:value pairs. Think of these as an associative array or dictionary where the lookup key is a memorable name and the value is a base32 encoded OATH key.

Example

fake:ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===
also-fake:ORUGS4ZNNFZS2YLMONXS2ZTBNNSQ====

Encrypt this file of name and key associations with gpg in ASCII-armor format with yourself as the recipient and save the output as ~/.otpkeys.asc.

$ gpg --encrypt --armor --recipient you@your-email-address.com otpkeys
$ mv otpkeys.asc ~/.otpkeys.asc
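
Before deleting the plaintext otpkeys file, it’s worth confirming that the armored copy decrypts (gpg writes the plaintext to stdout):

$ gpg --decrypt ~/.otpkeys.asc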

Now the script will start working. For example, generate a code for the “fake” key in the sample file (your result should be different as the time will be different):

$ otp fake
487036

Extracting keys from QR Codes

At this point you may be thinking, “OK, but how the hell do I get the OTP keys to encrypt into the .otpkeys file?”

The ZBar project includes a binary zbarimg which will extract the contents of a QR Code as text in your terminal. The OATH QR Codes contain a URL, and a portion of that is an obvious base32 string that is the key. On rare occasions, you may need to add ‘=’ padding at the end of the string to make it a valid base32 string that works with oathtool, because oathtool is picky.
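
Base32 strings must be a multiple of 8 characters long, so if oathtool rejects a key you can pad it out in the shell. A quick sketch, using a stripped version of the fake key from the example above:

key="ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS"
while [ $(( ${#key} % 8 )) -ne 0 ]; do key="$key="; done
echo "$key"    # prints ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===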

My favorite package manager for OS X, MacPorts, doesn’t have ZBar, so I had to build it from source. Homebrew has a formula for zbar. If you are using Linux, it is probably already packaged for you. ZBar depends on ImageMagick to build. If you have ImageMagick and its library dependencies, ZBar should build for you. Clone the ZBar repo with git, check out the tag for the most recent release — currently “0.10” — and build it.

$ git clone git@github.com:Zbar/Zbar
$ cd Zbar
$ git checkout 0.10
$ make
$ sudo make install

Once you have ZBar installed, you should have zbarimg in your path and you can use it to extract the otpauth URL from your QR Code screenshot.

$ zbarimg ~/Documents/personal/totp-fake-aws.png 
QR-Code:otpauth://totp/breiter@fake-aws?secret=ORUGS4ZNNFZS2YJNMZQWWZJNNNSXS===
scanned 1 barcode symbols from 1 images in 0 seconds

I hope you already have screen shots of all your QR Codes or else you will need to generate new OTP keys for all your services and take a screen shot of the QR Code this time.

Syncing OTP key file with other computers

The otp script looks for an environment variable, OTPKEYS_PATH. You can use this to move your otp key database to a location other than ~/.otpkeys.asc. For example, put it in Google Drive and point the otp script to it by setting OTPKEYS_PATH in ~/.bashrc.

#path to GPG-encrypted otp key database
export OTPKEYS_PATH=~/Google\ Drive/otpkeys.asc

Now you can generate your OTP codes from the terminal in your trusted computers whenever you need them and enjoy a respite from constantly dragging your phone out of your pocket.

Using Chrome with Tor on OS X

I’m living and traveling overseas. I want to have Tor as an option, but I really just want to use it with Chrome — which I like a lot. My goal is to have the option to avoid national firewalls in the countries that use them. I’ve generally used a SOCKS proxy over SSH in the past, but it is good to have options. Plus, I have been reading Cory Doctorow’s Homeland (sequel to Little Brother), in which Tor is a prominent plot point, much like “Finux” (Linux) and “Ordo” (PGP/GPG) in Cryptonomicon.

I realize that Chrome sends information back to Google. I am even logged into Chrome, so this procedure isn’t hiding anything from them. Perhaps Chromium would be better. I’m not sure I want to constantly build from source every few weeks because Chromium is huge. These people have packaged vanilla Chromium plus Sparkle to update it. I may look into this in the future.

The simplest way to use Tor for anonymized browsing is to download and install the Tor Browser Bundle. There are some aspects of this that I don’t find ideal — mostly I want to maintain Tor as part of my UNIX environment on OS X via MacPorts. I also like to have my hands in all the moving parts to learn how they work.

$ sudo port install tor

---> Updating database of binaries: 100.0%
---> Scanning binaries for linking errors: 100.0%
---> No broken files found.

$ tor
Mar 12 12:13:42.839 [notice] Tor v0.2.3.25 (git-17c24b3118224d65) running on Darwin.
Mar 12 12:13:42.840 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 12 12:13:42.840 [notice] Configuration file "/opt/local/etc/tor/torrc" not present, using reasonable defaults.
Mar 12 12:13:42.843 [notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we're using a Libevent library that says it's version 2.0.21-stable.
Mar 12 12:13:42.843 [notice] Initialized libevent version 2.0.21-stable using method kqueue. Good.
Mar 12 12:13:42.843 [notice] Opening Socks listener on 127.0.0.1:9050
Mar 12 12:13:42.000 [notice] Parsing GEOIP file /opt/local/share/tor/geoip.
Mar 12 12:13:42.000 [notice] This version of OpenSSL has a known-good EVP counter-mode implementation. Using it.
Mar 12 12:13:42.000 [notice] OpenSSL OpenSSL 1.0.1e 11 Feb 2013 looks like version 0.9.8m or later; I will try SSL_OP to enable renegotiation
Mar 12 12:13:43.000 [notice] Reloaded microdescriptor cache. Found 3239 descriptors.
Mar 12 12:13:43.000 [notice] We now have enough directory information to build circuits.
Mar 12 12:13:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Mar 12 12:13:44.000 [notice] Heartbeat: Tor's uptime is 0:00 hours, with 1 circuits open. I've sent 0 kB and received 0 kB.
Mar 12 12:13:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Mar 12 12:13:45.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Mar 12 12:13:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 12 12:13:48.000 [notice] Bootstrapped 100%: Done.

Tor creates a SOCKS proxy listening on localhost 9050. My first thought was to create an OS X network Location for Tor which configures all of my network interfaces to use SOCKS on localhost 9050.
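
The stock MacPorts install runs fine with no torrc at all, but if you want to pin the listener explicitly, a minimal /opt/local/etc/tor/torrc mirroring the defaults shown in the log above would be (a sketch):

SocksPort 9050
Log notice stdout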

[Screenshot: a “Tor” network Location in OS X Network preferences with SOCKS proxy localhost:9050]

This does work in that applications that use the OS networking stack will switch to passing their traffic to SOCKS on localhost 9050, but it isn’t necessarily good enough for anonymizing with Tor because of the DNS leaking problem. In particular, browsers — specifically Chrome — don’t send their DNS traffic to the SOCKS server by default, which weakens your anonymization by leaking unencrypted UDP DNS requests to your ISP and also interferes with resolving Tor services on .onion domains.

I wanted to try and use Chrome with Tor, so this presented a problem. Poking around, I discovered a Chromium design document which has the solution for forcing Chrome to send all traffic — including DNS — to a SOCKS server. It requires passing arguments to Chrome or Chromium when starting the app.

--proxy-server="socks5://myproxy:8080"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE myproxy"
In order to use this mechanism, you have to exit all Chrome/Chromium processes and launch a new process with the appropriate flags.
 

killall Google\ Chrome
sleep 1 # give processes a chance to exit before launching
open -a Google\ Chrome --args --proxy-server="socks5://localhost:9050" --host-resolver-rules="MAP * 0.0.0.0, EXCLUDE localhost"

A nifty feature of OS X is Automator, which can turn a script into an app via the Application document type. Start Automator, create a new Application document, add the “Run Shell Script” action and paste in the script above. Automator will then allow you to save a .app file which can live in your Applications folder.


I saved this automation as “Google Chrome for Tor.app”. Launching “Google Chrome for Tor” will close all my sessions in Chrome and launch a new Chrome process tree configured as a SOCKS client on my local Tor proxy. Using the chrome://net-internals URL verifies that Chrome is talking to Tor and also sending all of its DNS requests through Tor.


Also, as an aside and note to self: SSH can be used with Tor via netcat. This means that the SSH tunnel passes through the Tor network, which is useful if ssh over TCP 22 is blocked or monitored. It is bloody slow over my relatively slow, high-latency connection in Africa — it reminds me of SSH over GPRS.
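
For my own future reference, the netcat trick looks something like this (a sketch; it assumes an nc with SOCKS support, like the one that ships with OS X, and example.com is a placeholder):

ssh -o ProxyCommand="nc -X 5 -x localhost:9050 %h %p" user@example.com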

Secure Password Hashing in C#

Long-term storage of logon credentials in computer systems is a serious point of vulnerability. Too many systems store credentials in an unencrypted or weakly encrypted form that system administrators or database power-users can view or edit. This may seem OK as long as a set of conditions holds:

  • The administrators of the system are known to be trustworthy and no untrusted users have the ability to view, alter or copy the password file or table.
  • The normal execution flow of the system could not be used to reveal plaintext passwords.
  • The online and backup repositories of authentication data are secure.

Employee turnover, the increasing complexity of internet-connected systems, and the spread of knowledge and tools for executing simple security exploits all work together to change the rules of the game so that storing passwords in plaintext is no longer acceptable. Disgruntled employees are now widely acknowledged to be the single largest vulnerability for most companies, and are the most common attackers of computer systems. Defects in production software, runtimes and platforms can also disclose sensitive information to attackers. A litany of recent headlines where a wide range of organizations have lost control of their authentication database proves that systems that store “secret” but plaintext logon information are unsafe.

Theory of Password Hashing in Brief

The key to hardening authentication systems is to use a strong cryptographic function to hash passwords into an unrecoverable blob. A strong hashing scheme must guard against brute-force offline attack via rainbow tables: precompiled hashes of dictionary words. Rainbow tables and software to use them are publicly available, which lowers the password-cracking bar to the level of script kiddie.

Niels Ferguson and Bruce Schneier, in Practical Cryptography, describe a strong hashing technique for safe storage of passwords, which requires:

  • A publicly tested secure hashing algorithm such as SHA-256. Neither MD5 nor SHA-1 is good enough anymore.
  • A randomly-generated value known as a salt is mixed with the plaintext. The salt value is not a secret; it exists to make the output of the hash non-deterministic. There should be a unique salt for each user, and a new random salt value should be generated every time a password is set.
  • The hash is stretched, meaning that the output is fed back through the algorithm a large number of times in order to slow down the hashing algorithm to make brute force attacks impractical.
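
Putting the three together: if H is the hash function, || is concatenation and n is the iteration count, the stored value works out to (a sketch of the scheme the code below implements):

x_0 = password
x_i = H(x_(i-1) || salt)    for i = 1 .. n
stored hash = x_n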

In order to authenticate a user, the system retrieves the salt for the username, calculates the hash for the password the user provided and compares that hash to the value in the authentication database. If the hashes match, then access is granted.

Implementation in C#

In the hope of making it easy for .NET developers to implement good authentication security, my company published a secure password hashing algorithm in C# under the BSD license in 2005. This algorithm is a part of the security subsystem of our PeopleMatrix product.

Because the code is no longer available on thoughtfulcomputing.com, I’m republishing it here.

Download HashManager source with a sample application.

//*****************************************************************
// <copyright file="HashManager.cs" company="WolfeReiter">
// Copyright (c) 2005 WolfeReiter, LLC.
// </copyright>
//*****************************************************************

/*
** Copyright (c) 2005 WolfeReiter, LLC
**
** This software is provided 'as-is', without any express or implied warranty. In no 
** event will the authors be held liable for any damages arising from the use of 
** this software.
**
** Permission is granted to anyone to use this software for any purpose, including 
** commercial applications, and to alter it and redistribute it freely, subject to 
** the following restrictions:
**
**    1. The origin of this software must not be misrepresented; you must not claim 
**       that you wrote the original software. If you use this software in a product,
**       an acknowledgment in the product documentation would be appreciated but is 
**       not required.
**
**    2. Altered source versions must be plainly marked as such, and must not be 
**       misrepresented as being the original software.
**
**    3. This notice may not be removed or altered from any source distribution.
**
*/

using System;
using System.Globalization;
using System.Text;
using System.Security.Cryptography;

namespace WolfeReiter.Security.Cryptography
{
	/// <summary>
	/// HashManager provides the service of cryptographically hashing and verifying strings
	/// against an existing hash. HashManager uses the SHA-256 hashing transform by default.
	/// </summary>
	/// <remarks>
	/// <para>The MD5 and SHA1 algorithms should be avoided because they are not strong enough
	/// to resist attack with modern processors. Also, it appears that the algorithms for MD5 and SHA1
	/// may have a flaw known as "collision". The short story is that an attacker can discover the 
	/// plaintext used to generate the hash in far fewer steps than expected.</para>
	/// <para>For example, SHA1 should theoretically require 2<sup>80</sup> steps to brute-force recover 
	/// the original plaintext. Researchers have recently claimed to have a method of recovering SHA1 plaintext
	/// in 2<sup>33</sup> steps.</para></remarks>
	[Serializable]
	public sealed class HashManager 
	{
		private HashAlgorithm _transform;
		private readonly ulong _iterations;

		/// <summary>
		/// CTOR. Creates a new HashManager object that uses the SHA-256 algorithm and stretches the entropy in the hash
		/// with 2<sup>16</sup> iterations.
		/// </summary>
		public HashManager() : this("SHA-256",65536){}
		/// <summary>
		/// CTOR. Creates a new HashManager object that uses the specified well-known hash transform algorithm.
		/// </summary>
		/// <param name="transform">Well-known transform algorithm (eg. MD5, SHA-1, SHA-256, SHA-384, SHA-512, etc.).</param>
		/// <param name="iterations">Number of iterations used to stretch the entropy of the plaintext. 2<sup>16</sup> iterations
		/// is a recommended minimum.</param>
		/// <exception cref="ArgumentOutOfRangeException">Throws if iterations is less than 1.</exception>
		public HashManager(string transform, ulong iterations) : this(HashAlgorithm.Create(transform),iterations){}

		/// <summary>
		/// CTOR. Creates a new HashManager object that uses provided transform hash algorithm.
		/// </summary>
		/// <param name="transform">HashAlgorithm object to use.</param>
		/// <param name="iterations">Number of iterations used to stretch the entropy of the plaintext. 2<sup>16</sup> iterations
		/// is a recommended minimum.</param>
		/// <exception cref="ArgumentOutOfRangeException">Throws if iterations is less than 1.</exception>
		public HashManager(HashAlgorithm transform, ulong iterations)
		{
			if( iterations < 1 )
				throw new ArgumentOutOfRangeException("iterations", iterations, "The number of iterations cannot be less than 1");
			_transform  = transform;
			_iterations = iterations;
		}
		/// <summary>
		/// CTOR. Creates a new HashManager object that uses the specified well-known hash transform algorithm.
		/// </summary>
		/// <param name="transform">Well-known transform algorithm (eg. MD5, SHA-1, SHA-256, SHA-384, SHA-512, etc.).</param>
		/// <param name="iterations">Number of iterations used to stretch the entropy of the plaintext. 2<sup>16</sup> iterations
		/// is a recommended minimum.</param>
		/// <exception cref="ArgumentOutOfRangeException">Throws if iterations is less than 1.</exception>
		public HashManager(string transform, long iterations) : this(HashAlgorithm.Create(transform),iterations){}

		/// <summary>
		/// CTOR. Creates a new HashManager object that uses provided transform hash algorithm.
		/// </summary>
		/// <param name="transform">HashAlgorithm object to use.</param>
		/// <param name="iterations">Number of iterations used to stretch the entropy of the plaintext. 2<sup>16</sup> iterations
		/// is a recommended minimum.</param>
		/// <exception cref="ArgumentOutOfRangeException">Throws if iterations is less than 1.</exception>
		public HashManager(HashAlgorithm transform, long iterations)
		{
			if( iterations < 1 )
				throw new ArgumentOutOfRangeException("iterations", iterations, "The number of iterations cannot be less than 1");
			_transform  = transform;
			_iterations = (ulong)iterations;
		}
		/// <summary>
		/// Hashes a plaintext string.
		/// </summary>
		/// <param name="s">Plaintext to hash. (not nullable)</param>
		/// <param name="salt">Salt entropy to mix with the s. (not nullable)</param>
		/// <returns>HashManager byte array.</returns>
		/// <exception cref="ArgumentNullException">Throws if either the s or salt arguments are null.</exception>
		public byte[] Encode( string s, byte[] salt )
		{
			if( s==null )
				throw new ArgumentNullException("s");
			if( salt==null )
				throw new ArgumentNullException("salt");

            return Encode( ConvertStringToByteArray( s ), salt );
		}

        public byte[] Encode( byte[] plaintext, byte[] salt )
        {
            byte[] sp = salt;
            byte[] hash = plaintext;
            //stretching via multiple iterations slows the hash computation down, making
            //brute-force and dictionary attacks much more expensive.
            for( ulong i = 0; i < _iterations; i++ )
            {
                sp = Salt( hash, salt );
                hash = _transform.ComputeHash( sp );
            }

            return hash;
        }

		private byte[] Salt( byte[] p, byte[] salt )
		{
			byte[] buff = new byte[p.Length + salt.Length];
			for( int i=0; i<p.Length; i++ )
				buff[i] = p[i];
			for( int i=0; i<salt.Length; i++ )
				buff[i + p.Length] = salt[i];
			
			return buff;
		}

		/// <summary>
		/// Verifies a plaintext string against an existing hash. This is a case-sensitive operation.
		/// </summary>
		/// <param name="s">Plaintext to verify</param>
		/// <param name="hash">HashManager to compare with the s</param>
		/// <param name="salt">Salt that was used with the original s to generate the hash</param>
		/// <returns>True if the s is the same as the one used to generate the original hash.
		/// Otherwise false.</returns>
		/// <exception cref="ArgumentNullException">Throws if any of the s, hash or salt arguments are null.</exception>
		public bool Verify( string s, byte[] hash, byte[] salt )
		{
			if( s == null )
				throw new ArgumentNullException("s");
			if( salt == null )
				throw new ArgumentNullException("salt");
			if( hash == null )
				throw new ArgumentNullException("hash");

            byte[] testhash = Encode( s, salt );
            return _Verify( testhash, hash );
		}

        public bool Verify( byte[] plaintext, byte[] hash, byte[] salt )
        {
            if( plaintext == null )
                throw new ArgumentNullException( "plaintext" );
            if( salt == null )
                throw new ArgumentNullException( "salt" );
            if( hash == null )
                throw new ArgumentNullException( "hash" );

            byte[] testhash = Encode( plaintext, salt );
            return _Verify( testhash, hash );
        }

        private bool _Verify( byte[] a, byte[] b )
        {
            if( a.Length != b.Length )
                return false;
            for( int i = 0; i < a.Length; i++ )
            {
                if( a[i] != b[i] )
                    return false;
            }
            return true;
        }

		/// <summary>
		/// Generates a 32 byte (256 bit) cryptographically random salt.
		/// </summary>
		/// <returns>32-element byte array (256 bits) containing cryptographically random bytes.</returns>
		public static byte[] GenerateSalt()
		{
			return GenerateSalt(32);
		}
		/// <summary>
		/// Generates a cryptographically random salt of arbitrary size.
		/// </summary>
		/// <param name="size">size of the salt</param>
		/// <returns>Byte array containing cryptographically random bytes.</returns>
		public static byte[] GenerateSalt(int size)
		{
			Byte[] saltBuff = new Byte[size];
			RandomNumberGenerator.Create().GetBytes( saltBuff );
			return saltBuff;
		}

		/// <summary>
		/// Convert a hash (or salt or any other) byte[] to a hexadecimal string.
		/// </summary>
		/// <param name="hash">Byte[] to convert to hex.</param>
		/// <returns></returns>
		public static string ConvertToHexString( byte[] hash )
		{
			StringBuilder sb = new StringBuilder();
			foreach( byte b in hash )
			{
				string bytestr = Convert.ToInt32(b).ToString("X").ToLower();
				if( bytestr.Length==1)
					bytestr = '0' + bytestr;
				sb.Append( bytestr );
			}
			return sb.ToString();
		}

		/// <summary>
		/// Convert a hexadecimal string to byte[].
		/// </summary>
		/// <param name="s"></param>
		/// <returns></returns>
		public static byte[] ConvertFromHexString( string s )
		{
			string hex = s.ToLower();
			if( hex.StartsWith("0x") )
				hex = hex.Substring(2);

			//ensure an even hex number
			hex = (hex.Length % 2 == 0) ? hex : '0' + hex;
			//2 hex digits per byte
			byte[] b = new byte[hex.Length/2];
			for(int i=0,j=0; i<hex.Length; i+=2,j++)
			{
				b[j] = byte.Parse( hex.Substring(i,2), NumberStyles.HexNumber );
			}
			return b;
		}

		/// <summary>
		/// Convert a hash (or salt or any other) byte[] to a base64 string.
		/// </summary>
		/// <param name="hash">Byte[] to convert to base64</param>
		/// <returns></returns>
		public static string ConvertToBase64String( byte[] hash )
		{
			return Convert.ToBase64String(hash);
		}
		
		/// <summary>
		/// Convert a base64 encoded string into a byte array.
		/// </summary>
		/// <param name="s">Base64 string.</param>
		/// <returns>Byte array.</returns>
		public static byte[] ConvertFromBase64String( string s )
		{
			return Convert.FromBase64String( s );
		}

		/// <summary>
		/// Converts a String to a Byte array.
		/// </summary>
		/// <remarks>Only works on .NET string objects or Unicode encoded strings.</remarks>
		/// <param name="s">String to convert</param>
		/// <returns>Byte array.</returns>
		public static byte[] ConvertStringToByteArray( string s )
		{
			if (s == null)
				return null;
			Char[] chars = s.ToCharArray();
			Encoder encoder = Encoding.Unicode.GetEncoder();
			int bytecount = encoder.GetByteCount( chars, 0, chars.Length, true );
			byte[] outbuff = new byte[bytecount];
			encoder.GetBytes(chars, 0, chars.Length, outbuff, 0, true );
			return outbuff;
		}
	}
}
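
Typical usage looks something like this (a minimal sketch against the class above; where you persist the hash and salt is up to you):

using System;
using WolfeReiter.Security.Cryptography;

class HashManagerDemo
{
	static void Main()
	{
		// SHA-256 with 2^16 stretching iterations (the defaults)
		HashManager hasher = new HashManager();

		// enrollment: generate a fresh random salt, then hash the password with it
		byte[] salt = HashManager.GenerateSalt();
		byte[] hash = hasher.Encode( "correct horse battery staple", salt );
		// persist ConvertToBase64String(hash) and ConvertToBase64String(salt) for the user

		// logon: recompute the hash with the stored salt and compare
		bool ok = hasher.Verify( "correct horse battery staple", hash, salt );
		Console.WriteLine( ok ? "access granted" : "access denied" );
	}
}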

IP Address is Not Identity


When TCP/IP was first developed by DARPA in the 1970s, every host on the ARPANET got an IP address. The “hosts” file, which still exists on every computer, mapped addresses to hosts until it was superseded by the Domain Name System (DNS). Certainly it was possible to do tricks like mapping more than one name (A record) to an IP address or providing CNAME aliases, and multihomed hosts with multiple IP addresses were allowed. But more or less, historically an IP address mapped to a computer. Furthermore, until recently IP addresses were doled out by ARIN and others in big blocks. Anyone who had a hint of a need could get IP addresses in lots of 256 addresses, sometimes called a Class C subnet or a “/24”. In the late 1990s in the Mid-Atlantic region of the USA, a T-1 came with 256 IP addresses, and it was easy to get another 256 or more with the most modest excuse.

Historically, therefore, there is a notion that an IP address is pretty much a host and that host is part of a block of IP addresses which are managed by some entity which owns that computer and all the others in the subnet.

We are out of IP addresses, and this world where a host is an IP address and a /24 is controlled by a single entity no longer exists.

IPv6 may be the solution, but the reality is that nobody is using it yet. In today’s world, IP addresses are shared by multiple computers and even multiple companies using a variety of schemes, including:

  • Virtual hosting
  • Network Address Translation (NAT)
  • Proxies
  • Dynamic address allocation (DHCP)
  • Shared service computing (SaaS, Application Service Provider, Cloud Computing, etc.)

The bottom line is that an IP address is no longer reliably associated with any kind of identity. That email you just got might be coming from a Google Apps IP address, or maybe one from Office 365 that is originating mail for hundreds of companies. The IP address behind this web server is most assuredly being used to host hundreds or thousands of sites. On the client end, if you have a Verizon LTE device, then you have a publicly unroutable 10.x.x.x address and are being NATed onto the public Internet with an IP address shared by many others.

This new reality complicates Internet security decisions because these days IP addresses are more granular than hosts and maybe more granular than organizations. Manipulating access control by IP address should be considered a blunt instrument virtually guaranteed to carry unintended side-effects unless the parties owning the addresses are well known to each other.


“Security Suite” Subscriptions are the Dumbest Idea in Computer Security

I recently came across a 2005 essay by Marcus Ranum entitled “The Six Dumbest Ideas in Computer Security”. #1 on Ranum’s list is “Default Permit”.

Another place where "Default Permit" crops up is in how we typically approach code execution on our systems. The default is to permit anything on your machine to execute if you click on it, unless its execution is denied by something like an antivirus program or a spyware blocker. If you think about that for a few seconds, you’ll realize what a dumb idea that is. On my computer here I run about 15 different applications on a regular basis. There are probably another 20 or 30 installed that I use every couple of months or so. I still don’t understand why operating systems are so dumb that they let any old virus or piece of spyware execute without even asking me. That’s "Default Permit."

#2 on Ranum’s list is a special case of #1 which he calls “Enumerating Badness”. Basically what that boils down to is keeping a running list of “bad” stuff and preventing that from happening.

"Enumerating Badness" is the idea behind a huge number of security products and systems, from anti-virus to intrusion detection, intrusion prevention, application security, and "deep packet inspection" firewalls. What these programs and devices do is outsource your process of knowing what’s good. Instead of you taking the time to list the 30 or so legitimate things you need to do, it’s easier to pay $29.95/year to someone else who will try to maintain an exhaustive list of all the evil in the world. Except, unfortunately, your badness expert will get $29.95/year for the antivirus list, another $29.95/year for the spyware list, and you’ll buy a $19.95 "personal firewall" that has application control for network applications. By the time you’re done paying other people to enumerate all the malware your system could come in contact with, you’ll more than double the cost of your "inexpensive" desktop operating system.

The prices have gone up a bit with inflation:

  • Norton Internet Security Suite $70/year
  • Kaspersky PURE Total Security $90/year
  • McAfee Total Protection $90/year

Basically, what you get for your $60-90/year is a system that double-checks everything you try to do against a list of known badness, tries to stop anything on the list, and tries to fix things up afterward if something nasty slips through. You have no guarantee that something bad won’t happen because your computer still defaults to executing all code and, as a bonus, your expensive new computer now runs like molasses.

Default Deny is an Available Option in Windows 7 (but not by default)

Windows 7 ships with a semi-obscure enterprise feature called AppLocker. AppLocker can deny execution to all programs, scripts, installers and DLLs by default. Instead of the normal situation where everything runs, only the code that matches AppLocker’s known-good rules is allowed to execute. It works in conjunction with non-administrator user accounts to ensure that the only code executing on your system is code you want executing. This sleeper feature that nobody has ever heard of is more effective at stopping malware than any security suite on the market can ever be.

Why does this work? Your everyday account has limited rights, so it can’t write files into protected parts of the operating system, but only software installed into protected parts of the operating system is allowed to execute. That means it’s impossible to execute downloads, email attachments, files on USB drives or whatever. Even if your browser or a plugin like Flash is exploited by malicious code in a web page, it is severely limited in the damage it can do. The usual end game of browser exploit code is to download malware onto your computer, install it somewhere and execute it. With an AppLocker default-deny policy the end game can’t happen. This makes an anti-malware system something of an afterthought. Antimalware software becomes nothing more than good hygiene rather than the beachhead of your computer security, so make sure to use something that is free, lightweight and unobtrusive.

The catch is that AppLocker is an “Enterprise” feature that is only available in the Windows 7 Enterprise and Ultimate editions. Also, it is configured through the Group Policy enterprise management tool, which is targeted at professional systems administrators rather than normal people.
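
For the scripting-inclined, Windows 7 also ships AppLocker PowerShell cmdlets that take some of the Group Policy pain away. A sketch that inventories a directory and merges allow rules into the local policy (run from an elevated PowerShell, and test in audit-only mode before enforcing):

Import-Module AppLocker
Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse |
    New-AppLockerPolicy -RuleType Publisher,Path -User Everyone |
    Set-AppLockerPolicy -Merge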

It turns out to also be cheaper to upgrade to Windows 7 Ultimate than to pay for 3 years of anti-malware. Let’s assume that your computer has a 3-year life.

Windows Anytime Upgrade Price List

  • Windows 7 Starter to Windows 7 Ultimate: $164.99 or $55/year amortized over 3 years
  • Windows 7 Home Premium to Windows 7 Ultimate: $139.99 or $47/year amortized over 3 years
  • Windows 7 Professional to Windows 7 Ultimate: $129.99 or $43/year amortized over 3 years

Even if it weren’t cheaper than massive security suites, enabling a default-deny execution policy is so fundamentally right that it is crazy not to do it. Any corporate IT department deploying Windows 7 without enabling AppLocker is either incompetent or the organization places no value on information security. For home users, the configuration is doable, but it is “enterprisey”, which means the configuration interface is too daunting for most people.

If Microsoft cares about protecting its users, it should enhance AppLocker so that it has a consumer-friendly configuration interface and it should turn on AppLocker by default in all SKUs, just like the Windows Firewall is on by default.

The day can’t come soon enough that Windows ships with a default deny firewall and a default deny execution policy and limited rights users by default. Maybe it will all come together in Windows 8.

Really Least-Privilege Development: AppLocker and Visual Studio

AppLocker is a software execution policy tool in Windows 7 Enterprise and Ultimate and Windows Server 2008 R2. An AppLocker policy can be used to shift Windows from a model where execution of code is permitted by default to a model where execution is denied by default. AppLocker is aware of binary EXEs, DLL/OCX libraries, the scripting engines that ship with Windows, and Windows Installer packages. The default rule sets for these categories will only allow code that is installed into the system or program files directories to execute. Once AppLocker is turned on, execution of code is denied by default and an unprivileged user cannot add executable code to the system.

If untrusted users can’t execute new code, then how can Visual Studio possibly work without making developers admin?

Grant Execute on Source/Build Tree: Fail

I thought this would be super simple and that all I would have to do was create Executable, DLL and Script path rules to grant execute on my source tree. At first, it seemed to work and I was off and running. Then I tried to build a big complicated project and the build failed all of a sudden. This solution had post-build events, but so what? I had script rules enabled for the build tree.

Build Actions Fail

Procmon shows that the pre- and post-build events are implemented as temporary batch scripts in %TEMP%. They are named <cryptic-number>.exec.cmd.


Unfortunately %TEMP% is not a usable macro in AppLocker. You have to either create a generic Script rule to allow all *.exec.cmd scripts to execute or create rules for each Visual Studio user like C:\Users\<username>\AppData\Local\Temp\*.exec.cmd. Either way, post-build actions will start to work.

Web Apps Crash with Yellow Screen of Death

Another issue is that running Web apps with the built-in Visual Studio Development Web Server (aka Cassini) fails miserably. The obvious clue that this is AppLocker is “The program was blocked by group policy.”


The problem is that Cassini copies the DLLs and runs them from a subdirectory of %TEMP%\Temporary ASP.NET Files\.

In order for Cassini to work, you have to disable DLL rules or create a DLL allow path rule for every developer in the form C:\Users\<username>\AppData\Local\Temp\Temporary ASP.NET Files\*.dll.

Visual Studio’s HelpLibAgent.exe Crashes

This one is a bit weirder and more surprising. Visual Studio 2010 has a new help system that operates as a local HTTP server. Invoking help with AppLocker DLL rules enabled generates a serious crash.


I’m not sure why it does this, but HelpLibAgent.exe generates a random string, then two .cs files based on that string, and invokes the C# compiler to generate a DLL based on the random string, which is dynamically loaded by HelpLibAgent. This seems weird on the face of it, and there’s nothing in that code that looks like it has to be generated on a per-user basis at all. Weird, weird, weird.


In order for this to work you have to allow any randomly named dll to load out of %TEMP% which means disabling DLL rules or modifying the rule that was necessary for Cassini:

C:\Users\<username>\AppData\Local\Temp\*.dll.

Summary

In order to run Visual Studio with AppLocker a user needs the following rules:

  • DLL, EXE and Script: Allow path on source tree / build directory structure
  • Script: Allow path on %TEMP%\*.exec.cmd
  • DLL: Allow path on %TEMP%\*.dll
  • Unfortunately %TEMP% is not available in AppLocker, so a rule for C:\Users\<username>\AppData\Local\Temp\* has to exist for every <username> needed. These are probably best implemented as local policies.
  • Optional: Allow script *.ps1. (This is pretty safe because PowerShell has its own tight script execution security model.)
  • It’s unfortunate that DLL rules have to be enabled for a well-known location like %TEMP% but that still doesn’t make the DLL rule useless.

    • OCX is still not permitted from %TEMP%
    • AppLocker DLL rules are complementary to CWDIllegalInDllSearch for mitigating DLL hijacking because they provide more granular options. This is particularly important if you need to use a global CWDIllegalInDllSearch setting of 1 or 2 for compatibility reasons.

Once these rules are in place, the experience is seamless. The rules don’t get in the way of anything.

Note that AppLocker script rules only apply to the scripting hosts that ship with Windows: CMD, Windows Scripting Host (.vbs and .js) and PowerShell. Perl, Python, Ruby and other interpreters are not affected by AppLocker policy. Similarly, execution of Java jar files is not affected by AppLocker.

It would be nice if DLL rules were a little smarter. For instance, I would like to be able to allow managed DLLs on some path but not native code.

More Granular Options for CWDIllegalInDllSearch Needed

I’m starting to see a class of issues where plugins rely on their libraries loading from the current working directory (CWD). To me this implies that the 0xffffffff option to completely disable loading from the CWD is not viable for most people in the near term, while the 2 option of disabling network locations leaves open luring attacks based on removable storage like USB thumb drives, or an attack that relies on an evil DLL that an attacker manages to place in an unprotected directory in the user’s profile somewhere.

I would like to see additional options for CWDIllegalInDllSearch. We need something between blocking loading of DLLs from CWD when CWD is a network location and blocking loading from CWD entirely. I want to be able to allow loading from CWD if CWD is a trusted location on my local computer, in order to maintain compatibility with existing software that relies on this behavior, but disable loading from CWD in any untrusted location.

A trusted location would be something that requires Administrator privilege to write into. That means the system folders and the “Program Files” folders.

I hope that Microsoft will realize the need for additional granularity and add some more options to CWDIllegalInDllSearch. The following would cover all the bases, I think:

  • 1 = disable CWD in LoadLibrary() search for WebDAV locations
  • 2 = disable CWD in LoadLibrary() search for all network locations
  • 3 = option 2 plus disable CWD in LoadLibrary() search for removable storage locations
  • 4 = disable CWD in LoadLibrary() search anywhere except “Program Files”
  • 0xffffffff = disable CWD in LoadLibrary() search entirely

WARNING: Note that options 3 and 4 are hypothetical. Setting CWDIllegalInDllSearch to 3 or 4 is currently equivalent to setting CWDIllegalInDllSearch to 0 and will enable the very insecure DLL loading behavior we are trying to eliminate.

My hypothetical CWDIllegalInDllSearch = 4 should provide equivalent security to CWDIllegalInDllSearch = 0xffffffff because anything in Program Files is trusted by definition. But it provides a backwards compatibility cushion for applications that require loading libraries from CWD.

There are two classes of applications that I can imagine which have a legitimate reason to load DLLs outside of Program Files and system.

  • Some kind of run-from-network corporate app
  • The latest generation of apps designed to run with least privilege that install themselves into the current user’s profile or the all-users profile, like Chrome and WebEx, but I’m not worried about these apps.

My feeling is that corporate system administrators can handle provisioning rules so that their apps can run, or virtualize them, and that the new apps installing themselves into unorthodox locations can fend for themselves. I think there is real utility in a mode where CWD is OK in “Program Files” only, and I hope that Microsoft will release an update to KB2264107 to give it to us.

Mozilla Compatible Silverlight 4 Plugin Requires Loading DLLs from CWD

I visited a site yesterday in Chrome that tried to load Silverlight to provide a video player. I have KB2264107 installed and have globally disabled loading of DLLs from the current working directory in order to mitigate luring attacks against apps that use the default insecure DLL loading behavior of LoadLibrary(). Just like the Java plugin for Mozilla, Chrome generated a big fat bonk dialog trying to load the DLLs that the Silverlight plugin uses. The specific missing file is agcore.dll, which is found in “C:\Program Files (x86)\Microsoft Silverlight\4.0.50524.0” on my system.

I tried creating a symlink to agcore.dll so that agcore.dll is in the same directory as chrome.exe, which fixes the bonk, but Silverlight doesn’t work. I just end up with a black box where the movie player should be. I also tried adding the Silverlight directory to $env:path, which removed the bonk, but instead I got the “Install Microsoft Silverlight” button. I tried various combinations of symlinking DLLs and messing with $env:path but I didn’t arrive at a combination that actually works.

The only solution that I found is to dial the CWDIllegalInDllSearch value for Chrome and Firefox to 2 (DLLs not allowed to load from CWD if CWD is any remote, network location) instead of 0xffffffff (it also works to change this globally). I then have to hope that Firefox and Chrome are careful about how they use CWD. I hope they set CWD just for loading the installed plugins in “Program Files” but cannot be lured into loading some evil DLL from a spurious location when doing something like opening an HTML document on a USB stick.
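
Setting the override is a one-time registry edit per executable. A sketch of doing it from an elevated PowerShell, using the same Image File Execution Options key queried below (New-Item complains harmlessly if the key already exists):

PS> cd 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options'
PS> New-Item chrome.exe
PS> New-ItemProperty chrome.exe -Name CWDIllegalInDllSearch -PropertyType DWord -Value 2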


PS> Get-ItemProperty chrome.exe, firefox.exe | select pspath,cwdillegalindllsearch | fl


PSPath                : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVer
                        sion\Image File Execution Options\chrome.exe
CWDIllegalInDllSearch : 2

PSPath                : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVer
                        sion\Image File Execution Options\firefox.exe
CWDIllegalInDllSearch : 2

Java Built with Unsupported Old Compilers

When I turned off DLL loading from the current working directory to defeat DLL preloading luring attacks, one of the things I discovered was that the Java plug-in was broken in Firefox and Chrome. This problem of Java finding its C library is not new at all. The tubes are choked with posts and bug reports about getting various things that depend on Java to work when msvcr71.dll can’t be found. The new CWDIllegalInDllSearch = 0xFFFFFFFF option just exacerbates an existing deployment problem.

PS> cd 'C:\Program Files (x86)\Java\jre6\bin\new_plugin'
PS> dumpbin /dependents .\npjp2.dll
Microsoft (R) COFF/PE Dumper Version 10.00.30319.01
Copyright (C) Microsoft Corporation.  All rights reserved.


Dump of file .\npjp2.dll

File Type: DLL

  Image has the following dependencies:

    USER32.dll
    GDI32.dll
    ADVAPI32.dll
    MSVCR71.dll
    KERNEL32.dll
    ole32.dll

  Summary

        6000 .data
        3000 .rdata
        1000 .reloc
        7000 .rsrc
        4000 .text

The root of the Mozilla-compatible browser Java plugin problem is npjp2.dll, which is dynamically linked to msvcr71.dll. Because it is a plugin, the DLL loading is done by Windows on behalf of the executable (Chrome or Firefox) rather than the plugin. The new_plugin directory includes msvcr71.dll, which probably helped if a browser changed its CWD to the new_plugin directory when loading npjp2.dll, but with searching CWD out of the picture it doesn’t.

Java simply doesn’t do a good job of loading its C runtime correctly on Windows. It’s not a new problem. I don’t understand why they don’t just change a switch on the compiler to /MT instead of /MD and statically link the C runtime into npjp2.dll. That would make the whole problem go away.

There are also some other oddities here. There are 2 different C runtimes used in the 32-bit version of Java 6 for Windows. For some reason they are redistributing msvcr71.dll (C runtime from Visual Studio .NET 2003) and also msvcrt.dll which is supposed to be the name of the private C runtime used by Windows components. However, the msvcrt.dll in the Java directory is actually the C runtime from Visual Studio 6 or possibly a very old Platform SDK.

By implication, Oracle/Sun is using the C/C++ optimizing compilers from Visual Studio 6 (1998) and Visual Studio .NET 2003 to build the 32-bit version of Java. Holy cow, those are 5 and 3 versions back from the current compilers and between 12 and 7 years old. I’m pretty sure that Visual Studio 6 is no longer supported, and unfortunately both predate the side-by-side C runtime distribution system that starts with Visual Studio 2005.

The x64 version of Java is even stranger. It links against a Microsoft x64 C runtime library called msvcrt.dll. Again, this is the name reserved for the private Windows platform C runtime, but this msvcrt.dll has file version 6.10.2207.0, either from a very old version of the Windows Platform SDK that provided x64 compilation support prior to Visual Studio 2005 or from a tool chain that was available by request (and is no longer available) for Visual Studio 2003.

It seems like the Java team has made a bit of a fetish of using really old compilers for the Microsoft platforms. I can understand that there is a risk of breaking stuff when upgrading a tool chain but this has been taken a bit to the extreme by the Java build team. There is a cost to testing a huge platform like Java when building with a new tool chain. Sun was cash-constrained and, although popular, Java SE didn’t really make them any money directly.

Good News for Java 7 (Probably)

It looks like the problem is being addressed. Judging from the OpenJDK 7 build instructions, Oracle is upgrading to the C/C++ compiler from Visual Studio 2010.

BEGIN WARNING: At this time (Spring/Summer 2010) JDK 7 is starting a transition to use the newest VS2010 Microsoft compilers. These build instructions are updated to show where we are going. We have a QA process to go through before official builds actually use VS2010. So for now, official builds are still using VS2003. No other compilers are known to build the entire JDK, including non-open portions. So for now you should be able to build with either VS2003 or VS2010. We do not guarantee that VS2008 will work, although there is sufficient makefile support to make at least basic JDK builds plausible. Visual Studio 2010 Express compilers are now able to build all the open source repositories, but this is 32 bit only. To build 64 bit Windows binaries use the 7.1 Windows SDK. END WARNING.

The 32-bit OpenJDK Windows build requires Microsoft Visual Studio C++ 2010 (VS2010) Professional Edition or Express compiler. The compiler and other tools are expected to reside in the location defined by the variable VS100COMNTOOLS which is set by the Microsoft Visual Studio installer.

So maybe in 2011 Java will have its C runtime library sorted out by virtue of having a supported global mechanism to register the Visual Studio 2010 C runtime.

Java Browser Plugin for Mozilla Vulnerable to DLL Preloading Attack

The “Next Generation Java Plug-in 1.6.0_21 for Mozilla browsers” 32-bit version for Windows uses CWD to load its C runtime library (msvcr71.dll). If you have globally disabled loading libraries from the current working directory (CWD) by setting CWDIllegalInDllSearch to 0xffffffff, you will get a bonk like the one shown at the right. Firefox also fails to load the JVM, but it doesn’t give any feedback about why it isn’t working.

Note that this is not a general problem with Java. Java desktop applications like Eclipse work, and the Java ActiveX plugin for IE works. The problem is specific to the NPAPI plugin, and this indicates that the Java plugin for Google Chrome and Mozilla Firefox is one of the applications that is vulnerable to a DLL preloading luring attack. Specifically, when a Java applet is loaded, the JVM will cause the browser to load msvcr71.dll from whatever is the current working directory for the browser.

The error can be fixed by dialing back your global CWDIllegalInDllSearch to 2 or creating an exception for Chrome and Firefox. However, I would rather not open those programs to attack from a USB drive.

The first location that Windows uses to search for a DLL is the directory containing the binary executable. Placing a copy of msvcr71.dll in the same directory with Firefox and Chrome fixes the problem. The problem with that is that if Java updates msvcr71.dll to a newer version, then Chrome and Firefox will cause the JRE to load the wrong C runtime, possibly causing bad things to happen. Another option is to create a symbolic link (which requires Vista or later).

Incidentally, the mklink command is not a standalone utility; it is a built-in of the cmd.exe shell. If you want to use mklink via PowerShell, you need a function to invoke cmd.exe.

function mklink { & "$env:systemroot\system32\cmd.exe" /c mklink $args }

Running mklink requires Administrator privilege.

PS> cd C:\Users\breiter\appdata\Local\Google\Chrome\Application
PS> mklink msvcr71.dll 'C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll'
symbolic link created for msvcr71.dll <<===>> C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll
PS> cd 'C:\users\breiter\AppData\Local\Google\Chrome SxS\Application'
PS> mklink msvcr71.dll 'C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll'
symbolic link created for msvcr71.dll <<===>> C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll
PS> cd 'C:\Program Files (x86)\Mozilla Firefox'
PS> mklink msvcr71.dll 'C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll'
symbolic link created for msvcr71.dll <<===>> C:\Program Files (x86)\Java\jre6\bin\msvcr71.dll
PS>

With the symlink fix in place, the Java version test page loads correctly in Chrome with no bonks, even with loading libraries from CWD disabled.

