I Modified vpnc Cisco VPN Client to Use OS X Native User Tunnels

OS X Built-in Cisco IPSec VPN Sucks

My company works with a client site that uses a Cisco ASA-based IPSec VPN for remote access. I'm not a fan. Theoretically, there is a Cisco IPSec VPN client built into OS X, based on racoon(8) from the KAME project (which wound down in 2006). We've been trying to use it since OS X 10.8 "Mountain Lion" without a lot of success. When you click Connect, the OS X IPSec GUI dynamically generates racoon config files and invokes the racoon binary as root for you. It doesn't work as well as one would hope.

  • Routes for subnets within the VPN are not reliably configured
  • The VPN disconnects randomly
  • The built-in VPN client does not reconnect automatically
  • I have to authenticate with a password manually every time I want to connect
  • I also hate the bizarre rectangle-with-vertical-lines VPN status icon in the menu bar

Other Options

OK, not all of these are significant issues, but the random disconnects and unreliable subnet route configuration are serious problems. One option is to obtain the latest source code for racoon(8), build it, and either replace the system version or install a parallel one. I'm not even sure this is racoon's fault; I have a suspicion that Cisco is doing something that is not strictly standards compliant. In any case, just building my own racoon doesn't mean it would work any better. I would probably have to get deep into the weeds of esoteric debugging to find what is actually causing my problems and figure out a workaround, because I can't change the Cisco server software. Another option is strongSwan. I have to be honest: strongSwan looks like a huge, complex package to try when other options don't work. Finally, there is vpnc, which is actually where I started, because I used the cisco-decrypt binary from vpnc to deobfuscate the enc_GroupPwd field from the .pcf configuration file for the Cisco VPN client in order to provide the Shared Secret field for the OS X Cisco VPN client configuration.
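
For reference, a minimal sketch of that decryption step, assuming your Cisco profile is saved as corporate.pcf and that cisco-decrypt takes the obfuscated hex string as its argument:

cisco-decrypt $(grep '^enc_GroupPwd=' corporate.pcf | cut -d= -f2)

The output is the cleartext group password, which becomes the Shared Secret in the OS X client or the IPSec secret in vpnc.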

It also happens that I'm a MacPorts user, and vpnc is a package in MacPorts while strongSwan and racoon are not (although strongSwan is available these days through Homebrew). Since I had already installed vpnc in order to get cisco-decrypt, I tried vpnc as an alternative to racoon. It didn't totally eliminate the disconnects, but it would generally stay connected for a really long time as long as the network was up, and since the vpnc config file stores the VPN password, I didn't have to remember it and type the damn thing in all the time. Then I had a brainwave: I could use launchd to run it in foreground mode (--no-detach) with the KeepAlive option. That way, whenever a disconnect did happen, the vpnc process would exit and launchd would restart it, reconnecting the tunnel automatically.

Nirvana.

Yosemite Throws a Wrench in TunTapOSX

vpnc relies on tuntaposx to provide character devices for attaching network tunnels or taps. On OS X, this is a third-party kernel extension, or kext. In OS X 10.10 Yosemite, only signed kernel extensions will load by default, so vpnc stopped working when I upgraded.

Remediation A

The simplest fix is to put Yosemite into kernel extension developer mode by setting a kernel boot argument and rebooting. This essentially reverts to the Mavericks-and-earlier behavior, in which unsigned kexts load and work.

sudo nvram boot-args="kext-dev-mode=1"

This can be reverted thusly:

sudo nvram -d boot-args

Remediation B

The idea of requiring signed kexts seems like a good security measure: it limits the ability of anyone malicious to inject code into the kernel, so I would prefer to keep it. It turns out that kext signing requires a special dispensation from Apple; a normal OS X developer account doesn't cut it. You need to ask for a kext-signing certificate and they have to approve it. I didn't even try. I did try using the signed tuntaposx kexts out of Tunnelblick, and that works, but I also had a kernel panic happen while doing so. I don't know that it was related, but be forewarned.

Final Solution

I noticed that OpenVPN kept working when vpnc stopped. When I first started using OpenVPN, it required tuntaposx in order to work, just like vpnc. I also noticed that OpenVPN routes were showing up on utun devices rather than tun. Intrigued, I looked into this a little deeper. It turns out that the xnu kernel in Darwin 10 shipped a feature called Native User Tunnels as part of OS X 10.6 Snow Leopard, which I think came out of NetBSD and that very same KAME project that spawned racoon. Native User Tunnels is a kernel feature that allows binding a tunnel to a special socket and reading and writing it with standard socket APIs. At some point OpenVPN started using this feature on OS X, and I wanted vpnc to do the same.
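
You can see the difference from a shell while a tunnel is up. utun-based tunnels show up as utun interfaces with routes attached, no kext required (interface numbers will vary):

ifconfig | grep -A1 '^utun'
netstat -rn | grep utun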

XNU Native User Tunnels for VPNC

I grafted native user tunnel support into vpnc. You can build it yourself. The prerequisites are libgpg-error, libgcrypt and GnuTLS (libgpg-error is also a prerequisite of libgcrypt). The build process also needs pkg-config to find where the libraries are installed. Once you have those, for example via sudo port install libgcrypt gnutls pkgconfig, you can clone my repo and build vpnc. My version of vpnc patched for native user tunnels will still use tuntaposx for tap-based VPNs, or if you have more than 256 utun devices in use.

git clone https://github.com/breiter/vpnc.git
cd vpnc
make
sudo make install

The binaries install to /usr/local/sbin by default. You configure your default VPN connection in /etc/vpnc/default.conf. See man vpnc for details.

A sample config file might look something like this, where you supply the parts delimited with ** (the ** markers are not part of the file format; they mark what you replace).

IPSec gateway **vpn-server-hostname-or-ip**
IPSec ID **GroupName-from-.pcf**
IPSec secret **output-of-cisco-decrypt-here**
IKE Authmode psk
Xauth username **my-corporate-username**
Xauth password **super-secure-password**
NAT Traversal Mode cisco-udp
DPD idle timeout (our side) 0
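
Before handing things over to launchd, it's worth a manual smoke test. A sketch, assuming the config above is saved as /etc/vpnc/default.conf (vpnc-disconnect is installed alongside vpnc):

sudo vpnc --no-detach --debug 1 /etc/vpnc/default.conf
# from another shell, tear the tunnel down:
sudo vpnc-disconnect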

Controlling VPNC with Launchd

I mentioned earlier that the happy scenario for vpnc is combining it with launchd, so that launchd automatically restores your VPN tunnel after it disconnects for any reason.

Place com.wolfereiter.vpnc.plist in /Library/LaunchDaemons.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Disabled</key>
<true/>
<key>Label</key>
<string>com.wolfereiter.vpnc</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/sbin/vpnc</string>
<string>--debug</string>
<string>2</string>
<string>--no-detach</string>
<string>/etc/vpnc/default.conf</string>
</array>
<key>StandardErrorPath</key>
<string>/var/log/vpnc/vpnc.log</string>
<key>StandardOutPath</key>
<string>/var/log/vpnc/vpnc.log</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<!-- NetworkState key is no longer implemented in OS X 10.10 Yosemite.
<dict>
<key>NetworkState</key>
<true/>
</dict> -->
<true/>
</dict>
</plist>

Create a directory to hold the log file defined in this plist.

sudo mkdir /var/log/vpnc

Create vpnc.conf in /etc/newsyslog.d to clean up old logs.

# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/vpnc/*.log 644 3 1000 * J
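
newsyslog supports a dry run, which is a handy way to confirm the rule parses (-n prints what would be done without doing it, -v is verbose):

sudo newsyslog -nv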

Create vpnc-start script in /usr/local/bin.

#!/bin/sh
if [ "$(id -u)" -ne 0 ]; then
    SELF=`echo $0 | sed -ne 's|^.*/||p'`
    echo "$SELF must be run as root." 1>&2
    echo "try: sudo $SELF" 1>&2
    exit 1
fi
PLIST=/Library/LaunchDaemons/com.wolfereiter.vpnc.plist
CONF=`grep \.conf $PLIST | sed 's/<[^>]*>//g' | tr -d " \t"`
GATEWAY=`grep gateway $CONF`
ERROR=$( { /bin/launchctl load -w $PLIST; } 2>&1 )
if [ -z "$ERROR" ]; then
    echo "starting vpnc daemon connection to $GATEWAY."
else
    echo "$ERROR"
fi

Create vpnc-stop script in /usr/local/bin.

#!/bin/sh
if [ "$(id -u)" -ne 0 ]; then
    SELF=`echo $0 | sed -ne 's|^.*/||p'`
    echo "$SELF must be run as root." 1>&2
    echo "try: sudo $SELF" 1>&2
    exit 1
fi
PLIST=/Library/LaunchDaemons/com.wolfereiter.vpnc.plist
CONF=`grep \.conf $PLIST | sed 's/<[^>]*>//g' | tr -d " \t"`
GATEWAY=`grep gateway $CONF`
ERROR=$( { /bin/launchctl unload -w $PLIST; } 2>&1 )
if [ -z "$ERROR" ]; then
    echo "stopping vpnc daemon connection to $GATEWAY."
else
    echo "$ERROR"
fi

Once you have everything set up and a working default.conf in /etc/vpnc, you can use the vpnc-start command to bring the tunnel up in the background and vpnc-stop to close it. Once vpnc-start is invoked, launchd will keep the tunnel alive through sleep/wake cycles and moves between wired and wireless networks.

$ sudo vpnc-start
Password:
starting vpnc daemon connection to IPSec gateway vpn.corporate.com.
$ sudo vpnc-stop
stopping vpnc daemon connection to IPSec gateway vpn.corporate.com.
$ 
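
If you want to confirm that launchd has the job loaded and the daemon running, you can ask it directly (the label is the one from the plist above):

sudo launchctl list | grep com.wolfereiter.vpnc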

Happy tunneling.


Backup Windows Server on EC2 with PowerShell and AWS CLI

When we first started working with Windows Server on Amazon EC2, we ran into the problem of how to back up SQL Server databases. My initial thought was to somehow mount S3 as a volume and have SQL Server write backup jobs to it. That turns out to be kind of hard.

After a little head-scratching, I realized that EBS volume snapshots are stored in S3. Therefore, all we need to do is mount an EBS volume and have SQL Server write backup jobs to that and then snapshot that volume. Oh, and unmount it from the running Windows server before snapshot to make sure that we are taking clean snapshots.

It turns out to not be too hard to make this happen on Windows Server 2012 Core using a PowerShell script that drives the AWS CLI and does the hoodoo with the Windows volume manager to mount and dismount the EBS backup volume.

This solution requires the PowerShell storage cmdlets, which ship with Windows Server 2012, and the Amazon Web Services Python-based command-line interface (AWS CLI). It works great on Windows Server Core.

The biggest problem I ran into was that the script would run just fine when invoked manually, or when I triggered the job from schtasks while logged in, but it would hang and fail when run from Scheduled Tasks in the middle of the night. Long story short: when schtasks runs a job with nobody logged in, it supplies the .DEFAULT environment instead of the environment of the task's user context. That meant the AWS CLI didn't receive the correct %USERPROFILE% environment variable and was not able to locate its config file with the user id and key.

The simplest solution was to wrap the invocation of the powershell script in a cmd batch script:

set USERPROFILE=C:\Users\Administrator\

powershell.exe -file "C:\Program Files\Invoke-BackupJob.ps1" -path E:\ -diskNumber 2 -ec2VolumeID vol-<id-number-here> -description "Important Backup" > "C:\Program Files\Utility\Log.txt"

Use the PowerShell Get-Disk cmdlet to find the disk number of the EBS volume in Windows, and the EC2 console or AWS CLI to find the EBS volume ID in EC2.
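
For example, something like the following pairing shows both halves of that mapping (the instance ID is a placeholder you substitute):

Get-Disk | Format-Table Number, FriendlyName, SerialNumber
aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=<your-instance-id>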

The full PowerShell script is in this gist: https://gist.github.com/breiter/7887782

Compiling Mono 3.x from Source with LLVM Code Generation in Ubuntu LTS x64

The packages for Mono in Debian and Ubuntu lag far behind what is available from Xamarin for OS X, SUSE and openSUSE. The 2.10.x versions of Mono are .NET 2.0 runtime compatible and use Mono's older Boehm garbage collector, which tends to leak objects. The new generational "sgen" garbage collector is far superior. All of the tutorials for building on Fedora/RHEL/CentOS and Debian/Ubuntu that I have seen build with the built-in code generation engine rather than LLVM. LLVM provides a significant speedup of 20-30% in the generated code, but Mono can't link against the version of LLVM that ships in Debian Wheezy or Ubuntu. You have to build a tweaked version from source, which is the tricky bit.

My goal was to build the same version of Mono that is shipped by Xamarin for OS X on Ubuntu 12.04 Precise Pangolin, including LLVM. That is mono 3.2.5 at the time of this writing.

Prerequisites

Mono is hosted on GitHub so we need git. It also requires GNU autotools, make and g++ to drive the build. 

sudo apt-get install libtool autoconf g++ gettext make git

We're building something that isn't managed by the system package manager, so according to the UNIX Filesystem Hierarchy Standard the build product should go under /opt. I'm going to use /opt/local, so before going any further let's add that to the $PATH by editing /etc/environment with vi or your editor of choice:

PATH="/opt/local/sbin:/opt/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

Also, we need to change the current environment:

export PATH=/opt/local/sbin:/opt/local/bin:$PATH

Libgdiplus

The first component of Mono that we need to build from source is libgdiplus. Libgdiplus is an open source implementation of the Windows GDI+ API for graphics and text rendering. Libgdiplus has a number of prerequisites because it basically just maps GDI+ function calls onto equivalent Linux libraries:

sudo apt-get install pkg-config libglib2.0-dev libpng12-dev libx11-dev libfreetype6-dev libfontconfig1-dev libtiff-dev libjpeg8-dev libgif-dev libexif-dev

mkdir ~/src; mkdir ~/src/mono; cd ~/src/mono

git clone https://github.com/mono/libgdiplus.git

cd libgdiplus

./configure --prefix=/opt/local

make

sudo make install

Mono-LLVM

Mono has a tweaked fork of LLVM. By default it will try to build x86 and x86_64 targets on Ubuntu x64 but the build will fail because there is no x86 runtime support installed.

cd ~/src/mono

git clone https://github.com/mono/llvm.git

cd llvm

git checkout mono

./configure --prefix=/opt/local --enable-optimized --enable-targets=x86_64

make

sudo make install

Mono with LLVM

Each official release of Mono is tagged in git, so check out the version you want to build. In this case, I'm building Mono 3.2.5 against the LLVM and libgdiplus libraries we just built. Since there is no working version of Mono on the system, the very first time you build you need to run make get-monolite-latest, which pulls down a minimal Mono-based C# compiler to bootstrap the build.

cd ~/src/mono

git clone https://github.com/mono/mono.git

cd mono

git checkout mono-3.2.5

export MONO_USE_LLVM=1

./autogen.sh --prefix=/opt/local --enable-llvm=yes --with-sgen=yes --with-gc=sgen

make get-monolite-latest

make EXTERNAL_MCS=${PWD}/mcs/class/lib/monolite/gmcs.exe

sudo make install

Assuming all went well, you should now have a working build of Mono 3.2.5 (or newer) in /opt/local/bin/, and running mono --version should include LLVM: yes.

Depending on what you want to do with Mono, you may now want to build XSP (ASP.NET server, Apache module and FastCGI module for nginx) and/or F# 3.x.

XSP

The xsp2 server exposes the .NET Framework 2.0/3.0/3.5 API and the xsp4 server provides the 4/4.5 API. The main tweak here is to set the PKG_CONFIG_PATH environment variable so that the configure script can find Mono in /opt/local.

cd ~/src/mono

git clone https://github.com/mono/xsp.git

cd xsp

git checkout 3.0.11

PKG_CONFIG_PATH=/opt/local/lib/pkgconfig ./autogen.sh --prefix=/opt/local

make

sudo make install
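
Once installed, a quick smoke test might look like this (the directory and port here are placeholders to adapt):

cd /path/to/webapp
/opt/local/bin/xsp4 --port 8080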

F# 3.1

cd ~/src/mono

git clone https://github.com/fsharp/fsharp

cd fsharp

git checkout fsharp_31

./autogen.sh --prefix=/opt/local

make

sudo make install

How to Launch AirPort Utility 5.6.1 on OS X 10.9 Mavericks

Apple changed the 802.11 framework that AirPort Utility 5.6 links against. The symbols exported by the new 802.11 framework don't match the ones AirPort Utility 5.6 needs, so OS X can't launch the app.

However, it is possible to get the app to run if you have a copy of the old framework binary. The file in Mountain Lion is located in /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current. Ideally, save a copy of the Apple80211 file in there before you upgrade.

Once you have Mavericks, you can put it back in an alternate subdirectory of /System/Library/PrivateFrameworks/Apple80211.framework/Versions/, such as MountainLion:

cd /System/Library/PrivateFrameworks/Apple80211.framework/Versions/
sudo mkdir MountainLion
sudo cp ~/Desktop/Apple80211 MountainLion


It doesn’t have to be there, just somewhere. Once you have a copy of the old Apple80211 framework binary, you can inject it into the Airport Utility process instead of using the default 802.11 framework.

Open a new Terminal:

export DYLD_INSERT_LIBRARIES=/System/Library/PrivateFrameworks/Apple80211.framework/\
Versions/MountainLion/Apple80211
cd /Applications/Utilities/AirPort\ Utility\ 5.6.1.app/Contents/MacOS
./Airport\ Utility&

[Screenshot: AirPort Utility 5.6.1 running on Mavericks]

AirPort Utility should have launched for you. You will still have the incompatible app overlay on the Dock icon but it should launch and work.

With that working, you can wrap the shell script in an Automator app that you can launch from Finder or Spotlight.

  • Launch Automator and create a new Application. 
  • Add a Run Shell Script action.
  • Paste in the script above.
  • Assuming the Run button works, save the app and copy it to your /Applications/Utilities folder.

[Screenshot: the Automator application wrapping the launch script]

For convenience, you can download and use the Mountain Lion Apple80211 framework binary and Automator app.

Using Chrome with Tor on OS X

I'm living and traveling overseas. I want to have Tor as an option, but I really just want to use it with Chrome, which I like a lot. My goal is to have the option to avoid national firewalls in the countries that use them. I've generally used a SOCKS proxy over SSH in the past, but it is good to have options. Plus, I have been reading Cory Doctorow's Homeland (sequel to Little Brother), in which Tor is a prominent plot point, like "Finux" (Linux) and "Ordo" (PGP/GPG) in Cryptonomicon.

I realize that Chrome sends information back to Google. I am even logged into Chrome, so this procedure isn't hiding anything from them. Perhaps Chromium would be better, but I'm not sure I want to constantly build from source every few weeks, because Chromium is huge. These people have packaged vanilla Chromium plus Sparkle to update it. I may look into this in the future.

The simplest way to use Tor for anonymized browsing is to download and install the Tor Browser Bundle. There are some aspects of this that I don’t find ideal — mostly I want to maintain Tor as part of my UNIX environment on OS X via MacPorts. I also like to have my hands in all the moving parts to learn how they work.

$ sudo port install tor

---> Updating database of binaries: 100.0%
---> Scanning binaries for linking errors: 100.0%
---> No broken files found.

$ tor
Mar 12 12:13:42.839 [notice] Tor v0.2.3.25 (git-17c24b3118224d65) running on Darwin.
Mar 12 12:13:42.840 [notice] Tor can’t help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Mar 12 12:13:42.840 [notice] Configuration file "/opt/local/etc/tor/torrc" not present, using reasonable defaults.
Mar 12 12:13:42.843 [notice] We were compiled with headers from version 2.0.19-stable of Libevent, but we’re using a Libevent library that says it’s version 2.0.21-stable.
Mar 12 12:13:42.843 [notice] Initialized libevent version 2.0.21-stable using method kqueue. Good.
Mar 12 12:13:42.843 [notice] Opening Socks listener on 127.0.0.1:9050
Mar 12 12:13:42.000 [notice] Parsing GEOIP file /opt/local/share/tor/geoip.
Mar 12 12:13:42.000 [notice] This version of OpenSSL has a known-good EVP counter-mode implementation. Using it.
Mar 12 12:13:42.000 [notice] OpenSSL OpenSSL 1.0.1e 11 Feb 2013 looks like version 0.9.8m or later; I will try SSL_OP to enable renegotiation
Mar 12 12:13:43.000 [notice] Reloaded microdescriptor cache. Found 3239 descriptors.
Mar 12 12:13:43.000 [notice] We now have enough directory information to build circuits.
Mar 12 12:13:43.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Mar 12 12:13:44.000 [notice] Heartbeat: Tor’s uptime is 0:00 hours, with 1 circuits open. I’ve sent 0 kB and received 0 kB.
Mar 12 12:13:44.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Mar 12 12:13:45.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Mar 12 12:13:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Mar 12 12:13:48.000 [notice] Bootstrapped 100%: Done.

Tor creates a SOCKS proxy listening on localhost 9050. My first thought was to create an OS X network Location for Tor which configures all of my network interfaces to use SOCKS on localhost 9050.

[Screenshot: a "Tor" network Location in OS X Network preferences]

This does work, in the sense that applications that use the OS networking stack will switch to passing their traffic to SOCKS on localhost 9050, but it isn't necessarily good enough for anonymizing with Tor because of the DNS leaking problem. In particular, browsers (specifically Chrome) don't send their DNS traffic to the SOCKS server by default, which weakens your anonymization by leaking unencrypted UDP DNS requests to your ISP, and also interferes with resolving Tor services on .onion domains.
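
You can demonstrate the right behavior from the command line with curl, whose --socks5-hostname option passes the hostname to the proxy for resolution rather than resolving it locally:

curl --socks5-hostname localhost:9050 https://check.torproject.org/

That resolve-at-the-proxy behavior is exactly what we need to get out of Chrome.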

I wanted to use Chrome with Tor, so this presented a problem. Poking around, I discovered a Chromium design document with the solution for forcing Chrome to send all traffic, including DNS, to a SOCKS server. It requires passing arguments to Chrome or Chromium when starting the app:

--proxy-server="socks5://myproxy:8080"
--host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE myproxy"

In order to use this mechanism, you have to exit all Chrome/Chromium processes and launch a new process with the appropriate flags.

killall Google\ Chrome
sleep 1 # give processes a chance to exit before launching
open -a Google\ Chrome --args --proxy-server="socks5://localhost:9050" --host-resolver-rules="MAP * 0.0.0.0, EXCLUDE localhost"

A nifty feature of OS X is Automator, which can turn a script into an app via the Application document type. Start Automator and create a new Application document and add the “run a shell script” Action and paste in the script above. Automator will then allow you to save a .app file which can live in your Applications folder.


I saved this automation as “Google Chrome for Tor.app”. Launching “Google Chrome for Tor” will close all my sessions in Chrome and launch a new Chrome process tree configured as a SOCKS client on my local Tor proxy. Using the chrome://net-internals URL verifies that Chrome is talking to Tor and also sending all of its DNS requests through Tor.


Also, as an aside and note to self: SSH can be used with Tor via netcat, so that the SSH tunnel passes through the Tor network. This is useful if ssh over TCP 22 is blocked or monitored. It is bloody slow over my relatively slow, high-ish-latency connection in Africa; it reminds me of SSH over GPRS.
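
For my own future reference, the netcat trick looks roughly like this, assuming the BSD nc that ships with OS X (-X 5 selects SOCKS5, -x points at the Tor proxy):

ssh -o ProxyCommand='nc -X 5 -x 127.0.0.1:9050 %h %p' user@example.com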


Async Google Spellcheck API Adaptor for TinyMCE

I recently added TinyMCE to a project in order to provide a stripped-down rich text editor with bold, italic and underline capability. I discovered that the spell check functionality requires either a client-side plugin for IE or a server-side JSON-RPC implementation called by TinyMCE via Ajax. Unfortunately, the only server-side implementations provided by the TinyMCE project are in PHP, and my project is in ASP.NET MVC 4.

Looking at the PHP implementations, one option is to adapt the Google Spellcheck API, which I didn't even know existed. Basically, this API allows you to post an XML document containing a list of space-delimited words and get back a document defining the substrings that are misspelled.
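
You can poke at the API directly with curl. A sketch, using the same request format shown in the comment block of the class below:

curl -s 'https://www.google.com/tbproxy/spell?lang=en&hl=en' \
  --data '<?xml version="1.0" encoding="utf-8" ?><spellrequest textalreadyclipped="0" ignoredups="0" ignoredigits="1" ignoreallcaps="1"><text>Ths is a tst</text></spellrequest>'

The o and l attributes in the response give the offset and length of each misspelled substring in the text you posted.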

Using some examples of how the API works on the Google side, I was able to throw together a class that invokes it using the new async/await pattern in C#, creating a Google Spellcheck API client that doesn't block while waiting for its result.

using System;
using System.IO;
using System.Text;
using System.Net;
using System.Xml;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Diagnostics;

namespace WolfeReiter.Web.Utility
{
/*
 * http post to http://www.google.com/tbproxy/spell?lang=en&hl=en
 * 
 * Google spellcheck API request looks like this.
 * 
 * <?xml version="1.0" encoding="utf-8" ?> 
 * <spellrequest textalreadyclipped="0" ignoredups="0" ignoredigits="1" ignoreallcaps="1">
 * <text>Ths is a tst</text>
 * </spellrequest>
 * 
 * The response look like ...
 * 
 * <?xml version="1.0" encoding="UTF-8"?>
 * <spellresult error="0" clipped="0" charschecked="12">
 * <c o="0" l="3" s="1">This Th's Thus Th HS</c>
 * <c o="9" l="3" s="1">test tat ST St st</c>
 * </spellresult>
 */

    public class GoogleSpell
    {
        const string GOOGLE_REQUEST_TEMPLATE = "<?xml version=\"1.0\" encoding=\"utf-8\" ?><spellrequest textalreadyclipped=\"0\" ignoredups=\"0\" ignoredigits=\"1\" ignoreallcaps=\"1\"><text>{0}</text></spellrequest>";

        public async Task<IEnumerable<string>> SpellcheckAsync(string lang, IEnumerable<string> wordList)
        {
            //convert list of words to space-delimited string.
            var words = string.Join(" ", wordList);
            var result = (await QueryGoogleAsync(lang, words));

            var doc = new XmlDocument();
            doc.LoadXml(result);

            // Build misspelled word list
            var misspelledWords = new List<string>();
            foreach (var node in doc.SelectNodes("//c"))
            {
                var cElm = (XmlElement)node;
                //google sends back bad word positions to slice out of original data we sent.
                try
                {
                    var badword = words.Substring(Convert.ToInt32(cElm.GetAttribute("o")), Convert.ToInt32(cElm.GetAttribute("l")));
                    misspelledWords.Add(badword);
                }
                catch( ArgumentOutOfRangeException e)
                {
                    Trace.WriteLine(e);
                    Debug.WriteLine(e);
                }
            }
            return misspelledWords;
        }

        public async Task<IEnumerable<string>> SuggestionsAsync(string lang, string word)
        {
            var result = (await QueryGoogleAsync(lang, word));

            // Parse XML result
            var doc = new XmlDocument();
            doc.LoadXml(result);

            // Build misspelled word list
            var suggestions = new List<string>();
            foreach (XmlNode node in doc.SelectNodes("//c"))
            {
                var element = (XmlElement)node;
                if(!string.IsNullOrWhiteSpace(element.InnerText))
                {
                    foreach (var suggestion in element.InnerText.Split('\t'))
                    {
                        if (!string.IsNullOrEmpty(suggestion)) { suggestions.Add(suggestion); }
                    }
                }
            }

            return suggestions;
        }

        async Task<string> QueryGoogleAsync(string lang, string data)
        {
            var scheme     = "https";
            var server     = "www.google.com";
            var port       = 443;
            var path       = "/tbproxy/spell";
            var query      = string.Format("?lang={0}&hl={1}", lang, data);
            var uriBuilder = new UriBuilder(scheme, server, port, path, query);
            string xml     = string.Format(GOOGLE_REQUEST_TEMPLATE, EncodeUnicodeToASCII(data));

            var request           = WebRequest.CreateHttp(uriBuilder.Uri);
            request.Method        = "POST";
            request.KeepAlive     = false;
            request.ContentType   = "application/PTI26";
            request.ContentLength = xml.Length;

            // Google-specific headers
            var headers = request.Headers;
            headers.Add("MIME-Version: 1.0");
            headers.Add("Request-number: 1");
            headers.Add("Document-type: Request");
            headers.Add("Interface-Version: Test 1.4");

            using (var requestStream = (await request.GetRequestStreamAsync()))
            {
                var xmlData = Encoding.ASCII.GetBytes(xml);
                requestStream.Write(xmlData, 0, xmlData.Length);

                var response = (await request.GetResponseAsync());
                using (var responseStream = new StreamReader(response.GetResponseStream()))
                {
                    return responseStream.ReadToEnd();
                }
            }
        }

        string EncodeUnicodeToASCII(string s)
        {
            var builder = new StringBuilder();
            foreach(var c in s.ToCharArray())
            {
                //encode Unicode characters that can't be represented as ASCII
                if (c > 127) { builder.AppendFormat( "&#{0};", (int)c); }
                else { builder.Append(c); }
            }
            return builder.ToString();
        }

    }
}

The GoogleSpell class above exposes two methods: SpellcheckAsync and SuggestionsAsync.

My MVC controller class exposes this functionality to TinyMCE by translating JSON back and forth for the GoogleSpell class.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Web;
using System.Web.Mvc;
using WolfeReiter.Web.Utility;

namespace MvcProject.Controllers
{
    public class TinyMCESpellcheckGatewayController : AsyncController
    {
        [HttpPost]
        public async Task<JsonResult> Index(SpellcheckRequest model)
        {
            var spellService = new GoogleSpell();
            IEnumerable<string> result = null;
            if(string.Equals(model.method, "getSuggestions", StringComparison.InvariantCultureIgnoreCase))
            {
                result = (await spellService.SuggestionsAsync(model.@params.First().Single(), model.@params.Skip(1).First().Single()));
            }
            else //assume checkWords
            {
                result = (await spellService.SpellcheckAsync(model.@params.First().Single(), model.@params.Skip(1).First()));
            }
            string error = null;
            return Json( new { result, id = model.id, error } );
        }

        //class models JSON posted by TinyMCE allows MVC Model Binding to "just work"
        public class SpellcheckRequest
        {
            public SpellcheckRequest()
            {
                @params = new List<IEnumerable<string>>();
            }
            public string method { get; set; }
            public string id { get; set; }
            public IEnumerable<IEnumerable<string>> @params { get; set; }
        }
    }
}

Integrating the above controller with TinyMCE is straightforward. All that needs to happen is to include the "spellchecker" plugin and toolbar button, and set spellchecker_rpc_url to point to the controller.

/*global $, jQuery, tinyMCE, tinymce */
/// <reference path="jquery-1.8.3.js" />
/// <reference path="jquery-ui-1.8.24.js" />
/// <reference path="modernizr-2.6.2.js" />
/// <reference path="tinymce/tinymce.jquery.js" />
/// <reference path="tinymce/tiny_mce_jquery.js" />
(function () {
    "use strict";

    $(document).ready(function () {

        $('textarea.rich-text').tinymce({
            mode: "exact",
            theme: "advanced",
            plugins: "safari,spellchecker,paste",
            gecko_spellcheck: true,
            theme_advanced_buttons1: "bold,italic,underline,|,undo,redo,|,spellchecker,code",
            theme_advanced_statusbar_location: "none",
            spellchecker_rpc_url: "/TinyMCESpellcheckGateway", //<-- point TinyMCE to GoolgeSpell adaptor controller
            /*strip pasted microsoft office styles*/
            paste_strip_class_attributes: "mso"
        });
       
    });
}());

That's all there is to it. Here's how TinyMCE renders on a <textarea class="rich-text"></textarea>.

[Screenshot: TinyMCE rendering with the spellchecker toolbar]

Solving Very Slow Visual Studio Build Times in VMWare

In 2009, we chose 15″ MacBook Pro BTO machines over Dell, HP and Lenovo offerings. Apple offered us the best hardware and equivalent lease terms, but with much simpler servicing done by EFT rather than (often incorrect) paper statements and checks. 99% of the time we ran these MacBooks with Windows 7 under Boot Camp. When I started doing some iOS development, I ran Snow Leopard and Xcode under VirtualBox, and when I got fed up with the flakiness, I used VMware Workstation. OS X didn't run very well under virtualization, mostly because accelerated Quartz Extreme drivers don't exist for VMware Workstation. Still, it actually worked well enough and was much more convenient than dual booting, which is such a huge time suck. When the lease period ended, we renewed with Apple and decided to just use OS X as the host environment for a variety of reasons.

The transition went very smoothly. I was using a VM with Windows Server 2008 and Visual Studio 2010 for primary .NET web development. Configuring IIS Express to serve outside of localhost, bound to a host-only adapter, is great for cross-browser testing. It can be even more useful to enable remote access to Fiddler and point external browsers at the Fiddler proxy running in the VM, getting client debugging and HTTP sniffing at the same time. All of this was working out great.
 
At the start of October I downloaded Windows Server 2012 and Visual Studio 2012 and created a new development VM. To minimize disk storage, I moved my source tree to a folder in the host OS X environment and exposed it to Windows via the "Shared Folders" feature of VMware Fusion 5.x. At the same time I started working on a new project.


The Slowening

Once the project grew to 20, 30 and then 50k lines of C# code, the build times became horrifically slow. Combined with running unit tests, a build became a bathroom-break-or-cup-of-coffee event, like building a C project 15 years ago. Builds would show csc.exe running at ~50% CPU (i.e. one core) and some other processes totaling ~70% CPU. The VM was not memory bound and network IO was minimal. This was my first substantial project using Code First EF, so I thought maybe the complex object graph was just hard for the C# compiler to deal with.
 
After a few weeks of increasingly painful build times, I was looking at breaking my solution up so that I could build against pre-compiled DLLs — anything to make it go faster. I ran across a post on SuperUser:

… (Full disclosure: I work on VMware Fusion.)

I have heard that storing the code on a “network” drive (either an HGFS share or an NFS/CIFS share on the host, accessed via a virtual ethernet device) is a bad idea. Apparently the build performance is pretty bad in this configuration.

Oh really? Hmmm. Maybe it isn’t that my class libraries are so complex but something else is going on. Here are some empirical measurements of rebuild time of an actual solution:

VMware shared folder:  50 sec
OS X SMB share:        18 sec
within virtual disk:    9 sec

Wow. Problem solved. Incremental builds are basically instantaneous, and a full rebuild takes 9 seconds when the code is hosted inside the VM image. Not only does hosting the source code within the VM virtual disk make the build go 5.5x faster, the CPU time of csc.exe goes way down. I don't know how the VMware shared folder is implemented; it appears to Windows as a mapped drive to a UNC name, but it is very slow. The moral of the story is: just don't host your source code on the host machine with VMware. The performance penalty is not worth it. If you need to share the source tree inside the VM with the host OS, create a file share from the VM to the host over a host-only adapter.
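
A hedged sketch of that last arrangement, from an elevated command prompt inside the VM (share name, path and permissions are placeholders to adapt):

net share Source=C:\src /grant:Everyone,READ

The host can then mount the share from the VM's host-only IP address while builds inside the VM still run at virtual-disk speed.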


Update

I can confirm this is still a problem in VMware Fusion 6. I'm hoping the new SMB implementation in Mavericks might greatly improve the performance of sharing source code from the host OS to the VM.

Update

I just updated my virtual machine to Windows Server 2012 R2 (aka Windows 8.1 Server), running on VMware Fusion 6. The build time for this large project is now 15 seconds over VMware Shared Folders. A significant remaining issue is that Visual Studio uses a whole core of CPU to do nothing: just having a large solution open, not editing anything.

Pimping my 2011 MacBook Pro to 16GB RAM Running at 1600MHz

I am a fairly heavy user of memory-hungry VMware VMs. I was running into a problem with excessive paging slowing down the host OS, or even not being able to launch all the VMs I needed simultaneously, due to the memory limitations of my pretty damn new 8GB RAM BTO late-2011 15″ Sandy Bridge MacBook Pro.

The late-2011 Sandy Bridge 15″ MacBook Pro machines come with 1333MHz 9-9-9 non-ECC DDR3 SO-DIMM RAM, configurable up to 8GB in a BTO configuration. The 2012 Ivy Bridge models come with RAM operating at 1600MHz, and the Retina MacBook Pro has a 16GB BTO option. The chipsets are similar, and I was pretty sure that the non-Retina model could support 16GB of RAM and that the Sandy Bridge models could run at the 20% faster 1600MHz just like the Ivy Bridge ones.

I had some trouble finding 16GB 9-9-9 latency non-ECC DDR3 SO-DIMM kits on the aftermarket, and none were labelled "for MacBook Pro". There are a lot of options at 1333MHz from Kingston, OWC, Corsair, Crucial and iFixit, but at 1600MHz there are slim pickings. I suspect the reason the 2011 MacBook Pro ships with 1333MHz memory is a cost/availability issue.

Corsair has a 16GB kit with slower 10-10-10 latency. I'm not sure what the implication is of 10-10-10 latency at 1600MHz vs. 9-9-9 at 1333MHz, but I know that Apple specs 9-9-9 memory in its systems, so I soldiered on. The only kit with the specs I was looking for was the HyperX PNP 1600MHz 9-9-9 16GB DDR3 non-ECC SO-DIMM kit from Kingston, which I got from Amazon.

Memory access schematic from support.apple.com

Installation is pretty easy. You need a high-quality static-dissipative Phillips #00 screwdriver to remove the 10 screws without damaging the heads. Once the back is off the computer, the memory slots are easily accessible in the center of the machine.

As you can see, the Kingston kit worked. OS X Mountain Lion recognizes the 16GB of RAM at 1600MHz and is quite happy. I have a lot of memory headroom now. I can run all of my VM workloads simultaneously with iTunes, Pixelmator, OmniGraffle, MonoDevelop, Xcode, etc. all running at once without any hiccups whatsoever. Overall, I'm very happy with this experiment. The Kingston kit screams.
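
You can confirm what OS X sees from Terminal; System Information reports the same data graphically:

sysctl hw.memsize
system_profiler SPMemoryDataType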

Update:

I got a Geekbench score of 11020 with the new RAM installed.

Delete Unwanted Windows Apps from Launch Pad with VMWare Fusion

VMware Fusion has an option to "Run Windows Apps from your Mac's Applications folder". This option also integrates with Launch Pad. When playing around with turning this on and off, I got into a situation where there were leftover Windows app icons that Fusion refused to delete.

It turns out that part of the Fusion magic is to put entries into the SQLite database that Dock.app uses to decide what to show in Launch Pad. You can use the sqlite3 command-line tool in Terminal to edit the database.

Here's the list of columns:

item_id|title|bundleid|storeid|category_id|moddate|bookmark
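
Before deleting anything, you can preview the rows Fusion created with a read-only query against the same database:

cd ~/Library/Application\ Support/Dock
sqlite3 *.db "select item_id, title, bundleid from apps where bundleid like 'com.vmware.proxyApp.%';"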

The sqlite3 command-line app lets you execute arbitrary SQL statements against a SQLite database. In my case, I had turned off the Launch Pad app integration feature but had stray icons. Here's a Terminal SQL command to clear out any stray VMware Fusion virtual icons:

cd ~/Library/Application\ Support/Dock
sqlite3 *.db "delete from apps where bundleid like 'com.vmware.proxyApp.%';"
killall Dock

The killall command causes Dock.app to restart. After the restart, all the VMware-induced Windows icons will be gone from Launch Pad.

Size an MKMapView to Fit its Annotations in iOS Without Futzing with Coordinate Systems

An oddity of MKMapView is that it always defaults to showing the western hemisphere of the Earth. That's almost certainly not what you want. Instead, you have probably set an array of id <MKAnnotation> objects which you want to appear as pins on the map. If the pins were evenly distributed over the USA, then great. Otherwise, the map view is not at all zoomed the way you want. It baffles me that a map view doesn't just zoom itself by default when you set annotations, but it doesn't.

One approach might be to loop over all of your annotations and figure out what the max and min x and y values are across them all. Then get the center point of the map view and do some math to convert the coordinates and scale the map view.

Alternatively, MapKit has API that is separate from the map view itself for manipulating rectangles and regions which can do the job automatically.

The main problem is that id <MKAnnotation> objects contain a CoreLocation coordinate, CLLocationCoordinate2D, which represents a GPS coordinate, while the map view has an MKCoordinateRegion that has a CLLocationCoordinate2D defining its center but uses an MKCoordinateSpan struct to define the zoom level of the map view in terms of degrees of arc.

MapKit provides a mix of object-oriented API and C functions to convert an array of CLLocationCoordinate2D structs from id <MKAnnotation> objects into an MKCoordinateRegion suitable for setting the region of an MKMapView. These transformations are straightforward and don't require fiddling with coordinate-system conversions.

Here is the code in a map view controller:

#define MINIMUM_ZOOM_ARC 0.014 //approximately 1 miles (1 degree of arc ~= 69 miles)
#define ANNOTATION_REGION_PAD_FACTOR 1.15
#define MAX_DEGREES_ARC 360
//size the mapView region to fit its annotations
- (void)zoomMapViewToFitAnnotations:(MKMapView *)mapView animated:(BOOL)animated
{ 
    NSArray *annotations = mapView.annotations;
    int count = [mapView.annotations count];
    if ( count == 0) { return; } //bail if no annotations
    
    //convert NSArray of id <MKAnnotation> into an MKCoordinateRegion that can be used to set the map size
    //can't use NSArray with MKMapPoint because MKMapPoint is not an id
    MKMapPoint points[count]; //C array of MKMapPoint struct
    for( int i=0; i<count; i++ ) //load points C array by converting coordinates to points
    {
        CLLocationCoordinate2D coordinate = [(id <MKAnnotation>)[annotations objectAtIndex:i] coordinate];
        points[i] = MKMapPointForCoordinate(coordinate);
    }
    //create MKMapRect from array of MKMapPoint
    MKMapRect mapRect = [[MKPolygon polygonWithPoints:points count:count] boundingMapRect];
    //convert the MKMapRect into an MKCoordinateRegion
    MKCoordinateRegion region = MKCoordinateRegionForMapRect(mapRect);
    
    //add padding so pins aren't scrunched on the edges
    region.span.latitudeDelta  *= ANNOTATION_REGION_PAD_FACTOR;
    region.span.longitudeDelta *= ANNOTATION_REGION_PAD_FACTOR;
    //but padding can't be bigger than the world
    if( region.span.latitudeDelta > MAX_DEGREES_ARC ) { region.span.latitudeDelta  = MAX_DEGREES_ARC; }
    if( region.span.longitudeDelta > MAX_DEGREES_ARC ){ region.span.longitudeDelta = MAX_DEGREES_ARC; }
    
    //and don't zoom in stupid-close on small samples
    if( region.span.latitudeDelta  < MINIMUM_ZOOM_ARC ) { region.span.latitudeDelta  = MINIMUM_ZOOM_ARC; }
    if( region.span.longitudeDelta < MINIMUM_ZOOM_ARC ) { region.span.longitudeDelta = MINIMUM_ZOOM_ARC; }
    //and if there is a sample of 1 we want the max zoom-in instead of max zoom-out
    if( count == 1 )
    { 
        region.span.latitudeDelta = MINIMUM_ZOOM_ARC;
        region.span.longitudeDelta = MINIMUM_ZOOM_ARC;
    }
    [mapView setRegion:region animated:animated];
}

- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated]; //remember to call super
    [self zoomMapViewToFitAnnotations:self.mapView animated:animated];
    //or maybe you would do the call above in the code path that sets the annotations array
}