Set up a dotnet core development environment with VS Code, MacPorts, and Docker

Install and configure MacPorts

While a package manager is not strictly necessary to get a dotnet core development environment up and running, it is extremely useful to have a single tool that updates utilities, rather than manually discovering what needs updating and downloading it yourself. It also helps to set up a command line environment consistent with Linux. You can use my step-by-step guide to set up MacPorts.

To sum up:

  • Install MacPorts
  • Set up paths in /etc/paths and /etc/manpaths
  • Set up /etc/shells
  • Install Linux-flavor core tools sudo port install bash bash-completion coreutils findutils grep gnutar gawk wget
  • Set up ~/.bash_profile and ~/.bashrc

Install mono and mssql-tools

The IntelliSense engine for C# dotnet core projects in VSCode on Mac and Linux doesn’t use the dotnet core compilers; it uses mono. Microsoft has also provided ports of the SQL Server tools, sqlcmd and bcp.

Visual Studio for Mac also depends on mono. If you have and use Visual Studio for Mac, you don’t need to install mono here. On the other hand, if you have an abandoned installation of Visual Studio for Mac you may want to remove it and start over. I have instructions for uninstalling Visual Studio for Mac at the end of this document.

sudo port install mono mssql-tools

Install dotnet core SDK

MacPorts does not manage installs of the dotnet core SDKs, but Microsoft offers scripts to install and uninstall them. I created a simple shell script that combines these to maintain the 2.1 and 3.1 LTS SDKs.

To sum up, install two scripts from Microsoft and one from me into /usr/local/bin:

curl -sSL \
    | sudo tee /usr/local/bin/dotnet-uninstall-pkgs > /dev/null
sudo chmod +x /usr/local/bin/dotnet-uninstall-pkgs
curl -sSL \
    | sudo tee /usr/local/bin/dotnet-install > /dev/null
sudo chmod +x /usr/local/bin/dotnet-install
curl -sSL  \
    | sudo tee /usr/local/bin/dotnet-upgrade-sdks > /dev/null
sudo chmod +x /usr/local/bin/dotnet-upgrade-sdks

Once those three scripts are in place, you can install or upgrade the dotnet 2.1 LTS and dotnet 3.1 LTS SDKs to the latest versions. If you have previous versions of the dotnet SDKs installed, the script will remove them cleanly.

sudo dotnet-upgrade-sdks

After the script completes, you should have two dotnet core SDKs.

$ dotnet --list-sdks
2.1.804 [/usr/local/share/dotnet/sdk]
3.1.301 [/usr/local/share/dotnet/sdk]
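The dotnet-upgrade-sdks script itself isn’t reproduced in this post; below is a rough sketch of what it does based on the description above. The channel list and the helper script names are assumptions mirroring the scripts installed earlier, not the author’s actual code.

```shell
#!/bin/sh
# dotnet-upgrade-sdks (sketch): remove the installed SDK pkgs, then install
# the newest SDK from each LTS channel with Microsoft's dotnet-install script.
set -e

# The two LTS channels this post targets.
lts_channels() {
  printf '%s\n' 2.1 3.1
}

if command -v dotnet-install >/dev/null 2>&1; then
  dotnet-uninstall-pkgs
  lts_channels | while read -r channel; do
    dotnet-install --channel "$channel"
  done
fi
```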

Install dotnet global tools

The dotnet command is extensible. Commands added to dotnet are called “tools”, which can be installed in a project or globally for your account. There are a large number of tools available, but two key ones are dotnet-ef and libman.

The dotnet-ef tool is the Entity Framework Core tool for generating and managing migrations. The libman tool manages JavaScript library packages in an aspnet core project; it replaces bower, which is no longer maintained, without being tied to nodejs.

dotnet tool install --global dotnet-ef
dotnet tool install --global Microsoft.Web.LibraryManager.Cli

Unfortunately, dotnet tool has no update-all command, but it is straightforward to pipe the output of the list command to the update command.

# list global tools installed
# select tool <PACKAGE_ID>
# execute `dotnet tool update --global <PACKAGE_ID>`
dotnet tool list --global | awk 'NR > 2 {print $1}' | xargs -L1 dotnet tool update --global
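Saved as a standalone script, the pipeline above might look like this sketch. The only assumption is the current `dotnet tool list` output format: two header lines, then one tool per line with the package id in the first column.

```shell
#!/bin/sh
# dotnet-tool-update-all (sketch): update every globally installed dotnet tool.
# `dotnet tool list --global` prints two header lines, then one tool per line;
# keep only the package-id column of the data lines.
list_tool_ids() {
  awk 'NR > 2 {print $1}'
}

if command -v dotnet >/dev/null 2>&1; then
  dotnet tool list --global | list_tool_ids \
    | xargs -L1 dotnet tool update --global
fi
```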
curl -sSL \
    | sudo tee /usr/local/bin/dotnet-tool-update-all > /dev/null
sudo chmod +x /usr/local/bin/dotnet-tool-update-all

You can now ensure your global tools are current:

dotnet-tool-update-all


bash-completion for dotnet

You can set up tab completions for the dotnet command.

For this to work, you must have installed bash-completion, which should have been part of your MacPorts setup.

sudo port install bash bash-completion

code ~/.bashrc

if [ -f /opt/local/etc/profile.d/ ]; then
  . /opt/local/etc/profile.d/
fi

_dotnet_bash_complete() {
  local word=${COMP_WORDS[COMP_CWORD]}

  local completions
  completions="$(dotnet complete --position "${COMP_POINT}" "${COMP_LINE}" 2>/dev/null)"
  if [ $? -ne 0 ]; then
    completions=""
  fi

  COMPREPLY=( $(compgen -W "$completions" -- "$word") )
}

#enable command-line completion for dotnet
complete -f -F _dotnet_bash_complete dotnet

Install VSCode

Download and install VSCode from the official download page. Once you install it, VSCode is self-updating.

The first time you launch VSCode, you need to manually install the code command.

Press cmd + shift + p, type “install code”, and select the “Install ‘code’ command in PATH” option.

What this does is create a symbolic link at /usr/local/bin/code pointing to the code command inside the VSCode app bundle.

$ ls -l `which code`
lrwxr-xr-x 1 breiter wheel 68 Jun 27  2017 /usr/local/bin/code -> '/Applications/Visual Studio'

Or alternatively, you could do this old-school by hand:

ln -s '/Applications/Visual Studio' /usr/local/bin/code

At this point, I would recommend that you install a handful of extensions to make C# and aspnet core intellisense and debugging work.

VSCode extensions you really need

# C# XML comments
code --install-extension k--kato.docomment
# CSS support in HTML (and Razor) documents
code --install-extension ecmel.vscode-html-css
# C# language and debugging support (from Microsoft)
code --install-extension ms-dotnettools.csharp

Some additional nice VSCode extensions

# bookmarks
code --install-extension alefragnani.Bookmarks
# alignment
code --install-extension annsk.alignment
# gitlens
code --install-extension eamodio.gitlens
# Docker
code --install-extension ms-azuretools.vscode-docker
# SQL Tools (nice client for various DB engines)
code --install-extension mtxr.sqltools
# Spell check in code
code --install-extension streetsidesoftware.code-spell-checker
# syntax and intellisense for .csproj files
code --install-extension tintoy.msbuild-project-tools

Install Azure Data Studio

Azure Data Studio is the open source, cross-platform, spiritual successor to Query Analyzer. It’s a dedicated SQL Server (and PostgreSQL) client based on a fork of VSCode that is much more lightweight than SQL Server Management Studio. Download Azure Data Studio from GitHub.

Recommended extensions:

  • Admin Pack for SQL Server
  • PostgreSQL

Install Docker for Mac

In order to run SQL Server on Mac, you need Docker. I also find it more convenient to run PostgreSQL in Docker than on the base OS. You also need Docker to build and run Docker images and push them to a repository. VMWare Fusion 11.5.5 has implemented a dockerd runtime, and there are other virtualization methods to get Docker running on Mac, but by far the most straightforward is Docker Desktop for Mac.

Docker Desktop for Mac integrates with the macOS Hypervisor.framework using HyperKit, an enhanced fork of the bhyve hypervisor from FreeBSD that is maintained by Docker as part of the Moby project. It works very well.

Download and install the “stable” version from the Docker website.

Install an HTTP protocol debugger

  • Charles proxy is a cross-platform HTTP protocol debugger built on Java that works on Windows, macOS, and Linux.
  • Proxyman is a new macOS native HTTP protocol debugger.
Fiddler is the de facto standard free HTTP protocol debugger on Windows, built on .NET Windows Forms. Telerik has “Fiddler Everywhere”, a cross-platform rebuild, in beta.
  • mitmproxy is a command-line, open source HTTP debugger built on python. It works but is more difficult to use than the commercial ones above. sudo port install py-mitmproxy
  • Wireshark is the de facto standard TCP/IP protocol analyzer. sudo port install wireshark3 +qt

I haven’t tried Proxyman or Fiddler Everywhere. I have used mitmproxy and I would recommend paying for one of the GUI options. I use Charles and Wireshark regularly.

Visual Diff/Merge

I confess that I have a bit of a collection of these tools going. My general purpose favorite is Beyond Compare from Scooter Software. I also have Kaleidoscope and Sublime Merge.

I use smerge to browse git repos and to resolve merge conflicts in git. Sublime Merge is wicked fast, if somewhat inscrutable.

I use Kaleidoscope primarily for the git difftool command because it loads a multi-file diff into a set of tabs, whereas Beyond Compare loads each file sequentially, popping a new diff window after you close the previous one.

diff and merge settings from my global ~/.gitconfig

[diff]
    tool = Kaleidoscope
[merge]
    tool = smerge
    keepBackup = false
[difftool "bc3"]
    trustExitCode = true
[mergetool "bc3"]
    trustExitCode = true
[difftool "smerge"]
    cmd = smerge mergetool --no-wait \"$LOCAL\" \"$REMOTE\" -o \"$MERGED\"
    trustExitCode = true
[mergetool "smerge"]
    cmd = smerge mergetool \"$BASE\" \"$LOCAL\" \"$REMOTE\" -o \"$MERGED\"
    trustExitCode = true
[difftool "Kaleidoscope"]
  cmd = ksdiff --partial-changeset --relative-path \"$MERGED\" -- \"$LOCAL\" \"$REMOTE\"
[mergetool "Kaleidoscope"]
  cmd = ksdiff --merge --output \"$MERGED\" --base \"$BASE\" -- \"$LOCAL\" --snapshot \"$REMOTE\" --snapshot
  trustExitCode = true

BONUS: Cloud service clients

AWS and Azure have CLI interfaces to automate actions: aws and az, respectively. Both are based on python3.

Install AWS command-line client

sudo port install py-awscli

Edit ~/.bashrc and add aws cli completions after initializing bash completion.

#enable command-line completion for aws
complete -C aws_completer aws

Install Azure command-line client

For some reason, the Azure team is all in on Homebrew for macOS; they are sitting on an open community request for a .pkg installer and have closed a request to add a MacPorts package.

The only viable option outside of Homebrew is to use a Docker container or their shell script installer for Linux.

The CLI requires the following software:
– Python 3.6.x, 3.7.x or 3.8.x.
– libffi
– OpenSSL 1.0.2

Make sure you have the pre-requisites.

$ port installed|egrep '^\s+(libffi|python3|openssl)'
  libffi @3.2.1_0 (active)
  openssl @1.1.1g_0 (active)
  python3_select @0.0_1 (active)
  python38 @3.8.3_0 (active)

Also, I’ve looked at the script, and it assumes that GNU coreutils are in the path. You need to have set up a Linux-style environment with MacPorts, with coreutils in the path replacing the BSD versions shipped by Apple, for this to work. It might work if you just have the md5sha1sum package installed instead, but keep in mind this script was designed for a Linux + GNU environment.
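Before piping the installer into bash, it’s worth checking for that layout. This is a small sketch; the “gnubin” directory convention is the MacPorts one described later in this document, and the function name is illustrative.

```shell
#!/bin/sh
# The Azure CLI install script assumes GNU tools; on macOS with MacPorts,
# unprefixed GNU coreutils live in a "gnubin" directory on the PATH.
has_gnubin() {
  case ":$1:" in
    *gnubin*) return 0 ;;
    *) return 1 ;;
  esac
}

if ! has_gnubin "$PATH"; then
  echo "warning: no gnubin directory in PATH; the installer may fail" >&2
fi
```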

curl -L | bash

Follow the prompts. It should all work. The script from Microsoft will install or update az.

I saved this as /usr/local/bin/install-azure-cli.

# install or update azure cli: `az`
# pre-requisites
# - Python 3.6.x, 3.7.x or 3.8.x in path
# - libffi
# - OpenSSL 1.0.2+
# - bash 5 in path
# - gnu coreutils in path
curl -L | bash
curl -sSL \ 
  | sudo tee /usr/local/bin/install-azure-cli > /dev/null
sudo chmod +x /usr/local/bin/install-azure-cli

BONUS: Configure postfix smtp relay

It is not strictly necessary, but I often find it useful to have a local MTA that works. Follow my guide to set up postfix on macOS to accept mail on your local machine on port 25 and relay it through a smart host such as SES or GMail.

BONUS: Clean uninstall of Visual Studio for Mac and Mono.framework

uninstall-vsmac script

Uninstall Visual Studio for Mac.

# Uninstall Visual Studio for Mac
echo "Uninstalling Visual Studio for Mac..."
sudo rm -rf "/Applications/Visual"
rm -rf ~/Library/Caches/VisualStudio
rm -rf ~/Library/Preferences/VisualStudio
rm -rf ~/Library/Preferences/Visual\ Studio
rm -rf ~/Library/Logs/VisualStudio
rm -rf ~/Library/VisualStudio
rm -rf ~/Library/Preferences/Xamarin/
rm -rf ~/Library/Application\ Support/VisualStudio
rm -rf ~/Library/Application\ Support/VisualStudio/7.0/LocalInstall/Addins/
# Uninstall Xamarin.Android
echo "Uninstalling Xamarin.Android..."
sudo rm -rf /Developer/MonoDroid
rm -rf ~/Library/MonoAndroid
sudo pkgutil --forget
sudo rm -rf /Library/Frameworks/Xamarin.Android.framework
# Uninstall Xamarin.iOS
echo "Uninstalling Xamarin.iOS..."
rm -rf ~/Library/MonoTouch
sudo rm -rf /Library/Frameworks/Xamarin.iOS.framework
sudo rm -rf /Developer/MonoTouch
sudo pkgutil --forget com.xamarin.monotouch.pkg
sudo pkgutil --forget com.xamarin.xamarin-ios-build-host.pkg
# Uninstall Xamarin.Mac
echo "Uninstalling Xamarin.Mac..."
sudo rm -rf /Library/Frameworks/Xamarin.Mac.framework
rm -rf ~/Library/Xamarin.Mac
# Uninstall Workbooks and Inspector
echo "Uninstalling Workbooks and Inspector..."
sudo /Library/Frameworks/Xamarin.Interactive.framework/Versions/Current/uninstall
# Uninstall the Visual Studio for Mac Installer
echo "Uninstalling the Visual Studio for Mac Installer..."
rm -rf ~/Library/Caches/XamarinInstaller/
rm -rf ~/Library/Caches/VisualStudioInstaller/
rm -rf ~/Library/Logs/XamarinInstaller/
rm -rf ~/Library/Logs/VisualStudioInstaller/
# Uninstall the Xamarin Profiler
echo "Uninstalling the Xamarin Profiler..."
sudo rm -rf "/Applications/Xamarin"
echo "Finished Uninstallation process."

uninstall-mono script

Uninstall mono installed by the .pkg Mono installer.

sudo rm -rf /Library/Frameworks/Mono.framework
sudo pkgutil --forget com.xamarin.mono-MDK.pkg
sudo rm -rf /etc/paths.d/mono-commands

Why and how to set up MacPorts package manager for macOS

Package managers for macOS

Do I need a package manager? If you never open the Terminal, the answer is definitely no. macOS is fully functional out of the box with the software shipped by Apple. The command line applications available in a default install are also good enough for poking around and getting started with learning the UNIX system. You need a package manager when you want to install UNIX tools that Apple doesn’t bundle, or newer or different versions of tools that it does.

A package manager helps you to download, possibly compile, install, and update tools in the UNIX environment in macOS. Alternatively you can download and install things by hand, possibly configuring and compiling them by hand.

There are three main package managers for macOS: Homebrew, pkgsrc, and MacPorts.

Homebrew is currently the most popular of these, but it is “too clever by half”. My issue is primarily that it works by taking over /usr/local/bin and changing the permissions on that directory. This is a security problem, and it also conflicts with the conceptual purpose of /usr/local/bin as the directory where I install programs myself. If Homebrew messes up or gets broken, it can be a big mess to clean up without breaking anything that doesn’t belong to Homebrew.

Homebrew will also help you to install things that it doesn’t control and cannot update, which I don’t think it should do. I also find its beer metaphors of casks and cellars overly cute.

Homebrew is popular so it is probably the lowest friction option despite my criticism. You can also use it to automate installing apps from the App Store, commercial software, and UNIX utilities. This can be helpful if you set up a new Mac frequently or have a standard config to push out. I have heard that GitHub uses Homebrew for this.


Pkgsrc comes from NetBSD. It is the standard package manager for NetBSD and SmartOS. Packages for Red Hat Enterprise Linux / CentOS, macOS, and SmartOS are maintained by Joyent. The packages are mostly pre-built binaries and pkgsrc is fast and works well. All of the packages are installed into /opt/pkg which means they are safely isolated from your base system. If somehow you borked up pkgsrc, just rm -fr /opt/pkg and install it again. If you want to get rid of pkgsrc, just rm -fr /opt/pkg and go on with life.

On the downside, all of the GUI packages for macOS are built for X rather than Quartz, and the repository is smaller than Homebrew’s and MacPorts’.

If you work in an environment with some combination of RHEL, SmartOS, and macOS, you should strongly consider standardizing on pkgsrc. For example, on RHEL, instead of adding EPEL and IUS, you can install nothing on top of the base system with yum/dnf, use yum/dnf only for updating the base system, and use pkgsrc to install all of the additional software. Then you can enjoy a very similar configuration and maintenance stack across your server and workstation fleet.


MacPorts (née DarwinPorts) was originally created by engineers on the Apple UNIX engineering team as part of the OpenDarwin project. It came out around the same time as OS X 10.2 Jaguar. Darwin is the open source underpinning of macOS and consists of the xnu kernel plus the BSD subsystem. MacPorts was hosted by Apple on MacOS Forge but has subsequently moved to GitHub.

MacPorts was the de facto standard for installing open source packages on OS X for a long time, until it was dethroned by Homebrew. At the time, MacPorts was criticized for wasting time and space by installing its own dependencies rather than linking to the ones from Apple. It also used to install everything by compiling from source, and it still compiles from source quite a bit, which can be slow. Like pkgsrc, MacPorts installs into its own sandbox, /opt/local, where it can’t hurt anything and can be easily discarded. MacPorts has a variants system that lets you choose granular options when installing packages. For example, you can have GUI apps built against the native Quartz window manager whenever possible. It has a huge library of community-maintained, reliable ports. There are problems occasionally, but they are sorted out quickly.

I’ve used all three of these systems, but have settled on MacPorts as my preference for a combination of practical and aesthetic reasons.

Setting up MacPorts


Before installing MacPorts, you need to install Xcode from the App Store, along with the Xcode command line tools. Once you have installed Xcode, open Terminal and run this command to install the command line tools:

xcode-select --install

Installing MacPorts from pkg or source

You can now probably head over to the MacPorts website and download a .pkg installer for your version of macOS. If you are using a beta of a new release, or the hot, fresh bits of a .0 release, the .pkg may not be available and you will have to build from source. Either download the tarball and unpack it, or clone the git repo and check out the current release tag.

# use actual latest tarball
curl -O
tar xf MacPorts-2.6.2.tar.bz2


git clone
git checkout v2.6.2 # or whatever is the highest version number without a -beta or -rc suffix

Whichever way you got the source code, enter the directory in your shell, then configure, build, and install.

./configure
make
sudo make install

Now you have a /opt/local directory and a port command.

Configure options


Set default variant options. I have not used X11 on macOS in years. I like to disable X and enable Quartz by default. I also like to add bash completion scripts whenever they are available.

sudo vi /opt/local/etc/macports/variants.conf

-x11 +no_x11 +quartz +bash_completion

If you live outside of the USA, it can be a significant speedup to change to a local mirror. I am using one in South Africa.


In macports.conf, set the rsync_server and rsync_dir to match your alternate mirror.

sudo vi /opt/local/etc/macports/macports.conf

# The rsync server for fetching MacPorts base during selfupdate. This
# setting is NOT used when downloading the ports tree; the sources for
# the ports tree are set in sources.conf. See
# for a list of
# available servers.

# Location of MacPorts base sources on rsync_server. If this references
# a .tar file, a signed .rmd160 file must exist in the same directory
# and will be used to verify its integrity. See
# to find the
# correct rsync_dir for a particular rsync_server.
#rsync_dir              release/tarballs/base.tar
rsync_dir               macports/release/tarballs/base.tar

In sources.conf change the path to your local mirror.

sudo vi /opt/local/etc/macports/sources.conf

#rsync:// [default]
rsync:// [default]


I like to have my path searched in this order:

  1. stuff I installed manually
  2. MacPorts
  3. macOS base system

MacPorts will stick itself into your PATH in your shell profile, which is a good default to make it work, but I prefer to handle this more systematically in a central location.

Edit the system default path:

sudo vi /etc/paths


Edit the system default manpath to resolve documentation in the same order as the binaries:

sudo vi /etc/manpaths


The gnubin paths are for installing GNU utilities that override the BSD versions in macOS to conform to a de facto standard configuration in a world dominated by Linux + GNU servers.
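For illustration, an /etc/paths matching the search order above might read as follows. This is a sketch, not the author’s exact file; the gnubin entry is where MacPorts’ coreutils port puts the unprefixed GNU tool names.

```
/usr/local/bin
/opt/local/libexec/gnubin
/opt/local/bin
/opt/local/sbin
/usr/bin
/bin
/usr/sbin
/sbin
```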

If you want a contemporary bash from MacPorts, you need to have it in /etc/shells so that it can be set as a user shell with chsh.

sudo vi /etc/shells



MacPorts works a lot like apt: you need to update the local cache and then install or update your packages.

Update local cache and macports itself

sudo port selfupdate

Install a package

sudo port install

Find a package

port search

List packages

List installed packages

port installed

or

port list installed

List outdated packages

port outdated

or

port list outdated

Update outdated packages

sudo port upgrade outdated

Remove old packages

When port upgrades a package it doesn’t delete the old one, it moves it to an inactive state so that you can roll back if the new one does not work.

You can clean up old packages:

sudo port uninstall inactive

Install GNU flavor like Linux

At this point, if you primarily work with Linux servers, it makes sense to install a GNU base system to override the BSD flavor of a default macOS install.

sudo port install bash bash-completion coreutils findutils grep gnutar gawk wget

I also like to install a fully patched git to make sure that I have the current features and the bash completion scripts.

sudo port install git git-lfs

Also the latest vim.

sudo port install vim +huge

Set up bash

Make sure you have a ~/.bashrc and ~/.bash_profile.

Edit ~/.bash_profile to add

#flags to hint build systems to find things in macports
CFLAGS="$CFLAGS -I/opt/local/include" 
CXXFLAGS="$CXXFLAGS -I/opt/local/include" 
LDFLAGS="$LDFLAGS -L/opt/local/lib"

If MacPorts altered your PATH then comment that out:

# MacPorts Installer addition on 2016-09-22_at_13:35:36: adding an appropriate PATH variable for use with MacPorts.
# export PATH="/opt/local/bin:/opt/local/sbin:$PATH"

At the very end of ~/.bash_profile load ~/.bashrc.

if [ -f ~/.bashrc ]; then
   source ~/.bashrc
fi
In ~/.bashrc you can set up some preferences:


I’m not into the fancy prompts. I like a classic $.

#classic, minimalist prompt
PS1='\$ '

Prevent ssh from messing up the title

# force reset of the current directory name in terminal title
# to reset it after SSH sessions end.
PROMPT_COMMAND='echo -ne "\033]0;$(basename ${PWD})\007"'
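The escape sequence in PROMPT_COMMAND is the standard xterm set-title control: OSC 0, the title text, then BEL. Pulled apart as a function (the function name is illustrative):

```shell
#!/bin/sh
# Emit the xterm "set window title" sequence: ESC ] 0 ; <text> BEL.
set_title() {
  printf '\033]0;%s\007' "$1"
}

# Title the terminal with the basename of the current directory,
# which is what the PROMPT_COMMAND above does on every prompt.
set_title "$(basename "$PWD")"
```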

Bash completion

if [ -f /opt/local/etc/profile.d/ ]; then
  . /opt/local/etc/profile.d/
fi

Git prompt

Again, I like something simple. You can look up the fancy things.

if [ -f /opt/local/share/git/ ]; then
  . /opt/local/share/git/
  PS1='\[\033[1;36m\]$(__git_ps1 "[%s] ")\[\033[0m\]\$ '
fi
Colors like Debian and Ubuntu

export CLICOLOR=1

# The color designators are as follows:
# a     black
# b     red
# c     green
# d     brown
# e     blue
# f     magenta
# g     cyan
# h     light grey
# A     bold black, usually shows up as dark grey
# B     bold red
# C     bold green
# D     bold brown, usually shows up as yellow
# E     bold blue
# F     bold magenta
# G     bold cyan
# H     bold light grey; looks like bright white
# x     default foreground or background
# Note that the above are standard ANSI colors.  The actual display may differ depending on the color capabilities of the terminal in use.
# The order of the attributes are as follows:
# 1.   directory
# 2.   symbolic link
# 3.   socket
# 4.   pipe
# 5.   executable
# 6.   block special
# 7.   character special
# 8.   executable with setuid bit set
# 9.   executable with setgid bit set
# 10.  directory writable to others, with sticky bit
# 11.  directory writable to others, without sticky bit

if [[ $(which ls) = *gnubin* ]]; then
  # GNU ls colors
  eval "$(dircolors -b)"
  alias ls='ls --color=auto'
else
  # BSD ls colors
  #export LSCOLORS=exfxcxdxbxegedabagacad # default
  export LSCOLORS=xxfxcxdxbxegedabagacad
fi

if [[ $(which grep) = *gnubin* ]]; then
  alias grep='grep --color=auto'
  alias egrep='egrep --color=auto'
  alias fgrep='fgrep --color=auto'
  export GREP_OPTIONS='--color=auto'
fi

export GREP_COLOR='0;36' # regular;foreground-cyan
export MINICOM='--color on'
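The LSCOLORS value is just those 11 attributes, as foreground/background letter pairs, concatenated in order. A small helper makes that visible (the function name is illustrative):

```shell
#!/bin/sh
# Concatenate 11 fg/bg pairs (in the BSD ls attribute order listed above)
# into an LSCOLORS string. 'e'=blue, 'x'=default, so 'ex' = blue directories.
lscolors() {
  printf '%s' "$@"
}

# Reconstructs the BSD default value: directories blue, symlinks magenta, etc.
LSCOLORS=$(lscolors ex fx cx dx bx eg ed ab ag ac ad)
export LSCOLORS
echo "$LSCOLORS"
```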

Preferred editor and pager

export EDITOR=vim
export PAGER=less

At this point if you open a new terminal, it should feel very much like a Linux install.

Install some other stuff

aws cli

sudo port install python38 py38-awscli
sudo port select --set python3 python38

Create a file ~/.aws/config that contains API key credentials like this:

[default]
aws_access_key_id = some-key-id
aws_secret_access_key = some-key-value
region = us-east-1

[profile some-name]
aws_access_key_id = some-key-id
aws_secret_access_key = some-key-value
region = us-east-1

Network tools

sudo port install nmap

sudo port install GeoLiteCity wireshark3 +geoip +python38 +qt5

sudo port install whatmask

sf-pwgen (password generator)
sudo port install sf-pwgen

axel (download accelerator)
sudo port install axel

sudo port install +http2 +openldap +ssl

tcping (ping tcp ports)
sudo port install tcping

httping (ping http)
sudo port install httping

minicom (terminal emulator for connecting to serial devices)
sudo port install minicom

sudo port install openvpn2

Programming languages

sudo port install go

sudo port install rust

Java OpenJDK with IBM Eclipse OpenJ9 VM
sudo port install openjdk14-openj9


Java OpenJDK with Oracle HotSpot VM
sudo port install openjdk14

Microsoft SQL Server client tools: sqlcmd and bcp
sudo port install mssql-tools

msodbcsql17 has the following notes:
  To make this work with SSL you need to create a symbolic link as follows: 
   sudo mkdir -p /usr/local/opt/openssl/ 
   sudo ln -s /opt/local/lib /usr/local/opt/openssl/lib 

   This is because this port installs binaries meant to be used with Homebrew.

sudo mkdir -p /usr/local/opt/openssl/
sudo ln -s /opt/local/lib /usr/local/opt/openssl/lib


sudo port install p7zip

youtube-dl (download video offline from youtube and other sites)
sudo port install youtube-dl

dos2unix (convert line endings)
sudo port install dos2unix

sudo port install ghostscript

sudo port install rsync

On UNIX Shells

A (not so) brief history of UNIX shells

In UNIX, the shell is the text-mode program that interfaces between the user and the kernel via a teletype interface — which is usually purely a software construct these days. It interprets commands, starts programs as necessary, and pipes data between programs.

Like a lot of things in UNIX, the original shell, /bin/sh, was created by Ken Thompson. Starting in 1976, the Thompson Shell was replaced with a new /bin/sh created by another colleague at Bell Labs, Stephen Bourne, which shipped with UNIX Version 7. The Bourne Shell had all the key features we expect today, like unlimited string size, command substitution, redirection, loops, and case statements, and by 1979 was pretty much done.

In 1978, Bill Joy created the C shell, /bin/csh, intending it to be more friendly as an interactive environment. It turned out to be a bad scripting environment, but it was popular at Berkeley and became the default interactive shell in Berkeley UNIX and BSD.

David Korn created a new shell, /bin/ksh, in the 1980s based on Stephen Bourne’s source code. The Korn Shell was used a lot on Solaris, with Oracle things, and on OpenBSD.

Kenneth Almquist reimplemented a clone of the Bourne Shell, ash, for BSD as part of the catastrophic 1990s copyright dispute with AT&T. Debian has forked ash into the Debian Almquist Shell, dash.

The Bourne Again Shell, bash, is the GNU project’s reimplementation of the Bourne Shell. GNU did not stop at cloning the Bourne Shell features; they put in a whole ton of interactive and programming features.

Z (zed) Shell is a reimplementation of bash with a more liberal license. zsh aims to have full compatibility with all of the bash features and even more features of its own.

There are more shells, but I’m going to stop now.

Common system shells

Ever since AT&T Research UNIX Version 7, the world has agreed that the default system interpreter is “Bourne Shell”. This is codified in the POSIX.2 standard and the Single UNIX Specification. Since not everyone who wanted to create a UNIX-type operating system had legal access to the Bourne Shell source code from AT&T, the fancy later shells ksh, bash, and zsh have a trick where they pretend to be the lowly old Bourne Shell when they are named sh. ash and dash pretty much are just the same as good old sh and don’t have to do much pretending.

You might be surprised how deeply the system shell /bin/sh is embedded. It is used by init to run startup scripts. It’s used by web servers to connect a user request to a CGI program. It’s used by mail servers to connect bits together internally. There are tons of system and server things that are connected together with /bin/sh.

This seemed pretty smart until the shellshock family of vulnerabilities in bash were discovered in 2014 which allowed tricking servers into running arbitrary code through public services on the Internet. Now it seems like a good idea that the system shell should be as minimal and hardened as possible.

Here’s how things break down in the real world:

Red Hat uses bash as /bin/sh.

Debian and Ubuntu use dash as /bin/sh and /bin/bash as the default interactive interpreter.

NetBSD uses ash as /bin/sh and FreeBSD has their own /bin/sh.

OpenBSD uses ksh as /bin/sh.

Apple is a bit rudderless. If I recall, originally Apple was tied to its BSD roots from NeXT and used pdksh for /bin/sh and a version of the C Shell as the default for interactive users in OS X. They changed both to bash in 10.3 Panther to be more similar to Red Hat, but kept the rest of the core system utilities BSD rather than GNU.

Today, Apple macOS uses a really, really old forked version of bash 3.2, with security patches applied, as /bin/sh. Apple stopped including bash updates in macOS (née OS X) because the GNU project changed the license of bash to GPLv3. In macOS 10.15 Catalina, bash is still the system shell /bin/sh, but the default shell for new users has changed to /bin/zsh, and /bin/dash has been added.

In retrospect, Apple’s half-hearted attempt to include bash as a linuxism was a mistake. I hope that the arrival of dash is a sign that Apple is going to delete their decrepit old version of bash and make dash the system shell soon.

Cut to the chase or “what I use”

For interactive shell use, I use bash.

I have tried all the shells. I really tried to like zsh but I have found by the time you install all the plugins and whatnot, it is painfully slow. Today I use bash everywhere as my interactive shell. Mostly this is because it’s installed and the default on every version of Linux. This means that I install my own modern copy of bash on a Mac.

For shell scripts, I generally use the #!/bin/sh shebang but am careful to use the Bourne Shell features and not BASH features. If you are writing a script that uses #!/bin/sh as the shebang, it needs to work with dash because that’s what is on Debian and Ubuntu.

If you really want to use something other than the 1978 Bourne Shell language for a shell script, don’t hard-code the interpreter path. Use the env trick to let the system find the first match in the PATH: instead of #!/bin/bash, use #!/usr/bin/env bash.
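A concrete illustration of the portability point: a script that sticks to the POSIX subset runs identically under dash, bash, and ksh (the function here is a trivial sketch):

```shell
#!/bin/sh
# POSIX-only: no arrays, no [[ ]], no bash-specific expansions. This runs
# the same under dash (Debian/Ubuntu's /bin/sh), bash, and ksh.
# A script that genuinely needs bash should instead start: #!/usr/bin/env bash
greet() {
  printf 'hello %s\n' "$1"
}

greet world    # prints: hello world
```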

Encrypting DNS on macOS with unbound and Cloudflare

The DNS protocol traditionally runs over UDP on port 53. This is very fast but totally insecure. DNS queries can be snooped or potentially altered by anyone on the network. In my office, I use a pfSense firewall with the unbound DNS resolver configured to resolve DNS over TLS. That way, neither my ISP nor the local government in Zimbabwe can observe or fiddle with DNS results.

In the olden days when I used to go places, I might use a VPN to secure all of my traffic. This is not always the optimal solution. Sometimes I know that all of my sensitive traffic is already encrypted and secure — except for the DNS. And I have had problems where DNS is intercepted by the ISP or hotel for advertising or other purposes. I found this particularly useful when we were staying with family last summer who have Cox Internet, which does some goofy thing with DNS interception.

Unfortunately, macOS does not have DNS over TLS or DNS over HTTPS as a built in feature, yet. But I can set up unbound as a DNS resolver which does support DNS over TLS.
sudo port install unbound

unbound has the following notes:
  An example configuration is provided at /opt/local/etc/unbound/unbound.conf-dist

  A startup item has been generated that will aid in starting unbound with
  launchd. It is disabled by default. Execute the following command to start it,
  and to cause it to launch at startup:

      sudo port load unbound

cd /opt/local/etc/unbound
sudo cp unbound.conf-dist unbound.conf
sudo vi unbound.conf

Find the “# forward-zones” section and insert the following:

  forward-zone:
      name: "."
      forward-tls-upstream: yes
      # Cloudflare DNS

These are the Cloudflare DNS endpoints for DNS over TLS with malware protection. You can substitute alternate resolvers.
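For reference, a complete forward-zone using Cloudflare’s published malware-blocking DNS-over-TLS resolvers might look like the sketch below. Verify the addresses against Cloudflare’s current documentation, and note that unbound also needs a CA bundle (the tls-cert-bundle option) configured to validate the certificate:

```
forward-zone:
    name: "."
    forward-tls-upstream: yes
    # Cloudflare malware-blocking resolvers, DNS over TLS on port 853;
    # the name after '#' lets unbound verify the TLS certificate
    forward-addr: 1.1.1.2@853#security.cloudflare-dns.com
    forward-addr: 1.0.0.2@853#security.cloudflare-dns.com
```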

Now, if I want, I can start unbound and change my network config to use localhost as the DNS provider.

sudo port load unbound
--->  Loading startupitem 'unbound' for unbound

$ sudo lsof -i :53
unbound 85991 unbound    4u  IPv6 0xe14e1013ac1fa599      0t0  UDP localhost:domain
unbound 85991 unbound    5u  IPv6 0xe14e1013da135451      0t0  TCP localhost:domain (LISTEN)
unbound 85991 unbound    6u  IPv4 0xe14e1013bac68b69      0t0  UDP localhost:domain
unbound 85991 unbound    7u  IPv4 0xe14e1013bb2dd361      0t0  TCP localhost:domain (LISTEN)

Now, I can change my DNS provider to localhost and my DNS queries will be resolved and cached by my local unbound instance and securely forwarded to Cloudflare over TLS.

This setup can mess with captive portals. You may need to remove the localhost DNS setting temporarily in order to authenticate to a guest WiFi system through their web page, and then turn it back on.

Configuring the postfix MTA to securely forward to a smarthost on macOS

macOS ships with postfix, but it is in a semi-disabled state. The launch daemon configuration provided doesn’t work and postfix will immediately exit.

What I want is a working local MTA that forwards mail securely to a smarthost for delivery. This is mostly useful when building and testing scripts and server applications that need to send mail. It is convenient to have the default MTA localhost:25 be in a working state.

Here’s the goal:

  • accept smtp connections on localhost:25 from localhost without credentials
  • relay mail (for my domain) to a smart host that has a static IP and a reputation that will make delivery possible
  • don’t have my credentials or mail snooped or intercepted
  • hopefully not be blocked by ISPs and middleboxes

I am using Amazon SES for my smarthost, but it could be Gmail or a corporate server. The details will be a little different depending on the smarthost.

I’m using Amazon SES in the us-east region.

My configuration is for macOS Catalina with MacPorts as my package manager and Amazon SES as my smarthost relay. The details are slightly different but the concepts are the same for other package managers and/or Linux.

Secure tunneling to a smarthost

Many ISPs and corporate networks will filter, intercept, or otherwise interfere with SMTP connections. They can also interfere with SMTP with StartTLS. With StartTLS, the connection is initially plaintext and gets upgraded to TLS if the server advertises it as a capability. This connection type is vulnerable to a downgrade attack called StripTLS, which prevents the encryption from being negotiated.
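To make the attack concrete, here is an abridged, hypothetical STARTTLS exchange. A StripTLS attacker in the middle simply filters the STARTTLS capability out of the server’s response, and a client that doesn’t insist on TLS carries on in plaintext:

```
S: 220 smtp.example.com ESMTP
C: EHLO client.example.com
S: 250-smtp.example.com
S: 250 STARTTLS            <- the line a man-in-the-middle strips out
C: STARTTLS
S: 220 Ready to start TLS
   ...TLS handshake; SMTP continues inside the encrypted tunnel...
```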

I have found that SMTPS, or SMTP over SSL, where the client first establishes a TLS connection and then performs SMTP commands and mail transfer through the resulting TLS tunnel, is the most reliable, secure connection type.

Unfortunately many MTAs — including Postfix bundled with macOS and Ubuntu — do not support SMTPS natively. My solution to this problem is to use stunnel to negotiate the SMTPS and map the remote smarthost to a local port, then configure Postfix to forward its mail there.

sudo port install stunnel

sudo vi /opt/local/etc/stunnel/stunnel.conf

#foreground = yes

#accept = 2525
client = yes
connect =

When running stunnel from a command-line to test things out you would want to uncomment the lines that are commented. But put the comments back in for running as a launch agent.

Then I create a launch agent configuration to start the stunnel connection whenever port localhost:2525 is requested, emulating classic inetd. The idea is that launchd listens on localhost:2525 and when it receives a connection will start stunnel and connect it to the port, but otherwise stunnel is not running. Launchd is the init process, so it is always running. On Linux, you would use a systemd unit or OpenRC script to do the same thing.

sudo vi /Library/LaunchAgents/org.macports.stunnel.plist

<plist version="1.0">

sudo launchctl load /Library/LaunchAgents/org.macports.stunnel.plist
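For reference, an inetd-style launchd job for stunnel might look something like this sketch. The MacPorts binary and config paths are assumptions to adapt; with inetdCompatibility, launchd owns the listening socket on localhost:2525 and hands each incoming connection to stunnel on stdin/stdout (stunnel runs in inetd mode when its config defines a service with no accept option):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.macports.stunnel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/local/bin/stunnel</string>
        <string>/opt/local/etc/stunnel/stunnel.conf</string>
    </array>
    <!-- inetd emulation: launchd listens and passes each connection
         to stunnel on stdin/stdout as it arrives -->
    <key>inetdCompatibility</key>
    <dict>
        <key>Wait</key>
        <false/>
    </dict>
    <key>Sockets</key>
    <dict>
        <key>Listeners</key>
        <dict>
            <key>SockNodeName</key>
            <string>127.0.0.1</string>
            <key>SockServiceName</key>
            <string>2525</string>
        </dict>
    </dict>
</dict>
</plist>
```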

Now my smarthost at Amazon SES is connected securely to my localhost port 2525 on demand.

Configuring postfix

I need to authenticate to SES, so I need a passwd database.

cd /etc/postfix
sudo mkdir sasl
# for SES, the username and password are AWS API key ID and value
echo " my-aws-key-id:my-aws-key-value" | sudo tee passwd
# now make a postfix database file
sudo postmap passwd
# now there should be a plaintext passwd file and a postfix passwd.db file

Now we need to set up postfix to relay mail to SES through localhost:2525 with authentication by editing /etc/postfix/main.cf.

At the end of the file we need something like this:

# your authorized domain; this may need to be edited somewhere farther up in the file
mydomain = 

inet_interfaces = loopback-only

relayhost = localhost:2525
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd

If postfix were running, we would do sudo postfix reload at this point.

Finally we can set up a launch daemon for postfix to get it running as a service. I used to just edit the launch daemon configuration provided by Apple to get postfix working but as of High Sierra that required disabling SIP and as of Catalina the file became part of the read-only system partition.

We need to create and load a launch daemon file in /Library/LaunchDaemons where we have read/write permissions.

sudo vi /Library/LaunchDaemons/org.postfix.master.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "">
<plist version="1.0">

sudo launchctl load /Library/LaunchDaemons/org.postfix.master.plist

Voilà, I can now use my local MTA to send mail and this works from almost anywhere.

Now, assuming that you are able to make an outbound connection to port 465, the authentication to the smarthost is correct, and that your domain is authorized with the smarthost, etc. things should be working.

If you want to use /usr/bin/mail you will need a valid mydomain in main.cf and possibly also aliases; see the postfix documentation for details.

SSH muddles the title

Normally, the macOS title bar includes the current directory name. When you connect to a remote host with openssh on macOS, the title bar gets updated to be “$(whoami)@$(hostname): $(pwd)” instead. Unfortunately when you exit ssh, the terminal title bar is not restored and continues to say you are on a remote host.

Once you see it, you can’t unsee it.

I’m sorry.

My solution is to use arcane escape sequences to reset the Terminal title every time bash generates a new prompt:

# Add to ~/.bashrc
# force reset of the current directory name in terminal title
# to reset it after SSH sessions end.
PROMPT_COMMAND='echo -ne "\033]0;$(basename ${PWD})\007"'

The incantation is slightly different but conceptually the same for zsh.

# Add to ~/.zshrc
function clear_term_title {
  # removes the text that ssh puts into the terminal title
  printf '\033]0;\007'
}
setopt PROMPT_SUBST
PROMPT='$(clear_term_title)%% '

Automating dotnet core SDK updates on Mac

I really enjoy working with dotnet core. It is fast, open source, and cross-platform. My preference these days for working with the .NET stack is to build dotnet core apps natively on Mac with SQL Server or PostgreSQL on Docker for Mac. We can then easily deploy Docker containers or in some cases dotnet core on an actual Debian or RHEL server with Nginx. ASP.NET 4.x still only runs on Windows Server and for that I use VMWare Fusion and deploy with Kudu.

On Linux, Microsoft provides package manager repos to maintain the dotnet core SDK, which is awesome. Microsoft also publishes a menu of Docker containers to build and run dotnet core apps. On Windows, the Visual Studio updater will install dotnet core updates for you. On Mac the same is true of Visual Studio for Mac.

But I have no particular use for Visual Studio for Mac. I use VS Code and vim. I don’t need to have Visual Studio for Mac just as a glorified package manager and I simply don’t like having things installed that I do not use. There may be a Homebrew way to manage the dotnet core SDKs but I’m not a Homebrew kind of guy and I put no effort into researching this.

Fortunately, Microsoft has provided a couple of useful scripts and it’s a pretty straightforward thing to script these together to maintain the LTS dotnet core SDKs.

sudo dotnet-upgrade-sdks

Problems solved. Now I need to figure out a solution to maintain mono since the Omnisharp intellisense engine in VS Code depends on mono.

Script to upgrade dotnet core LTS SDKs

# Get the MSFT uninstall script from GitHub:
# curl -sSL | sudo tee /usr/local/bin/dotnet-uninstall-pkgs > /dev/null
# sudo chmod +x /usr/local/bin/dotnet-uninstall-pkgs
# MSFT install script documented on
# curl -sSL | sudo tee /usr/local/bin/dotnet-install > /dev/null
# chmod +x /usr/local/bin/dotnet-install
# also set within
current_userid=$(id -u)
if [ $current_userid -ne 0 ]; then
    echo "$(basename "$0") requires superuser privileges to run" >&2
    exit 1
fi

# locations and tool paths; assumed defaults, adjust as needed
dotnet_install_root="/usr/local/share/dotnet"
dotnet_path_file="/etc/paths.d/dotnet"
dotnet_tool_path_file="/etc/paths.d/dotnet-cli-tools"
install_cmd="/usr/local/bin/dotnet-install"

# remove the installed SDKs, then reinstall the latest of each LTS channel
/usr/local/bin/dotnet-uninstall-pkgs
${install_cmd} --install-dir ${dotnet_install_root} --no-path --channel 2.1 # old LTS channel
${install_cmd} --install-dir ${dotnet_install_root} --no-path --channel LTS # current LTS channel

echo "adding dotnet and tools to path."
# created by pkg installers but not the install script
echo ${dotnet_install_root} > ${dotnet_path_file}
echo "~/.dotnet/tools" > ${dotnet_tool_path_file}
eval $(/usr/libexec/path_helper -s)

Install the script with its dependencies

curl -sSL \
| sudo tee /usr/local/bin/dotnet-uninstall-pkgs > /dev/null
sudo chmod +x /usr/local/bin/dotnet-uninstall-pkgs
curl -sSL \
| sudo tee /usr/local/bin/dotnet-install > /dev/null
chmod +x /usr/local/bin/dotnet-install
curl -sSL \
| sudo tee /usr/local/bin/dotnet-upgrade-sdks > /dev/null
chmod +x /usr/local/bin/dotnet-upgrade-sdks

2-Step Verification Code Generator for UNIX Terminal

I have been using a time-based one time password (TOTP) generator on my phone with my cloud-based accounts at Google, Amazon AWS, GitHub, Microsoft — every service that supports it — for years now. I have over a dozen of these and dragging my phone out every time I need a 2-factor token is a real pain.

I spend a lot of my time working on a trusted computer and I want to be able to generate the TOTP codes easily from that without having to use my phone. I also want to have reasonable confidence that the system is secure. I put together an old-school Bourne Shell script that does the job:

  • My OTP keys are stored in a file that is encrypted with gnupg and only decrypted momentarily to generate the codes.
  • The encrypted key file can be synchronized between computers using an untrusted service like Dropbox or Google Drive as long as the private GPG key is kept secure.
  • I’m using oathtool from oath-toolkit to generate the one-time code.

Pro Tip: Most sites don’t intend you to have more than one token that generates passwords. Their enrollment process typically involves scanning a QR Code to enroll a new private key into Google Authenticator or other OATH client. I always take a screen shot of these QR Codes and keep them stored in a safe place.


Save this script as an executable file in your path such as /usr/local/bin/otp.

#!/bin/sh
scriptname=`basename $0`

if [ -z "$1" ]; then
    echo "Generate OATH TOTP Password"
    echo ""
    echo "Usage:"
    echo "  $scriptname google"
    echo ""
    echo "Configuration: $HOME/.otpkeys"
    echo "Format: name:key"
    echo "Preferably encrypt with gpg --armor to create .otpkeys.asc"
    echo "and then delete .otpkeys"
    echo ""
    echo "Optionally set OTPKEYS_PATH environment variable"
    echo "with path to GPG encrypted name:key file."
    exit 1
fi

if [ -z "$(which oathtool)" ]; then
    echo "oathtool not found in \$PATH"
    echo "try:"
    echo "MacPorts: port install oath-toolkit"
    echo "Debian: apt-get install oathtool"
    echo "Red Hat: yum install oathtool"
    exit 1
fi

# locate the key database: $OTPKEYS_PATH, else ~/.otpkeys.asc, else ~/.otpkeys
if [ -z "$OTPKEYS_PATH" ]; then
    if [ -f "$HOME/.otpkeys.asc" ]; then
        otpkeys_path="$HOME/.otpkeys.asc"
    elif [ -f "$HOME/.otpkeys" ]; then
        otpkeys_path="$HOME/.otpkeys"
    fi
else
    otpkeys_path="$OTPKEYS_PATH"
fi

if [ -z "$otpkeys_path" ]; then
    >&2 echo "You need to create $HOME/.otpkeys.asc"
    exit 1
fi

if [ "$otpkeys_path" = "$HOME/.otpkeys" ]; then
    red='\033[0;31m'
    NC='\033[0m' # No Color
    >&2 echo "${red}WARNING: unencrypted ~/.otpkeys"
    >&2 echo "do: gpg --encrypt --recipient your-email --armor ~/.otpkeys"
    >&2 echo "and then delete ~/.otpkeys"
    >&2 echo "${NC}"
    otpkey=`grep "^$1:" "$otpkeys_path" | cut -d":" -f 2 | sed "s/ //g"`
else
    otpkey=`gpg --batch --decrypt "$otpkeys_path" 2> /dev/null | grep "^$1:" | cut -d":" -f 2 | sed "s/ //g"`
fi

if [ -z "$otpkey" ]; then
    echo "$scriptname: TOTP key name not found"
    exit 1
fi

oathtool --totp -b "$otpkey"

In order to use my script you need to already have gnupg installed and configured with a private key.

You then need to create a plain text file that contains key:value pairs. Think of these as an associative array or dictionary where the lookup key is a memorable name and the value is a base32 encoded OATH key.
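For illustration, here is a made-up key file and the grep/cut/sed pipeline the otp script uses to look a key up by name. The names and base32 keys below are fake placeholders; substitute your own:

```shell
#!/bin/sh
# write a made-up sample key file (name:key pairs; the keys here are fake)
cat > /tmp/otpkeys.sample <<'EOF'
google: JBSW Y3DP EHPK 3PXP
fake: GEZD GNBV GY3T QOJQ
EOF

# the lookup the otp script performs: match the name, take the key, strip spaces
otpkey=$(grep "^fake:" /tmp/otpkeys.sample | cut -d":" -f 2 | sed "s/ //g")
echo "$otpkey"   # GEZDGNBVGY3TQOJQ
```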



Encrypt this file of name and key associations with gpg in ASCII-armor format with yourself as the recipient and save the output file as ~/.otpkeys.asc.

$ gpg --encrypt --armor --recipient your-email otpkeys
$ mv otpkeys.asc ~/.otpkeys.asc

Now the script will start working. For example, generate a code for the “fake” key in the sample file (your result should be different as the time will be different):

$ otp fake

Extracting keys from QR Codes

At this point you may be thinking, “OK, but how the hell do I get the OTP keys to encrypt into the .otpkeys file?”

The ZBar project includes a binary, zbarimg, which will extract the contents of a QR Code as text in your terminal. The OATH QR Codes contain a URL, and a portion of that is an obvious base32 string that is the key. On rare occasions, you may need to pad ‘=’ characters onto the end of the string to make it a valid base32 string that works with oathtool, because oathtool is picky.
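Base32 strings decode in 8-character groups, so the padded length must be a multiple of 8. A minimal sketch of the padding step, using a made-up key:

```shell
#!/bin/sh
key="JBSWY3DPEHPK3PX"   # 15 characters; a made-up example key

# append '=' until the length is a multiple of 8
while [ $(( ${#key} % 8 )) -ne 0 ]; do
    key="${key}="
done

echo "$key"   # JBSWY3DPEHPK3PX=
```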

My favorite package manager for OS X, MacPorts, doesn’t have ZBar so I had to build it from source. Homebrew has a formula for zbar. If you are using Linux, it is probably already packaged for you. Zbar depends on ImageMagick to build. If you have ImageMagick and its library dependencies, Zbar should build for you. Clone the Zbar repo with git, check out the tag for the most recent release — currently “0.10” — and build it.

$ git clone
$ cd Zbar
$ git checkout 0.10
$ make
$ sudo make install

Once you have ZBar installed, you should have zbarimg in your path and you can use it to extract the otpauth URL from your QR Code screenshot.

$ zbarimg ~/Documents/personal/totp-fake-aws.png 
scanned 1 barcode symbols from 1 images in 0 seconds

I hope you already have screen shots of all your QR Codes or else you will need to generate new OTP keys for all your services and take a screen shot of the QR Code this time.

Syncing OTP key file with other computers

The otp script looks for an environment variable, OTPKEYS_PATH. You can use this to move your otp key database to a location other than ~/.otpkeys.asc. For example, put it in Google Drive and point the otp script to it by setting OTPKEYS_PATH in ~/.bashrc.

#path to GPG-encrypted otp key database
export OTPKEYS_PATH=~/Google\ Drive/otpkeys.asc

Now you can generate your OTP codes from the terminal in your trusted computers whenever you need them and enjoy a respite from constantly dragging your phone out of your pocket.

DIY Google Hangouts Client Independent of Chrome Browser

Hangouts Logo

Safari, Chrome and Battery Life

Remember when Chrome was new and fast and light and minimalist? The name Chrome was meant as an in-joke on the UX jargon chrome, meaning the frame around an app. Chrome was just a frame to view the web. Those days are long gone. Now that Chrome has a plurality market share, Google is positioning it as an enhanced web experience, just like Microsoft did with IE. Chrome is a great browser, but it also wants to be an operating system that has its own launcher and app ecosystem. It literally is an operating system when packaged as Chrome OS. Chrome is a large application these days.

With the power management improvements and battery shaming Apple built into OS X 10.9 and 10.10, it has become clear to me that Chrome requires a lot of power and memory to run. Running Chrome with only core Google plugins and extensions for Hangouts and Drive, I get about 2 hours less battery life on my 2014 15″ MacBook Pro 11,2. To put that in perspective, it is in the same ballpark as what I lose if I fire up VMWare to run Windows Server 2012 R2 with Visual Studio. Running Chrome is literally a similar workload to running a hypervisor with a whole other operating system.

Enough with the Extensions

Transitioning away from Chrome is not easy, especially if you get hooked on the extension and app ecosystem. Without my even realizing it, I left the Open Web and moved into Google’s Web. I hadn’t really paid attention, but it turns out that the extensions themselves each consume a lot of resources, and I have run into extensions that monetize with sneaky tricks. My first step to wean myself off this cesspool was to go on an ascetic extension diet. In Chrome, I have two extensions:

In Safari and Firefox, I only have the Adblock Plus extension and nothing else.

(AdBlock Plus has become a controversial topic because of their extortion of big sites as a monetization strategy. I’ve turned off “acceptable ads” and I don’t want to see any ads. If it wasn’t AdBlock Plus, I would use something else and have done so in the past. This may make me a bad person. I don’t care. The ad networks are now a malware vector and the quantity of the ads is overwhelming. The internet needs a new monetization strategy.)

Hangouts and XMPP/Jabber

The extension diet caused me a problem because it killed Hangouts. We use Hangouts at my company so that’s a problem. I tried using the XMPP/Jabber protocol gateway to Hangouts but it is unsatisfactory:

  • The Jabber client stream doesn’t include any messages sent or received when Jabber is not connected
  • Jabber gets disconnected all the time
  • Voice and Video don’t work, although they used to when Hangouts was Google Talk
  • Google Voice voicemail messages are not delivered to Jabber
  • Google Voice SMS integration doesn’t work

So basically the XMPP gateway for Hangouts sucks.

Roll Your Own With Fluid

It turns out that there is a Hangouts page on Google+. This page works in Chrome but also in Safari and Firefox. Pretty much everything in the Hangouts works. The only problem is that I can’t remember to open a browser window and point it there.

If you kind of squint at the Hangouts Google+ page, it kind of looks like a cross between the Hangouts Chrome extension and the Hangouts Chrome app for Windows, but with a bunch of other crap in there too. I got the idea that I could get something similar to the Hangouts App for Chrome for Windows and Chrome OS on OS X if I used Fluid to roll my own native app wrapper for Hangouts. Fluid is a tool for generating WebKit site wrapper apps and it works pretty well to solve my Hangouts problem.

  • Chat history works
  • SMS and Voicemail works
  • Voice and Video works
  • It does everything that I want it to do
  • I can even pop out chats in and out of a tab or new window

Screen Shot 2015 02 11 at 12 19 25 PM
Screen Shot 2015 02 11 at 12 24 11 PM

Recipe

Fluid is a pretty geeky tool, but the recipe to create a Hangouts app is pretty simple. At the most basic level, you can just create a new Fluid app pointed at the Hangouts Google+ page and be done. It will not work correctly until you set the user agent string for your new Fluid app to Safari 7, but once you do that, it will work fine. You can use the Hangouts logo at the top of this article for the Dock icon.

By default, Fluid apps use Safari’s cookies and will load Safari plugins. That means my Fluid app Just Works™: I am logged in by my Google Apps token in Safari. The Google Voice and Video plugin that I installed for Chrome also works in Safari and in the Fluid app to enable voice and video.

If you want to keep Hangouts open, even if you close the window, then in the Preferences go to Behavior and select “Closing the last browser window: only hides the window”.

If you want a more minimalist standalone-app look, it is mostly a matter of hiding elements with some custom CSS injection in the Window > Userstyles menu.

Pattern: **hangouts*

    div.Ege.qMc { display: none; }
    div#gbq { display: none; }
    div.gb_8.gb_Sc.gb_i.gb_Rc.gb_Qc { display: none; }
    div.ona.Fdb.csa { display: none; }
    div.Dge.fOa.vld { display: none; }
    div.Ima.dacD0d { display: none; }
    div.Bdc.FQb { display: none; }

And for a little slickness, add a little Userscript to fix the logo link so it links to /hangouts and pop out the buddy list by default as shown in my screenshots.

Pattern: **hangouts*

    var i = 0,
        a = document.getElementsByClassName('gb_Wa gb_Ra'); //home logo link

    // point the logo link at /hangouts (loop body assumed; the original was truncated)
    for (i = 0; i < a.length; i++) {
        a[i].href = '/hangouts';
    }

    window.onload = function() {
        setTimeout(function() {
            var j, h = document.getElementsByClassName('qoeSyc uoNTwd'); //hangouts buddy list icon element
            for (j = 0; j < h.length; j++) {
                h[j].click(); //open the buddy list
            }
        }, 3000);
    };
Overall, I’m pretty pleased with how this turned out. I’m able to easily control my logged-in status on Hangouts by launching or exiting the app from my Dock. All the key features of Hangouts that I use work.


These instructions are now obsolete. Google has created a standalone website for Hangouts. This site works great as a Fluid app without having to do any of the javascript and css hacks described above.

Screen Shot 2015 08 20 at 10 36 32 AM

I Modified vpnc Cisco VPN Client to Use OS X Native User Tunnels

OS X Built-in Cisco IPSec VPN Sucks

My company works with a client site that uses a Cisco ASA-based IPSec VPN for remote access. I’m not a fan. Theoretically, there is a Cisco IPSec VPN client built into OS X based on racoon(8) from the KAME project, which ended in 2006. We’ve been trying to use it since OS X 10.8 without a lot of success. What the OS X IPSec GUI does is dynamically generate racoon config files and invoke the racoon binary as root for you when you click connect. It doesn’t work as well as one would hope.

  • Routes for subnets within the vpn are not reliably configured
  • The VPN disconnects randomly
  • The built-in VPN client does not reconnect automatically
  • I have to authenticate with a password manually every time I want to connect
  • I also hate the bizarre rectangle with some vertical lines VPN status icon in the menu bar

Other Options

OK, not all of these are significant issues; but the random disconnects and unreliable subnet route configuration are serious problems. One option is to obtain the latest source code for racoon(8), build it, and either replace the system version or run a parallel one. I’m not even sure that this is racoon’s fault. I have a suspicion that Cisco is doing something that is not strictly standards compliant. In any case, just building my own racoon doesn’t imply it would work any better. I would probably have to get deep in the weeds of esoteric debugging to find what is actually causing my problems and figure out a workaround — because I can’t change the Cisco server software. Another option is strongSwan. I have to be honest, strongSwan looks like a huge complex package to try when other options don’t work. Finally, there is vpnc, which was actually where I started, because I used the cisco-decrypt binary from vpnc to deobfuscate the enc_GroupPwd field from the .pcf configuration file for the Cisco VPN client in order to provide the Shared Secret field for the OS X Cisco VPN client configuration.

It also happens that I’m a MacPorts user and vpnc is a package in MacPorts while strongSwan and racoon are not (although strongSwan is available these days through HomeBrew). Since I had already installed vpnc in order to get cisco-decrypt, I tried out vpnc as an alternative to racoon. It turned out that it didn’t totally eliminate the disconnects, but it would generally stay connected for a really long time as long as the network was up, and since the config file for vpnc stores the VPN password I didn’t have to remember it and type the damn thing in all the time. Then I had a brainwave that I could use launchd to run it in foreground mode (--no-detach) with the KeepAlive option. That way, whenever a disconnect did happen, the vpnc process would exit and launchd would restart it, reconnecting the tunnel automatically.


Yosemite Throws a Wrench in TunTapOSX

VPNC relies on tuntaposx to provide character devices for attaching network tunnels or taps to. On OS X, this is a 3rd party kernel extension or kext. In OS X 10.10 Yosemite, only signed kernel extensions will load by default. Vpnc stopped working when I upgraded to Yosemite.

Remediation A

The simplest fix is to put Yosemite into kernel extension developer mode which essentially reverts the behavior to Mavericks and previous so that unsigned kexts load and work by setting a kernel boot argument and rebooting.

sudo nvram boot-args="kext-dev-mode=1"

This can be reverted thusly:

sudo nvram -d boot-args

Remediation B

The idea of requiring signed kexts seems like a good security measure. It limits the ability for anyone evil to inject code into the kernel, so I would prefer to keep it. It turns out that kext signing requires a special dispensation from Apple. Just a normal OS X developer account doesn’t cut it. You need to ask for a kext signing certificate and they have to approve it. I didn’t even try. I did try using the signed tuntaposx kexts out of Tunnelblick and that works but I also had a kernel panic happen while doing that. I don’t know that it was related but be forewarned.

Final Solution

I noticed that OpenVPN kept working when vpnc stopped. When I first started using OpenVPN it required tuntaposx in order to work, just like vpnc. I also noticed that OpenVPN routes were showing up on utun devices rather than tun. Intrigued, I looked into this a little deeper. It turns out that the xnu kernel in Darwin 10 shipped a feature called Native User Tunnels as part of OS X 10.6 Snow Leopard, which I think came out of NetBSD and that very same KAME project that spawned racoon. Native User Tunnels is a kernel feature that allows binding a tunnel to a special socket and reading and writing to it with standard socket APIs. At some point OpenVPN started using this feature on OS X and I wanted vpnc to do the same.

XNU Native User Tunnels for VPNC

I grafted in native user tunnel support in vpnc. You can build it yourself. The pre-requisites are libgpg-error, libgcrypt and libgnutls (libgpg-error is also a prerequisite for libgcrypt). The build process also needs pkg-config to find where the libraries are installed. Once you have those, such as port install libgcrypt gnutls pkg-config you can clone my repo and build vpnc. My version of vpnc patched for native user tunnels will still use tuntaposx for tap-based vpns or if you have more than 256 utun devices in use.

git clone
cd vpnc
sudo make install

The binaries install to /usr/local/sbin by default. You configure your default VPN connection in /etc/vpnc/default.conf. See man vpnc for details.

A sample config file might look something like this where you supply the parts delimited with ** (note the ** is not part of the file format but part of what you replace).

IPSec gateway **vpn-server-hostname-or-ip**
IPSec ID **GroupName-from-.pcf**
IPSec secret **output-of-cisco-decrypt-here**
IKE Authmode psk
Xauth username **my-corporate-username**
Xauth password **super-secure-password**
NAT Traversal Mode cisco-udp
DPD idle timeout (our side) 0

Controlling VPNC with Launchd

I mentioned earlier that the happy scenario of using vpnc was combining it with launchd so that launchd would automatically restore your vpn tunnel after it is disconnected for any reason.

Place com.wolfereiter.vpnc.plist in /Library/LaunchDaemons.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<!-- NetworkState key is no longer implemented in OS X 10.10 Yosemite.
</dict> -->
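For reference, a complete daemon definition might look something like this sketch; the vpnc path, flags, and log path are assumptions to adapt:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.wolfereiter.vpnc</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/vpnc</string>
        <!-- stay in the foreground so launchd can supervise the process -->
        <string>--no-detach</string>
        <string>/etc/vpnc/default.conf</string>
    </array>
    <!-- restart vpnc whenever it exits, reconnecting the tunnel -->
    <key>KeepAlive</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/var/log/vpnc/vpnc.log</string>
</dict>
</plist>
```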

Create a directory to hold the log file defined in this plist.

sudo mkdir /var/log/vpnc

Create vpnc.conf in /etc/newsyslog.d to clean up old logs.

# logfilename [owner:group] mode count size when flags [/pid_file] [sig_num]
/var/log/vpnc/*.log 644 3 1000 * J

Create vpnc-start script in /usr/local/bin.

#!/bin/sh
PLIST=/Library/LaunchDaemons/com.wolfereiter.vpnc.plist

if [ "$(id -u)" -ne 0 ]; then
    SELF=`echo $0 | sed -ne 's|^.*/||p'`
    echo "$SELF must be run as root." 1>&2
    echo "try: sudo $SELF" 1>&2
    exit 1
fi

# pull the config path out of the plist, then the gateway out of the config
CONF=`grep \.conf $PLIST | sed 's/<[^>]*>//g' | tr -d " \t"`
GATEWAY=`grep gateway $CONF`
ERROR=$( { /bin/launchctl load -w $PLIST; } 2>&1 )
if [ -z "$ERROR" ]; then
    echo "starting vpnc daemon connection to $GATEWAY."
else
    echo $ERROR
fi

Create vpnc-stop script in /usr/local/bin.

#!/bin/sh
PLIST=/Library/LaunchDaemons/com.wolfereiter.vpnc.plist

if [ "$(id -u)" -ne 0 ]; then
    SELF=`echo $0 | sed -ne 's|^.*/||p'`
    echo "$SELF must be run as root." 1>&2
    echo "try: sudo $SELF" 1>&2
    exit 1
fi

# pull the config path out of the plist, then the gateway out of the config
CONF=`grep \.conf $PLIST | sed 's/<[^>]*>//g' | tr -d " \t"`
GATEWAY=`grep gateway $CONF`
ERROR=$( { /bin/launchctl unload -w $PLIST; } 2>&1 )
if [ -z "$ERROR" ]; then
    echo "stopping vpnc daemon connection to $GATEWAY."
else
    echo $ERROR
fi

Once you have everything set up and a working default.conf in /etc/vpnc, then you can use the vpnc-start command to launch vpnc in the background and vpnc-stop to close the tunnel. Once vpnc-start is invoked, launchd will keep it running through sleep/wake and moving around between wired and wireless connections. Whatever.

$ sudo vpnc-start
starting vpnc daemon connection to IPSec gateway
$ sudo vpnc-stop
stopping vpnc daemon connection to IPSec gateway

Happy tunneling.
