Set up a dotnet core development environment with VS Code, MacPorts, and Docker

Install and configure MacPorts

While a package manager is not strictly necessary to get a dotnet core development environment up and running, it is extremely useful to have a single tool to update utilities rather than manually discovering what needs updating and downloading it by hand. It’s also useful for setting up a command line environment consistent with Linux. You can use my step-by-step guide to set up MacPorts.

To sum up:

  • Install MacPorts
  • Set up paths in /etc/paths and /etc/manpaths
  • Set up /etc/shells
  • Install Linux-flavor core tools: sudo port install bash bash-completion coreutils findutils grep gnutar gawk wget
  • Set up ~/.bash_profile and ~/.bashrc

Install mono and mssql-tools

The intellisense engine for C# dotnet core projects in VSCode on Mac and Linux doesn’t use the dotnet core compilers; it uses mono. Microsoft has also provided ports of the SQL Server tools, sqlcmd and bcp.

Visual Studio for Mac also depends on mono. If you have and use Visual Studio for Mac, you don’t need to install mono here. On the other hand, if you have an abandoned installation of Visual Studio for Mac you may want to remove it and start over. I have instructions for uninstalling Visual Studio for Mac at the end of this document.

sudo port install mono mssql-tools

Install dotnet core SDK

MacPorts does not manage installs of the dotnet core SDKs, but Microsoft does offer scripts to install and uninstall them. I created a simple shell script that combines these to maintain the 2.1 and 3.1 LTS SDKs.

To sum up, install two scripts from Microsoft and one from me into /usr/local/bin:

curl -sSL \
    | sudo tee /usr/local/bin/dotnet-uninstall-pkgs > /dev/null
sudo chmod +x /usr/local/bin/dotnet-uninstall-pkgs
curl -sSL \
    | sudo tee /usr/local/bin/dotnet-install > /dev/null
sudo chmod +x /usr/local/bin/dotnet-install
curl -sSL \
    | sudo tee /usr/local/bin/dotnet-upgrade-sdks > /dev/null
sudo chmod +x /usr/local/bin/dotnet-upgrade-sdks

Once those three scripts are in place, you can install or upgrade the dotnet 2.1 LTS and dotnet 3.1 LTS SDKs to the latest versions. If you have previous versions of the dotnet SDKs installed, the script will remove them cleanly.

sudo dotnet-upgrade-sdks
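For reference, here is a minimal sketch of what such an upgrade wrapper could contain. This is an illustration, not necessarily the author’s actual script; it assumes the standard --channel and --install-dir flags of Microsoft’s dotnet-install script and the default macOS install location.

```shell
#!/bin/sh
# Sketch: remove existing SDK packages, then reinstall the current LTS SDKs.
dotnet-uninstall-pkgs
for channel in 2.1 3.1; do
    dotnet-install --channel "$channel" --install-dir /usr/local/share/dotnet
done
```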

After the script completes, you should have two dotnet core SDKs.

$ dotnet --list-sdks
2.1.804 [/usr/local/share/dotnet/sdk]
3.1.301 [/usr/local/share/dotnet/sdk]

Install dotnet global tools

The dotnet command is extensible. Commands added to dotnet are called “tools” which can be installed in a project or globally for your account. There are a large number of tools available, but two key ones are the dotnet-ef and libman tools.

The dotnet-ef tool is the Entity Framework Core tool for generating and managing migrations. The libman tool manages JavaScript library packages in an aspnet core project as a replacement for bower, which is no longer maintained, and it isn’t tied to nodejs.

dotnet tool install --global dotnet-ef
dotnet tool install --global Microsoft.Web.LibraryManager.Cli

Unfortunately dotnet tool doesn’t have an update-all command, but it is straightforward to pipe the output of the list command to the update command.

curl -sSL \
    | sudo tee /usr/local/bin/dotnet-tool-update-all > /dev/null
sudo chmod +x /usr/local/bin/dotnet-tool-update-all
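The body of such a script might look something like this sketch (my reconstruction, not a published Microsoft script): skip the two header lines that dotnet tool list prints, take the package id from the first column, and feed each id to dotnet tool update.

```shell
#!/bin/sh
# Sketch of dotnet-tool-update-all: update every installed global tool.
# `dotnet tool list --global` prints two header lines, then one row per tool.
dotnet tool list --global | tail -n +3 | awk '{ print $1 }' |
while read -r tool; do
    dotnet tool update --global "$tool"
done
```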

You can now ensure your global tools are current:

dotnet-tool-update-all
bash-completion for dotnet

You can set up tab completions for dotnet.

In order for this to work you have to have installed bash-completion, which should have been part of your MacPorts setup.

sudo port install bash bash-completion

code ~/.bashrc

if [ -f /opt/local/etc/profile.d/ ]; then
  . /opt/local/etc/profile.d/
fi

_dotnet_bash_complete() {
  local word=${COMP_WORDS[COMP_CWORD]}

  local completions
  completions="$(dotnet complete --position "${COMP_POINT}" "${COMP_LINE}" 2>/dev/null)"
  if [ $? -ne 0 ]; then
    completions=""
  fi

  COMPREPLY=( $(compgen -W "$completions" -- "$word") )
}

#enable command-line completion for dotnet
complete -f -F _dotnet_bash_complete dotnet

Install VSCode

Download and install VSCode from the Visual Studio Code website. Once you install it, VSCode is self-updating.

The first time you launch VSCode, you need to manually install the code command.

Press cmd + shift + p, type “install code”, and select the “Install ‘code’ command in PATH” option.

What this does is create a symbolic link at /usr/local/bin/code pointing to the code command inside the VSCode app bundle.

$ ls -l `which code`
lrwxr-xr-x 1 breiter wheel 68 Jun 27  2017 /usr/local/bin/code -> '/Applications/Visual Studio'

Or alternatively, you could do this old-school by hand:

ln -s '/Applications/Visual Studio' /usr/local/bin/code

At this point, I would recommend that you install a handful of extensions to make C# and aspnet core intellisense and debugging work.

VSCode extensions you really need

# C# XML comments
code --install-extension k--kato.docomment
# CSS support in HTML (and Razor) documents
code --install-extension ecmel.vscode-html-css
# C# language and debugging support (from Microsoft)
code --install-extension ms-dotnettools.csharp

Some additional nice VSCode extensions

# bookmarks
code --install-extension alefragnani.Bookmarks
# alignment
code --install-extension annsk.alignment
# gitlens
code --install-extension eamodio.gitlens
# Docker
code --install-extension ms-azuretools.vscode-docker
# SQL Tools (nice client for various DB engines)
code --install-extension mtxr.sqltools
# Spell check in code
code --install-extension streetsidesoftware.code-spell-checker
# syntax and intellisense for .csproj files
code --install-extension tintoy.msbuild-project-tools

Install Azure Data Studio

Azure Data Studio is the open source, cross-platform, spiritual successor to Query Analyzer. It’s a dedicated SQL Server (and PostgreSQL) client based on a fork of VSCode that is much more lightweight than SQL Server Management Studio. Download Azure Data Studio from GitHub.

Recommended extensions:

  • Admin Pack for SQL Server
  • PostgreSQL

Install Docker for Mac

In order to run SQL Server on Mac, you need Docker for Mac. I also find it more convenient to run PostgreSQL in Docker than on the base OS. In addition, you need Docker to build and run Docker images and push them to a repository. VMware Fusion 11.5.5 has implemented a dockerd runtime, and there are other virtualization methods to get Docker running on a Mac, but by far the most straightforward is Docker Desktop for Mac.

Docker Desktop for Mac integrates with the macOS Hypervisor.framework using HyperKit, an enhanced fork of the bhyve hypervisor from FreeBSD that is maintained by Docker as part of the Moby project. It works very well.

Download and install the “stable” version from the Docker website.

Install an HTTP protocol debugger

  • Charles proxy is a cross-platform HTTP protocol debugger built on Java that works on Windows, macOS, and Linux.
  • Proxyman is a new macOS native HTTP protocol debugger.
  • Fiddler is the de facto standard free HTTP protocol debugger on Windows, built on .NET Windows Forms. Telerik also has “Fiddler Everywhere”, a cross-platform rebuild that is currently in beta.
  • mitmproxy is a command-line, open source HTTP debugger built on Python. It works but is more difficult to use than the commercial options above. sudo port install py-mitmproxy
  • Wireshark is the de facto standard TCP/IP protocol analyzer. sudo port install wireshark3 +qt

I haven’t tried Proxyman or Fiddler Everywhere. I have used mitmproxy and I would recommend paying for one of the GUI options. I use Charles and Wireshark regularly.

Visual Diff/Merge

I confess that I have a bit of a collection of these tools going. My general purpose favorite is Beyond Compare from Scooter Software. I also have Kaleidoscope and Sublime Merge.

I use smerge to browse git repos and to resolve merge conflicts in git. Sublime Merge is wicked fast, if somewhat inscrutable.

I use Kaleidoscope primarily for the git difftool command because it loads a multi-file diff into a set of tabs, whereas Beyond Compare loads each file sequentially, popping a new diff window after you close the previous one.

diff and merge settings from my global ~/.gitconfig:

[diff]
    tool = Kaleidoscope
[merge]
    tool = smerge
[mergetool]
    keepBackup = false
[difftool "bc3"]
    trustExitCode = true
[mergetool "bc3"]
    trustExitCode = true
[difftool "smerge"]
    cmd = smerge mergetool --no-wait \"$LOCAL\" \"$REMOTE\" -o \"$MERGED\"
    trustExitCode = true
[mergetool "smerge"]
    cmd = smerge mergetool \"$BASE\" \"$LOCAL\" \"$REMOTE\" -o \"$MERGED\"
    trustExitCode = true
[difftool "Kaleidoscope"]
  cmd = ksdiff --partial-changeset --relative-path \"$MERGED\" -- \"$LOCAL\" \"$REMOTE\"
[mergetool "Kaleidoscope"]
  cmd = ksdiff --merge --output \"$MERGED\" --base \"$BASE\" -- \"$LOCAL\" --snapshot \"$REMOTE\" --snapshot
  trustExitCode = true

BONUS: Cloud service clients

AWS and Azure have CLI interfaces, aws and az respectively, to automate actions. Both are based on python3.

Install AWS command-line client

sudo port install py-awscli

Edit ~/.bashrc and add aws cli completions after initializing bash completion.

#enable command-line completion for aws
complete -C aws_completer aws

Install Azure command-line client

For some reason, the Azure team is all in on Homebrew for macOS; they are sitting on an open community request for a .pkg package and have closed a request to add a MacPorts package.

The only viable option outside of Homebrew is to use a Docker container or their shell script installer for Linux.

The CLI requires the following software:

  • Python 3.6.x, 3.7.x, or 3.8.x
  • libffi
  • OpenSSL 1.0.2

Make sure you have the pre-requisites.

$ port installed|egrep '^\s+(libffi|python3|openssl)'
  libffi @3.2.1_0 (active)
  openssl @1.1.1g_0 (active)
  python3_select @0.0_1 (active)
  python38 @3.8.3_0 (active)

Also, I’ve looked at the script and it assumes that GNU coreutils are in the path. You need to have set up a Linux-style environment with MacPorts, with coreutils in the path replacing the BSD versions shipped by Apple, for this to work. It might work if you just have the md5sha1sum package installed instead, but keep in mind this script was designed for a Linux + GNU environment.

curl -L | bash

Follow the prompts. It should all work. The script from Microsoft will install or update az.

I saved this as /usr/local/bin/install-azure-cli.

curl -sSL \
  | sudo tee /usr/local/bin/install-azure-cli > /dev/null
sudo chmod +x /usr/local/bin/install-azure-cli

BONUS: Configure postfix smtp relay

It is not strictly necessary, but I often find it useful to have a local MTA that works. Follow my guide to set up postfix in macOS to accept mail on your local machine on port 25 and relay it through a smart host such as SES or GMail.

BONUS: Clean uninstall of Visual Studio for Mac and Mono.framework

uninstall-vsmac script

Uninstall Visual Studio for Mac.

uninstall-mono script

Uninstall mono installed by the .pkg Mono installer.

Why and how to set up MacPorts package manager for macOS

Package managers for macOS

Do I need a package manager? If you never open a terminal, the answer is definitely no. macOS is fully functional out of the box with the software shipped by Apple. The base set of command line applications available in a default install is also good enough for poking around and getting started with learning the UNIX system. You need a package manager when you want to install UNIX tools that Apple doesn’t bundle, or newer or different versions of tools that they do.

A package manager helps you to download, possibly compile, install, and update tools in the UNIX environment in macOS. Alternatively you can download and install things by hand, possibly configuring and compiling them by hand.

There are three main package managers for macOS:

  • Homebrew
  • pkgsrc
  • MacPorts
Homebrew is currently the most popular of these, but it is “too clever by half”. My issue is primarily that it works by taking over /usr/local/bin and changing the permissions on that directory. This is a security problem, but it also conflicts with the conceptual purpose of /usr/local/bin as the directory where I install programs myself. If Homebrew messes up or gets broken, it can be a big mess to clean up without breaking anything that doesn’t belong to Homebrew.

Homebrew will also help you to install things that it doesn’t control and cannot update, which I don’t think it should do. I also find its beer metaphors of casks and cellars overly cute.

Homebrew is popular so it is probably the lowest friction option despite my criticism. You can also use it to automate installing apps from the App Store, commercial software, and UNIX utilities. This can be helpful if you set up a new Mac frequently or have a standard config to push out. I have heard that GitHub uses Homebrew for this.


Pkgsrc comes from NetBSD. It is the standard package manager for NetBSD and SmartOS. Packages for Red Hat Enterprise Linux / CentOS, macOS, and SmartOS are maintained by Joyent. The packages are mostly pre-built binaries and pkgsrc is fast and works well. All of the packages are installed into /opt/pkg which means they are safely isolated from your base system. If somehow you borked up pkgsrc, just rm -fr /opt/pkg and install it again. If you want to get rid of pkgsrc, just rm -fr /opt/pkg and go on with life.

On the downside, all of the GUI packages for macOS are built for X rather than Quartz and the repository is smaller than Homebrew and MacPorts.

If you work in an environment with some combination of RHEL, SmartOS, and macOS then you should consider strongly standardizing on pkgsrc. For example on RHEL, instead of adding EPEL and IUS you can just not install anything on top of the base system with yum/dnf and only use yum/dnf for updating the base system. Then use pkgsrc to install all of the additional software. Then you can enjoy a very similar configuration and maintenance stack across your server and workstation fleet.


MacPorts (née DarwinPorts) was originally created by engineers working in the Apple UNIX engineering team as part of the OpenDarwin project. It came out around the same time as OS X 10.2 Jaguar. Darwin is the open source underpinning of macOS and consists of the xnu kernel plus the BSD subsystem. MacPorts was hosted by Apple on MacOS Forge but has subsequently moved to GitHub.

MacPorts used to be the de facto standard for installing open source packages on OS X for a long time until it was dethroned by Homebrew. At the time, MacPorts was criticized for wasting time and space by installing its own dependencies rather than linking to the ones from Apple. It also used to install everything by compiling from source and still does compile from source quite a bit, which can be slow. Like pkgsrc, MacPorts installs into its own sandbox: /opt/local where it can’t hurt anything and can be easily discarded. MacPorts has a variants system that lets you choose a lot of granular options when installing packages. For example, you can have GUI apps built against the native Quartz window manager whenever possible. It has a huge library of ports that are community maintained and reliable. There are problems occasionally, but they are sorted out quickly.

I’ve used all three of these systems, but have settled on MacPorts as my preference for a combination of practical and aesthetic reasons.

Setting up MacPorts


Before installing MacPorts, you need to install Xcode, from Apple’s developer site or the App Store, and the Xcode command line tools. Once you have installed Xcode, open a terminal and run this command to install the command line tools:

xcode-select --install

Installing MacPorts from pkg or source

You can now probably head over to the MacPorts website and download a .pkg installer for your version of macOS. If you are using a beta of a new release or the hot, fresh bits of a .0 release, the .pkg may not be available and you will have to build from source. Either download the tarball and unpack it, or clone the git repo and check out the current release tag.

# use actual latest tarball
curl -O
tar xf MacPorts-2.6.2.tar.bz2


git clone
git checkout v2.6.2 # or whatever is the highest version number without a -beta or -rc suffix

Whichever way you got the source code, enter the directory in your terminal, then configure, build, and install.

./configure
make
sudo make install

Now you have a /opt/local directory and a port command.

Configure options


Set default variant options. I have not used X11 on macOS in years. I like to disable X and enable Quartz by default. I also like to add bash completion scripts whenever they are available.

sudo vi /opt/local/etc/macports/variants.conf

-x11 +no_x11 +quartz +bash_completion

If you live outside of the USA, it can be a significant speedup to change to a local mirror. I am using one in South Africa.


In macports.conf, set the rsync_server and rsync_dir to match your alternate mirror.

sudo vi /opt/local/etc/macports/macports.conf

# The rsync server for fetching MacPorts base during selfupdate. This
# setting is NOT used when downloading the ports tree; the sources for
# the ports tree are set in sources.conf. See
# for a list of
# available servers.

# Location of MacPorts base sources on rsync_server. If this references
# a .tar file, a signed .rmd160 file must exist in the same directory
# and will be used to verify its integrity. See
# to find the
# correct rsync_dir for a particular rsync_server.
#rsync_dir              release/tarballs/base.tar
rsync_dir               macports/release/tarballs/base.tar

In sources.conf change the path to your local mirror.

sudo vi /opt/local/etc/macports/sources.conf

#rsync:// [default]
rsync:// [default]


I like to have my path searched in this order:

  1. stuff I installed manually
  2. MacPorts
  3. macOS base system

MacPorts will stick itself into your PATH in your shell profile, which is a good default to make it work, but I prefer to handle this more systematically in a central location.

Edit the system default path:

sudo vi /etc/paths
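As a sketch, an /etc/paths arranged in that order might look like the following. The gnubin entry assumes the MacPorts coreutils location; adjust to taste.

```
/usr/local/bin
/opt/local/libexec/gnubin
/opt/local/bin
/opt/local/sbin
/usr/bin
/bin
/usr/sbin
/sbin
```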


Edit the system default manpath to resolve documentation in the same order as the binaries:

sudo vi /etc/manpaths
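A matching /etc/manpaths sketch (again, the MacPorts locations are assumptions):

```
/usr/local/share/man
/opt/local/share/man
/usr/share/man
```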


The gnubin paths are for installing GNU utilities that override the BSD versions in macOS to conform to a de facto standard configuration in a world dominated by Linux + GNU servers.

If you want a contemporary bash from MacPorts, you need to have it in /etc/shells so that it can be set as a user shell with chsh.

sudo vi /etc/shells
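Assuming MacPorts installed its bash at /opt/local/bin/bash, add that line to the end of the file:

```
/opt/local/bin/bash
```

After that, chsh -s /opt/local/bin/bash will be accepted.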



MacPorts works a lot like apt: you need to update the local cache and then install or update your packages.

Update local cache and macports itself

sudo port selfupdate

Install a package

sudo port install <portname>

Find a package

port search <term>

List packages

List installed packages

port installed

or

port list installed

List outdated packages

port outdated

or

port list outdated

Update outdated packages

sudo port upgrade outdated

Remove old packages

When port upgrades a package it doesn’t delete the old one, it moves it to an inactive state so that you can roll back if the new one does not work.
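The rollback works by reactivating the older, inactive version. A sketch, using a hypothetical port name and version string:

```shell
# show every installed version of a port, active and inactive
port installed vim
# reactivate an older, inactive version (version string is hypothetical)
sudo port activate vim @8.2.0500_0+huge
```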

You can clean up old packages:

sudo port uninstall inactive

Install GNU flavor like Linux

At this point, if you primarily work with Linux servers, it makes sense to install a GNU base system to override the BSD flavor of a default macOS install.

sudo port install bash bash-completion coreutils findutils grep gnutar gawk wget

I also like to install a fully patched git to make sure that I have the current features and the bash completion scripts.

sudo port install git git-lfs

Also the latest vim.

sudo port install vim +huge

Set up bash

Make sure you have a ~/.bashrc and ~/.bash_profile.

Edit ~/.bash_profile to add

#flags to hint build systems to find things in macports
export CFLAGS="$CFLAGS -I/opt/local/include"
export CXXFLAGS="$CXXFLAGS -I/opt/local/include"
export LDFLAGS="$LDFLAGS -L/opt/local/lib"

If MacPorts altered your PATH then comment that out:

# MacPorts Installer addition on 2016-09-22_at_13:35:36: adding an appropriate PATH variable for use with MacPorts.
# export PATH="/opt/local/bin:/opt/local/sbin:$PATH"

At the very end of ~/.bash_profile load ~/.bashrc.

if [ -f ~/.bashrc ]; then
   source ~/.bashrc
fi
In ~/.bashrc you can set up some preferences:


I’m not into the fancy prompts. I like a classic $.

#classic, minimalist prompt
PS1='\$ '

Prevent ssh from messing up the title

# force reset of the current directory name in terminal title
# to reset it after SSH sessions end.
PROMPT_COMMAND='echo -ne "\033]0;$(basename ${PWD})\007"'

Bash completion

if [ -f /opt/local/etc/profile.d/ ]; then
  . /opt/local/etc/profile.d/
fi

Git prompt

Again, I like something simple. You can look up the fancy things.

if [ -f /opt/local/share/git/ ]; then
  . /opt/local/share/git/
  PS1='\[\033[1;36m\]$(__git_ps1 "[%s] ")\[\033[0m\]\$ '
fi
Colors like Debian and Ubuntu

export CLICOLOR=1

# The color designators are as follows:
# a     black
# b     red
# c     green
# d     brown
# e     blue
# f     magenta
# g     cyan
# h     light grey
# A     bold black, usually shows up as dark grey
# B     bold red
# C     bold green
# D     bold brown, usually shows up as yellow
# E     bold blue
# F     bold magenta
# G     bold cyan
# H     bold light grey; looks like bright white
# x     default foreground or background
# Note that the above are standard ANSI colors.  The actual display may differ depending on the color capabilities of the terminal in use.
# The order of the attributes is as follows:
# 1.   directory
# 2.   symbolic link
# 3.   socket
# 4.   pipe
# 5.   executable
# 6.   block special
# 7.   character special
# 8.   executable with setuid bit set
# 9.   executable with setgid bit set
# 10.  directory writable to others, with sticky bit
# 11.  directory writable to others, without sticky bit

if [[ $(which ls) = *gnubin* ]]; then
  # GNU ls colors
  eval "$(dircolors -b)"
  alias ls='ls --color=auto'
else
  #BSD ls colors
  #default colors
  #export LSCOLORS=exfxcxdxbxegedabagacad
  export LSCOLORS=xxfxcxdxbxegedabagacad
fi

if [[ $(which grep) = *gnubin* ]]; then
  alias grep='grep --color=auto'
  alias egrep='egrep --color=auto'
  alias fgrep='fgrep --color=auto'
  export GREP_OPTIONS='--color=auto'
fi

export GREP_COLOR='0;36' # regular;foreground-cyan
export MINICOM='--color on'

Preferred editor and pager

export EDITOR=vim
export PAGER=less

At this point if you open a new terminal, it should feel very much like a Linux install.

Install some other stuff

aws cli

sudo port install python38 py38-awscli
sudo port select --set python3 python38

Create a file ~/.aws/config that contains API key credentials like this:

[default]
aws_access_key_id = some-key-id
aws_secret_access_key = some-key-value
region = us-east-1

[profile some-name]
aws_access_key_id = some-key-id
aws_secret_access_key = some-key-value
region = us-east-1

Network tools

sudo port install nmap

sudo port install GeoLiteCity wireshark3 +geoip +python38 +qt5

sudo port install whatmask

sf-pwgen (password generator)
sudo port install sf-pwgen

axel (download accelerator)
sudo port install axel

sudo port install +http2 +openldap +ssl

tcping (ping tcp ports)
sudo port install tcping

httping (ping http)
sudo port install httping

minicom (terminal emulator for connecting to serial devices)
sudo port install minicom

sudo port install openvpn2

Programming languages

sudo port install go

sudo port install rust

Java OpenJDK with IBM Eclipse OpenJ9 VM
sudo port install openjdk14-openj9


Java OpenJDK with Oracle HotSpot VM
sudo port install openjdk14

Microsoft SQL Server client tools: sqlcmd and bcp
sudo port install mssql-tools

msodbcsql17 has the following notes:
  To make this work with SSL you need to create a symbolic link as follows: 
   sudo mkdir -p /usr/local/opt/openssl/ 
   sudo ln -s /opt/local/lib /usr/local/opt/openssl/lib 

   This is because this port installs binaries meant to be used with Homebrew.

sudo mkdir -p /usr/local/opt/openssl/
sudo ln -s /opt/local/lib /usr/local/opt/openssl/lib


sudo port install p7zip

youtube-dl (download video offline from youtube and other sites)
sudo port install youtube-dl

dos2unix (convert line endings)
sudo port install dos2unix

sudo port install ghostscript

sudo port install rsync

On UNIX Shells

A (not so) brief history of UNIX shells

In UNIX, the shell is the text-mode program that interfaces between the user and the kernel via a teletype interface — which is usually purely a software construct these days. It interprets commands, starts programs as necessary, and pipes data between programs.

Like a lot of things in UNIX, the original shell, /bin/sh, was created by Ken Thompson. Starting in 1976, the Thompson Shell was replaced with a new /bin/sh created by another colleague at Bell Labs, Stephen Bourne, which shipped with Version 7 UNIX. The Bourne Shell had all the key features we expect today, like unlimited string size, command substitution, redirection, loops, and case statements, and by 1979 was pretty much done.

In 1978, Bill Joy created the C shell /bin/csh with the intention of being more friendly as an interactive environment. It turned out to be a bad scripting environment but was popular at Berkeley and became the default interactive shell in Berkeley UNIX and BSD.

David Korn created a new shell /bin/ksh in the 1980s based on Stephen Bourne’s source code. Korn Shell was used a lot on Solaris, with Oracle products, and on OpenBSD.

Kenneth Almquist reimplemented a clone of the Bourne Shell for BSD as part of the catastrophic 1990s copyright dispute with AT&T. Debian has forked ash into the Debian Almquist Shell, dash.

The Bourne Again Shell bash is the GNU project reimplementation of Bourne Shell. GNU did not stop at cloning the Bourne Shell features, they put in a whole ton of interactive and programming features.

Z (zed) Shell, zsh, is an extended Bourne-style shell with a more liberal license. zsh aims to be compatible with most of the bash features while adding even more features of its own.

There are more shells, but I’m going to stop now.

Common system shells

Ever since AT&T Research’s Version 7 UNIX, the world has agreed that the default system interpreter is “Bourne Shell”. This is codified in the POSIX.2 standard and the Single UNIX Specification. Since not everyone who wanted to create a UNIX-type operating system had legal access to the Bourne Shell source code from AT&T, the fancy later shells ksh, bash, and zsh have a trick where they pretend to be the lowly old Bourne Shell when they are invoked as sh. ash and dash pretty much are just the same as good old sh and don’t have to do a lot of pretending.

You might be surprised how deeply the system shell /bin/sh is embedded. It is used by init to run startup scripts. It’s used by web servers to connect a user request to a CGI program. It’s used by mail servers to connect bits together internally. There are tons of system and server things that are connected together with /bin/sh.

This seemed pretty smart until the shellshock family of vulnerabilities in bash were discovered in 2014 which allowed tricking servers into running arbitrary code through public services on the Internet. Now it seems like a good idea that the system shell should be as minimal and hardened as possible.

Here’s how things break down in the real world:

Red Hat uses bash as /bin/sh.

Debian and Ubuntu use dash as /bin/sh and /bin/bash as the default interactive interpreter.

NetBSD uses ash as /bin/sh and FreeBSD has their own /bin/sh.

OpenBSD uses ksh as /bin/sh.

Apple is a bit rudderless. If I recall, originally Apple was tied to its BSD roots from NeXT and used pdksh for /bin/sh and a version of the C Shell as the default for interactive users in OS X. They changed that to bash for both in 10.3 Panther to be more similar to Red Hat, but kept the rest of the core system utilities BSD, not GNU.

Today Apple macOS uses a really, really old forked version of bash 3.2 with security patches applied as /bin/sh. Apple stopped including bash updates in macOS (née OS X) because the GNU project changed the license of bash to GPLv3. In macOS 10.15 Catalina, bash is still the system shell /bin/sh, but they changed the default shell for new users to /bin/zsh and have added /bin/dash.

In retrospect, Apple’s half-hearted attempt to include bash as a linuxism was a mistake. I hope that the arrival of dash is a sign that Apple is going to delete their decrepit old version of bash and make dash the system shell soon.

Cut to the chase or “what I use”

For interactive shell use, I use bash.

I have tried all the shells. I really tried to like zsh but I have found by the time you install all the plugins and whatnot, it is painfully slow. Today I use bash everywhere as my interactive shell. Mostly this is because it’s installed and the default on every version of Linux. This means that I install my own modern copy of bash on a Mac.

For shell scripts, I generally use the #!/bin/sh shebang but am careful to use the Bourne Shell features and not BASH features. If you are writing a script that uses #!/bin/sh as the shebang, it needs to work with dash because that’s what is on Debian and Ubuntu.

If you really want to use something other than the 1978 Bourne Shell language for a shell script, don’t hard code a path like /bin/bash. Use the /usr/bin/env trick to allow the system to find the first match in the path. Instead of #!/bin/bash use #!/usr/bin/env bash.
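To illustrate the portable style, here is a small script that sticks to Bourne/POSIX features (the function and names are mine, for illustration) and runs identically under dash, ash, and bash:

```shell
#!/bin/sh
# Portable Bourne/POSIX style: no arrays, no [[ ]], no ${var^^} bashisms.
greet() {
    # printf is more predictable than echo across shells
    printf 'hello %s\n' "$1"
}
greet world   # prints "hello world"
```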

Encrypting DNS on macOS with unbound and Cloudflare

The DNS protocol traditionally runs over UDP on port 53. This is very fast but totally insecure. DNS queries can be snooped or potentially altered by anyone on the network. In my office, I use a pfSense firewall with the unbound DNS resolver configured to resolve DNS over TLS. That way, neither my ISP nor the local government in Zimbabwe can observe or fiddle with DNS results.

In the olden days when I used to go places, I might use a VPN to secure all of my traffic. This is not always the optimal solution. Sometimes, I know that all of my sensitive traffic is already encrypted and secure — except for the DNS. And I have had problems where DNS is intercepted by the ISP or hotel for advertising or other purposes. I found this particularly useful when we were staying with family last summer, who have Cox Internet that does some goofy thing with DNS interception.

Unfortunately, macOS does not yet have DNS over TLS or DNS over HTTPS as a built-in feature. But I can set up unbound as a DNS resolver, which does support DNS over TLS.

sudo port install unbound

unbound has the following notes:
  An example configuration is provided at

  A startup item has been generated that will aid in starting unbound with
  launchd. It is disabled by default. Execute the following command to start it,
  and to cause it to launch at startup:

      sudo port load unbound

cd /opt/local/etc/unbound
sudo cp unbound.conf-dist unbound.conf
sudo vi unbound.conf

Find the “# forward-zones” section and insert the following:

forward-zone:
        name: "."
        forward-tls-upstream: yes
        # Cloudflare DNS

The forward-addr lines should list the Cloudflare DNS endpoints for DNS over TLS with malware protection. You can substitute alternate resolvers.
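Putting it together, a complete forward-zone for Cloudflare’s malware-blocking resolvers over TLS might look like this. The addresses and the TLS auth name are my reconstruction of the intended values; verify them against Cloudflare’s documentation before relying on them.

```
forward-zone:
        name: "."
        forward-tls-upstream: yes
        # Cloudflare DNS (malware blocking), DNS over TLS on port 853
        forward-addr: 1.1.1.2@853#security.cloudflare-dns.com
        forward-addr: 1.0.0.2@853#security.cloudflare-dns.com
```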

Now, if I want, I can start unbound and change my network config to use localhost as the DNS provider.

sudo port load unbound
--->  Loading startupitem 'unbound' for unbound

$ sudo lsof -i :53
unbound 85991 unbound    4u  IPv6 0xe14e1013ac1fa599      0t0  UDP localhost:domain
unbound 85991 unbound    5u  IPv6 0xe14e1013da135451      0t0  TCP localhost:domain (LISTEN)
unbound 85991 unbound    6u  IPv4 0xe14e1013bac68b69      0t0  UDP localhost:domain
unbound 85991 unbound    7u  IPv4 0xe14e1013bb2dd361      0t0  TCP localhost:domain (LISTEN)

Now, I can change my DNS provider to localhost and my DNS queries will be resolved and cached by my local unbound instance and securely forwarded to Cloudflare over TLS.

This setup can mess with captive portals. You may need to remove the local resolver temporarily in order to authenticate to a guest WiFi system through their web page, and then turn it back on.
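On macOS the toggle can be scripted with networksetup; this sketch assumes the network service is named “Wi-Fi” and that unbound listens on localhost:

```shell
# revert to DHCP-provided DNS while authenticating to the portal
sudo networksetup -setdnsservers "Wi-Fi" empty
# ...log in to the captive portal, then point DNS back at local unbound
sudo networksetup -setdnsservers "Wi-Fi" 127.0.0.1
```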

Configuring the postfix MTA to securely forward to a smarthost on macOS

macOS ships with postfix, but it is in a semi-disabled state. The launch daemon configuration provided doesn't work, and postfix will immediately exit.

What I want is a working local MTA that forwards mail securely to a smarthost for delivery. This is mostly useful when building and testing scripts and server applications that need to send mail. It is convenient to have the default MTA localhost:25 be in a working state.

Here’s the goal:

  • accept smtp connections on localhost:25 from localhost without credentials
  • relay mail (for my domain) to a smart host that has a static IP and a reputation that will make delivery possible
  • don’t have my credentials or mail snooped or intercepted
  • hopefully not be blocked by ISPs and middleboxes

I am using Amazon SES for my smarthost, but it could be Gmail, another mail provider, or a corporate server. The details will be a little different depending on the smarthost.

I’m using Amazon SES in the us-east region:

My configuration is for macOS Catalina with MacPorts as my package manager and Amazon SES as my smarthost relay. The details are slightly different but the concepts are the same for other package managers or for Linux.

Secure tunneling to a smarthost

Many ISPs and corporate networks will filter, intercept, or otherwise interfere with SMTP connections. They can also interfere with SMTP with StartTLS. With StartTLS, the connection starts out in plaintext and is upgraded to TLS only if the server advertises the capability. This connection type is vulnerable to a downgrade attack, known as StripTLS, which prevents the encryption from ever being negotiated.

I have found that SMTPS, or SMTP over SSL, where the client first establishes a TLS connection and then performs SMTP commands and mail transfer through the resulting tunnel, is the most reliable and secure connection type.

Unfortunately many MTAs — including Postfix bundled with macOS and Ubuntu — do not support SMTPS natively. My solution to this problem is to use stunnel to negotiate the SMTPS and map the remote smarthost to a local port, then configure Postfix to forward its mail there.

sudo port install stunnel

sudo vi /opt/local/etc/stunnel/stunnel.conf

#foreground = yes

#accept = 2525
client = yes
connect =

When running stunnel from the command line to test things out, you would want to uncomment the lines that are commented. But put the comments back in before running it as a launch agent.

Then I create a launch agent configuration to start stunnel whenever a connection to localhost:2525 is requested, emulating classic inetd. The idea is that launchd listens on localhost:2525 and, when it receives a connection, starts stunnel and hands it the socket; otherwise stunnel is not running. Launchd is the init process, so it is always running. On Linux, you would use a systemd socket unit or an OpenRC script to do the same thing.

sudo vi /Library/LaunchAgents/org.macports.stunnel.plist

Something like this minimal inetd-style configuration works (paths assume MacPorts):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.macports.stunnel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/local/bin/stunnel</string>
        <string>/opt/local/etc/stunnel/stunnel.conf</string>
    </array>
    <key>inetdCompatibility</key>
    <dict>
        <key>Wait</key>
        <false/>
    </dict>
    <key>Sockets</key>
    <dict>
        <key>Listeners</key>
        <dict>
            <key>SockNodeName</key>
            <string>localhost</string>
            <key>SockServiceName</key>
            <string>2525</string>
        </dict>
    </dict>
</dict>
</plist>

sudo launchctl load /Library/LaunchAgents/org.macports.stunnel.plist

Now my smarthost at Amazon SES is connected securely to my localhost port 2525 on demand.

Configuring postfix

I need to authenticate to SES, so I need a passwd database.

cd /etc/postfix
sudo mkdir sasl
# for SES, the username and password are AWS API key ID and value
echo "localhost:2525 my-aws-key-id:my-aws-key-value" | sudo tee sasl/passwd
# now make a postfix database file
sudo postmap sasl/passwd
# now there should be a plaintext passwd file and a postfix passwd.db file
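To confirm that postfix will find the credentials, query the map with the same key postfix will use, which is the relayhost value; a small sketch:

```shell
# Print the credentials postfix would use for a given relayhost key;
# prints nothing if the key is missing from the map.
lookup_relay_creds() { postmap -q "$1" hash:/etc/postfix/sasl/passwd; }
```

lookup_relay_creds localhost:2525 should echo back the key-id:value pair from the passwd file.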

Now we need to set up postfix to relay everything through SES on localhost:2525, authenticating with those credentials, by editing /etc/postfix/main.cf.

At the end of the file we need something like this:

# your authorized domain; this may need to be edited somewhere farther up in the file
mydomain = 

inet_interfaces = loopback-only

relayhost = localhost:2525
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd

If postfix were running, we would do sudo postfix reload at this point.

Finally we can set up a launch daemon for postfix to get it running as a service. I used to just edit the launch daemon configuration provided by Apple to get postfix working, but as of High Sierra that required disabling SIP, and as of Catalina the file became part of the read-only system partition.

We need to create and load a launch daemon file in /Library/LaunchDaemons where we have read/write permissions.

sudo vi /Library/LaunchDaemons/org.postfix.master.plist

Something along these lines keeps master running under launchd:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.postfix.master</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/libexec/postfix/master</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>

sudo launchctl load /Library/LaunchDaemons/org.postfix.master.plist

Voilà, I can now use my local MTA to send mail, and this works from almost anywhere.

Things should now be working, assuming you are able to make an outbound connection to port 465, the authentication to the smarthost is correct, and your domain is authorized with the smarthost.
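A quick end-to-end check through the local MTA, sketched with sendmail(8) and placeholder addresses:

```shell
# Hand a one-line test message to the local MTA. The sender domain
# should be the domain authorized with the smarthost; the addresses
# here are placeholders.
send_test_mail() {
    printf 'Subject: postfix relay test\n\nIt works.\n' \
        | sendmail -f "test@$1" "$2"
}
```

For example, send_test_mail example.com someone@example.net queues a message from test@example.com; watch /var/log/mail.log (or log stream on Catalina) to see the relay happen.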

If you want to use /usr/bin/mail you will need a valid mydomain in main.cf and possibly also aliases; see the postfix documentation for details.

SSH muddles the title

Normally, the macOS title bar includes the current directory name. When you connect to a remote host with openssh on macOS, the title bar gets updated to be “$(whoami)@$(hostname): $(pwd)” instead. Unfortunately when you exit ssh, the terminal title bar is not restored and continues to say you are on a remote host.

Once you see it, you can’t unsee it.

I’m sorry.

My solution is to use arcane escape sequences to reset the Terminal title every time bash generates a new prompt:
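The gist, assuming bash and Terminal's xterm-style title escape (OSC 0):

```shell
# Re-assert user@host:dir in the title bar before every prompt, so
# whatever title ssh left behind is overwritten as soon as the local
# shell prompts again.
PROMPT_COMMAND='printf "\033]0;%s@%s:%s\007" "$USER" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
```

Put this at the end of ~/.bashrc; if something else (like Apple's update_terminal_cwd) already sets PROMPT_COMMAND, append to it instead of replacing it.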

The incantation is slightly different but conceptually the same for zsh.

Automating dotnet core SDK updates on Mac

I really enjoy working with dotnet core. It is fast, open source, and cross-platform. My preference these days for working with the .NET stack is to build dotnet core apps natively on Mac with SQL Server or PostgreSQL on Docker for Mac. We can then easily deploy Docker containers or, in some cases, dotnet core on an actual Debian or RHEL server with Nginx. ASP.NET 4.x still only runs on Windows Server, and for that I use VMware Fusion and deploy with Kudu.

On Linux, Microsoft provides package manager repos to maintain the dotnet core SDK, which is awesome. Microsoft also publishes a menu of Docker containers to build and run dotnet core apps. On Windows, the Visual Studio updater will install dotnet core updates for you. On Mac the same is true of Visual Studio for Mac.

But I have no particular use for Visual Studio for Mac. I use VS Code and vim. I don’t need to have Visual Studio for Mac just as a glorified package manager and I simply don’t like having things installed that I do not use. There may be a Homebrew way to manage the dotnet core SDKs but I’m not a Homebrew kind of guy and I put no effort into researching this.

Fortunately Microsoft has provided a couple of useful scripts, and it's a pretty straightforward thing to script these together to maintain the LTS dotnet core SDKs.

sudo dotnet-upgrade-sdks

Problem solved. Now I need to figure out a solution to maintain mono, since the OmniSharp intellisense engine in VS Code depends on mono.

Script to upgrade dotnet core LTS SDKs

Install the script with its dependencies:
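The wrapper itself is only a few lines; here is a sketch of the idea, where the channel list and install directory are illustrative assumptions (adjust for the current LTS versions):

```shell
#!/bin/bash
# Sketch of a dotnet-upgrade-sdks wrapper: remove the installed SDK pkgs
# with Microsoft's dotnet-uninstall-pkgs, then reinstall the current LTS
# channels with Microsoft's dotnet-install script. The channel list and
# install directory are assumptions; edit to taste.
upgrade_dotnet_sdks() {
    dotnet-uninstall-pkgs || return 1
    local channel
    for channel in 2.1 3.1; do
        dotnet-install --channel "$channel" \
            --install-dir /usr/local/share/dotnet || return 1
    done
}
```

Running this as root (sudo dotnet-upgrade-sdks) wipes the old SDKs and leaves the latest 2.1 and 3.1 LTS releases in place.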
