Solving Very Slow Visual Studio Build Times in VMware

In 2009, we chose 15″ MacBook Pro BTO over Dell, HP and Lenovo offerings. Apple offered us the best hardware and equivalent lease terms, but with much simpler lease servicing handled by EFT (electronic funds transfer) rather than (often incorrect) paper statements and checks. 99% of the time we ran these MacBooks with Windows 7 under Boot Camp. When I started doing some iOS development, I ran Snow Leopard and Xcode under VirtualBox, and when I got fed up with the flakiness, I used VMware Workstation. OS X didn’t run very well under virtualization, mostly because accelerated Quartz Extreme drivers don’t exist for VMware Workstation. Still, it actually worked well enough and was much more convenient than dual booting, which is such a huge time suck. When the lease period ended, we renewed with Apple and decided to just use OS X as the host environment for a variety of reasons.

The transition went very smoothly. I was using a VM with Windows Server 2008 and Visual Studio 2010 for primary .NET web development. Configuring IIS Express to serve beyond localhost, bound to a host-only adapter, is great for cross-browser testing, but it can be even more useful to enable remote access to Fiddler and point external browsers at the Fiddler proxy running in the VM, getting client debugging and HTTP sniffing at the same time. All of this was working out great.
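For reference, getting IIS Express to listen on the host-only adapter takes two small changes. This is a minimal sketch; the IP address and port are hypothetical stand-ins for whatever the adapter is assigned. First, the site gets an extra binding in IIS Express’s applicationhost.config:

    <binding protocol="http" bindingInformation="*:8080:192.168.56.2" />

Then, from an elevated prompt in the VM, HTTP.sys has to be told to let a non-admin process listen on that URL:

    netsh http add urlacl url=http://192.168.56.2:8080/ user=everyone

On the Fiddler side, checking “Allow remote computers to connect” in Fiddler’s connection options and pointing the external browser at the VM’s address on Fiddler’s listening port (8888 by default) is all it takes, firewall permitting.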
 
At the start of October I downloaded Windows Server 2012 and Visual Studio 2012 and created a new development VM. To minimize virtual disk usage, I moved my source tree to a folder in the host OS X environment and exposed it to Windows via the “Shared Folders” feature of VMware Fusion 5.x. At the same time I started working on a new project.

 

The Slowening

Once the project grew to 20, 30 and then 50k lines of C# code, the build times became horrifically slow. Combined with running unit tests, a build became a “bathroom break or cup of coffee” event, like building a C project 15 years ago. Builds would show csc.exe running at ~50% CPU (i.e., one core) and some other stuff totaling ~70% CPU. The VM was not memory bound and network IO was minimal. This was my first substantial project using Code First EF, so I thought maybe the complex object graph was just hard for the C# compiler to deal with.
 
After a few weeks of increasingly painful build times, I was looking at breaking my solution up so that I could build against pre-compiled DLLs — anything to make it go faster. I ran across a post on SuperUser:

… (Full disclosure: I work on VMware Fusion.)

I have heard that storing the code on a “network” drive (either an HGFS share or an NFS/CIFS share on the host, accessed via a virtual ethernet device) is a bad idea. Apparently the build performance is pretty bad in this configuration.

Oh really? Hmmm. Maybe it isn’t that my class libraries are so complex; maybe something else is going on. Here are some empirical measurements of rebuild times for an actual solution:

VMware shared folder:   50 sec
OS X SMB share:         18 sec
Within virtual disk:     9 sec

Wow. Problem solved. Incremental builds are basically instantaneous, and a full rebuild takes 9 seconds when the code is hosted inside the VM image. Not only does hosting the source code within the VM virtual disk make the build go 5.5x faster, the CPU time of csc.exe goes way down. I don’t know how the VMware shared folder is implemented; it appears to Windows as a mapped drive to a UNC name, but it is very slow. The moral of the story is: just don’t host your source code on the host machine with VMware. The performance penalty is not worth it. If you need to share the source tree inside the VM with the host OS, create a file share from the VM to the host over a host-only adapter.
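For anyone who wants to reproduce the comparison, the timings above are just full rebuilds run from a prompt inside the VM, along the lines of the following sketch (the solution name is a hypothetical placeholder; msbuild needs to be on the PATH, as in a Developer Command Prompt):

    powershell -Command "Measure-Command { msbuild MySolution.sln /t:Rebuild /m }"

And the reverse share is a one-liner from an elevated prompt inside the VM (share name and path are again hypothetical); the host can then mount it in the Finder via Go > Connect to Server with smb://<vm-ip>/src:

    net share src=C:\src /grant:everyone,full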

 

Update

I can confirm this is still a problem in VMware Fusion 6. I’m hoping the new SMB implementation in Mavericks will greatly improve the performance of sharing source code from the host OS to the VM.

Update

I just updated my virtual machine to Windows Server 2012 R2 (aka Windows 8.1 Server), running on VMware Fusion 6. The build time of this large project is now 15 seconds over VMware Shared Folders. The significant remaining issue is that Visual Studio uses a whole core of CPU to do nothing: just having a large solution open, without editing anything.


Pimping my 2011 MacBook Pro to 16GB RAM Running at 1600MHz

I am a fairly heavy user of memory-hungry VMware VMs. I was running into a problem with excessive paging slowing down the host OS, or even being unable to launch all the VMs I needed simultaneously, due to the memory limitations of my pretty damn new late 2011 15″ Sandy Bridge MacBook Pro with its BTO maximum of 8GB RAM.

The late 2011 Sandy Bridge 15″ MacBook Pro machines come with 1333MHz 9-9-9 non-ECC DDR3 SO-DIMM RAM, configurable up to 8GB in a BTO configuration. The 2012 Ivy Bridge models come with RAM operating at 1600MHz, and the Retina MacBook Pro has a 16GB BTO option. The chipsets are similar, and I was pretty sure that the non-Retina model could support 16GB of RAM and that the Sandy Bridge models could run at the 20% faster 1600MHz just like the Ivy Bridge ones.

I had some trouble finding 16GB 9-9-9 latency non-ECC DDR3 SO-DIMM kits on the aftermarket, and none were labelled as “for MacBook Pro”. There are a lot of options at 1333MHz from Kingston, OWC, Corsair, Crucial and iFixit, but at 1600MHz there are slim pickings. I suspect the reason the 2011 MacBook Pro ships with 1333MHz memory is a cost/availability issue.

Corsair has a 16GB kit with slower 10-10-10 latency. I wasn’t sure what the implication of 10-10-10 latency at 1600MHz vs. 9-9-9 at 1333MHz would be, but I knew that Apple specs 9-9-9 memory in their systems, so I soldiered on. The only kit with the specs I was looking for was the HyperX PNP 1600MHz 9-9-9 16GB DDR3 non-ECC SO-DIMM kit from Kingston, which I got from Amazon.
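For what it’s worth, the absolute delay is easy to compare: CAS latency in nanoseconds is the cycle count divided by the I/O clock.

    DDR3-1333: 9 cycles / 666.7 MHz ≈ 13.5 ns
    DDR3-1600: 10 cycles / 800 MHz  = 12.5 ns

So the 10-10-10 kit at 1600MHz would likely have been no slower in wall-clock terms, but matching Apple’s specified timings seemed the safer bet.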

[Image: Memory access schematic from support.apple.com]

Installation is pretty easy. You need a high-quality static-dissipative Phillips #00 screwdriver to remove the ten screws without damaging the heads. Once the back is off the computer, the memory slots are easily accessible in the center of the machine.

As you can see, the Kingston kit worked. OS X Mountain Lion recognizes the 16GB of RAM at 1600MHz and is quite happy. I have a lot of memory headroom now. I can run all of my VM workloads simultaneously with iTunes, Pixelmator, OmniGraffle, MonoDevelop, Xcode, etc. all running at once without any hiccups whatsoever. Overall, I’m very happy with this experiment. The Kingston kit screams.
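If you want to confirm what the firmware actually negotiated, OS X reports the size, type and speed of each memory bank from the command line:

    system_profiler SPMemoryDataType

With the kit recognized, each of the two banks shows up as 8 GB of DDR3 at 1600 MHz.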

Update

I got a Geekbench score of 11020 with the new RAM installed.

The Sad History of the Microsoft POSIX Subsystem

When Windows NT was first being developed, one of the goals was to keep the kernel separate from the programming interface. NT was originally intended to be the successor to OS/2, but Microsoft also wanted compatibility to run Windows 3.x applications and to meet 1980s-era DoD Orange Book and FIPS specifications to sell into the defense market. As a result, Windows NT was developed as a multiuser platform with sophisticated discretionary access control capabilities, implemented as a hybrid microkernel with three userland environments:

  • Windows (Win32)
  • OS/2
  • POSIX

Microsoft had a falling out with IBM over Win32, and the NT project split from OS/2. The team’s focus shifted to Win32 so much that the Client-Server Runtime Subsystem (CSRSS) that hosts the Win32 API became mandatory, and the OS/2 and POSIX subsystems were never really completed, though they shipped with the first five versions of Windows NT through Windows 2000. The OS/2 subsystem could only run OS/2 1.0 command-line programs and had no Presentation Manager support. The POSIX subsystem supported the POSIX.1 spec but provided no shells or UNIX-like environment of any kind. With the success of Win32 in the form of Windows 95, development of the OS/2 and POSIX subsystems ceased. They were entirely dead and gone from Windows XP and Windows Server 2003.

Meanwhile, around 1996, Softway Systems developed a UNIX-to-Windows NT porting system called OpenNT. OpenNT was built on the NT POSIX subsystem but fleshed it out into a usable UNIX environment. This was at a time when UNIX systems were hugely expensive. Softway used OpenNT to re-target a number of UNIX applications for US Federal agencies onto Windows NT. In 1998, OpenNT was renamed Interix. Softway Systems also eventually built a full replacement for the NT POSIX subsystem in order to implement system calls that the Microsoft POSIX subsystem didn’t support, and to develop a richer libc, a single-rooted view of the file system and a functional gcc.

Microsoft acquired Softway and the Interix platform in 1999. Initially, Interix 2.2 was made available as a fairly expensive paid add-on to Windows NT 4 and Windows 2000. Later it was incorporated as a component of Services for UNIX 3.0 and 3.5 (SFU), and SFU was made free of charge. When Interix became free, Microsoft removed the X11 server component that had previously been bundled with Interix: in the wake of U.S. v. Microsoft, they did not want to invite lawsuits from the entrenched and expensive PC X server industry. The X11 client libraries remained.

SFU/Interix 3.0 was released in early 2002, followed by SFU 3.5 less than two years later, and cool stuff got implemented: fast pthreads, fork(), setuid, ptys, and daemons with RC scripts, including inetd and sendmail. InteropSystems ported OpenSSH and developed a high-performance port of Apache using pthreads, along with proof-of-concept ports like GTK and GIMP, among many other things. Hotmail even ran on Interix. And enterprising people did cool things like a Linux ELF binary loader on top of Interix.

I got into this stuff and built and donated ports to the SFU/SUA community, including cadaver, ClamAV, GnuMP, libtool, NcFTP, neon, rxvt and GNU whois. My company sponsored the port of OpenSSH to Interix 6.0 for Vista SUA (because Vista broke backwards compatibility with Interix 3.5 binaries). We ran Interix on all of our workstations and servers. We used it for management, remote access and to interoperate with clients who used Solaris, Linux and OS X on various projects.

Slowly Going Off the Rails

With Windows Server 2003 R2 (and only R2), Interix became a core operating system component, rebranded as “Subsystem for UNIX-based Applications” (SUA). Around this time, the core development team was re-formed in India rather than Redmond, and some of the key Softway developers moved on to other projects like Monad (PowerShell) or left Microsoft. Interix for Windows Server 2003 R2 (aka Interix 5.2) was broken: it shipped with corrupt libraries and a number of new but flawed APIs, and it broke some previously stable APIs like select(). Also, related to the inclusion of Interix as an OS component, SP2 for Windows Server 2003 clobbers Interix 3.5 installations.

Things went downhill from there. It’s not just that obvious things never got implemented, like a fully functional poll() or an update of binutils and gcc to something reasonably modern; the software suffered from severe bitrot.

One of the consequences of including SUA as an OS component has been a bifurcation of the “subsystem” from the “tools”. The subsystem consists of just a few files: psxss.exe, psxss.dll, posix.exe and psxrun.exe. This implements the runtime and a terminal environment but nothing else, not even a libc. In order to get shells, PTYs and usable programs, you have to install the “Utilities and SDK for UNIX-based Applications” (aka the tools), which is a sizable download. Apparently Microsoft has concerns about bundling GPL code onto the actual Windows media.

OK, this is a little weird, but not a big deal, except that the development timeline of the tools is now completely out of whack with Windows releases. The tools for Vista were only available in beta when Vista went gold, and the version for Windows Server 2008 and Vista SP1 was not available until about a month after Vista SP1/Win2k8 was released. No tools at all were available when Windows 7 was released in July 2009; they didn’t become available until eight months later, in March 2010, and contained no new features.

To top things off, while SFU 3.5 ran on all versions of NT 5.x, SUA only runs on Windows Server and the Enterprise and Ultimate client editions. SUA is not available on Vista Business or Home, nor on the Windows 7 Professional or Home editions.

Is Interix Dead?

For some reason, Microsoft seems to be ambivalent about this technology. On the one hand, they bring it into the core of the OS and make it a “premium” feature that only Enterprise and Ultimate customers get to use; on the other, they pare back development to almost nothing.

Interix has been supported with support forums and a ports tree maintained by InteropSystems, collectively known as the SUA Community, which operates with supplemental funding from Microsoft. The /Tools ports tree is the source for key packages not provided by Microsoft, such as Bash, OpenSSH, BIND, cpio and a ton of libraries that Microsoft does not bundle. Microsoft has been increasingly reluctant to fund the SUA Community and has surveyed users on a number of occasions. The latest survey was very pointed and culminated in Microsoft cutting off funding and shuttering the SUA Community site on July 6th, 2010, but a few days later it was back online. I’m not sure how or why.

I have no inside knowledge, but my gut says that Interix has lost internal support at Microsoft. It is being kept on life support because of loud complaints from important customers, but it is going nowhere. I will be surprised if there is a Subsystem for UNIX-based Applications in Windows 8. I think the ambivalence is ultimately about an API war. At some level, the strategerizers have decided it is better not to dignify the UNIX API with support. I think the calculus is that people will still use Windows anyway, and that it chokes off oxygen for UNIX-like systems if it takes a lot of extra work to write cross-platform code for Windows and UNIX, the premise being that you write for Windows first because that’s where the market is. Furthermore, in a lot of business cases what is needed is Linux support, or Red Hat Linux version X support, in order to run something. I think Microsoft realizes that it is hard for Interix to beat Linux, which is why SUSE and Red Hat Linux can be virtualized under Hyper-V.

I also believe that Microsoft sees C/C++ APIs as “legacy”. I think they want to build an OS that is verifiably secure and more reliable by being based on fully managed code. The enormous library of software built for the Windows API is a huge legacy problem to manage in migrating to such a system. Layering POSIX/UNIX on top of that makes it worse.

Whatever the reason, it seems pretty clear that Interix is dying.
