Quick tip if you see “bad DLL or entry point ‘msobj80.dll’” when building software with VS2008

Try stopping mspdbsrv.exe (the process that generates the .pdb files during a build) if it is still running. My understanding is that it’s supposed to shut down at the end of the compilation, but it seems it can turn into a zombie process; if that happens, you can get the above error when linking your binaries.

Anyway, I just ran into this issue and stopping the process via the Task Manager resolved it for me.
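
If Task Manager isn’t handy, the same fix can be applied from a command prompt – a minimal sketch using the standard taskkill options for force-killing a process by image name:

```shell
REM Force-stop any lingering instances of the PDB server by image name
taskkill /F /IM mspdbsrv.exe
```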

On combining #import and /MP in C++ builds with VS2010

I’m currently busy porting a large native C++ project from VS2008 to VS2010, and one of the issues I kept running into was build times. The VS2008 build uses a distributed build system; unfortunately the vendor doesn’t support VS2010 yet, so I couldn’t use the same infrastructure. In order to get decent build speeds, I started exploring MSBuild’s ability to build projects in parallel (which is fairly similar to VS2008’s ability to build projects in parallel) and the C++ compiler’s ability to make use of multiple processors/cores, aka the /MP switch.
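
For reference, the two levels of parallelism combine roughly like this – a sketch with a made-up solution name and file names; /m controls project-level parallelism in MSBuild, /MP controls file-level parallelism in cl.exe:

```shell
REM Build independent projects in parallel, one MSBuild node per core
msbuild MySolution.sln /m

REM Within a project, /MP compiles multiple translation units concurrently.
REM In VS2010 it's a per-project setting (C/C++ -> General -> Multi-processor
REM Compilation), or can be passed directly on the compiler command line:
cl /MP /c a.cpp b.cpp c.cpp d.cpp
```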

Read More

Using CEDET-1.0 pre7 with Emacs 23.2

It’s been mentioned in several places that GNU Emacs versions sometime after 23.1.50 come with an integrated version of CEDET. While I think that’s a superb idea, it unfortunately managed to break my setup, which relies on a common set of emacs-lisp files that I keep under version control and distribute across the machines I work on. Those machines have different versions of GNU-based Emacsen (pure GNU, Emacs/W32, Carbon Emacs etc.), so I can’t rely on the default CEDET. Unfortunately, when I got a new machine and put GNU Emacs 23.2 on there, my carefully crafted (OK, I’m exaggerating) .emacs wouldn’t play ball because the built-in CEDET conflicted with the pre7 that I had already installed on that machine.

I didn’t want to have to make extensive modifications to my .emacs, but a little time spent on Google brought up a post by Marco Bardelli on the CEDET-devel mailing list with a little code snippet that removes the built-in CEDET from the load-path. After putting this into my .emacs, my -pre7 config is working again.

For those in a similar quandary, here is the snippet in all its glory:

(setq load-path
      (remove (concat "/usr/share/emacs/"
                      (substring emacs-version 0 -2) "/lisp/cedet")
              load-path))

Welcome back to the new blog, almost the same as the old blog

The move to the other side of the Atlantic from the UK is almost complete, I’m just waiting for my household items – and more importantly, my computer books etc – to turn up. So it’s time to start blogging again in the next few weeks. Due to some server trouble in the UK, combined with the fact that I do like Serendipity as a blogging system but was never 100% happy with it, I’ve switched to using WordPress on a server here in the US. The old blog will stay up, at least as long as the server stays put, but I won’t add any new content to the old blog.

Enough meta-blogging though; I should be back to the usual 1–2 posts per month soon. If you would kindly subscribe to the RSS feed, you’ll see when all of this is up and running again.

The homebuilt NAS/home server, revisited

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve blogged about building my own NAS/home server before; see here, here, here and here.

After a few months, I think it might be time for an interim update.

In its original incarnation, the server wasn’t as stable as it should have been, given my previous experience with FreeBSD. For some reason it would crash every few weeks and sometimes even hang on reboot. Not good, especially as it happened a few times while I wasn’t home. I guess I should have heeded the warning about the zfs integration being experimental… Things got worse when I added a wireless card and retired my access point. Around this point I got fed up enough to go back and start building an OpenSolaris VM to try out a mail server setup similar to the one I’m running on FreeBSD.

Before I got anywhere with this, FreeBSD 8.0 came out, so I upgraded. ZFS had been promoted from experimental status, the wireless stack had been overhauled, and so on. The stability problems disappeared and the machine has been utterly reliable since then. Where before, using Time Machine to back up my MacBook via the wireless network had about a 50% chance of crashing the server, it now “just works”. This is where I wanted to get, and I’ve now got there. Performance also seems to have improved – copying large files from the server to my Windows 7 machine sees a reliable 78MB/s via my Gigabit network now.

I’ve still got a couple of small changes I want to make to the machine – for example, I’ve got 4GB of RAM that I want to put into the machine, which should enable zfs readahead and give me further performance improvements. I also plan to add two more fans to blow cold air over the hard drives to keep them happy and working for longer. Edit: Actually it didn’t, as 4GB of RAM on the mainboard results in slightly less than 4GB available to the OS. I did enable the readahead manually, though.
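
For the curious, the manual enable boils down to a single loader tunable – a sketch assuming the FreeBSD 7.x/8.x zfs code, where prefetch is switched off by default on machines that report less than 4GB of RAM:

```shell
# Check whether prefetch is currently disabled (1 = disabled)
sysctl vfs.zfs.prefetch_disable

# /boot/loader.conf -- force-enable zfs file-level prefetch (readahead)
# despite the amount of RAM reported to the OS
vfs.zfs.prefetch_disable="0"
```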

If I built another home server, I would probably get a different motherboard. Not that there is anything particularly wrong with the one in the machine, but the CPU fan speed control only goes down to 70%, so it’s no wonder the CPU fan is noisier than it should be.

Was it worth it? Overall I’d say yes, although I probably should have stuck to tried and tested technology (either FreeBSD’s built-in RAID5 or OpenSolaris with zfs). Dithering between the two caused unnecessary problems at the beginning and pushed up the cost. Next time I’d probably set up the server on OpenSolaris and run the mail server on FreeBSD in a VM on OpenSolaris. Given that the current configuration is working, I’ll leave it alone for the time being, though.

Building a new home NAS/home server, part IV

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve done some more performance testing, and while I’m not 100% happy with the results, I’ve decided to keep using FreeBSD with zfs on the server for the time being. Pretty much all measurements on Linux (both a recent Ubuntu Server and CentOS 5.3) showed lower performance, and while OpenSolaris is a lot faster when it comes to disk I/O and thus would have been my first choice for a pure NAS, the effort of porting my current mail server configuration would have resulted in the server being ready sometime in 2010…
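
For reference, the zfs side of the setup amounts to a single raidz pool across the four drives – a sketch, with assumed device and pool names:

```shell
# Create a single-parity raidz pool from the four 1TB drives
zpool create tank raidz ad4 ad6 ad8 ad10
# Verify layout and health
zpool status tank
```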

I might be exaggerating a little, but the effort of building amavisd-new is non-trivial due to its large number of dependencies. It’s manageable if packages, or at least some sort of package build system like the FreeBSD ports, are available, but building it from scratch and ensuring that all the necessary small Perl components are present isn’t trivial. So FreeBSD it is for the time being. I will still be looking into rolling a few packages for OpenSolaris for at least some of the tools I’d like to see on there, as the thought of running my home server on what is essentially an enterprise-class OS is very appealing to me.

Back to the server build. One issue that bugged me was the noise created by the server. It is not massively loud, but the fans do create a humming noise that I find quite distracting. My house is also very quiet, especially at night, and the noise from the machine carries surprisingly far. If I had a dedicated “server room” or could banish the box to my garage it wouldn’t be too much of a problem, but for continued use in a quiet environment like my office, the machine was just a little too noisy out of the box. None of the components is super noisy, but as all my other machines were built with noise reduction in mind, this one stood out.

As a first measure, I replaced the power supply with an Arctic Cooling Fusion 550R. It is very quiet and also has the additional advantage of being 80 Plus certified, like the OEM Antec unit. I also replaced the rear fan in the case with a Scythe Kama PWM fan and then discovered that the motherboard doesn’t support PWM case fans. Oops. Better to read the manual next time before ordering fans. Currently it’s running at full blast, but it is barely audible, so that’s fine. I guess I’ll be replacing it with a 3-pin fan soon or stick the original Antec one back in. The new PSU supports temperature control of two chassis fans, but those fans need to be 3-pin, not 4-pin.

This leaves the processor HSF unit as the last source of noise that I can actually do something about, and a replacement is on its way. Normally the OEM AMD cooler is quite good and quiet, but for some reason this one is rather noisy. Not that it’s loud, but it’s got a humming/droning noise that really irritates me. Turning the fan down in the BIOS settings at least alleviated the noise for the time being, and it’s reasonably quiet at the moment. If it starts annoying me again, I’ve got a Scythe Katana 2 processor HSF here that I can put in.

With the additional items like the PSU, fans and HSF unit, it’s come in a little over my original budget, but it should be a robust and hopefully long-lived server.

If you intend to install FreeBSD with zfs, make sure you upgrade from 7.2-RELEASE to the latest RELENG_7, as you’ll get a newer version of zfs this way. This version is supposed to be more stable and also doesn’t need any further kernel tuning in a 64-bit environment. I also noticed a measurable performance improvement with a custom-built kernel, since the later version of the zfs code has gone into RELENG_7.
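
The upgrade itself is the usual FreeBSD source-upgrade dance – a sketch; it assumes a supfile edited to track the RELENG_7 branch and the GENERIC kernel configuration:

```shell
# Fetch the 7-STABLE sources (supfile set to tag=RELENG_7)
csup -L 2 /usr/share/examples/cvsup/stable-supfile
cd /usr/src
make buildworld && make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# Reboot into the new kernel, then:
make installworld && mergemaster
```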

Overall the system has been running without any incidents for a few weeks now and appears to be both stable and quiet. So I’ll now go back to blogging about software…

Building a new home NAS/home server, part III

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

Unfortunately the excitement from seeing OpenSolaris‘s disk performance died down pretty quickly when I noticed that putting some decent load on the network interface caused the network card to lock up after a little while. I guess that’s what I get for using the on-board Realtek instead of opening the wallet a little further and buying an Intel PCI-E network card. That said, the lock-up was specific to OpenSolaris – neither Ubuntu nor FreeBSD exhibited this sort of behaviour. I could get OpenSolaris to lock up the network interface reproducibly while rsyncing from my old server.

This gave me the opportunity to try the third option I was considering, Ubuntu Server. Again, this is the latest release with the latest kernel update installed. The four 1TB drives were configured as a RAID5 array using mdadm. Once the array had been built, I ran the same iozone test on it (basically just iozone -a with Excel-compatible output). To my surprise this was even slower than FreeBSD/zfs, even though the rsync felt faster. Odd, that.
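
The array creation and the benchmark invocation were along these lines – a sketch; the device names and the output filename are assumptions:

```shell
# Create a 4-drive RAID5 array under Ubuntu
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Wait for the initial sync to finish before benchmarking
cat /proc/mdstat

# The iozone run: full automatic mode, Excel-compatible output file
iozone -a -b results.xls
```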

Here are a few pretty graphs that show the results of the iozone write test – reading data was faster, as expected, but the critical bit for me is writing, as the server hosts all the backups for the various other PCs around here and also gets to serve rather large files.

First, we have OpenSolaris’s write performance:


iozone performance on OpenSolaris


FreeBSD with zfs is noticeably slower, but still an improvement over my existing server – that one only has two drives in a mirrored configuration, and oddly enough its write speed is about half that of the new server:

iozone performance on FreeBSD with zfs

FreeBSD with the geom-based raid is slower, but I believe this is due to the smaller number of disks – this implementation needs an odd number of disks, so the fourth disk wasn’t doing anything during these tests. It’s not surprising that the overall transfer rate came in at roughly 3/4 of the zfs one.
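
For completeness, the geom-based raid tested here was presumably graid3 (an assumption on my part); a 3-disk setup with made-up device names would look roughly like this:

```shell
# Load the geom_raid3 module and label a 3-disk array
graid3 load
graid3 label -v data ad4 ad6 ad8
# Create a filesystem on the new raid3 provider
newfs /dev/raid3/data
```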

iozone performance on FreeBSD with graid

Ubuntu with an mdadm raid array unfortunately brings up the rear. I was really surprised by this, as it comes in below the performance of the 3-disk array under FreeBSD:

iozone performance on Ubuntu server

One thing I noticed is that while the four Samsung drives are SATA-300 drives, the disk I reused as a system drive is not. I’m not sure if that makes any difference, but I’ll try to source a SATA-300 disk for use as the system disk to check. I’m also not sure if FreeBSD and Linux automatically use the drives’ command queueing (NCQ) ability; on these comparatively slow but large drives, that can make a massive difference. Other than that I’m a little stumped. I’m considering the purchase of the aforementioned Intel network card, as that would hopefully allow me to run OpenSolaris, but otherwise I’m open to suggestions.
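
Checking whether command queueing is actually in use is straightforward on both systems – a sketch with assumed device names:

```shell
# FreeBSD 7.x: dump the drive's capabilities, look for tagged queueing
atacontrol cap ad4

# Linux: a queue depth greater than 1 means NCQ is active
cat /sys/block/sda/device/queue_depth
```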