Building a new home NAS/home server, part IV

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve done some more performance testing and, while I’m not 100% happy with the results, I decided to keep using FreeBSD with zfs on the server for the time being. Pretty much all measurements on Linux (both a recent Ubuntu Server and CentOS 5.3) showed lower performance. OpenSolaris is a lot faster when it comes to disk I/O and would thus have been my first choice for a pure NAS, but the effort of porting my current mail server configuration over would have resulted in the server being ready sometime in 2010…

I might be exaggerating a little, but building amavisd-new is non-trivial due to its large number of dependencies. It’s manageable if packages, or at least some sort of package build system like the FreeBSD ports, are available; building it from scratch and making sure that all the necessary small Perl modules are present isn’t trivial, though. So FreeBSD it is for the time being. I will still be looking into rolling a few packages for OpenSolaris for at least some of the tools I’d like to see on there, as the thought of running my home server on what is essentially an enterprise-class OS is very appealing to me.
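
For illustration, this is roughly what the ports-based install looks like on FreeBSD (assuming the port still lives at security/amavisd-new, as it did at the time; options and dependencies will vary):

```
# Build and install amavisd-new from the ports tree; the ports framework
# pulls in the numerous small Perl dependencies automatically.
cd /usr/ports/security/amavisd-new
make install clean
```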

Back to the server build. One issue that bugged me was the noise created by the server. It is not massively loud, but the fans do create a humming noise that I find quite distracting. My house is also very quiet, especially at night, and the noise from the machine carries surprisingly far. If I had a dedicated “server room” or could banish the box to my garage it wouldn’t be too much of a problem, but for continued use in a quiet environment like my office, the machine was just a little too noisy out of the box. None of the components are especially noisy, but as all my other machines were built with noise reduction in mind, this one stood out. As a first measure, I replaced the power supply with an Arctic Cooling Fusion 550R. It is very quiet and has the additional advantage of being 80+ certified, like the OEM Antec was. I also replaced the rear fan in the case with a Scythe Kama PWM fan and then discovered that the motherboard doesn’t support PWM case fans. Oops. Better read the manual next time before ordering fans. The new PSU can temperature-control two chassis fans, but those need to be 3-pin rather than 4-pin, so I’ll probably replace the Scythe with a 3-pin fan soon or stick the original Antec one back in. For now it’s running at full blast, but it is barely audible, so that’s fine.

This leaves the processor HSF unit as the last source of noise that I can actually do something about, and a replacement is on its way. Normally the OEM AMD cooler is quite good and quiet, but for some reason this one is rather noisy. Not that it’s loud, but it’s got a humming/droning noise that really irritates me. Turning the fan down in the BIOS settings has alleviated the noise for the time being; it’s quiet enough for now, and if it starts annoying me again, I’ve got a Scythe Katana 2 processor HSF here that I can put in.

With the additional items like the PSU, fans and HSF unit, it’s come in a little over my original budget, but it should be a robust and hopefully long-lived server.

If you intend to install FreeBSD with zfs, make sure you upgrade from 7.2-RELEASE to the latest RELENG_7 (7-STABLE), as you’ll get a newer version of zfs that way. This version is supposed to be more stable and also doesn’t need any further kernel tuning in a 64-bit environment. I also noticed a measurable performance improvement with a custom-built kernel, since the later version of the zfs code has gone into RELENG_7.
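
As a rough sketch, the era-appropriate way to track that branch is to sync the sources and rebuild world and kernel; the supfile path and tag below are the standard FreeBSD 7.x ones, but adapt them to your own setup:

```
# Sync /usr/src to RELENG_7 (7-STABLE) with csup.
cp /usr/share/examples/cvsup/stable-supfile /root/stable-supfile
# Make sure the supfile contains: *default release=cvs tag=RELENG_7
csup -g -L 2 /root/stable-supfile

# Rebuild and install world and kernel.
cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# Reboot, then finish with:
make installworld
mergemaster
```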

Overall the system has been running without any incidents for a few weeks now and appears to be both stable and quiet. So I’ll now go back to blogging about software…

Building a new home NAS/home server, part III

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

Unfortunately the excitement from seeing OpenSolaris’s disk performance died down pretty quickly when I noticed that putting some decent load on the network interface resulted in the network card locking up after a little while. I guess that’s what I get for using the on-board Realtek instead of opening the wallet a little further and buying an Intel PCI-E network card. That said, the lock-up was specific to OpenSolaris; neither Ubuntu nor FreeBSD exhibited this sort of behaviour. I could get OpenSolaris to lock up the network interface reproducibly while rsyncing from my old server.

This gave me the opportunity to try the third option I was considering, Ubuntu Server. Again, this is the latest release with the latest kernel update installed. The four 1TB drives were configured as a RAID5 array using mdadm. Once the array had finished its initial resync, I ran the same iozone test on it (basically just iozone -a with Excel-compatible output). To my surprise this was even slower than FreeBSD/zfs, even though the rsync felt faster. Odd, that.
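
For reference, the setup looked roughly like this; the device names, mount point and ext3 filesystem are illustrative assumptions, and the iozone flags are simply the standard ones for an automatic run with Excel-compatible output:

```
# Create a four-disk RAID5 array from the 1TB drives (device names hypothetical).
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Wait for the initial resync to finish before benchmarking.
cat /proc/mdstat

# Put a filesystem on it, mount it and run the automatic iozone test,
# writing Excel-compatible results to a file.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/raid
cd /mnt/raid && iozone -a -b /root/iozone-ubuntu.xls
```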

Here are a few graphs that show the results of the iozone write test. Reading data was faster, as expected, but the critical bit for me is writing, as the server hosts the backups for the various other PCs around here and also gets to serve rather large files.

First, we have OpenSolaris’s write performance:

[Figure: iozone performance on OpenSolaris]

FreeBSD with zfs is noticeably slower, but still an improvement over my existing server; that one only has two drives in a mirrored configuration, and oddly enough its write speed is about half that of the new server:

[Figure: iozone performance on FreeBSD with zfs]
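
For context, a four-disk raidz pool under FreeBSD is created along these lines; the pool and device names here are hypothetical rather than necessarily the ones actually used:

```
# Create a raidz pool across the four 1TB drives (device names hypothetical).
zpool create tank raidz ad4 ad6 ad8 ad10

# Verify the layout and health of the pool.
zpool status tank
```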

FreeBSD with the GEOM-based RAID is slower, but I believe that this is due to the smaller number of disks: with this implementation you need to use an odd number of disks, so the fourth disk wasn’t doing anything during those tests. It’s not surprising, then, that the overall transfer rate came in at roughly three quarters of the zfs one.

[Figure: iozone performance on FreeBSD with graid]
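
Assuming the GEOM class in question was graid3 (which needs 2^n + 1 components, hence the three-disk array), the setup would look roughly like this; the device and label names are hypothetical:

```
# Load the RAID3 GEOM class and label a three-disk array
# (two data disks plus one parity disk).
graid3 load
graid3 label -v data ad4 ad6 ad8

# Create a filesystem on the new provider and mount it.
newfs /dev/raid3/data
mount /dev/raid3/data /mnt/raid
```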

Ubuntu with an mdadm RAID array unfortunately brings up the rear. I was really surprised by this, as it comes in below the performance of the three-disk array under FreeBSD:

[Figure: iozone performance on Ubuntu Server]

One thing I noticed is that while the four Samsung drives are SATA-300 drives, the disk I reused as a system drive is not. I’m not sure whether that makes any difference, but I’ll try to source a SATA-300 disk for use as the system drive to find out. I’m also not sure if FreeBSD and Linux automatically use the drives’ command queueing ability (NCQ); on these comparatively slow but large drives that can make a massive difference. Other than that I’m a little stumped. I’m considering the purchase of the aforementioned Intel network card, as that would hopefully allow me to run OpenSolaris, but beyond that I’m open to suggestions.
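
Checking whether command queueing is actually in use is quick; the sketch below uses hypothetical device names and assumes the 7.x-era atacontrol tool on the FreeBSD side:

```
# Linux: a queue depth greater than 1 generally means NCQ is active,
# and the kernel logs the negotiated depth when the drive attaches.
cat /sys/block/sdb/device/queue_depth
dmesg | grep -i ncq

# FreeBSD 7.x: list the capabilities the drive reports,
# including Native Command Queuing support.
atacontrol cap ad4
```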