The Lone C++ Coder's Blog

The continued diary of an experienced C++ programmer. Thoughts on C++ and other languages I play with, Emacs, functional, non-functional, and sometimes non-functioning programming.

Timo Geusch

This is a reblog of my “building a home NAS server” series from my old blog. The server still exists and still works, but I’m about to embark on an overhaul of these posts, so I wanted to consolidate all the articles on the same blog.

The good news is that the hardware seems to have been behaving itself for a while now and everything appears to Just Work. FreeBSD makes things easy for me in this case as I’m very familiar with it, so I only spent a few hours getting everything set up. So far, so good.

I’ve rsynced most of the data off the old server, and while doing that I already had a nagging feeling that the transfer seemed, well, a little slow. Yes, I transferred several hundred gigabytes, but I nevertheless thought it would take less time, so this morning I ran a few performance tests using Samba 3.3 on the new server. It turns out performance was around 12-25 MB/s, and that’s a lot less than I expected. In fact, that’s a little too close to 100 Mbit/s network performance for my liking - even without jumbo frames I would expect a straight data transfer to be faster than this. Unfortunately I had introduced a few unknowns into the equation, including the FreeBSD implementation of ZFS. I’m not saying it is the culprit, but I’ll start with disk performance measurements on the server before I do anything else.
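Before blaming the disks, it’s worth ruling out the network itself. A minimal sketch of the kind of check I mean, assuming the classic iperf tool is installed; the interface and host names below are placeholders, not my actual setup:

```sh
# On the server: confirm the NIC actually negotiated gigabit
# rather than 100baseTX (em0 is a placeholder interface name).
ifconfig em0 | grep media

# Measure raw TCP throughput with iperf, independent of Samba
# and the disks. On the server:
iperf -s
# ...and on the client (nas.local is a placeholder hostname):
iperf -c nas.local
```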

Ah well, looks like the old server needs to do its job a little longer then.

Update: Running iozone on the machine itself suggests that the write speeds of 12-13 MB/s I was seeing via Samba are more or less the actual transfer rate to disk on the ZFS software RAID array. I’m not sure if that’s a limitation of my hardware or a problem with the (still experimental) ZFS implementation on FreeBSD. Nevertheless this is a little on the slow side, to put it mildly (the four disks should easily saturate a gigabit network link), so I guess it’s time to wipe the ZFS config and use FreeBSD’s native software RAID (graid3) instead to check whether that hypothesis holds water. If it doesn’t, the bottleneck is somewhere else.
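For reference, a sketch of the sort of iozone run that produces numbers like these; the file size, record size, and mount point are assumptions on my part, not the exact invocation:

```sh
# Sequential write (-i 0) and read (-i 1) tests on a 4 GB file,
# sized to be well beyond RAM so the cache can't flatter the result.
# /tank is a placeholder ZFS mount point.
iozone -s 4g -r 128k -i 0 -i 1 -f /tank/iozone.tmp
```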

**Update II:** FreeBSD’s graid3 unfortunately needs an odd number of drives, which left one of the four Samsung drives idle. The same tests suggest that performance was about a quarter lower than FreeBSD ZFS with four disks, so it seems ZFS was already driving the hardware about as fast as it could. Hmm. Just to get a comparable figure I threw OpenSolaris on the box, again with the four 1TB drives as a single ZFS pool (tank) with a single directory structure on it. With the default settings, the same iozone tests nudged write speeds over 35 MB/s. Not as fast as I expected by a long shot, but noticeably better, at the expense of a more painful configuration. Something to think about, but I think I need to test a few other configurations before I can really make up my mind as to which way I want to go.
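For the record, the two layouts being compared look roughly like this; the device names are placeholders, and FreeBSD and OpenSolaris name their disks differently:

```sh
# graid3 needs 3, 5, 9, ... components (one of them is parity),
# which is why the fourth drive had to sit idle.
graid3 label -v gr0 /dev/ad4 /dev/ad6 /dev/ad8
newfs /dev/raid3/gr0

# ZFS raidz pool across all four drives, the setup used for the
# iozone tests on both operating systems.
zpool create tank raidz ad4 ad6 ad8 ad10
```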
