The blogging hiatus is almost over

I switched jobs in October last year, and getting up to speed in the new role took priority over anything else, so I had to put a few other endeavours, including this blog, on hold for a little while.

The publishing frequency will still be lower than before, but as I have moved from a mostly managerial role back to a pure software engineering role, I will be able to post more articles in line with the original intent of the blog, i.e. more C++-related articles and, of course, the usual smattering of “look what cool feature I found in Emacs”.

Thanks for not removing my blog from your RSS feed reader.

Of course I have to post something about Google Reader, too

The demise of Google Reader, viewed from a slightly different perspective. I find the analysis interesting because it comes from someone who isn’t a proto-geek but an investment professional; it offers insights that someone like me, who doesn’t spend the whole day looking at companies and trying to figure out what they are actually doing as opposed to what they say they are doing, would have missed and in this case did miss.

Obviously I’m also one of those dinosaurs who still use Google Reader, and I’m in the unhappy camp at the moment. With hindsight it probably wasn’t too smart to put all my eggs in one free basket, but part of the problem was that Google Reader is/was the 800-pound gorilla in the room and mostly squeezed out the competitors. I certainly have no objection to paying for this type of useful service, but like a lot of other people I considered the availability of decent clients for nearly all the OSs I use a major factor, and for quite a while that has been a case of more or less Google Reader or nothing.

Welcome back to the new blog, almost the same as the old blog

The move to the other side of the Atlantic from the UK is almost complete; I’m just waiting for my household items (and more importantly, my computer books etc.) to turn up. So it’s time to start blogging again in the next few weeks. Due to some server trouble in the UK, combined with the fact that I do like Serendipity as a blogging system but was never 100% happy with it, I’ve switched to using WordPress on a server here in the US. The old blog will stay up, at least as long as the server stays put, but I won’t add any new content to it.

Enough meta-blogging though; I should be back to the usual 1-2 posts per month soon, so if you could kindly subscribe to the RSS feed, you’ll see when all of this is up and running again.

Reblog: Building a new home NAS/home server, part II

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul of these posts, so I wanted to consolidate all the articles on the same blog.

The good news is that the hardware has been behaving itself for a while now and everything appears to Just Work. FreeBSD makes things easy for me in this case as I’m very familiar with it, so I only spent a few hours getting everything set up. So far, so good.

I’ve rsynced most of the data off the old server, and while doing that I already had this nagging feeling that the transfer seemed to be, well, a little slow. Yes, I transferred several hundred gigabytes, but I nevertheless thought it would take less time, so this morning I ran a few performance tests using Samba 3.3 on the new server. It turns out the performance was around 12-25 MB/s, and that’s a lot less than I expected. In fact, that’s a little too close to 100Mbit network performance for my liking; even without using jumbo frames I would expect a straight data transfer to be faster than this. Unfortunately I had introduced a few unknowns into the equation, including using the FreeBSD implementation of zfs. I’m not saying that it is the culprit, but I’ll start with server disk performance measurements before I do anything else.
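For reference, the quick sanity checks for this kind of problem look something like the following; the interface name and paths are placeholders rather than what this particular box uses.

# Check what the NIC actually negotiated; a 100baseTX link would neatly
# explain transfers in the 12MB/s range. "re0" is a placeholder name.
ifconfig re0 | grep media

# Rough end-to-end check: time copying one large file off the Samba share
# from a client (paths are illustrative only).
time cp /mnt/nas/some-large-file.iso /dev/null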

Ah well, looks like the old server needs to do its job a little longer then.

Update: Running iozone on the machine itself suggests that the write speeds of 12-13MB/s I was seeing via Samba are more or less the actual transfer rate to disk on the zfs software raid array. I’m not sure if that’s a limitation of my hardware or a problem with the (still experimental) zfs implementation on FreeBSD. Nevertheless this is a little on the slow side, to put it mildly (the four disks should easily saturate a gigabit network link), so I guess it’s time to wipe the zfs config and use FreeBSD’s native RAID5 instead to check whether that hypothesis holds water. If it doesn’t, the bottleneck is somewhere else.
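For the curious, the iozone runs were roughly of the following shape; the record and file sizes shown are illustrative rather than the exact parameters used.

# Sequential write and read tests (-i 0 / -i 1), 128KB records, 4GB file,
# with fsync included in the timing (-e). The file size should comfortably
# exceed RAM so caching can't flatter the numbers.
iozone -e -i 0 -i 1 -r 128k -s 4g -f /tank/iozone.tmp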

Update II: FreeBSD’s graid3 unfortunately needs an odd number of drives, which left one of the four Samsungs idle. The same tests suggest that the performance was about a quarter less than FreeBSD zfs with four disks, so it seems zfs was already driving the disks as fast as it could. Hmm. Just to get a comparable figure I threw OpenSolaris on the box, again with the four 1TB drives as a single zfs tank with a single directory structure on it. With the default settings, the same iozone tests nudge write speeds over 35MB/s. Not as fast as I expected by a long shot, but noticeably better at the expense of a more painful configuration. Something to think about, but I think I need to test a few other configurations before I can really make up my mind as to which way I want to go.
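For anyone who wants to reproduce the comparison, the two layouts were roughly the following; the device names are examples and will differ depending on how the controller enumerates the disks.

# graid3 array using three of the four disks, with UFS on top:
kldload geom_raid3
graid3 label -v data ad4 ad6 ad8
newfs /dev/raid3/data

# The four-disk raidz pool used for the zfs numbers (same idea on
# both FreeBSD and OpenSolaris):
zpool create tank raidz ad4 ad6 ad8 ad10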

Reblog: Building a new home NAS/home server, Part I

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

Up to now I’ve mostly been using recycled workstations as my home mail, SVN and storage server. Nothing really wrong with that as most workstations are fast enough but I’m running into disk space issues again after I started backing up all the important machines onto my server. That’s especially annoying as I started using Time Machine on my iMac and now haven’t got enough space left on the server to also back up the MacBook. Time Machine is great as a backup solution simply because it is so unobtrusive and it appears to just work.

Inspired by an article in the German magazine c’t, I decided that instead of finding another used desktop machine to recycle, I was going to build a proper home server with a few additional bells and whistles. I’m still using mostly desktop parts as I can’t really justify the expense and noise of “proper” server components but I tried to select decent quality parts. Here’s the hardware list:

1 x Antec NSK 6580B Black Mid Tower Case – With 430W Earthwatts PSU
1 x ASROCK A780LM AMD 760G Socket AM2+ VGA DVI 6 Channel Audio Mini-ATX Motherboard
1 x AMD Athlon X2 5050e Socket AM2 45W Energy Efficient Retail Boxed Processor
4 x Samsung EcoGreen F2 1TB Hard Drive SATAII 32MB Cache – OEM
4 x Startech Serial ATA Cable (1 End Right Angled) 18″
1 x Crucial 2GB kit (2x1GB) DDR2 800MHz/PC2-6400 Ballistix Memory Non-ECC Unbuffered CL4 Lifetime Warranty
1 x Startech Right Angle Serial ATA Cable (1 end) 24 Inch

This little lot cost me around £450 including shipping, so it’s a relatively inexpensive machine and will hopefully be more powerful than your average, similarly priced NAS. It should also be reasonably quiet and energy efficient, at least for something that’s got a regular processor and five HDDs in it.

First impressions of the Antec case are good. I like buying Antec cases because they’re usually pretty high quality and this one certainly doesn’t disappoint. Nice touches like the disk mounting trays with gel feet are great, there is enough space in there (well, more than enough for my uses) and it’s got an 80plus PSU.

[Photos: disk carrier]

The only downside I noticed when opening the case was that the 120mm chassis fan plugs into an HDD power connector and has a ‘speed switch’. Not that impressive, but if the rest of the machine turns out to work well I’ll probably replace it with a high-throughput fan hooked up to the chassis fan connector. I also like the removable disk carrier, which leaves enough space between the disks for some decent airflow. If I had some front-mounted fans, that is.

[Photo: disk carrier]

I’ll skip the buildup stuff. It’s not that hard anyway, plus there are plenty of howtos on the web already. Everything went together very smoothly. I noticed that there is a well designed space for two front-mounted fans right in front of the HDD bays. Given the number of HDDs I’m running in this box, I’ll put a couple more fans on the shopping list (edit: which I never did and it didn’t seem to have any detrimental effects on the longevity of the disks). As you can see, the case is nice and spacious (it was a relief working in there instead of the Mini-ATX cases I tend to prefer for desktops) and yes, I know that it needs a visit from the cable tie fairy. In addition to the components above, I used a fifth HDD for the OS alone, plus on the heap of bits that no self-respecting geek can be without, I found a DVD drive so I didn’t have to hook up the external USB DVD drive that I keep for such cases.

[Photo: side view of the case]

My first choice of OS for this box was OpenSolaris 2009.06. An older version had achieved very good performance scores in the above-mentioned article, so I thought it was worth a try. I also hoped to be able to try zfs on the ‘data’ disks. While setting up the machine with OpenSolaris was fine, I ran into the problem that I require a lot more features than “just” a plain NAS, which was the reason to build the server myself in the first place. In addition to playing file and DHCP server, this machine would need to handle all incoming mail (that’ll be postfix, amavisd-new and policyd-weight then), run an IMAP server (dovecot), an NNTP server (leafnode) and a bunch of other small tools that are necessary but unfortunately not available as packages on OpenSolaris yet. I wouldn’t mind building all these packages from source, but keeping them up to date might turn into a problem as I don’t want to maintain them by hand. The list of packages seems fairly small, but with the number of dependencies that amavisd and especially SpamAssassin have, this would be almost impossible for someone like me who doesn’t have a lot of time to spend on server maintenance.

For this reason I went with plan B, which was to keep the OS I’m currently using on my server (FreeBSD). FreeBSD has support for zfs (albeit still labelled as experimental, so I’m not sure if it’s a good idea, but I’ll try it anyway) and I can set up a FreeBSD box almost in my sleep. Plus, with cron jobs updating the local ports collection it should be fairly straightforward to keep everything up to date. One change I will be making, though, is to switch from the regular x86 version of the OS to the 64-bit version, partially because I can but mostly because zfs does seem to work better on 64-bit systems. Oh, and I’m beginning to wonder if using only 2GB of RAM was a smart idea as zfs does appear to like loadsamemory. I guess we’ll find out in part II…
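Two sketches of what I have in mind here, neither of which is set in stone yet: a crontab entry to keep the ports tree current, and the loader.conf knobs that are the usual advice for the experimental zfs port on machines without much RAM. The schedule and the values are illustrative only.

# /etc/crontab: refresh the ports tree nightly. portsnap's cron mode waits a
# random interval before fetching so everyone doesn't hit the mirrors at once.
0 3 * * *   root   portsnap cron update

# /boot/loader.conf: cap kernel memory and the zfs ARC; commonly suggested for
# FreeBSD's zfs on boxes with only a couple of GB of RAM. Example values only.
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"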