Some clarifications regarding last week’s anti-VC6 rant

This post started out as a comment to Len Holgate’s post referencing my anti-VC6 rant. The comment got a little out of hand size-wise, so I’ve decided to turn it into a separate blog post. I think I mixed together a couple of issues that should have been separated more clearly – after all, my blog post was more of a rant.

First, if your client or employer is maintaining an existing system that is in maintenance mode only – we’re talking about a bug fix or a small enhancement – then it doesn’t make sense to upgrade the compiler. That’s what I was referring to as a system that is “on life support”. Yes, it goes against the grain for me personally as a software engineer who likes to improve software, but the effort spent on making the transition does not make sense from a business perspective.

What I do take issue with is when you are developing a new system or working on a large refactor of an existing system that is in “embrace and extend mode”, and for whatever reason the client or employer decrees that it shall be written using VC6. That’s where the penalties come in, and that is where the technical debt builds up – right at the start of a new project, or at the exact point in time when you should be addressing the debt instead of adding to it.

The understanding of C++ and the use of its multi-paradigm nature have changed since VC6 was released; we now have both new programming techniques and new libraries that (should) improve the quality of the code, its expressiveness and programmer productivity. The prime example of these libraries, and the one I was thinking of when writing the rant, is of course Boost. The earliest MS compiler they test against in 1.40 is VC7.1, aka VS2003, which is certainly a big improvement over VC6.

Yes, VC6 is likely to create smaller executables and build them faster. C++ compilers certainly are not getting any faster, and long compile/link times have been a problem on projects I’ve worked on. Shorter build times and especially smaller executables can be a benefit depending on your particular use case. A lot of the projects I’ve worked on in the recent past are maths-heavy and the calculations are performance-critical. For these projects, a compiler with an optimizer that can squeeze a 5% performance improvement out of the existing code on a modern CPU at the expense of 20% larger code is a no-brainer. In at least one case it was cheaper to buy the Intel compiler to get better performance instead of putting more engineering time into performance improvements.

Yes, developers like shiny new tools and yes, I’ve worked with developers who considered GCC’s CVS HEAD the only compiler that was recent enough to complement their awesomeness. This is not something I generally agree with although I did update my own copy of Visual Studio from 2003 to 2008 (yes, I did skip 2005) when that version came out simply because it was so much better than its predecessors.

I still think that by insisting on the use of tools that are positively ancient, programmers get needlessly hobbled, and it is part of our job as programmers who care about what we do to educate the people who make these decisions as to why those decisions aren’t necessarily good from an engineering point of view. I don’t think any of the Java programmers I work with would put up with having to work using Java 1.2 and forgo the improvements both in the language and in the available libraries, yet C++ programmers are regularly asked to do exactly that.

Why oh why do people insist on using compilers that are way out of date?

Why are so many companies hobbling their programmers with positively ancient and often positively crappy tools? For once I’m not ranting about companies that are too cheap to provide their C++ programmers with important tools like profilers and leak detectors (the usual “if these were important, the tool vendor would include them” argument), but about the one tool right at the heart of the matter. The one none of us can work without in C++ space. I am, of course, talking about the compiler.


A couple of interesting blog posts…

A couple of links to other people’s interesting posts I’ve come across in the last few days.

Raymond Chen on “There is no law that says meetings can’t end early”. I wish more people would take this advice to heart, but that’s been on my Christmas wishlist right next to “create an agenda for a meeting and stick to it”.

Interesting blog post regarding the difference between technology for technology’s sake and using technology to create something. For the record, I tend to be more interested in doing something with my technology “toys” rather than having them purely for having them. That’s probably one of the reasons that I tend not to have the latest, greatest and fastest machines (or phones, cameras etc) but rather buy quality kit and use it for a few years instead.

Do programmers still buy printed books? I know I do, but I’m getting a little more choosy. I’ve bought a couple of books recently that I’d classify more as a waste of a perfectly good tree. That’s unfortunately not unusual for tech books; there are a couple of brilliant ones out there and an awful lot of, how shall I put it, rubbish. I’m also one of those throwbacks to an earlier age who prefer to hold a physical book in their hands rather than stare at a screen. The reason Antonio mentions – switching between book reading mode and computing mode – certainly has something to do with it; after all, if you’re immersed in a book you probably don’t check your email every thirty seconds, whereas I’d be tempted to do that if I were to read the book on a computer. Reading on the iPhone is something I’d rather not do; my eyesight is bad enough as it is…

Antonio’s book list in his previous blog post also looks very much like a reason to update my Amazon wish list.

My Emacs configuration file refactor

In a previous post I described how, a few months ago, I moved my third-party elisp code under version control to make it easier to move between machines and to ensure a consistent configuration across them. The one remaining problem to solve was putting the configuration files (.emacs and .gnus.el) under version control. One of the approaches I really liked was described by Nathaniel Flath, but I figured that it was too heavyweight for my needs.
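For the third-party code, that arrangement boils down to adding the version-controlled directory to the load-path from the dot-file. A minimal sketch, assuming a hypothetical ~/emacs-lisp/site-lisp directory for the checked-out code:

;; hypothetical directory name; point this at wherever the
;; version-controlled third-party elisp actually lives
(add-to-list 'load-path (expand-file-name "~/emacs-lisp/site-lisp"))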

What I ended up doing was to move the configuration files into version control and then simply change the basic dot-files to load the real configuration from the subdirectory that is under version control. My .emacs now reads like this:


(load-file "~/emacs-lisp/dot-emacs.el")

Not the most elegant or automated solution, but it works for me.
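
The .gnus.el stub follows the same pattern. As a sketch – the file name dot-gnus.el is an assumption on my part, not something shown in the original setup:

;; hypothetical file name; adjust to whatever the version-controlled
;; Gnus configuration file is actually called
(load-file "~/emacs-lisp/dot-gnus.el")

That way both dot-files stay one-line loaders while the real configuration lives in the version-controlled subdirectory.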

Building a new home NAS/home server, part IV

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve done some more performance testing and, while I’m not 100% happy with the results, I decided to keep using FreeBSD with zfs on the server for the time being. Pretty much all measurements on Linux (both a recent Ubuntu Server and CentOS 5.3) showed lower performance, and while OpenSolaris is a lot faster when it comes to disk I/O and thus would have been my first choice for a pure NAS, the effort of porting my current mail server configuration would have resulted in the server being ready sometime in 2010…

I might be exaggerating this a little, but the effort of building amavisd-new is non-trivial due to its large number of dependencies. It’s manageable if packages, or at least some sort of package build system like the FreeBSD ports, are available, but building it from scratch and ensuring that all the necessary small Perl components are present isn’t trivial. So FreeBSD it is for the time being. I will still be looking into rolling a few packages for OpenSolaris for at least some of the tools I’d like to see on there, as the thought of running my home server on what is essentially an enterprise-class OS is very appealing to me.

Back to the server build. One issue that bugged me was the noise created by the server. It is not massively loud, but the fans do create a humming noise that I find quite distracting. My house is also very quiet, especially at night, and the noise from the machine carries surprisingly far. If I had a dedicated “server room” or could banish the box to my garage it wouldn’t be too much of a problem, but for continuing use in a quiet environment like my office, the machine was just a little too noisy out of the box. None of the components are super noisy, but as all my other machines were built with noise reduction in mind, this one stood out.

As a first measure, I replaced the power supply with an Arctic Cooling Fusion 550R. It is very quiet and has the additional advantage of being 80+ certified, like the OEM Antec PSU was. I also replaced the rear fan in the case with a Scythe Kama PWM fan, and then discovered that the motherboard doesn’t support PWM case fans. Oops. Better read the manual next time before ordering fans. Currently the fan is running at full blast but is barely audible, so that’s fine. I guess I’ll be replacing it with a 3-pin fan soon or sticking the original Antec one back in. The new PSU supports temperature control of two chassis fans, but those fans need to be 3-pin and not 4-pin.

This leaves the processor HSF unit as the last source of noise that I can actually do something about, and a replacement is on its way. Normally the OEM AMD cooler is quite good and quiet, but for some reason this one is rather noisy. It’s not that it’s loud, but it’s got a humming/droning noise that really irritates me. Turning the fan down in the BIOS settings at least alleviated the noise for the time being; it’s quiet enough at the moment, and if it starts annoying me again, I’ve got a Scythe Katana 2 processor HSF here that I can put in.

With the additional items like the PSU, fans and HSF unit, the build has come in a little over my original budget, but it should be a robust and hopefully long-lived server.

If you intend to install FreeBSD with zfs, make sure you upgrade from 7.2-release to the latest 7-releng, as you’ll get a newer version of zfs that way. That version is supposed to be more stable and also doesn’t need any further kernel tuning in a 64-bit environment. I also noticed a measurable performance improvement with a custom-built kernel now that the later version of the zfs code has gone into 7-releng.

Overall the system has been running without any incidents for a few weeks now and appears to be both stable and quiet. So I’ll now go back to blogging about software…

Building a new home NAS/home server, part III

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

Unfortunately the excitement from seeing OpenSolaris’s disk performance died down pretty quickly when I noticed that putting some decent load on the network interface resulted in the network card locking up after a little while. I guess that’s what I get for using the on-board Realtek instead of opening the wallet a little further and buying an Intel PCI-E network card. That said, the lock-up was specific to OpenSolaris – neither Ubuntu nor FreeBSD exhibited this sort of behaviour. I could get OpenSolaris to lock up the network interface reproducibly while rsyncing from my old server.

This gave me the opportunity to try the third option I was considering, Ubuntu Server. Again, this is the latest release with the latest kernel update installed. The four 1TB drives were configured as a RAID5 array using mdadm. Once the array had been rebuilt, I ran the same iozone test on it (basically just iozone -a with Excel-compatible output). To my surprise this was even slower than FreeBSD/zfs, even though the rsync felt faster. Odd, that.

Here are a few pretty graphs that show the results of the iozone write test – reading data was faster, as expected, but the critical bit for me is writing, as the server hosts all the backups for the various other PCs around here and also gets to serve rather large files.

First, we have OpenSolaris’s write performance:

iozone performance on OpenSolaris

FreeBSD with zfs is noticeably slower, but still an improvement over my existing server – that one only has two drives in a mirrored configuration, and oddly enough its write speed is about half that of the new server:

iozone performance on FreeBSD with zfs

FreeBSD with the geom-based RAID is slower, but I believe that this is due to the smaller number of disks. With this implementation you need to use an odd number of disks, so the fourth disk wasn’t doing anything during these tests. It’s not surprising that the overall transfer rate came in at roughly 3/4 of the zfs one.

iozone performance on FreeBSD with graid

Ubuntu with an mdadm RAID array unfortunately brings up the rear. I was really surprised by this, as it comes in below the performance of the 3-disk array under FreeBSD:

iozone performance on Ubuntu server

One thing I noticed is that while the four Samsung drives are SATA-300 drives, the disk I reused as a system drive is not. I’m not sure if that makes any difference, but I’ll try to source a SATA-300 disk for use as the system disk to find out. I’m also not sure if FreeBSD and Linux automatically use the drives’ command queueing ability; on these comparatively slow but large drives that can make a massive difference. Other than that I’m a little stumped. I’m considering the purchase of the aforementioned Intel network card, as that would hopefully allow me to run OpenSolaris, but beyond that I’m open to suggestions.

Reblog: Building a new home NAS/home server, part II

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

The good news is that the hardware has been behaving itself for a while now and everything appears to Just Work. FreeBSD makes things easy for me in this case as I’m very familiar with it, so I only spent a few hours getting everything set up. So far, so good.

I’ve rsynced most of the data off the old server, and while doing that I already had this nagging feeling that the transfer seemed to be, well, a little slow. Yes, I transferred several hundred gigabytes, but I nevertheless thought it would take less time, so this morning I did a few performance tests using Samba 3.3 on the new server. It turns out the performance was around 12-25 MB/s, and that’s a lot less than I expected. In fact, that’s a little too close to 100Mbit network performance for my liking – even without using jumbo frames I would expect a straight data transfer to be faster than this. Unfortunately I had introduced a few unknowns into the equation, including using the FreeBSD implementation of zfs. I am not saying that it is the culprit, but I’ll start with server disk performance measurements before I do anything else.

Ah well, looks like the old server needs to do its job a little longer then.

Update: Running iozone on the machine itself suggests that the write speed of 12-13MB/s I was seeing via Samba is more or less the actual transfer rate to disk on the zfs software RAID array. I’m not sure if that’s a limitation of my hardware or a problem with the (still experimental) zfs implementation on FreeBSD. Nevertheless this is a little on the slow side, to put it mildly (the four disks should easily saturate a gigabit network link), so I guess it’s time to wipe the zfs config and use FreeBSD’s native RAID5 instead to check whether that hypothesis holds water. If it doesn’t, then the bottleneck is somewhere else.

Update II: FreeBSD’s graid3 unfortunately needs an odd number of drives, which left one of the four Samsungs idle. The same tests suggest that the performance was about a quarter less than that of FreeBSD’s zfs with four disks, so it seems FreeBSD was driving the disks about as fast as it could in both configurations. Hmm. Just to get a comparable figure I threw OpenSolaris on the box, again with the four 1TB drives as a single zfs tank with a single directory structure on it. With the default settings, the same iozone tests are nudging write speeds over 35MB/s. Not as fast as I expected by a long shot, but noticeably better at the expense of a more painful configuration. Something to think about, but I think I need to test a few other configurations before I can really make up my mind as to which way I want to go.

Reblog: Building a new home NAS/home server, part I

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

Up to now I’ve mostly been using recycled workstations as my home mail, SVN and storage server. Nothing really wrong with that as most workstations are fast enough but I’m running into disk space issues again after I started backing up all the important machines onto my server. That’s especially annoying as I started using Time Machine on my iMac and now haven’t got enough space left on the server to also back up the MacBook. Time Machine is great as a backup solution simply because it is so unobtrusive and it appears to just work.

Inspired by an article in the German magazine c’t, I decided that instead of finding another used desktop machine to recycle, I was going to build a proper home server with a few additional bells and whistles. I’m still using mostly desktop parts as I can’t really justify the expense and noise of “proper” server components but I tried to select decent quality parts. Here’s the hardware list:

1 x Antec NSK 6580B Black Mid Tower Case – With 430W Earthwatts PSU
1 x ASROCK A780LM AMD 760G Socket AM2+ VGA DVI 6 Channel Audio Mini-ATX Motherboard
1 x AMD Athlon X2 5050e Socket AM2 45W Energy Efficient Retail Boxed Processor
4 x Samsung EcoGreen F2 1TB Hard Drive SATAII 32MB Cache – OEM
4 x Startech Serial ATA Cable (1 End Right Angled) 18″
1 x Crucial 2GB kit (2x1GB) DDR2 800MHz/PC2-6400 Ballistix Memory Non-ECC Unbuffered CL4 Lifetime Warranty
1 x Startech Right Angle Serial ATA Cable (1 end) 24 Inch

This little lot cost me around £450 including shipping, so it’s a relatively inexpensive machine and will hopefully be more powerful than your average, similarly priced NAS. It should also be reasonably quiet and energy efficient, at least for something that’s got a regular processor and five HDDs in it.

First impressions of the Antec case are good. I like buying Antec cases because they’re usually pretty high quality, and this one certainly doesn’t disappoint. Nice touches like the disk mounting trays with gel feet are great, there is enough space in there – well, more than enough for my uses – and it’s got an 80plus PSU.

(Photos: disk-carrier-1, disk-carrier-2)

The only downside I noticed when opening the case was that the 120mm chassis fan plugs into an HDD power connector and has a ‘speed switch’. Not that impressive, but if the rest of the machine turns out to work well I’ll probably replace it with a high-throughput fan that is hooked up to the chassis fan connector. I also like the removable disk carrier, which leaves enough space between the disks for some decent airflow. If I had some front-mounted fans, that is.

(Photo: disk-carrier-3)

I’ll skip the build-up itself. It’s not that hard anyway, plus there are plenty of howtos on the web already. Everything went together very smoothly. I noticed that there is a well-designed space for two front-mounted fans right in front of the HDD bays. Given the number of HDDs I’m running in this box, I’ll put a couple more fans on the shopping list (edit: which I never did, and it didn’t seem to have any detrimental effect on the longevity of the disks). As you can see, the case is nice and spacious (it was a relief working in there instead of the Mini-ATX cases I tend to prefer for desktops) and yes, I know that it needs a visit from the cable tie fairy. In addition to the components above, I used a fifth HDD for the OS alone, and on the heap of bits that no self-respecting geek can be without I found a DVD drive, so I didn’t have to hook up the external USB DVD drive that I keep for such cases.

(Photo: case-sideview)

My first choice of OS for this box was OpenSolaris 2009.06. An older version had achieved very good performance scores in the above-mentioned article, so I thought it was worth a try. I also hoped to be able to try zfs on the ‘data’ disks. While setting up the machine with OpenSolaris was fine, I ran into the problem that I require a lot more features than “just” a plain NAS – which was the reason to build the server myself in the first place. In addition to playing file and DHCP server, this machine would need to handle all incoming mail (that’ll be postfix, amavisd-new and policyd-weight then), run an IMAP server (dovecot) and an NNTP server (leafnode), and host a bunch of other small tools that are necessary but unfortunately not available as packages on OpenSolaris yet. While I wouldn’t mind building all these packages from source, keeping them up to date might turn into a problem, as I don’t want to have to maintain them manually. The list of packages seems fairly small, but with the number of dependencies that amavisd and especially SpamAssassin have, this would be almost impossible for someone like me who doesn’t have a lot of time to spend on server maintenance.

For this reason I went with plan B, which was to keep the OS I’m currently using on my server (FreeBSD). FreeBSD has support for zfs – albeit still labelled as experimental, so I’m not sure if it’s a good idea, but I’ll try it anyway – and I can set up a FreeBSD box almost in my sleep. Plus, with cron jobs updating the local ports collection, it should be fairly straightforward to keep everything up to date. One change I will be making, though, is to switch from the regular x86 version of the OS to the 64-bit version. Partially because I can, but mostly because zfs does seem to work better on 64-bit systems. Oh, and I’m beginning to wonder if using only 2GB of RAM was a smart idea, as zfs does appear to like loadsamemory. I guess we’ll find out in part II…