Picture showing "About This Mac" after I upgraded the CPU

Bit the bullet and upgraded my Mac Pro’s CPU

I’ve been an unashamed fan of the old “cheese grater” Mac Pro due to its sturdiness and expandability. Yes, they’re not the most elegant bit of kit out there but they are well built. And most importantly for me, they are expandable by plugging things inside the case, not by creating a Gordian Knot of hubs, Thunderbolt cables, USB cables and stacks of external disks all evenly scattered around a trash can. Oh, and they’re designed to go under a desk. Where mine happens to live, right next to my dual boot Linux/Windows development box.


Sprinkler controller upgrade part III – setting it up

Putting the OpenSprinkler and Raspberry Pi together was easy; getting them to run showed my inexperience when it comes to playing with hardware. The overall install went pretty smoothly and the documentation is good and easy to follow, so I’m not going to ramble on about it for very long, just throw up some notes.

First, my old card reader didn’t want to play with any of my computers. Now, the card reader is ancient, but it should still have been able to handle an SD card. No joy under any available OS, so I ended up having to get a new SD/microSD-only card reader.

When writing the OSPi image file to the SD card using Mac OS, make sure you write to the raw device and not to the slice (in my case /dev/rdisk4 and not /dev/disk4s1), otherwise you’ll end up with a non-booting OSPi and wonder why. Don’t ask me how I know.
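For reference, the write looks roughly like this on the Mac – the disk number is whatever diskutil assigned to your card reader and the image file name depends on the OSPi release you downloaded, so adjust accordingly:

# find the SD card's disk number first
diskutil list
# unmount (don't eject) the card, then write to the raw device, not the slice
diskutil unmountDisk /dev/disk4
sudo dd if=ospi.img of=/dev/rdisk4 bs=1m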

Also, the OSPI image doesn’t have Emacs pre-installed, so I obviously had to fix that. I mean, how would I be editing configuration files otherwise?
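As the image is Raspbian-based, fixing that should just be a package install away (assuming the package is still simply called emacs in the repository):

sudo apt-get update
sudo apt-get install emacs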

The hardware installation (aka screwing the controller to the wall and wiring it up) was pretty simple. To make the install easier I had taken photos of the way the old controller was wired and used those as a guide.

The whole install went pretty smoothly and the controller has been running our sprinklers for a while now. Unfortunately the sprinkler_pi program that I really wanted to use seems to have hit a bug that makes it trigger multiple valves at the same time; I’m planning to upgrade to the latest version and, if necessary, debug it a bit because I like its UI better than the default interval_program. The latter, however, just worked out of the box.

The only concern so far is that the CPU temperature on the Raspberry Pi seems a little high (it usually hovers around 60-65° Celsius, as it sits outside in the garage). I might have to experiment with a CPU heat sink on that one.
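If you want to keep an eye on the temperature yourself, the standard Raspberry Pi tools will report it – assuming vcgencmd is present on the OSPi image, which it should be:

# firmware-reported SoC temperature
vcgencmd measure_temp
# or read it straight from sysfs (value is in millidegrees Celsius)
cat /sys/class/thermal/thermal_zone0/temp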

Sprinkler controller upgrade part II – the Pi(e)s have arrived

The Raspberry Pis have landed. Guess which box contains the sensitive electronics and is worth about twice as much as the other one:

pi-parcels-1

That’s right:

pi-parcels-2

Geez Amazon, what is it about the shoddy packing when it comes to items bought via Amazon Fulfillment Services? This is not the first time I’ve received something that can only be described as badly packaged.

The OpenSprinkler kit has also arrived; all I’m currently waiting for is a smaller memory card, as the regular SD cards I bought are a little too big to fit into the OpenSprinkler case. Anyway, I should have the new hardware up and running on Friday.

The latest project – improving the home’s sprinkler system, part I of probably a lot

I normally don’t play much with hardware, mainly because there isn’t much I want to do that requires hardware beyond a regular PC or maybe a phone or tablet. This one is different, because no self-respecting geek would want the usual rotary control “programmable” timer running their sprinkler system, would they?

We live at the edge of the desert and have pretty strict watering restrictions here. I’m all for them – water being a finite resource and all that – and I want to improve our existing sprinkler system at the same time. It doesn’t help that the people who set up the sprinklers were probably among the lower bidders, to put it politely. OK, to be blunt, they seem to have failed the “giving a shit” test when they put the system together. I spent a lot of last year’s “gardening hours” just trying to make it work somewhat. Not well, just “somewhat”. Time to fix that.

First step was researching hardware. I’m comfortable with Unix-type OSs (obviously), and with seemingly the world and their dogs releasing small, low-power embedded Linux devices, I figured one of them would be perfect. The original plan was to get a Raspberry Pi or a BeagleBone with a relay shield/cape and drive the sprinkler valves that way. A bit more poking around the web led me to the various OpenSprinkler modules (standalone, Raspberry Pi shield and BeagleBone cape), and they look ideal for what I have in mind. I’m planning to order the Raspberry Pi version; one of the nice touches is that the Raspbian repository has packages for the Java JDK, which gives me bad ideas about hacking parts of the sprinkler system in Clojure or Armed Bear Common Lisp. I’m not sure the system is powerful enough to run either, but one can dream.

The good thing about the various OpenSprinkler systems is that they have the 24V-to-5V converter on board, so the power supply isn’t a problem. There is already open source software for them that covers the normal requirements, and any of them can control enough valves for our current needs without resorting to genius solutions like running two valves off the same controller output because someone installed a wiring loom that is one wire short of being able to control all valves individually. Apparently the fact that the water pressure wasn’t high enough to run two zones at the same time fell into the category of “not giving a shit”.

The next step after getting the hardware is to convert the existing system to run off the new controller, with some additional wiring so I can control all zones individually. This will require fixing up some of the wiring issues and will also have to tie in with my project of running some Ethernet wiring around the house, unless I decide to go wireless for the sprinkler controller. Haven’t figured that part out yet. Given that the controller is “headless” I’m tempted to hide it away out of sight and just run Ethernet and 24V power to it.

Once it’s all up and running I’ll look into adding some sensors for a bit more fine-grained control over the system. Rain sensors are not really helpful out here as it hardly ever rains during irrigation season. I’m thinking about adding at least a couple of moisture sensors for some of the more sensitive plants to ensure that they get the appropriate amount of water but not more than necessary. Not sure I’ll get around to that part this year, first the system needs to be up and running reliably before I go and break it again.

Stay tuned.

Ergodox keyboard built and reviewed by Phil Hagelberg

Phil Hagelberg published an interesting blog post about the Ergodox keyboard. I’m a self-confessed input hardware nerd and have been a Kinesis Ergo/Advantage user for over a dozen years now. I love those keyboards – otherwise I wouldn’t keep buying them – but Phil makes a very good point that they’re bulky, not something you quickly throw into a bag and take with you for a hacking session at the local coffee shop. It’s good to see alternatives out there, especially as there seems to be less of a focus on ergonomic input devices recently.

Will I try to build an Ergodox? Probably not right now. My muscle memory is pretty tied to the Kinesis keyboards but if I find myself traveling more, I’d definitely look into one.

If you are a professional programmer you owe it to yourself, your continuing career and your health to check out higher quality keyboards. Unfortunately, over the last decade or so there has been a race to the bottom when it comes to keyboard quality. Most users don’t notice the difference, and those who would notice most likely already have a good keyboard hidden away or are willing to buy one. Join them.

Yes, I’m a keyboard snob, but then again I had the good fortune to start programming professionally when these were the gold standard. You owe it to yourself to be a keyboard snob, too.

Accessing the recovery image on a Dell Inspiron 530 when you can’t boot into the recovery partition

My hardware “scrap pile” contained a Dell Inspiron 530 – not the most glamorous of machines and rather out of date and old, too, but it works and it runs a few pieces of software that I don’t want to reboot my Mac for regularly. Problem was, I had to rebuild it because it had multiple OSs installed and none of them worked. Note to self – don’t mix 32 and 64 bit Windows on the same partition and expect it to work flawlessly.

I did still have the recovery partition, but it wasn’t accessible from the boot menu any more. Normally you’re supposed to use the advanced boot menu to access it, but I couldn’t figure out how to boot into it. There is a Windows folder on the partition, but no obvious boot loader. I also didn’t want to pay Dell for a set of recovery disks, mainly because those would have cost more than the machine is worth to me.

Poking around the recovery partition showed a Windows image file that looked like it contained the factory OS settings – its name, “Factory.wim”, kinda gave that away – and the necessary imaging tool from Microsoft, called imagex.exe.

All I needed was a way to actually run them from an OS that wasn’t blocking the main disk, so I grabbed one of my Windows 7 disks, booted into installation repair mode and fired up a command prompt.

After I made sure that I was playing with the correct partition, I formatted the main Windows partition and then used imagex to apply Factory.wim to the newly cleansed partition. This restored the machine to factory settings even though I hadn’t been able to boot into the recovery partition to use the “official” factory reset.
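For the curious, the sequence was roughly along these lines – the drive letters and the location of imagex.exe on the recovery partition are from memory and will almost certainly differ on your machine, so verify them (diskpart’s “list volume” helps) before formatting anything:

rem wipe the main Windows partition (C: here - check yours first!)
format C: /FS:NTFS /Q
rem apply the factory image from the recovery partition (D: here)
D:\imagex.exe /apply D:\Factory.wim 1 C:\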

Oh, and if the above sounds like gibberish to you, I recommend you don’t blindly follow these vague instructions – unless you want to make sure you lose all the data on the machine.

As a bonus task, you also get to uninstall all the crapware loaded on the machine. Fortunately it looks like everything can be uninstalled from the control panel. While you’re installing and uninstalling, make sure you update the various wonderful pieces of software that come with the machine as they’ll be seriously outdated.

The time has finally come to rebuild my home server

Back in 2009 I built a “slightly more than NAS” home server and documented that build on my old blog. I’ve migrated the posts to this blog; you can find them here, here, here, here and the last one in the series here.

The server survived the move from the UK to the US, even though the courier service I used did a good job of throwing the box around – to the extent that a couple of disks had fallen out of their tool-less bays. Nevertheless, it continued soldiering on after I put the drives back in and replaced a couple of broken SATA cables and a dead network card that hadn’t survived being hit by a disk drive multiple times.

I had recently started to worry about the state of the disks in the server. They were still the original disks I put in when I built the machine, and they were desktop drives that weren’t really built for 24×7 usage, but they coped admirably with running continuously for four years. One started showing a couple of worrying errors (command timeouts and the like), so it was past time to replace all of them before things got any worse.
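If you want to keep an eye on your own drives, smartmontools (available from FreeBSD ports/packages) is the usual way to see those errors; the device name below is just an example:

# full SMART report: health, attributes and the error log
smartctl -a /dev/ada1
# quick pass/fail health check
smartctl -H /dev/ada1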

After some research into affordable NAS disk drives, I ended up with five WD Red 2TB disks for the ZFS raid portion of the NAS and an Intel X25-M SSD drive for the system disk.

First order of the day was to replace the failing disk, so armed with these instructions, I scrubbed the array (no data errors after over four years, yay) and replaced the disk. This time I used a GPT partition on the ZFS drive instead of the raw disk like I did when I originally built the system. That way it’s at least obvious that there is data on the disk, plus I can tweak the partition sizes to account for slight differences in size between physical disks. You can replace a raw disk with a GPT-partitioned one; you just have to tweak the command slightly:

zpool replace data ada1 /dev/ada1p1

Basically, this tells ZFS to replace the raw device (ada1) with the newly created GPT partition (/dev/ada1p1).

The blog post with the instructions on how to replace the disks mentioned that resilvering can take some time, and boy were they not kidding:

root@nermal:~ # zpool status
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Dec 27 07:17:54 2013
        131G scanned out of 3.26T at 121M/s, 7h32m to go
        32.8G resilvered, 3.93% done

At that rate I ended up replacing one disk per day until I managed to replace all four disks. Not that big a deal if I have to do this once every four years.

Why all four disks after I said above that I bought five? I had forgotten that you can’t add a disk drive to a raidz array to extend its capacity – if you want more disks in your array, you have to create a new array instead. Ideally I should have moved all the data to a separate, external disk, recreated the array with all five disks and then moved the data back from the external disk. As the motherboard doesn’t have USB3 connectivity this would have taken way too long, so I came up with a different approach.

Basically, instead of creating a single big ZFS partition on every drive, I created two: one the same size as the partition on the old 1TB drives, the second filling the rest of the disk. I also needed to make sure the partitions were correctly aligned, as these drives have 4K sectors and I didn’t want to suffer performance degradation from a wrong partition offset. After some digging around the Internet, this is what the partitioning commands look like:

gpart create -s gpt ada1

gpart add -a 4k -s 953870MB -t freebsd-zfs -l disk1 ada1
gpart add -a 4k -t freebsd-zfs -l disk1_new ada1

I first used a different method to try and get the alignment “right” by configuring the first partition to start at 1M, but it turned out that that’s probably an outdated suggestion and one should simply use gpart’s “-a” parameter to set the alignment.

Once I’ve redone the layout for all four disks with the dual partitions as I showed above, I should be able to add the fifth disk to the array and create a second raidz that uses five disks and also 4k sector alignment (which requires playing with gnop).
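The gnop dance, as far as I understand it, goes roughly like this – the pool name and most of the GPT labels below are placeholders (only disk1_new actually exists so far), so treat it as a sketch:

# put a fake 4K-sector device on top of one of the new partitions
gnop create -S 4096 /dev/gpt/disk1_new
# create the new raidz via the .nop device so ZFS picks ashift=12 for the whole vdev
zpool create data2 raidz /dev/gpt/disk1_new.nop /dev/gpt/disk2_new /dev/gpt/disk3_new /dev/gpt/disk4_new /dev/gpt/disk5_new
# export, remove the gnop layer, and import again
zpool export data2
gnop destroy /dev/gpt/disk1_new.nop
zpool import data2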

I guess that’s one way of mounting an SSD

One of the perils of buying a used computer – yes, I am too cheap or just not rich enough to buy a new Mac Pro – is that you sometimes find you’ve inherited “interesting” fixes.

Like this SSD mount:

mac-ssd-2 mac-ssd-1

 

Yes, that’s electrical tape and no, I don’t agree with this special mounting method. At least they did put some electrical tape between the case of the SSD and the case of the DVD drive.

mac-ssd-3

 

I guess this is one way of saving $20, which is what the correct 2.5″ SSD mounting frame for a recent-ish (2009/2010) Mac Pro costs:

A 2.5" SSD in the correct mounting frame for a Mac Pro

That’s another warranty voided, then

Last night I did something I was adamant I wasn’t going to do, namely rooting my Android phone and installing CyanogenMod on it. Normally I don’t like messing with (smart)phones – they’re tools in the pipe wrench sense to me, and they shouldn’t require much in the way of care & feeding apart from charging and the odd app or OS update. Of course, the odd OS update can already be a problem, as no official updates have been available for this phone (a Motorola Droid) for a while, and between the provider-installed bloatware that couldn’t be uninstalled and the usual cruft that seems to accumulate on computers over time, the phone was really sluggish, often unresponsive and pretty much permanently complained about running out of memory. So far it appears that updating the OS and only installing a handful of apps that I actually use, as opposed to the ones I supposedly “need”, has resulted in a much better user experience.

The whole process was comparatively painless, which I really appreciated. The biggest hurdle was getting the ClockworkMod recovery image onto the phone; I ended up rebooting the Mac into Windows and installing it via the Windows tools. Other than that, the installation went smoothly and didn’t leave me with a bricked phone, so I’m happy with that part.

Why the effort, given my dislike for hacking smartphones? Well, for starters I can squeeze a little more life out of the phone. I’m eligible for an upgrade, but thanks to Verizon’s shenanigans – sorry, the added hoops (and added expense) you have to jump through if you want to keep a grandfathered unlimited data plan – I don’t feel particularly compelled to spend money on a phone, especially if I have to pay full retail for an upgrade. I’m also not that big a fan of Android (I admit to preferring iOS), so I’m currently waiting to see how the whole “unlocked iPhone” saga plays out with the iPhone 5. If I have to pay retail for a phone – any phone – I might as well use that as leverage to reduce the overall phone bill.

In the meantime I’ll see how I like the “new” Droid and had better get used to occasionally reinstalling the OS on a phone, which reminds me of the quip that Android truly is the Windows of smartphone OSs.

The homebuilt NAS/home server, revisited

This is a reblog of my “building a home NAS server” series on my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve blogged about building my own NAS/home server before; see here, here, here and here.

After a few months, I think it might be time for an interim update.

In its original incarnation, the server wasn’t as stable as it should have been, given my previous experience with FreeBSD. For some reason it would crash every few weeks and sometimes even hang on reboot. Not good, especially as it happened a few times while I wasn’t home. I guess I should have heeded the warning about the ZFS integration being experimental… Things got worse when I added a wireless card and retired my access point. Roughly around this point I got fed up enough to go back and start building an OpenSolaris VM to try out a mail server setup similar to the one I’m running on FreeBSD.

Before I got anywhere with this, FreeBSD 8.0 came out, so I upgraded. ZFS had been promoted from experimental, the wireless stack had been overhauled, and so on. The stability problems disappeared and the machine has been utterly reliable since then. Where before, using Time Machine to back up my MacBook via the wireless network had about a 50% chance of crashing the server, it now “just works”. This is where I wanted to get, and I’ve now got there. Performance also seems to have improved – copying large files from the server to my Windows 7 machine sees a reliable 78MB/s over my Gigabit network now.

I’ve still got a couple of small changes I want to make to the machine – for example, I’ve got 4GB of RAM that I want to put into it. This should enable ZFS readahead, which should give me further performance improvements. I also plan to add two more fans to blow cold air over the hard drives to keep them happy and working for longer. Edit: Actually it didn’t, as 4GB of RAM on the mainboard results in slightly less than 4GB available to the OS. I did enable the readahead manually, though.
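For the record, the manual part is just a loader tunable – a minimal sketch, assuming ZFS had auto-disabled prefetch because of the limited usable RAM:

# /boot/loader.conf - force ZFS prefetch (readahead) back on
vfs.zfs.prefetch_disable="0"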

If I built another home server, I would probably get another motherboard. Not that there is anything particularly wrong with the one in the machine, but the CPU fan speed control only goes down to 70%, so it’s no wonder the CPU fan is noisier than it should be.

Was it worth it? Overall I’d say yes, although I probably should have stuck to tried and tested technology (either FreeBSD’s built-in RAID5 or OpenSolaris with ZFS). Dithering between the two caused unnecessary problems at the beginning and pushed up the cost. Next time I’d probably set up the server on OpenSolaris and run the mail server on FreeBSD in a VM on top of it. Given that the current configuration is working, I’ll leave it alone for the time being, though.