Make GNU Emacs play nicely with a German keyboard layout on Mac OS X

I used Carbon Emacs on OS X for quite a while, but with the release of Emacs 24 I switched to the stock GNU Emacs distribution. While GNU Emacs works fine on OS X, things get awkward once you throw a German keyboard layout into the mix: OS X uses Option plus various number keys to produce characters you constantly need for programming, like [] and {}. GNU Emacs maps Option to Meta out of the box, so those key combinations no longer work, which hurts especially if you do a lot of programming in the C family of languages.

I finally got fed up with this restriction as it seriously affected my enjoyment of cranking out code under OS X. A bit of duckduckgo’ing gave me the necessary settings to use Command as Meta and pass Option through to the underlying OS. From my slightly hazy memory, this was the default layout for Carbon Emacs.

Here’s the code that will switch the keys around:

;; Use Command as Meta, pass Option through to OS X unchanged so that
;; Option-based characters like [] and {} keep working, and stick with
;; the standard Mac input method.
(setq mac-command-modifier 'meta
      mac-option-modifier 'none
      default-input-method "MacOSX")

Yes, there is also the whole argument about the unsuitability of using a German keyboard layout for programming. A lot of German programmers I know switched to UK or US layout keyboards to make programming easier.

I can drive one of those layouts if need be, but I’m a touch typist and I rely on Kinesis Advantage keyboards for my day-to-day work, thanks to an injury from a less than ergonomic desk quite a few years back. While this old dog is generally happy to learn a new trick or three, I draw the line at switching keyboard layouts when there are other options available.

C++ memory management – why does my object still hang around after its destructor was called?

I recently came across a discussion on LinkedIn where someone had run into memory-related undefined behaviour. This prompted me to write this post, as it’s a common, subtle and often not very well understood bug that is easily introduced into C++ code.

Before we start, please keep in mind that what we’re talking about here is undefined behaviour in any standardized version of C++. You’re not guaranteed to see this behaviour in your particular environment and pretty much anything might happen, including your cat hitting the power button on your computer.

The behaviour I am describing in this blog post is fairly common though – especially on Windows – and it’s one of the harder problems to debug, especially if you haven’t encountered it before or it’s been so long you can’t remember it (lucky you).

Let’s assume you have the following code:

#include <cstring>
#include <string>

class notMuch
{
public:
  notMuch(char const* str)
  {
    strncpy(m_str, str, 315);
  }

  char* getStr()
  {
    return &m_str[0];
  }

private:
  char m_str[315];
};

void oops()
{
  notMuch* nm = new notMuch("Oops");
  delete nm;                          // nm now points at freed memory
  std::string hmmm(nm->getStr());     // use after delete - undefined behaviour
}

Obviously this is a contrived example and decidedly bad code on multiple levels, but I constructed it like this to make a point.

Now assume that you are debugging the above code because you are trying to track down some odd memory behaviour. For some reason, everything works fine when you run it under the debugger, even though there is clearly something wrong with this code and you’d expect it to fail in pretty spectacular fashion. Only it doesn’t. You run the code and everything appears to work, even though it is clearly accessing an invalid object.

In order to understand what is happening here, we need to delve a bit deeper into C++ memory handling.

On desktop and server OSs, the call to operator new is handled by your runtime library’s memory manager. The memory manager attempts to locate a suitably sized chunk of memory and returns it to operator new, which then makes use of it. If memory runs low, the runtime’s memory manager requests additional memory from the operating system, but as OS calls tend to be fairly expensive operations compared to checking an internal list, the runtime tries to minimize these calls and keeps the requested memory around even when it’s not in use. When the object’s destructor is called, it takes care of its business – not much in this case, as the object only contains an array of PODs, so the destructor is essentially a no-op – and then operator delete “frees” the memory.

Notice the quotation marks around “free”? That’s because in most cases, freeing the memory results in nothing more than an entry in a list or a bit field being updated. In the interest of performance the memory isn’t touched otherwise; it patiently waits to be reused, and the data in the chunk remains unaltered until the chunk is actually handed out again. The pointer nm still points to the (now freed) memory location. It’s invalid, but neither the runtime nor the user can tell that by simply looking at the pointer itself.

Unfortunately this means you now have an invalid pointer to an unusable object somewhere in the weeds, one that you shouldn’t be able to dereference. The trouble is that in a lot of implementations you can still dereference the pointer and use the underlying object as if it had never been deleted. In the example above, this works the majority of the time. If we don’t assume a multithreaded environment, nothing will have touched the memory containing the invalid object, so the access appears to be safe and the user ends up seeing the correct data.

So it’s bad, but not really, right?

Not so fast.

Let’s now assume that before our call to getStr(), we insert another function call that triggers a memory allocation. Depending on the size of the requested chunk of memory and the current availability of other chunks of memory, the memory manager will occasionally reuse the memory that was previously occupied by nm. Not all the time, but occasionally. And as already mentioned, “nm” is indistinguishable from a valid pointer. All of a sudden, the attempt to create the string “hmmm” results in the string containing garbage or the program crashing, but the behaviour cannot be reproduced consistently and the crashes seem to depend on the phase of the moon.
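To make that concrete, here is a rough sketch of what such a modified function might look like. The extra allocation, its size and the names are my own invention purely for illustration; whether the freed chunk actually gets reused depends entirely on the allocator’s internal state:

void oops_with_reuse()
{
  notMuch* nm = new notMuch("Oops");
  delete nm;

  // An allocation of a similar size to notMuch - the memory manager may
  // well hand back the chunk nm still points to, and we overwrite it.
  char* scratch = new char[315];
  memset(scratch, 'X', 315);

  std::string hmmm(nm->getStr()); // garbage, a crash, or - if you are unlucky - "Oops"
  delete[] scratch;
}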

The big question is, what can we do about this problem?

First, using RAII is a much better idea than raw pointer slinging. The above example contains a good reason why: if nm had been handled in RAII fashion, by using a std::shared_ptr or by simply allocating the object on the stack, the string hmmm would have been constructed while nm was still in scope and a valid object.
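As a rough sketch of what that could look like (the function and variable names are mine, not part of the original example):

#include <memory>   // std::shared_ptr, std::make_shared

void noOops()
{
  notMuch onStack("No oops");              // lives on the stack, destroyed at end of scope
  std::string fine(onStack.getStr());      // onStack is still alive here

  auto managed = std::make_shared<notMuch>("Also fine");
  std::string alsoFine(managed->getStr()); // memory is released when the last shared_ptr goes away
}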

Second, a lot of developers advocate nullptr’ing nm after the delete, at least in debug mode. That’s a valid approach and it will allow you to see the problem in the debugger, but it relies on the weakest link – the one located between the chair and the keyboard – to apply this pattern consistently. In any reasonably sized code base one can pretty much expect that someone will forget to apply the pattern in a critical piece of code, with predictable late night debugging sessions being the result.
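In the context of the example, that pattern would look something like the fragment below. Dereferencing a null pointer is still undefined behaviour, but on mainstream platforms it typically crashes immediately, which is exactly the point, as the bug becomes visible instead of silently “working”:

delete nm;
nm = nullptr;                     // later misuse now dereferences a null pointer...
std::string hmmm(nm->getStr());   // ...which typically crashes right here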

Third, the use of a good memory debugger like Valgrind or Rational Purify will catch this issue and a whole raft of other, similar issues like double deletes, buffer overruns, memory leaks etc. If you’re writing production C++ code and you care about the quality of your work, you need to use one of these tools.
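For example, assuming the code above was built into a binary called oops_demo (a made-up name), running it under Valgrind’s default memcheck tool is as simple as:

valgrind --leak-check=full ./oops_demo

Memcheck reports the access in oops() as an invalid read and points at both the offending access and the delete that freed the block.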

I seem to be getting addicted to icicle-mode

Not that I’m doing much with it yet other than the enhanced minibuffer completion, but I really notice when icicles is not installed or inactive, so I’ve ended up adding it to every Emacs installation I use. ELPA is coming in really handy here, as it’s just a matter of installing icicles via one of its repos rather than having to install it manually. I’m really going off manual installs of complex Emacs packages these days, after having done it for so long.

One thing I really appreciate about icicles is that it extends Emacs so subtly that you barely notice its presence, but even after a couple of days I very much notice its absence. Its presence in the menus is of course much more obvious and pronounced, but I rarely use Emacs menus, preferring to drive it via keyboard shortcuts. I’ll be experimenting with icicles a bit more, because this mode has all the makings of a must-have mode, at least for my setup.

And of course it’s another distraction from learning eshell. Oh well.

Anyway, check out icicles if you haven’t done so already.

How to turn off the shouty menus in Visual Studio, 2013 edition

Visual Studio 2013, much like its predecessor Visual Studio 2012, also “features” the SHOUTY uppercase menus. As in Visual Studio 2012, these can be turned off using a registry setting.

tl;dr – run this command in PowerShell:


Set-ItemProperty -Path HKCU:\Software\Microsoft\VisualStudio\12.0\General -Name SuppressUppercaseConversion -Type DWord -Value 1

Getting Emacs’ ansi-term to play nicely with FreeBSD

I was playing with the various shell options – sorry, trying to learn eshell – this evening. While playing with eshell I learned about ansi-term, Emacs’ second, fully fledged terminal emulator.

Most of my machines here run FreeBSD, as does the machine that hosts this blog. FreeBSD doesn’t recognise eterm-color as a valid terminal type, and as FreeBSD uses termcap rather than terminfo, the terminfo file supplied with Emacs was of limited use.

As usual, StackOverflow came to the rescue with this answer. After adding the termcap entry and rebuilding termcap.db, eterm-color is recognised as a valid terminal type and ansi-term works as a full featured terminal emulator running inside emacs.
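The termcap entry itself is in the linked StackOverflow answer; the rebuild step is just the standard FreeBSD one, roughly like this (the termcap path below is the usual default and may differ on your system):

# rebuild termcap.db after appending the eterm-color entry to the termcap file
cap_mkdb /usr/share/misc/termcap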

So now I can do this: Midnight Commander running on one of my FreeBSD servers, inside GNU Emacs on my Mac. Job done.

And yes, ESC-0 is passed through correctly so I can actually exit mc.

Ergodox keyboard built and reviewed by Phil Hagelberg

Phil Hagelberg published an interesting blog post about the Ergodox keyboard. I’m a self-confessed input hardware nerd and have been a Kinesis Ergo/Advantage user for over a dozen years now. I love those keyboards – otherwise I wouldn’t keep buying them – but Phil makes a very good point that they’re bulky, not something you quickly throw into a bag and take with you for a hacking session at the local coffee shop. It’s good to see alternatives out there, especially as there seems to be less of a focus on ergonomic input devices recently.

Will I try to build an Ergodox? Probably not right now. My muscle memory is pretty tied to the Kinesis keyboards but if I find myself traveling more, I’d definitely look into one.

If you are a professional programmer you owe it to yourself, your continuing career and your health to check out higher quality keyboards. Unfortunately, over the last decade or so there has been a race to the bottom when it comes to keyboard quality. Most users don’t notice the difference, and those who would notice most likely already have a good keyboard stashed away or are willing to buy one. Join them.

Yes, I’m a keyboard snob, but then again I had the good fortune to start programming professionally when these were the gold standard. You owe it to yourself to be a keyboard snob, too.

Accessing the recovery image on a Dell Inspiron 530 when you can’t boot into the recovery partition

My hardware “scrap pile” contained a Dell Inspiron 530 – not the most glamorous of machines, and rather long in the tooth too, but it works and it runs a few pieces of software that I don’t want to regularly reboot my Mac for. Problem was, I had to rebuild it because it had multiple OSs installed and none of them worked. Note to self – don’t mix 32 and 64 bit Windows on the same partition and expect it to work flawlessly.

I still had the recovery partition, but it wasn’t accessible from the boot menu any more. Normally you’re supposed to use the advanced boot menu to access it, but I couldn’t figure out how to boot into it. There is a Windows folder on the partition, but no obvious boot loader. I also didn’t want to pay Dell for a set of recovery disks, mainly because those would have cost more than the machine is worth to me.

Poking around the recovery partition turned up a Windows image file that looked like it contained the factory OS image – its name, “Factory.wim”, kinda gave that away – as well as the necessary imaging tool from Microsoft, imagex.exe.

All I needed was a way to actually run them from an OS that wasn’t blocking the main disk, so I grabbed one of my Windows 7 disks, booted into installation repair mode and fired up a command prompt.

After I made sure that I was playing with the correct partition, I formatted the main Windows partition and then used imagex to apply Factory.wim to the newly cleansed partition. This restored the machine to factory settings even though I hadn’t been able to boot into the recovery partition to use the “official” factory reset.
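The gist of it, with made-up drive letters (D: for the recovery partition, C: for the Windows partition to be rebuilt), was something along these lines:

rem quick-format the old Windows partition - triple-check the drive letter first
format C: /FS:NTFS /Q

rem apply the first image inside Factory.wim to the freshly formatted partition;
rem run imagex.exe from wherever it lives on the recovery partition
imagex /apply D:\Factory.wim 1 C:\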

Oh, and if the above sounds like gibberish to you, I would recommend that you don’t blindly follow these vague instructions, unless you’re happy to lose all the data on the machine.

As a bonus task, you also get to uninstall all the crapware loaded on the machine. Fortunately it looks like everything can be uninstalled from the control panel. While you’re installing and uninstalling, make sure you update the various wonderful pieces of software that come with the machine as they’ll be seriously outdated.

The time has finally come to rebuild my home server

Back in 2009 I built a “slightly more than NAS” home server and documented that build on my old blog. I’ve migrated the posts to this blog, you can find them here, here, here, here and the last one in the series here.

The server survived the move from the UK to the US, even though the courier service I used did a good job of throwing the box around – to the extent that a couple of disks had fallen out of their tool-less bays. Nevertheless, it continued soldiering on after I put the drives back in and replaced a couple of broken SATA cables and a dead network card that hadn’t survived being hit by a disk drive multiple times.

I had recently started to worry about the state of the disks in the server. They were still the original disks I put in when I built the machine: desktop drives that weren’t really built for 24×7 usage, although they coped admirably with running continuously for four years. One of them had started showing a couple of worrying errors (command timeouts and the like), so it was past time to replace all of them before things got any worse.

After some research into affordable NAS disk drives, I ended up with five WD Red 2TB disks for the ZFS raid portion of the NAS and an Intel X25-M SSD drive for the system disk.

First order of the day was to replace the failing disk, so armed with these instructions, I scrubbed the array (no data errors after over four years, yay) and replaced the disk. This time I used a GPT partition on the ZFS drive instead of the raw disk like I did when I originally built the system. That way it’s at least obvious that there is data on the disk, plus I can tweak the partition sizes to account for slight differences in size between physical disks. You can replace a raw disk with a GPT-partitioned one, you just have to tweak the command slightly:

zpool replace data ada1 /dev/ada1p1

Basically, tell ZFS to replace the raw device (ada1) with the newly created GPT partition (/dev/ada1p1).

The blog post with the instructions on how to replace the disks mentioned that resilvering can take some time, and boy were they not kidding:

root@nermal:~ # zpool status
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Dec 27 07:17:54 2013
        131G scanned out of 3.26T at 121M/s, 7h32m to go
        32.8G resilvered, 3.93% done

At that rate I ended up replacing one disk per day until I managed to replace all four disks. Not that big a deal if I have to do this once every four years.

Why all four disks when I said above that I bought five? I had forgotten that you can’t add a disk drive to a raidz array to extend its capacity – if you want more disks in your array, you have to create a new array instead. Ideally I should have moved all the data to a separate, external disk, recreated the array with all five disks and then moved the data back from the external disk. As the motherboard doesn’t have USB3 connectivity this would have taken way too long, so I came up with a different approach.

Basically, instead of creating a single big ZFS partition on every drive, I created two: one the same size as the partition on the old 1TB drive, and a second one filling the rest of the disk. I also needed to make sure that the partitions were correctly aligned, as these drives have 4K sectors and I wanted to avoid the performance degradation you get when the partition offset is wrong. After some digging around the Internet, this is what the partitioning commands look like:

gpart create -s gpt ada1
gpart add -a 4k -s 953870MB -t freebsd-zfs -l disk1 ada1
gpart add -a 4k -t freebsd-zfs -l disk1_new ada1

I first used a different method to try and get the alignment “right” by configuring the first partition to start at 1M, but it turned out that this is probably an outdated suggestion and one should simply use gpart’s “-a” parameter to set the alignment.

Once I’ve redone the layout for all four disks with the dual partitions as shown above, I should be able to add the fifth disk to the machine and create a second raidz that spans all five disks and uses 4k sector alignment (which requires playing with gnop).
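The gnop trick, roughly, looks like the sketch below. The pool name and most of the labels are placeholders of mine; the idea is to put a fake 4K-sector provider on top of one of the partitions so that zpool create picks ashift=12, then swap the real partition back in:

# pretend one partition has 4K sectors so zpool create picks ashift=12
gnop create -S 4096 /dev/gpt/disk1_new

# create the second raidz across the five "_new" partitions
zpool create data2 raidz gpt/disk1_new.nop gpt/disk2_new gpt/disk3_new gpt/disk4_new gpt/disk5_new

# swap the gnop device back out for the real partition
zpool export data2
gnop destroy /dev/gpt/disk1_new.nop
zpool import data2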