SE-Radio interviews Ron Lichty, a must-listen for development managers

The Software Engineering Radio podcast has just published an interview with Ron Lichty. If you’re either thinking about moving from development into management or have already made the move, it’s well worth a listen. Unfortunately there is only so much that can be packed into an hour of podcast, but based on the interview alone, Ron’s book “Managing the Unmanageable” just found itself added to my ever-growing reading list.

Direct link to the episode.

Installing specific major Java JDK versions on OS X via Homebrew

In an earlier post, I described how to install the latest version of the Oracle Java JDK using homebrew. What hadn’t been completely obvious to me when I wrote the original blog post is that the ‘java’ cask always installs the latest major version of the JDK. As a result, when I upgraded my JDK install today, I ended up with an upgrade from Java 8 to Java 9. On my personal machine that’s not a problem, but what if I wanted to stick with a specific major version of Java?

Read More

Emacs 25.3 released

Emacs 25.3 was released on Monday. Given that it’s a security fix, I’m downloading the source as I write this. If you’re on the latest Emacs, I’d recommend you update as soon as you can. And since the vulnerability has been around since Emacs 19.29, you probably want to upgrade even if you’re on an older version.

Build instructions for Ubuntu and friends are the same as before; the FreeBSD port appears to have been updated already, and I’m sure homebrew will follow soon if it hasn’t already.

Cleaning up UTF-8 character entities when exporting from WordPress to Jekyll

I’ve been experimenting with converting this blog to Jekyll or another static blog generator. I’m sticking with Jekyll at the moment due to its ease of use and its plugin ecosystem. The main idea is to reduce resource consumption and hopefully also speed up delivery of the blog. In fact, a static version of the blog is available right now, even though it’s kinda pre-alpha and not always up to date. The Jekyll version also doesn’t have comments set up yet, nor does it have a theme I like, so it’s still very much work in slow progress.

To export the contents from WordPress to Jekyll I use the unsurprisingly named WordPress to Jekyll Exporter plugin. This plugin dumps the whole WordPress data, including pictures, into a zip file in a format that is mostly markdown grokked by Jekyll. It doesn’t convert all the links to markdown, so the generated files need some manual cleanup. One problem I keep running into is that the exporter dumps out certain UTF-8 characters as numeric character references. Unfortunately, when processing the data with Jekyll afterwards, those references get turned into strings that are displayed as is. To be clear, I’m not complaining about this behaviour; I’d rather have the information preserved so I can rework it later. So I wrote a script to help with this task.
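My script is behind the “Read More” link, but here’s a minimal sketch of the kind of cleanup involved, assuming the entities show up as decimal (&#8217;) or hexadecimal (&#x2019;) numeric character references. The function name and regular expression are mine for illustration, not the exporter’s:

```python
import re

# Matches decimal (&#8217;) and hexadecimal (&#x2019;) numeric
# character references.
ENTITY_RE = re.compile(r'&#(x[0-9A-Fa-f]+|\d+);')

def decode_numeric_entities(text):
    """Replace numeric character references with the characters they encode."""
    def replace(match):
        ref = match.group(1)
        codepoint = int(ref[1:], 16) if ref.startswith('x') else int(ref)
        return chr(codepoint)
    return ENTITY_RE.sub(replace, text)

print(decode_numeric_entities('It&#8217;s &#x201C;done&#x201D;'))  # → It’s “done”
```

Run over each exported markdown file, this turns the escaped references back into the actual UTF-8 characters before Jekyll gets to see them.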

Read More

Building Emacs 25.2 on XUbuntu 17.04

I haven’t done much with Ubuntu recently, but had to set up a laptop with XUbuntu 17.04. That came with Emacs 24.5 as the default emacs package and, as skeeto pointed out in the comments, with a separate emacs25 package for Emacs 25.1. I tend to run the latest release Emacs everywhere out of habit, so I revisited my build instructions to build a current Emacs on Ubuntu and its derivatives. The good news is that thanks to some changes in the Emacs build, the build is as straightforward as it used to be prior to the combination of Ubuntu 16.10 and Emacs 25.1. In other words, there is no need to remember to switch off PIE as was necessary when building GNU Emacs 25.1 on Ubuntu 16.10.

Here’s a brief recap of the build steps so you don’t have to go back and click your way through my old posts.

First, if you haven’t enabled the ‘source code’ repository in Ubuntu’s software sources, do so now. If you don’t, you’ll run into the following error when installing the build dependencies for Emacs:

E: You must put some 'source' URIs in your sources.list

Assuming you have added the source code repositories to your software sources, execute the following commands. The first command installs the build tools, the second installs all the build dependencies for the stock Emacs build; those dependencies will give you a fully functioning GUI Emacs. If you need third party libraries for functionality that isn’t covered by the regular Ubuntu Emacs build dependencies, make sure you install those as well. I usually go with the stock configuration, so these are the two commands I need to run:

sudo apt install build-essential
sudo apt build-dep emacs25

On my fresh install of XUbuntu 17.04, the build-essential packages were already installed, so it may no longer be necessary to execute that step. However, it was necessary in the past, so I’m leaving it in as it makes sure you have the normal build setup.

You can install the build-deps for either the emacs or the emacs24 package instead of the emacs25 package I use in the example above. All three appear to pull in the same set of dependencies; trying to install all three doesn’t result in any additional packages being installed.

At this point, it’s time to download the GNU Emacs 25.2 tarball from your favourite GNU mirror, extract it to a suitable place and do the usual configure/make/make install dance. I prefer to install my home built binaries in a local subtree in my user directory, hence the $HOME/local prefix passed to configure:

./configure --prefix=$HOME/local
make && make install

At this point, we’re good to go:

timo-xubuntu-VirtualBox% emacs --version
GNU Emacs 25.2.1
Copyright (C) 2017 Free Software Foundation, Inc.
You may redistribute copies of GNU Emacs
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING.

The instructions above will also work for building Emacs 25.2 on older versions of Ubuntu, although you have to make sure that you pick the correct build-dep package to install the build dependencies first.

Why I don’t like getter and setter functions in C++, reason #314.15

This is a post I wrote several years ago that has been languishing in my drafts folder ever since; I’m no longer working on that particular codebase. That said, the problems caused by using Java-like getter and setter functions as the sole interface to an object have an even bigger impact these days, as they also affect move construction and move assignment. While I’m not opposed to getter and setter functions in C++ in general, I am opposed to using them as the only means of member access, especially in this particular context, where they had to be used to initialise class members that were complex objects themselves.

The old code I was working on at the time – and it was old, it had survived for over a decade with only minimal changes – only allowed users to alter the contents of some user-defined data types via getter and setter functions. Even the constructors did not take arguments that would have allowed single-step construction of an object. Imagine a class that looks something like this – not the actual implementation, but a pretty good approximation:

class MyFoo {
public:
  MyFoo clone();

  void Set(complex_object v);
  complex_object Get();

private:
  complex_object val;
};

Also imagine that complex_object follows a similar interface pattern, and that it in turn contains objects following the same pattern again. The code required to create an object of type MyFoo would look something like this:

  MyFoo newFoo;
  newFoo.Set(some_complex_object);

These classes, including the contained classes, don’t support C++-style copying via assignment operators and copy constructors. The copy constructors and assignment operators weren’t disabled either, which only added a little icing on the cake when debugging the code. As a result, the implementation of the copy constructors – actually, ‘clone()’ functions, because someone clearly didn’t trust the built-in C++ mechanisms and decided to reinvent the wheel – was littered with calls like this:

  MyFoo theClone;
  theClone.Set(Get());

Spotted the not-that-obvious problem?

For starters, assuming m_some_member is comparatively expensive to construct, the code ends up first default-constructing the member and then blatting over its contents via the Set() call. Do this double initialisation often enough and you’ll start noticing the performance impact, and so will your users.

Also, if you store this kind of object in a standard library container, you still have to copy its contents manually instead of making use of the existing standard library functionality for copying objects. Storing objects that don’t really follow C++ value semantics in a standard container is generally a really bad idea anyway, but as we all know, no bad idea ever goes unimplemented. Anyway, even on the surface, the following comparatively elegant code:

  std::copy(src.begin(), src.end(), dest.begin());

… turns into something really, really ugly like this, which should have given the implementer pause:

  SomeContainerType::iterator dest = destinationcontainer.begin();
  for (SomeContainerType::iterator i = somecontainer.begin();
       i != somecontainer.end();
       ++i, ++dest) {
    dest->Set(i->Get());
  }
Of course there is also the issue that the standard C++ library containers rely on being able to copy the objects they hold, due to their value semantics, and they don’t know about your special clone functions and accessors. Happy debugging if the compiler-generated copy constructor doesn’t do what you’d expect it to do.

Moral of the story – if you want halfway efficient copy construction, ensure that objects contained in other objects properly support copying, implementing copy constructors and assignment operators yourself if necessary. If you must write code like the above, at least don’t put the objects into standard library containers that expect value semantics. To prevent someone from putting them into standard library containers, you can either use my preferred solution of inheriting from something like boost::noncopyable or declare both the copy constructor and the assignment operator private.

Part of the problem described above is that the code mixed multiple incompatible idioms, which rarely works well and is never a good idea unless your idea of fun is debugging your way through the night.

So, while functions that access internal members of an object are generally a good idea as part of encapsulation, please make sure that getter and setter functions aren’t the only possible way to access internal objects. Don’t make your life harder by encapsulating implementation details away from yourself to the point where you end up with a very big gun pointed at your foot.

Tracking down why the iOS mail app couldn’t retrieve my email over an LTE connection

We all love the odd debugging story, so I finally sat down and wrote up how I debugged a configuration issue that got in the way of the iOS mail app’s ability to retrieve email while I was on the go.

tl;dr – iOS Mail uses IPv6 to access your email server when the server supports IPv6, and it doesn’t fall back to IPv4 if the IPv6 connection attempt fails. If it fails, you don’t get an error, but you don’t get any email either.
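You can reproduce the symptom from any machine on the same connection. Here’s a rough diagnostic sketch (my own, not anything Apple ships) that resolves a host separately per address family and attempts a TCP connection to each, which makes a broken-IPv6-but-working-IPv4 situation visible at a glance. The hostname is a placeholder and 993 is the usual IMAP-over-TLS port:

```python
import socket

def probe(host, port=993, timeout=5):
    """Attempt a TCP connection to host over IPv6 and IPv4 separately
    and report the outcome for each address family."""
    results = {}
    for family, label in ((socket.AF_INET6, 'IPv6'), (socket.AF_INET, 'IPv4')):
        try:
            # Resolve only for this address family.
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror:
            results[label] = 'no address'
            continue
        sockaddr = infos[0][4]
        try:
            sock = socket.socket(family, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            sock.connect(sockaddr)
            sock.close()
            results[label] = 'reachable'
        except OSError:
            results[label] = 'unreachable'
    return results

if __name__ == '__main__':
    print(probe('imap.example.org'))
```

A client that behaves like iOS Mail effectively only ever looks at the IPv6 entry; if that one says ‘unreachable’ while IPv4 says ‘reachable’, you get exactly the silent failure described above.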

The long story of why I sporadically couldn’t access my email from the iOS 10 Mail app

Somewhere around the time of upgrading my iPhone 6 to iOS 10, or possibly iOS 10.2, I lost the ability to check my email using the built-in iOS Mail app over an LTE connection. The reason I can’t nail down the exact point in time is that I used Spark for a little while on my phone. Spark is a very good email app and I like it a lot, but it turned out I’m not that much of an email power user on the go, and I didn’t really need Spark once Apple had added the main reason for my Spark usage to the built-in Mail app. In case you’re wondering, that’s the heuristic for determining which folder you want to move an email to, which allows both Spark and now Mail to suggest a usually correct destination folder when you want to move a message.

Read More

RTFM, or how to make unnecessary work for yourself editing inf-mongo

Turns out I made some unnecessary “work” for myself when I added support for command history to inf-mongo. As Mickey over at Mastering Emacs points out in a blog post, comint mode already comes with M-n and M-p mapped to comint-next-input and comint-previous-input, and of course they work in inf-mongo right out of the box. I still prefer using M-up and M-down, plus I learned a bit about sparse keymaps and general interaction with comint-mode. So from that perspective, no time was wasted, even though the work wasn’t strictly necessary.

But with Emacs being the box of wonders it is, it’s still fun to learn about features and new ways of doing things even after using it for a couple of decades.

There are more gems hidden in Mickey’s blog post, so if you’re using anything that is based on comint, I’d really recommend reading it.

Extending inf-mongo to support scrolling through command history

I’m spending a lot of time in the MongoDB shell at the moment, so of course I went to see if someone had built an Emacs mode to support the MongoDB shell. Google very quickly pointed me at endofunky’s inf-mongo mode, which implements a basic shell interaction mode with MongoDB using comint. We have a winner – well, almost. The mode does exactly what it says on the tin, but I wanted a little more, namely being able to scroll through my command history. Other REPL modes like CIDER have this functionality already, so it couldn’t be too hard to implement, could it?

Read More

The challenges of preserving digital content

A problem archivists have been bringing up for a while now is that with the majority of content going digital and the pace of change in storage mechanisms and formats, it’s becoming harder to preserve content even when it is not what would be considered old by the standards of other historic documents created by humanity.

Case in point – the efforts required to preserve even recent movies, as described in this article on IEEE Spectrum. As the article mentions, we’ve already lost access to 90% of US movies made during the silent era and about 50% of movies made before 1950. I suspect that the numbers for the European film industry might be even worse thanks to World War 2. Keep in mind that those are numbers for movies stored on a comparatively durable medium (and yes, I know that early nitrate film is about as flammable as it comes).

One of the most poignant quotes, regarding Pixar’s challenge when trying to re-render Finding Nemo in 3D nine years after its initial release:

The fact that the studio had lost access to its own film after less than a decade is a sobering commentary on the challenges of archiving computer-generated work.

Even consumer households will face the same issue sooner or later when it comes to preserving family photos, home movies and other digital content. Just looking around my office, there are terabytes of data floating around, a fair amount of it photographs and video. And of course this blog doesn’t have a dead tree version either, nor have I ever had an article published in a software development magazine. Not to mention that the ones I would have liked to publish in (C/C++ Users Journal and Dr Dobbs) are now dead and gone as well. Their paper products are hopefully still being preserved, but how long will we be able to read their digital archives?

If we had infinite space to store physical objects we could try to preserve the computers the content is stored on, but that’s not realistically possible either.

For me personally, I am trying to make sure that relatively recent content migrates with me when I update my computers and remains accessible. So far I’ve been lucky: my approximately 10 years of digital photos are all still usable. Even with that precaution in place, I will try to shoot more film again, but only after checking that my decade-old Minolta film scanner still works. Fortunately I also own a flatbed scanner that can scan film up to 5″x4″, and as long as it remains functional and software like Hamrick’s Vuescan supports it, I should be OK with a dual analog/digital strategy. Plus, places like Freestyle Photo seem to be carrying more reissued film these days than they did a few years ago.