Boost.Log, preventing the 'unhandled exception' in Windows 7 when attempting to log to the event log

I recently ran into a requirement for retrofitting a logging library onto an existing project. My first instinct was to throw Pantheios at it, as I've used it before and It Just Worked. Unfortunately, in this case we needed the ability to log to more than two event sinks, and that looked like it was getting a little awkward with Pantheios, which prompted me to look at Boost.Log.

After some digging through the documentation and the samples, I managed to get logging going to the three event sinks we needed. So far, so good, but every time the program started on Windows 7 without being run as administrator, it reported an unhandled exception while trying to initialise the simple_event_log backend. Curiously enough, the log messages still appeared in the event log, just with lots of unnecessary decoration.

The reason for this problem was that the registry key under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog that the application needs access to must both be present (if you're not administrator, you don't have the privileges to create it) and be readable and writable by the user running the application. Normally you'd have the installer create the key, as it tends to run with administrator privileges; the installer also needs to set the permissions on the created key to 'Full Control'. Once both the key and the permissions are set correctly, the backend registers fine without any unhandled exceptions.

Unfortunately, if the event log backend can't create the event log registry entry itself during its initialisation phase, it is also necessary to point the event log at the file that contains the event messages. To do this, the installer also needs to create a string value named "EventMessageFile" in the newly added application-specific key, pointing at the correct boost_log dll.
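As a sketch, the installer could create entries along these lines. Note that the source name MyApp, the DLL path and the TypesSupported value are illustrative assumptions, not taken from the actual setup; the exact boost_log DLL name depends on your Boost version and build:

```reg
Windows Registry Editor Version 5.00

; Application-specific event source key; must be created by the installer
; and granted 'Full Control' for the account running the application.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\MyApp]
; Points the event log at the DLL containing the event message definitions.
"EventMessageFile"="C:\\path\\to\\boost_log.dll"
; 7 = EVENTLOG_ERROR_TYPE | EVENTLOG_WARNING_TYPE | EVENTLOG_INFORMATION_TYPE
"TypesSupported"=dword:00000007
```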

Once the above entries are in the registry, logging to the event log using the simple_event_log backend Just Works, too.

A couple of noteworthy links

It’s a bit of a link roundup from the past couple of months. Most of you have probably seen these already, as I'd think you're reading the same blogs.

C++ links

VS2010 SP1 Beta: What’s in it for C++ developers. While I’m not going to chance installing the beta on my main developer workstation, it looks like there are some interesting features in the service pack. I hope that the IDE stability has also been improved.

Grr… My VC++ Project Is Building Slower in VS2010. What Do I Do Now? (A Step by Step Guide): A good guide showing what to look for when VS2010 builds appear to be slower than VS2008 builds of the same project.


Making CamelCase readable with glasses-mode – I’m not a big fan of CamelCase, but I must say this minor mode makes a big difference in readability.

General programming links

Give your programmers professional tools – just too true. I'm extremely lucky that a lot of the companies I've worked for, including my current employer, do recognise that your average developer has different needs from someone working with, say, spreadsheets all day long. But some of the hand-me-down boxes with small, slow harddrives and slow processors that were inflicted on some dev teams I worked in were definitely putting the brakes on productivity.

Sometimes, std::set just doesn’t cut it from a performance point of view

A piece of code I recently worked with required data structures that hold unique, sorted data elements. The requirement for the data being both sorted and unique came from it being fed into std::set_intersection(), so using a std::set seemed an obvious way of fulfilling these requirements. The code did fulfil all the requirements, but I found the performance somewhat wanting in this particular implementation (Visual Studio 2008 with the standard library implementation shipped by Microsoft). The main problem was that this code is extremely critical to the performance of the application, and it simply wasn't fast enough at this point.

Pointing the profiler at the code very quickly suggested that the cost of destroying the std::set was rather out of proportion to the cost of the rest of the code. To make matters worse, the std::set in question was used as a temporary accumulator and was thus created and destroyed very often. The cost lay in the destruction of the elements of the std::set, so the obvious technique of keeping the std::set around but clear()ing its contents did not yield any improvement in the overall runtime. That is mainly because red-black trees (the underlying implementation of this, and most, std::sets) are very expensive to destroy: you have to traverse the whole tree and delete it node by node, even if the data held in the nodes is a POD. Clearly, a different approach was needed.

A post elsewhere suggested that I wasn't the first person to run into this issue, nor did it appear to be specific to my platform. The post also suggested a technique that should work for me. Basically, std::set provides three guarantees: the contents are sorted, the keys are unique, and lookup for a given key is O(log n). As I mentioned in the introduction, I only really needed two of the three guarantees, namely unique elements in sorted order; std::set_intersection() takes input iterators as its parameters, so despite its name it is not tied to a std::set, even though that's the obvious initial choice of data structure.

In this particular case – a container accumulating a varying number of PODs that eventually get fed into std::set_intersection – I could make use of the fact that at the point of accumulating the data, I basically didn't care whether the data was sorted and unique. As long as I could ensure the data in the container fulfilled both criteria before calling std::set_intersection, I would be fine.

The simplest and least CPU-expensive way I was able to come up with was to accumulate the data in a std::vector of PODs, which is cheap to destroy. Much as described in the thread mentioned above, I then took care of the sorted and unique requirements just before feeding the data into std::set_intersection:

std::vector<uint32_t> accumulator;

... accumulate lots of data ...

std::sort(accumulator.begin(), accumulator.end());
accumulator.erase(std::unique(accumulator.begin(), accumulator.end()),
                  accumulator.end());

Note the call to accumulator.erase() – std::unique doesn't actually remove the 'superfluous' elements from the std::vector, it just returns an iterator to the new end of the container. In the code I was working on I couldn't make use of this feature, so I had to actually shrink the std::vector to contain only the elements of interest. Nevertheless, changing over from a real std::set to the 'fake' one resulted in a speed increase of about 2x-3x, which was very welcome.

Basically, if you need a std::set, you need a std::set, and you'll have to keep its costs in mind when the container in question only has a short lifetime. I don't advocate ripping out all the sets in your application and replacing them with sorted, uniquified std::vectors. However, in some cases like the one described above, when you don't need the full functionality of a std::set, a std::vector can offer a very reasonable alternative, especially once you've identified the std::set as a bottleneck. Yes, you could probably also get O(log n) lookup speed in the sorted and uniquified vector by using std::binary_search, but that's getting a little too close to declaring "the only C++ data structure I'm familiar with is a std::vector". Using the appropriate data structure (like a std::set) communicates additional information to the readers of your code; workarounds like the one above are not as obvious and tend to obscure your code somewhat.

The joy of using outdated C++ compiler versions

Thud, thud, thud…

The sound of the developer’s head banging on the desk late at night.

What happened? Well, I had a requirement to make use of some smart pointers to handle a somewhat complicated resource management issue that was mostly being ignored in the current implementation, mainly on the grounds of it being slightly too complicated to handle successfully using manual pointer management. The result – not entirely unexpected – was a not-so-nice memory leak.

No smart pointer implementation was found lurking behind the sofa, so I bravely went where other people had gone before (and failed): I ignored the status of the Sun CC support in the Boost library and downloaded the latest version (1.32.0 at the time of originally writing this). The compiler I'm using is marked as 'horribly broken' in the context of Boost, but hey, I only wanted to use smart pointers, so it can't be that bad, right?

First attempts with a newer compiler (WS8/5.5) proved encouraging. The smart_ptr tests compiled, but a lot of them failed. After an extended printf debugging session, it appeared that the temporaries generated by the compiler were destroyed rather later than both the writers of the C++ standard and the Boost developers expected. Employing some advanced google skillz soon brought to light that, by default, the SUN compiler destroys temporaries not at the end of the full expression as the standard requires, but when it encounters the end of the enclosing scope.

Great. In fact this shouldn't have come as that much of a surprise, as SUN makes a big song and dance about the compiler's backward compatibility – they state that code which compiled on previous versions of the compiler will definitely still compile on newer versions. I found this to be true almost all the time. Unfortunately, in this particular case the feature turned into a stumbling block, as the backward-compatible behaviour pretty much sabotaged the expected behaviour.

Fortunately the cure is at hand – the compiler supports a command line option (-features=tmplife) that makes it behave like every other modern C++ compiler on the face of the earth. And hey presto, the tests suddenly pass. Well, obviously those that are supposed to pass!

Unfortunately, the current compiler used in the production environment is 5.3/WS6.2, not 5.5/WS8. At least it also supports the tmplife feature, so I'm obviously only a stone's throw away from getting working smart pointers, right?

Wrong. The smart-pointered code did compile, but did it link? Of course not, that would be too easy. So back to the tests, but this time armed with the old compiler. The older SUN compilers use a template instantiation database (the infamous SunWS_cache directory) to store the object code resulting from the compiler instantiating templates. For some reason, the compiler or linker failed to pull in the necessary object code for the smart pointer externals and all that. Grrr. Closer inspection of the compiler's man page suggested that the compiler can be convinced to put this information into the object file instead (using -instances=static instead of the default behaviour). This behaviour is the default on the 5.5 compiler, but optional in the 5.3 compiler…

So finally, the smart_ptr tests complete successfully using the Sun 5.3 C++ compiler. And the application – with a bit more tweaking – is leaking considerably less memory. The joy of small victories.

Playing with SunStudio 11

This is by no means a review of SunStudio 11, even though I've used it for production software. There's an awful lot of power in the IDE, but I'm one of those old-skool guys who's spent a lot of time learning and customising XEmacs, and it's still the editor I'm most comfortable with, so why change? For that reason I've only ever used the IDE for debugging, for which it seems decent enough. As it's written in Java, as so many IDEs are these days (cue Eclipse), it's not exactly the fastest IDE I've ever worked with, but once it's loaded up and running it feels responsive enough.

The compiler, however, is a big step forward from just about any of the older SUN compilers I've used. It still has some quirks (a couple of them show up in Boost), but there is a comparatively small number of them, so for most applications it now really looks like a proper standard C++ compiler, which is a big improvement over the previous efforts, and it's now much less likely that you'll stumble across one.

Overall I’d say that whichever compiler version you’re currently using, you should probably upgrade to this one. At least if you’re interested in writing reasonably modern C++, that is.