Halfway through GoingNative 2012

It’s almost time to go back for the second day, but before I do I’d like to suggest that if you haven’t had a chance to attend in person or watch the livecast, see if you can find the videos online. My understanding is that they should be available – I’m writing this on my phone so I can’t check at the moment, but I’ll have a look later.

Update: While the GoingNative site has links to the sessions, I don’t seem to be able to find links to the session videos themselves. I thought they were recorded and not just streamed, but I might be wrong. Yeah, what else is new :).

Useful collection of Qt debug visualizers for Visual Studio

I had to reinstall VS2010 at work and, because I clearly didn’t think this all the way through, forgot to save my autoexp.dat file before removing the old installation. Of course I didn’t realise what had happened until I had to dig deeper into some Qt GUI code that wasn’t quite working as expected and was greeted by the raw data instead of nicely formatted values.

Fortunately a quick Google search led me to this page – Human Machine Teaming Lab | Knowledge / Qt – which contains a very comprehensive set of visualisers. I’d highly recommend them if you’re doing any sort of work with the Qt libraries. Just keep in mind that these visualisers target Visual Studio 2008 and 2010, so they’re anything but guaranteed to work with newer versions.
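For the curious, the entries in that collection go into the [Visualizer] section of autoexp.dat; from memory, a minimal sketch for Qt 4’s QString looks something like this (the exact member layout depends on the Qt version, so treat it as an illustration rather than a copy-and-paste recipe):

QString {
    preview([$e.d->data,su])
}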

Visual Studio 2010 SP1 has been released

For those who are using Visual Studio 2010, the service pack has now been officially released:

Visual Studio 2010 Service Pack 1 General Availability – Visual C++ Team Blog – Site Home – MSDN Blogs

Edit: The download link doesn’t seem to work for me yet; given that it only went to General Availability today, it might be worth checking back a little later.

Edit again – we have a general availability download link: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=75568aa6-8107-475d-948a-ef22627e57a5&displaylang=en

If your VS2010 C++ build is constantly rebuilding a project that hasn’t changed…

Check if you’re seeing the following output in the build pane:


InitializeBuildStatus:
  Creating ".unsuccessfulbuild" because "AlwaysCreate" was specified.

I’ve just fixed a bunch of these in one of our solutions here, and all of them were caused by one of two issues:

  • The project file referenced files that were not present in the source tree
  • A custom build step was supposed to generate a file but either didn’t, or the file ended up in the wrong place

In order to find out if there are missing files that trigger the perma-rebuild, you’ll also have to enable Visual Studio’s debug output as described in this stackoverflow answer.

How to view undecorated DLL-exported C++ symbols in Visual Studio 2010

Yes, it’s one of those “note to self” posts, but I keep forgetting how to do it.

As the first step, run dumpbin /EXPORTS on the DLL in question and redirect the output into a file, because the utility that unmangles the names (undname.exe) doesn’t appear to be able to take piped input via stdin. Then run undname <filename>, with <filename> being the file that contains the exported symbols.
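In other words, something along these lines from a Visual Studio command prompt – the DLL and file names are just placeholders, and the second redirect is only needed if you want to keep the readable output around:

dumpbin /EXPORTS mylibrary.dll > exports.txt
undname exports.txt > exports_undecorated.txt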

At least that way the symbols become mostly readable.

Boost.Log, preventing the ‘unhandled exception’ in Windows 7 when attempting to log to the event log

I recently ran into a requirement to retrofit a logging library onto an existing project. My first instinct was to throw Pantheios at it as I’ve used it before and It Just Worked. Unfortunately, in this case we needed the ability to log to more than two event sinks, and it looked like that was getting a little awkward with Pantheios, which prompted me to look at Boost.Log.

After some digging through the documentation and the samples, I managed to get logging going to the three event sinks we needed. So far, so good, but on Windows 7 the program reported an unhandled exception every time it started up and tried to initialise the simple_event_log backend without being run as administrator. Curiously enough, the log messages still appeared in the event log, just with lots of unnecessary decoration.

The reason for this problem was that the registry key under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog that the application needs access to has to exist in the first place (if you’re not an administrator, you don’t have the privileges to create it), and the user running the application needs to be able to both read from and write to it. Normally you’d have the installer create the key, as installers tend to run with administrator privileges; the installer also needs to set the permissions on the created key to ‘Full Control’. Once both the key and the permissions are set up correctly, the backend registers fine without any unhandled exceptions.

Unfortunately, if the event log backend can’t create the event log registry entry itself during its initialisation phase, it is also necessary to point the event log at the file that contains the event messages. To do this, the installer also needs to create a string value named “EventMessageFile” in the newly added application-specific key, pointing at the correct boost_log DLL.
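For illustration, the end result the installer has to produce looks roughly like the following .reg sketch. The ‘Application’ log, the ‘MyApp’ source name and the DLL path are made-up placeholders, and granting ‘Full Control’ on the key still has to happen separately, since .reg files can’t set permissions:

Windows Registry Editor Version 5.00

; Hypothetical application-specific event log source; adjust the log name,
; the source name and the path to the boost_log DLL your application
; actually ships with.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\MyApp]
"EventMessageFile"="C:\\Program Files\\MyApp\\boost_log.dll"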

Once the above entries are in the registry, logging to the event log using the simple_event_log backend Just Works, too.

Sometimes, std::set just doesn’t cut it from a performance point of view

A piece of code I recently worked with required data structures that hold unique, sorted data elements. The requirement for the data being both sorted and unique came from it being fed into std::set_intersection(), so using a std::set seemed an obvious way of fulfilling these requirements. The code did fulfill all the requirements, but I found the performance somewhat wanting in this particular implementation (Visual Studio 2008 with the standard library implementation shipped by Microsoft). The main problem was that this code is extremely critical to the performance of the application, and it simply wasn’t fast enough.

Pointing the profiler at the code very quickly suggested that the cost of destroying the std::set was rather out of proportion compared to the cost of the rest of the code. To make matters worse, the std::set in question was used as a temporary accumulator and was thus created and destroyed very often. The cost was in the destruction of the elements in the std::set, so the obvious technique – keeping the std::set around but clear()ing its contents – did not yield any improvement in the overall runtime either: red-black trees (the underlying implementation of this – and most – std::set implementations) are very expensive to destroy, as you have to traverse the whole tree and delete it node by node, even if the data held in the nodes is a POD. Clearly, a different approach was needed.

A post on gamedev.net suggested that I wasn’t the first person to run into this issue, nor did it appear to be specific to my platform. The post also suggested a technique that should work for me. Basically, std::set provides three guarantees: the contents are sorted, the keys are unique, and lookups for a given key are O(log n). As I mentioned in the introduction, I only really needed two of those three guarantees, namely unique elements in sorted order; std::set_intersection() expects input iterators as its parameters, so despite its name it is not tied to a std::set, even though that’s the obvious initial choice of data structure.

In this particular case – a container accumulating a varying number of PODs that eventually get fed into std::set_intersection – I could make use of the fact that at the point of accumulating the data, I basically didn’t care whether it was sorted and unique. As long as I could ensure that the data in the container fulfilled both criteria before calling std::set_intersection, I would be fine.

The simplest and least CPU-expensive approach I was able to come up with was to accumulate the data in a std::vector of PODs, which is cheap to destroy – much as described in the gamedev.net thread above. I then took care of the sorted and unique requirements just before feeding the data into std::set_intersection:

std::vector<uint32_t> accumulator;

... accumulate lots of data ...

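// Sort first, then let std::unique shuffle the duplicates to the back of
// the range and erase() chop them off, leaving a sorted range of unique values.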
std::sort(accumulator.begin(), accumulator.end());
accumulator.erase(std::unique(accumulator.begin(),
                              accumulator.end()),
                  accumulator.end());

Note the call to accumulator.erase() – std::unique doesn’t actually remove the ‘superfluous’ elements from the std::vector, it just leaves them at the back of the range (with unspecified values) and returns an iterator to the new logical end. In the code I was using I couldn’t make use of this, so I had to actually shrink the std::vector to contain only the elements of interest. Nevertheless, changing over from a real std::set to the ‘fake’ one resulted in a speed increase of about 2x-3x, which was very welcome.
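For completeness, here’s a minimal, self-contained sketch of the whole pattern with two made-up accumulators (C++11 initialiser lists used purely for brevity; the data and names are just for illustration):

#include <algorithm>  // std::sort, std::unique, std::set_intersection
#include <cstdint>    // std::uint32_t
#include <iterator>   // std::back_inserter
#include <vector>

// Sort a vector in place and drop duplicates so it satisfies
// std::set_intersection's "sorted, unique" precondition.
void sort_and_uniquify(std::vector<std::uint32_t>& v)
{
    std::sort(v.begin(), v.end());
    v.erase(std::unique(v.begin(), v.end()), v.end());
}

int main()
{
    // Two hypothetical accumulators filled with unsorted, partially duplicated PODs.
    std::vector<std::uint32_t> a = { 5, 3, 9, 3, 1 };
    std::vector<std::uint32_t> b = { 9, 2, 5, 5, 7 };

    sort_and_uniquify(a);
    sort_and_uniquify(b);

    // std::set_intersection only requires sorted input ranges, not a particular
    // container, so the 'fake set' works just as well as a real std::set.
    std::vector<std::uint32_t> common;
    std::set_intersection(a.begin(), a.end(),
                          b.begin(), b.end(),
                          std::back_inserter(common)); // common ends up as { 5, 9 }
    return 0;
}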

Basically, if you need a std::set, you need a std::set, and you’ll just have to keep its performance characteristics in mind when the container in question only has a short lifetime. I don’t advocate ripping out all the sets in your application and replacing them with sorted, uniquified std::vectors. However, in some cases like the one described above, when you don’t need the full functionality of a std::set, a std::vector can offer a very reasonable alternative, especially once you’ve identified the std::set as a bottleneck. Yes, you could probably also get O(log n) lookups in the sorted and uniquified vector by using std::binary_search, but that’s getting a little too close to declaring that “the only C++ data structure I’m familiar with is a std::vector”. Using the appropriate data structure (like a std::set) communicates additional information to the readers of your code; workarounds like the one above are not as obvious and tend to obscure your code somewhat.