Quick tip if you see “bad DLL or entry point ‘msobj80.dll’” when building software with VS2008

Try stopping mspdbsrv.exe (the process that generates the .pdb files during a build) if it is still running. My understanding is that it’s supposed to shut down at the end of the compilation, but it seems it can turn into a zombie process, and when that happens, you can get the above error when linking your binaries.

Anyway, I just ran into this issue and stopping the process via the Task Manager resolved it for me.
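
If Task Manager isn’t to hand, killing the process from a command prompt should do the same job (taskkill ships with Windows XP Professional and later):

taskkill /F /IM mspdbsrv.exe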

On combining #import and /MP in C++ builds with VS2010

I’m currently busy porting a large native C++ project from VS2008 to VS2010, and one of the issues I kept running into was build times. The VS2008 build uses a distributed build system; unfortunately the vendor doesn’t support VS2010 yet, so I couldn’t use the same infrastructure. In order to get a decent build speed, I started exploring MSBuild’s ability to build projects in parallel (which is fairly similar to VS2008’s) and the C++ compiler’s ability to make use of multiple processors/cores, aka the /MP switch.
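
To give a rough idea of the two levels of parallelism involved – project-level via MSBuild and file-level via the compiler – here is a minimal command-line sketch (the solution and file names are placeholders):

rem Build the projects in a solution in parallel, one MSBuild node per core:
msbuild MySolution.sln /m

rem Compile several translation units in parallel within one project
rem (the /MP switch, also exposed as "Multi-processor Compilation" under
rem C/C++ -> General in the VS2010 project settings):
cl /MP /c a.cpp b.cpp c.cpp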


Using CEDET-1.0 pre7 with Emacs 23.2

It’s been mentioned in several places that GNU Emacs versions sometime after 23.1.50 come with an integrated version of CEDET. While I think that’s a superb idea, it unfortunately managed to break my setup, which relies on a common set of Emacs Lisp files that I keep under version control and distribute across the machines I work on. Those machines run different GNU-based Emacsen (pure GNU, Emacs/W32, Carbon Emacs, etc.), so I can’t rely on the default CEDET. Unfortunately, when I got a new machine and put GNU Emacs 23.2 on it, my carefully crafted (OK, I’m exaggerating) .emacs wouldn’t play ball because the built-in CEDET conflicted with the pre7 I had already installed on that machine.

I didn’t want to make extensive modifications to my .emacs, but a little time spent on Google brought up a post by Marco Bardelli on the CEDET-devel mailing list with a short code snippet that removes the built-in CEDET from the load-path. After putting this into my .emacs, my pre7 config is working again.

For those in a similar quandary, here is the snippet in all its glory:

;; Remove the built-in CEDET from the load-path so that the separately
;; installed version (here: 1.0pre7) wins. This assumes a version string
;; like "23.2.1", where dropping the last two characters yields the
;; directory name "23.2".
(setq load-path
      (remove (concat "/usr/share/emacs/"
                      (substring emacs-version 0 -2) "/lisp/cedet")
              load-path))

Welcome back to the new blog, almost the same as the old blog

The move from the UK to the other side of the Atlantic is almost complete; I’m just waiting for my household items – and more importantly, my computer books etc – to turn up. So it’s time to start blogging again in the next few weeks. Due to some server trouble in the UK, combined with the fact that I do like Serendipity as a blogging system but was never 100% happy with it, I’ve switched to using WordPress on a server here in the US. The old blog will stay up, at least as long as the server stays put, but I won’t add any new content to it.

Enough meta-blogging, though. I should be back to the usual 1–2 posts per month soon, so if you would kindly subscribe to the RSS feed, you’ll see when all of this is up and running again.

The homebuilt NAS/home server, revisited

This is a reblog of my “building a home NAS server” series from my old blog. The server still exists and still works, but I’m about to embark on an overhaul, so I wanted to consolidate all the articles on the same blog.

I’ve blogged about building my own NAS/home server before; see here, here, here and here.

After a few months, I think it might be time for an interim update.

In its original incarnation, the server wasn’t as stable as it should have been, given my previous experience with FreeBSD. For some reason it would crash every few weeks and sometimes even hang on reboot. Not good, especially as it happened a few times while I wasn’t home. I guess I should have heeded the warning about the ZFS integration being experimental… Things got worse when I added a wireless card and retired my access point. Around that point I got fed up enough to go back and start building an OpenSolaris VM to try out a mail server setup similar to the one I was running on FreeBSD.

Before I got anywhere with this, FreeBSD 8.0 came out, so I upgraded. ZFS had been promoted from experimental status, the wireless stack had been overhauled, and so on. The stability problems disappeared and the machine has been utterly reliable since then. Where before, using Time Machine to back up my MacBook via the wireless network stood a good 50% chance of crashing the server, it now “just works”. This is where I wanted to be, and I’ve now got there. Performance also seems to have improved – copying large files from the server to my Windows 7 machine now sees a reliable 78 MB/s over my Gigabit network.

I’ve still got a couple of small changes I want to make to the machine – for example, I’ve got 4GB of RAM that I want to put in. This should enable ZFS readahead, which should give me further performance improvements. I also plan to add two more fans to blow cold air over the hard drives to keep them happy and working for longer. Edit: Actually it didn’t, as 4GB of RAM on the mainboard results in slightly less than 4GB available to the OS. I did enable the readahead manually, though.
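
For the curious, enabling the readahead manually on FreeBSD 8 amounts to a loader tunable along these lines – a sketch from memory, so double-check it against your own setup:

# /boot/loader.conf
# ZFS disables prefetch (readahead) when it sees less than 4GB of
# usable RAM; setting the tunable to 0 force-enables it.
vfs.zfs.prefetch_disable="0"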

If I were to build another home server, I would probably get a different motherboard. Not that there is anything particularly wrong with the one in the machine, but its CPU fan speed control only goes down to 70%, so it’s no wonder the CPU fan is noisier than it should be.

Was it worth it? Overall I’d say yes, although I probably should have stuck to tried and tested technology (either FreeBSD with its built-in RAID5 or OpenSolaris with ZFS). Dithering between the two caused unnecessary problems at the beginning and pushed up the cost. Next time I’d probably set up the server on OpenSolaris and run the mail server on FreeBSD in a VM on top of it. Given that the current configuration is working, though, I’ll leave it alone for the time being.

A couple of useful Emacs modes

highlight-changes-mode – as the name implies, it highlights changes you make to a file. I find it useful for the typical scenario of checking out a file, making a couple of small changes and then having to diff it to work out what actually changed. As mentioned over at EmacsWiki, it doesn’t play too nicely with font-locking, but I’ll try out some of the suggestions in the “Taming Highlight-Changes-Mode” section on this page.

nxml-mode – my preferred mode for editing XML. Of course it would be better if I could be bothered to create schemas for some of the files I’m editing, but even without them it does a pretty good job. As it parses the XML as you write it, it is very helpful when it comes to highlighting mismatched tags or auto-completing closing tags.

ido-mode – I’ve only recently started to use it, and I’m still trying to work out whether it is useful enough for me or whether the improved file-finding capability bothers me more than it helps. Yes, I know it can do a lot more, but so far I’m only using the improved file finding and buffer switching. I really rate the buffer switching, which is the main reason I leave it turned on.

Not really a mode, but I like using bm.el for visible bookmarks. I don’t use bookmarks that often, but the package is extremely useful when I do need them.
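
For completeness, here is roughly what enabling these might look like in a .emacs – a sketch only; the key bindings are my own choice and bm.el is assumed to be on the load-path already:

;; highlight-changes-mode is buffer-local, so turn it on via a hook
;; for the modes where you want it (text-mode here, as an example):
(add-hook 'text-mode-hook (lambda () (highlight-changes-mode 1)))
;; Use nxml-mode for XML files.
(add-to-list 'auto-mode-alist '("\\.xml\\'" . nxml-mode))
;; Improved buffer switching and file finding.
(ido-mode 1)
;; Visible bookmarks via bm.el.
(require 'bm)
(global-set-key (kbd "<f2>") 'bm-toggle)
(global-set-key (kbd "<S-f2>") 'bm-next)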

Some clarifications regarding last week’s anti-VC6 rant

This post started out as a comment on Len Holgate’s post referencing my anti-VC6 rant. The comment got a little out of hand size-wise, so I’ve decided to turn it into a separate blog post. I think I mixed together a couple of issues that should have been separated better but weren’t – after all, my blog post was more of a rant.

First, if your client or employer is maintaining an existing system and the system is in maintenance mode only – we’re talking about a bug fix or a small enhancement here and there – then it doesn’t make sense to upgrade the compiler. That’s what I was referring to as a system that is “on life support”. Yes, it goes against the grain for me personally as a software engineer who likes to improve software, but the effort spent on making the transition does not make sense from a business perspective.

What I do take issue with is when you are developing a new system, or are working on a large refactor of an existing system that is in “embrace and extend” mode, and for whatever reason the client or employer decrees that it shall be written using VC6. That’s where the penalties come in, and that is where the technical debt builds up – at the start of a new project, or at the exact point in time when you should be addressing the debt instead of adding to it.

The understanding of C++ and the use of its multi-paradigm nature have changed since VC6 was released; we have both new programming techniques and new libraries that (should) improve the quality of the code, its expressiveness and programmer productivity. The prime example of these libraries, and the one I was thinking of when writing the rant, is of course Boost. The earliest MS compiler they test against in 1.40 is VC7.1, aka VS2003, which is certainly a big improvement over VC6.

Yes, VC6 is likely to create smaller executables and build them faster. C++ compilers certainly are not getting any faster, and long compile/link times have been a problem on projects I have worked on. Shorter build times and especially smaller executables can be a benefit depending on your particular use case. A lot of the projects I worked on in the recent past are maths-heavy and the calculations are performance-critical. For these projects, a compiler whose optimizer can squeeze a 5% performance improvement out of the existing code on a modern CPU, at the expense of 20% larger code, is a no-brainer. In at least one case it was cheaper to buy the Intel compiler to get better performance instead of putting more engineering time into performance improvements.

Yes, developers like shiny new tools, and yes, I’ve worked with developers who considered GCC’s CVS HEAD the only compiler recent enough to complement their awesomeness. This is not something I generally agree with, although I did update my own copy of Visual Studio from 2003 to 2008 (yes, I did skip 2005) when that version came out, simply because it was so much better than its predecessors.

I still think that by insisting on the usage of tools that are positively ancient, programmers get needlessly hobbled, and it is part of our job as programmers who care about what we do to educate the people who make these decisions as to why those decisions aren’t necessarily good from an engineering point of view. I don’t think any of the Java programmers I work with would put up with having to work with Java 1.2 and forgo the improvements both in the language and in the available libraries, yet C++ programmers are regularly asked to do exactly that.