How to make a self-signed SSL certificate work with Windows RT’s Mail App on a Microsoft Surface RT

Long title, I know…

I was trying to get Windows RT’s Mail app to access the email on my own server. The server uses IMAPS with a self-signed certificate, as I only want SSL for encryption and don’t really need it for authentication purposes as well. As long as it is the correct self-signed certificate, I’m happy.

The Mail app, however, rejects certificates that weren’t signed by a trusted authority and doesn’t offer an obvious exception mechanism (like Thunderbird or Apple Mail do) that circumvents the need for a trusted certificate. The original Mail app that came with my Surface also displayed only a very cryptic error message, but the latest update from earlier this week correctly suggests that one needs to add the self-signed certificate to the certificate store in order to get Mail to recognize it.

In my case the saving grace is that I use the same cert to secure the webmail access, so IE can easily access the certificate. However, as Joe User, you can’t add another certificate to the certificate store – you have to be Administrator to add a certificate, and I initially couldn’t find an obvious way to run IE as Administrator.

The trick turns out to be that you have to run IE from the desktop (yes, the Surface RT has a standard Windows desktop, too). The easiest way to get there is to run IE from the ‘tile’ UI, pull up the bottom menu and select ‘view on desktop’ from the settings icon menu. Once you are on the desktop, right-click (two-finger click on the ZX81 keyboard cover) on the IE icon. Bummer, no ‘Run as Administrator’ menu entry. However, there is an entry in this menu that says ‘Internet Explorer’. Right click/two finger click on that one and you get ‘Run as Administrator’. I fired up IE as Administrator and the buttons to install the certificate were no longer greyed out.

At this point there was one last hurdle to climb over – if you let IE determine where the certificate is saved, Mail still does not recognize the certificate. You have to install it in ‘Trusted Root Certification Authorities’. And now, I can finally read my email on my Surface RT. Just be aware of the security implications of doing so as your certificate can now act as a root certificate for other certificates. Of course, you could simply get a ‘real’ certificate and not have that sort of security issue.

The above worked for me because I use the same certificate for two purposes. If you can’t simply access the certificate via a browser, you’ll have to download the certificate onto your machine as a file and then use certmgr to import it. Again, you’ll most likely have to run certmgr as Administrator as it won’t allow file operations otherwise.

I don’t want to see another “using namespace xxx;” in a header file ever again

There, I’ve said it. No tiptoeing around.

As a senior developer/team lead, I get involved in hiring new team members and in certain cases also help out other teams with interviewing people. As part of the interview process, candidates are usually asked to write code, so I review a lot of code submissions. One trend I noticed with recent C++ code submissions is that the first line I encounter in *any* header file is

using namespace std;

If we happen to review the code submission using our code review system (a practice I’d highly recommend), the above line is usually followed by a comment along the lines of “Timo is not going to like this”. And they’re right, I don’t.

So, why am I convinced that this is really, really bad practice despite a ton of (maybe not very good) C++ textbooks containing the above piece of code verbatim? Let’s for a moment review what the above statement does. Basically, it pulls the whole contents of the namespace “std” (or any other namespace that the author used the using directive for) into the current namespace with no exceptions. And I mean *anything*, not just the one or two classes/typedefs/templates you’re trying to use. Now, the reason that namespaces were introduced in the first place was to improve modularization and reduce the chance of naming conflicts. They basically allow you to write the following code and ensure that the compiler picks the correct implementations:

std::vector<std::string> names;
my_cool_reimplementation::vector<our_internal_stuff::string> othernames;

Now assume that we’re trying to reduce the amount of typing and put the above using statement in the code (or worse, use both namespaces) and write the following:

vector<string> names;
vector<our_internal_stuff::string> othernames;

If the author of the code is very lucky, the correct implementation of vector will be picked, at least initially. Then some time down the road, you’ll encounter strange compiler errors. Good luck finding those – I’ve been in situations where it took days to track down this sort of problem. That’s a heck of a time waster you just bought for saving yourself typing five characters.

Also, if you’re putting the using directive into a header file, you’re aggravating the problem, because the conflicts you will run into sooner or later will show up in a module far, far away for no apparent reason – until you find out that three layers down, one include file happens to include the file that contains the using directive, which has suddenly polluted whichever namespace that file was in with the whole contents of namespace std.

So why is using namespace std; found in so many textbooks? My theory is that it reduces visual clutter and also helps with the layout of the book. In a dead-tree book, you only have a very limited amount of space, so you need to make the most of it. Code examples are generally fairly trivial – well, they have to be if you want to get the point across. Neither of these really holds true when it comes to production code in the age of 24″ monitors and text editors that can handle more than 60-80 characters per line (try it, it works!). So, don’t do it.

So, what can you do if you absolutely, positively have to use a using directive or declaration in a header file? There are ways to reduce the impact of having to do so – you can use one of them, or several in combination.

First, you can simply use a typedef. I would suggest that this is good practice anyway, even if I don’t always follow my own advice. Using a typedef actually has two benefits – it makes a type name more readable and it documents the author’s intent if a well-chosen name has been used. Compare the following declarations:

std::map<std::string, long> clientLocations;

typedef std::map<std::string, long> ClientNameToZip;
ClientNameToZip clientLocations;

The second declaration – even though it spans two lines – is much more self-documenting than the first one, and it gets rid of the namespace fuzz at the same time.

Another option is to limit the scope of the using statement in two ways. First, only “use” the names of the symbols you actually need, for example:

using std::string;

Again, just throwing this statement into a header file is almost as bad an idea as putting in “using namespace”, so the second way is to limit its visibility by making use of C++ scopes, ensuring that your using declaration really only affects the parts of the code that need to see it in the first place. For example, you might try to limit the scope to a class declaration. Say we have a namespace bar that contains a struct zzz:

namespace bar {
  struct zzz {};
}

Unfortunately, what you then can’t do is simply pull the namespace into a class definition in a header file:

class foo {
  using namespace bar;  // error: using directives aren't allowed at class scope

  zzz m_snooze;         // doesn't pull in bar::zzz, would be nice though
};

What you can do, if the design of your code allows for it, is to pull namespace ‘bar’ into the namespace that contains class foo, like so:

namespace yxxzzz {
  using namespace bar;

  class foo {
    zzz m_snooze;  // finds bar::zzz via the using directive
  };
}

This style has the advantage that you’re “only” cluttering a single namespace with your using directive rather than the whole global namespace. At least this way, you are limiting possible naming conflicts to a single namespace. But it’s still a headache if you are using the, err, using directive in a header file this way.

Another way is to limit the scope of a using directive to a single function, for example like this:

void temp() {
  using namespace std;
  string test = "fooBar";
}

This works in both header and implementation files.

Actually, I would write the above code like this, because there is no good reason (but plenty of bad ones) to pull in the whole std namespace if you only need part of it:

void temp() {
  using std::string;
  string test = "fooBar";
}

In either case, you are restricting the visibility of a using directive to the part of the code that requires it rather than throwing it right into everybody’s way. The bigger your projects get, the more important it is to ensure proper modularisation and minimise unintended, harmful side effects.

That’s another warranty voided, then

Last night I did something I was adamant I wasn’t going to do, namely rooting my Android phone and installing CyanogenMod on it. Normally I don’t like messing with (smart)phones – they’re tools in the pipe wrench sense to me, and they should hopefully not require much in the way of care & feeding apart from charging and the odd app or OS update. Of course, the odd OS update can already be a problem, as no official updates have been available for this phone (a Motorola Droid) for a while, and between the provider-installed bloatware that couldn’t be uninstalled and the usual cruft that seems to accumulate on computers over time, the phone was really sluggish, often unresponsive and pretty much permanently complained about running out of memory. So far it appears that updating the OS and only installing a handful of apps that I actually use, as opposed to the ones that I supposedly “need”, has resulted in a much better user experience.

The whole process was comparatively painless, which I really appreciated. The biggest hurdle was getting the ClockworkMod recovery image onto the phone; I ended up rebooting the Mac into Windows and installing it via the Windows tools. Other than that, the installation went smoothly and didn’t leave me with a bricked phone, so I’m happy with that part.

Why the effort, given my dislike for hacking smartphones? Well, for starters I can squeeze a little more life out of the phone. I’m eligible for an upgrade, but thanks to Verizon’s shenanigans – sorry, the added hoops (and added expense) you have to jump through if you want to keep a grandfathered unlimited data plan – I don’t feel particularly compelled to spend money on a phone, especially if I have to pay full retail for an upgrade. I’m also not that big a fan of Android (I admit to preferring iOS), so I’m currently waiting to see how the whole “unlocked iPhone” saga plays out with the iPhone 5. If I have to pay retail for a phone – any phone – I might as well use that as leverage to reduce the overall phone bill.

In the meantime I’ll see how I like the “new” Droid and better get used to occasionally reinstalling the OS on a phone, thus reminding me of the quip that Android truly is the Windows of smartphone OSs.

A(nother) tool post

I generally don’t post that much about the tools I use as they’re pretty standard fare and most of the time, your success as a programmer depends more on your skills than on your tools. Mastery of your tools will make you a better software engineer, but if you put the tools first, you end up with the cart before the horse.

I guess people have noticed that I use Emacs a lot :). My use of it is mainly for writing and editing code (and some newsgroup reading at home using Gnus) and I generally use it only for longer coding sessions. As a lot of my work is on Windows, one of the main tools I use is Visual Studio – almost exclusively 2010 right now, although I’ve taken a few peeks at 2012 and have used pretty much every version since VC++ 4. While I tend to switch to Emacs as soon as I’m editing more than two lines, I make the small changes that come up while debugging in Visual Studio itself.

I did actually try SublimeText 2 a little while back as people were raving about it in various podcasts and on blogs. I did like its speed and uncluttered appearance but quickly fell into the “old dog refusing to learn new tricks” routine. OK, this is a slight exaggeration, but after a few days I didn’t get the feeling that using it over Emacs did anything for my enjoyment of programming or my productivity. I think part of the problem is that comparing any editor out of the box against an editor that one has used over several years and customized to reflect new things learned about one’s own tool use and ideas borrowed from co-workers is simply not fair – it is similar to a musician trying out a new instrument after getting comfortable with his current one over a few years. If you care about your craft, an editor is a central piece of your workflow, and once you can use it without having to think about how you accomplish a certain task, it gets harder to change.

Nevertheless, I would recommend that if you are a programmer in search of a better editor than the one your IDE offers and don’t want to invest the substantial amount of time it takes to get comfortable and productive with editing dinosaurs like Vi/Vim and Emacs, go check out SublimeText 2. I promise it is worth your while.

Oddly enough, my main takeaway from trying out SublimeText 2 was that I – who has been a proponent of high-contrast colour schemes for editors, like the one used by the old Norton Editor (yellow on blue) – really got into the low-contrast default theme. So much so, in fact, that with Emacs 24’s theme support, I’m now using this version of zenburn for Emacs. The other takeaway was that I really appreciated the fast startup time and having a slightly better editor than Notepad around for all the quick editing tasks one has to accomplish that would either mess up my carefully set up Emacs or require a second instance of Emacs for “scratch editing”. I ended up with Notepad++ for that and it seems to do the job admirably so far.

Another of the tools I discovered on Scott Hanselman’s blog is Console2. I much prefer it over the standard Windows command prompt, so give it a whirl! I also tried ConEmu, as Scott recommended that in a separate post – I’m a little undecided as yet which one I prefer; both seem to work just fine and are a massive improvement over the original MS command window. If you’re a command line junkie like I am, having one or two tabbed command windows hanging around rather than a plethora of separate ones is a massive boon. Try both and see which one you prefer – so far I do like the feel of ConEmu being actively developed (there are frequent new releases available), but Console2 simply seems to work, too. I think it’s just a matter of personal preference.

Speaking of command line tools, I recently had one of those “wow, these guys are still around” moments when I discovered that JPSoft is still offering a replacement command line processor for Windows. I used to be a pretty heavy user of both 4DOS and 4NT back in the early nineties, when they were distributed as shareware and you had to buy the documentation as a proper dead-tree version to acquire a license. Actually, I think I still have both manuals in a storage unit somewhere…

Anyway, I have been playing with the latest incarnation of their free command line tool (TCC/LE) and I really like it. Enough so that I’ll probably end up buying the full version. Basically, if you are looking for a “DOS prompt on steroids” rather than using Bash via Cygwin or PowerShell – which aren’t always that useful, especially if you need compatibility with existing batch files – I would strongly recommend you check it out.

If you want to remove a C++ project from a Visual Studio 2010 solution

… make sure that you have removed all dependencies on the project that you are about to remove before you remove the project from the solution.

If you don’t, the projects that still have dependencies on the project you just removed will retain those dependencies, but the dependencies will have become invisible, and the only way to rid yourself of the “phantom dependencies” is to edit the actual vcxproj files with a text editor and remove the dependency entries in there manually.

Don’t ask how long it took me to figure that out.


Why I still use a separate editor

There is a lot that modern IDEs do well, but uncluttered writing space isn’t one of them. Once you add the various views of your project, the debug window, the source control window and various other important panes you’re left with a tiny viewport into your code. The visual clutter can be disabled of course, but you’ll get it back sooner or later. When you switch back to debug mode or build mode, for example.


Halfway through GoingNative 2012

It’s almost time to go back for the second day, but before I do, I’d like to suggest that if you haven’t had a chance to attend in person or watch the livecast, see if you can find the videos online. My understanding is that they should be available – I’m writing this on my phone so I can’t be bothered to look at the moment, but I’ll check later.

Update: While the GoingNative site has the links to the sessions, I don’t seem to be able to find links to videos of the sessions. I thought they were recorded and not only streamed, but I might be wrong. Yeah, what else is new :).

Moving to a multi-VHD Windows installation to separate work and personal data

I had been thinking about setting myself up with a way to work from home in a disconnected fashion. Most of the places I’ve worked at in the past required me to remote into the work desktop, which is a good idea if both sides have 100% uptime on their network connection and no issues with being affected by adverse weather. Which in reality means that the connections tended to be unstable exactly when the weather dictated that one really, really wanted to work from home – because snowfall was horizontal, for example. My current employer is more enlightened in this matter, so my suggestion of locking all the necessary tools and source code inside a VM that would allow me to work from home even if the Internet connection was unavailable at either end was given the go-ahead. Given that my desktop here is plenty powerful for most development tasks (it’s an older Intel Mac Pro with dual Xeons), this should be an ideal solution.

Only, with the VM software I was trying out, the virtualised disk throughput was lacking a little. The product I’m working on uses Qt, and it took a day to build the commercial version of 4.7.4 inside the VM, with one of the Xeons allocated to VM duty. Oops. Some more digging pretty much confirmed that the main issue was the disk throughput, or lack thereof. At this point I came across Scott Hanselman’s article on how to boot Windows off a VHD. My understanding is that Boot Camp only supports booting off a single Windows partition, so this sounded ideal to me – just put a VHD with all the tools and the source code on the boot partition I already have, then boot from the VHD if I need to. Donn Felker’s blog entry on booting off a VHD on a Bootcamp’d Mac added the one missing piece of information, namely that one should ignore the warning from the Windows 7 installer that the disk (VHD) you’re about to install on isn’t supported and that there might be driver issues. Just go ahead and do it anyway.

After the installation and dropping all the tools on the VHD – I’m getting a little too familiar with the Visual Studio installer by now – Qt built pretty much in the expected time and the project itself can also be built within a reasonable amount of time. My guess is that the build is 5%-10% slower than on the work machine, but the work machine is building on an SSD and obviously hasn’t got a virtualised hard disk to deal with either. On the other hand, my own machine has the benefit of 8 real cores.

Why all the effort? I don’t like mixing work projects and my own stuff, for starters. If I can lock work into a VM or at least some kind of a sandbox, there’s less of a chance of accidental cross-pollination between the two and no licensing headaches either. The latter is especially important to me as there are some software licenses that are “duplicated” in the sense that I have both a work and a personal license. And of course there’s the little detail that the work VM data can simply be destroyed by deleting the VM/VHD if it proves necessary.

Even though I did originally intend to only set up a single VHD for work purposes and keep all the personal software and data on my main disk, I’ve ended up creating a second VHD specifically for a couple of car racing simulators that I use (iRacing and rFactor). I’m not a big gamer but I do like track driving in the real world and using the simulators tends to help with familiarising yourself with a track, plus it helps in the off season, too. iRacing had a bit of a problem with the various bits of security software I have installed on my main Windows and given that I had a spare license anyway, it made sense to put it in its own “virgin Windows” sandbox. No issues since. Well, none related to the software…