Turns out it’s not only Windows 8 that has its telnet client disabled, Windows 10 is in the same boat. I’ve been using Windows 10 for quite a while now and just discovered this issue. Anyway, the way to enable it is as follows:
I’ve been a Xubuntu user for years after switching from OpenSuse. I liked its simplicity and the fact that it just worked out of the box, but I was getting more and more disappointed with Ubuntu packages being out of date, sorry, stable. Having to rebuild a bunch of packages on every install was getting a little old. Well, they did provide material for all those “build XXX on Ubuntu” posts. Recently I’ve been playing with Manjaro Linux in a VM as I had been looking for an Arch Linux based distribution that gave me the right balance between DIY and convenience. I ended up liking it so much that I did a proper bare metal install on my main desktop. The install was pretty smooth apart from an issue with getting my AMD RX 470 graphics card to work.
This blog is self-hosted, together with some other services, on a FreeBSD virtual server over at RootBSD. Yes, I’m one of those weirdos who hosts their own servers – even if they’re virtual – instead of just using free or paid hosted services.
I recently had to migrate from the old server instance I’ve been using since 2010 to a new, shiny FreeBSD 10 server. That prompted a review of various packages I use via the FreeBSD ports collection and most importantly, resulted in a decision to upgrade from PHP 5.6 to PHP 7.0 “while we’re in there”.
I’ve moved from using Apache as a web server to nginx for various projects. The machines I’m running these projects on are somewhat resource-constrained, and nginx deals with low-resource machines much better than Apache does and tends to serve content faster in those circumstances. For example, switching the machine that hosts this WordPress blog from Apache and mod_php to nginx with php-fpm improved the pingdom load times on this blog by about 30% with no other changes.
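For the curious, the nginx side of that switch boils down to handing PHP requests to php-fpm over FastCGI. This is a minimal sketch, assuming php-fpm is listening on its default 127.0.0.1:9000 (many setups use a Unix socket instead, so adjust `fastcgi_pass` to taste):

```
# Inside the server { } block for the blog:
location ~ \.php$ {
    # Hand the request to php-fpm (default TCP listener assumed here)
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    # Tell php-fpm which script to run
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;
}
```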
The last piece of the puzzle was to get newsyslog to rotate the nginx logs. The instructions on BSD Start suggest they’re for FreeBSD 10, but they work fine on FreeBSD 9.x as well.
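For reference, the relevant entries end up looking roughly like this in /etc/newsyslog.conf (paths and rotation settings are illustrative; the pid file location depends on your nginx config). Signal 30 is SIGUSR1 on FreeBSD, which tells nginx to reopen its log files:

```
# logfilename                 mode count size when  flags pid_file            sig
/var/log/nginx/access.log     644  7     *    @T00  JC    /var/run/nginx.pid  30
/var/log/nginx/error.log      644  7     *    @T00  JC    /var/run/nginx.pid  30
```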
Yes, I know Ubuntu and Xubuntu already come with Chromium in their official package repositories, but sometimes it does help to have the official/commercial version installed in addition to the Open Source one. I actually have both installed right now, plus Firefox and Vivaldi. You could almost think I’m some sort of web developer or something.
The Google Chrome installation instructions for 14.04 on Ubuntu Portal also work fine for 15.04 with one wrinkle. If you use the installation via PPA method, you need to run the following command to actually install the stable version of Chrome:
sudo apt-get install google-chrome-stable
The page above lists the command as
sudo apt-get install google-chrome
but that doesn’t work for me on 15.04.
I would recommend installing Chrome via PPA as that will integrate into the automatic update process. One less thing to think about when keeping your machine up to date.
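For reference, the repository-based setup at the time amounted to the following (the signing key URL and apt source line are taken from Google’s standard Linux instructions, so double-check them against the current docs before running):

```
# Add Google's package signing key
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
# Add the Chrome repository
sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list'
# Install the stable channel
sudo apt-get update
sudo apt-get install google-chrome-stable
```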
In a previous blog post I explained how you can substantially improve the performance of git on Windows by updating the underlying SSH implementation. This performance improvement is very worthwhile in a standard Unix-style git setup where access to the git repository is done using ssh as the transport layer. For a regular development workstation, this update works fine as long as you keep remembering that you need to check and possibly update the ssh binaries after every git update.
I’ve since run into a couple of other issues that are connected to using OpenSSH on Windows, especially in the context of a Jenkins CI system.
Accessing multiple git repositories via OpenSSH can cause problems on Windows
I’ve seen this a lot on a Jenkins system I administer.
When Jenkins is executing a longer-running git operation like a clone or large update, it can also check for updates on another project. During the check, you’ll suddenly see an “unrecognised host” message pop up on the console you’re running Jenkins from and it’s asking you to confirm the host fingerprint/key for the git server it uses all the time. What’s happening behind the scenes is that the first ssh process is locking .ssh/known_hosts and the second ssh process suddenly can’t check the host key due to the lock.
This problem occurs if you’re using OpenSSH on Windows to access your git server. PuTTY/Pageant is the recommended setup, but I personally prefer using OpenSSH because when it works, it’s seamless, the same way it is on a Unix machine. OK, the real reason is that I tend to forget to start pageant and load its keys, but we don’t need to talk about that here.
One workaround that is being suggested for this issue is to turn off the key check and make /dev/null the “storage” for known_hosts. I don’t personally like that approach much as it feels wrong to me – why add security by insisting on ssh as a transport and then turn off said security? You end up with a somewhat performance-challenged git on Windows and not much in the way of security.
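For completeness, that workaround is usually a per-host stanza like the following in ~/.ssh/config (the host name is a placeholder for your git server). As I said above, I wouldn’t recommend it:

```
Host git.example.com
    # Skip the host key verification entirely...
    StrictHostKeyChecking no
    # ...and throw away anything that would have been remembered
    UserKnownHostsFile /dev/null
```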
Another workaround improves performance, gets rid of the parallel access issue and isn’t much less safe.
Use http/https transport for git on Windows
Yes, I know that git is “supposed” to use ssh, but using http/https access on Windows just works better. I’m using the two interchangeably even though my general preference would be to just use https. If you have to access the server over the public Internet and it contains confidential information, I’d probably still use ssh, but I’d also question why you’re not accessing it over a VPN tunnel. But I digress.
The big advantage of using http for git on Windows is that it works better than ssh simply by virtue of not being a “foreign object” in the world of Windows. There is also the bonus that clones and large updates tend to be faster even compared to a git installation with updated OpenSSH binaries. As an aside, when I tested the OpenSSH version that is shipped with git for Windows against PuTTY/Pageant, the speeds were roughly the same, so you’ll be seeing the performance improvements no matter which ssh transport you use.
As a bonus, it also gets rid of the problematic race condition that is triggered by the locking of known_hosts.
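Switching an existing clone from ssh to http(s) is a one-liner; the URL here is a placeholder for your own server:

```
# Point the existing "origin" remote at the https URL instead of ssh
git remote set-url origin https://git.example.com/myproject.git
# Confirm the change
git remote -v
```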
It’s not all roses though, as it’ll require some additional setup on the part of your git admin. Especially if you use a tool like gitolite for access control, the fact that you end up with two paths in and out of your repository (ssh and http) means that you essentially have to manage two types of access control, as the http transport needs its own set of access control. Even with the additional setup cost, in my experience offering both access methods is worth it if you’re dealing with repositories that are a few hundred megabytes or even gigabytes in size. It still takes a fair amount of time to shovel a large unbundled git repo across the wire this way, but you’ll be drinking less coffee while waiting for it to finish.
If you haven’t heard about the bash “shellshock” bug yet, it may be time to peek out from underneath the rock you’ve been under ;). While bash isn’t installed as standard on FreeBSD, there’s a very good chance that someone either installed it because it’s their preferred shell or because one of the ports lists it as a dependency. Either way, now would be a really good time to check if your machine has bash installed if you haven’t done so already. Go on, I’ll wait.
Anyhow, right now you really need to check for updates on the bash port on a daily basis as the updates are coming in at a pretty furious rate. I’m guessing we’ll be back to normal pretty soon, but right now, with exploits already in the wild, your server will need some extra grooming.
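If you want to check whether your installed bash is still vulnerable to the original bug (CVE-2014-6271), the classic one-liner is:

```shell
# On a vulnerable bash this prints "vulnerable" before the test string;
# a patched bash prints only "this is a test":
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

Note that later shellshock-related CVEs need their own checks, so keep updating the port either way.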
Of course the other, simpler option is to uninstall bash, unless one of the ports you are using has it as a dependency…
For security reasons, apparently. I can see that making sense with the telnet server but the client? It’s an invaluable network debugging tool, after all, especially in heterogeneous networks.
Anyway, here is how you re-enable it. Just leave the server off.
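If you prefer the command line over clicking through “Turn Windows features on or off”, DISM from an elevated command prompt does the same thing (feature name as used on Windows 8/10):

```
rem Run from an elevated (Administrator) command prompt:
dism /online /Enable-Feature /FeatureName:TelnetClient
```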
Long title, I know…
I was trying to get Windows RT’s Mail app to access the email on my own server. The server uses IMAPS with a self-signed certificate, as I only want SSL for encryption and don’t really need it for authentication purposes as well. As long as it is the correct self-signed certificate, I’m happy.
The Mail app however rejects certificates that weren’t signed by a trusted authority and doesn’t offer an obvious exception mechanism (like Thunderbird or Apple Mail) that circumvents the need for a trusted certificate. The original Mail app that came with my Surface also displayed only a very cryptic error message, but the latest update from earlier this week correctly suggests that one needs to add the self-signed certificate to the certificate storage in order to get Mail to recognize the certificate.
In my case the saving grace is that I use the same cert to secure the webmail access so IE can easily access the certificate. However as Joe User, you can’t add another certificate to the certificate store – you have to be Administrator to be able to add a certificate and I initially couldn’t find an obvious way to run IE as Administrator.
The trick turns out to be that you have to run IE from the desktop (yes, the Surface RT has a standard Windows Desktop, too). The easiest way to get there is to run IE from the ’tile’ UI, pull up the bottom menu and select ‘view on desktop’ from the settings icon menu. Once you are on the desktop, right-click (two-finger click on the ZX81 keyboard cover) on the IE icon. Bummer, no ‘Run as Administrator’ menu entry. However, there is an entry in this menu that says ‘Internet Explorer’. Right click/two finger click on that one and you get ‘Run as Administrator’. I fired up IE as administrator and the buttons to install the certificate were no longer greyed out.
At this point there was one last hurdle to climb over – if you let IE determine where the certificate is saved, Mail still does not recognize the certificate. You have to install it in ‘Trusted Root Certification Authorities’. And now, I can finally read my email on my Surface RT. Just be aware of the security implications of doing so as your certificate can now act as a root certificate for other certificates. Of course, you could simply get a ‘real’ certificate and not have that sort of security issue.
The above worked for me because I use the same certificate for two purposes. If you can’t simply access the certificate via a browser, you’ll have to download the certificate onto your machine as a file and then use certmgr to import it. Again, you’ll most likely have to run certmgr as Administrator as it won’t allow file operations otherwise.
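As an alternative to the certmgr GUI, certutil can import a downloaded certificate file from the command line; the filename here is a placeholder, and you’ll again need an elevated prompt:

```
rem Run from an elevated command prompt; "mycert.cer" is your downloaded certificate:
certutil -addstore -f Root mycert.cer
```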
Last night I did something I was adamant I wasn’t going to do, namely rooting my Android phone and installing CyanogenMod on it. Normally I don’t like messing with (smart)phones – they’re tools in the pipe wrench sense to me, they should hopefully not require much in the way of care & feeding apart from charging and the odd app or OS update. Of course, the odd OS update can already be a problem as no official updates have been available for this phone (a Motorola Droid) for a while, and between the provider-installed bloatware that couldn’t be uninstalled and the usual cruft that seems to accumulate on computers over time, the phone was really sluggish, often unresponsive and pretty much permanently complained about running out of memory. So far it appears that updating the OS and only installing a handful of apps that I actually use, as opposed to the ones that I supposedly “need”, has resulted in a much better user experience.
The whole process was comparatively painless, which I really appreciated. The biggest hurdle was getting the clockworkmod recovery image onto the phone. I ended up rebooting the Mac into Windows and installing it via the Windows tools. Other than that, the installation went smoothly and didn’t leave me with a bricked phone, so I’m happy with that part.
Why the effort, given my dislike for hacking smartphones? Well, for starters I can squeeze a little more life out of the phone. I’m eligible for an upgrade, but thanks to Verizon’s shenanigans, sorry, the added hoops (and added expense) you have to jump through if you want to keep a grandfathered unlimited data plan, I don’t feel particularly compelled to spend money on a phone, especially if I have to pay full retail for an upgrade. I’m also not that big a fan of Android (I admit to preferring iOS), so I’m currently waiting to see how the whole “unlocked iPhone” saga will play out with the iPhone 5. If I have to pay retail for a phone – any phone – I might as well use that as leverage to reduce the overall phone bill.
In the meantime I’ll see how I like the “new” Droid and better get used to occasionally reinstalling the OS on a phone, thus reminding me of the quip that Android truly is the Windows of smartphone OSs.