Sprinkler controller upgrade part III – setting it up

Putting the OpenSprinkler and Raspberry Pi together was easy; getting them to run showed my inexperience when it comes to playing with hardware. The overall install still went pretty smoothly, and the documentation is good and easy to follow, so I’m not going to ramble on about it for long – just a few notes.

First, my old card reader didn’t want to play with any of my computers. The card reader is ancient, but it should still have been able to handle an SD card. No joy under any available OS, though, so I ended up having to get a new SD/microSD-only card reader.

When writing the OSPI image file to the SD card using Mac OS, make sure you write to the raw whole-disk device and not to the slice (in my case /dev/rdisk4 and not /dev/disk4s1), otherwise you’ll end up with a non-booting OSPI and wonder why. Don’t ask me how I know.
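Since the slice and raw device names differ in a predictable way, you can derive the raw device from the slice name that `diskutil list` reports. A minimal sketch, assuming the card shows up as `/dev/disk4s1` as it did in my case (the image filename is an assumption, too):

```shell
# Derive the raw whole-disk device from the slice name (assumes /dev/disk4s1)
slice=/dev/disk4s1
whole=${slice%s[0-9]*}     # drop the slice suffix  -> /dev/disk4
raw=${whole/disk/rdisk}    # switch to the raw node -> /dev/rdisk4
echo "$raw"
# Then, after a `diskutil unmountDisk /dev/disk4`, something like:
#   sudo dd if=ospi.img of=/dev/rdisk4 bs=1m
```

Writing to the `rdisk` node bypasses the buffer cache, which is also noticeably faster than writing to `disk`.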

Also, the OSPI image doesn’t have Emacs pre-installed, so I obviously had to fix that. I mean, how would I be editing configuration files otherwise?

The hardware installation (aka screwing the controller to the wall and wiring it up) was pretty simple; to facilitate the install I had taken photos of the way the old controller was wired and used those as a guide.

The whole install went pretty smoothly and the controller has been running our sprinklers for a while now. Unfortunately the sprinkler_pi program that I really wanted to use seems to have encountered a bug that has it trigger multiple valves at the same time; I’m planning to upgrade to the latest version and, if necessary, debug it a bit, because I like its UI better than the default interval_program’s. The latter, however, just worked out of the box.

The only concern so far is that the CPU temperature on the Raspberry Pi seems a little high (it’s usually hovering around 60–65° Celsius) as it’s sitting outside in the garage. I might have to experiment with a CPU heat sink on that one.
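For keeping an eye on the temperature, it can be read straight from sysfs on the Pi (the path below is the standard Raspbian one; `vcgencmd measure_temp` is an alternative). A quick sketch that falls back to a sample value when not run on a Pi:

```shell
# Read the SoC temperature, reported in millidegrees Celsius; fall back
# to a sample value when the sysfs node isn't available (i.e. not on a Pi)
path=/sys/class/thermal/thermal_zone0/temp
raw=$( [ -r "$path" ] && cat "$path" || echo 61234 )
celsius=$((raw / 1000))
echo "CPU temperature: ${celsius} C"
```

Dropping that into a cron job and logging the value makes it easy to see whether a heat sink actually makes a difference.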

Sprinkler controller upgrade part II – the Pi(e)s have arrived

The Raspberry Pis have landed. Guess which box contains the sensitive electronics and is worth about twice as much as the other one:

[Photo: pi-parcels-1]

That’s right:

[Photo: pi-parcels-2]

Geez Amazon, what is it with the shoddy packaging when it comes to items bought via Amazon Fulfillment Services? This is not the first time I’ve received something that can only be described as badly packaged.

The OpenSprinkler kit has also arrived; all I’m currently waiting for is a smaller memory card, as the regular SD cards I bought are a little too big to fit into the OpenSprinkler case. Anyway, I should have the new hardware up and running on Friday.

The latest project – improving the home’s sprinkler system, part I of probably a lot

I normally don’t play much with hardware, mainly because there isn’t – or wasn’t – much I want to do that requires hardware beyond a regular PC, or maybe a phone or tablet. This one is different, because no self-respecting geek would want the usual rotary-control “programmable” timer to run their sprinkler system, would they?

We do live at the edge of the desert and we have pretty strict watering restrictions here. I’m all for it – water being a finite resource and all that – and I want to improve our existing sprinkler system at the same time. It doesn’t help that the people who set up the sprinklers were probably among the lowest bidders, to put it politely. OK, to be blunt, they seem to have failed the “giving a shit” test when they put the system together. I spent a lot of last year’s “gardening hours” just trying to make it work somewhat. Not well, just “somewhat”. Time to fix that.

First step was researching hardware. I’m comfortable with Unix-type OSs (obviously), and with seemingly the world and their dogs releasing small, low-power embedded Linux devices, I figured one of them would be perfect. The original plan was to get a Raspberry Pi or a BeagleBone with a relay shield/cape and drive the sprinkler valves that way. A bit more poking around the web led me to the various OpenSprinkler modules (standalone, Raspberry Pi shield and BeagleBone cape), and they look ideal for what I have in mind. I’m planning to order the Raspberry Pi version; one of the nice touches is that the Raspbian repository has packages for the Java JDK, which gives me bad ideas about hacking parts of the sprinkler system in Clojure or Armed Bear Common Lisp. I’m not sure the system is powerful enough to run either, but one can dream.

The good thing about the various OpenSprinkler systems is that they have the 24V to 5V converter on board so the power supply isn’t a problem. There is already open source software for them that covers the normal requirements and either of them can control enough valves for our current needs without resorting to genius solutions like running two valves off the same controller output because someone installed a wiring loom that is one wire short of being able to control all valves individually. Apparently the fact that the water pressure wasn’t high enough to run two zones at the same time fell in the category of “not giving a shit”.

The next step after getting the hardware is to convert the existing system to run off the new controller, with some additional wiring so all zones can be controlled individually. This will require fixing up some of the wiring issues and will also have to tie in with my project of running some Ethernet wiring around the house, unless I decide to go wireless for the sprinkler controller. Haven’t figured that part out yet. Given that the controller is “headless”, I’m tempted to hide it away out of sight and just run Ethernet and 24V power to it.

Once it’s all up and running I’ll look into adding some sensors for a bit more fine-grained control over the system. Rain sensors are not really helpful out here as it hardly ever rains during irrigation season. I’m thinking about adding at least a couple of moisture sensors for some of the more sensitive plants to ensure that they get the appropriate amount of water but not more than necessary. Not sure I’ll get around to that part this year, first the system needs to be up and running reliably before I go and break it again.

Stay tuned.

I prefer ConEmu over Console2, and so should you…

OK, I admit it – I’m a dinosaur. I still use the command line a lot as I’m subscribing to the belief that I can often type faster than I can move my hand off the keyboard to the mouse, click, and move my hand back. Plus, I grew up in an era when the command line was what you got when you turned on the computer, and Windows 2.0 or GEM was a big improvement.

One of the neat features of the console emulators on both Linux and Mac OS X was – and is – that you can run a set of shells in a single tabbed console window. A post on Scott Hanselman’s blog put me onto Console2. That was more like it, and I pretty much immediately housed my Windows shells – either cmd.exe or PowerShell – in there. Much better, but over time the pace of development slowed and the last beta release dates from 2011. It’s not that the beta is buggy or anything – in fact, in my experience it works very nicely indeed – but of course as a software engineer I like shiny new things.

Enter, via another post on Scott Hanselman’s blog, ConEmu – or ConEmu-Maximus5, to give it its full name. If Console2 is the VW Golf to the stock Windows console’s 1200cc VW Bug, then ConEmu is the VW Phaeton to Console2’s VW Golf. It’s got a lot more features, it’s actively developed, it works well with Far Manager if you miss the Norton Commander days, and it’s highly configurable. Of course it also can handle transparent backgrounds, but so can Console2.

For me, it has one killer feature – recent versions detect which shells you have installed on your machine and offer you a selection via the green “new tab” button (the one that looks a bit like a French Pharmacy sign), with a choice of running them either as a regular user or admin user:

ConEmu with visible command line processor menu

Why is this such a big deal? Well, it’s neat if you’re using both PowerShell and cmd.exe, but for me it’s a killer feature because I like using TCC/LE, at least at home. At first glance TCC/LE is the familiar Windows command prompt, but in the same way that ConEmu is a much-expanded console emulator compared to the regular Windows one, TCC/LE is a much-expanded command prompt that is a lot more feature-rich and has a lot of sensible extensions. And because I’m such a dinosaur, I’ve actually been using its predecessors (4DOS and 4NT) way back when they were distributed as shareware on a floppy disk and you had to buy the manuals to get the registration code. And yes, I still have at least the 4DOS manual.

Back to console emulators, though. If I wanted to nitpick, both ConEmu and Console2 work less well over an RDP connection than the stock console, which is noticeable if you remote into machines frequently. It’s not that they work badly, but Microsoft clearly spent a lot of time optimising the stock console to work well over RDP (or having RDP work well with the stock console), so there is a bit of lag when scrolling. It doesn’t make either tool unusable, but you notice it’s there.

Anyway, if you check out one new tool this week, make it ConEmu.

The coder/programmer/software engineer debate seems to be rising from the undead again

First, a confession – I actually occasionally call myself a coder, but in a tongue in cheek, post-modern and ironic way. Heck, it does make for a good blog title and license plate.

Nevertheless, with all the recent “coding schools” cropping up all over the place – at least if you are in the Bay Area – it does seem that being able to code in the context of a reasonably sought-after web technology, without much further formal training, is the path to new, fulfilling careers and of course untold riches, in an economy where recent graduates in all fields have problems finding work. Well, at least a career that allows you to rent a room instead of crashing on somebody’s couch.

There are some problems with the whole “mere coder” thing. Dave Winer has some interesting thoughts on his blog, and I agree with a lot of what he says. Scott Radcliff has some additional thoughts, which I also find myself agreeing with a lot.

The process of building a software system is a lot more than just coding, unless you subscribe to the view that all coders do is take the gospel handed down by A Visionary Leader and convert it into software. Anybody who’s ever built a moderately complex system knows that software doesn’t happen that way, at least not the type of software that doesn’t collapse under its own weight shortly after release. Of course that doesn’t sit well with the notion that The Visionary Genius is all that is required to build the next big thing, and the narrative doesn’t work at all once you recognise that building software is, in most cases, a team sport.

Expecting someone who has just learned to write code to create a great piece of software is like expecting someone who’s just gone through a foreign-language course of similar length to go and write the next great novel in that language. Sometimes you get lucky, but most of the time the flow isn’t there, the language isn’t used idiomatically, and all that together makes for a less than enjoyable experience. Some of the people who manage to enter the profession of software development this way will learn on the job and grow into people who can build robust, maintainable systems, and those are to be congratulated.

The way most people use the term coder, ie not in a postmodern, ironic way, reminds me very much of the time when a programmer’s job was to program unthinkingly, a little cog in the giant waterfall development machine – which led to the WIMP (“Why Isn’t Mary Programming”) acronym. Basically, if the keyboard wasn’t constantly clattering, there was no programming going on. After all, how hard could it be? Scott Adams put it rather nicely in Dilbert.

That said, maybe we are at a point in time where an army of coders, the modern equivalent of the typing pool, is able to create good-enough software. Given that most users are conditioned to believe that software is infuriating and buggy, we as an industry might well get away with it. Is that the world we want to live in, though?

As creators of software, we do have the ability to choose the environment that we work and live in. If you care about quality, work with like-minded people.

Me? I prefer to call myself a software craftsman. I’m not an engineer; I write code, but that’s almost an afterthought once the design is figured out. At the end of the day, what I do is build something that wasn’t there before, using vague guidelines that are wrong as often as they are right, while trying to tease out what the customer needs rather than what they tell me they want. In most cases there is no detailed blueprint, no perfect specification that only needs to be translated 1:1 into lines of code. Instead, I get to use my experience and judgement to fill in the blanks that most people – myself included – probably didn’t even think were there.

Sounds like a job for a craftsman to me.

Initial thoughts on Swift

Like pretty much every other programmer with a Mac, I’m currently looking at Swift. Will I write anything but toy programs in it? I don’t know yet – I don’t really write any Mac-ish software on my Mac, just unix-ish programs. If Swift doesn’t escape the OS X and iOS ecosystems, it’ll be a nice exercise in a neat language that’s not really that relevant to the world at large, or at least to my part of it. Not that this sort of vendor lock-in can’t work well – Visual Basic 6, anybody?

I hope Apple will open the language to be used outside its software ecosystem but from what I have read so far, it’s tightly integrated into the iOS and Mac OS X runtimes so I’m not sure how feasible that even would be. Still, I’m pretty sure it’ll remain on my list of languages to play with.

Someone is building a BBC Micro emulator in JavaScript

For those of us who remember when, a long time ago, the BBC Micro was the home computer with the fastest BASIC implementation available: Matt Godbolt is building an emulator in JavaScript. The first post of his series can be found here.

It’s amazing how far we have come since I started playing with computers, yet they’re still not fast enough.

I didn’t realise Emacs has an elisp repl

When it comes to Emacs, I am an amateur at best, but part of the fun is that I keep discovering new useful functionality. Thanks to a post over at Mastering Emacs, I’m now aware of ielm, which came in very handy when I was trying to build an elisp function that would automatically pull all the packages I regularly use via ELPA if they weren’t installed already.

Running Emacs from a Windows Explorer context menu

It’s been one of those days: thanks to a hard disk going south, I ended up having to rebuild the system drive on one of my machines. After putting the important software back on there – “Outlook and Emacs”, as one of my colleagues calls it – I had to reapply some of the usual tweaks that make a generic developer workstation my developer workstation.

One of the changes I wanted to make was to have an “Edit in Emacs” type context menu in Windows Explorer. The only reason I was keeping another editor around was that this is a feature I use regularly but hadn’t got around to setting up for Emacs.

StackOverflow to the rescue, as usual. I used the registry script provided in the first answer and tweaked it slightly. In contrast to a lot of people, I don’t keep Emacs running all the time, only when I’m editing something. For that reason I like my Emacs setups to either start a new instance if there is no Emacs running, or use the existing instance as a server if there happens to be one.

With my little tweaks to start an instance of Emacs even if there is no Emacs server running, this is what the updated registry script looks like:

Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\*\shell]
[HKEY_CLASSES_ROOT\*\shell\openwemacs]
@="&Edit with Emacs"
[HKEY_CLASSES_ROOT\*\shell\openwemacs\command]
@="C:\\Tools\\emacs\\bin\\emacsclientw.exe -a C:\\Tools\\emacs\\bin\\runemacs.exe -n \"%1\""
[HKEY_CLASSES_ROOT\Directory\shell\openwemacs]
@="Edit &with Emacs"
[HKEY_CLASSES_ROOT\Directory\shell\openwemacs\command]
@="C:\\Tools\\emacs\\bin\\emacsclientw.exe -a C:\\Tools\\emacs\\bin\\runemacs.exe -n \"%1\""

There’s another neat little tweak in there, too – the directory “C:\Tools\emacs” is actually a symbolic link to the currently installed version of Emacs on this machine, so whenever I update my version of Emacs, I don’t have to redo the other scripts and settings that reference it.
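For the record, on Windows (Vista and later) such a directory symlink is created with `mklink /D` from an elevated prompt; the versioned directory name below is made up for illustration. The Unix equivalent, for comparison:

```shell
# Windows (elevated cmd.exe):  mklink /D C:\Tools\emacs C:\Tools\emacs-24.3
# The same "current version" trick on Unix (paths are stand-ins):
mkdir -p /tmp/emacs-24.3            # stand-in for a versioned install dir
ln -sfn /tmp/emacs-24.3 /tmp/emacs  # repoint the "current" symlink
readlink /tmp/emacs                 # shows where the link points
```

Upgrading then just means repointing one link instead of editing every script that mentions the Emacs path.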

This might be old hat to most Unixheads, but it’s slightly unusual on Windows, so I figured I’d mention that something like this is possible on Windows, too.

Improving the performance of Git for Windows

Admittedly I’m not the biggest fan of git – I prefer Mercurial – but we’re using it at work and it does a good job as a DVCS. However, we’re mostly a Windows shop and the out-of-the-box performance of Git for Windows is anything but stellar. That’s not too much of a bother with most of our repos, but we have a couple of fairly big ones, and clone performance with those matters.

I finally got fed up with the performance after noticing that cloning the same large repo from the same Linux server to a FreeBSD box was over an order of magnitude faster and decided to start digging around for a solution.

The clone performance left a lot to be desired using either PuTTY or the bundled OpenSSH as the SSH client. I finally settled on using OpenSSH as I find it easier to deal with multiple keys. Well, it might just be easier if you’re a Unixhead.

Anyway, my search led me to this discussion, which implies that the problem lies with the versions of OpenSSH and OpenSSL that come prepackaged with Git for Windows – they are rather out of date. Now, I had come across this discussion before and as a result attempted to build my own SSH binary that included the high-performance ssh patches, but even after I got those to build using cygwin, I never managed to get it to work with Git for Windows. Turns out I was missing a crucial detail: the Git for Windows binaries appear to ignore the PATH variable when they look for their OpenSSH binaries and just look in their own directory. After re-reading the above discussion, it turned out that the easiest way to get Git for Windows to recognise the new ssh binaries is to simply overwrite the ones bundled with the Git for Windows installer.

*** Bad hack alert ***

The simple recipe to improve the Git performance on Windows when using a git+ssh server is thus:

  • Install Git for Windows and configure it to use OpenSSH
  • Install the latest MinGW system. You only need the base system and their OpenSSH binaries. The OpenSSH and OpenSSL binaries that come with this installation are much newer than the ones you get with the Git for Windows installer.
  • Copy the new SSH binaries into your git installation directory. You will need to have local administrator rights for this as the binaries reside under Program Files (or “Program Files (x86)” if you’re on a 64 bit OS). The binaries you need to copy are msys-crypto-1.0.0.dll, msys-ssl-1.0.0.dll, ssh-add.exe, ssh-agent.exe, ssh-keygen.exe, ssh-keyscan.exe and ssh.exe
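If overwriting files under Program Files feels too invasive, git also honours the `GIT_SSH` environment variable, so an alternative worth trying is to point it at the newer binary directly (the MinGW path below is an assumption about a default install):

```shell
# Tell git which ssh binary to invoke instead of the bundled one
export GIT_SSH="/c/MinGW/msys/1.0/bin/ssh.exe"
echo "git will now invoke: $GIT_SSH"
```

This is the same mechanism the Git for Windows installer uses when you pick the PuTTY/plink option, so it should survive installer upgrades better than overwritten binaries do.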

After the above modifications, the clone performance on my Windows machine went from 1.5–2.5 MB/s to 10–28 MB/s, depending on which part of the repo it was processing. That’s obviously a major speedup, but another nice side effect is that this change results in noticeable performance improvements for pretty much all operations that involve a remote git repo. They all feel much snappier.
