Software Engineering when Hardware Matters

Posted: November 12, 2012
It’s a funny thing. You spend quite a few years, happily programming along, mastering and perfecting your craft, learning languages, frameworks, environments, toolchains and the like, and you’d expect that after 25 years or so, you’d reach a point where you could coast on what you’ve learned and just deliver on the best bits you’ve ever produced in your life…
I recently read an account of a talk that Herb Sutter gave on Microsoft's embrace of C++11. A brief list of the new features Microsoft is adopting:
Variadic templates; uniform initialization and initializer_lists; delegating constructors; raw string literals; explicit conversion operators; and default template arguments for function templates.
I don’t know about you, but I don’t know what half those words mean, nor how they’re going to make my programming life easier. The article is interesting in that Sutter points out that for a stretch of 10 years, the C++ team had been focused on making managed code a more palatable thing. Now they’re focused on C++ compliance. Meanwhile, gcc marches on, and Clang seems to be slowly but surely becoming the darling alternative for some.
But, my point is about how the world changes. In the past, I might have benefited from the latest and greatest feature in such and such language. But, when will C++ have a memory manager? Perhaps in a few more releases of the standard, and…
Meanwhile the world marches on.
I recently purchased this very nice ASUS RT-N66U router. On the one hand, it’s your basic router; it can do a router’s job. But wait a minute, take a look on the inside. It has 256MB of RAM!! It also has a decent CPU in it. On raw compute power, this is more powerful than the Raspberry Pi. Of course it costs several times as much, but that got me to thinking. This isn’t just a router. This is a headless compute node with more connectivity options than your typical PC. Just think about it. How much effort and cost would it take to outfit your average desktop PC with 4 ports of gigabit speed, plus 450Mbit WiFi? I’m sure you could find a board to do it, and then you’d have to deal with the drivers working with whatever OS is running on your machine, and integrating with the frameworks, all the way up the stack, until you’d finally get your hands on the bits coming off the device.
But, here it is, a compute node for less than $200.
And how do you program such a device? With Linux of course! Pick your distro, either DD-WRT or OpenWrt (not currently supporting this router), and away you go with your usual programming. There are bonuses, though. Not only do you get the shield of Linux, but you can also access bits and pieces of the hardware directly as well.
With my diminutive TP-Link nano router, I can access a couple of GPIO lines, as well as a serial port, and even a blinky light. Suddenly, as a software guy, I’m concerned about the hardware, and I’m programming at a much lower level than I’m used to. With hardware, there are interrupt service routines, key debounce routines, and the like. Most of us have left that stuff long behind, or never even heard of it. But in today’s world, where you’re integrating motors, servos, and sensors, and doing realtime operations like pick and place, or tool paths for 3D printers, those lower-level things seem to matter.
So, what about all that higher level stuff provided by the myriad frameworks and the like that we’ve all grown to love and adore over the years? They still have their place. I still don’t want to implement an HTML viewer. Having WebKit in the world is probably good enough. But, hooking up that HTML view to control things in my home, like light switches and thermostats, now that’s where things are starting to become very interesting.