So, my kids wanted to buy me a large teddy bear for my birthday. There so happened to be one at the local Safeway, but it was $75. The last time we bought a giant stuffed thing, it was a giant dog from Costco. I don’t remember the price, but I thought, Costco, it’s got to be cheaper…
We went down to Costco, but we hadn't had a membership there for years. Time to renew. One thing led to another, and rather than the simple run-of-the-mill membership, I allowed myself to be talked into the "Executive" membership, which 'gives' you a credit card, and a $60 cash back card (offsetting the extra expense of the super membership). Well, how bad could it be? I went from having really no credit cards last year, to having 4 of them today. That must be good for creditworthiness, right? At any rate, I finally got the card, and thought, hey, I might as well read all the fine print.
The first thing that came in the mail was the "Account approval notice". This one is interesting because it's basically just the "congratulations, you're approved for a card, it will be coming in the mail shortly". It does list the credit limit, the outrageous interest rates, and down at the bottom, below the fold, "Personalize your PIN". Aha! This normally discarded little piece of paper is the one that has the credit card PIN, which most people don't know. For an ATM card, you always know the PIN because without it, you basically can't use it. But your credit card PIN? I don't usually know that, and why? Because I'm not looking for it, and I usually throw away this intro piece of paper. Well, now I know, and I'll try to keep track of these random 4 digits.
Next up, the giant new card package. This is the set of papers which include the terms and conditions in minute detail. This shows the 29% rate you’ll be charged whenever you do anything wrong (like not pay your bill on time), as well as the ‘arbitration’ clause, which ensures you never sue them whenever they do something wrong. One small piece of paper in this set says “FACTS” at the top of it.
The FACTS sheet. This piece of paper tells me about the many ways in which they're going to use the information they gather on me to market to me. Not only the company itself, but their affiliates, and even non-affiliates (basically anyone who wants the data). This is normally a throwaway piece as well, but this time I decided to read the fine print. What I found was one section titled "To limit our sharing". Well, that sounds good. Call a phone number, go through some menu choices, and there you have it, you've limited the usage of this data. Granted, all you can do is limit the affiliate usage of your data, but it's something. I even chose the option to have them send me a piece of paper indicating the choices that I made.
I feel really proud of myself. I normally ignore most of the stuff that comes from credit card companies, as most of it is marketing trying to sign me up for more credit cards, or point systems, or whatever. This time, I really dug in, and caught some interesting details. I’m curious to see how the “don’t market to me” thing works out. Of course, once you click off that checkbox, they probably simply sell your info off to someone else to harvest. I feel like that’s what happens when you unsubscribe from an email list as well, but I can’t prove it.
At any rate, I learned something new today. Read some of the fine print, try out a little something you haven’t in the past, and go on an adventure!
My birthday is coming up in November, and just today I was clicking through one of those web sites that says “45 discounts seniors can enjoy”. I’ve been doing “computing” in one form or another since I was about 10 years old, and I’m about to be 53. If I can do the math, that’s been a very long time. Looking back on my earlier years, I recognize a cocky genius of a software engineering talent (if I do say so myself). In more recent years, it hasn’t been about an ability to sling code hard and fast, but rather reflecting upon years of developing various kinds of systems to come up with non-obvious solutions faster than I would have otherwise.
Aging in tech typically means sliding slowly into a management position, slowly losing your tech chops, and mostly riding herd over the young guns that are coming up through the ranks. I've taken a slightly different path over the past few years. I did manage the cats who created some very interesting tech for Microsoft: XML, LINQ, ACS/Service Bus, Application Gateway, but more recently I found myself writing actual code myself, while inventing new ways to hire for diversity (http://aka.ms/leapit). It is this latter initiative that I find very fascinating and invigorating as I age in tech.
The premise of the LEAP program is that 'tech', broadly speaking, has advanced enough in terms of complexity that some things are now easier to achieve than they might have been 10 – 15 years ago. The kinds of "programming" that we're doing are changing. Whereas 15 years ago, having the skills to debug the Windows kernel was a great thing to look for, today, being able to do a mash up with the myriad web frameworks that are available is most interesting. Knowing R or machine learning tools is increasingly important. Those kernel debug skills, not so much.
But still, there's need for old codgers to apply themselves in ever creative ways. I look out onto the tech landscape, and I see myriad opportunities. I see the continent of Africa sitting there, daring us to harvest the energy and greatness that awaits. I see urban environments across the US, full of consumers of tech who can be turned into creators of tech just as easily. I see AI, robots, and automation of various forms that can be applied to our ever-burgeoning population of elder folks. As an older technologist, rather than going softly into that good night, lamenting the loss of my lightning quick programming skills, I see opportunity to leverage what I've learned over the years to identify opportunities, and marshal teams of engineers to go after them, adding guidance and experience where necessary, but otherwise just getting out of the way so the energetic engineers can do their thing.
I may or may not be able to pass a typical tech interview screen these days, but I’m more concerned with changing how we interview for tech roles in the first place. I’m more likely to identify how to incorporate the views of youth, the elderly, the farmer, the street performer, into the evolution of tech offerings to make their lives better. I’m more likely to, without fear, create a tech start up with a clear purpose, and the financial support necessary to see it through its early rounds.
Aging in tech can be a harrowing experience. In some cases we age out of certain roles, but with some foresight and thoughtfulness, we can leverage our years of experience to do ever more impactful things, while avoiding merely being surpassed by our up-and-coming peers.
So, as I age in tech, I’m looking forward to the discounts that are coming when I reach 55. I’m looking forward to the seniors menu at Denny’s. I’m looking forward to being able to think of anything I can imagine, and actually turn it into something that is helpful to society.
Aging in tech is something that happens to everyone, from the first line of code you write, to the last breath you take. I’ve thoroughly enjoyed the journey thus far, and am looking forward to many more years to come.
For the past few years, I’ve had this HP Photosmart printer. It’s served me well, with nary a problem. Recently, I needed to replace ink, so I spent the usual $60+ to replace all the cartridges, and then it didn’t work…
An endless cycle of “check the ink” ensued, at which point I thought, OK, I can buy some more cartridges, rinse, repeat, or I can buy another printer. This is the problem with printers these days. Since all the money is made on the consumables, buying a new printer is anywhere from a rebated ‘free’, to a few hundred dollars. Even laser printers, which used to cost $10,000 when Apple came out with their first one back in the day, are a measly $300 for a color laser!
So, I did some research. In the end I decided on the HP MFP M277dw.
It’s a pretty little beast. It came with an installation CD, which is normal for such things. But, since my machine doesn’t have a CD/DVD/BFD player in it, I installed software from their website instead.
It’s not often that I install hardware in my machine, so it’s a remarkable event. It’s kind of like those passwords you only have to use once a year. You’ll naturally try to follow the most expedient path. So, I download and install the HP installer appropriate for this device and my OS. No MD5 checksum available, so I just trust that the download from HP (at least over HTTPS) is good. But, these days, any compromise to that software is probably deep in the firmware of the printer already.
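If HP (or any vendor) did publish a checksum, verifying the download would take only a few lines. Here's a minimal sketch in Python; the installer file here is a tiny stand-in I create on the spot, and in real use you'd point the function at the actual download and compare against the hash published on the vendor's page.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so a large installer doesn't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the downloaded installer; with a real download you'd compare
# the result against the vendor-published checksum string.
with open("installer.bin", "wb") as f:
    f.write(b"pretend this is the HP installer payload")

print(sha256_of_file("installer.bin"))
```

A matching hash doesn't prove the software is benign, of course; it only proves you got the bytes the vendor intended to ship.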
The screens are typical: a list of actions that are going to occur by default. These include automatic update, customer feedback, and some other things that don't sound that interesting to the core functioning of my printer. The choice to turn these options off is hidden behind a blue colored link at the bottom of the screen. Quite unobtrusive, and if I'm color blind, I won't even notice it. It's not a button, just some blue text. So, I click the text, which turns on some check boxes, which I can check to turn off various features.
Further along in the installation: "Do I want HP Connect?" Well, I don't know, I don't know what that is. So, I leave that checked. Things rumble along, and a couple of test pages are printed. One says: "Congratulations!" and proceeds to give me the details on how I can send email to my printer for printing from anywhere on the planet! Well, that's not what I want, and I'm sure it involves having the printer talk to a service out on the internet looking for print requests, or worse, it's installed a reverse proxy on my network, punching a vulnerability hole in the same. It just so happens a web page for printer configuration shows up as well, and I figure out how to turn that particular feature off. But what else did it do?
Up pops a dialog window telling me it would like to authenticate my cartridges, giving me untold riches in the process. Just another attempt to get more information on my printer, my machines, and my usage. I just close that window, and away we go.
I’m thinking, I’m a Microsoft employee. I’ve been around computers my entire life. I probably upgrade things more than the average user. I know hardware, identity, security, networking, and the like. I’m at least an “experienced” user. It baffles me to think of how a ‘less experienced’ user would deal with this whole situation. Most likely, they’d go with the defaults, just clicking “OK” when required to get the darned thing running. In so doing, they’d be giving away a lot more information than they think, and exposing their machine to a lot more outside vulnerabilities than they’d care to think about. There’s got to be a better way.
Ideally, I think I'd have a 'home' computer, like 'Jarvis' for Tony Stark. This is a home AI that knows about me, my family, our habits and concerns. When I want to install a new piece of kit in the house, I should just be able to put that thing on the network, and Jarvis will take care of the rest, negotiating with the printer and manufacturer to get basic drivers installed where appropriate, and only sharing what personal information I want shared, based on knowing my habits and desires. This sort of digital assistant is needed even more by the elderly, who are awash in technology that's rapidly escaping their grasp. Heck, forget the elderly; even for average computer users, whose interaction with a 'computer' extends to their cell phones, tablets, and console gaming rigs, this stuff is just not getting any easier.
So, more than just hope, this lesson in hardware installation reminds me that the future of computing doesn’t always lie in the shiny new stuff. Sometimes it’s just about making the mundane work in an easier, more secure fashion.
So, I've written quite a lot about computicles over the past few years. In most of those articles, I'm talking about the software implementation of tiny units of computation. The idea for computicles stems from a conversation I had with my daughter circa 2007, in which I was laying out a grand vision of the world where units of computation would be really small, fit-in-your-hand sized, and able to connect and do stuff fairly easily. That was my envisioning of ubiquitous computing. And so, yesterday, I received the latest creation from HardKernel, the Odroid HC1 (HC – Home Cloud).
Hardkernel is a funny company. I've been buying at least one of everything they've made for the past 5 years or so. They essentially make single board computers in the "credit card" form factor. What you see in the picture is the HC1, with an attached 120GB SSD. The SSD is a standard 2.5″ drive, so that gives you a sense of the size of the thing. The black casing is aluminum, and it's the heat sink for the unit.
The computer bit of it is essentially a reworked Odroid XU4, which all by itself is quite a strong computer in this category. Think Raspberry Pi, but 4 or 5 times stronger. The HC1 has a single SATA connector to slot in whatever storage you choose. No extra ribbon cables or anything like that. The XU4 itself can run variants of Linux, as well as Android. The uSD card sticking out the right side provides the OS. In this particular case I'm using OMV (Open Media Vault), because I wanted to try the unit out as a NAS in my home office.
One of the nice things about the HC1 is that it's stackable. So, I can see piling up 3 or 4 of these to run my local server needs. Of course, when you compare them to the giant beefy 64-bit super server that I'm currently typing at, these toy droids give it very little competition in the way of absolute compute power. Hardkernel even did an analysis of bitcoin mining and determined the number of years it would take to get a return on the investment. But computing, and computicles, aren't about absolute concentrated power. Rather, they are about distributed processing.
Right now I have a Synology, probably the equivalent of today's DS1517. That thing has RAID up the wazoo, redundant power, multiple NICs, and it's just a reliable workhorse that won't quit, so far. The base price starts at $799, before you actually start adding storage media. The HC1 starts at $49. Of course there's no real comparison in terms of reliability, redundancy, and the like. Or is there?
Each HC1 can hold a single disk. You can throw on whatever size and variety you like. This first one has a Samsung SSD that's a couple years old, at 120GB. These days you can get 250GB for $90. You can go up to 4TB with an SSD, but that's more like a $1600 proposition. So, I'd be sticking with the low end. That makes a reasonable storage computicle roughly $150.
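For the sake of concreteness, here's the back-of-envelope math on that, using the street prices quoted above:

```python
# Back-of-envelope: cost of one storage computicle, and its cost per terabyte,
# using the prices quoted in this post.
hc1_board = 49                       # HC1 single-board computer
ssd_250gb = 90                       # 250GB SSD street price
node_cost = hc1_board + ssd_250gb    # one complete storage node

cost_per_tb = node_cost / 0.25       # 250GB = 0.25TB
print(f"node: ${node_cost}, per TB: ${cost_per_tb:.0f}")
# -> node: $139, per TB: $556
```

That per-terabyte number is much worse than a big HDD in a traditional NAS, which is why the interesting comparison isn't raw storage cost but the compute-plus-storage unit you get for the money.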
You could of course skip the SSD or HDD and just stick in a largish USB thumb drive, or a 128GB uSD for $65, but the speed on that interface isn't going to be nearly as fast as the SATA III interface the HC1 is sporting. So, great for small-time use, but for relatively constant streaming and download, the SSD solutions, and even HDD solutions, will be more robust.
So, what’s the use case? Well, besides the pure geekery of the thing, I’m trying to imagine more appliance like usage. I’m imagining what it looks like to have several of these placed throughout the house. Maybe one is configured as a YouTube downloader, and that’s all it does all the time, shunting to the larger Synology every once in a while.
How about video streaming? Right now that’s served up by the Synology running a Plex server, which is great for most clients, but sometimes, it’s just plain slow, particularly when it comes to converting video clips from ancient cameras and cell phones. Having one HC1 dedicated to the task of converting clips to various formats that are known to be used in our house would be good. Also, maybe serving up the video itself? The OMV comes with a minidlna server, which works well with almost all the viewing kit we have. But, do we really have any hiccups when it comes to video streaming from the Synology? Not enough to worry about, but still.
Maybe it's about multiple redundant distributed RAID. With 5 – 10 of these spread throughout the house, each one could fail in time, be replaced, and nothing would be the wiser. I could load each with a couple of terabytes, configure some form of pleasant redundancy across them, and be very happy. But, then there's the cloud. I actually do feel somewhat reassured having the ability to backup somewhere else. As recent flooding in Texas shows, as well as wildfires, no matter how much redundancy you have locally, it's local.
Then there's compute. Like I said, a single beefy x64 machine with a decent GPU is going to smoke any single one of these. Likewise if you have a small cluster of these. But, that doesn't mean it's not useful. Odroid boards are ARM based, which makes them inherently low power consumption compared to their Intel counterparts. If I have some relatively light loads that are trivially parallelizable, then having a cluster of a few of these might make some sense. Again with the ubiquitous computing: if I want to have the Star Trek style "computer, where's my son", or "turn on the lights in the garage", without having to send my voice to the cloud constantly, then performing operations such as speech recognition on a little local cluster might be interesting.
The long and short of it is that having a compute/storage module in the $150 range makes for some interesting thinking. It’s surely not the only storage option in this range, but the combination of darned good hardware, tons of software support, low price and easy assembly, gives me pause to consider the possibilities. Perhaps the hardware has finally caught up to the software, and I can start realizing computicles in physical as well as soft form.
So, a few months back, I finished the ultra-cool tower PC build. A strong motivator for building that system was to utilize a liquid cooling system, because I had never done so before. So, how has it gone these months later?
Well, it started with some strange sound coming from the pump on the reservoir. It was making some clicking sound, and I couldn’t really understand why. Then I felt the tubing coming out of the top of the CPU, and it was feeling quite warm. Basically, the liquid cooling system was not cooling the system.
But, I'm a tinkerer, so I figured I'd just take it apart and figure out what was going on. I took apart all the tubing, and took the CPU cooling block off the CPU as well. I opened up that block, and what did I see? A bunch of gunk clogging the very fine fins within the cooling block. It was this white chalky looking stuff, and it was totally preventing the water from flowing through. As it turns out, the Thermaltake system that I installed came with some Thermaltake liquid coolant, and that stuff turns out to be total crap. After reading some reviews, it seems this is a common affliction: the Thermaltake C1000 red coolant will eventually leave a white residue clogging the very fine parts of your cooling loop, forcing you to flush and refill, or worse.
Well, that’s a bummer, and I would have been ok after that discovery. Problem is, along the way, I put the system back together, turned it on to reflush the system, and walked away for a bit…
Luckily, my Android phone that I used to take the various pictures factory reset itself, so I no longer have the evidence of my hasty failure. It so happens that when the CPU cooling block was still clogged and I put the system together, I didn't tighten down the tube connecting to the block tight enough. Enough pressure built up that the tube popped off. Needless to say, I'll need to replace the carpet in my home office. And, I can tell you, the effect of spilling about a half gallon of water on the innards of your running computer motherboard, power supply, and all the rest, is almost certain death for those components…
So, I started by drying everything off as best I could. I used alcohol and Q-tips to dab up the obvious stuff. The motherboard simply would not turn on again. There are a lot of things that could be wrong, but I thought I'd start with the motherboard.
I ordered a new motherboard. This time around I got the Gigabyte GA-Z170X-Gaming 7. This is not the exact same motherboard as the original. It doesn’t have the option to change the bios without RAM being installed, and it doesn’t have as many power phases, but, for my needs, saving $120 was fine, since I lost the motivation to go all out in this replacement.
The motherboard was same day delivery (which is why Amazon is great). It installed without a flaw. Turned it on and… glitchy internal video! Aagghh. OK, return this, and in another day get another of the same. This time… no problems. Liquid cooling system back together, killer video card installed, monitors hooked up, and it all works as flawlessly as before, if not better.
This time around, I'm not installing any fancy cooling liquid. I've done my homework, and everyone who actually runs these systems for the long haul simply uses distilled water, and perhaps some biocide. I chose to get one of those silver coils to act as the biocide. That way there are no chemicals to deal with.
When the silver coil arrives, I’ll have to drain the pipes one more time to install it in the reservoir. I’ll also take the opportunity to use pipe cleaners on the tubes, which have become a bit milky looking due to the sediment from the C1000 cooling liquid. I now have a checklist for assembling the cooling system, to ensure I tighten all the right fittings, and hopefully avoid another spillage mishap.
Thankfully the CPU, memory sticks, video card, power supply, and nvme memory were all spared spoilage from the flooding incident. That would have effectively been a new PC build (darn those CPUs are expensive).
Lesson learned. There’s quite a difference between building a liquid cooling system for looks, vs building one that will actually function for years to come. I will now avoid Thermaltake like the plague, as I’ve found much better parts. Next machine I build will likely not use liquid cooling at all, because it won’t be as visible, so the aesthetic isn’t critical, and the benefits are fairly minimal. Enthusiasm is great, because it leads to doing new things. But, I have to temper my enthusiasm with more research and caution. I don’t mind paying more, piecing things together, rather than going for the all-in-one kit.
Now, back to computing!
I've gone back and forth on this over the years. For the most part we're 'cord cutters'. For me it wasn't about cost, but about changing viewing habits. We found that of the cable offerings, all we were really using was the connection to Sling TV so we could watch Indian serials and movies. Well, with Roku, that's just a single paid "channel". Then came the Amazon Firestick, and all the video content that comes with Prime. Netflix rounds out the offerings that are most common, and with them creating new content of their own, the likes of HBO and Starz begin to fade into distant memory.
So, what about DVDs? Well, most of the time, content available on DVD is available through one of our online subscriptions. But, not always. Netflix doesn’t have everything, and in particular, they don’t have some stuff that I would consider to be archival. Even if they do have something, they may not have it for very long.
I have a strategy around DVD purchasing. In general I'll only purchase a DVD if it's less than $10. I can justify this as it's less than the price of a single admission to a movie theatre. Also, if the DVD is cheaper than the rental price on Amazon, I might buy it. I'll buy those compilation DVDs that are like "Oceans 11, 12, 13, and the Original". That's 4 movies in all, at least a couple of which I've seen a couple of times, and would watch again. The very first one from the 60s was interesting, because although they "got away with it", they didn't end up with anything. I'll also purchase DVDs while in India, or on Amazon, because they won't necessarily show up in the US. Ra-One for example, or the Dhoom movies (although the latest did show up).
I have 118 DVDs now. I do two things. First, I archive the .ISO file and store it on the Synology NAS. Then I use Handbrake to convert it to a .mkv file, so that I can serve it up easily using a Plex client on the Roku, or Firestick, or any client in the home (iPad, phones, guest laptops). This is great. But, running a home NAS is an interesting business. The Synology is pretty good, and the one I have has been in almost constant operation for about 4 years now. I've added one disk, so it has roughly 5 terabytes of storage, with a couple of storage bays open. At some point within the next 4 years, I'll be contemplating replacing that thing, at a cost of who knows what. In the meanwhile, I hope to gosh nothing catastrophic happens to it, because other than being RAID, I don't have a backup, and who knows whether it might suffer a debilitating virus.
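With over a hundred discs, that archive-then-convert loop is worth scripting. Here's a sketch of how I'd batch it in Python; the directory paths are placeholders for however your NAS is laid out, and it assumes HandBrakeCLI (the command-line version of Handbrake) is installed and on the PATH.

```python
import subprocess
from pathlib import Path

ISO_DIR = Path("/volume1/archive/iso")    # placeholder: where the raw DVD images live
MKV_DIR = Path("/volume1/video/movies")   # placeholder: where Plex looks for media

def handbrake_cmd(iso_path: Path, out_path: Path) -> list:
    """Build the HandBrakeCLI command to convert one DVD image to .mkv."""
    return ["HandBrakeCLI",
            "-i", str(iso_path),
            "-o", str(out_path),
            "--preset", "Fast 1080p30"]

def transcode_all() -> None:
    """Convert every archived .iso that doesn't already have a .mkv."""
    if not ISO_DIR.is_dir():
        return
    for iso in sorted(ISO_DIR.glob("*.iso")):
        out_path = MKV_DIR / (iso.stem + ".mkv")
        if not out_path.exists():        # skip titles already converted
            subprocess.run(handbrake_cmd(iso, out_path), check=True)

transcode_all()
```

Run it overnight on the NAS (or a machine that can see it) and the conversion side of the chore mostly takes care of itself; only the initial disc rip still needs a human.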
Which brings me to a secondary analysis. I could leverage OneDrive, or some other cloud storage mechanism, to archive all this stuff, and just use the home NAS as a local cache. That would give me the quick access that I want, and the security of a cloud backup to boot. That would also be a great solution for when I travel. I can still have access to various files, without having to expose my home NAS to the wilds of the internet. The cost might be about the same as purchasing a new NAS in a few years, so that's something to look into.
On top of putting my data into an easily accessible place, I can then use it as a dataset for various experimentations. What is it about the types of movies that I collect? Run some cloud-based analytics on the images, dialogs, years, actors, etc. Basically, I could run my own little Netflix scoring engine, and decide on my own what kinds of new movies might be of interest to me. And then, I wonder if I could sell this information to advertisers, or movie makers? Something to think about.
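A toy version of that scoring idea: rate a candidate title by how much its metadata overlaps with what I already own. The titles and tags below are made up for illustration; a real version would pull metadata from the archived library.

```python
# Toy "personal Netflix" scorer: rate a candidate title by its average
# metadata overlap (Jaccard similarity) with titles already in the collection.
# Titles and tags are invented for illustration.
collection = {
    "Oceans Eleven": {"heist", "ensemble", "comedy"},
    "Dhoom":         {"heist", "action", "bollywood"},
    "Ra-One":        {"action", "bollywood", "scifi"},
}

def jaccard(a, b):
    """Overlap between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def score(candidate_tags, library):
    """Average similarity of a candidate against every owned title."""
    return sum(jaccard(candidate_tags, tags) for tags in library.values()) / len(library)

print(round(score({"heist", "bollywood", "action"}, collection), 3))
# -> 0.567
```

Even something this crude would flag that a new Bollywood heist movie belongs in my cart, while a period costume drama probably doesn't.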
And so, I find myself continuing to archive my DVDs. It’s something I’ve gone into and out of doing over the past 15 years. Today, they’re so cheap that even though a lot of content can be found through streaming services, it’s worth the convenience to store them and make them available locally. We’ll see if using the cloud as backup, or as primary storage, makes sense.
So, I received an email a few weeks back which essentially said "would you consider a role working for the CTO as a Technical Advisor". Well, at first, I wasn't sure what to think, but then I actually talked to the person who was asking, and I thought, "wait a minute, this could be a really cool thing".
It’s like this. At Microsoft, we don’t always have a person in the role of CTO. Bill Gates was “Chief Scientist” at one point, and Craig Mundie I think had the CTO role, as did Ray Ozzie. Sometimes it works, sometimes it’s a distraction.
The current CTO is Kevin Scott, and before I actually met him, the #1 comment everyone made about him was "he's a really cool guy". Well, after meeting him, I have the same sentiment. Kevin's not an industry luminary from the birth days of the personal computing industry like Ray Ozzie was; he's an engineer's engineer with a pedigree that extends through Google, a startup (AdMob), and LinkedIn, where he continues to be responsible for their backend systems.
I’ve been at Microsoft for 18 years, which means I’ve done a fair number of things, and I know a fair number of people. The first aspect of being a TA is getting around, meeting with people, and spreading the word that there’s actually a CTO.
What does the CTO do? Well, the best description I can give is the CTO acts as the dev manager/architect for the company. The scope and responsibility of the CTO can be very broad. Part of it is about efficiency of our joint engineering objectives. Part of it is making sure we’re marching to the beat of a similar drummer. Can you imagine, Microsoft has a few multi-billion dollar businesses, led by business managers who are fairly autonomous, and have quite strong independent personalities, or they would not be in the positions they are in. And along comes the CTO to help unify them.
Really, the job is being fairly impartial where necessary, and just reminding people of their shared goals and objectives, and helping them to reinforce achieving them.
Being a TA to the CTO? Mostly it's about going deep in various areas. Kevin Scott is a fast learner, fully capable of digesting tons of info and forming a well informed opinion on his own. The challenge is one of time. Microsoft is vast, and if you wanted to go beyond the surface level in many areas, you'd spend all your time in meetings, and not actually be able to synthesize anything. So, the TA role: we take that infinite number of meetings, go deep on multiple topics, synthesize to a certain level, and surface interesting bits to Kevin where decisions and direction might be required.
That's the surface description of the role and responsibility. The truth is, it's not at all a well defined role. Eric Rudder was Bill Gates's TA for five years, and he was quite a force, doing more than just feeding Bill Gates opinions on what he heard in the company. We'll see what our current office of the CTO is capable of, and what kinds of value we are going to impart to the company.
I am excited for this latest opportunity. I think it’s a fitting role for where I’m at in my career, and what value I can contribute to the company. So, here we go!