Tinkerer’s Closet – Selecting hardware

Well, there’s no better reason to clean out your closet or workspace than to fill it with new stuff! I’ve been spending a fair amount of time clearing out the old stuff, and I’ve gotten far enough along that I feel it’s safe to poke my head up and see what’s new and exciting in the world of hardware components today.

What to look at, though, so I don’t just fill up with a bunch more stuff that doesn’t get used for the next 10 years? Well, this time around I’m going project based. That means I will limit my searching to stuff that can help a project at hand. Yes, it’s useful to get some items just for the learning of it, but for a hoarder, it’s better to have an actual project in mind before making purchases.

On the compute front, I’ve been standardizing the low end around ESP32 modules. I’ve mentioned this in the past, but it’s worth a bit more detail. The company behind them, Espressif, came along within the past decade and just about took the maker community by storm. Low cost, communications built in (WiFi, Bluetooth), capable 32-bit processors. They are a decent replacement at the low end of things, taking the place of the venerable Arduino, which itself was a watershed in its day.

The keen thing about the Espressif modules is how programmable they are. You can use the Arduino IDE, or PlatformIO (an extension for Visual Studio Code), or Espressif’s own ESP-IDF toolchain. You can program them like a single CPU with full control of everything, or you can run a real-time OS (FreeRTOS) on them. This makes them super easy to integrate into anything from simple servo motor control to full-on robotics.
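
As a taste of that approachability, here’s a minimal sketch in MicroPython (yet another way to program these chips) that sweeps a hobby servo back and forth. The pin number is my assumption; adjust it to your wiring.

    # Minimal MicroPython sketch for an ESP32: sweep a hobby servo.
    # Assumes MicroPython is flashed to the board, and that the servo
    # signal wire is on GPIO 13 (an assumption -- match your wiring).
    from machine import Pin, PWM
    import time

    servo = PWM(Pin(13), freq=50)      # standard 50Hz hobby servo signal

    def set_angle(degrees):
        # Map 0-180 degrees onto a 1.0-2.0ms pulse in the 20ms frame;
        # the ESP32 port's duty() takes 0-1023 across the full period.
        pulse_ms = 1.0 + degrees / 180.0
        servo.duty(int(pulse_ms / 20.0 * 1023))

    while True:
        for angle in (0, 90, 180, 90):
            set_angle(angle)
            time.sleep(1)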

As for form factor, I’m currently favoring the Adafruit ‘Feather’ boards. The Feather form factor is a board specification that puts pins in certain locations, regardless of which actual processor is on the board. That makes for a module that can be easily integrated into designs, because you have known patterns to build around. I have been using the ESP32 Feather V2 primarily.

It’s just enough. A USB-C connector for power and programming. A battery connector for easy deployment (the battery charges when USB-C is plugged in). A STEMMA QT connector (a tiny 4-pin connector) for easy I2C connection of things like joysticks, sensors, anything on the I2C bus. An antenna built in (WiFi/Bluetooth radio on the right, with the black PCB antenna next to it).
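
For instance, a quick way to see what’s hanging off that STEMMA QT port is an I2C bus scan. Here’s a minimal MicroPython sketch; the SCL/SDA pin numbers are my assumption for the ESP32 Feather V2, so check the board’s pinout diagram before trusting them.

    # I2C bus scan in MicroPython on an ESP32 Feather.
    # Pin numbers are assumptions (verify against the Feather V2
    # pinout); the STEMMA QT connector shares this same I2C bus.
    from machine import I2C, Pin

    i2c = I2C(0, scl=Pin(20), sda=Pin(22), freq=400_000)
    for addr in i2c.scan():
        print("Found I2C device at address 0x{:02x}".format(addr))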

It’s just a handy little package, and my current best “computicle”. You can go even smaller and get ESP32 modules in different packages, but this is the best for prototyping in my lab.

As an aside, I want to mention Adafruit, the company, as a good source for electronic components. You can check out their about page to get their history. Basically, they were created in 2005, and have been cranking out the hits in the maker space ever since. What attracted me to them initially was their tutorials on the components they sell. They have kits and tutorials on how to solder, as well as how to integrate motors into an ESP32 design. Step by step, detailed specs, they’re just good people. They also pursue quality components. I mean, every USB cable is the same, right? Nope, and they go through the myriad options and only sell the best ones. So, if you’re in the market, check them out, at least for their tutorials.

Going up the scale from here, you have “Single Board Computers”. The mindshare leader in this space is definitely the Raspberry Pi. When it sprang onto the scene, there really wasn’t any option in the sub-$50 range. Since then (2012 or so), there has been an entire renaissance and explosion of single board computers. They are typically characterized by: an ARM-based processor, 4-8GB of RAM, USB power, HDMI output, a couple of rows of IO pins, running Linux (Ubuntu typically).

I’ve certainly purchased my share of Raspberry Pi boards and variants, but I tend to favor those coming from Hardkernel. I find their board innovations over the years to be better than what the Pi Foundation is typically doing. Also, they are more readily available. Hardkernel has commercial customers that use their boards in embedded applications, so they tend to have long-term support for them. Their boards are typically ARM-based and meant to run Linux, but they have Intel-based boards that run Windows as well.

Here’s a typical offering, the Odroid M1S.

The one thing that’s critical to have in a single board computer is software support. There are as many single board computers available in the world as there are grains of sand on a beach. What differentiates them is typically the software support, and the community around it. This is why the Raspberry Pi has been so popular. They have core OS support, and a super active community that’s always making contributions.

I find the Odroid boards to be similar, albeit with a much smaller community. They do have core OS support, and typically get whatever changes they make integrated into the mainline Linux development tree.

I am considering this M1S as a brain for machines that need more than what the ESP32 can handle. A typical situation might be a CNC machine, where I want a camera to watch how things are going and make adjustments if things are out of whack. For example, the camera sees that the cutting bit has broken, and automatically stops the machine. Or, it sees how the material is burring or burning, and adjusts feeds and speeds automatically.

For such usage, it’s nice to have the IO pins available, but communicating over I2C, CAN bus, or other means should be readily possible too.
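
As a sketch of what that plumbing might look like from the Linux side, here’s a short Python snippet using the smbus2 library to poll an ESP32 acting as an I2C peripheral. The bus number, device address, and register layout are all hypothetical placeholders.

    # Poll a hypothetical ESP32 I2C peripheral from a Linux SBC.
    # Requires the smbus2 package (pip install smbus2); the address
    # and register below are illustrative, not a real device map.
    from smbus2 import SMBus

    ESP32_ADDR = 0x42      # hypothetical address the ESP32 answers on
    STATUS_REG = 0x00      # hypothetical motor-controller status register

    with SMBus(1) as bus:  # /dev/i2c-1; the bus number varies per board
        status = bus.read_byte_data(ESP32_ADDR, STATUS_REG)
        print("Motor controller status: 0x{:02x}".format(status))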

This is reason enough for me to purchase one of these boards. I will look specifically for pieces I can run on it, like OpenCV or some other vision module. I have another CNC router that I am about to retrofit with new brains; this could be the main brain, while an ESP32 handles the motor control side of things.
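
To make the vision idea concrete, here’s a rough OpenCV sketch of the kind of check I have in mind: grab a reference frame of the intact bit, then flag the machine for a stop when the live view diverges too much from it. The camera index, thresholds, and the stop hook are placeholders; a real watchdog would mask to the bit region and debounce.

    # Rough sketch of a broken-bit watchdog using OpenCV in Python.
    # Camera index, thresholds, and the stop hook are placeholders.
    import cv2

    DIFF_THRESHOLD = 5000              # tuning knob: changed-pixel count

    def emergency_stop():
        print("Bit looks broken -- stopping the machine!")   # stub

    cap = cv2.VideoCapture(0)          # camera pointed at the bit
    ok, reference = cap.read()         # frame of the known-good bit
    reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, reference)
        _, changed = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(changed) > DIFF_THRESHOLD:
            emergency_stop()
            break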

Last is the dreamy stuff.

The BeagleV-Fire

This is the latest creation of BeagleBoard.org. This organization is awesome because they are dedicated to creating “open source” hardware designs. That’s useful to the community because it means various people will create variants of the board for different uses, making the whole ecosystem more robust.

There are two special things about this board. One is that it uses a RISC-V chip instead of ARM. RISC-V is an instruction set architecture which is itself open source and license free. It is a counter to ARM, whose designs carry license fees and various restrictions. RISC-V will likely take over the low end of the market for CPUs in all sorts of applications that have typically used ARM-based chips.

The other feature of these boards is an onboard, integrated FPGA (Field Programmable Gate Array). An FPGA is a chip whose logic, and thus its IO pins, can be reprogrammed. If you did not have a USB port, or you wanted another one, you could program some pins on the chip to be that kind of port. You can even program an FPGA to emulate a CPU, or other kinds of logic chips. Very flexible stuff, although challenging to program.

I’ve had various FPGA boards in the past, even ones that integrate a CPU. This particular board is another iteration of the theme, done by an organization that has been a strong contributor in the maker community for quite some time.

Why won’t I buy this board, as much as I want to? Because I don’t have an immediate need for it. I want to explore FPGA programming, and this may or may not be the best way to learn it. Getting an Odroid for creating a smarter CNC makes sense right now, so one of those boards is a more likely purchase in the near term. It might be that in my explorations of CNC I find myself saying “I need the programmability the BeagleV-Fire has to offer”, but that will be a discovery based on usage, rather than raw “I want one!”, which is a departure from my past tinkerings.

At this point, I don’t need to explore above single board computers. They are more than powerful enough for the kinds of things I am exploring, so nothing here about rack-mountable servers and Kubernetes clusters.

At the low end, ESP32 modules as my computicles. At the high end, Hardkernel single board computers for brains.


Tinkerer’s Closet – Hardware Refresh

I am a tinkerer by birth. I’ve been fiddling about with stuff since I was old enough to take a screwdriver off my dad’s workbench. I’ve done mechanical things, home repair, woodworking, gardening, 3D printing, lasering, just a little bit of everything over the years. While my main profession for about 40 years has been software development, I make the rounds through my various hobbies on a fairly regular basis.

Back around 2010, it was the age of 3D printers and IoT devices. I should say, it was the genesis of those movements, so things were a little rough. 3D printers, for example, are almost formulaic at this point. Kits are easily obtained, and finished products can be had for $300-$400 that would totally blow away what we had in 2010.

At that time, I was playing around with tiny devices as well. How to make a light turn on from the internet. How to turn anything on from a simple radio controller. As such, I was into Arduino microcontrollers, which were making the rounds of popularity, and much later, the Raspberry Pi and other “Single Board Computers”. There were also tons of sensor modules (temperature, accelerometers, light, moisture, motion, etc.), and little radio transmitters and receivers. The protocols were things like ZigBee, and just raw radio waves that could be decoded into ASCII streams.

As such, I accumulated quite a lot of kit to cover all the bases. My general motto was: “Buy two of everything, because if one breaks…”

The purchasing playground for all this kit was limited to a few choice vendors. In the past it would have been Radio Shack and HeathKit, but in 2010, it was:

Adafruit

SeeedStudio

SparkFun

There were myriad other creators coming up with various dev boards, like the low-power JeeLabs, or Dangerous Prototypes with their Bus Pirate product (still going today). But mostly their stuff would end up at one of these reliable vendors, alongside the vendors’ own creations.

Lately, and why I’m writing this missive, I’ve been looking at the landscape of my workshop, wanting to regain some space, and make space for new projects. As such, I started looking through those hidey holes, where electronics components tend to hide, and hang out for generations. I’ve started going through the plastic bins, looking for things that are truly out of date, no longer needed, never going to find their way into a project, no longer supported by anyone, and generally, just taking up space.

To wit, I’ve got a growing list of things that are headed for the scrap heap:

433MHz RF link kit, 915MHz RF link kit, various versions of Arduinos, various versions of Raspberry Pi, TV-B-Gone kit (built one, tossing the other, maybe saving it for soldering practice for the kids), various ZigBee modules, Parallax Propeller (real neat stuff), SIM card reader, Gadget Factory FPGA boards and wings, trinkets, wearables, and myriad other things as kits, boards, and what have you.

I’m sad to see it go, knowing how lovingly I put it all together over the years. But most of that stuff is from 13 years ago. Things have advanced since then.

It used to be that the “Arduino” was the dominant microcontroller and form factor for very small projects. Those boards could run $30, and weren’t much compared to what we have today. Nowadays, the new kids in town are the ESP32 line of compute modules, along with form factors such as the Adafruit-backed “Feather”. A lot of the modules you used to buy separately, like WiFi, are just a part of the chip package, along with Bluetooth. Even the battery charging circuitry, which used to be a whole separate board, is just a part of the module now. I can buy a Feather board for $15, and it will have LiPo charging circuitry, USB-C connectivity for power and programming, WiFi (b/g/n), and Bluetooth LE. The same board will have 8 or 16MB of flash, and possibly even dual cores! That’s about $100 worth of components from 2010, all shrunken down to a board about the size of my big thumb. Yes, it’s definitely time to hit refresh.

So, I’m getting rid of all this old stuff, with a tear in my eye, but a smile on my face, because there’s new stuff to be purchased!! The hobby will continue.

I’m happily building new machines, so my purchases are more targeted than the general education I was pursuing back then. New CPUs, new instruction sets, new data sheets, new capabilities, dreams, and possibilities. It’s both a sad and joyous day, because some of the new stuff at the low end even has the words “AI Enabled” on it, so let’s see.


It’s all about the Artificial Intelligence?

I can remember, in the 1980s and 90s, when we programmers were talking about things such as Smalltalk, Lisp machines, the Prolog programming language, and this upstart C++. Lots of discussions around modularity, simulation, patterns of programming, and whatnot. Back then, we even had discussions around neural networks, back propagation, and even nano-scale computers driven by push rods…

The machines of the time were not anywhere near as capable as even my now ‘older’ iPhone 12. I mean, a machine with a few megabytes of RAM, let alone a hard disk with 100 megabytes, would have been extraordinary. And yet, we envisaged the rise of the machines (Terminator – first DVD ever!), and if you were a programmer, “Snow Crash” fueled fantasies of a world of connected intelligence that drove a generation to create the distributed multi-player gaming environments we have today.

If we had then what we have now in terms of hardware, would we already be served by our robotic assistants? Probably not, but we’ve had decades to stew on the science of the tech, to refine the ‘artificial’ in the intelligence, and finally, the neural networks of yore have enough horsepower and training data today to fulfill a lot of the fantasies we envisaged back then.

I have mentioned a few key technologies in newsletters over the past year, so here I want to launch into a bit more detail about what I’ve been using, and how I see it impacting the future of programming. I have been working with Microsoft recently on the topic of “The Future of Work”. In particular, I have been exploring how roles change, how engineering itself changes, and how and when we might expect those changes.

One particular thing of note is that it’s very hard for anyone, even those creating the tech, to predict which specific features will emerge in any given timeframe. Just like early on, it was hard to know when a machine might be able to beat a chess grandmaster (February 1996, as it turned out). Instead of trying to predict specific features, I’ve instead started to predict a chronology. For example, I don’t know when Copilot or ChatGPT will be able to write code for any given situation better than I can, but I know this will happen eventually. Knowing this, I also know that my job as a ‘software engineer’ will change over time, to the point where I am no longer doing the mundane code writing, but have switched to more abstract systems design, so I can prepare for that.

Artificial Intelligence is a very broad term of course. In common speech, I take it to mean “that which we don’t quite understand, but we know it’s related to computers, and it’s going to be taking my job some day”. This can be a fearful interpretation. It’s the unknown. Much like the emergence of the ‘horseless carriage’, it might cause some fear at first, but over time, with gained experience, it becomes less a thing to fear, and more a tool to fuel a whole new industrial age.

What are some of the signposts in my chronology? I’ll use “computers” to represent the thing that is evolving.

  1. Computers will be able to assist in the development of software. That assistance will be rudimentary, more of a ‘copy/paste’ on overdrive, but it will accelerate software writing by 30%.
  2. Computers will be able to create a small-scale system based on domain knowledge and coding abilities.
  3. Computers will be able to interpret large-scale system designs, and generate and verify code for such systems, according to a human-derived specification.
  4. Computers will be able to create systems on the fly via human conversational interactions.
  5. Computers will be able to create new systems without human input, to satisfy the needs of a group of people, or other entities.

Those are signposts along a journey of development, considering software development alone. Over the next few months, I will explore what each of these signposts might really look like, and what we can do to prepare for their arrival and maximize our benefit from them.

I’ve been doing tech for 40+ years at this point. It’s been a year since I left Microsoft, and for that entire time I’ve been using the likes of ChatGPT and GitHub Copilot to enhance my coding capabilities. I have been able to create a level and quantity of software in that short amount of time that I would not have been able to in the past without such tools. We are at an inflection point, where the tools are good enough that we need to make very conscious choices about how to engage and use them, to ensure we reap the benefits to the betterment of humanity, rather than cowering in fear of what might be done to us.


Impressions – One week in a Tesla Model Y

We recently took a family vacation to Southern California. Some fun in the sun at the beaches, with a side trip to Disneyland to boot. We had first considered driving down from Seattle to Newport Beach, a drive we’ve actually done before, but we ultimately decided to fly down instead, and rent a car.

When I went to the Hertz site, I saw they had Teslas for rent, at a lower price than a comparable sedan, so I thought, “why not, I’ve never actually driven one before, let’s see what all the fuss is about”. So, a Tesla Model Y would await.

After a brief scare at the rental facility (“you’re going to wait 3 hours to get the car”), it turned out the car was actually sitting right there waiting for us (thank you, super credit card).

Now, I’m clearly a tech head, and have known about and followed the Tesla story from day one. Surprisingly, I’ve never actually driven one in all these years. I’ve read tons of reviews, and even own some shares of the stock, but actually experiencing the hype, not so much. My perspectives are a mix of technical analysis, and pragmatics. I’ll start with the technical perspective.

You get in, it’s a car, solid enough, comfortable enough, slow roll out of the rental facility at 10mph. When can I hit that accelerator and be thrown back into my seat like I’m launching a jet fighter? OK, we finally hit the freeway and… POW!! Shazowaa!! What a rush. OK, that lasted about 3 seconds, now settle down.

The drive

First is the ‘accelerator’, otherwise known as the ‘gas pedal’. Of course there’s no gas, but push the pedal on the right, and you go faster. Electric cars typically have regenerative braking, meaning the motor itself is used to help slow the car down, and while it’s doing that it generates electricity to recharge the battery a bit. You notice it most in slow city driving. Take your foot off the accelerator, and you instantly start to slow down. It took me several tries before I got good at stopping at the right place at a stop light. I was usually far short, because in our gas cars, taking your foot off the accelerator just puts the car into a ‘roll’; it keeps going till you put your foot on the brake. Of course, with the Tesla, you can actually tune this. There is a ‘roll’ mode, but I left the regen braking on because I wanted that new experience.
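
Out of curiosity, here’s the back-of-envelope arithmetic on what regen braking can actually recover. The mass and recovery figures are my rough assumptions, not Tesla specs.

    # Back-of-envelope: energy recoverable from one regen stop.
    # Mass and recovery fraction are rough assumptions, not specs.
    mass_kg = 2000            # ballpark curb weight for a Model Y
    speed_ms = 26.8           # 60 mph, in meters per second
    regen_efficiency = 0.7    # assumed fraction actually recovered

    kinetic_j = 0.5 * mass_kg * speed_ms ** 2
    recovered_kwh = kinetic_j * regen_efficiency / 3.6e6
    print(f"~{recovered_kwh:.2f} kWh recovered per 60-0 mph stop")

That works out to something like 0.14 kWh per stop with these numbers: small on its own, but it adds up over a day of stop-and-go city driving.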

As far as handling, it’s a car, turn the wheel, and it turns. One thing I did notice was that when I put on the turn signal to change lanes, it would automatically turn off once we were in the lane. That’s different than my much older Toyota Venza, which requires a certain amount of physically turning the wheel before it “clicks” and turns itself off.

In general, it drives like a car. Nothing super spectacular, and no glaring omissions. Although I did not figure out how to use the Full Self-Driving, there was a “go the speed of traffic” feature we did use, in stop-and-go traffic. Just turn it on, and you can take your feet off the accelerator and brake. When the car in front of you moves, the Tesla moves. When it slows or stops, the Tesla follows suit. This is great for saving your feet from pedal fatigue.

Modernity

There is a class of conveniences that I think can be chalked up to simply being a modern car. My Venza is circa 2010, so a bit long in the tooth (it lacks Bluetooth playback, for one). Many of the features of the Tesla can probably be found in any modern car. Of course there’s Bluetooth pairing, although it did not like my wife’s OnePlus, but paired fine with my iPhone 12. Very curious, that. Having Netflix and YouTube built in was a plus for when we visited the charging station (more on that later). There is a convenient ‘click to talk’ feature as well. Just click the button on the steering wheel and say what you want: “turn the AC up”. This is great, because since all the commands are hidden in the touch screen, and there are very few physical buttons in the cabin, having viable voice command is an absolute must.

Fit and Finish

In general, it seemed to be a fairly solid car. In particular, the doors slam solid. The frunk, though, seems to be another story. That was the flimsiest hood I’ve ever experienced. I felt I’d bend or break it if I slammed it down, rather than cradling it back into its closed position. I hope that improves over time; it seems like an oversight on an otherwise solid-feeling car.

The ‘sun roof’ extends the whole length of the car. That was a funny one in the beginning because we were first looking for the ‘sun roof’ before we realized it was the whole roof.

Range/Charge Anxiety

Never having driven an electric, I was not familiar with how far you can go on a charge, when you should recharge, and so on. Since we were staying at a hotel that did not have an on-premise charging station, I was probably more conscious about our state of charge than I needed to be each time we went out. We started at about 70%, and a 30-mile drive to the hotel took us down to 50ish%. Then a trip to Long Beach, and we were looking for a Supercharger for the return trip. 20 minutes on the Supercharger and we were back to 80%. A few more local small trips, and by Friday we were back in the 40% range and looking for another top-up. Another Supercharger, another 20 minutes, another 80%. We did not bother charging before returning the car, but there was a charging station near the airport.

I can say I felt range anxiety. We were going to take a side trip to Arizona, and the internal mapping app showed we’d make it with a couple of charges along the way, but I thought better of it, and we did not take that trip.

I’m sure if I were a regular driver, and charging it at home, I would not feel the anxiety, and I’d love never having to go to a gas station again, ever.

Final Impressions

The Tesla Model Y is a great rental car. A dual motor, maxed-out unit is quite nice. There’s a bit of getting used to it, but if you already have a modern car, the differences are so minor you get over them within minutes.

Would I buy one for my family? Well, no, not really. This doesn’t come down to any technical features; it comes down to pragmatics. We have a family of 4, with two young children. Our lives require a minivan, or equivalent. We need to haul stuff, bikes, equipment, multiple kids (beyond our own), and a sedan just doesn’t do it for us. I’m not quite sure how a Model Y gets an “SUV” designation, but it’s not even a ‘station wagon’ like the Venza is.

For our family, we need utility, either that of a minivan (Chrysler Pacifica is our daily driver), or a king cab truck.

I’m desperate to get into electric vehicles, and we’re just pumping out more pollution every time we drive today, but I want to make a practical choice if I can. I would love to get an all-electric minivan, although I haven’t seen one in production that would fit the bill yet. Barring that, I would not mind a Ford F-150 Lightning. I had money down on a Rivian originally, but cancelled at the last moment because the price was just too outrageous for what it was (a mid-sized truck).

In short, the Tesla Model Y was a great rental car experience, and I will likely rent one again next time. As a practical matter for our family though, it’s not the car for us to make as our next purchase.


Looking Back to the Road Ahead

As 2022 comes to a close, I am somewhat reflective, but, mostly looking ahead. This past year was certainly tumultuous on several fronts. Coming more solidly out of Covid protocols, kids firmly back in school, life contemplated, and perhaps most impactful personally, leaving Microsoft after 24 years of service.

What did I do in my first days after leaving the company? Start coding, of course! I started coding when I was 12 years old on a Commodore PET. To say ‘coding is in my blood’ would be an understatement. I have been coding longer than I’ve been able to hold coherent conversations with adults, and I don’t see myself stopping any time soon.

I’ve always thought that coding is storytelling. You’re telling a story, converting some sort of desire into language the computer can understand and execute. The computer, for its part, is super simplistic, with a limited vocabulary. Just think about it. How much work would you have to put into telling someone the directions to your house if you could only communicate in numbers, arithmetic, and simple logic: ‘if’, ‘compare’, ‘then’? You don’t have the higher-order stuff like “get on the highway, head south”. You have to go all the way back to first principles, and somehow encode the ‘highway’ and ‘head south’. That’s why we’ve had programming languages as long as we’ve had computers, and it’s also why we’ll continue to develop more programming languages: this stuff is just too hard.

My recent weeks have been filled with various realizations related to the state of computing. When you have the leisure to take a step back, and just observe the state of the art in computing these days, you can gain both an appreciation, and a feeling of being overwhelmed at the same time.

It wasn’t too long ago that the company Adapteva (http://www.adapteva.com) was pioneering, pushing a CPU architecture that had 64 64-bit RISC processors in a single package. That was the Parallella computer. The experimental board was roughly Raspberry Pi-sized, and packed quite a punch. The company did not survive, but now 64 cores is not outrageous, at least for data center class machines and workstations.

Meanwhile, nVidia, AMD, and Intel have been pushing the compute cores in graphics processors into the hundreds and thousands. At this point, the GPU is the new CPU, with the CPU being relegated to mundane operating system tasks such as managing memory and interacting with peripherals. Most of the computation of consequence is happening on the GPU now. And, accordingly, the GPU now commands the lion’s share of the PC price. This makes sense, as the CPU has become a commodity part, with the AMD/Intel wars at a point of equilibrium. No longer can they win by juicing clock rates; now it’s all about cores, and they just keep leapfrogging each other. nVidia is not standing still, and will be dipping their toe into the general computing market (as it relates to data centers, at least) in due time.

nVidia, long a critical piece of the High Performance Computing (HPC) scene, is pushing further down the stack. They’re driving a new three-letter acronym: the Data Processing Unit (DPU). With a nod to modernity, and decades of experience in the HPC realm, the DPU promises to be a modern replacement for a lot of the disparate, discrete pieces of computing found in and around data centers.

nVidia isn’t slouching on graphics though. Aside from their hardware, they continue to make strides in the realm of graphics algorithms. NeuralVDB is one of those innovations, improving the ability to render things like water, fire, smoke, and clouds; it’s about the algorithm, not the hardware. Bottom line: better-looking simulations, in less time, requiring less energy. That’s a great direction to go.

But this is just the graphics area related to nVidia. There has been an explosion of algorithms in the “AI” space as well. While the headliner might be OpenAI and their various efforts, such as DALL-E, which can generate any image you can imagine, there are other efforts as well. The OpenAI Whisper project is all about achieving even better speech-to-text transcription (English primarily).

Not to be left in the dark, Google, Microsoft, Meta, even IBM, and myriad researchers in companies, universities, and private labs are all driving hard on several fronts to evolve these technologies. This is the ‘overwhelm’ part. One thing is sure: the pace of change is accelerating. We don’t even have to wait for the advent of ‘quantum computing’; the future is now.

The opportunities in all this are tremendous, but it takes a different perspective than we’ve had in the past to ride the waves of innovation. There will be no single winner here, at least not yet. The various algorithms and frameworks that are emerging are real game changers. DALL-E and the like are making it possible for everyday individuals to come up with reasonable artwork, for example. This could be a threat to those who make their living in the creative arts, or it could be a tremendous new tool to add to their arsenal. More imagination and tweaking are required to make truly brilliant art, compared to the standard fare individuals such as myself might come up with.

One thing that has emerged from all this, and the thing that really gets me thinking, is that conversational computing might start to emerge now. What I mean by that: DALL-E and others work off of prompts you type in: “A teddy bear washing dishes”. You don’t write C or JavaScript or RenderMan, you just type plain English, and the computer turns that into the image you seek. Well, what if we take that further? “Show this picture to my brother.” An always-listening system that has observed things like “brother”, knows the context of the picture I’m talking about, and has learned myriad ways to send something to my brother, will figure out what to do, without much prompting from me. In cases of ambiguity, it will ask me questions, and I can provide further guidance.

This goes far beyond “hey Siri”, which is limited to very specific tasks. This is the confluence of AI, digital assistant, digital me, visualization, and conversational computing.

When I look back over the roughly 40 years of computing that I’ve been engaged in, I see the evolution of computers from the first PC and hobbyist machines, to the super computers we all carry around in our pockets in the form of cell phones. Computing is becoming ubiquitous, as it is woven into the very fabric of our existence. Our programming is evolving, and is reaching a breaking point where we’ll stop using specialized languages to ‘program’ the machine, and we’ll begin to have conversations with them instead, voicing our desires and intents rather than giving explicit instructions.

It’s been a tremendous year, with many changes. I am glad I’ve had the opportunity to lift my head up, have a look around, and dive in a new direction leveraging all the magic that is being created.


2+ Decades @ Microsoft : A Retrospective

I joined Microsoft in November of 1998. During Black History Month in 2022, I sent out an email to my friends and colleagues giving a brief summary of my time at the company, my own personal “black history”. Over the past year, I’ve been engaged in some personal brand evolution, and blogging is good for long-form communication, so I’m going to repeat some of that Microsoft history here as a way to set the stage for the future. Here, almost unedited, is the missive I shared with various people as a reflection on Black History Month, 2022.

Hello,

If you’re receiving this, you’re probably no stranger to receiving missives from me on occasion.

Here we are at the end of Black History Month.

I am black, and my recent history is 24 years of service at Microsoft.

I’ve done a lot in those years from delivering core technology (XML), to creating Engineering Excellence in India (2006 – 2009), to dev managing the early Access Control and Service Bus components of the earliest incarnation of Azure. 

I’ve also had the pleasure of creating the LEAP program, which is helping to make our industry more inclusive, and helped to establish Kevin Scott in the freshly re-birthed Office of the CTO. While in OCTO, inspired and guided by a young African engineer, I had the pleasure of supporting the push into our African dev centers (Kenya and Nigeria), which now number around 650 employees.

My current push is to hire folks in the Caribbean, yet another relatively untapped talent market.

This past couple of years has been particularly charged and poignant, with the combination of Covid and the various events leading to the emergence of “Black Lives Matter”.

Throughout the arc of the 24 years I have spent in the company, I have gone from “I’m just here to do a job”, to “There is a job I MUST do to support my black community”.  I have been happy that the company has given me the leeway to do what I do, while occasionally participating in bread and butter activities. 

I am encouraged to see and interact with a lot more melanin-enhanced people from around the world, and in the US specifically. We have a long road ahead, but we are in fact making progress.

Over the past year, I have thought about what I can do, how I can leverage my 35+ years of experience in tech, to empower even more people, and enable the next generation to leapfrog my own achievements.  To that end, I’ve started speaking out, starting ventures, providing support, beyond the confines of our corporate walls.  I have appeared on several podcasts over the past couple of months, and will continue to appear in a lot more.  This year I will be making appearances at conferences, writing a book, etc.

If you’re interested in following along, and getting some insights about this guy that pesters you in email on occasion, you can check out my web site, which is growing and evolving.

William A Adams (william-a-adams.com)

William A Adams Podcast Guest Appearances

At the bottom of the media page (the second link), you’ll see a piece by the Computer History Museum in Silicon Valley. Some have already seen it, but there’s actually a blog post the museum did that goes along with it. It’s one of those retrospectives of a couple of black OGs in tech (me and my brother), from the earlier days in Silicon Valley up to the present.

And so it goes.  We have spent another month reflecting on blackness in America.  We are making positive strides, and have so much more to achieve.  I am grateful for the company that I keep, and the continued support that I enjoy in these endeavors.

Don’t be surprised if I ask you to come and give a talk somewhere in the Caribbean within the coming year.  We are transforming whole communities with the simple acts of being mindful, intentional, and present.

  • William

And with that, dear reader, welcome back to my blog, wherein I will be a regular contributor, sharing thoughts in long form, sometimes revisiting topics of old, and mostly exploring topics anew.


Reading Fine Print – A new credit card

So, my kids wanted to buy me a large teddy bear for my birthday. There just so happened to be one at the local Safeway, but it was $75. The last time we bought a giant stuffed thing, it was a giant dog from Costco. I don’t remember the price, but I thought, Costco, it’s got to be cheaper…

We went down to Costco, but we haven’t had a membership there for years. Time to renew. One thing led to another, and rather than the simple run-of-the-mill membership, I allowed myself to be talked into the “Executive” membership, which ‘gives’ you a credit card, and a $60 cash back card (offsetting the extra expense of the super membership). Well, how bad could it be? I went from having really no credit cards last year, to having 4 of them today. That must be good for creditworthiness, right? At any rate, I finally got the card, and thought, hey, I might as well read all the fine print.

The first thing that came in the mail was the “Account approval notice”. This one is interesting because it’s basically just the “congratulations, you’re approved for a card, it will be coming in the mail shortly”. It does list the credit limit, the outrageous interest rates, and down at the bottom, below the fold, “Personalize your PIN”. Aha! This normally discarded little piece of paper is the one that has the credit card PIN, which most people don’t know. For an ATM card, you always know the PIN, because without it you basically can’t use the card. But your credit card PIN? I don’t usually know that, and why? Because I’m not looking for it, and I usually throw away this intro piece of paper. Well, now I know, and I’ll try to keep track of these random 4 digits.

Next up, the giant new card package.  This is the set of papers which include the terms and conditions in minute detail.  This shows the 29% rate you’ll be charged whenever you do anything wrong (like not pay your bill on time), as well as the ‘arbitration’ clause, which ensures you never sue them whenever they do something wrong.  One small piece of paper in this set says “FACTS” at the top of it.

The FACTS sheet.  This piece of paper tells me about the many ways in which they’re going to use the information they gather on me to market to me.  Not only the company itself, but their affiliates, and even non-affiliates (basically anyone who wants the data).  This is normally a throw away piece as well, but this time I decided to read the fine print.  What I found was one section titled “To limit our sharing”.  Well, that sounds good.  Call a phone number, go through some live menu choices, and there you have it, you’ve limited the usage of this data.  All you can do is limit the affiliate usage of your data, but it’s something.  I even chose the option to have them send me a piece of paper indicating the choices that I made.

I feel really proud of myself.  I normally ignore most of the stuff that comes from credit card companies, as most of it is marketing trying to sign me up for more credit cards, or point systems, or whatever.  This time, I really dug in, and caught some interesting details.  I’m curious to see how the “don’t market to me” thing works out.  Of course, once you click off that checkbox, they probably simply sell your info off to someone else to harvest.  I feel like that’s what happens when you unsubscribe from an email list as well, but I can’t prove it.

At any rate, I learned something new today.  Read some of the fine print, try out a little something you haven’t in the past, and go on an adventure!


Aging in Tech

My birthday is coming up in November, and just today I was clicking through one of those websites that says “45 discounts seniors can enjoy”. I’ve been doing “computing” in one form or another since I was about 10 years old, and I’m about to be 53. If I can do the math, that’s been a very long time. Looking back on my earlier years, I recognize a cocky genius of a software engineering talent (if I do say so myself). In more recent years, it hasn’t been about an ability to sling code hard and fast, but rather about reflecting upon years of developing various kinds of systems to come up with non-obvious solutions faster than I would have otherwise.

Aging in tech typically means sliding slowly into a management position, slowly losing your tech chops, and mostly riding herd over the young guns coming up through the ranks. I’ve taken a slightly different path over the past few years. I did manage the cats who created some very interesting tech for Microsoft: XML, LINQ, ACS/Service Bus, Application Gateway, but more recently I found myself writing actual code, while inventing new ways to hire for diversity (http://aka.ms/leapit). It is this latter initiative that I find very fascinating and invigorating as I age in tech.

The premise of the LEAP program is that ‘tech’, broadly speaking, has advanced enough in terms of complexity that some things are now easier to achieve than they might have been 10-15 years ago. The kinds of “programming” we’re doing are changing. Whereas 15 years ago, having the skills to debug the Windows kernel was a great thing to look for, today, being able to do a mash-up with the myriad web frameworks that are available is most interesting. Knowing R or machine learning tools is increasingly important. Those kernel debug skills, not so much.

But still, there’s a need for old codgers to apply themselves in ever more creative ways. I look out onto the tech landscape, and I see myriad opportunities. I see the continent of Africa sitting there, daring us to capture the energy and greatness that awaits. I see urban environments across the US, all consumers of tech, that can be turned into creators of tech just as easily. I see AI applications that can be applied to our ever-burgeoning population of elder folks: robots, AI, automation of various forms. As an older technologist, rather than going softly into that good night, lamenting the loss of my lightning-quick programming skills, I see opportunity to leverage what I’ve learned over the years to identify opportunities, and marshal teams of engineers to go after them, adding guidance and experience where necessary, but otherwise just getting out of the way so the energetic engineers can do their thing.

I may or may not be able to pass a typical tech interview screen these days, but I’m more concerned with changing how we interview for tech roles in the first place. I’m more likely to identify how to incorporate the views of youth, the elderly, the farmer, the street performer, into the evolution of tech offerings to make their lives better. I’m more likely to, without fear, create a tech startup with a clear purpose, and the financial support necessary to see it through its early rounds.

Aging in tech can be a harrowing experience. In some cases we age out of certain roles, but with some foresight and thoughtfulness, we can leverage our years of experience to do ever more impactful things, while avoiding being merely surpassed by our up-and-coming peers.

So, as I age in tech, I’m looking forward to the discounts that are coming when I reach 55.  I’m looking forward to the seniors menu at Denny’s.  I’m looking forward to being able to think of anything I can imagine, and actually turn it into something that is helpful to society.

Aging in tech is something that happens to everyone, from the first line of code you write, to the last breath you take.  I’ve thoroughly enjoyed the journey thus far, and am looking forward to many more years to come.


Obsolete and vulnerable?

For the past few years, I’ve had this HP Photosmart printer.  It’s served me well, with nary a problem.  Recently, I needed to replace ink, so I spent the usual $60+ to replace all the cartridges, and then it didn’t work…

An endless cycle of “check the ink” ensued, at which point I thought, OK, I can buy some more cartridges, rinse, repeat, or I can buy another printer.  This is the problem with printers these days.  Since all the money is made on the consumables, buying a new printer is anywhere from a rebated ‘free’, to a few hundred dollars.  Even laser printers, which used to cost $10,000 when Apple came out with their first one back in the day, are a measly $300 for a color laser!

So, I did some research. In the end I decided on the HP MFP M277dw.

It’s a pretty little beast.  It came with an installation CD, which is normal for such things.  But, since my machine doesn’t have a CD/DVD/BFD player in it, I installed software from their website instead.

It’s not often that I install hardware on my machine, so it’s a remarkable event. It’s kind of like those passwords you only have to use once a year. You’ll naturally try to follow the most expedient path. So, I downloaded and installed the HP installer appropriate for this device and my OS. No MD5 checksum available, so I just trust that the download from HP (at least over HTTPS) is good. But these days, any compromise to that software is probably deep in the firmware of the printer already.
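
For what it’s worth, when a vendor does publish a checksum, verifying a download takes only a few lines of Python. Here’s a sketch; the file name and expected digest are made-up placeholders, and I’m using SHA-256 since MD5 is long obsolete for this job.

    # Verify a downloaded installer against a published checksum.
    # The file name and expected digest are made-up placeholders.
    import hashlib

    EXPECTED = "0123abc..."   # the digest the vendor would publish

    def sha256_of(path, chunk_size=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of("hp_printer_installer.exe")
    print("OK" if digest == EXPECTED else "MISMATCH -- do not install!")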

The screens are typical: a list of actions that are going to occur by default. These include automatic update, customer feedback, and some other things that don’t sound essential to the core functioning of my printer. The choice to turn these options off is hidden behind a blue link at the bottom of the screen. Quite unobtrusive, and if I’m color blind, I won’t even notice it. It’s not a button, just some blue text. So, I click the text, which reveals some check boxes I can use to turn off various features.

Further into the installation: “Do I want HP Connect?” Well, I don’t know; I don’t know what that is. So, I leave that checked. Things rumble along, and a couple of test pages are printed. One says “Congratulations!” and proceeds to give me the details on how I can send email to my printer for printing from anywhere on the planet! Well, that’s not what I want, and I’m sure it involves having the printer talk to a service out on the internet looking for print requests, or worse, it’s installed a reverse proxy on my network, punching a vulnerability hole in the same. It just so happens a web page for printer configuration shows up as well, and I figure out how to turn that particular feature off. But what else did it do?

Up pops a dialog window telling me it would like to authenticate my cartridges, giving me untold riches in the process.  Just another attempt to get more information on my printer, my machines, and my usage.  I just close that window, and away we go.

I’m thinking, I’m a Microsoft employee.  I’ve been around computers my entire life.  I probably upgrade things more than the average user.  I know hardware, identity, security, networking, and the like.  I’m at least an “experienced” user.  It baffles me to think of how a ‘less experienced’ user would deal with this whole situation.  Most likely, they’d go with the defaults, just clicking “OK” when required to get the darned thing running.  In so doing, they’d be giving away a lot more information than they think, and exposing their machine to a lot more outside vulnerabilities than they’d care to think about.  There’s got to be a better way.

Ideally, I think I’d have a ‘home’ computer, like ‘Jarvis’ for Tony Stark. This is a home AI that knows about me, my family, our habits and concerns. When I want to install a new piece of kit in the house, I should just be able to put that thing on the network, and Jarvis will take care of the rest, negotiating with the printer and manufacturer to get basic drivers installed where appropriate, and only sharing what personal information I want shared, based on knowing my habits and desires. This sort of digital assistant is needed even more by the elderly, who are awash in technology that’s rapidly escaping their grasp. Heck, forget the elderly; even for average computer users, whose interaction with a ‘computer’ extends to their cell phones, tablets, and console gaming rigs, this stuff is just not getting any easier.

So, beyond just hoping for that, this lesson in hardware installation reminds me that the future of computing doesn’t always lie in the shiny new stuff. Sometimes it’s just about making the mundane work in an easier, more secure fashion.



Media Marshalling – Why I still archive my DVDs

I’ve gone back and forth on this over the years. For the most part we’re ‘cord cutters’. For me it wasn’t about cost, but about changing viewing habits. We found that of the cable offerings, all we were really using was the connection to Sling TV so we could watch Indian serials and movies. Well, with Roku, that’s just a single paid “channel”. Then came the Amazon Firestick, and all the video content that comes with Prime. Netflix rounds out the offerings that are most common, and with them creating new content of their own, the likes of HBO and Starz begin to pale into a distant memory.

So, what about DVDs?  Well, most of the time, content available on DVD is available through one of our online subscriptions.  But, not always.  Netflix doesn’t have everything, and in particular, they don’t have some stuff that I would consider to be archival.  Even if they do have something, they may not have it for very long.

I have a strategy around DVD purchasing. In general, I’ll only purchase a DVD if it’s less than $10. I can justify this as it’s less than the price of a single admission to a movie theatre. Also, if the DVD is cheaper than the rental price on Amazon, I might buy it. I’ll buy those compilation DVDs that are like “Ocean’s 11, 12, 13, and the original”. That’s 4 movies in all, at least a couple of which I’ve seen a couple of times and would watch again. The very first one from the 60s was interesting, because although they “got away with it”, they didn’t end up with anything. I’ll also purchase DVDs while in India, or on Amazon, because they won’t necessarily show up in the US. Ra.One for example, or the Dhoom movies (although the latest did show up).

I have 118 DVDs now, and I do two things with each one. First, I archive the .ISO file and store it on the Synology NAS. Then I use Handbrake to convert it to a .mkv file, so that I can serve it up easily using a Plex client on the Roku, or Firestick, or any client in the home (iPad, phones, guest laptops). This is great. But running a home NAS is an interesting business. The Synology is pretty good, and the one I have has been in almost constant operation for about 4 years now. I’ve added one disk, so it has roughly 5 terabytes of storage, with a couple of storage bays open. At some point within the next 4 years, I’ll be contemplating replacing that thing, at a cost of who knows what. In the meanwhile, I hope to gosh nothing catastrophic happens to it, because other than being RAID, I don’t have a backup, and who knows if it might suffer a debilitating virus.
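
That workflow is easily scripted, too. Here’s a hedged Python sketch that walks a folder of archived ISOs and shells out to HandBrakeCLI for the .mkv conversion; the paths and the preset name are my assumptions (run HandBrakeCLI --preset-list to see what your build actually offers).

    # Batch-convert archived DVD ISOs to .mkv via HandBrakeCLI.
    # The paths and preset name are assumptions for illustration.
    import pathlib
    import subprocess

    ISO_DIR = pathlib.Path("/volume1/archive/iso")    # NAS share (assumed)
    MKV_DIR = pathlib.Path("/volume1/video/movies")   # Plex library (assumed)

    for iso in sorted(ISO_DIR.glob("*.iso")):
        mkv = MKV_DIR / (iso.stem + ".mkv")
        if mkv.exists():
            continue                                  # already converted
        subprocess.run(
            ["HandBrakeCLI", "-i", str(iso), "-o", str(mkv),
             "--preset", "Fast 1080p30"],             # a stock preset
            check=True,
        )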

Which brings me to a secondary analysis. I could leverage OneDrive, or some other cloud storage mechanism, to archive all this stuff, and just use the home NAS as a local cache. That would give me the quick access that I want, and the security of a cloud backup to boot. That would also be a great solution for when I travel. I could still have access to various files, without having to expose my home NAS to the wilds of the internet. The cost might be about the same as purchasing a new NAS in a few years, so that’s something to look into.

On top of putting my data into an easily accessible place, I can then use it as a dataset for various experimentations. What is it about the types of movies that I collect? Run some cloud-based analytics on the images, dialogs, years, actors, etc. Basically, I could run my own little Netflix scoring engine, and decide on my own what kinds of new movies might be of interest to me. And then, I wonder if I could sell this information to advertisers, or movie makers? Something to think about.

And so, I find myself continuing to archive my DVDs.  It’s something I’ve gone into and out of doing over the past 15 years.  Today, they’re so cheap that even though a lot of content can be found through streaming services, it’s worth the convenience to store them and make them available locally.  We’ll see if using the cloud as backup, or as primary storage, makes sense.