Hello Scene – What’s in a Window?

Yes, what is a Window? How do I draw, how do I handle the user’s mouse/keyboard/joystick/touch/gestures?

As a commenter pointed out on my last post, I’ve actually covered these topics before. Back then, the operative framework was ‘drawproc’. The design center for drawproc was being able to create ‘modules’, which were .dll files, and then load them dynamically with drawproc at runtime. I was essentially showing people how something like an internet browser might work.

Things have evolved since then, and what I’m presenting here goes more into the design choices I’ve made along the way. So, what’s in a “Window”?

It’s about a couple of things. Surely it’s about displaying things on the screen. In most applications, the keyboard, mouse, and other ‘events’ are also handled by the Window, or at least strongly related to it. This has been true in the Windows environment from day one, and still persists to this day. For my demo scene apps, I want to make things as easy as possible. Simply, I want a pointer to some kind of frame buffer, where I can just manipulate the values of individual pixels. That’s first and foremost.

How to put random pixels on the screen? In minwe, there are some global variables created as part of the runtime construction. So, let’s look at a fairly simple application and walk through the bits and pieces available.

Just a bunch of random pixels on the screen. Let’s look at the code.

#include "apphost.h"

void drawRandomPoints()
{
	for (size_t i = 0; i < 200000; i++)
	{
		size_t x = random_int(canvasWidth-1);
		size_t y = random_int(canvasHeight-1);
		uint32_t gray = random_int(255);

		canvasPixels[(y * canvasWidth) + x] = PixelRGBA(gray, gray, gray);
	}
}

void onLoad()
{
	setCanvasSize(800, 600);
	drawRandomPoints();
}

That’s about as simple a program as you can write and put something on the screen. In the ‘onLoad()’, we set the size of the ‘canvas’. The canvas is important as it’s the area of the window upon which drawing will occur. Along with this canvas comes a pointer to the actual pixel data that is behind the canvas. A ‘pixel’ is this data structure.

struct PixelRGBA 
{
    uint32_t value;
};

That looks pretty simplistic, and it really is. Pixel values are one of those things in computing that has changed multiple times, and there are tons of representations. If you want to see all the little tidbits of how to manipulate the pixel values, you can check out the source code: pixeltypes.h

In this case, the structure is the simplest possible, tailored to the Windows environment, and to how quickly you can present something on the screen with the least amount of fuss. How this actually gets displayed on screen is by calling the ancient GDI API ‘StretchDIBits’:

    int pResult = StretchDIBits(hdc,
        0, 0, DestWidth, DestHeight,        // destination rectangle
        0, 0, SrcWidth, SrcHeight,          // source rectangle
        gAppSurface->getData(),             // the pixel data
        &fBMInfo, DIB_RGB_COLORS, SRCCOPY);

The fact that I’m using something from the GDI interface is a bit of a throwback, and current day Windows developers will scoff, smack their foreheads in disgust, and just change the channel. But, I’ll tell you what, for the past 30 years this API has existed and worked reliably, and counter to any deprecation rumors you may have heard, it seems to be stable for the foreseeable future. So, why not DirectXXX something or other? Well, even DirectX still deals with a “DeviceContext”, which will show up soon, and I find the DirectXXX interfaces to be a lot of overkill for a very simple demo scene, so here I stick with the old.

There are lots of bits and pieces in that call to StretchDIBits. What we’re primarily interested in here is the ‘gAppSurface->getData()’. This will return the same pointer as ‘canvasPixels’. The other stuff is boilerplate. The best part is, I’ve encapsulated it in the framework, such that I’ll never actually call this function directly. The closest I’ll come to this is calling ‘refreshScreen()’, which will then make this call, or other necessary calls to put whatever is in the canvasPixels onto the actual display.

And where does this pixel pointer come from in the first place? Well, the design considerations here are about creating something that interacts well with the Windows APIs, as well as something I have ready access to. The choice I make here is to use a DIBSection. The primary thing we need to interact with the various drawing APIs (even DirectX) is a DeviceContext. This is basically a pointer to a data structure that Windows can deal with. There are all kinds of DeviceContexts, from ones that show up on a screen, to ones associated with printers, to ones that are just in memory. We want the last of these. There are lots of words to describe this, but the essential code can be found in User32PixelMap.h, and the real working end of that is here:

bool init(int awidth, int aheight)
{
    fFrame = { 0,0,awidth,aheight };

    fBytesPerRow = winme::GetAlignedByteCount(awidth, bitsPerPixel, alignment);

    fBMInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    fBMInfo.bmiHeader.biWidth = awidth;
    fBMInfo.bmiHeader.biHeight = -(LONG)aheight;    // top-down DIB Section
    fBMInfo.bmiHeader.biPlanes = 1;
    fBMInfo.bmiHeader.biBitCount = bitsPerPixel;
    fBMInfo.bmiHeader.biSizeImage = fBytesPerRow * aheight;
    fBMInfo.bmiHeader.biClrImportant = 0;
    fBMInfo.bmiHeader.biClrUsed = 0;
    fBMInfo.bmiHeader.biCompression = BI_RGB;
    fDataSize = fBMInfo.bmiHeader.biSizeImage;

    // We'll create a DIBSection so we have an actual backing
    // storage for the context to draw into
    // BUGBUG - check for nullptr and fail if found
    fDIBHandle = ::CreateDIBSection(nullptr, &fBMInfo, DIB_RGB_COLORS, &fData, nullptr, 0);

    // Create a GDI Device Context
    fBitmapDC = ::CreateCompatibleDC(nullptr);

    // select the DIBSection into the memory context so we can 
    // perform operations with it
    fOriginDIBHandle = ::SelectObject(fBitmapDC, fDIBHandle);

    // Do some setup to the DC to make it suitable
    // for drawing with GDI if we choose to do that
    ::SetBkMode(fBitmapDC, TRANSPARENT);
    ::SetGraphicsMode(fBitmapDC, GM_ADVANCED);

    return true;
}

That’s a lot to digest, but there are only a couple of pieces that really matter. First, in the call to CreateDIBSection, I pass in fData. This will be filled in with a pointer to the actual pixel data. We want to retain that, as it’s what we use for the canvasPixels pointer. There’s really no other place to get this.

Further down, we see the creation of a DeviceContext, and the magic incantation of ‘SelectObject’. This essentially associates the bitmap with that device context. Now, this is set up both for Windows to make graphics library calls, and for us to do whatever we want with the pixel pointer. This same trick makes it possible to use other libraries, such as freetype, or blend2d, pretty much anything that just needs a pointer to a pixel buffer. So, this is one of the most important design choices to make. Small, lightweight, supports multiple different ways of working, etc.

I have made some other simplifying assumptions while pursuing this path. One is in the pixel representation. I chose rgba-32bit, and not 15 or 16 or 24 or 8 bit, which are all valid and useful pixel formats. That is basically in recognition that when it comes to actually just putting pixels on the screen, 32-bit is by far the most common, so using this as the native format will introduce the least amount of transformations, and thus speed up the process of putting things on the screen.

There is a bit of an implied choice here as well, which needs to be resolved one way or another when switching between architectures. This code was designed for the x64 (intel/AMD) environment, where “little-endian” is how integers are represented in memory. If you’re not familiar with this, a brief tutorial.

This concerns how integer values are actually laid out in memory. Let’s look at a hexadecimal representation of a number for easy viewing: 0xAABBCCDD (2,864,434,397)

On a Big Endian machine (the ‘endian’ determines which end of the number lands at the lowest memory address), this would be represented in memory just as it reads, lowest address first:

AA BB CC DD

On a Little Endian machine, this would be laid out in memory as:

DD CC BB AA
So, how do we create our pixels?

PixelRGBA(uint32_t r, uint32_t g, uint32_t b, uint32_t a) : value((r << 16) | (g << 8) | b | (a << 24)) {}

A bunch of bit shifty stuff leaves us with a 32-bit value laid out as 0xAARRGGBB:
  • AA – Alpha
  • RR – Red
  • GG – Green
  • BB – Blue

On a Little Endian machine, this will be represented in memory (0’th offset first) as:

BB GG RR AA
This might be called ‘bgra32’ in various places, and it’s effectively a native format for Windows. Since this has been a well-worked topic over the years, and the graphics hardware can deal with either ordering, it doesn’t really matter which way round things go. Still, it’s good to know what’s happening under the covers: if you want to use convenient APIs, you can, and if you want the most raw speed, you can forgo such APIs and roll your own.

Just a couple of examples.

  • 0xffff0000 – Red
  • 0xff00ff00 – Green
  • 0xff0000ff – Blue
  • 0xff00ffff – Turquoise
  • 0xffffff00 – Yellow
  • 0xff000000 – Black
  • 0xffffffff – White

Notice that in all cases the Alpha ‘AA’ was always ‘ff’. By convention, this means these pixels are fully opaque, non-transparent. For now, we’ll just take it as necessary, and later we’ll see how to deal with transparency.

Well, this has been a handful, but now we know how to manipulate pixels on the screen (using canvasPixels), we know where those pixels came from, and how to present the values in the window. With a little more work, we can have some building blocks for simple graphics.

One of the fundamentals of drawing most primitives in 2D, is the horizontal line span. If we can draw horizontal lines quickly, then we can build up to other primitives, such as rectangles, triangles, and polygons. So, here’s some code to do those basics.

#include "apphost.h"

// Some easy pixel values
#define black	PixelRGBA(0xff000000)
#define white	PixelRGBA(0xffffffff)
#define red		PixelRGBA(0xffff0000)
#define green	PixelRGBA(0xff00ff00)
#define blue	PixelRGBA(0xff0000ff)
#define yellow	PixelRGBA(0xffffff00)

// Return a pointer to a specific pixel in the array of
// canvasPixels
INLINE PixelRGBA* getPixelPointer(const int x, const int y) 
{
    return &((PixelRGBA*)canvasPixels)[(y * canvasWidth) + x]; 
}

// Copy a pixel run as fast as we can
// to create horizontal lines.
// We do not check boundaries here.
// Boundary checks should be done elsewhere before
// calling this routine.  If you don't, you run the risk
// of running off the end of memory.
// The benefit is faster code possibly.
// This is the workhorse function for most other
// drawing primitives
INLINE void copyHLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    unsigned long * dataPtr = (unsigned long*)getPixelPointer(x, y);
    __stosd(dataPtr, c.value, len);
}

// Draw a vertical line
// done as quickly as possible, only requiring an add
// between each pixel
// not as fast as HLine, because the pixels are not contiguous
// but pretty fast nonetheless.
INLINE void copyVLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    size_t rowStride = canvasBytesPerRow;
    uint8_t * dataPtr = (uint8_t *)getPixelPointer(x, y);

    for (size_t counter = 0; counter < len; counter++)
    {
        *((PixelRGBA*)dataPtr) = c;
        dataPtr += rowStride;
    }
}

// create a rectangle by using copyHLine spans
// here we do clipping
INLINE void copyRectangle(const int x, const int y, const int w, const int h, const PixelRGBA &c)
{
    // We calculate clip area up front
    // so we don't have to do clipLine for every single line
    PixelRect dstRect = gAppSurface->frame().intersection({ x,y,w,h });

    // If the rectangle is outside the frame of the pixel map
    // there's nothing to be drawn
    if (dstRect.isEmpty())
        return;

    // Do a line by line draw
    for (int row = dstRect.y; row < dstRect.y + dstRect.height; row++)
        copyHLine(dstRect.x, row, dstRect.width, c);
}

// This gets called before the main application event loop
// gets going.
// The application framework calls refreshScreen() at least
// once after this, so we can do some drawing here to begin.
void onLoad()
{
	setCanvasSize(320, 240);

	// clear screen to white
	copyRectangle(0, 0, canvasWidth, canvasHeight, white);

	copyRectangle(5, 5, 205, 205, yellow);

	copyHLine(5, 10, 205, red);
	copyHLine(5, 200, 205, blue);

	copyVLine(10, 5, 205, green);
	copyVLine(205, 5, 205, green);
}


The function ‘getPixelPointer()’ is pure convenience. Just gives you a pointer to a particular pixel in the canvasPixels array. It’s a jumping off point. The function copyHLine is the workhorse, that will be used time and again in many situations. In this particular case, there is no boundary checking going on, so that’s a design choice. Leaving off boundary checking makes the routine faster, by a tiny bit, but it adds up when you’re potentially doing millions of lines at a time.

The implementation of the copyHLine() function contains a bit of something you don’t see every day.

__stosd(dataPtr, c.value, len);

This is a compiler intrinsic specific to the Windows (MSVC) toolchain. It operates like a memset(), but instead of storing one byte over a memory range, it stores a 32-bit value across that range. This is perfect for rapidly copying our 32-bit pixel value to fill a span in the canvasPixels array. Being a compiler intrinsic, we can assume it’s implemented with the most optimal code for the job. Of course, you can only know for sure if you do some measurements. For now, we’ll stick with it, as it does what we want.

The copyRectangle() function simply calls the copyHLine() function the required number of times. Notice here that we do clipping of the rectangle up front (intersection). Since we decided copyHLine() would not do any clipping, we do the clipping in the higher level primitives. Doing clipping here only occurs once, then we can feed known valid coordinates and lengths to the copyHLine() routine without having to do it in the inner loop.

Deciding when to clip, or range check is a key aspect of the framework. Delaying such decisions to the highest levels possible is a good design strategy. Of course, you can change these choices to match whatever you want to do. This is a key aspect of the framework’s design as well.

The framework will always try to be light weight and composable. It tries to keep the opinionated API as minimal as possible, not forcing particular design philosophies at the exclusion of others.

With that, we’re at a good stopping point. We’ve got a window up on the screen. We know how to draw everything from pixels to straight lines and rectangles, and our executable is only 39K in size. That in and of itself is interesting, and over the next couple of articles we’ll see whether we can maintain that small size while increasing capability. Remember, the Commodore 64 of old, a mainstay of the demo scene, had only 64K of RAM to play with. Let’s see what we can do with the same constraint.

Next time around, some input and animation timers.

Hello Scene – Win32 Wrangling

My style of software coding involves a lot of quick and dirty prototyping. Sometimes I’m simply checking out the API of some library, other times I’m trying to hash out the details of some routine I myself am writing. Whatever the case, I want to get the boilerplate code out of the way. On Windows, I don’t want to worry about whatever startup code is involved, I just want to put a window on the screen (or not), and start writing my code.

Case in point, I have a project called minwe, wherein I have created a framework for simple apps. One of the common functions to implement in your own code, to get started, is the ‘onLoad()’ function:

#include "apphost.h"

void onLoad()
{
	setCanvasSize(320, 240);
}

This looks similar to the way I might write some web page code. The ‘onLoad()’, the introduction of a ‘canvas’. All you have to do is implement this one function, and suddenly you have an application window on the screen. It won’t do much, but it at least deals with mouse and keyboard input, and you can close it to exit the application.

simple application window

So, what’s the work behind this simplicity, and how do you write something ‘real’? On Windows, there’s a long history of code being written in a simple boilerplate way. You need to know the esoteric APIs to create a Window, run a ‘message loop’, and handle the myriad system- and application-defined messages. The classic Windows message loop, for example, looks something like this:

void run()
{
    // Make sure we have all the event handlers connected
    registerHandlers();

    // call the application's 'onLoad()' if it exists
    if (gOnloadHandler != nullptr) {
        gOnloadHandler();
    }

    // Do a typical Windows message pump
    MSG msg;
    LRESULT res;

    while (true) {
        // we use PeekMessage, so we don't stall on a GetMessage
        // should probably throw a wait here
        // WaitForSingleObject
        BOOL bResult = ::PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE);
        if (bResult > 0) {
            // If we see a quit message, it's time to stop the program
            if (msg.message == WM_QUIT)
                break;

            res = ::TranslateMessage(&msg);
            res = ::DispatchMessageA(&msg);
        }

        // call onLoop() if it exists
        if (gOnLoopHandler != nullptr) {
            gOnLoopHandler();
        }
    }
}
The meat and potatoes of most Windows apps is the Peek/Translate/Dispatch, in an infinite loop. It’s been this way since the beginning of Windows 1.0, and continues to this day. There are tons of frameworks which variously hide this from the programmer, but at the core, it’s still the same.

For my demo scene purposes, I too want to hide this boilerplate, with some enhancements. If you want to follow along, the minwe project contains all that I’m showing here. A common method is to use some C/C++, C#, or other library to encapsulate the Windows functions. But then you’re left with a fairly large API in the form of objects that you must learn to manipulate. That’s too much for me, requiring that a large API be memorized. I want much less than that, but still want all the boilerplate stuff covered.

In the case of ‘onLoad()’, it’s one of those functions that if the user’s code implements it, it will be called. If you don’t implement it, there’s no harm. In a way, you can think of the application shell as being an object, and you are specializing this object by implementing certain functions. If you don’t implement a particular function, the default behavior for that function will be executed. In most cases this simply means nothing will happen.

This is the first bit of magic that minwe implements. This magic is performed using dynamic loading. Dynamic loading simply means I look for a pointer to a function at runtime, rather than at compile time. The crux of this code is as follows:

// Look for the dynamic routines that will be used
// to setup client applications.
// Most notable is 'onLoad()' and 'onUnload'
void registerHandlers()
{
    // we're going to look within our own module
    // to find handler functions.  This is because the user's application should
    // be compiled with the application, so the exported functions should
    // be attainable using 'GetProcAddress()'
    HMODULE hInst = ::GetModuleHandleA(NULL);

    // Start with our default paint message handler
    gPaintHandler = HandlePaintMessage;

    // One of the primary handlers the user can specify is 'onPaint'.  
    // If implemented, this function will be called whenever a WM_PAINT message
    // is seen by the application.
    WinMSGObserver handler = (WinMSGObserver)::GetProcAddress(hInst, "onPaint");
    if (handler != nullptr) {
        gPaintHandler = handler;
    }

    // Get the general app routines
    // onLoad()
    gOnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoad");
    gOnUnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onUnload");

    gOnLoopHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoop");
}

If you look back at the ‘run()’ function, you see the first function called is ‘registerHandlers()’. When compiling the application, the appmain.cpp file is included as part of the project. This single file contains all the Windows specific bits and magic incantations. Here is usage of the Windows specific GetModuleHandle(), and the real workhorse, ‘GetProcAddress()’. GetProcAddress() is essentially asking the loaded application for a pointer to a function with a specified name. If that function is found within the executable file, the pointer is returned. If the function is not found, then NULL is returned.

typedef void (* VOIDROUTINE)();
static VOIDROUTINE gOnloadHandler = nullptr;    
gOnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoad");

From the top, in classic C/C++ style, if you want to define pointer to a function with a particular signature (parameters and return type), you do that typedef thing. In modern C++ you can do it differently, but this is simple, and you only do it once.

So, ‘gOnloadHandler’ is a pointer to a function that takes no parameters and returns nothing. Looking back at the application code, the ‘onLoad()’ function matches that signature exactly. When GetProcAddress() is called, it will find our implementation of ‘onLoad()’ and assign that pointer to our gOnloadHandler variable.

There is one more little bit of magic that makes this work though, and it’s a critical piece. In order for this function to show up in our compiled application as something that can be found using GetProcAddress(), it must be ‘exported’. And thus, in the apphost.h file, you will find:

#define APP_EXPORT		__declspec(dllexport)
APP_EXPORT void onLoad();	// upon loading application

The #define is there for convenience. The __declspec(dllexport) is the magic that must precede the declaration of the onLoad() function. Without this, the compiler will not make the ‘onLoad()’ name available for the GetProcAddress() to find at runtime. So, even if you implement the function, it will not be found.

If you refer back to the implementation of the ‘run()’ function, you see that right after the function pointers are loaded, we try to execute the onLoad() function:

void run()
{
    // Make sure we have all the event handlers connected
    registerHandlers();

    // call the application's 'onLoad()' if it exists
    if (gOnloadHandler != nullptr) {
        gOnloadHandler();
    }

    // ... then on into the message loop

And during each loop iteration, if the function ‘onLoop()’ has been implemented, that will be called.

This is a very simple and powerful technique. The ability to find a function within an executable has been there from the beginning. It’s perhaps a little-known mechanism, but very powerful. With it, we can begin to create a simple shell of a programming environment which feels more modern, and abstracts us far away from the Windows specifics. At this first level of abstraction, there are only three functions the application can implement that get this dynamic loading treatment:

// The various 'onxxx' routines are meant to be implemented by
// application environment code.  If they are implemented
// the runtime will load them in and call them at appropriate times
// if they are not implemented, they simply won't be called.
APP_EXPORT void onLoad();	// upon loading application
APP_EXPORT void onUnload();

APP_EXPORT void onLoop();	// called each time through application main loop

There is ‘onLoad()’, which has been discussed. There is also ‘onUnload()’, which is there for cleaning up anything that needs to be cleaned up before the application closes, and of course, there’s ‘onLoop()’, which is called every time through the main event loop, when we’re not processing other messages.

That’s it for this first installment. I’ve introduced the minwe repository for those who want to follow along with the real code of this little tool kit. I’ve expressed the magic incantations which allow you to create a simple application without worrying about Windows esoterica, and I’ve showed you how you can begin to specialize your application by implementing a couple of key features.

Next time around, I’ll share the functions the runtime provides, discuss that application loop in more detail, talk about putting pixels on the screen, and how to deal with mouse and keyboard.

Have You Scene My Demo?

MultiWall displaying live video

I first started programming around 1976. Way back then, the machines were Commodore PET, RadioShack TRS-80, and later Apple I and the like. The literal birth of ‘personal’ computing. A spreadsheet (VisiCalc) was the talk of the town, transforming mainframe work, and thus ‘do at office’ into “do at home”, work. That was, I think, the original impetus for “work from home”. And here we are again…

Back then, on my Commodore PET, I programmed in ‘machine code’, and ‘assembly’, and later ‘BASIC’. Ahh, the good old days of an accumulator, stack, 8K of RAM, and a cassette tape drive. A lot of very clever programming was achieved in these very limited computers. With the rise of the Commodore 64, Apple II, and Amiga, we saw the birth of the “demo scene”. These were smallish programs that stretched the capabilities of the machine to its breaking point. Live synthetic music, three-D fly throughs, kaleidoscopic effects, color palette shifting waterfalls. It was truly amazing stuff.

Roll forward a few decades, and we now carry in our pockets machines that have the power of the ‘mainframes’ of those days, in the size of a cigarette case. It’s truly amazing. And what do we do with all that power? Mostly chit chat, email, web browsing, movie watching, and maybe some game playing. On some very rare occasions, we actually make phone calls.

Well, for one reason or another, I find myself wanting to go back to those roots, and really fiddle with the machine at a very low level. I might not get all the way down to machine code (very painful and tedious), but I can certainly come down out of the various frameworks, and create some interesting little apps.

And again, why?

Because, that’s what programmers do. If you ever ask yourself, ‘how can I program this effect, or just throw together this thing, but I want it to be really special’, then you’ve got the bug to tinker. Here I’m going to lay out how I tinker in code. What I play with, how I build up my tools, what design choices I make, and the like.

This post, in particular, serves as an intro to a series, “Have You Scene My Demo”. The image at the top of this article shows a window displaying 30 instances of live video being captured from a corner of my development machine. Its specialness comes from thinking differently about how to do capture, how to display, how to track time, and the like. The actual detailed walkthrough will be for another time.

What makes for a good playground?

Well, it kind of depends on what kinds of applications you want to develop, but roughly speaking, I want some tools so I can deal with:

UI – Mouse, Keyboard, joystick, touch, pen
Those are the basic building blocks. In my particular case, I want the code to be portable ready, so I try to lift up from the Windows core as soon as possible, but I’m in no hurry to make it truly cross platform. For that, I would probably just do web development. No, these demos are to explore what’s possible with a particular class of machines, running a particular OS. I want to plumb the depths, and surface the deepest darkest secrets and amazements as are possible.

In brief, my environment is Windows. My programming language is C/C++. Beyond that, I use the graphics library ‘blend2d’, simply because it’s super fast, and does a lot of great modern stuff, and it’s really easy to use. I could in fact write my own 2D graphics library (and in a couple of demos I will), but for the most part, it’s best to rely on this little bit of code.

So, buckle up. What’s going to appear here, in short order, are some hints, tips, and code for how I create tinker tools for my software development.

2+ Decades @ Microsoft : A Retrospective

I joined Microsoft in November of 1998.  During black history month in 2022, I sent out an email to my friends and colleagues giving a brief summary of my time at the company, my own personal “black history”.  Over the past year, I’ve been engaged in some personal brand evolution, and I thought blogging is good for long form communications.  So here, I’m going to repeat some of that Microsoft history as a way to set the stage for the future.  So, here, almost unedited, is the missive I shared with various people as a reflection on black history month, 2022.


If you’re receiving this, you’re probably no stranger to receiving missives from me on occasion.

Here we are at the end of black history month. 

I am black, and my recent history is 24 years of service at Microsoft.

I’ve done a lot in those years from delivering core technology (XML), to creating Engineering Excellence in India (2006 – 2009), to dev managing the early Access Control and Service Bus components of the earliest incarnation of Azure. 

I’ve also had the pleasure of creating the LEAP program, which is helping to make our industry more inclusive, and helped to establish Kevin Scott in the freshly re-birthed Office of the CTO. While in OCTO, inspired and guided by a young African engineer, I had the pleasure of supporting the push into our African dev centers (Kenya and Nigeria), which now number around 650 employees.

My current push is to hire folks in the Caribbean, yet another relatively untapped talent market.

This past couple of years has been particularly charged and poignant, with the dual events of COVID and the emergence of “Black Lives Matter”.

Throughout the arc of the 24 years I have spent in the company, I have gone from “I’m just here to do a job”, to “There is a job I MUST do to support my black community”.  I have been happy that the company has given me the leeway to do what I do, while occasionally participating in bread and butter activities. 

I am encouraged to see and interact with a lot more melanin enhanced people from around the world, and in the US specifically.  We have a long road to go, but we are in fact making progress.

Over the past year, I have thought about what I can do, how I can leverage my 35+ years of experience in tech, to empower even more people, and enable the next generation to leapfrog my own achievements.  To that end, I’ve started speaking out, starting ventures, providing support, beyond the confines of our corporate walls.  I have appeared on several podcasts over the past couple of months, and will continue to appear in a lot more.  This year I will be making appearances at conferences, writing a book, etc.

If you’re interested in following along, and getting some insights about this guy that pesters you in email on occasion, you can check out my web site, which is growing and evolving.

William A Adams (william-a-adams.com)

William A Adams Podcast Guest Appearances

At the bottom of the media page (the second link), you’ll see a piece by the Computer History Museum in Silicon Valley.  Some have already seen it, but there’s actually a blog post the museum did that goes along with it.  It’s one of those retrospectives of a couple of black OGs in tech (me and my brother), from the earlier days in Silicon Valley up to the present.

And so it goes.  We have spent another month reflecting on blackness in America.  We are making positive strides, and have so much more to achieve.  I am grateful for the company that I keep, and the continued support that I enjoy in these endeavors.

Don’t be surprised if I ask you to come and give a talk somewhere in the Caribbean within the coming year.  We are transforming whole communities with the simple acts of being mindful, intentional, and present.

  • William

And with that, dear reader, welcome back to my blog, wherein I will be a regular contributor, sharing thoughts in long form, sometimes revisiting topics of old, and mostly exploring topics anew.

Microsoft, the Musical?

Well, it was primarily driven by Microsoft’s summer 2019 interns.

I saw a reference to this while browsing through some Teams channels, and I thought: oh, ok, one of those shaky cell phone productions, let’s see…

Oh boy, was I wrong. This is full-on “ready for Broadway” style musical entertainment. My takeaways: we have cool interns, look at that diverse bunch, they can sing, they can produce, and they did this as a passion project while still maintaining their day jobs…

I’ve never been a corporate apologist, and I’ve poked the bear in the eye more times than I’ve stroked it, but this made me feel really happy. I was entertained, and proud that the company I work for can show such fun and mirth while doing a tongue-in-cheek sendup. Tech is supposed to be fun, and this was fun.

I’m sure other companies will follow suit in years to come, or they already do this at other companies and I just haven’t seen it.

Watching this was 10 minutes well spent to start my week.

Did I really need 3 desktops?

It’s been about 3 years since I built a monster of a desktop machine with water cooling, fans, LEDs and all that. Long time readers will remember the disastrous outcome of that adventure as the liquid leaked, drowned my electronics, and caused me to essentially abandon the thing. I somewhat recovered from the fiasco by purchasing a new motherboard and trying to limp along with various components from that original build. Recently, I decided to throw in the towel and start from scratch. This time around, I decided to go with an AMD build, because AMD seems to be doing some good stuff with their Ryzen and beyond chippery. So, I put together a rig around a Ryzen 7, 32GB of RAM, and the same old nVidia 1060 video card. Upgraded the SSD to the latest Samsung 980? or whatever it is.

That system was acting a bit flaky, to the point I thought it was defective, so I built another one, this time with a Ryzen 5 and nothing from the old builds. New power supply, SSD, RAM, video card. That one worked, and it turns out the Ryzen 7 based system worked as well. It only needed a BIOS update to deal with networking not handling the sleep state of Windows 10.

So, now I have two working desktop machines. But wait, the second motherboard from the previous disastrous PC build probably still works? Maybe it just needs a new power supply and other components and I can resurrect it? And hey, how about that Athlon system sitting over there? That was my daily driver from 2010 until I decided to build the Intel water-cooled disaster. I think that machine will make for a good build for the workshop in the garage. I need something out there to run the CNC machine, or at least play some content when I’m out there.

I did finally decommission one machine. The Shuttle PC I built with my daughter circa 2005 finally gave up the ghost. Tried to start it, and the 2TB hard drive just clicks… Too bad. That machine was quite a workhorse when it came to archiving DVDs and other disks over the years. May it rest in peace.

There was one bit of surgery I did on an old laptop. I had a ThinkPad X1 Carbon from work which succumbed to the elements last year some time. I had tech support take the SSD out of it so I could transfer the data somewhere else. Given the machine is 4+ years old, it wasn’t as simple as a standard NVMe SSD. Oh no, it was some special sort of thing which required quite a lot of searching about to find an appropriate adapter. I finally found one, plugged the SSD into it, plugged that into an external enclosure, connected over USB 3.0, and finally got the old stuff off of it! So, now I have this awesome adapter card that I could only use once, awaiting the next old X1 Carbon someone needs to back up.

All ramblings aside, I’ve recently been engaged in writing some code related to parsing. Two pieces in particular, a gcode parser and a streaming JSON parser, are what I’ll be writing about.

And so it goes.

As the Tech Turns

I joined the Office of the CTO at Microsoft just over two years ago. I was a ‘founding member’ as Kevin Scott was a new CTO for Microsoft. I have now moved on to another job, in a different CTO Office (Mark Russinovich in Azure).

I noticed one thing while I was in OCTO: I essentially stopped blogging. Why was that? Well, probably the main reason is that when you’re in that particular office, you’re privy to all sorts of stuff, most of it very private, either to Microsoft or other partners in the industry. Not really the kind of stuff you want to write about in a loud way. My colleague Mat Velosso managed to maintain a public voice while in the office, but I didn’t find I could do it. Now as it turns out, my new job is all about having a voice, and helping to make Engineering at Microsoft that much better.

But, before I get into all that, I want to reflect on tech.

I’m in my home office, looking around at all this clutter. I’ve got 8GB SD cards sitting around with who knows what on them. I’ve got motherboards sitting openly on my bench. I’ve got more single board computers than you can shake a stick at. Various bits and bobs, outdated tech books, mice and keyboards galore, laptops that have long since been abandoned, and five 23″ LCD displays sitting on the floor.

That’s just in my office. My other cave has similar amounts of computers, displays, tvs, and other tech leavings from the past 10 years of hobbying around, trying to stay current. What to do?

Well, donate all you can to good places. Don’t give working displays to the PC recycler; they’ll just tear them apart. Find a school, a non-profit, a deserving person. Then, all those Raspberry Pi versions you never took out of their boxes: send them to the PC recycler, or give them to a school. If there’s one thing I’ve learned about single board computers, it’s that if you don’t have an immediate use for them, they’re not worth buying.

Books go to the library, or if you have a local “Half Price Books” maybe you can get some money back. More than likely, if they’re older than 5 years, they’re headed to the compost pile.

I have saved one PS/2 style keyboard and mouse set, because they’re actually useful.

I want to reconfigure my office. Now that 4K UHD monitor/TVs are $250, it makes sense to purchase them as decorations for a room. A couple of those 55″ versions up on the walls gives me connectivity for any computers, as well as the ability to do things like use them as a false window. I need more workspace. My current configuration is sets of drawers, which hide who knows what; a countertop which is too narrow; and book shelves, which can’t hold very much weight. So, out it all goes, and in come the wire rack shelving units, 24″ deep.

Copy off all the stuff from those random SD cards, and throw away the ones smaller than 32GB, because you know you’ll lose them before you ever use them again, and you’ll always be wondering what’s on them. Digitize those photo albums, use one of your many SBCs to set up a NAS, copy everything there, and back up to a cloud service.

For me: new job, new tech, new office space. Time to blab and share again.

Commemorating MLK’s Passing

Dr. Martin Luther King Junior was assassinated April 4th, 1968. That was 51 years ago today. In order to commemorate his passing, I penned the following and shared it with my coworkers at Microsoft.

On this very important day in history, I am contemplative.  As we consider the importance of naming our ERG, I am reflective upon how we got here.

I was only 4 years old on that fateful day when “they killed Martin!”, so I don’t remember much, other than adults crying, smoking, drinking, talking adult talk, nervous wringing of hands, and us kids playing outside.

In my tech career of roughly 35 years, I’ve often been “the only…”.  It’s been an interesting walk.  In most recent years, as I become more Yoda and less Samuel Jackson, I have contemplated these things:

Would I have died fighting rather than be put on the ship

Would I have jumped into the ocean rather than be taken

Would I have fought back upon first being whipped, choosing pride and honor over subjugation

Would I have had the bravery to run away

Would I have had the bravery to help those who ran away

Would I have had the courage to learn to read

Would I have had the strength to send my children to school

Would I have had the strength to drink from the water fountain not meant for me

Would I have had the courage to simply sit

Would I have had the tenacity to face the smoke bombs, water cannons and dogs

Would I have had the conviction to carry on a struggle, long after my inspirational leader was lost…

And here I sit today, and I contemplate.  Will I recognize my calling?  Will I recognize my civil rights moment?  Will I be able to throw off my golden handcuffs and do what’s right?

If we collectively think “Black” means anything, we collectively can’t ignore the passing of this particular day.  I encourage us all to reflect on who we are, where we come from, and where we intend to go in the future.

My First LEAP Video

Here it is, the first video that I’ve done related to LEAP:

Reading Fine Print – A new credit card

So, my kids wanted to buy me a large teddy bear for my birthday.  There so happened to be one at the local Safeway, but it was $75.  The last time we bought a giant stuffed thing, it was a giant dog from Costco.  I don’t remember the price, but I thought, Costco, it’s got to be cheaper…

We went down to Costco, but we haven’t had a membership there for years.  Time to renew.  One thing led to another, and rather than the simple run of the mill membership, I allowed myself to be talked into the “Executive” membership, which ‘gives’ you a credit card and a $60 cash back card (offsetting the extra expense of the super membership).  Well, how bad could it be?  I went from having really no credit cards last year to having 4 of them today.  That must be good for credit worthiness, right?  At any rate, I finally got the card, and thought, hey, I might as well read all the fine print.

The first thing that came in the mail was the “Account approval notice”.  This one is interesting because it’s basically just the “congratulations, you’re approved for a card, it will be coming in the mail shortly”.  It does list the credit limit, the outrageous interest rates, and down at the bottom, below the fold, “Personalize your PIN”.  Aha!  This normally discarded little piece of paper is the one that has the credit card PIN, which most people don’t know.  For an ATM card, you always know the PIN because without it you basically can’t use the card.  But your credit card PIN?  I don’t usually know that, and why?  Because I’m not looking for it, and I usually throw away this intro piece of paper.  Well, now I know, and I’ll try to keep track of these random 4 digits.

Next up, the giant new card package.  This is the set of papers which include the terms and conditions in minute detail.  This shows the 29% rate you’ll be charged whenever you do anything wrong (like not pay your bill on time), as well as the ‘arbitration’ clause, which ensures you never sue them whenever they do something wrong.  One small piece of paper in this set says “FACTS” at the top of it.

The FACTS sheet.  This piece of paper tells me about the many ways in which they’re going to use the information they gather on me to market to me.  Not only the company itself, but their affiliates, and even non-affiliates (basically anyone who wants the data).  This is normally a throwaway piece as well, but this time I decided to read the fine print.  What I found was one section titled “To limit our sharing”.  Well, that sounds good.  Call a phone number, go through some phone menu choices, and there you have it: you’ve limited the usage of this data.  It turns out all you can do is limit the affiliate usage of your data, but it’s something.  I even chose the option to have them send me a piece of paper indicating the choices that I made.

I feel really proud of myself.  I normally ignore most of the stuff that comes from credit card companies, as most of it is marketing trying to sign me up for more credit cards, or point systems, or whatever.  This time, I really dug in, and caught some interesting details.  I’m curious to see how the “don’t market to me” thing works out.  Of course, once you click off that checkbox, they probably simply sell your info off to someone else to harvest.  I feel like that’s what happens when you unsubscribe from an email list as well, but I can’t prove it.

At any rate, I learned something new today.  Read some of the fine print, try out a little something you haven’t in the past, and go on an adventure!