Hello Scene – Timing, Recording, Keyboarding

Animated Bezier curve

If there’s no recording, did it happen? One of the most useful features of the demo scene is the ability to record what’s being generated on the screen. There are plenty of ways to do that, external to the program. At the very least though, you need to be able to save individual frames.

But, I’m getting slightly ahead of myself.

First, we need to revisit the main event loop and see how it is that we get regularly timed ‘frames’ in the first place. Here is the code that’s generating that Bezier animation, bezeeaye.cpp.

#include "gui.h"
#include "sampler.h"
#include "sampledraw2d.h"



int segments = 50;			// more segments make smoother curve
int dir = 1;				// which direction is the animation running
RainbowSampler s(1.0);		// Sample a rainbow of colors

int currentIteration = 1;	// Changes during running
int iterations = 30;		// increase past frame rate to slow down


void drawRandomBezier(const PixelRect bounds)
{
	// clear to black
	gAppSurface->setAllPixels(PixelRGBA(0xff000000));

	// draw axis line
	line(*gAppSurface, bounds.x, bounds.y+bounds.height / 2, bounds.x+bounds.width, bounds.y + bounds.height / 2, PixelRGBA(0xffff0000));

	int x1 = bounds.x;
	int y1 = bounds.y + bounds.height / 2;

	int x2 = (int)maths::Map(currentIteration,1, iterations, 0, bounds.x + bounds.width - 1);
	int y2 = bounds.y;

	int x3 = (int)maths::Map(currentIteration, 1, iterations, bounds.x+bounds.width-1, 0);
	int y3 = bounds.y+bounds.height-1;

	int x4 = bounds.x+bounds.width-1;
	int y4 = bounds.y+bounds.height / 2;

	sampledBezier(*gAppSurface, x1, y1, x2, y2, x3, y3, x4, y4, segments, s);

	// Draw control lines
	line(*gAppSurface, x1, y1, x2, y2, PixelRGBA(0xffffffff));
	line(*gAppSurface, x2, y2, x3, y3, PixelRGBA(0xffffffff));
	line(*gAppSurface, x3, y3, x4, y4, PixelRGBA(0xffffffff));

	currentIteration += dir;
	
	// reverse direction if needs be
	if ((currentIteration >= iterations) || (currentIteration <= 1))
		dir = dir < 1 ? 1 : -1;
}


void setup()
{
	setCanvasSize(800, 600);
	setFrameRate(15);
}

void keyReleased(const KeyboardEvent& e) {
	switch (e.keyCode) {
	case VK_ESCAPE:
		halt();
		break;

	case VK_UP:
		iterations += 1;
		break;

	case VK_DOWN:
		iterations -= 1;
		if (iterations < 2)
			iterations = 2;
		break;
	
	case 'R':
		recordingToggle();
		break;
	}
}

void onFrame()
{
	drawRandomBezier({ 0,0,canvasWidth,canvasHeight });
}

It’s a little different than what we’ve seen before. First off, at the top, we see ‘#include "gui.h"’. Up to this point, we’ve only been including ‘apphost.h’, which has served us well, offering just enough to put a window on the screen and deal with mouse and keyboard input through a publish/subscribe eventing system. Well, gui.h offers another level of abstraction and convenience. As we’ve seen before, you are not forced to use this level of convenience, but if you find yourself writing apps that use the mouse and keyboard often, and you want a timing system, gui.h makes it much easier.

There are three primary functions being used from gui.h: setup(), keyReleased(), and onFrame(). So let’s look a little deeper into gui.h.

//
// gui.h
//
// apphost.h/appmain.cpp give a reasonable core for a windows
// based program.  It follows a pub/sub paradigm for events, which
// is pretty simple to use.
//
// gui.h/.cpp gives you a function based interface which
// is similar to p5 (processing), or other very simple APIs
// If you want something very simple, where you can just implement
// the functions that you use, include gui.h in your application.
//
#include "apphost.h"
#include "sampledraw2d.h"
#include "recorder.h"

#pragma comment (lib, "Synchronization.lib")

#ifdef __cplusplus
extern "C" {
#endif


	// Application can call these; they are part of the 
	// gui API
	APP_EXPORT void fullscreen() noexcept;
	APP_EXPORT void background(const PixelRGBA &c) noexcept;
	APP_EXPORT void setFrameRate(const int);

	// Application can call these to handle recording
	APP_EXPORT void recordingStart();
	APP_EXPORT void recordingStop();
	APP_EXPORT void recordingPause();
	APP_EXPORT void recordingToggle();


	// Application can implement these
	// Should at least implement setup(), so canvas size
	// can be set
	APP_EXPORT void setup();

	// If application implements 'onFrame()', it
	// is called based on the frequency of the 
	// frame rate specified
	APP_EXPORT void onFrame();

	// keyboard event processing
	// Application can implement these
	typedef void (*KeyEventHandler)(const KeyboardEvent& e);

	APP_EXPORT void keyPressed(const KeyboardEvent& e);
	APP_EXPORT void keyReleased(const KeyboardEvent& e);
	APP_EXPORT void keyTyped(const KeyboardEvent& e);

	// mouse event processing
	// Application can implement these
	typedef void (*MouseEventHandler)(const MouseEvent& e);

	APP_EXPORT void mouseClicked(const MouseEvent& e);
	APP_EXPORT void mouseDragged(const MouseEvent& e);
	APP_EXPORT void mouseMoved(const MouseEvent& e);
	APP_EXPORT void mousePressed(const MouseEvent& e);
	APP_EXPORT void mouseReleased(const MouseEvent& e);
	APP_EXPORT void mouseWheel(const MouseEvent& e);
	APP_EXPORT void mouseHWheel(const MouseEvent& e);



#ifdef __cplusplus
}
#endif

#ifdef __cplusplus
extern "C" {
#endif
	// These are variables available to the application
	// Size of the application area, set through
	// setCanvasSize()
	APP_EXPORT extern int width;
	APP_EXPORT extern int height;

	APP_EXPORT extern uint64_t frameCount;
	APP_EXPORT extern uint64_t droppedFrames;

	APP_EXPORT extern PixelRGBA* pixels;

	// Keyboard Globals
	APP_EXPORT extern int keyCode;
	APP_EXPORT extern int keyChar;

	// Mouse Globals
	APP_EXPORT extern bool mouseIsPressed;	// a mouse button is currently pressed
	APP_EXPORT extern int mouseX;			// last reported location of mouse
	APP_EXPORT extern int mouseY;			
	APP_EXPORT extern int mouseDelta;		// last known delta of mouse wheel
	APP_EXPORT extern int pmouseX;
	APP_EXPORT extern int pmouseY;


#ifdef __cplusplus
}
#endif

Similar to what we had in apphost.h, there are some functions that, if implemented, will be called at appropriate times; if they’re not implemented, nothing additional will occur. Additionally, as a convenience, there are some global variables available.

So, let’s look at some of the guts in gui.cpp

// Called by the app framework as the first thing
// that happens after the app framework has set itself
// up.  We want to do whatever registrations are required
// for the user's app to run inside here.
void onLoad()
{
    HMODULE hInst = ::GetModuleHandleA(NULL);
    setFrameRate(15);

    // Look for implementation of keyboard events
    gKeyPressedHandler = (KeyEventHandler)GetProcAddress(hInst, "keyPressed");
    gKeyReleasedHandler = (KeyEventHandler)GetProcAddress(hInst, "keyReleased");
    gKeyTypedHandler = (KeyEventHandler)GetProcAddress(hInst, "keyTyped");

    // Look for implementation of mouse events
    gMouseMovedHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseMoved");
    gMouseClickedHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseClicked");
    gMousePressedHandler = (MouseEventHandler)GetProcAddress(hInst, "mousePressed");
    gMouseReleasedHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseReleased");
    gMouseWheelHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseWheel");
    gMouseHWheelHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseHWheel");
    gMouseDraggedHandler = (MouseEventHandler)GetProcAddress(hInst, "mouseDragged");



    gSetupHandler = (VOIDROUTINE)GetProcAddress(hInst, "setup");
    gDrawHandler = (VOIDROUTINE)GetProcAddress(hInst, "onFrame");

    subscribe(handleKeyboardEvent);
    subscribe(handleMouseEvent);

    // Start with a default background before setup
    // does something.
    background(PixelRGBA(0xffffffff));

    // Call a setup routine if the user specified one
    if (gSetupHandler != nullptr) {
        gSetupHandler();
    }

    // setup the recorder
    gRecorder.init(&*gAppSurface);

    // 
    // If there was any drawing done during setup
    // display that at least once.
    refreshScreen();
}

Here is an implementation of the ‘onLoad()’ function. When we were just including ‘apphost.h’, our demo code implemented this function to get things started. If you include gui.cpp in your project, it implements ‘onLoad()’, and in turn looks for an additional set of dynamic functions to be loaded. In addition to mouse and keyboard functions, it looks for ‘setup()’ and ‘onFrame()’.

The ‘setup()’ function serves a similar role to ‘onLoad()’. It’s a place where the application has a chance to set the size of the canvas, and do whatever other setup operations it wants.

The ‘onFrame()’ function is where things get really interesting. We want an event loop that goes something like:

  • Perform various OS routines that must be performed
  • Check to see if it’s time to inform the app to draw a frame
  • Call ‘onFrame()’ if the time is right
  • Go back to main event loop

This all occurs in the ‘onLoop()’ function, which gui.cpp has an implementation for. Again, in previous examples, the application code itself would implement this function, and it would be called a lot, very rapidly, with no control over timing. By implementing the ‘onLoop()’ here, we can impose some semblance of order on the timing. It looks like this.

// Called by the app framework
// This will be called every time through the main app loop, 
// after the app framework has done whatever processing it 
// needs to do.
//
// We deal with frame timing here because it's the only place
// we have absolute control of what happens in the user's app
//
// if we're just past the frame time, then call the 'draw()'
// function if there is one.
// Otherwise, just let the loop continue
// 

void onLoop()
{
    // We'll wait here until it's time to 
    // signal the frame
    if (fsw.millis() > fNextMillis)
    {
        // WAA - Might also be interesting to get absolute keyboard, mouse, 
        // and joystick positions here.
        //

        frameCount += 1;
        if (gDrawHandler != nullptr) {
            gDrawHandler();
        }
        
        gRecorder.saveFrame();

        // Since we're on the clock, we will refresh
        // the screen.
        refreshScreen();
        
        // catch up to next frame interval
        // this will possibly result in dropped
        // frames, but it will ensure we keep up
        // to speed with the wall clock
        while (fNextMillis <= fsw.millis())
        {
            fNextMillis += fInterval;
        }
    }

}

Pretty straightforward. There is a “StopWatch” object, which keeps track of the time that has gone by since the app was started. This basic clock can be used to set intervals, and check timing. That’s what’s happening here:

if (fsw.millis() > fNextMillis)

It’s fairly rudimentary, and you’re only going to get millisecond accuracy at best, but for most demos, which run at 30–60 frames per second, it’s more than adequate. Within this condition, if the user has implemented ‘onFrame()’, that function is called, then ‘refreshScreen()’ is called, and we’re done with our work for this loop iteration.

And that’s how we get timing.

You can set a frame rate with ‘setFrameRate(fps)’, where you specify the number of frames per second you want your animation to run at. The default is 15 frames per second, which is good enough to get things going. Being able to maintain a given frame rate depends on how much time you spend in your ‘onFrame()’ implementation. Your frame drawing is not pre-empted by the runtime, so if you take longer than your allotted time, you’ll begin to drop frames.
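
The actual StopWatch lives in the minwe repository; here is a minimal sketch of the idea using std::chrono, reusing the fsw, fInterval, and fNextMillis names from the loop above. The bodies are illustrative, not the repository’s code:

#include <chrono>

// Minimal stopwatch: milliseconds since construction
struct StopWatch {
    std::chrono::steady_clock::time_point fStart = std::chrono::steady_clock::now();

    double millis() const {
        return std::chrono::duration<double, std::milli>(
            std::chrono::steady_clock::now() - fStart).count();
    }
};

static StopWatch fsw;                       // the app's wall clock
static double fInterval = 1000.0 / 15.0;    // millis between frames (15 fps default)
static double fNextMillis = 0;              // when the next frame is due

// Plausible setFrameRate(): turn frames-per-second into a
// millisecond interval, and schedule the next frame from 'now'
void setFrameRate(const int fps)
{
    fInterval = 1000.0 / fps;
    fNextMillis = fsw.millis() + fInterval;
}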

What you do within your ‘onFrame()’ is completely up to you. You can do nothing, you can draw a little bit, you can draw a lot. You can maintain your own state information however you want. You can have a retained drawing context, or no retained context at all; it’s completely in your control.

On to Recording

So, great, now we have the ability to march to the beat of a drum. How do I record this stuff?

Looking again at gui.h, we see there are some functions related to recording.

	// Application can call these to handle recording
	APP_EXPORT void recordingStart();
	APP_EXPORT void recordingStop();
	APP_EXPORT void recordingPause();
	APP_EXPORT void recordingToggle();

The default recording offered by gui.cpp is extremely simple. The process is essentially to save a snapshot of the canvas into a file whenever told to. The default recorder uses the very simple and ancient ‘.ppm’ file format. These files have no compression, and are 24 bits per pixel. Extremely wasteful, extremely big and slow, but super duper simple to generate. Hidden in that ‘onLoop()’ call within gui.cpp is this:

        gRecorder.saveFrame();

It will just save the current canvas into a .ppm file, with a name that increments with each frame written.
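
To give a feel for how little is involved, here is a hedged sketch of writing one frame as a binary ‘.ppm’ (P6) file; this is not the recorder.h code, just the shape of it. The header is plain text, followed by raw RGB triplets (the alpha byte is simply dropped):

#include <cstdint>
#include <cstdio>

// Sketch: save a width*height array of PixelRGBA (0xAARRGGBB)
// as a 24-bit binary PPM file.
void savePPM(const char* filename, const PixelRGBA* pixels, int width, int height)
{
    FILE* f = fopen(filename, "wb");
    if (f == nullptr)
        return;

    // P6 header: magic, dimensions, max component value
    fprintf(f, "P6\n%d %d\n255\n", width, height);

    for (int i = 0; i < width * height; i++) {
        uint8_t rgb[3];
        rgb[0] = (uint8_t)((pixels[i].value >> 16) & 0xff);  // red
        rgb[1] = (uint8_t)((pixels[i].value >> 8) & 0xff);   // green
        rgb[2] = (uint8_t)(pixels[i].value & 0xff);          // blue
        fwrite(rgb, 1, 3, f);
    }

    fclose(f);
}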

For the above bezier animation, the partial file list looks like this:

A bunch of files, each one more than a megabyte in this particular case.

Now that you have a bunch of files, you can use various mechanisms to turn them into an animation. I tend to use the program ffmpeg, because it’s been around forever, it’s free, and it does the job.

d:\bezeeaye> ffmpeg -framerate 15 -i frame%06d.ppm bezeeaye.mp4

And that’s it, the animation file is generated, and you’re a happy camper, able to show off your creations to the world.

Just one small thing: within this particular application, I use the keyboard to turn recording on and off, so when the user presses the ‘R’ key, recording will start or stop. You can make this as fancy as you like, implementing a specific control with a UI button and all that. The bottom line is, you just need to call the ‘recordingToggle()’ function, and the right thing will happen.
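
Internally, a toggle needs nothing more than a flag. The real logic is in recorder.h; this is just one plausible sketch, and the gIsRecording name is an assumption, not necessarily what the repository uses:

// Sketch: toggling flips between recording and paused
static bool gIsRecording = false;

void recordingToggle()
{
    if (gIsRecording)
        recordingPause();
    else
        recordingStart();

    gIsRecording = !gIsRecording;
}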

Conclusion

At this point, we can create demo apps that do some drawing, keyboard and mouse input, drawing with a specific frame rate, recording and the like. There are a couple more capabilities to explore in the realm of drawing, like text rendering, but we’ve got a fairly complete set. Next time around, we’ll do some fun stuff with screen capturing, possibly throw in some networking, and throw up some text for completeness.

One of the primary aims here is to highlight the design decisions being made. There are choices to be made, from what language to use, to how memory is managed. In addition, there are considerations around abstractions: how much is too much, and how best to create code through composition. I’ll be highlighting a little more of those choices as I close out this series.

Until then, write yourself some code and join in the fun.


Hello Scene – Events, Organization, more drawing

There are some design principles I’m after with my little demo scene library. Staring at that picture is enough to make your eyes hurt, but we’ll explore when it’s time to call it quits on your own home brew drawing library, and rely on the professionals. We’re also going to explore the whole eventing model, because this is where a lot of fun can come into the picture.

What is eventing then? Mouse, keyboard, touch, pen, all those ways the user can give input to a program. At times, the thing I’m trying to explore is the eventing model itself, so I need some flexibility in how the various mouse and keyboard events are percolated through the system. I don’t want to be forced into a single model designated by the operating system, so I build up a structure that gives me that flexibility.

First things first though. On Windows, and any other system, I need to actually capture the mouse and keyboard stuff, typically decode it, and then deal with it in my world. That code looks like this in the appmain.cpp file.

/*
    Generic Windows message handler
    This is used as the function to associate with a window class
    when it is registered.
*/
LRESULT CALLBACK MsgHandler(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    LRESULT res = 0;

    if ((msg >= WM_MOUSEFIRST) && (msg <= WM_MOUSELAST)) {
        // Handle all mouse messages
        HandleMouseMessage(hWnd, msg, wParam, lParam);
    }
    else if (msg == WM_INPUT) {
        res = HandleHIDMessage(hWnd, msg, wParam, lParam);
    }
    else if (msg == WM_DESTROY) {
        // By doing a PostQuitMessage(), a 
        // WM_QUIT message will eventually find its way into the
        // message queue.
        ::PostQuitMessage(0);
        return 0;
    }
    else if ((msg >= WM_KEYFIRST) && (msg <= WM_KEYLAST)) {
        // Handle all keyboard messages
        HandleKeyboardMessage(hWnd, msg, wParam, lParam);
    }
    else if ((msg >= MM_JOY1MOVE) && (msg <= MM_JOY2BUTTONUP)) 
    {
        // Legacy joystick messages
        HandleJoystickMessage(hWnd, msg, wParam, lParam);
    }
    else if (msg == WM_TOUCH) {
        // Handle touch specific messages
        //std::cout << "WM_TOUCH" << std::endl;
        HandleTouchMessage(hWnd, msg, wParam, lParam);
    }
    //else if (msg == WM_GESTURE) {
    // we will only receive WM_GESTURE if not receiving WM_TOUCH
    //}
    //else if ((msg >= WM_NCPOINTERUPDATE) && (msg <= WM_POINTERROUTEDRELEASED)) {
    //    HandlePointerMessage(hWnd, msg, wParam, lParam);
    //}
    else if (msg == WM_ERASEBKGND) {
        //loopCount = loopCount + 1;
        //printf("WM_ERASEBKGND: %d\n", loopCount);
        if (gPaintHandler != nullptr) {
            gPaintHandler(hWnd, msg, wParam, lParam);
        }

        // return non-zero indicating we dealt with erasing the background
        res = 1;
    }
    else if (msg == WM_PAINT) {
        if (gPaintHandler != nullptr) 
        {
                gPaintHandler(hWnd, msg, wParam, lParam);
        }
    }
    else if (msg == WM_WINDOWPOSCHANGING) {
        if (gPaintHandler != nullptr) 
        {
            gPaintHandler(hWnd, msg, wParam, lParam);
        }
    }
    else if (msg == WM_DROPFILES) {
        HandleFileDropMessage(hWnd, msg, wParam, lParam);
    }
    else {
        // Not a message we want to handle specifically
        res = ::DefWindowProcA(hWnd, msg, wParam, lParam);
    }

    return res;
}

Through the magic of the Windows API, this function ‘MsgHandler’ is going to be called every time there is a Windows Message of some sort. It is typical of all Windows applications, in one form or another. Windows messages are numerous, and very esoteric. There are a couple of parameters, and the values are typically packed in as bitfields of integers, or pointers to data structures that need to be further decoded. Plenty of opportunity to get things wrong.

What we do here is capture whole sets of messages, and hand them off to another function to be processed further. In the case of mouse messages, we have this little bit of code:

    if ((msg >= WM_MOUSEFIRST) && (msg <= WM_MOUSELAST)) {
        // Handle all mouse messages
        HandleMouseMessage(hWnd, msg, wParam, lParam);
    }

So, the first design choice here is delegation. We don’t know how any application is going to want to handle the mouse messages, so we’re just going to capture them, and send them somewhere. In this case, the HandleMouseMessage() function.

/*
    Turn Windows mouse messages into mouse events which can
    be dispatched by the application.
*/
LRESULT HandleMouseMessage(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{   
    LRESULT res = 0;
    MouseEvent e;

    e.x = GET_X_LPARAM(lParam);
    e.y = GET_Y_LPARAM(lParam);

    auto fwKeys = GET_KEYSTATE_WPARAM(wParam);
    e.control = (fwKeys & MK_CONTROL) != 0;
    e.shift = (fwKeys & MK_SHIFT) != 0;

    e.lbutton = (fwKeys & MK_LBUTTON) != 0;
    e.rbutton = (fwKeys & MK_RBUTTON) != 0;
    e.mbutton = (fwKeys & MK_MBUTTON) != 0;
    e.xbutton1 = (fwKeys & MK_XBUTTON1) != 0;
    e.xbutton2 = (fwKeys & MK_XBUTTON2) != 0;
    bool isPressed = e.lbutton || e.rbutton || e.mbutton;

    // Based on the kind of message, there might be further
    // information to be decoded
    // mostly we're interested in setting the activity kind
    switch(msg) {
        case WM_LBUTTONDBLCLK:
        case WM_MBUTTONDBLCLK:
        case WM_RBUTTONDBLCLK:
            break;

        case WM_MOUSEMOVE:
            e.activity = MOUSEMOVED;
            break;

        case WM_LBUTTONDOWN:
        case WM_RBUTTONDOWN:
        case WM_MBUTTONDOWN:
        case WM_XBUTTONDOWN:
            e.activity = MOUSEPRESSED;
            break;
        case WM_LBUTTONUP:
        case WM_RBUTTONUP:
        case WM_MBUTTONUP:
        case WM_XBUTTONUP:
            e.activity = MOUSERELEASED;
            break;
        case WM_MOUSEWHEEL:
            e.activity = MOUSEWHEEL;
            e.delta = GET_WHEEL_DELTA_WPARAM(wParam);
            break;
        case WM_MOUSEHWHEEL:
            e.activity = MOUSEHWHEEL;
            e.delta = GET_WHEEL_DELTA_WPARAM(wParam);
            break;
    }

    gMouseEventTopic.notify(e);

    return res;
}

Here, I do introduce a strong opinion. I create a specific data structure to represent a MouseEvent. I do this because I want to decode everything the mouse event has to offer, and present it in a very straightforward data structure that applications can access easily. So, the design choice is to trade off some memory for ease of consumption. In the uievent.h file are various data structures representing the various events: mouse, keyboard, joystick, touch, even file drops, and pointers in general. Those aren’t the only kinds of messages that can be decoded, but they are the ones used most for user interaction.

// Basic type to encapsulate a mouse event
enum {
    // These are based on regular events
    MOUSEMOVED,
    MOUSEPRESSED,
    MOUSERELEASED,
    MOUSEWHEEL,         // A vertical wheel
    MOUSEHWHEEL,        // A horizontal wheel

    // These are based on application semantics
    MOUSECLICKED,
    MOUSEDRAGGED,

    MOUSEENTERED,
    MOUSEHOVER,         // like move, when we don't have focus
    MOUSELEFT           // exited boundary
};

struct MouseEvent {
    int id;
    int activity;
    int x;
    int y;
    int delta;

    // derived attributes
    bool control;
    bool shift;
    bool lbutton;
    bool rbutton;
    bool mbutton;
    bool xbutton1;
    bool xbutton2;
};

From a strict performance perspective, this data structure should ideally fit within a “cache line size” amount of data (64 bytes on x64), so the processor cache can handle it most efficiently. But that kind of optimization can be tackled later, if it proves really beneficial. Initially, I’m just concerned with properly decoding the information and presenting it in an easy manner.
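
If that ever becomes worth enforcing, the assumption is cheap to check at compile time; a hypothetical guard, not something the framework currently does:

// Hypothetical compile-time guard: complain if MouseEvent
// ever outgrows a typical 64-byte cache line
static_assert(sizeof(MouseEvent) <= 64, "MouseEvent exceeds one cache line");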

At the very end of HandleMouseMessage(), we see this interesting call before the return

    gMouseEventTopic.notify(e);

OK. This is where we depart from the norm of mouse handling and introduce a new concept, the publish/subscribe mechanism.

So far, we’ve got a tight coupling between the event coming in through MsgHandler, and being processed at HandleMouseMessage(). Within an application, the next logical step might be to have this explicitly call the mouse logic of the application. But that’s not very flexible. What I’d really like to do is say “hey system, call this specific function I’m going to give you whenever a mouse event occurs”. But wait, didn’t we already do that with HandleMouseMessage()? Yes, in a sense, but that was primarily to turn the native system mouse message into something more palatable.

In general terms, I want to view the system through a publish/subscribe pattern. I want to look at the system as if it’s publishing various bits of information, and I want to ‘subscribe’ to various topics. What’s the difference? With the tightly coupled function calling approach, one function calls another, which calls another, and so on. With pub/sub, the originator of an event doesn’t know who’s interested in it; it just knows that several subscribers have said “tell me when an event occurs”, and that’s it.

OK, so how does this work?

I need to tell the application runtime that I’m interested in receiving mouse events. I need to implement a function that has a certain interface to it, and ‘subscribe’ to the mouse events.

// This routine will be called whenever there
// is a mouse event in the application window
void handleMouseEvent(const MouseEventTopic& p, const MouseEvent& e)
{
    mouseX = e.x;
    mouseY = e.y;

    switch (e.activity)
    {
    case MOUSERELEASED:
        // change the color for the cursor
        cColor = randomColor();
        break;
    }
}

void onLoad()
{
    subscribe(handleMouseEvent);
}

That’s pretty much it. In the ‘onLoad()’ implementation, I call ‘subscribe()’, passing in a pointer to the function that will receive the mouse events when they occur. If you’re content with this, jump over the following section, and continue at Back To Sanity. Otherwise, buckle in for some in depth.

There are several subscribe() functions. Each one of them is a convenience for registering a function to be called in response to information being available for a specific topic. You can see these in apphost.h

APP_EXPORT void subscribe(SignalEventTopic::Subscriber s);
APP_EXPORT void subscribe(MouseEventTopic::Subscriber s);
APP_EXPORT void subscribe(KeyboardEventTopic::Subscriber s);
APP_EXPORT void subscribe(JoystickEventTopic::Subscriber s);
APP_EXPORT void subscribe(FileDropEventTopic::Subscriber s);
APP_EXPORT void subscribe(TouchEventTopic::Subscriber s);
APP_EXPORT void subscribe(PointerEventTopic::Subscriber s);

The construction ‘EventTopic::Subscriber’ is a manifestation of how these Topics are constructed. Let’s take a look at the Topic template to understand a little more deeply. The comments in the code below give a fair explanation. Essentially, you just want a way to identify a topic, and construct a function signature to match. The topic contains two functions of interest: ‘subscribe()’ allows you to register a function to be called when the topic wants to publish information, and ‘notify()’ is how the information is actually published.

/*
	Publish/Subscribe is that typical pattern where a 
	publisher generates interesting data, and a subscriber
	consumes that data.

	The Topic class contains both the publish and subscribe
	interfaces.


	Whatever is responsible for indicating the thing happened
	will call the notify() function of the topic, and the
	subscribed function will be called.

	The Topic does not incorporate any threading model
	A single topic is not a whole pub/sub system
	Multiple topics are meant to be managed together to create
	a pub/sub system.

	Doing it this way allows for different forms of composition and
	general topic management.

	T - The event payload, this is the type of data that will be
	sent when a subscriber is notified.

	The subscriber is a functor; that is, anything that matches the
	function signature.  It can be an object or a function pointer,
	essentially anything that can be stored in the std::function above.

	This is a very nice pure type with no dependencies outside
	the standard template library
*/
template <typename T>
class Topic
{
public:
	// This is the form of subscriber
	using Subscriber = std::function<void(const Topic<T>& p, const T m)>;

private:
	std::deque<Subscriber> fSubscribers;

public:
	// Notify subscribers that an event has occured
	// Just do a simple round robin serial invocation
	void notify(const T m)
	{
		for (auto & it : fSubscribers) {
			it(*this, m);
		}
	}

	// Add a subscriber to the list of subscribers
	void subscribe(Subscriber s)
	{
		fSubscribers.push_back(s);
	}
};

So, it’s a template. Let’s look at some instantiations of the template that are made within the runtime.

// Within apphost.h
// Make Topic publishers available
using SignalEventTopic = Topic<intptr_t>;

using MouseEventTopic = Topic<MouseEvent&>;
using KeyboardEventTopic = Topic<KeyboardEvent&>;
using JoystickEventTopic = Topic<JoystickEvent&>;
using FileDropEventTopic = Topic<FileDropEvent&>;
using TouchEventTopic = Topic<TouchEvent&>;
using PointerEventTopic = Topic<PointerEvent&>;


// Within appmain.cpp
// Topics applications can subscribe to
SignalEventTopic gSignalEventTopic;
KeyboardEventTopic gKeyboardEventTopic;
MouseEventTopic gMouseEventTopic;
JoystickEventTopic gJoystickEventTopic;
FileDropEventTopic gFileDropEventTopic;
TouchEventTopic gTouchEventTopic;
PointerEventTopic gPointerEventTopic;

The application runtime, as we saw in the HandleMouseMessage() function, will then call the appropriate topic’s ‘notify()’ function, to let the subscribers know there’s some interesting information being published. Perhaps this function should be renamed to ‘publish()’.
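
Since Topic depends only on the standard library, it’s easy to exercise in isolation. A small illustrative example, outside of any windowing code (the names counterTopic and reportCount are mine, just for demonstration):

#include <cstdio>

Topic<int> counterTopic;

// Matches Topic<int>::Subscriber: void(const Topic<int>&, const int)
void reportCount(const Topic<int>& p, const int m)
{
    printf("count is now: %d\n", m);
}

void example()
{
    counterTopic.subscribe(reportCount);   // register interest

    counterTopic.notify(1);                // every subscriber is called
    counterTopic.notify(2);
}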

And that’s it. All this pub/sub machinery makes it so that we can be more flexible about when and how we handle various events within the system. You can go further and create whatever other constructs you want from here. You can add queues, multiple threads, duplicates. You can decide you want to have two places react to mouse events, completely unbeknownst to each other.

Back to Sanity

Alright, let’s see how an application can actually use all this. This is the mousetrack application.

mousetrack

The app is simple. The red square follows the mouse around while it’s within the boundary of the window. All the lines from the top and bottom edges terminate at the mouse position as well.

In this case, we want to track where the mouse is, make note of that location, and use it in our drawing routines. In addition, of course, we want to draw the lines, circles, square, and background.

/*
    Demonstration of how to subscribe
    to keyboard and mouse events.

    Using encapsulated drawing and PixelArray
*/
#include "apphost.h"
#include "draw.h"


// Some easy pixel color values
#define black	PixelRGBA(0xff000000)
#define white	PixelRGBA(0xffffffff)
#define red		PixelRGBA(0xffff0000)
#define green	PixelRGBA(0xff00ff00)
#define blue	PixelRGBA(0xff0000ff)
#define yellow	PixelRGBA(0xffffff00)


// Some variables to track mouse and keyboard info
int mouseX = 0;
int mouseY = 0;
int keyCode = -1;

// global pixel array (gpa)
// The array of pixels we draw into
// This will just wrap what's already created
// for the canvas, for convenience
PixelArray gpa;

PixelPolygon gellipse1;
PixelPolygon gellipse2;

// For the application, we define the size of 
// the square we'll be drawing wherever the mouse is
constexpr size_t iconSize = 64;
constexpr size_t halfIconSize = 32;

// Define the initial color of the square we'll draw
// clicking on mouse, or space bar, will change color
PixelRGBA cColor(255, 0, 0);

// Simple routine to create a random color
PixelRGBA randomColor(uint32_t alpha = 255)
{
    return { 
        (uint32_t)maths::random(255), 
        (uint32_t)maths::random(255), 
        (uint32_t)maths::random(255), alpha };
}

// This routine will be called whenever there
// is a mouse event in the application window
void handleMouseEvent(const MouseEventTopic& p, const MouseEvent& e)
{
    // Keep track of the current mouse location
    // Use this in the drawing routine
    mouseX = e.x;
    mouseY = e.y;

    switch (e.activity)
    {
    case MOUSERELEASED:
        // change the color for the cursor
        cColor = randomColor();
        break;
    }
}

// Draw some lines from the top and bottom edges of
// the canvas, converging on the 
// mouse location
void drawLines(PixelArray &pa)
{
    // Draw some lines from the edge to where
    // the mouse is
    for (size_t x = 0; x < pa.width; x += 4)
    {
        draw::copyLine(pa, x, 0, mouseX, mouseY, white);
    }

    for (size_t x = 0; x < pa.width; x += 16)
    {
        draw::copyLine(pa, x, pa.height-1, mouseX, mouseY, white, 1);
    }

}

// Simple routine to create an ellipse
// based on a polygon.  Very crude, but
// useful enough 
INLINE void createEllipse(PixelPolygon &poly, ptrdiff_t centerx, ptrdiff_t centery, ptrdiff_t xRadius, ptrdiff_t yRadius)
{
    static const int nverts = 72;
    int steps = nverts;

    ptrdiff_t awidth = xRadius * 2;
    ptrdiff_t aheight = yRadius * 2;

    for (size_t i = 0; i < steps; i++) {
        auto u = (double)i / steps;
        auto angle = u * (2 * maths::Pi);

        ptrdiff_t x = (int)std::floor((awidth / 2.0) * cos(angle));
        ptrdiff_t y = (int)std::floor((aheight / 2.0) * sin(angle));
        poly.addPoint(PixelCoord({ x + centerx, y + centery }));
    }
    poly.findTopmost();
}

// Each time through the main application 
// loop, do some drawing
void onLoop()
{
    // clear screen to black to start
    draw::copyAll(gpa, black);

    drawLines(gpa);

    // draw a rectangle wherever the mouse is
    draw::copyRectangle(gpa, 
        mouseX-halfIconSize, mouseY-halfIconSize, 
        iconSize, iconSize, 
        cColor);

    // Draw a couple of green ellipses
    draw::copyPolygon(gpa, gellipse1, green);
    draw::copyPolygon(gpa, gellipse2, green);

    // force the canvas to be drawn
    refreshScreen();
}

// Called as the application is starting up, and
// before the main loop has begun
void onLoad()
{
    setTitle("mousetrack");

    // initialize the pixel array
    gpa.init(canvasPixels, canvasWidth, canvasHeight, canvasBytesPerRow);


    createEllipse(gellipse1, 120, 120, 30, 30);
    createEllipse(gellipse2, (ptrdiff_t)gpa.width - 120, 120, 30, 30);

    // setup to receive mouse events
    subscribe(handleMouseEvent);
}

At the end of the ‘onLoad()’, we see the call to subscribe for mouse events. Within handleMouseEvent(), we simply keep track of the mouse location. Also, if the user clicks a button, we will change the color of the rectangle to be drawn.

Well, that’s pretty much it. We’ve wandered through the pub/sub mechanism for event dispatch, and looked specifically at how it applies to the mouse messages coming from the Windows operating system. The design principle here is to be loosely coupled, and allow the application developer to create the constructs that best suit their needs, without imposing too much of an opinion on how that must go.

I snuck in a bit more drawing, too. Now there are lines in any direction and thickness, as well as rudimentary polygons.

In the next installment, I’ll look a bit deeper into the drawing, and we’ll look at things like screen capture, and how to record our activities to turn into demo movies.


Hello Scene – What’s in a Window?

Yes, what is a Window? How do I draw, how do I handle the user’s mouse/keyboard/joystick/touch/gestures?

As a commenter pointed out on my last post, I’ve actually covered these topics before. Back then, the operative framework was ‘drawproc’. The design center for drawproc was being able to create ‘modules’, which were .dll files, and then load them dynamically with drawproc at runtime. I was essentially showing people how something like an internet browser might work.

Things have evolved since then, and what I’m presenting here goes more into the design choices I’ve made along the way. So, what’s in a “Window”?

It’s about a couple of things. Surely it’s about displaying things on the screen. In most applications, the keyboard, mouse, and other ‘events’ are also handled by the Window, or at least strongly related to it. This has been true in the Windows environment from day one, and still persists to this day. For my demo scene apps, I want to make things as easy as possible. Simply, I want a pointer to some kind of frame buffer, where I can just manipulate the values of individual pixels. That’s first and foremost.

How to put random pixels on the screen? In minwe, there are some global variables created as part of the runtime construction. So, let’s look at a fairly simple application and walk through the bits and pieces available.

Just a bunch of random pixels on the screen. Let’s look at the code.

#include "apphost.h"

void drawRandomPoints()
{
	for (size_t i = 0; i < 200000; i++)
	{
		size_t x = random_int(canvasWidth-1);
		size_t y = random_int(canvasHeight-1);
		uint32_t gray = random_int(255);

		canvasPixels[(y * canvasWidth) + x] = PixelRGBA(gray, gray, gray);
	}
}

void onLoad()
{
	setCanvasSize(800, 600);
	drawRandomPoints();
}

That’s about as simple a program as you can write and put something on the screen. In the ‘onLoad()’, we set the size of the ‘canvas’. The canvas is important as it’s the area of the window upon which drawing will occur. Along with this canvas comes a pointer to the actual pixel data that is behind the canvas. A ‘pixel’ is this data structure.

struct PixelRGBA 
{
    uint32_t value;
};

That looks pretty simplistic, and it really is. Pixel values are one of those things in computing that have changed multiple times, and there are tons of representations. If you want to see all the little tidbits of how to manipulate the pixel values, you can check out the source code: pixeltypes.h

In this case, the structure is the easiest possible, tailored to the Windows environment, and to how quickly you can present something on the screen with the least amount of fuss. How this actually gets displayed on screen is by calling the ancient GDI API ‘StretchDIBits’:

    int pResult = StretchDIBits(hdc,
        xDest,yDest,
        DestWidth,DestHeight,
        xSrc,ySrc,
        SrcWidth, SrcHeight,
        gAppSurface->getData(),&info,
        DIB_RGB_COLORS,
        SRCCOPY);

The fact that I’m using something from the GDI interface is a bit of a throwback, and current day Windows developers will scoff, smack their foreheads in disgust, and just change the channel. But, I’ll tell you what, for the past 30 years this API has existed and worked reliably, and counter to any deprecation rumors you may have heard, it seems stable for the foreseeable future. So, why not DirectXXX something or other? Well, even DirectX still deals with a “DeviceContext”, which will show up soon, and I find the DirectXXX interfaces to be a lot of overkill for a very simple demo scene, so here I stick with the old.

There are lots of bits and pieces in that call to StretchDIBits. What we’re primarily interested in here is the ‘gAppSurface->getData()’. This will return the same pointer as ‘canvasPixels’. The other stuff is boilerplate. The best part is, I’ve encapsulated it in the framework, such that I’ll never actually call this function directly. The closest I’ll come to this is calling ‘refreshScreen()’, which will then make this call, or other necessary calls to put whatever is in the canvasPixels onto the actual display.

And where does this pixel pointer come from in the first place? Well, the design considerations here are about creating something that interacts well with the Windows APIs, and that I have ready access to. The choice I make here is to use a DIBSection. The primary thing we need to interact with the various drawing APIs (even DirectX) is a DeviceContext. This is basically a pointer to a data structure that Windows can deal with. There are all kinds of DeviceContexts, from ones that show up on a screen, to ones associated with printers, to ones that are just in memory. We want the latter. There are lots of words to describe this, but the essential code can be found in User32PixelMap.h, and the real working end of that is here:

bool init(int awidth, int aheight)
{
    fFrame = { 0,0,awidth,aheight };

    fBytesPerRow = winme::GetAlignedByteCount(awidth, bitsPerPixel, alignment);

    fBMInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    fBMInfo.bmiHeader.biWidth = awidth;
    fBMInfo.bmiHeader.biHeight = -(LONG)aheight;    // top-down DIB Section
    fBMInfo.bmiHeader.biPlanes = 1;
    fBMInfo.bmiHeader.biBitCount = bitsPerPixel;
    fBMInfo.bmiHeader.biSizeImage = fBytesPerRow * aheight;
    fBMInfo.bmiHeader.biClrImportant = 0;
    fBMInfo.bmiHeader.biClrUsed = 0;
    fBMInfo.bmiHeader.biCompression = BI_RGB;
    fDataSize = fBMInfo.bmiHeader.biSizeImage;

    // We'll create a DIBSection so we have an actual backing
    // storage for the context to draw into
    fDIBHandle = ::CreateDIBSection(nullptr, &fBMInfo, DIB_RGB_COLORS, &fData, nullptr, 0);
    if (fDIBHandle == nullptr)
        return false;


    // Create a GDI Device Context
    fBitmapDC = ::CreateCompatibleDC(nullptr);

    // select the DIBSection into the memory context so we can 
    // peform operations with it
    fOriginDIBHandle = ::SelectObject(fBitmapDC, fDIBHandle);

    // Do some setup to the DC to make it suitable
    // for drawing with GDI if we choose to do that
    ::SetBkMode(fBitmapDC, TRANSPARENT);
    ::SetGraphicsMode(fBitmapDC, GM_ADVANCED);

    return true;
}

That’s a lot to digest, but there are only a couple of pieces that really matter. First, in the call to CreateDIBSection, I pass in fData. This will be filled in with a pointer to the actual pixel data. We want to retain that, as it’s what we use for the canvasPixels pointer. There’s really no other place to get this.

Further down, we see the creation of a DeviceContext, and the magic incantation of ‘SelectObject’. This essentially associates the bitmap with that device context. Now, this is set up both for Windows to make graphics library calls, and for us to do whatever we want with the pixel pointer. This same trick makes it possible to use other libraries, such as freetype, or blend2d, pretty much anything that just needs a pointer to a pixel buffer. So, this is one of the most important design choices to make: small, lightweight, supports multiple different ways of working, etc.

I have made some other simplifying assumptions while pursuing this path. One is in the pixel representation. I chose rgba-32bit, and not 15 or 16 or 24 or 8 bit, which are all valid and useful pixel formats. That is basically in recognition that when it comes to actually just putting pixels on the screen, 32-bit is by far the most common, so using this as the native format will introduce the least amount of transformations, and thus speed up the process of putting things on the screen.

There is a bit of an implied choice here as well, which needs to be resolved one way or another when switching between architectures. This code was designed for the x64 (intel/AMD) environment, where “little-endian” is how integers are represented in memory. If you’re not familiar with this, a brief tutorial.

This concerns how integer values are actually laid out in memory. Let’s look at a hexadecimal representation of a number for easy viewing: 0xAABBCCDD (2,864,434,397)

‘Endianness’ determines which part of the number sits at the lowest memory address. On a big-endian machine, this would be represented in memory just as it reads:

AA BB CC DD

On a little-endian machine, this would be laid out in memory as:

DD CC BB AA

So, how do we create our pixels?

PixelRGBA(uint32_t r, uint32_t g, uint32_t b, uint32_t a) : value((r << 16) | (g << 8) | b | (a << 24)) {}


A bunch of bit shifty stuff leaves us with:

AARRGGBB

  • AA – Alpha
  • RR – Red
  • GG – Green
  • BB – Blue

On a Little Endian machine, this will be represented in memory (0’th offset first) as:

BB GG RR AA

This might be called ‘bgra32’ in various places. And that’s really a native format for Windows. Of course, since this has been a well worked topic over the years, and there’s hardware in the graphics card to deal with it, one way or the other, it doesn’t really matter which way round things go, but it’s also good to know what’s happening under the covers, so if you want to use convenient APIs, you can, but if you want the most raw speed, you can forgo such APIs and roll your own.
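
You can see that byte ordering for yourself with a few lines of standalone code; nothing framework-specific here:

#include <cstdint>
#include <cstdio>

int main()
{
    uint32_t pixel = 0xAABBCCDD;
    const uint8_t* bytes = (const uint8_t*)&pixel;

    // On a little-endian x64 machine this prints: DD CC BB AA
    printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);

    return 0;
}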

Just a couple of examples.

  • 0xffff0000 – Red
  • 0xff00ff00 – Green
  • 0xff0000ff – Blue
  • 0xff00ffff – Turquoise
  • 0xffffff00 – Yellow
  • 0xff000000 – Black
  • 0xffffffff – White

Notice that in all cases the Alpha ‘AA’ was always ‘ff’. By convention, this means these pixels are fully opaque, non-transparent. For now, we’ll just take it as necessary, and later we’ll see how to deal with transparency.

Well, this has been a handful, but now we know how to manipulate pixels on the screen (using canvasPixels), we know where the pixels came from, and how to present the values in the window. With a little more work, we can have some building blocks for simple graphics.

One of the fundamentals of drawing most primitives in 2D, is the horizontal line span. If we can draw horizontal lines quickly, then we can build up to other primitives, such as rectangles, triangles, and polygons. So, here’s some code to do those basics.

#include "apphost.h"

// Some easy pixel values
#define black	PixelRGBA(0xff000000)
#define white	PixelRGBA(0xffffffff)
#define red		PixelRGBA(0xffff0000)
#define green	PixelRGBA(0xff00ff00)
#define blue	PixelRGBA(0xff0000ff)
#define yellow	PixelRGBA(0xffffff00)

// Return a pointer to a specific pixel in the array of
// canvasPixels
INLINE PixelRGBA* getPixelPointer(const int x, const int y) 
{ 
    return &((PixelRGBA*)canvasPixels)[(y * canvasWidth) + x]; 
}

// 
// Copy a pixel run as fast as we can
// to create horizontal lines.
// We do not check boundaries here.
// Boundary checks should be done elsewhere before
// calling this routine.  If you don't, you run the risk
// of running off the end of memory.
// The benefit is faster code possibly.
// This is the workhorse function for most other
// drawing primitives
INLINE void copyHLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    unsigned long * dataPtr = (unsigned long*)getPixelPointer(x, y);
    __stosd(dataPtr, c.value, len);
}

// Draw a vertical line
// done as quickly as possible, only requiring an add
// between each pixel
// not as fast as HLine, because the pixels are not contiguous
// but pretty fast nonetheless.
INLINE void copyVLine(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    size_t rowStride = canvasBytesPerRow;
    uint8_t * dataPtr = (uint8_t *)getPixelPointer(x, y);

    for (size_t counter = 0; counter < len; counter++)
    {
        *((PixelRGBA*)dataPtr) = c;
        dataPtr += rowStride;
    }
}

//
// create a rectangle by using copyHLine spans
// here we do clipping
INLINE void copyRectangle(const int x, const int y, const int w, const int h, const PixelRGBA &c)
{
    // We calculate clip area up front
    // so we don't have to do clipLine for every single line
    PixelRect dstRect = gAppSurface->frame().intersection({ x,y,w,h });

    // If the rectangle is outside the frame of the pixel map
    // there's nothing to be drawn
    if (dstRect.isEmpty())
        return;

    // Do a line by line draw
    for (int row = dstRect.y; row < dstRect.y + dstRect.height; row++)
    {
        copyHLine(dstRect.x, row, dstRect.width, c);
    }
}

// This gets called before the main application event loop
// gets going.
// The application framework calls refreshScreen() at least
// once after this, so we can do some drawing here to begin.
void onLoad()
{
	setCanvasSize(320, 240);

	// clear screen to white
	gAppSurface->setAllPixels(white);

	copyRectangle(5, 5, 205, 205, yellow);

    copyHLine(5, 10, 205, red);

    copyHLine(5, 200, 205, blue);

    copyVLine(10, 5, 205, green);
    copyVLine(205, 5, 205, green);

}

The function ‘getPixelPointer()’ is pure convenience. Just gives you a pointer to a particular pixel in the canvasPixels array. It’s a jumping off point. The function copyHLine is the workhorse, that will be used time and again in many situations. In this particular case, there is no boundary checking going on, so that’s a design choice. Leaving off boundary checking makes the routine faster, by a tiny bit, but it adds up when you’re potentially doing millions of lines at a time.

The implementation of the copyHLine() functions contains a bit of something you don’t see every day.

__stosd(dataPtr, c.value, len);

This is a compiler intrinsic specific to the Windows (MSVC) toolchain. It essentially operates like memset(), but instead of writing a single byte over a memory range, it writes a 32-bit value over that range. This is perfect for rapidly copying our 32-bit pixel value to fill a span in the canvasPixels array. Being a compiler intrinsic, we can assume it’s implemented as the most optimal code for the job. Of course, you can only know for sure if you do some measurements. For now, we’ll stick with it, as it does what we want.
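
If portability beyond MSVC ever matters, std::fill_n expresses the same store loop in standard C++, and optimizers generally reduce it to similar code. A hedged variant (the _portable suffix is just for illustration):

#include <algorithm>

// Portable variant of copyHLine, trading the MSVC-specific
// __stosd intrinsic for std::fill_n
INLINE void copyHLine_portable(const size_t x, const size_t y, const size_t len, const PixelRGBA& c)
{
    uint32_t* dataPtr = (uint32_t*)getPixelPointer(x, y);
    std::fill_n(dataPtr, len, c.value);
}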

The copyRectangle() function simply calls the copyHLine() function the required number of times. Notice here that we do clipping of the rectangle up front (intersection). Since we decided copyHLine() would not do any clipping, we do the clipping in the higher level primitives. Doing clipping here only occurs once, then we can feed known valid coordinates and lengths to the copyHLine() routine without having to do it in the inner loop.

Deciding when to clip, or range check, is a key aspect of the framework. Delaying such decisions to the highest level possible is a good design strategy. Of course, you can change these choices to match whatever you want to do; that flexibility is part of the framework’s design as well.
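
The intersection itself is just rectangle arithmetic. minwe’s PixelRect carries its own version; this is a sketch of the idea, assuming the x, y, width, height fields used above, not the repository’s exact code:

// Sketch: intersection of two rectangles; returns a zero-size
// rectangle when there is no overlap
PixelRect intersect(const PixelRect& a, const PixelRect& b)
{
    int left   = a.x > b.x ? a.x : b.x;
    int top    = a.y > b.y ? a.y : b.y;
    int right  = (a.x + a.width)  < (b.x + b.width)  ? (a.x + a.width)  : (b.x + b.width);
    int bottom = (a.y + a.height) < (b.y + b.height) ? (a.y + a.height) : (b.y + b.height);

    int w = right - left;
    int h = bottom - top;

    return { left, top, w > 0 ? w : 0, h > 0 ? h : 0 };
}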

The framework will always try to be light weight and composable. It tries to keep the opinionated API as minimal as possible, not forcing particular design philosophies at the exclusion of others.

With that, we’re at a good stopping point. We’ve got a window up on the screen. We know how to draw everything from pixels to straight lines and rectangles, and our executable is only 39K in size. That in and of itself is interesting, and over the next couple of articles we’ll see whether we can maintain that small size while increasing capability. Remember, the Commodore 64 of old, a mainstay of the demo scene, had only 64K of RAM to play with. Let’s see what we can do with the same constraint.

Next time around, some input and animation timers.


Hello Scene – Win32 Wrangling

My style of software coding involves a lot of quick and dirty prototyping. Sometimes I’m simply checking out the API of some library, other times I’m trying to hash out the details of some routine I myself am writing. Whatever the case, I want to get the boilerplate code out of the way. On Windows, I don’t want to worry about whatever startup code is involved, I just want to put a window on the screen (or not), and start writing my code.

Case in point, I have a project called minwe, wherein I have created a framework for simple apps. One of the common functions to implement in your own code, to get started, is the ‘onLoad()’ function:

#include "apphost.h"

void onLoad()
{
	setCanvasSize(320, 240);
}

This looks familiar to the way I might write some web page code. The ‘onLoad()’, the introduction of a ‘canvas’. All you have to do is implement this one function, and suddenly you have an application window on the screen. It won’t do much, but it at least deals with mouse and keyboard input, and you can close it to exit the application.

simple application window

So, what’s the work behind this simplicity, and how do you write something ‘real’? On Windows, there’s a long history of code being written in a simple boilerplate way. You need to know the esoteric APIs to create a Window, run a ‘message loop’, and handle the myriad system and application defined messages. The classic Windows message loop, for example, looks something like this:

void run()
{
    // Make sure we have all the event handlers connected
    registerHandlers();

    // call the application's 'onLoad()' if it exists
    if (gOnloadHandler != nullptr) {
        gOnloadHandler();
    }

    // Do a typical Windows message pump
    MSG msg;
    LRESULT res;

    showAppWindow();

    while (true) {
        // we use peekmessage, so we don't stall on a GetMessage
        // should probably throw a wait here
        // WaitForSingleObject
        BOOL bResult = ::PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE);
        
        if (bResult > 0) {
            // If we see a quit message, it's time to stop the program
            if (msg.message == WM_QUIT) {
                break;
            }

            res = ::TranslateMessage(&msg);
            res = ::DispatchMessageA(&msg);
        }
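        // Note: the 'else' below is deliberately commented out,
        // so as written, onLoop() runs on every pass through the
        // loop, whether a message was processed or not.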
        //else 
        {
            // call onLoop() if it exists
            if (gOnLoopHandler != nullptr) {
                gOnLoopHandler();
            }
        }
    }
}

The meat and potatoes of most Windows apps is the Peek/Translate/Dispatch, in an infinite loop. It’s been this way since the beginning of Windows 1.0, and continues to this day. There are tons of frameworks which variously hide this from the programmer, but at the core, it’s still the same.

For my demo scene purposes, I too want to hide this boilerplate, with some enhancements. If you want to follow along, the minwe project contains all that I’m showing here. A common method is to use some C/C++, C#, or other library to encapsulate the Windows functions. Then you’re left with a fairly large API in the form of objects that you must learn to manipulate. That’s too much for me; it requires memorizing a large API. I want much less than that, but still want all the boilerplate stuff covered.

In the case of ‘onLoad()’, it’s one of those functions that if the user’s code implements it, it will be called. If you don’t implement it, there’s no harm. In a way, you can think of the application shell as being an object, and you are specializing this object by implementing certain functions. If you don’t implement a particular function, the default behavior for that function will be executed. In most cases this simply means nothing will happen.

This is the first bit of magic that minwe implements. This magic is performed using dynamic loading. Dynamic loading simply means I look for a pointer to a function at runtime, rather than at compile time. The crux of this code is as follows:

//
// Look for the dynamic routines that will be used
// to setup client applications.
// Most notable is 'onLoad()' and 'onUnload'
//
void registerHandlers()
{
    // we're going to look within our own module
    // to find handler functions.  This is because the user's application should
    // be compiled with the application, so the exported functions should
    // be attainable using 'GetProcAddress()'

    HMODULE hInst = ::GetModuleHandleA(NULL);

    // Start with our default paint message handler
    gPaintHandler = HandlePaintMessage;


    // One of the primary handlers the user can specify is 'onPaint'.  
    // If implemented, this function will be called whenever a WM_PAINT message
    // is seen by the application.
    WinMSGObserver handler = (WinMSGObserver)::GetProcAddress(hInst, "onPaint");
    if (handler != nullptr) {
        gPaintHandler = handler;
    }

    // Get the general app routines
    // onLoad()
    gOnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoad");
    gOnUnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onUnload");

    gOnLoopHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoop");
}

If you look back at the ‘run()’ function, you see the first function called is ‘registerHandlers()’. When compiling the application, the appmain.cpp file is included as part of the project. This single file contains all the Windows specific bits and magic incantations. Here is usage of the Windows specific GetModuleHandle(), and the real workhorse, ‘GetProcAddress()’. GetProcAddress() is essentially asking the loaded application for a pointer to a function with a specified name. If that function is found within the executable file, the pointer is returned. If the function is not found, then NULL is returned.

typedef void (* VOIDROUTINE)();
static VOIDROUTINE gOnloadHandler = nullptr;    
gOnloadHandler = (VOIDROUTINE)::GetProcAddress(hInst, "onLoad");

From the top, in classic C/C++ style, if you want to define pointer to a function with a particular signature (parameters and return type), you do that typedef thing. In modern C++ you can do it differently, but this is simple, and you only do it once.
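
For reference, the modern spelling of the same declaration is a using-alias, which reads a little more directly:

// C++11 alias for the same function pointer type
using VOIDROUTINE = void (*)();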

So, the ‘gOnloadHandler’, is a pointer to a function that takes no parameters, and returns nothing. When you look back at the application code, the ‘onLoad()’ function matches this criteria. It’s a function that takes no parameters, and returns nothing. When GetProcAddress() is called, it will find our implementation of ‘onLoad()’, and assign that pointer to our gOnloadHandler variable.

There is one more little bit of magic that makes this work though, and it’s a critical piece. In order for this function to show up in our compiled application as something that can be found using GetProcAddress(), it must be ‘exported’. And thus, in the apphost.h file, you will find:

#define APP_EXPORT		extern "C" __declspec(dllexport)
APP_EXPORT void onLoad();	// upon loading application

The #define is there for convenience. The __declspec(dllexport) is the magic that must precede the declaration of the onLoad() function, and the extern "C" keeps the C++ compiler from mangling the name, so the plain string "onLoad" is what lands in the executable’s export table. Without these, the compiler will not make the ‘onLoad()’ name available for GetProcAddress() to find at runtime. So, even if you implement the function, it will not be found.
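
If you ever want to verify that the name actually made it into the export table, the dumpbin tool that ships with Visual Studio will show you. From a developer command prompt (assuming your executable is named demo.exe):

dumpbin /EXPORTS demo.exe

If ‘onLoad’ shows up in the list by name, you’re in business. If you see a mangled name like ?onLoad@@YAXXZ instead, the extern "C" part is missing.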

If you refer back to the implementation of the ‘run()’ function, you see that right after we attempt to load the function pointers, we try to execute the onLoad() function:

void run()
{
    // Make sure we have all the event handlers connected
    registerHandlers();

    // call the application's 'onLoad()' if it exists
    if (gOnloadHandler != nullptr) {
        gOnloadHandler();
    }

And during each loop iteration, if the function ‘onLoop()’ has been implemented, that will be called.
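
The rest of run() is a fairly standard message pump. I won’t reproduce minwe’s exact loop here, but the shape of it is roughly this sketch: drain any pending Windows messages, then hand control to the application’s onLoop() if one was found.

    MSG msg;
    bool running = true;

    while (running) {
        // handle any queued Windows messages first
        while (::PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT)
                running = false;

            ::TranslateMessage(&msg);
            ::DispatchMessageA(&msg);
        }

        // once the queue is drained, give the application
        // a chance to do its own work
        if (gOnLoopHandler != nullptr) {
            gOnLoopHandler();
        }
    }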

This is a very simple and powerful technique. The ability to find a function within an executable has been there from the beginning. It’s perhaps a little-known mechanism, but very powerful. With it, we can begin to create a simple shell of a programming environment which feels more modern, and abstracts us far away from the Windows specifics. At this first level of abstraction, there are only three functions the application can implement that get this dynamic loading treatment:

// The various 'onxxx' routines are meant to be implemented by
// application environment code.  If they are implemented
// the runtime will load them in and call them at appropriate times
// if they are not implemented, they simply won't be called.
APP_EXPORT void onLoad();	// upon loading application
APP_EXPORT void onUnload();

APP_EXPORT void onLoop();	// called each time through application main loop

There is ‘onLoad()’, which has been discussed. There is also ‘onUnload()’, which is there for cleaning up anything that needs to be cleaned up before the application closes, and of course, there’s ‘onLoop()’, which is called every time through the main event loop, when we’re not processing other messages.
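
Putting it all together, a do-nothing client of this shell can be as small as the following sketch. There’s no WinMain in sight, and you don’t repeat APP_EXPORT on the definitions, since the declarations in apphost.h already carry it.

#include "apphost.h"

void onLoad()
{
    // one-time setup, right after the application starts
}

void onLoop()
{
    // called every time through the main event loop,
    // when no other messages are being processed
}

// No onUnload() implemented here.  Since we left it out,
// it simply won't be called, and that's fine.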

That’s it for this first installment. I’ve introduced the minwe repository for those who want to follow along with the real code of this little tool kit. I’ve explained the magic incantations which allow you to create a simple application without worrying about Windows esoterica, and I’ve shown you how you can begin to specialize your application by implementing a couple of key functions.

Next time around, I’ll share the functions the runtime provides, discuss that application loop in more detail, talk about putting pixels on the screen, and how to deal with mouse and keyboard.


Have You Scene My Demo?

MultiWall displaying live video

I first started programming around 1978. Way back then, the machines were the Commodore PET, the RadioShack TRS-80, the Apple II, and the like. The literal birth of ‘personal’ computing. A spreadsheet (VisiCalc) was the talk of the town, transforming mainframe work, and thus ‘do at the office’ work, into ‘do at home’ work. That was, I think, the original impetus for “work from home”. And here we are again…

Back then, on my Commodore PET, I programmed in ‘machine code’, and ‘assembly’, and later ‘BASIC’. Ahh, the good old days of an accumulator, a stack, 8K of RAM, and a cassette tape drive. A lot of very clever programming was achieved on those very limited computers. With the rise of the Commodore 64, Apple II, and Amiga, we saw the birth of the “demo scene”. These were smallish programs that stretched the capabilities of the machine to its breaking point. Live synthesized music, 3D fly-throughs, kaleidoscopic effects, color-palette-shifting waterfalls. It was truly amazing stuff.

Roll forward a few decades, and we now carry in our pockets machines with the power of the ‘mainframes’ of those days, in a package the size of a cigarette case. It’s truly amazing. And what do we do with all that power? Mostly chit chat, email, web browsing, movie watching, and maybe some game playing. On some very rare occasions, we actually make phone calls.

Well, for one reason or another, I find myself wanting to go back to those roots, and really fiddle with the machine at a very low level. I might not get all the way down to machine code (very painful and tedious), but I can certainly come down out of the various frameworks, and create some interesting little apps.

And again, why?

Because, that’s what programmers do. If you ever ask yourself, ‘how can I program this effect, or just throw together this thing, but I want it to be really special’, then you’ve got the bug to tinker. Here I’m going to lay out how I tinker in code. What I play with, how I build up my tools, what design choices I make, and the like.

This post, in particular, serves as an intro to a series, “Have You Scene My Demo”. The image at the top of this article is showing off a window displaying 30 instances of live video being captured from a corner of my development machine. Its specialness comes from thinking differently about how to do capture, how to display, how to track time, and the like. The actual detailed walkthrough will be for another time.

What makes for a good playground?

Well, it kind of depends on what kinds of applications you want to develop, but roughly speaking, I want some tools so I can deal with:

UI – mouse, keyboard, joystick, touch, pen

Windowing

Sound

Networking

Those are the basic building blocks. In my particular case, I want the code to be ready for portability, so I try to lift up from the Windows core as soon as possible, but I’m in no hurry to make it truly cross-platform. For that, I would probably just do web development. No, these demos are to explore what’s possible with a particular class of machines, running a particular OS. I want to plumb the depths, and surface the deepest, darkest secrets and amazements possible.

In brief, my environment is Windows. My programming language is C/C++. Beyond that, I use the graphics library ‘blend2d’, simply because it’s super fast, does a lot of great modern stuff, and is really easy to use. I could in fact write my own 2D graphics library (and in a couple of demos I will), but for the most part, it’s best to rely on this little bit of code.

So, buckle up. What’s going to appear here, in short order, are some hints, tips, and code for how I create tinker tools for my software development.


2+ Decades @ Microsoft : A Retrospective

I joined Microsoft in November of 1998.  During Black History Month in 2022, I sent out an email to my friends and colleagues giving a brief summary of my time at the company, my own personal “black history”.  Over the past year, I’ve been engaged in some personal brand evolution, and I’ve come to think blogging is good for long-form communication.  I’m going to repeat some of that Microsoft history as a way to set the stage for the future.  So, here, almost unedited, is the missive I shared with various people as a reflection on Black History Month, 2022.

Hello,

If you’re receiving this, you’re probably no stranger to receiving missives from me on occasion.

Here we are at the end of black history month. 

I am black, and my recent history is 24 years of service at Microsoft.

I’ve done a lot in those years from delivering core technology (XML), to creating Engineering Excellence in India (2006 – 2009), to dev managing the early Access Control and Service Bus components of the earliest incarnation of Azure. 

I’ve also had the pleasure of creating the LEAP program, which is helping to make our industry more inclusive, and helped to establish Kevin Scott in the freshly re-birthed Office of the CTO. While in OCTO, inspired and guided by a young African engineer, I had the pleasure of supporting the push into our African dev centers (Kenya and Nigeria), which now number around 650 employees.

My current push is to hire folks in the Caribbean, yet another relatively untapped talent market.

These past couple of years have been particularly charged/poignant, with the dual challenges of COVID and the various events leading to the emergence of “Black Lives Matter”.

Throughout the arc of the 24 years I have spent in the company, I have gone from “I’m just here to do a job”, to “There is a job I MUST do to support my black community”.  I have been happy that the company has given me the leeway to do what I do, while occasionally participating in bread and butter activities. 

I am encouraged to see and interact with a lot more melanin enhanced people from around the world, and in the US specifically.  We have a long road to go, but we are in fact making progress.

Over the past year, I have thought about what I can do, how I can leverage my 35+ years of experience in tech, to empower even more people, and enable the next generation to leapfrog my own achievements.  To that end, I’ve started speaking out, starting ventures, providing support, beyond the confines of our corporate walls.  I have appeared on several podcasts over the past couple of months, and will continue to appear in a lot more.  This year I will be making appearances at conferences, writing a book, etc.

If you’re interested in following along, and getting some insights about this guy that pesters you in email on occasion, you can check out my web site, which is growing and evolving.

William A Adams (william-a-adams.com)

William A Adams Podcast Guest Appearances

At the bottom of the media page (the second link), you’ll see a piece by the Computer History Museum in Silicon Valley.  Some have already seen it, but there’s actually a blog post the museum did that goes along with it.  It’s one of those retrospectives of a couple of black OGs in tech (me and my brother) from the earlier days in Silicon Valley, up to the present.

And so it goes.  We have spent another month reflecting on blackness in America.  We are making positive strides, and have so much more to achieve.  I am grateful for the company that I keep, and the continued support that I enjoy in these endeavors.

Don’t be surprised if I ask you to come and give a talk somewhere in the Caribbean within the coming year.  We are transforming whole communities with the simple acts of being mindful, intentional, and present.

  • William

And with that, dear reader, welcome back to my blog, wherein I will be a regular contributor, sharing thoughts in long form, sometimes revisiting topics of old, and mostly exploring topics anew.


Microsoft, the Musical?

Well, it was primarily driven by Microsoft summer 2019 interns

I saw reference to this while browsing through some Teams channels, and I thought, Oh, ok, one of those shaky cell phone productions, let’s see…

Oh boy was I wrong. This is full on “ready for broadway” style musical entertainment. My takeaways: We have cool interns, look at that diverse bunch, they can sing, they can produce, they did this as a passion project while still maintaining their day jobs…

I’ve never been a corporate apologist, and I’ve poked the bear in the eye more times than I’ve stroked it, but this made me feel really happy. I was entertained, and proud that the company I work for can show such fun and mirth while doing a tongue-in-cheek sendup. Tech is supposed to be fun, and this was fun.

I’m sure other companies will follow suit in years to come, or perhaps they already do this and I just haven’t seen it.

Watching this was a well-spent 10 minutes to start my week.


Did I really need 3 desktops?

It’s been about 3 years since I built a monster of a desktop machine with water cooling, fans, LEDs and all that. Long time readers will remember the disastrous outcome of that adventure, as the liquid leaked, drowned my electronics, and caused me to essentially abandon the thing. I somewhat recovered from the fiasco by purchasing a new motherboard, and trying to limp along with various components from that original build. Recently, I decided to throw in the towel and start from scratch. This time around, I decided to go with an AMD build, because AMD seems to be doing some good stuff with their Ryzen and beyond chippery. So, I put together a rig around a Ryzen 7, 32GB of RAM, and the same old nVidia 1060 video card. Upgraded the SSD to the latest Samsung 980, or whatever it is.

That system was acting a bit flaky, to the point I thought it was defective, so I built another one, this time with a Ryzen 5, and nothing from the old builds. New power supply, SSD, RAM, video card. That one worked, and it turns out the Ryzen 7 based system worked as well. It only needed a BIOS update to fix the network adapter not handling the sleep state of Windows 10.

So, now I have two working desktop machines. But wait, the second motherboard from the previous disastrous PC build probably still works? Maybe it just needs a new power supply and other components and I can resurrect it? And hey, how about that Athlon system sitting over there? That was my daily driver from 2010 until I decided to build the Intel water-cooled disaster. I think that machine will make for a good build for the workshop in the garage. I need something out there to run the CNC machine, or at least play some content when I’m out there.

I did finally decommission one machine. The Shuttle PC I built with my daughter circa 2005 finally gave up the ghost. I tried to start it, and the 2TB hard drive just clicks… Too bad. That machine was quite a workhorse when it came to archiving DVDs and other disks over the years. May it rest in peace.

There was one bit of surgery I did on an old laptop. I had a ThinkPad X1 Carbon from work which succumbed to the elements sometime last year. I had tech support take the SSD out of it so I could transfer the contents somewhere else. Given the machine is 4+ years old, it wasn’t as simple as a standard NVMe SSD. Oh no, it was some special sort of thing which required quite a lot of searching about to find an appropriate adapter. I finally found it, plugged the SSD into it, plugged that into an external enclosure, then into USB 3.0, and finally got the old stuff off of it! So, now I have this awesome adapter card that I could only use once, awaiting the next old X1 Carbon someone needs to back up.

All ramblings aside, I’ve recently been engaged in writing some code related to parsing. Two bits in particular, a gcode parser and a streaming JSON parser, are things I’ll be writing about.

And so it goes.


As the Tech Turns

I joined the Office of the CTO at Microsoft just over two years ago. I was a ‘founding member’, joining just as Kevin Scott came on as Microsoft’s new CTO. I have now moved on to another job, in a different CTO office (Mark Russinovich’s, in Azure).

I noticed one thing while I was in OCTO: I essentially stopped blogging. Why was that? Well, probably the main reason is that when you’re in that particular office, you’re privy to all sorts of stuff, most of it very private, either to Microsoft or to other partners in the industry. Not really the kind of stuff you want to write about in a loud way. My colleague Mat Velosso managed to maintain a public voice while in the office, but I didn’t find I could do it. Now as it turns out, my new job is all about having a voice, and helping to make engineering at Microsoft that much better.

But, before I get into all that, I want to reflect on tech.

I’m in my home office, and I’m looking around at all this clutter. I’ve got 8GB SD cards sitting around with who knows what on them. I’ve got motherboards sitting openly on my bench. I’ve got more single board computers than you can shake a stick at. Various bits and bobs, outdated tech books, mice and keyboards galore, laptops that have long since been abandoned, and five 23″ LCD displays sitting on the floor.

That’s just in my office. My other cave has a similar pile of computers, displays, TVs, and other tech leavings from the past 10 years of hobbying around, trying to stay current. What to do?

Well, donate all that you can to good places. Don’t give working displays to the PC recycler; they’ll just tear them apart. Find a school, a non-profit, a deserving person. Then, all those Raspberry Pi versions you never took out of their boxes: send them to the PC recycler, or give them to a school. If there’s one thing I’ve learned about single board computers, it’s that if you don’t have an immediate use for them, they’re not worth buying.

Books go to the library, or if you have a local “Half Price Books”, maybe you can get some money back. More than likely, if they’re older than 5 years, they’re headed for the compost pile.

I have saved one PS/2 style keyboard and mouse set, because they’re actually useful.

I want to reconfigure my office. Now that 4K UHD monitors/TVs are $250, it makes sense to purchase them as decorations for a room. A couple of those 55″ versions up on the walls gives me connectivity for any computers, as well as the ability to do things like use them as a false window. I need more workspace. My current configuration is sets of drawers, which hide who knows what, a countertop which is too narrow, and bookshelves, which can’t hold much weight. So, out it all goes, and in come the wire rack shelving units, 24″ deep.

Copy off all the stuff from those random SD cards, and throw away the ones that are less than 32GB, because you know you’ll lose them before you ever use them again, and you’ll always be wondering what’s on them. Digitize those photo albums, use one of your many SBCs to set up a NAS, copy everything there, and back it up to a cloud service.

For me: new job, new tech, new office space. Time to blab and share again.


Commemorating MLK’s Passing

Dr. Martin Luther King Junior was assassinated April 4th, 1968. That was 51 years ago today. In order to commemorate his passing, I penned the following and shared it with my coworkers at Microsoft.

On this very important day in history, I am contemplative.  As we consider the importance of naming our ERG, I am reflective upon how we got here.

I was only 4 years old on that fateful day when “they killed Martin!”, so I don’t remember much, other than adults crying, smoking, drinking, talking adult talk, nervous wringing of hands, and us kids playing outside.

In my tech career of roughly 35 years, I’ve often been “the only…”.  It’s been an interesting walk.  In most recent years, as I become more Yoda and less Samuel Jackson, I have contemplated these things:

Would I have died fighting rather than be put on the ship

Would I have jumped into the ocean rather than be taken

Would I have fought back upon first being whipped, choosing pride and honor over subjugation

Would I have had the bravery to run away

Would I have had the bravery to help those who ran away

Would I have had the courage to learn to read

Would I have had the strength to send my children to school

Would I have had the strength to drink from the water fountain not meant for me

Would I have had the courage to simply sit

Would I have had the tenacity to face the smoke bombs, water cannons and dogs

Would I have had the conviction to carry on a struggle, long after my inspirational leader was lost…

And here I sit today, and I contemplate.  Will I recognize my calling?  Will I recognize my civil rights moment?  Will I be able to throw off my golden handcuffs and do what’s right?

If we collectively think “Black” means anything, we collectively can’t ignore the passing of this particular day.  I encourage us all to reflect on who we are, where we come from, and where we intend to go in the future.