The Future is Nigh

It’s already January 5th, and I haven’t made any predictions for the year.  I thought about doing this last year, and then scrapped the post because I couldn’t think faster than things were actually happening.  How to predict the future?  A couple years back, I followed the advice of not trying to predict what will come, but what will disappear.  While perhaps more accurate, it’s less satisfying than making the fanciful sort of over-the-top prediction.  So, here’s a mix.

NVIDIA will convince itself that it is the sole heir to the future of GPU computing.  With their new Tegra X1 in hand, their blinders will go up to any other viable alternatives, which means they’ll be blindsided by reality at some point.  It is very interesting to consider the amount of compute power that is being brought to bear in very small packages.  The X1 is no bigger than typical chips, yet has teraflop capabilities.  Never mind that that’s more than your typical desktop of a few years back; throw a bunch of these together in a desktop sized box, and you’ve got all of the NSA (circa 1988) at your fingertips.

More wires will disappear from my compute space.  The Wifi thing has surely been a huge boost for eliminating wires from all our lives.  In my house, which was built in 2008, every room is wired with phone jacks, coax connectors, and some of them with ethernet ports.  The only room where I actually use the ethernet ports is my home office, where I have several devices, some of which work best when wired up to the central hub over gigabit connections (the NAS box).  But hey, in the garage, it’s all wireless.  Even the speakers for the entertainment system are wireless.  Between traditional wifi and bluetooth, I’m losing wires from speakers, keyboards, mice, devices.  The last remaining wires are for power and HDMI connectors.  I suspect the power cords will stick around for quite some time, although, increasingly, devices might run off batteries more, so power will be a recharge thing rather than primary.  HDMI connections are soon to be a thing of the past.  This will happen in two different ways.  The first will be those simple devices that give you a wireless HDMI connection, a highly specialized device that you can already purchase today.  The second will be the fact that ‘smart glass’, either in the form of large ‘tablets’ or HDMI compute dongles, will essentially turn every screen into a remote desktop display, or smarter.  And those devices will all speak wifi and/or bluetooth.

So yah, compute will continue to go down the power/compute density curve, and more wires will disappear.

What about storage?  I’ve got terabytes sitting idle at home.  Every single device, no matter how small, has at least a few gigabytes of one form or another.  Adding a terabyte drive to a media player costs around $50 or so.  The entirety of my archived media library is no more than a couple of terabytes.  Storage is essentially free at this point.  So what about the cloud?  I don’t have a good way to back up all my terabytes of stuff.  The easiest solution might be to get a storage account with someone, and mirror from my NAS up to the cloud.  But, that assumes my NAS is a central point for all the things stored in my home.  It’s not.  It contains most of my media libraries, and tons of backed up software and the like, but not everything.

Perhaps the question isn’t merely one of storage; perhaps it’s about data centers in general.  With terabytes lying about the house, and excess compute power a mere $35 away, what might emerge?  I think the home data center is on the precipice.  What might that look like?  Well, some clever software that is configured to gather, backup, distribute, and consolidate stuff.  Run RAID across multiple storage devices in the home.  Do backup streaming to the cloud, or simply across multiple storage devices in the home.  The key is having the software be brain dead simple to install and run.  A set-and-forget setup.  This raises the question: will the home network change?  Notions of identity and permissions and the like.  We’ll likely see a shift away from the more corporate notions of identity and authorization to a more home based solution.

And what about all that compute power?  It’s getting smaller, and requiring less energy, as usual.  Self driving cars?  Well, at least parallel parking assist would be a start, and that’s how it will start.  I’d expect a lot of this compute power to be used to improve surveillance capabilities.  Image processing will continue to make leaps in accuracy and capability.  This, tied with the rise of autonomous vehicles, is likely to make a landscape full of tiny flying and roving devices that can track a lot of things.  But that’s rather broad.  In the short term, it will simply mean that military recon will get easier, and not just for large players such as the US.  Corporate espionage will also get easier as a result.

And from the fringe?  The pace of technological advancement is always accelerating.  One thing builds on the next, and the next big thing shows up before you know it.  Writing computer software is still kind of clunky and slow for the most part, but I would expect the pace here to actually accelerate.  It will happen because the price of emulation/simulation is decreasing.  Between FPGAs and dynamically configurable virtual machines, it’s becoming easier to create new CPUs, GPUs and the like, try them out at massive scale, make improvements on the fly, and generally leverage and build systems faster than ever before.  CPUs will not only become simulated silicon, they will stop being manufactured as hard silicon, because the cycle times to create new hard silicon will be much slower than what you can do in simulation.  Parallelism will become a thing.  We will throw off the shackles of trying to build massively parallel systems using mutexes, and instead re-embrace message passing as the way the world works.  This, combined with new understandings of how massively parallel systems like brains work, will give a renewed emphasis on AI systems, built on the scale of data centers, and continents.  This will be the advantage of the data center.  Not just compute and storage, but consolidated knowledge, and increased AI capabilities.

And so it goes.  The future is nigh, so we’ll see what the future brings.


GraphicC – Random Lines

Thus far, I’ve been able to draw individual pixels, some rectangles, and straight vertical and horizontal lines.  Now it’s time to tackle those lines that have an arbitrary slope.


In this case, we’re only interested in the line drawing, not the rectangles and triangles.  Here’s the example code:


#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// draw horizontal lines top and bottom
	raster_rgba_hline(&pb, 0, 0, width - 1, pWhite);
	raster_rgba_hline(&pb, 0, height - 1, width - 1, pWhite);

	// draw vertical lines left and right
	raster_rgba_vline(&pb, 0, 0, height - 1, pGreen);
	raster_rgba_vline(&pb, width - 1, 0, height - 1, pTurquoise);

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

	// draw a couple of rectangles
	raster_rgba_rect_fill(&pb, 5, 5, 60, 60, pLightGray);
	raster_rgba_rect_fill(&pb, width - 65, height - 65, 60, 60, pLightGray);

	// draw a rectangle in the center
	pb_rgba fpb;
	pb_rgba_get_frame(&pb, (width / 2) - 100, (height / 2) - 100, 200, 200, &fpb);
	raster_rgba_rect_fill(&fpb, 0, 0, 200, 200, pBlue);

	// Draw triangle
	raster_rgba_triangle_fill(&pb, 0, height - 10, 0, 10, (width / 2) - 10, height / 2, pGreen);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_writebitmap.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

This is the part we’re interested in here:

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

Basically, feed in x1, y1, x2, y2 and a color, and a line will be drawn. The operation is SRCCOPY, meaning, the color is not blended, it just lays over whatever is already there. And the line drawing routine itself?

#define sgn(val) ((0 < (val)) - ((val) < 0))

// Bresenham simple line drawing
void raster_rgba_line(pb_rgba *pb, unsigned int x1, unsigned int y1, unsigned int x2, unsigned int y2, int color)
{
	int dx, dy;
	int i;
	int sdx, sdy, dxabs, dyabs;
	unsigned int x, y, px, py;

	dx = x2 - x1;      /* the horizontal distance of the line */
	dy = y2 - y1;      /* the vertical distance of the line */
	dxabs = abs(dx);
	dyabs = abs(dy);
	sdx = sgn(dx);
	sdy = sgn(dy);
	x = dyabs >> 1;
	y = dxabs >> 1;
	px = x1;
	py = y1;

	pb_rgba_set_pixel(pb, x1, y1, color);

	if (dxabs >= dyabs) /* the line is more horizontal than vertical */
	{
		for (i = 0; i < dxabs; i++)
		{
			y += dyabs;
			if (y >= (unsigned int)dxabs)
			{
				y -= dxabs;
				py += sdy;
			}
			px += sdx;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
	else /* the line is more vertical than horizontal */
	{
		for (i = 0; i < dyabs; i++)
		{
			x += dxabs;
			if (x >= (unsigned int)dyabs)
			{
				x -= dyabs;
				px += sdx;
			}
			py += sdy;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
}

This is a fairly simple implementation of the well known Bresenham line drawing algorithm. It may no longer be the fastest in the world, but it’s fairly effective and computationally simple. If we wanted to do a blend of colors, then the ‘pb_rgba_set_pixel’ could simply be replaced with the blending routine that we saw in the other line drawing routines.

And that’s that. You could implement this as EFLA, or any number of other routines that might prove to be faster. But, why optimize too early? It might be that memory access is the bottleneck, and not the simple calculations done here. Bresenham is also fairly nice because everything is simple integer arithmetic, which is both fairly fast and easy to implement on an FPGA, if it comes to that. Certainly on a microcontroller, it would be preferred over float/double.

While we’re at it, how about that bitmap writing to file routine?

	int err = write_PPM("test_writebitmap.ppm", &pb);

I’ve seen plenty of libraries that spend an inordinate number of lines of code on the simple image load/save portion of things. That can easily dominate anything else you do. I didn’t want to really pollute the basic codebase with all that, so I chose the ‘ppm’ format as the only format that the library speaks natively. It’s a good ol’ format, nothing fancy, a basic RGB dump of values, with some text at the beginning to describe the format of the image. The routine looks like this:


#pragma once

#ifndef PBM_H
#define PBM_H

#include "pixelbuffer.h"

#ifdef __cplusplus
extern "C" {
#endif

int write_PPM(const char *filename, pb_rgba *fb);

#ifdef __cplusplus
}
#endif

#endif  // PBM_H


#include <stdio.h>

#include "pbm.h"

#pragma warning(push)
#pragma warning(disable: 4996)	// _CRT_SECURE_NO_WARNINGS (fopen)

int write_PPM(const char *filename, pb_rgba *fb)
{
	FILE * fp = fopen(filename, "wb");
	if (!fp) return -1;

	// write out the image header
	fprintf(fp, "P6\n%d %d\n255\n", fb->frame.width, fb->frame.height);

	// write the individual pixel values in binary form
	unsigned int * pixelPtr = (unsigned int *)fb->data;

	for (size_t row = 0; row < fb->frame.height; row++) {
		for (size_t col = 0; col < fb->frame.width; col++) {
			fwrite(&pixelPtr[col], 3, 1, fp);
		}
		pixelPtr += fb->pixelpitch;
	}

	fclose(fp);

	return 0;
}

#pragma warning(pop)

There are lots of different ways to do this, but basically, get a pointer to the pixel values, and write out the RGB portion (ignoring the Alpha portion). The first line written is plaintext: it gives the format ‘P6’, followed by the width and height of the image. And that’s that. The internal format of a framebuffer is fairly simple, so writing a whole library that can read and write things like GIF, PNG, JPG and the like can be done fairly easily, and independently of the core library. And that’s probably the best way to do it. Then the consumer of this library isn’t forced to carry along complexity they don’t need, but can simply compose what is necessary for their needs.

Alright then. There is one more primitive, the triangle, which will complete the basics of the drawing routines. So, next time.

GraphicC – Who’s line is it?


After pixels, drawing lines is probably the next most useful primitive in a graphics system.  Perhaps bitblt is THE most useful, but lines are certainly up there.  There are a few things going on in the picture above, so I’m going to go through them bit by bit.

This is the code that does it:

#include "test_common.h"

void test_linespan()
{
	size_t width = 320;
	size_t height = 240;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// Set a white background
	raster_rgba_rect_fill(&pb, 0, 0, width, height, pWhite);

	// do a horizontal fade
	int color1 = RGBA(127, 102, 0, 255);
	int color2 = RGBA(0, 127, 212, 255);

	for (size_t row = 0; row < 100; row++) {
		int x1 = 0;
		int x2 = width - 1;
		raster_rgba_hline_span(&pb, x1, color1, x2, color2, row);
	}

	// Draw a button looking thing
	// black
	raster_rgba_hline(&pb, 20, 20, 80, pBlack);
	raster_rgba_hline(&pb, 20, 21, 80, pDarkGray);
	raster_rgba_hline(&pb, 20, 22, 80, pLightGray);

	// light gray rect
	raster_rgba_rect_fill(&pb, 20, 23, 80, 24, pLightGray);

	// fade to black
	for (size_t col = 20; col < 100; col++) {
		raster_rgba_vline_span(&pb, 46, pLightGray, 77, pBlack, col);
	}

	// Draw some blended lines atop the whole
	for (size_t row = 35; row < 120; row++) {
		raster_rgba_hline_blend(&pb, 45, row, 100, RGBA(0, 153, 153, 203));
	}

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_linespan.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_linespan();

	return 0;
}

In most graphics systems, there are three kinds of lines:

  • horizontal
  • vertical
  • sloped

Along with the kind of line, there are typically various attributes such as thickness, color,
and in the case of joined lines, what kind of joinery (butted, mitered, rounded).

At the very lowest level, though, there are simply lines, and the things I focus on are slopes and colors.

There are two basic line kinds that are of particular interest right off the bat: horizontal and vertical.  They are special cases because their drawing can be easily optimized, and if you have any acceleration hardware, these are most likely included.  This kind of brings up another design desire.

  • Can be implemented in an fpga

But, that’s for a much later time.  For now, how about those horizontal and vertical lines?
If what you want is a solid line, of a single color, ignoring any transparency, then you want a simple hline:

int raster_rgba_hline(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
	size_t terminus = x + length;
	terminus = terminus - x;

	unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch + x];
	for (size_t idx = 0; idx < terminus; idx++) {
		*data++ = value;
	}

	return 0;
}

This makes a simple rectangle easy to implement as well:

#define raster_rgba_rect_fill(pb, x1, y1, width, height, value) \
	for (size_t idx = 0; idx < height; idx++) { \
		raster_rgba_hline(pb, x1, y1 + idx, width, value); \
	}

Simple single color horizontal lines can be drawn quickly, often times using some form of memset
on the host platform. Even in the case where a tight inner loop is used, what appears to be
fairly slow can be quite fast depending on how your compiler optimizes things.

There is a corresponding ‘vline’ routine that does similar.

int raster_rgba_vline(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
    unsigned int * data = &((unsigned int *)pb->data)[y*pb->frame.width + x];
    size_t count = 1;
    while (count <= length) {
        *data = value;
        data += pb->pixelpitch;
        count++;
    }

    return 0;
}

Here, the same basic assignment is used, after determining the starting offset of the first
pixel. The pointer is advanced by the pixelpitch, which in this case is the count of 32-bit
integer values per row. If you’re rendering into a framebuffer that doesn’t conform to this,
then this offset can be adjusted accordingly.

Horizontal and vertical lines of a solid color are nice, and can go a long way towards satisfying
some very basic drawing needs. But, soon enough you’ll want those lines to do a little bit more
work for you.

void raster_rgba_hline_span(pb_rgba *pb, int x1, int color1, int x2, int color2, int y)
{
    int xdiff = x2 - x1;
    if (xdiff == 0)
        return;

    int c1rd = GET_R(color1);
    int c1gr = GET_G(color1);
    int c1bl = GET_B(color1);

    int c2rd = GET_R(color2);
    int c2gr = GET_G(color2);
    int c2bl = GET_B(color2);

    int rdx = c2rd - c1rd;
    int gdx = c2gr - c1gr;
    int bdx = c2bl - c1bl;

    float factor = 0.0f;
    float factorStep = 1.0f / (float)xdiff;

    // draw each pixel in the span
    for (int x = x1; x < x2; x++) {
        int rd = c1rd + (int)(rdx*factor);
        int gr = c1gr + (int)(gdx*factor);
        int bl = c1bl + (int)(bdx*factor);
        int fg = RGBA(rd, gr, bl, 255);
        pb_rgba_set_pixel(pb, x, y, fg);

        factor += factorStep;
    }
}

Let’s imagine you want to draw that fade from green to blue. You could do it pixel by pixel,
but it’s nice to have a simple line drawing routine that does it for you.

The ‘raster_rgba_hline_span’ routine takes two colors, as well as the x1, x2, and y values. It
will draw a horizontal line from x1 up to x2, fading from color1 to color2 in a linear fashion.
The corresponding vline_span routine does the same vertically. This is nice and simple.
You could modify it a bit to change the parameters that are passed in, and fuss with trying to
optimize the loop in various ways, but this basically works.

Great, we have solid lines, line spans with color fading, what about transparency?

The individual pixels are represented with 32-bits each. There are 8-bits each for R, G, and B
color components. The ‘A’ portion of the color is not used by the frame buffer, but rather used
to represent an opacity value. The opacity just determines how much of the color will be shown
if this color is blended with another color.

In the picture above, the turquoise rectangle is blended with the pixels that are below, rather
than just copying its pixels atop the others.

In order to do that, we want to draw some horizontal lines, but we want to blend the colors
instead of copy them:

// Draw some blended lines atop the whole
for (size_t row = 35; row < 120; row++) {
    raster_rgba_hline_blend(&pb, 45, row, 100, RGBA(0, 153, 153, 203));
}

The ‘raster_rgba_hline_blend’ routine does the needful. It takes a color with an alpha (opacity)
value of 203, and blends that with whatever is already in the specified frame buffer. It looks
like this:

#define blender(bg, fg, a) ((uint8_t)(((fg)*(a) + (bg)*(255 - (a))) / 255))

#define blend_color(bg, fg) RGBA(                \
    blender(GET_R(bg), GET_R(fg), GET_A(fg)),    \
    blender(GET_G(bg), GET_G(fg), GET_A(fg)),    \
    blender(GET_B(bg), GET_B(fg), GET_A(fg)),    255)

int raster_rgba_hline_blend(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
    size_t terminus = x + length;
    terminus = terminus - x;

    unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch + x];
    for (size_t idx = 0; idx < terminus; idx++) {
        int bg = *data;
        int fg = value;

        *data++ = blend_color(bg, fg);
    }

    return 0;
}

The use of macros here keeps the operations straight, and provides easy routines that can be used elsewhere.
It’s a pretty straightforward process though. Each time through the inner loop, the ‘bg’
represents the ‘background’ color, which is taken from the pixel buffer. The ‘fg’ value
represents the color that the user passed in. The new value is a blend of the two, using
the alpha component of the foreground color.

This is a fairly basic routine that can be adapted in many ways. For example, instead of
using the alpha value from the foreground color, you could just pass in a fixed alpha value
to be applied. This is useful when doing something like a ‘fade to black’, without having to
alter each of the source colors directly.

This particular routine only works with solid colors. As such, it is just like the plain
hline routine. So why not have a flag on the hline routine that says to do a blend or not?
Better still, why not have a parameter which is an operation to perform between the bg and fg
pixel values? This is how a lot of libraries work, including good ol’ GDI. I think that overly
complicates matters though. Creating a general purpose routine will introduce a giant switch
statement, or a bunch of function pointers, or something else unwieldy. For now, I know I need
only two operators, SRCCOPY and SRCOVER, and that’s it; I don’t currently find need for more exotic
operators. But of course, anything can be built atop or easily changed, so who knows.

And that’s about it for the first two kinds of line drawing routines. The third kind, where the
slope is neither vertical nor horizontal, is a little more work, and thus will be saved for
another time.

Woot! Raise the roof up, horizontal and vertical lines, along with pixel copying. What more
could you possibly expect from a basic low level graphics system? Well, for the longest time,
not much more than this, but there are still some modern conveniences to discuss, such as those
sloped lines, and some more primitives such as triangles.

GraphicC – The Basics

There is one more design constraint I did not mention last time, and it’s a fairly important one for my purposes.

  • Must be easily interfaced from scripting environments

In my particular case, I’m going to favor LuaJIT and its FFI mechanism, and JavaScript, in the form of node.js.  At the very least, this means that I should not utilize too many fancy features, such as classes, operator or function overloading, templates, and the like.  These things are just nasty to deal with from script, and not really worth the benefits in such a small system.  Sticking to C89, or C99 style conventions, at least for the parts that are intended to be publicly accessible, is a good idea.

With that, how about looking at some of the basics.  First up, the most basic thing is the representation of a pixel.  For me, there is a very clear separation between a pixel and a color.  A pixel is a machine representation of a color value.  It is typically in a form that the graphics card can render directly.  Modern day representations are typically RGBA, or RGB, or BGR, BGRA.  That is, one value between 0-255 for each of the color components.  These values are arranged one way or the other, but these are the most typical.  Of course with most modern graphics cards, and leveraging an API such as OpenGL or DirectX, your pixel values can be floats, and laid out in many other forms.  But, I’ll ignore those for now.

Pixel values are constrained in a way, because they are represented by a limited set of byte values.  Colors are a completely different animal.  Then you get into various color spaces such as Cie, or XYZ, or HSL.  They all have their uses depending on the application space you’re in.  In the end though, when you want to display something, you’ve got to convert from those various color spaces to the RGB pixel values that this graphics system can support.

Since I am dealing with pixel values (and not colors), the very lowest level routines deal with pixel buffers, or pixel frames.  That is an array of pixel values.  Simple as that.

typedef struct _pix_rgba {
	unsigned char r, g, b, a;
} pix_rgba;

// On a little endian machine
// Stuff it such that 
// byte 0 == red
// byte 1 == green
// byte 2 == blue
// byte 3 == alpha
#define RGBA(r,g,b,a) (((unsigned int)(a)<<24)|((unsigned int)(b)<<16)|((unsigned int)(g)<<8)|(unsigned int)(r))
#define GET_R(value) ((unsigned int)(value) & 0xff)
#define GET_G(value) (((unsigned int)(value) & 0xff00) >> 8)
#define GET_B(value) (((unsigned int)(value) & 0xff0000) >> 16)
#define GET_A(value) (((unsigned int)(value) & 0xff000000) >> 24)

// pixel buffer rectangle
typedef struct _pb_rect {
	unsigned int x, y, width, height;
} pb_rect;

typedef struct _pb_rgba {
	unsigned char *		data;
	unsigned int		pixelpitch;
	int					owndata;
	pb_rect				frame;
} pb_rgba;

The ‘pb_rgba’ structure is the base of the drawing system. This is the drawing ‘surface’, if you will. It is akin to the ‘framebuffer’ if you were doing graphics in hardware. Things can only be a certain way: the buffer has a known width, height, and number of bytes per row. So far there is only one defined pixel buffer type, and it supports the rgba layout. I could create different pixel buffer types for different pixel layouts, such as rgba16, or rgb15, or bitmap. But, I don’t need those right now, so this is the only one.

I debated a long time whether the ‘pix_rgba’ structure was needed or not. As you can see from the macros ‘RGBA’ and friends, you can easily stuff the rgba values into an int32. This is good and bad, depending. The truth is, having both forms is useful depending on what you’re doing. If there are machine instructions to manipulate bytes in parallel, one form or the other might be more beneficial. This will be put to the test in blending functions later.

So, what are the most basic functions of this most basic structure?

#ifdef __cplusplus
extern "C" {
#endif

int pb_rgba_init(pb_rgba *pb, const unsigned int width, const unsigned int height);
int pb_rgba_free(pb_rgba *pb);

int pb_rgba_get_frame(pb_rgba *pb, const unsigned int x, const unsigned int y, const unsigned int width, const unsigned int height, pb_rgba *pf);

#ifdef __cplusplus
}
#endif

#define pb_rgba_get_pixel(pb, x, y, value) (*(value) = ((unsigned int *)(pb)->data)[((y)*(pb)->pixelpitch)+(x)])
#define pb_rgba_set_pixel(pb, x, y, value) (((unsigned int *)(pb)->data)[((y)*(pb)->pixelpitch)+(x)] = (value))

#define pb_rect_contains(rect, x, y) ((x>=(rect)->x && x<= (rect)->x+(rect)->width) && ((y>=(rect)->y) && (y<=(rect)->y+(rect)->height)))
#define pb_rect_clear(rect) memset((rect), 0, sizeof(pb_rect))

The ‘pb_rgba_init’ function is the beginning. It will fill in the various fields of the structure, and most importantly allocate memory for storing the actual pixel values. This is very important. Without this, the data pointer points to nothing, or some random part of memory (badness). The corresponding ‘pb_rgba_free’ cleans things up.

The ‘pb_rgba_get_frame’ function copies a portion of a pixel array into another pixel array. You could consider this to be a ‘blit’ function in most libraries. There’s a bit of a twist here though. Instead of actually copying the data, the data pointer is set, and the fields of the receiving frame are set, and that’s it. Why? Because there are many cases where you want to move something around, without having to create multiple copies. Just think of a simple ‘sprite’ based game where you copy some fixed image to multiple places on the screen. You don’t want to have to make multiple copies to do it, just reference some part of a fixed pixel buffer, and do what you will.

Then there are the two macros ‘pb_rgba_get_pixel’, and ‘pb_rgba_set_pixel’. These will get and set a pixel value. Once you have these in hand, you have the keys to the graphics drawing kingdom. Everything else in the system can be based on these two functions. Something as simple as line drawing might use these. There may be optimizations in cases such as horizontal line drawing, but for the most part, this is all you need to write textbook perfect drawing routines.

And that pretty much rounds out the basics up to pixel setting.

It’s kind of amazing to me. Even before really getting into anything interesting, I’ve had to make decisions about some very basic data structure representations, and there are various hidden assumptions as well. For example, there is a hidden assumption that all parameters passed to the set/get pixel routines are already within bounds. The routines themselves will not perform any bounds checking. This is nice for a composable system, because the consumer can decide where they want to pay for such checking. Doing it per pixel might not be the best place.

There are assumptions about the endianness of the int32 value on the machine, as well as various other large and small assumptions in terms of interfacing to routines (using macros?). There are even assumptions about what the coordinate system is (Cartesian), and how it is oriented (top left is 0,0). Of course all of these assumptions must be clearly documented (preferably in the code itself), to minimize the surprises as the system is built and used.

This is a good start though. Here’s how we could use what’s there so far.

#include "test_common.h"

void test_blit()
{
	size_t width = 640;
	size_t height = 480;

	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	pb_rgba fpb;
	pb_rgba_get_frame(&pb, 0, 0, width / 2, height / 2, &fpb);

	// draw into primary pixel buffer
	raster_rgba_rect_fill(&pb, 10, 10, (width / 2) - 20, (height / 2) - 20, pBlue);

	// draw the frame in another location
	raster_rgba_blit(&pb, width / 2, height / 2, &fpb);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_blit.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_blit();

	return 0;
}

Setup a pixel buffer that is 640 x 480. Draw a rectangle into the pixel buffer (blue color). Copy that frame into another location in the same pixel buffer. Write the pixel buffer out to a file.


And that’s about it.

Some basic constants, simple data structures, core assumptions, and pixels are being stuffed into a buffer in such a way that you can view a picture on your screen.

Next up, drawing lines and rectangles, and what is a ppm file anyway?

SawStop Contractor Saw Cabinet – Part 1

I have this great table saw, the SawStop contractor saw.  The great thing about it is the ability to stop the saw blade instantly if it ever touches flesh.  Considering that I’m an occasional woodworker, this sounded like a good idea, and actually came from a recommendation of someone who was a regular woodworker, with half a sawn off finger.

Besides being a finger saver, the saw itself is quite a nice saw.  Mine is configured with the nice T-square fence system, rather than the regular contractor’s saw fence.  In addition, it has the 52″ rail, which means the overall length of the thing is 85″.  That’s a pretty big and unwieldy piece of equipment for the garage.


Moving, and thus using, the saw involves lifting it up using that foot lift thing on the saw’s base, and shoving it around, hoping the action doesn’t knock the thing out of alignment while I’m doing it.  So, I’ve scoured the interwebs looking for inspiration on what to do about the situation.  There are quite a few good examples of cabinetry built around table saws.

There are myriad other examples if you just do a search for ‘table saw cabinet’.

Many of these designs are multi-purpose, in that they include a router extension as well as the table saw.  I don’t need that initially, as I have a separate router table that’s just fine.  So, my design criteria are:

  • Must support the entire length of the saw and fence system
  • Must provide some onboard storage
  • Must be easily mobile
  • Must be stable when not mobile
  • Must support adding various extensions

A fairly loose set of constraints (looking just like software), but good enough to help make some decisions.

The very first step is deciding on what kind of mobility I’m going to design for.  I considered many options, but they roughly boil down to locking swivel casters on at least two corners.  For the wheels themselves, I chose a 5″ wheel, where each wheel has a 750lb capacity.  That seems heavy duty enough for this particular purpose.  I could have gone with 3″ wheels, but that seemed too small, and I read from other efforts that the bigger the better, considering the resulting weight of the cabinet could be several hundred pounds, and moving that on small wheels might involve a lot of friction and be difficult.

I chose to use 4 locking swivel wheels, one at each corner.  The overall length of the cabinet is 86″, which had me thinking about sagging.  Perhaps I should stick more wheels mid-span just in case.  But, I chose instead to go with an engineered solution.  The base is built out of a torsion box.  The torsion box consists of 1×4″ lumber forming the internal supports.  That is trimmed by 1×4″ on the outside, and it’s skinned top and bottom by 3/4″ oak plywood.


I studied many different options for constructing this beast.  Probably the best would have been to cut slots in crosswise members and lay them uniformly down the length of the base.  But, I don’t currently have a dado blade for my SawStop, so I went with these smaller cross pieces instead.  I think it actually turns out better, because I get the offsets, which allow me to fasten the cross members to the long runners individually.

Also in this picture, you can see that the corners have been filled in with blocks.  This is where the wheels will mount, once the skin is on.  I didn’t want to have bolts protruding with nuts and washers on the ends, so I went with lag screws into these thick chunks instead.  The chunks are formed by cutting plywood pieces, and gluing them together down in the hole.  That basically forms a nice 3.5″ chunk of wood that is glued through and through from skin to skin.

Here is the base with the skin and wheels on it.


It may not look like it, because the base is sitting atop an assembly table which itself is pretty long, but this thing is pretty big.  It’s also fairly solid.  When I put it on the floor, I stood on it, tried to kick it around and the like, and even without any other supports on it, it’s not moving, bending, flexing, or what have you.  I believe the torsion box will do a nice job.  One deviation I made from the typical cabinets I’ve seen is that they will usually have the wheels touching the ‘top’ skin, with the rest of the torsion box hanging down towards the floor.  Well, I wanted to get the wheels solidly under the whole thing, with no potential for a shearing force breaking the plywood along the mounting plate of the wheel, so I went this direction.  But, that begs the question: there is now roughly 5.5″ of space hanging between the bottom skin and the floor.  What can be done with that?


I thought, well, I can put some drawers down below of course.  I could have just put some hanging drawer sliders down there, and called it a day, but I went with a slightly different design.  I wanted to have something that could change easily over time, so I went with a French cleat system, which can take different attachments as needs change, starting with some drawers that I had lying around from some cabinet that wasn’t being used.


So, two side by side sections of hanging French cleats, the one on the left with a drawer installed.

And finally, the whole mess turned right side up, with some junk thrown into the drawer.


With the offset from the base, the drawers are about 4 inches tall, leaving around 1.5″ to the floor.  That’s a great usage of space as far as I’m concerned.  With this setup, I can keep some things that are commonly used with the table saw, or assembly, or just things that don’t quite have anywhere else to live at the moment.

So, this is phase one.  The non-sagging base, ready for the cabinetry work to be set atop, which will actually hold the saw and table surfaces.

GraphicC – What’s so special about graphics anyway?

I have done graphics libraries off and on over the past few decades of programming.  I wanted to take one more crack at it, just because it’s a fun pastime.  My efforts are on github in the graphicc repository.  This time around, I had some loose design criteria to work from.

  • Use the simplest C constructs as possible
  • Make it small enough to work on a microcontroller
  • Make it flexible enough to be included in any project
  • Make it composable, so that different design tradeoffs can be made
  • Do 2D basics
  • Do 3D basics

These aren’t exactly hard core criteria, but they’re a good enough constraint for me to get started.  What followed was a lot of fumbling about, writing, rewriting, trying out, testing, and rewriting.  It continues to evolve, and shrink in size.  One of the key aspects of the project is trying to discover the elegant design.  This is something that I have tried to do over the years, no matter what piece of engineering I’m working on, whether it be a machine, workbench, software system, or what have you.  It’s relatively easy to come up with brute force designs that get the job done.  It’s relatively hard to come up with the design that gets the job done at minimal expense.

When doing a graphics system, it’s really easy to lose focus and get locked into things that you might find on the host platform.  For example, Simple Direct Media Layer (SDL) is available everywhere, so why would anyone build a window and basic graphics interface today?  Well, SDL is more than I’m shooting for.  Over time, my design constraints may line up with those of SDL, and at such time I’ll have a decision to make about switching to using it.  But, to begin, it’s more than I need.

There are tons of other libraries that might fit the bill as well.

There are literally hundreds to choose from.  Some are commercial, some very mature, some old and abandoned.  Some rely on platform specifics, whether it be X or OpenGL, or DirectX.  Some are completely independent, but are tightly integrated with frameworks.  Some are just right, and exactly what I would use, but have some very stiff license terms that are unpleasant.  This is not to say that a perfect solution does not exist, but there’s enough leeway for me to justify to myself that I can spend time on writing this code.

How to start?  Well, first, I consider my various criteria.  The 3D aspects can easily dominate the entirety of the library.  I’m not after creating a gaming library, but I have similar criteria.  I want to be able to simply display 3D models, whether they be renderings of .stl files, or simple ray traces.  I’m not after creating a library that will support the next Destiny on its own, but creating such a library should be possible standing on the shoulders of graphicc.  That means taking care of some basic linear algebra at the very least, and considering various structures.

In the beginning, there was graphicc.h

Here’s a snippet:

#pragma once 

#ifndef graphicc_h
#define graphicc_h

#include <stdint.h>

typedef double REAL;

// Some useful constants
#ifndef M_PI
#define M_PI			3.14159265358979323846264338327950288
#define M_ROOT_PI		1.772453850905516027
#define M_HALF_PI		1.57079632679489661923132169163975144
#define M_QUARTER_PI	0.785398163397448309615660845819875721
#define M_ONE_OVER_PI	0.318309886183790671537767526745028724

#define M_E				2.71828182845904523536
#define M_EULER			0.577215664901532860606

#define M_GOLDEN_RATIO	1.61803398874989484820458683436563811


#define DEGREES(radians) ((180.0 / M_PI) * (radians))
#define RADIANS(degrees) ((M_PI / 180.0) * (degrees))

From the very first lines, I am making some design decisions about the language environments in which this code will operate. The ‘pragma once’ is fairly typical of modern C++ compilers. It takes care of only including a header file once in a mass compilation. But, since not all compilers deal with that, there’s the more classic ‘ifndef’ as well.

I do make other assumptions, such as including stdint.h.  I’m going to assume this file exists, and contains things like uint8_t.  I’m going to assume that whoever is compiling this code can make sure stdint.h is available, or substitutes appropriate typedefs to make it so.
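For a toolchain that lacks stdint.h, the substitution might look something like this sketch.  HAVE_STDINT_H is a hypothetical configuration flag of my own invention, and the fallback widths assume a typical 32/64-bit compiler; verify them per target.

```c
/* Sketch: fallback typedefs for toolchains without <stdint.h>.
   HAVE_STDINT_H is a made-up flag; the widths below assume a
   conventional compiler where char is 8 bits, short 16, int 32. */
#ifdef HAVE_STDINT_H
#include <stdint.h>
#else
typedef unsigned char  uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int   uint32_t;
typedef signed char    int8_t;
typedef short          int16_t;
typedef int            int32_t;
#endif
```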

Then there’s those monstrous constants done as #define statements.  I could do them as const double XXX, but who knows if that’s right for the platform or not.  Again, it’s easily changeable by whoever is compiling the unit.

Then there’s the ‘typedef double REAL’.  I use ‘REAL’ throughout the library because I don’t want to assume ‘double’ is the preferred type on any given platform.  This makes it relatively easy to change to ‘typedef float REAL’ if you so choose.  Of course each type has a different precision and range, so that needs to be accounted for.  The assumption here is that whoever is doing the compilation will know what they want, and if they don’t, sticking with double is a good default.
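One way to make that choice explicit is a compile-time flag.  GRAPHICC_USE_FLOAT below is a hypothetical name, not something graphicc actually defines; the sketch also pairs the type with an appropriate epsilon, since comparisons need to account for the chosen precision.

```c
/* Sketch: selecting REAL at compile time.  GRAPHICC_USE_FLOAT is
   a made-up flag; use whatever convention fits your build system. */
#include <float.h>

#ifdef GRAPHICC_USE_FLOAT
typedef float REAL;
#define REAL_EPSILON FLT_EPSILON	/* ~1.19e-7, much coarser */
#else
typedef double REAL;
#define REAL_EPSILON DBL_EPSILON	/* ~2.22e-16 */
#endif
```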

The various constants are there mainly because they come up time and again when doing simple math for graphics, so they might as well be readily available.  Same goes for the two conversion routines between degrees and radians.  Of course I should change the divisions to be constants for increased speed.
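That change might look like the following sketch.  The M_DEG_PER_RAD and M_RAD_PER_DEG names are my own invention, not part of the header; the point is just that each conversion reduces to a single multiply.

```c
/* Sketch: fold the divisions into precomputed constants so the
   conversion macros become a single multiply.  These constant
   names are hypothetical, not part of graphicc. */
#define M_DEG_PER_RAD	57.2957795130823208767981548141051703	/* 180/pi */
#define M_RAD_PER_DEG	0.0174532925199432957692369076848861271	/* pi/180 */

#define DEGREES(radians) (M_DEG_PER_RAD * (radians))
#define RADIANS(degrees) (M_RAD_PER_DEG * (degrees))
```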

The rest of this header contains some basic types:

typedef REAL real2[2];
typedef REAL real3[3];
typedef REAL real4[4];

typedef struct _mat2 {
	REAL m11, m12;
	REAL m21, m22;
} mat2;

typedef struct _mat3 {
	REAL m11, m12, m13;
	REAL m21, m22, m23;
	REAL m31, m32, m33;
} mat3;

typedef struct _mat4 {
	REAL m11, m12, m13, m14;
	REAL m21, m22, m23, m24;
	REAL m31, m32, m33, m34;
	REAL m41, m42, m43, m44;
} mat4;

enum pixellayouts {

All of these, except the ‘pixellayouts’, are related primarily to 3D graphics.  They are widely used though, so they find themselves in this central header.  Later, there are decisions about doing things like Catmull-Rom splines using matrices.  These are best done using 4×4 matrices, even if you’re just calculating 2D curves.
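As a taste of where that leads, here’s a sketch of evaluating one coordinate of a Catmull-Rom segment through the 4×4 basis matrix.  The mat4 layout matches the struct above, but catmullrom_eval and the basis table are my own illustration, not functions that exist in graphicc.

```c
typedef double REAL;

typedef struct _mat4 {
	REAL m11, m12, m13, m14;
	REAL m21, m22, m23, m24;
	REAL m31, m32, m33, m34;
	REAL m41, m42, m43, m44;
} mat4;

/* Catmull-Rom basis matrix, with the 1/2 factor folded in. */
static const mat4 catmullrom = {
	-0.5,  1.5, -1.5,  0.5,
	 1.0, -2.5,  2.0, -0.5,
	-0.5,  0.0,  0.5,  0.0,
	 0.0,  1.0,  0.0,  0.0
};

/* Evaluate one coordinate at parameter u in [0,1], given four
   control values p0..p3.  The curve passes through p1 at u=0
   and p2 at u=1; do this per axis for 2D or 3D points. */
REAL catmullrom_eval(REAL u, REAL p0, REAL p1, REAL p2, REAL p3)
{
	REAL u2 = u * u;
	REAL u3 = u2 * u;

	/* [u^3 u^2 u 1] * M gives the four blend weights */
	REAL c0 = u3*catmullrom.m11 + u2*catmullrom.m21 + u*catmullrom.m31 + catmullrom.m41;
	REAL c1 = u3*catmullrom.m12 + u2*catmullrom.m22 + u*catmullrom.m32 + catmullrom.m42;
	REAL c2 = u3*catmullrom.m13 + u2*catmullrom.m23 + u*catmullrom.m33 + catmullrom.m43;
	REAL c3 = u3*catmullrom.m14 + u2*catmullrom.m24 + u*catmullrom.m34 + catmullrom.m44;

	return c0*p0 + c1*p1 + c2*p2 + c3*p3;
}
```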

Before I go any further, a picture is worth a thousand words, no?


This is demonstrating a few things.  Here’s the code:

#include "test_common.h"

void checkerboard(pb_rgba *pb, const size_t cols, const size_t rows, const size_t width, const size_t height, int color1, int color2)
{
	size_t tilewidth = width / cols;
	size_t tileheight = (height / rows);

	for (size_t c = 0; c < cols; c++){
		for (size_t r = 0; r < rows; r++){
			raster_rgba_rect_fill(pb, c*tilewidth, r*tileheight, tilewidth / 2, tileheight / 2, color1);
			raster_rgba_rect_fill(pb, (c*tilewidth) + tilewidth / 2, r*tileheight, tilewidth / 2, tileheight / 2, color2);
			raster_rgba_rect_fill(pb, c*tilewidth, (r*tileheight) + tileheight / 2, tilewidth / 2, tileheight / 2, color2);
			raster_rgba_rect_fill(pb, (c*tilewidth) + tilewidth / 2, (r*tileheight) + tileheight / 2, tilewidth / 2, tileheight / 2, color1);
		}
	}
}

void test_blender()
{
	size_t width = 800;
	size_t height = 600;

	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// Red background
	raster_rgba_rect_fill(&pb, 0, 0, width, height, pRed);

	// create checkerboard background
	checkerboard(&pb, 16, 16, width, height, pLightGray, pYellow);

	// Draw some blended rectangles atop the whole
	for (int offset = 10; offset < 400; offset += 40) {
		float factor = offset / 400.0f;
		int alpha = (int)(factor * 255);
		//printf("factor: %f alpha: %d\n", factor, alpha);

		int fgColor = RGBA(0, 255, 255, alpha);
		raster_rgba_rect_fill_blend(&pb, offset, offset, 100, 100, fgColor);
	}

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_blender.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_blender();

	return 0;
}
Initializing a pixel buffer (a rendering surface), filling it with a checkerboard pattern, then blending progressively opaque rectangles atop that.  Finally, it writes the result out to a ‘.ppm’ file.  It’s a long way from some constant definitions to doing blended rectangle rendering, but each step is pretty simple, and builds upon the previous.  So, that’s what I’ll focus on next.  In the meanwhile, there’s always the code.
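The blend itself is the interesting part of that demo.  Here’s a sketch of the per-channel source-over math that a function like raster_rgba_rect_fill_blend presumably applies to each pixel; this is my reconstruction of the standard formula, not graphicc’s actual code.

```c
#include <stdint.h>

/* Sketch: non-premultiplied source-over blend of one 8-bit channel.
   dst' = src*alpha + dst*(1 - alpha), done in integer arithmetic
   with alpha in [0,255].  Repeat for R, G, and B. */
static inline uint8_t blend_channel(uint8_t dst, uint8_t src, uint8_t alpha)
{
	return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}
```

At alpha 0 the destination survives untouched, at alpha 255 the source replaces it, which is exactly the progression the stacked rectangles show.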

FUD Game Theory and extreme marketing?

There’s this recurring thought that I have.  Every time I see or hear something in the news that sounds slightly unbelievable, I think about it from a game theory standpoint.

How do you get people to do what you want, against their own reason?  One way is through inspiration.  “We’re going to take that hill.  Not all of us will live through it, but our names will live on for our valorous act!!”  The adrenaline gets pumping, and up the hill we go.

Then there’s the more mundane.  I went to Panera Bread the other day, and like most retailers these days, they asked “do you have our customer loyalty card, by which we can collect purchasing habits data on you, and use it for our marketing, and sell your name and number to someone else once it’s no longer of use to us?”.  They didn’t actually say it that way, but it amounts to that.  What do I get in return?  “A secret reward, just for registering (with your phone number)”.  Well, being in Panera, I’m thinking the ‘reward’ must be some free pastry or some such.  Maybe a 10% off coupon.  It seems harmless enough, so why wouldn’t I sign up?  I don’t even have to carry the magnetic strip card, just tie it to my phone number, and I’m all set.  Of course, I’ll use the same phony number I use for all these loyalty card offers, just for kicks.

What am I willing to give up, just to get a little bit of reward here and there?

Then there’s gaming me.  Andy Kaufman was a famous gamer, to extremes.  He would pull off stunts that were way over the top, you know, the kind of joke that goes on far too long, way past the punch line, to the point of embarrassment.  You kept watching and interested, because it was so unbelievable that anyone would go to such lengths of spectacle just to entertain themselves.

Now, I come to the point.  If I want to promote a movie these days, would I use traditional methods?  Would I advertise using trailers in movie theatres, six months before the release?  Nah, my movie is mediocre at best.  How would Andy Kaufman do it?

If I were Andy, and put in charge of marketing a marginal movie with a fairly insensitive plot, I might do the following.  I would ‘leak’ the movie to the internet.  I would do it in such a way that seems unbelievable, outrageous.  I might invoke and enlist the skills of cyber hacking crews, whether real or made up.  I would introduce a certain amount of FUD into the mysterious leaks, tying them into the movie in one way or another.  I might even take scenes from the movie, and have them played out in real life.  I would get nation states involved, get everyone outraged, put the movie on everyone’s lips.  I would get the movie banned, “for public safety”, then I’d leak it some more through various channels.  Black market sales, private screenings, reporters on scene in secret enclaves.  I’d get everyone to a fever pitch, then I’d eventually release the details of the deal being struck for a special world wide premier of the movie, so that brave souls and free citizens everywhere could show they would not bow to the demands of terrorists and thugs.

The Netflix and Blu-ray disc releases would follow soon thereafter, and the company I’m marketing for would make a mint, and reward me handsomely.  At least that’s what Andy Kaufman would do.

The world we live in has a lot of information available at our fingertips.  The challenge of living in such a world is to be ever vigilant, and anchored in values that keep us grounded in the swirl.

