SawStop Contractor Saw Cabinet – Part 2

Last time around, I had finished my torsion box base with low-riding sliding drawers.  The next step in the journey was to construct the box upon which the saw itself will sit.  I looked at many options: solid wood, plywood, open frame, closed frame.  I needed to integrate dust collection as well, and possibly storage.  In the end I settled on a design that combines a couple of these approaches.

[Photo: WP_20150115_004]

The box is constructed entirely of 3/4″ oak plywood.  The bottom is a rectangle put together using Kreg pocket screws.  The ‘mid-top’ is built the same way.  Solid plywood sides hold the two apart.  The very top is a solid piece of plywood, with a hole cut out of it to match the swing of the motor and the dust collection port on the saw.  This could also have been constructed using the Kreg framing, but I wanted to try this way as well.

That forms the basic box.  It was pretty solid, but I wanted to go one step further.  I put additional plywood sides on with full-surface gluing.  This should prevent any front-to-back rocking.  Before putting it to use, I will stick a false front on the thing, and that should eliminate any side-to-side rocking.  It’s feeling pretty solid though, so I don’t think there will be much.

[Photo: WP_20150115_003]

Here’s what it looks like with the saw sitting atop the box, atop the rolling base.  I had to strip it down so that I could then slide it off the base and onto the box without the help of others, or a hoist.

[Photo: WP_20150115_005]

There’s the old steel base, ready to go for its next adventure.

Most of the builds I have seen have the forethought of incorporating a sloping slidey thing for the dust chute, or a drawer, or sliders, and what have you.  I could not think through my dust collection options completely, so I just designed the base as an open box.  That way, I can build any type of drawer, slide, tubing, what have you, and just slide it into the open slot.  If I want to change it later, I can, without having to build a whole new base/box.

I also decided that I don’t need to go for a fully integrated unicabinet design.  In fact, it’s better to make this whole thing modular so that I can change it easily over time as my needs change.  For example, most builds have the saw mounted as I’ve shown here.  Then they have large outfeed tables so that they can do long rips.  Well, truth be told, most of what I’m going to be doing on a table saw is probably longish rips, or fairly short stuff where a sled will be utilized.  So, having all this width isn’t really that beneficial.  Easy enough: I can just turn the box sideways, lay out an outfeed table atop the torsion box, and be done.

To that end, the base is simply screwed down to the torsion box.  It’s not glued.  Other boxes will be constructed, and just screwed down as well.  Whether it’s drawers, a router extension, or what have you, just throw it on there, and get to making chips!

Almost done.  Now I need to construct the simple supports so I can reassemble the table.  The easiest thing will be to simply go back to what I had for now, that is, the super long rails, extension wings and table.  I’ll just have to adjust the length of the legs on the support table, and call it a day.


TINN Reboot

I always dread writing posts that start with “it’s been a long time since…”, but here it is.

It’s been a long time since I did anything with TINN.  I didn’t actually abandon it, I just put it on the back burner as I was writing a bunch of code in C/C++ over the past year.  I did do quite a lot of experimental stuff in TINN, adding new interfaces, trying out new classes, creating a better coroutine experience.

The thing with software is, a lot of testing is required to ensure things actually work as expected and fail gracefully when they don’t.  Some things I took from the ‘experimental’ category are:

fun.lua – A library of functional routines specifically built for LuaJIT and its great handling of tail recursion.

msiterators.lua – Some handy iterators that split out some very MS-specific string types.

Now that msiterators is part of the core, it makes it much easier to do things like query the system registry and get the list of devices, or batteries, or whatever, in a simple table form.  That opens up some of the other little experiments, like enumerating batteries, monitors, and whatnot, which I can add in later.

These are not earth shattering, and don’t represent a year’s worth of waiting, but soon enough I’ll create a new package with new goodness in it.  This begs the question: what is TINN useful for?  I originally created it for the purpose of doing network programming, like you could do with node.  Then it turned into a way of doing Windows programming in general.  Since TINN provides scripted access to almost all the interesting low level APIs that are in Windows, it’s very handy for trying out how an API works, and whether it is good for a particular need.

In addition to just giving ready access to low level Windows APIs, it serves as a form of documentation as well.  When I look at a Windows API, it’s not obvious how to handle all the parameters.  Which ones do I allocate?  Which ones come from the system?  Which special function do I call when I’m done?  Since I read the docs when I create the interface, the wrapper code encapsulates that reading of the documentation, and thus acts as an encapsulated source of knowledge that’s sitting right there with the code.  Quite handy.

At any rate, TINN is not dead, long live TINN!


GraphicC – The triangle’s the thing

Pixels, lines, and squares, oh my!
[Image: test_triangle.ppm]

And, finally, we come to triangles.  The demo code is thus:

#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// Draw some triangles 
	size_t midx = (size_t)(width / 2);
	size_t midy = (size_t)(height / 2);

	raster_rgba_triangle_fill(&pb, 0, 0, width-1, 0, midx, midy, pRed);
	raster_rgba_triangle_fill(&pb, 0, 0, midx, midy, 0, height-1, pGreen);
	raster_rgba_triangle_fill(&pb, width-1, 0, midx, midy, width-1, height - 1, pBlue);
	raster_rgba_triangle_fill(&pb, midx, midy, 0, height-1, width - 1, height - 1, pDarkGray);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_triangle.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

Why triangles? Well, beyond pixels and lines, they are one of the most useful drawing primitives. When it finally comes to rendering in 3D, for example, it will all be about triangles. Graphics hardware over the years has been optimized to render triangles as well, so it is beneficial to focus more attention on this primitive than on any other. You might think polygons are very useful, but all polygons can be broken down into triangles, and we’re back where we started.
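
That reduction is easy to demonstrate. Here’s a small sketch (hypothetical, not part of the library): a convex polygon filled as a fan of triangles that all share vertex 0, using the point2d type and the fill routine shown below. A concave polygon needs a real triangulation (ear clipping and the like), but the reduction to triangles still holds.

void fill_convex_polygon(pb_rgba *pb, const point2d *pts, const size_t nelems, int color)
{
	// fan out from vertex 0: triangles (0,1,2), (0,2,3), (0,3,4), ...
	for (size_t i = 1; i + 1 < nelems; i++) {
		raster_rgba_triangle_fill(pb,
			pts[0].x, pts[0].y,
			pts[i].x, pts[i].y,
			pts[i + 1].x, pts[i + 1].y,
			color);
	}
}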

Here’s the triangle drawing routine:

#define swap16(a, b) { int16_t t = a; a = b; b = t; }

typedef struct _point2d
{
	int x;
	int y;
} point2d;

int FindTopmostPolyVertex(const point2d *poly, const size_t nelems)
{
	int ymin = INT32_MAX;
	int vmin = 0;

	size_t idx = 0;
	while (idx < nelems) {
		if (poly[idx].y < ymin) {
			ymin = poly[idx].y;
			vmin = idx;
		}
		idx++;
	}

	return vmin;
}

void RotateVertices(point2d *res, point2d *poly, const size_t nelems, const int starting)
{
	size_t offset = starting;
	size_t idx = 0;
	while (idx < nelems) {
		res[idx].x = poly[offset].x;
		res[idx].y = poly[offset].y;
		offset++;
		
		if (offset > nelems-1) {
			offset = 0;
		}

		idx++;
	}
}

void sortTriangle(point2d *sorted, const int x1, const int y1, const int x2, const int y2, const int x3, const int y3)
{
	point2d verts[3] = { { x1, y1 }, { x2, y2 }, { x3, y3 } };

	int topmost = FindTopmostPolyVertex(verts, 3);
	RotateVertices(sorted, verts, 3, topmost);
}

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)
{
	int a, b, y, last;

	// sort vertices, such that 0 == y with lowest number (top)
	point2d sorted[3];
	sortTriangle(sorted, x1, y1, x2, y2, x3, y3);

	// Handle the case where points are colinear (all on same line)
	if (sorted[0].y == sorted[2].y) { 
		a = b = sorted[0].x;
		
		if (sorted[1].x < a) 
			a = sorted[1].x;
		else if (sorted[1].x > b) 
			b = sorted[1].x;

		if (sorted[2].x < a) 
			a = sorted[2].x;
		else if (sorted[2].x > b) 
			b = sorted[2].x;

		raster_rgba_hline(pb, a, sorted[0].y, b - a + 1, color);
		return;
	}

	int16_t
		dx01 = sorted[1].x - sorted[0].x,
		dy01 = sorted[1].y - sorted[0].y,
		dx02 = sorted[2].x - sorted[0].x,
		dy02 = sorted[2].y - sorted[0].y,
		dx12 = sorted[2].x - sorted[1].x,
		dy12 = sorted[2].y - sorted[1].y;
	
	int32_t sa = 0, sb = 0;

	// For upper part of triangle, find scanline crossings for segments
	// 0-1 and 0-2. If y1=y2 (flat-bottomed triangle), the scanline y1
	// is included here (and second loop will be skipped, avoiding a /0
	// error there), otherwise scanline y1 is skipped here and handled
	// in the second loop...which also avoids a /0 error here if y0=y1
	// (flat-topped triangle).
	if (sorted[1].y == sorted[2].y) 
		last = sorted[1].y; // Include y1 scanline
	else 
		last = sorted[1].y - 1; // Skip it
	
	for (y = sorted[0].y; y <= last; y++) 
	{
		a = sorted[0].x + sa / dy01;
		b = sorted[0].x + sb / dy02;
		sa += dx01;
		sb += dx02;
		/* longhand:
		a = x0 + (x1 - x0) * (y - y0) / (y1 - y0);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		
		if (a > b) swap16(a, b);
		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}

	// For lower part of triangle, find scanline crossings for segments
	// 0-2 and 1-2. This loop is skipped if y1=y2.
	sa = dx12 * (y - sorted[1].y);
	sb = dx02 * (y - sorted[0].y);
	for (; y <= sorted[2].y; y++) 
	{
		a = sorted[1].x + sa / dy12;
		b = sorted[0].x + sb / dy02;
		sa += dx12;
		sb += dx02;
		/* longhand:
		a = x1 + (x2 - x1) * (y - y1) / (y2 - y1);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		if (a > b) 
			swap16(a, b);

		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}
}

This seems like a lot of work to simply draw a triangle! There are lots of routines for doing this. This particular implementation borrows from a couple of different techniques. The basic idea is to first sort the vertices in scanline order, top to bottom. That is, we want to know which vertex to start from, because we’re going to follow an edge down the framebuffer, drawing lines between edges as we go. In a regular triangle, without a flat top or bottom, there will be a switch between driving edges as we encounter the third point somewhere down the scan lines.

At the end, the ‘raster_rgba_hline’ function is called to actually draw a single line. If we wanted to do blending, we’d change this line. If we wanted to use a color per vertex, it would ultimately be dealt with here. All possible. Not necessary for the basic routine. With some refactoring, this could be made more general purpose, and different rendering modes could be included.

This is the only case where I use a data structure, to store the sorted points. Most of the time I just pass parameters and pointers to parameters around.

This is also a good time to mention API conventions. One of the constraints from the beginning is the need to stick with basic C constructs, and not get too fancy. In some cases this is at the expense of readability, but in general things just work out ok, as it’s simple to get used to. So, let’s look at the API here:

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)

That’s a total of 8 parameters. If I were going for absolute speed, and I knew the platform I was running on, I might try to limit the parameters to 4, to help ensure they get passed in registers, rather than requiring the use of a stack. But, that’s an optimization that’s iffy at best, so I don’t bother. And why not use some data structures? Clearly x1, y1 is a 2D point, so why not use a ‘point’ data structure. Surely where the data comes from is structured as well. In some cases it is, and in others it is not. I figure that in the cases where the data did come from a structure, it’s fairly easy for the caller to do the dereferencing and pass in the values directly. In the cases where the data did not come from a point structure, it’s a tax to pay to stuff the data into a point structure, simply to have it pass in to a function, which may not need it in that form. So, in general, all the parameters are passed in a ‘denormalized’ form. With small numbers of easy to remember parameters, I think this is fine.
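
For illustration, the two situations look something like this (hypothetical values; point2d is the small struct used by the sorting code above):

	point2d p0 = { 0, 0 }, p1 = { 100, 0 }, p2 = { 50, 80 };

	// the data already lives in structures: the caller dereferences
	raster_rgba_triangle_fill(&pb, p0.x, p0.y, p1.x, p1.y, p2.x, p2.y, pRed);

	// the data is just numbers: pass them straight through, no packing tax
	raster_rgba_triangle_fill(&pb, 0, 0, 100, 0, 50, 80, pRed);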

Also, I did decide that it was useful to use ‘const’ in most cases where that is applicable.  This gives the compiler some hints as to how best deal with the parameters, and allows you to specify constants inline for test cases, without compiler complaints.

There is a decided lack of error handling in most of these low level APIs.  You might think that triangle drawing would be an exception.  Shouldn’t I do some basic clipping to the framebuffer?  The answer is no, not at this level.  At a higher level, we might be drawing something larger, where a triangle is but a small part.  At that higher level we’ll know about our framebuffer, or other constraints that might cause clipping of a triangle.  At that level, we can perform the clipping, and send down new sets of triangles to be rendered by this very low level triangle drawing routine.
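
As a sketch of what that higher level might do (a hypothetical helper, not part of the library), the cheapest check is trivial rejection against the framebuffer’s bounding rectangle; real clipping would instead split partially visible triangles into new ones and pass those down:

static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }
static int max3(int a, int b, int c) { int m = a > b ? a : b; return m > c ? m : c; }

void draw_triangle_culled(pb_rgba *pb,
	int x1, int y1, int x2, int y2, int x3, int y3,
	int color)
{
	// trivially reject triangles whose bounding box misses the framebuffer
	if (max3(x1, x2, x3) < 0 || min3(x1, x2, x3) >= (int)pb->frame.width ||
		max3(y1, y2, y3) < 0 || min3(y1, y2, y3) >= (int)pb->frame.height)
		return;

	raster_rgba_triangle_fill(pb, x1, y1, x2, y2, x3, y3, color);
}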

Of course, this routine could be made even more raw and optimized by breaking out the vertex sorting from the edge following part.  If the triangle vertices don’t change, there’s no need to sort them every time they are drawn, so making the assumption that the vertices are already in framebuffer height order can dramatically simplify the drawing portion.  Then, the consumer can call the sorting routines when they need to, and pass in the sorted vertex values.

And thus APIs evolve.  I’ll wait a little longer before making such changes as I might think of other things that I want to improve along the way.

There you have it then.  From pixels to triangles, the basic framebuffer drawing routines are now complete.  What more could be needed at this level?  Well, dealing with some curvy stuff, like ellipses and rounded rectangles might be nice, all depends on what use cases are more interesting first, drawing pies and graphs, or doing some UI stuff.  The first thing I want to do is display some data related to network traffic events, so I think I’ll focus on a little bit of text rendering first, and leave the ellipse and rounded rects for later.

So, next time around, doing simple bitmap text.


The Future is Nigh

It’s already January 5th, and I haven’t made any predictions for the year.  I thought about doing this last year, and then I scrapped the post because I couldn’t think faster than things were actually happening.  How to predict the future?  A couple years back, I followed the advice of not trying to predict what will come, but what will disappear.  While perhaps more accurate, it’s less satisfying than doing the fanciful sort of over the top prediction.  So, here’s a mix.

nVidia will convince itself that it is the sole heir to the future of GPU computing.  With their new Tegra X1 in hand, their blinders will go up as to any other viable alternatives, which means they’ll be blindsided by reality at some point.  It is very interesting to consider the amount of compute power that is being brought to bear in very small packages.  The X1 is no bigger than typical chips, yet has teraflop capabilities.  Never mind that that’s more than your typical desktop of a few years back; throw a bunch of these together in a desktop sized box, and you’ve got all of the NSA (circa 1988) at your fingertips.

More wires will disappear from my compute space.  The Wifi thing has surely been a huge boost for eliminating wires from all our lives.  In my house, which was built in 2008, every room is wired with phone jacks, coax connectors, and some of them with ethernet ports.  The only room where I actually use the ethernet ports is my home office, where I have several devices, some of which work best when wired up to the central hub over gigabit connections (the NAS box).  But hey, in the garage, it’s all wireless.  Even the speakers for the entertainment system are wireless.  Between traditional wifi and bluetooth, I’m losing wires from speakers, keyboards, mice, devices.  The last remaining wires are for power, and HDMI connectors.  I suspect the power cords will stick around for quite some time, although, increasingly, devices might run off batteries more, so power will be a recharge thing rather than primary.  HDMI connections are soon to be a thing of the past.  This will happen in two different ways.  The first will be those simple devices that give you a wireless HDMI connection, a highly specialized device that you can already purchase today.  The second will be the fact that ‘smart glass’, either in the form of large ‘tablets’ or HDMI compute dongles, will essentially turn every screen into a remote desktop display, or smarter.  And those devices will all speak wifi and/or bluetooth.

So yah, compute will continue to go down the power/compute density curve, and more wires will disappear.

What about storage?  I’ve got terabytes sitting idle at home.  I mean every single device, no matter how small, has at least a few gigabytes of one form or another.  Adding a terabyte drive to a media player costs around $50 or so.  The entirety of my archived media library is no more than a couple of terabytes.  Storage is essentially free at this point.  So what about the cloud?  I don’t have a good way to back up all my terabytes of stuff.  The easiest solution might be to get a storage account with someone, and mirror from my NAS up to the cloud.  But, that assumes my NAS is a central point for all the things stored in my home.  It’s not.  It contains most of my media libraries, and tons of backed up software and the like, but not everything.

Perhaps the question isn’t merely one of storage, but perhaps it’s about data centers in general.  With terabytes laying about the house, and excess compute power a mere $35 away, what might emerge?  I think the home data center is on the precipice.  What might that look like?  Well, some clever software that is configured to gather, backup, distribute, and consolidate stuff.  Run RAID across multiple storage devices in the home.  Do backup streaming to the cloud, or simply across multiple storage devices in the home.  The key is having the software be brain dead simple to install and run.  A set and forget setup.  This begs the question, will the home network change?  Notions of identity and permissions and the like.  We’ll likely see a shift away from the more corporate based notions of identity and authorization to a more home based solution.

And what about all that compute power?  It’s getting smaller, and requiring less energy, as usual.  Self-driving cars?  Well, at least parallel parking assist would be a start, and that’s how it will start.  I’d expect a lot of this compute power to be used to improve surveillance capabilities.  Image processing will continue to make leaps in accuracy and capability.  This, tied with the rise of autonomous vehicles, is likely to make a landscape full of tiny flying and roving devices that can track a lot of things.  But that’s rather broad.  In the short term, it will simply mean that military recon will get easier, and not just for large players such as the US.  Corporate espionage will also get easier as a result.

And from the fringe?  The pace of technological advancement is always accelerating.  One thing builds on the next, and the next big thing shows up before you know it.  Writing computer software is still kind of clunky and slow for the most part.  I would expect the pace here to actually accelerate.  It will happen because the price of emulation/simulation is decreasing.  Between FPGAs and dynamically configurable virtual machines, it’s becoming easier to create new CPUs, GPUs and the like, try them out on massive scale, make improvements on the fly, and generally leverage and build systems faster than ever before.  CPUs will not only become simulated silicon, they will stop being manufactured as hard silicon, because the cycle times to create new hard silicon will be much slower than what you can do in simulation.  Parallelism will become a thing.  We will throw off the shackles of trying to build massively parallel systems using mutexes, and instead re-embrace message passing as the way the world works.  This, combined with new understandings of how massively parallel systems like brains work, will give a renewed emphasis on AI systems, built on the scale of data centers, and continents.  This will be the advantage of the data center.  Not just compute and storage, but consolidated knowledge, and increased AI capabilities.

And so it goes.  The future is nigh, so we’ll see what the future brings.



GraphicC – Random Lines

Thus far, I’ve been able to draw individual pixels, some rectangles, and straight vertical and horizontal lines.  Now it’s time to tackle lines that have an arbitrary slope.

[Image: test_writebitmap.ppm]

In this case, we’re only interested in the line drawing, not the rectangles and triangles.  Here’s the example code:


#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// draw horizontal lines top and bottom
	raster_rgba_hline(&pb, 0, 0, width - 1, pWhite);
	raster_rgba_hline(&pb, 0, height - 1, width - 1, pWhite);

	// draw vertical lines left and right
	raster_rgba_vline(&pb, 0, 0, height - 1, pGreen);
	raster_rgba_vline(&pb, width - 1, 0, height - 1, pTurquoise);

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

	// draw a couple of rectangles
	raster_rgba_rect_fill(&pb, 5, 5, 60, 60, pLightGray);
	raster_rgba_rect_fill(&pb, width - 65, height - 65, 60, 60, pLightGray);

	// draw a rectangle in the center
	pb_rgba fpb;
	pb_rgba_get_frame(&pb, (width / 2) - 100, (height / 2) - 100, 200, 200, &fpb);
	raster_rgba_rect_fill(&fpb, 0, 0, 200, 200, pBlue);

	// Draw triangle 
	raster_rgba_triangle_fill(&pb, 0, height - 10, 0, 10, (width / 2) - 10, height / 2, pGreen);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_writebitmap.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

This is the part we’re interested in here:

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

Basically, feed in x1, y1, x2, y2 and a color, and a line will be drawn. The operation is SRCCOPY, meaning, the color is not blended, it just lays over whatever is already there. And the line drawing routine itself?

#define sgn(val) ((0 < val) - (val < 0))

// Bresenham simple line drawing
void raster_rgba_line(pb_rgba *pb, unsigned int x1, unsigned int y1, unsigned int x2, unsigned int y2, int color)
{
	int dx, dy;
	int i;
	int sdx, sdy, dxabs, dyabs;
	unsigned int x, y, px, py;

	dx = x2 - x1;      /* the horizontal distance of the line */
	dy = y2 - y1;      /* the vertical distance of the line */
	dxabs = abs(dx);
	dyabs = abs(dy);
	sdx = sgn(dx);
	sdy = sgn(dy);
	x = dyabs >> 1;
	y = dxabs >> 1;
	px = x1;
	py = y1;

	pb_rgba_set_pixel(pb, x1, y1, color);

	if (dxabs >= dyabs) /* the line is more horizontal than vertical */
	{
		for (i = 0; i < dxabs; i++)
		{
			y += dyabs;
			if (y >= (unsigned int)dxabs)
			{
				y -= dxabs;
				py += sdy;
			}
			px += sdx;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
	else /* the line is more vertical than horizontal */
	{
		for (i = 0; i < dyabs; i++)
		{
			x += dxabs;
			if (x >= (unsigned int)dyabs)
			{
				x -= dyabs;
				px += sdx;
			}
			py += sdy;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
}

This is a fairly simple implementation of the well known Bresenham line drawing algorithm. It may no longer be the fastest in the world, but it’s fairly effective and computationally simple. If we wanted to do a blend of colors, then the ‘pb_rgba_set_pixel’ could simply be replaced with the blending routine that we saw in the other line drawing routines.
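
To make that concrete, here’s a hedged sketch of such a variant (hypothetical, not in the library): the identical Bresenham walk, with each pixel store preceded by a read of the background pixel and a blend_color mix, using the blend macros from the span drawing routines.

void raster_rgba_line_blend(pb_rgba *pb, unsigned int x1, unsigned int y1,
	unsigned int x2, unsigned int y2, int color)
{
	int dx = x2 - x1, dy = y2 - y1;
	int dxabs = abs(dx), dyabs = abs(dy);
	int sdx = sgn(dx), sdy = sgn(dy);
	unsigned int x = dyabs >> 1;
	unsigned int y = dxabs >> 1;
	unsigned int px = x1, py = y1;
	int i, bg;

	// read-modify-write instead of plain assignment
	pb_rgba_get_pixel(pb, px, py, &bg);
	pb_rgba_set_pixel(pb, px, py, blend_color(bg, color));

	if (dxabs >= dyabs)	/* more horizontal than vertical */
	{
		for (i = 0; i < dxabs; i++)
		{
			y += dyabs;
			if (y >= (unsigned int)dxabs) { y -= dxabs; py += sdy; }
			px += sdx;
			pb_rgba_get_pixel(pb, px, py, &bg);
			pb_rgba_set_pixel(pb, px, py, blend_color(bg, color));
		}
	}
	else			/* more vertical than horizontal */
	{
		for (i = 0; i < dyabs; i++)
		{
			x += dxabs;
			if (x >= (unsigned int)dyabs) { x -= dyabs; px += sdx; }
			py += sdy;
			pb_rgba_get_pixel(pb, px, py, &bg);
			pb_rgba_set_pixel(pb, px, py, blend_color(bg, color));
		}
	}
}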

And that’s that. You could implement this as EFLA, or any number of other routines that might prove to be faster. But, why optimize too early? It might be that memory access is the bottleneck, and not the simple calculations done here. Bresenham is also fairly nice because everything is simple integer arithmetic, which is both fairly fast and easy to implement on an FPGA, if it comes to that. Certainly on a microcontroller, integer math would be preferred over float/double.

While we’re at it, how about that bitmap writing to file routine?

	int err = write_PPM("test_writebitmap.ppm", &pb);

I’ve seen plenty of libraries that spend an inordinate number of lines of code on the simple image load/save portion of things. That can easily dominate anything else you do. I didn’t want to pollute the basic codebase with all that, so I chose the ‘ppm’ format as the only format that the library speaks natively. It’s a good ol’ format, nothing fancy: a basic RGB dump of values, with some text at the beginning to describe the format of the image. The routine looks like this:

pbm.h

#pragma once

#ifndef PBM_H
#define PBM_H

#include "pixelbuffer.h"

#ifdef __cplusplus
extern "C" {
#endif

int write_PPM(const char *filename, pb_rgba *fb);

#ifdef __cplusplus
}
#endif

#endif

pbm.cpp

#include "pbm.h"

#include <stdio.h>	// FILE, fopen, fprintf, fwrite, fclose

#pragma warning(push)
#pragma warning(disable: 4996)	// _CRT_SECURE_NO_WARNINGS (fopen) 


int write_PPM(const char *filename, pb_rgba *fb)
{
	FILE * fp = fopen(filename, "wb");
	
	if (!fp) return -1;

	// write out the image header
	fprintf(fp, "P6\n%d %d\n255\n", fb->frame.width, fb->frame.height);
	
	// write the individual pixel values in binary form
	unsigned int * pixelPtr = (unsigned int *)fb->data;

	for (size_t row = 0; row < fb->frame.height; row++) {
		for (size_t col = 0; col < fb->frame.width; col++){
			fwrite(&pixelPtr[col], 3, 1, fp);
		}
		pixelPtr += fb->pixelpitch;
	}
	}


	fclose(fp);

	return 0;
}

#pragma warning(pop)

There are lots of different ways to do this, but basically, get a pointer to the pixel values, and write out the RGB portion (ignoring the alpha portion). The first line written is in plain text, and it gives the format ‘P6’, followed by the width and height of the image. And that’s that. The internal format of a framebuffer is fairly simple, so writing a whole library that can read and write things like GIF, PNG, JPG and the like can be done fairly easily, and independently of the core library. And that’s probably the best way to do it. Then the consumer of this library isn’t forced to carry along complexity they don’t need, but they can simply compose what is necessary for their needs.
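
And for symmetry, here’s a minimal sketch of the reverse trip. This read_PPM is hypothetical, not part of the library, and it assumes exactly what write_PPM produces: binary ‘P6’, a maxval of 255, and no comment lines in the header.

#include "pbm.h"
#include <stdio.h>

int read_PPM(const char *filename, pb_rgba *fb)
{
	FILE * fp = fopen(filename, "rb");
	if (!fp) return -1;

	// header: "P6\n<width> <height>\n255\n"
	unsigned int width, height, maxval;
	if (fscanf(fp, "P6 %u %u %u", &width, &height, &maxval) != 3 || maxval != 255) {
		fclose(fp);
		return -1;
	}
	fgetc(fp);	// consume the single whitespace byte that ends the header

	pb_rgba_init(fb, width, height);

	// read RGB triplets, fixing alpha at 255
	unsigned int * pixelPtr = (unsigned int *)fb->data;
	for (size_t row = 0; row < height; row++) {
		for (size_t col = 0; col < width; col++) {
			unsigned char rgb[3];
			if (fread(rgb, 3, 1, fp) != 1) { fclose(fp); return -1; }
			pixelPtr[col] = RGBA(rgb[0], rgb[1], rgb[2], 255);
		}
		pixelPtr += fb->pixelpitch;
	}

	fclose(fp);
	return 0;
}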

Alright then. There is one more primitive, the triangle, which will complete the basics of the drawing routines. So, next time.


GraphicC – Who’s line is it?

[Image: test_linespan.ppm]

After pixels, lines are probably the next most useful primitive in a graphics system.  Perhaps bitblt is THE most useful, but lines are certainly up there.  There are a few things going on in the picture above, so I’m going to go through them bit by bit.

This is the code that does it:

#include "test_common.h"

void test_linespan()
{
	size_t width = 320;
	size_t height = 240;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);


	// Set a white background
	raster_rgba_rect_fill(&pb, 0, 0, width, height, pWhite);

	// do a horizontal fade
	int color1 = RGBA(127, 102, 0, 255);
	int color2 = RGBA(0, 127, 212, 255);

	for (size_t row = 0; row < 100; row++) {
		int x1 = 0;
		int x2 = width - 1;
		raster_rgba_hline_span(&pb, x1, color1, x2, color2, row);
	}

	// Draw a button looking thing
	// black
	raster_rgba_hline(&pb, 20, 20, 80, pBlack);
	raster_rgba_hline(&pb, 20, 21, 80, pDarkGray);
	raster_rgba_hline(&pb, 20, 22, 80, pLightGray);


	// light gray rect
	raster_rgba_rect_fill(&pb, 20, 23, 80, 24, pLightGray);

	// fade to black
	for (size_t col = 20; col < 100; col++) {
		raster_rgba_vline_span(&pb, 46, pLightGray, 77, pBlack, col);
	}

	// Draw some blended lines atop the whole
	for (size_t row = 35; row < 120; row++){
		raster_rgba_hline_blend(&pb, 45, row, 100, RGBA(0, 153, 153, 203));
	}

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_linespan.ppm", &pb);

}



int main(int argc, char **argv)
{
	test_linespan();

	return 0;
}

In most graphics systems, there are three kinds of lines:

  • horizontal
  • vertical
  • sloped

Along with the kind of line, there are typically various attributes such as thickness, color,
and in the case of joined lines, what kind of joinery (butted, mitered, rounded).

At the very lowest level though, there are simply lines, and the things I focus on are slopes and colors.

There are two basic line kinds that are of particular interest right off the bat: horizontal and vertical.  They are special cases because their drawing can be easily optimized, and if you have any acceleration hardware, these are most likely included.  This kind of brings up another design desire.

  • Can be implemented in an fpga

But, that’s for a much later time.  For now, how about those horizontal and vertical lines?
If what you want is a solid line, of a single color, ignoring any transparency, then you want a simple hline:

int raster_rgba_hline(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
	// offset to the first pixel of the run, then assign in a tight loop
	unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch+x];
	for (size_t idx = 0; idx < length; idx++)
	{
		*data = value;
		data++;
	}

	return 0;
}

This makes a simple rectangle easy to implement as well:

#define raster_rgba_rect_fill(pb, x1, y1, width, height, value) for (size_t idx = 0; idx < height; idx++){raster_rgba_hline(pb, x1, y1 + idx, width, value);	}

Simple single color horizontal lines can be drawn quickly, often using some form of fast memory fill (memset, or a word-wide equivalent) on the host platform. Even in the case where a tight inner loop is used, what appears to be fairly slow can be quite fast depending on how your compiler optimizes things.

There is a corresponding ‘vline’ routine that does the same for vertical runs.

int raster_rgba_vline(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
    // offset of the first pixel, using pixelpitch so sub-frames work too
    unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch + x];
    size_t count = 1;
    while (count <= length) {
        *data = value;
        data += pb->pixelpitch;
        count++;
    }

    return 0;
}

Here, the same basic assignment is used, after determining the starting offset of the first
pixel. The pointer is advanced by the pixelpitch, which in this case is the count of 32-bit
integer values per row. If you’re rendering into a framebuffer that doesn’t conform to this,
then this offset can be adjusted accordingly.

Horizontal and vertical lines of a solid color are nice, and can go a long way towards satisfying
some very basic drawing needs. But, soon enough you’ll want those lines to do a little bit more
work for you.

void raster_rgba_hline_span(pb_rgba *pb, int x1, int color1, int x2, int color2, int y)
{
    int xdiff = x2 - x1;
    if (xdiff == 0)
        return;

    int c1rd = GET_R(color1);
    int c1gr = GET_G(color1);
    int c1bl = GET_B(color1);

    int c2rd = GET_R(color2);
    int c2gr = GET_G(color2);
    int c2bl = GET_B(color2);

    int rdx = c2rd - c1rd;
    int gdx = c2gr - c1gr;
    int bdx = c2bl - c1bl;

    float factor = 0.0f;
    float factorStep = 1.0f / (float)xdiff;

    // draw each pixel in the span
    for (int x = x1; x < x2; x++) {
        int rd = c1rd + (int)(rdx*factor);
        int gr = c1gr + (int)(gdx*factor);
        int bl = c1bl + (int)(bdx*factor);
        int fg = RGBA(rd, gr, bl, 255);
        pb_rgba_set_pixel(pb, x, y, fg);

        factor += factorStep;
    }
}

Let’s imagine you want to draw that fade from green to blue. You could do it pixel by pixel,
but it’s nice to have a simple line drawing routine that does it for you.

The ‘raster_rgba_hline_span’ routine takes two colors, as well as the x1, x2, and y values. It
draws a horizontal line from x1 up to x2, stepping linearly from color1 toward color2 along the
way. The corresponding vline_span routine does the same down a column (a sketch of it follows
below). This is nice and simple. You could modify it a bit to change the parameters that are
passed in, and fuss with trying to optimize the loop in various ways, but this basically works.
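
The vertical span routine isn’t shown in this post, but a sketch of its likely shape simply mirrors hline_span, stepping the blend factor down a column instead of across a row (the real routine may differ in details). The signature matches how the test code above calls it:

void raster_rgba_vline_span(pb_rgba *pb, int y1, int color1, int y2, int color2, int x)
{
	int ydiff = y2 - y1;
	if (ydiff == 0)
		return;

	int rdx = GET_R(color2) - GET_R(color1);
	int gdx = GET_G(color2) - GET_G(color1);
	int bdx = GET_B(color2) - GET_B(color1);

	float factor = 0.0f;
	float factorStep = 1.0f / (float)ydiff;

	// walk down the column, stepping the color linearly
	for (int y = y1; y < y2; y++) {
		int fg = RGBA(
			(int)(GET_R(color1) + rdx*factor),
			(int)(GET_G(color1) + gdx*factor),
			(int)(GET_B(color1) + bdx*factor), 255);
		pb_rgba_set_pixel(pb, x, y, fg);

		factor += factorStep;
	}
}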

Great, we have solid lines, line spans with color fading, what about transparency?

The individual pixels are represented with 32-bits each. There are 8-bits each for R, G, and B
color components. The ‘A’ portion of the color is not used by the frame buffer, but rather used
to represent an opacity value. The opacity just determines how much of the color will be shown
if this color is blended with another color.

In the picture above, the turquoise rectangle is blended with the pixels that are below, rather
than just copying its pixels atop the others.

In order to do that, we want to draw some horizontal lines, but we want to blend the colors
instead of copy them:

// Draw some blended lines atop the whole
for (size_t row = 35; row < 120; row++)
{
    raster_rgba_hline_blend(&pb, 45, row, 100, RGBA(0, 153, 153, 203));
}

The ‘raster_rgba_hline_blend’ routine does the needful. It takes a color with an alpha (opacity)
value of 203, and blends that with whatever is already in the specified frame buffer. It looks
like this:

#define blender(bg, fg, a) ((uint8_t)((fg*a+bg*(255-a)) / 255))

#define blend_color(bg, fg) RGBA(                \
    blender(GET_R(bg), GET_R(fg), GET_A(fg)),    \
    blender(GET_G(bg), GET_G(fg), GET_A(fg)),    \
    blender(GET_B(bg), GET_B(fg), GET_A(fg)),    255)


int raster_rgba_hline_blend(pb_rgba *pb, unsigned int x, unsigned int y, unsigned int length, int value)
{
    // offset to the first pixel of the run
    unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch + x];
    for (size_t idx = 0; idx < length; idx++)
    {
        int bg = *data;    // what's already in the buffer
        int fg = value;    // the caller's color

        *data = blend_color(bg, fg);
        data++;
    }

    return 0;
}

The use of macros here keeps the operations straight, and provides easy routines that can be used elsewhere.
It’s a pretty straightforward process though. Each time through the inner loop, the ‘bg’
represents the ‘background’ color, which is taken from the pixel buffer. The ‘fg’ value
represents the color that the user passed in. The new value is a blend of the two, using
the alpha component of the foreground color.

This is a fairly basic routine that can be adapted in many ways. For example, instead of
using the alpha value from the foreground color, you could just pass in a fixed alpha value
to be applied. This is useful when doing something like a ‘fade to black’, without having to
alter each of the source colors directly.
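
A sketch of that variation (a hypothetical routine, built on the blender macro above): take the alpha as an explicit parameter and ignore whatever is packed in the color. Stepping alpha from 0 up to 255 over successive passes marches the span from its current contents toward the foreground color; with pBlack as the foreground, that’s the fade to black.

int raster_rgba_hline_fade(pb_rgba *pb, unsigned int x, unsigned int y,
	unsigned int length, int value, int alpha)
{
	unsigned int * data = &((unsigned int *)pb->data)[y*pb->pixelpitch + x];
	for (size_t idx = 0; idx < length; idx++)
	{
		int bg = *data;
		// same blend as blend_color, but with the caller's alpha
		*data = RGBA(
			blender(GET_R(bg), GET_R(value), alpha),
			blender(GET_G(bg), GET_G(value), alpha),
			blender(GET_B(bg), GET_B(value), alpha), 255);
		data++;
	}

	return 0;
}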

This particular routine only works with solid colors. As such, it is just like the plain
hline routine. So why not have a flag on the hline routine that says to do a blend or not?
Better still, why not have a parameter which is an operation to perform between the bg and fg
pixel values? This is how a lot of libraries work, including good ol’ GDI. I think that overly
complicates matters though. Creating a general purpose routine will introduce a giant switch
statement, or a bunch of function pointers, or something else unworldly. For now, I know I need
only two operators, SRCCOPY and SRCOVER, and that’s it; I don’t currently find need for more exotic
operators. But of course, anything can be built atop or easily changed, so who knows.

And that’s about it for the first two kinds of line drawing routines. The third kind, where the
slope is neither vertical nor horizontal, is a little more work, and thus will be saved for
another time.

Woot! Raise the roof up, horizontal and vertical lines, along with pixel copying. What more
could you possibly expect from a basic low level graphics system? Well, for the longest time,
not much more than this, but there’s still some modern conveniences to discuss, such as those
sloped lines, and some more primitives such as triangles.


GraphicC – The Basics

There is one more design constraint I did not mention last time, and it’s a fairly important one for my purposes.

  • Must be easily interfaced from scripting environments

In my particular case, I’m going to favor LuaJIT and its FFI mechanism, and JavaScript, in the form of node.js.  At the very least, this means that I should not utilize too many fancy features, such as classes, operator or function overloading, templates, and the like.  These things are just nasty to deal with from script, and not really worth the benefits in such a small system.  Sticking to C89, or C99 style conventions, at least for the parts that are intended to be publicly accessible, is a good idea.

With that, how about looking at some of the basics.  First up, the most basic thing is the representation of a pixel.  For me, there is a very clear separation between a pixel and a color.  A pixel is a machine representation of a color value.  It is typically in a form that the graphics card can render directly.  Modern day representations are typically RGBA, or RGB, or BGR, BGRA.  That is, one value between 0-255 for each of the color components.  These values are arranged one way or the other, but these are the most typical.  Of course with most modern graphics cards, and leveraging an API such as OpenGL or DirectX, your pixel values can be floats, and laid out in many other forms.  But, I’ll ignore those for now.

Pixel values are constrained in a way, because they are represented by a limited set of byte values.  Colors are a completely different animal.  Then you get into various color spaces such as Cie, or XYZ, or HSL.  They all have their uses depending on the application space you’re in.  In the end though, when you want to display something, you’ve got to convert from those various color spaces to the RGB pixel values that this graphics system can support.
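
As one concrete example of that conversion step (a sketch, not part of the library): HSL, with h in [0,360) and s and l in [0,1], reduced to a packed pixel using the RGBA macro defined below.

#include <math.h>

unsigned int hsl_to_pixel(float h, float s, float l)
{
	float c = (1.0f - fabsf(2.0f*l - 1.0f)) * s;              // chroma
	float x = c * (1.0f - fabsf(fmodf(h / 60.0f, 2.0f) - 1.0f));
	float m = l - c / 2.0f;                                   // lightness offset
	float r = 0, g = 0, b = 0;

	// pick the sextant of the color wheel the hue falls in
	if      (h <  60) { r = c; g = x; }
	else if (h < 120) { r = x; g = c; }
	else if (h < 180) { g = c; b = x; }
	else if (h < 240) { g = x; b = c; }
	else if (h < 300) { r = x; b = c; }
	else              { r = c; b = x; }

	return RGBA((int)((r + m) * 255), (int)((g + m) * 255), (int)((b + m) * 255), 255);
}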

Since I am dealing with pixel values (and not colors), the very lowest level routines deal with pixel buffers, or pixel frames.  That is an array of pixel values.  Simple as that.

typedef struct _pix_rgba {
	unsigned char r, g, b, a;
} pix_rgba;

// On a little endian machine
// Stuff it such that 
// byte 0 == red
// byte 1 == green
// byte 2 == blue
// byte 3 == alpha
// arguments are parenthesized so expressions expand safely
#define RGBA(r,g,b,a) ((unsigned int)(((a)<<24)|((b)<<16)|((g)<<8)|(r)))
#define GET_R(value) ((unsigned int)(value) & 0xff)
#define GET_G(value) (((unsigned int)(value) & 0xff00) >> 8)
#define GET_B(value) (((unsigned int)(value) & 0xff0000) >> 16)
#define GET_A(value) (((unsigned int)(value) & 0xff000000) >> 24)

// pixel buffer rectangle
typedef struct _pb_rect {
	unsigned int x, y, width, height;
} pb_rect;

typedef struct _pb_rgba {
	unsigned char *		data;
	unsigned int		pixelpitch;
	int					owndata;
	pb_rect				frame;
} pb_rgba;

The ‘pb_rgba’ structure is the base of the drawing system. This is the drawing ‘surface’ if you will. It is akin to the ‘framebuffer’ if you were doing graphics in hardware. Things can only be a certain way. The buffer has a known width, height, and number of bytes per row. So far there is only one defined pixel buffer type, and it supports the rgba layout. I could create different pixel buffer types for different pixel layouts, such as rgba16, or rgb15, or bitmap. But, I don’t need those right now, so this is the only one.

I debated a long time whether the ‘pix_rgba’ structure was needed or not. As you can see from the macros ‘RGBA’ and friends, you can easily stuff the rgba values into an int32. This is good and bad, depending. The truth is, having both forms is useful depending on what you’re doing. If there are machine instructions to manipulate bytes in parallel, one form or the other might be more beneficial. This will be put to the test in blending functions later.
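
What makes the choice low-stakes is that moving between the two forms is cheap. A sketch of the two conversions (hypothetical helpers, using the macros above):

static unsigned int pix_to_value(pix_rgba p)
{
	return RGBA(p.r, p.g, p.b, p.a);
}

static pix_rgba value_to_pix(unsigned int v)
{
	pix_rgba p;
	p.r = (unsigned char)GET_R(v);
	p.g = (unsigned char)GET_G(v);
	p.b = (unsigned char)GET_B(v);
	p.a = (unsigned char)GET_A(v);
	return p;
}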

So, what are the most basic functions of this most basic structure?

#ifdef __cplusplus
extern "C" {
#endif

int pb_rgba_init(pb_rgba *pb, const unsigned int width, const unsigned int height);
int pb_rgba_free(pb_rgba *pb);

int pb_rgba_get_frame(pb_rgba *pb, const unsigned int x, const unsigned int y, const unsigned int width, const unsigned int height, pb_rgba *pf);


#ifdef __cplusplus
}
#endif

#define pb_rgba_get_pixel(pb, x, y, value) *(value) = ((unsigned int *)(pb)->data)[((y)*(pb)->pixelpitch)+(x)]
#define pb_rgba_set_pixel(pb, x, y, value) ((unsigned int *)(pb)->data)[((y)*(pb)->pixelpitch)+(x)] = (value)


// width/height act as exclusive upper bounds, so the far edges test with '<'
#define pb_rect_contains(rect, x, y) (((x)>=(rect)->x && (x)<(rect)->x+(rect)->width) && (((y)>=(rect)->y) && ((y)<(rect)->y+(rect)->height)))
#define pb_rect_clear(rect) memset((rect), 0, sizeof(pb_rect))

The ‘pb_rgba_init’ function is the beginning. It will fill in the various fields of the structure, and most importantly allocate memory for storing the actual pixel values. This is very important. Without this, the data pointer points to nothing, or some random part of memory (badness). The corresponding ‘pb_rgba_free’ cleans things up.

The ‘pb_rgba_get_frame’ function copies a portion of a pixel array into another pixel array. You could consider this to be a ‘blit’ function in most libraries. There’s a bit of a twist here though. Instead of actually copying the data, the data pointer is set, and the fields of the receiving frame are set, and that’s it. Why? Because there are many cases where you want to move something around, without having to create multiple copies. Just think of a simple ‘sprite’ based game where you copy some fixed image to multiple places on the screen. You don’t want to have to make multiple copies to do it, just reference some part of a fixed pixel buffer, and do what you will.

Then there are the two macros ‘pb_rgba_get_pixel’, and ‘pb_rgba_set_pixel’. These will get and set a pixel value. Once you have these in hand, you have the keys to the graphics drawing kingdom. Everything else in the system can be based on these two functions. Something as simple as line drawing might use these. There may be optimizations in cases such as horizontal line drawing, but for the most part, this is all you need to write textbook perfect drawing routines.

And that pretty much rounds out the basics up to pixel setting.

It’s kind of amazing to me. Even before really getting into anything interesting, I’ve had to make decisions about some very basic data structure representations, and there are various hidden assumptions as well. For example, there is a hidden assumption that all parameters to the set/get pixel routines have already been bounds checked by the caller. The routines themselves will not perform any bounds checking. This is nice for a composable system, because the consumer can decide where they want to pay for such checking. Doing it per pixel might not be the best place.
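
When a consumer does want per-pixel safety, the wrapper is a one-liner they can own themselves. A sketch (hypothetical, not part of the library); with unsigned coordinates the lower bound check comes for free:

static void pb_rgba_set_pixel_checked(pb_rgba *pb, unsigned int x, unsigned int y, int value)
{
	// unsigned wrap turns negative coordinates into huge values, so '<' catches them too
	if (x < pb->frame.width && y < pb->frame.height)
		pb_rgba_set_pixel(pb, x, y, value);
}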

There are assumptions about the endianness of the int32 value on the machine, as well as various other large and small assumptions in terms of interfacing to routines (using macros?). There are even assumptions about what the coordinate system is (cartesian), and how it is oriented (top left is 0,0). Of course all of these assumptions must be clearly documented (preferably in the code itself), to minimize the surprises as the system is built and used.

This is a good start though. Here’s how we could use what’s there so far.

#include "test_common.h"

void test_blit()
{
	size_t width = 640;
	size_t height = 480;

	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	pb_rgba fpb;
	pb_rgba_get_frame(&pb, 0, 0, width / 2, height / 2, &fpb);

	// draw into primary pixel buffer
	raster_rgba_rect_fill(&pb, 10, 10, (width / 2) - 20, (height / 2) - 20, pBlue);

	// draw the frame in another location
	raster_rgba_blit(&pb, width / 2, height / 2, &fpb);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_blit.ppm", &pb);

}

int main(int argc, char **argv)
{
	test_blit();

	return 0;
}

Setup a pixel buffer that is 640 x 480. Draw a rectangle into the pixel buffer (blue color). Copy that frame into another location in the same pixel buffer. Write the pixel buffer out to a file.

[Image: test_blit.ppm]

And that’s about it.

Some basic constants, simple data structures, core assumptions, and pixels are being stuffed into a buffer in such a way that you can view a picture on your screen.

Next up, drawing lines and rectangles, and what is a ppm file anyway?

