Drawing Curvy Lines with aplomb – Beziers

I have written plenty about Bezier and other curves and surfaces.  That was largely in the context of 3D printing using OpenScad, or BanateCad.  But now, I’m adding to a general low level graphics library.

test_bezier

Well well, what do we have here.  Bezier curves are one of those constructs where you lay down some ‘control points’ and then draw a line that meanders between them according to some mathematical formula.  In the picture, the green curve is represented by 4 control points, the red one is represented by 5 points, and the blue ones are each represented by 4 points.
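
For reference, the curve is the standard Bernstein form: given n+1 control points P0..Pn, the point at parameter u (running from 0 to 1) is

P(u) = sum(k = 0..n) of C(n,k) * u^k * (1-u)^(n-k) * Pk

where C(n,k) are the binomial coefficients.  The code below is a direct transcription of that formula.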

How do you construct a Bezier curve?  Well, you don’t need much more than the following code:

#include <math.h>	// powf
#include <stdlib.h>	// malloc, free

void computeCoefficients(const int n, int * c)
{
	int k, i;

	for (k = 0; k <= n; k++)
	{
		// compute n!/(k!(n-k)!)
		c[k] = 1;
		for (i = n; i >= k + 1; i--)
		{
			c[k] *= i;
		}

		for (i = n - k; i >= 2; i--)
		{
			c[k] /= i;
		}
	}
}

void computePoint(const float u, Pt3 * pt, const int nControls, const Pt3 *controls, const int * c)
{
	int k;
	int n = nControls - 1;
	float blend;

	pt->x = 0.0;	// x
	pt->y = 0.0;	// y
	pt->z = 0.0;	// z
	
	// Add in influence of each control point
	for (k = 0; k < nControls; k++){
		blend = c[k] * powf(u, k) *powf(1 - u, n - k);
		pt->x += controls[k].x * blend;
		pt->y += controls[k].y * blend;
		pt->z += controls[k].z * blend;
	}
}

void bezier(const Pt3 *controls, const int nControls, const int m, Pt3 * curve)
{
	// create space for the coefficients
	int * c = (int *)malloc(nControls * sizeof(int));
	int i;

	computeCoefficients(nControls - 1, c);
	for (i = 0; i <= m; i++) {
		computePoint(i / (float)m, &curve[i], nControls, controls, c);
	}
	free(c);	
}

This is pretty much the same code you would get from any book or tutorial on fundamental computer graphics. It will allow you to calculate a Bezier curve using any number of control points.

Here’s the test case that generated the picture:

typedef struct {
	REAL x;
	REAL y;
	REAL z;
} Pt3;

// Draw a connected series of line segments.  'nPts' is the number of
// segments to draw, so 'curve' must contain nPts+1 points.
void polyline(pb_rgba *pb, Pt3 *curve, const int nPts, int color)
{
	for (int idx = 0; idx < nPts; idx++) {
		raster_rgba_line(pb, curve[idx].x, curve[idx].y, curve[idx + 1].x, curve[idx + 1].y, color);
	}
}

void test_bezier()
{

	size_t width = 400;
	size_t height = 400;
	int centerx = width / 2;
	int centery = height / 2;
	int xsize = (int)(centerx*0.9);
	int ysize = (int)(centery*0.9);

	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// background color
	raster_rgba_rect_fill(&pb, 0, 0, width, height, pLightGray);

	// One curve drooping down
	Pt3 controls[4] = { { centerx - xsize, centery, 0 }, { centerx, centery + ysize, 0 }, { centerx, centery + ysize, 0 }, { centerx + xsize, centery, 0 } };
	int nControls = 4;
	int m = 60;
	Pt3 curve[100];
	bezier(controls, nControls, m, curve);
	polyline(&pb, curve, m, pGreen);

	// Several curves going up
	for (int offset = 0; offset < ysize; offset += 5) {
		Pt3 ctrls2[4] = { { centerx - xsize, centery, 0 }, { centerx, centery - offset, 0 }, { centerx, centery - offset, 0 }, { centerx + xsize, centery, 0 } };
		bezier(ctrls2, nControls, m, curve);
		polyline(&pb, curve, m, pBlue);
	}

	// one double peak through the middle
	Pt3 ctrls3[5] = { { centerx - xsize, centery, 0 }, { centerx-(xsize*0.3f), centery + ysize, 0 }, { centerx, centery - ysize, 0 }, { centerx+(xsize*0.3f), centery + ysize, 0 }, { centerx + xsize, centery, 0 } };
	int nctrls = 5;
	bezier(ctrls3, nctrls, m, curve);
	polyline(&pb, curve, m, pRed);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_bezier.ppm", &pb);
}

From here, there are some interesting considerations. For example, you don’t want to calculate the coefficients every single time you draw a curve. In computer graphics, most Bezier curves consist of 3 or 4 control points at most. Ideally, you would calculate the coefficients for those specific cases once and store the values statically for later use, which is exactly what you’d do for a small embedded library. The tradeoff in storage space is well worth the savings in compute time.
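
As a rough sketch of that idea (bezier_cubic here is a hypothetical convenience wrapper, not part of the library), the coefficients for the common quadratic and cubic cases can simply be baked in, and the computeCoefficients() call skipped entirely:

// Precomputed binomial coefficients for the common cases:
// C(2,k) = {1, 2, 1}    quadratic, 3 control points
// C(3,k) = {1, 3, 3, 1} cubic, 4 control points
static const int quadCoeffs[3]  = { 1, 2, 1 };
static const int cubicCoeffs[4] = { 1, 3, 3, 1 };

// Hypothetical wrapper for the cubic case, reusing computePoint() from above
void bezier_cubic(const Pt3 *controls, const int m, Pt3 * curve)
{
	int i;
	for (i = 0; i <= m; i++) {
		computePoint(i / (float)m, &curve[i], 4, controls, cubicCoeffs);
	}
}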

Additionally, instead of calculating all the line segments, storing those values, and then using a polyline routine to draw them, you’d likely want the bezier routine to draw the lines directly. That would cut down on temporary allocations.
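
A minimal sketch of that direct-drawing variant might look something like this (bezier_draw is hypothetical; it assumes the same raster_rgba_line() primitive used in the polyline() routine above):

// Hypothetical variant that rasterizes the curve as it is computed,
// so no temporary curve[] buffer is needed.
void bezier_draw(pb_rgba *pb, const Pt3 *controls, const int nControls, const int m, int color)
{
	int *c = (int *)malloc(nControls * sizeof(int));
	Pt3 prev, curr;
	int i;

	computeCoefficients(nControls - 1, c);
	computePoint(0.0f, &prev, nControls, controls, c);

	for (i = 1; i <= m; i++) {
		computePoint(i / (float)m, &curr, nControls, controls, c);
		raster_rgba_line(pb, prev.x, prev.y, curr.x, curr.y, color);
		prev = curr;
	}

	free(c);
}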

At the same time though, you want to retain the ‘computePoint’ function as an independent piece, because once you have a Bezier calculation function within the library, you’ll want to use it for things other than just drawing curved lines. Bezier, and its relative the Hermite curve, are good for calculating things like color ramps, motion, and other curvy stuff. This is all of course before you start using splines and NURBS, which are a much more involved affair.
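
As a small illustration of that (purely hypothetical, treating the x/y/z fields of a Pt3 as R/G/B values), the same computePoint() evaluator can generate a color ramp:

// Hypothetical: a quadratic Bezier as a color ramp, red to blue,
// pulled toward yellow by the middle control. 'ramp' must hold steps+1 entries.
void color_ramp(Pt3 *ramp, const int steps)
{
	Pt3 controls[3] = { { 255, 0, 0 }, { 255, 255, 0 }, { 0, 0, 255 } };
	int c[3];
	int i;

	computeCoefficients(2, c);
	for (i = 0; i <= steps; i++) {
		computePoint(i / (float)steps, &ramp[i], 3, controls, c);
	}
}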

There you have it, a couple of functions, and suddenly you have Bezier curves based on nothing more than the low level line drawing primitives. At the moment, this code sits in a test file, but soon enough I’ll move it into the graphicc library proper.


William Does Linux on Azure!

What?

You see, it’s like this.  As it turns out, a lot of people want to run code against a Linux kernel in the cloud.  Even though Windows might be a fine OS for cloud computing, the truth is, many customers are simply Linux savvy.  So, if we want to make those customers happy, then Linux needs to become a first class citizen in the Azure ecosystem.

Being a person to jump on technological and business related grenades, I thought I would join the effort within Microsoft to make Linux a fun place to be on Azure.  What does that mean?  Well, you can already get a Linux VM on Azure pretty easily, just like with everyone else.  But what added value is there coming from Microsoft so this isn’t just a simple commodity play?  Microsoft does in fact have a rich set of cloud assets, and not all of them are best accessed from a Linux environment.  This might mean anything from providing better access to Azure Active Directory, to creating new applications and frameworks altogether.

One thing is for sure.  As the Windows OS heads for the likes of the Raspberry Pi, and Linux heads for Azure, the world of computing is continuing to be a very interesting place.


Windows and Raspberry Pi, Oh my!

I woke this morning to two strange realities.  My sometimes beloved Seahawks did not win the SuperBowl, and the Raspberry Pi Foundation announced the Raspberry Pi 2, which will run Windows 10!

I’ll conveniently forget the first reality for now as there’s always next season.  But that second reality?  I’ve long been a fan of the Raspberry Pi.  Not because of the specific piece of hardware, but because at the time it was first announced, it was the first of the somewhat reasonable $35 computers.  The hardware itself has long since been eclipsed by other notables, but none of them have quite got the Raspberry Pi community thing going on, nor the volumes.  Now the Pi is moving into “we use them for embedded” territory, not just for the kids to learn programming.

And now along comes Windows!  This is interesting in two respects.  First, I did quite a bit of work putting a LuaJIT skin on the Raspberry Pi some time back.  I did it because I wanted to learn all about the deep down internals of the Raspberry Pi, but from the comforts of Lua.  At the time, I leveraged an early form of the ljsyscall library to take care of the bulk of the *NIX specific system calls. I was going to go one step further and implement the very lowest interface to the video chip, but that didn’t seem like a very worthwhile effort, so I left it at the Khronos OpenGL ES level.

At roughly the same time, I started implementing LuaJIT Win32 APIs, starting with LJIT2Win32.  Then I went hog wild and implemented TINN, which for me is the ultimate in LuaJIT APIs for Win32 systems.  Both ljsyscall and TINN exist because programming at the OS level is a very tedious/esoteric process.  Most of the time the low level OS specifics are paved over with one higher level API/framework or another.  Well, these are in fact such frameworks, giving access to the OS at a very high level from the LuaJIT programming language.

So, this new Windows on Pi, what of it?  Well, finally I can program the Raspberry Pi using the TINN tool.  This is kind of cool for me.  I’m not forced into using Linux on this tiny platform, where I might be more familiar with the Windows API and how things work.  Even better, as TINN is tuned to running things like coroutines and IO Completion ports, I should be able to push the tiny device to its limits with respect to IO at least.  Same goes for multi-threaded programming.  All the goodness I’ve enjoyed on my Windows desktop will now be readily available to me on the tiny Pi.

The new Pi is a quad core affair, which means the kids will learn about mutexes, semaphores and the like…  Well, actually, I’d expect the likes of the Go language, TINN, and other tools to come to the rescue.  The beauty of Windows on Pi is likely going to be the ease of programming.  When I last programmed on the Pi directly, I used the nano editor, and print() for debugging.  I couldn’t really use Eclipse, as it was too slow back then.  Now the Pi will likely just be a Visual Studio target, maybe even complete with a simulator.  That would be a great way to program.  All the VS goodness that plenty of people have learned to love.  Or maybe a slimmed down version that’s not quite so enterprise industrial.

But, what are these Pis used for anyway?  Are they truly replacement PCs?  Are they media servers, NAS boxes, media players?  The answer is YES to all, to varying degrees.  Following along the ‘teach the kids to program’ theme, having a relatively inexpensive box that allows you to program cannot be a bad thing.  Making Windows and Linux available cannot be a bad thing.  Having a multi-billion dollar software company supporting your wares MUST be a good thing.  Love to hate Microsoft?  Meh, lots of Windows based resources are available in the world, so I don’t see how it does any harm.

On the very plus side, as this is a play towards makers, it will force Microsoft to consider the various and sundry application varieties that are currently being pursued by those outside the corporate enterprise space.  Robotics will force a reconsideration of realtime constraints.  As well, vision might become a thing.  Creating an even more coherent story around media would be a great thing.  And maybe bringing the likes of the Kinect to this class of machine?  Well, not in this current generation.

The news this Monday is both melancholy and eyebrow raising.  I for one will be happy to program the latest Raspberry Pi using TINN.


Local Manufacturing – Is that a factory in your garage?

Why the whole table saw cabinet thing?  Well, I first purchased the SawStop a few years back because I wanted to make some fairly good triangular bases for a 3D printer project.  I figured that as an occasional workshopper, it’s better to have more expensive tools with safety features, so that I can preserve my white collar hands.

More recently, I’ve wanted to expand the capabilities in the shop.  I want to cut wood of course, but I want to cut it in infinite variety.  I have a nice heavy duty router, which gives me some capabilities.  I have a cheapo band saw with still others.  I probably need a scroll saw for really intricate stuff.  I could use a mill to play with various metals, and a lathe might be interesting as well.  Well, that’s adding up to be a lot of different bits of equipment, all with their own safety and space concerns.

Then I got to thinking, what I really need is an automated (CNC) platform that I can use various tools on.  After quite a lot of browsing around, I came across the Grunblau Platform CNC kit.  What’s so great about this particular machine?  Well, it looks good for one.  It uses 80/20 extrusions, like most DIY CNC machines, but it throws in just enough steel to make it more subtle, and easier to assemble than your typical machine.

WP_20150122_003[1]

This is what mine looks like after a couple of weekends of assembly.  First weekend was laying out parts, and assembling the base.  Second weekend was assembly of the gantry, and mounting to the base.

But this begs one question: where to put this thing?  It’s roughly 3′x5′ and a couple of feet tall.  It takes up more room than a table saw, but less than the combination of tools that I intend for it to replace.  So, of course, it needs a piece of shop furniture to go with it.

WP_20150126_008[1] WP_20150126_010[1]

That’s a base made of 2×6 and 4×4 lumber, with a 3/4″ maple plywood skin.  The skin will be closed on 4 sides, which leaves the ability to slide it back if that’s ever needed.  The skin is not fastened to the base in any way, as gravity working on the skin, as well as the machine itself, should be enough to hold it in place.  If not, then a couple of screws at strategic positions should be more than enough to hold it in place.  I’m awaiting some nice leveling casters which will make this as portable as the table saw.

I wanted to try the leveling casters as it’s yet another option for mobility.  In this particular case, the machine will mostly stay in the same place all the time.  But, when it comes time to move it, I want it to be a relatively easy operation.  These Footmaster GDR-60F Leveling Casters seem to foot the bill, so I’ll see how it goes.

This makes for interesting theatre.  The other day, I had a neighbor wander into my garage and exclaim “wow, you have a lot of tools, what do you build with them?”.  Well, I, uh, that is, you see, I just like to tinker.  Fact is, mostly I’ve built shop furniture to deal with the various tools that I’ve been buying over the years to build shop furniture.

But, this time is different.  Now I’ve got my 3D printer set up.  I’ve got my table saw squared away.  I’ve got the CNC Router coming into existence.  Surely a Murphy bed, or some kids’ playscape in the downstairs, or at least a jewelry box for the wife?  The fact is, I like designing and building “furniture”.  I can’t help it that my man cave is the primary beneficiary of said furniture.  But I think there’s something else here as well.  As the machines become more versatile (through the beauty of software), my ability to manufacture all manner of things locally improves.

I’ve wanted to build things out of aluminum for the longest time.  Now, with the CNC Router, I’ll be able to do that.  This is the same sort of enabling that occurred with the birth of the 3D printers.  I can at least design and prototype my own stuff, and print it in plastic.  Now I’ll be able to actually build some molds for injection or other molding if I so choose, which is a logical next step beyond the all too slow process of using FDM printers.

So, am I building a factory in my garage?  Well, I consider it a definite evolution of the American garage.  A CNC router can take the place of a lot of typical woodworking tools.  It also adds the ability to mill soft metals, cut with a knife, draw with a plotter pen, carve with a plasma torch, or possibly a laser.  Add another axis, or two, and suddenly you’re doing 5 axis milling in your garage.

Yah, this is way cool.  Not necessarily a factory in the garage, but certainly a “local manufacturing plant” in our neighborhood.


SawStop Contractor Saw Cabinet – Part 2

Last time around, I had finished my torsion box base with low riding sliding drawers.  The next step in the journey was to construct the box upon which the saw itself will sit.  I looked at many options.  Solid wood, plywood, open frame, closed frame.  I needed to integrate dust collection as well, and possibly storage.  In the end I created a design which is a combination of a couple of things.

WP_20150115_004[1]

The box is constructed entirely of 3/4″ oak plywood.  The bottom is constructed of a rectangle which is put together using Kreg pocket screws.  The top ‘mid-top’ is the same.  They are held up on the sides by solid plywood.  The very top is a solid piece of plywood, with a hole cut out of it to match the swing of the motor and dust collection port on the saw.  This could also have been constructed using the Kreg framing, but I wanted to try this way as well.

That forms the basic box.  It was pretty solid, but I wanted to go one step further.  I put additional plywood sides on with full surface gluing.  This should prevent any rocking forward/back.  Before usage, I will stick a false front on the thing, and that should eliminate any side rocking.  It’s feeling pretty solid though, so I don’t think there will be much.

WP_20150115_003[1]

Here’s what it looks like with the saw sitting atop the box, atop the rolling base.  I had to strip it down so that I could then slide it off the base and onto the box without the help of others, or a hoist.

WP_20150115_005[1]

There’s the old steel base, ready to go for its next adventure.

Most of the builds I have seen have the forethought of incorporating a sloping slidey thing for the dust chute, or a drawer, or sliders, and what have you.  I could not think through my dust collection options completely, so I just designed the base as an open box.  That way, I can build any type of drawer, slide, tubing, what have you, and just slide it into the open slot.  If I want to change it later, I can, without having to build a whole new base/box.

I also decided that I don’t need to go for a fully integrated unicabinet design.  In fact, it’s better to make this whole thing modular so that I can change it easily over time as my needs change.  For example, most builds have the saw mounted as I’ve shown here.  Then they have large outfeed tables so that they can do long rips.  Well, truth be told, most of what I’m going to be doing on a table saw is probably longish rips, or fairly short stuff where a sled will be utilized.  So, having all this width isn’t really that beneficial.  Easy enough, I can just turn the box sideways, lay out an outfeed table atop the torsion box, and be done.

To that end, the base is simply screwed down to the torsion box.  It’s not glued.  Other boxes will be constructed, and just screwed down as well.  Whether it’s drawers, a router extension, or what have you, just throw it on there, and get to making chips!

Almost done.  Now I need to construct the simple supports so I can reassemble the table.  The easiest thing will be to simply go back to what I had for now, that is, the super long rails, extension wings and table.  I’ll just have to adjust the length of the legs on the support table, and call it a day.


TINN Reboot

I always dread writing posts that start with “it’s been a long time since…”, but here it is.

It’s been a long time since I did anything with TINN.  I didn’t actually abandon it, I just put it on the back burner as I was writing a bunch of code in C/C++ over the past year.  I did do quite a lot of experimental stuff in TINN, adding new interfaces, trying out new classes, creating a better coroutine experience.

The thing with software is, a lot of testing is required to ensure things actually work as expected and fail gracefully when they don’t.  Some things I took from the ‘experimental’ category are:

fun.lua – A library of functional routines specifically built for LuaJIT and its great handling of tail recursion.

msiterators.lua – Some handy iterators that split out some very MS specific string types.

Now that msiterators is part of the core, it makes it much easier to do things like query the system registry and get the list of devices, or batteries, or whatever, in a simple table form.  That opens up some of the other little experiments, like enumerating batteries, monitors, and whatnot, which I can add in later.

These are not earth shattering changes, and don’t represent a year’s worth of waiting, but soon enough I’ll create a new package with new goodness in it.  This begs the question, what is TINN useful for?  I originally created it for the purpose of doing network programming, like you could do with Node.  Then it turned into a way of doing Windows programming in general.  Since TINN provides scripted access to almost all the interesting low level APIs that are in Windows, it’s very handy for trying out how an API works, and whether it is good for a particular need.

In addition to just giving ready access to low level Windows APIs, it serves as a form of documentation as well.  When I look at a Windows API, it’s not obvious how to handle all the parameters.  Which ones do I allocate, which ones come from the system, and which special function do I call when I’m done?  Since I read the docs when I create the interface, the wrapper code encapsulates that reading of the documentation, and thus acts as an encapsulated source of knowledge that’s sitting right there with the code.  Quite handy.

At any rate, TINN is not dead, long live TINN!


GraphicC – The triangle’s the thing

Pixels, lines, and squares, oh my!
test_triangle

And, finally, we come to triangles.  The demo code is thus:

#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// Draw some triangles 
	size_t midx = (size_t)(width / 2);
	size_t midy = (size_t)(height / 2);

	raster_rgba_triangle_fill(&pb, 0, 0, width-1, 0, midx, midy, pRed);
	raster_rgba_triangle_fill(&pb, 0, 0, midx, midy, 0, height-1, pGreen);
	raster_rgba_triangle_fill(&pb, width-1, 0, midx, midy, width-1, height - 1, pBlue);
	raster_rgba_triangle_fill(&pb, midx, midy, 0, height-1, width - 1, height - 1, pDarkGray);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_triangle.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

Why triangles? Well, beyond pixels and lines, they are one of the most useful drawing primitives. When it finally comes to rendering in 3D, for example, it will all be about triangles. Graphics hardware over the years has been optimized to render triangles as well, so it is beneficial to focus more attention on this primitive than on any other. You might think polygons are very useful, but all polygons can be broken down into triangles, and we’re back where we started.
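
As a quick illustration of that last point, a convex polygon can be filled with nothing more than the triangle routine, by fanning out from its first vertex (fill_convex_polygon is a hypothetical helper, not part of the library):

// Hypothetical helper: fill a convex polygon by fanning triangles out
// from vertex 0, using only raster_rgba_triangle_fill().
void fill_convex_polygon(pb_rgba *pb, const int *xs, const int *ys, const size_t nverts, int color)
{
	size_t i;
	for (i = 1; i + 1 < nverts; i++) {
		raster_rgba_triangle_fill(pb, xs[0], ys[0], xs[i], ys[i], xs[i + 1], ys[i + 1], color);
	}
}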

Here’s the triangle drawing routine:

#include <stdint.h>	// int16_t, int32_t, INT32_MAX

#define swap16(a, b) { int16_t t = a; a = b; b = t; }

typedef struct _point2d
{
	int x;
	int y;
} point2d;

int FindTopmostPolyVertex(const point2d *poly, const size_t nelems)
{
	int ymin = INT32_MAX;
	int vmin = 0;

	size_t idx = 0;
	while (idx < nelems) {
		if (poly[idx].y < ymin) {
			ymin = poly[idx].y;
			vmin = idx;
		}
		idx++;
	}

	return vmin;
}

void RotateVertices(point2d *res, point2d *poly, const size_t nelems, const int starting)
{
	size_t offset = starting;
	size_t idx = 0;
	while (idx < nelems) {
		res[idx].x = poly[offset].x;
		res[idx].y = poly[offset].y;
		offset++;
		
		if (offset > nelems-1) {
			offset = 0;
		}

		idx++;
	}
}

void sortTriangle(point2d *sorted, const int x1, const int y1, const int x2, const int y2, const int x3, const int y3)
{
	point2d verts[3] = { { x1, y1 }, { x2, y2 }, { x3, y3 } };

	int topmost = FindTopmostPolyVertex(verts, 3);
	RotateVertices(sorted, verts, 3, topmost);

	// Rotation only guarantees the topmost vertex comes first; make sure the
	// other two are in scanline order too, as the fill loops below assume.
	if (sorted[1].y > sorted[2].y) {
		point2d tmp = sorted[1]; sorted[1] = sorted[2]; sorted[2] = tmp;
	}
}

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)
{
	int a, b, y, last;

	// sort vertices, such that 0 == y with lowest number (top)
	point2d sorted[3];
	sortTriangle(sorted, x1, y1, x2, y2, x3, y3);

	// Handle the case where points are colinear (all on same line)
	if (sorted[0].y == sorted[2].y) { 
		a = b = sorted[0].x;
		
		if (sorted[1].x < a) 
			a = sorted[1].x;
		else if (sorted[1].x > b) 
			b = sorted[1].x;

		if (sorted[2].x < a) 
			a = sorted[2].x;
		else if (sorted[2].x > b) 
			b = sorted[2].x;

		raster_rgba_hline(pb, a, sorted[0].y, b - a + 1, color);
		return;
	}

	int16_t
		dx01 = sorted[1].x - sorted[0].x,
		dy01 = sorted[1].y - sorted[0].y,
		dx02 = sorted[2].x - sorted[0].x,
		dy02 = sorted[2].y - sorted[0].y,
		dx12 = sorted[2].x - sorted[1].x,
		dy12 = sorted[2].y - sorted[1].y;
	
	int32_t sa = 0, sb = 0;

	// For upper part of triangle, find scanline crossings for segments
	// 0-1 and 0-2. If y1=y2 (flat-bottomed triangle), the scanline y1
	// is included here (and second loop will be skipped, avoiding a /0
	// error there), otherwise scanline y1 is skipped here and handled
	// in the second loop...which also avoids a /0 error here if y0=y1
	// (flat-topped triangle).
	if (sorted[1].y == sorted[2].y) 
		last = sorted[1].y; // Include y1 scanline
	else 
		last = sorted[1].y - 1; // Skip it
	
	for (y = sorted[0].y; y <= last; y++) 
	{
		a = sorted[0].x + sa / dy01;
		b = sorted[0].x + sb / dy02;
		sa += dx01;
		sb += dx02;
		/* longhand:
		a = x0 + (x1 - x0) * (y - y0) / (y1 - y0);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		
		if (a > b) swap16(a, b);
		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}

	// For lower part of triangle, find scanline crossings for segments
	// 0-2 and 1-2. This loop is skipped if y1=y2.
	sa = dx12 * (y - sorted[1].y);
	sb = dx02 * (y - sorted[0].y);
	for (; y <= sorted[2].y; y++) 
	{
		a = sorted[1].x + sa / dy12;
		b = sorted[0].x + sb / dy02;
		sa += dx12;
		sb += dx02;
		/* longhand:
		a = x1 + (x2 - x1) * (y - y1) / (y2 - y1);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		if (a > b) 
			swap16(a, b);

		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}
}

This seems like a lot of work to simply draw a triangle! There are lots of routines for doing this. This particular implementation borrows from a couple of different techniques. The basic idea is to first sort the vertices in scanline order, top to bottom. That is, we want to know which vertex to start from, because we’re going to follow an edge down the framebuffer, drawing horizontal lines between edges as we go. In a regular triangle, without a flat top or bottom, there will be a switch between driving edges as we encounter the third point somewhere down the scanlines.

At the end, the ‘raster_rgba_hline’ function is called to actually draw a single line. If we wanted to do blending, we’d change this line. If we wanted to use a color per vertex, it would ultimately be dealt with here. All possible. Not necessary for the basic routine. With some refactoring, this could be made more general purpose, and different rendering modes could be included.

This is the only case where I use a data structure, to store the sorted points. Most of the time I just pass parameters and pointers to parameters around.

This is also a good time to mention API conventions. One of the constraints from the beginning is the need to stick with basic C constructs, and not get too fancy. In some cases this is at the expense of readability, but in general things just work out ok, as it’s simple to get used to. So, let’s look at the API here:

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)

That’s a total of 8 parameters. If I were going for absolute speed, and I knew the platform I was running on, I might try to limit the parameters to 4, to help ensure they get passed in registers rather than on the stack. But that’s an optimization that’s iffy at best, so I don’t bother. And why not use some data structures? Clearly x1, y1 is a 2D point, so why not use a ‘point’ data structure? Surely where the data comes from is structured as well. In some cases it is, and in others it is not. I figure that in the cases where the data did come from a structure, it’s fairly easy for the caller to do the dereferencing and pass in the values directly. In the cases where the data did not come from a point structure, it’s a tax to stuff the data into a point structure simply to pass it in to a function, which may not need it in that form. So, in general, all the parameters are passed in a ‘denormalized’ form. With small numbers of easy to remember parameters, I think this is fine.
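
As a concrete example of that caller-side dereferencing (fill_tri_from_points is hypothetical, just to show the call site):

// If the caller's vertices already live in point2d structures,
// it simply dereferences them at the call site.
void fill_tri_from_points(pb_rgba *pb, const point2d *v, int color)
{
	raster_rgba_triangle_fill(pb, v[0].x, v[0].y, v[1].x, v[1].y, v[2].x, v[2].y, color);
}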

Also, I did decide that it was useful to use ‘const’ in most cases where it is applicable.  This gives the compiler some hints as to how best to deal with the parameters, and allows you to specify constants inline for test cases, without compiler complaints.

There is a decided lack of error handling in most of these low level APIs.  You might think that triangle drawing would be an exception.  Shouldn’t I do some basic clipping to the framebuffer?  The answer is no, not at this level.  At a higher level, we might be drawing something larger, where a triangle is but a small part.  At that higher level we’ll know about our framebuffer, or other constraints that might cause clipping of a triangle.  At that level, we can perform the clipping, and send down new sets of triangles to be rendered by this very low level triangle drawing routine.
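
For instance, a higher level could at least trivially reject geometry that can’t touch the framebuffer before calling down (triangle_outside is a hypothetical sketch; real clipping would also split partially visible triangles into new ones):

// Hypothetical trivial-reject test at a higher level: if all three
// vertices lie off the same side of the framebuffer, skip the fill.
int triangle_outside(const point2d *v, const size_t width, const size_t height)
{
	if (v[0].x < 0 && v[1].x < 0 && v[2].x < 0) return 1;
	if (v[0].y < 0 && v[1].y < 0 && v[2].y < 0) return 1;
	if (v[0].x >= (int)width && v[1].x >= (int)width && v[2].x >= (int)width) return 1;
	if (v[0].y >= (int)height && v[1].y >= (int)height && v[2].y >= (int)height) return 1;
	return 0;
}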

Of course, this routine could be made even more raw and optimized by breaking out the vertex sorting from the edge following part.  If the triangle vertices don’t change, there’s no need to sort them every time they are drawn, so making the assumption that the vertices are already in framebuffer height order can dramatically simplify the drawing portion.  Then, the consumer can call the sorting routines when they need to, and pass in the sorted vertex values.

And thus APIs evolve.  I’ll wait a little longer before making such changes as I might think of other things that I want to improve along the way.

There you have it then.  From pixels to triangles, the basic framebuffer drawing routines are now complete.  What more could be needed at this level?  Well, dealing with some curvy stuff, like ellipses and rounded rectangles might be nice, all depends on what use cases are more interesting first, drawing pies and graphs, or doing some UI stuff.  The first thing I want to do is display some data related to network traffic events, so I think I’ll focus on a little bit of text rendering first, and leave the ellipse and rounded rects for later.

So, next time around, doing simple bitmap text.

