Drawproc – Like Processing, but in C++

Triangle Strips using drawproc

trianglestrip

So, what’s the code look like to create this gem?

#include "drawproc.h"

int x;
int y;
float outsideRadius = 150;
float insideRadius = 100;


void setup() {
	size(640, 360);
	background(204);
	x = width / 2;
	y = height / 2;
}


void drawStrip()
{
	int numPoints = int(map(mouseX, 0, width, 6, 60));
	float angle = 0;
	float angleStep = 180.0 / numPoints;

	beginShape(GR_TRIANGLE_STRIP);
	for (int i = 0; i <= numPoints; i++) {
		float px = x + cos(radians(angle)) * outsideRadius;
		float py = y + sin(radians(angle)) * outsideRadius;
		angle += angleStep;
		vertex(px, py);
		px = x + cos(radians(angle)) * insideRadius;
		py = y + sin(radians(angle)) * insideRadius;
		vertex(px, py);
		angle += angleStep;
	}
	endShape();
}

void draw() {
  background(204);
  drawStrip();
}

If you’ve done any coding in Processing, you can look at the example that inspired this bit of code here: Triangle Strip

What’s notable is its similarity to the Java version, or even the JavaScript version (if you’re using processing.js). Converting a sketch from Processing to drawproc takes about ten minutes. So, what is drawproc?

Drawproc is an application and library which facilitates the creation of interactive graphics. It is the culmination of taking the work from graphicc and encapsulating it in such a way that it is easy to use in multiple situations.

So, how does it work?  Basically, there is the drawproc.exe application.  This application contains a main(), and a primary event loop which takes care of capturing mouse and keyboard events, and issuing “draw()” calls.  Previously (Dynamic Programming in C) I explained how that dynamic bit of machinery works. All that machinery is at work here, with the addition of one more dynamic programming item.

bool InitializeInstance(const char *moduleName)
{

	// Get pointers to client setup and loop routines
	clientModule = LoadLibrary(moduleName);

	printf("modH: 0x%p\n", clientModule);

	SetupHandler procAddr = (SetupHandler)GetProcAddress(clientModule, "setup");
	printf("proc Address: 0x%p\n", procAddr);

	if (procAddr != NULL) {
		setSetupRoutine(procAddr);
	}

	LoopHandler loopAddr = (LoopHandler)GetProcAddress(clientModule, "draw");
	printf("loop Addr: 0x%p\n", loopAddr);

	if (loopAddr != NULL) {
		setLoopRoutine(loopAddr);
	}

	if ((procAddr == nullptr) && (loopAddr == nullptr))
	{
		return false;
	}

	gClock = dproc_clock_new();

	return true;
}

When invoking drawproc, you give the name of a module, which is a .dll file compiled against the drawproc library. Typical execution looks like this:

c:\tools>drawproc trianglestrip.dll

That ‘trianglestrip.dll’ is passed along to the InitializeInstance() call, the module is loaded, and the ‘setup()’ and ‘draw()’ functions are looked for. If neither of them is found, or the .dll doesn’t load, then the program quits. At this point, everything is the same as if you had linked the drawing module into the drawproc.exe program directly. The advantage is you have a simple small (~200K for debug version) executable (drawproc.exe) which is very slow changing. Then you have the modules, which can be numerous, and dynamic. You can create modules independently of the drawproc.exe and run as you wish. You could even write a single module which loads .lua, or any other embedded scripting environment, and write your code using that scripting language instead.

How do you create these modules? Well, you just write your code, make reference to the header files within drawproc, and use drawproc.lib as the library reference. All the relevant symbols within drawproc are exported, so this just works. At the end of the day, the drawproc.exe looks just like any other .dll that might be out there.

In case you’re still reading, here’s another picture.

SineConsole

This one is interesting because it’s actually an animation (SineConsole).  A few years back, when I was experimenting with Banate CAD (2011), I had done something similar, all in Lua.

Why bother with all this though?  Why C? What’s the point?  I had this interesting conversation last week with a co-worker.  We were discussing whether people who are just getting into software programming would be better served by learning C#, or C/C++.  I think my answer was C#, simply because it seems more in fashion and more applicable to other dynamic languages than C/C++ does.  But, here we’re doing a lot of ‘dynamic’ with standard C/C++.  Really the answer to that question is “you will need to learn and use many languages, frameworks, and tools in your software development.  Learning some C will likely serve you well, but be prepared to learn many different things.”

drawproc being written in C/C++ is great because it makes programming graphics fairly simple (because of the Processing mimicry).  Using the Processing API makes the graphics stuff really easy.  At the same time, since it’s written in C/C++, gaining access to the lowest level stuff of the platform is really easy as well.  For example, integrating with the Microsoft Kinect sensor is as easy as just using the Microsoft Provided SDK directly.  No shim, no translation layer, no ‘binding’ to get in the way.  That’s a very good thing.  Also, as time goes on, doing the accelerated this and that, throwing in networking and the like will be a relative no brainer.

So, there you have it.  drawproc is a new standalone tool which can be used for fiddling about with graphics.  For those who are into such things, it’s a nice tool to play with.


Dynamic Programming in C

Certainly there must be a picture?
test_keyboard

This is a simple test app using the graphicc drawproc library.  The app is pretty simple.  It displays a background image, and as you drag the mouse over the image, the location and size of that semi-translucent square are displayed on the status line at the bottom of the window.
Here is the entirety of the program

/*
test_keyboard

Do some simple mouse and keyboard tracking

*/
#include "drawproc.h"
#include "guistyle.h"
#include <stdio.h>

static GUIStyle styler;
static const int gMaxMode = 3;
static int gMode = 0;

pb_rgba fb;
pb_rect keyRect = { 0, 0, 34, 34 };

void  mousePressed()
{
	gMode++;
	if (gMode >= gMaxMode) {
		gMode = 0;
	}
}

void  mouseMoved()
{
	keyRect.x = mouseX - keyRect.width / 2;
	keyRect.y = mouseY - keyRect.height / 2;
}

void  keyReleased()
{
	switch (keyCode)
	{
	case VK_SPACE:
		write_PPM_binary("test_keyboard.ppm", gpb);
		break;

	case VK_RIGHT: // increase width of keyRect
		keyRect.width += 1;
		break;

	case VK_LEFT:
		keyRect.width -= 1;
		if (keyRect.width < 4) keyRect.width = 4;
		break;
	case VK_UP:
		keyRect.height += 1;
		break;
	case VK_DOWN:
		keyRect.height -= 1;
		if (keyRect.height < 4) keyRect.height = 4;
		break;
	}

	keyRect.x = mouseX - keyRect.width / 2;
	keyRect.y = mouseY - keyRect.height / 2;

}

// Draw information about the mouse
// location, buttons pressed, etc
void drawMouseInfo()
{
	// draw a white banner across the bottom
	noStroke();
	fill(pWhite);
	rect(0, fb.frame.height + 2, width, 24);


	// draw the key rectangle
	fill(RGBA(255, 238, 200, 180));
	stroke(pDarkGray);
	rect(keyRect.x, keyRect.y, keyRect.width, keyRect.height);

	// select verdana font
	setFont(verdana17);
	char infobuff[256];
	sprintf_s(infobuff, "Mouse X: %3d Y: %3d    Key: (%3.0f, %3.0f)(%3.0f, %3.0f)", mouseX, mouseY, keyRect.x, 
		keyRect.y, keyRect.width, keyRect.height);
	fill(pBlack);
	textAlign(TX_LEFT, TX_TOP);
	text(infobuff, 0, fb.frame.height + 2);

}

void draw()
{
	background(pLightGray);
	backgroundImage(&fb);

	drawMouseInfo();
}

void setup()
{
	int ret = PPM_read_binary("c:/repos/graphicc/Test/windows-keyboard-60-keys.ppm", &fb);

	size(fb.frame.width+4, fb.frame.height+4+30);
	background(pLightGray);
}



Of particular note, and what this article is about, are the routines to track the mouse and keyboard actions. Let’s take the mouse movement first of all.

void  mousePressed()
{
	gMode++;
	if (gMode >= gMaxMode) {
		gMode = 0;
	}
}

void  mouseMoved()
{
	keyRect.x = mouseX - keyRect.width / 2;
	keyRect.y = mouseY - keyRect.height / 2;
}

Basically, every time the mouse moves, the location of the ‘keyRect’ is updated, to reflect the new mouse position. In this case, we have the reported mouse position at the center of that translucent square.

Alright, but who’s calling these routines in the first place? For that we look at the mouse handling within drawproc.cpp


LRESULT CALLBACK mouseHandler(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
	switch (message)
	{
		case WM_MOUSEWHEEL:
			if (gmouseOnWheelHandler != nullptr) {
				gmouseOnWheelHandler(); // hWnd, message, wParam, lParam);
			}
		break;


		case WM_MOUSEMOVE:
			mouseX = GET_X_LPARAM(lParam);
			mouseY = GET_Y_LPARAM(lParam);

			if (isMousePressed) {
				if (gmouseOnDraggedHandler != nullptr) {
					gmouseOnDraggedHandler();
				}
			} else if (gmouseOnMovedHandler != nullptr) {
				gmouseOnMovedHandler();
			}
		break;

		case WM_LBUTTONDOWN:
		case WM_MBUTTONDOWN:
		case WM_RBUTTONDOWN:
			isMousePressed = true;
			mouseButton = wParam;

			if (gOnMousePressedHandler != nullptr) {
				gOnMousePressedHandler();
			}
		break;

		case WM_LBUTTONUP:
		case WM_MBUTTONUP:
		case WM_RBUTTONUP:
			isMousePressed = false;

			if (gmouseReleasedHandler != nullptr) {
				gmouseReleasedHandler();
			}
		break;

		default:
			return DefWindowProc(hWnd, message, wParam, lParam);
	}

	return 0;
}

Good good, looks like a typical “Windows” mouse handling routine. But, what of these various global variables?

		case WM_MOUSEMOVE:
			mouseX = GET_X_LPARAM(lParam);
			mouseY = GET_Y_LPARAM(lParam);

			if (isMousePressed) {
				if (gmouseOnDraggedHandler != nullptr) {
					gmouseOnDraggedHandler();
				}
			} else if (gmouseOnMovedHandler != nullptr) {
				gmouseOnMovedHandler();
			}
		break;

In this case, we have mouseX, mouseY, isMousePressed, gmouseOnDraggedHandler, and gmouseOnMovedHandler. That’s a lot of global state. Of particular interest are the two that point to functions: gmouseOnDraggedHandler and gmouseOnMovedHandler. These routines are called whenever there is mouse movement. The one for dragging is called if the additional global variable ‘isMousePressed’ is true.

This is nice because we’re already being ‘dynamic’: the function calls are not hard coded, which means those pointers, although they may have started with some value at compile time, can be changed at any time during runtime. So, let’s see how they might get set.

Before any actions occur, as part of the initialization for the drawproc routines, there is this series of calls:

void init()
{
	// Setup text
	font_t_init(&gfont, verdana12);
	gTextSize = 10;
	gTextAlignX = TX_LEFT;
	gTextAlignY = TX_TOP;


	
	HMODULE modH = GetModuleHandle(NULL);

	// Keyboard handling routines
	setOnKeyPressedHandler((EventObserverHandler)GetProcAddress(modH, "keyPressed"));
	setOnKeyReleasedHandler((EventObserverHandler)GetProcAddress(modH, "keyReleased"));
	setOnKeyTypedHandler((EventObserverHandler)GetProcAddress(modH, "keyTyped"));

	setKeyboardHandler(keyHandler);


	// Mouse Handling Routines
	setOnMousePressedHandler((EventObserverHandler)GetProcAddress(modH, "mousePressed"));
	setOnMouseReleasedHandler((EventObserverHandler)GetProcAddress(modH, "mouseReleased"));
	setOnMouseMovedHandler((EventObserverHandler)GetProcAddress(modH, "mouseMoved"));
	setOnMouseDraggedHandler((EventObserverHandler)GetProcAddress(modH, "mouseDragged"));

	setMouseHandler(mouseHandler);
}

First of all, ‘GetModuleHandle’ is the way within Windows to get a handle on one of the libraries (.dll file) which may have been loaded into the program. If you pass “NULL” as the parameter, it will get the handle for the currently running program (the .exe file) itself. This is valuable to have because with this handle, you can make the ‘GetProcAddress’ call, with the name of a function, and get a pointer to that function. Once we have that pointer, we can set it on the global variable, and the rest we’ve already seen.

So that’s it, that’s the trick. The typical way of compiling with the drawproc library is to statically link your code with it. The drawproc library comes with a ‘main’, so all you have to do is implement various bits and pieces that might not already be there.

As we saw from the test_keyboard code, mousePressed, and mouseMoved are the only two of the 4 mouse routines that were implemented. How is that possible? Well, the others will be NULL, as they won’t be found in the library, and since they’re NULL, they simply won’t be called.

There is one trick though to making this work. The GetProcAddress will only work if the routines were marked for export when compiled. This is achieved using the following in the drawproc.h header:

#define DPROC_API		__declspec(dllexport)

// IO Event Handlers
DPROC_API void keyPressed();
DPROC_API void keyReleased();
DPROC_API void keyTyped();


DPROC_API void mousePressed();
DPROC_API void mouseMoved();
DPROC_API void mouseDragged();
DPROC_API void mouseReleased();

This will essentially create function declarations for the functions we intend to implement. If your test code includes ‘drawproc.h’, and then you implement any of these routines, they will automatically be ‘exported’ when the whole is compiled. This will make it possible for them to be picked up during the ‘init()’ process, and thereby setting those global variables. This is also true of the ‘setup()’ and ‘draw()’ routines.

This is a great little trick, when you’re linking statically against the library. Another way to do it would be to put your code into a separate .dll, and run a drawproc.exe host program, giving the name of your .dll as a parameter. Then the library could be dynamically loaded, and the same hookup of routines could occur. The advantage of that setup is that you don’t need to create an entirely new .exe file for each of your test programs. Just produce the .dll. The disadvantage is that you can get stuck when the runtime changes and your .dll files no longer match it.

So, there you have it, some amount of ‘dynamic’ programming in the little graphics engine that could, using ‘C’, which is certainly capable of supporting this pattern of dynamic programming.


graphicc – presenting a graph

The latest graphicc library is shaping up to be an almost useful thing.

The only way I know how to ensure a library actually serves a purpose is to build applications upon it.  This installment is about that sort of thing.  But first, a look at some more text.  This is basically text alignment working in graphicc.

test_text

The little bit of code that’s doing this looks like this.

void draw()
{
	background(pLightGray);

	// Draw some lines
	stroke(pBlack);
	line(width / 2, 0, width / 2, height - 1);
	line(0, height / 2, width - 1, height / 2);

	// draw some text
	int midx = width / 2;
	int midy = height / 2;
	fill(pBlack);
	textAlign(TX_LEFT);
	text("LEFT", midx, 20);
	
	textAlign(TX_CENTER);
	text("CENTER", midx, 40);
	
	textAlign(TX_RIGHT);
	text("RIGHT", midx, 60);

	// Around the center
	textAlign(TX_LEFT, TX_TOP);
	text("LEFT TOP", 0, 0);

	textAlign(TX_RIGHT, TX_TOP);
	text("RIGHT TOP",width,0);

	textAlign(TX_RIGHT, TX_BOTTOM);
	text("RIGHT BOTTOM", width,height);

	textAlign(TX_LEFT, TX_BOTTOM);
	text("LEFT BOTTOM",0,height);

	stroke(pRed);
	line(midx - 6, midy, midx + 6, midy);
	line(midx, midy - 6, midx, midy + 6);

	fill(pWhite);
	textAlign(TX_CENTER, TX_CENTER);
	text("CENTER CENTER", midx, midy);
}

But wait, this is high level stateful graphics sort of stuff, not the low level raw graphicc API. Yep, that’s right. Along the way of creating the lower level stuff, I’ve been nursing along what I call ‘drawproc’. This is essentially an interface that looks and feels very similar to the popular Processing.org environment, but it’s for C/C++ instead of Java. I also have a skin for PHIGS, but this one is a lot further along, and used constantly for test cases.

In order to exercise the drawproc API, and thus the low level graphicc routines, I’m going through a book on Processing, Visualizing Data, which shows a lot of techniques for visualizing data sets using Processing. And here are the fruits of following one particular chapter:

timeseries

Nice usage of text, alignment, lines, rectangles, tiny circles, different sized text, and all that.  If you were doing the interactive app, you could flip between Milk, Tea, and Coffee graphs.  drawproc, along with a shim for Win32, gives you keyboard and mouse control as well, so doing some interactive hover effects and the like is possible.

It’s a kind of funny thing.  In these days of HTML rendering, how could there possibly be any other way to do UI?  Well, HTML is the answer in many cases, but not all.  Having a small tight graphics library that can allow you to build interactive apps quickly and easily is still a useful thing.  Besides, it’s just plain fun.

Now that I’ve got a reasonable set of core graphics routines, I contemplated what it would be like to write a remote UI sort of thing for cloud based VMs.  Basically, just put some little engine on the VM which receives drawing commands, and allow that to be connected to via some port, which also sends/receives key and mouse stuff.  Well, isn’t that what ‘X’ does?  Yah, and a whole lot more!  Well then, surely VNC has got it covered?  Yes, as long as you’re already running X.  But, it’s a challenge.  Can I write such a small service in less than 1Mb of code?  Maybe 2Mb just to be safe?  Of course you’d have to write apps on the server side that talk to the graphics server, but that shouldn’t be too hard.  Just pick an API skin, like Processing, or GDI, or whatever, and send all the commands to the service, which will render into a buffer to be served up to whomever is looking.

One can dream…


Composition at What Level?

barpoly

That picture’s got a lot going on.  In Render Text like a Boss, I was rendering text, which is a pretty exciting milestone.  The only thing more exciting to do with text is render based on vectors, and being able to scale and rotate, but there are other fish to fry in the meanwhile.  I find that polygons are fairly useful as a primitive.

The above picture shows a whole bunch of primitives rendering with varying degrees of transparency.  Those random rectangles in the background are just making things look interesting.  Next are some random bars, with varying degrees of transparency, again just to make things look interesting.  And then comes that orange tinted polygon thing that lays atop the bars.  The polygon is simply constructed by taking the midpoint of the top edge of the bar as a vertex.  The two bottom vertices (left and right) close out the polygon.

But wait a minute, I see a bunch of triangles in there?  Well, just for the purposes of this article, I’m showing how the polygon is broken down into triangles (triangulated).  Triangles?  What the heck?  Well, you see, it’s like this.  There is more than one way to skin a cat, or render a polygon.  You can search the interwebs and find as many ways to do it as there are graphics libraries.  The ‘direct’ method would have you go scan line by line, calculating even and odd line crossings to determine whether individual pixels were ‘inside’ or ‘outside’ the polygon, and thus color the pixel.  You could traverse down edges, creating horizontal line spans, and then render those spans.  Well, these both look fairly similar to the choices made when rendering a triangle.  So, maybe it’s easier to simply break the polygon down into triangles, and then render the resultant triangles.  I’m sure there’s some math theorem somewhere that can be used to prove that any polygon with more than 3 vertices can be broken up into nice sets of triangles.  Equivalently, there are numerous coding algorithms which will do the same.  For graphicc, I found a nice public domain triangulator, and incorporated it into the library.  Pass it some points, it breaks them down into triangles, and the triangles are rendered.

Great.  This has a couple of nice implications.  First of all, triangles (and ultimately horizontal lines) remain a useful primitive in the core of the graphics library.  I did not actually add a polygon capability to the core, just keep the triangles there, and anyone who needs polygons can easily break them down into triangles.  That leaves me with a single complex primitive to optimize.  I do the same for quads.  Rectangles are a special case, in that they may be optimized separately to come up with faster behavior than the triangulation would deliver.

ellipses

What about the ellipse?  Hmmm.  There are fairly fast ellipse drawing routines in the world.  The most interesting for drawing ellipse outlines looks like this:


void raster_rgba_ellipse_stroke(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const size_t xradius, size_t yradius, const uint32_t color)
{
	int x, y;
	int xchange, ychange;
	int ellipseerror;
	int twoasquare, twobsquare;
	int stoppingx, stoppingy;

	twoasquare = 2 * xradius*xradius;
	twobsquare = 2 * yradius*yradius;

	x = xradius;
	y = 0;

	xchange = yradius*yradius*(1 - 2 * xradius);
	ychange = xradius*xradius;
	ellipseerror = 0;
	stoppingx = twobsquare*xradius;
	stoppingy = 0;

	// first set of points, sides
	while (stoppingx >= stoppingy)
	{
		Plot4EllipsePoints(pb, cx, cy, x, y, color);
		y++;
		stoppingy += twoasquare;
		ellipseerror += ychange;
		ychange += twoasquare;
		if ((2 * ellipseerror + xchange) > 0) {
			x--;
			stoppingx -= twobsquare;
			ellipseerror += xchange;
			xchange += twobsquare;
		}
	}

	// second set of points, top and bottom
	x = 0;
	y = yradius;
	xchange = yradius*yradius;
	ychange = xradius*xradius*(1 - 2 * yradius);
	ellipseerror = 0;
	stoppingx = 0;
	stoppingy = twoasquare*yradius;

	while (stoppingx <= stoppingy) {
		Plot4EllipsePoints(pb, cx, cy, x, y, color);
		x++;
		stoppingx += twobsquare;
		ellipseerror += xchange;
		xchange += twobsquare;
		if ((2 * ellipseerror + ychange) > 0) {
			y--;
			stoppingy -= twoasquare;
			ellipseerror += ychange;
			ychange += twoasquare;
		}
	}
}

This is one of those routines that takes advantage of the fact that axis aligned ellipses have a lot of symmetry, so four points are plotted at the same time. Well, it’s not a big leap to think that this technique can be extended. For filling the ellipse, you can simply draw horizontal lines between the symmetrical points, and you end up with a solid ellipse.

inline void fill2EllipseLines(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const unsigned int x, const unsigned int y, const uint32_t color)
{
	raster_rgba_hline_blend(pb, cx - x, cy + y, 2*x, color);
	raster_rgba_hline_blend(pb, cx - x, cy - y, 2 * x, color);
}

void raster_rgba_ellipse_fill(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const size_t xradius, size_t yradius, const uint32_t color)
{
	int x, y;
	int xchange, ychange;
	int ellipseerror;
	int twoasquare, twobsquare;
	int stoppingx, stoppingy;

	twoasquare = 2 * xradius*xradius;
	twobsquare = 2 * yradius*yradius;

	x = xradius;
	y = 0;

	xchange = yradius*yradius*(1 - 2 * xradius);
	ychange = xradius*xradius;
	ellipseerror = 0;
	stoppingx = twobsquare*xradius;
	stoppingy = 0;

	// first set of points, sides
	while (stoppingx >= stoppingy)
	{
		//Plot4EllipsePoints(pb, cx, cy, x, y, color);
		fill2EllipseLines(pb, cx, cy, x, y, color);
		y++;
		stoppingy += twoasquare;
		ellipseerror += ychange;
		ychange += twoasquare;
		if ((2 * ellipseerror + xchange) > 0) {
			x--;
			stoppingx -= twobsquare;
			ellipseerror += xchange;
			xchange += twobsquare;
		}
	}

	// second set of points, top and bottom
	x = 0;
	y = yradius;
	xchange = yradius*yradius;
	ychange = xradius*xradius*(1 - 2 * yradius);
	ellipseerror = 0;
	stoppingx = 0;
	stoppingy = twoasquare*yradius;

	while (stoppingx <= stoppingy) {
		fill2EllipseLines(pb, cx, cy, x, y, color);
		x++;
		stoppingx += twobsquare;
		ellipseerror += xchange;
		xchange += twobsquare;
		if ((2 * ellipseerror + ychange) > 0) {
			y--;
			stoppingy -= twoasquare;
			ellipseerror += ychange;
			ychange += twoasquare;
		}
	}
}

In this context, the ellipse is kind of like the rectangle. There is an easy optimization at hand, so the benefits of employing triangulation might not be worth the effort. Then again, if you wanted to maintain a smaller codebase, then you might want to triangulate even this ellipse. You’d still be stuck with using the straight up code to draw the ellipse outline, and with a little bit of refactoring, you could use the exact same code for the outline and the fill. The routine to be utilized for the symmetrical points could be passed in as a function pointer, and that would be that. That makes for easy composition, code reuse, and the like.

I’m not actually sure how much I tend to use ellipses in realistic drawing anyway. I suppose there are circles here and there, but eh, there you go.

The theme of this article is composition. There is a bit of code reuse, and composition in how you build larger things from smaller things; maintaining the constraint of a small codebase requires that higher order elements be composed from smaller, simpler elements. The triangulation code currently relies on some C++ (for vector), so it can’t go into the core library as yet (straight C only), but I’m sure it will get there.

Well, there you have it, the little library that could is gaining more capabilities all the time. Soon enough, these capabilities will allow the composition of some fairly complex UI elements.


Drawing Curvy Lines with aplomb – Beziers

I have written plenty about Bezier and other curves and surfaces.  That was largely in the context of 3D printing using OpenScad, or BanateCad.  But now, I’m adding to a general low level graphics library.

test_bezier

Well well, what do we have here.  Bezier curves are one of those constructs where you lay down some ‘control points’ and then draw a line that meanders between them according to some mathematical formula.  In the picture, the green curve is represented by 4 control points, the red one is represented by 5 points, and the blue ones are each represented by 4 points.

How do you construct a Bezier curve?  Well, you don’t need much more than the following code:

void computeCoefficients(const int n, int * c)
{
	int k, i;

	for (k = 0; k <= n; k++)
	{
		// compute n!/(k!(n-k)!)
		c[k] = 1;
		for (i = n; i >= k + 1; i--)
		{
			c[k] *= i;
		}

		for (i = n - k; i >= 2; i--)
		{
			c[k] /= i;
		}
	}
}

void computePoint(const float u, Pt3 * pt, const int nControls, const Pt3 *controls, const int * c)
{
	int k;
	int n = nControls - 1;
	float blend;

	pt->x = 0.0;	// x
	pt->y = 0.0;	// y
	pt->z = 0.0;	// z
	
	// Add in influence of each control point
	for (k = 0; k < nControls; k++){
		blend = c[k] * powf(u, k) *powf(1 - u, n - k);
		pt->x += controls[k].x * blend;
		pt->y += controls[k].y * blend;
		pt->z += controls[k].z * blend;
	}
}

void bezier(const Pt3 *controls, const int nControls, const int m, Pt3 * curve)
{
	// create space for the coefficients
	int * c = (int *)malloc(nControls * sizeof(int));
	int i;

	computeCoefficients(nControls - 1, c);
	for (i = 0; i <= m; i++) {
		computePoint(i / (float)m, &curve[i], nControls, controls, c);
	}
	free(c);	
}

This is pretty much the same code you would get from any book or tutorial on fundamental computer graphics. It will allow you to calculate a Bezier curve using any number of control points.

Here’s the test case that generated the picture

typedef struct {
	REAL x;
	REAL y;
	REAL z;
} Pt3;

// Draw nPts line segments; 'curve' must contain nPts + 1 points
void polyline(pb_rgba *pb, Pt3 *curve, const int nPts, int color)
{
	for (int idx = 0; idx < nPts; idx++) {
		raster_rgba_line(pb, curve[idx].x, curve[idx].y, curve[idx + 1].x, curve[idx + 1].y, color);
	}
}

void test_bezier()
{

	size_t width = 400;
	size_t height = 400;
	int centerx = width / 2;
	int centery = height / 2;
	int xsize = (int)(centerx*0.9);
	int ysize = (int)(centery*0.9);

	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// background color
	raster_rgba_rect_fill(&pb, 0, 0, width, height, pLightGray);

	// One curve drooping down
	Pt3 controls[4] = { { centerx - xsize, centery, 0 }, { centerx, centery + ysize, 0 }, { centerx, centery + ysize, 0 }, { centerx + xsize, centery, 0 } };
	int nControls = 4;
	int m = 60;
	Pt3 curve[100];
	bezier(controls, nControls, m, curve);
	polyline(&pb, curve, m, pGreen);

	// Several curves going up
	for (int offset = 0; offset < ysize; offset += 5) {
		Pt3 ctrls2[4] = { { centerx - xsize, centery, 0 }, { centerx, centery - offset, 0 }, { centerx, centery - offset, 0 }, { centerx + xsize, centery, 0 } };
		bezier(ctrls2, nControls, m, curve);
		polyline(&pb, curve, m, pBlue);
	}

	// one double peak through the middle
	Pt3 ctrls3[5] = { { centerx - xsize, centery, 0 }, { centerx-(xsize*0.3f), centery + ysize, 0 }, { centerx, centery - ysize, 0 }, { centerx+(xsize*0.3f), centery + ysize, 0 }, { centerx + xsize, centery, 0 } };
	int nctrls = 5;
	bezier(ctrls3, nctrls, m, curve);
	polyline(&pb, curve, m, pRed);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_bezier.ppm", &pb);
}

From here, there are some interesting considerations. For example, you don’t want to calculate the coefficients every single time you draw a single curve. In terms of computer graphics, most Bezier curves consist of 3 or 4 points at most. Ideally, you would calculate the coefficients for those specific curves and store the values statically for later usage. This is what you’d do ideally for a small embedded library. The tradeoff in storage space is well worth the savings in compute time.

Additionally, instead of calculating all the line segments, storing those values, and using a polyline routine to draw things, you’d likely want to simply have the bezier routine draw the lines directly. That would cut down on temporary allocations.

At the same time though, you want to retain the ‘computePoint’ function as independent, because once you have a Bezier calculation function within the library, you’ll want to use it to do things other than just draw curved lines. Bezier, and its relative Hermite, are good for calculating things like color ramps, motion, and other curvy stuff. This is all of course before you start using splines and NURBS, which are a much more involved process.

There you have it, a couple of functions, and suddenly you have Bezier curves based on nothing more than the low level line drawing primitives. At the moment, this code sits in a test file, but soon enough I’ll move it into the graphicc library proper.


GraphicC – The triangle’s the thing

Pixels, lines, and squares, oh my!
test_triangle

And, finally, we come to triangles.  The demo code is thus:

#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// Draw some triangles 
	size_t midx = (size_t)(width / 2);
	size_t midy = (size_t)(height / 2);

	raster_rgba_triangle_fill(&pb, 0, 0, width-1, 0, midx, midy, pRed);
	raster_rgba_triangle_fill(&pb, 0, 0, midx, midy, 0, height-1, pGreen);
	raster_rgba_triangle_fill(&pb, width-1, 0, midx, midy, width-1, height - 1, pBlue);
	raster_rgba_triangle_fill(&pb, midx, midy, 0, height-1, width - 1, height - 1, pDarkGray);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_triangle.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

Why triangles? Well, beyond pixels and lines, they are one of the most useful drawing primitives. When it finally comes to rendering in 3D, for example, it will all be about triangles. Graphics hardware over the years has been optimized to render triangles as well, so it is beneficial to focus more attention on this primitive than on any other. You might think polygons are very useful, but all polygons can be broken down into triangles, and we’re back where we started.

Here’s the triangle drawing routine:

#define swap16(a, b) { int16_t t = a; a = b; b = t; }

typedef struct _point2d
{
	int x;
	int y;
} point2d;

int FindTopmostPolyVertex(const point2d *poly, const size_t nelems)
{
	int ymin = INT32_MAX;
	int vmin = 0;

	size_t idx = 0;
	while (idx < nelems) {
		if (poly[idx].y < ymin) {
			ymin = poly[idx].y;
			vmin = idx;
		}
		idx++;
	}

	return vmin;
}

void RotateVertices(point2d *res, point2d *poly, const size_t nelems, const int starting)
{
	size_t offset = starting;
	size_t idx = 0;
	while (idx < nelems) {
		res[idx].x = poly[offset].x;
		res[idx].y = poly[offset].y;
		offset++;
		
		if (offset > nelems-1) {
			offset = 0;
		}

		idx++;
	}
}

void sortTriangle(point2d *sorted, const int x1, const int y1, const int x2, const int y2, const int x3, const int y3)
{
	point2d verts[3] = { { x1, y1 }, { x2, y2 }, { x3, y3 } };

	int topmost = FindTopmostPolyVertex(verts, 3);
	RotateVertices(sorted, verts, 3, topmost);
}

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)
{
	int a, b, y, last;

	// sort vertices, such that 0 == y with lowest number (top)
	point2d sorted[3];
	sortTriangle(sorted, x1, y1, x2, y2, x3, y3);

	// Handle the case where points are colinear (all on same line)
	if (sorted[0].y == sorted[2].y) { 
		a = b = sorted[0].x;
		
		if (sorted[1].x < a) 
			a = sorted[1].x;
		else if (sorted[1].x > b) 
			b = sorted[1].x;

		if (sorted[2].x < a) 
			a = sorted[2].x;
		else if (sorted[2].x > b) 
			b = sorted[2].x;

		raster_rgba_hline(pb, a, sorted[0].y, b - a + 1, color);
		return;
	}

	int16_t
		dx01 = sorted[1].x - sorted[0].x,
		dy01 = sorted[1].y - sorted[0].y,
		dx02 = sorted[2].x - sorted[0].x,
		dy02 = sorted[2].y - sorted[0].y,
		dx12 = sorted[2].x - sorted[1].x,
		dy12 = sorted[2].y - sorted[1].y;
	
	int32_t sa = 0, sb = 0;

	// For upper part of triangle, find scanline crossings for segments
	// 0-1 and 0-2. If y1=y2 (flat-bottomed triangle), the scanline y1
	// is included here (and second loop will be skipped, avoiding a /0
	// error there), otherwise scanline y1 is skipped here and handled
	// in the second loop...which also avoids a /0 error here if y0=y1
	// (flat-topped triangle).
	if (sorted[1].y == sorted[2].y) 
		last = sorted[1].y; // Include y1 scanline
	else 
		last = sorted[1].y - 1; // Skip it
	
	for (y = sorted[0].y; y <= last; y++) 
	{
		a = sorted[0].x + sa / dy01;
		b = sorted[0].x + sb / dy02;
		sa += dx01;
		sb += dx02;
		/* longhand:
		a = x0 + (x1 - x0) * (y - y0) / (y1 - y0);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		
		if (a > b) swap16(a, b);
		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}

	// For lower part of triangle, find scanline crossings for segments
	// 0-2 and 1-2. This loop is skipped if y1=y2.
	sa = dx12 * (y - sorted[1].y);
	sb = dx02 * (y - sorted[0].y);
	for (; y <= sorted[2].y; y++) 
	{
		a = sorted[1].x + sa / dy12;
		b = sorted[0].x + sb / dy02;
		sa += dx12;
		sb += dx02;
		/* longhand:
		a = x1 + (x2 - x1) * (y - y1) / (y2 - y1);
		b = x0 + (x2 - x0) * (y - y0) / (y2 - y0);
		*/
		if (a > b) 
			swap16(a, b);

		raster_rgba_hline(pb, a, y, b - a + 1, color);
	}
}

This seems like a lot of work to simply draw a triangle! There are lots of routines for doing this. This particular implementation borrows from a couple of different techniques. The basic idea is to first sort the vertices in scanline order, top to bottom. That is, we want to know which vertex to start from, because we’re going to follow an edge down the framebuffer, drawing lines between edges as we go. In a general triangle, without a flat top or bottom, there will be a switch between driving edges as we encounter the third point somewhere down the scanlines.

At the end, the ‘raster_rgba_hline’ function is called to actually draw a single line. If we wanted to do blending, we’d change this line. If we wanted to use a color per vertex, it would ultimately be dealt with here. All possible. Not necessary for the basic routine. With some refactoring, this could be made more general purpose, and different rendering modes could be included.

This is the only case where I use a data structure, to store the sorted points. Most of the time I just pass parameters and pointers to parameters around.

This is also a good time to mention API conventions. One of the constraints from the beginning is the need to stick with basic C constructs, and not get too fancy. In some cases this is at the expense of readability, but in general things just work out ok, as it’s simple to get used to. So, let’s look at the API here:

void raster_rgba_triangle_fill(pb_rgba *pb, 
	const unsigned int x1, const unsigned int  y1, 
	const unsigned int  x2, const unsigned int  y2, 
	const unsigned int  x3, const unsigned int  y3, 
	int color)

That’s a total of 8 parameters. If I were going for absolute speed, and I knew the platform I was running on, I might try to limit the parameters to 4, to help ensure they get passed in registers, rather than requiring the use of a stack. But, that’s an optimization that’s iffy at best, so I don’t bother. And why not use some data structures? Clearly x1, y1 is a 2D point, so why not use a ‘point’ data structure. Surely where the data comes from is structured as well. In some cases it is, and in others it is not. I figure that in the cases where the data did come from a structure, it’s fairly easy for the caller to do the dereferencing and pass in the values directly. In the cases where the data did not come from a point structure, it’s a tax to pay to stuff the data into a point structure, simply to have it pass in to a function, which may not need it in that form. So, in general, all the parameters are passed in a ‘denormalized’ form. With small numbers of easy to remember parameters, I think this is fine.

Also, I did decide that it was useful to use ‘const’ in most cases where it is applicable.  This gives the compiler some hints about how best to deal with the parameters, and allows you to specify constants inline for test cases, without compiler complaints.

There is a decided lack of error handling in most of these low level APIs.  You might think that triangle drawing would be an exception.  Shouldn’t I do some basic clipping to the framebuffer?  The answer is no, not at this level.  At a higher level, we might be drawing something larger, where a triangle is but a small part.  At that higher level we’ll know about our framebuffer, or other constraints that might cause clipping of a triangle.  At that level, we can perform the clipping, and send down new sets of triangles to be rendered by this very low level triangle drawing routine.

Of course, this routine could be made even more raw and optimized by breaking out the vertex sorting from the edge following part.  If the triangle vertices don’t change, there’s no need to sort them every time they are drawn, so making the assumption that the vertices are already in framebuffer height order can dramatically simplify the drawing portion.  Then, the consumer can call the sorting routines when they need to, and pass in the sorted vertex values.

And thus APIs evolve.  I’ll wait a little longer before making such changes as I might think of other things that I want to improve along the way.

There you have it then.  From pixels to triangles, the basic framebuffer drawing routines are now complete.  What more could be needed at this level?  Well, dealing with some curvy stuff, like ellipses and rounded rectangles might be nice, all depends on what use cases are more interesting first, drawing pies and graphs, or doing some UI stuff.  The first thing I want to do is display some data related to network traffic events, so I think I’ll focus on a little bit of text rendering first, and leave the ellipse and rounded rects for later.

So, next time around, doing simple bitmap text.


GraphicC – Random Lines

Thus far, I’ve been able to draw individual pixels, some rectangles, straight vertical and horizontal lines.  Now it’s time to tackle those lines that have an arbitrary slope.

test_writebitmap

In this case, we’re only interested in the line drawing, not the rectangles and triangles.  Here’s the example code:

#include "test_common.h"

void test_writebitmap()
{
	size_t width = 480;
	size_t height = 480;
	pb_rgba pb;
	pb_rgba_init(&pb, width, height);

	// draw horizontal lines top and bottom
	raster_rgba_hline(&pb, 0, 0, width - 1, pWhite);
	raster_rgba_hline(&pb, 0, height - 1, width - 1, pWhite);

	// draw vertical lines left and right
	raster_rgba_vline(&pb, 0, 0, height - 1, pGreen);
	raster_rgba_vline(&pb, width - 1, 0, height - 1, pTurquoise);

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

	// draw a couple of rectangles
	raster_rgba_rect_fill(&pb, 5, 5, 60, 60, pLightGray);
	raster_rgba_rect_fill(&pb, width - 65, height - 65, 60, 60, pLightGray);

	// draw a rectangle in the center
	pb_rgba fpb;
	pb_rgba_get_frame(&pb, (width / 2) - 100, (height / 2) - 100, 200, 200, &fpb);
	raster_rgba_rect_fill(&fpb, 0, 0, 200, 200, pBlue);

	// Draw triangle
	raster_rgba_triangle_fill(&pb, 0, height - 10, 0, 10, (width / 2) - 10, height / 2, pGreen);

	// Now we have a simple image, so write it to a file
	int err = write_PPM("test_writebitmap.ppm", &pb);
}

int main(int argc, char **argv)
{
	test_writebitmap();

	return 0;
}

This is the part we’re interested in here:

	// draw criss cross lines
	raster_rgba_line(&pb, 0, 0, width - 1, height - 1, pRed);
	raster_rgba_line(&pb, width - 1, 0, 0, height - 1, pYellow);

Basically, feed in x1, y1, x2, y2 and a color, and a line will be drawn. The operation is SRCCOPY, meaning, the color is not blended, it just lays over whatever is already there. And the line drawing routine itself?

#define sgn(val) ((0 < val) - (val < 0))

// Bresenham simple line drawing
void raster_rgba_line(pb_rgba *pb, unsigned int x1, unsigned int y1, unsigned int x2, unsigned int y2, int color)
{
	int dx, dy;
	int i;
	int sdx, sdy, dxabs, dyabs;
	unsigned int x, y, px, py;

	dx = x2 - x1;      /* the horizontal distance of the line */
	dy = y2 - y1;      /* the vertical distance of the line */
	dxabs = abs(dx);
	dyabs = abs(dy);
	sdx = sgn(dx);
	sdy = sgn(dy);
	x = dyabs >> 1;
	y = dxabs >> 1;
	px = x1;
	py = y1;

	pb_rgba_set_pixel(pb, x1, y1, color);

	if (dxabs >= dyabs) /* the line is more horizontal than vertical */
	{
		for (i = 0; i < dxabs; i++)
		{
			y += dyabs;
			if (y >= (unsigned int)dxabs)
			{
				y -= dxabs;
				py += sdy;
			}
			px += sdx;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
	else /* the line is more vertical than horizontal */
	{
		for (i = 0; i < dyabs; i++)
		{
			x += dxabs;
			if (x >= (unsigned int)dyabs)
			{
				x -= dyabs;
				px += sdx;
			}
			py += sdy;
			pb_rgba_set_pixel(pb, px, py, color);
		}
	}
}

This is a fairly simple implementation of the well known Bresenham line drawing algorithm. It may no longer be the fastest in the world, but it’s fairly effective and computationally simple. If we wanted to do a blend of colors, then the ‘pb_rgba_set_pixel’ could simply be replaced with the blending routine that we saw in the other line drawing routines.

And that’s that. You could implement this as EFLA, or any number of other routines that might prove to be faster. But, why optimize too early? It might be that memory access is the bottleneck, and not the simple calculations done here. Bresenham is also fairly nice because everything is simple integer arithmetic, which is both fairly fast, as well as easy to implement on an FPGA, if it comes to that. Certainly on a microcontroller, it would be preferred to float/double.

While we’re at it, how about that bitmap writing to file routine?

	int err = write_PPM("test_writebitmap.ppm", &pb);

I’ve seen plenty of libraries that spend an inordinate amount of lines of code on the simple image load/save portion of things. That can easily dominate anything else you do. I didn’t want to really pollute the basic codebase with all that, so I chose the ‘ppm’ format as the only format that the library speaks natively. It’s a good ol’ format, nothing fancy, basic RGB dump of values, with some text at the beginning to describe the format of the image. The routine looks like this:

pbm.h

#pragma once

#ifndef PBM_H
#define PBM_H

#include "pixelbuffer.h"

#ifdef __cplusplus
extern "C" {
#endif

int write_PPM(const char *filename, pb_rgba *fb);

#ifdef __cplusplus
}
#endif

#endif

pbm.cpp

#include "pbm.h"

#include <stdio.h>

#pragma warning(push)
#pragma warning(disable: 4996)	// _CRT_SECURE_NO_WARNINGS (fopen)


int write_PPM(const char *filename, pb_rgba *fb)
{
	FILE * fp = fopen(filename, "wb");

	if (!fp) return -1;

	// write out the image header
	fprintf(fp, "P6\n%d %d\n255\n", fb->frame.width, fb->frame.height);

	// write the individual pixel values in binary form
	unsigned int * pixelPtr = (unsigned int *)fb->data;

	for (size_t row = 0; row < fb->frame.height; row++) {
		for (size_t col = 0; col < fb->frame.width; col++){
			fwrite(&pixelPtr[col], 3, 1, fp);
		}
		pixelPtr += fb->pixelpitch;
	}

	fclose(fp);

	return 0;
}

#pragma warning(pop)

There are lots of different ways to do this, but basically: get a pointer to the pixel values, and write out the RGB portion of each pixel (ignoring the alpha portion). The header is written in plain text: the format marker ‘P6’, followed by the width and height of the image, and the maximum channel value (255). And that’s that. The internal format of a framebuffer is fairly simple, so a whole library that can read and write things like GIF, PNG, JPG and the like can be written fairly easily, and independently of the core library. And that’s probably the best way to do it. Then the consumer of this library isn’t forced to carry along complexity they don’t need, but can simply compose what is necessary for their needs.

Alright then. There is one more primitive, the triangle, which will complete the basics of the drawing routines. So, next time.