Drawproc – Like Processing, but in C++

Triangle Strips using drawproc

[image: trianglestrip]

So, what’s the code look like to create this gem?

#include "drawproc.h"

int x;
int y;
float outsideRadius = 150;
float insideRadius = 100;


void setup() {
	size(640, 360);
	background(204);
	x = width / 2;
	y = height / 2;
}


void drawStrip()
{
	int numPoints = int(map(mouseX, 0, width, 6, 60));
	float angle = 0;
	float angleStep = 180.0 / numPoints;

	beginShape(GR_TRIANGLE_STRIP);
	for (int i = 0; i <= numPoints; i++) {
		float px = x + cos(radians(angle)) * outsideRadius;
		float py = y + sin(radians(angle)) * outsideRadius;
		angle += angleStep;
		vertex(px, py);
		px = x + cos(radians(angle)) * insideRadius;
		py = y + sin(radians(angle)) * insideRadius;
		vertex(px, py);
		angle += angleStep;
	}
	endShape();
}

void draw() {
  background(204);
  drawStrip();
}

If you’ve done any coding in Processing, you can look at the example that inspired this bit of code here: Triangle Strip

What’s notable about it is the similarity to the Java version, or even the JavaScript version (if you’re using processing.js). It takes about a ten minute conversion to go from Processing to using drawproc. So, what is drawproc?

Drawproc is an application and library which facilitates the creation of interactive graphics. It is the culmination of taking the work from graphicc and encapsulating it in such a way that it is easy to use in multiple situations.

So, how does it work? Basically, there is the drawproc.exe application. This application contains a main(), and a primary event loop which takes care of capturing mouse and keyboard events, and issuing “draw()” calls. Previously (Dynamic Programming in C) I explained how that dynamic bit of machinery works. All that machinery is at work here, with the addition of one more dynamic programming item.
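The SetupHandler and LoopHandler types used below are presumably nothing more than function-pointer typedefs matching the Processing-style entry points. Here’s a sketch of what they might look like; treat these as assumptions, since the real definitions live in the drawproc headers.

// Assumed function-pointer types for the dynamically resolved entry points.
// Signatures are inferred from the setup()/draw() functions shown above.
typedef void (*SetupHandler)();
typedef void (*LoopHandler)();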

bool InitializeInstance(const char *moduleName)
{

	// Get pointers to client setup and loop routines
	clientModule = LoadLibrary(moduleName);

	printf("modH: 0x%p\n", clientModule);

	// If the module could not be loaded at all, there's nothing to look up
	if (clientModule == NULL) {
		return false;
	}

	SetupHandler procAddr = (SetupHandler)GetProcAddress(clientModule, "setup");
	printf("proc Address: 0x%p\n", procAddr);

	if (procAddr != NULL) {
		setSetupRoutine(procAddr);
	}

	LoopHandler loopAddr = (LoopHandler)GetProcAddress(clientModule, "draw");
	printf("loop Addr: 0x%p\n", loopAddr);

	if (loopAddr != NULL) {
		setLoopRoutine(loopAddr);
	}

	if ((procAddr == nullptr) && (loopAddr == nullptr))
	{
		return false;
	}

	gClock = dproc_clock_new();

	return true;
}

When invoking drawproc, you give the name of a module, which is a .dll file compiled against the .exe. A typical invocation looks like this:

c:\tools>drawproc trianglestrip.dll

That ‘trianglestrip.dll’ is passed along to the InitializeInstance() call, the module is loaded, and the ‘setup()’ and ‘draw()’ functions are looked up. If neither of them is found, or the .dll doesn’t load, the program quits. At this point, everything behaves the same as if you had linked the drawing module into the drawproc.exe program directly. The advantage is that you have a single small (~200K for the debug version) executable (drawproc.exe) which changes very slowly. Then you have the modules, which can be numerous and dynamic. You can create modules independently of drawproc.exe and run them as you wish. You could even write a single module which loads .lua, or any other embedded scripting environment, and write your code using that scripting language instead.

How do you create these modules? Well, you just write your code, make reference to the header files within drawproc, and use drawproc.lib as the library reference. All the relevant symbols within drawproc are exported, so this just works. At the end of the day, from your module’s point of view, drawproc.exe looks just like any other .dll that might be out there.
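As a minimal sketch, a module is nothing more than a source file that defines setup() and draw() and gets built as a .dll against drawproc.lib. The export decoration below is an assumption (drawproc.h may already provide a macro for it), and ‘sinewave’ is just a made-up module name:

// sinewave.cpp - hypothetical module, built as sinewave.dll against drawproc.lib
#include "drawproc.h"

// GetProcAddress() finds these by name, so give them C linkage and export them;
// the real drawproc headers may already supply a macro that does this.
extern "C" __declspec(dllexport) void setup()
{
	size(320, 240);
	background(204);
}

extern "C" __declspec(dllexport) void draw()
{
	background(204);
	line(0, height / 2, width - 1, height / 2);
}

Then you run it the same way as before:

c:\tools>drawproc sinewave.dll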

In case you’re still reading, here’s another picture.

[image: SineConsole / Banate CAD 2011]

This one is interesting because it’s actually an animation (SineConsole). A few years back, when I was experimenting with BanateCAD, I had done something similar, all in Lua (Banate CAD 2011).

Why bother with all this though? Why C? What’s the point? I had an interesting conversation last week with a co-worker. We were discussing whether people who are just getting into software development would be better served by learning C#, or C/C++. I think my answer was C#, simply because it seems more in fashion and more applicable to other dynamic languages than C/C++ does. But here we’re doing a lot of ‘dynamic’ with standard C/C++. Really, the answer to that question is: “you will need to learn and use many languages, frameworks, and tools in your software development. Learning some C will likely serve you well, but be prepared to learn many different things.”

drawproc being written in C/C++ is great because it makes programming graphics fairly simple (because of the Processing mimicry). Using the Processing API makes the graphics stuff really easy. At the same time, since it’s written in C/C++, gaining access to the lowest-level stuff of the platform is really easy as well. For example, integrating with the Microsoft Kinect sensor is as easy as using the Microsoft-provided SDK directly. No shim, no translation layer, no ‘binding’ to get in the way. That’s a very good thing. Also, as time goes on, adding hardware acceleration, networking, and the like will be a relative no-brainer.

So, there you have it.  drawproc is a new standalone tool which can be used for fiddling about with graphics.  For those who are into such things, it’s a nice tool to play with.


graphicc – presenting a graph

The latest graphicc library is shaping up to be an almost useful thing.

The only way I know how to ensure a library actually serves a purpose is to build applications upon it.  This installment is about that sort of thing.  But first, a look at some more text.  This is basically text alignment working in graphicc.

[image: test_text]

The little bit of code that’s doing this looks like this.

void draw()
{
	background(pLightGray);

	// Draw some lines
	stroke(pBlack);
	line(width / 2, 0, width / 2, height - 1);
	line(0, height / 2, width - 1, height / 2);

	// draw some text
	int midx = width / 2;
	int midy = height / 2;
	fill(pBlack);
	textAlign(TX_LEFT);
	text("LEFT", midx, 20);
	
	textAlign(TX_CENTER);
	text("CENTER", midx, 40);
	
	textAlign(TX_RIGHT);
	text("RIGHT", midx, 60);

	// Around the center
	textAlign(TX_LEFT, TX_TOP);
	text("LEFT TOP", 0, 0);

	textAlign(TX_RIGHT, TX_TOP);
	text("RIGHT TOP",width,0);

	textAlign(TX_RIGHT, TX_BOTTOM);
	text("RIGHT BOTTOM", width,height);

	textAlign(TX_LEFT, TX_BOTTOM);
	text("LEFT BOTTOM",0,height);

	stroke(pRed);
	line(midx - 6, midy, midx + 6, midy);
	line(midx, midy - 6, midx, midy + 6);

	fill(pWhite);
	textAlign(TX_CENTER, TX_CENTER);
	text("CENTER CENTER", midx, midy);
}

But wait, this is high-level, stateful graphics sort of stuff, not the low-level raw graphicc API. Yep, that’s right. Along the way of creating the lower-level stuff, I’ve been nursing along what I call ‘drawproc’. This is essentially an interface that looks and feels very similar to the popular Processing.org environment, but it’s for C/C++ instead of Java. I also have a skin for PHIGS, but this one is a lot further along, and is used constantly for test cases.

In order to exercise the drawproc API, and thus the low-level graphicc routines, I’m going through a book on Processing, Visualizing Data, which shows a lot of techniques for visualizing data sets using Processing. And here are the fruits of following one particular chapter:

[image: timeseries]

Nice usage of text, alignment, lines, rectangles, tiny circles, different-sized text, and all that. If you were doing the interactive app, you could flip between Milk, Tea, and Coffee graphs. drawproc, along with a shim for Win32, gives you keyboard and mouse control as well, so doing some interactive hover effects and the like is possible.

It’s a kind of funny thing.  In these days of HTML rendering, how could there possibly be any other way to do UI?  Well, HTML is the answer in many cases, but not all.  Having a small tight graphics library that can allow you to build interactive apps quickly and easily is still a useful thing.  Besides, it’s just plain fun.

Now that I’ve got a reasonable set of core graphics routines, I contemplated what it would be like to write a remote UI sort of thing for cloud based VMs.  Basically, just put some little engine on the VM which receives drawing commands, and allow that to be connected to via some port, which also sends/receives key and mouse stuff.  Well, isn’t that what ‘X’ does?  Yah, and a whole lot more!  Well then, surely VNC has got it covered?  Yes, as long as you’re already running X.  But, it’s a challenge.  Can I write such a small service in less than 1Mb of code?  Maybe 2Mb just to be safe?  Of course you’d have to write apps on the server side that talk to the graphics server, but that shouldn’t be too hard.  Just pick an API skin, like Processing, or GDI, or whatever, and send all the commands to the service, which will render into a buffer to be served up to whomever is looking.
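Just to make the idea concrete, here’s a minimal sketch of what the receiving side of such a service might look like. The opcode list, struct layout, and names are purely hypothetical; none of this exists in graphicc or drawproc today.

#include <cstdint>
#include "drawproc.h"

// Hypothetical wire format: one opcode byte followed by float arguments.
enum DrawOp : uint8_t {
	OP_BACKGROUND = 1,
	OP_LINE = 2,
};

struct DrawCommand {
	uint8_t op;       // which drawing call to perform
	float args[4];    // meaning depends on 'op'
};

// The service would read DrawCommands off a socket and forward them to the
// ordinary drawing calls, which render into the buffer being served up.
void dispatch(const DrawCommand &cmd)
{
	switch (cmd.op) {
	case OP_BACKGROUND:
		background(cmd.args[0]);
		break;
	case OP_LINE:
		line(cmd.args[0], cmd.args[1], cmd.args[2], cmd.args[3]);
		break;
	}
}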

One can dream…


Revisiting C++

I was a C++ expert twice in the past. The first time around was because I was doing some work for Taligent, and their whole operating system was written in C++. With that system I got knee deep into the finer details of templates, and exceptions, to a degree that will likely never be seen on the planet earth.

The second time around was because I was programming on the BeOS. Not quite as crazy as the Taligent experience, but C/C++ were all the rage.

Then I drifted into Microsoft, and C# was born. For the past 15 years, C# has slowly risen to dominance in certain quarters of Microsoft. It just so happens that this corresponds to the rise of the virus attacks on Windows, as well as the shift in programming skills of college graduates. In the early days of spectacular virus attacks, you could attribute most of them to buffer overruns, which allowed code to run on the stack. That hole was fairly easily plugged by C# and security coding standards.

Today, I am working on a project where once again I am learning C++. This time around it’s C++11, which is decidedly more mature than the C++ I learned while working on Taligent. It’s not so dramatically different as, say, the difference between Lisp and Cobol, but it has gained a lot of stuff over the years.

I thought I would jot down some of the surface differences I have noticed since I’ve been away.

First, to compare C++ to Lua, there are some surface differences. Most of the languages I program in today have their roots in Algol, so they largely look the same. But there are some simple dialect differences. C++ is full of curly braces ‘{}’, semicolons ‘;’, and parentheses ‘()’. Oh my god with the parens and semis!! With Lua, parens are optional, semis are optional, and instead of curlies there are ‘do’ … ‘end’ blocks, or simply ‘end’. For loops are different, array indices are different (unless you’re doing interop with the FFI), and do/while is repeat/until.
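Here’s the same trivial loop in both dialects, with the Lua version riding along in comments (nothing drawproc-specific, just a throwaway sketch):

#include <cstdio>

int main()
{
	int values[5] = { 10, 20, 30, 40, 50 };

	// C++: zero-based indices, parens, semis, and curlies everywhere
	for (int i = 0; i < 5; i++) {
		printf("%d\n", values[i]);
	}

	// Lua: one-based indices, optional parens and semis, do/end instead of curlies
	//   for i = 1, 5 do
	//     print(values[i])
	//   end

	// C++ do/while versus Lua repeat/until
	int n = 0;
	do {
		n++;
	} while (n < 3);
	//   local n = 0
	//   repeat
	//     n = n + 1
	//   until n >= 3

	return 0;
}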

These are all minor differences, like say the differences between Portuguese and Spanish. You can still understand the other if you speak one. Perhaps not perfectly, but there is a relatively easy translation path.

Often times in language wars, these are the superficial differences that people talk about. Meh, not interesting enough to drive me one way or another.

But then, there’s this other stuff, which is truly the essence of the differences: strong typing vs. duck typing, managed memory, dynamic code execution. I say ‘Lua’ here, but really that could be a stand-in for C#, node.js, Python, or Ruby. Basically, there is a set of modern languages which exhibit a similar set of features, different enough from C/C++ that there is a difference in the programming models.

To illustrate, here’s a bit of C++ code that I have written recently. The setup is this: I receive a packet of data, typically the beginning of an HTTP conversation. From that packet of data, I must be able to ‘parse’ the thing, determine whether it is http/https, pull out headers, etc. I need to build a series of in-place parsers, which keep the amount of memory allocated to a minimum, and work fairly quickly. So, the first piece is this thing called an AShard_t:

#pragma once

#include "anl_conf.h"

class  DllExport AShard_t  {
public:
	uint8_t *	m_Data;
	size_t	m_Length;
	size_t	m_Offset;

	// Constructors
	AShard_t();
	AShard_t(const char *);
	AShard_t(uint8_t *data, size_t length, size_t offset);

	// Virtual Destructor
	virtual ~AShard_t() {};

	// type cast
	operator uint8_t *() {return getData();}

	// Operator Overloads
	AShard_t & operator= (const AShard_t & rhs);

	// Properties
	uint8_t *	getData() {return &m_Data[m_Offset];};
	size_t		getLength() {return m_Length;};

	// Member functions
	AShard_t &	clear();
	AShard_t &	first(AShard_t &front, AShard_t &rest, uint8_t delim) const;
	bool		indexOfChar(const uint8_t achar, size_t &idx) const;
	bool		indexOfShard(const AShard_t &target, size_t &idx);
	bool 		isEmpty() const;
	void		print() const;
	bool		rebase();
	char *		tostringz() const;
	AShard_t &	trimfrontspace();

};

OK, so it’s actually a fairly simple data structure. Assuming you have a buffer of data, a shard is just a pointer into that buffer. It contains the pointer, an offset, and a length. You might say that the pointer/offset combo is redundant; you probably don’t need both. The offset could be eliminated, assuming the pointer is always at the base of the structure. But there might be a design choice that makes this useful later.
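As a quick sketch of what that looks like in practice (the buffer contents here are just made up):

// The shard never copies; it's a window onto a buffer someone else owns.
uint8_t packet[] = "GET /index.html HTTP/1.1\r\nHost: www.sharmin.com\r\n";

// A shard covering just the request target: data, length 11, offset 4 -> "/index.html"
AShard_t target(packet, 11, 4);

// getData() returns &m_Data[m_Offset], so it points at the leading '/',
// and getLength() says how far the window extends from there.
printf("%.*s\n", (int)target.getLength(), target.getData());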

At any rate, there’s a lot going on here for such a simple class. First of all, there’s that ‘#pragma once’ at the top. Ah yes, good ol’ C preprocessor, which needs to be told not to load stuff it’s already loaded before. Then there’s class vs struct, not to be confused with ‘typedef struct’. Public/Protected/Private, copy constructor or ‘operator=’. And heaven forbid you forget to make a public default constructor. You will not be able to create an array of these things without it!
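To make that last point concrete:

// Without a public default constructor this array won't compile, because
// every element is default-constructed up front.
AShard_t shards[16];

// After that, the assignment operator takes over for filling in the slots.
shards[0] = AShard_t("http://www.sharmin.com/");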

These are not mere dialect differences; these are the differences between Spanish and Hungarian. You MUST know about the default constructor thing, or things just won’t work.

As far as implementation is concerned, I did a mix of things here, primarily because the class is so small. I’ve inserted some simple “string” processing right into the class, because I found it to be constantly useful. ‘first’, ‘indexOfChar’, and ‘indexOfShard’ turn out to be fairly handy when you’re trying to parse through something in place. ‘first’ is like in Lisp: get the first element off the list of elements. In this case you can specify a single-character delimiter. ‘indexOfChar’ is like the strchr() function from C, except in this case it’s aware of the length, and it doesn’t assume a null-terminated string. ‘indexOfShard’ is like ‘strstr’, or ‘strpbrk’. With these in hand, you can do a lot of ‘tokenizing’.
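Before the URL example, here’s a tiny sketch of the two lookup helpers in isolation (assuming the reported index is relative to the start of the shard):

AShard_t header("Content-Type: text/html");
size_t idx;

// indexOfChar: like strchr(), but bounded by the shard's length
if (header.indexOfChar(':', idx)) {
	printf("colon at %zu\n", idx);	// 12, assuming idx is relative to the shard's start
}

// indexOfShard: like strstr(), searching for another shard's bytes
AShard_t needle("text/");
if (header.indexOfShard(needle, idx)) {
	printf("subtype starts at %zu\n", idx);
}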

Here’s an example of parsing a URL:

bool parseUrl(const AShard_t &uriShard)
{
  AShard_t shard = uriShard;
  AShard_t rest;
	
  AShard_t scheme;
  AShard_t url;
  AShard_t authority;
  AShard_t hostname;
  AShard_t port;
  AShard_t resquery;
  AShard_t resource;
  AShard_t query;

  // http:
  shard.first(scheme, rest, ':');

  // the 'rest' represents the resource, which 
  // includes the authority + query
  // so try and separate authority from query if the 
  // query part exists
  shard = rest;
  // skip past the '//'
  shard.m_Offset += 2;
  shard.m_Length -= 2;

  // Now we have the url separated from the scheme
  url = shard;

  // separate the authority from the resource based on '/'
  url.first(authority, rest, '/');
  resquery = rest;

  // Break the authority into host and port
  authority.first(hostname, rest, ':');
  port = rest;

  // Back to the resource.  Split it into resource/query
  parseResourceQuery(resquery, resource, query);


  // Print the shards
  printf("URI: "); uriShard.print();
  printf("  Scheme: "); scheme.print();
  printf("  URL: "); url.print();
  printf("    Authority: "); authority.print();
  printf("      Hostname: "); hostname.print();
  printf("      Port: "); port.print();
  printf("    Resquery: "); resquery.print();
  printf("      Resource: "); resource.print();
  printf("      Query: "); query.print();
  printf("\n");

  return true;
}

AShard_t url0("http://www.sharmin.com:8080/resources/gifs/bunny.gif?user=willynilly&password=funnybunny");
parseUrl(url0);

Of course, I’m leaving out error checking, but even for this simple tokenization, it’s fairly robust because in most cases, if a ‘first’ fails, you’ll just get an empty ‘rest’, but definitely not a crash.
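A tiny sketch of that failure mode (assuming, as described above, that a missing delimiter simply leaves ‘rest’ empty):

// No ':' in the input, so 'rest' comes back empty rather than blowing up.
AShard_t noPort("www.sharmin.com");
AShard_t host, rest;
noPort.first(host, rest, ':');

printf("host: "); host.print();
printf("rest is %s\n", rest.isEmpty() ? "empty" : "non-empty");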

So, how does this fare against my beloved LuaJIT? Well, at this level things are about the same. In Lua, I could create exactly the same structure, using a table, and perform exactly the same operations. Only, if I wanted to do it without using the FFI, I’d have to stuff the data into a Lua string object (which causes a copy), then use Lua’s string.char, count from 1, etc. Totally doable, and probably fairly optimized. There is a bit of waste though, because in Lua everything interesting is represented by a table, so that’s a much bigger data structure than this simple AShard_t. It’s bigger in terms of memory footprint, and it’s probably slower in execution because it’s a generalized data structure that can serve many wonderful purposes.

For memory management, at this level of structure, things are relatively easy. Since the shard does not copy the data, it doesn’t actually do any allocations, so there’s relatively little to clean up. The most common use case for shards is that they’ll either be stack based, or they’ll be stuffed into a data structure. In either case, their lifetime is fairly short and well managed, so memory management isn’t a big issue. If they are dynamically allocated, then of course there’s something to be concerned with.

Well, that just touches the tip of the iceberg. I’ve re-attached to C++, and so far the gag reflex hasn’t kicked in, so I guess it’s OK to continue.

Next, I’ll explore how insanely great the world becomes when shards roam the earth.