graphicc – Scalable Fonts

The initial text support in graphicc was based on some fixed-size bitmap fonts.  That’s a good starting point, good enough for a lot of UI work, and perhaps even some simple graphs.  But it’s a less satisfying answer when you really want to get into the nitty-gritty of things.


Here’s the segoeui.ttf font being displayed in the current graphicc.  Why display it in such an unreadable way?  To emphasize that it looks pretty smooth if you don’t look too close…

When it comes to truetype font rendering, the most common go-to source is probably the freetype library.  It does a great job, has existed forever, and is used by everyone who doesn’t want to roll their own.  It does great rendering, using the hinting found in the truetype font files and all that.  All that greatness comes at a cost though: freetype is a bit large for a library as small as graphicc.  So, I went with a different approach.

Here I’m leveraging the stb_truetype.h file found in the public domain stb libraries.  It is good enough to render what you see here, using any of the truetype fonts found on your system.  It won’t do all the fancy things that freetype will, but it at least gives you access to the bulk of the font information, so you can do simple things like kerning.

What’s more, you get access to all the vertex information, so if you want to, you can actually render polygon outlines and the like, and not just take the pre-rendered bitmaps that it offers up.  In fact, one of the strongest parts of these routines is the polygon rendering code.  It makes sense to lift this code up and make it the general polygon rendering code for the entirety of graphicc.  At the moment, there are two or three polygon renderers in the codebase (including samples).  Having one routine which is compact, and capable of dealing with text, and anything else less complex than that, would be a big win for a tiny library such as this.  It will take some work to refactor in such a way that this can become a reality, but it’s probably well worth the effort.

Another benefit of that separation will be the fact that I’ll be able to apply a transform to the vertices for a glyph, and do the cheesy rotated text tricks.
Curious about the code? It’s roughly this:

struct ttfset fontset;
struct stbtt_fontinfo finfo;

void setup()
{
	size(1024, 768);

	//char *filename = "c:/windows/fonts/arial.ttf";
	char *filename = "c:/windows/fonts/segoeui.ttf";

	if (!ttfset_open(fontset, filename)) {
		return;
	}

	finfo = ttfset_font_from_index(fontset, 0);
}

void drawText()
{
	int ascent, descent, lineGap, baseline;
	float scale, xpos = 2; // leave a little padding in case the character extends left
	float ypos = 2;

	char *text = "Hella Werld!";
	unsigned char *bitmap;

	stbtt_GetFontVMetrics(&finfo, &ascent, &descent, &lineGap);

	for (int pixfactor = 3; pixfactor < 9; pixfactor++) {

		int pixsize = (int)pow(2.0, pixfactor);
		scale = stbtt_ScaleForPixelHeight(&finfo, pixsize);
		baseline = (int)(ascent*scale);

		int idx = 0;
		while (text[idx]) {
			int advance, lsb, x0, y0, x1, y1;
			float x_shift = xpos - (float)floor(xpos);
			stbtt_GetCodepointHMetrics(&finfo, text[idx], &advance, &lsb);
			stbtt_GetCodepointBitmapBoxSubpixel(&finfo, text[idx], scale, scale, x_shift, 0, &x0, &y0, &x1, &y1);

			int w, h;
			bitmap = stbtt_GetCodepointBitmap(&finfo, 0, scale, text[idx], &w, &h, 0, 0);

			raster_rgba_blend_alphamap(gpb, xpos, ypos + baseline + y0, bitmap, w, h, pYellow);

			//printf("%d %d %d", baseline, y0, y1);
			xpos += (advance * scale);
			if (text[idx + 1])
				xpos += scale*stbtt_GetCodepointKernAdvance(&finfo, text[idx], text[idx + 1]);

			stbtt_FreeBitmap(bitmap, NULL);
			idx++;
		}

		xpos = 2;
		ypos += pixsize - (scale*descent);
	}
}

It’s a bit wasteful in that a new glyph bitmap is created for every single character, even if they repeat, so there’s no caching of those bitmaps going on. If this were a real app, I’d be caching those bitmaps whenever I created them for a particular size.
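If I were adding that cache, it could be as small as a direct-mapped table keyed by codepoint and pixel size. Here is a rough sketch of the idea; the `glyph_entry`, `cache_find`, and `cache_store` names are hypothetical, not part of graphicc:

```c
#include <stdlib.h>
#include <string.h>

// Hypothetical cache entry: one rendered glyph at one pixel size.
typedef struct {
    int codepoint;          // which character
    int pixsize;            // pixel height it was rendered at
    int w, h;               // bitmap dimensions
    unsigned char *bitmap;  // 8-bit coverage values
} glyph_entry;

#define CACHE_SLOTS 256

static glyph_entry cache[CACHE_SLOTS];

// Very simple direct-mapped lookup: hash codepoint+size to a slot.
static glyph_entry *cache_find(int codepoint, int pixsize)
{
    glyph_entry *e = &cache[(codepoint * 31 + pixsize) % CACHE_SLOTS];
    if (e->bitmap && e->codepoint == codepoint && e->pixsize == pixsize)
        return e;
    return NULL;
}

// Store a bitmap, evicting whatever occupied the slot before.
static glyph_entry *cache_store(int codepoint, int pixsize,
                                unsigned char *bitmap, int w, int h)
{
    glyph_entry *e = &cache[(codepoint * 31 + pixsize) % CACHE_SLOTS];
    free(e->bitmap);
    e->codepoint = codepoint;
    e->pixsize = pixsize;
    e->w = w; e->h = h;
    e->bitmap = bitmap;     // the cache takes ownership
    return e;
}
```

On a cache miss you would call stbtt_GetCodepointBitmap() once and store the result; on a hit you skip the rasterization entirely.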

The primary addition to the graphicc core is the raster_rgba_blend_alphamap() function. It takes the 256-value alpha image generated for a glyph and copies it to the destination using the specified color, with the bitmap supplying the per-pixel alpha of that color. This makes for some nice anti-aliasing, which helps make the thing look smooth.
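Per pixel, the operation is just a linear interpolation between the destination and the glyph color, weighted by the coverage value. A simplified, self-contained sketch of that blend; the function name and the packed 32-bpp RGBA layout here are illustrative, not graphicc’s actual internals:

```c
#include <stdint.h>

// Blend one channel of 'color' over a destination channel.
// cover = 0 leaves dst untouched; cover = 255 replaces it with src.
static uint8_t lerp_channel(uint8_t dst, uint8_t src, uint8_t cover)
{
    return (uint8_t)(dst + ((src - dst) * cover) / 255);
}

// Copy a w*h alpha map into a 32-bpp RGBA framebuffer at (x, y),
// using the map as per-pixel alpha for the given color.
static void blend_alphamap(uint8_t *fb, int fb_width,
                           int x, int y,
                           const uint8_t *alphamap, int w, int h,
                           uint8_t r, uint8_t g, uint8_t b)
{
    for (int row = 0; row < h; row++) {
        uint8_t *dst = fb + ((y + row) * fb_width + x) * 4;
        const uint8_t *src = alphamap + row * w;
        for (int col = 0; col < w; col++) {
            uint8_t cover = src[col];
            dst[col*4 + 0] = lerp_channel(dst[col*4 + 0], r, cover);
            dst[col*4 + 1] = lerp_channel(dst[col*4 + 1], g, cover);
            dst[col*4 + 2] = lerp_channel(dst[col*4 + 2], b, cover);
            // destination alpha left as-is
        }
    }
}
```

The fractional coverage at the glyph edges is what produces the anti-aliased look.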

Well, there you have it. The font support is improving, without dragging in the kitchen sink. It may well be possible for graphicc to stick to a very slender memory footprint while still offering serious enough features.

Render Text like a Boss

And it looks like this:


One of the most rewarding parts of bringing up a graphics system is when you’re able to draw some text.  I have finally reached that point with the graphicc library.  There are many twists and turns along the way, some of which are interesting, but I’ll just share a bit of the journey here.

First of all, the code that generates this image looks like this:

#include "test_common.h"
#include "animwin32.h"

struct font_entry {
	char *name;
	const uint8_t *data;
};

struct font_entry fontlist[] = {
	{"gse4x6", gse4x6 },
	{ "gse4x8", gse4x8 },
	{ "gse5x7", gse5x7 },
	{ "gse5x9", gse5x9 },
	{ "gse6x12", gse6x12 },
	{ "gse6x9", gse6x9 },
	{ "gse7x11", gse7x11 },
	{ "gse7x11_bold", gse7x11_bold },
	{ "gse7x15", gse7x15 },
	{ "gse7x15_bold", gse7x15_bold },
	{ "gse8x16", gse8x16 },
	{ "gse8x16_bold", gse8x16_bold },
	{ "mcs11_prop", mcs11_prop },
	{ "mcs11_prop_condensed", mcs11_prop_condensed },
	{ "mcs12_prop", mcs12_prop },
	{ "mcs13_prop", mcs13_prop },
	{ "mcs5x10_mono", mcs5x10_mono },
	{ "mcs5x11_mono", mcs5x11_mono },
	{ "mcs6x10_mono", mcs6x10_mono },
	{ "mcs6x11_mono", mcs6x11_mono },
	{ "mcs7x12_mono_high", mcs7x12_mono_high },
	{ "mcs7x12_mono_low", mcs7x12_mono_low },
	{ "verdana12", verdana12 },
	{ "verdana12_bold", verdana12_bold },
	{ "verdana13", verdana13 },
	{ "verdana13_bold", verdana13_bold },
	{ "verdana14", verdana14 },
	{ "verdana14_bold", verdana14_bold },
	{ "verdana16", verdana16 },
	{ "verdana16_bold", verdana16_bold },
	{ "verdana17", verdana17 },
	{ "verdana17_bold", verdana17_bold },
	{ "verdana18", verdana18 },
	{ "verdana18_bold", verdana18_bold }
};

static const int numfonts = sizeof(fontlist) / sizeof(fontlist[0]);
static int fontidx = 0;

LRESULT CALLBACK keyReleased(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
	switch (wParam)
	{
		case VK_UP:
			fontidx--;
			if (fontidx < 0) {
				fontidx = numfonts - 1;
			}
			break;

		case VK_DOWN:
			fontidx++;
			if (fontidx >= numfonts) {
				fontidx = 0;
			}
			break;

		case VK_SPACE:
			write_PPM("test_agg_raster_fonts.ppm", gpb);
			break;
	}

	return 0;
}

void setup()
{
	size(640, 480);
}


static char CAPS[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
static char LOWS[] = "abcdefghijklmnopqrstuvwxyz";
static char NUMS[] = "1234567890";
static char SYMS[] = "!@#$%^&*()_+-={}|[]\\:\"; '<>?,./~`";
static char SOME[] = "The quick brown fox jumped over the lazy dog.";

void draw()
{
	setFont(fontlist[fontidx].data);

	text(fontlist[fontidx].name, 0, 0);
	text(CAPS, 0, gfont.height * 1);
	text(LOWS, 0, gfont.height * 2);
	text(NUMS, 0, gfont.height * 3);
	text(SYMS, 0, gfont.height * 4);
	text(SOME, 0, gfont.height * 5);
}

Many of the test programs in graphicc share this layout. If you’ve ever done any programming using Processing, you might be familiar with the ‘setup()’ and ‘draw()’ methods. The animwin32 file is responsible for this little construction. Basically, it provides a “Processing-like” environment, which makes it really easy to play with various graphics routines. It also does mouse/keyboard handling, so you can set up handlers for key presses and mouse actions. In this case, I use the key up/down actions to change the font from a list of font data. Pressing ‘SPACE’ will take a dump of the screen.

That font data was borrowed from the Anti-Grain Geometry library, because it’s a nice, compact bitmap representation of a few fonts.  Just a brief note on the AGG library: it is a fairly well written graphics library created by Maxim Shemanarev, who died far too young.  The library is an inspiration and a source of knowledge for anyone doing 2D graphics.

The text() function renders the string using the currently selected font and fill color, at the specified location. The location specifies the upper left of the font box. There are various text rendering attributes which could be employed, such as alignment, a draw box, whether to use the upper left corner or the baseline, etc. All in due time.

The guts of the text() routine look like this:

// Text Processing
void text(const char *str, const int x, const int y)
{
	scan_str(gpb, &gfont, x, y, str, fillColor);
}

void setFont(const uint8_t *fontdata)
{
	font_t_init(&gfont, fontdata);
}

Very simple, yes?

And deeper into the rabbit hole…

int scan_str(pb_rgba *pb, font_t *font, const int x, const int y, const char *chars, const int color)
{
	glyph_t ginfo;

	int idx = 0;
	int dx = x;
	int dy = y;

	while (chars[idx])
	{
		glyph_t_init(font, &ginfo, chars[idx]);
		scan_glyph(pb, font, &ginfo, dx, dy, color);
		dx += ginfo.width;
		idx++;
	}

	return dx;
}

Basically, get the glyph information for each character, and scan that glyph into the pixel buffer. The scan_glyph() in turn takes the packed bitmap information, unpacks it into individual ‘cover spans’, then turns those into pixels placed into the pixel buffer.
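For a 1-bit-per-pixel packed font, unpacking a row into cover spans amounts to walking the bits and collecting runs of set pixels. A hedged sketch of that inner step; the `span_t` representation and MSB-first row layout are assumptions, not graphicc’s exact form:

```c
#include <stdint.h>

// One horizontal run of 'on' pixels: starting x, and length.
typedef struct { int x, len; } span_t;

// Walk one packed row (MSB-first bits) and collect runs of set pixels.
// Returns the number of spans written into 'out' (capacity 'maxspans').
static int scan_packed_row(const uint8_t *row, int width,
                           span_t *out, int maxspans)
{
    int x = 0, n = 0;
    while (x < width && n < maxspans) {
        // skip clear bits
        while (x < width && !(row[x >> 3] & (0x80 >> (x & 7))))
            x++;
        int start = x;
        // gather the run of set bits
        while (x < width && (row[x >> 3] & (0x80 >> (x & 7))))
            x++;
        if (x > start) {
            out[n].x = start;
            out[n].len = x - start;
            n++;
        }
    }
    return n;
}
```

Each emitted span then becomes a horizontal run of solid pixels in the pixel buffer.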

Lots can be done to optimize this activity. For example, all the glyphs could be rendered into a bitmap once, and then simply blitted to the pixel buffer when needed. That is essentially a tradeoff between space and time. It’s a choice easily made at a higher level than the lowest graphics routines, so the low-level routines don’t make that assumption; they deal only with the packed form of the data. I find this kind of design choice most interesting in terms of the profile the overall system has, and the constraints it adheres to in terms of what type of system it will run on.

So, there you have it.  At this point, the graphicc library can do primitives from single pixels to bezier curves, and text.  With the help of the animwin32 routines, you can do simple UI programming as if you were using the 2D portion of Processing.  I find this to be a good plateau in the development of the library.  At this point, it makes sense to reconsider the various design choices made, clean up the API, throw in some ‘const’ here and there, consider error checking, and the like.  The next step after that is to consider higher level libraries, which might be purpose built.  I’ll want to incorporate 3D at some point, and there are hints of that already in the routines, but I don’t want to get bogged down in that just yet.  Good image scaling and rotation is probably a higher priority than lighting models and 3D rendering.

I would love to be able to create graphics system profiles: can I mimic GDI, or the HTML Canvas routines?  I can already do Processing to a certain extent.  My first test of the library, though, is something much simpler.  I want to be able to render the state of a network of machines performing some tasks.  The first pass will likely be simple 2D graphs of some sort, so time to get on it.

More Drone Nonsense

Just having a tool/toy in hand makes your thoughts wander over the possibilities.  After flying the iris+, taking some video in my local neighborhood, and watching the videos other people are taking, I believe this whole flying sensor platform thing is going to be quite transformative.  Here are some more thoughts on what is likely to come:

Surveillance – of course, but since these things are so cheap and easy, it’s not just going to be the state watching the citizenry.  In fact, the state may lag behind due to various restrictions, such as budget, laws, and the like.  Just imagine that everything will be filmed and listened to by someone.

Everything else is surveillance in one form or another.

Animal tracking – As a child I watched Mutual of Omaha’s Wild Kingdom.  Marlin Perkins set things up, and Jim Fowler jumped out of the truck to tag the wild rhino.  Well, with a drone, you could just sneak up on the animals from a distance, blow a tracking dart into them, and be done with it.  Not so sexy for the insurance salesmen, but easier for the animal trackers.

Agriculture – another no-brainer.  You want to survey the fields.  Fly a pattern, send pictures back to something that can make sense of them.  Dispatch robot machinery to fix problems.  I’m no aggie, but it must be pretty difficult to inspect what’s going on in a field that covers hundreds or thousands of acres.  With a drone, or a few, you could run cameras, take simple soil samples, and the like, send the data to some processor, and do your analysis.

Mapping – Another no-brainer.  Google has spent years trying to perfect the whole mapping thing.  Multi-camera rigs worn by intrepid mappers, cars with roof-mounted thingies, and the like.  But technology stands still for no technological dream.  Take your van equipped with 20 drones, park it in the center of town, and within an hour you could no doubt do the job that a self-driving car would have done in a few days.  The drones can not only do the street view, they can do the view up to 100ft as well, pretty easily.  They can track the wifi hot spots, and all the other fun things Google has gotten into trouble for.  It doesn’t take a large company, or a monstrous outlay of cash, to implement this one either.  I could easily imagine local companies getting in on the act, promoting the intricacy of their maps, or their timeliness.

Sports – When I was in college, I met a guy who, with his father, invented that camera system that zips around sports stadiums.  It has provided quite a nice view of the playing field from above.  With drones, you could assign a drone per player, and have them zipping around constantly, filming from interesting angles.  I, as a viewer, could set up my TV viewing to watch the 2 or 3 camera views that I prefer.  Always watch the quarterback, wide receiver, safety, the ball, or the hot dog vendor…  Of course there would have to be automated air traffic control so the drones don’t knock into each other, but that’s a sure bit of software that must be developed anyway: ad hoc, localized air traffic management for drones.

Some things are autonomous, some things require interaction.  The possibilities are fairly limitless at this point.

My Head In The Cloud – putting my code where my keyboard is

I have written a lot about the greatness of LuaJIT, coding for the internet, async programming, and the wonders of Windows. Now, I have finally reached a point where it’s time to put the code to the test.

I am running a service in Azure:

This site is totally fueled by the work I have done with TINN. It is a static web page server with a couple of twists.

First of all, you can access the site through a pretty name:

If you just hit the site directly, you will get the static front page which has nothing more than an “about” link on it.

If you want to load up a threed model viewing thing, hit this:

If you want to see what your browser is actually sending to the server, then hit this:

I find the echo thing to be interesting, and I try hitting the site using different browsers to see what they produce.  This kind of feedback makes it relatively easy to do rapid turnarounds on the webpage content, challenging my assumptions and filling in the blanks.

The code for this web server is not very complex.  It’s the same standard ‘main’ that I’ve used in the past:

local resourceMap = require("ResourceMap");
local ResourceMapper = require("ResourceMapper");
local HttpServer = require("HttpServer")

local port = arg[1] or 8080

local Mapper = ResourceMapper(resourceMap);

local obj = {}

local OnRequest = function(param, request, response)
	local handler, err = Mapper:getHandler(request)

	-- recycle the socket, unless the handler explicitly says
	-- it will do it, by returning 'true'
	if handler then
		if not handler(request, response) then
			-- handler did not take ownership; recycle the socket
		end
	else
		print("NO HANDLER: ", request.Url.path);
		-- send back content not found

		-- recycle the request in case the socket
		-- is still open
	end
end

obj.Server = HttpServer(port, OnRequest, obj);

In this case, I’m dealing with the OnRequest() directly, rather than using the WebApp object.  I’m doing this because I want to do some more interactions at this level that the standard WebApp may not support.

Of course the ‘handlers’ are where all the fun is. I guess it makes sense to host the content of the site up on the site for all to see and poke fun at.

My little experiment here is to give my code real world exposure, with the intention of hardening it, and gaining practical experience on what a typical web server is likely to see out in the wild.

So, if you read this blog, go hit those links. Soon enough, perhaps I will be able to serve up my own blog using my own software. That’s got a certain circular reference to it.

ReadFile – The Good, the Bad, and the Async

If you use various frameworks on any platform, you’re probably an arm’s length away from the nasty little quirks of the underlying operating system.  If you are the creator of such frameworks, the nasty quirks are what you live with on a daily basis.

In TINN, I want to be async from soup to nuts.  All the tcp/udp socket stuff is already that way.  Recently I’ve been adding async support for “file handles”, and let me tell you, you have to be very careful around these things.

In the core windows APIs, in order to read from a file, you do two things.  You first open a file using the CreateFile() function.  This may be a bit confusing, because why would you use “create” to ‘open’ an existing file?  Well, you have to think of it like a kernel developer might.  From that perspective, what you’re doing is creating a file handle.  While you’re doing this, you can tell the function whether to actually create the file if it doesn’t already exist, open it only if it exists, open it read-only, etc.

The basic function signature for CreateFile() looks like this:

HANDLE WINAPI CreateFile(
  _In_      LPCTSTR lpFileName,
  _In_      DWORD dwDesiredAccess,
  _In_      DWORD dwShareMode,
  _In_opt_  LPSECURITY_ATTRIBUTES lpSecurityAttributes,
  _In_      DWORD dwCreationDisposition,
  _In_      DWORD dwFlagsAndAttributes,
  _In_opt_  HANDLE hTemplateFile
);

Well, that’s a mouthful, just to get a file handle. But hey, it’s not much more than you’d do in Linux, except it has some extra flags and attributes that you might want to take care of. Here’s where the history of Windows gets in the way. There is a much simpler function, “OpenFile()”, which on the surface might do what you want, but beware: it’s a lot less capable, a leftover from the MSDOS days. The documentation is pretty clear about this point – “don’t use this, use CreateFile instead…” – but still, you’d have to wade through some documentation to reach this conclusion.

Then, the ReadFile() function has this signature:

BOOL WINAPI ReadFile(
  _In_         HANDLE hFile,
  _Out_        LPVOID lpBuffer,
  _In_         DWORD nNumberOfBytesToRead,
  _Out_opt_    LPDWORD lpNumberOfBytesRead,
  _Inout_opt_  LPOVERLAPPED lpOverlapped
);

Don’t be confused by another function, ReadFileEx(). That one sounds even more modern, but in fact, it does not support the async file reading that I want.

Seems simple enough. Take the handle you got from CreateFile(), pass it to this function along with a buffer, and you’re done? Well yah, this is where things get really interesting.

Windows supports two forms of IO processing: async and synchronous. The synchronous case is easy. You just make your call, and your thread will be blocked until the IO “completes”. That is certainly easy to understand, and if you’re a user of the standard C library, or most other frameworks, this is exactly the behavior you can expect. Lua, by default, using the standard io library, will do exactly this.

The other case is when you want to do async io. That is, you want to initiate the ReadFile() and get an immediate return, and handle the processing of the result later, perhaps with an alert on an io completion port.

Here’s the nasty bit. This same function can be used in both cases, but it has very different behavior in each. It’s a subtle thing. If you’re doing synchronous IO, the kernel will track the file position, automatically updating it for you. So you can make consecutive ReadFile() calls, and read the file contents from beginning to end.

But… when you do things async, the kernel will not track your file pointer. Instead, you must do this on your own! When you do async, you pass in an instance of an OVERLAPPED structure, which contains things like a pointer to the buffer to be filled, as well as the size of the buffer. This structure also contains things like the offset within the file to read from. By default, the offset is ‘0’, which will have you reading from the beginning of the file every single time.

typedef struct _OVERLAPPED {
    ULONG_PTR Internal;
    ULONG_PTR InternalHigh;
    union {
        struct {
            DWORD Offset;
            DWORD OffsetHigh;
        } DUMMYSTRUCTNAME;
        PVOID Pointer;
    } DUMMYUNIONNAME;
    HANDLE hEvent;
} OVERLAPPED, *LPOVERLAPPED;
You have to be very careful and diligent with this structure, and with the proper calling sequences. In addition, if you’re going to do async, you need to call CreateFile() with the appropriate OVERLAPPED flag. In TINN, I have created the NativeFile object, which pretty much deals with all this subtlety. The NativeFile object presents a basic block device interface to the user, and wraps up all that subtlety such that the interface to files is clean and simple.
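The heart of that bookkeeping is plain 64-bit arithmetic: advance your own position after each completed operation, and split it into the structure’s two 32-bit halves before issuing the next one. A portable illustration of the split, using a stand-in struct rather than the real OVERLAPPED:

```c
#include <stdint.h>

// Stand-in for the two offset fields of OVERLAPPED.
typedef struct {
    uint32_t Offset;      // low 32 bits of the file position
    uint32_t OffsetHigh;  // high 32 bits of the file position
} overlapped_offset;

// Store a 64-bit file position into the split fields.
static void set_offset(overlapped_offset *ov, uint64_t pos)
{
    ov->Offset     = (uint32_t)(pos & 0xFFFFFFFFu);
    ov->OffsetHigh = (uint32_t)(pos >> 32);
}

// Recombine, e.g. to advance after a completed read.
static uint64_t get_offset(const overlapped_offset *ov)
{
    return ((uint64_t)ov->OffsetHigh << 32) | ov->Offset;
}
```

Forget the `set_offset` step between reads, and every async read comes back with the first bytes of the file.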

-- NativeFile.lua

local ffi = require("ffi")
local bit = require("bit")
local bor = bit.bor;

local core_file = require("core_file_l1_2_0");
local errorhandling = require("core_errorhandling_l1_1_1");
local FsHandles = require("FsHandles")
local WinBase = require("WinBase")
local IOOps = require("IOOps")

ffi.cdef[[
typedef struct {
  IOOverlapped OVL;

  // Our specifics
  HANDLE file;
} FileOverlapped;
]]

-- A win32 file interface
-- put the standard async stream interface onto a file
local NativeFile = {}
setmetatable(NativeFile, {
  __call = function(self, ...)
    return self:create(...);
  end,
});

local NativeFile_mt = {
  __index = NativeFile;
}
NativeFile.init = function(self, rawHandle)
	local obj = {
		Handle = FsHandles.FsHandle(rawHandle);
		Offset = 0;
	};
	setmetatable(obj, NativeFile_mt)

	if IOProcessor then
		IOProcessor:observeIOEvent(obj:getNativeHandle(), obj:getNativeHandle());
	end

	return obj;
end

NativeFile.create = function(self, lpFileName, dwDesiredAccess, dwCreationDisposition, dwShareMode)
	if not lpFileName then
		return nil;
	end

	dwDesiredAccess = dwDesiredAccess or bor(ffi.C.GENERIC_READ, ffi.C.GENERIC_WRITE)
	dwCreationDisposition = dwCreationDisposition or OPEN_ALWAYS;
	dwShareMode = dwShareMode or bor(FILE_SHARE_READ, FILE_SHARE_WRITE);
	local lpSecurityAttributes = nil;
	local dwFlagsAndAttributes = bor(ffi.C.FILE_ATTRIBUTE_NORMAL, FILE_FLAG_OVERLAPPED);
	local hTemplateFile = nil;

	local rawHandle = core_file.CreateFileA(
		lpFileName,
		dwDesiredAccess,
		dwShareMode,
		lpSecurityAttributes,
		dwCreationDisposition,
		dwFlagsAndAttributes,
		hTemplateFile);

	if rawHandle == INVALID_HANDLE_VALUE then
		return nil, errorhandling.GetLastError();
	end

	return self:init(rawHandle)
end

NativeFile.getNativeHandle = function(self)
  return self.Handle.Handle
end

-- Cancel current IO operation
NativeFile.cancel = function(self)
  local res = core_file.CancelIo(self:getNativeHandle());
end

-- Close the file handle
NativeFile.close = function(self)
  self.Handle = nil;
end

NativeFile.createOverlapped = function(self, buff, bufflen, operation, deviceoffset)
	if not IOProcessor then
		return nil
	end

	deviceoffset = deviceoffset or 0;

	local obj ="FileOverlapped");

	obj.file = self:getNativeHandle();
	obj.OVL.operation = operation;
	obj.OVL.opcounter = IOProcessor:getNextOperationId();
	obj.OVL.Buffer = buff;
	obj.OVL.BufferLength = bufflen;
	obj.OVL.OVL.Offset = deviceoffset;

	return obj, obj.OVL.opcounter;
end

-- Write bytes to the file
NativeFile.writeBytes = function(self, buff, nNumberOfBytesToWrite, offset, deviceoffset)
	offset = offset or 0
	deviceoffset = deviceoffset or 0

	if not self.Handle then
		return nil;
	end

	local lpBuffer = ffi.cast("const char *", buff) + offset
	local lpNumberOfBytesWritten = nil;
	local lpOverlapped = self:createOverlapped(ffi.cast("uint8_t *", buff) + offset,
		nNumberOfBytesToWrite, IOOps.WRITE, deviceoffset);

	if lpOverlapped == nil then
		lpNumberOfBytesWritten ="DWORD[1]")
	end

	local res = core_file.WriteFile(self:getNativeHandle(), lpBuffer, nNumberOfBytesToWrite,
		lpNumberOfBytesWritten,
		ffi.cast("OVERLAPPED *", lpOverlapped));

	if res == 0 then
		local err = errorhandling.GetLastError();
		if err ~= ERROR_IO_PENDING then
			return false, err
		end
	elseif lpNumberOfBytesWritten then
		return lpNumberOfBytesWritten[0];
	end

	if IOProcessor then
		local key, bytes, ovl = IOProcessor:yieldForIo(self, IOOps.WRITE, lpOverlapped.OVL.opcounter);
		--print("key, bytes, ovl: ", key, bytes, ovl)
		return bytes
	end
end

NativeFile.readBytes = function(self, buff, nNumberOfBytesToRead, offset, deviceoffset)
	offset = offset or 0
	deviceoffset = deviceoffset or 0

	local lpBuffer = ffi.cast("char *", buff) + offset
	local lpNumberOfBytesRead = nil
	local lpOverlapped = self:createOverlapped(ffi.cast("uint8_t *", buff) + offset,
		nNumberOfBytesToRead, IOOps.READ, deviceoffset);

	if lpOverlapped == nil then
		lpNumberOfBytesRead ="DWORD[1]")
	end

	local res = core_file.ReadFile(self:getNativeHandle(), lpBuffer, nNumberOfBytesToRead,
		lpNumberOfBytesRead,
		ffi.cast("OVERLAPPED *", lpOverlapped));

	if res == 0 then
		local err = errorhandling.GetLastError();

		--print("NativeFile, readBytes: ", res, err)

		if err ~= ERROR_IO_PENDING then
			return false, err
		end
	elseif lpNumberOfBytesRead then
		return lpNumberOfBytesRead[0];
	end

	if IOProcessor then
		local key, bytes, ovl = IOProcessor:yieldForIo(self, IOOps.READ, lpOverlapped.OVL.opcounter);

		local ovlp = ffi.cast("OVERLAPPED *", ovl)
		print("overlap offset: ", ovlp.Offset)

		--print("key, bytes, ovl: ", key, bytes, ovl)
		return bytes
	end
end

return NativeFile;

This is enough of a start. If you want to simply open a file:

local NativeFile = require("NativeFile")
local fd = NativeFile("sample.txt");

From there you can use readBytes(), and writeBytes(). If you want to do streaming, you can feed this into the new and improved Stream class like this:

local NativeFile = require("NativeFile") 
local Stream = require("stream") 
local IOProcessor = require("IOProcessor")

local function main()

  local filedev, err = NativeFile("./sample.txt", nil, OPEN_EXISTING, FILE_SHARE_READ)

  -- wrap the file block device with a stream
  local filestrm = Stream(filedev)

  local line1, err = filestrm:readLine();
  local line2, err = filestrm:readLine();
  local line3, err = filestrm:readLine()

  print("line1: ", line1, err)
  print("line2: ", line2, err)
  print("line3: ", line3, err)
end


The Stream class looks for readBytes() and writeBytes(), and can provide the higher level readLine(), writeLine(), read/writeString(), and a few others. This is great because it can be fed by anything that purports to be a block device, which could be anything from an async file, to a chunk of memory.
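That “anything that purports to be a block device” idea is easy to demonstrate: readLine() can be built from nothing but readBytes(). A C sketch of the pattern, with a chunk of memory posing as the block device; the `block_read_fn` interface here is a stand-in, not the actual Stream contract:

```c
#include <stddef.h>

// Stand-in for a block device's readBytes(): returns bytes read, 0 at end.
typedef int (*block_read_fn)(void *dev, char *buf, int n);

// A chunk of memory posing as a block device.
typedef struct { const char *data; int pos, size; } memdev;

static int mem_read(void *dev, char *buf, int n)
{
    memdev *m = (memdev *)dev;
    int k = 0;
    while (k < n && m->pos < m->size)
        buf[k++] = m->data[m->pos++];
    return k;
}

// readLine() built purely on readBytes(): pull one byte at a time,
// stop at '\n'.  Returns line length, or -1 at end of stream.
static int read_line(void *dev, block_read_fn readBytes,
                     char *line, int maxlen)
{
    int len = 0, got = 0;
    char c = 0;
    while (len < maxlen - 1 && (got = readBytes(dev, &c, 1)) == 1) {
        if (c == '\n')
            break;
        if (c != '\r')              // tolerate CRLF line endings
            line[len++] = c;
    }
    line[len] = '\0';
    if (len == 0 && got != 1)       // nothing read, device exhausted
        return -1;
    return len;
}
```

Swap the memory device for an async file and the line-reading logic does not change at all, which is the whole point of the layering.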

And that’s about it for now. There are subtleties when dealing with async file access in windows. Having a nice abstraction on top of it gives you all the benefits of async without all the headaches.


Bit Twiddling Again? – How I finally came to my senses

Right after I published my last little missive, I saw an announcement that VNC is available on the Chrome browser. Go figure…

It’s been almost a year since I wrote about stuff related to serialization: Serialization Series

Oh, what a difference a year makes!  I was recently implementing code to support WebSocket client and server, so I had reason to revisit this topic.  For WebSocket, the protocol specifies things at the bit level, and in big-endian order.  This poses some challenges for the little-endian machines that I use.  That’s some extreme bit twiddling, and although I did revise my low level BitBang code, that’s not what I’m writing about today.

I have another bit of code that deals with bytes as the smallest element.  This is the BinaryStream object.  BinaryStream allows me to simply read numeric values out of a stream.  It takes care of handling the big/little-endian nature of things.  BinaryStream wraps any other stream, so you can do things like this:

local mstream =;
local bstream =, true);




This is quite handy for doing all the things related to packing/unpacking bytes of memory. Of course there are plenty of libraries that do this sort of thing, but this is the one that I use.

The revelation for me this time around had to do with the nature of my implementation. In my first incarnation of these routines, I was doing byte swapping manually, like this:

function BinaryStream:ReadInt16()
  -- Read two bytes
  -- return nil if two bytes not read
  if (self.Stream:ReadBytes(types_buffer.bytes, 2, 0) < 2) then
    return nil
  end

  -- if we don't need to do any swapping, then
  -- we can just return the Int16 right away
  if not self.NeedSwap then
    return types_buffer.Int16;
  end

  local tmp = types_buffer.bytes[0]
  types_buffer.bytes[0] = types_buffer.bytes[1]
  types_buffer.bytes[1] = tmp

  return types_buffer.Int16;
end

Well, this works, but… it’s the kind of code I would teach to someone new to programming; not necessarily the best, but it shows all the detail.

Given Lua’s nature, I could have done the byte swapping like this:

types_buffer.bytes[0], types_buffer.bytes[1] = types_buffer.bytes[1], types_buffer.bytes[0]

Yep, yes sir, that would work. But it’s still a bit clunky.

I have recently also been implementing some TLS related stuff, and in TLS there are 24-bit (3 byte) integers. In order to read them, I really want a generic integer reader:

function BinaryStream:ReadIntN(n)
  local value = 0;

  if self.BigEndian then
    for i=1,n do
      value = lshift(value,8) + self:ReadByte()
    end
  else
    for i=1,n do
      value = value + lshift(self:ReadByte(),8*(i-1))
    end
  end

  return value;
end

Well, this works for 1-, 2-, 3-, or 4-byte integers. It can’t work beyond that, because the bit operations only work up to 32 bits. But OK, that makes things a lot easier, reduces the amount of code I have to write, and puts all the endian handling in one place.
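For comparison, the same generic reader in C loses the 32-bit restriction if the accumulator is a 64-bit integer. A sketch, assuming the bytes are already sitting in a buffer:

```c
#include <stdint.h>

// Read an n-byte unsigned integer (n <= 8) from 'bytes',
// honoring the requested endianness.
static uint64_t read_uint_n(const uint8_t *bytes, int n, int bigendian)
{
    uint64_t value = 0;
    if (bigendian) {
        // most significant byte first: shift up, then add
        for (int i = 0; i < n; i++)
            value = (value << 8) | bytes[i];
    } else {
        // least significant byte first: place each byte at its position
        for (int i = 0; i < n; i++)
            value |= (uint64_t)bytes[i] << (8 * i);
    }
    return value;
}
```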

Then there’s 64-bit, float, and double.

In these cases, the easiest thing is to use a union structure:

typedef union {
  int64_t	Int64;
  uint64_t	UInt64;
  float 	Single;
  double 	Double;
  uint8_t	bytes[8];
} bstream_types_t;

function BinaryStream:ReadBytesN(buff, n, reverse)
  if reverse then
    for i=n,1,-1 do
      buff[i-1] = self:ReadByte()
    end
  else
    for i=1,n do
      buff[i-1] = self:ReadByte()
    end
  end
end

function BinaryStream:ReadInt64()
  self:ReadBytesN(self.valunion.bytes, 8, self.NeedSwap)
  return tonumber(self.valunion.Int64);
end

function BinaryStream:ReadSingle()
  self:ReadBytesN(self.valunion.bytes, 4, self.NeedSwap)
  return tonumber(self.valunion.Single);
end

function BinaryStream:ReadDouble()
  self:ReadBytesN(self.valunion.bytes, 8, self.NeedSwap)
  return tonumber(self.valunion.Double);
end

Of course, with Lua, the 64-bit int is limited to the 53 bits of integer precision a double can hold, but the technique works in general. For Single and Double, you just need to get the bytes in the right order and everything is fine. Whether this is compatible with another application’s byte ordering depends on that application, but at least this is self-consistent.
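The union trick looks much the same in C. A small sketch of assembling a double from raw bytes, reversing them when the source endianness differs from the host’s:

```c
#include <stdint.h>

// Same idea as bstream_types_t: overlay a double on its raw bytes.
typedef union {
    double  Double;
    uint8_t bytes[8];
} val_union;

// Assemble a double from 8 raw bytes, reversing them when the
// source endianness differs from the host's.
static double read_double(const uint8_t *src, int needswap)
{
    val_union v;
    for (int i = 0; i < 8; i++)
        v.bytes[i] = needswap ? src[7 - i] : src[i];
    return v.Double;
}
```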

This incarnation uses functions a lot more than the last one. This is the big revelation for me. In the past, I was thinking like a ‘C’ programmer, essentially trying to do what I would do in assembly language. Well, I realized this is not necessarily the best way to go with LuaJIT. Also, I was trying to optimize by getting stuff into a buffer and messing around with it from there, assuming that getting stuff from the underlying stream is expensive. Well, that simply might not be a good assumption, so I relaxed it.

With this newer implementation, I was able to drop 200 lines of code out of 428. That’s a pretty good savings. This in and of itself might be worthwhile, because the code will be more easily maintained due to the smaller and simpler implementation.

So, every day I see and hear things about either my own code or someone else’s, and I try to apply what I’ve learned to my own cases. I’m happy to rewrite code when it results in smaller, tighter, more accurate coding.

And there you have it.

Tech Preview 2013

I’ve been champing at the bit to write this post. A new year, a sunny day! I read something somewhere about how futurists got the whole prediction game wrong. Rather than trying to describe the new and exciting things that would be showing up in oh so many years in the future, they should be describing what would be removed from our current context. With that in mind, here are some short to medium term predictions.

More wires will be removed from our environment. The keyboard and mouse wires will be gone, replaced by wireless, Bluetooth or otherwise. The ubiquitous S-Video and various component audio/video cables will disappear. They’ll be replaced by a single wire in the form of HDMI, or they’ll disappear altogether, replaced by wireless audio/video transmission.

The “desktop computer” will disappear. Basic “AirPlay” capability will be baked into monitors of all stripes, so that the compute component of anyone’s environment will consist of nothing more than a little computicle, combined with whatever input devices the user happens to need for their particular task. “Touch” sensors, such as the Leap Motion, will find their way into more interesting and interactive input activities.

Power consumption related to computing will continue to shrink. Since cell phones seem to be the focus of current compute innovation, this genre will drive the compute world. With the likes of the Raspberry Pi and the Odroid-U2 becoming popular System-on-Chip compute platforms, the raw power requirements for a compute experience will be reduced from about 80 watts to about 5 watts.

A bit longer term…

Drivers will be removed from the driving experience. With BMW, Google, and others experimenting with driverless vehicles, this will eventually become the preferred method of transportation. Particularly with an aging population in many places, it will likely be safer to have seniors driven around in nice Prius-like pods, rather than having them drive themselves.

“Data Centers” will become irrelevant. This is a bit of a stretch, but the thinking goes like this. A data center is a concentration of communications, power consumption, compute capability, and ultimately data storage. Data centers are essentially the timesharing/mainframe model of 50 years back, done up in a large centralized format. Why would I use a data center, though? If I had fast internet (100 Mbps or better) to my home/business, would I need a data center? If I have enough compute power in my home, in the form of 3 or 4 communications servers, do I need a data center? If I have 16TB of storage sitting under my desk at home, do I need a data center? In short, if I eliminate the redundancy, uptime guarantees, and so on that a data center gives me, I can probably supply the same from my home/small business at an equally affordable cost. As compute and storage costs continue to decrease, and the power consumption of my devices goes with them, the tipping point for data center value will change, and doing it from home will become more appealing.

Trending… breaking from the “things that will be removed”, and applying a more traditional “what will come” filter…

“Data” will become less important as “connections” become more important. In recent years, blogs were interesting; then tweeting, the micro form of the blog, became more interesting. At the same time, Facebook started to emerge, which is kind of like a stretched-out tweet/blog. Now along comes Pinterest. And in the background, there’s been Google. Pinterest represents a new form of communication. Gone are the words, replaced by connections. I can infer that I’ll be interested in something you’re interested in by seeing how many times I’ve liked other things that you’ve been interested in. If I’m an advertiser, tracking down the most-pinned people and following what they’re pinning is probably a better indicator of relevance than anything Google or Facebook have to offer. The connections are more important than any data you can actually read. In fact, you probably won’t read much; you’ll look at pictures and tiny captions, and possibly follow some links. If there’s a prediction here, it is that the likes of Google, Facebook, and Twitter will be felled by the likes of Pinterest and others who are driving in the “graph” space.

3D Printing. This year will see the likely emergence of the Formlabs printer, as well as the continued evolution of ABS- and PLA-based printers, so content for these printers will become more interesting. The price point for the printers will continue to hover around $1500 for the truly “plug and play” variety, as opposed to the DIY variety.

For 3D content design, the Raspberry Pi, and ODroid-U2, will be combined with the LeapMotion input device to create truly spectacular and easy to use 3D design tools, for less than $200 for a hardware/software combination.

As computicles become cheaper, the premium for software will continue to decrease. There will be a consolidation of the hardware/software offering. When a decent computicle costs $35 (Raspberry Pi), it’s hard to justify software costing any more than $5. If you’re a software manufacturer, you will consider creating packages that include both the hardware and software in order to get any sort of premium.

Ubiquitous computicles will see the emergence of a new hardware “hub”: basically a wifi-connected device that attaches to various peripherals such as the LeapMotion, a Kinect, or an HDMI-based screen. It will manage the various interactive inputs from the various other computicles located within the home/office. Rather than being a primary compute device itself, it will act as a coordinator of the many devices in a given environment.

That’s about it for this year. No doom and gloom zombie scenarios as far as I can see. Some esoteric trending on many fronts, some empire breaking trends, some evolution of technologies which will make our lives a little easier to live.
