schedlua – async io

And so, finally, we come to the point. Thus far, we have looked at the simple scheduler, the core signaling, and the predicate and alarm functions built atop that. That’s all great stuff for fairly straightforward apps that need some amount of concurrency. The best part, though, is when you can do concurrent networking.

Here’s the setup: I want to issue 20 concurrent GET requests to 20 different sites and get the results back. I want the program to halt once all of the tasks have completed.

--test_linux_net.lua
package.path = package.path..";../?.lua"

local ffi = require("ffi")

local Kernel = require("kernel"){exportglobal = true}
local predicate = require("predicate")(Kernel, true)
local AsyncSocket = require("AsyncSocket")

local sites = require("sites");

-- list of tasks
local taskList = {}


local function httpRequest(s, sitename)
	local request = string.format("GET / HTTP/1.1\r\nUser-Agent: schedlua (linux-gnu)\r\nAccept: */*\r\nHost: %s\r\nConnection: close\r\n\r\n", sitename);
	return s:write(request, #request);
end

local function httpResponse(s)
	local BUFSIZ = 512;
	local buffer = ffi.new("char[?]", BUFSIZ + 1);
	local bytesRead = 0
	local err = nil;
	local cumulative = 0

	repeat
		bytesRead, err = s:read(buffer, BUFSIZ);

		if bytesRead then
			cumulative = cumulative + bytesRead;
		else
			print("read, error: ", err)
			break;
		end
	until bytesRead < 1

	return cumulative;
end


local function siteGET(sitename)
	print("siteGET, BEGIN: ", sitename);

	local s = AsyncSocket();

	local success, err = s:connect(sitename, 80);  

	if success then
		httpRequest(s, sitename);
		httpResponse(s);
	else
		print("connect, error: ", err, sitename);
	end

	s:close();

	print("siteGET, FINISHED: ", sitename)
end


local function allProbesFinished()
	for idx, t in ipairs(taskList) do
		if t:getStatus() ~= "dead" then
			return false;
		end
	end

	return true;
end

local function main()
	for count=1,20 do
		table.insert(taskList, Kernel:spawn(siteGET, sites[math.random(#sites)]))
		Kernel:yield();
	end

	when(allProbesFinished, halt);
end

run(main)

Step by step. The httpRequest() function takes a socket and issues the most bare-minimal HTTP GET request, assuming the socket is already connected to the site.

Similarly, the httpResponse() function reads the response back from the server, consuming as much as it can until the socket is closed (because the Connection: close header was sent).

That’s about the most basic of HTTP request/response pairs you can have, ignoring doing any parsing of the returned data.

Alright, so let’s wrap those two up into a function called siteGET(). siteGET(sitename) takes the name of a site, creates a socket, connects it to the site, issues the httpRequest(), and then the httpResponse(). Very simple. What I like about this is that the httpRequest(); httpResponse() sequence executes serially as far as I’m concerned. I don’t have to worry about httpResponse() being issued before the request completes. Furthermore, if I didn’t use spawn(), I could simply execute the code directly and be none the wiser.

I want to execute these siteGET()s concurrently though, so within main(), I start up 20 of these tasks, and let them go. Then comes the waiting part:

local function allProbesFinished()
	for idx, t in ipairs(taskList) do
		if t:getStatus() ~= "dead" then
			return false;
		end
	end

	return true;
end

	when(allProbesFinished, halt);

Going back to our knowledge of predicates, we know that the ‘when’ function takes a predicate (function that returns true/false), and will execute the second function when the predicate returns true.

OK, so we just need to come up with a predicate which tells us that all the tasks have completed. Easy enough, as a list of the tasks is generated when they are spawned. So we just go through that list and see if any of them are still running. If even a single one is still running, the predicate returns false and ‘halt()’ is not called. As soon as the last task finishes, the predicate returns true and the halt() function is called.

Of course, most things in schedlua are convenient compositions of deeper things (with signals being at the core).

Instead of using the ‘when’ function, you could write the code more directly like this:

	while true do
		if allProbesFinished() then
			halt();
			break;
		end
		yield();
	end

That doesn’t look quite as nice as just using the when() function, I think. Also, you’re sitting in the main() function, which is no big deal as there’s nothing else trying to execute after this, but it just doesn’t seem as clean. Furthermore, the ‘when’ function might have some magic in its implementation, such as a better understanding of the state of tasks, or special knowledge of the scheduler, or who knows what. At any rate, either way essentially implements a barrier, and the technique can be used anywhere you want to perform an action after some set of tasks has completed. The allProbesFinished() function can be generalized to wait on any list of tasks; maybe call it “waitForTasks()” or some such thing.
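A sketch of that generalization might look like the following. The name waitForTasks is hypothetical; the sketch assumes the predicate module has been loaded with exportglobal = true (so waitForPredicate is global) and that tasks expose the same getStatus() used above:

```lua
-- Hypothetical waitForTasks: a barrier over an arbitrary list of tasks.
local function allDead(tasks)
	for _, t in ipairs(tasks) do
		if t:getStatus() ~= "dead" then
			return false
		end
	end
	return true
end

local function waitForTasks(tasks)
	-- suspend the calling task until every task in the list has completed
	return waitForPredicate(function() return allDead(tasks) end)
end
```

With that in hand, main() could simply end with waitForTasks(taskList) followed by halt().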

At any rate, that completes the primitives that are baked into the core schedlua package. Everything from signals, to predicates, alarms, and finally async io. Of course this is Linux, so async io works with any file descriptor, not just network sockets, so file management or device communications in general can be thrown into the mix.

Now that the basics work, it’s a matter of cleaning up, writing more test cases, fixing bugs, reorganizing, and optimizing resource usage a bit. In general though, the constructs are there, and it’s ready to be used for real applications.


schedlua – predicates and alarms

A few years back, during the creation of Language Integrated Query (LINQ), I had this idea: if we could add database semantics to the language, what would adding async semantics look like? These days we’ve gone the route of async/await and various other constructs, but still, I always just wanted primitives such as “when”, “whenever”, and “waitUntil”. The predicates in schedlua are just that:

  • signalOnPredicate(predicate, signalName)
  • waitForPredicate(predicate)
  • when(predicate, func)
  • whenever(predicate, func)

Of course, this is all based at the very core on the signaling mechanism in the kernel of schedlua, but these primitives are not in the kernel proper.  They don’t need to be, which is nice because it means you can easily add such functions without having to alter the core.

What do they look like in practice?  Well, first of all, a ‘predicate’ is nothing more than a fancy name for a function that returns a bool value: it will return either ‘true’ or ‘false’.  Based on this, various things can occur.  For example, ‘signalOnPredicate’ emits the signal specified by signalName when the predicate returns ‘true’.  Similarly, ‘waitForPredicate’ puts the currently running task into a suspended state until such time as the predicate returns ‘true’.  ‘when’ and ‘whenever’ are similar, but they spawn new tasks rather than suspending the existing task.  And here’s some code:

 

--test_scheduler.lua
package.path = package.path..";../?.lua"

local Functor = require("functor")
local Kernel = require("kernel"){exportglobal = true}
local Predicate = require("predicate")(Kernel, true)

local idx = 0;
local maxidx = 100;

local function numbers(ending)
	local function closure()
		idx = idx + 1;
		if idx > ending then
			return nil;
		end
		return idx;
	end
	
	return closure;
end



local function counter(name, nCount)
	for num in numbers(nCount) do
		print(num)
		local eventName = name..tostring(num);
		--print(eventName)
		signalOne(eventName);

		yield();
	end

	signalAll(name..'-finished')
end


local function predCount(num)
	waitForPredicate(function() return idx > num end)
	print(string.format("PASSED: %d!!", num))
end



local function every5()
	while idx <= maxidx do
		waitForPredicate(function() return (idx % 5) == 0 end)
		print("!! matched 5 !!")
		yield();
	end
end

local function test_whenever()
	local t1 = whenever(
		function()
			if idx > maxidx then return nil end;
			return (idx % 2) == 0
		end,
		function() print("== EVERY 2 ==") end)
end

local function main()
	local t1 = spawn(counter, "counter", maxidx)

	local t7 = spawn(test_whenever);

	local t2 = spawn(predCount, 12)
	local t3 = spawn(every5)
	local t4 = spawn(predCount, 50)
	local t5 = when(
		function() return idx == 75 end, 
		function() print("WHEN IDX == 75!!") end)
	local t6 = when(function() return idx >= maxidx end,
		function() halt(); end);


end

run(main)



It’s a test case, so it’s doing some contrived things.  Basically, there is one task running a counter that throws a signal for every new number (up to maxidx).  Then you can see the various tests which use predicates.

local function every5()
	while idx <= maxidx do
		waitForPredicate(function() return (idx % 5) == 0 end)
		print("!! matched 5 !!")
		yield();
	end
end

Here, we’re just running a continuous loop which prints its message every time the predicate is true. It seems kind of wasteful, doesn’t it? Under normal circumstances this would be a very hot spin loop, but when you call ‘waitForPredicate’, your task will actually be thrown into a ‘suspended’ state, which means that if there are other tasks to execute, they’ll go ahead, and you’ll get back in the queue to be tested later. So it’s really: “test this predicate; if it’s not true, throw the task to the back of the ready list and test it again later; if it’s true, continue with whatever is next in this task”. The ‘yield()’ here is redundant.
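To make that suspend-and-retest behavior concrete, here is a tiny self-contained round-robin sketch in plain Lua coroutines. It is an illustration of the idea, not schedlua’s actual implementation; all the names (spawn, yield, waitForPredicate, run) are local stand-ins:

```lua
-- Toy round-robin scheduler: tasks that yield go to the back of the line.
local ready = {}

local function spawn(f) table.insert(ready, coroutine.create(f)) end
local function yield() coroutine.yield() end

-- re-test the predicate each time the task gets a turn
local function waitForPredicate(pred)
	while not pred() do
		coroutine.yield()
	end
end

local function run()
	while #ready > 0 do
		local t = table.remove(ready, 1)
		coroutine.resume(t)
		if coroutine.status(t) ~= "dead" then
			table.insert(ready, t)
		end
	end
end

local idx = 0
local log = {}

spawn(function()
	for i = 1, 10 do idx = i; yield() end
end)

spawn(function()
	waitForPredicate(function() return idx >= 5 end)
	table.insert(log, "idx reached " .. idx)
end)

run()
-- log now holds a single entry: "idx reached 5"
```

The waiting task gets resumed after every counter step, but only proceeds once the predicate comes back true; in between, the counter task runs unimpeded.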

In this particular case, we’ve essentially created a ‘whenever’ construct. This construct happens enough that it’s worth creating a convenience function for it.

local function test_whenever()
	local t1 = whenever(
		function()
			if idx > maxidx then return nil end;
			return (idx % 2) == 0
		end,
		function() print("== EVERY 2 ==") end)
end

In this particular case, we let the ‘whenever’ construct do the work for us. Every other count, we’ll print our message. Of course, I’m using inline functions (lambda expressions, if you will) in these test cases. They don’t have to be written that way; you can define the functions however you want.

t6 is interesting because it says ‘when the count reaches maxidx, halt the program’, which will in fact break us out of the main event loop and stop the program. Very convenient. This construct is useful because there may be myriad reasons why you’d want to stop the program; you can simply set up a predicate to do it. It could be a ‘when’, or perhaps you’d rather it be based on a signal, in which case use a signalOnPredicate/waitForSignal sort of thing. It’s composable, so use whatever set of constructs makes the most sense. It’s a kind of sane form of exception handling.

So there you have it, yet another construct tackled. Predicates are a simple construct that extends if/then flow control into the async realm. ‘when’ is almost a direct replacement for ‘if’ in this context. waitForPredicate is kind of a new construct, I think: it’s like an if/spinlock, except you’re not spinning; you’re suspended, with periodic checks on the predicate. And then of course ‘signalOnPredicate’ is like a Hail Mary pass. You don’t know who or what is going to respond, but you’re going to send up the signal. That’s like programming with interrupts, except that, unless the scheduler allows for high-priority interrupts/signals, these will be handled in the normal flow of cooperative processing.
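As a sketch of that composability: signalOnPredicate can be seen as “wait for the predicate, then raise the signal”, run in its own spawned task. This is an illustration, not the actual predicate.lua, so the dependencies are passed in explicitly rather than taken from the kernel:

```lua
-- Compose signalOnPredicate from spawn + waitForPredicate + signalAll.
local function makeSignalOnPredicate(spawn, waitForPredicate, signalAll)
	return function(pred, signalName)
		return spawn(function()
			waitForPredicate(pred)    -- suspend until the predicate holds
			signalAll(signalName)     -- then wake everyone waiting on it
		end)
	end
end
```

Because each piece is just a function, swapping in a different wait strategy or signaling policy is a one-line change.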

Predicates are great; they’re a slightly different construct from the ones I’ve typically used in my everyday programming. They make some tasks a lot easier, and they make thinking about async programming more manageable.

And then there’s time…

Time is an easier construct to think about, because it’s already well represented in most language frameworks. Here are the time primitives:
 

  • waitUntilTime
  • sleep
  • delay
  • periodic

 
‘waitUntilTime’ is the linchpin in this case. It will suspend the current task until the specified time. Time, in this case, is a relative thing: the alarm module keeps its own clock, and everything is expressed relative to that clock.

sleep(seconds) will simply suspend the current task for the specified number of seconds. You can specify fractions of a second, and the clock has nanosecond precision, but we’re not using a realtime scheduler, so you’ll see some amount of delay. Of course, you could simply swap in a new scheduler to deal with any realtime requirements you might have.
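Given waitUntilTime as the primitive, sleep is little more than “now plus seconds”. Here is a sketch; the alarm and clock are passed in explicitly, and the method names (waitUntilTime, secondsElapsed) are taken from the code in this post, so treat the wiring as an assumption rather than the actual alarm module:

```lua
-- Sketch: sleep expressed in terms of waitUntilTime.
local function makeSleep(alarm, clock)
	return function(seconds)
		-- target time is relative to the alarm's own clock
		return alarm:waitUntilTime(clock:secondsElapsed() + seconds)
	end
end
```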

delay will spawn a task which executes the specified function after the specified amount of time has passed. This is kind of like a ‘when’ predicate, with a specialization for time. You could in fact reconstruct this primitive using the when predicate, but the alarm, knowing about time as it does, can do it more efficiently.

local Kernel = require("kernel"){exportglobal = true};
local Alarm = require("alarm")(Kernel)
local Clock = require("clock")

local c1 = Clock();

local function twoSeconds()
	print("TWO SECONDS: ", c1:secondsElapsed());
	Kernel:halt();
end

local function test_alarm_delay()
	print("delay(twoSeconds, 2000)");
	Alarm:delay(twoSeconds, 2000);
end

run(test_alarm_delay)

periodic is similar, in that it will execute a function, but whereas ‘delay’ is a one-shot event, ‘periodic’ will repeat. In this way it is like the ‘whenever’ construct.
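In the same spirit of composition, one could imagine building periodic out of delay itself: fire once, then reschedule. This is only a sketch (the real alarm module presumably does this more directly, with its knowledge of time), with the alarm passed in explicitly:

```lua
-- Sketch: periodic composed from delay by rescheduling after each firing.
local function makePeriodic(alarm)
	return function(func, millis)
		local function tick()
			func()
			alarm:delay(tick, millis)  -- reschedule for the next period
		end
		return alarm:delay(tick, millis)
	end
end
```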

And there you have it. Between the predicates and the alarms, you have some new basic constructs for doing async programming. They are supported by the signaling construct that’s already a part of the kernel. They are simple add-ons, which means you can easily create your own constructs and replace these, or create completely new constructs which are similar. They can leverage the signaling mechanism, or maybe they want to do something else altogether.

So far, the constructs have been of the if/then variety, only in async fashion. I think there’s another set of constructs, which have to do with barriers and completions of tasks. That will clean up the other part of async, which is the ‘await’. We’ll see. In the meanwhile, next time, async io, which is pretty exciting.


graphicc – presenting a graph

The latest graphicc library is shaping up to be an almost useful thing.

The only way I know how to ensure a library actually serves a purpose is to build applications upon it.  This installment is about that sort of thing.  But first, a look at some more text.  This is basically text alignment working in graphicc.

test_text

The little bit of code that’s doing this looks like this.

void draw()
{
	background(pLightGray);

	// Draw some lines
	stroke(pBlack);
	line(width / 2, 0, width / 2, height - 1);
	line(0, height / 2, width - 1, height / 2);

	// draw some text
	int midx = width / 2;
	int midy = height / 2;
	fill(pBlack);
	textAlign(TX_LEFT);
	text("LEFT", midx, 20);
	
	textAlign(TX_CENTER);
	text("CENTER", midx, 40);
	
	textAlign(TX_RIGHT);
	text("RIGHT", midx, 60);

	// Around the center
	textAlign(TX_LEFT, TX_TOP);
	text("LEFT TOP", 0, 0);

	textAlign(TX_RIGHT, TX_TOP);
	text("RIGHT TOP",width,0);

	textAlign(TX_RIGHT, TX_BOTTOM);
	text("RIGHT BOTTOM", width,height);

	textAlign(TX_LEFT, TX_BOTTOM);
	text("LEFT BOTTOM",0,height);

	stroke(pRed);
	line(midx - 6, midy, midx + 6, midy);
	line(midx, midy - 6, midx, midy + 6);

	fill(pWhite);
	textAlign(TX_CENTER, TX_CENTER);
	text("CENTER CENTER", midx, midy);
}

But wait, this is high level stateful graphics sort of stuff, not the low level raw graphicc API. Yep, that’s right. Along the way of creating the lower level stuff, I’ve been nursing along what I call ‘drawproc’. This is essentially an interface that looks and feels very similar to the popular Processing.org environment, but it’s for C/C++ instead of Java. I also have a skin for PHIGS, but this one is a lot further along, and used constantly for test cases.

To exercise the drawproc API, and thus the low-level graphicc routines, I’m working through the book Processing: Visualizing Data, which shows a lot of techniques for visualizing data sets using Processing. And here are the fruits of following one particular chapter:

timeseries

Nice usage of text, alignment, lines, rectangles, tiny circles, different-sized text, and all that.  If you were doing the interactive app, you could flip between Milk, Tea, and Coffee graphs.  drawproc, along with a shim for win32, gives you keyboard and mouse control as well, so doing some interactive hover effects and the like is possible.

It’s a kind of funny thing.  In these days of HTML rendering, how could there possibly be any other way to do UI?  Well, HTML is the answer in many cases, but not all.  Having a small tight graphics library that can allow you to build interactive apps quickly and easily is still a useful thing.  Besides, it’s just plain fun.

Now that I’ve got a reasonable set of core graphics routines, I contemplated what it would be like to write a remote UI sort of thing for cloud based VMs.  Basically, just put some little engine on the VM which receives drawing commands, and allow that to be connected to via some port, which also sends/receives key and mouse stuff.  Well, isn’t that what ‘X’ does?  Yah, and a whole lot more!  Well then, surely VNC has got it covered?  Yes, as long as you’re already running X.  But, it’s a challenge.  Can I write such a small service in less than 1Mb of code?  Maybe 2Mb just to be safe?  Of course you’d have to write apps on the server side that talk to the graphics server, but that shouldn’t be too hard.  Just pick an API skin, like Processing, or GDI, or whatever, and send all the commands to the service, which will render into a buffer to be served up to whomever is looking.

One can dream…


schedlua – the kernel

Previously, I talked about the scheduler within the new schedlua.  A scheduler is a fairly simple thing: it just decides which of the many ready tasks will run next.  The default scheduler follows a simple FIFO strategy, so there are no priorities, favorites, or the like.  Of course this wouldn’t be any fun if you were stuck with just one scheduler, so naturally enough this is an easily pluggable part of the system.  But who does the plugging?

In steps the Kernel.  In general, the schedlua project is about creating a set of tools by which highly performant services can be constructed.  schedlua largely supports the concept that a single processor can be highly leveraged if programmed correctly.  It does not try to gain performance through the usage of multiple threads, but rather it just takes on the task of suspending various tasks which are blocked on IO or otherwise idle, and letting tasks which are ready to run do their thing.  The concurrency model is cooperative, not preemptive, so if any one task misbehaves, the process can become stuck.

So, let’s take a look at this code:

--kernel.lua
-- kernel is a singleton, so return
-- single instance if we've already been
-- through this code
print("== KERNEL INCLUDED ==")

local Scheduler = require("scheduler")
local Task = require("task")
local Queue = require("queue")
local Functor = require("functor")

local Kernel = {
	ContinueRunning = true;
	TaskID = 0;
	Scheduler = Scheduler();
	TasksSuspendedForSignal = {};
}

setmetatable(Kernel, {
    __call = function(self, params)
    	params = params or {}
    	params.Scheduler = params.Scheduler or self.Scheduler
    	
    	if params.exportglobal then
    		self:globalize();
    	end

    	self.Scheduler = params.Scheduler;

    	return self;
    end,
})

function Kernel.getNewTaskID(self)
	self.TaskID = self.TaskID + 1;
	return self.TaskID;
end

function Kernel.getCurrentTaskID(self)
	return self:getCurrentTask().TaskID;
end

function Kernel.getCurrentTask(self)
	return self.Scheduler:getCurrentTask();
end

function Kernel.spawn(self, func, ...)
	local task = Task(func, ...)
	task.TaskID = self:getNewTaskID();
	self.Scheduler:scheduleTask(task, {...});
	
	return task;
end

function Kernel.suspend(self, ...)
	self.Scheduler:suspendCurrentFiber();
	return self:yield(...)
end

function Kernel.yield(self, ...)
	return self.Scheduler:yield(...);
end


function Kernel.signalOne(self, eventName, ...)
	if not self.TasksSuspendedForSignal[eventName] then
		return false, "event not registered", eventName
	end

	local nTasks = #self.TasksSuspendedForSignal[eventName]
	if nTasks < 1 then
		return false, "no tasks waiting for event"
	end

	local suspended = self.TasksSuspendedForSignal[eventName][1];

	self.Scheduler:scheduleTask(suspended,{...});
	table.remove(self.TasksSuspendedForSignal[eventName], 1);

	return true;
end

function Kernel.signalAll(self, eventName, ...)
	if not self.TasksSuspendedForSignal[eventName] then
		return false, "event not registered"
	end

	local nTasks = #self.TasksSuspendedForSignal[eventName]
	if nTasks < 1 then
		return false, "no tasks waiting for event"
	end

	for i=1,nTasks do
		self.Scheduler:scheduleTask(self.TasksSuspendedForSignal[eventName][1],{...});
		table.remove(self.TasksSuspendedForSignal[eventName], 1);
	end

	return true;
end

function Kernel.waitForSignal(self, eventName)
	local currentFiber = self.Scheduler:getCurrentTask();

	if currentFiber == nil then
		return false, "not currently in a running task"
	end

	if not self.TasksSuspendedForSignal[eventName] then
		self.TasksSuspendedForSignal[eventName] = {}
	end

	table.insert(self.TasksSuspendedForSignal[eventName], currentFiber);

	return self:suspend()
end

function Kernel.onSignal(self, func, eventName)
	local function closure()
		self:waitForSignal(eventName)
		func();
	end

	return self:spawn(closure)
end



function Kernel.run(self, func, ...)

	if func ~= nil then
		self:spawn(func, ...)
	end

	while (self.ContinueRunning) do
		self.Scheduler:step();		
	end
end

function Kernel.halt(self)
	self.ContinueRunning = false;
end

function Kernel.globalize(self)
	halt = Functor(Kernel.halt, Kernel);
	onSignal = Functor(Kernel.onSignal, Kernel);

	run = Functor(Kernel.run, Kernel);

	signalAll = Functor(Kernel.signalAll, Kernel);
	signalOne = Functor(Kernel.signalOne, Kernel);

	spawn = Functor(Kernel.spawn, Kernel);
	suspend = Functor(Kernel.suspend, Kernel);

	waitForSignal = Functor(Kernel.waitForSignal, Kernel);

	yield = Functor(Kernel.yield, Kernel);
end

return Kernel;

 

From the top, you can see the Kernel requires the scheduler, task, queue, and functor modules. The scheduler has already been explained. The Kernel serves a couple of purposes. First of all, it manages the scheduler. The ‘run’ function at the bottom is the ‘loop’ of the application; it will run until ‘halt’ is called. Each time through the loop, it tells the scheduler to take a step.

Also at the bottom, you can see usage of the Functor. A functor is just a simple convenience wrapper that helps you call a function on a table at a later point. These functors are used to make the kernel’s keywords global.
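A minimal sketch of what such a Functor plausibly does (the actual functor.lua may differ): it binds a target table to a method, so that `spawn(...)` can later be called as a plain global function while still dispatching to `Kernel.spawn(Kernel, ...)`:

```lua
-- Sketch of a Functor: bind a target to a method for later plain calls.
local function Functor(func, target)
	return function(...)
		if target then
			return func(target, ...)
		end
		return func(...)
	end
end
```

So `spawn = Functor(Kernel.spawn, Kernel)` yields a function that forwards its arguments with Kernel prepended, which is exactly what globalize() needs.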

There are two primary things the kernel does. One is to spawn new tasks, the other is to provide a central point for signal handling.

First, let’s look at the ‘spawn’.

function Kernel.spawn(self, func, ...)
	local task = Task(func, ...)
	task.TaskID = self:getNewTaskID();
	self.Scheduler:scheduleTask(task, {...});
	
	return task;
end

This is actually the coolest part of the system in terms of how the programming model is expressed. Here’s an example of it in use.

local Kernel = require("kernel"){exportglobal = true}


local function numbers(ending)
	local idx = 0;
	local function closure()
		idx = idx + 1;
		if idx > ending then
			return nil;
		end
		return idx;
	end
	
	return closure;
end

local function task1()
	print("first task, first line")
	yield();
	print("first task, second line")
end

local function task2()
	print("second task, only line")
end

local function counter(name, nCount)
	for num in numbers(nCount) do
		print(name, num);
		yield();
	end
	halt();
end

local function main()
	local t0 = spawn(counter, "counter1", 5)
	local t1 = spawn(task1)
	local t2 = spawn(task2)
	local t3 = spawn(counter, "counter2", 7)
end

run(main)

Basically, any time you want something to happen concurrently, you just say ‘spawn(func, params)’ and that’s that.

What happens is a Task object is created which holds onto the function object as well as the initial set of parameters. This task is then sent to the scheduler to be run. From then on out you can forget about it. Of course, you’re handed the task when you say ‘spawn’, so you do have a chance of suspending and killing it off in the future if you like. Similarly, you can wait for a task to complete as well.

So, that’s spawning.

The other major feature in the kernel is signal handling.

function Kernel.waitForSignal(self, eventName)
	local currentFiber = self.Scheduler:getCurrentTask();

	if currentFiber == nil then
		return false, "not currently in a running task"
	end

	if not self.TasksSuspendedForSignal[eventName] then
		self.TasksSuspendedForSignal[eventName] = {}
	end

	table.insert(self.TasksSuspendedForSignal[eventName], currentFiber);

	return self:suspend()
end

This is probably THE most important routine in the whole system. Basically: take the current task and put it onto the suspension list, keyed by the signal name (signals are just string values). Later on, when you want the task to resume, you signal it using either signalOne() or signalAll().

A little bit of code demonstrating this:

local Kernel = require("kernel"){exportglobal = true}
local Functor = require("functor")

local function numbers(ending)
	local idx = 0;
	local function closure()
		idx = idx + 1;
		if idx > ending then
			return nil;
		end
		return idx;
	end
	
	return closure;
end

local function waitingOnCount(name, ending)
	local eventName = name..tostring(ending)
	waitForSignal(eventName)

	print("REACHED COUNT: ", ending)
end

local function onCountFinished(name)
	print("Counter Finished: ", name)
end

local function counter(name, nCount)
	for num in numbers(nCount) do
		print(num)
		local eventName = name..tostring(num);
		--print(eventName)
		signalOne(eventName);

		yield();
	end

	signalAll(name..'-finished')
end

local function main()
	local t1 = spawn(counter, "counter", 50)
	local t2 = spawn(waitingOnCount, "counter", 20)
	local t3 = spawn(function() print("LAMBDA"); waitForSignal("counter15") print("reached 15!!") end)
	
	-- test signalAll().  All three of these should trigger when
	-- counter finishes
	local t13 = onSignal(Functor(onCountFinished, "counter-1"), "counter-finished")
	local t14 = onSignal(Functor(onCountFinished, "counter-2"), "counter-finished")
	local t15 = onSignal(Functor(onCountFinished, "counter-3"), "counter-finished")
end

run(main)

Here, there is a counter(), which is just counting, and firing off a signal for each number. The various waitingOnCount(), and LAMBDA routines are going to respond to the appropriate signals.

Finally, the t13, t14, and t15 tasks are waiting for the “counter-finished” signal, and they will all fire off and print their little message.

Of course, at this point you could have something call ‘halt()’ so you don’t have to press Ctrl-C to stop the process, but you get the idea.

And that’s pretty much it for the kernel. Absent from here are the async io, predicates, alarms and the like. They are available in schedlua, but they’re not a part of the kernel. Instead of being part of the kernel proper, these are essentially modules. They utilize the signal and spawn features built into the kernel, and they’re free to do their own thing.

I’ll get into the details of alarms and predicates next time around to demonstrate the concept of easy add-on modules.


Schedulers revisited

I ran quite a long series on the buildup of the TINN scheduler, including this gem on the primitives included in the application model: Parallel Conversations.

The scheduler developed in that series served me very well, taking on timing, predicates, signals, and async IO.  It was all based on Windows primitives, and although fairly modular, it was an evolved codebase with some lumps and inefficiencies.  So now, two years on, I decided to reimagine that scheduler.  Primary reasons?  I had some time on my hands, and I wanted a scheduler that works equally well on Linux as on Windows.

And so, the schedlua project was born.

What were the design criteria this time around?

  • Reduce Size and complexity
  • Increase composability
  • Do not force features where they’re not desired

In the previous incarnation, I went back and forth between having an “Application” object, and the “Scheduler”.  Things were primarily driven by the Application, and it had a central role in the instance of any running app.  I still favor having this central role, but I changed the meaning and composition of that central player this time around.  First though, there is the “Scheduler”.


local ffi = require("ffi");

local Queue = require("queue")
local Task = require("task");


--[[
	The Scheduler supports a collaborative processing
	environment.  As such, it manages multiple tasks which
	are represented by Lua coroutines.

	The scheduler works by being the arbitrator of which routine is running
	in the application.  When work is to be done, it is encapsulated in 
	the form of a task object.  The scheduler relies upon that task object
	to call a form of 'yield' at some point, at which time it will receive
	the main thread of execution, and will pick the next task out of the ready
	list to run.
--]]
local Scheduler = {}
setmetatable(Scheduler, {
	__call = function(self, ...)
		return self:create(...)
	end,
})
local Scheduler_mt = {
	__index = Scheduler,
}

function Scheduler.init(self, ...)
	local obj = {
		TasksReadyToRun = Queue();
	}
	setmetatable(obj, Scheduler_mt)
	
	return obj;
end

function Scheduler.create(self, ...)
	return self:init(...)
end


function Scheduler.tasksPending(self)
	return self.TasksReadyToRun:length();
end


-- put a task on the ready list
-- the 'task' should be something that can be executed,
-- whether it's a function, functor, or something that has a '__call'
-- metamethod implemented.
-- The 'params' is a table of parameters which will be passed to the function
-- when it's ready to run.
function Scheduler.scheduleTask(self, task, params)
	--print("Scheduler.scheduleTask: ", task, params)
	params = params or {}
	
	if not task then
		return false, "no task specified"
	end

	task:setParams(params);
	self.TasksReadyToRun:enqueue(task);	
	task.state = "readytorun"

	return task;
end

function Scheduler.removeFiber(self, fiber)
	--print("REMOVING DEAD FIBER: ", fiber);
	return true;
end

function Scheduler.inMainFiber(self)
	return coroutine.running() == nil; 
end

function Scheduler.getCurrentFiber(self)
	return self.CurrentFiber;
end

function Scheduler.suspendCurrentFiber(self, ...)
	self.CurrentFiber.state = "suspended"
end

function Scheduler.step(self)
	-- Now check the regular fibers
	local task = self.TasksReadyToRun:dequeue()

	-- If no fiber in ready queue, then just return
	if task == nil then
		print("Scheduler.step: NO TASK")
		return true
	end

	if task:getStatus() == "dead" then
		self:removeFiber(task)

		return true;
	end

	-- If the task we pulled off the active list is 
	-- not dead, then perhaps it is suspended.  If that's true
	-- then it needs to drop out of the active list.
	-- We assume that some other part of the system is responsible for
	-- keeping track of the task, and rescheduling it when appropriate.
	if task.state == "suspended" then
		--print("suspended task wants to run")
		return true;
	end

	-- If we have gotten this far, then the task truly is ready to 
	-- run, and it should be set as the currentFiber, and its coroutine
	-- is resumed.
	self.CurrentFiber = task;
	local results = {task:resume()};

	-- once we get results back from the resume, one
	-- of two things could have happened.
	-- 1) The routine exited normally
	-- 2) The routine yielded
	--
	-- In both cases, we parse out the results of the resume 
	-- into a success indicator and the rest of the values returned 
	-- from the routine
	--local pcallsuccess = results[1];
	--table.remove(results,1);

	local success = results[1];
	table.remove(results,1);

--print("PCALL, RESUME: ", pcallsuccess, success)

	-- no task is currently executing
	self.CurrentFiber = nil;


	if not success then
		print("RESUME ERROR")
		print(unpack(results));
	end

	-- Again, check to see if the task is dead after
	-- the most recent resume.  If it's dead, then don't
	-- bother putting it back into the readytorun queue
	-- just remove the task from the list of tasks
	if task:getStatus() == "dead" then
		self:removeFiber(task)

		return true;
	end

	-- The only way the task will get back onto the readylist
	-- is if its state is 'readytorun'; otherwise, it will
	-- stay out of the readytorun list.
	if task.state == "readytorun" then
		self:scheduleTask(task, results);
	end
end

function Scheduler.yield(self, ...)
	return coroutine.yield(...);
end


return Scheduler

I think the code is pretty straight forward to follow, except for that ‘step()’ function.  So, what is a scheduler anyway?  Well, we want to have some level of ‘concurrency’ in our applications.  That is, I would love to be able to have the appearance of running several tasks at once on my machine, without much work.  In modern day computing, this is fairly well covered by the OS at the application level.  We can run several apps at the same time on our computers, and the kernel takes care of the details of how that gets done.  Modern kernels simply split time between tasks, giving some amount of time to each task in turn, so they feel like they are running in parallel, or concurrently.  It’s a nice illusion, because the machines are fast enough, and there are enough times when a task will simply be waiting around anyway, that the amount of concurrency achieved can be very high.

So, what if you want some of that kernel goodness within the context of your app directly?  You want to handle mouse and keyboard, while doing some drawing and networking at the same time.  This is typically where ‘threads’ step in, and down the rabbit hole we go, getting involved in threading models, concurrency primitives and the like.  Well, in Lua, there’s a concurrency primitive built in, in the form of coroutines.  Coroutines are a fairly old style of concurrency whereby the application gets control of the CPU, and only gives it up willingly by calling a ‘yield()’ function at some point.  OK, great.  So, you start a routine, and at some point you call yield, and some other routine gets a chance to run.  But, which routine?  Aha, so this is where the ‘scheduler’ comes in.

The scheduler might be considered the ‘home’ routine.  That is, once a routine gives up a slice of time by calling ‘yield’, the scheduler is the place it will return to.  At that point, the scheduler can decide which routine is to be run next.  It can do this because it has a list of ready to run tasks (those placed on the list using ‘scheduleTask’).  In this scheduler, it simply takes the next one off the list, and tells it to resume.  That is the essence of the ‘step()’ function.

Great!  And what does it look like in action?

--test_scheduler.lua
package.path = package.path..";../?.lua"

Scheduler = require("scheduler")()
Task = require("task")
local taskID = 0;

local function getNewTaskID()
	taskID = taskID + 1;
	return taskID;
end

local function spawn(scheduler, func, ...)
	local task = Task(func, ...)
	task.TaskID = getNewTaskID();
	scheduler:scheduleTask(task, {...});
	
	return task;
end


local function task1()
	print("first task, first line")
	Scheduler:yield();
	print("first task, second line")
end

local function task2()
	print("second task, only line")
end

local function main()
	local t1 = spawn(Scheduler, task1)
	local t2 = spawn(Scheduler, task2)

	while (true) do
		--print("STATUS: ", t1:getStatus(), t2:getStatus())
		if t1:getStatus() == "dead" and t2:getStatus() == "dead" then
			break;
		end
		Scheduler:step()
	end
end

main()

That’s the scheduler in isolation.  In this test case, you can see that task creation and the ‘spawn’ primitive aren’t part of the scheduler proper.  They don’t need to be.  The scheduler should only be concerned with deciding which task is going to run next.  In this way, you can imagine creating all sorts of different kinds of schedulers.  You could introduce the concept of priorities, or aging, or realtime, or whatever.  The same general idea would apply; the scheduler just needs to decide which task is going to run next.  Also of note here, the ‘main()’ function operates as the “event loop”, if you will.  This is the bit of code that drives the scheduler between steps.  Having this separate from the scheduler proper will become advantageous down the road when we want to compose the scheduler with a larger application framework.

So, this is the first step in revitalizing the concurrency core I built into the TINN project.  This time around, more composable, more feature rich, more easily cross platform, more bang for the buck.  Next time around, I’ll examine the “kernel”.


Composition at What Level?

barpoly

That picture’s got a lot going on.  In Render Text like a Boss, I was rendering text, which is a pretty exciting milestone.  The only thing more exciting to do with text is render based on vectors, and being able to scale and rotate, but there are other fish to fry in the meanwhile.  I find that polygons are fairly useful as a primitive.

The above picture shows a whole bunch of primitives rendering with varying degrees of transparency.  Those random rectangles in the background are just making things look interesting.  Next are some random bars, with varying degrees of transparency, again just to make things look interesting.  And then comes that orange tinted polygon thing that lays atop the bars.  The polygon is simply constructed by taking the midpoint of the top edge of the bar as a vertex.  The two bottom vertices (left and right) close out the polygon.

But wait a minute, I see a bunch of triangles in there?  Well, just for the purposes of this article, I’m showing how the polygon is broken down into triangles (triangulated).  Triangles?  What the heck?  Well, you see, it’s like this.  There is more than one way to skin a cat, or render a polygon.  You can search the interwebs and find as many ways to do it as there are graphics libraries.  The ‘direct’ method would have you go scan line by line, counting even and odd edge crossings to determine whether individual pixels were ‘inside’ or ‘outside’ the polygon, and thus color the pixel.  You could traverse down edges, creating horizontal line spans, and then render those spans.  Well, these both look fairly similar to the choices made when rendering a triangle.  So, maybe it’s easier to simply break the polygon down into triangles, and then render the resultant triangles.  The polygon triangulation theorem backs this up: any simple polygon with n > 3 vertices can be broken up into a set of n – 2 triangles.  Equivalently, there are numerous coding algorithms which will do the same.  For graphicc, I found a nice public domain triangulator, and incorporated it into the library.  Pass it some points, it breaks them down into triangles, and the triangles are rendered.
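
The triangulator graphicc uses came from a public domain source and handles general polygons.  Just to illustrate the idea in its simplest form, here is a sketch of the convex-only case: a convex polygon can be triangulated as a fan from its first vertex.  The types and the `triangulate_fan` name are made up for this sketch, not the library’s actual API:

```c
#include <stddef.h>

typedef struct { float x, y; } pt2;
typedef struct { pt2 a, b, c; } tri2;

// Fan triangulation: valid for convex polygons only.
// Writes n-2 triangles into 'out' and returns the count.
size_t triangulate_fan(const pt2 *poly, size_t n, tri2 *out)
{
	if (n < 3) return 0;

	size_t count = 0;
	for (size_t i = 1; i + 1 < n; i++) {
		// every triangle shares vertex 0 of the polygon
		out[count].a = poly[0];
		out[count].b = poly[i];
		out[count].c = poly[i + 1];
		count++;
	}

	return count;
}
```

Each resulting triangle would then be handed to the core triangle rasterizer, which is exactly the single complex primitive the library already optimizes.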

Great.  This has a couple of nice implications.  First of all, triangles (and ultimately horizontal lines) remain a useful primitive in the core of the graphics library.  I did not actually add a polygon capability to the core; I just kept the triangles there, and anyone who needs polygons can easily break them down into triangles.  That leaves me with a single complex primitive to optimize.  I do the same for quads.  Rectangles are a special case, in that they may be optimized separately to come up with faster behavior than the triangulation would deliver.

ellipses

What about the ellipse?  Hmmm.  There are fairly fast ellipse drawing routines in the world.  The most interesting one for drawing ellipse outlines looks like this:

void raster_rgba_ellipse_stroke(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const size_t xradius, size_t yradius, const uint32_t color)
{
	int x, y;
	int xchange, ychange;
	int ellipseerror;
	int twoasquare, twobsquare;
	int stoppingx, stoppingy;

	twoasquare = 2 * xradius*xradius;
	twobsquare = 2 * yradius*yradius;

	x = xradius;
	y = 0;

	xchange = yradius*yradius*(1 - 2 * xradius);
	ychange = xradius*xradius;
	ellipseerror = 0;
	stoppingx = twobsquare*xradius;
	stoppingy = 0;

	// first set of points, sides
	while (stoppingx >= stoppingy)
	{
		Plot4EllipsePoints(pb, cx, cy, x, y, color);
		y++;
		stoppingy += twoasquare;
		ellipseerror += ychange;
		ychange += twoasquare;
		if ((2 * ellipseerror + xchange) > 0) {
			x--;
			stoppingx -= twobsquare;
			ellipseerror += xchange;
			xchange += twobsquare;
		}
	}

	// second set of points, top and bottom
	x = 0;
	y = yradius;
	xchange = yradius*yradius;
	ychange = xradius*xradius*(1 - 2 * yradius);
	ellipseerror = 0;
	stoppingx = 0;
	stoppingy = twoasquare*yradius;

	while (stoppingx <= stoppingy) {
		Plot4EllipsePoints(pb, cx, cy, x, y, color);
		x++;
		stoppingx += twobsquare;
		ellipseerror += xchange;
		xchange += twobsquare;
		if ((2 * ellipseerror + ychange) > 0) {
			y--;
			stoppingy -= twoasquare;
			ellipseerror += ychange;
			ychange += twoasquare;
		}
	}
}

This is one of those routines that takes advantage of the fact that axis aligned ellipses have a lot of symmetry, so 4 points are plotted at the same time. Well, it’s not a big leap to think that this technique can be extended. For filling the ellipse, you can simply draw horizontal lines between the symmetrical points, and you end up with a solid ellipse.

inline void fill2EllipseLines(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const unsigned int x, const unsigned int y, const uint32_t color)
{
	raster_rgba_hline_blend(pb, cx - x, cy + y, 2*x, color);
	raster_rgba_hline_blend(pb, cx - x, cy - y, 2 * x, color);
}

void raster_rgba_ellipse_fill(pb_rgba *pb, const uint32_t cx, const uint32_t cy, const size_t xradius, size_t yradius, const uint32_t color)
{
	int x, y;
	int xchange, ychange;
	int ellipseerror;
	int twoasquare, twobsquare;
	int stoppingx, stoppingy;

	twoasquare = 2 * xradius*xradius;
	twobsquare = 2 * yradius*yradius;

	x = xradius;
	y = 0;

	xchange = yradius*yradius*(1 - 2 * xradius);
	ychange = xradius*xradius;
	ellipseerror = 0;
	stoppingx = twobsquare*xradius;
	stoppingy = 0;

	// first set of points, sides
	while (stoppingx >= stoppingy)
	{
		//Plot4EllipsePoints(pb, cx, cy, x, y, color);
		fill2EllipseLines(pb, cx, cy, x, y, color);
		y++;
		stoppingy += twoasquare;
		ellipseerror += ychange;
		ychange += twoasquare;
		if ((2 * ellipseerror + xchange) > 0) {
			x--;
			stoppingx -= twobsquare;
			ellipseerror += xchange;
			xchange += twobsquare;
		}
	}

	// second set of points, top and bottom
	x = 0;
	y = yradius;
	xchange = yradius*yradius;
	ychange = xradius*xradius*(1 - 2 * yradius);
	ellipseerror = 0;
	stoppingx = 0;
	stoppingy = twoasquare*yradius;

	while (stoppingx <= stoppingy) {
		fill2EllipseLines(pb, cx, cy, x, y, color);
		x++;
		stoppingx += twobsquare;
		ellipseerror += xchange;
		xchange += twobsquare;
		if ((2 * ellipseerror + ychange) > 0) {
			y--;
			stoppingy -= twoasquare;
			ellipseerror += ychange;
			ychange += twoasquare;
		}
	}
}

In this context, the ellipse is kind of like the rectangle. There is an easy optimization at hand, so the benefits of employing triangulation might not be worth the effort. Then again, if you wanted to maintain a smaller codebase, then you might want to triangulate even this ellipse. You’d still be stuck with using the straight up code to draw the ellipse outline, and with a little bit of refactoring, you could use the exact same code for the outline and the fill. The routine to be utilized for the symmetrical points could be passed in as a function pointer, and that would be that. That makes for easy composition, code reuse, and the like.
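
That function-pointer refactoring might look something like the following sketch.  The midpoint walk is hoisted into one shared routine, and the op passed in decides what happens at each set of symmetric points: plotting them gives the stroke, connecting them with horizontal lines gives the fill.  The names `ellipse_walk` and `ellipse_op` are hypothetical, not the actual graphicc API, and the counting op at the bottom is just there for illustration:

```c
#include <stddef.h>
#include <stdint.h>

struct pb_rgba;  // pixel buffer type from the library (opaque here)

// Callback invoked once per symmetric point set along the ellipse.
// Pass Plot4EllipsePoints for a stroke, fill2EllipseLines for a fill.
typedef void (*ellipse_op)(struct pb_rgba *pb, uint32_t cx, uint32_t cy,
	unsigned int x, unsigned int y, uint32_t color);

void ellipse_walk(struct pb_rgba *pb, uint32_t cx, uint32_t cy,
	size_t xradius, size_t yradius, uint32_t color, ellipse_op op)
{
	int x = (int)xradius, y = 0;
	int twoasquare = 2 * (int)(xradius * xradius);
	int twobsquare = 2 * (int)(yradius * yradius);
	int xchange = (int)(yradius * yradius) * (1 - 2 * (int)xradius);
	int ychange = (int)(xradius * xradius);
	int ellipseerror = 0;
	int stoppingx = twobsquare * (int)xradius;
	int stoppingy = 0;

	// first set of points, sides
	while (stoppingx >= stoppingy) {
		op(pb, cx, cy, (unsigned int)x, (unsigned int)y, color);
		y++;
		stoppingy += twoasquare;
		ellipseerror += ychange;
		ychange += twoasquare;
		if ((2 * ellipseerror + xchange) > 0) {
			x--;
			stoppingx -= twobsquare;
			ellipseerror += xchange;
			xchange += twobsquare;
		}
	}

	// second set of points, top and bottom
	x = 0;
	y = (int)yradius;
	xchange = (int)(yradius * yradius);
	ychange = (int)(xradius * xradius) * (1 - 2 * (int)yradius);
	ellipseerror = 0;
	stoppingx = 0;
	stoppingy = twoasquare * (int)yradius;

	while (stoppingx <= stoppingy) {
		op(pb, cx, cy, (unsigned int)x, (unsigned int)y, color);
		x++;
		stoppingx += twobsquare;
		ellipseerror += xchange;
		xchange += twobsquare;
		if ((2 * ellipseerror + ychange) > 0) {
			y--;
			stoppingy -= twoasquare;
			ellipseerror += ychange;
			ychange += twoasquare;
		}
	}
}

// A trivial op for illustration: just counts how often it was invoked.
static int g_op_calls = 0;
static void count_op(struct pb_rgba *pb, uint32_t cx, uint32_t cy,
	unsigned int x, unsigned int y, uint32_t color)
{
	(void)pb; (void)cx; (void)cy; (void)x; (void)y; (void)color;
	g_op_calls++;
}
```

With this in place, `raster_rgba_ellipse_stroke` and `raster_rgba_ellipse_fill` each collapse to a one-line call into `ellipse_walk`.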

I’m not actually sure how much I tend to use ellipses in realistic drawing anyway. I suppose there are circles here and there, but eh, there you go.

The theme of this article was composition.  There is a bit of code reuse, composition in how you build larger things from smaller things, and the constraint of maintaining a small codebase requires that higher order elements be composed from smaller, simpler elements.  The triangulation code currently relies on some C++ (for vector), so it can’t go into the core library as yet (straight C only), but I’m sure it will get there.

Well, there you have it, the little library that could is gaining more capabilities all the time. Soon enough, these capabilities will allow the composition of some fairly complex UI elements.


Render Text like a Boss

And it looks like this:

verdana18

One of the most rewarding parts of bringing up a graphics system is when you’re able to draw some text.  I have finally reached that point with the graphicc library.  There are many twists and turns along the way, some of which are interesting, but I’ll just share a bit of the journey here.

First of all, the code that generates this image looks like this:

#include "test_common.h"
#include "animwin32.h"

struct font_entry {
	char *name;
	const uint8_t *data;
};

struct font_entry fontlist[] = {
	{"gse4x6", gse4x6 },
	{ "gse4x8", gse4x8 },
	{ "gse5x7", gse5x7 },
	{ "gse5x9", gse5x9 },
	{ "gse6x12", gse6x12 },
	{ "gse6x9", gse6x9 },
	{ "gse7x11", gse7x11 },
	{ "gse7x11_bold", gse7x11_bold },
	{ "gse7x15", gse7x15 },
	{ "gse7x15_bold", gse7x15_bold },
	{ "gse8x16", gse8x16 },
	{ "gse8x16_bold", gse8x16_bold },
	{ "mcs11_prop", mcs11_prop },
	{ "mcs11_prop_condensed", mcs11_prop_condensed },
	{ "mcs12_prop", mcs12_prop },
	{ "mcs13_prop", mcs13_prop },
	{ "mcs5x10_mono", mcs5x10_mono },
	{ "mcs5x11_mono", mcs5x11_mono },
	{ "mcs6x10_mono", mcs6x10_mono },
	{ "mcs6x11_mono", mcs6x11_mono },
	{ "mcs7x12_mono_high", mcs7x12_mono_high },
	{ "mcs7x12_mono_low", mcs7x12_mono_low },
	{ "verdana12", verdana12 },
	{ "verdana12_bold", verdana12_bold },
	{ "verdana13", verdana13 },
	{ "verdana13_bold", verdana13_bold },
	{ "verdana14", verdana14 },
	{ "verdana14_bold", verdana14_bold },
	{ "verdana16", verdana16 },
	{ "verdana16_bold", verdana16_bold },
	{ "verdana17", verdana17 },
	{ "verdana17_bold", verdana17_bold },
	{ "verdana18", verdana18 },
	{ "verdana18_bold", verdana18_bold }

};
static const int numfonts = sizeof(fontlist) / sizeof(fontlist[0]);
static int fontidx = 0;



LRESULT CALLBACK keyReleased(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
	switch (wParam)
	{
		case VK_UP:
			fontidx--;
			if (fontidx < 0) {
				fontidx = numfonts - 1;
			}
		break;

		case VK_DOWN:
			fontidx++;
			if (fontidx >= numfonts){
				fontidx = 0;
			}
		break;

		case VK_SPACE:
			write_PPM("test_agg_raster_fonts.ppm", gpb);
		break;
	}

	return 0;
}

void setup()
{
	size(640, 480);
	background(pWhite);

	setOnKeyReleasedHandler(keyReleased);
}


static char CAPS[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
static char LOWS[] = "abcdefghijklmnopqrstuvwxyz";
static char NUMS[] = "1234567890";
static char SYMS[] = "!@#$%^&*()_+-={}|[]\\:\"; '<>?,./~`";
static char SOME[] = "The quick brown fox jumped over the lazy dog.";

void draw()
{
	setFont(fontlist[fontidx].data);

	fill(pBlack);
	text(fontlist[fontidx].name, 0, 0);
	text(CAPS, 0, gfont.height * 1);

	fill(pRed);
	text(LOWS, 0, gfont.height * 2);

	fill(pGreen);
	text(NUMS, 0, gfont.height * 3);

	fill(pBlue);
	text(SYMS, 0, gfont.height * 4);

	fill(pBlack);
	text(SOME, 0, gfont.height * 5);
}

Many of the test programs in graphicc have this same layout. If you’ve ever done any programming using Processing, you might be familiar with the ‘setup()’ and ‘draw()’ methods. The animwin32 file is responsible for this little construction. Basically, it provides a “Processing like” environment, which makes it really easy to play with various graphics routines. It also does mouse/keyboard handling, so you can set up to deal with key presses and mouse actions. In this case, I use the key up/down actions to change the font from a list of font data. Pressing ‘SPACE’ will write a dump of the screen to a .ppm file.

That font data was borrowed from the Anti-Grain Geometry library, because it’s a nice compact bitmap representation of a few fonts.  Just a brief note on the AGG library: it is a fairly well written graphics library, created by Maxim Shemanarev, who died way too young I think.  The library is an inspiration and source of knowledge to anyone doing 2D graphics.

The text() function will render the string using whatever is the currently selected font, fill color, and specified location. The location specifies the upper left of the font box. There are various text rendering attributes which could be employed, such as alignment, a draw box, whether to use the upper left corner or the baseline, etc. All in due time.

The guts of the text() routine look like this:

// Text Processing
void text(const char *str, const int x, const int y)
{
	scan_str(gpb, &gfont, x, y, str, fillColor);
}

void setFont(const uint8_t *fontdata)
{
	font_t_init(&gfont, fontdata);
}

Very simple yes?

And deeper into the rabbit hole…

int scan_str(pb_rgba *pb, font_t *font, const int x, const int y, const char *chars, const int color)
{
	glyph_t ginfo;

	int idx = 0;
	int dx = x;
	int dy = y;

	while (chars[idx])
	{
		glyph_t_init(font, &ginfo, chars[idx]);
		scan_glyph(pb, font, &ginfo, dx, dy, color);
		dx += ginfo.width;
		idx++;
	}

	return dx;
}

Basically, get the glyph information for each character, and scan that glyph into the pixel buffer. The scan_glyph() in turn takes the packed bitmap information, unpacks it into individual ‘cover spans’, then turns that into pixels to be placed into the pixel buffer.

Lots can be done to optimize this activity. For example, all the glyphs could be rendered into a bitmap once, and then just blitted into the pixel buffer when needed. This would essentially be a tradeoff between space and time. That’s a choice easily made at a higher level than the lowest graphics routines, so the low level graphics routines don’t make that assumption; they only deal with the packed form of the data. I find this kind of design choice to be one of the most interesting parts of system building, because it determines the profile of the overall system, and the constraints on what type of system it will run on.
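
A sketch of that space-for-time tradeoff, sitting above the low-level routines: render each glyph once on first use, then reuse the cached bitmap on every subsequent draw.  Everything here (`cached_glyph`, `render_glyph_bitmap`, `get_glyph`) is hypothetical scaffolding, not part of graphicc; in the real thing, the render step would do the unpacking work that scan_glyph does today:

```c
#include <stdlib.h>
#include <stddef.h>

// Hypothetical cached-glyph record: an unpacked coverage bitmap.
typedef struct {
	int width, height;
	unsigned char *cover;   // width*height coverage values
} cached_glyph;

static cached_glyph *glyph_cache[128];  // one slot per ASCII code
static int render_count = 0;            // how many real renders happened

// Stand-in for unpacking a glyph from the packed font data; this is
// where scan_glyph's per-character work would actually go.
static cached_glyph *render_glyph_bitmap(int ch)
{
	cached_glyph *g = calloc(1, sizeof(*g));
	g->width = 8;
	g->height = 16;                     // dummy metrics for the sketch
	g->cover = calloc((size_t)(g->width * g->height), 1);
	(void)ch;
	render_count++;
	return g;
}

// Render on first use; reuse the cached bitmap afterwards.
cached_glyph *get_glyph(int ch)
{
	if (ch < 0 || ch >= 128) return NULL;
	if (!glyph_cache[ch])
		glyph_cache[ch] = render_glyph_bitmap(ch);
	return glyph_cache[ch];
}
```

A scan_str built on top of this would call get_glyph per character and blit the cached bitmap, paying the unpacking cost only once per glyph instead of once per draw.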

So, there you have it.  At this point, the graphicc library can do primitives from single pixels to bezier curves, and text.  With the help of the animwin32 routines, you can do simple UI programming as if you were using the 2D portion of Processing.  I find this to be a good plateau in the development of the library.  At this point, it makes sense to reconsider the various design choices made, clean up the API, throw in some ‘const’ here and there, consider error checking, and the like.  The next step after that is to consider higher level libraries, which might be purpose built.  I’ll want to incorporate 3D at some point, and there are hints of that already in the routines, but I don’t want to get bogged down in that just yet.  Good image scaling and rotation is probably a first priority before lighting models and 3D rendering.

I would love to be able to create graphics system profiles, such as: can I mimic GDI, or the HTML Canvas routines?  I can already do Processing to a certain extent.  My first test of the library, though, is something much simpler.  I want to be able to render the state of a network of machines performing some tasks.  First pass will likely be simple 2D graphs of some sort, so time to get on it.

