Have Computicles Arrived?

So, I’ve written quite a lot about computicles over the past few years.  In most of those articles, I’m talking about the software implementation of tiny units of computation.  The idea for computicles stems from a conversation I had with my daughter circa 2007, in which I was laying out a grand vision of the world where units of computation would be really small, small enough to fit in your hand, and able to connect and do stuff fairly easily.  That was my envisioning of ubiquitous computing.  And so, yesterday, I received the latest creation from HardKernel, the Odroid HC1 (HC – Home Cloud).


Hardkernel is a funny company.  I’ve been buying at least one of everything they’ve made for the past 5 years or so.  They essentially make single board computers in the “credit card” form factor.  What you see in the picture is the HC1, with an attached 120GB SSD.  The SSD is a standard 2.5″ drive, so that gives you a sense of the size of the thing.  The black part is aluminum; it’s the heat sink for the unit.

The computer bit of it is essentially a reworked Odroid XU4, which all by itself is quite a strong computer in this category.  Think Raspberry Pi, but 4 or 5 times stronger.  The HC1 has a single SATA connector to slot in whatever hard storage you choose.  No extra ribbon cables or anything like that.  The XU4 itself can run variants of Linux, as well as Android.  The uSD card sticking out of the right side provides the OS.  In this particular case I’m using OMV (Open Media Vault), because I wanted to try the unit out as a NAS in my home office.

One of the nice things about the HC1 is that it’s stackable.  So, I can see piling up 3 or 4 of these to run my local server needs.  Of course, when you compare them to the giant beefy 64-bit super server that I’m currently typing at, these toy droids give it very little competition in the way of absolute compute power.  Hardkernel even did an analysis of bitcoin mining and determined it would take years to see a return on the investment.  But computing, and computicles, aren’t about absolute concentrated power.  Rather, they are about distributed processing.

Right now I have a Synology, probably the equivalent of today’s DS1517.  That thing has RAID up the wazoo, redundant power, multiple NICs, and it’s a reliable workhorse that simply won’t quit, so far.  The base price starts at $799, before you actually start adding storage media.  The HC1 starts at $49.  Of course there’s no real comparison in terms of reliability, redundancy, and the like.  Or is there?

Each HC1 can hold a single disk.  You can throw on whatever size and variety you like.  This first one has a Samsung SSD that’s a couple of years old, at 120GB.  These days you can get 250GB for $90.  You can go up to 4TB with an SSD, but that’s more like a $1,600 proposition.  So, I’d be sticking with the low end.  That makes a reasonable storage computicle roughly $150.

You could of course skip the SSD or HDD and just stick in a largish USB thumb drive, or a 128GB uSD card for $65, but that interface isn’t going to be nearly as fast as the SATA III interface the HC1 is sporting.  So, great for small-time use, but for relatively constant streaming and downloading, the SSD, and even HDD, solutions will be more robust.

So, what’s the use case?  Well, besides the pure geekery of the thing, I’m trying to imagine more appliance-like usage.  I’m imagining what it looks like to have several of these placed throughout the house.  Maybe one is configured as a YouTube downloader, and that’s all it does, all the time, shunting to the larger Synology every once in a while.

How about video streaming?  Right now that’s served up by the Synology running a Plex server, which is great for most clients, but sometimes, it’s just plain slow, particularly when it comes to converting video clips from ancient cameras and cell phones.  Having one HC1 dedicated to the task of converting clips to various formats that are known to be used in our house would be good.  Also, maybe serving up the video itself?  The OMV comes with a minidlna server, which works well with almost all the viewing kit we have.  But, do we really have any hiccups when it comes to video streaming from the Synology?  Not enough to worry about, but still.

Maybe it’s about multiple redundant distributed RAID.  With 5 – 10 of these spread throughout the house, each one could fail in time, be replaced, and nothing would be the wiser.  I could load each with a couple of terabytes, configure some form of pleasant redundancy across them, and be very happy.  But, then there’s the cloud.  I actually do feel somewhat reassured having the ability to backup somewhere else.  As recent flooding in Texas shows, as well as wildfires, no matter how much redundancy you have locally, it’s still local.

Then there’s compute.  Like I said, a single beefy x64 machine with a decent GPU is going to smoke any single one of these, and likewise a small cluster of them.  But that doesn’t mean they’re not useful.  Odroid boards are ARM based, which makes them inherently low power compared to their Intel counterparts.  If I have some relatively light loads that are trivially parallelizable, then having a cluster of a few of these might make some sense.  Again with the ubiquitous computing: if I want to have the Star Trek style “computer, where’s my son”, or “turn on the lights in the garage”, without having to send my voice to the cloud constantly, then performing operations such as speech recognition on a little cluster might be interesting.

The long and short of it is that having a compute/storage module in the $150 range makes for some interesting thinking.  It’s surely not the only storage option in this range, but the combination of darned good hardware, tons of software support, low price and easy assembly, gives me pause to consider the possibilities.  Perhaps the hardware has finally caught up to the software, and I can start realizing computicles in physical as well as soft form.


Computicles – A tale of two schedulers

One of the drivers for the creation of computicles is to maximize the efficiency of the running system while minimizing the complexity for the programmer.  Herein lies the rub.  Modern computers are multi-core/multi-proc, and Lua is largely a single core sort of system.  Lua has its own notion of “light weight threads”, which are essentially cooperative processing threads.  The native OS (Windows or Linux) has a notion of “threads” which are much more heavy weight.  While the Lua threads can easily number in the thousands and more, they are not actually running in parallel; they are just rapidly context switching between each other at the whim of the programmer.  The OS threads, on the other hand, are in fact running in parallel, on multiple cores if they exist.  But, as soon as you have more threads than you have cores, the threads are shifting rapidly between each other, just like in the Lua case, but it’s ‘preemptive’ instead of cooperative.
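The cooperative half of that story can be sketched with stock Lua coroutines, with no TINN machinery at all.  This is a minimal illustration, not code from the project: each “thread” runs only until it voluntarily yields, and the scheduler simply round-robins the survivors.

```lua
-- A minimal sketch of cooperative scheduling with stock Lua coroutines.
-- Each "thread" runs until it voluntarily yields; nothing is preempted.
local function worker(name, steps)
  return coroutine.create(function()
    for i = 1, steps do
      coroutine.yield(name .. " step " .. i)
    end
  end)
end

local threads = { worker("A", 2), worker("B", 2) }
local log = {}

-- Round-robin until every coroutine has finished
while #threads > 0 do
  local co = table.remove(threads, 1)
  local ok, msg = coroutine.resume(co)
  if ok and msg then
    log[#log + 1] = msg
    threads[#threads + 1] = co   -- still alive; put it back on the queue
  end
end

for _, line in ipairs(log) do print(line) end
-- A step 1, B step 1, A step 2, B step 2
```

Nothing here ever runs simultaneously; a pool of preemptive OS threads would interleave the same work without any of the yield calls, which is exactly the trade-off described above.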

What do I want?  I want to get the best of both worlds.  But, before starting down the path of leveraging the multiple cores, I want to start with the programming paradigm.

I want to write essentially serial code.  My brain is not good at dealing with things like mutexes, semaphores, barriers, or any other kind of sharing mechanism that has been invented over the past 40 years.  I know how to write straight sequential code.  I can deal with saying “spawn” to get something running in parallel, but that’s about it.

So, in steps computicles.

I’ve gone on about the subject a few times now, but I’ve finally created the unified scheduler that I require.  It looks like this:


-- comp_msgpump.lua
local ffi = require("ffi");
require("IOProcessor");

-- default to 15 millisecond timeout
gIdleTimeout = gIdleTimeout or 15

local idlecount = 0;

while true do
  if IOProcessor then
    IOProcessor:step();
  end

  local msg, err = SELFICLE:getMessage(gIdleTimeout);

  if not msg then
    if err == WAIT_TIMEOUT then
      --print("about to idle")
      idlecount = idlecount + 1;
      if OnIdle then
        OnIdle(idlecount);
      end
    end
  else
    local msgFullyHandled = false;
    msg = ffi.cast("ComputicleMsg *", msg);

    if OnMessage then
      msgFullyHandled = OnMessage(msg);
    end

    if not msgFullyHandled then
      local Message = msg.Message;
      --print("Message: ", Message, msg.Param1, msg.Param2);
		
      if Message == Computicle.Messages.QUIT then
        if OnExit then
          OnExit();
        end
        break;
      end

      if Message == Computicle.Messages.CODE then
        local len = msg.Param2;
        local codePtr = ffi.cast("const char *", msg.Param1);
		
        if codePtr ~= nil and len > 0 then
          local code = ffi.string(codePtr, len);

          SELFICLE:freeData(ffi.cast("void *",codePtr));

          local func = loadstring(code);
          func();
        end
      end
      SELFICLE:freeMessage(msg);
    end
  end
end

This is pretty much the same event driven loop that has existed previously. Its main function is to get messages off its message queue and deal with them. This is how you communicate with a computicle. Under normal circumstances, a Computicle can simply implement either OnMessage(), if it wants to respond only when it receives a message. This is a perfectly event driven way to exist. Or it can implement OnIdle(), if it wants to respond to the fact that nothing else is occurring in the system. This is a great combination, and will cover many useful cases. But what about waiting for some IO to complete?
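The hook convention itself is easy to see in isolation. Here is a pure-Lua sketch of the dispatch rule the pump follows; a plain table stands in for the message queue, nil entries play the role of getMessage() timing out, and none of the SELFICLE machinery is involved:

```lua
-- Pure-Lua sketch of the OnMessage/OnIdle hook convention.
local queue = { "first", nil, "second", nil }  -- nil = "timeout"

local handled = {}
local idles = 0

-- hooks the user's script may or may not define
local OnMessage = function(msg)
  handled[#handled + 1] = msg
  return true                       -- message fully handled
end

local OnIdle = function(count)
  idles = count
end

local idlecount = 0
for i = 1, 4 do
  local msg = queue[i]
  if not msg then                   -- nothing arrived this pass
    idlecount = idlecount + 1
    if OnIdle then OnIdle(idlecount) end
  else
    local msgFullyHandled = false
    if OnMessage then msgFullyHandled = OnMessage(msg) end
    if not msgFullyHandled then
      -- a real pump falls back to default QUIT/CODE handling here
    end
  end
end
```

Because both hooks are looked up by name on every pass, a computicle can gain or lose either behavior at any time, which is what makes the code-injection tricks later in this series work.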

Well, at the top of this event loop there is the IOProcessor:step() call. And what is an IOProcessor?

The IOProcessor is a scheduler for cooperative Lua threads. The IOProcessor assumes the user’s code is utilizing co-routines, and will deal with putting them on a ‘sleeping’ list whenever they perform a task, such as socket IO which does not complete immediately. It’s a classic, and before the Computicles existed, this was the primary scheduler.

It’s a bit thick with code, but here it is:

local ffi = require("ffi");

local Collections = require "Collections"
local IOCPSocket = require("IOCPSocket");
local SimpleFiber = require("SimpleFiber");
local IOCompletionPort = require("IOCompletionPort");
local SocketOps = require("SocketOps");


IOProcessor = {
  fibers = Collections.Queue.new();
  coroutines = {};
  EventFibers = {};
  FibersAwaitingEvent = {};

  IOEventQueue = IOCompletionPort:create();
  MessageQuanta = 15;		-- 15 milliseconds
};


--[[
	Socket Management
--]]

IOProcessor.createClientSocket = function(self, hostname, port)
  return IOCPSocket:createClient(hostname, port, self)
end

IOProcessor.createServerSocket = function(self, params)
  return IOCPSocket:createServerSocket(params, self)
end

IOProcessor.observeSocketIO = function(self, socket)
  return self.IOEventQueue:addIoHandle(socket:getNativeHandle(), socket.SafeHandle);
end

--[[
	Fiber Handling
--]]

IOProcessor.scheduleFiber = function(self, afiber, ...)
  if not afiber then
    return nil
  end
  self.coroutines[afiber.routine] = afiber;
  self.fibers:Enqueue(afiber);	

  return afiber;
end

IOProcessor.spawn = function(self, aroutine, ...)
  return self:scheduleFiber(SimpleFiber(aroutine, ...));
end

IOProcessor.removeFiber = function(self, fiber)
  self.coroutines[fiber.routine] = nil;
end

IOProcessor.inMainFiber = function(self)
  return coroutine.running() == nil; 
end

IOProcessor.yield = function(self)
  coroutine.yield();
end

IOProcessor.yieldForIo = function(self, sock, iotype)
  -- associate a fiber with a socket
  print("yieldForIo, CurrentFiber: ", self.CurrentFiber);
	
  self.EventFibers[sock:getNativeSocket()] = self.CurrentFiber;

  -- Keep a list of fibers that are awaiting io
  if self.CurrentFiber ~= nil then
    self.FibersAwaitingEvent[self.CurrentFiber] = true;

    -- Whether we were successful or not in adding the socket
    -- to the pool, perform a yield() so the world can move on.
    self:yield();
  end
end


IOProcessor.processIOEvent = function(self, key, numbytes, overlapped)
    local ovl = ffi.cast("SocketOverlapped *", overlapped);
    local sock = ovl.sock;
    ovl.bytestransferred = numbytes;
    if sock == INVALID_SOCKET then
      return false, "invalid socket"
    end

    --print("IOProcessor.processIOEvent(): ", sock, ovl.operation);

    local fiber = self.EventFibers[sock];
    if fiber then
      self:scheduleFiber(fiber);
      self.EventFibers[sock] = nil;
      self.FibersAwaitingEvent[fiber] = nil;
    else
      print("IOProcessor.processIOEvent(), No Fiber waiting to process.")
      -- remove the socket from the watch list
    end
end

IOProcessor.stepIOEvents = function(self)
    -- Check to see if there are any IO Events to deal with
    local key, numbytes, overlapped = self.IOEventQueue:dequeue(self.MessageQuanta);

    if key then
      self:processIOEvent(key, numbytes, overlapped);
    else
      -- typically timeout
      --print("Event Pool ERROR: ", numbytes);
    end
end

IOProcessor.stepFibers = function(self)
  -- Now check the regular fibers
  local fiber = self.fibers:Dequeue()

  -- Take care of spawning a fiber first
  if fiber then
    if fiber.status ~= "dead" then
      self.CurrentFiber = fiber;
      local result, values = fiber:Resume();
      if not result then
        print("RESUME RESULT: ", result, values)
      end
      self.CurrentFiber = nil;

      if fiber.status ~= "dead" and not self.FibersAwaitingEvent[fiber] then
        self:scheduleFiber(fiber)
      else
        --print("FIBER FINISHED")
        -- remove coroutine from dictionary
        self:removeFiber(fiber)
      end
    else
      self:removeFiber(fiber)
    end
  end
end

IOProcessor.step = function(self)
  self:stepFibers();
  self:stepIOEvents();
end

return IOProcessor

There are a couple of ways to approach this. From the perspective of the other event loop, the “step()” method here is executed once around the loop. The ‘step()’ method in turn checks on the fibers, and then on the ioevents. “stepFibers” checks the list of fibers that are ready to run, and runs one of them for a bit until it either yields, and is thus placed back on the queue of fibers ready to be run, or it finishes. This is the part where a normal cooperative processing system could be brought to its knees, and a preemptive multi-tasking system would just keep going. The ‘stepIOEvents()’ function checks on the IOCompletionPort that is being used by sockets to indicate whether anything interesting has occurred. If there has been any activity, the cooperative thread associated with the activity is scheduled to execute a bit of code. It does not execute immediately; it is simply placed on the list to be executed the next time around.

The stepIOEvents() function is at the heart of any system, such as node.js, which gets high performance with IO processing, while maintaining a low CPU load. Most of the time you’re just waiting, doing nothing, and once the system indicates there is action on the socket, you can spring into action. Thus, you do not spend any time looping over sockets polling to see if there’s any activity, you’re just notified when there is.
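The pairing between yieldForIo() and processIOEvent() can be boiled down to a pure-Lua sketch. Here a plain table keyed by socket id stands in for the IOCompletionPort, and the “completion event” is delivered by hand; this illustrates the park-and-wake mechanism, not the real IOCP plumbing:

```lua
-- Sketch of the wait/wake pairing: a fiber parks itself keyed by a
-- socket id, and a completion event re-schedules exactly that fiber.
local EventFibers = {}        -- socket id -> parked coroutine
local ready = {}              -- run queue of fibers ready to resume

local function yieldForIo(sock)
  EventFibers[sock] = coroutine.running()
  coroutine.yield()           -- park until an event arrives
end

local function processIOEvent(sock)
  local fiber = EventFibers[sock]
  if fiber then
    EventFibers[sock] = nil
    ready[#ready + 1] = fiber -- schedule it; it does not run immediately
  end
end

local result
local fiber = coroutine.create(function()
  yieldForIo(42)              -- pretend socket 42 has a read pending
  result = "read complete"    -- continues only after the event fires
end)

coroutine.resume(fiber)       -- runs until yieldForIo parks it
processIOEvent(42)            -- completion arrives; fiber is queued
coroutine.resume(table.remove(ready, 1))
assert(result == "read complete")
```

No socket is ever polled: the parked fiber consumes no cycles at all until its event shows up, which is the whole trick behind the low CPU load.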

The rest of the code is largely helpers, like creating a socket that is wired correctly and whatnot.

So, at the end of it, what does this system do?

Well, assuming I want to write a DateTimeClient, which talks to a service, gets the date and time, prints it out, etc, I would write this:

local ffi = require "ffi"
require("IOProcessor");

local daytimeport = 13

GetDateAndTime = function(hostname, port)
    hostname = hostname or "localhost";
    port = port or daytimeport;

    local socket, err = IOProcessor:createClientSocket(hostname, port);

    if not socket then
        print("Socket Creation Failed: ", err);
        return nil, err;
    end

    local bufflen = 256;
    local buff = ffi.new("char [?]", bufflen);

    local n, err = socket:receive(buff, bufflen)
 
    if not n then
        return false, err;
    end

    if n > 0 then
        return ffi.string(buff, n);
    end
end

Well, that looks like normal sequential code to me. And yes, it is. Nothing unusual. But, when running in the context of a computicle, like the following, it gets more interesting.

local Computicle = require("Computicle");

local codeTemplate = [[
require("DaytimeClient");
local dtc, err = GetDateAndTime("localhost");
print(dtc);
]]

local comp1 = Computicle:create(codeTemplate);
local comp2 = Computicle:create(codeTemplate);

comp1:waitForFinish();
comp2:waitForFinish();

This will take the otherwise sequential code of the DateTimeClient, and execute it in two parallel preemptive Operating System level threads, and wait for their completion. The magic is all hidden behind the schedulers and event loops. I never have to know about how that all happens, but I can appreciate the benefits.

Marrying the concepts of event driven, multi-process, cooperative user space threads, preemptive multi-tasking, and the like can be a daunting task. But, with a little bit of script, some chewing gum, a whistle and a prayer, it all comes together in a seamless whole which is quite easy to use, and fairly simple to understand. I think I have achieved my objectives for ease of use, maximizing efficiency, and reducing head explosions.


Computicles – Simplifying Chaining

Using the “exec()” method, I can chain computicles together, but the code is kind of raw. There is one addition I left out of the equation last time around, and that is the hydrate/rehydrate of a computicle variable itself. Now it looks like this:

elseif dtype == "table" then
  if getmetatable(data) == Computicle_mt then
    -- package up a computicle
    datastr = string.format("Computicle:init(TINNThread:StringToPointer(%s),TINNThread:StringToPointer(%s));", 
      TINNThread:PointerToString(data.Heap:getNativeHandle()), 
      TINNThread:PointerToString(data.IOCP:getNativeHandle()));
  elseif getmetatable(data) == getmetatable(self.IOCP) then
    -- The data is an iocompletion port, so handle it specially
    datastr = string.format("IOCompletionPort:init(TINNThread:StringToPointer(%s))",
      TINNThread:PointerToString(data:getNativeHandle()));
  else
    -- get a json string representation of the table
    datastr = string.format("[[ %s ]]",JSON.encode(data, {indent=true}));
  end

This is just showing the types that are descended from the ‘table’ type. This includes IOCompletionPort and Computicle. Anything other than these two will be serialized as a fairly straightforward key/value pair table. With this in place, I can now do the following.

-- Setup the splitter that will dispatch to the leaf nodes
local splitter = Computicle:load("comp_splittercode");

-- Setup the source, which will be originating messages
local source = Computicle:load("comp_sourcecode");
source.sink = splitter;

The code for the comp_sourcecode computicle looks like this:


RunOnce = function(sink)
  print("comp_sourcecode, RunOnce...")
  for i = 1, 10 do
    sink:postMessage(i);
  end
end

OnIdle = function(counter)
  print("comp_sourcecode, OnIdle(): ", counter)
  if sink ~= nil then 
    RunOnce(sink);

    sink = nil;
    exit();
  end
end

This computicle implements the OnIdle() function, as it will use that to determine when it can send messages off to the sink. If the sink hasn’t been set yet, then it will do nothing. If it has been set, then it will execute the RunOnce() function, sending the sink a bunch of messages.

After performing the RunOnce() task, the computicle will simply exit(). The exit() call is equivalent to SELFICLE:quit(), which in turn just does SELFICLE:postMessage(Computicle.Messages.QUIT).

And that’s the end of that computicle. The splitter has become much simpler as well.

local ffi = require("ffi");


OnMessage = function(msg)
	msg = ffi.cast("ComputicleMsg *", msg);
	local Message = msg.Message;
--print("comp_splittercode, OnMessage(): ", Message);

	if sink1 then
		sink1:postMessage(Message);
	end	

	if sink2 then
		sink2:postMessage(Message);
	end
end

Here, the splitter assumes there are two sinks. If either of them exists, it will receive the message that was passed into the splitter. If they don’t exist, then the message will be dropped. Of course, other behaviors, such as holding on to messages, could be implemented as well.

In this case, the computicle is not exited internally. This is because the splitter itself does not know when it is finished, all it knows is that when it receives a message, it is supposed to pass that message on to its two sinks.

The sink code is equally simple.

local ffi = require("ffi");

OnMessage = function(msg)
  msg = ffi.cast("ComputicleMsg *", msg);
  local Message = msg.Message;

  print(msg.Message*10);
end

Here again, just implement the ‘OnMessage()’ function, and the comp_msgpump code will call it automatically. And again, the sink does not know when it is done, so it does not explicitly exit.

Pulling it all together, the exiting logic becomes more clear:

local Computicle = require("Computicle");

-- Setup the leaf nodes to receive messages
local sink1 = Computicle:load("comp_sinkcode");
local sink2 = Computicle:load("comp_sinkcode");


-- Setup the splitter that will dispatch to the leaf nodes
local splitter = Computicle:load("comp_splittercode");

splitter.sink1 = sink1;
splitter.sink2 = sink2;


-- Setup the source, which will be originating messages
local source = Computicle:load("comp_sourcecode");
source.sink = splitter;


-- Close everything down from the outside
print("FINISH source: ", source:waitForFinish());

-- tell the splitter to quit
splitter:quit();
print("FINISH splitter:", splitter:waitForFinish());

-- the two sinks will receive the quit message from the splitter
-- so, just wait for them to quit.
print("Finish Sink 1: ", sink1:waitForFinish());
print("Finish Sink 2: ", sink2:waitForFinish());

There is an order in which computicles are created, and assigned, which stitches them together. The computicles could actually be constructed in a “suspended state”, but that would not help matters too much.

At the end, the quit() sequence can clearly be seen. First, wait for the source to be finished. Then tell the splitter to quit(). This is not an interrupt, so whatever was in its queue previously will flow through before the QUIT is processed. Then finally, do the same thing on the sinks, after the splitter has finished.

All messages flow, and all processing finishes cleanly. So, this is one way to perform the exit mechanism from the outside, as well as from the inside.

Adding this bit of ease in programming did not change the fundamentals of the computicle. It still has a single communication mechanism (the queue). It still performs computation, and it still communicates with the outside world. The new code saves the error prone process of type casting and hydrating objects that the system already knows about. This in turn makes the usage pattern that much easier to deal with.

The comp_msgpump was added in a fairly straightforward way. When a computicle is constructed, there is a bit of code (a Prolog) that is placed before the user’s code, and a bit of code (an Epilog) placed after it. Those two bits of code look like this:

Computicle = {
	Prolog = [[
TINNThread = require("TINNThread");
Computicle = require("Computicle");
IOCompletionPort = require("IOCompletionPort");
Heap = require("Heap");

exit = function()
    SELFICLE:quit();
end
]];

Epilog = [[
require("comp_msgpump");
]];
}

Computicle.createThreadChunk = function(self, codechunk, params, codeparams)
	local res = {};

	-- What we want to load before any other code is executed
	-- This is typically some 'require' statements.
	table.insert(res, Computicle.Prolog);


	-- Package up the parameters that may want to be passed in
	local paramname = "_params";

	table.insert(res, string.format("%s = {};", paramname));
	table.insert(res, self:packParams(params, paramname));
	table.insert(res, self:packParams(codeparams, paramname));

	-- Create the Computicle instance that is running within 
	-- the Computicle
	table.insert(res, 
		string.format("SELFICLE = Computicle:init(%s.HeapHandle, %s.IOCPHandle);", paramname, paramname));


	-- Stuff in the user's code
	table.insert(res, codechunk);

	
	-- What we want to execute after the user's code is loaded
	-- By default, this will be a message pump
	table.insert(res, Computicle.Epilog);

	return table.concat(res, '\n');
end

If an application needs a different prolog or epilog, it’s fairly easy to change them, either by changing the file that is being read in, or by changing the string itself: Computicle.Prolog = [[print("Hello, World")]]

And so it goes. I’ve never had it so easy creating multi-threaded bits of computation and stitching those bits together.


Computicles – Unsafe, but fun, code injection

Computicles are capable of receiving bits of code to execute. That is in fact one way in which they communicate with each other. From one context I can simply do: comp.someValue = 100, and that will set “someValue = 100” in another context. Well, great. How about sending something across that’s more substantial than a simple variable setting?

Well, with the Computicle.exec() method, which the previous example is based on, you can really send across any bit of code to be executed. It works, and you can even send across the body of a function like this:

comp:exec[[
function hello()
  print("Hello, World!");
end
]]

That works fine, and you have full access to all the stuff that’s running in that context. It’s a bit clunky though, because the entirety of your piece of code is wrapped up in a string value. This is very similar to executing SQL code on some server: the bulk of your code, if not in “stored procedures”, ends up in these opaque strings. Hard to catch errors, hard to debug, etc. How about the following:

comp.hello = function()
  print("Hello, World!");
end

What’s that, you say? Basically, assigning a function as a variable to the computicle. Yes, this can work. There is a slight modification to turn a function into a string, and it looks like this:

Computicle.datumToString = function(self, data, name)
	local dtype = type(data);
	local datastr = tostring(nil);

--print("DATUM TYPE: ", name, dtype);

	if type(data) == "cdata" then
		-- If it is a cdata type that easily converts to 
		-- a number, then convert to a number and assign to string
		if tonumber(data) then
			datastr = tostring(tonumber(data));
		else
			-- if not easily converted to number, then just assign the pointer
			datastr = string.format("TINNThread:StringToPointer(%s);", 
				TINNThread:PointerToString(data));
		end
	elseif dtype == "table" then
		if getmetatable(data) == Computicle_mt then
			-- package up a computicle
		else
			-- get a json string representation of the table
			datastr = string.format("[[ %s ]]",JSON.encode(data, {indent=true}));

			--print("=== JSON ===");
			--print(datastr)
		end
	elseif dtype == "function" then
		datastr = "loadstring([["..string.dump(data).."]])";
	elseif dtype == "string" then
		datastr = string.format("[[%s]]", data);
	else 
		datastr = tostring(data);
	end

	return datastr;
end

Notice the part that starts: ‘elseif dtype == “function” then’.

Basically, string.dump() turns a function into bytecode, which can later be rehydrated using loadstring(). The bytecode is placed inside ‘loadstring([[…]])’, because this is the only way to preserve the actual 8-bit values. If you use string.format(“%s”), it will chop out the non-ASCII bytes.

Then, this code can be executed in the remote context just the same as any other little bit of code. There are some restrictions to what kinds of functions you can do this with though. No ‘upvalues’, meaning, everything must be contained within the function itself, no calling out to global variables and the like.
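The round trip is easy to try in plain Lua, outside of any computicle. A small sketch (works as-is on Lua 5.1/LuaJIT, where loadstring exists; the compat line makes it run on 5.2+, where load takes over that role):

```lua
-- Round trip of the function-as-bytecode trick: dump a function to a
-- bytecode string, then rehydrate and call it.
local loadstring = loadstring or load   -- Lua 5.2+ compatibility

-- No upvalues: the function must be fully self-contained.
local function hello(count)
  return "Hello x " .. count
end

local bytecode = string.dump(hello)     -- an opaque 8-bit string
local rehydrated = assert(loadstring(bytecode))

print(rehydrated(3))   --> Hello x 3
```

In the computicle case that bytecode string crosses a thread boundary before being rehydrated, but the mechanism is exactly this one.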

Of course, that just gets the code over to the other side in a convenient way. What about actually executing the code?

comp:exec([[hello()]]);

Same as ever, just make the function call using “exec()”. Of course, this little bit could be wrapped up in some other convenient override, like using the ‘:’ notation, but there’s some more work to be done before making that a reality.

What other functions can be dealt with? Well, the loop running within the computicle looks like the following now:

-- comp_msgpump.lua

local ffi = require("ffi");

-- This is a basic message pump
-- 

-- default to 15 millisecond timeout
gIdleTimeout = gIdleTimeout or 15


local idlecount = 0;

while true do
  local msg, err = SELFICLE:getMessage(gIdleTimeout);
  -- false, WAIT_TIMEOUT == timed out
  --print("MSG: ", msg, err);

  if not msg then
    if err == WAIT_TIMEOUT then
      --print("about to idle")
      idlecount = idlecount + 1;
      if OnIdle then
        OnIdle(idlecount);
      end
    end
  else
  	local msgFullyHandled = false;

    if OnMessage then
      msgFullyHandled = OnMessage(msg);
    end

    if not msgFullyHandled then
      msg = ffi.cast("ComputicleMsg *", msg);
      local Message = msg.Message;
      --print("Message: ", Message, msg.Param1, msg.Param2);
		
      if Message == Computicle.Messages.QUIT then
        break;
      end

      if Message == Computicle.Messages.CODE then
        local len = msg.Param2;
        local codePtr = ffi.cast("const char *", msg.Param1);
		
        if codePtr ~= nil and len > 0 then
          local code = ffi.string(codePtr, len);

          SELFICLE:freeData(ffi.cast("void *",codePtr));

          local func = loadstring(code);
          func();
        end
      end
      SELFICLE:freeMessage(msg);
    end
  end
end

This new message pump has a couple of interesting new features. The first has to do with getMessage(). It will time out after some number of milliseconds have passed without a message coming in. If there is an OnIdle() function defined, it will be called, passing it the current count. If that function does not exist, then nothing will happen.

The second change has to do with message processing. If there is an OnMessage() function defined, it will be called, and the message will be handed to it for processing. If that function does not exist, then the pump will perform some default actions, such as loading and executing code chunks, and handling a “QUIT”.

So, how about that OnIdle? That’s a function that takes a parameter. Can I inject that? Sure can:

comp.OnIdle = function(count)
  print("IDLE", count)
end

Just like the hello() case, except this one takes a parameter. This does not violate the “no upvalues” rule, so it’s just fine.

In this case, I don’t have to actually execute the code, because the loop within the computicle will pick up on it and execute it. And if you want to remove this idling function, simply do: ‘comp.OnIdle = nil’

And then what?

Well, that’s pretty cool. Now I can easily set simple variables, and I can set table values, and I can even set functions to be called. I have a generic message pumping loop, which has embedded “OnIdle”, and “OnMessage” functions.

The last little bit of work to be done here is to deal with asynchronous IO in a relatively seamless way using continuations, just like I had in the pre-computicle world. That’s basically a marriage of the core of the EventScheduler and the main event loop here.

Throw some authentication on top of it all and suddenly you have a system that is very cool, secure, and useful.


Computicles – Inter-computicle communication

Alrighty then, so a computicle is a vessel that holds a bit of computation power. You can communicate with it, and it can communicate with others.

Most computicles do not stand as islands unto themselves, so easily communicating with them becomes very important.

Here is some code that I want to be able to run:

local Computicle = require("Computicle");
local comp = Computicle:load("comp_receivecode");

-- start by saying hello
comp:exec([[print("hello, injector")]]);

-- queue up a quit message
comp:quit();

-- wait for it all to actually go through
comp:waitForFinish();

So, what’s going on here? The first line is a standard “require” to pull in the computicle module.

Next, I create a single instance of a Computicle, running the Lua code that can be found in the file “comp_receivecode.lua”. I’ll come back to that bit of code later. Suffice to say it’s running a simple computicle that does stuff, like execute bits of code that I hand to it.

Further on, I use the Computicle I just created, and call the “exec()” function. I’m passing a string along as the only parameter. What will happen is the receiving Computicle will take that string, and execute the script from within its own context. That’s actually a pretty nifty trick I think. Just imagine, outside the world of scripting, you can create a thread in one line of code, and then inject a bit of code for that thread to execute. Hmmm, the possibilities are intriguing methinks.

The tail end of this code just posts a quit, and then finally waits for everything to finish up. Just note that the ‘quit()’ function is not the same thing as “TerminateThread()”, or “ExitThread()”. Nope, all it does is post a specific kind of message to the receiving Computicle’s queue. What the thread does with that QUIT message is up to the individual Computicle.
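As a sketch of that distinction, ‘quit()’ can be nothing more than a postMessage() with a QUIT code. The class and message values below are hypothetical stand-ins, not the actual TINN implementation:

```lua
-- Hypothetical sketch: quit() as nothing more than a postMessage() with
-- a QUIT code. Names and values are stand-ins, not the TINN internals.
local Messages = { QUIT = 1 }

local Comp = {}
Comp.__index = Comp

function Comp.new()
  return setmetatable({ queue = {} }, Comp)
end

function Comp:postMessage(msg)
  table.insert(self.queue, { Message = msg })
end

function Comp:quit()
  -- queued behind anything already posted, so pending work still runs;
  -- nothing here forcibly terminates a thread
  self:postMessage(Messages.QUIT)
end

local c = Comp.new()
c:postMessage(42)   -- some pending work
c:quit()            -- the QUIT lands after it, not in front of it
```

Because the QUIT is just another message in the queue, the receiving loop finishes whatever is already queued before it ever sees it.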

Let’s have a look at the code for this computicle:

local ffi = require("ffi");

-- This is a basic message pump
-- 
while true do
  local msg = SELFICLE:getMessage();
  msg = ffi.cast("ComputicleMsg *", msg);
  local Message = msg.Message;

  if OnMessage then
    OnMessage(msg);
  else
    if Message == Computicle.Messages.QUIT then
      break;
    end

    if Message == Computicle.Messages.CODE then
      local len = msg.Param2;
      local codePtr = ffi.cast("const char *", msg.Param1);
		
      if codePtr ~= nil and len > 0 then
        local code = ffi.string(codePtr, len);

        SELFICLE:freeData(ffi.cast("void *",codePtr));

        local f = loadstring(code);
	f();
      end
    end
  end

  SELFICLE:freeMessage(msg);
end

It’s not too many lines. This little Computicle takes care of a few scenarios.

First of all, if there so happens to be a ‘OnMessage’ function defined, it will receive the message, and the main loop will do no further processing of it.

If there is no ‘OnMessage’ function, then the message pump will handle a couple of cases. If a ‘QUIT’ message is received, the loop will break and the thread/Computicle will simply exit.

When the message == ‘CODE’ things get really interesting. The ‘Param1’ of the message contains a pointer to the actual bit of code that is intended to be executed. The ‘Param2’ contains the length of the specified code.

Through a couple of type casts, and an ffi.string() call, the code is turned into something that can be used with ‘loadstring()’, which is a standard Lua function. It will parse the string, and then when ‘f()’ is called, that string will actually be executed (within the context of the Computicle). And that’s that!
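The loadstring-then-call step is worth seeing in isolation, since it is the heart of the injection trick. It stands alone in plain Lua (loadstring is the Lua 5.1/LuaJIT name; plain ‘load’ replaces it in later versions):

```lua
-- Compile a string of source into a callable function, then execute it.
-- loadstring only parses; nothing runs until the function is called.
local loadstring = loadstring or load   -- Lua 5.2+ renamed it to load

local code = [[ received = (received or 0) + 1 ]]

local f, err = loadstring(code)
assert(f, err)   -- f is nil plus an error message if the code won't parse

f()              -- only now does the injected code actually run
f()              -- and it can run as many times as you like
```

Within a computicle, the same call happens against the computicle’s own globals, so the injected chunk sees whatever state has already been set there.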

At the end, the ‘SELFICLE:freeMessage()’ is called to free up the memory used to allocate the outer message. Notice that ‘SELFICLE:freeData()’ was used to clean up the string value that was within the message itself. I have intimate knowledge of how this message was constructed, so I know this is the correct behavior. In general, if you’re going to pass data to a computicle, and you intend the Computicle to clean it up, you should use the computicle instance “allocData()” function.

OK. So, that explains how I could possibly inject some code into a Computicle for execution. That’s pretty nifty, but it looks a bit clunky. Can I do better?

I would like to be able to do the following.

comp.lowValue = 100;
comp.highValue = 200;

In this case, it looks like I’m setting a value on the computicle instance, but in which thread context? Well, what will actually happen is that this gets executed within the computicle’s own context, and becomes available to any code that is within the computicle.

We already know that the ‘exec()’ function will execute a bit of code within the context of the running computicle, so the following should now be possible:

comp:exec([[print(" Low: ", lowValue)]]);
comp:exec([[print("High: ", highValue)]])

Basically, just print those values from the context of the computicle. If they were in fact set, then this should print them out. If they were not in fact set, then it should print ‘nil’ for each of them. On my machine, I get the correct values, so that’s an indication that they were in fact set correctly.

How is this bit of magic achieved?

The key is the Lua ‘__newindex’ metamethod. Wha? Basically, if you have a table, and you try to set a value that does not exist, like I did with ‘lowValue’ and ‘highValue’, the ‘__newindex()’ function will be called on your table if you’ve got it set up right. Here’s the associated code of the Computicle that does exactly this.

__newindex = function(self, key, value)
  local setvalue = string.format("%s = %s", key, self:datumToString(value, key));
  return self:exec(setvalue);
end

That’s pretty straightforward. Just create a string that represents setting whatever value you’re trying to set, and then call ‘exec()’, which is already known to execute within the context of the thread. So, in the case where I have written “comp.lowValue = 100”, this will turn into the string “lowValue = 100”, and that string will be executed, setting the global variable ‘lowValue’ to 100.
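The same trick can be shown in miniature, with a separate environment table standing in for the remote LuaState, and a much cruder value-to-string step that handles numbers only:

```lua
-- Miniature version of the __newindex trick. remoteEnv stands in for the
-- remote computicle's global state; assignments on the proxy become
-- strings of code executed against that environment.
local remoteEnv = {}

local function execRemote(code)
  if setfenv then                         -- Lua 5.1 / LuaJIT path
    local f = assert(loadstring(code))
    setfenv(f, remoteEnv)
    f()
  else                                    -- Lua 5.2+ path
    assert(load(code, "injected", "t", remoteEnv))()
  end
end

local comp = setmetatable({}, {
  __newindex = function(self, key, value)
    -- crude serialization: numbers only, for the sketch
    execRemote(string.format("%s = %s", key, tostring(value)))
  end,
})

comp.lowValue = 100
comp.highValue = 200
-- remoteEnv now holds the values; the proxy table itself stays empty
```

Because the proxy never stores the keys itself, every assignment keeps hitting ‘__newindex’, and every one of them turns into a string of code run against the other environment.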

And what is this ‘datumToString()’ function? Ah yes, this is the little bit that takes various values and returns their string equivalent, ready to be injected into a running Computicle.

Computicle.datumToString = function(self, data, name)
  local dtype = type(data);
  local datastr = tostring(nil);

  if dtype == "cdata" then
      -- If it is a cdata type that easily converts to 
      -- a number, then convert to a number and assign to string
    if tonumber(data) then
      datastr = tostring(tonumber(data));
    else
      -- if not easily converted to number, then just assign the pointer
      datastr = string.format("TINNThread:StringToPointer(%s);", 
        TINNThread:PointerToString(data));
    end
  elseif dtype == "table" then
    if getmetatable(data) == Computicle_mt then
      -- package up a computicle
    else
      -- get a json string representation of the table
      datastr = string.format("[[ %s ]]",JSON.encode(data, {indent=true}));
    end
  elseif dtype == "string" then
    datastr = string.format("[[%s]]", data);
  else 
    datastr = tostring(data);
  end

  return datastr;
end

The task is actually fairly straightforward. Given a Lua based value, turn it into a string that can be executed in another Lua state. There are of course methods in Lua which will do this, and tons of marshalling frameworks as well. But this is a quick and dirty version that does exactly what I need.

Of particular note is the handling of the cdata and table types. For cdata, some values, such as ‘int64_t’, I just want to convert to a number. Tables are the most interesting. This particular technique will really only work for fairly simple tables that do not make references to other tables and the like. Basically, turn the table into a JSON string, and send that across to be rehydrated as a table.

Here’s some code that actually makes use of this.

comp.contacts = {
  {first = "William", last = "Adams", phone = "(111) 555-1212"},
  {first = "Bill", last = "Gates", phone = "(111) 123-4567"},
}

comp:exec([=[
print("== CONTACTS ==")

-- turn contacts back into a Lua table
local JSON = require("dkjson");

local contable = JSON.decode(contacts);


for _, person in ipairs(contable) do
	--print(person);
	print("== PERSON ==");
	for k,v in pairs(person) do
		print(k,v);
	end
end
]=]);

Notice that ‘comp.contacts = …’ just assigns the created table directly. This is fine, as there are no other references to the table on ‘this’ side of the computicle, so it will be safely garbage collected after some time.

The rest of the code is using the ‘exec()’, so it is executing in the context of the computicle. It basically gets the value of the ‘contacts’ variable, and turns it back into a Lua table value, and does some regular processing on it (print out all the values).
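The same round trip can be sketched without the dkjson dependency, by serializing a simple table into a Lua literal instead of JSON. This is not the Computicle code, just a self-contained illustration of the “stringify, ship, rehydrate” pattern for flat tables:

```lua
-- Flat-table round trip without dkjson: serialize to a Lua literal,
-- ship the string, rehydrate with load()/loadstring() on the far side.
-- Handles only string/number keys and string/number/table values.
local loadstring = loadstring or load

local function literalKey(k)
  if type(k) == "number" then return "[" .. k .. "]" end
  return string.format("[%q]", k)
end

local function toLiteral(t)
  local parts = {}
  for k, v in pairs(t) do
    local val
    if type(v) == "table" then
      val = toLiteral(v)                 -- simple nesting only
    elseif type(v) == "number" then
      val = tostring(v)
    else
      val = string.format("%q", tostring(v))
    end
    table.insert(parts, literalKey(k) .. " = " .. val)
  end
  return "{ " .. table.concat(parts, ", ") .. " }"
end

local contacts = {
  { first = "William", last = "Adams" },
  { first = "Bill",    last = "Gates" },
}

-- this string is what would cross the queue to the other computicle
local wire = "return " .. toLiteral(contacts)

-- the receiving side turns the string back into a table
local rehydrated = assert(loadstring(wire))()
```

Like the JSON version, this falls apart for tables with cycles or shared references; for the simple “bag of values” case it is all that’s needed.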

And that’s about it. From the humble beginnings of being able to inject a bit of code to run in the context of an already running thread, to exchanging tables between two different threads with ease, Computicles make pretty short work of such a task. It all stems from the same three principles of Computicles: listen, compute, communicate. And again, not a single mutex, lock, or other visible form of synchronization. The IOCompletionPort is the single communications mechanism, and all the magic of serialized multi-threaded communication hides behind that.

Of course, ‘code injection’ is a dirty word around the computing world, so there must be a way to secure such transfers? Yah, sure, why not. I’ve been bumping around the identity/security/authorization/authentication space recently, so surely something must be applicable here…


The Birth of Computicles

I have written about the “computicle” concept a couple of times in the past:
Super Computing Simulation Sure is Simple
The Existential Program

The essence of a computicle boils down to the following three attributes.

  • Ability to receive input
  • Ability to perform some operations
  • Ability to communicate with other computicles

I had written about memory sharing and ownership, and what mechanisms might be used to exchange data and whatnot. Now that I’ve done the IO Completion Port thing, I can finally introduce a ‘computicle’.

The idea is fairly straightforward. I would like to be able to write a piece of code that is fairly self sufficient/standalone, and have it run in its own environment without much fuss. In the context of Lua, that means I want each computicle to have its own LuaState, and to run on its own OS thread. I want to be able to communicate with the thing using a queue, and allow it to communicate with other computicles using the same mechanism. Of course I want everything seamlessly integrated so the programmer isn’t required to know anything about locking mechanisms, or even the fact that they are in a separate thread, or anything like that. Without further ado, here’s an example:

local Computicle = require("Computicle");

print(Computicle:compute([[print("Hello World!")]]):waitForFinish());

Tada! This little code snippet does all those things I listed above. I’ll present it in another form so it is a little more obvious what’s going on.

local Computicle = require("Computicle");

local comp = Computicle:create([[print("Hello World!")]]);

local status, err = comp:waitForFinish();

print("Finish: ", status, err);

Here, an instance of a computicle is created, and the code chunk that it is to execute is fed to it through the constructor. Of course, that is Lua code, so it could be anything. Another more interesting form might be the following:

local comp = Computicle:create([[require("helloworld.lua")]]);

What’s happening behind the scenes here is that the Computicle is creating a TINNThread to run that bit of code, which in turn is creating a separate LuaState for the same. At the same time, an IOCompletion port is also being created so that communication can occur. The ‘waitForFinish’ is a simple mechanism by which any computicle can be waited upon. More exotic synchronization mechanisms can be brought into play where necessary, but this works just fine.

The code for the computicle is a test case to be found here.

Here’s an example of how you can spin up multiple computicles at the same time.

local Computicle = require("Computicle");

local comp1 = Computicle:compute([[print("Hello World!");]]);
local comp2 = Computicle:compute([[
for i=1,10 do
print("Counter: ", i);
end
]]);

print("Finish: ", comp1:waitForFinish());
print("Finish: ", comp2:waitForFinish());

 

If you run this, you see some interleaving of the two threads running at the same time. You wait for completion of both computicles before finally finishing.

How about inter-computicle communications? Well, when you construct a computicle, you can actually give it two bits of information. The first is an absolute must, and that’s the code to be executed. The second is optional, and is the set of parameters that you want to pass to the computicle. I have previously written about the fact that communicating bits of data between threads requires a well thought out strategy with respect to memory ownership, and I showed how the IO Completion Port can be used as a queue to achieve this. Well, computicles just use that mechanism. So, you can do the following:

local Computicle = require("Computicle");

local comp2 = Computicle:create([[
local ffi = require("ffi");

while true do
  local msg = SELFICLE:getMessage();
  msg = ffi.cast("ComputicleMsg *", msg);

  print(msg.Message*10);
  SELFICLE.Heap:free(msg);
end
]]);

local sinkstone = comp2:getStoned();

local comp1 = Computicle:create([[
local ffi = require("ffi");

local stone = _params.sink;
stone = ffi.cast("Computicle_t *", stone);

local sinkComp = Computicle:init(stone.HeapHandle, stone.IOCPHandle, stone.ThreadHandle);

for i = 1, 10 do
  sinkComp:postMessage(i);
end
]], {sink = sinkstone});

print("Finish 1: ", comp1:waitForFinish());
print("Finish 2: ", comp2:waitForFinish());

What’s going on here? Well, the computicle ‘comp2’ is acting as a sink for information. Meaning, it spends its whole time just pulling work data out of its inbuilt queue. But, there’s a bit of trickery here, so a line by line explanation is in order.

  local msg = SELFICLE:getMessage();

What’s a ‘SELFICLE’? That’s the Computicle context within which the code is running. It is a global variable that exists within each and every computicle. This is the way the code within a computicle can get at various functions and variables for the environment. One of the computicle methods is ‘getMessage()’. This is of course how you can get messages out of your Computicle message queue. If you’ve done any Windows programming before, it’s very similar to doing “GetMessage()”, when you’re fiddling about with User32. I suspect the very lowest level mechanisms are probably similar.

  msg = ffi.cast("ComputicleMsg *", msg);

What you get out of ‘getMessage()’ is a pointer (as expected). In order to make any sense of it, you need to cast it to the “ComputicleMsg” type, which looks like this:

typedef struct {
	int32_t		Message;
	UINT_PTR	wParam;
	LONG_PTR	lParam;
} ComputicleMsg;

Again, looks fairly familiar. Basically, any data structure would do, as long as your computicles agree on what it is. This data structure works because it combines a simple “message”, with a couple of pointers, so you can pass pointers to other more elaborate structures.

  print(msg.Message*10);

Once we have the message cast to our message type, we can do some work. In this case, just get the ‘Message’ field, multiply it, and print it out. This is the working end of the computicle. You can put any code in here for the “work”. I can call out to other bits of code, fire off network requests, launch more computicles, whatever. It’s just Lua code running in a thread with its own LuaState, so the sky’s the limit!

  SELFICLE.Heap:free(msg);

And finally, you have to manage the bit of memory that you were handed. These computicles are simple: they all share a common heap, so I know the bit of memory was allocated from that heap, and I can just free it.

So, that’s the consuming side. What about the sending side?

local sinkstone = comp2:getStoned();

I need to tell the second computicle about the first one. There are only two communications mechanisms available. The first is to pass whatever I want as a list of parameters when I construct the computicle. The second is to use the computicle’s queue. In this particular case, I’ll use the former method and pass the computicle as a startup parameter. In order to do that though, I can only communicate cdata. I cannot pass any LuaState based bits of information, because it becomes ambiguous as to who owns the chunk of memory used, and what the lifetime of that chunk is.

The ‘getStoned()’ function takes a computicle and creates a heap based representation of it which is appropriate for sending to another computicle. The data structure that is created looks like this:

typedef struct {
	HANDLE HeapHandle;
	HANDLE IOCPHandle;
	HANDLE ThreadHandle;
} Computicle_t;
local comp1 = Computicle:create([[
local ffi = require("ffi");

The source Computicle is created. Keep in mind that a Computicle is an almost, but not quite, empty container waiting for you to fill with code. There are a couple of things, like the SELFICLE, that are already available, but things like other modules must be pulled in just like any other code you might write. The only modules already in place, besides standard libraries, are the TINNThread, and Computicle.

local stone = _params.sink;
stone = ffi.cast("Computicle_t *", stone);

local sinkComp = Computicle:init(stone.HeapHandle, stone.IOCPHandle, stone.ThreadHandle);

The other global variable besides ‘SELFICLE’ is the ‘_params’ table. This table contains a list of parameters that you might have passed into the constructor. Some manipulation has occurred to these items though. They do not contain type information, they are just raw ‘void *’, so you have to turn them back into whatever they were supposed to be by doing the ‘ffi.cast’. Once you do that, you can use them. In this particular case, we passed the stoned state of the sink Computicle as the ‘sink’ parameter when the source computicle was constructed, so the code can just access that parameter and be on its way. It’s easiest to think of the ‘_params’ table as being the same as the ‘arg’ table that Lua has in general. I didn’t reuse the ‘arg’ concept though for a couple of reasons. First of all, ‘arg’ is a simple array, not a dictionary. So, you access things using index numbers. I wanted to support named values because that’s more useful in this particular usage. Second, since I’m not using it as a simple array, I didn’t want to confuse matters by naming it the same thing as ‘arg’, so thus, ‘_params’.

In the above code, a computicle is being initiated using the value passed in from the sink stone. This is a common pattern. When it is desirable to construct a Computicle from scratch, the ‘Computicle:create()’ function is used. In this case, I don’t want to create a new computicle, but rather an alias to an existing computicle. This is why I need to know its state, so that I can create this alias, and ultimately communicate with it.

Lastly, after everything is setup:

for i = 1, 10 do
	sinkComp:postMessage(i);
end

What’s going on here? Well, here’s what the postMessage() function looks like:

Computicle.postMessage = function(self, msg, wParam, lParam)
  -- Create a message object to send to the thread
  local msgSize = ffi.sizeof("ComputicleMsg");
  local newWork = self.Heap:alloc(msgSize);
  newWork = ffi.cast("ComputicleMsg *", newWork);
  newWork.Message = msg;
  newWork.wParam = wParam or 0;
  newWork.lParam = lParam or 0;

  -- post it to the thread's queue
  self.IOCP:enqueue(newWork);

  return true;
end

Remember, the only way to communicate with a computicle is to send it a bit of shared memory by placing it into its queue. So, postMessage() just takes the few parameters that you pass it, and packages them up into a chunk of memory which is ready to be sent across via that queue. In this case, we have the alias to the Computicle we want to talk to, so the message does in fact land in the proper queue.

]], {sink = sinkstone});

This final part is just how we pass the ‘sink’ stone as a parameter to the constructor.

So, there you have it. In full detail. I can construct Computicles. I can write bits of script code that run completely independently. I can communicate between Computicles in a relatively easy manner.

As a lazy programmer, I’m loving this. It allows me to conceptualize my code at a different level. I don’t have to be so much concerned with the mechanics of making low level system calls, which is typically an error prone process for me. That’s all been wrapped up and taken care of. One of the hardest aspects of multi-threaded programming (locking), is completely eliminated because of using the IO Completion Port as a simple thread-safe queue. Simple synchronization is achieved either through queued messages, or through waiting on a particular Computicle to finish its job.

This construct reminds me of DirectShow and filter graphs. Very similar in concept, but Computicles generalize the concept, and make it brain dead simple to pull off with script rather than ‘C’ code.

I’m also loving this because now I can go more easily from ‘boxes and arrows’ to actual working system. The ‘boxes’ are instances of computicles, and the arrows are the ‘postMessage/getMessage’ calls between them. This is a fairly understated, but extremely powerful mechanism.

Next time around, I’ll go through the actual Computicle code itself to shed light on the mystery therein.


Lua Coroutine Roundup

My WordPress dashboard says this is my 250th post. I figure what better way to mark the occasion than to do a little bit of a roundup on one of my favorite topics.

One of the most awesome features of the Lua language is a built in co-routine mechanism. I’ve talked about co-routines quite a bit, doing a little series on the basics of coroutines, all the way up through a scheduler for multi-tasking.

Coroutines give the programmer the illusion of creating a parallel processing environment. Basically, write up your code to ‘spawn’ different coroutines, and then just forget about them… Well, almost. The one thing the Lua language does not give you is a way to manage those coroutines. Co-routines are a mechanism by which ‘threads’ can do cooperative multi-tasking. This is different from the OS level preemptive multitasking that you get with ‘threads’ on most OS libraries. So, I can create coroutines, resume, and yield. It’s the ‘yield’ that gives coroutines their multi-tasking flavor. One co-routine yields, and another co-routine must be told to ‘resume’. Something has to do the telling, and keep track of who’s yielded, and who needs to be resumed. In steps the scheduler.
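The bare minimum such a scheduler has to do can be shown in a few lines: keep a ready list, resume each live coroutine in turn, and requeue anything that yielded:

```lua
-- Minimal round-robin scheduler sketch: resume each live coroutine in
-- turn until all of them have finished.
local tasks = {}
local log = {}

local function spawn(fn)
  table.insert(tasks, coroutine.create(fn))
end

local function run()
  while #tasks > 0 do
    local task = table.remove(tasks, 1)   -- take the next ready task
    assert(coroutine.resume(task))        -- run it until it yields
    if coroutine.status(task) ~= "dead" then
      table.insert(tasks, task)           -- still alive: requeue it
    end
  end
end

spawn(function()
  for i = 1, 2 do table.insert(log, "A" .. i); coroutine.yield() end
end)
spawn(function()
  for i = 1, 2 do table.insert(log, "B" .. i); coroutine.yield() end
end)

run()
-- the two tasks interleave: A1, B1, A2, B2
```

Everything beyond this (waiting on IO, on time, on conditions) is what turns a ten-line loop into the hundreds-of-lines scheduler described below.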

Summaries from my own archives
Lua Coroutines – Getting Started

Multitask UI Like it’s 1995

Hurry Up and Wait – TINN Timing

Computicles – A tale of two schedulers

Parallel Computing is Child’s Play

Multitasking single threaded UI – Gaming meets networking

Alertable Predicates? What’s that?

Pinging Peers – Creating Network Meshes

From the Interwebs
GitHub Gist: Deco / coroutine_scheduler.lua

Beginner’s Guide to Coroutines – Roblox Wiki

LOOP: Thread Scheduler

LuaAV

And of course the search engines will show you thousands more:

lua coroutine scheduler

I think it would be really great if the Lua language itself had some form of rudimentary scheduler, but scheduling is such a domain specific thing, and the rudiments are so basic, that it’s possibly not worth providing such support. The Go language supports a basic scheduler, and has coroutine support, to great effect I think.

The simple scheduler is simple: just do a round robin execution of one task after the next as each task yields. I wrote a simple dispatcher which started with exactly this. Then interesting things start to happen. You’d like to have your tasks go into a wait state if they’re doing an IO operation, and not be scheduled again until some IO has occurred. Or, you want to put a thread to sleep for some amount of time. Or, you want a thread to wait on a condition of one sort or another, or signals, or something else. Pretty soon, your basic scheduler is hundreds of lines of code, and things start to get rather complicated.

After going round and round in this problem space, I finally came up with a solution that I think has some legs. Rather than a complex scheduler, I promote a simple scheduler, and add functions to it through normal thread support. So, here is my final ‘scheduler’ for 2013:


local ffi = require("ffi");

local Collections = require("Collections");
local StopWatch = require("StopWatch");


--[[
	The Scheduler supports a collaborative processing
	environment.  As such, it manages multiple tasks which
	are represented by Lua coroutines.
--]]
local Scheduler = {}
setmetatable(Scheduler, {
	__call = function(self, ...)
		return self:create(...)
	end,
})
local Scheduler_mt = {
	__index = Scheduler,
}

function Scheduler.init(self, scheduler)
	local obj = {
		Clock = StopWatch();

		TasksReadyToRun = Collections.Queue();
	}
	setmetatable(obj, Scheduler_mt)
	
	return obj;
end

function Scheduler.create(self, ...)
	return self:init(...)
end

--[[
		Instance Methods
--]]
function Scheduler.tasksArePending(self)
	return self.TasksReadyToRun:Len() > 0
end

function Scheduler.tasksPending(self)
	return self.TasksReadyToRun:Len();
end


function Scheduler.getClock(self)
	return self.Clock;
end


--[[
	Task Handling
--]]

function Scheduler.scheduleTask(self, afiber, ...)
	if not afiber then
		return false, "no fiber specified"
	end

	afiber:setParams(...);
	self.TasksReadyToRun:Enqueue(afiber);	
	afiber.state = "readytorun"

	return afiber;
end

function Scheduler.removeFiber(self, fiber)
	--print("DROPPING DEAD FIBER: ", fiber);
	return true;
end

function Scheduler.inMainFiber(self)
	return coroutine.running() == nil; 
end

function Scheduler.getCurrentFiber(self)
	return self.CurrentFiber;
end

function Scheduler.step(self)
	-- Now check the regular fibers
	local task = self.TasksReadyToRun:Dequeue()

	-- If no fiber in ready queue, then just return
	if task == nil then
		return true
	end

	if task:getStatus() == "dead" then
		self:removeFiber(task)

		return true;
	end

	-- If the task we pulled off the active list is 
	-- not dead, then perhaps it is suspended.  If that's true
	-- then it needs to drop out of the active list.
	-- We assume that some other part of the system is responsible for
	-- keeping track of the task, and rescheduling it when appropriate.
	if task.state == "suspended" then
		return true;
	end

	-- If we have gotten this far, then the task truly is ready to 
	-- run, and it should be set as the currentFiber, and its coroutine
	-- is resumed.
	self.CurrentFiber = task;
	local results = {task:resume()};

	-- no task is currently executing
	self.CurrentFiber = nil;

	-- once we get results back from the resume, one
	-- of two things could have happened.
	-- 1) The routine exited normally
	-- 2) The routine yielded
	--
	-- In both cases, we parse out the results of the resume 
	-- into a success indicator and the rest of the values returned 
	-- from the routine
	local success = results[1];
	table.remove(results,1);


	--print("SUCCESS: ", success);
	if not success then
		print("RESUME ERROR")
		print(unpack(results));
	end

	-- Again, check to see if the task is dead after
	-- the most recent resume.  If it's dead, then don't
	-- bother putting it back into the readytorun queue
	-- just remove the task from the list of tasks
	if task:getStatus() == "dead" then
		--print("Scheduler, DEAD coroutine, removing")
		self:removeFiber(task)

		return true;
	end

	-- The only way the task will get back onto the readylist
	-- is if it's state is 'readytorun', otherwise, it will
	-- stay out of the readytorun list.
	if task.state == "readytorun" then
		self:scheduleTask(task, results);
	end
end


--[[
	Primary Interfaces
--]]

function Scheduler.suspend(self, aTask)
	if not aTask then
		self.CurrentFiber.state = "suspended"
		return self:yield()
	end

	aTask.state = "suspended";

	return true
end

function Scheduler.yield(self, ...)
	return coroutine.yield(...);
end


--[[
	Running the scheduler itself
--]]
function Scheduler.start(self)
	if self.ContinueRunning then
		return false, "scheduler is already running"
	end
	
	self.ContinueRunning = true;

	while self.ContinueRunning do
		self:step();
	end
	--print("FINISHED STEP ITERATION")
end

function Scheduler.stop(self)
	self.ContinueRunning = false;
end

return Scheduler

At 195 LOC, this is back down to the basic size of the original simple dispatcher that I started with. There are only a few functions to this interface:

  • scheduleTask()
  • step()
  • suspend()
  • yield()
  • start()

These are the basics to do some scheduling of tasks. There is an assumption that there is an object that can be scheduled, such that ‘scheduleTask’ can attach some metadata, and the like. Other than that, there’s not much here but the basic round robin scheduler.
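Here is a sketch of the kind of schedulable object implied by that assumption: a coroutine wrapper exposing the ‘setParams()’, ‘resume()’, ‘getStatus()’, and ‘state’ members that Scheduler.step() touches. The names mirror the calls in the scheduler above; the real TINN task type may differ in detail.

```lua
-- A sketch of the task object Scheduler.step() appears to expect:
-- a coroutine wrapper exposing setParams(), resume(), getStatus(),
-- and a mutable 'state' field for the scheduler's metadata.
local unpack = unpack or table.unpack   -- Lua 5.2+ moved unpack

local Task = {}
Task.__index = Task

function Task.new(fn)
  return setmetatable({
    routine = coroutine.create(fn),
    state   = "readytorun",
    params  = {},
  }, Task)
end

function Task:setParams(...)
  self.params = { ... }       -- handed to the routine on the next resume
end

function Task:resume()
  return coroutine.resume(self.routine, unpack(self.params))
end

function Task:getStatus()
  return coroutine.status(self.routine)
end

-- usage: schedule-style parameter passing, then a single resume
local t = Task.new(function(greeting) return greeting .. "!" end)
t:setParams("hello")
local ok, result = t:resume()
```

Once a task looks like this, ‘scheduleTask()’ can set its params and state, and ‘step()’ can resume it and inspect its status, without knowing anything else about it.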

So, how do you make a more complex scheduler, say one that deals with time? Well, first of all, I’ve also added an Application object, which actually contains a scheduler, and supports various other things related to an application, including add-ons to the scheduler. Here’s an excerpt of the Application object construction.

function Application.init(self, ...)
	local sched = Scheduler();

	waitForIO.MessageQuanta = 0;

	local obj = {
		Clock = StopWatch();
		Scheduler = sched;
		TaskID = 0;
		wfc = waitForCondition(sched);
		wft = waitForTime(sched);
		wfio = waitForIO(sched);
	}
	setmetatable(obj, Application_mt)

	-- Create a task for each add-on
	obj:spawn(obj.wfc.start, obj.wfc)
	obj:spawn(obj.wft.start, obj.wft)
	obj:spawn(obj.wfio.start, obj.wfio)

	return obj;
end

In the init(), the Application object creates instances of the add-ons to the scheduler. These add-ons (waitForCondition, waitForTime, waitForIO) don’t have to adhere to much of an interface, but you do need some function which is callable so that it can be spawned. The Application construction ends with the spawning of the three add-ons as normal threads in the scheduler. Yep, that’s right, the add-ons are just like any other thread, running in the scheduler. You might think; But these things need to run with a higher priority than regular threads! And yes, in some cases they do need that. But hey, that’s an exercise for the scheduler. If the core scheduler gains a ‘priority’ mechanism, then these threads can be run with a higher priority, and everything is good.

And what do these add-ons look like? I’ve gone over them before, but here’s an example of the timing one:

local Functor = require("Functor")
local tabutils = require("tabutils");


local waitForTime = {}
setmetatable(waitForTime, {
	__call = function(self, ...)
		return self:create(...)
	end,
})

local waitForTime_mt = {
	__index = waitForTime;
}

function waitForTime.init(self, scheduler)
	local obj = {
		Scheduler = scheduler;
		TasksWaitingForTime = {};
	}
	setmetatable(obj, waitForTime_mt)

	--scheduler:addQuantaStep(Functor(obj.step,obj));
	--scheduler:spawn(obj.step, obj)

	return obj;
end

function waitForTime.create(self, scheduler)
	if not scheduler then
		return nil, "no scheduler specified"
	end

	return self:init(scheduler)
end

function waitForTime.tasksArePending(self)
	return #self.TasksWaitingForTime > 0
end

function waitForTime.tasksPending(self)
	return #self.TasksWaitingForTime;
end

local function compareTaskDueTime(task1, task2)
	if task1.DueTime < task2.DueTime then
		return true
	end
	
	return false;
end

function waitForTime.yieldUntilTime(self, atime)
	--print("waitForTime.yieldUntilTime: ", atime, self.Scheduler.Clock:Milliseconds())
	--print("Current Fiber: ", self.CurrentFiber)
	local currentFiber = self.Scheduler:getCurrentFiber();
	if currentFiber == nil then
		return false, "not currently in a running task"
	end

	currentFiber.DueTime = atime;
	tabutils.binsert(self.TasksWaitingForTime, currentFiber, compareTaskDueTime);

	return self.Scheduler:suspend()
end

function waitForTime.yield(self, millis)
	--print('waitForTime.yield, CLOCK: ', self.Scheduler.Clock)

	local nextTime = self.Scheduler.Clock:Milliseconds() + millis;

	return self:yieldUntilTime(nextTime);
end

function waitForTime.step(self)
	--print("waitForTime.step, CLOCK: ", self.Scheduler.Clock);

	local currentTime = self.Scheduler.Clock:Milliseconds();

	-- traverse through the fibers that are waiting
	-- on time
	local nAwaiting = #self.TasksWaitingForTime;
--print("Timer Events Waiting: ", nAwaiting)
	for i=1,nAwaiting do
		local fiber = self.TasksWaitingForTime[1];

		-- The list is sorted by due time, so once we reach a
		-- fiber that is not yet due, none of the rest are either.
		if fiber.DueTime > currentTime then
			break
		end

		--print("ACTIVATE: ", fiber.DueTime, currentTime);
		-- put it back into circulation
		-- preferably at the front of the list
		fiber.DueTime = 0;
		--print("waitForTime.step: ", self)
		self.Scheduler:scheduleTask(fiber);

		-- Remove the fiber from the list of fibers that are
		-- waiting on time
		table.remove(self.TasksWaitingForTime, 1);
	end
end

function waitForTime.start(self)
	while true do
		self:step();
		self.Scheduler:yield();
	end
end

return waitForTime

The ‘start()’ function is what was scheduled with the scheduler, so that’s the thread that’s actually running. As you can see, it’s just a normal cooperative task.

The real action comes from the ‘yield()’ and ‘step()’ functions. The purpose of ‘yield()’ is to suspend the current task (take it out of the scheduler’s active running list) and set up the appropriate state so that the task can be activated later. The purpose of ‘step()’ is to go through the list of tasks which are currently suspended, and determine which ones need to be scheduled again because their time signal has been met.

And that’s how you add items such as “sleep()” to the system. You don’t change the core scheduler, but rather, you add a small module which knows how to deal with timed events, and that’s that.
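That sleep()-style behavior can be boiled down to a self-contained toy. The sketch below uses plain Lua coroutines and a fake clock rather than TINN’s scheduler, so the names (sleep, step, waiting) are illustrative only, but the shape is the same: a suspended task parks itself in a list sorted by due time, and step() wakes whatever is due.

```lua
-- Toy version of the waitForTime pattern (not TINN's actual code):
-- tasks park themselves in a due-time-sorted list; step() wakes them.
local waiting = {}   -- { {co = coroutine, due = time}, ... } sorted by due
local clock = 0      -- fake milliseconds clock

local function sleep(millis)
  table.insert(waiting, { co = coroutine.running(), due = clock + millis })
  table.sort(waiting, function(a, b) return a.due < b.due end)
  return coroutine.yield()   -- suspend until step() resumes us
end

local function step()
  -- resume every task whose due time has passed
  while waiting[1] and waiting[1].due <= clock do
    local task = table.remove(waiting, 1)
    coroutine.resume(task.co)
  end
end

-- usage: a task sleeps for 10 'ms', and wakes once the clock passes that
local wokeAt
coroutine.resume(coroutine.create(function()
  sleep(10)
  wokeAt = clock
end))

clock = 5;  step()   -- not due yet, nothing happens
clock = 12; step()   -- due now; the task resumes
print(wokeAt)        -- 12
```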

The application object comes along and provides some convenience methods that can easily be turned into globals, so that you can type things like:

delay(function() print("Hello, World!") end, 5000)  -- print "Hello, World!" after 5 seconds
periodic(function() print("Hi There!") end, 1000)  -- print "Hi There!" every second

And there you have it. The core scheduler is a very simple thing. Just a ready list, and some algorithm that determines which one of the tasks in the list gets to run next. This can be as complex as necessary, or stay very simple.
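A “core scheduler” in that spirit really does fit in a dozen lines of plain Lua. This toy uses simple round-robin as its selection algorithm; it is just the shape of the idea, not TINN’s code:

```lua
-- Toy ready-list scheduler: run the task at the front of the list, and
-- if it yielded (rather than finishing), put it back at the end.
local ready = {}

local function spawn(fn)
  ready[#ready + 1] = coroutine.create(fn)
end

local order = {}
spawn(function() order[#order+1] = "a1"; coroutine.yield(); order[#order+1] = "a2" end)
spawn(function() order[#order+1] = "b1"; coroutine.yield(); order[#order+1] = "b2" end)

-- the primary loop: pick next task, run it, requeue if still alive
while #ready > 0 do
  local task = table.remove(ready, 1)
  coroutine.resume(task)
  if coroutine.status(task) ~= "dead" then
    ready[#ready + 1] = task
  end
end

print(table.concat(order, ","))  -- a1,b1,a2,b2
```

The “algorithm that determines which task runs next” is the `table.remove(ready, 1)` line; making that selection pluggable is exactly the improvement described above.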

Using an add on mechanism, the overall system can take on more robust scenarios, without increasing the complexity of the individual parts.

This simplified design satisfies all the scenario needs I currently have, from high throughput web server to multi-tasking UI with network interaction. I will improve the scheduling algorithm, and probably make that a pluggable function for ease of customization.

Lua coroutines are an awesome feature to have in a scripting language. With a little bit of work, a handy scheduler makes multi-task programming a mindless exercise, and that’s a good thing.

Happy New Year!


Multitasking single threaded UI – Gaming meets networking

There are two worlds that I want to collide. There is the active UI gaming world, then there’s the more passive networking server world. The difference between these two worlds is the kind of control loop and scheduler that they both typically use.

The scheduler for the networking server is roughly:

while true do
  waitFor(networkingEvent)
  
  doNetworkingStuff()
end

This is great, and works quite well to limit the amount of resources required to run the server because most of the time it’s sitting around idle. This allows you to serve up web pages with a much smaller machine than if you were running flat out, with say a “polling” api.

The game loop is a bit different. On Windows it looks something like this:

while true do
  ffi.fill(msg, ffi.sizeof("MSG"))
  local peeked = User32.PeekMessageA(msg, nil, 0, 0, User32.PM_REMOVE);
			
  if peeked ~= 0 then
    -- do regular Windows message processing
    local res = User32.TranslateMessage(msg)		
    User32.DispatchMessageA(msg)
  end

  doOtherStuffLikeRunningGamePhysics();
end

So, when I want to run a networking game, where I take some input from the internet, and incorporate that into the gameplay, I’ve got a basic problem. Who’s on top? Which loop is going to be THE loop?

The answer lies in the fact that the TINN scheduler is a basic infinite loop, and you can modify what occurs within that loop. Last time around, I showed how the waitFor() function can be used to insert a ‘predicate’ into the scheduler’s primary loop. So, perhaps I can recast the gaming loop as a predicate and insert it into the scheduler?

I have a ‘GameWindow’ class that takes care of creating a window, showing it on the screen and dealing with drawing. This window has a run() function which has the typical gaming infinite loop. I have modified this code to recast things in such a way that I can use the waitFor() as the primary looping mechanism. The modification makes it look like this:

local appFinish = function(win)

  win.IsRunning = true
  local msg = ffi.new("MSG")

  local closure = function()
    ffi.fill(msg, ffi.sizeof("MSG"))
    local peeked = User32.PeekMessageA(msg, nil, 0, 0, User32.PM_REMOVE);
			
    if peeked ~= 0 then
      local res = User32.TranslateMessage(msg)
      User32.DispatchMessageA(msg)

      if msg.message == User32.WM_QUIT then
        return win:OnQuit()
      end
    end

    if not win.IsRunning then
      return true;
    end
  end

  return closure;
end

local runWindow = function(self)
	
  self:show()
  self:update()

  -- Start the FrameTimer
  local period = 1000/self.FrameRate;
  self.FrameTimer = Timer({Delay=period, Period=period, OnTime =self:handleFrameTick()})

  -- wait here until the application window is closed
  waitFor(appFinish(self))

  -- cancel the frame timer
  self.FrameTimer:cancel();
end

GameWindow.run = function(self)
  -- spawn the fiber that will wait
  -- for messages to finish
  Task:spawn(runWindow, self);

  -- set quanta to 0 so we don't waste time
  -- in i/o processing if there's nothing there
  Task:setMessageQuanta(0);
	
  Task:start()
end

Starting from the run() function, only three things need to occur. First, spawn ‘runWindow’ in its own fiber. I want this routine to run in a fiber so that it is cooperative with other parts of the system that might be running.

Second, call ‘setMessageQuanta(0)’. This is a critical piece to get a ‘gaming loop’. This quanta is the amount of time the IO processing part of the scheduler will spend waiting for an IO event to occur. This time will be spent every time through the primary scheduler’s loop. If the value is set to 0, then effectively we have a nice runaway infinite loop for the scheduler. IO events will still be processed, but we won’t spend any time waiting for them to occur. And third, call ‘start()’ to set the scheduler’s loop in motion.

This has the effect of providing maximum CPU timeslice to the various other waiting fibers. If this value is anything other than 0, let’s say ‘5’ for example, then the inner loop of the scheduler will slow to a crawl, providing no better than 60 iterations of the loop per second. Not enough time slice for a game. Setting it to 0 allows more like 3000 iterations of the loop per second, which gives more time to other fibers.
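The arithmetic behind that trade-off is simple: if every pass through the scheduler’s loop waits up to ‘quanta’ milliseconds for IO, the waiting alone caps the loop at 1000/quanta passes per second, and the real rate is lower still once task work is accounted for. A quick sketch of that bound (illustrative only):

```lua
-- Upper bound on scheduler loop iterations per second, from the IO
-- wait alone (actual rates are lower once task work is added in).
local function maxLoopsPerSecond(quantaMillis)
  if quantaMillis <= 0 then
    -- no waiting at all: the loop spins as fast as the work allows
    return math.huge
  end
  return 1000 / quantaMillis
end

print(maxLoopsPerSecond(5))   -- 200
print(maxLoopsPerSecond(0))
```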

That’s the trick of this integration right there. Just set the messageQuanta to 0, and away you go. To finish this out, take a look at the runWindow() function. Here just a couple of things are set in place. First, a timer is created. This timer will ultimately end up calling a ‘tick()’ function that the user can specify.

The other thing of note is the use of waitFor(appFinish(self)). This fiber will block here until the “appFinish()” predicate returns true.

So, finally, the appFinish() predicate, what does that do?

Well, I’ll be darned if it isn’t the essence of the typical game window loop, at least the Windows message handling part of it. Remember that a predicate injected into the scheduler using “waitFor()” is executed every time through the scheduler’s loop, so the scheduler’s loop is essentially the same as the outer loop of a typical game.
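To make the mechanism concrete, here is a stripped-down, self-contained model of the idea (not TINN’s actual code): predicates registered with waitFor() are simply re-evaluated on every pass of the primary loop, and whatever was blocked gets released once its predicate turns true.

```lua
-- Minimal model of predicate-driven waiting: the 'scheduler' loop
-- below evaluates every registered predicate once per pass.
local pending = {}

local function waitFor(predicate, onSignaled)
  pending[#pending + 1] = { predicate = predicate, onSignaled = onSignaled }
end

local ticks = 0
local firedAt
waitFor(function() return ticks >= 3 end,
        function() firedAt = ticks end)

-- the primary loop (here it exits once nothing is pending)
while #pending > 0 do
  ticks = ticks + 1
  for i = #pending, 1, -1 do
    local entry = pending[i]
    if entry.predicate() then
      table.remove(pending, i)
      entry.onSignaled()
    end
  end
end

print(firedAt)   -- 3
```

In the real thing, onSignaled() would be “reschedule the suspended fiber”, and the appFinish() closure above plays the role of the predicate.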

With all this in place, you can finally do the following:


local GameWindow = require "GameWindow"
local StopWatch = require "StopWatch"

local sw = StopWatch();

-- The routine that gets called for any
-- mouse activity messages
function mouseinteraction(msg, wparam, lparam)
	print(string.format("Mouse: 0x%x", msg))
end

function keyboardinteraction(msg, wparam, lparam)
	print(string.format("Keyboard: 0x%x", msg))
end


function randomColor()
		local r = math.random(0,255)
		local g = math.random(0,255)
		local b = math.random(0,255)
		local color = RGB(r,g,b)

	return color
end

function randomline(win)
	local x1 = math.random() * win.Width
	local y1 = 40 + (math.random() * (win.Height - 40))
	local x2 = math.random() * win.Width
	local y2 = 40 + (math.random() * (win.Height - 40))

	local color = randomColor()
	local ctxt = win.GDIContext;

	ctxt:SetDCPenColor(color)

	ctxt:MoveTo(x1, y1)
	ctxt:LineTo(x2, y2)
end

function randomrect(win)
	local width = math.random(2,40)
	local height = math.random(2,40)
	local x = math.random(0,win.Width-1-width)
	local y = math.random(0, win.Height-1-height)
	local right = x + width
	local bottom = y + height
	local brushColor = randomColor()

	local ctxt = win.GDIContext;

	ctxt:SetDCBrushColor(brushColor)
	--ctxt:RoundRect(x, y, right, bottom, 0, 0)
	ctxt:Rectangle(x, y, right, bottom)
end


function ontick(win, tickCount)

	for i=1,30 do
		randomrect(win)
		randomline(win)
	end

	local stats = string.format("Seconds: %f  Frame: %d  FPS: %f", sw:Seconds(), tickCount, tickCount/sw:Seconds())
	win.GDIContext:Text(stats)
end



local appwin = GameWindow({
		Title = "Game Window",
		KeyboardInteractor = keyboardinteraction,
		MouseInteractor = mouseinteraction,
		FrameRate = 24,
		OnTickDelegate = ontick,
		Extent = {1024,768},
		})


appwin:run()

This will put a window on the screen, and draw some lines and rectangles at a frame rate of roughly 24 frames per second.

And thus, the two worlds have been melded together. I’m not doing any networking in this particular case, but adding it is no different than doing it for any other networking application. The same scheduler is still at play, and everything else will work as expected.

The cool thing about this is the integration of these two worlds does not require the introduction of multiple threads of execution. Everything is still within the same single threaded context that TINN normally runs with. If real threads are required, they can easily be added and communicated with using the computicles that are within TINN.


Function Forwarding – The Lazy Programmer Doesn’t Like Writing Glue Code

Back in the day, when NeXT computers roamed the earth, there was this language feature in Objective-C that I came to know and love. Objective-C has this feature called forwarding, or delegating, or whatever. The basic idea is, you call a function on an object, and if it knows the function (late bound goodness), it will execute the function. If it doesn’t, it will forward the function call to the object’s ‘forward’ function, handing it all the parameters, and giving it a chance to do something before throwing an error if nothing is done.

Well, roll forward 20 years (yah, it was that long ago), and here we are with the likes of Python, JavaScript, Ruby, and Lua inheriting the dynamic language mantle passed along from Simula and Smalltalk. So, recently, I was pining for that good ol’ feature of Objective-C, so I decided to implement it and see what it takes today.

Why? The scenario is simple. There are many instances where I want to write a ‘proxy’ of some sort or another. In one such scenario, I want to take the name of the function being called, along with the parameters for the function, wrap them up into a string, send them somewhere, and have them dealt with at that new place. At the other end, I can deal with actually applying the function and parameters.

So, beginning with this:


local baseclass = {
  func1 = function(foo, bar, bas)
    print(foo, bar, bas);
  end,

  func2 = function(baz, booz, bazzle)
    print("Function 2: ", baz, booz, bazzle);
  end
}

I want to be able to do the following:

local messenger = Messenger("machine.domain.com/baseclass");

messenger:func1("this", "little", "Piggy");
messenger:func2("Piggy", 500, false);

Tada! Done…

OK. So, let’s begin at the beginning. First of all, I want some sort of base Object/Class thing that will help make things easier. I’ll use the one from the ogle project, because it seems to be pretty decent and minimal.


-- Object.lua

local Object = {}
Object.__index = Object
setmetatable(Object, Object)

-- Variables
Object.__metamethods =  {
    "__add", "__sub", "__mul", "__div", 
    "__mod", "__pow", "__unm","__len", 
    "__lt", "__le", "__concat", "__tostring"
}

-- Metamethods
function Object:__call(...)
    return self:new(...)
end

function Object:__newindex(k, v)
    if k == "init" or getfenv(2)[k] == self then
        rawset(self, "__init", v)
    else
        rawset(self, k, v)
    end
end

-- Constructor
function Object:__init()
end

-- Private/Static methods
function Object:__metamethod(event)
    
    return function(...)
        if self:parent() == self then
            error(event .. " is unimplemented on object.", 2)
        end
        
        local func = rawget(self, event) or rawget(self:parent(), event)
        if type(func) == "function" then return func(...) else return func end
    end
end

-- Methods
function Object:parent()
    return getmetatable(self)
end


function Object:new(...)
    local o = {}
    setmetatable(o, self);


    self.__index = function(self, key)
        -- See if the value is one from our metatable
        -- if it is, then return that.
        local mt = getmetatable(self); 
        local value = rawget(mt,key);
        if value then
            return value;
        end

        -- If the thing is not found, then return the 'forward' method
        local delegate = rawget(self, "__delegate")
        if delegate ~= nil then
            return self:__delegate(key);
        end
    end


    self.__call = Object.__call
    self.__newindex = Object.__newindex
    
    for k,v in pairs(Object.__metamethods) do
        o[v] = self:__metamethod(v)
    end
    
    o:__init(...)
    return o
end

return Object;

Just a little bit of Object/behavior mixin mumbo jumbo. I’ve made a couple of mods from the original, the most interesting being the implementation of the ‘__index’ function. The ‘__index’ function is called whenever you’ve tried to access something in the runtime that doesn’t immediately exist in the table. In the example code above, when I call ‘Messenger:func1()’, the runtime first does: something = Messenger.func1. If it finds something, and that something is a function, it evaluates the function, passing the specified parameters. If it doesn’t find something, then it will throw an error saying ‘something is nil’, and thus it cannot be called.
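A tiny standalone example shows the hook in isolation: __index fires only for keys the table itself doesn’t contain, which is exactly the opening the delegate mechanism exploits.

```lua
-- __index is consulted only when a key lookup misses the table itself.
local obj = setmetatable({ known = "present" }, {
  __index = function(self, key)
    -- only reached when 'key' is missing from the table
    return "made up on demand: " .. key
  end
})

print(obj.known)      -- present
print(obj.whatever)   -- made up on demand: whatever
```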

Alright, so if you look at that ‘__index’ function, the last case says: if there is a function called “__delegate” defined, then return whatever it hands back for the requested key.

OK. So, at this point, if you create an instance of an object, and try to call a function that does not exist, it will fail because there is no ‘__delegate’ function defined. In steps the ‘subclass’.


Messenger = Object();

function Messenger:__init(target)
  self.Target = target;
  
  self.__delegate =  function(self, name, ...)
    return function(self, ...)
      local jsonstr = JSON.encode({...});
      print(name, jsonstr);
    end
  end

  return self;
end

What’s going on here? Basically, the “Messenger” is a subclass of the “Object”. The ‘__init()’ function is called automatically as part of the object/type creation. Within the ‘__init()’, the ‘__delegate()’ function is setup. Isn’t that a weird function to be called? If you go back and look at the ‘__index()’ function, you’ll notice that it returns the result of the ‘__delegate’ method right away. In this case, the ‘Messenger.__delegate()’ function will return a ‘closure’, which is a fancy word for a function enclosed in a function. This happens with iterators as well, but that’s another story. The closure, in this case, is the true workhorse of the operation. In this particular case, all it does is turn the parameters into a json string and print it out, but you can probably imagine doing other things here, like sending the parameters, along with the function name, across a network interface to a receiver, who will actually execute the code in question.
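The receiving end of that exchange isn’t shown above, but it’s the easy half. Assuming the wire format is just the function name plus the JSON-encoded argument list, the receiver only has to look the name up and apply the arguments. Here is a hedged sketch with the JSON decode step stubbed out as an already-decoded table ('handlers' and 'dispatch' are illustrative names, not TINN’s):

```lua
-- Sketch of the receiver: map incoming names onto real functions and
-- apply the decoded arguments. 'handlers' stands in for the baseclass.
local handlers = {
  func1 = function(foo, bar, bas)
    return foo .. " " .. bar .. " " .. bas
  end,
}

local function dispatch(name, args)
  local fn = handlers[name]
  if not fn then
    return nil, "unknown function: " .. tostring(name)
  end
  return fn(unpack(args))  -- table.unpack in Lua 5.2 and later
end

print(dispatch("func1", {"this", "little", "Piggy"}))  -- this little Piggy
```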

And that’s that.

Another example:


local mssngr = Messenger();
mssngr:foo("Hello", 5);

mssngr:bar("what", 'would', 1, "I", {23,14,12}, "do?");

> foo	["Hello",5]
> bar	["what","would",1,"I",[23,14,12],"do?"]

This is a great outcome as far as I’m concerned. It means that I can easily use this Messenger class as a proxy for whatever purposes I have in mind. I can create different kinds of proxies for different situations, simply by replacing the guts of the ‘__delegate()’ function.

Besides being motivated by nostalgia for Objective-C paradigms of the past, I actually have a fairly practical and immediate application. I’ve noticed that Computicles are an almost perfect implementation of the Actor pattern, and the only piece missing was a separable “Messenger” actor. Now I have one, and a mechanism to use to communicate both locally with other computicles within the same process, as well as computicles across networks, without changing any core computicle code. This makes it easier to pass data, call functions, and the like.

At any rate, discovering a fairly straightforward implementation of this pattern has been great, and I’m sure I’ll get tremendous usage out of it. I wonder if it can be applied to making calls to C libraries without first having to specify their interfaces…


Tech Preview 2013

I’ve been champing at the bit to write this post. A new year, a sunny day! I read something somewhere about how futurists got the whole prediction game wrong. Rather than trying to describe the new and exciting things that would be showing up in oh so many years in the future, they should be describing what would be removed from our current context. With that in mind, here are some short to medium term predictions.

More wires will be removed from our environment. The keyboard and mouse wires will be gone, replaced by wireless, bluetooth, or otherwise. The ubiquitous s-video, and various component audio/video cables will disappear. They’ll be replaced with a single wire in the form of hdmi, or they’ll disappear altogether, being replaced by wireless audio/video transmission.

The “desktop Computer” will disappear. Basic “airplay” capability will be baked into monitors of all stripes, so that the compute component of anyone’s environment will consist of nothing more than a little computicle, combined with whatever input devices the user so happens to need for their particular task. “Touch” sensors, such as the Leap Motion, will find their way into more interesting and interactive input activities.

Power Consumption related to computing will continue to shrink. Since cell phones seem to be the focus of current compute innovation, this genre will drive the compute world. With the likes of the Raspberry Pi, and the Odroid-U2 becoming popular forms of System on Chip compute platforms, the raw requirements for a compute experience will be reduced from 80watts to about 5 watts.

A bit longer term…

Drivers will be removed from the driving experience. With BMW, Google, and others experimenting with driverless vehicles, this will eventually become the preferred method of transportation. Particularly with an aging population in places, it will likely be safer to have seniors driven around in nice Prius like pods, rather than having them drive themselves.

“Data Centers” will become irrelevant. This is a bit of a stretch, but the thinking goes like this. A Data Center is a concentration of communications, power consumption, compute capability, and ultimately data storage. They are essentially the Timesharing/mainframe model of 50 years back done up in large centralized format. Why would I use a data center, though? If I had fast internet (100 Mbps or better) to my home/business, would I need a data center? If I have enough compute power in my home, in the form of 3 or 4 communications servers, do I need a data center? If I have 16Tb of storage sitting under my desk at home, do I need a data center? In short, if I eliminate the redundancy, uptime guarantees, etc., that a data center gives me, I can probably supply the same from my home/small business, at an equally affordable cost. As compute and storage costs continue to decrease, and power consumption of my devices goes with them, the tipping point for data center value will change, and doing it from home will become more appealing.

Trending… breaking from the “things that will be removed”, and applying a more traditional “what will come” filter…

“Data” will become less important as “Connections” become more important. In recent years, blogs were interesting, then tweeting, the micro form of the blog, became more interesting. At the same time, Facebook started to emerge, which is kind of like a stretched out tweet/blog. Now along comes Pinterest. And in the background, there’s been Google. Pinterest represents a new form of communication. Gone are the words, replaced by connections. I can infer that I’ll be interested in something you’ll be interested in by seeing how many times I’ve liked other things that you’ve been interested in. If I’m an advertiser, tracking down the most pinned people, and following what they’re pinning is probably a better indicator of relevance than anything Google or Facebook have to offer. The connections are more important than any data you can actually read. In fact, you probably won’t read much, you’ll look at pictures, and tiny captions, and possibly follow some links. If there’s a prediction here, it is that the likes of Google, Facebook, and Twitter will be felled by the likes of Pinterest and others who are driving in the “graph” space.

3D Printing. As this year will see the likely emergence of the Formlabs printer, as well as continued evolution of the ABS and PLA based printers, content for these printers will become more interesting. The price point for the printers will continue to hover around $1500, for the truly “plug and play” variety, as opposed to the DIY variety.

For 3D content design, the Raspberry Pi, and ODroid-U2, will be combined with the LeapMotion input device to create truly spectacular and easy to use 3D design tools, for less than $200 for a hardware/software combination.

As computicles become cheaper, the premium for software will continue to decrease. There will be a consolidation of the hard/software offering. When a decent computicle costs $35 (Raspberry Pi), then it’s hard to justify software costing any more than $5. If you’re a software manufacturer, you will consider creating packages that include both the hardware and software in order to get any sort of premium.

Ubiquitous computicles will see the emergence of a new hardware “hub”. Basically a wifi connected device that attaches to various peripherals such as the LeapMotion, a Kinect, or an HDMI based screen. It will manage the various interactive inputs from the various other computicles located within the home/office. Rather than being a primary compute device itself, it will act as a coordinator of many devices in a given environment.

That’s about it for this year. No doom and gloom zombie scenarios as far as I can see. Some esoteric trending on many fronts, some empire breaking trends, some evolution of technologies which will make our lives a little easier to live.