Microsoft Part II

I joined Microsoft in 1998 to work on MSXML. One of the reasons I joined back then was that MS was in trouble with the DOJ, and competitors were getting more interesting. I thought, “They’re either going down, or they’re going to resurge; either way, it will be a fun ride.”

Here it is, more than 15 years later, and I find my sentiment about the same. Microsoft has been in trouble the past few years: missing a few trends, losing our way, catching our breath as our competitors run farther and faster ahead of us…

In the past 4 years, I’ve been associated with the rise of Azure, and most recently with our various identity services. In the past couple of months, I’ve been heads down working in an internal startup, which is about to deliver bits to the web. That’s 2 months from conception to delivery of a public preview of a product. That’s fairly unheard of for our giant company.

But, today, I saw a blizzard of news that made me think ye olde company has some life left in it yet.

The strictly Microsoft related news…
Windows Azure Active Directory Premium
C# Goes Open Source
TypeScript goes 1.0
Windows 8.1 is FREE for devices less than 9″!!

Of all of these, I think Windows 8.1 going free is probably the most impactful from a ‘game changer’ perspective. Android is everywhere, probably largely because it is ‘free’. I can’t spit in the wind without hitting a new micro device that runs Android and doesn’t run Windows. Perhaps this will begin to change somewhat.

Then there’s peripheral news like…
Intel Galileo board ($99) is fully programmable from Visual Studio
Novena laptop goes for crowd funding

The Novena laptop is very interesting because it’s a substantial offering created by a couple of hardcore engineers. It is clearly a MUST HAVE machine for any self-respecting hard/software hacker. It’s not the most powerful laptop in the world, but that’s beside the point. What it does represent is that some good engineers, hooked up with a solid supply chain, can produce goods that are almost price competitive with commodity goods. That, and the fact that this is just an extraordinary hack machine.

I find the Galileo interesting because, other than some third party support for Arduino programming from MSVC, this is a serious support drive for small things, from Microsoft. Given the previous news about ‘free’ Windows, this Galileo support bodes well. You could conceivably get a $99 ‘computer’ with some form of Windows OS, and use it at the heart of your robot, quadcopter, art display, home automation thing…

Of course, the rest of the tinker market is heading even lower priced with things like the Teensy 3.1 at around $20. No “OS” per se, but surely capable hardware that could benefit from a nicely integrated programming environment and support from Microsoft. But, you don’t want Windows on such a device. You want to leverage some core technologies that Microsoft has in-house, and just apply it in various places. Wouldn’t it be great if all of Microsoft’s internal software was made available as installable packages…

Then there’s the whole ‘internet of things’ angle. Microsoft actually has a bunch of people focused in this space, but there are no public offerings as yet. We’re Microsoft though, so you can imagine what the outcomes might look like. Just imagine lots of tiny little devices all tied to Microsoft services in some way, including good identities and all that.

Out on the fringe, non-Microsoft, there is a new board just back from manufacturing: a WiFi-connected microcontroller that runs node.js (and TypeScript, for that matter). That is bound to have a profound impact for those doing quick and dirty web-connected physical computing.

Having spent the past few weeks coding in C++, I have been feeling the weight of years of piled on language cruft. I’ve been longing for the simplicity of the Lua language, and my beloved TINN, but that will just have to wait a few more weeks. In the meanwhile, I did purchase a Mojo FPGA board, in the hopes that I will once again get into FPGA programming, because “hardware is the new software”.

At the end of the day, I am as excited about the prospects of working at Microsoft as I was in 1998. My enthusiasm isn’t constrained by the possibilities of what Microsoft itself might do, rather I am overjoyed at the pace of development and innovation across the industry. There are new frontiers opening up all the time. New markets to explore, new waves to catch. It’s not all about desktops, browsers, office suites, search engines, phones, and tablets. Every day, there’s a new possibility, and the potential for a new application. Throw in 3D printing, instant manufacturing, and a smattering of BitCoin, and we’re living in a braver new world every day!!

All the pretty little asyncs…

I have gone on about various forms of async for quite some time now. So could there possibly be more? Well, yes of course!

Here’s the scenario I want to enable. I want to keep track of my file system activity, sending the various operations to a distant storage facility. I want to do this while a UI displays what’s going on, and I want to be able to configure things while it’s happening, like which events I really care to shadow, and the like.

I don’t want to use multiple OS level threads if I can at all avoid them, as they will complicate my programming tremendously. So, what to do?

Well, first I’ll start with the file tracking business. I have talked about change journals in the past. This is a basic mechanism that Windows has to track changes to the file system. Every single open, close, delete, write, etc., has an entry in this journal. If you’re writing a backup program, you’ll be using change journals.

The essence of the change journal functionality is usage of the DeviceIoControl() function. Most of us are very familiar with the likes of CreateFile(), ReadFile(), WriteFile(), and CloseHandle() when it comes to dealing with files. But, for everything else, there is this DeviceIoControl() function.

What is a device? Well, you’d be surprised to learn that most things in the Windows OS are represented by ‘devices’, just like they are in UNIX systems. For example, ‘C:’ is a device. But “DISPLAY1” is also a device, as are “LCD” and “PhysicalDisk0”. When it comes to controlling devices, the Win32 level API calls will ultimately make DeviceIoControl() calls with various parameters. That’s great to know, as it allows you to create whatever API you want, as long as you know the nuances of the device driver you’re trying to control.

But, I digress. The key point here is that I can open up a device, and I can make a DeviceIoControl() call, and true to form, I can use OVERLAPPED structures, and IO Completion Ports. That makes these calls “async”, or with TINN, cooperative.

To wrap it up in a tidy little bow, here is a Device class which does the grunt work for me:

local ffi = require("ffi")
local bit = require("bit")
local bor = bit.bor;

local core_file = require("core_file_l1_2_0");
local core_io = require("core_io_l1_1_1");
local Application = require("Application")
local IOOps = require("IOOps")
local FsHandles = require("FsHandles");
local errorhandling = require("core_errorhandling_l1_1_1");
local WinBase = require("WinBase");

local Device = {}
setmetatable(Device, {
	__call = function(self, ...)
		return self:open(...)
	end,
})

local Device_mt = {
	__index = Device,
}

function Device.init(self, rawhandle)
	local obj = {
		Handle = FsHandles.FsHandle(rawhandle)
	}
	setmetatable(obj, Device_mt)

	Application:watchForIO(rawhandle, rawhandle)

	return obj;
end

function, devicename, dwDesiredAccess, dwShareMode)
	local lpFileName = string.format("\\\\.\\%s", devicename);
	dwDesiredAccess = dwDesiredAccess or bor(ffi.C.GENERIC_READ, ffi.C.GENERIC_WRITE);
	dwShareMode = dwShareMode or bor(FILE_SHARE_READ, FILE_SHARE_WRITE);
	local lpSecurityAttributes = nil;
	local dwCreationDisposition = OPEN_EXISTING;
	local dwFlagsAndAttributes = FILE_FLAG_OVERLAPPED;
	local hTemplateFile = nil;

	local handle = core_file.CreateFileA(
		lpFileName,
		dwDesiredAccess,
		dwShareMode,
		lpSecurityAttributes,
		dwCreationDisposition,
		dwFlagsAndAttributes,
		hTemplateFile);

	if handle == INVALID_HANDLE_VALUE then
		return nil, errorhandling.GetLastError();
	end

	return self:init(handle)
end

function Device.getNativeHandle(self)
	return self.Handle.Handle;
end

function Device.createOverlapped(self, buff, bufflen)
	local obj ="FileOverlapped");
	obj.file = self:getNativeHandle();
	obj.OVL.Buffer = buff;
	obj.OVL.BufferLength = bufflen;

	return obj;
end

function Device.control(self, dwIoControlCode, lpInBuffer, nInBufferSize, lpOutBuffer, nOutBufferSize)
	local lpBytesReturned = nil;
	local lpOverlapped = self:createOverlapped(ffi.cast("void *", lpInBuffer), nInBufferSize);

	local status = core_io.DeviceIoControl(self:getNativeHandle(),
		dwIoControlCode,
		ffi.cast("void *", lpInBuffer),
		nInBufferSize,
		lpOutBuffer,
		nOutBufferSize,
		lpBytesReturned,
		ffi.cast("OVERLAPPED *", lpOverlapped));

	local err = errorhandling.GetLastError();

	-- Error conditions
	-- status == 1, err == WAIT_TIMEOUT (258)
	-- status == 0, err == ERROR_IO_PENDING (997)
	-- status == 0, err == something else

	if status == 0 then
		if err ~= ERROR_IO_PENDING then
			return false, err
		end
	end

	local key, bytes, ovl = Application:waitForIO(self, lpOverlapped);

	return bytes;
end

return Device

I’ve shown this kind of construct before with the NativeFile object. That object contains Read and Write functions as well, but lacks the control() function. Of course the two could be combined for maximum benefit.

How to use this thing?

dev = Device("c:")

OK, that’s out of the way. Now, what about this change journal thing? Very simple now that the device is handled.
A change journal can look like this:

-- USNJournal.lua
-- References

local ffi = require("ffi");
local bit = require("bit");
local bor = bit.bor;
local band =;

local core_io = require("core_io_l1_1_1");
local core_file = require("core_file_l1_2_0");
local WinIoCtl = require("WinIoCtl");
local WinBase = require("WinBase");
local errorhandling = require("core_errorhandling_l1_1_1");
local FsHandles = require("FsHandles");
local Device = require("Device")

--[[
	An abstraction for NTFS Change journal management
--]]
local ChangeJournal = {}
setmetatable(ChangeJournal, {
	__call = function(self, ...)
		return self:open(...);
	end,
})

local ChangeJournal_mt = {
	__index = ChangeJournal;
}

ChangeJournal.init = function(self, device)
	local obj = {
		Device = device;
	}
	setmetatable(obj, ChangeJournal_mt);

	local jinfo, err = obj:getJournalInfo();

	print("ChangeJournal.init, jinfo: ", jinfo, err)

	if jinfo then
		obj.JournalID = jinfo.UsnJournalID;
		obj.LowestUsn = jinfo.LowestValidUsn;
		obj.FirstUsn = jinfo.FirstUsn;
		obj.MaxSize = jinfo.MaximumSize;
		obj.MaxUsn = jinfo.MaxUsn;
		obj.AllocationSize = jinfo.AllocationDelta;
	end

	return obj;
end = function(self, driveLetter)
	local device, err = Device(driveLetter)

	if not device then
		print(", ERROR: ", err)
		return nil, err
	end

	return self:init(device);
end

ChangeJournal.getNextUsn = function(self)
	local jinfo, err = self:getJournalInfo();

	if not jinfo then
		return false, err;
	end

	return jinfo.NextUsn;
end

ChangeJournal.getJournalInfo = function(self)
	local dwIoControlCode = FSCTL_QUERY_USN_JOURNAL;
	local lpInBuffer = nil;
	local nInBufferSize = 0;
	local lpOutBuffer ="USN_JOURNAL_DATA");
	local nOutBufferSize = ffi.sizeof(lpOutBuffer);

	local success, err = self.Device:control(dwIoControlCode,
		lpInBuffer,
		nInBufferSize,
		lpOutBuffer,
		nOutBufferSize);

	if not success then
		return false, errorhandling.GetLastError();
	end

	return lpOutBuffer;
end

function ChangeJournal.waitForNextEntry(self, usn, ReasonMask)
	usn = usn or self:getNextUsn();
	local ReasonMask = ReasonMask or 0xFFFFFFFF;
	local ReturnOnlyOnClose = false;
	local Timeout = 0;
	local BytesToWaitFor = 1;

	local ReadData ="READ_USN_JOURNAL_DATA", {usn, ReasonMask, ReturnOnlyOnClose, Timeout, BytesToWaitFor, self.JournalID});

	local pusn ="USN");
	-- This function does not return until a USN
	-- record exists
	local BUF_LEN = ffi.C.USN_PAGE_SIZE;
	local Buffer ="uint8_t[?]", BUF_LEN);
	local dwBytes ="DWORD[1]");

	local success, err = self.Device:control(FSCTL_READ_USN_JOURNAL,
		ReadData,
		ffi.sizeof(ReadData),
		Buffer,
		BUF_LEN);

	if not success then
		return false, err
	end

	local UsnRecord = ffi.cast("PUSN_RECORD", ffi.cast("PUCHAR", Buffer) + ffi.sizeof("USN"));

	return UsnRecord;
end

return ChangeJournal;

This very much looks like the change journal I created a few months back. The primary difference is the device control stuff is abstracted out into the Device object, so it does not need to be repeated here.

When we want to track the changes to the device, we make repeated calls to ‘waitForNextEntry’.

local function test_waitForNextEntry(journal)
    local entry = journal:waitForNextEntry();

    while entry do
        -- handle the entry here (log it, ship it over the network, ...)
        entry = journal:waitForNextEntry();
    end
end

This is your typical serially written code. There’s nothing that looks special about it, no hint of async processing. Behind the covers though, way back in the Device:control() function, the actual sending of a command to the device happens using an IO Completion Port, so if you’re running with TINN, this particular task will ‘waitForIO’, and other tasks can continue.
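The same serial-looking-but-cooperative shape is easy to see in a language with built-in async, so here is a small JavaScript sketch of the idea (an analogy only; the names `watchEntries` and `ticker` are mine, and TINN itself does this with Lua tasks and IO Completion Ports rather than promises):

```javascript
const sleep = ms => new Promise(res => setTimeout(res, ms));

const log = [];

// Reads like straight serial code, but each await yields control.
async function watchEntries() {
  for (let i = 0; i < 2; i++) {
    await sleep(30);           // stands in for waiting on the device
    log.push("entry " + i);    // stands in for handling a journal record
  }
}

// Meanwhile, a "periodic" task keeps running on the same thread.
async function ticker() {
  for (let i = 0; i < 3; i++) {
    await sleep(10);
    log.push("tick");
  }
}

// Both tasks run interleaved on a single thread.
const done = Promise.all([watchEntries(), ticker()]);
done.then(() => console.log(log));
```

The point is the same one the Lua code makes: the waiting task never blocks the timer, and neither needs a mutex.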

So, using it in context looks like this:

local function main()
    local journal, err = ChangeJournal("c:")

    spawn(test_waitForNextEntry, journal);
    periodic(function() print("tick") end, 1000)
end

run(main)


In this case, I spawn the journal waiting/printing thing as its own task. Then I set up a periodic timer to simply print “tick” every second, to show there is some activity.

Since the journaling is cooperative (mostly waiting for io control to complete), the periodic timer, or UI processing, or what have you, is free to run, without any hindrance.

Combine this with the already cooperative UI stuff, and you can imagine how easy it could be to construct the scenario I set out to construct. Since all networking and file system operations in TINN are automatically async, it would be easy to log these values, or send them across a network to be stored for later analysis or what have you.

And there you have it. Async everywhere makes for some very nice scenarios. Being able to do async on any device, whether with standard read/write operations, or io control, makes for very flexible programming.

Next time around, I’m going to show how to do async DNS queries for fun and profit.

Exposing Yourself on the Interwebs – Baby Steps

I have begun a little experiment. Over the past year, I have written quite a bit of code related to networking. I have prototyped a lot of different things, and actually used some of it in a production environment. I have written an HTTP parser, a WebSocket implementation, an XML parser, and myriad wrappers for standard libraries.

So, now the experiment. I want to expose a few web services, running from my home, running on nothing but code that I have written (except for core OS code). How hard could it be?

Yesterday, I packaged up a bit of TINN and put it on my desktop machine to run a very simple http static content server. It’s a Windows box, and of course I could simply run IIS, but that’s a bit of a cheat. So, I started the service:

tinn main.lua

And that’s that. According to my intentions, the only content that should be served up is stuff that’s sitting within the ‘./wwwroot’ directory relative to where I started the service running. This is essentially the server that I outlined previously.

I am an average internet consumer when it comes to home network setup. I have an ASUS router that’s pretty fast and decent, with its various security holes and strengths. At home I am sitting behind its “firewall” protection. But, I do want to expose myself, so what do I do?

Well, I must change the configuration on the router. First of all, I need to get a DNS entry that will point a certain URL to my router. Luckily, the ASUS router has a dynamic DNS service built right in. So, I choose a name (I’ll show that later), and simply select a button, and “Apply”. OK. Now my router is accessible on a well known url/ip. I confirm this by typing that into my web browser, and sure enough, I can connect to my router over the internet. I am prompted for the admin password, and I’m in!

So, the first scary thought is, I hope I chose a password that is relatively strong. I hope I didn’t use the default ‘password’, like so many people do.

Alright. Now I know my router, and thus my network in general, can be accessed through a well known public url. The next thing I need to do is set a static IP address for my web server machine. This isn’t strictly necessary, but as I’m about to enable port forwarding, it will just be easier to use a static IP within my home domain. I set it up as: The HP printer is 1, the Synology box is 2, and everything else gets random numbers.

Next is port forwarding. What I want is to have the web server machine, which is listening on port 8080, receive any traffic coming from the well known url headed to port 8080. I want the following URL to land on this machine and be handled by the web server code that’s running:

So, I set that configuration in the router, and press ‘Apply’…

Back to my browser, type in that URL and voila! It works!

Now I take a pause at this point and ask myself a few questions. First of all, am I really confident enough in my programming skills to expose myself to the wide open internet like this? Second, I ask myself whether my brother or mother could have worked their way through a similar exercise.

Having gotten this far, I’m feeling fairly confident, so I let it run overnight to see what happens. Mind you, I’m not accessing it myself at night, but I wanted to see what would happen just having my router and server hanging out there on the internet.

I came back in the morning, and checked the console output to see what happened, if anything. What I saw was this:


Hah! It happened twice, then never more. Well, that HNAP1 trick is a particular vulnerability of home routers which are configured by default to do automatic configuration stuff. D-Link routers, in particular, are vulnerable to an attack whereby they can be compromised through a well scripted SOAP exchange, starting from here.

I’ve turned off that particular feature of my router, so, I think I luckily dodged that particular bullet.

The funny thing is though, I didn’t advertise my url, and I didn’t tell anyone that there would be an http server hanging out on port 8080. This happened within 8 hours of my service going live. So, it tells you what a teeming pool of hackedness the internet truly is.

The other thing I have learned thus far is that I need a nice logging module. I just so happen to be printing out the URL of each request that comes in, but I should like to have the IP address of the requester, and some more interesting information that you typically find in web logs. So, I’ll have to add that module.

Having started down this path, I have another desire as well. My desktop machine is way too loud, and consumes too much power, to be an always-on web server. So, I’ve ordered the parts to build a nice Shuttle PC which will serve this purpose. It’s a decent enough machine: 256GB SSD, i7, onboard video. I don’t need it to be a gaming rig, nor an HTPC, nor serve any other purpose. It just needs to run whatever web services I come up with, and it must run Windows. This goes towards the purpose-built argument I made about the Surface 2. A machine specific to a specific job, without concern for any other purpose it might have. You could argue that I should just purchase a router that has a built-in web server, or just use the Synology box, which will do this just fine. But, my criteria are that I want to write code, tinker about, and it must run Windows.

And so it begins. I’ve got the basic server up and running, and I’m already popular enough to be attacked. Now I am confident to add some features and content over time to make it more interesting.

Jobs at Microsoft – Working on iOS and Android

Catchy title, isn’t it?  Microsoft, where I am employed, is actually doing a fair bit of iOS and Android work.  In days of yore, “cross platform” used to mean “works on multiple forms of Windows”.  These days, it actually means things like iOS, Android, Linux, and multiple forms of Windows.

I am currently working in the Windows Azure Group.  More specifically, I am working in the area of identity, which covers all sorts of things from Active Directory to single sign on for Office 365.  My own project, the Application Gateway, has been quite an experience in programming with node.js, Android OS, iOS, embedded devices, large world scale servers, and all manner of legal wranglings to ship Open Source for our product.

Recently, my colleague Rich Randall came by and said “I want to create a group of excellence centered around iOS and Android development, can you help me?”.  Of course I said “sure, why not”, so here is this post.

Rich is working on making it easier for devices (non-windows specific) to participate in our “identity ecosystem”.  What does that mean?  Well, the job descriptions are here:

iOS Developer – Develop apps and bits of code to make it relatively easy to leverage the identity infrastructure presented by Microsoft.

Android Developer – Develop apps and bits of code to make it relatively easy to leverage the identity infrastructure presented by Microsoft.

I’m being unfair; these job descriptions were well crafted and more precisely convey the actual needs.  But, what’s more interesting to me is to give a shout out to Rich, and some support for his recruiting efforts.

As Microsoft is “in transition”, it’s worth pointing out that although we may be considered old and stodgy by today’s internet standards, we are still a hotbed of creativity, and actually a great place to work.  Rich is not alone in putting together teams of programmers who have non-traditional Microsoft skillsets.  Like I said, there are plenty that now understand that as a “services and devices” company, we can’t just blindly push the party line and platform components.  We have to meet the market where it is, and that is in the mobile space, with these two other operating systems.

So, if you’re interested in leveraging your iOS and Android skills, delivering code that is open source, being able to do full stack development, and working with a great set of people, please feel free to check out those job listings, or send mail to Rich Randall directly.  I’d check out the listings, then send to Rich.

Yes, this has been a shameless jobs plug.  I do work for the company, and am very interested in getting more interesting people in the door to work with.



Computicles – Inter-computicle communication

Alrighty then, so a computicle is a vessel that holds a bit of computation power. You can communicate with it, and it can communicate with others.

Most computicles do not stand as islands unto themselves, so easily communicating with them becomes very important.

Here is some code that I want to be able to run:

local Computicle = require("Computicle");
local comp = Computicle:load("comp_receivecode");

-- start by saying hello
comp:exec([[print("hello, injector")]]);

-- queue up a quit message
comp:quit();

-- wait for it all to actually go through
comp:waitForFinish();
So, what’s going on here? The first line is a standard “require” to pull in the computicle module.

Next, I create a single instance of a Computicle, running the Lua code that can be found in the file “comp_receivecode.lua”. I’ll come back to that bit of code later. Suffice to say it’s running a simple computicle that does stuff, like execute bits of code that I hand to it.

Further on, I use the Computicle I just created, and call the “exec()” function. I’m passing a string along as the only parameter. What will happen is the receiving Computicle will take that string, and execute the script from within its own context. That’s actually a pretty nifty trick I think. Just imagine, outside the world of scripting, you can create a thread in one line of code, and then inject a bit of code for that thread to execute. Hmmm, the possibilities are intriguing methinks.

The tail end of this code just posts a quit, and then finally waits for everything to finish up. Just note that the ‘quit()’ function is not the same thing as “TerminateThread()”, or “ExitThread()”. Nope, all it does is post a specific kind of message to the receiving Computicle’s queue. What the thread does with that QUIT message is up to the individual Computicle.

Let’s have a look at the code for this computicle:

local ffi = require("ffi");

-- This is a basic message pump
while true do
  local msg = SELFICLE:getMessage();
  msg = ffi.cast("ComputicleMsg *", msg);
  local Message = msg.Message;

  if OnMessage then
    OnMessage(msg);
  else
    if Message == Computicle.Messages.QUIT then
      break;
    end

    if Message == Computicle.Messages.CODE then
      local len = msg.Param2;
      local codePtr = ffi.cast("const char *", msg.Param1);

      if codePtr ~= nil and len > 0 then
        local code = ffi.string(codePtr, len);

        SELFICLE:freeData(ffi.cast("void *", codePtr));

        local f = loadstring(code);
        f();
      end
    end
  end

  SELFICLE:freeMessage(msg);
end

It’s not too many lines. This little Computicle takes care of a few scenarios.

First of all, if there so happens to be a ‘OnMessage’ function defined, it will receive the message, and the main loop will do no further processing of it.

If there is no ‘OnMessage’ function, then the message pump will handle a couple of cases. In case a ‘QUIT’ message is received, the loop will break and the thread/Computicle will simply exit.

When the message == ‘CODE’ things get really interesting. The ‘Param1’ of the message contains a pointer to the actual bit of code that is intended to be executed. The ‘Param2’ contains the length of the specified code.

Through a couple of type casts, and an ffi.string() call, the code is turned into something that can be used with ‘loadstring()’, which is a standard Lua function. It will parse the string, and then when ‘f()’ is called, that string will actually be executed (within the context of the Computicle). And that’s that!

At the end, the ‘SELFICLE:freeMessage()’ is called to free up the memory used to allocate the outer message. Notice that ‘SELFICLE:freeData()’ was used to clean up the string value that was within the message itself. I have intimate knowledge of how this message was constructed, so I know this is the correct behavior. In general, if you’re going to pass data to a computicle, and you intend the Computicle to clean it up, you should use the computicle instance “allocData()” function.
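The receive-and-execute trick itself is not Lua specific. Here is a rough JavaScript analogue of the loadstring()/f() pattern, where a code string arrives as data and is compiled into a callable function (illustrative only; `executeInjected` is a made-up name):

```javascript
// Compile an injected code string into a function, then run it --
// the same two-step dance as Lua's loadstring() followed by f().
function executeInjected(code) {
  const f = new Function(code);  // compile, but do not run yet
  return f();                    // now actually execute it
}

const result = executeInjected("return 6 * 7;");
console.log(result); // 42
```

As in the Lua version, compiling and calling are separate steps, so the receiver can decide when (and whether) the injected code actually runs.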

OK. So, that explains how I could possibly inject some code into a Computicle for execution. That’s pretty nifty, but it looks a bit clunky. Can I do better?

I would like to be able to do the following.

comp.lowValue = 100;
comp.highValue = 200;

In this case, it looks like I’m setting a value on the computicle instance, but in which thread context? Well, what will actually happen is this will get executed within the computicle instance context, and be available to any code that is within the computicle.

We already know that the ‘exec()’ function will execute a bit of code within the context of the running computicle, so the following should now be possible:

comp:exec([[print(" Low: ", lowValue)]]);
comp:exec([[print("High: ", highValue)]])

Basically, just print those values from the context of the computicle. If they were in fact set, then this should print them out. If they were not in fact set, then it should print ‘nil’ for each of them. On my machine, I get the correct values, so that’s an indication that they were in fact set correctly.

How is this bit of magic achieved?

The key is the Lua ‘__newindex’ metamethod. Wha? Basically, if you have a table, and you try to set a value that does not exist, like I did with ‘lowValue’ and ‘highValue’, the ‘__newindex()’ function will be called on your table if you’ve got it set up right. Here’s the associated code of the Computicle that does exactly this.

__newindex = function(self, key, value)
  local setvalue = string.format("%s = %s", key, self:datumToString(value, key));

  return self:exec(setvalue);
end,

That’s pretty straightforward. Just create some string that represents setting whatever value you’re trying to set, and then call ‘exec()’, which is already known to execute within the context of the thread. So, in the case where I have written “comp.lowValue = 100”, this will turn into the string “lowValue = 100”, and that string will be executed, setting a global variable ‘lowValue’ to 100.
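For the curious, the same trick can be mimicked in JavaScript with a Proxy, whose `set` trap plays the role of `__newindex` (an analogy, not TINN code; `makeRemote` and `execFn` are hypothetical names):

```javascript
// A property assignment is intercepted and turned into a code string,
// which is then handed to whatever "remote execution" function you supply.
function makeRemote(execFn) {
  return new Proxy({}, {
    set(target, key, value) {
      // Serialize the assignment, like datumToString() does in Lua.
      execFn(`${String(key)} = ${JSON.stringify(value)}`);
      return true;
    }
  });
}

// For demonstration, collect the generated code strings in an array
// instead of injecting them into another thread.
const sent = [];
const comp = makeRemote(code => sent.push(code));
comp.lowValue = 100;
comp.highValue = 200;
console.log(sent); // [ 'lowValue = 100', 'highValue = 200' ]
```

The shape is identical: an ordinary-looking assignment quietly becomes a message to another execution context.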

And what is this ‘datumToString()’ function? Ah yes, this is the little bit that takes various values and returns their string equivalent, ready to be injected into a running Computicle.

Computicle.datumToString = function(self, data, name)
  local dtype = type(data);
  local datastr = tostring(nil);

  if dtype == "cdata" then
    -- If it is a cdata type that easily converts to
    -- a number, then convert to a number and assign to string
    if tonumber(data) then
      datastr = tostring(tonumber(data));
    else
      -- if not easily converted to number, then just assign the pointer
      datastr = string.format("TINNThread:StringToPointer(%s);",
        TINNThread:PointerToString(data));
    end
  elseif dtype == "table" then
    if getmetatable(data) == Computicle_mt then
      -- package up a computicle
    else
      -- get a json string representation of the table
      datastr = string.format("[[ %s ]]", JSON.encode(data, {indent=true}));
    end
  elseif dtype == "string" then
    datastr = string.format("[[%s]]", data);
  else
    datastr = tostring(data);
  end

  return datastr;
end

The task is actually fairly straight forward. Given a Lua based value, turn it into a string that can be executed in another Lua state. There are of course methods in Lua which will do this, and tons of marshalling frameworks as well. But, this is a quick and dirty version that does exactly what I need.

Of particular note are the handling of cdata and table types. For cdata, some of the values, such as ‘int64_t’, I want to just convert to a number. Tables are the most interesting. This particular technique will really only work for fairly simple tables, that do not make references to other tables and the like. Basically, turn the table into a JSON string, and send that across to be rehydrated as a table.
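To make the branching concrete, here is a toy version of the same serialization decision in JavaScript (a hypothetical helper, not the TINN implementation; it mirrors the number/string/table branches, while the cdata-pointer branch has no JavaScript equivalent):

```javascript
// Given a value, produce source text that reconstructs it in another context.
function datumToString(data) {
  // numbers and booleans can pass straight through as source text
  if (typeof data === "number" || typeof data === "boolean") return String(data);
  // strings need quoting so they parse as a literal on the other side
  if (typeof data === "string") return JSON.stringify(data);
  // plain objects/arrays go through JSON, like the Lua table branch
  if (typeof data === "object" && data !== null) return JSON.stringify(data);
  return String(data);
}

console.log(datumToString(100));      // 100
console.log(datumToString("hi"));     // "hi"
console.log(datumToString({ a: 1 })); // {"a":1}
```

As with the Lua version, this only handles simple, self-contained values; anything with cycles or shared references needs a real marshalling framework.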

Here’s some code that actually makes use of this.

comp.contacts = {
  {first = "William", last = "Adams", phone = "(111) 555-1212"},
  {first = "Bill", last = "Gates", phone = "(111) 123-4567"},
}

comp:exec([[
print("== CONTACTS ==")

-- turn contacts back into a Lua table
local JSON = require("dkjson");

local contable = JSON.decode(contacts);

for _, person in ipairs(contable) do
	print("== PERSON ==");
	for k,v in pairs(person) do
		print(k,v);
	end
end
]]);
Notice that ‘comp.contacts = …’ just assigns the created table directly. This is fine, as there are no other references to the table on ‘this’ side of the computicle, so it will be safely garbage collected after some time.

The rest of the code is using the ‘exec()’, so it is executing in the context of the computicle. It basically gets the value of the ‘contacts’ variable, and turns it back into a Lua table value, and does some regular processing on it (print out all the values).

And that’s about it. From the humble beginnings of being able to inject a bit of code to run in the context of an already running thread, to exchanging tables between two different threads with ease, Computicles make pretty short work of such a task. It all stems from the same three principles of Computicles: listen, compute, communicate. And again, not a single mutex, lock, or other visible form of synchronization. The IOCompletionPort is the single communications mechanism, and all the magic of serialized multi-threaded communication hides behind that.

Of course, ‘code injection’ is a dirty word around the computing world, so there must be a way to secure such transfers? Yah, sure, why not. I’ve been bumping around the identity/security/authorization/authentication space recently, so surely something must be applicable here…

Spelunking Windows – Tokens for fun and profit

I want to shutdown/restart my machine programmatically. There’s an API for that:

-- kernel32.dll
BOOL InitiateSystemShutdownExW(
    LPWSTR lpMachineName,
    LPWSTR lpMessage,
    DWORD dwTimeout,
    BOOL bForceAppsClosed,
    BOOL bRebootAfterShutdown,
    DWORD dwReason);

Wow, it’s that easy?!!

OK. So, I need the name of the machine, some message to display in a dialog box, a timeout, force app closure, reboot or not, and some reason why the shutdown is occurring. That sounds easy enough. So, I’ll just give it a call…

local status = core_shutdown.InitiateSystemShutdownExW(
  nil,    -- nil, so local machine
  nil,    -- no special message
  10,     -- wait 10 seconds
  false,  -- don't force apps to close
  true,   -- reboot after shutdown
  0);     -- no particular reason

And what do I get for my troubles? A failed call; the function returns 0, and GetLastError() reports ERROR_ACCESS_DENIED.

Darn, now I’m going to have to read the documentation.

In the Remarks of the documentation, it plainly states:

To shut down the local computer, the calling thread must have the SE_SHUTDOWN_NAME privilege.

Yah, ok, right then. What’s a privilege? And thus Alice went down into the rabbit’s hole…

As it turns out, there are quite a few concepts in Windows that are related to identity, security, authorization, and the like. As soon as you log into your machine, even if done programmatically, you get this thing called a ‘Token’ attached to your process. The easiest way to think of the token is it’s your electronic proxy and passport. Just like your passport, this token contains some basic identity information about who you are (name, identifying marks…). Some things in the system, such as being able to access a file, can be handled simply by knowing your name. These are simple access rights. But, other things in the system require a ‘visa’, meaning, not only does the operation have to know who you are, but it also needs to know you have the proper permissions to perform the operation you’re about to perform. It’s just like getting a visa stamped into your passport. If I want to travel to India, my passport alone is not enough. I need to get a visa as well. The same is true of this token thing. It’s not enough that I simply have an identity, I must also have a “privilege” in order to perform certain operations.

In addition to having a privilege, I must actually ‘activate’ it. So, yes, the system may have granted me the privilege, but it’s like super powers, you don’t want them to always be active. It’s like when you’re walking down the street in that foreign country you’re visiting. You don’t walk down the street flashing your fancy passport showing everyone the neat visas you have stamped in there. If you do, you’ll likely get a crowd following you trying to relieve you of said passport. So, you generally keep it to yourself, and only flash it when the need arises. So too with token privilege. Yes, you might have the ability to reboot the machine, but you don’t always want to have that privilege enabled, in case some nefarious software so happens to come along to exploit that fact.

Alright, that’s enough analogizing. How about some code? Well, it can be daunting to get your head around the various APIs associated with tokens. To begin with, there is a token associated with the process you’re currently running in, and there is a token associated with every thread you may launch from within that process as well. Generally, you want the process token if you’re single threaded. That’s one API call:

BOOL OpenProcessToken (
    HANDLE ProcessHandle,
    DWORD DesiredAccess,
    PHANDLE TokenHandle
);

This is one of those standard API calls where you pass in a couple of parameters (ProcessHandle, DesiredAccess), and a ‘handle’ is returned (TokenHandle). You then use the handle in subsequent calls to the various API functions. This is ripe for wrapping up in a nice data structure.

I’ve created the ‘Token’ object, as the convenience point. One of the functions in there is this one:

getProcessToken = function(self, DesiredAccess)
  DesiredAccess = DesiredAccess or ffi.C.TOKEN_QUERY;
  local ProcessHandle = core_process.GetCurrentProcess();
  local pTokenHandle = ffi.new("HANDLE[1]");
  local status = core_process.OpenProcessToken(ProcessHandle, DesiredAccess, pTokenHandle);

  if status == 0 then
    return false, errorhandling.GetLastError();
  end

  return Token(pTokenHandle[0]);
end

One of the important things to take note of when you open a token is the DesiredAccess. What you can do with the token afterwards is determined by the access rights you request when you open it. Here are the various options available:

static const int TOKEN_ASSIGN_PRIMARY    =(0x0001);
static const int TOKEN_DUPLICATE         =(0x0002);
static const int TOKEN_IMPERSONATE       =(0x0004);
static const int TOKEN_QUERY             =(0x0008);
static const int TOKEN_QUERY_SOURCE      =(0x0010);
static const int TOKEN_ADJUST_PRIVILEGES =(0x0020);
static const int TOKEN_ADJUST_GROUPS     =(0x0040);
static const int TOKEN_ADJUST_DEFAULT    =(0x0080);
static const int TOKEN_ADJUST_SESSIONID  =(0x0100);

For the case where we want to turn on a privilege that’s attached to the token, we will want to make sure the ‘TOKEN_ADJUST_PRIVILEGES’ access right is attached. It also does not hurt to add the ‘TOKEN_QUERY’ access as well. It’s probably best to use the fewest of these rights necessary to get the job done.
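When more than one of these rights is needed, the flags can be combined with LuaJIT’s bit module (a small sketch; it assumes the constants above have been declared to the ffi, and uses the getProcessToken() function shown earlier):

```lua
local ffi = require("ffi");
local bit = require("bit");

-- Combine adjust-privileges with query into a single access mask.
local DesiredAccess = bit.bor(ffi.C.TOKEN_ADJUST_PRIVILEGES, ffi.C.TOKEN_QUERY);

local token = Token:getProcessToken(DesiredAccess);
```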

Setting a privilege on a token is another bit of work. It’s not hard, but it’s just one of those things where you have to read the docs, and look at a few samples on the internet in order to get it right. Assuming your token has the TOKEN_ADJUST_PRIVILEGES access right on it, you can do the following:

Token.enablePrivilege = function(self, privilege)
  local lpLuid, err = self:getLocalPrivilege(privilege);
  if not lpLuid then
    return false, err;
  end

  local tkp = ffi.new("TOKEN_PRIVILEGES");
  tkp.PrivilegeCount = 1;
  tkp.Privileges[0].Luid = lpLuid;
  tkp.Privileges[0].Attributes = ffi.C.SE_PRIVILEGE_ENABLED;

  local status = security_base.AdjustTokenPrivileges(self.Handle.Handle, false, tkp, 0, nil, nil);

  if status == 0 then
    return false, errorhandling.GetLastError();
  end

  return true;
end

Well, that gets into some data structures, and introduces this thing called a LUID, and that AdjustTokenPrivileges function, and… I get tired just thinking about it. Luckily, once you have this function, it’s a fairly easy task to turn a privilege on and off.
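The getLocalPrivilege() function used above isn’t shown here; a hypothetical sketch of it (assuming an ‘advapi32’ binding and the usual LUID declaration are in scope) would wrap LookupPrivilegeValueA, which turns a privilege name such as “SeShutdownPrivilege” into a LUID:

```lua
-- Hypothetical sketch: look up the locally unique identifier (LUID)
-- for a named privilege on the local system (nil system name).
Token.getLocalPrivilege = function(self, lpName)
  local lpLuid = ffi.new("LUID");
  local status = advapi32.LookupPrivilegeValueA(nil, lpName, lpLuid);

  if status == 0 then
    return false, errorhandling.GetLastError();
  end

  return lpLuid;
end
```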

OK. So, with this little bit of code in hand, I can now do the following:

	local token = Token:getProcessToken(ffi.C.TOKEN_ADJUST_PRIVILEGES);
	token:enablePrivilege("SeShutdownPrivilege");

This just gets a token that is associated with the current process and turns on the privilege that allows us to successfully call the shutdown function.

In totality:

-- test_shutdown.lua
local ffi = require("ffi");

local core_shutdown = require("core_shutdown_l1_1_0");
local errorhandling = require("core_errorhandling_l1_1_1");
local Token = require("Token");

local function test_Shutdown()
  local token = Token:getProcessToken(ffi.C.TOKEN_ADJUST_PRIVILEGES);
  token:enablePrivilege("SeShutdownPrivilege");

  local status = core_shutdown.InitiateSystemShutdownExW(nil, nil, 10, false, true, 0);

  if status == 0 then
    return false, errorhandling.GetLastError();
  end

  return true;
end

print(test_Shutdown());


And finally we emerge back into the light! This will now actually work. It’s funny, when I got this to work correctly, I pointed out to my wife that my machine was rebooting without me touching it. She tried to muster a smile of support, but really, she wasn’t that impressed. But, knowing the amount of work that goes into such a simple task, I gave myself a pat on the back, and smiled inwardly at the greatness of my programming fu.

Tokens are a very powerful thing in Windows. Being able to master both the concepts, and the API calls themselves, gives you a lot of control over what happens with your machine.
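As one more illustration of where this can go (not part of the Token object above; a hypothetical sketch that assumes the TOKEN_ELEVATION struct and the TokenElevation enumeration value have been declared to the ffi, and that an ‘advapi32’ binding is in scope), a token opened with TOKEN_QUERY can be interrogated with GetTokenInformation:

```lua
-- Hypothetical sketch: ask whether the token is elevated.
Token.isElevated = function(self)
  local elevation = ffi.new("TOKEN_ELEVATION");
  local dwSize = ffi.new("DWORD[1]");
  local status = advapi32.GetTokenInformation(self.Handle.Handle,
      ffi.C.TokenElevation, elevation, ffi.sizeof(elevation), dwSize);

  if status == 0 then
    return false, errorhandling.GetLastError();
  end

  return elevation.TokenIsElevated ~= 0;
end
```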

Taming VTables with Aplomb

If you do enough interop work, you’ll eventually run across a VTable that you’re going to have to work with.  I have previously dealt with OpenGL, which doesn’t strictly have a vtable, but does have a bunch of functions you have to look up in order to use.  I explored the topic in this article: HeadsUp OpenGL Extension Wrangling

Recently, I have been writing code to support TLS connections in TINN.  This ultimately involves using the SSPI interfaces in Windows, which leads you to the sspi.h header file, which contains the following:


typedef struct _SECURITY_FUNCTION_TABLE_A {
    unsigned long                       dwVersion;
    ENUMERATE_SECURITY_PACKAGES_FN_A    EnumerateSecurityPackagesA;
    QUERY_CREDENTIALS_ATTRIBUTES_FN_A   QueryCredentialsAttributesA;
    ACQUIRE_CREDENTIALS_HANDLE_FN_A     AcquireCredentialsHandleA;
    FREE_CREDENTIALS_HANDLE_FN          FreeCredentialHandle;
    void *                      Reserved2;
    INITIALIZE_SECURITY_CONTEXT_FN_A    InitializeSecurityContextA;
    ACCEPT_SECURITY_CONTEXT_FN          AcceptSecurityContext;
    COMPLETE_AUTH_TOKEN_FN              CompleteAuthToken;
    DELETE_SECURITY_CONTEXT_FN          DeleteSecurityContext;
    APPLY_CONTROL_TOKEN_FN              ApplyControlToken;
    QUERY_CONTEXT_ATTRIBUTES_FN_A       QueryContextAttributesA;
    IMPERSONATE_SECURITY_CONTEXT_FN     ImpersonateSecurityContext;
    REVERT_SECURITY_CONTEXT_FN          RevertSecurityContext;
    MAKE_SIGNATURE_FN                   MakeSignature;
    VERIFY_SIGNATURE_FN                 VerifySignature;
    FREE_CONTEXT_BUFFER_FN              FreeContextBuffer;
    QUERY_SECURITY_PACKAGE_INFO_FN_A    QuerySecurityPackageInfoA;
    void *                      Reserved3;
    void *                      Reserved4;
    EXPORT_SECURITY_CONTEXT_FN          ExportSecurityContext;
    IMPORT_SECURITY_CONTEXT_FN_A        ImportSecurityContextA;
    ADD_CREDENTIALS_FN_A                AddCredentialsA ;
    void *                      Reserved8;
    QUERY_SECURITY_CONTEXT_TOKEN_FN     QuerySecurityContextToken;
    ENCRYPT_MESSAGE_FN                  EncryptMessage;
    DECRYPT_MESSAGE_FN                  DecryptMessage;
    SET_CONTEXT_ATTRIBUTES_FN_A         SetContextAttributesA;
    SET_CREDENTIALS_ATTRIBUTES_FN_A     SetCredentialsAttributesA;
    CHANGE_PASSWORD_FN_A                ChangeAccountPasswordA;
} SecurityFunctionTableA, * PSecurityFunctionTableA;

You get at this function table by making the following call:

local sspilib = ffi.load("secur32");
local VTable = sspilib.InitSecurityInterfaceA();

And then, to execute one of the functions, you could do this:

local pcPackages = ffi.new("int[1]");
local ppPackageInfo = ffi.new("PSecPkgInfoA[1]");
local result = VTable["EnumerateSecurityPackagesA"](pcPackages, ppPackageInfo);

-- Print names of all security packages
for i=0,pcPackages[0]-1 do
  print(ffi.string(ppPackageInfo[0][i].Name));
end

Tada!! What could be simpler…

Well, this is Lua of course, so things could be made a bit simpler.

First of all, why is there even a vtable in this case? All these functions are just in the .dll file directly, aren’t they? Well, there’s a bit of trickery when it comes to security packages. It turns out, it’s best not to load the .dll that represents the security package directly into the address space of the program that’s using it. By calling “InitSecurityInterface()”, the actual package is loaded into a different address space, and the vtable is then used to access the functions.

You can make multiple calls to InitSecurityInterface() to get that vtable pointer, or you could stuff it into a global variable, making it available to all modules within your program, or you could wrap it in a bit of a table and make life much easier.

-- sspi.lua
local ffi = require("ffi");

local sspi_ffi = require("sspi_ffi");
local SecError = require ("SecError");
local sspilib = ffi.load("secur32");
local SecurityPackage = require("SecurityPackage");
local Credentials = require("CredHandle");
local schannel = require("schannel");

local SecurityInterface = {
  VTable = sspilib.InitSecurityInterfaceA();
}

setmetatable(SecurityInterface, {
  __index = function(self, key)
    return self.VTable[key];
  end,
});

return {
  schannel = schannel;
  SecurityInterface = SecurityInterface;
  SecurityPackage = SecurityPackage;
  Credentials = Credentials;
}
With this little bit, I can now do this in my program:

local sspi = require("sspi");
local SecurityInterface = sspi.SecurityInterface;

local pcPackages = ffi.new("int[1]");
local ppPackageInfo = ffi.new("PSecPkgInfoA[1]");

local result = SecurityInterface.EnumerateSecurityPackagesA(pcPackages, ppPackageInfo);

The SecurityInterface table takes care of loading the VTable as part of its construction. By doing the setmetatable, and implementing the ‘__index’ metamethod, whenever a ‘.functionname’ is asked for, as with ‘.EnumerateSecurityPackagesA’, the element within the vtable with that name will be returned. Those elements so happen to be function pointers, so they can then just be called like regular functions!

I think that’s a pretty awesome trick. The SecurityInterface table looks like a static structure with function pointers, and you just get to call those functions directly, passing in the appropriate arguments. This looks pretty much exactly like what I would expect if I were writing this in C, but I don’t have to worry about type casts and the like.

This works in this particular case because there is a single table representing the function pointers. If you were instead doing something where there were instances of an object, and an attendant vtable, you’d have to do a little bit more work to preserve the instance data, and pass it into the individual functions. Not too hard, and I actually do this trick in my Kinect interface implementation.
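That instance-plus-vtable case can be sketched with a closure (hypothetical names, not from the Kinect code): the ‘__index’ metamethod returns a function that inserts the instance pointer as the first argument, COM-style.

```lua
-- Wrap an instance pointer and its vtable so that calls like
-- obj.SomeMethod(a, b) become vtable.SomeMethod(pInstance, a, b).
local function wrapInstance(pInstance, vtable)
  return setmetatable({}, {
    __index = function(self, key)
      local fn = vtable[key];
      return function(...)
        return fn(pInstance, ...);
      end
    end,
  });
end
```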

At any rate, that’s a relatively easy way to tackle vtables without much work. It was actually a bit surprising to me that it worked so easily, and I’ve been able to refine a pattern that I somewhat understood before, and now truly appreciate.

Confessions of a Microsoft Engineer

It’s perhaps not a well known fact that I work for Microsoft. I don’t believe I hide it that much, but neither do I actively talk about it, at least not on this blog. But, today I’m going to talk about my real world work.

For the past year, I’ve been part of a small team (2 at minimum, 4 at max, presently 3) of engineers, working on a particular problem. The problem, simply stated, is: Can Microsoft provide an ‘authenticated pipe’ that allows anyone with a browser to connect to their corporate assets behind a firewall?

The scenario is simple. You’ve got an iPad or Android device, you’re sitting in a cafe, or at home, and you want to gain access to http://corpnetsite. You want to feel like you’re on your corporate network, without actually being on your corporate network. Furthermore, you want to be able to achieve this freedom without having to turn to your IT department to request a punch through the firewall, or the setup of VPN, because it’s a challenge for the IT department to believe that end users should be able to do such a thing. Most importantly, you want to do this little task without seriously compromising the integrity of your corporate network, and exposing it to the ravages of the open internet.

Our small team first prototyped, and recently has made freely available, what we call the “Application Gateway”. If you want to check it out, you can go to the Portal Page, and read about it.

What I really want to blog about is how our small team got it done, what tools we used, what processes we followed, how we’re doing the sales/marketing, and the like.  There’s really a lot to talk about.  But, as this is more of a confessional first post, I’ll be brief.

A bit of history (or, short form of my resume).

I have worked at Microsoft for the past 14 years.  I joined the company in 1998 because a colleague of mine (Chris Lovett) told me they were working on this cool thing called XML and that I might be perfect for the team.  I had worked with Chris while he was at Taligent.  My then company, Adamation, had sold some core technology to Taligent, so I was onsite.  So, I helped birth XML, and System.Xml, System.Data, a bunch of data stuff, and ultimately this thing called Xen, which served as a prototype for what is today known as LINQ.  Then I did something completely different: went to India, created the Engineering Excellence team there, lived and worked in Hyderabad for three years, before returning to Redmond, just in time to work on the first service available on this new thing called “Azure”.  That service was ACS (Access Control Service), which is about issuing claims/tokens for controlling things like Office Online, or any other application.

So, last year, I hooked up with a long time colleague, and Microsoft Technical Fellow, John Shewchuk, to tackle this particular problem space.  Working for a Technical Fellow at Microsoft is an interesting experience, not for the faint of heart, or the insecure.  John is one of those guys who wonders out loud about a problem, and then says “go build it”.  The “go build it” part is extremely interesting, because at that point, you’re set free to get the job done however you see fit.  Do you need to hire engineers?  Do you need to buy a company?  Do you need to use technologies not standard to Microsoft?  Whatever you need to do, go do it.

As you might guess from my blog history, I personally spent a lot of time with Lua over the past year.  That was primarily for prototyping various pieces of the puzzle.  We also used node, and node is what we use for various production pieces.  At the same time, we coded up some pieces on Android, and iOS.  The iOS piece, what we call the “Application Gateway Browser”, was most interesting because the bits are available in open source form, as well as downloadable from the Apple App Store.  In fact, the application is free for anyone, and we encourage people to use it, look at the code, change it, etc.  We’re all about the pipe itself, not the end user browser code.

After working on the project for about 9 months, we came to the inevitable place of “yah, this might actually work, we better start thinking about getting the word out”.  If you know anything about Microsoft engineering, you might guess that a team of 3-4 people is not typical.  Big things are usually done by a minimum of about 30 people, and they can take a very long time.  That’s because in the ‘on premises’ world, where you only get to deploy new bits to customers once every few years, you have to expend an extraordinary amount of energy ensuring you catch as many bugs as possible up front, because the cost of fixing them down the line is so extremely high.  With cloud services though, the world is different.  Finding a bug, creating a fix, and deploying it can occur in a matter of hours, or minutes, so you can run much more lean.  At any rate, here we are at the point where we want to get the word out.

This part of the story is just beginning.  For the moment, we’re allowing ourselves to blog personally, telling our friends and family to kick the tires, and slowly but surely getting the word out.

It’s actually very fun.  I feel about the same as I did when I was running my own company.  Imagine that, a startup hatched within the confines of a mega corporation.  Yes, there is a Santa Claus!

At any rate, this is a bit of a coming out.  In the near future, I’ll probably be posting more on our little project, because it has been quite fun, and we’ve done a lot of what I consider to be interesting tech.

So, if you’re a regular reader here, please do check out our portal at:  and see what it’s all about.  Keep in mind, this is work done by a few engineers, not a highly polished marketing piece.  It’s rough, but it’s representative of our passion and love for the product we’ve worked on for the past year.