Spelunking Windows – Extracting pleasure from legacy APIs

If you are a modern programmer of Windows apps, there are numerous frameworks for you, hundreds of SDKs, scripted wrappers, IDEs to hide behind, and just layers upon layers of goodness to keep you safe and sane.  So, when it comes to using some of the core Windows APIs directly, you can be forgiven for not even knowing they exist, let alone how to use them from your favorite environment.

I’ve done a ton of exploration on the intricacies of the various Linux interfaces; Spelunking Linux goes over everything from auxv to procfs, and quite a few in between.  But what about Windows?  Well, I’ve recently embarked on a new project, lj2win32 (not to be confused with the earlier LJIT2Win32).  The general purpose of this project is to bring the goodness of TINN to the average LuaJIT developer.  Whereas TINN is a massive project that strives to cover the entirety of the known world of common Windows interfaces, and provides a ready-to-go multi-tasking programming environment, lj2win32 is almost the opposite.  It does not provide its own shell; rather, it just provides the raw bindings necessary for the developer to create whatever they want.  It’s intended to be a simple luarocks install, much in the way ljsyscall works for creating a standard binding to UNIX-like systems without much fuss or intrusion.

In creating this project, I’ve tried to adhere to a couple of design principles to meet some objectives.

First objective is that it must ultimately be installable using luarocks.  This means that I have to be conscious about the organization of the file structure.  To wit, everything of consequence lives in a ‘win32’ directory.  The package name might ultimately be ‘win32’.  Everything is referenced from there.

Second objective, provide the barest minimum bindings.  Don’t change names of things, don’t introduce non-windows semantics, don’t create several layers of class hierarchies to objectify the interfaces.  Now, of course there are some very simple exceptions, but they should be fairly limited.  The idea being, anyone should be able to take this as a bare minimum, and add their own layers atop it.  It’s hard to resist objectifying these interfaces though, and everything from Microsoft’s ancient MFC, ATL, and every framework since, has thrown layers of object wrappers on the core Win32 interfaces.  In this case, wrappers and other suggestions will show up in the ‘tests’ directory.  That is fertile ground for all manner of fantastical object wrapperage.

Third objective, keep the dependencies minimal.  If you do programming in C on Windows, you include a couple of well-known header files at the beginning of your program, and the whole world gets dragged in.  Everything is pretty much in a global namespace, which can lead to some bad conflicts, but they’ve been worked out over time.  In lj2win32, there are only a couple of things in the global namespace; everything else is either in some table, or within the ffi.C facility.  Additionally, the wrappings are clustered in a way that follows the Windows API Sets.  API sets are a mechanism Windows has for pulling apart interdependencies in the various libraries that make up the OS.  In short, an API set is just a name (which happens to end in ‘.dll’) used by the loader to load in various functions.  If you use these special names, instead of the traditional ‘kernel32’ or ‘advapi32’, you might pull in a smaller set of stuff.
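
Just to make that concrete, here is a hedged illustration of binding a function through an API set name rather than through ‘kernel32’.  The specific set name (api-ms-win-core-sysinfo-l1-1-0) is my assumption about where GetTickCount lives, and may vary across Windows versions:

local ffi = require("ffi")

ffi.cdef[[
uint32_t GetTickCount(void);
]]

-- load through the API set name instead of the traditional 'kernel32';
-- the loader resolves the name to whatever library actually implements it
local sysinfo = ffi.load("api-ms-win-core-sysinfo-l1-1-0")

print(sysinfo.GetTickCount())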

With all that, I thought I’d explore one particular bit of minutia as an example of how things could go.

The GetSystemMetrics() function call is a sort of dumping ground for a lot of UI system information.  Here’s where you can find things like how big the screen is, how many monitors there are, how many pixels are used for the menu bars, and the like.  Of course this is just a wrapper on items that probably come from the registry, or various devices and tidbits hidden away in other databases throughout the system, but it’s the convenient developer friendly interface.

The signature looks like this:

int WINAPI GetSystemMetrics(
_In_ int nIndex
);

A simple enough call. And a simple enough binding:

ffi.cdef[[
int GetSystemMetrics(int nIndex);
]]

Of course, there is the ‘nIndex’ parameter, which in the Windows headers is covered by a bunch of manifest constants; in LuaJIT these might be defined thus:

ffi.cdef[[
	// Used for GetSystemMetrics
static const int	SM_CXSCREEN = 0;
static const int	SM_CYSCREEN = 1;
static const int	SM_CXVSCROLL = 2;
static const int	SM_CYHSCROLL = 3;
static const int	SM_CYCAPTION = 4;
static const int	SM_CXBORDER = 5;
static const int	SM_CYBORDER = 6;
]] 

Great. Then I can simply do

local value = ffi.C.GetSystemMetrics(ffi.C.SM_CXSCREEN)

Fantastic, I’m in business!

So, this meets the second objective of a bare minimum binding. But it’s not a very satisfying programming experience for the LuaJIT developer. How about just a little bit of sugar? Well, I don’t want to violate that same second objective of non-wrapperness, so I’ll create a separate thing in the tests directory. The systemmetrics.lua file contains a bit of an exploration in getting system metrics.

It starts out like this:

local ffi = require("ffi")
local errorhandling = require("win32.core.errorhandling_l1_1_1");

ffi.cdef[[
int GetSystemMetrics(int nIndex);
]]

local exports = {}

local function SM_toBool(value)
	return value ~= 0
end

Then defines something like this:

exports.names = {
    SM_CXSCREEN = {value = 0};
    SM_CYSCREEN = {value = 1};
    SM_CXVSCROLL = {value = 2};
    SM_CYHSCROLL = {value = 3};
    SM_CYCAPTION = {value = 4};
    SM_CXBORDER = {value = 5};
    SM_CYBORDER = {value = 6};
    SM_CXDLGFRAME = {value = 7};
    SM_CXFIXEDFRAME = {value = 7};
    SM_CYDLGFRAME = {value = 8};
    SM_CYFIXEDFRAME = {value = 8};
    SM_CYVTHUMB = {value = 9};
    SM_CXHTHUMB = {value = 10};
    SM_CXICON = {value = 11};
    SM_CYICON = {value = 12};
    SM_CXCURSOR = {value = 13};
    SM_CYCURSOR = {value = 14};
    SM_CYMENU = {value = 15};
    SM_CXFULLSCREEN = {value = 16};
    SM_CYFULLSCREEN = {value = 17};
    SM_CYKANJIWINDOW = {value = 18, converter = SM_toBool};
    SM_MOUSEPRESENT = {value = 19, converter = SM_toBool};
    SM_CYVSCROLL = {value = 20};
    SM_CXHSCROLL = {value = 21};
    SM_DEBUG = {value = 22, converter = SM_toBool};
    SM_SWAPBUTTON = {value = 23, converter = SM_toBool};
    SM_RESERVED1 = {value = 24, converter = SM_toBool};
    SM_RESERVED2 = {value = 25, converter = SM_toBool};
    SM_RESERVED3 = {value = 26, converter = SM_toBool};
    SM_RESERVED4 = {value = 27, converter = SM_toBool};
}

And finishes with a flourish like this:

local function lookupByNumber(num)
	for key, entry in pairs(exports.names) do
		if entry.value == num then
			return entry;
		end
	end

	return nil;
end

local function getSystemMetrics(what)
	local entry = nil;
	local idx = nil;

	if type(what) == "string" then
		entry = exports.names[what]
		idx = entry.value;
	else
		idx = tonumber(what)
		if not idx then 
			return nil;
		end
		
		entry = lookupByNumber(idx)

        if not entry then return nil end
	end

	local value = ffi.C.GetSystemMetrics(idx)

    if entry.converter then
        value = entry.converter(value);
    end

    return value;
end

-- Create C definitions derived from the names table
function exports.genCdefs()
    for key, entry in pairs(exports.names) do
        ffi.cdef(string.format("static const int %s = %d", key, entry.value))
    end
end

setmetatable(exports, {
	__index = function(self, what)
		return getSystemMetrics(what)
	end,
})

return exports

All of this allows you to do a couple of interesting things. First, what if you wanted to print out all the system metrics? This same technique can be used to put all the metrics into a table to be used within your program.

local sysmetrics = require("systemmetrics");

local function testAll()
    for key, entry in pairs(sysmetrics.names) do
        local value, err = sysmetrics[key]
        if value ~= nil then
            print(string.format("{name = '%s', value = %s};", key, value))
        else
            print(key, err)
        end
    end
end

OK, so what? Well, the systemmetrics.names is a dictionary matching a symbolic name to the value used to get a particular metric. And what’s this magic with the ‘sysmetrics[key]’ thing? Well, let’s take a look back at that hand waving from the systemmetrics.lua file.

setmetatable(exports, {
	__index = function(self, what)
		return getSystemMetrics(what)
	end,
})

Oh, I see now, it’s obvious…

So, what’s happening here with the setmetatable thing is, Lua has a way of attaching functions to a table which dictate the behavior it will exhibit in certain situations. In this case, the ‘__index’ function, if it exists, will take care of the cases where you try to look something up and it isn’t directly in the table. So, in our example, doing the ‘sysmetrics[key]’ thing is essentially saying, “Try to find a value with the string associated with ‘key’. If it’s not found, then do whatever is associated with the ‘__index’ value”. In this case, ‘__index’ is a function, so that function is called, and whatever it returns becomes the value associated with that key.

I know, it’s a mouthful, and metatables are one of the more challenging aspects of Lua to get your head around, but once you do, it’s a powerful concept.

How about another example which will be a more realistic and typical case.

local function testSome()
    print(sysmetrics.SM_MAXIMUMTOUCHES)
end

In this case, the exact same mechanism is at play. In Lua, there are two ways to get a value out of a table. The first one we’ve already seen, where the ‘[]’ notation is used, as if the thing were an array. In the ‘testSome()’ case, the ‘.’ notation is being used. This is accessing the table as if it were a data structure, but it’s exactly the same as accessing it as an array, at least as far as the invocation of the ‘__index’ function is concerned. The ‘SM_MAXIMUMTOUCHES’ is taken as a string value, so it’s the same as doing sysmetrics['SM_MAXIMUMTOUCHES'], and from the previous example, we know how that works out.

Now, there’s one more thing to note from this little escapade. The implementation of the helper function:

local function getSystemMetrics(what)
	local entry = nil;
	local idx = nil;

	if type(what) == "string" then
		entry = exports.names[what]
		idx = entry.value;
	else
		idx = tonumber(what)
		if not idx then 
			return nil;
		end
		
		entry = lookupByNumber(idx)

        if not entry then return nil end
	end

	local value = ffi.C.GetSystemMetrics(idx)

    if entry.converter then
        value = entry.converter(value);
    end

    return value;
end

There’s all manner of nonsense in here. The ‘what’ can be either a string or something that can be converted to a number. This is useful because it allows you to pass in symbolic names like “SM_CXBLAHBLAHBLAH” or a number 123. That’s great depending on what you’re interacting with and how the values are held. You might have some UI for example where you just want to use the symbolic names and not deal with numbers.
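
A small illustration of both forms, going through the metatable described earlier (these calls are my own examples, not taken from the tests directory):

local sysmetrics = require("systemmetrics")

print(sysmetrics.SM_CXSCREEN)   -- lookup by symbolic name
print(sysmetrics[0])            -- lookup by raw index; same metric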

The other thing of note is that ‘entry.converter’ bit at the end. If you look back at the names table, you’ll notice that some of the entries have a ‘converter’ field associated with them. This is an optional function that can be associated with an entry. If it exists, it is called, with the value from the system call passed to it. In most cases, what the system returns is a number (number of mouse buttons, size of screen, etc). In some cases, the value returned is ‘0’ for false, and non-zero for true. Well, as a Lua developer, I’d rather just get a bool in those cases where it’s appropriate, and this helper function is in a position to provide that for me. This is great because it means I don’t have to check the documentation to figure it out.

There’s one more tiny gem hidden in all this madness.

function exports.genCdefs()
    for key, entry in pairs(exports.names) do
        ffi.cdef(string.format("static const int %s = %d", key, entry.value))
    end
end

What does this do exactly? Simply, it generates those constants in the ffi.C space, so that you can still do this:

ffi.C.GetSystemMetrics(ffi.C.SM_MAXIMUMTOUCHES)
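
So a sketch of typical usage would be to call genCdefs() once, and then use the constants just like the raw binding at the top of this article (assuming the same module path as in the tests):

local ffi = require("ffi")
local sysmetrics = require("systemmetrics")

-- make the SM_* names visible in the ffi.C namespace
sysmetrics.genCdefs()

print(ffi.C.GetSystemMetrics(ffi.C.SM_CXSCREEN))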

So, there you have it. You can go with the raw, traditional sort of ffi binding, or you can spice things up and make them more useful with a little bit of effort. I like doing the latter, because I can generate the more traditional binding from the table of names that I’ve created. That’s a useful thing for documentation purposes, and in general.

I have stuck to my objectives, and this little example just goes to prove how esoteric minute details can be turned into approachable things of beauty with a little bit of Lua code.


schedlua – refactor compactor

The subject of scheduling and async programming has been a long running theme in my blog.  From the very first entries related to LJIT2Win32, through the creation of TINN, and most recently (within the past year), the creation of schedlua, I have been exploring this subject.  It all kind of started innocently enough.  When node.js was born, and libuv was ultimately released, I thought to myself, ‘what prevents anyone from doing this in LuaJIT without the usage of any external libraries whatsoever?’

It’s been a long road.  There’s really no reason for this code to continue to evolve.  It’s not at the center of some massively distributed system.  These are merely bread crumbs left behind, mainly for myself, as I explore and evolve a system that has proven itself to be useful at least as a teaching aid.

In the most recent incarnation of schedlua kernel, I was able to clean up my act with the realization that you can implement all higher level semantics using a very basic ‘signal’ mechanism within the kernel.  That was pretty good as it allowed me to easily implement the predicate system (when, whenever, waitForTruth, signalOnPredicate).  In addition, it allowed me to reimplement the async io portion with the realization that a task waiting on IO to occur is no different than a task waiting on any other kind of signal, so I could simply build the async io atop the signaling.

schedlua has largely been a Linux based project, until now.  The crux of the difference between Linux and Windows comes down to two things in schedlua.  The first thing is timing operations; basically, how do you get a microsecond-accurate clock on the system?  On Linux, I use the clock_gettime() system call.  On Windows, I use QueryPerformanceCounter() and QueryPerformanceFrequency().  In order to isolate these, I put them into their own platform-specific timeticker.lua file, and each just has to surface a ‘seconds()’ function.  The differences are abstracted away, and the common interface is that of a stopwatch class.
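
As a rough sketch (not the actual schedlua file), the Windows side of timeticker.lua could be as small as this, with the QueryPerformance* signatures simplified to int64_t out parameters:

local ffi = require("ffi")

ffi.cdef[[
int QueryPerformanceFrequency(int64_t *lpFrequency);
int QueryPerformanceCounter(int64_t *lpPerformanceCount);
]]

-- the frequency is fixed at boot, so query it once
local freq = ffi.new("int64_t[1]")
ffi.C.QueryPerformanceFrequency(freq)
local frequency = tonumber(freq[0])

local function seconds()
	local counter = ffi.new("int64_t[1]")
	ffi.C.QueryPerformanceCounter(counter)
	return tonumber(counter[0]) / frequency
end

return {
	seconds = seconds;
}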

That was good for time, but what about alarms?

The functions in schedlua related to alarms are: delay, periodic, runningTime, and sleep.  Together, these allow you to run things based on time, as well as delay the current task as long as you like.  My first implementation of these routines, going all the way back to the TINN implementation, was to run a separate ‘watchdog’ task, which in turn maintained its list of waiting tasks and scheduled them.  Recently, I thought, “why can’t I just use the ‘whenever’ semantics to implement this?”.

Now, the implementation of the alarm routines comes down to this:


local function taskReadyToRun()
	local currentTime = SWatch:seconds();

	-- traverse through the fibers that are waiting
	-- on time
	local nAwaiting = #SignalsWaitingForTime;

	for i=1,nAwaiting do
		local task = SignalsWaitingForTime[1];
		if not task then
			return false;
		end

		if task.DueTime <= currentTime then
			return task
		else
			return false
		end
	end

	return false;
end

local function runTask(task)
    signalOne(task.SignalName);
    table.remove(SignalsWaitingForTime, 1);
end

Alarm = whenever(taskReadyToRun, runTask)

The Alarm module still keeps a list of tasks that are waiting for their time to execute, but instead of using a separate watchdog task to keep track of things, I simply use the schedlua built-in ‘whenever’ function. This basically says, “whenever the function ‘taskReadyToRun()’ returns a non-false value, call the function ‘runTask()’ passing the parameter from taskReadyToRun()”. Convenient, end of story, simple logic using words that almost feel like an English sentence to me.

I like this kind of construct for a few reasons. First of all, it reuses code. I don’t have to code up that specialized watchdog task time and time again. Second, it wraps up the async semantics of the thing. I don’t really have to worry about explicitly calling spawn, or anything else related to multi-tasking. It’s just all wrapped up in that one word ‘whenever’. It’s relatively easy for me to explain this code, without mentioning semaphores, threads, conditions, or whatever. I can tell a child “whenever this is true, do that other thing”, and they will understand it.
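
To make that concrete, here is a hedged sketch of how a sleep() routine could sit atop the same alarm list. The signalOne/waitForSignal/SWatch names follow the snippets above, but the exact bookkeeping in schedlua may differ:

local function sleep(secs)
	local task = {
		DueTime = SWatch:seconds() + secs;
		SignalName = "sleep-"..tostring(SWatch:seconds());
	}
	table.insert(SignalsWaitingForTime, task)

	-- keep the soonest due time at the front of the list, since
	-- taskReadyToRun() above only ever looks at the first entry
	table.sort(SignalsWaitingForTime, function(a,b) return a.DueTime < b.DueTime end)

	-- park the current task until the Alarm fires its signal
	return waitForSignal(task.SignalName)
end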

So, that’s it. First I used signals as the basis to implement higher order functions, such as the predicate based flow control. Now I’m using the predicate based flow control to implement yet other functions such as alarms. Next, I’ll take that final step and do the same to the async IO, and I’ll be back to where I was a few months back, but with a much smaller codebase, and cross platform to boot.


Spelunking Linux – procfs or is that sysctl?

Last time around, I introduced some simple things with lj2procfs.  Being able to simply access the contents of the various files within procfs is quite a convenience.  Really, what lj2procfs is doing is just giving you a common interface to the data in those files.  Everything shows up as simple lua values, typically tables containing strings and numbers.  That’s great for most of what you’d be doing with procfs, which is just taking a look at things.

But, on Linux, procfs has another capability.  The /proc/sys directory contains a few interesting directories of its own:


abi/
debug/
dev/
fs/
kernel/
net/
vm/

And if you look into these directories, you find some more interesting files. For example, in the ‘kernel/’ directory, we can see a little bit of this:

hostname
hotplug
hung_task_check_count
hung_task_panic
hung_task_timeout_secs
hung_task_warnings
io_delay_type
kexec_load_disabled
keys
kptr_restrict
kstack_depth_to_print
max_lock_depth
modprobe
.
.
.

Now, these are looking kind of interesting. These files contain typically tunable portions of the kernel. On other unices, these values might be controlled through the sysctl() function call. On Linux, that function would just manipulate the contents of these files. So, why not just use lj2procfs to do the same?

Let’s take a look at a few relatively simple tasks. First, I want to get the version of the OS running on my machine. This can be obtained through the file /proc/sys/kernel/version

local procfs = require("lj2procfs.procfs")
print(procfs.sys.kernel.version)

$ #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015

This is the same string returned from the call ‘uname -v’.

And, to get the hostname of the machine:

print(procfs.sys.kernel.hostname)
$ ubuntu

Which is what the ‘hostname’ command returns on my machine.

And what about setting the hostname? First of all, you’ll want to do this as root, but it’s equally simple:

procfs.sys.kernel.hostname = 'alfredo'

Keep in mind that setting the hostname in this way is transient, and it can seriously mess things up, like your ability to sudo right after this. But, there you have it.

Any value under /proc/sys can be retrieved or set using this fairly simple mechanism. I find this to be very valuable for two reasons. First of all, spelunking these values makes for great discovery. More importantly, being able to capture and set the values makes for a fairly easily tunable system.

As an example of how this can be used for system diagnostics and tuning: you can capture the kernel values, using a simple command that just dumps what you want into a table. Send that table to anyone else for analysis. Similarly, if someone has come up with a system configuration that is great for a particular task, tuning the VM allocations, networking values, and the like, they can send you the configuration (just a string value that is a lua table) and you can apply it to your system.
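
A minimal sketch of that capture/apply idea, assuming you keep a plain list of the keys you care about (the key names here are just examples):

local procfs = require("lj2procfs.procfs")

-- which /proc/sys/kernel values to snapshot; purely illustrative
local keys = {"hostname", "ostype", "osrelease"}

local function captureKernelSettings()
	local snapshot = {}
	for _, name in ipairs(keys) do
		snapshot[name] = procfs.sys.kernel[name]
	end
	return snapshot
end

local function applyKernelSettings(snapshot)
	-- needs root, just like the hostname example above
	for name, value in pairs(snapshot) do
		procfs.sys.kernel[name] = value
	end
end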

This is a tad better than simply trying to look at system logs to determine after the fact what might be going on with a system. Perhaps the combination of these live values, as well as correlation with system logs, makes it easier to automate the process of diagnosing and tuning a system.

Well, there you have it. The lj2procfs thing is getting more concise, as well as becoming more usable at the same time.


Spelunking Linux – procfs, and a bag of chips

Recently, as a way to further understand all things Linux, I’ve been delving into procfs.  This is one of those virtual file systems on Linux, meaning the ‘files’ and ‘directories’ are not located on any real media; they are conjured up in real time from within the kernel.  If you take a look at the ‘/proc’ directory on any Linux machine, you’ll find a couple of things.  First, there are a bunch of directories with numeric values as their names.


1	     10			100	      1003	     1004
10075	     101		10276	      10683	     10695
10699	     1071		10746	      10756	     10757
1081	     11			11927	      12	     1236
12527	     12549		12563	      1296	     13

Yes, true to the unix philosophy, all the things are but files/directories. Each one of these numbers represents a process that is running on the system at the moment. Each one of these directories contains additional directories, and files. The files contain interesting data, and the directories lead into even more places where you can find more interesting things about the process.

Here are the contents of the directory ‘1’, which is the first process running on the system:

attr/	    autogroup	     auxv	 cgroup     clear_refs	cmdline
comm	    coredump_filter  cpuset	 cwd@	    environ	exe@
fd/	    fdinfo/	     gid_map	 io	    limits	loginuid
map_files/  maps	     mem	 mountinfo  mounts	mountstats
net/	    ns/		     numa_maps	 oom_adj    oom_score	oom_score_adj
pagemap     personality      projid_map  root@	    sched	schedstat
sessionid   setgroups	     smaps	 stack	    stat	statm
status	    syscall	     task/	 timers     uid_map	wchan

Some actual files, some more directories, some symbolic links. To find out the details of what each of these contains, and their general meaning, you need to consult the procfs man page, as well as the source code of the linux kernel, or various utilities that use them.

Backing up a bit, the /proc directory itself contains some very interesting files as well:

acpi/	      asound/		 buddyinfo     bus/	      cgroups
cmdline       consoles		 cpuinfo       crypto	      devices
diskstats     dma		 driver/       execdomains    fb
filesystems   fs/		 interrupts    iomem	      ioports
irq/	      kallsyms		 kcore	       keys	      key-users
kmsg	      kpagecount	 kpageflags    loadavg	      locks
mdstat	      meminfo		 misc	       modules	      mounts@
mtrr	      net@		 pagetypeinfo  partitions     sched_debug
schedstat     scsi/		 self@	       slabinfo       softirqs
stat	      swaps		 sys/	       sysrq-trigger  sysvipc/
thread-self@  timer_list	 timer_stats   tty/	      uptime
version       version_signature  vmallocinfo   vmstat	      zoneinfo

Again, the meanings of each of these is buried in the various documentation and source code files that surround them, but let’s take a look at a couple of examples. How about that uptime file?

8099.41 31698.74

OK. Two numbers. What do they mean? The first one is how many seconds the system has been running. The second one is the number of seconds all cpus on the system have been idle since the system came up. Yes, on a multi-proc system, the second number can be greater than the first. And thus begins the actual journey into spelunking procfs. If you’re like me, you occasionally need to know this information. Of course, if you want to know it from the command line, you just run the ‘uptime’ command, and you get…

 06:38:22 up  2:18,  2 users,  load average: 0.17, 0.25, 0.17

Well, hmmm, I get the ‘up’ part, but what’s all that other stuff, and what happened to the idle time thing? As it turns out, the uptime command does show the up time, but it also shows the logged in users, and the load average numbers, which actually come from different files.

It’s like this: whatever you want to know about the system is probably available, but you have to know where to look for it, and how to interpret the data from the files. Oftentimes there’s either a libc function you can call, or a command line utility, if you can discover and remember them.

What about a different way? Since I’m spelunking, I want to discover things in a more random fashion, and of course I want easy lua programmatic access to what I find. In steps the lj2procfs project.

In lj2procfs, I try to provide a more manageable interface to the files in /proc.  Most often, the information is presented as lua tables.  If the information is too simple (like /proc/version), then it is presented as a simple string.  Here is a look at that uptime example, done using lj2procfs:


return {
    ['uptime'] = {
        seconds = 19129.39,
        idle = 74786.86,
    };
}

You can see that the simple two numbers in the uptime file are converted to meaningful fields within the table. In this case, I use a simple utility program to turn any of the files into simple lua value output, suitable for reparsing, or transmitting. First, what does the ‘parsing’ look like?

--[[
	seconds idle 

	The first value is the number of seconds the system has been up.
	The second number is the accumulated number of seconds all processors
	have spent idle.  The second number can be greater than the first
	in a multi-processor system.
--]]
local function decoder(path)
	path = path or "/proc/uptime"
	local f = io.open(path)
	local str = f:read("*a")
	f:close()

	local seconds, idle = str:match("(%d*%.?%d+)%s+(%d*%.?%d+)")
	return {
		seconds = tonumber(seconds);
		idle = tonumber(idle);
	}
end

return {
	decoder = decoder;
}

In most cases, the output of the /proc files is meant to be human readable, at least on Linux. Other platforms might prefer these files to be more easily machine readable (binary). As such, they are readily parseable, mostly with simple string patterns.

So, this decoder is one of many. There is one for each of the file types in the /proc directory, or at least the list is growing.

They are in turn accessed using the Decoders class.

local Decoders = {}
local function findDecoder(self, key)
	local path = "lj2procfs.codecs."..key;

	-- try to load the intended codec file
	local success, codec = pcall(function() return require(path) end)
	if success and codec.decoder then
		return codec.decoder;
	end

	-- if we didn't find a decoder, use the generic raw file loading
	-- decoder.
	-- Caution: some of those files can be very large!
	return getRawFile;
end
setmetatable(Decoders, {
	__index = findDecoder;

})

This is a fairly simple forwarding mechanism. You could use this in your code by doing the following:

procfs = require("Decoders")
local uptime = procfs.uptime

printValue(uptime)

When you try to access the procfs.uptime field of the Decoders class, it will go: “Hmmm, I don’t have a field in my table with that name, I’ll defer to whatever was set as my __index value, which so happens to be a function, so I’m going to call that function and see what it comes up with”. The findDecoder function will in turn look in the codecs directory for something with that name. It will find the code in uptime.lua, and execute it, handing it the path specified. The uptime function will read the file, parse the values, and return a table.

And thus magic is practiced!

It’s actually pretty nice because having things as lua tables and lua values such as numbers and strings, makes it really easy to do programmatic things from there.

Here’s meminfo.lua

local function meminfo(path)
	path = path or "/proc/meminfo"
	local tbl = {}
	local pattern = "(%g+):%s+(%d+)%s+(%g+)"

	for str in io.lines(path) do
		local name, size, units = str:match(pattern)
		if name then
			tbl[name] = {
				size = tonumber(size), 
				units = units;
			}
		end
	end

	return tbl
end

return {
	decoder = meminfo;
}

The raw ‘/proc/meminfo’ file output looks something like this:

MemTotal:        2045244 kB
MemFree:          273464 kB
MemAvailable:     862664 kB
Buffers:           72188 kB
Cached:           629268 kB
SwapCached:            0 kB
.
.
.

And the parsed output might be something like this:

    ['meminfo'] = {
        ['Active'] = {
            size = 1432756,
            units = [[kB]],
        };
        ['DirectMap2M'] = {
            size = 1992704,
            units = [[kB]],
        };
        ['MemFree'] = {
            size = 284604,
            units = [[kB]],
        };
        ['MemTotal'] = {
            size = 2045244,
            units = [[kB]],
        };
.
.
.

Very handy.

In some cases, the output can be a bit tricky: since it’s trying to be human readable, there might be some quirks, like header lines and a variable number of columns. But you have the full power of lua to do the parsing, including using something like lpeg if you so choose. Here’s the parser for the ‘/proc/interrupts’ file, for example:

local strutil = require("lj2procfs.string-util")

local namepatt = "(%g+):(.*)"
local numberspatt = "[%s*(%d+)]+"
local descpatt = "[%s*(%d+)]+(.*)"

local function numbers(value)
	local num = string.match(value, numberspatt)
	return num;
end

local function interrupts(path)
	path = path or "/proc/interrupts"

	local tbl = {}
	for str in io.lines(path) do
		local name, remainder = str:match(namepatt)
		if name then
			local numbers = remainder:match(numberspatt)
			local desc = remainder:match(descpatt)

			local cpu = 0;
			local valueTbl = {}
			for number in numbers:gmatch("%s*(%d+)") do
				--print("NUMBER: ", number)
				valueTbl["cpu"..cpu] = tonumber(number);
				cpu = cpu + 1;
			end
			valueTbl.description = desc
			tbl[name] = valueTbl

		end
	end

	return tbl
end

return {
	decoder = interrupts;
}

And it deals with raw file data that looks like this:

          CPU0       CPU1       CPU2       CPU3       
  0:         38          0          0          0   IO-APIC-edge      timer
  1:      25395          0          0          0   IO-APIC-edge      i8042
  8:          1          0          0          0   IO-APIC-edge      rtc0
  9:      70202          0          0          0   IO-APIC-fasteoi   acpi
.
.
.

In this case, I’m running on a VM which was configured with 4 cpus. I had run previously with a VM with only 3 CPUs, and there were only 3 CPU columns. So, in this case, the patterns first isolate the interrupt number from the remainder of the line, then the numeric columns are isolated from the interrupt description field, then the numbers themselves are matched off using an iterator (gmatch). The table generated looks something like this:

    ['interrupts'] = {
        ['SPU'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[Spurious interrupts]],
        };
        ['22'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[IO-APIC  22-fasteoi   virtio1]],
        };
        ['NMI'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[Non-maskable interrupts]],
        };
.
.
.

Nice.

To make spelunking easier, I’ve created a simple script which just calls the procfs thing, given a command line argument of the name of the file you’re interested in looking at.

#!/usr/bin/env luajit

--[[
	procfile

	This is a general purpose /proc/<file> interpreter.
	Usage:  
		$ sudo ./procfile filename

	Here, 'filename' is any one of the files listed in the 
	/proc directory.

	In the cases where a decoder is implemented in Decoders.lua
	the file will be parsed, and an appropriate value will be
	returned and printed in a lua form appropriate for reparsing.

	When there is no decoder implemented, the value returned is
	"NO DECODER AVAILABLE"

	example:
		$ sudo ./procfile cpuinfo
		$ sudo ./procfile partitions

--]]

package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")
local putil = require("lj2procfs.print-util")

if not arg[1] then
	print ([[

USAGE: 
	$ sudo ./procfile <filename>

where <filename> is the name of a file in the /proc
directory.

Example:
	$ sudo ./procfile cpuinfo
]])

	return 
end


local filename = arg[1]

print("return {")
putil.printValue(procfs[filename], "    ", filename)
print("}")

Once you have these basic tools in hand, you can begin to look at the various utilities that are used within linux, and try to emulate them. For example, the ‘free’ command will show you roughly how memory currently sits on your system, in terms of how much is physically available, how much is used, and the like. Its typical output, without any parameters, might look like:

             total       used       free     shared    buffers     cached
Mem:       2045244    1760472     284772      28376      73276     635064
-/+ buffers/cache:    1052132     993112
Swap:      1046524          0    1046524

Here’s the code to generate something similar, using lj2procfs:

#!/usr/bin/env luajit

--[[
	This lua script acts similar to the 'free' command, which will
	display some interesting information about how much memory is being
	used in the system.
--]]
--memfree.lua
package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")

local meminfo = procfs.meminfo;

local memtotal = meminfo.MemTotal.size
local memfree = meminfo.MemFree.size
local memused = memtotal - memfree
local memshared = meminfo.Shmem.size
local membuffers = meminfo.Buffers.size
local memcached = meminfo.Cached.size

local swaptotal = meminfo.SwapTotal.size
local swapfree = meminfo.SwapFree.size
local swapused = swaptotal - swapfree

print(string.format("%18s %10s %10s %10s %10s %10s",'total', 'used', 'free', 'shared', 'buffers', 'cached'))
print(string.format("Mem: %13d %10d %10d %10d %10d %10d", memtotal, memused, memfree, memshared, membuffers, memcached))
--print(string.format("-/+ buffers/cache: %10d %10d", 1, 2))
print(string.format("Swap: %12d %10d %10d", swaptotal, swapused, swapfree))

The working end of this is simply ‘local meminfo = procfs.meminfo’

The code generates the following output.

             total       used       free     shared    buffers     cached
Mem:       2045244    1768692     276552      28376      73304     635100
Swap:      1046524          0    1046524

I couldn’t quite figure out where the -/+ buffers/cache: values come from yet. I’ll have to look at the actual code for the ‘free’ program to figure it out. But, the results are otherwise the same.

Some of these files can be quite large, like kallsyms, which might argue for an iterator interface instead of a table interface. But some of the files have meta information as well as a list of fields. Since the number of large files is fairly small, it made more sense to cover the broader cases, and tables do that fine. Even kallsyms, though fairly large, will still fit nicely into a table.

So, what can you do with that?

--findsym.lua
package.path = "../?.lua;"..package.path;

local sym = arg[1]
assert(sym, "must specify a symbol")

local putil = require("lj2procfs.print-util")
local sutil = require("lj2procfs.string-util")
local fun = require("lj2procfs.fun")
local procfs = require("lj2procfs.procfs")

local kallsyms = procfs.kallsyms

local function isDesiredSymbol(name, tbl)
    return sutil.startswith(name, sym)
end

local function printSymbol(name, value)
	putil.printValue(value)
end

fun.each(printSymbol, fun.filter(isDesiredSymbol, kallsyms))

In this case, a little utility which will traverse through the symbols, looking for something that starts with whatever the user specified on the command line. So, I can use it like this:

luajit findsym.lua mmap

And get the following output:

{
    location = [[0000000000000000]],
    name = [[mmap_init]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_rnd]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_zero]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_min_addr_handler]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_vmcore_fault]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_region]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_vmcore]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_kset.19656]],
    kind = [[b]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_min_addr]],
    kind = [[B]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_mem_ops]],
    kind = [[r]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_mem]],
    kind = [[t]],
};

Of course, you’re not limited to simply printing to stdout. In fact, that’s the least valuable thing you could be doing. What you really have is programmatic access to all these values. If you had run this command as root, you would get the actual addresses of these routines.

And so it goes. lj2procfs gives you easy programmatic access to all the great information that is hidden in the procfs file system on linux machines. These routines make it relatively easy to gain access to the information, and utilize tools such as luafun to manage it. Once again, the linux system is nothing more than a very large database. Using a tool such as Lua makes it relatively easy to access all the information in that database.

So far, lj2procfs just covers reading the values. In this article I did not cover the fact that you can also get information on individual processes. Aside from that, procfs actually allows you to set some values as well. This is why I structured the code as ‘codecs’: you can encode, and decode. So, in future, setting a value will be as simple as ‘procfs.something.somevalue = newvalue’. This will take the guesswork out of doing command line ‘echo …’ commands for esoteric, seldom-used values. It also makes it easy to achieve great things programmatically through script, without even relying on various libraries that are meant to do the same.
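
As a rough sketch of where that’s headed (the actual codec layout in lj2procfs may differ), a codec with both directions might look like this:

-- hostname.lua - hypothetical codec with both a decoder and an encoder
local function decoder(path)
	path = path or "/proc/sys/kernel/hostname"
	local f = io.open(path)
	local value = f:read("*l")
	f:close()

	return value
end

local function encoder(value, path)
	path = path or "/proc/sys/kernel/hostname"

	-- writing requires root, just like 'echo alfredo > .../hostname'
	local f = assert(io.open(path, "w"))
	f:write(tostring(value))
	f:close()
end

return {
	decoder = decoder;
	encoder = encoder;
}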

And there you have it. procfs wrapped up in lua goodness.


Spelunking Linux – Decomposing systemd

Honestly, I don’t know what all the fuss is about. What is systemd?  It’s that bit of code that gets things going on your Linux machine once the kernel has loaded itself.  You know, dealing with bringing up services, communicating between services, running the udev and dbus stuff, etc.

So, I wrote an ffi wrapper for the libsystemd.so library. This has proven to be handy; as usual, I can essentially write what looks like standard C code, but it’s actually LuaJIT goodness.

--[[
	Test using SDJournal as a cursor over the journal entries

	In this case, we want to try the various cursor positioning
	operations to ensure they work correctly.
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")

local jnl = SDJournal()

-- move forward a few times
for i=1,10 do
	jnl:next();
end

-- get the cursor label for this position
local label10 = jnl:positionLabel()
print("Label 10: ", label10)

-- seek to the beginning, print that label
jnl:seekHead();
jnl:next();
local label1 = jnl:positionLabel();
print("Label 1: ", label1);


-- seek to label 10 again
jnl:seekLabel(label10)
jnl:next();
local label3 = jnl:positionLabel();
print("Label 3: ", label3)
print("label 3 == label 10: ", label3 == label10)

In this case, a simple journal object which makes it relatively easy to browse through the systemd journals that are laying about. That’s handy. Combined with the luafun functions, browsing through journals suddenly becomes a lot easier, with the full power of lua to form very interesting queries, or other operations.

--[[
	Test cursoring over journal, turning each entry
	into a lua table to be used with luafun filters and whatnot
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")
local fun = require("fun")()

-- Feed this routine a table with the names of the fields
-- you are interested in seeing in the final output table
local function selection(fields, aliases)
	return function(entry)
		local res = {}
		for _, k in ipairs(fields) do
			if entry[k] then
				res[k] = entry[k];
			end
		end
		return res;
	end
end

local function  printTable(entry)
	print(entry)
	each(print, entry)
end

local function convertCursorToTable(cursor)
	return cursor:currentValue();
end


local function printJournalFields(selector, flags)
	flags = flags or 0
	local jnl1 = SDJournal();

	if selector then
		each(printTable, map(selector, map(convertCursorToTable, jnl1:entries())))
	else
		each(printTable, map(convertCursorToTable, jnl1:entries()))	
	end
end

-- print all fields, but filter the kind of journal being looked at
--printJournalFields(nil, sysd.SD_JOURNAL_CURRENT_USER)
--printJournalFields(nil, sysd.SD_JOURNAL_SYSTEM)

-- printing specific fields
--printJournalFields(selection({"_HOSTNAME", "SYSLOG_FACILITY"}));
printJournalFields(selection({"_EXE", "_CMDLINE"}));

-- to print all the fields available per entry
--printJournalFields();

In this case, we have a simple journal printer, which will take a list of fields, as well as a selection of the kinds of journals to look at. That’s quite useful as you can easily generate JSON or XML, or Lua tables on the output end, without much work. You can easily select which fields you want to display, and you could even change the names along the way. You have the full power of lua at your disposal to do whatever you want with the data.

In this case, the SDJournal object is pretty straightforward. It simply wraps the various ‘sd_xxx’ calls within the library to get its work done. What about some other cases? Does the systemd library need to be used for absolutely everything that it does? The answer is ‘no’; you can do a lot of the work yourself, because at the end of the day, the passive part of systemd is just a bunch of file system manipulation.

Here’s where it gets interesting in terms of decomposition.

Within the libsystemd library, there is the sd_get_machine_names() function:

_public_ int sd_get_machine_names(char ***machines) {
        char **l = NULL, **a, **b;
        int r;

        assert_return(machines, -EINVAL);

        r = get_files_in_directory("/run/systemd/machines/", &l);
        if (r < 0)
                return r;

        if (l) {
                r = 0;

                /* Filter out the unit: symlinks */
                for (a = l, b = l; *a; a++) {
                        if (startswith(*a, "unit:") || !machine_name_is_valid(*a))
                                free(*a);
                        else {
                                *b = *a;
                                b++;
                                r++;
                        }
                }

                *b = NULL;
        }

        *machines = l;
        return r;
}

The lua wrapper for this would simply be:

ffi.cdef("int sd_get_machine_names(char ***machines)")

Great, for those who already know this call, you can allocate a char * array, get the array of string values, and party on. But what about the lifetime of those strings, and if you’re doing it as an iterator, when do you ever free stuff, and isn’t this all wasteful?
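
For reference, here is roughly what the direct ffi usage looks like; note all the manual bookkeeping. Freeing each string and the array with the libc free() is my assumption about the ownership convention:

local ffi = require("ffi")

ffi.cdef[[
int sd_get_machine_names(char ***machines);
void free(void *ptr);
]]

local sd = ffi.load("systemd")

local function machineNames()
	local machines = ffi.new("char **[1]")
	local n = sd.sd_get_machine_names(machines)
	if n < 0 then return nil, n end

	local names = {}
	for i = 0, n-1 do
		names[#names+1] = ffi.string(machines[0][i])
		ffi.C.free(machines[0][i])
	end
	ffi.C.free(machines[0])

	return names
end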

So, looking at that code in the library, you might think, ‘wait a minute, I could just replicate that in Lua, and get it done without doing any ffi stuff at all!’

local fun = require("fun")
-- strutil and fsutil here are the project's own string and filesystem
-- helper modules; files_in_directory() is shown just below

local function isNotUnit(name)
	return not strutil.startswith(name, "unit:")
end

function SDMachine.machineNames(self)
	return fun.filter(isNotUnit, fsutil.files_in_directory("/run/systemd/machines/"))
end

OK, that looks simple. But what’s happening with that ‘files_in_directory()’ function? Well, that’s the meat and potatoes of this operation.

local function nil_iter()
    return nil;
end

-- This is a 'generator' which will continue
-- the iteration over files
local function gen_files(dir, state)
    local de = nil

    while true do
        de = libc.readdir(dir)

        -- if we've run out of entries, then return nil
        if de == nil then return nil end

        -- check the entry to see if it's an actual file, and not
        -- a directory or link
        if dirutil.dirent_is_file(de) then
            break;
        end
    end

    local name = ffi.string(de.d_name);

    return de, name
end

local function files_in_directory(path)
    local dir = libc.opendir(path)

    if dir == nil then return nil_iter, nil, nil; end

    -- make sure to set the finalizer, so the directory
    -- handle is closed when it is garbage collected
    ffi.gc(dir, libc.closedir);

    return gen_files, dir, nil;
end

In this case, files_in_directory() takes a string path, like “/run/systemd/machines”, and just iterates over the directory, returning only the files found there. It’s convenient in that it will skip so-called ‘hidden’ files, and things that are links. This simple technique/function can be the cornerstone of a lot of things that view files in Linux. The function leverages the libc opendir() and readdir() functions, so there’s nothing new here, but it wraps it all up in this convenient iterator, which is nice.
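
Usage is just a generic for loop, since files_in_directory() returns a standard Lua iterator triple:

for de, name in files_in_directory("/run/systemd/machines/") do
	print(name)
end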

systemd is about a whole lot more than just browsing directories, but that’s certainly a big part of it. When you break it down like this, you begin to realize that you don’t actually need to use a ton of stuff from the library. In fact, it’s probably better and less resource intensive to just ‘go direct’ where it makes sense. In this case, it was just implementing a few choice routines to make file iteration work the same as it does in systemd. As this binding evolves, I’m sure there is other low-hanging fruit that I’ll be able to pluck to make it even more interesting, useful, and independent of the libsystemd library.


Spelunking Linux – Yes, the system truly is a database

In this article: Isn’t the whole system just a database? – libdrm, I explored a little bit of the database nature of Linux by using libudev to enumerate and open libdrm devices.  After that, I spent some time bringing up a USB module: LJIT2libusb.  libusb is a useful cross platform library that makes it relatively easy to gain access to the usb functions on multiple platforms.  It can enumerate devices, deal with hot plug notifications, open up, read, write, etc.

At its core, on Linux at least, libusb tries to leverage the udev capabilities of the target system, if those capabilities are there.  This means that device enumeration and hot plugging actually use the libudev stuff.  In fact, the code for enumerating those usb devices in libusb looks like this:


	udev_enumerate_add_match_subsystem(enumerator, "usb");
	udev_enumerate_add_match_property(enumerator, "DEVTYPE", "usb_device");
	udev_enumerate_scan_devices(enumerator);
	devices = udev_enumerate_get_list_entry(enumerator);

There’s more stuff of course, to turn that into data structures which are appropriate for use within the libusb view of the world. But, here’s the equivalent using LLUI and the previously developed UVDev stuff:

local function isUsbDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "usb" and
		dev:getProperty("devtype") == "usb_device" then
		return true;
	end

	return false;
end

each(print, filter(isUsbDevice, ctxt:devices()))

It’s just illustrative, but it’s fairly simple to understand I think. The ‘ctxt:devices()’ is an iterator over all the devices in the system. The ‘filter’ function is part of the luafun functional programming routines available to Lua. The ‘isUsbDevice’ is a predicate function, which returns ‘true’ when the device in question matches what it believes makes a device a ‘usb’ device. In this case, it’s the subsystem and devtype properties which are used.

Being able to easily query devices like this makes life a heck of a lot easier. No funky code polluting my pure application. Just these simple query predicates written in Lua, and I’m all set. So, instead of relying on libusb to enumerate my usb devices, I can just enumerate them directly using udev, which is what the library does anyway. Enumeration and hotplug handling is part of the library. The other part is the actual sending and receiving of data. For that, the libusb library is still primarily important, as replacing that code will take some time.

Where else can this great query capability be applied? Well, libudev is just a nice wrapper atop sysfs, which is that virtual file system built into Linux for gaining access to device information and control of the same. There’s all sorts of stuff in there. So, let’s say you want to list all the block devices?

local function isBlockDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "block" then
		return true;
	end

	return false;
end

That will get all the devices which are in the subsystem “block”. That includes physical disks, virtual disks, partitions, and the like. If you’re after just the physical ones, then you might use something like this:

local function isPhysicalBlockDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "block" and
		dev:getProperty("devtype") == "disk" and
		dev:getProperty("ID_BUS") ~= nil then
		return true;
	end

	return false;
end

Here, a physical device is indicated by subsystem == ‘block’, devtype == ‘disk’, and the existence of the ‘ID_BUS’ property, assuming any physical disk would show up on one of the system’s buses. This won’t catch an SD card though. For that, you’d use the first one, and then look for a property related to being an SD card. Same goes for ‘cd’ vs ramdisk, or whatever. You can make these queries as complex or simple as you want.

Once you have a device, you can simply open it using the “SysName” parameter, handed to an fopen() call.

I find this to be a great way to program. It makes the creation of utilities such as ‘lsblk’ relatively easy. You would just look for all the block devices and their partitions, and put them into a table. Then separately, you would have a display routine, which would consume the table and generate whatever output you want. I find this much better than the typical Linux tools which try to do advanced display using the terminal window. That’s great as far as it goes, but not so great if what you really want is a nice html page generated for some remote viewing.
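
A minimal sketch of that collect-then-display split, reusing the isBlockDevice predicate from above (the ‘DEVNAME’ property name is my assumption about where the device node lives):

local function collectBlockDevices(ctxt)
	local devices = {}
	for _, dev in ctxt:devices() do
		if isBlockDevice(dev) then
			devices[#devices+1] = {
				name = dev:getProperty("DEVNAME");
				devtype = dev:getProperty("devtype");
			}
		end
	end

	return devices
end

-- a separate, dumb display routine; swap this out for html, json, whatever
local function displayAsText(devices)
	for _, d in ipairs(devices) do
		print(string.format("%-24s %s", d.name or "?", d.devtype or "?"))
	end
end

displayAsText(collectBlockDevices(ctxt))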

At any rate, this whole libudev exploration is a great thing. You can list all devices easily, getting every bit of information you care to examine. Since it’s all scriptable, it’s fairly easy to tailor your queries on the fly, looking around and discovering things as you go. I discovered that the thumbprint reader in my old laptop was made by Broadcom, and my webcam by 3M. It’s just so much fun.

Well there you have it. The more you spelunk, the more you know, and the more you can fiddle about.


Isn’t the whole system just a database? – libdrm

Do enough programming, and everything looks like a database of one form or another. Case in point, when you want to get keyboard and mouse input, you first have to query the system to see which of the /dev/input/eventxxx devices you want to open for your particular needs. Yes, there are convenient shortcuts, but that’s beside the point.

Same goes with other devices in the system. This time around, I want to find the drm device which represents the graphics card in my system (from the libdrm perspective).

In LJIT2libudev there are already objects that make it convenient to enumerate all the devices in the system using a simple iterator:

local ctxt, err = require("UDVContext")()
for _, dev in ctxt:devices() do
    print(dev)
end

Well, that’s fine and all, but let’s get specific. To use the libdrm library, I very specifically need one of the active devices in the ‘drm’ subsystem. I could write this:

local ctxt, err = require("UDVContext")()

local function getActiveDrm()
  local function isActiveDrm(dev)
    if dev.IsInitialized and dev:getProperty("subsystem") == "drm" then
      return true;
    end

    return false;
  end

  for _, dev in ctxt:devices() do
    if isActiveDrm(dev) then
      return dev;
    end
  end

  return nil;
end

local device = getActiveDrm()

Yah, that would work. Then of course, when I want to change the criteria for finding the device I’m looking for, I would change up this code a bit. The core iterator is the key starting point at least. The ‘isActiveDrm()’ is a function which acts as a predicate to filter through the results, and only return the ones I want.

Since this is Lua though, and since there is a well thought out functional programming library already (luafun), this could be made even easier:

local function isActiveDrm(dev)
  if dev.IsInitialized and dev:getProperty("subsystem") == "drm" then
    return true;
  end

  return false;
end

local device = head(filter(isActiveDrm, ctxt:devices()))
assert(device, "could not find active drm device")

In this case, we let the luafun ‘filter’ and ‘head’ functions do their job of dealing with the predicate, taking the first item off the iterator that matches and returning it. Now, changing my criteria is fairly straightforward: just change out the predicate, and done. This is kind of nice, particularly with Lua, because that predicate is just some code; it could even be generated at runtime, because we’re in script, right?

So, how about this version:

-- File: IsActiveDrmDevice.lua
-- predicate to determine if a device is a DRM device and it's active
return function(dev)
  if dev.IsInitialized and dev:getProperty("subsystem") == "drm" then
    return true;
  end

  return false;
end

-- File: devices_where.lua
#!/usr/bin/env luajit

-- devices_where.lua
-- print devices in the system, filtered by a supplied predicate
-- generates output which is a valid lua table
package.path = package.path..";../?.lua"

local fun = require("fun")()
local utils = require("utils")

local ctxt, err = require("UDVContext")()
assert(ctxt ~= nil, "Error creating context")

if #arg < 1 then
  error("you must specify a predicate")
end

local predicate = require(arg[1])

print("{")
	
each(utils.printDevice, filter(predicate, ctxt:devices()))

print("}")


-- Actual usage from the command line
./devices_where.lua IsActiveDrmDevice

In this case, the ‘query’ has been generalized enough such that you can pass a predicate as a filename (minus the ‘.lua’). The code for the predicate will be compiled in, and used as the predicate for the filter() function. Well, that’s pretty nifty I think. And since the query itself again is just a bit of code, that can be changed on the fly as well. I can easily see a system where lua is the query language, and the entire machine is the database.

The tarantool database uses Lua as its embedded scripting language, and I believe the luafun code is used there. Tarantool is not a system database, but the fact that it embraces Lua so deeply is interesting, and just proves the case that Lua is a good language for doing some database work.

I have found that tackling the lowest level enumeration by putting a Lua iterator on top of it makes life a whole lot easier. With many of the libraries that you run across, they spend a fair amount of resources/code on trying to make things look like a database. In the case of libudev, there are functions for iterating their internal hash table of values, routines for creating ‘enumerators’ which are essentially queries, routines for getting properties, routines for turning properties into more accessible strings, routines for turning the ‘FLAGS’ property into individual values, and the like, and then there’s the memory management routines (ref, unref). A lot of that stuff either goes away, or is handled much more succinctly when you’re using a language such as Lua, or JavaScript, or Python, Ruby, whatever, as long as it’s modern, dynamic, and has decent enough higher level memory managed libraries.

And thus, the whole system, from log files, to perf counters, to device lists, is a database, waiting to be harvested, and made readily available.


Functional Programming with libevdev

Previously, I wrote about the LuaJIT binding to libevdev: LJIT2libevdev – input device tracking on Linux

In that case, I went over the basics, and it all works out well; you can get at keyboard and mouse events from Lua with very little overhead.  Feels natural, lightweight iterators, life is good.

Recently, I wanted to go one step further though.  Once you go down the path of using iterators, you begin to think about functional programming.  At least that’s how I’m wired.  So, I’ve pushed it a bit further, and incorporated the fun.lua module to make things a bit more interesting.  fun.lua provides things like each, any, all, map, filter, and other operations which make dealing with streams of data brainlessly simple.

Here’s an example of spewing out the stream of events for a device:

 

package.path = package.path..";../?.lua"

local EVEvent = require("EVEvent")
local dev = require("EVDevice")(arg[1])
assert(dev)

local utils = require("utils")
local fun = require("fun")

-- print out the device particulars before 
-- printing out the stream of events
utils.printDevice(dev);
print("===== ===== =====")

-- perform the actual printing of the event
local function printEvent(ev)
    print(string.format("{'%s', '%s', %d};",ev:typeName(),ev:codeName(),ev:value()));
end

-- decide whether an event is interesting enough to 
-- print or not
local function isInteresting(ev)
	return ev:typeName() ~= "EV_SYN" and ev:typeName() ~= "EV_MSC"
end

-- convert from a raw 'struct input_event' to the EVEvent object
local function toEVEvent(rawev)
    return EVEvent(rawev)
end


fun.each(printEvent, 
    fun.filter(isInteresting, 
        fun.map(toEVEvent,
            dev:rawEvents())));

 
The last line there, where it starts with “fun.each”, can be read from the inside out.

dev:rawEvents() – is an iterator, it returns a steady stream of events coming off of the specified device.

fun.map(toEVEvent,…) – the map function takes some input, runs it through the provided function, and sends the result along as its output. In functional programming terms, this is a transformation.

fun.filter(isInteresting, …) – filter takes an input, applies the specified predicate, and lets the item pass through or not based on the predicate returning true or false.

fun.each(printEvent, …) – the ‘each’ function takes each of the items coming from the stream of items, and applies the specified function, in this case the printEvent function.

This is a typical pull chain. The events are pulled out of the lowest level iterator as they are needed by subsequent operations in the chain. If the chain stops, then nothing further is pulled.

This is a great way to program because doing different things with the stream of events is simply a matter of replacing items in the chain. For example, if we just want the first 100 items, we could write

fun.each(printEvent, 
    fun.take_n(100, 
        fun.filter(isInteresting, 
            fun.map(toEVEvent, dev:rawEvents()))));

You can create tees, send data elsewhere, construct new streams by combining streams. There are a few key operators, and with them you can do a ton of stuff.
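Here is one small illustration of combining those operators. It is just a sketch, assuming the helpers defined in the script above (isInteresting, toEVEvent, dev) and luafun’s take_n and foldl; it tallies the first 1000 interesting events by type:

-- count the first 1000 interesting events, grouped by event type
local counts = fun.foldl(function(acc, ev)
    local t = ev:typeName()
    acc[t] = (acc[t] or 0) + 1
    return acc
end, {}, fun.take_n(1000, fun.filter(isInteresting, fun.map(toEVEvent, dev:rawEvents()))))

for typename, n in pairs(counts) do
    print(typename, n)
end

Swap foldl for each, reduce, or take_while and you get a different tool from the same parts.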

At the very heart, the EVDevice:rawEvents() iterator looks like this:

function EVDevice.rawEvents(self)
	-- generator for the iterator: pulls the next raw 'struct input_event'
	-- off the device, retrying while the read would block (EAGAIN)
	local function iter_gen(params, flags)
		local flags = flags or ffi.C.LIBEVDEV_READ_FLAG_NORMAL;
		local ev = ffi.new("struct input_event");
		local rc = 0;

		repeat
			rc = evdev.libevdev_next_event(params.Handle, flags, ev);
		until rc ~= -libc.EAGAIN

		if (rc == ffi.C.LIBEVDEV_READ_STATUS_SUCCESS) or (rc == ffi.C.LIBEVDEV_READ_STATUS_SYNC) then
			return flags, ev;
		end

		-- any other return code terminates the iteration
		return nil, rc;
	end

	return iter_gen, {Handle = self.Handle}, nil
end

This is better than the previous version where we could pass in a predicate. In this case, we don’t even convert to the EVEvent object at this early low level stage, because we’re not sure what subsequent links in the chain will want to do, so we leave it out. This simplifies the code at the lowest levels, and makes it more composable, which is a desirable effect.
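For reference, the same iterator composes just as well without luafun. A minimal sketch, assuming the EVDevice and EVEvent modules shown above, is nothing more than a generic for loop:

package.path = package.path..";../?.lua"

local EVEvent = require("EVEvent")
local dev = require("EVDevice")(arg[1])
assert(dev)

-- plain generic-for over the raw iterator; the conversion to an
-- EVEvent object happens here, at the point of use
for flags, rawev in dev:rawEvents() do
    local ev = EVEvent(rawev)
    print(ev:typeName(), ev:codeName(), ev:value())
end

The conversion and filtering decisions stay with the caller, which is exactly the composability the low level iterator is after.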

And so it goes. This binding started out as a simple ffi.cdef hack job, then objects were added to encapsulate some stuff, and now the iterators are being used in a functional programming style, which makes the whole thing that much more useful and easier to integrate with other code.


LJIT2pixman – Drawing on Linux

On the Linux OS, libpixman is at the heart of low level drawing. It handles the very simple stuff, like compositing and rendering of gradients, and takes up the slack where hardware acceleration doesn’t exist. So, graphics libraries such as Cairo leverage libpixman at the bottom to take care of the basics. LJIT2pixman is a little project that delivers a fairly decent LuaJIT binding to that library.

Here’s one demo of what it can do.

(image: checkerboard demo output)

Of course, given the title of this post, you know LuaJIT is involved, so you can expect that there’s some way of doing this in LuaJIT.

package.path = package.path..";../?.lua"

local ffi = require("ffi")
local bit = require("bit")
local band = bit.band

local pixman = require("pixman")()
local pixlib = pixman.Lib_pixman;
local ENUM = ffi.C
local utils = require("utils")
local save_image = utils.save_image;

local function D2F(d) return (pixman_double_to_fixed(d)) end

local function main (argc, argv)

	local WIDTH = 400;
	local HEIGHT = 400;
	local TILE_SIZE = 25;

    local trans = ffi.new("pixman_transform_t", { {
	    { D2F (-1.96830), D2F (-1.82250), D2F (512.12250)},
	    { D2F (0.00000), D2F (-7.29000), D2F (1458.00000)},
	    { D2F (0.00000), D2F (-0.00911), D2F (0.59231)},
	}});

    local checkerboard = pixlib.pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8,
					     WIDTH, HEIGHT,
					     nil, 0);

    local destination = pixlib.pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8,
					    WIDTH, HEIGHT,
					    nil, 0);

    for i = 0, (HEIGHT / TILE_SIZE)-1  do
		for j = 0, (WIDTH / TILE_SIZE)-1 do
	    	local u = (j + 1) / (WIDTH / TILE_SIZE);
	    	local v = (i + 1) / (HEIGHT / TILE_SIZE);
	    	local black = ffi.new("pixman_color_t", { 0, 0, 0, 0xffff });
	    	local white = ffi.new("pixman_color_t", {
				v * 0xffff,
				u * 0xffff,
				(1 - u) * 0xffff,
				0xffff });
	    	
	    	local c = white;

	    	if (band(j, 1) ~= band(i, 1)) then
				c = black;
	    	end

	    	local fill = pixlib.pixman_image_create_solid_fill (c);

	    	pixlib.pixman_image_composite (ENUM.PIXMAN_OP_SRC, fill, nil, checkerboard,
				    0, 0, 0, 0, j * TILE_SIZE, i * TILE_SIZE,
				    TILE_SIZE, TILE_SIZE);
		end
    end

    pixlib.pixman_image_set_transform (checkerboard, trans);
    pixlib.pixman_image_set_filter (checkerboard, ENUM.PIXMAN_FILTER_BEST, nil, 0);
    pixlib.pixman_image_set_repeat (checkerboard, ENUM.PIXMAN_REPEAT_NONE);

    pixlib.pixman_image_composite (ENUM.PIXMAN_OP_SRC,
			    checkerboard, nil, destination,
			    0, 0, 0, 0, 0, 0,
			    WIDTH, HEIGHT);

	save_image (destination, "checkerboard.ppm");

    return true;
end


main(#arg, arg)


With a couple of exceptions, the code looks almost exactly like its C based counterpart. I actually think this is a very good thing, because you can rapidly prototype something from a C coding example, yet have all the support and protection that a dynamic language such as Lua provides.

And here’s another:

(image: conical-test demo output)

In this case, it’s the conical-test.lua demo doing the work.

package.path = package.path..";../?.lua"

local ffi = require("ffi")
local bit = require("bit")
local band = bit.band
local lshift, rshift = bit.lshift, bit.rshift

local pixman = require("pixman")()
local pixlib = pixman.Lib_pixman;
local ENUM = ffi.C
local utils = require("utils")
local save_image = utils.save_image;
local libc = require("libc")

local SIZE = 128
local GRADIENTS_PER_ROW = 7
local NUM_GRADIENTS = 35

local NUM_ROWS = math.floor((NUM_GRADIENTS + GRADIENTS_PER_ROW - 1) / GRADIENTS_PER_ROW)  -- integer ceiling division, as in the C original
local WIDTH = (SIZE * GRADIENTS_PER_ROW)
local HEIGHT = (SIZE * NUM_ROWS)

local function double_to_color(x)
    return (x*65536) - rshift( (x*65536), 16)
end

local function PIXMAN_STOP(offset,r,g,b,a)		
   return ffi.new("pixman_gradient_stop_t", { pixman_double_to_fixed (offset),		
	{					
	    double_to_color (r),		
		double_to_color (g),		
		double_to_color (b),		
		double_to_color (a)		
	}					
    });
end

local stops = ffi.new("pixman_gradient_stop_t[4]",{
    PIXMAN_STOP (0.25,       1, 0, 0, 0.7),
    PIXMAN_STOP (0.5,        1, 1, 0, 0.7),
    PIXMAN_STOP (0.75,       0, 1, 0, 0.7),
    PIXMAN_STOP (1.0,        0, 0, 1, 0.7)
});

local  NUM_STOPS = (ffi.sizeof (stops) / ffi.sizeof (stops[0]))


local function create_conical (index)
    local c = ffi.new("pixman_point_fixed_t")
    c.x = pixman_double_to_fixed (0);
    c.y = pixman_double_to_fixed (0);

    local angle = (0.5 / NUM_GRADIENTS + index / NUM_GRADIENTS) * 720 - 180;

    return pixlib.pixman_image_create_conical_gradient (c, pixman_double_to_fixed (angle), stops, NUM_STOPS);
end


local function main (argc, argv)

    local transform = ffi.new("pixman_transform_t");

    local dest_img = pixlib.pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8,
					 WIDTH, HEIGHT,
					 nil, 0);
 
    utils.draw_checkerboard (dest_img, 25, 0xffaaaaaa, 0xff888888);

    pixlib.pixman_transform_init_identity (transform);

    pixlib.pixman_transform_translate (nil, transform,
				pixman_double_to_fixed (0.5),
				pixman_double_to_fixed (0.5));

    pixlib.pixman_transform_scale (nil, transform,
			    pixman_double_to_fixed (SIZE),
			    pixman_double_to_fixed (SIZE));
    pixlib.pixman_transform_translate (nil, transform,
				pixman_double_to_fixed (0.5),
				pixman_double_to_fixed (0.5));

    for i = 0, NUM_GRADIENTS-1 do
    
	   local column = i % GRADIENTS_PER_ROW;
	   local row = math.floor(i / GRADIENTS_PER_ROW);  -- integer row index, as in the C original

	   local src_img = create_conical (i); 
	   pixlib.pixman_image_set_repeat (src_img, ENUM.PIXMAN_REPEAT_NORMAL);
   
	   pixlib.pixman_image_set_transform (src_img, transform);
	
	   pixlib.pixman_image_composite32 (
	       ENUM.PIXMAN_OP_OVER, src_img, nil,dest_img,
	       0, 0, 0, 0, column * SIZE, row * SIZE,
	       SIZE, SIZE);
	
	   pixlib.pixman_image_unref (src_img);
    end

    save_image (dest_img, "conical-test.ppm");

    pixlib.pixman_image_unref (dest_img);

    return true;
end


main(#arg, arg)

(image: linear-gradient, the Linear Gradient demo)

(image: screen-test, the screen demo showing transparency)
Perhaps this is the easiest one of all. All the interesting functions are placed into the global namespace, so they can be accessed easily, just like everything in C is globally available.
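One common way for a binding to get that effect is to copy its exports into _G when the module table is called. This is just a sketch of the pattern (a hypothetical mylib.lua, not the actual pixman.lua source):

-- File: mylib.lua (hypothetical)
local M = {}

-- ...the binding would fill M with its interesting functions here...

-- calling the module table copies everything in it into the global namespace
setmetatable(M, {
    __call = function(self)
        for name, value in pairs(self) do
            _G[name] = value
        end
        return self
    end,
})

return M

With a binding behaving like that, local pixman = require("pixman")() hands back the module table and also makes the bare names, like pixman_image_create_bits, available globally, which is what the demo below relies on.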

package.path = package.path..";../?.lua"

local ffi = require("ffi")
local bit = require("bit")
local band = bit.band
local lshift, rshift = bit.lshift, bit.rshift

local pixman = require("pixman")()
local pixlib = pixman.Lib_pixman;
local ENUM = ffi.C
local utils = require("utils")
local save_image = utils.save_image;
local libc = require("libc")


local function main (argc, argv)

    local WIDTH = 40
    local HEIGHT = 40
    
    local src1 = ffi.cast("uint32_t *", libc.malloc (WIDTH * HEIGHT * 4));
    local src2 = ffi.cast("uint32_t *", libc.malloc (WIDTH * HEIGHT * 4));
    local src3 = ffi.cast("uint32_t *", libc.malloc (WIDTH * HEIGHT * 4));
    local dest = ffi.cast("uint32_t *", libc.malloc (3 * WIDTH * 2 * HEIGHT * 4));

    for i = 0, (WIDTH * HEIGHT)-1 do
	   src1[i] = 0x7ff00000;
	   src2[i] = 0x7f00ff00;
	   src3[i] = 0x7f0000ff;
    end

    for i = 0, (3 * WIDTH * 2 * HEIGHT)-1 do
	   dest[i] = 0x0;
    end

    local simg1 = pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8, WIDTH, HEIGHT, src1, WIDTH * 4);
    local simg2 = pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8, WIDTH, HEIGHT, src2, WIDTH * 4);
    local simg3 = pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8, WIDTH, HEIGHT, src3, WIDTH * 4);
    local dimg  = pixman_image_create_bits (ENUM.PIXMAN_a8r8g8b8, 3 * WIDTH, 2 * HEIGHT, dest, 3 * WIDTH * 4);

    pixman_image_composite (ENUM.PIXMAN_OP_SCREEN, simg1, nil, dimg, 0, 0, 0, 0, WIDTH, HEIGHT / 4, WIDTH, HEIGHT);
    pixman_image_composite (ENUM.PIXMAN_OP_SCREEN, simg2, nil, dimg, 0, 0, 0, 0, (WIDTH/2), HEIGHT / 4 + HEIGHT / 2, WIDTH, HEIGHT);
    pixman_image_composite (ENUM.PIXMAN_OP_SCREEN, simg3, nil, dimg, 0, 0, 0, 0, (4 * WIDTH) / 3, HEIGHT, WIDTH, HEIGHT);

    save_image (dimg, "screen-test.ppm");
    
    return true;
end

main(#arg, arg)

I really like this style of rapid prototyping. The challenge I have otherwise is that it’s just too time consuming to work with things in their raw C form. Things like build systems, compiler versions, and other forms of magic always seem to get in the way. And if it’s not that stuff, it’s memory management, and figuring out the inevitable crashes.

Once you wrap a library up in a bit of lua goodness though, it becomes much more approachable. It may or may not be the most performant thing in the world, but you can worry about that later.

Having this style of rapid prototyping available saves tremendous amounts of time. Since you’re not wasting your time on the mundane (memory management, build system, compiler extensions and the like), you can spend much more time on doing mashups of the various pieces of technology at hand.

In this case it was libpixman. Previously I’ve tackled everything from hot plugging usb devices, to consuming keystrokes, and putting a window on the screen.

What’s next? Well, someone inquired as to whether I would be doing a Wayland binding. My response was essentially, “since I now have all the basics, there’s no need for Wayland, I can just call what it was going to call directly.”

And so it goes. One more library in the toolbox, more interesting demos generated.