Spelunking Windows – Extracting pleasure from legacy APIs

If you are a modern programmer of Windows apps, there are numerous frameworks for you, hundreds of SDKs, scripted wrappers, IDEs to hide behind, and just layers upon layers of goodness to keep you safe and sane.  So, when it comes to using some of the core Windows APIs directly, you can be forgiven for not even knowing they exist, let alone how to use them from your favorite environment.

I’ve done a ton of exploration on the intricacies of the various Linux interfaces; Spelunking Linux goes over everything from auxv to procfs, and quite a few in between.  But, what about Windows?  Well, I’ve recently embarked on a new project, lj2win32 (not to be confused with the earlier LJIT2Win32).  The general purpose of this project is to bring the goodness of TINN to the average LuaJIT developer.  Whereas TINN is a massive project that strives to cover the entirety of the known world of common Windows interfaces, and provides a ready-to-go multi-tasking programming environment, lj2win32 is almost the opposite.  It does not provide its own shell; rather, it just provides the raw bindings necessary for the developer to create whatever they want.  It’s intended to be a simple luarocks install, much in the way ljsyscall works for creating a standard binding to UNIX kinds of systems without much fuss or intrusion.

In creating this project, I’ve tried to adhere to a couple of design principles to meet some objectives.

First objective is that it must ultimately be installable using luarocks.  This means that I have to be conscious about the organization of the file structure.  To wit, everything of consequence lives in a ‘win32’ directory.  The package name might ultimately be ‘win32’.  Everything is referenced from there.

Second objective, provide the barest minimum bindings.  Don’t change names of things, don’t introduce non-windows semantics, don’t create several layers of class hierarchies to objectify the interfaces.  Now, of course there are some very simple exceptions, but they should be fairly limited.  The idea being, anyone should be able to take this as a bare minimum, and add their own layers atop it.  It’s hard to resist objectifying these interfaces though, and everything from Microsoft’s ancient MFC, ATL, and every framework since, has thrown layers of object wrappers on the core Win32 interfaces.  In this case, wrappers and other suggestions will show up in the ‘tests’ directory.  That is fertile ground for all manner of fantastical object wrapperage.

Third objective, keep the dependencies minimal.  If you do programming in C on Windows, you include a couple of well known header files at the beginning of your program, and the whole world gets dragged in.  Everything is pretty much in a global namespace, which can lead to some bad conflicts, but they’ve been worked out over time.  In lj2win32, there are only a couple things in the global namespace; everything else is either in some table, or within the ffi.C facility.  Additionally, the wrappings are clustered in a way that follows the Windows API Sets.  API sets are a mechanism Windows has for pulling apart interdependencies in the various libraries that make up the OS.  In short, an API set is just a name (which happens to end in ‘.dll’) that is used by the loader to load in various functions.  If you use these special names, instead of the traditional ‘kernel32’ or ‘advapi32’, you might pull in a smaller set of stuff.
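To make that a bit more concrete, here is a minimal sketch of what loading a function through an API set contract name looks like from LuaJIT.  The particular contract name and version are my assumption (it is the one the project’s errorhandling module is named after), so treat the snippet as illustrative rather than authoritative:

local ffi = require("ffi")

ffi.cdef[[
uint32_t GetLastError(void);
]]

-- Load through an API set contract name instead of the traditional 'kernel32'.
-- Assumption: this particular contract exports GetLastError; consult the
-- API Sets documentation for the name that carries the function you need.
local errorhandling = ffi.load("api-ms-win-core-errorhandling-l1-1-1")

print(errorhandling.GetLastError())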

With all that, I thought I’d explore one particular bit of minutia as an example of how things could go.

The GetSystemMetrics() function call is a sort of dumping ground for a lot of UI system information.  Here’s where you can find things like how big the screen is, how many monitors there are, how many pixels are used for the menu bars, and the like.  Of course this is just a wrapper on items that probably come from the registry, or various devices and tidbits hidden away in other databases throughout the system, but it’s the convenient, developer-friendly interface.

The signature looks like this:

int WINAPI GetSystemMetrics(
_In_ int nIndex
);

A simple enough call. And a simple enough binding:

ffi.cdef[[
int GetSystemMetrics(int nIndex);
]]

Of course, there is the ‘nIndex’ parameter, which in the Windows headers is covered by a bunch of manifest constants, and which in LuaJIT might be defined thus:

ffi.cdef[[
	// Used for GetSystemMetrics
static const int	SM_CXSCREEN = 0;
static const int	SM_CYSCREEN = 1;
static const int	SM_CXVSCROLL = 2;
static const int	SM_CYHSCROLL = 3;
static const int	SM_CYCAPTION = 4;
static const int	SM_CXBORDER = 5;
static const int	SM_CYBORDER = 6;
]] 

 
Great. Then I can simply do

local value = ffi.C.GetSystemMetrics(ffi.C.SM_CXSCREEN)

 
Fantastic, I’m in business!

So, this meets the second objective of bare minimum binding. But, it’s not a very satisfying programming experience for the LuaJIT developer. How about just a little bit of sugar? Well, I don’t want to violate that same second objective of non-wrapperness, so I’ll create a separate thing in the tests directory. The systemmetrics.lua file contains a bit of an exploration in getting system metrics.

It starts out like this:

local ffi = require("ffi")
local errorhandling = require("win32.core.errorhandling_l1_1_1");

ffi.cdef[[
int GetSystemMetrics(int nIndex);
]]

local exports = {}

local function SM_toBool(value)
	return value ~= 0
end

Then it defines something like this:

exports.names = {
    SM_CXSCREEN = {value = 0};
    SM_CYSCREEN = {value = 1};
    SM_CXVSCROLL = {value = 2};
    SM_CYHSCROLL = {value = 3};
    SM_CYCAPTION = {value = 4};
    SM_CXBORDER = {value = 5};
    SM_CYBORDER = {value = 6};
    SM_CXDLGFRAME = {value = 7};
    SM_CXFIXEDFRAME = {value = 7};
    SM_CYDLGFRAME = {value = 8};
    SM_CYFIXEDFRAME = {value = 8};
    SM_CYVTHUMB = {value = 9};
    SM_CXHTHUMB = {value = 10};
    SM_CXICON = {value = 11};
    SM_CYICON = {value = 12};
    SM_CXCURSOR = {value = 13};
    SM_CYCURSOR = {value = 14};
    SM_CYMENU = {value = 15};
    SM_CXFULLSCREEN = {value = 16};
    SM_CYFULLSCREEN = {value = 17};
    SM_CYKANJIWINDOW = {value = 18, converter = SM_toBool};
    SM_MOUSEPRESENT = {value = 19, converter = SM_toBool};
    SM_CYVSCROLL = {value = 20};
    SM_CXHSCROLL = {value = 21};
    SM_DEBUG = {value = 22, converter = SM_toBool};
    SM_SWAPBUTTON = {value = 23, converter = SM_toBool};
    SM_RESERVED1 = {value = 24, converter = SM_toBool};
    SM_RESERVED2 = {value = 25, converter = SM_toBool};
    SM_RESERVED3 = {value = 26, converter = SM_toBool};
    SM_RESERVED4 = {value = 27, converter = SM_toBool};
}

And it finishes with a flourish like this:

local function lookupByNumber(num)
	for key, entry in pairs(exports.names) do
		if entry.value == num then
			return entry;
		end
	end

	return nil;
end

local function getSystemMetrics(what)
	local entry = nil;
	local idx = nil;

	if type(what) == "string" then
		entry = exports.names[what]
		if not entry then return nil end
		idx = entry.value;
	else
		idx = tonumber(what)
		if not idx then 
			return nil;
		end
		
		entry = lookupByNumber(idx)

        if not entry then return nil end
	end

	local value = ffi.C.GetSystemMetrics(idx)

    if entry.converter then
        value = entry.converter(value);
    end

    return value;
end

-- Create C definitions derived from the names table
function exports.genCdefs()
    for key, entry in pairs(exports.names) do
        ffi.cdef(string.format("static const int %s = %d", key, entry.value))
    end
end

setmetatable(exports, {
	__index = function(self, what)
		return getSystemMetrics(what)
	end,
})

return exports

All of this allows you to do a couple of interesting things. First, what if you wanted to print out all the system metrics? This same technique can be used to put all the metrics into a table to be used within your program.

local sysmetrics = require("systemmetrics");

local function testAll()
    for key, entry in pairs(sysmetrics.names) do
        local value, err = sysmetrics[key]
        if value ~= nil then
            print(string.format("{name = '%s', value = %s};", key, value))
        else
            print(key, err)
        end
    end
end
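And here is a sketch of that ‘put them all into a table’ idea, reusing the same lookup machinery. Nothing below is part of the module itself; it is just a consumer of it:

-- Snapshot every known metric into a plain Lua table, using the same
-- __index mechanism that testAll() relies on above.
local function snapshotMetrics()
    local snapshot = {}
    for key in pairs(sysmetrics.names) do
        snapshot[key] = sysmetrics[key]
    end
    return snapshot
end

local metrics = snapshotMetrics()
print(metrics.SM_CXSCREEN, metrics.SM_CYSCREEN)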

OK, so what? Well, the systemmetrics.names is a dictionary matching a symbolic name to the value used to get a particular metric. And what’s this magic with the ‘sysmetrics[key]’ thing? Well, let’s take a look back at that hand waving from the systemmetrics.lua file.

setmetatable(exports, {
	__index = function(self, what)
		return getSystemMetrics(what)
	end,
})

Oh, I see now, it’s obvious…

So, what’s happening here with the setmetatable thing is, Lua has a way of setting some functions on a table which dictate the behavior it will exhibit in certain situations. In this case, the ‘__index’ function, if it exists, will take care of the cases when you try to look something up, and it isn’t directly in the table. So, in our example, doing the ‘sysmetrics[key]’ thing is essentially saying, “Try to find a value with the string associated with ‘key’. If it’s not found, then do whatever is associated with the ‘__index’ value”. In this case, ‘__index’ is a function, so that function is called, and whatever it returns becomes the value associated with that key.

I know, it’s a mouthful, and metatables are one of the more challenging aspects of Lua to get your head around, but once you do, it’s a powerful concept.

How about another example, which is a more realistic and typical case?

local function testSome()
    print(sysmetrics.SM_MAXIMUMTOUCHES)
end

In this case, the exact same mechanism is at play. In Lua, there are two ways to get a value out of a table. The first one we’ve already seen, where the ‘[]’ notation is used, as if the thing were an array. In the ‘testSome()’ case, the ‘.’ notation is being utilized. This is accessing the table as if it were a data structure, but it’s exactly the same as trying to access as an array, at least as far as the invocation of the ‘__index’ function is concerned. The ‘SM_MAXIMUMTOUCHES’ is taken as a string value, so it’s the same as doing: sysmetrics[‘SM_MAXIMUMTOUCHES’], and from the previous example, we know how that works out.

Now, there’s one more thing to note from this little escapade. The implementation of the helper function:

local function getSystemMetrics(what)
	local entry = nil;
	local idx = nil;

	if type(what) == "string" then
		entry = exports.names[what]
		if not entry then return nil end
		idx = entry.value;
	else
		idx = tonumber(what)
		if not idx then 
			return nil;
		end
		
		entry = lookupByNumber(idx)

        if not entry then return nil end
	end

	local value = ffi.C.GetSystemMetrics(idx)

    if entry.converter then
        value = entry.converter(value);
    end

    return value;
end

There’s all manner of nonsense in here. The ‘what’ can be either a string or something that can be converted to a number. This is useful because it allows you to pass in symbolic names like “SM_CXBLAHBLAHBLAH” or a number 123. That’s great depending on what you’re interacting with and how the values are held. You might have some UI for example where you just want to use the symbolic names and not deal with numbers.

The other thing of note is that ‘entry.converter’ bit at the end. If you look back at the names table, you’ll notice that some of the entries have a ‘converter’ field associated with them. This is an optional function that can be associated with the entries. If it exists, it is called, with the value returned from the system call passed to it. In most cases, what the system returns is a number (number of mouse buttons, size of screen, etc). In some cases, the value returned is ‘0’ for false, and ‘non-zero’ for true. Well, as a Lua developer, I’d rather just get a bool in those cases where it’s appropriate, and this helper function is in a position to provide that for me. This is great because it allows me to not have to check the documentation to figure it out.

There’s one more tiny gem hidden in all this madness.

function exports.genCdefs()
    for key, entry in pairs(exports.names) do
        ffi.cdef(string.format("static const int %s = %d", key, entry.value))
    end
end

What does this do exactly? Simply, it generates those constants in the ffi.C space, so that you can still do this:

ffi.C.GetSystemMetrics(ffi.C.SM_MAXIMUMTOUCHES)

So, there you have it. You can go with the raw traditional sort of ffi binding, or you can spice things up and make them a bit more useful with a little bit of effort. I like doing the latter, because I can generate the more traditional binding from the table of names that I’ve created. That’s a useful thing for documentation purposes, and in general.
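As a sketch of what that generation could look like (my own illustration, working off the same names table, not something that ships with the repo):

-- Emit a traditional ffi.cdef-style block of constants from the names table,
-- suitable for pasting into a raw binding file or into documentation.
local function generateCdefText(names)
    local lines = {}
    for key, entry in pairs(names) do
        table.insert(lines, string.format("static const int %s = %d;", key, entry.value))
    end
    table.sort(lines)
    return "ffi.cdef[[\n" .. table.concat(lines, "\n") .. "\n]]"
end

print(generateCdefText(sysmetrics.names))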

I have stuck to my objectives, and this little example just goes to prove how esoteric minute details can be turned into approachable things of beauty with a little bit of Lua code.


schedlua – refactor compactor

The subject of scheduling and async programming has been a long running theme in my blog.  From the very first entries related to LJIT2Win32, through the creation of TINN, and most recently (within the past year), the creation of schedlua, I have been exploring this subject.  It all kind of started innocently enough.  When node.js was born, and libuv was ultimately released, I thought to myself, ‘what prevents anyone from doing this in LuaJIT without the use of any external libraries whatsoever?’

It’s been a long road.  There’s no pressing reason for this code to continue to evolve; it’s not at the center of some massively distributed system.  These are merely bread crumbs left behind, mainly for myself, as I explore and evolve a system that has proven itself to be useful, at least as a teaching aid.

In the most recent incarnation of the schedlua kernel, I was able to clean up my act with the realization that you can implement all higher level semantics using a very basic ‘signal’ mechanism within the kernel.  That was pretty good, as it allowed me to easily implement the predicate system (when, whenever, waitForTruth, signalOnPredicate).  In addition, it allowed me to reimplement the async io portion with the realization that a task waiting on IO to occur is no different than a task waiting on any other kind of signal, so I could simply build the async io atop the signaling.

schedlua has largely been a Linux based project, until now.  The crux of the difference between Linux and Windows comes down to two things in schedlua.  The first thing is timing operations.  Basically, how do you get a microsecond-accurate clock on the system?  On Linux, I use the ‘clock_gettime()’ system call.  On Windows, I use ‘QueryPerformanceCounter’ and ‘QueryPerformanceFrequency’.  In order to isolate these, I put them into their own platform specific timeticker.lua file, and they both just have to surface a ‘seconds()’ function.  The differences are abstracted away, and the common interface is that of a stopwatch class.
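The Windows half of that boils down to something like the following sketch.  The actual timeticker.lua in the repo may be organized a little differently (and uses the proper Windows typedefs), but the core calculation is the same idea:

local ffi = require("ffi")

ffi.cdef[[
typedef struct { int64_t QuadPart; } LARGE_INTEGER;  // simplified layout

int QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
int QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
]]

-- the frequency is fixed at boot, so query it once
local freq = ffi.new("LARGE_INTEGER")
ffi.C.QueryPerformanceFrequency(freq)

local function seconds()
    local counts = ffi.new("LARGE_INTEGER")
    ffi.C.QueryPerformanceCounter(counts)
    return tonumber(counts.QuadPart) / tonumber(freq.QuadPart)
end

print("seconds: ", seconds())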

That was good for time, but what about alarms?

The functions in schedlua related to alarms are: delay, periodic, runningTime, and sleep.  Together, these allow you to run things based on time, as well as delay the current task as long as you like.  My first implementation of these routines, going all the way back to the TINN implementation, was to run a separate ‘watchdog’ task, which in turn maintained its list of tasks that were waiting, and scheduled them.  Recently, I thought, “why can’t I just use the ‘whenever’ semantics to implement this?”.

Now, the implementation of the alarm routines comes down to this:

 

local function taskReadyToRun()
	local currentTime = SWatch:seconds();

	-- traverse through the fibers that are waiting
	-- on time
	local nAwaiting = #SignalsWaitingForTime;

	for i=1,nAwaiting do
		local task = SignalsWaitingForTime[1];
		if not task then
			return false;
		end

		if task.DueTime <= currentTime then
			return task
		else
			return false
		end
	end

	return false;
end

local function runTask(task)
    signalOne(task.SignalName);
    table.remove(SignalsWaitingForTime, 1);
end

Alarm = whenever(taskReadyToRun, runTask)

The Alarm module still keeps a list of tasks that are waiting for their time to execute, but instead of using a separate watchdog task to keep track of things, I simply use the schedlua built-in ‘whenever’ function. This basically says, “whenever the function ‘taskReadyToRun()’ returns a non-false value, call the function ‘runTask()’ passing the parameter from taskReadyToRun()”. Convenient, end of story, simple logic using words that almost feel like an English sentence to me.

I like this kind of construct for a few reasons. First of all, it reuses code. I don’t have to code up that specialized watchdog task time and time again. Second, it wraps up the async semantics of the thing. I don’t really have to worry about explicitly calling spawn, or anything else related to multi-tasking. It’s just all wrapped up in that one word ‘whenever’. It’s relatively easy for me to explain this code, without mentioning semaphores, threads, conditions, or whatever. I can tell a child “whenever this is true, do that other thing”, and they will understand it.

So, that’s it. First I used signals as the basis to implement higher order functions, such as the predicate based flow control. Now I’m using the predicate based flow control to implement yet other functions such as alarms. Next, I’ll take that final step and do the same to the async IO, and I’ll be back to where I was a few months back, but with a much smaller codebase, and cross platform to boot.


Spelunking Linux – procfs or is that sysctl?

Last time around, I introduced some simple things with lj2procfs.  Being able to simply access the contents of the various files within procfs is a bit of convenience.  Really what lj2procfs is doing is just giving you a common interface to the data in those files.  Everything shows up as simple lua values, typically tables, with strings, and numbers.  That’s great for most of what you’d be doing with procfs, just taking a look at things.

But, on Linux, procfs has another capability.  The /proc/sys directory contains a few interesting directories of its own:

 

abi/
debug/
dev/
fs/
kernel/
net/
vm/

And if you look into these directories, you find some more interesting files. For example, in the ‘kernel/’ directory, we can see a little bit of this:

hostname
hotplug
hung_task_check_count
hung_task_panic
hung_task_timeout_secs
hung_task_warnings
io_delay_type
kexec_load_disabled
keys
kptr_restrict
kstack_depth_to_print
max_lock_depth
modprobe
.
.
.

Now, these are looking kind of interesting. These files contain typically tunable portions of the kernel. On other unices, these values might be controlled through the sysctl() function call. On Linux, that function would just manipulate the contents of these files. So, why not just use lj2procfs to do the same?

Let’s take a look at a few relatively simple tasks. First, I want to get the version of the OS running on my machine. This can be obtained through the file /proc/sys/kernel/version:

local procfs = require("lj2procfs.procfs")
print(procfs.sys.kernel.version)

$ #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015

This is the same string returned from the command ‘uname -v’.

And, to get the hostname of the machine:

print(procfs.sys.kernel.hostname)
$ ubuntu

Which is what the ‘hostname’ command returns on my machine.

And what about setting the hostname? First of all, you’ll want to do this as root, but it’s equally simple:

procfs.sys.kernel.hostname = 'alfredo'

Keep in mind that setting the hostname in this way is transient, and it can seriously mess things up, for example the next time you try to sudo after this. But, there you have it.

Any value under /proc/sys can be retrieved or set using this fairly simple mechanism. I find this to be very valuable for two reasons. First of all, spelunking these values makes for great discovery. More importantly, being able to capture and set the values makes for a fairly easily tunable system.

As an example of how this can be used for system diagnostics and tuning, you can capture the kernel values using a simple command that just dumps what you want into a table. Send that table to anyone else for analysis. Similarly, if someone has come up with a system configuration that is great for a particular task, tuning the VM allocations, networking values, and the like, they can send you the configuration (just a string value that is a lua table) and you can apply it to your system.
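A sketch of that capture/apply round trip might look like this.  The particular key names are just examples; anything under /proc/sys that lj2procfs exposes works the same way, and setting values requires root:

local procfs = require("lj2procfs.procfs")

-- capture a handful of kernel settings into a plain table
local function captureKernelSettings(names)
    local settings = {}
    for _, name in ipairs(names) do
        settings[name] = procfs.sys.kernel[name]
    end
    return settings
end

-- apply a previously captured (or hand edited) table of settings
local function applyKernelSettings(settings)
    for name, value in pairs(settings) do
        procfs.sys.kernel[name] = value
    end
end

local snapshot = captureKernelSettings({"hostname", "hung_task_timeout_secs"})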

This is a tad better than simply trying to look at system logs to determine after the fact what might be going on with a system. Perhaps the combination of these live values, as well as correlation with system logs, makes it easier to automate the process of diagnosing and tuning a system.

Well, there you have it. The lj2procfs thing is getting more concise, as well as becoming more usable at the same time.


Spelunking Linux – procfs, and a bag of chips

Recently, as a way to further understand all things Linux, I’ve been delving into procfs.  This is one of those virtual file systems on linux, meaning the ‘files’ and ‘directories’ are not located on some real media anywhere; they are conjured up in real time from within the kernel.  If you take a look at the ‘/proc’ directory on any linux machine, you’ll find a couple of things.  First, there are a bunch of directories with numeric values as their names.

 

1	     10			100	      1003	     1004
10075	     101		10276	      10683	     10695
10699	     1071		10746	      10756	     10757
1081	     11			11927	      12	     1236
12527	     12549		12563	      1296	     13

Yes, true to the unix philosophy, all the things are but files/directories. Each one of these numbers represents a process that is running on the system at the moment. Each one of these directories contains additional directories, and files. The files contain interesting data, and the directories lead into even more places where you can find more interesting things about the process.

Here are the contents of the directory ‘1’, which is the first process running on the system:

attr/	    autogroup	     auxv	 cgroup     clear_refs	cmdline
comm	    coredump_filter  cpuset	 cwd@	    environ	exe@
fd/	    fdinfo/	     gid_map	 io	    limits	loginuid
map_files/  maps	     mem	 mountinfo  mounts	mountstats
net/	    ns/		     numa_maps	 oom_adj    oom_score	oom_score_adj
pagemap     personality      projid_map  root@	    sched	schedstat
sessionid   setgroups	     smaps	 stack	    stat	statm
status	    syscall	     task/	 timers     uid_map	wchan

Some actual files, some more directories, some symbolic links. To find out the details of what each of these contains, and their general meaning, you need to consult the procfs man page, as well as the source code of the linux kernel, or various utilities that use them.

Backing up a bit, the /proc directory itself contains some very interesting files as well:

acpi/	      asound/		 buddyinfo     bus/	      cgroups
cmdline       consoles		 cpuinfo       crypto	      devices
diskstats     dma		 driver/       execdomains    fb
filesystems   fs/		 interrupts    iomem	      ioports
irq/	      kallsyms		 kcore	       keys	      key-users
kmsg	      kpagecount	 kpageflags    loadavg	      locks
mdstat	      meminfo		 misc	       modules	      mounts@
mtrr	      net@		 pagetypeinfo  partitions     sched_debug
schedstat     scsi/		 self@	       slabinfo       softirqs
stat	      swaps		 sys/	       sysrq-trigger  sysvipc/
thread-self@  timer_list	 timer_stats   tty/	      uptime
version       version_signature  vmallocinfo   vmstat	      zoneinfo

Again, the meanings of each of these is buried in the various documentation and source code files that surround them, but let’s take a look at a couple of examples. How about that uptime file?

8099.41 31698.74

OK. Two numbers. What do they mean? The first one is how many seconds the system has been running. The second one is the number of seconds all cpus on the system have been idle since the system came up. Yes, on a multi-proc system, the second number can be greater than the first. And thus begins the actual journey into spelunking procfs. If you’re like me, you occasionally need to know this information. Of course, if you want to know it from the command line, you just run the ‘uptime’ command, and you get…

 06:38:22 up  2:18,  2 users,  load average: 0.17, 0.25, 0.17

Well, hmmm, I get the ‘up’ part, but what’s all that other stuff, and what happened to the idle time thing? As it turns out, the uptime command does show the up time, but it also shows the logged in users, and the load average numbers, which actually come from different files.

It’s like this. Whatever you want to know about the system is probably available, but you have to know where to look for it, and how to interpret the data from the files. Often times there’s either a libc function you can call, or a command line utility, if you can discover and remember them.

What about a different way? Since I’m spelunking, I want to discover things in a more random fashion, and of course I want easy lua programmatic access to what I find. In steps the lj2procfs project.

In lj2procfs, I try to provide a more manageable interface to the files in /proc.  Most often, the information is presented as lua tables.  If the information is too simple (like /proc/version), then it is presented as a simple string.  Here is a look at that uptime example, done using lj2procfs:

 

return {
    ['uptime'] = {
        seconds = 19129.39,
        idle = 74786.86,
    };
}

You can see that the simple two numbers in the uptime file are converted to meaningful fields within the table. In this case, I use a simple utility program to turn any of the files into simple lua value output, suitable for reparsing, or transmitting. First, what does the ‘parsing’ look like?

--[[
	seconds idle 

	The first value is the number of seconds the system has been up.
	The second number is the accumulated number of seconds all processors
	have spent idle.  The second number can be greater than the first
	in a multi-processor system.
--]]
local function decoder(path)
	path = path or "/proc/uptime"
	local f = io.open(path)
	local str = f:read("*a")
	f:close()

	local seconds, idle = str:match("(%d*%.?%d+)%s+(%d*%.?%d+)")
	return {
		seconds = tonumber(seconds);
		idle = tonumber(idle);
	}
end

return {
	decoder = decoder;
}

In most cases, the output of the /proc files is meant to be human readable. At least with Linux. Other platforms might prefer these files to be more easily machine readable (binary). As such, they are readily parseable, mostly by simple string patterns.

So, this decoder is one of many. There is one for each of the file types in the /proc directory, or at least the list is growing.

They are in turn accessed using the Decoders class.

local Decoders = {}
local function findDecoder(self, key)
	local path = "lj2procfs.codecs."..key;

	-- try to load the intended codec file
	local success, codec = pcall(function() return require(path) end)
	if success and codec.decoder then
		return codec.decoder;
	end

	-- if we didn't find a decoder, use the generic raw file loading
	-- decoder.
	-- Caution: some of those files can be very large!
	return getRawFile;
end
setmetatable(Decoders, {
	__index = findDecoder;
})

This is a fairly simple forwarding mechanism. You could use this in your code by doing the following:

local procfs = require("lj2procfs.procfs")
local uptime = procfs.uptime

printValue(uptime)

When you try to access the procfs.uptime field of the Decoders class, it will go, “Hmmm, I don’t have a field in my table with that name, I’ll defer to whatever was set as my __index value, which so happens to be a function, so I’m going to call that function and see what it comes up with”. The findDecoder function will in turn look in the codecs directory for something with that name. It will find the code in uptime.lua, and execute it, handing it the path specified. The uptime function will read the file, parse the values, and return a table.

And thus magic is practiced!

It’s actually pretty nice because having things as lua tables and lua values, such as numbers and strings, makes it really easy to do programmatic things from there.

Here’s meminfo.lua

local function meminfo(path)
	path = path or "/proc/meminfo"
	local tbl = {}
	local pattern = "(%g+):%s+(%d+)%s+(%g+)"

	for str in io.lines(path) do
		local name, size, units = str:match(pattern)
		if name then
			tbl[name] = {
				size = tonumber(size), 
				units = units;
			}
		end
	end

	return tbl
end

return {
	decoder = meminfo;
}

The raw ‘/proc/meminfo’ file output looks something like this:

MemTotal:        2045244 kB
MemFree:          273464 kB
MemAvailable:     862664 kB
Buffers:           72188 kB
Cached:           629268 kB
SwapCached:            0 kB
.
.
.

And the parsed output might be something like this:

    ['meminfo'] = {
        ['Active'] = {
            size = 1432756,
            units = [[kB]],
        };
        ['DirectMap2M'] = {
            size = 1992704,
            units = [[kB]],
        };
        ['MemFree'] = {
            size = 284604,
            units = [[kB]],
        };
        ['MemTotal'] = {
            size = 2045244,
            units = [[kB]],
        };
.
.
.

Very handy.

In some cases, the output can be a bit tricky: since it’s trying to be human readable, there might be header lines and a variable number of columns. Still, you have the full power of lua to do the parsing, including using something like lpeg if you so choose. Here’s the parser for the ‘/proc/interrupts’ file, for example:

local strutil = require("lj2procfs.string-util")

local namepatt = "(%g+):(.*)"
local numberspatt = "[%s*(%d+)]+"
local descpatt = "[%s*(%d+)]+(.*)"

local function numbers(value)
	local num = string.match(value, numberspatt)
	return num;
end

local function interrupts(path)
	path = path or "/proc/interrupts"

	local tbl = {}
	for str in io.lines(path) do
		local name, remainder = str:match(namepatt)
		if name then
			local numbers = remainder:match(numberspatt)
			local desc = remainder:match(descpatt)

			local cpu = 0;
			local valueTbl = {}
			for number in numbers:gmatch("%s*(%d+)") do
				--print("NUMBER: ", number)
				valueTbl["cpu"..cpu] = tonumber(number);
				cpu = cpu + 1;
			end
			valueTbl.description = desc
			tbl[name] = valueTbl

		end
	end

	return tbl
end

return {
	decoder = interrupts;
}

And it deals with raw file data that looks like this:

          CPU0       CPU1       CPU2       CPU3       
  0:         38          0          0          0   IO-APIC-edge      timer
  1:      25395          0          0          0   IO-APIC-edge      i8042
  8:          1          0          0          0   IO-APIC-edge      rtc0
  9:      70202          0          0          0   IO-APIC-fasteoi   acpi
.
.
.

In this case, I’m running on a VM which was configured with 4 cpus. I had run previously with a VM with only 3 CPUs, and there were only 3 CPU columns. So, in this case, the patterns first isolate the interrupt number from the remainder of the line, then the numeric columns are isolated from the interrupt description field, then the numbers themselves are matched off using an iterator (gmatch). The table generated looks something like this:

    ['interrupts'] = {
        ['SPU'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[Spurious interrupts]],
        };
        ['22'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[IO-APIC  22-fasteoi   virtio1]],
        };
        ['NMI'] = {
            cpu2 = 0,
            cpu3 = 0,
            cpu1 = 0,
            cpu0 = 0,
            description = [[Non-maskable interrupts]],
        };
.
.
.

Nice.

To make spelunking easier, I’ve created a simple script which just calls the procfs thing, given a command line argument of the name of the file you’re interested in looking at.

#!/usr/bin/env luajit

--[[
	procfile

	This is a general purpose /proc/<file> interpreter.
	Usage:  
		$ sudo ./procfile filename

	Here, 'filename' is any one of the files listed in the 
	/proc directory.

	In the cases where a decoder is implemented in Decoders.lua
	the file will be parsed, and an appropriate value will be
	returned and printed in a lua form appropriate for reparsing.

	When there is no decoder implemented, the value returned is
	"NO DECODER AVAILABLE"

	example:
		$ sudo ./procfile cpuinfo
		$ sudo ./procfile partitions

--]]

package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")
local putil = require("lj2procfs.print-util")

if not arg[1] then
	print ([[

USAGE: 
	$ sudo ./procfile <filename>

where <filename> is the name of a file in the /proc
directory.

Example:
	$ sudo ./procfile cpuinfo
]])

	return 
end


local filename = arg[1]

print("return {")
putil.printValue(procfs[filename], "    ", filename)
print("}")

Once you have these basic tools in hand, you can begin to look at the various utilities that are used within linux, and try to emulate them. For example, the ‘free’ command will show you roughly how memory currently sits on your system, in terms of how much is physically available, how much is used, and the like. Its typical output, without any parameters, might look like:

             total       used       free     shared    buffers     cached
Mem:       2045244    1760472     284772      28376      73276     635064
-/+ buffers/cache:    1052132     993112
Swap:      1046524          0    1046524

Here’s the code to generate something similar, using lj2procfs:

#!/usr/bin/env luajit

--[[
	This lua script acts similar to the 'free' command, which will
	display some interesting information about how much memory is being
	used in the system.
--]]
--memfree.lua
package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")

local meminfo = procfs.meminfo;

local memtotal = meminfo.MemTotal.size
local memfree = meminfo.MemFree.size
local memused = memtotal - memfree
local memshared = meminfo.Shmem.size
local membuffers = meminfo.Buffers.size
local memcached = meminfo.Cached.size

local swaptotal = meminfo.SwapTotal.size
local swapfree = meminfo.SwapFree.size
local swapused = swaptotal - swapfree

print(string.format("%18s %10s %10s %10s %10s %10s",'total', 'used', 'free', 'shared', 'buffers', 'cached'))
print(string.format("Mem: %13d %10d %10d %10d %10d %10d", memtotal, memused, memfree, memshared, membuffers, memcached))
--print(string.format("-/+ buffers/cache: %10d %10d", 1, 2))
print(string.format("Swap: %12d %10d %10d", swaptotal, swapused, swapfree))

The working end of this is simply ‘local meminfo = procfs.meminfo’

The code generates the following output.

             total       used       free     shared    buffers     cached
Mem:       2045244    1768692     276552      28376      73304     635100
Swap:      1046524          0    1046524

I haven’t quite figured out where the ‘-/+ buffers/cache’ values come from yet. I’ll have to look at the actual code for the ‘free’ program to figure it out. But, the results are otherwise the same.

Some of these files can be quite large, like kallsyms, which might argue for an iterator interface instead of a table interface. But, some of the files have meta information, as well as a list of fields. Since the number of large files is fairly small, it made more sense to cover the broader cases instead, and tables do that fine. Even though kallsyms is fairly large, it will still fit nicely into a table.

So, what can you do with that?

--findsym.lua
package.path = "../?.lua;"..package.path;

local sym = arg[1]
assert(sym, "must specify a symbol")

local putil = require("lj2procfs.print-util")
local sutil = require("lj2procfs.string-util")
local fun = require("lj2procfs.fun")
local procfs = require("lj2procfs.procfs")

local kallsyms = procfs.kallsyms

local function isDesiredSymbol(name, tbl)
    return sutil.startswith(name, sym)
end

local function printSymbol(name, value)
	putil.printValue(value)
end

fun.each(printSymbol, fun.filter(isDesiredSymbol, kallsyms))

In this case, a little utility which will traverse through the symbols, looking for something that starts with whatever the user specified on the command line. So, I can use it like this:

luajit findsym.lua mmap

And get the following output:

{
    location = [[0000000000000000]],
    name = [[mmap_init]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_rnd]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_zero]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_min_addr_handler]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_vmcore_fault]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_region]],
    kind = [[T]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_vmcore]],
    kind = [[t]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_kset.19656]],
    kind = [[b]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_min_addr]],
    kind = [[B]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_mem_ops]],
    kind = [[r]],
};
{
    location = [[0000000000000000]],
    name = [[mmap_mem]],
    kind = [[t]],
};

Of course, you’re not limited to simply printing to stdout. In fact, that’s the least valuable thing you could be doing. What you really have is programmatic access to all these values. If you had run this command as root, you would get the actual addresses of these routines.
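For instance, rather than printing, you could gather the matches into a name-to-address table. This assumes the kallsyms table is keyed by symbol name, which is what the filter predicate above suggests:

package.path = "../?.lua;"..package.path;

local procfs = require("lj2procfs.procfs")
local sutil = require("lj2procfs.string-util")

-- gather matching symbols into a name -> location table instead of printing
local function collectSymbols(prefix)
    local found = {}
    for name, entry in pairs(procfs.kallsyms) do
        if sutil.startswith(name, prefix) then
            found[name] = entry.location
        end
    end
    return found
end

local mmapSyms = collectSymbols("mmap")
for name, location in pairs(mmapSyms) do
    print(location, name)
end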

And so it goes. lj2procfs gives you easy programmatic access to all the great information that is hidden in the procfs file system on linux machines. These routines make it relatively easy to gain access to the information, and to utilize tools such as luafun to manage it. Once again, the linux system is nothing more than a very large database. Using a tool such as lua makes it relatively easy to access all the information in that database.

So far, lj2procfs just covers reading the values. In this article I did not cover the fact that you can also get information on individual processes. Aside from this, procfs actually allows you to set some values as well. This is why I structured the code as ‘codecs’. You can encode, and decode. So, in future, setting a value will be as simple as ‘procfs.something.somevalue = newvalue’. This will take the guesswork out of doing command line ‘echo …’ commands for esoteric values, which are seldom used. It also makes it easy to achieve great things programmatically through script, without even relying on various libraries that are meant to do the same.

And there you have it. procfs wrapped up in lua goodness.


Spelunking Linux – Decomposing systemd

Honestly, I don’t know what all the fuss is about. What is systemd?  It’s that bit of code that gets things going on your Linux machine once the kernel has loaded itself.  You know, dealing with bringing up services, communicating between services, running the udev and dbus stuff, etc.

So, I wrote an ffi wrapper for the libsystemd.so library. This has proven to be handy; as usual, I can essentially write what looks like standard C code, but it’s actually LuaJIT goodness.

--[[
	Test using SDJournal as a cursor over the journal entries

	In this case, we want to try the various cursor positioning
	operations to ensure they work correctly.
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")

local jnl = SDJournal()

-- move forward a few times
for i=1,10 do
	jnl:next();
end

-- get the cursor label for this position
local label10 = jnl:positionLabel()
print("Label 10: ", label10)

-- seek to the beginning, print that label
jnl:seekHead();
jnl:next();
local label1 = jnl:positionLabel();
print("Label 1: ", label1);


-- seek to label 10 again
jnl:seekLabel(label10)
jnl:next();
local label3 = jnl:positionLabel();
print("Label 3: ", label3)
print("label 3 == label 10: ", label3 == label10)

In this case, we have a simple journal object which makes it relatively easy to browse through the systemd journals that are lying about. That’s handy. Combined with the luafun functions, browsing through journals suddenly becomes a lot easier, with the full power of lua to form very interesting queries, or other operations.

--[[
	Test cursoring over journal, turning each entry
	into a lua table to be used with luafun filters and whatnot
--]]
package.path = package.path..";../src/?.lua"

local SDJournal = require("SDJournal")
local sysd = require("systemd_ffi")
local fun = require("fun")()

-- Feed this routine a table with the names of the fields
-- you are interested in seeing in the final output table
local function selection(fields, aliases)
	return function(entry)
		local res = {}
		for _, k in ipairs(fields) do
			if entry[k] then
				res[k] = entry[k];
			end
		end
		return res;
	end
end

local function  printTable(entry)
	print(entry)
	each(print, entry)
end

local function convertCursorToTable(cursor)
	return cursor:currentValue();
end


local function printJournalFields(selector, flags)
	flags = flags or 0
	local jnl1 = SDJournal();

	if selector then
		each(printTable, map(selector, map(convertCursorToTable, jnl1:entries())))
	else
		each(printTable, map(convertCursorToTable, jnl1:entries()))	
	end
end

-- print all fields, but filter the kind of journal being looked at
--printJournalFields(nil, sysd.SD_JOURNAL_CURRENT_USER)
--printJournalFields(nil, sysd.SD_JOURNAL_SYSTEM)

-- printing specific fields
--printJournalFields(selection({"_HOSTNAME", "SYSLOG_FACILITY"}));
printJournalFields(selection({"_EXE", "_CMDLINE"}));

-- to print all the fields available per entry
--printJournalFields();

In this case, we have a simple journal printer, which will take a list of fields, as well as a selection of the kinds of journals to look at. That’s quite useful as you can easily generate JSON or XML, or Lua tables on the output end, without much work. You can easily select which fields you want to display, and you could even change the names along the way. You have the full power of lua at your disposal to do whatever you want with the data.
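The selection() helper above takes an ‘aliases’ argument that it doesn’t actually use yet. Here is my guess at what honoring it might look like, as a purely hypothetical variant:

-- Hypothetical variant of selection() that renames fields on the way out.
-- 'aliases' maps a journal field name to the key you want in the result table.
local function selectionWithAliases(fields, aliases)
	aliases = aliases or {}
	return function(entry)
		local res = {}
		for _, k in ipairs(fields) do
			if entry[k] then
				res[aliases[k] or k] = entry[k]
			end
		end
		return res
	end
end

-- e.g.
-- printJournalFields(selectionWithAliases({"_EXE", "_CMDLINE"},
--     {_EXE = "exe", _CMDLINE = "cmdline"}))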

In this case, the SDJournal object is pretty straightforward. It simply wraps the various ‘sd_xxx’ calls within the library to get its work done. What about some other cases? Does the systemd library need to be used for absolutely everything that it does? The answer is ‘no’, you can do a lot of the work yourself, because at the end of the day, the passive part of systemd is just a bunch of file system manipulation.

Here’s where it gets interesting in terms of decomposition.

Within the libsystemd library, there is the sd_get_machine_names() function:

_public_ int sd_get_machine_names(char ***machines) {
        char **l = NULL, **a, **b;
        int r;

        assert_return(machines, -EINVAL);

        r = get_files_in_directory("/run/systemd/machines/", &l);
        if (r < 0)
                return r;

        if (l) {
                r = 0;

                /* Filter out the unit: symlinks */
                for (a = l, b = l; *a; a++) {
                        if (startswith(*a, "unit:") || !machine_name_is_valid(*a))
                                free(*a);
                        else {
                                *b = *a;
                                b++;
                                r++;
                        }
                }

                *b = NULL;
        }

        *machines = l;
        return r;
}

The lua wrapper for this would simply be:

ffi.cdef("int sd_get_machine_names(char ***machines)")

Great, for those who already know this call, you can allocate a char * array, get the array of string values, and party on. But what about the lifetime of those strings, and if you’re doing it as an iterator, when do you ever free stuff, and isn’t this all wasteful?
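To make the lifetime question concrete, here is a sketch of driving the raw call from LuaJIT. I’m assuming the strings and the array come from malloc and are the caller’s to free, which is what the implementation above implies:

local ffi = require("ffi")
local sd = ffi.load("systemd")

ffi.cdef[[
int sd_get_machine_names(char ***machines);
void free(void *ptr);
]]

local function machineNames()
    local machines = ffi.new("char **[1]")
    local n = sd.sd_get_machine_names(machines)
    if n < 0 then return nil, n end

    local names = {}
    if machines[0] ~= nil then
        local i = 0
        while machines[0][i] ~= nil do
            names[#names + 1] = ffi.string(machines[0][i])
            ffi.C.free(machines[0][i])        -- free each string
            i = i + 1
        end
        ffi.C.free(machines[0])               -- then free the array itself
    end

    return names
end

for _, name in ipairs(machineNames() or {}) do print(name) end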

So, looking at that code in the library, you might think, ‘wait a minute, I could just replicate that in Lua, and get it done without touching the libsystemd library at all!’

local fun = require("fun")

local function isNotUnit(name)
	return not strutil.startswith(name, "unit:")
end

function SDMachine.machineNames(self)
	return fun.filter(isNotUnit, fsutil.files_in_directory("/run/systemd/machines/"))
end

OK, that looks simple. But what’s happening with that ‘files_in_directory()’ function? Well, that’s the meat and potatoes of this operation.

local function nil_iter()
    return nil;
end

-- This is a 'generator' which will continue
-- the iteration over files
local function gen_files(dir, state)
    local de = nil

    while true do
        de = libc.readdir(dir)

        -- if we've run out of entries, then return nil
        if de == nil then return nil end

        -- check the entry to see if it's an actual file, and not
        -- a directory or link
        if dirutil.dirent_is_file(de) then
            break;
        end
    end

    local name = ffi.string(de.d_name);

    return de, name
end

local function files_in_directory(path)
    local dir = libc.opendir(path)

    if dir == nil then return nil_iter, nil, nil; end

    -- make sure to do the finalizer
    -- for garbage collection
    ffi.gc(dir, libc.closedir);

    return gen_files, dir, nil;
end

In this case, files_in_directory() takes a string path, like “/run/systemd/machines”, and just iterates over the directory, returning only the files found there. It’s convenient in that it will skip so-called ‘hidden’ files, and things that are links. This simple technique/function can be the cornerstone of a lot of things that view files in Linux. The function leverages the libc opendir() and readdir() functions, so there’s nothing new here, but it wraps it all up in this convenient iterator, which is nice.
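A quick usage sketch, assuming files_in_directory() is exported from its module; the iterator hands back the raw dirent first and the file name second:

-- print the plain files in a directory, skipping directories and links
for de, name in files_in_directory("/run/systemd/machines/") do
    print(name)
end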

systemd is about a whole lot more than just browsing directories, but that’s certainly a big part of it. When you break it down like this, you begin to realize that you don’t actually need to use a ton of stuff from the library. In fact, it’s probably better and less resource intensive to just ‘go direct’ where it makes sense. In this case, it was just implementing a few choice routines to make file iteration work the same as it does in systemd. As this binding evolves, I’m sure there is other low-lying fruit that I’ll be able to pluck to make it even more interesting, useful, and independent of the libsystemd library.


Spelunking Linux – Yes, the system truly is a database

In the article “Isn’t the whole system just a database? – libdrm”, I explored a little bit of the database nature of Linux by using libudev to enumerate and open libdrm devices.  After that, I spent some time bringing up a USB module: LJIT2libusb.  libusb is a useful cross platform library that makes it relatively easy to gain access to the usb functions on multiple platforms.  It can enumerate devices, deal with hot plug notifications, open up, read, write, etc.

At its core, on Linux at least, libusb tries to leverage the udev capabilities of the target system, if those capabilities are there.  This means that device enumeration and hot plugging actually use the libudev stuff.  In fact, the code for enumerating those usb devices in libusb looks like this:

 

	udev_enumerate_add_match_subsystem(enumerator, "usb");
	udev_enumerate_add_match_property(enumerator, "DEVTYPE", "usb_device");
	udev_enumerate_scan_devices(enumerator);
	devices = udev_enumerate_get_list_entry(enumerator);

There’s more stuff of course, to turn that into data structures which are appropriate for use within the libusb view of the world. But, here’s the equivalent using LLUI and the previously developed UVDev stuff:

local function isUsbDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "usb" and
		dev:getProperty("devtype") == "usb_device" then
		return true;
	end

	return false;
end

each(print, filter(isUsbDevice, ctxt:devices()))

It’s just illustrative, but it’s fairly simple to understand I think. The ‘ctxt:devices()’ is an iterator over all the devices in the system. The ‘filter’ function is part of the luafun functional programming routines available to Lua. The ‘isUsbDevice’ is a predicate function, which returns ‘true’ when the device in question matches what it believes makes a device a ‘usb’ device. In this case, it’s the subsystem and devtype properties which are used.

Being able to easily query devices like this makes life a heck of a lot easier. No funky code polluting my pure application. Just these simple query predicates written in Lua, and I’m all set. So, instead of relying on libusb to enumerate my usb devices, I can just enumerate them directly using udev, which is what the library does anyway. Enumeration and hotplug handling is part of the library. The other part is the actual sending and receiving of data. For that, the libusb library is still primarily important, as replacing that code will take some time.

Where else can this great query capability be applied? Well, libudev is just a nice wrapper atop sysfs, which is that virtual file system built into Linux for gaining access to device information and control of the same. There’s all sorts of stuff in there. So, let’s say you want to list all the block devices?

local function isBlockDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "block" then
		return true;
	end

	return false;
end

That will get all the devices which are in the subsystem “block”. That includes physical disks, virtual disks, partitions, and the like. If you’re after just the physical ones, then you might use something like this:

local function isPhysicalBlockDevice(dev)
	if dev.IsInitialized and dev:getProperty("subsystem") == "block" and
		dev:getProperty("devtype") == "disk" and
		dev:getProperty("ID_BUS") ~= nil then
		return true;
	end

	return false;
end

Here, a physical device is indicated by subsystem == ‘block’ and devtype == ‘disk’ and the ‘ID_BUS’ property exists, assuming any physical disk would show up on one of the system’s buses. This won’t catch an SD card though. For that, you’d use the first one, and then look for a property related to being an SD card. Same goes for ‘cd’ vs ramdisk, or whatever. You can make these queries as complex or simple as you want.

Once you have a device, you can simply open it using the “SysName” parameter, handed to an fopen() call.

I find this to be a great way to program. It makes the creation of utilities such as ‘lsblk’ relatively easy. You would just look for all the block devices and their partitions, and put them into a table. Then separately, you would have a display routine, which would consume the table and generate whatever output you want. I find this much better than the typical Linux tools which try to do advanced display using the terminal window. That’s great as far as it goes, but not so great if what you really want is a nice html page generated for some remote viewing.
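A sketch of that gather phase might look something like this, using the same each/filter functions from luafun and the isBlockDevice predicate from above. The DEVNAME property is my assumption for the device node path:

-- gather block devices into a plain table; a separate display routine
-- (console, html, json, ...) can then consume the table however it likes
local function gatherBlockDevices(ctxt)
    local devices = {}

    each(function(dev)
        table.insert(devices, {
            name    = dev:getProperty("DEVNAME"),   -- assumption: udev device node property
            devtype = dev:getProperty("devtype"),   -- 'disk' or 'partition'
        })
    end, filter(isBlockDevice, ctxt:devices()))

    return devices
end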

At any rate, this whole libudev exploration is a great thing. You can list all devices easily, getting every bit of information you care to examine. Since it’s all scriptable, it’s fairly easy to tailor your queries on the fly, looking at things, discovering, and the like. I discovered that the thumbprint reader in my old laptop was made by Broadcom, and my webcam by 3M. It’s just so much fun.

Well there you have it. The more you spelunk, the more you know, and the more you can fiddle about.