A Picture’s worth 5Mb

What’s this then?

Another screen capture. This time I just went with the full size 1920×1080. What’s happening on this screen? Well, that tiger in the upper left is well known from PostScript days, and is the gold standard for testing graphics rendering systems. In this case, I’m using OpenVG on the Pi, from a Lua driven framework. No external support libraries, other than what’s on the box. In fact, I just took the hello_tiger sample, did some munging to get it into the proper shape, and here it is.  One thing of note: this image is actually rotating.  It’s not blazingly fast, but it’s not blazingly fast on any platform.  It’s decent, though, and way faster than it would be using only the CPU on my “high powered” quad core desktop machine.  This speed comes from the fact that the GPU on the Pi is doing all the work.  You can tell because if you get a magnifying glass and examine the lower right hand corner of the image, you’ll see that the CPU meter is not pegged.  What little activity is occurring there is actually coming from other parts of the system, not the display of the tiger.  I guess that VideoCore GPU thing really does make a difference in terms of accelerating graphics.  Go figure.

In the middle of the image, you see a window “snapper.lua”. This is the code that is doing the snapshot. Basically, I run the tiger app from one terminal, the one on the lower left. Then in the lower right, I run the ‘snapper.lua’ script. As can be seen in the OnKeyUp function, every time the user presses the “SysRQ” key (also ‘Print Screen’ on many keyboards), a snapshot of the screen is taken.

Below that, there’s a little bit of code that stitches an event loop together with a keyboard object. Yes, I now have a basic event loop, and an “Application” object as well. This makes it brain dead simple to throw together apps like this without much effort.
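Purely as a sketch, the wiring looks something like this. The Application shape, the KEY_SYSRQ constant, and the SaveSnapshot helper are my shorthand for what the framework provides, not its exact code:

-- A rough sketch of the event loop wiring described above.
-- The exact names in the framework may differ.
local app = Application();

app.Keyboard.OnKeyUp = function(kbd, keycode)
    if keycode == KEY_SYSRQ then    -- 'Print Screen' on most keyboards
        SaveSnapshot();             -- hypothetical helper: capture and save
    end
end

app:Run();    -- pump keyboard and other events until quit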

[sidetrack]
One very interesting thing about being able to completely control your eventing model and messaging loops is that you can do whatever you want. Eventually, I’ll want to put together a quick and dirty “remote desktop” sort of deal, and I’ll need to be able to quickly grab the keyboard, mouse, and other interesting events, and throw them to some other process. That process will need to be able to handle them as if they happened locally. Well, when you construct your environment from scratch, you can easily bake that sort of thing in.
[sidetrack]

It’s nice to have such a system readily at hand.  I can fiddle about with lots of different things, build apps, experiment with eventing models, throw up some graphics, and never once have to hit “compile” and wait.  This makes for a very productive playground where lots of different ideas can be tried out quickly before being baked into more ‘serious’ coding environments.

 


Screencast of the Raspberry Pi

It’s one of those inevitabilities.  Start with fiddling about with low level graphics system calls, do some screen capture, then some single file saving, and soon enough you’ve got screen capture movies!  Assuming WordPress does this right.

If you’ve been following along, the relevant code looks like this:

-- Note: this snippet assumes the surrounding script has already
-- created the Display (a DMXDisplay) and a WritePPM helper.

-- Create the resource that will be used
-- to copy the screen into.  Do this so that
-- we can reuse the same chunk of memory
local resource = DMXResource(displayWidth, displayHeight, ffi.C.VC_IMAGE_RGB888);

local p_rect = VC_RECT_T(0, 0, displayWidth, displayHeight);
local pixdata = resource:CreateCompatiblePixmap(displayWidth, displayHeight);

local framecount = 120

for i = 1, framecount do
    -- Do the snapshot
    Display:Snapshot(resource);

    local pixeldata, err = resource:ReadPixelData(pixdata, p_rect);
    if pixeldata then
        -- Write the data out
        local filename = string.format("screencast/desktop_%06d.ppm", i);
        print("Writing: ", filename);

        WritePPM(filename, pixeldata);
    end
end

In this case, I’m capturing into a bitmap that is 640×320, which roughly matches the aspect ratio of my wide monitor.

This isn’t the fastest method of capturing on the planet. It actually takes a fair amount of time to save each image to the SD card in my Pi. Also, I might be able to eliminate the copy (ReadPixelData), if I can get the pointer to the memory that the resource uses.

This little routine will generate a ton of .ppm image files stored in the local ‘screencast’ directory.
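Incidentally, a PPM writer is only a few lines. Here is a minimal sketch of what a WritePPM for RGB888 data might look like, assuming the pixel buffer exposes Data, Width, Height, and Pitch fields (the project’s actual implementation may differ):

local ffi = require "ffi"

local function WritePPM(filename, pixbuff)
    local f = assert(io.open(filename, "wb"))

    -- P6 header: magic number, dimensions, max component value
    f:write(string.format("P6\n%d %d\n255\n", pixbuff.Width, pixbuff.Height))

    -- write one row at a time, skipping any padding at the end of each row
    for row = 0, pixbuff.Height - 1 do
        local rowPtr = ffi.cast("uint8_t *", pixbuff.Data) + row * pixbuff.Pitch
        f:write(ffi.string(rowPtr, pixbuff.Width * 3))  -- RGB888, 3 bytes/pixel
    end

    f:close()
end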

From there, I use ffmpeg to turn the sequence of images into a movie:

ffmpeg -i screencast/desktop_%06d.ppm  desktop.mp4

If you’re an ffmpeg guru, you can set all sorts of flags to change the framerate, encoder, and the like. I just stuck with the defaults, and the result is what you see here.
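For example, setting a frame rate and picking the H.264 encoder explicitly might look like this (just one reasonable combination, not the flags used here):

ffmpeg -framerate 15 -i screencast/desktop_%06d.ppm -c:v libx264 -pix_fmt yuv420p desktop.mp4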

So, the Pi is capable. It’s not the MOST capable, but it can get the job done. If I were trying to do this in a production environment, I’d probably attach a nice SSD drive to the USB port, and stream out to that. I might also choose a smaller image format such as YUV, which is easier to compress. As it is, the compression was getting about 9fps, which ain’t too bad for short clips like this.

One nice thing about this screen capture method is that it doesn’t matter whether you’re running X Windows or not. So, you’re not limited to things that run in X. You can capture simple terminal sessions as well.

I’m rambling…

This works, and it can only get better from here.

It is part of the LJIT2RPi project.


Taking Screen Snapshots on the Raspberry Pi

Last time around, I was doing some display wrangling, trying to put some amount of ‘framework’ goodness around this part of the Raspberry Pi operating system. With the addition of a couple more classes, I can finally do something useful.

Here is how to take a screen snapshot:

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top
local pformat = ffi.C.VC_IMAGE_RGB565;	-- pixel format for the view (assuming RGB565)

-- Create a resource to copy image into
local pixmap = DMX.DMXResource(width,height);

-- create a view with the snapshot as
-- the backing store
local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);


-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

This piece of code is so short, it’s almost self-explanatory. But, I’ll explain it anyway.

The first few lines are just setup.

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top
local pformat = ffi.C.VC_IMAGE_RGB565;	-- pixel format for the view (assuming RGB565)

The only two that are strictly needed are these:

local DMX = require "DisplayManX"
local Display = DMXDisplay();

The first line pulls in the Lua DisplayManager module that I wrote. This is the entry point into the Raspberry Pi’s low level VideoCore routines. Besides containing the simple wrappers around the core routines, it also contains some convenience classes which make doing certain tasks very simple.

Creating a DMXDisplay object is most certainly the first thing you want to do in any application involving the display. This gives you a handle on the entire display. From here, you can ask what size things are, and it’s necessary for things like creating a view on the display.
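For example, asking the display how big it is looks like this (the same GetInfo call shows up in a later example):

local Display = DMXDisplay();
local info = Display:GetInfo();
print(string.format("Display is %d x %d", info.width, info.height));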

The DMXDisplay interface has a function to take a snapshot. That function looks like this:

Snapshot = function(self, resource, transform)
  transform = transform or ffi.C.VC_IMAGE_ROT0;

  return DisplayManX.snapshot(self.Handle, resource.Handle, transform);
end,

The important part here is to note that a ‘resource’ is required to take a snapshot. This might look like 3 parameters are required, but through the magic of Lua, it actually turns into only 1. We’ll come back to this.
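To spell that out: the colon syntax supplies the ‘self’ parameter, and the transform gets defaulted inside the method, so these two calls do the same thing:

-- the usual form
Display:Snapshot(pixmap);

-- equivalent, with everything spelled out
Display.Snapshot(Display, pixmap, ffi.C.VC_IMAGE_ROT0);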

So, a resource is needed. What’s a resource? Basically a bitmap that the VideoCore system controls. You can create one easily like this:

local pixmap = DMX.DMXResource(width,height);

There are a few other parameters that you could use while creating your bitmap, but width and height are the essentials.
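For instance, a pixel format can be passed as well (the same RGB565 format appears in a later example). A resource with an explicit format might be created like this:

local pixmap = DMX.DMXResource(width, height, ffi.C.VC_IMAGE_RGB565);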

One thing of note: when you eventually call Display:Snapshot(pixmap), you cannot control which part of the screen is taken as the snapshot. It will take a snapshot of the entire screen. But your bitmap does not have to be the same size! It can be any size you like. The VideoCore library will automatically squeeze your snapshot down to the size you specified when you created your resource.
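So, as an illustration, the full screen can be squeezed into a thumbnail-sized resource:

-- the whole screen, scaled down into a small 480x270 resource
local thumb = DMX.DMXResource(480, 270);
Display:Snapshot(thumb);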

So, we have a bitmap within which our snapshot will be stored. The last thing to do is to actually take the snapshot:

Display:Snapshot(pixmap);

In this particular example, I also want to display the snapshot on the screen. So, I created a ‘view’. This view is simply a way to display something on the screen.

local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);

In this case, I do a couple of special things. I create the view to be the same size as the pixel buffer, and in fact, I use the pixel buffer as the backing store of the view. That means that whenever the pixel buffer changes, for example when a snapshot is taken, it will automatically show up in the view, because the system draws the view from the pixel buffer. I know it’s a mouthful, but that’s how the system works.

So the following sequence:

-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

…will hide the view, take a snapshot, and then show the view again.

That’s so the view itself is not a part of the snapshot. You could achieve the same by moving the view ‘offscreen’ and then back again, but I haven’t implemented that part yet.

Well, there you have it. A whole bunch of words to describe a fairly simple process. I think this is an interesting thing though. Thus far, when I’ve seen Raspberry Pi ‘demo’ videos, it’s typically someone with a camera in one hand, bad lighting, trying to type on their keyboard and use their mouse while taking video. With the ability to take screen snapshots in code, making screencasts can’t be that far off.

Now, if only I could mate this capability with that x264 video compression library, I’d be all set!


Taming the Raspberry Pi Display Manager

Last time around, I was busy slaying the bcm_host interface.  One of the delightful things that follow from slaying dragons is that you get to plunder their treasure.  In this particular case, with the bcm_host stuff in hand, you can now do fantastic things like put pixels on the screen.

Where to start?

The display system of the Raspberry Pi was at first very confusing and perplexing.  In order to understand it, I had to first just ignore the X Window system, because that’s a whole other thing.  At the same time, I had to hold back the urge to shout ‘just give me the frame buffer!’.  You can in fact get your hands on the frame buffer, but if you do it in a nicely controlled way, you’ll get the benefit of some composited ‘windows’ as well.

Some terminology:

VideoCore – The set of libraries that is at the core of the Raspberry Pi hardware for audio and video.  This is not just one library, but a set of different libraries, each of which provides some function.  The sources are not available, but various header files are (located in /opt/vc/*).  There isn’t much documentation, other than what’s in the headers, and this is what makes them difficult to use.

EGL, OpenGL ES, OpenVG, OpenMAX – These are various APIs defined by the Khronos Group.  They cover windowing, OpenGL for embedded devices, and 2D vector graphics.  These are similarly supplied as opaque libraries, with their header files made available.  Although these libraries are very useful and helpful, they are not strictly required to get something displayed on the screen.

This time around, I’m only going to focus on the parts of the VideoCore, ignoring all the Khronos specific stuff.

The first part of VideoCore is vc_dispmanx.  This is the display manager service API.  Keep in mind that word “service”.  From a programmer’s perspective, you can consider the display manager to be a ‘service’, meaning you send commands to it, and they are executed.  I have created a LuaJIT FFI binding for it, vc_dispmanx.lua, which essentially gives me access to all the functions within.  Of course, following my own simple API development guidelines, I created a ‘helper’ file as well; DisplayManX.lua.

The short story is this.  Within DisplayManX, you’ll find the implementation of 4 convenience classes:

DMXDisplay – This is what gives you a ‘handle’ on the display.  This could be an attached HDMI monitor, or a composite screen.  Either way, you can use this handle to get the size, and other characteristics.  This handle is also necessary to perform any other operations on the display, from creating a window, to displaying content.

DMXElement – A representation of a visual element on the DMXDisplay.  I’m trying to avoid the word ‘window’ because it does not have a title bar, close box, resize, etc.  Those are all visual elements to be developed by a higher level thing.  This DMXElement is what you need to carve out a piece of the screen where you intend to do some drawing.  You can give an element a “layer”, so they can be ordered.  The DMXDisplay acts as a fairly rudimentary element manager, so it does basic front to back ordering of your elements.

DMXResource – Just like I’m trying to avoid the word ‘window’ with respect to DMXElement, I’ll try to avoid the word ‘view’ in describing DMXResource.  A resource is essentially like a bitmap.  You create a resource, with a certain size, and pixel format, fill it in with stuff, and then ultimately display it on the screen by writing it into the DMXElement.  If you were creating a traditional windowing environment, this would be the backing store of a window.

DMXUpdate – This is like a transaction.  As mentioned earlier, DMX is a ‘service’, and you send commands to the service.  In order to send commands, you must bracket them in an ‘update begin’/’update end’ pairing.  You can send a ‘batch’ of commands by just placing several commands between the update begin/end.  This object represents the bracketing transaction.
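At the C level, the bracketing looks roughly like this (a sketch using the raw function names that vc_dispmanx.lua exposes; the convenience classes wrap this up for you):

-- begin a batch of commands (0 is the priority)
local update = vc_dispmanx_update_start(0);

-- ... element add/change/remove commands go here ...

-- end the batch, and wait for it to take effect
vc_dispmanx_update_submit_sync(update);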

The good news is, you don’t really need to worry about this low level of detail if you want to use these classes.

So, how about an example?

-- A simple demo using dispmanx to display an overlay

local ffi = require "ffi"
local bit = require "bit"
local bnot = bit.bnot
local band = bit.band
local bor = bit.bor
local rshift = bit.rshift
local lshift = bit.lshift

local DMX = require "DisplayManX"


ALIGN_UP = function(x,y)  
    return band((x + y-1), bnot(y-1))
end

-- This is a very simple graphics rendering routine.
-- It will fill in a rectangle, and that's it.
function FillRect( image, imgtype, pitch, aligned_height,  x,  y,  w,  h, val)
    -- pitch is in bytes; rshift(pitch,1) converts it to a count of 16-bit pixels
    local srcPtr = ffi.cast("int16_t *", image);
    local line = ffi.cast("uint16_t *", srcPtr + y * rshift(pitch,1) + x);

    local row = 0;
    while ( row < h ) do
        local col = 0;
        while ( col < w ) do
            line[col] = val;
            col = col + 1;
        end
        line = line + rshift(pitch,1);
        row = row + 1;
    end
end

-- The main function of the example
function Run(width, height)
    width = width or 200
    height = height or 200


    -- Get a connection to the display
    local Display = DMXDisplay();
    Display:SetBackground(5, 65, 65);

    local info = Display:GetInfo();
    
    print(string.format("Display is %d x %d", info.width, info.height) );

    -- Create an image to be displayed
    local imgtype = ffi.C.VC_IMAGE_RGB565;
    local pitch = ALIGN_UP(width*2, 32);
    local aligned_height = ALIGN_UP(height, 16);
    local image = ffi.C.calloc( 1, pitch * height );

    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xFFFF );
    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xF800 );
    FillRect( image, imgtype,  pitch, aligned_height, 20, 20, width - 40, height - 40, 0x07E0 );
    FillRect( image, imgtype,  pitch, aligned_height, 40, 40, width - 80, height - 80, 0x001F );

    local BackingStore = DMXResource(width, height, imgtype);

    local dst_rect = VC_RECT_T(0, 0, width, height);

    -- Copy the image that was created into 
    -- the backing store
    BackingStore:CopyImage(imgtype, pitch, image, dst_rect);

 
    -- Create the view that will actually 
    -- display the resource
    local src_rect = VC_RECT_T( 0, 0, lshift(width, 16), lshift(height, 16) );
    dst_rect = VC_RECT_T( (info.width - width ) / 2, ( info.height - height ) / 2, width, height );
    local alpha = VC_DISPMANX_ALPHA_T( bor(ffi.C.DISPMANX_FLAGS_ALPHA_FROM_SOURCE, ffi.C.DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS), 120, 0 );
    local View = Display:CreateElement(dst_rect, BackingStore, src_rect, 2000, DISPMANX_PROTECTION_NONE, alpha);
 

    -- Sleep for a few seconds so we can see the results
    local seconds = 5
    print( string.format("Sleeping for %d seconds...", seconds ));
    ffi.C.sleep( seconds )

    -- Free the image memory that calloc gave us
    ffi.C.free(image);
end


Run(400, 200);

This is a sample taken from one of the original hello_pi examples, but made to work with this simplified world that I’ve created. To get started:

local DMX = require "DisplayManX"

This will simply pull in the display management service so we can start using it.

Next up, we see a simple rectangle filling routine:

-- This is a very simple graphics rendering routine.
-- It will fill in a rectangle, and that's it.
function FillRect( image, imgtype, pitch, aligned_height,  x,  y,  w,  h, val)
    -- pitch is in bytes; rshift(pitch,1) converts it to a count of 16-bit pixels
    local srcPtr = ffi.cast("int16_t *", image);
    local line = ffi.cast("uint16_t *", srcPtr + y * rshift(pitch,1) + x);

    local row = 0;
    while ( row < h ) do
        local col = 0;
        while ( col < w ) do
            line[col] = val;
            col = col + 1;
        end
        line = line + rshift(pitch,1);
        row = row + 1;
    end
end

We don’t actually use the imgtype, nor the aligned_height. Basically, we’re assuming an image that has 16-bit pixels, and the ‘pitch’ tells us how many bytes there are per row (so rshift(pitch,1) is the row width in 16-bit pixels). So, go through the image one 16-bit value at a time, and set each one to the color value specified.

Next, we come to the main event. We want to create a few semi-transparent rectangles, and display them on the screen. Then wait a few seconds for you to view the results before cleaning the whole thing up.

    local Display = DMXDisplay();
    Display:SetBackground(5, 65, 65);

One of the first actions is to create the display object, and set the background color. The funny thing you’ll notice, if you run this code, is suddenly your monitor seems to have a lot more screen real estate than you thought. Yep, X is taking up a smaller portion of the screen (if you’re running X). Same with the regular terminal. If you were running something like XBMC, you’d be seeing your full display being utilized. This is how they do it. At any rate, there’s an application right there. If you want to set the border color of your screen, just do those two lines of code, and you’re done…

Moving right along. We need a chunk of memory allocated, which will be what actually gets displayed in the window.

    -- Create an image to be displayed
    local imgtype = ffi.C.VC_IMAGE_RGB565;
    local pitch = ALIGN_UP(width*2, 32);
    local aligned_height = ALIGN_UP(height, 16);
    local image = ffi.C.calloc( 1, pitch * height );

For those who are framebuffer obsessed, there’s your window’s frame buffer right there. It’s just a chunk of memory of the appropriate size to match the pitch and alignment requirements of the pixel format you’ve selected. There are a fair number of formats to choose from, including RGBA32 if you want to burn up a lot of memory.
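To make the alignment arithmetic concrete, here is what the Run(400, 200) call works out to:

local pitch = ALIGN_UP(400 * 2, 32)        -- RGB565 is 2 bytes/pixel: 800 bytes/row
local aligned_height = ALIGN_UP(200, 16)   -- 200 rounds up to 208
print(pitch * 200)                         -- 160000 bytes allocated by calloc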

This would typically be represented as a “Bitmap” or “PixelMap”, or “PixelBuffer” object in most environments. Next time around, I’ll encapsulate it in one such object, but for now, it’s just a pointer to a chunk of memory ‘image’.

Now that we’ve got our chunk of memory, we fill it with color:

    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xFFFF );
    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xF800 );
    FillRect( image, imgtype,  pitch, aligned_height, 20, 20, width - 40, height - 40, 0x07E0 );
    FillRect( image, imgtype,  pitch, aligned_height, 40, 40, width - 80, height - 80, 0x001F );

As described earlier, the DMXResource is required to actually display stuff in a display element, so we need to create that:

    local BackingStore = DMXResource(width, height, imgtype);

Don’t get tripped up by the name of the variable. It could be anything, I just used “BackingStore” to emphasize the fact that it’s the backing store of our display element.

Now to copy the image into the backing store:

    local dst_rect = VC_RECT_T(0, 0, width, height);

    -- Copy the image that was created into 
    -- the backing store
    BackingStore:CopyImage(imgtype, pitch, image, dst_rect);

Here, I do call it ‘CopyImage’, because that is in fact what you’re doing. In all generality, the lower level API is simply ‘write_data’, but why be so obtuse when we can be more precise. At this point, we have our ‘bitmap’ written into our backing store, but it’s not displayed on the screen yet!

This last part is the only ‘black magic’ part of the whole thing, but you’ll soon see it’s nothing too exciting. We need to create an element on the display, and that element needs to have our BackingStore as its backing.

    -- Create the view that will actually 
    -- display the resource
    local src_rect = VC_RECT_T( 0, 0, lshift(width, 16), lshift(height, 16) );
    dst_rect = VC_RECT_T( (info.width - width ) / 2, ( info.height - height ) / 2, width, height );
    local alpha = VC_DISPMANX_ALPHA_T( bor(ffi.C.DISPMANX_FLAGS_ALPHA_FROM_SOURCE, ffi.C.DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS), 120, 0 );

First we establish the source and destination rectangles. The src_rect indicates what part of our ‘BackingStore’ we want to display; note that source rectangles are specified in 16.16 fixed-point coordinates, which is why the width and height get shifted left by 16 bits. The ‘dst_rect’ says where we want to locate the window on the display, in plain pixels. That’s the only challenging part to wrap your head around: src_rect relates to the bitmap image, dst_rect relates to the position of the window. In this case, we want to take the whole bitmap, and locate the window such that it is centered on the display.
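As an illustration of the fixed-point source rectangle, showing only the top-left quarter of the backing store would look something like this:

-- crop to the top-left quarter (coordinates still in 16.16 fixed point)
local src_rect = VC_RECT_T(0, 0, lshift(width / 2, 16), lshift(height / 2, 16));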

And finally, we create the element (window) on the screen:

    local View = Display:CreateElement(dst_rect, BackingStore, src_rect, 2000, DISPMANX_PROTECTION_NONE, alpha);

Since the window already has a backing store, it will immediately display the contents, our nicely crafted rectangles.

And that about does it.

That’s about as hard as it gets, and it gets easier from here once you start integrating with EGL and the other libraries. These are the bare minimums to get something up on the screen. If you were doing something like a game, you’d probably create a couple of resources, with their attendant pixel buffers, and variously swap them into the View. This is essentially what EGL does; it manages these low level details for you.
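Here’s a sketch of that swapping idea. ChangeSource and RenderFrame are hypothetical names (the underlying C call is vc_dispmanx_element_change_source); the point is the shape of the loop, not the exact API:

-- Hypothetical double-buffering sketch
local front = DMXResource(width, height, imgtype);
local back  = DMXResource(width, height, imgtype);

while running do
    RenderFrame(back);           -- draw the next frame off-screen
    front, back = back, front;   -- swap roles
    View:ChangeSource(front);    -- point the element at the fresh frame
end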

What you’ll notice is that I don’t deal with any mouse or keyboard in this particular example. The input subsystem is a whole other ballgame, and is very Linux specific, not VideoCore specific. I will incorporate that later.

To finish this installment, I’ll leave this teaser…

local display = DMXDisplay()
display:Snapshot()

What’s that about?


Taming Sprawling Interfaces – Raspberry Pi BCM Host

If you’ve done any amount of programming, you know that one of the hardest challenges is just getting your head around the task.  There’s the environment, the language, the core libraries, build system, etc.

I was born and raised a UNIX head, and I still don’t know how the Makefile/autoconf/configure thing works.  It’s always been a black art, and I’ve never been able to go beyond the very basics from scratch.  Every time I read one of those makefiles, blood starts dripping from my pores, like Zorg in The Fifth Element.  Pure evil!

So, I wanted to tackle the task of doing some UI programming on the Raspberry Pi.  If you must know, my ultimate aim is to be able to easily morph my programming environment to provide me with whatever language environment I could want.  For example, maybe I want to program using a GDI/User32 interface.  Or maybe X, or maybe HTML5 Canvas, or WebGL, or whatever.  The Raspberry Pi is perfect for this task because it’s some hardware, but it’s so cheap, I just view it as a runtime environment, just like any other.  The trick of course is to get all the appropriate libraries and frameworks in place to do the deed.

Well, you’re not likely to see GDI anywhere but Windows (perhaps Wine), so if you want it here, you’ll have to recreate it.

But, let’s begin at the beginning.  The Raspberry Pi is little more than a cell phone without a case or radio.  Of course it includes a USB port and HDMI, so it makes for a nice, quickly attachable/programmable little kit.  One of the key features of the Pi is that it can do hardware acceleration for graphics operations.  The most clearly evident demonstration of this is the running of XBMC on the Pi.  You can decode/display 1080p video on the device, without it breaking a sweat.  Same goes for most simple 3D and 2D graphics.  How do you unlock it, though?  From the terminal, it’s not totally evident, and from X Windows (which is not currently hardware accelerated), it’s even less evident.  What to do?

The Pi ships with some libraries, in particular libbcm_host.so.  This library is supplied by Broadcom, and it’s on every device.  It provides what’s known as the VideoCore APIs.  This library contains, amongst other things, the raw display access that any application will need.  But, like many APIs, it is sprawling…

I really want to use this though, because I want to go all the way down to the framebuffer, without anything getting in my way.  I want to do everything from windows and views to mouse and keyboard, because I want to be able to emulate many different types of systems.  Obviously, this is where I’ll have to start if I am to pursue such an ambition.

First task, apply what I learned from my Simplified API explorations.  On the Pi, in the directory ‘/opt/vc/include’, you’ll find the root of the bcm_host world.  The most innocent header file, ‘bcm_host.h’, lives here.  Within this header file, there are only three functions:

void bcm_host_init(void);
void bcm_host_deinit(void);

int32_t graphics_get_display_size( const uint16_t display_number,
                                   uint32_t *width,
                                   uint32_t *height);

That’s it, nothing to see here, move right along. Actually, the very first one is most important. You must call bcm_host_init() before anything else is called. This is similar to libraries such as WinSock, where you have to call a startup function before anything else in the library.

The third function there, graphics_get_display_size(), will tell you the size (in pixels) of the display you specify (0 – default LCD). That’s a handy thing to have; as you start to contemplate building up your graphics system, you need to know how large the screen is.

Alrighty then, first task is to write the zero-th level of hell for this thing, but wait… The rest of this header looks like this:

#include "interface/vmcs_host/vc_dispmanx.h"
#include "interface/vmcs_host/vc_tvservice.h"
#include "interface/vmcs_host/vc_cec.h"
#include "interface/vmcs_host/vc_cecservice.h"
#include "interface/vmcs_host/vcgencmd.h"

Turns out those first three functions were deceptively the tip of the iceberg. The rest of the system is encapsulated in this large tree of interdependent header files, each one containing constants, structures, functions, and links to more header files. Such a deep dive requires a deep breath, and patience. It can be conquered, slowly but surely.

First file to create: bcm_host.lua

local ffi = require "ffi"

ffi.cdef [[
void bcm_host_init(void);
void bcm_host_deinit(void);

int32_t graphics_get_display_size( const uint16_t display_number, uint32_t *width, uint32_t *height);
]]

require "libc"
require "vc_dispmanx"

require "interface/vmcs_host/vc_tvservice"
require "interface/vmcs_host/vc_cec"
require "interface/vmcs_host/vc_cecservice"
require "interface/vmcs_host/vcgencmd"

And that’s it. According to my API wrapping guidelines, the first level is meant for the programmer who wants to act like they are a C programmer. No hand holding, no little helpers. I cheat a little bit and provide some simple typedefs here and there, but that’s about it.

Next up is the BCMHost.lua file:

local ffi = require "ffi"

require "bcm_host"

local lib = ffi.load("bcm_host");

--[[
	The bcm_host_init() function must be called
	before any other functions in the library can be
	utilized.  This will be done automatically
	if the developer does:
		require "BCMHost"
--]]

lib.bcm_host_init();

local GetDisplaySize = function(display_number)
	display_number = display_number or 0
	local pWidth = ffi.new("uint32_t[1]");
	local pHeight = ffi.new("uint32_t[1]");

	local err = lib.graphics_get_display_size(display_number, pWidth, pHeight);

	-- Return immediately if there was an error
	if err ~= 0 then
		return false, err
	end

	return pWidth[0], pHeight[0];
end

return {
	Lib = lib,

	GetDisplaySize = GetDisplaySize,
}

For the less he-man programmer, this is a bit more gentle, and a lot more Lua-like. First of all, it includes the previous raw interface file (require “bcm_host”). This gives us our basic definitions.

Then it loads the library, and initializes it:

local lib = ffi.load("bcm_host")

lib.bcm_host_init();

At this point, we can confidently call functions in the library. The last bit of code in this file provides a convenience wrapper around getting the display size. Of course any programmer could just call the ‘graphics_get_display_size’ function directly, but they’d have to remember to allocate a couple of int arrays to receive the values back, check the return value against 0 for success, unpack the values from the arrays, and decide whether they want to deal with a particular screen or just the default one… Or you could just do this:

local BCH = require "BcmHost"
local width, height = BCH.GetDisplaySize();
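For contrast, here is the same query done the raw way, which is exactly what GetDisplaySize is wrapping:

local ffi = require "ffi"
local BCH = require "BcmHost"

local pWidth = ffi.new("uint32_t[1]");
local pHeight = ffi.new("uint32_t[1]");

if BCH.Lib.graphics_get_display_size(0, pWidth, pHeight) == 0 then
	print(pWidth[0], pHeight[0]);
end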

And finally, just to make this a ‘module’ so that things are nicely encapsulated:

return {
	Lib = lib,

	GetDisplaySize = GetDisplaySize,
}

There, now that wasn’t too bad. Of course, like I said, this is just the tip of the iceberg. We can now load the bcm_host library, and get the size of the display. There’s a heck of a lot more work to be done in order to get a ‘window’ displayed on the screen with some accelerated 3D Graphics, but this is a good start.

Next time around, I’ll tackle the Display Manager, which is where the real fun begins.

The entirety of my efforts so far can be found in the LJIT2RPi project on GitHub.