Windows and Raspberry Pi, Oh my!

I woke this morning to two strange realities.  My sometimes beloved Seahawks did not win the Super Bowl, and the Raspberry Pi Foundation announced the Raspberry Pi 2, which will run Windows 10!

I’ll conveniently forget the first reality for now, as there’s always next season.  But that second reality?  I’ve long been a fan of the Raspberry Pi.  Not because of the specific piece of hardware, but because when it was first announced, it was the first of the reasonably capable $35 computers.  The hardware itself has long since been eclipsed by other notables, but none of them have quite matched the Raspberry Pi’s community, nor its volumes.  Now the Pi is moving into “we use them for embedded” territory, not just “for the kids to learn programming.”

And now along comes Windows!  This is interesting in two respects.  First, I did quite a bit of work putting a LuaJIT skin on the Raspberry Pi some time back.  I did it because I wanted to learn all about the deep-down internals of the Raspberry Pi, but from the comforts of Lua.  Back then, I leveraged an early form of the ljsyscall library to take care of the bulk of the *NIX-specific system calls. I was going to go one step further and implement the very lowest interface to the video chip, but that didn’t seem like a very worthwhile effort, so I left it at the Khronos OpenGL ES level.

At roughly the same time, I started implementing LuaJIT Win32 APIs, starting with LJIT2Win32.  Then I went hog wild and implemented TINN, which for me is the ultimate in LuaJIT APIs for Win32 systems.  Both ljsyscall and TINN exist because programming at the OS level is a very tedious, esoteric process.  Most of the time the low-level OS specifics are paved over with one higher-level API/framework or another.  Well, these are in fact such frameworks, giving access to the OS at a very high level from the LuaJIT programming language.
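To give a flavor of what these frameworks pave over, here is roughly what a bare LuaJIT FFI call into Win32 looks like, with no framework at all (MessageBoxA is picked purely as an illustration; this is not TINN’s API, just the raw ffi):

local ffi = require "ffi"

ffi.cdef[[
int MessageBoxA(void *hWnd, const char *lpText, const char *lpCaption, unsigned int uType);
]]

local user32 = ffi.load("user32")

-- pop up a message box; the final 0 is MB_OK
user32.MessageBoxA(nil, "Hello from LuaJIT", "Raw Win32 via FFI", 0)

Multiply that declare-load-call dance by a few thousand functions and structures, and the appeal of something like TINN or ljsyscall becomes obvious.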

So, this new Windows on Pi, what of it?  Well, finally I can program the Raspberry Pi using the TINN tool.  This is kind of cool for me.  I’m not forced into using Linux on this tiny platform when I’m more familiar with the Windows API and how things work.  Even better, as TINN is tuned to running things like coroutines and IO completion ports, I should be able to push the tiny device to its limits, with respect to IO at least.  The same goes for multi-threaded programming.  All the goodness I’ve enjoyed on my Windows desktop will now be readily available to me on the tiny Pi.

The new Pi is a quad-core affair, which means the kids will learn about mutexes, semaphores and the like…  Well, actually, I’d expect the likes of the Go language, TINN, and other tools to come to the rescue.  The beauty of Windows on Pi is likely going to be the ease of programming.  When I last programmed on the Pi directly, I used the nano editor, and print() for debugging.  I couldn’t really use Eclipse, as it was too slow back then.  Now the Pi will likely just be a Visual Studio target, maybe even complete with a simulator.  That would be a great way to program.  All the VS goodness that plenty of people have learned to love.  Or maybe a slimmed-down version that’s not quite so enterprise industrial.

But, what are these Pis used for anyway?  Are they truly replacement PCs?  Are they media servers, NAS boxes, media players?  The answer is YES to all, to varying degrees.  Following along the ‘teach the kids to program’ theme, having a relatively inexpensive box that allows you to program cannot be a bad thing.  Making Windows and Linux available cannot be a bad thing.  Having a multi-billion dollar software company supporting your wares MUST be a good thing.  Love to hate Microsoft?  Meh, lots of Windows-based resources are available in the world, so I don’t see how it does any harm.

On the very plus side, as this is a play towards makers, it will force Microsoft to consider the various and sundry application varieties that are currently being pursued by those outside the corporate enterprise space.  Robotics will force a reconsideration of realtime constraints.  As well, vision might become a thing.  Creating an even more coherent story around media would be a great thing.  And maybe bringing the likes of the Kinect to this class of machine?  Well, not in this current generation.

The news this Monday is both melancholy and eyebrow-raising.  I for one will be happy to program the latest Raspberry Pi using TINN.


The Insanity of Hardware Chasing

Just a couple months back, I purchased a few Android/Linux dev boards to play with.  This includes the Raspberry Pi, the Odroid-X, and a couple of fairly capable routers.

Since I purchased the Pis, they went from 256MB of RAM to 512MB for the same $35 price.  Recently Hardkernel, the makers of the Odroid-X, released new versions of their kit.  First, an upgraded Odroid-X2, which has a faster clock speed and double the RAM of the previous version.  They went an extra step though.  They now have a new model, the Odroid-U2.  This is an ultra-compact quad-core computer, smaller than a credit card.

This newest Odroid-U2 is about the same size as the nano router by TP-Link.  Fit a couple of these boards together with that wireless router, and I think you have the makings of a nice little compact portable, low powered compute/networking rig.

But, hardware without advances in software isn’t that dramatically important to me.  In the case of Hardkernel, you can now get Ubuntu on an SD card to run with your new Odroid-XXX board.  That’s nice because if you don’t find Android to be that compelling for your particular application, there is a pretty darned good alternative.  Of course, there are other distros available as well, but having Ubuntu is, I think, a slam dunk in terms of getting something that’s well supported and fun to play with.

Not to forget the Raspberry Pi, they are making progress on releasing their “Model A” Raspberry Pi board.  This board has slightly less hardware than the model B.  The price point of $25 is the killer feature.

Along with the Pi, there is a new OS release: Plan 9, of Bell Labs origin, is now available for the Pi.  I find this last bit to be particularly interesting since the mission of the Raspberry Pi is education.  I think Plan 9 provides a platform rife with learning opportunities.

In addition to “doing UNIX better than UNIX”, Plan 9 presents some interesting abstraction and separation ideas which might find new life in the emergent “internet of things” environment.  Plan 9 makes it relatively easy to separate things, including ‘memory’ and ‘processing’.  It has a fairly minimal “C” interface, as most operations are carried out by sending messages around rather than calling C functions.

Hardware is moving fast, and I can hardly keep up.  I think there will need to be changes in the software landscape to truly keep pace.  It probably starts by getting message passing established as the primary mode of communication between devices.  HTTP/REST helps along these lines.  We probably need to go much further, but there you go.

The hardware changes quickly.  Software skills, not so much.  We live in great times.

 


A Picture’s worth 5Mb

What’s this then?

Another screen capture. This time I just went with the full size 1920×1080. What’s happening on this screen? Well, that tiger in the upper left is well known from PostScript days, and is the gold standard for testing graphics rendering systems. In this case, I’m using OpenVG on the Pi, from a Lua driven framework. No external support libraries, other than what’s on the box. In fact, I just took the hello_tiger sample, and did some munging to get it into the proper shape, and here it is.

One thing of note: this image is actually rotating.  It’s not blazingly fast, but it’s not blazingly fast on any platform.  But, it’s decent.  It’s way faster than what it would be using the CPU only on my “high powered” quad core desktop machine.  This speed comes from the fact that the GPU on the Pi is doing all the work.  You can tell because if you get a magnifying glass and examine the lower right-hand corner of the image, you’ll see that the CPU meter is not pegged.  What little action is occurring there is actually coming from other parts of the system, not the display of the tiger.  I guess that VideoCore GPU thing really does make a difference in terms of accelerating graphics.  Go figure.

In the middle of the image, you see a window “snapper.lua”. This is the code that is doing the snapshot. Basically, I run the tiger app from one terminal, the one on the lower left. Then in the lower right, I run the ‘snapper.lua’ script. As can be seen in the OkKeyUp function, every time the user presses the “SysRQ” key (also ‘Print Screen’ on many keyboards), a snapshot is taken of the screen.

Below that, there’s a little bit of code that stitches an event loop together with a keyboard object. Yes, I now have a basic event loop, and a “Application” object as well. This makes it really brain dead simple to throw together apps like this without much effort.
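Roughly, the shape of it is something like this sketch (the names OnKeyUp and KEY_SYSRQ are illustrative stand-ins here, not necessarily what snapper.lua actually uses, and WritePPM is the little PPM writer described further down the page):

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();
local width, height = Display:GetSize();

-- one reusable resource to snapshot into
local resource = DMXResource(width, height, ffi.C.VC_IMAGE_RGB888);

local count = 0;
local function OnKeyUp(kbd, keycode)
	if keycode == KEY_SYSRQ then	-- 'Print Screen' on most keyboards
		count = count + 1;
		Display:Snapshot(resource);
		local pixeldata = resource:ReadPixelData();
		if pixeldata then
			WritePPM(string.format("desktop_%06d.ppm", count), pixeldata);
		end
	end
end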

[sidetrack]
One very interesting thing about being able to completely control your eventing model and messaging loops is that you can do whatever you want. Eventually, I’ll want to put together a quick and dirty “remote desktop” sort of deal, and I’ll need to be able to quickly grab the keyboard, mouse, and other interesting events, and throw them to some other process. That process will need to be able to handle them as if they happened locally. Well, when you construct your environment from scratch, you can easily bake that sort of thing in.
[sidetrack]

It’s nice to have such a system readily at hand.  I can fiddle about with lots of different things, build apps, experiment with eventing models, throw up some graphics, and never once have to hit “compile” and wait.  This makes for a very productive playground where lots of different ideas can be tried out quickly before being baked into more ‘serious’ coding environments.

 


Screencast of the Raspberry Pi

It’s one of those inevitabilities.  Start with fiddling about with low-level graphics system calls, do some screen capture, then some single file saving, and soon enough you’ve got screen capture movies!  Assuming WordPress does this right.

If you’ve been following along, the relevant code looks like this:

-- Create the resource that will be used
-- to copy the screen into.  Do this so that
-- we can reuse the same chunk of memory
local resource = DMXResource(displayWidth, displayHeight, ffi.C.VC_IMAGE_RGB888);

local p_rect = VC_RECT_T(0,0,displayWidth, displayHeight);
local pixdata = resource:CreateCompatiblePixmap(displayWidth, displayHeight);

local framecount = 120

for i=1,framecount do
	-- Do the snapshot
	Display:Snapshot(resource);

	local pixeldata, err = resource:ReadPixelData(pixdata, p_rect);
	if pixeldata then
		-- Write the data out
		local filename = string.format("screencast/desktop_%06d.ppm", i);
		print("Writing: ", filename);

		WritePPM(filename, pixeldata);
	end
end

In this case, I’m capturing into a bitmap that is 640×320, which roughly matches the aspect ratio of my wide monitor.

This isn’t the fastest method of capturing on the planet. It actually takes a fair amount of time to save each image to the SD card in my Pi. Also, I might be able to eliminate the copy (ReadPixelData), if I can get the pointer to the memory that the resource uses.

This little routine will generate a ton of .ppm image files stored in the local ‘screencast’ directory.

From there, I use ffmpeg to turn the sequence of images into a movie:

ffmpeg -i screencast/desktop_%06d.ppm  desktop.mp4

If you’re a ffmpeg guru, you can set all sorts of flags to change the framerate, encoder, and the like. I just stuck with defaults, and the result is what you see here.
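For example, pinning the input frame rate and asking for H.264 explicitly looks something like the following (a sketch only; exact flag names can vary between ffmpeg versions, so check yours):

ffmpeg -framerate 10 -i screencast/desktop_%06d.ppm -c:v libx264 -pix_fmt yuv420p desktop.mp4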

So, the Pi is capable. It’s not the MOST capable, but it can get the job done. If I were trying to do this in a production environment, I’d probably attach a nice SSD drive to the USB port, and stream out to that. I might also choose a smaller image format such as YUV, which is easier to compress. As it is, the compression was getting about 9fps, which ain’t too bad for short clips like this.

One nice thing about this screen capture method is that it doesn’t matter whether you’re running X Windows, or not. So, you’re not limited to things that run in X. You can capture simple terminal sessions as well.

I’m rambling…

This works, and it can only get better from here.

It is part of the LJIT2RPi project.


Raspberry Pi OpenSource VideoCore Access

The Pi Foundation today announced the availability of the VideoCore client-side libraries as open source!

And the crowd just keeps moving along…

What’s the big deal?  Well, the way the Raspberry Pi is arranged, there are essentially two ‘cores’ cooperating on a chip to form the hardware of the Raspberry Pi.  One of those cores, “VideoCore”, is highly proprietary, and handles all the lowest level video and audio for the Raspberry Pi.  If you were running a typical PC, this would be similar to the arrangement of a CPU (intel) and a GPU (nVidia) running in the same machine.  The CPU generally takes care of the “operating system”, and anything having to do with video gets communicated to the GPU, and magic happens.

Most users don’t care about this level of detail.  Typically, the libraries that communicate with the GPU are highly proprietary to the vendor who produced them.  nVidia does not open source the drivers for their chips.  They just provide a binary blob to the OS, and leave it at that.

This same situation was occurring here with Broadcom.  It’s not a big deal to most users, but when you’re trying to sell the Raspberry Pi as an “educational tool”, and half the system is basically off limits, it kind of hurts the credibility of the whole mission.  So, now they’ve gone and made that code available to the masses.

Now, of course if you just want to put something together in Pygame, then your world has not shifted one bit.  You’ll never program down at this level.  But, let’s say you’re trying to put together an OS distribution.  Well then things start to get interesting.  You can better integrate X, or XBMC with the lowest level high performance part of the hardware, without having to poke your way along in the dark with an undocumented API.  You can get right down there in the chips and create the bestest interop between the ‘CPU’ and the ‘GPU’.

What will it mean?  Probably nothing in the very short term.  In the long run, it will mean that people who really care about such things will have the tools they need to eke the most performance possible out of this System on Chip.

I think the whole Raspberry Pi thing is a master stroke for Broadcom.  They are typically a behind the scenes provider of low level chips.  The Pi is giving them a bully pulpit upon which they can advertise their wares.  Making this stuff open source doesn’t diminish their intellectual property in the least, and doesn’t give anything to their competitors.  It does get a bunch of maniacal programmers focused on programming their VideoCore, which without such a move would remain an obscure opaque part of their offerings.

It’s a fun time to be a programmer!

 


Capturing Screenshots of the Raspberry Pi


Last Time Around, I went through how to capture the screen on the Raspberry Pi. Well, capturing, and displaying on the same screen at the same time really isn’t that interesting. It becomes more fun to capture, and then share with your friends, or make movies or what have you.

This time around, I’ve actually captured, and saved to a file.

In order to do this, I had to introduce a couple new concepts.

First is the idea of PixelData. This is simply a data structure to hold onto some specified pixel data.

ffi.cdef[[
struct DMXPixelData {
	void *		Data;
	VC_IMAGE_TYPE_T	PixelFormat;
	int32_t		Width;
	int32_t		Height;
	int32_t		Pitch;
};
]]

local pixelSizes = {
	[tonumber(ffi.C.VC_IMAGE_RGB565)] = 2,
	[tonumber(ffi.C.VC_IMAGE_RGB888)] = 3,
}

local DMXPixelData = ffi.typeof("struct DMXPixelData");
local DMXPixelData_mt = {

	__gc = function(self)
		print("GC: DMXPixelMatrix");
		if self.Data ~= nil then
			ffi.C.free(self.Data);
		end
	end,

	__new = function(ct, width, height, pformat)
		pformat = pformat or ffi.C.VC_IMAGE_RGB565

		local sizeofpixel = pixelSizes[tonumber(pformat)];

		local pitch = ALIGN_UP(width*sizeofpixel, 32);
		local aligned_height = ALIGN_UP(height, 16);
		local dataPtr = ffi.C.calloc(pitch * height, 1);
		return ffi.new(ct, dataPtr, pformat, width, height, pitch);
	end,
}
ffi.metatype(DMXPixelData, DMXPixelData_mt);

The ‘__new’ metamethod is where all the action is at. You can do the following:

pixmap = DMXPixelData(640, 480)

And you’ll get a nice chunk of memory allocated of the appropriate size. You can go further and specify the pixel format (RGB565 or RGB888), but if you don’t, it will default to a conservative RGB565.

This is great. Now we have a place to store the pixels. But what pixels? When we did a screen capture, we captured into a DMXResource object. Well, that object doesn’t have ready made access to the pixel pointer, so what to do? Well, just like DMXResource has a CopyPixelData() function, it can have a ReadPixelData() function as well. In that way, we can read the pixel data out of a resource, and go ahead and do other things with it.

ReadPixelData = function(self, pixdata, p_rect)
  local p_rect = p_rect or VC_RECT_T(0,0,self.Width, self.Height);
  local pixdata = pixdata or self:CreateCompatiblePixmap(p_rect.width, p_rect.height);

  local success, err = DisplayManX.resource_read_data (self.Handle, p_rect, pixdata.Data, pixdata.Pitch)
  if success then
    return pixdata;
  end

  return false, err;
end

Alrighty, now we’re talking. With this routine, I can now read the pixel data out of any resource. There are two ways to use it. If you pass in your own DMXPixelData object (pixdata), then it will be filled in. If you don’t pass in anything, then a new PixelData object will be created by the ‘CreateCompatiblePixmap()’ function.
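In other words, both of these work (a quick sketch; ‘resource’ here stands for any DMXResource that has already had Display:Snapshot() called on it):

-- let ReadPixelData allocate a compatible pixel buffer for you
local pixeldata, err = resource:ReadPixelData();

-- or hand it a buffer you manage yourself (handy when capturing in a loop)
local pixbuff = resource:CreateCompatiblePixmap(resource.Width, resource.Height);
resource:ReadPixelData(pixbuff);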

OK. So, we know how to capture, and now we know how to get our hands on the actual pixel data. Last we need a way to write this data out to a file. There are tons of graphics file formats, but I’ll stick to the most basic for this task:

local function WritePPM(filename, pixbuff)
    local r, c, val;

    local fp = io.open(filename, "wb")
    if not fp then
        return false
    end

    local header = string.format("P6\n%d %d\n255\n", pixbuff.Width, pixbuff.Height)
    fp:write(header);

    for row=0,pixbuff.Height-1 do
	local dataPtr = ffi.cast("char *",pixbuff.Data) + pixbuff.Pitch*row
    	local data = ffi.string(dataPtr, pixbuff.Width*3);
    	fp:write(data);
    end

    fp:close();
end

This is one of the oldest and most basic image file formats. The header is in plain text, giving the width and height of the image, and the maximum value to be found (255). This is followed by the actual pixel data, in R,G,B format, one byte per value. And that’s it. Of course, I converted this basic image into a .png file for display in this blog, but you can see how easy it is to accomplish.
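For the 640×320 capture used here, the start of the file is just these three lines of text, with the raw R,G,B bytes following immediately after:

P6
640 320
255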

So, altogether:

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();
local screenWidth, screenHeight = Display:GetSize();
local ratio = screenWidth / screenHeight;
local displayHeight = 320;
local displayWidth = 640;

-- Create the view that will display the snapshot
local displayView = Display:CreateView(
	displayWidth, displayHeight, 
	0, screenHeight-displayHeight-1,
	0, ffi.C.VC_IMAGE_RGB888)

-- Do the snapshot
displayView:Hide();	
Display:Snapshot(displayView.Resource);
displayView:Show();

local pixeldata, err = displayView.Resource:ReadPixelData();
if pixeldata then
	-- Write the data out
	WritePPM("desktop.ppm", pixeldata);
end

And that’s all there is to it really. If you can take one snapshot of the screen, you can take multiples. You could take hundreds, and dump them into a directory, and use some tool that converts a series of images into an h.264 file if you like, and show some movies of your work.

This stuff is getting easier all the time.  After taming the basics of the bcm_host, screen captures are now possible, displaying simple windows is possible.  I’ve been looking into mouse and keyboard support.  I’ll tackle that next, as once you have this support, you can actually write some interesting interactive applications.

 


Taking Screen Snapshots on the Raspberry Pi

Last time around, I was doing some display wrangling, trying to put some amount of ‘framework’ goodness around this part of the Raspberry Pi operating system. With the addition of a couple more classes, I can finally do something useful.

Here is how to take a screen snapshot:

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top

-- Create a resource to copy image into
local pixmap = DMX.DMXResource(width,height);

-- create a view with the snapshot as
-- the backing store
local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);


-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

This piece of code is so short, it’s almost self explanatory. But, I’ll explain it anyway.

The first few lines are just setup.

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top

The only two that are strictly needed are these:

local DMX = require "DisplayManX"
local Display = DMXDisplay();

The first line pulls in the Lua DisplayManager module that I wrote. This is the entry point into the Raspberry Pi’s low level VideoCore routines. Besides containing the simple wrappers around the core routines, it also contains some convenience classes which make doing certain tasks very simple.

Creating a DMXDisplay object is most certainly the first thing you want to do in any application involving the display. This gives you a handle on the entire display. From here, you can ask what size things are, and it’s necessary for things like creating a view on the display.
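For instance, asking the display how big it is takes just a couple of lines (this is the same GetSize call used in the capture example earlier):

local DMX = require "DisplayManX"

local Display = DMXDisplay();
local screenWidth, screenHeight = Display:GetSize();
print(string.format("Screen is %d x %d", screenWidth, screenHeight));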

The DMXDisplay interface has a function to take a snapshot. That function looks like this:

Snapshot = function(self, resource, transform)
  transform = transform or ffi.C.VC_IMAGE_ROT0;

  return DisplayManX.snapshot(self.Handle, resource.Handle, transform);
end,

The important part here is to note that a ‘resource’ is required to take a snapshot. This might look like 3 parameters are required, but through the magic of Lua, it actually turns into only 1. We’ll come back to this.

So, a resource is needed. What’s a resource? Basically a bitmap that the VideoCore system controls. You can create one easily like this:

local pixmap = DMX.DMXResource(width,height);

There are a few other parameters that you could use while creating your bitmap, but width and height are the essentials.

One thing of note: when you eventually call Display:Snapshot(pixmap), you cannot control which part of the screen is taken as the snapshot. It will take a snapshot of the entire screen. But, your bitmap does not have to be the same size! It can be any size you like. The VideoCore library will automatically squeeze your snapshot down to the size you specified when you created your resource.
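For example, this little sketch would squeeze the whole screen down into a quarter-size bitmap (assuming the same DMXDisplay and DMXResource objects used throughout this post):

-- a resource much smaller than the screen
local small = DMX.DMXResource(320, 240);

-- the full screen is captured, scaled down to 320x240
Display:Snapshot(small);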

So, we have a bitmap within which our snapshot will be stored. The last thing to do is to actually take the snapshot:

Display:Snapshot(pixmap);

In this particular example, I also want to display the snapshot on the screen. So, I created a ‘view’. This view is simply a way to display something on the screen.

local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);

In this case, I do a couple of special things. I create the view to be the same size as the pixel buffer, and in fact, I use the pixel buffer as the backing store of the view. That means that whenever the pixel buffer changes, for example, when a snapshot is taken, it will automatically show up in the view, because the system draws the view from the pixel buffer. I know it’s a mouthful, but that’s how the system works.

So the following sequence:

-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

…will hide the view
take a snapshot
show the view

That’s so the view itself is not a part of the snapshot. You could achieve the same by moving the view ‘offscreen’ and then back again, but I haven’t implemented that part yet.

Well, there you have it. A whole bunch of words to describe a fairly simple process. I think this is an interesting thing though. Thus far, when I’ve seen Raspberry Pi ‘demo’ videos, it’s typically someone with a camera in one hand, bad lighting, trying to type on their keyboard and use their mouse while taking video. With the ability to take screen snapshots in code, making screencasts can’t be that far off.

Now, if only I could mate this capability with that x264 video compression library, I’d be all set!


Taming the Raspberry Pi Display Manager

Last time around, I was busy slaying the bcm_host interface.  One of the delightful things that follow on from slaying dragons is that you get to plunder their treasure.  In this particular case, with the bcm_host stuff in hand, you can now do fantastic things like put pixels on the screen.

Where to start?

The display system of the Raspberry Pi was at first very confusing, and perplexing.  In order for me to understand it, I had to first just ignore the X Window system, because that’s a whole other thing.  At the same time, I had to hold back the urge to ‘just give me the frame buffer!’.  You can in fact get your hands on the frame buffer, but if you do it in a nicely controlled way, you’ll get the benefit of some composited ‘windows’ as well.

Some terminology:

VideoCore – The set of libraries that is at the core of the Raspberry Pi hardware for audio and video.  This is not just one library, but a set of different libraries, each of which provides some function.  The sources are not available, but various header files are (located in /opt/vc/*).  There isn’t much documentation, other than what’s in the headers, and this is what makes them difficult to use.

EGL, EGLES, OpenVG, OpenMAX – These are various APIs defined by the Khronos Group.  They cover windowing, OpenGL for embedded devices, and 2D vector graphics.  These are similarly supplied as opaque libraries, with their header files made available.  Although these libraries are very useful and helpful, they are not strictly required to get something displayed on the screen.

This time around, I’m only going to focus on the parts of the VideoCore, ignoring all the Khronos specific stuff.

The first part of VideoCore is the vc_dispmanx.  This is the display manager service API.  Keep in mind that word “service”.  From a programmer’s perspective, you can consider the display manager to be a ‘service’, meaning you send commands to it, and they are executed.  I have created a LuaJIT FFI binding file, vc_dispmanx.lua, which essentially gives me access to all the functions within.  Of course, following my own simple API development guidelines, I created a ‘helper’ file as well: DisplayManX.lua.

The short story is this.  Within DisplayManX, you’ll find the implementation of 4 convenience classes:

DMXDisplay – This is what gives you a ‘handle’ on the display.  This could be an attached HDMI monitor, or a composite screen.  Either way, you can use this handle to get the size, and other characteristics.  This handle is also necessary to perform any other operations on the display, from creating a window, to displaying content.

DMXElement – A representation of a visual element on the DMXDisplay.  I’m trying to avoid the word ‘window’ because it does not have a title bar, close box, resize, etc.  Those are all visual elements to be developed by a higher level thing.  This DMXElement is what you need to carve out a piece of the screen where you intend to do some drawing.  You can give an element a “layer”, so they can be ordered.  The DMXDisplay acts as a fairly rudimentary element manager, so it does basic front-to-back ordering of your elements.

DMXResource – Just like I’m trying to avoid the word ‘window’ with respect to DMXElement, I’ll try to avoid the word ‘view’ in describing DMXResource.  A resource is essentially like a bitmap.  You create a resource, with a certain size, and pixel format, fill it in with stuff, and then ultimately display it on the screen by writing it into the DMXElement.  If you were creating a traditional windowing environment, this would be the backing store of a window.

DMXUpdate – This is like a transaction.  As mentioned earlier, DMX is a ‘service’, and you send commands to the service.  In order to send commands, you must bracket them in an ‘update begin’/’update end’ pairing.  You can send a ‘batch’ of commands by just placing several commands between the update begin/end.  This object represents the bracketing transaction.
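At the raw vc_dispmanx level, that bracketing looks roughly like this (just a sketch; ‘lib’ stands for the loaded libbcm_host library handle, and whatever element commands you want batched go in the middle):

-- begin the transaction; 0 is the priority
local update = lib.vc_dispmanx_update_start(0);

-- ... queue element add/change/remove calls here, passing 'update' to each ...

-- submit the whole batch and wait for it to take effect
lib.vc_dispmanx_update_submit_sync(update);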

The good news is, you don’t really need to worry about this low level of detail if you want to use these classes.

So, How about an example?

 

-- A simple demo using dispmanx to display an overlay

local ffi = require "ffi"
local bit = require "bit"
local bnot = bit.bnot
local band = bit.band
local bor = bit.bor
local rshift = bit.rshift
local lshift = bit.lshift

local DMX = require "DisplayManX"


ALIGN_UP = function(x,y)  
    return band((x + y-1), bnot(y-1))
end

-- This is a very simple graphics rendering routine.
-- It will fill in a rectangle, and that's it.
function FillRect( image, imgtype, pitch, aligned_height,  x,  y,  w,  h, val)
    local         row;
    local         col;
    local srcPtr = ffi.cast("int16_t *", image);
    local line = ffi.cast("uint16_t *",srcPtr + y * rshift(pitch,1) + x);

    row = 0;
    while ( row < h ) do
	col = 0; 
        while ( col < w) do
            line[col] = val;
	    col = col + 1;
        end
        line = line + rshift(pitch,1);
	row = row + 1;
    end
end

-- The main function of the example
function Run(width, height)
    width = width or 200
    height = height or 200


    -- Get a connection to the display
    local Display = DMXDisplay();
    Display:SetBackground(5, 65, 65);

    local info = Display:GetInfo();
    
    print(string.format("Display is %d x %d", info.width, info.height) );

    -- Create an image to be displayed
    local imgtype =ffi.C.VC_IMAGE_RGB565;
    local pitch = ALIGN_UP(width*2, 32);
    local aligned_height = ALIGN_UP(height, 16);
    local image = ffi.C.calloc( 1, pitch * height );

    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xFFFF );
    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xF800 );
    FillRect( image, imgtype,  pitch, aligned_height, 20, 20, width - 40, height - 40, 0x07E0 );
    FillRect( image, imgtype,  pitch, aligned_height, 40, 40, width - 80, height - 80, 0x001F );

    local BackingStore = DMXResource(width, height, imgtype);

	
    local dst_rect = VC_RECT_T(0, 0, width, height);

    -- Copy the image that was created into 
    -- the backing store
    BackingStore:CopyImage(imgtype, pitch, image, dst_rect);

 
    -- Create the view that will actually 
    -- display the resource
    local src_rect = VC_RECT_T( 0, 0, lshift(width, 16), lshift(height, 16) );
    dst_rect = VC_RECT_T( (info.width - width ) / 2, ( info.height - height ) / 2, width, height );
    local alpha = VC_DISPMANX_ALPHA_T( bor(ffi.C.DISPMANX_FLAGS_ALPHA_FROM_SOURCE, ffi.C.DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS), 120, 0 );
    local View = Display:CreateElement(dst_rect, BackingStore, src_rect, 2000, DISPMANX_PROTECTION_NONE, alpha);
 

    -- Sleep for a second so we can see the results
    local seconds = 5
    print( string.format("Sleeping for %d seconds...", seconds ));
    ffi.C.sleep( seconds )

end


Run(400, 200);

This is a sample taken from one of the original hello_pi examples, but made to work with this simplified world that I’ve created. To get started:

local DMX = require "DisplayManX"

This will simply pull in the display management service so we can start using it.

Next up, we see a simple rectangle filling routine:

-- This is a very simple graphics rendering routine.
-- It will fill in a rectangle, and that's it.
function FillRect( image, imgtype, pitch, aligned_height,  x,  y,  w,  h, val)
    local         row;
    local         col;
    local srcPtr = ffi.cast("int16_t *", image);
    local line = ffi.cast("uint16_t *",srcPtr + y * rshift(pitch,1) + x);

    row = 0;
    while ( row < h ) do
	col = 0; 
        while ( col < w) do
            line[col] = val;
	    col = col + 1;
        end
        line = line + rshift(pitch,1);
	row = row + 1;
    end
end

We don’t actually use the imagetype, nor the aligned_height. Basically, we’re assuming an image that has 16-bit pixels, and the ‘pitch’ tells us how many bytes per row. So, go through the buffer one 16-bit value at a time, and set each to the color value specified.

Next, we come to the main event. We want to create a few semi-transparent rectangles, and display them on the screen. Then wait a few seconds for you to view the results before cleaning the whole thing up.

    local Display = DMXDisplay();
    Display:SetBackground(5, 65, 65);

One of the first actions is to create the display object, and set the background color. The funny thing you’ll notice, if you run this code, is that suddenly your monitor seems to have a lot more screen real estate than you thought. Yep, X is taking up a smaller portion of the screen (if you’re running X). Same with the regular terminal. If you were running something like XBMC, you’d be seeing your full display being utilized. This is how they do it. At any rate, there’s an application right there. If you want to set the border color of your screen, just do those two lines of code, and you’re done…

Moving right along. We need a chunk of memory allocated, which will be what actually gets displayed in the window.

    -- Create an image to be displayed
    local imgtype =ffi.C.VC_IMAGE_RGB565;
    local pitch = ALIGN_UP(width*2, 32);
    local aligned_height = ALIGN_UP(height, 16);
    local image = ffi.C.calloc( 1, pitch * height );

For those who are framebuffer obsessed, there’s your window’s frame buffer right there. It’s just a chunk of memory of the appropriate size to match the pitch and alignment requirements of the pixel format you’ve selected. There are a fair number of formats to choose from, including RGBA32 if you want to burn up a lot of memory.

This would typically be represented as a “Bitmap” or “PixelMap”, or “PixelBuffer” object in most environments. Next time around, I’ll encapsulate it in one such object, but for now, it’s just a pointer to a chunk of memory ‘image’.

Now that we’ve got our chunk of memory, we fill it with color:

    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xFFFF );
    FillRect( image, imgtype,  pitch, aligned_height,  0,  0, width,      height,      0xF800 );
    FillRect( image, imgtype,  pitch, aligned_height, 20, 20, width - 40, height - 40, 0x07E0 );
    FillRect( image, imgtype,  pitch, aligned_height, 40, 40, width - 80, height - 80, 0x001F );

As described earlier, the DMXResource is required to actually display stuff in a display element, so we need to create that:

    local BackingStore = DMXResource(width, height, imgtype);

Don’t get tripped up by the name of the variable. It could be anything, I just used “BackingStore” to emphasize the fact that it’s the backing store of our display element.

Now to copy the image into the backing store:

    local dst_rect = VC_RECT_T(0, 0, width, height);

    -- Copy the image that was created into 
    -- the backing store
    BackingStore:CopyImage(imgtype, pitch, image, dst_rect);

Here, I do call it ‘CopyImage’, because that is in fact what you’re doing. In all generality, the lower level API is simply ‘write_data’, but why be so obtuse when we can be more precise. At this point, we have our ‘bitmap’ written into our backing store, but it’s not displayed on the screen yet!

This last part is the only ‘black magic’ part of the whole thing, but you’ll soon see it’s nothing too exciting. We need to create an element on the display, and that element needs to have our BackingStore as its backing.

    -- Create the view that will actually 
    -- display the resource
    local src_rect = VC_RECT_T( 0, 0, lshift(width, 16), lshift(height, 16) );
    dst_rect = VC_RECT_T( (info.width - width ) / 2, ( info.height - height ) / 2, width, height );
    local alpha = VC_DISPMANX_ALPHA_T( bor(ffi.C.DISPMANX_FLAGS_ALPHA_FROM_SOURCE, ffi.C.DISPMANX_FLAGS_ALPHA_FIXED_ALL_PIXELS), 120, 0 );

First we establish the source and destination rectangles. The src_rect indicates what part of our ‘BackingStore’ we want to display. The ‘dst_rect’ says where we’ll want to locate the window on the display. That’s the only challenging part to wrap your head around. src_rect -> related to bitmap image, dst_rect -> related to position of window. In this case, we want to take the whole bitmap, and locate the window such that it is centered on the display.

And finally, we create the element (window) on the screen:

    local View = Display:CreateElement(dst_rect, BackingStore, src_rect, 2000, DISPMANX_PROTECTION_NONE, alpha);

Since the window already has a backing store, it will immediately display the contents, our nicely crafted rectangles.

And that about does it.

That’s about the hardest it gets. It gets easier from here once you start integrating with EGL and the other libraries. These are the bare minimums to get something up on the screen. If you were doing something like a game, you’d probably create a couple of resources, with their attendant PixelBuffers, and variously swap them into the View. This is essentially what EGL does, it manages these low level details for you.
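If you were to do that swapping yourself at the dispmanx level, it would look something like this (again just a sketch; ‘lib’ is the libbcm_host handle, ‘front’ and ‘back’ are two DMXResource objects, and the .Handle fields are an assumption about how the wrapper objects carry their native handles):

-- point the element at the back buffer, inside an update transaction
local update = lib.vc_dispmanx_update_start(0);
lib.vc_dispmanx_element_change_source(update, View.Handle, back.Handle);
lib.vc_dispmanx_update_submit_sync(update);

-- swap roles, then render the next frame into the new 'back' resource
front, back = back, front;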

What you’ll notice is that I don’t deal with any mouse or keyboard in this particular example. The input subsystem is a whole other ballgame, and is very Linux specific, not VideoCore specific. I will incorporate that later.

To finish this installment, I’ll leave this teaser…

local display = DMXDisplay()
display:Snapshot()

What’s that about?


Taming Sprawling Interfaces – Raspberry Pi BCM Host

If you’ve done any amount of programming, you know that one of the hardest challenges is just getting your head around the task.  There’s the environment, the language, the core libraries, build system, etc.

I was born and raised a UNIX head, and I still don’t know how the Makefile/autoconf/configure thing works.  It’s always been a black art, and I’ve never been able to go beyond the very basics from scratch.  Every time I read one of those makefiles, blood starts dripping from my pores like Zorg in Fifth Element.  Pure Evil!

So, I wanted to tackle the task of doing some UI programming on the Raspberry Pi.  If you must know, my ultimate aim is to be able to easily morph my programming environment to provide me with whatever language environment I could want.  For example, maybe I want to program using a GDI/User32 interface.  Or maybe X, or maybe HTML5 Canvas, or WebGL, or whatever.  The Raspberry Pi is perfect for this task because it’s some hardware, but it’s so cheap, I just view it as a runtime environment, just like any other.  The trick of course is to get all the appropriate libraries and frameworks in place to do the deed.

Well, you’re not likely to see GDI anywhere but Windows (perhaps Wine), so if you want it here, you’ll have to recreate it.

But, let’s begin at the beginning.  The Raspberry Pi is little more than a cell phone without a case or radio.  Of course it includes a USB port and HDMI, so it makes for a nice little quickly attachable/programmable kit.  One of the key features of the Pi is that it can do hardware acceleration for graphics operations.  The most clearly evident demonstration of this is the running of XBMC on the Pi.  You can decode/display 1080p video on the device, without it breaking a sweat.  Same goes for most simple 3D and 2D graphics.  How to unlock though?  From the terminal, it’s not totally evident, and from X Windows (which is not currently hardware accelerated), it’s even less evident.  What to do?

The Pi ships with some libraries, in particular libbcm_host.so.  This library is supplied by Broadcom, and it’s on every device.  It provides what’s known as the VideoCore APIs.  This library contains, amongst other things, the raw display access that any application will need.  But, like many APIs, it is sprawling…

I really want to use this though, because I want to go all the way down to the framebuffer, without anything getting in my way.  I want to do everything from windows and views to mouse and keyboard, because I want to be able to emulate many different types of systems.  Obviously, this is where I’ll have to start if I am to pursue such an ambition.

First task, apply what I learned from my Simplified API explorations.  On the Pi, located in the directory ‘/opt/vc/include’, you’ll find the root of the bcm_host world.  The most innocent header file: ‘bcm_host.h’ lives here.  Within this header file, there are only three functions:

void bcm_host_init(void);
void bcm_host_deinit(void);

int32_t graphics_get_display_size( const uint16_t display_number,
                                                    uint32_t *width,
                                                    uint32_t *height);

That’s it, nothing to see here, move right along. Actually, the very first one is most important. You must call bcm_host_init() before anything else is called. This is similar to libraries such as WinSock, where you have to call a startup function before anything else in the library.

The third function there, graphics_get_display_size(), will tell you the size (in pixels) of the display you specify (0 – default LCD). That’s a handy thing to have; as you start to contemplate building up your graphics system, you need to know how large the screen is.

Alrighty then, first task is to write the zero-th level of hell for this thing, but wait… The rest of this header looks like this:

#include "interface/vmcs_host/vc_dispmanx.h"
#include "interface/vmcs_host/vc_tvservice.h"
#include "interface/vmcs_host/vc_cec.h"
#include "interface/vmcs_host/vc_cecservice.h"
#include "interface/vmcs_host/vcgencmd.h"

Turns out those three functions were deceptively the tip of the iceberg. The rest of the system is encapsulated in this large tree of interdependent header files, each one containing constants, structures, functions, and links to more header files. Such a deep dive requires a deep breath, and patience. It can be conquered, slowly but surely.

First file to create: bcm_host.lua

local ffi = require "ffi"

ffi.cdef [[
void bcm_host_init(void);
void bcm_host_deinit(void);

int32_t graphics_get_display_size( const uint16_t display_number, uint32_t *width, uint32_t *height);
]]

require "libc"
require "vc_dispmanx"

require "interface/vmcs_host/vc_tvservice"
require "interface/vmcs_host/vc_cec"
require "interface/vmcs_host/vc_cecservice"
require "interface/vmcs_host/vcgencmd"

And that’s it. According to my API wrapping guidelines, the first level is meant for the programmer who wants to act like they are a C programmer. No hand holding, no little helpers. I cheat a little bit and provide some simple typedefs here and there, but that’s about it.

Next up is the BCMHost.lua file:

local ffi = require "ffi"

require "bcm_host"

local lib = ffi.load("bcm_host");

--[[
	The bcm_host_init() function must be called
	before any other functions in the library can be
	utilized.  This will be done automatically
	if the developer does:
		require "bcm_host"
--]]

lib.bcm_host_init();

local GetDisplaySize = function(display_number)
	display_number = display_number or 0
	local pWidth = ffi.new("uint32_t[1]");
	local pHeight = ffi.new("uint32_t[1]");

	local err = lib.graphics_get_display_size(display_number, pWidth, pHeight);

	-- Return immediately if there was an error
	if err ~= 0 then
		return false, err
	end

	return pWidth[0], pHeight[0];
end

return {
	Lib = lib,

	GetDisplaySize = GetDisplaySize,
}

For the less he-man programmer, this is a bit more gentle, and a lot more Lua-like. First of all, it includes the previous raw interface file (require “bcm_host”). This gives us our basic definitions.

Then it loads the library, and initializes it:

local lib = ffi.load("bcm_host")

lib.bcm_host_init();

At this point, we can confidently call functions in the library. The last bit of code in this file provides a convenience wrapper around getting the display size. Of course any programmer could just call the ‘graphics_get_display_size’ function directly, but they’d have to remember to allocate a couple of int arrays to receive the values back, they’d have to check the return value against 0 for success, then unpack the values from the arrays, and of course decide whether they want to deal with a particular screen or just the default screen… Or you could just do this:

local BCH = require "BcmHost"
local width, height = BCH.GetDisplaySize();

And finally, just to make this a ‘module’ so that things are nicely encapsulated:

return {
	Lib = lib,

	GetDisplaySize = GetDisplaySize,
}

There, now that wasn’t too bad. Of course, like I said, this is just the tip of the iceberg. We can now load the bcm_host library, and get the size of the display. There’s a heck of a lot more work to be done in order to get a ‘window’ displayed on the screen with some accelerated 3D Graphics, but this is a good start.

Next time around, I’ll tackle the Display Manager, which is where the real fun begins.

The entirety of my efforts so far can be found in the LJIT2RPi project on GitHub.