Windows and Raspberry Pi, Oh my!

I woke this morning to two strange realities.  My sometimes beloved Seahawks did not win the Super Bowl, and the Raspberry Pi Foundation announced the Raspberry Pi 2, which will run Windows 10!

I’ll conveniently forget the first reality for now, as there’s always next season.  But that second reality?  I’ve long been a fan of the Raspberry Pi.  Not because of the specific piece of hardware, but because when it was first announced, it was the first of the reasonably capable $35 computers.  The hardware itself has long since been eclipsed by other notables, but none of them has quite matched the Raspberry Pi’s community, nor its volumes.  Now the Pi is moving into “we use them for embedded” territory, not just “for the kids to learn programming”.

And now along comes Windows!  This is interesting in two respects.  First, I did quite a bit of work putting a LuaJIT skin on the Raspberry Pi some time back.  I did it because I wanted to learn all about the deep down internals of the Raspberry Pi, but from the comfort of Lua.  At the time, I leveraged an early form of the ljsyscall library to take care of the bulk of the *NIX specific system calls.  I was going to go one step further and implement the very lowest interface to the VideoCore chip, but that didn’t seem like a very worthwhile effort, so I left it at the Khronos OpenGL ES level.

At roughly the same time, I started implementing LuaJIT Win32 APIs, starting with LJIT2Win32.  Then I went hog wild and implemented TINN, which for me is the ultimate in LuaJIT APIs for Win32 systems.  Both ljsyscall and TINN exist because programming at the OS level is a very tedious, esoteric process.  Most of the time the low level OS specifics are paved over with one higher level API/framework or another.  Well, these are in fact such frameworks, giving access to the OS at a very high level from the LuaJIT programming language.

So, this new Windows on Pi, what of it?  Well, finally I can program the Raspberry Pi using the TINN tool.  This is kind of cool for me.  I’m no longer forced into using Linux on this tiny platform when I’m more familiar with the Windows API and how things work.  Even better, as TINN is tuned to running things like coroutines and IO completion ports, I should be able to push the tiny device to its limits, with respect to IO at least.  The same goes for multi-threaded programming.  All the goodness I’ve enjoyed on my Windows desktop will now be readily available to me on the tiny Pi.

The new Pi is a quad core affair, which means the kids will learn about mutexes, semaphores and the like…  Well, actually, I’d expect the likes of the Go language, TINN, and other tools to come to the rescue.  The beauty of Windows on Pi is likely going to be the ease of programming.  When I last programmed on the Pi directly, I used the nano editor, and print() for debugging.  I couldn’t really use Eclipse, as it was too slow back then.  Now the Pi will likely just be a Visual Studio target, maybe even complete with a simulator.  That would be a great way to program.  All the VS goodness that plenty of people have learned to love.  Or maybe a slimmed down version that’s not quite so enterprise industrial.

But, what are these Pis used for anyway?  Are they truly replacement PCs?  Are they media servers, NAS boxes, media players?  The answer is YES to all, to varying degrees.  Following along the ‘teach the kids to program’ theme, having a relatively inexpensive box that allows you to program cannot be a bad thing.  Making Windows and Linux available cannot be a bad thing.  Having a multi-billion dollar software company supporting your wares MUST be a good thing.  Love to hate Microsoft?  Meh, lots of Windows based resources are available in the world, so I don’t see how it does any harm.

On the very plus side, as this is a play towards makers, it will force Microsoft to consider the various and sundry applications currently being pursued outside the corporate enterprise space.  Robotics will force a reconsideration of realtime constraints.  Computer vision might become a thing as well.  Creating an even more coherent story around media would be a great thing.  And maybe bringing the likes of the Kinect to this class of machine?  Well, not in this current generation.

The news this Monday is both melancholy and eyebrow raising.  I for one will be happy to program the latest Raspberry Pi using TINN.


The Insanity of Hardware Chasing

Just a couple months back, I purchased a few Android/Linux dev boards to play with.  This includes the Raspberry Pi, the Odroid-X, and a couple of fairly capable routers.

Since I purchased the Pis, they went from 256MB of RAM to 512MB of RAM for the same $35 price.  Recently Hardkernel, the makers of the Odroid-X, released three new versions of their kit.  First, an upgraded Odroid-X2, which has a faster clock speed and double the RAM of the previous version.  They went an extra step though.  They now have a new model, the Odroid-U2.  This is an ultra-compact quad-core computer, smaller than a credit card.

This newest Odroid-U2 is about the same size as the nano router by TP-Link.  Fit a couple of these boards together with that wireless router, and I think you have the makings of a nice little compact, portable, low-powered compute/networking rig.

But, hardware without advances in software isn’t that dramatically important to me.  In the case of Hardkernel, you can now get Ubuntu on an SD card to run with your new Odroid-XXX board.  That’s nice because if you don’t find Android compelling for your particular application, there is a pretty darned good alternative.  Of course, there are other distros available as well, but having Ubuntu, I think, is a slam dunk in terms of getting something that’s well supported, and fun to play with.

Not to forget the Raspberry Pi: they are making progress on releasing their “Model A” board.  This board has slightly less hardware than the Model B, but its $25 price point is the killer feature.

Along with the Pi news, there is a new OS release: Plan 9, the operating system that originated at Bell Labs, is now available for the Pi.  I find this last bit particularly interesting since the mission of the Raspberry Pi is education.  I think Plan 9 provides a platform rife with learning opportunities.

In addition to “doing UNIX better than UNIX”, Plan 9 presents some interesting abstraction and separation ideas which might find new life in the emergent “internet of things” environment.  Plan 9 makes it relatively easy to separate things, including ‘memory’ and ‘processing’.  It has a fairly minimal “C” interface, as most operations are carried out by sending messages around rather than calling C functions.

Hardware is moving fast, and I can hardly keep up.  I think there will need to be changes in the software landscape to truly keep pace.  It probably starts by getting message passing established as the primary mode of communication between devices.  HTTP/REST helps along these lines.  We probably need to go much further, but there you go.

The hardware changes quickly.  Software skills, not so much.  We live in great times.



A Picture’s worth 5Mb

What’s this then?

Another screen capture. This time I just went with the full size 1920×1080. What’s happening on this screen? Well, that tiger in the upper left is well known from PostScript days, and is the gold standard for testing graphics rendering systems. In this case, I’m using OpenVG on the Pi, from a Lua driven framework. No external support libraries, other than what’s on the box. In fact, I just took the hello_tiger sample, and did some munging to get it into the proper shape, and here it is.

One thing of note: this image is actually rotating.  It’s not blazingly fast, but then it’s not blazingly fast on any platform.  It’s decent, though, and way faster than it would be using the CPU only on my “high powered” quad core desktop machine.  This speed comes from the fact that the GPU on the Pi is doing all the work.  You can tell because if you get a magnifying glass and examine the lower right-hand corner of the image, you’ll see that the CPU meter is not pegged.  What little activity is occurring there is actually coming from other parts of the system, not the display of the tiger.  I guess that VideoCore GPU thing really does make a difference in terms of accelerating graphics.  Go figure.

In the middle of the image, you see a window “snapper.lua”. This is the code that is doing the snapshot. Basically, I run the tiger app from one terminal, the one on the lower left. Then in the lower right, I run the ‘snapper.lua’ script. As can be seen in the OnKeyUp function, every time the user presses the “SysRQ” key (also ‘Print Screen’ on many keyboards), a snapshot is taken of the screen.

Below that, there’s a little bit of code that stitches an event loop together with a keyboard object. Yes, I now have a basic event loop, and an “Application” object as well. This makes it really brain-dead simple to throw together apps like this without much effort.
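The actual snapper.lua only appears in the screenshot, but the shape of it is roughly the following sketch. The Application, Keyboard, and KEY_SYSRQ names here are placeholders of mine, not the framework’s exact names; Display:Snapshot(), ReadPixelData(), and WritePPM() are the routines covered in the screen capture posts below.

-- Rough sketch of a snapper.lua style script (illustrative names)
local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();
local displayWidth, displayHeight = Display:GetSize();

-- One reusable resource to capture the screen into
local resource = DMXResource(displayWidth, displayHeight, ffi.C.VC_IMAGE_RGB888);
local snapcount = 0;

-- Called by the event loop whenever a key is released
local function OnKeyUp(kbd, keycode)
	if keycode == KEY_SYSRQ then	-- 'Print Screen' on most keyboards
		snapcount = snapcount + 1;
		Display:Snapshot(resource);

		local pixeldata = resource:ReadPixelData();
		if pixeldata then
			WritePPM(string.format("desktop_%06d.ppm", snapcount), pixeldata);
		end
	end
end

-- The remaining bit is hooking OnKeyUp to a keyboard object and
-- handing that to the Application's event loop.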

[sidetrack]
One very interesting thing about being able to completely control your eventing model and messaging loops is that you can do whatever you want. Eventually, I’ll want to put together a quick and dirty “remote desktop” sort of deal, and I’ll need to be able to quickly grab the keyboard, mouse, and other interesting events, and throw them to some other process. That process will need to be able to handle them as if they happened locally. Well, when you construct your environment from scratch, you can easily bake that sort of thing in.
[sidetrack]

It’s nice to have such a system readily at hand.  I can fiddle about with lots of different things, build apps, experiment with eventing models, throw up some graphics, and never once have to hit “compile” and wait.  This makes for a very productive playground where lots of different ideas can be tried out quickly before being baked into more ‘serious’ coding environments.



Screencast of the Raspberry Pi

It’s one of those inevitabilities.  Start with fiddling about with low level graphics system calls, do some screen capture, then some single file saving, and soon enough you’ve got screen capture movies!  Assuming WordPress does this right.

If you’ve been following along, the relevant code looks like this:

-- Setup, as in the earlier screen capture posts
-- (WritePPM() is the routine from the "Capturing Screenshots" post)
local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();
local displayWidth = 640;
local displayHeight = 320;

-- Create the resource that will be used
-- to copy the screen into.  Do this so that
-- we can reuse the same chunk of memory
local resource = DMXResource(displayWidth, displayHeight, ffi.C.VC_IMAGE_RGB888);

local p_rect = VC_RECT_T(0,0,displayWidth, displayHeight);
local pixdata = resource:CreateCompatiblePixmap(displayWidth, displayHeight);

local framecount = 120

for i=1,framecount do
	-- Do the snapshot
	Display:Snapshot(resource);

	local pixeldata, err = resource:ReadPixelData(pixdata, p_rect);
	if pixeldata then
		-- Write the data out
		local filename = string.format("screencast/desktop_%06d.ppm", i);
		print("Writing: ", filename);

		WritePPM(filename, pixeldata);
	end
end

In this case, I’m capturing into a bitmap that is 640×320, which roughly matches the aspect ratio of my wide monitor.

This isn’t the fastest method of capturing on the planet. It actually takes a fair amount of time to save each image to the SD card in my Pi. Also, I might be able to eliminate the copy (ReadPixelData), if I can get the pointer to the memory that the resource uses.

This little routine will generate a ton of .ppm image files stored in the local ‘screencast’ directory.

From there, I use ffmpeg to turn the sequence of images into a movie:

ffmpeg -i screencast/desktop_%06d.ppm desktop.mp4

If you’re an ffmpeg guru, you can set all sorts of flags to change the framerate, encoder, and the like. I just stuck with the defaults, and the result is what you see here.
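If you do want to fiddle, a command along these lines sets an explicit input framerate and picks the H.264 encoder. The flag values here are just examples, not what I actually used:

ffmpeg -framerate 9 -i screencast/desktop_%06d.ppm -c:v libx264 -pix_fmt yuv420p desktop.mp4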

So, the Pi is capable. It’s not the MOST capable, but it can get the job done. If I were trying to do this in a production environment, I’d probably attach a nice SSD drive to the USB port, and stream out to that. I might also choose a smaller image format such as YUV, which is easier to compress. As it is, the compression was getting about 9fps, which ain’t too bad for short clips like this.

One nice thing about this screen capture method is that it doesn’t matter whether you’re running X Windows, or not. So, you’re not limited to things that run in X. You can capture simple terminal sessions as well.

I’m rambling…

This works, and it can only get better from here.

It is part of the LJIT2RPi project.


Raspberry Pi OpenSource VideoCore Access

The Pi Foundation today announced the availability of the VideoCore client side libraries as open source!

And the crowd just keeps moving along…

What’s the big deal?  Well, the way the Raspberry Pi is arranged, there are essentially two ‘cores’ cooperating on a chip to form the hardware of the Raspberry Pi.  One of those cores, “VideoCore”, is highly proprietary, and handles all the lowest level video and audio for the Raspberry Pi.  If you were running a typical PC, this would be similar to the arrangement of a CPU (Intel) and a GPU (nVidia) running in the same machine.  The CPU generally takes care of the “operating system”, and anything having to do with video gets communicated to the GPU, and magic happens.

Most users don’t care about this level of detail.  Typically, the libraries that communicate with the GPU are highly proprietary to the vendor who produced them.  nVidia does not open source the drivers for their chips.  They just provide a binary blob to the OS, and leave it at that.

This same situation was occurring here with Broadcom.  It’s not a big deal to most users, but when you’re trying to sell the Raspberry Pi as an “educational tool”, and half the system is basically off limits, it kind of hurts the credibility of the whole mission.  So, now they’ve gone and made that code available to the masses.

Now, of course if you just want to put something together in Pygame, then your world has not shifted one bit.  You’ll never program down at this level.  But, let’s say you’re trying to put together an OS distribution.  Well then things start to get interesting.  You can better integrate X, or XBMC with the lowest level high performance part of the hardware, without having to poke your way along in the dark with an undocumented API.  You can get right down there in the chips and create the bestest interop between the ‘CPU’ and the ‘GPU’.

What will it mean?  Probably nothing in the very short term.  In the long run, it will mean that people who really care about such things will have the tools they need to eke as much performance out of this system-on-chip as possible.

I think the whole Raspberry Pi thing is a master stroke for Broadcom.  They are typically a behind the scenes provider of low level chips.  The Pi is giving them a bully pulpit upon which they can advertise their wares.  Making this stuff open source doesn’t diminish their intellectual property in the least, and doesn’t give anything to their competitors.  It does get a bunch of maniacal programmers focused on programming their VideoCore, which without such a move would remain an obscure opaque part of their offerings.

It’s a fun time to be a programmer!



Capturing Screenshots of the Raspberry Pi


Last time around, I went through how to capture the screen on the Raspberry Pi. Well, capturing and displaying on the same screen at the same time really isn’t that interesting. It becomes more fun to capture and then share with your friends, or make movies, or what have you.

This time around, I’ve actually captured, and saved to a file.

In order to do this, I had to introduce a couple new concepts.

First is the idea of PixelData. This is simply a data structure to hold onto some specified pixel data.

ffi.cdef[[
struct DMXPixelData {
	void *		Data;
	VC_IMAGE_TYPE_T	PixelFormat;
	int32_t		Width;
	int32_t		Height;
	int32_t		Pitch;
};
]]

local pixelSizes = {
	[tonumber(ffi.C.VC_IMAGE_RGB565)] = 2,	-- 2 bytes per pixel
	[tonumber(ffi.C.VC_IMAGE_RGB888)] = 3,	-- 3 bytes per pixel
}

local DMXPixelData = ffi.typeof("struct DMXPixelData");
local DMXPixelData_mt = {

	-- Free the underlying buffer when the object gets collected
	__gc = function(self)
		print("GC: DMXPixelData");
		if self.Data ~= nil then
			ffi.C.free(self.Data);
		end
	end,

	__new = function(ct, width, height, pformat)
		pformat = pformat or ffi.C.VC_IMAGE_RGB565

		local sizeofpixel = pixelSizes[tonumber(pformat)];

		-- VideoCore likes pitches aligned to 32 bytes,
		-- and heights rounded up to a multiple of 16
		local pitch = ALIGN_UP(width*sizeofpixel, 32);
		local aligned_height = ALIGN_UP(height, 16);

		-- Allocate pitch bytes for each row of the image
		local dataPtr = ffi.C.calloc(pitch * height, 1);

		return ffi.new(ct, dataPtr, pformat, width, height, pitch);
	end,
}
ffi.metatype(DMXPixelData, DMXPixelData_mt);

The ‘__new’ metamethod is where all the action is at. You can do the following:

pixmap = DMXPixelData(640, 480)

And you’ll get a nice chunk of memory allocated of the appropriate size. You can go further and specify the pixel format (RGB565 or RGB888), but if you don’t, it will default to a conservative RGB565.
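To make that concrete, here are the two forms side by side; the pitch values are just what falls out of the pixelSizes table and the 32 byte alignment above:

-- Defaults to RGB565, 2 bytes per pixel
local pm565 = DMXPixelData(640, 480);
print(pm565.Pitch);	-- 1280 (640 * 2, already a multiple of 32)

-- Explicitly ask for RGB888, 3 bytes per pixel
local pm888 = DMXPixelData(640, 480, ffi.C.VC_IMAGE_RGB888);
print(pm888.Pitch);	-- 1920 (640 * 3, already a multiple of 32)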

This is great. Now we have a place to store the pixels. But what pixels? When we did a screen capture, we captured into a DMXResource object. Well, that object doesn’t have ready-made access to the pixel pointer, so what to do? Well, just like DMXResource has a CopyPixelData() function, it can have a ReadPixelData() function as well. That way, we can read the pixel data out of a resource and go ahead and do other things with it.

ReadPixelData = function(self, pixdata, p_rect)
  local p_rect = p_rect or VC_RECT_T(0,0,self.Width, self.Height);
  local pixdata = pixdata or self:CreateCompatiblePixmap(p_rect.width, p_rect.height);

  local success, err = DisplayManX.resource_read_data (self.Handle, p_rect, pixdata.Data, pixdata.Pitch)
  if success then
    return pixdata;
  end

  return false, err;
end

Alrighty, now we’re talking. With this routine, I can now read the pixel data out of any resource. There are two ways to use it. If you pass in your own DMXPixelData object (pixdata), then it will be filled in. If you don’t pass in anything, then a new PixelData object will be created by the ‘CreateCompatiblePixmap()’ function.
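In code, the two usages look something like this, where ‘resource’ stands for any DMXResource (such as the displayView.Resource used in the full example below). Reusing a buffer is the way to go if you’re grabbing frames in a loop:

-- Let ReadPixelData create a compatible pixel buffer for me
local pixeldata, err = resource:ReadPixelData();

-- Or supply my own buffer (and rectangle) so the same memory gets reused
local p_rect = VC_RECT_T(0, 0, resource.Width, resource.Height);
local pixdata = resource:CreateCompatiblePixmap(resource.Width, resource.Height);
resource:ReadPixelData(pixdata, p_rect);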

OK. So, we know how to capture, and now we know how to get our hands on the actual pixel data. Last we need a way to write this data out to a file. There are tons of graphics file formats, but I’ll stick to the most basic for this task:

local function WritePPM(filename, pixbuff)
    local fp = io.open(filename, "wb")
    if not fp then
        return false
    end

    -- Plain text header: magic number, dimensions, maximum component value
    local header = string.format("P6\n%d %d\n255\n", pixbuff.Width, pixbuff.Height)
    fp:write(header);

    -- Write the pixels row by row, assuming 3 bytes per pixel (RGB888)
    for row=0,pixbuff.Height-1 do
        local dataPtr = ffi.cast("char *",pixbuff.Data) + pixbuff.Pitch*row
        local data = ffi.string(dataPtr, pixbuff.Width*3);
        fp:write(data);
    end

    fp:close();

    return true
end

This is one of the oldest and most basic image file formats. The header is in plain text, giving the width and height of the image, and the maximum value to be found (255). This is followed by the actual pixel data, in R,G,B format, one byte per value. And that’s it. Of course, I converted this basic image into a .png file for display in this blog, but you can see how easy it is to accomplish.
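For instance, the 640×320 captures from the screencast post come out as a three line text header followed by 640 * 320 * 3 = 614,400 bytes of raw pixel data:

P6
640 320
255
<614,400 bytes of R,G,B triples, one row after another>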

So, altogether:

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();
local screenWidth, screenHeight = Display:GetSize();
local ratio = screenWidth / screenHeight;
local displayHeight = 320;
local displayWidth = 640;

-- Create the view that will display the snapshot
local displayView = Display:CreateView(
	displayWidth, displayHeight, 
	0, screenHeight-displayHeight-1,
	0, ffi.C.VC_IMAGE_RGB888)

-- Do the snapshot
displayView:Hide();	
Display:Snapshot(displayView.Resource);
displayView:Show();

local pixeldata, err = displayView.Resource:ReadPixelData();
if pixeldata then
	-- Write the data out
	WritePPM("desktop.ppm", pixeldata);
end

And that’s all there is to it really. If you can take one snapshot of the screen, you can take multiples. You could take hundreds, and dump them into a directory, and use some tool that converts a series of images into an h.264 file if you like, and show some movies of your work.

This stuff is getting easier all the time.  After taming the basics of bcm_host, screen captures are now possible, and displaying simple windows is possible.  I’ve been looking into mouse and keyboard support.  I’ll tackle that next, because once you have that support, you can actually write some interesting interactive applications.



Taking Screen Snapshots on the Raspberry Pi

Last time around, I was doing some display wrangling, trying to put some amount of ‘framework’ goodness around this part of the Raspberry Pi operating system. With the addition of a couple more classes, I can finally do something useful.

Here is how to take a screen snapshot:

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top

-- Create a resource to copy image into
local pixmap = DMX.DMXResource(width,height);

-- create a view with the snapshot as
-- the backing store
-- pformat is nil (not defined) here; DMXView.new is left to pick its default pixel format
local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);


-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

This piece of code is so short, it’s almost self-explanatory. But, I’ll explain it anyway.

The first few lines are just setup.

local ffi = require "ffi"
local DMX = require "DisplayManX"

local Display = DMXDisplay();

local width = 640;
local height = 480;
local layer = 0;	-- keep the snapshot view on top

The only two that are strictly needed are these:

local DMX = require "DisplayManX"
local Display = DMXDisplay();

The first line pulls in the Lua DisplayManager module that I wrote. This is the entry point into the Raspberry Pi’s low level VideoCore routines. Besides containing the simple wrappers around the core routines, it also contains some convenience classes which make doing certain tasks very simple.

Creating a DMXDisplay object is most certainly the first thing you want to do in any application involving the display. This gives you a handle on the entire display. From here, you can ask what size things are, and it’s necessary for things like creating a view on the display.
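For example, asking the display for its dimensions looks like this; on the 1920×1080 screen from the earlier screenshot, you’d get 1920 and 1080 back:

local screenWidth, screenHeight = Display:GetSize();
print(screenWidth, screenHeight);	-- e.g. 1920    1080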

The DMXDisplay interface has a function to take a snapshot. That function looks like this:

Snapshot = function(self, resource, transform)
  transform = transform or ffi.C.VC_IMAGE_ROT0;

  return DisplayManX.snapshot(self.Handle, resource.Handle, transform);
end,

The important part here is to note that a ‘resource’ is required to take a snapshot. This might look like 3 parameters are required, but through the magic of Lua, it actually turns into only 1. We’ll come back to this.
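That is, ‘self’ arrives through the colon call, ‘transform’ defaults to no rotation, and the resource is the only thing left for the caller to supply. Both of these forms work; whether other rotation constants like VC_IMAGE_ROT90 are exposed depends on what the bindings have cdef’d, so treat the second line as illustrative:

-- The usual call: just the resource, no rotation
Display:Snapshot(pixmap);

-- Explicit transform (illustrative; assumes VC_IMAGE_ROT90 is defined in the bindings)
Display:Snapshot(pixmap, ffi.C.VC_IMAGE_ROT90);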

So, a resource is needed. What’s a resource? Basically a bitmap that the VideoCore system controls. You can create one easily like this:

local pixmap = DMX.DMXResource(width,height);

There are a few other parameters that you could use while creating your bitmap, but width and height are the essentials.

One thing of note: when you eventually call Display:Snapshot(pixmap), you cannot control which part of the screen is taken as the snapshot. It will take a snapshot of the entire screen. But your bitmap does not have to be the same size! It can be any size you like. The VideoCore library will automatically squeeze your snapshot down to the size you specified when you created your resource.
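So, for example, a half-size capture is just a matter of handing Snapshot a smaller resource. A minimal sketch, using the same calls as above:

local screenWidth, screenHeight = Display:GetSize();

-- A resource half the size of the screen in each dimension
local thumb = DMX.DMXResource(screenWidth/2, screenHeight/2);

-- Still captures the whole screen, scaled down to fit the resource
Display:Snapshot(thumb);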

So, we have a bitmap within which our snapshot will be stored. The last thing to do is to actually take the snapshot:

Display:Snapshot(pixmap);

In this particular example, I also want to display the snapshot on the screen. So, I created a ‘view’. This view is simply a way to display something on the screen.

local mainView = DMX.DMXView.new(Display, 200, 200, width, height, layer, pformat, pixmap);

In this case, I do a couple of special things. I create the view to be the same size as the pixel buffer, and in fact, I use the pixel buffer as the backing store of the view. That means that whenever the pixel buffer changes, for example when a snapshot is taken, it will automatically show up in the view, because the system draws the view from the pixel buffer. I know it’s a mouthful, but that’s how the system works.

So the following sequence:

-- Hide the view so it's not in the picture
mainView:Hide();	

-- Do the snapshot
Display:Snapshot(pixmap);

-- Show it on the screen
mainView:Show();

ffi.C.sleep(5);

…will hide the view, take a snapshot, and then show the view again.

That’s so the view itself is not a part of the snapshot. You could achieve the same by moving the view ‘offscreen’ and then back again, but I haven’t implemented that part yet.

Well, there you have it. A whole bunch of words to describe a fairly simple process. I think this is an interesting thing though. Thus far, when I’ve seen Raspberry Pi ‘demo’ videos, it’s typically someone with a camera in one hand, bad lighting, trying to type on their keyboard and use their mouse while taking video. With the ability to take screen snapshots in code, making screencasts can’t be that far off.

Now, if only I could mate this capability with that x264 video compression library, I’d be all set!