Easy OpenGL Examples in HeadsUp
Posted: April 13, 2012 | Filed under: Uncategorized

In order to flesh out the GL APIs, I've been coding up various samples from "The Red Book" (the OpenGL Programming Guide):
alpha.c
triangle.c
robot.c
Typical HeadsUp Lua code looks like this:
local leftFirst = gl.GL_TRUE;

function init()
    gl.glEnable(gl.GL_BLEND);
    gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA);
    gl.glShadeModel(gl.GL_FLAT);
    gl.glClearColor(0, 0, 0, 0);
end

function drawLeftTriangle()
    gl.glBegin(gl.GL_TRIANGLES);
    gl.glColor4f(1, 1, 0, 0.75);
    gl.glVertex3f(0.1, 0.9, 0);
    gl.glVertex3f(0.1, 0.1, 0);
    gl.glVertex3f(0.7, 0.5, 0);
    gl.glEnd();
end

function drawRightTriangle()
    gl.glBegin(gl.GL_TRIANGLES);
    gl.glColor4f(0, 1, 1, 0.75);
    gl.glVertex3f(0.9, 0.9, 0);
    gl.glVertex3f(0.3, 0.5, 0);
    gl.glVertex3f(0.9, 0.1, 0);
    gl.glEnd();
end

function display()
    gl.glClear(gl.GL_COLOR_BUFFER_BIT);

    if (leftFirst) then
        drawLeftTriangle();
        drawRightTriangle();
    else
        drawRightTriangle();
        drawLeftTriangle();
    end

    gl.glFlush();
end

function reshape(w, h)
    gl.glViewport(0, 0, w, h);
    gl.glMatrixMode(gl.GL_PROJECTION);
    gl.glLoadIdentity();

    -- Keep the coordinate system square, as in the Red Book's alpha.c
    if (w <= h) then
        glu.gluOrtho2D(0, 1, 0, 1*h/w);
    else
        glu.gluOrtho2D(0, 1*w/h, 0, 1);
    end
end
There are, of course, obvious conversions to go from the C syntax to the Lua syntax, but for the most part the code is almost a straight copy/paste from any typical OpenGL example. If you've done any programming with the various OpenGL bindings in Lua, this will look fairly familiar.
There is one difference in this approach, though. I start with a straight FFI binding to the raw OpenGL interfaces (gl, glu), so to emit a vertex, for example, you have to be very explicit:
gl.glVertex3d(x,y,z)
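To make "a straight ffi binding" concrete: with LuaJIT's FFI, a raw binding boils down to declaring the C prototypes and loading the library. The following is only a minimal sketch of that idea, not HeadsUp's actual binding code; the GL entry points are real, but the library names and the pcall guard are my own assumptions:

```lua
-- Minimal sketch of a raw LuaJIT FFI binding to a few OpenGL calls.
-- Requires LuaJIT; the real HeadsUp binding declares far more.
local ffi = require("ffi")

ffi.cdef[[
typedef double       GLdouble;
typedef unsigned int GLenum;
void glBegin(GLenum mode);
void glVertex3d(GLdouble x, GLdouble y, GLdouble z);
void glEnd(void);
]]

-- Library name varies by platform ("opengl32" on Windows, "GL" on
-- Linux are typical choices). pcall keeps the sketch loadable even
-- on a machine with no GL driver installed.
local ok, gl = pcall(ffi.load, ffi.os == "Windows" and "opengl32" or "GL")
```

Once loaded, `gl.glVertex3d(x, y, z)` calls straight into the C entry point, with no Lua-side translation layer at all.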
In LuaGL, this would simply be glVertex(), because LuaGL puts a slight convenience layer between the raw calls and the Lua side. Given how easy FFI is to use in LuaJIT, I figure the raw binding is a good starting point. There is plenty of room to create alternative interfaces on top of it, such as making all the function calls global if you feel like it, so that you can simply do:
glVertex3f()
And, if you want to make it even more convenient, you could do:
function glVertex(x, y, z)
    gl.glVertex3d(x, y, z)
end
Of course, if you're using the shading language instead of the old-style fixed-pipeline calls, the surface area of the interface is a lot smaller, and writing such wrappers by hand might even be reasonable.
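Making the gl.* calls global doesn't have to be done one function at a time, either: a metatable on _G can forward unknown globals to the gl table automatically. Here's a sketch of that idea, using a small stand-in table in place of the real binding (note that an FFI namespace raises an error on a missing symbol rather than returning nil, so against the real gl you'd want a pcall guard in the lookup):

```lua
-- Stand-in for the real gl binding table; in HeadsUp this would be
-- the FFI namespace described above.
local gl = {
    glVertex3f = function(x, y, z) return x, y, z end,
}

-- Forward unresolved globals to gl, caching each hit on first use
-- so subsequent calls skip the metamethod entirely.
setmetatable(_G, {
    __index = function(t, name)
        local fn = gl[name]
        if fn ~= nil then rawset(t, name, fn) end
        return fn
    end
})

-- Now the LuaGL-style unprefixed call works:
local x, y, z = glVertex3f(0.1, 0.9, 0)
```

The rawset cache means the metamethod only fires once per function name, which matters if you're calling glVertex-style functions thousands of times per frame.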
There are other objects that get convenient wrappers. Texture management is a challenging task in OpenGL, so I have a GLTexture object for that. Creating a texture looks like this:
local tex = GLTexture.Create(width, height, gpuFormat, data, dataFormat, bytesPerElement)
There are appropriate defaults, so you can simply do the following to create a texture object of the appropriate size:
local tex = GLTexture.Create(640, 480);
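I won't claim these are HeadsUp's actual default values, but the usual Lua idiom for optional trailing parameters like these is or-defaulting, which a sketch of a Create-style constructor can illustrate (every default below is a placeholder, not HeadsUp's real choice):

```lua
-- Sketch of or-defaulting for optional constructor parameters.
-- The parameter names match the GLTexture.Create call above;
-- the default values are placeholders only.
local GLTexture = {}
GLTexture.__index = GLTexture

function GLTexture.Create(width, height, gpuFormat, data, dataFormat, bytesPerElement)
    local self = setmetatable({}, GLTexture)
    self.width           = width
    self.height          = height
    self.gpuFormat       = gpuFormat or "GL_RGBA"    -- placeholder default
    self.data            = data                      -- may legitimately stay nil
    self.dataFormat      = dataFormat or "GL_BGRA"   -- placeholder default
    self.bytesPerElement = bytesPerElement or 4      -- placeholder default
    return self
end

-- Trailing arguments simply aren't passed, and the defaults kick in:
local tex = GLTexture.Create(640, 480)
```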
And, when you want this texture to be rendered on the screen, after filling some data into it, you can get it displayed on a quad by doing the following:
tex:Render(x,y,width,height);
This is very convenient when you've got an application that displays the output of a video camera, a movie, or whatever your source of video happens to be. In fact, my Kinect camera viewer code looks like this:
require "Kinect\\Kinect"

local captureWidth = 640
local captureHeight = 480

-- Create the Kinect sensor
local sensor0 = Kinect.GetSensorByIndex(0, NUI_INITIALIZE_FLAG_USES_COLOR)

local screenTexture = GLTexture.Create(captureWidth, captureHeight);

function display()
    -- Get the current video frame
    local success = sensor0:GetCurrentColorFrame()

    -- If we successfully got a frame, then copy
    -- the bits to our texture object
    if success then
        screenTexture:CopyPixelData(captureWidth, captureHeight,
            sensor0.LockedRect.pBits, gl.GL_BGRA)
    end

    sensor0:ReleaseCurrentColorFrame();

    screenTexture:Render(0, 0, captureWidth, captureHeight);
end
I think that’s pretty easy. Now if I want to write a screen sharing application, I just need to capture bits of screen from the network, push them into a texture object, and render just like this.
And so it goes.