Model View Project Perspective Screen…
Posted: March 29, 2012
There is an extremely long walk that a pixel has to take from the model to the screen. In order to recreate the render pipeline, I have to take care of all those transforms myself. Starting from the last one, there is the viewport transform. The viewport transform takes “normalized” device coordinates (-1 to 1 in the x-axis and y-axis) and turns them into actual screen coordinates. I have implemented a ViewportTransform object which performs this particular task. It’s fairly straightforward:
local vpt = ViewportTransform(captureWidth, captureHeight)
Then, to transform a point:
local v11 = vpt:Transform(vec3(-.75, .25, 0))
Since these points have to be normalized, they have to be specified in values between -1 and 1 on all axes. What will be returned is a vec2, which will be in the range of captureWidth, captureHeight. That’s real nice. You can easily deal with top-down or bottom-up rendering just by changing the sign of the captureHeight.
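The ViewportTransform class itself isn’t shown in the post, but the math it performs is standard. Here is a minimal sketch of that mapping; the function name and signature are my own assumptions, not the post’s actual API:

```lua
-- Sketch: map NDC x,y in [-1,1] to pixel coordinates inside a
-- width x height rectangle at offset (ox, oy). Passing a negative
-- height flips between top-down and bottom-up, as described above.
local function makeViewportTransform(width, height, ox, oy)
  ox = ox or 0
  oy = oy or 0
  return function(x, y)
    local sx = ox + (x + 1) * 0.5 * width
    local sy = oy + (y + 1) * 0.5 * height
    return sx, sy
  end
end

-- e.g. a 640x480 capture area:
local vpt = makeViewportTransform(640, 480)
print(vpt(-0.75, 0.25))  -- 80  300
```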
In the above picture, I actually set up 4 different viewport transforms:
local vpt1 = ViewportTransform(captureWidth/2, captureHeight/2, 0, 0)
local vpt2 = ViewportTransform(captureWidth/2, captureHeight/2, captureWidth/2, 0)
local vpt3 = ViewportTransform(captureWidth/2, captureHeight/2, captureWidth/2, captureHeight/2)
local vpt4 = ViewportTransform(captureWidth/2, captureHeight/2, 0, captureHeight/2)
Then, when it comes time to render, I just run the vertices through each of the transforms, and I will receive coordinates that are placed in the appropriate quadrant. That’s just how it works in OpenGL as well, although there is the advantage of a hardware assist in that case.
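To make the quadrant behavior concrete, here is a self-contained sketch (helper name and fixed 640x480 dimensions are my own stand-ins for the post’s ViewportTransform and captureWidth/captureHeight): the same NDC vertex, run through four quadrant-sized viewports with different offsets, lands at the same relative spot in each quadrant.

```lua
-- Sketch: one NDC-to-screen mapping, four quadrant offsets.
local function ndcToScreen(x, y, w, h, ox, oy)
  return ox + (x + 1) * 0.5 * w, oy + (y + 1) * 0.5 * h
end

local w, h = 640, 480  -- stand-ins for captureWidth, captureHeight
local offsets = { {0, 0}, {w/2, 0}, {w/2, h/2}, {0, h/2} }

for i, o in ipairs(offsets) do
  -- NDC (0,0) is the center of each half-size viewport
  local sx, sy = ndcToScreen(0, 0, w/2, h/2, o[1], o[2])
  print(i, sx, sy)  -- centers: 160,120  480,120  480,360  160,360
end
```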
This is a good thing because it means I can draw different views of the same thing simply by changing the viewport. That’s what you see in those high-end CAD systems where they show different perspectives on the model while you’re working on it.
Similarly, it allows you to fairly easily parcel up the screen and do things like draw windows, or separate the 3D rendering from the 2D UI portion of the screen.
But this particular transform works with those normalized values. In order to get from the model to these normalized values, there are a few more transforms. One is to transform from the model’s view of the world to the camera’s view of the world. That includes a modelview transform, a projection transform, and a perspective division. Not too bad. Once those are in place, there is a complete 3D rendering pipeline, and triangles can show up again.
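Of those remaining steps, the perspective division is the one that actually produces the normalized values the viewport transform consumes. A minimal sketch, assuming clip-space coordinates (x, y, z, w) have already come out of the modelview and projection transforms (the function name is mine, not the post’s):

```lua
-- Sketch: perspective division takes clip-space (x, y, z, w)
-- to normalized device coordinates by dividing through by w.
local function perspectiveDivide(x, y, z, w)
  return x / w, y / w, z / w
end

-- a hypothetical clip-space point straight out of the
-- projection transform:
local cx, cy, cz, cw = 2, 1, -4, 4
local nx, ny, nz = perspectiveDivide(cx, cy, cz, cw)

-- nx, ny now lie in [-1, 1] and are ready for the
-- viewport transform described above
print(nx, ny, nz)  -- 0.5  0.25  -1
```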