OpenScad Texture Mapping
Posted: September 21, 2011
Up until recently, I was living in a very large house out in the suburbs. The house was too big and the neighborhood too lonely, so we decided to move to a high-rise apartment in the city. Before making the move, I wanted to do one last thing on Thingiverse before packing up the computers. Recently, I was reading some posts on the OpenScad mailing list, and someone mentioned lighting. That got me thinking: why can’t I deal with my own lighting in OpenScad? Well, for one reason or another, it’s just not possible. OpenScad does not expose any of the rendering pipeline to the script interface, so all you can do is basic CSG modeling; the renderer is just a part of the display system.
OK, so I can’t really get into the rendering pipeline, or can I?…
One of the common aspects of any rendering pipeline is doing things with “textures”, which is nothing more than a fancy term for a picture. In classic 3D rendering, the texture object is used to give some life and, well, texture to objects displayed in a scene. Textures are most typically like decals. You just paste them onto the surface of what you’re viewing.
So, I created this thing: http://www.thingiverse.com/thing:11616
OpenScad has no native ability to do much with images within the CSG environment, so I had to start from the ground up.
First, I’ve got a generalized technique for representing “objects” in the environment. OpenScad does have arrays, and you can put anything into an array, even a mix of numbers and strings. The first thing I wanted was a way to represent an image. I need the following basic information:
width, height – in pixels
maxvalue – the scale of the numbers used to represent the pixels
cpe – components per element; how many of the numbers are used per pixel
values – an array containing the color values representing the image; there should be width*height*cpe numbers in the array
Now that I know how I want to represent an image, I want a convenience routine to make it easy for me to use:
function image(width, height, maxvalue, values, cpe) = [width, height, maxvalue, values, cpe];
So, if I had image data that looked like this:
values = [0,0,0, 255,255,255, 255,255,255, 0,0,0];
checker = image(2, 2, 255, values, 3);
I would get the representation of a 2×2 checkerboard pattern. That’s great, but it doesn’t yet amount to texture mapping.
The next thing I have to do is get individual pixel values out of my image. First, simply getting a pixel value:

image_getpixel(checker, 0, 0);

This will return an array with the pixel value located at [0,0] (starting from 0), which happens to be the color value [0,0,0]. Similarly, image_getpixel(checker, 1, 0) would return the value [1,1,1].
Note that the color values have been normalized. That is, take the raw picture data value, divide it by ‘maxvalue’, and you get numbers in the range [0..1]. This is the form OpenScad can deal with, so it makes sense to do the conversion from the beginning.
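The lookup itself is just index arithmetic over the flat ‘values’ array. Here is a minimal sketch of how image_getpixel could be written against the [width, height, maxvalue, values, cpe] layout above; this is my own reconstruction, assuming RGB data (cpe of at least 3), and the actual code in the thing may differ:

```openscad
// Sketch of image_getpixel, assuming img = [width, height, maxvalue, values, cpe].
// A pixel at (x, y) starts at offset (y*width + x)*cpe in the values array.
// Each component is divided by maxvalue to normalize it into [0..1].
function image_getpixel(img, x, y) = [
    img[3][(y*img[0] + x)*img[4] + 0] / img[2],   // red
    img[3][(y*img[0] + x)*img[4] + 1] / img[2],   // green
    img[3][(y*img[0] + x)*img[4] + 2] / img[2]    // blue
];
```

With the checker image from earlier, image_getpixel(checker, 1, 0) picks out the second pixel (offset 3 in values), giving [1,1,1] after normalization.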
That’s great. This in and of itself is a good step towards allowing texture mapping to occur. How would you use it? Well, first, you need to be in control of the generation of your objects, down to the individual facets (triangles). If you are, as I am when generating Bezier surfaces, then you can simply apply a color to each facet. The easiest example is replicating the image itself, creating a cube with the color of each pixel:
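As a sketch (the module name is mine, not necessarily what’s in the thing), rendering an image as a grid of colored 1 mm cubes, one per pixel, could look like this, using the image_getpixel routine above:

```openscad
// Sketch: render img = [width, height, maxvalue, values, cpe] as a grid of
// 1 mm cubes, one per pixel, colored with the normalized pixel value.
module image_show(img)
{
    for (y = [0 : img[1]-1])       // img[1] is the height in pixels
        for (x = [0 : img[0]-1])   // img[0] is the width in pixels
            translate([x, y, 0])
                color(image_getpixel(img, x, y))
                    cube(1);
}

image_show(checker);
```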
That’s nice. Now you have a little picture where the ‘pixels’ are each 1mm in size, and each pixel in the image matches one of the little cubes.
But wait, we don’t want there to be a direct connection between the number of pixels in the image and the size of our object. Also, I don’t want to have to know the actual size of the image. I want to be able to write routines in such a way that I can assume a single size for all images, and magic will just happen.
Therefore we need ‘normalized’ pixel locations, which will be the subject of the next post.