New Year, new tools – OpenScad Tetrahedron


I fired up OpenScad, and created a new tetrahedron model using it.  I’ve done tons with Platonic solids in OpenScad in the past.  This time around though, I figured I’d create the simplest tetrahedron I know.  Here’s the OpenScad code:




    // a tiny cube marks each of the 4 alternating corners of a 100 unit cube
    hull() {
        translate([0, 0, 0]) cube(1);
        translate([100, 100, 0]) cube(1);
        translate([0, 100, 100]) cube(1);
        translate([100, 0, 100]) cube(1);
    }

The trick here is the special relationship between a tetrahedron and a cube. The cube has 8 vertices, and you can create a tetrahedron by using 4 of those vertices. You just have to pick the right ones. You can easily choose the right ones by picking a pair at opposite corners of any face (cutting a diagonal across that face). Then, choose the face exactly opposite, and select the two corners that are different from the ones you picked on the first face. Now you have these 4 vertices.
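To convince yourself the vertex choice works, here’s a quick numerical check. It’s a Python sketch (names and code are mine, purely for illustration): the four alternating corners of a unit cube are mutually equidistant, which is the defining property of a regular tetrahedron.

```python
from itertools import combinations
import math

# The four alternating corners of the unit cube described above.
verts = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# All 6 pairwise distances should be equal (sqrt(2) for a unit cube),
# which is exactly what makes the hull of these points a regular tetrahedron.
edges = [dist(a, b) for a, b in combinations(verts, 2)]
print(all(abs(e - math.sqrt(2)) < 1e-12 for e in edges))  # True
```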

OpenScad has had the ‘hull()’ module since forever, but it didn’t always work so well. In this modern OpenScad, and for this particular case, it works extremely well. Just take the 4 vertices, throw them into a hull(), and out comes a nice tetrahedron!

This is the tetrahedron for the masses. No math required, except to know the vertices of a unit cube. And of course you can scale it up to any size you please.

If you want to do anything tricky with it though, like subtract some holes and the like, you’ll have to know the math again. But, there you have it, tetrahedron solid in a few lines of OpenScad.

Writing Samples from Books

I find that I tend to repeat patterns over and over, through the years. I believe I do this because I’m improving things, or perhaps it’s because I forget how to do something and am revisiting the topic again.

Here’s a picture:


Here’s the code that made it:


package.path = package.path..";../?.lua"
local View3D = require("View3D");

function init()
  gl.glClearColor(0, 0, 0, 0);
  gl.glShadeModel(GL_SMOOTH);
end

function triangle()
  gl.glBegin(GL_TRIANGLES);
  gl.glColor3f(1, 0, 0); gl.glVertex2f(5, 5);
  gl.glColor3f(0, 1, 0); gl.glVertex2f(25, 5);
  gl.glColor3f(0, 0, 1); gl.glVertex2f(5, 25);
  gl.glEnd();
end

function display()
  gl.glClear(GL_COLOR_BUFFER_BIT);
  triangle();
  gl.glFlush();
end

function reshape(w, h)
  gl.glViewport(0, 0, w, h);
  gl.glMatrixMode(GL_PROJECTION);
  gl.glLoadIdentity();
  if (w <= h) then
    glu.gluOrtho2D(0, 30, 0, 30*h/w);
  else
    glu.gluOrtho2D(0, 30*w/h, 0, 30);
  end
end



I got this code from the “OpenGL Programming Guide – Sixth Edition” (the Red Book), page 180. There are a couple of changes. I prefix things with ‘gl’ and ‘glu’. Other than that, the two lines at the top, and the one line at the bottom are all that I need to run this in the TINN environment.

TINN is not the only Lua based environment that allows you to easily script OpenGL. Since Lua is almost synonymous with game programming, I’m sure there are tens of such things. This is my version.

What I like about this is the relative ease with which I can just copy a simple sample out of a book, or magazine, or online source, and try it out. Often, you need to use a lot more machinery, run a compiler, blah blah blah. Even worse, the sample might have been written to run on a Mac, so there might be a lot of specifics for that platform.

I can do this because I have added a khronos module to my TINN installation. One of the key pieces is the View3D class, which sets up an automatic window, event loop, opengl context and the like. If you’re familiar with GLFW, it’s a similar capability, only with scripting.

One difference between TINN and these other environments is that TINN integrates that low level networking stuff as well. So, hooking up an http server/client interface to my OpenGL based ramblings will be fairly straightforward. I should be able to create collaborative applications, for example, without too much fuss; it’s just a built in feature of the environment.

I like having this tool at hand. The combination of a robust runtime environment, combined with scripting, makes it real easy to just pull books off my shelf and try things out. I’m repeating an exercise for I don’t know how many times, but I’m really enjoying the results this time around.

Supporting the Next Industrial Evolution

A long time ago in a galaxy far far away… I was in business with my brother.  We did software on oddball platforms such as NeXT and BeOS, until I left for a job in the Pacific Northwest.  In the past couple of years, we have been having discussions about the economy, where jobs have gone, and what can be done to create new jobs.

One of the thoughts I’ve had on the matter is that certain types of jobs, like heavy manufacturing, are rapidly disappearing.  A lot of these jobs are disappearing because of automation, shifting labor rates, shifting raw materials, or simply because the needs of societies are changing.  Moving out of coal and more into natural gas means fewer coal miners are needed.

One of the things that’s been changing most recently is what I’ll call the democratization of manufacturing.  When you can purchase a “computer” for $250 (the latest Samsung ChromeBook), and you can purchase an assembled 3D printer for less than $1,000, you’ve suddenly got in your hands the tools to begin micro manufacturing.  You can both create the appropriate design files, and, with a reel of plastic, print off your own versions of things, locally.

This sort of scenario is being played out time and again in Maker Faires, NYTimes articles, Wall Street Journal and the like.  There is a lively community of makers in the world, and they’re all kind of chanting in the same way “We want to make our own stuff”.

One aspect of the maker community that is true for me is that my design skills are somewhat limited.  I’m not a SketchUp user, nor will I ever master SolidWorks, or anything remotely like that.  But, I know people who are experts in that stuff.  I’m an expert in certain kinds of software development.  Of late, I’ve been focused on things related to cryptography, identity, and network services.

What does this have to do with democratization?  Well, it occurs to me that the way the world is headed is more about mash ups of different skill sets, combined with a proliferation of relatively inexpensive manufacturing tools.  Once anyone on the street can get something designed to their specifications, and get that object manufactured within a reasonable amount of money, and a reasonable amount of time, we will be engaged in a new paradigm for manufacturing.

My brother’s latest venture (without me) is the pursuit of a first step along this path.  His company (Adamation) is a design firm, which is focused on the rapid development of customized products that can be rapidly manufactured.  By ‘rapid manufacture’, I mean 3D printed in most cases, at least to start.  They’ve just launched their new web site, which is a collaboration of artists from the gaming industry, along with visual artists, web designers and the like.

What they have to start is a series of completely original figurines.  There are 20 new figurines in total, which are bunched into 4 groups, with 5 characters in each group.  Each group has a unique story, and all the characters have individual bios.  You can go to the site, click on stuff, see some visuals, rotating animations, and the like, and ultimately make a purchase.

Now, I think this is really cool, not just because it’s my brother doing it, but because this is how I think goods should be.  Right now, each of the figurines comes in 3 different poses, but I know they have plans to allow the user to customize them further, creating a figurine that is as unique as the user desires.  This is customized manufacturing.  The figurines are printed ‘just in time’, so there’s no inventory.

I could gush on about these things, but I’ll just encourage you to go check out the site if you’re interested in such things.

This does bring up another thought in my mind though.  There are a lot of people thinking about how best to bring about this industrial revolution.  There are tons of little companies working on 3D printers.  Lots of people working on 3D printable models, and lots of individuals dreaming about how to pull it all together.  The pieces are out there.

When I look at the current landscape, I see things like small electronics companies (SparkFun, SeeedStudio, AdaFruit), and their ability to crank out small scale electronics kits.  Then there’s the oh so popular for the moment Raspberry Pi, which brings some compute to the table.  Then there are tons of people focused on creating cases for electronics, and these 3D printer people, who can make just about anything to pull stuff together.

Perhaps what’s missing is the spark, a true vision to help guide things.  Perhaps there’s some amount of funding that needs to be flung into some of the dimmer corners to bring out true innovations.  Some of this happens with KickStarter projects, but there’s probably more to be done.

I was happy to help my brother get started in this little venture by providing capital so that he could purchase the means of production.  Not a fortune, but more than I had in my piggy bank.  Perhaps what’s needed here is more people coming together collectively to fund the creation of new design firms, new design software, new micro manufacturers.  Just a thought.

At any rate, I’m happy my brother’s company has reached the stage where they can start talking about it openly in the world.  I’m sure it will be a success, or at the very least, very provocative.

HeadsUp Live Mandelbrot Zoom

To kick off the usage of shaders, I figured I’d go back to the Mandelbrot example. In this particular case, showing a static image isn’t that exciting, so I figured I’d produce a little movie clip instead. So, what you see here is some typical OpenGL code that uses a fragment shader to do the actual work. I borrowed the fragment shader from here.

This is really getting interesting.  The fragment shader itself needed zero changes, because it’s GLSL code.  The wrapping code changed in the usual sorts of ways.  Here is how the shader program is set up:

function setup_shader(fname)
    local fp =, "r");
    local src_buf = fp:read("*all");
    fp:close();

    local src_array ="char*[1]", ffi.cast("char *",src_buf));

    local sdr = ogm.glCreateShader(GL_FRAGMENT_SHADER_ARB);
    ogm.glShaderSource(sdr, 1, src_array, nil);
    ogm.glCompileShader(sdr);

    local prog = ogm.glCreateProgram();
    ogm.glAttachShader(prog, sdr);
    ogm.glLinkProgram(prog);

    return prog;
end

There’s only one slightly tricky line in here. A shader is a program, and that program gets compiled on the GPU by the vendor’s OpenGL GLSL compiler. So, you’ve got to get the text of that program over to the GPU. The API for doing that is:

void glShaderSource(GLuint shader, GLsizei count, const GLchar **string, const GLint *length);

It’s the “GLchar **string” that’s the only challenge here. Basically, the function expects an array of pointers to strings. So, using the LuaJIT ffi, this turns out to be achievable with the following:

local src_array ="char*[1]", ffi.cast("char *",src_buf));

It may look like a bit of a magical incantation, but once it’s done, you’re good to go. From then on out, it’s standard stuff. Notice the usage of ‘ogm’. That’s the alias for the OglMan table, which is used to pull in all the extensions you could care to use. It really was brain dead easy to do this. Whenever the LuaJIT compiler complained about not being able to find something, I just put “ogm.” in front of it, until all complaints were solved, and the program finally ran.
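For comparison, the same array-of-pointers incantation can be sketched in Python with ctypes (illustrative only; no OpenGL is involved, and the shader source string is a made-up placeholder):

```python
import ctypes

# glShaderSource wants an array of pointers to source strings, so we build a
# one-element array of char pointers -- the ctypes analogue of the LuaJIT
#"char*[1]", ffi.cast("char *", src_buf)) line above.
src_buf = b"void main() { }"
src_array = (ctypes.c_char_p * 1)(src_buf)

print(src_array[0])  # b'void main() { }'
```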

And the result in this case is a nice fly through of a Mandelbrot set. Julia sets can be added just as easily by changing the .glsl file that I’m loading into the fragment shader.

This bodes well. It will be a small matter to wrap this stuff up in a couple of convenience objects so that I won’t have to make all those GLSL calls explicitly.

One of the hardest parts to deal with typically is the setting of ‘uniform’ variables. This is the way in which you communicate values from the outside world into the shader code. I’m thinking Lua will help me do that in such a way that’s fairly natural, and doesn’t take a lot of code. Maybe I can use the same trick I did with OglMan (implement __index and __newindex). If I could do that, then it would look the same as setting/getting properties on an object to interact with your GLSL shader. And that would be a fine thing indeed as then the code would just slip right into the rest of the Lua code, without looking dramatically different. Never mind that the underlying code is actually running on the GPU.

At any rate, there you go. Live zooming on a Mandelbrot set, utilizing the GPU for acceleration, all written in Lua (except for the shader code of course). I wonder if the shader code could be written in Lua as well, and then just converted…

Banate CAD Lurches towards 2011 finish line

It has not been a quiet week in Lake Bellevue…

There is a New Release of Banate CAD, which can be found here: Banate CAD

What we have here, is NOT a failure to communicate, but rather the ability to communicate using multiple mechanisms.

But, those squiggly circular lines don’t look anything like what I’d be doing with a 3D CAD program meant for modeling things to be printed…

Well, one of the core tenets of Banate CAD is that hard things are doable, and easy things are easy.  Processing is a programming environment that has gained a tremendous amount of popularity over the years.  In particular, it has made it relatively easy for graphics artists to create amazing display installations which include audio, video, animations, and graphics in general.  It has been able to achieve this because it pulls together all the little bits and pieces necessary to do such things, and presents them in a relatively easy fashion that anyone with a high school education (or not) can understand and utilize.

What does this have to do with 3D printing?  Well, I think the state of tools for 3D modeling are similar to what Processing came into oh so many years ago.  There are quite a lot of capable tools, which experts can utilize to create amazing things.  But, more average designers such as myself, find it extremely hard to get up that learning curve.  I want all that power, but I don’t want to spend the 10,000 hours necessary to gain the expertise.

Also, I want to do animations:

What’s this?  It’s just one of the many animations that are typical in Processing.  A point on a circle makes what kind of motion when tracked over time?  A sine/cosine wave of course!  It’s easy to see.

Now, what if I have a cool idea for a new 3D printer, and I want to visualize the mechanics of the thing actually working?  What package do I need for that?  Well, you can pay thousands of dollars and get an all singing/dancing CAD program, which can probably simulate a fighter jet in a wind tunnel, or you could use Banate CAD.

The current release of Banate CAD has the ability to animate things.  It’s fairly straightforward.  You load up your design, and implement an object that has an “Update()” method on it.  That method will get called on every clock “Tick()”, and you do whatever it is you want to do in terms of updating your geometry.  Separately, the “Render()” method will be called, and you’ll see nifty animation.
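The Update()/Render() split can be sketched as a simple tick loop. This is an illustrative Python version with invented names, not the actual Banate CAD code:

```python
class Orbiter:
    """A toy shape following the Update()/Render() protocol described above."""
    def __init__(self):
        self.angle = 0.0

    def update(self, dt):
        # Called on every clock tick; advance the geometry's state over time.
        self.angle = (self.angle + 90.0 * dt) % 360.0

    def render(self):
        # A real renderer would draw geometry here; we just report the state.
        return f"angle={self.angle:.1f}"

# A minimal tick loop: every tick updates all shapes, then renders them.
shapes = [Orbiter()]
frames = []
for tick in range(4):
    for s in shapes:
        s.update(0.25)          # pretend each tick is 0.25 seconds
    frames.append(shapes[0].render())
print(frames)  # ['angle=22.5', 'angle=45.0', 'angle=67.5', 'angle=90.0']
```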

This is a fun thing to do with metaballs, as seeing them interact and undulate in an animation is some great fun.  Kind of like watching those lava lamps of old.  And of course, things that are solids in your scene are printable.

So, there are actually two “Programs” in the package now.

BanateCAD.wlua – The standard 3D Banate CAD program

ProcessingShell.wlua – A Program that feels very similar to the Processing program

As the animation system is a bit finicky, the best way to run things is to actually bring up the Lua development environment (perhaps SciTE), and run from there.  That way you can easily stop a runaway program.

All of the examples that are in the package should work correctly, without breaking things.

There are quite a few other features in this package as well, in various states of repair.  There is in fact CSG support.

At the moment, it is challenged.  It only works with a sphere and cylinder, not with all the other shapes in Banate CAD in general.  It will get there though, and be a lot simpler than what’s there now, but, since the code is a live working thing, it’s there, even though it is incomplete.  Wear a seat belt if you try it out.

And so it goes.  Where is it going?  First of all, 3D modeling.  That gives a lot of options for visualizing stuff that is in fact 3D, not just 3D models though.  Then, add the 2D support, like in Processing, and you begin to get a very interesting system.

At some point, a .fab file will contain the information for both the 3D model itself, as well as the interface description so the user can input parameters to the design before printing, if they so choose.  In order to make that a reality, the design files need to be able to put up a UI, that includes classic text, buttons, sliders, and deal with keyboard and mouse events.  That’s how these two things fit together.

In the meanwhile, being able to learn a new system by leveraging books about other similar systems makes for a broad reach.  For example, I picked up a few books on Processing, and just go through the examples.  When I come across a feature Banate CAD does not currently support, I add it, and improve the system overall.

And there you have it.  Another week, another release.

I will be going on holiday in a couple of days, but there will surely be a release to ring in the new year.

Banate CAD Third Release

Is it Monday already!

Here is the third release of Banate CAD!



This week saw multiple improvements on multiple fronts.  I can’t actually remember which features showed up between the last week’s release and today’s release, but here’s an attempt.

Generalized Bump Maps – I made these earth and moon things, and then I generalized the technique so that any BiParametric based shape can utilize it.  That’s good because everything from spheres to bezier surfaces is based on the BiParametric object.

import_stl_mesh – You can create a single instance of an imported mesh, and then use that single instance multiple times.  That’s a great way to save on resources when you’re adding multiples of the same geometry to the scene.

Blobs – Quite a lot of work on this module.  The main thing was getting the ‘beamsearch’ algorithm working correctly.  That allows for the “real-time” viewing of metaball based objects.  This was a lot of fun to get working.  It doesn’t work in all cases, but it works in the vast majority of cases that will be used in 3D solid modeling.

Some features concern the way the program itself works.  I’ve added support for reading some modules in from directories at runtime.  This allows me to separate things that are ‘core’ and MUST be in the package from a number of other things that are purely optional.  It also allows me to support users adding things in a sane way.  They can eventually go into a “Libraries” directory.

Also, this separation, and the addition of the BAppContext object, allows for the creation of tools that have nothing to do with UI.  For example, a script that just imports a mesh, performs some operations on it, then exports it, does not need all the UI code.

Oh yah, and before I forget, one of the biggest additions, or rather fleshing outs, was the animation system.  The AnimationTimer object has actually been in there for a while, and the Animation menu has as well.  It’s just that nothing used them.  Now, if you create shapes and implement the Update() function, your shape will be informed whenever the clock ticks, giving you an opportunity to change things based on time.  There are a few examples included in the example files.  It’s pretty straightforward.

The animation system is quite handy for doing things like playing with metaballs, seeing how things change as you vary parameters.  This will also come in handy when describing physical things that move.  For example, I could model my Up! printer, and then actually set it in motion.  But, that’s for another time.


There you have it.  Another weekly release!

Fast Metaball Surface Estimation

Using modeling techniques such as Constructive Solid Geometry (CSG), and others, a great number of 3D designs can be quickly and easily created.  Although I have found CSG to be a quite productive tool, I also find there are limitations when you want to produce shapes that can not be easily described by combinations of basic primitives such as cubes, cylinders, and spheres.

A long time ago, Jim Blinn created the “Blobby” technique of forming what appear to be more organic looking shapes.  This technique was created for computer graphic rendering, not for 3D modeling.  There was a later refinement of the technique which took on the name “Metaball”.  In both cases, the technique is a subclass of ‘isosurface’, whereby the surface of the object is described through mathematical combinations of various surface equations.

The easiest way to think about what these things are is to consider what happens with droplets of honey that get really close to each other.  Individually, each droplet might look like a perfect little circle sitting on a table.  Get them close enough, and their boundaries begin to merge, and they become some nice amorphous shape.  Metaballs are like that.  Balls individually will just be balls.  But, as soon as they start to get close to each other, they begin to form a unified blobby looking surface.

Although there is great flexibility in using metaballs for modeling, it can be fairly challenging to try and construct reasonable models in a short amount of time.  Once you have the set of balls that will be used to describe your object, it must be rendered and meshed, so it can become a 3D solid.  Since metaballs are more typically used for onscreen graphics, rather than modeling for 3D printing, the renderers can take shortcuts.  All a renderer needs to do is cast rays into the void, and see where it hits the surface that describes the shape.  Once it hits a point, it is done, and can move on to the next point, without having to connect the points in any meaningful way.

When constructing a 3D model, it is convenient to be able to create a mesh, preferably one that can be enclosed, so that we can generate solids which can be printed.

What I have been struggling with is how to efficiently construct that mesh, without having to do a brute force calculation of the entire space within the volume where the metaball lives.  What I was looking for was a way to cast beams into the metaball equation, and at the same time connect the points to form a nice triangle mesh.  So, first is the beam casting.

The metashape is described by an equation, f(x,y,z).  The way the equation works, any value that is < 1 is ‘outside’ the surface.  Any value > 1 is ‘inside’ the surface.  Any value that == 1 is ‘on’ the surface.  So, what I’d like to do is search a space, and for each point in this space, figure out the value of the function.  Where the value is 1, I know I have hit the surface.

This is the very crux of the problem.  How do you efficiently choose x,y,z values such that you don’t waste a bunch of time running the calculation for points that are nowhere near the surface.

The novelty that occurred to me was that if I assume the whole of the metaball structure is in the middle of a sphere of sufficient radius, I could run a laser beam around the surface of that sphere, casting beams towards the metaball structure.  Essentially, what I would like to do is cast a beam towards the center of the sphere.  From there, if I’m ‘inside’, then cast again, only go half way between our last point and the point from which I started the last cast.  If I land ‘outside’, then do the same thing, but cast inward.

This amounts to a binary search of the surface, along a particular path from a location outside the surface, in towards the center.  It seems to be fairly efficient in practice, primarily governed by the original search radius, the number of balls, and the number of rays you want to cast.  In addition, how tolerant you are in terms of getting close to ‘1’ for the function will have an impact on both the quality and the time it takes.
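As a sketch, the field function plus the binary beam search might look like this. It’s an illustrative Python version, not the actual Banate CAD code, and it assumes the common r²/d² metaball field, with the surface taken wherever the field crosses 1, matching the convention described above:

```python
import math

def field(balls, p):
    """Sum of r^2/d^2 metaball contributions; > 1 inside, < 1 outside."""
    total = 0.0
    for (bx, by, bz, r) in balls:
        d2 = (p[0]-bx)**2 + (p[1]-by)**2 + (p[2]-bz)**2
        if d2 == 0.0:
            return float("inf")   # at a ball center: definitely inside
        total += (r * r) / d2
    return total

def cast_beam(balls, origin, center, threshold=1e-4, max_iter=100):
    """Binary-search along the ray origin->center for field(p) == 1."""
    lo, hi = 0.0, 1.0   # parameter along the segment; lo outside, hi inside
    p = origin
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        p = tuple(o + t * (c - o) for o, c in zip(origin, center))
        v = field(balls, p)
        if abs(v - 1.0) < threshold:
            break                 # close enough to the surface
        if v < 1.0:
            lo = t                # still outside: move inward
        else:
            hi = t                # inside: back off toward the origin
    return p

balls = [(0, 0, 0, 5)]            # one ball of radius 5 at the origin
hit = cast_beam(balls, origin=(50, 0, 0), center=(0, 0, 0))
print(round(hit[0], 2))           # ~5.0: the surface of the radius-5 ball
```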

So, now we have an efficient beam search routine.  How to drive it?  The easiest way is to simply go around a sphere, progressively from the ‘south pole’ to the north, making a full circumnavigation at each latitude.  So, just imagine again, that laser beam (which we know is doing a binary search to find the surface intersection), and run it around the sphere.  Since there is coherence in how we navigate around the sphere, I can connect one point to the next, and easily triangulate between the points.  What you get in the end is not just a point cloud, but a direct calculation of all the interesting vertices, already formed into a nice mesh that can be a solid.
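Since the sweep visits the sphere in a regular latitude/longitude grid, triangulation is just a matter of connecting neighboring beam hits. A minimal sketch of the index generation (Python, with names and the row-major layout being my own assumptions):

```python
def grid_triangles(usteps, wsteps):
    """Triangulate a latitude/longitude grid of surface hits.

    Point (i, j) is the beam hit at longitude step i, latitude step j,
    stored row-major at index i * (wsteps + 1) + j. Each grid cell becomes
    two triangles, so neighboring hits are stitched into a mesh.
    """
    tris = []
    for i in range(usteps):
        for j in range(wsteps):
            a = i * (wsteps + 1) + j
            b = a + 1                    # next latitude, same longitude
            c = a + (wsteps + 1)         # same latitude, next longitude
            d = c + 1
            tris.append((a, c, b))
            tris.append((b, c, d))
    return tris

tris = grid_triangles(4, 3)
print(len(tris))  # 24 triangles: 4 * 3 cells, 2 triangles each
```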

Here, you can clearly see the facets, and perhaps imagine the trace of the beams from bottom to top, and around the circumference.  This will not work for all metaball networks.  It will not work in cases where there is a separation of the balls.  In that case, two networks need to be formed, or at least the two need to be healed where there is a hole between them.

Similarly, this technique will not work where there is some sort of concave part that the “laser beam” cannot see from outside the surface.  Basically, if you imagine yourself flying around the object, if you cannot see some part of it from the outside, then it will not render correctly.  This can possibly be remedied by getting in closer in such situations, but that’s another exercise.

Creating your own metaball object is fairly straight forward:

local anifrac = 0.95

local balls = {{0,0,0,5}, {0, 1, 19.95*anifrac, 5}}
local isosurface = {
    balls = balls,
    radius = 50,
    Threshold = 0.001,

    USteps = 15,
    WSteps = 15,
}

You can control a few things here.  First, is the ball network itself.  You can put in as many balls as you want.  Of course, the more balls, the slower the calculations.  It might require quite a few balls to make an interesting structure.  It’s just something to play around with.

Next is the radius.  This value determines the size of the enclosing search space.  It doesn’t have to be hugely accurate, but within an order of magnitude is nice.

The Threshold value determines how tight the calculation is.  0.001 is a good starting point.  This will give you a fairly tight surface.  If you go with a larger value, the calculations will be quicker, but your object will begin to look fairly coarse.  An even smaller value, 0.0001, would make for an even tighter fit to the surface, and thus a smoother object.

The last part has to do with how many facets you want to generate.  This completely depends on the size of your model.  Basically, it determines how many stops there are around the circumference of the sphere.  USteps is how many longitudinal steps, and WSteps is how many latitudinal steps.

When modeling, it’s probably good to go with looser values, that way you can play around quickly.  As you get closer to wanting to generate your final model, you up all the numbers, and go get coffee.

In this final picture, I’ve set the Threshold to 0.00001, the USteps to 360, and the WSteps to 360.  That’s fairly fine detail, and the generated model will be quite large, but very smooth and accurate to the model.

The conclusion of this little exercise?  Metaballs are a very interesting and valuable tool for the 3D modeling of shapes that are not easily described by standard CSG operations.  Banate CAD has a metaball module that has the speed to render models in real-time, and the flexibility to allow you to dial in as much detail as you want.  This makes modeling with metaballs in Banate CAD a relatively painless prospect, which opens up the possibility of much more organic shapes for modeling.  The beamsearch algorithm makes it practical.

How much configuration?

Since Banate CAD exposes everything to the scripter, you are free to change pretty much anything at runtime.

Here, I have entered one line of code in my script to change the color scheme:

defaultviewer.colorscheme = colorschemes.Space

the ‘Space’ color scheme is defined in the colorschemes.lua file and looks like this:

colorschemes["Space"] = {
    BACKGROUND_COLOR = {0, 0, 0, 1},
    CROSSHAIR_COLOR = {0.5, 0.75, 0.75, 1},
}

You can easily add your own color scheme to that file before you start Banate CAD, then it will be available to you later in your script.

Or, since it’s just a table that’s available at runtime, you could just as easily construct a scheme from within your script.  And start using it.

That’s just one of a number of things you can customize about the environment.  You can also change things like the lighting. If you wanted to add a light, you could do something like:

local newLight = BLight:new({
    ID = gl.LIGHT2,
    Diffuse = {0.5,1,0.75,1},
    Position = {2,1,-1,0},
    Enabled = true
})

table.insert(defaultsceneviewer.Lights, newLight)

Or something to that effect.  In this particular case, a bit of the underlying OpenGL is showing through (gl.LIGHT2), but that will get cleaned up as the lighting system becomes more generic.  But, you could do it if you wanted to.  You could even change the default lighting system by going into SceneViewer.lua and just changing the lights to be whatever you want.

This is one of those things that hardcore tinkerers might be interested in, but someone like my mother would just leave well enough alone.

At the moment, I’m working on the Animation system.  That’s pretty cool because that picture above can actually move!  Yep, you got it, the little moon can be shown to orbit the earth.  Of course, if the moon were actually proportionally that big and close, our skin would probably be ripping off our bodies from the tidal forces, but it makes for a nice visualization nonetheless.

Animation can only be shown if you’re actually playing with the script though.  I could of course add a “Make Movie” feature, but that’s a bit of work to do cross platform.  I can certainly generate a bunch of .png images that can then be stitched together by another piece of software.

At any rate, Banate CAD is first and foremost a 3D modeling/printing piece of software.  That doesn’t mean it has to be boring though.  By adding visualizations and animations, I believe it will make the 3D design experience that much more approachable and interesting, and thus more people will do it because it looks like a video game.

Exposing the Core

Banate CAD strives to follow the design cliche “Make that which is simple, simple, make that which is hard possible”, or something to that effect.  To that end, Banate CAD is structured in nicely composable layers, or at least in my mind.  This can be seen in one area, the import of mesh files.

Here, I have used the simplest of import commands:


import_stl("bowl_12.stl")

translate({100, 0, 0})
import_stl("bowl_12.stl")

The import_stl() function will simply read in the .stl file, and add the mesh to the scene automatically. In this case, the second import_stl() call will load the same .stl again, creating another instance of the mesh, and add that to the scene as well.

This is pretty straight forward, and works just fine. For limited numbers of meshes, the fact that you’re replicating meshes, doesn’t really have that much of an effect. Where this is a pain though is when you want to load multiple instances of the same mesh, perhaps 10s or hundreds of them.

Here for displaying 16 of the bowls, I’d actually like to load the bowl once, and simply use that same mesh to perform many operations. So, instead of using the import_stl(), I use the import_stl_mesh() function like the following:

function test_instancing(rows, columns)
    local amesh = import_stl_mesh("bowl_12.stl")

    for row = 1,rows do
        for col = 1,columns do
            color({row/rows, (row+col)/(rows+columns), col/columns, 1})
            addmesh(amesh)
        end
    end
end

In particular, it is this call at the beginning of the function that does the magic.

local amesh = import_stl_mesh("bowl_12.stl")

Here, we are loading a single instance of the mesh. Then later, we can use the addmesh() function to actually add the mesh to the scene. From then on out, it will be displayed just like in the first case, but it won’t have to replicate the mesh to do it.

It really shines when you have a whole bunch of things:

In this case, with 100 instances of the bowl, it doesn’t take much longer to load/render than simply displaying a single bowl.

What does this have to do with layering? When you call import_stl(), that code is essentially doing import_stl_mesh() followed by addmesh(). Rather than hiding that, it is made readily available to the user. The casual user who's not concerned with hundreds of instances, or optimizations, can simply use the import_stl() call. Those who want to get into the nitty gritty and manage their own instances can use the import_stl_mesh() call.

The other benefit of the version that hands back the mesh is that you can then perform operations on the mesh itself if you so choose. Say you have some smoothing routines, something that fixes vertex normals, an exporter, or any manner of things: you can apply them easily, because you have the mesh in hand and not just a graphic representation of it on the screen.
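In other words, the convenience call is just the two lower-level calls composed. A sketch in Python, where the function names mirror the Lua API but the bodies are placeholder stubs:

```python
scene = []

def import_stl_mesh(filename):
    # Placeholder: a real implementation parses the .stl file.
    return {"file": filename}

def addmesh(mesh):
    scene.append(mesh)

def import_stl(filename):
    # The simple layer is nothing more than the two primitives composed.
    mesh = import_stl_mesh(filename)
    addmesh(mesh)
    return mesh

m = import_stl("bowl_12.stl")
assert scene == [m]
```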

It might be possible to write code like this:

local amesh = import_stl_mesh("file.stl")
export_dxf(amesh, "file.dxf")  -- hypothetical, once DXF support exists

Banate CAD does not currently have any DXF support, but when it shows up, you'll be able to easily do this sort of thing. And while you're at it, since you can do export_stl(amesh, "newfile.stl"), you could write code within your file to generate multiple .stl files, one per mesh in your scene, if you feel like it. The default "Export STL" from the File menu simply iterates through all the meshes in the scene and exports them all. The access and layering in Banate CAD allow anyone to do the same from within a script.
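That "one .stl per mesh" loop could be sketched like this in Python; `export_stl` and the scene list here are hypothetical stand-ins for the Lua calls:

```python
# Placeholder scene: three meshes represented as simple dictionaries.
scene = [{"name": "bowl"}, {"name": "vase"}, {"name": "cup"}]
exported = []

def export_stl(mesh, filename):
    # Placeholder: a real exporter writes the mesh's triangles to disk.
    exported.append(filename)

# One output file per mesh in the scene.
for i, mesh in enumerate(scene):
    export_stl(mesh, "mesh_%d.stl" % i)

assert exported == ["mesh_0.stl", "mesh_1.stl", "mesh_2.stl"]
```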

So, the system is very flexible, and gives the scripter the exact same capabilities as the guy who wrote Banate CAD in the first place (me). If you want to stick with the simple, you can do that. If you want to go down to the metal from within your scripts, you can do that as well.

And Height Maps for all…

How can there possibly be 3D modeling and visualization without height maps?

Of course, height maps have been a staple of 3D graphics for quite some time now.  Why not extend this to 3D modeling for printing?  In this case, I have created a nice generic RubberSheet object.  A RubberSheet is a BiParametric with a size, so it's just a convenience wrapper on the BiParametric.  It assumes a basic X,Y grid, but then things get really interesting.

You can give it a number of steps to iterate in each of the axes, you can give it a color sampler, and you can give it a VertexSampler.  What?  Yah, you can just hand it a routine which will calculate the vertex at each particular grid point.  On its own, it will just lay out a flat grid, with nothing interesting on it.
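That default flat-grid behavior amounts to spacing vertices evenly over the sheet's size, with z fixed at zero. A sketch in Python, consistent with the Size/Resolution arithmetic the samplers use (the function name is mine, for illustration):

```python
def flat_grid_vertex(u, w, size, resolution):
    # Default vertex placement for a flat sheet: evenly spaced
    # x,y over the sheet's size, z fixed at zero.
    x = u * size[0] / resolution[0]
    y = w * size[1] / resolution[1]
    return (x, y, 0.0)

# A 100x100 sheet at resolution 10x10 puts grid point (5,10) at (50,100).
assert flat_grid_vertex(5, 10, (100, 100), (10, 10)) == (50.0, 100.0, 0.0)
```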

This is using the standard ImageSampler.

local colorSampler = ImageSampler.new({
    Filename = 'firstheightmap.png',
    Size = size,
    Resolution = res,
    MaxHeight = 64,
})

rubbersheet({
    Size = size,
    Resolution = res,
    ColorSampler = colorSampler,
})

But what if I used the color sampler both for the color at a position and for the x,y,z value as well? After all, it's pretty easy to calculate the x and y values. And the z could be calculated by taking the luminance value (gray) of the color pixel and using that as the height. In fact, the code is as simple as this:

function ImageSampler.GetVertex(self, u, w)
    local col = self:GetColor(u, w)

    -- Turn the color into a grayscale value
    local height = luminance(col)

    -- Map the grid position into x,y, and scale the
    -- grayscale value by MaxHeight to get z
    local x = u*self.Size[1]/self.Resolution[1]
    local y = w*self.Size[2]/self.Resolution[2]
    local z = height*self.MaxHeight

    local vert = {x, y, z}

    return vert, {0,0,1}
end
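The luminance() helper does the color-to-height conversion. Its definition isn't shown here; one common choice is the Rec. 601 weighting of the RGB channels, sketched in Python (assuming color channels in the 0..1 range):

```python
def luminance(color):
    # Rec. 601 luma weights; green dominates perceived brightness.
    r, g, b = color[0], color[1], color[2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# White pixels map to full height, black pixels to zero.
assert abs(luminance((1.0, 1.0, 1.0, 1.0)) - 1.0) < 1e-9
assert luminance((0.0, 0.0, 0.0, 1.0)) == 0.0
```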

The ImageSampler is a functor; that is, an object that carries some state and has simple functions called on it. At any rate, since I've implemented GetVertex, and the BiParametric object calls GetVertex when you tell it which VertexSampler to use, you can now do the following:

rubbersheet({
    Size = size,
    Resolution = res,
    ColorSampler = colorSampler,
    VertexSampler = colorSampler,
    Thickness = -10,
})

As an added bonus, you can set the Thickness property and automagically get a rubber sheet of the specified thickness, which makes it printable. This creates a rubber sheet that follows the contours up and down. If you wanted a flat base plane instead, you could alter the lengths of the normals to have a reciprocal relationship to the height, but that's a silly way of doing things, even though it would work.
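One plausible way the Thickness trick works is by offsetting each vertex along its normal to form the second surface. This Python sketch shows the idea; it is a guess at the approach, not Banate CAD's actual code:

```python
def offset_vertex(vert, normal, thickness):
    # Move a vertex along its unit normal by `thickness` to
    # generate the matching point on the inner surface.
    return tuple(v + thickness * n for v, n in zip(vert, normal))

# With the default normal {0,0,1} and Thickness = -10, the inner
# surface sits 10 units below the outer contour.
assert offset_vertex((5.0, 7.0, 64.0), (0.0, 0.0, 1.0), -10) == (5.0, 7.0, 54.0)
```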

And there you have it: height maps with ease. And since this is integrated with the color sampler, you could easily assign colors based on the height value, for instance snow at the tops, brown dirt below that, then greenery at the bottom.
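A height-based color ramp like that is just a few comparisons. A hypothetical sketch in Python, with made-up thresholds and RGBA values:

```python
def terrain_color(height, max_height):
    # Hypothetical ramp: greenery low, dirt in the middle, snow on top.
    t = height / max_height
    if t > 0.8:
        return (1.0, 1.0, 1.0, 1.0)   # snow
    elif t > 0.4:
        return (0.5, 0.35, 0.2, 1.0)  # brown dirt
    else:
        return (0.1, 0.6, 0.1, 1.0)   # greenery

assert terrain_color(60, 64) == (1.0, 1.0, 1.0, 1.0)  # near the top: snow
assert terrain_color(5, 64) == (0.1, 0.6, 0.1, 1.0)   # near the bottom: green
```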