Compact Component Composability

What's that? Finally able to display 3D again? Well, yeah. Given the way HeadsUp works these days, a CAD program, or at least a simple STL viewer, is just a few lines of "sample app" code. It looks like this:

require "STLCodec"
require "Scene"
require "SceneViewer"

local defaultscene = Scene();
local sceneviewer = SceneViewer();

-- Typical rendering initialization stuff
function init()
    -- create a mesh to be rendered
    local mesh = import_stl_mesh("bowl_12.stl");

    -- Add the mesh into the scene
    defaultscene:appendCommand(CADVM.mesh(mesh))
end

function reshape(w, h)
    if h == 0 then
        h = 1
    end
    gl.glViewport(0, 0, w, h)
    sceneviewer:SetSize(w, h)
end

function display()
    sceneviewer:Render(defaultscene);
end

So, for roughly 25 lines of code, you can load and display STL files. The SceneViewer object takes care of displaying the scene, including zooming, rotating around, and all that. To load the STL in the first place, I'm using the import_stl_mesh() function, which comes from the STLCodec class. That's a nicely isolated component: it deals with reading and writing STL files. Right now it only handles the ASCII form, but a little bit of code will enable binary as well. It's nice because it's totally isolated from the rest of the code, so it can easily be improved.

Another bit of isolation comes from the Scene and SceneViewer objects. The scene is a simple repository for shapes, meshes, and transforms such as rotations and translations. Your basic scene manager. The SceneViewer knows how to take a scene and render it. It starts with a default camera, position, and color scheme. All of those things are changeable, but the defaults are good enough for most uses.

And that's about it. All the little pieces are fairly small, none more than about 400 lines of code, but how they compose is the key here.

Now, to add another format, such as VRML, is a matter of adding a VRML codec. It would load into a TriangleMesh just like the STL codec does. One of the nice benefits of this separation of concerns is that you can deal with things at whatever level you like. For example, once you load a mesh in, you don't have to render it at all. You can simply run some routines over it and save it out again. For instance, if you wanted to eliminate duplicate vertices from a mesh, you might do something like:

local mesh = import_stl_mesh("bowl.stl");
export_stl_mesh("bowl_new.stl", eliminate_duplicate_vertices(mesh));

…and you're done. I've written about the desire for this kind of composability previously, but now it's actually achievable. This kind of composability also makes for a ready-made "extensions" capability for the CAD program. Since the CAD program BanateCAD is nothing more than the composition of a few different pieces, they can be reshaped into any configuration, and any part can load any new modules it likes, without the primary app having to intervene in any way.
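
For illustration, here is a rough sketch of what a helper like eliminate_duplicate_vertices() could look like. The mesh layout assumed here (a 'vertices' array plus a 'faces' array of index triples) is an assumption for the sketch; the actual TriangleMesh layout may differ.

local function eliminate_duplicate_vertices(mesh)
    local seen, remap, vertices = {}, {}, {}

    -- Collapse vertices that share the same (rounded) coordinates
    for i, v in ipairs(mesh.vertices) do
        local key = string.format("%.6f,%.6f,%.6f", v[1], v[2], v[3])
        if not seen[key] then
            vertices[#vertices + 1] = v
            seen[key] = #vertices
        end
        remap[i] = seen[key]
    end

    -- Re-point each face at the surviving vertex indices
    for _, face in ipairs(mesh.faces) do
        face[1], face[2], face[3] = remap[face[1]], remap[face[2]], remap[face[3]]
    end

    mesh.vertices = vertices
    return mesh
end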


Eventus Obscuricus

There always comes a time in a programmer's life when they must deal with an "event loop". Since the dawn of the teletype, programmers have had to deal with the age-old question: "should I poll for events, or should I be notified asynchronously?" This very question is baked deeply into our CPU architectures with things like interrupts and queues. It's simply unavoidable. So, here I sit at the precipice of unification (2D and 3D), and the question arises: should I poll, or should I be notified?

Well, I’m actually in favor of asynchronous processing because that’s the way the world works in general.  Sure, we drive towards things, but more often than not, we are responding to something in our environment.  I figure the same should be true of a program.  Yes, it has some goals in mind, but largely it should be responsive to activities that occur on the periphery.

HeadsUp is the 2D aspect of BanateCAD.  This is where events like mouse and keyboard interaction enter the system.  Up until recently, I dealt with these events in a synchronous sort of way, with a typical event loop.  Now, I do a hybrid.  Yes, there is a fundamental message pump that pulls messages out of the system’s message queue, but then that pump turns around and stuffs them into an asynchronous queue, which is handled by a coroutine (cooperative multitasking).  That’s handy for a couple of reasons.

First of all, the message pump does not have to be completely synchronous with the rest of the program. It can just pull messages out of the pipe, stick them into the queue, rinse and repeat. That makes the message pump very simple, and easily separable from the rest of the program. Second, the part of the program that deals with processing messages does not have to know anything about the mechanics of the message pump. All it knows is that it will be notified when there are messages to be processed. It can then pull as many messages from the async queue as it likes, and deal with them at its leisure.
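
As a minimal sketch of that hybrid (the queue and message field names here are illustrative, not the actual HeadsUp API), the pump and the coroutine handler might look like this:

local msgqueue = {}

-- The handler drains whatever is in the queue when resumed, then yields
-- until the pump pokes it again.
local handler = coroutine.create(function()
    while true do
        while #msgqueue > 0 do
            local msg = table.remove(msgqueue, 1)
            print("handling", msg.kind)   -- stand-in for real processing
        end
        coroutine.yield()
    end
end)

-- The pump knows nothing about processing; it just moves a system message
-- into the queue and nudges the handler.
local function pump(sysmsg)
    msgqueue[#msgqueue + 1] = sysmsg
    coroutine.resume(handler)
end

pump({kind = "mousemove", x = 10, y = 20})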

Perhaps it’s only a point an asynchronous programmer could love.

Now that there’s a generic queue that can process commands, I want to make everything into commands.  So, here’s an example of a very typical case.  When I want to draw a line, there is typically some interface connected to the graphics API that looks like this:

Graphics:DrawLine(pt1, pt2)

That will talk to the rendering engine directly, and a line will magically appear.  But, what if under the covers, what was really happening was this:


function Graphics:DrawLine(pt1, pt2)
    local cmd = Graphics:CreateDrawLine(pt1, pt2)
    outboundqueue:Enque(cmd)
end

From the programmer’s perspective, there’s nothing different in their conceptual model.  They call DrawLine(), and that’s the end of it.  They assume a line is drawn, and it will be drawn, eventually.

From the system’s perspective, a whole world of possibilities just opened up.  Now that the command is packaged up and stuck into a queue, it could be immediately removed from the queue, and really executed against the graphics system, or it could be stored off in some persistent store somewhere, possibly on a distant part of the planet.  And, as long as you’ve got commands all packaged up and ready to go, you could ship them to other renderers while you’re at it.  That might be interesting and useful.
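
As a sketch of one possible consumer for such a queue (the command shape and renderer calls here are assumptions, not the actual BanateCAD internals), draining and dispatching could look like:

-- Map command kinds to handlers that actually touch the graphics system
local dispatch = {
    drawline = function(cmd, renderer)
        renderer:DrawLine(cmd.pt1, cmd.pt2)
    end,
    -- other command kinds get their own handlers here
}

-- Pull commands off a plain table queue, oldest first, and dispatch them
local function drain(queue, renderer)
    while #queue > 0 do
        local cmd = table.remove(queue, 1)
        local handler = dispatch[cmd.kind]
        if handler then handler(cmd, renderer) end
    end
end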

At any rate, these are fairly fundamental questions to deal with: should you poll or be notified, and should you call directly or issue commands? In the case of BanateCAD, there is a hybrid of poll/notify, and commands are definitely the way to go.


Assembling Bits and Pieces

If those monkeys pound on those keyboards enough, eventually a masterpiece will emerge…  Well, software development can be like that sometimes.  I’ve been spending quite a lot of time on 2D graphics.

Part of doing 3D CAD is being able to create a surface of revolution (revoloid).  Well, in order to create a revoloid, I need to be able to draw a curve, which will then be rotated around an axis.  Well, in order to create a curve, I need to have a nice grid I can use as a guide, and I need control points, and mouse control of those points, etc.

So, I create those various bits and pieces.  I think the grid thing is really cool.  You can specify the distance between the light and dark lines.  You can specify the colors as well.  It becomes useful when you need a backdrop for drawing, or displaying stuff in general to scale.

There have been other graphics along the way as well.  This arrow thing is useful in that the dark gray area can hold other graphics and controls.  Perhaps it doesn’t make sense sitting there on its own, but it does when it’s combined with other things.

Of course, no UI toolkit these days would be complete without having tabbed views.  So, there they are…  Of course, since the tabs are created using the ShapeBuilder object, you can get rounded corners instead of those sharp corners if you like.  Since these are just graphics objects, you can also fairly easily do drop shadows, glowing backgrounds, and the like.  And, since they are “Actor” objects, they can respond to “Update” messages, and change with time, if that’s useful.

I recently purchased a BeagleBone. A nifty little piece of kit, that one. It uses a TI AM3358 ARM Cortex-A8 based microprocessor, and the board has Ethernet and USB built in. One of the benefits of the little board is that you can run an Android distribution on it. Also, it has some graphics capabilities, including support for a VNC host, so you can see what's on its "screen".

I fully intend to put BanateCAD on this little device. The BeagleBone is good in that it represents a fairly constrained system: it has a 4GB microSD card and 256MB of RAM, and runs at 700MHz. I'm thinking this should be a beefy enough spec to run a simple 3D CAD modeling program such as BanateCAD. Also, if I can run on this little device, then certainly I can run on any Android device. I'll see how it goes.

At any rate, things progress.  2D UI, 3D shapes, monkeys on typewriters…  Soon enough something interesting will pop out of this little exercise.

 


HeadsUp – A new Release

Trundling along, minding my own business, making the world safe for doing some graphics…

I have created a new package called “HeadsUp”.  It is the 2D portion of the BanateCAD package.  Programming within HeadsUp is very similar to programming in the “Processing” environment, but it’s all in Lua instead of Java of course.

You can find the download here: HeadsUp

Within the package, you will find many examples to play with. HeadsUp gets its name from its relation to 3D graphics: you can use it to create a "heads-up display", if you will. You can easily create static images, or you can create dynamic animations, as it has an integral animation clock. You load a .lua program, click the "Start" menu, and see what happens.

The other day, I went to the grocery store and got the groceries packed in a paper bag.  On the back of the bag was this interesting chart about the growing seasons of various fruits and vegetables.

I looked at that and thought: “How hard would that be to replicate using HeadsUp?”

So, I set out to replicate the graph. Along the way, I discovered a few things. I cleaned up some of the text rendering, found a bug in setting the fill color, and improved text alignment. The basics of line, rectangle, and text rendering are obviously there, as well as the ellipse and line thickness. It was also an experiment in data representation. For the raw data, such as the various growing seasons per food item, should I represent it as CSV, XML, or JSON? The easiest, since I am using Lua, was to simply represent it as a Lua table. That way, there's no conversion from one type/schema system to another. Also, I think the Lua version of the data is just as readable as XML or JSON, so it could easily act as the exchange format as well.
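
As a hypothetical sketch of what that Lua-table representation might look like (field names and values here are made up for illustration):

local growing_seasons = {
    { name = "Asparagus",    start_month = 4, end_month = 6  },
    { name = "Strawberries", start_month = 6, end_month = 8  },
    { name = "Pumpkins",     start_month = 9, end_month = 11 },
}

-- The chart code can iterate the table directly; no parsing step required
for _, item in ipairs(growing_seasons) do
    print(item.name, item.start_month, item.end_month)
end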

This is all goodness.  The other thing I rediscovered is that laying things out by hand is a real pain.  I did use algorithms to do the layout, but I could go further.  Doing layout in a more declarative way is much simpler.  Of course we know this from decades of UI design, and the growth of HTML.  I have the rudiments of layout, but it’s not used in any serious way as yet.

Another little bit that got worked on was the mouse interaction. There is a new mouse event object that gets passed around the system. Nothing special if you're not programming down at that level, but it's there nonetheless.

This is an image of the mouse tracking test program.  Basically, you move the mouse around on the screen, and the position is displayed in the center there.  A nice cross-hairs will follow your mouse around, just so you can see it tracking correctly.  That’s a bit nice because under your program’s control, you can simply turn off the system’s cursor, and create your own cursor to be whatever graphic you feel like rendering.

One more thing I discovered when doing this little bit was that the mouse tracking system does not have to be as complex or rigid as I was going to make it.  I have been borrowing some code from a UI framework I did in C#.  What I have typically done in the past is to make a core “Graphics” object, which any graphic would descend from in a class hierarchy.  This graphic object would take care of hierarchy, grouping, mouse tracking, keyboard tracking, transform, style, etc.

Programming in Lua, I find there’s a much simpler way.  For rendering, any ‘object’ simply implements the “Render(graphPort)” method.  The object is handed the graph port, and it does whatever rendering it wants to do.  It does not have to be in any class hierarchy.  If the developer wants hierarchy, they can simply add what they want.
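
For example, here is a minimal sketch of such an object (the graphPort drawing calls are assumptions for illustration):

local Crosshair = {}
Crosshair.__index = Crosshair

function Crosshair.new(x, y, size)
    return setmetatable({x = x, y = y, size = size}, Crosshair)
end

-- No base class required; implementing Render(graphPort) is all it takes
function Crosshair:Render(graphPort)
    graphPort:DrawLine({self.x - self.size, self.y}, {self.x + self.size, self.y})
    graphPort:DrawLine({self.x, self.y - self.size}, {self.x, self.y + self.size})
end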

Similarly, for mouse tracking, an object can register itself with the system saying “yes, I participate in mouse and keyboard interactions”.  This is kind of an observer/observed pattern.  Again, you don’t have to subclass, just implement the “MouseActivity()” method.  You’ll be handed the mouse activity object, and you can do whatever you want from there.
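
Continuing the sketch above, registration might look roughly like this (the registerMouseObserver() call is hypothetical; the actual function name in HeadsUp may differ):

local tracker = Crosshair.new(0, 0, 10)

-- The activity object is assumed to carry the event's x/y coordinates
function tracker:MouseActivity(activity)
    self.x, self.y = activity.x, activity.y
end

registerMouseObserver(tracker)   -- "yes, I participate in mouse interactions"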

Of course, if you want to change the behavior of the system, that's fairly straightforward. You have the source code, so you can do what you like. Or you could inject some functors here and there, and make small, compact, compatible changes.

I always have the goal of reducing the code size.  Smaller is definitely better.  In this case, I’ve managed to stay within my 256K budget, including code for the examples.  So, it can only get smaller from here.

At any rate, a new toy to play with.


Banate CAD Lurches towards 2011 finish line

It has not been a quiet week in Lake Bellevue…

There is a New Release of Banate CAD, which can be found here: Banate CAD

What we have here, is NOT a failure to communicate, but rather the ability to communicate using multiple mechanisms.

But those squiggly circular lines don't look anything like what I'd be doing with a 3D CAD program meant for modeling things to be printed…

Well, one of the core tenets of Banate CAD is that hard things are doable, and easy things are easy.  Processing is a programming environment that has gained a tremendous amount of popularity over the years.  In particular, it has made it relatively easy for graphics artists to create amazing display installations which include audio, video, animations, and graphics in general.  It has been able to achieve this because it pulls together all the little bits and pieces necessary to do such things, and presents them in a relatively easy fashion that anyone with a high school education (or not) can understand and utilize.

What does this have to do with 3D printing? Well, I think the state of tools for 3D modeling is similar to what Processing came into oh so many years ago. There are quite a lot of capable tools, which experts can use to create amazing things. But more average designers such as myself find it extremely hard to get up that learning curve. I want all that power, but I don't want to spend the 10,000 hours necessary to gain the expertise.

Also, I want to do animations:

What’s this?  It’s just one of the many animations that are typical in Processing.  A point on a circle makes what kind of motion when tracked over time?  A sine/cosine wave of course!  It’s easy to see.

Now, what if I have a cool idea for a new 3D printer, and I want to visualize the mechanics of the thing actually working?  What package do I need for that?  Well, you can pay thousands of dollars and get an all singing/dancing CAD program, which can probably simulate a fighter jet in a wind tunnel, or you could use Banate CAD.

The current release of Banate CAD has the ability to animate things. It's fairly straightforward. You load up your design and implement an object that has an "update()" method on it. That method will get called on every clock "Tick()", and you do whatever you want in terms of updating your geometry. Separately, the "Render()" method will be called, and you'll see nifty animation.
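
A minimal sketch of such an animated object might look like the following. The update()/Render() protocol is as described above; the sphere() call and its parameters are purely illustrative.

local orbiter = {
    angle = 0,

    -- Called on every clock tick; advance the orbit a little each time
    update = function(self)
        self.angle = self.angle + 0.05
    end,

    -- Called whenever the scene is drawn
    Render = function(self)
        local x = 20 * math.cos(self.angle)
        local y = 20 * math.sin(self.angle)
        -- place a small sphere at the computed position (illustrative call)
        sphere({radius = 2, offset = {x, y, 0}})
    end,
}

addshape(orbiter)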

This is a fun thing to do with metaballs, as seeing them interact and undulate in an animation is some great fun.  Kind of like watching those lava lamps of old.  And of course, things that are solids in your scene are printable.

So, there are actually two “Programs” in the package now.

BanateCAD.wlua – The standard 3D Banate CAD program

ProcessingShell.wlua – A Program that feels very similar to the Processing program

As the animation system is a bit finicky, the best way to run things is to actually bring up the Lua development environment (perhaps SciTE), and run from there.  That way you can easily stop a runaway program.

All of the examples that are in the package should work correctly, without breaking things.

There are quite a few other features in this package as well, in various states of repair. There is, in fact, CSG support.

At the moment, it is challenged. It only works with a sphere and a cylinder, not with all the other shapes in Banate CAD. It will get there though, and end up a lot simpler than what's there now. But since the code is a live, working thing, it ships even though it's incomplete. Wear a seat belt if you try it out.

And so it goes. Where is it going? First of all, 3D modeling. That gives a lot of options for visualizing things that are in fact 3D, not just 3D models. Then add the 2D support, like in Processing, and you begin to get a very interesting system.

At some point, a .fab file will contain the information for both the 3D model itself, as well as the interface description so the user can input parameters to the design before printing, if they so choose.  In order to make that a reality, the design files need to be able to put up a UI, that includes classic text, buttons, sliders, and deal with keyboard and mouse events.  That’s how these two things fit together.

In the meanwhile, being able to learn a new system by leveraging books about other similar systems makes for a broad reach.  For example, I picked up a few books on Processing, and just try to go through the examples.  When I come across a feature Banate CAD does not currently support, I just add it, and improve the system overall.

And there you have it.  Another week, another release.

I will be going on holiday in a couple of days, but there will surely be a release to ring in the new year.


Banate CAD Third Release

Is it Monday already!

Here is the third release of Banate CAD!

 

 

This week saw multiple improvements on multiple fronts.  I can’t actually remember which features showed up between the last week’s release and today’s release, but here’s an attempt.

Generalized Bump Maps – I made this earth and moon thing, and then I generalized it so that any BiParametric-based shape can utilize the technique. That's good because everything from spheres to Bezier surfaces is based on the BiParametric object.

import_stl_mesh – You can create a single instance of an imported mesh, and then use that single instance multiple times.  That’s a great way to save on resources when you’re adding multiples of the same geometry to the scene.

Blobs – Quite a lot of work on this module.  The main thing was getting the ‘beamsearch’ algorithm working correctly.  That allows for the “real-time” viewing of metaball based objects.  This was a lot of fun to get working.  It doesn’t work in all cases, but it works in the vast majority of cases that will be used in 3D solid modeling.

Some features touch on the way the program itself works. I've added support for reading some modules in from directories at runtime. This allows me to separate the things that are 'core' and MUST be in the package from a number of other things that are purely optional. It also allows me to support users adding things, in a sane way. Those additions can eventually go into a "Libraries" directory.
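
As a rough sketch of what runtime loading from such a directory could look like (this assumes LuaFileSystem is available; the actual mechanism in Banate CAD may well differ):

local lfs = require("lfs")

local function load_library_modules(dir)
    for entry in lfs.dir(dir) do
        if entry:match("%.lua$") then
            -- pull the optional module into the running environment
            dofile(dir .. "/" .. entry)
            print("loaded optional module: " .. entry)
        end
    end
end

load_library_modules("Libraries")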

Also, this separation, along with the addition of the BAppContext object, allows for the creation of tools that have nothing to do with UI. For example, a script that just imports a mesh, performs some operations on it, and then exports it does not need all the UI code.

Oh yeah, and before I forget, one of the biggest additions, or rather fleshings-out, was the animation system. The AnimationTimer object has actually been in there for a while, and the Animation menu has as well; it's just that nothing used them. Now, if you create shapes and implement the Update() function, your shape will be informed whenever the clock ticks, giving you an opportunity to change things based on time. There are a few examples included in the example files. It's pretty straightforward.

The animation system is quite handy for doing things like playing with metaballs, seeing how things change as you vary parameters.  This will also come in handy when describing physical things that move.  For example, I could model my Up! printer, and then actually set it in motion.  But, that’s for another time.

 

There you have it.  Another weekly release!


Fast Metaball Surface Estimation

Using modeling techniques such as Constructive Solid Geometry (CSG), and others, a great number of 3D designs can be quickly and easily created.  Although I have found CSG to be a quite productive tool, I also find there are limitations when you want to produce shapes that can not be easily described by combinations of basic primitives such as cubes, cylinders, and spheres.

A long time ago, Jim Blinn created the "Blobby" technique for forming what appear to be more organic-looking shapes. This technique was created for computer graphics rendering, not for 3D modeling. There was a later refinement of the technique which took on the name "Metaball". In both cases, the technique is a kind of 'isosurface', whereby the surface of the object is described through mathematical combinations of various surface equations.

The easiest way to think about what these things are is to consider what happens with droplets of honey that get really close to each other.  Individually, each droplet might look like a perfect little circle sitting on a table.  Get them close enough, and their boundaries begin to merge, and they become some nice amorphous shape.  Metaballs are like that.  Balls individually will just be balls.  But, as soon as they start to get close to each other, they begin to form a unified blobby looking surface.

Although there is great flexibility in using metaballs for modeling, it can be fairly challenging to construct reasonable models in a short amount of time. Once you have the set of balls that will describe your object, it must be rendered and meshed so it can become a 3D solid. Since metaballs are more typically used for on-screen graphics, rather than modeling for 3D printing, the renderers can take shortcuts. All a renderer needs to do is cast rays into the void and see where they hit the surface that describes the shape. Once it hits a point, it is done and can move on to the next point, without having to connect the points in any meaningful way.

When constructing a 3D model, it is convenient to be able to create a mesh, preferably one that can be enclosed, so that we can generate solids which can be printed.

What I have been struggling with is how to efficiently construct that mesh, without having to do a brute force calculation of the entire space within the volume where the metaball lives.  What I was looking for was a way to cast beams into the metaball equation, and at the same time connect the points to form a nice triangle mesh.  So, first is the beam casting.

The metashape is described by an equation, f(x,y,z). The way the equation works, any value that is < 1 is 'outside' the surface, any value > 1 is 'inside' the surface, and any value that == 1 is 'on' the surface. So, what I'd like to do is search a space, and for each point in this space, figure out the value of the function. Where the value is 1, I know I have hit the surface.
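
As a sketch, one common formulation of such a field function sums a contribution from each ball; whether Banate CAD uses exactly this falloff is an assumption, but it matches the {x, y, z, r} ball format used below.

-- Build f(x,y,z) from a list of balls, each {x, y, z, r}.
-- Each ball contributes r^2 / d^2, so the sum is exactly 1 at distance r
-- from a lone ball, greater than 1 inside, and less than 1 outside.
local function metaball_field(balls)
    return function(x, y, z)
        local sum = 0
        for _, b in ipairs(balls) do
            local dx, dy, dz = x - b[1], y - b[2], z - b[3]
            local d2 = dx*dx + dy*dy + dz*dz
            if d2 == 0 then return math.huge end   -- at a ball center
            sum = sum + (b[4] * b[4]) / d2
        end
        return sum
    end
end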

This is the very crux of the problem: how do you efficiently choose x,y,z values so that you don't waste a bunch of time running the calculation for points that are nowhere near the surface?

The novelty that occurred to me was that if I assume the whole of the metaball structure is in the middle of a sphere of sufficient radius, I could run a laser beam around the surface of that sphere, casting beams towards the metaball structure. Essentially, what I would like to do is cast a beam towards the center of the sphere. From there, if I land 'inside', then cast again, only going halfway between the last point and the point from which I started the last cast. If I land 'outside', then do the same thing, but cast inward.

This amounts to a binary search of the surface, along a particular path from a location outside the surface in towards the center. It seems to be fairly efficient in practice, primarily governed by the original search radius, the number of balls, and the number of rays you want to cast. In addition, how tolerant you are in terms of getting close to '1' for the function will have an impact on both the quality and the time it takes.
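
A minimal sketch of that search, assuming the field function from the earlier sketch and a center point that is inside the surface (names are illustrative):

-- Bisect between a point outside the surface and one inside it, until the
-- field value is within 'threshold' of 1.
local function find_surface(field, outer, center, threshold)
    local lo, hi = outer, center      -- lo is outside, hi is inside
    for _ = 1, 64 do                  -- cap the number of bisection steps
        local mid = {
            (lo[1] + hi[1]) / 2,
            (lo[2] + hi[2]) / 2,
            (lo[3] + hi[3]) / 2,
        }
        local v = field(mid[1], mid[2], mid[3])
        if math.abs(v - 1) < threshold then
            return mid                -- close enough to the isosurface
        elseif v < 1 then
            lo = mid                  -- still outside; move inward
        else
            hi = mid                  -- inside; back outward
        end
    end
    return lo                         -- best estimate if tolerance not met
end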

So, now we have an efficient beam search routine. How do we drive it? The easiest way is to simply go around a sphere, progressively from the 'south pole' to the north, making a full circumnavigation at each latitude. So, just imagine again that laser beam (which we know is doing a binary search to find the surface intersection), and run it around the sphere. Since there is coherence in how we navigate around the sphere, I can connect one point to the next and easily triangulate between the points. What you get in the end is not just a point cloud, but a direct calculation of all the interesting vertices, already formed into a nice mesh that can be a solid.
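
Building on the find_surface() sketch above, the sweep and triangulation could look roughly like this (again, a sketch rather than the actual beamsearch code):

local function sweep_surface(field, radius, usteps, wsteps, threshold)
    -- Sample the surface on a latitude/longitude grid of beam casts
    local rows = {}
    for j = 0, wsteps do
        local phi = -math.pi/2 + math.pi * j / wsteps      -- south pole to north
        rows[j] = {}
        for i = 0, usteps do
            local theta = 2 * math.pi * i / usteps         -- around the circumference
            local outer = {
                radius * math.cos(phi) * math.cos(theta),
                radius * math.cos(phi) * math.sin(theta),
                radius * math.sin(phi),
            }
            rows[j][i] = find_surface(field, outer, {0, 0, 0}, threshold)
        end
    end

    -- Neighboring samples are adjacent on the sphere, so each quad of points
    -- splits directly into two triangles.
    local triangles = {}
    for j = 0, wsteps - 1 do
        for i = 0, usteps - 1 do
            local a, b = rows[j][i],     rows[j][i + 1]
            local c, d = rows[j + 1][i], rows[j + 1][i + 1]
            triangles[#triangles + 1] = {a, b, d}
            triangles[#triangles + 1] = {a, d, c}
        end
    end
    return triangles
end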

Here, you can clearly see the facets, and perhaps imagine the trace of the beams from bottom to top and around the circumference. This will not work for all metaball networks. It will not work in cases where there is a separation between the balls. In that case, two networks need to be formed, or at least the two need to be healed where there is a hole between them.

Similarly, this technique will not work where there is some sort of concave part that the "laser beam" cannot see from outside the surface. Basically, if you imagine yourself flying around the object and you cannot see some part of it from the outside, then that part will not render correctly. This can possibly be remedied by getting in closer in such situations, but that's another exercise.

Creating your own metaball object is fairly straightforward:

local anifrac = 0.95

local balls = {{0,0,0,5}, {0, 1, 19.95*anifrac, 5}}
local isosurface = shape_metaball.new({
    balls = balls,
    radius = 50,
    Threshold = 0.001,

    USteps = 15,
    WSteps = 15,
})

addshape(isosurface)

You can control a few things here. First is the ball network itself. You can put in as many balls as you want. Of course, the more balls, the slower the calculations. It might take quite a few balls to make an interesting structure. It's just something to play around with.

Next is the radius.  This value determines the size of the enclosing search space.  It doesn’t have to be hugely accurate, but within an order of magnitude is nice.

The Threshold value determines how tight the calculation is. 0.001 is a good starting point; this will give you a fairly tight surface. If you go with a larger value, the calculations will be quicker, but your object will begin to look fairly coarse. An even smaller value, 0.0001, would make for an even tighter fit to the surface, and thus a smoother object.

The last part has to do with how many facets you want to generate. This completely depends on the size of your model. Basically, it determines how many stops there are around the circumference of the sphere. USteps is how many longitudinal steps, and WSteps is how many latitudinal steps.

When modeling, it’s probably good to go with looser values, that way you can play around quickly.  As you get closer to wanting to generate your final model, you up all the numbers, and go get coffee.

In this final picture, I’ve set the Threshold to 0.00001, the USteps to 360, and the WSteps to 360.  That’s fairly fine detail, and the generated model will be quite large, but very smooth and accurate to the model.

The conclusion of this little exercise? Metaballs are a very interesting and valuable tool for the 3D modeling of shapes that are not easily described by standard CSG operations. Banate CAD has a metaball module that has the speed to render models in real time, and the flexibility to let you dial in as much detail as you want. This makes modeling with metaballs in Banate CAD a relatively painless prospect, which opens up the possibility of much more organic shapes for modeling. The beamsearch algorithm makes it practical.


Documentation Begins

Although there are numerous examples of how to use things in Banate CAD, I have started to add some documentation to these pages.  It can be found in the site header, or you can go directly to this link: Banate CAD Documentation

At the moment it’s not much more than a list of functions with their parameter names, but that will certainly get fleshed out over time.

This preliminary documentation is of the “Reference Material” variety.  I will begin to add some more of the “User Manual” style over time.


Language Skinning

Banate CAD is written in Lua, and so are the .fab files which you use to actually do your modeling and visualization.  Since the source code is part of the distribution, a user can get into infinite mischief by changing things anywhere in the system.

The more classic forms of software distribution involve a ‘compiled’ product instead.  In that case, the end consumer does not have access to changing anything core about the program.  In such cases, extensions and modifications are either done by recompiling the product, or through some ‘add-on’ mechanism.  That works fine in most cases, but what if you want something a bit more dynamic?

One of the things I have wanted in my text-based visualizer is the ability to extend the language itself. The core language of Lua is fairly straightforward and flexible. I don't want to extend the core syntax of the language, but I do want to add some "look and feel" from other languages. There is one 'language' I find to be interesting, and that's GLSL, the vertex and fragment shader language from OpenGL.

I don’t mean to replicate the pipeline architecture of OpenGL/GLSL, but I like a number of the built-in functions.  For example:

float smoothstep(float edge0, float edge1, float x)
vec2 smoothstep(vec2 edge0, vec2 edge1, vec2 x)
vec3 smoothstep(vec3 edge0, vec3 edge1, vec3 x)
vec4 smoothstep(vec4 edge0, vec4 edge1, vec4 x)

When x <= edge0, the function returns 0. When x >= edge1, it returns 1. In between, it performs a Hermite interpolation between 0 and 1. This is great for putting a bit of smoothness on things, whether it is motion paths, color ramps, or the corners of a solid being rendered. A very useful function.
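
For illustration, a plain Lua version of the scalar case might look roughly like this (the definition actually used may differ):

local function clamp(x, lo, hi)
    if x < lo then return lo end
    if x > hi then return hi end
    return x
end

-- Defined globally so scripts can call it like the GLSL built-in
function smoothstep(edge0, edge1, x)
    local t = clamp((x - edge0) / (edge1 - edge0), 0, 1)
    return t * t * (3 - 2 * t)   -- Hermite interpolation between 0 and 1
end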

How do you add language extensions? In this case, I have created a file named "glsl.lua". I put it into the directory with all the other files of the distribution. Perhaps they will be better organized in the future, but at the moment, you can just drop them in the main directory. From there, you'll need to get the file pulled in with the rest of the sources. The main "Language Skin" is contained in the SceneBuilder.lua file, so I add a single line to that file:

require("glsl")

And that’s it.  Now, from within my scripts, I can start using this new function, and the various others that I’ve added into the glsl.lua file.

The SceneBuilder.lua file is itself just a language skin. It has all the convenience functions such as addshape(), addmesh(), cone(), bicubicsurface(), and the like. If you don't like the naming there, or if you want to have a completely different skin, you can just replace this file. If you want your code to look and act more like JavaScript, for example, you can just create a JavaScript skin instead. In that case, just replace the SceneBuilder.lua file with your version and carry on.

Of course, the downside to all this flexibility is that you end up with a situation where it becomes hard to share .fab files across multiple differently modified versions of the program.  To bring some sanity to the situation, extensions/modifications will have to be managed properly.  There will have to be an inviolable core, and an “Add-Ons” mechanism whereby the script can know what environment it requires.

Although it might be challenging to have such flexibility, I believe it gives Banate CAD an advantage in terms of development. It can evolve much more rapidly, and new things can be added much more quickly than if it were done in the more classical style.

 


How much configuration?

Since Banate CAD exposes everything to the scripter, you are free to change pretty much anything at runtime.

Here, I have entered one line of code in my script to change the color scheme:

defaultviewer.colorscheme = colorschemes.Space

The 'Space' color scheme is defined in the colorschemes.lua file and looks like this:

colorschemes["Space"] = {
    BACKGROUND_COLOR = {0, 0, 0, 1},
    CROSSHAIR_COLOR = {0.5, 0.75, 0.75, 1},
    WIREFRAME_COLOR = {1,1,1,1}
}

You can easily add your own color scheme to that file before you start Banate CAD, then it will be available to you later in your script.

Or, since it's just a table that's available at runtime, you could just as easily construct a scheme from within your script and start using it.
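
For example (the scheme name and color values here are made up), something like this would work from inside a script:

colorschemes["Midnight"] = {
    BACKGROUND_COLOR = {0.05, 0.05, 0.1, 1},
    CROSSHAIR_COLOR = {0.4, 0.4, 0.6, 1},
    WIREFRAME_COLOR = {0.9, 0.9, 1, 1}
}

defaultviewer.colorscheme = colorschemes["Midnight"]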

That’s just one of a number of things you can customize about the environment.  You can also change things like the lighting. If you wanted to add a light, you could do something like:

local newLight = BLight:new({
    ID = gl.LIGHT2,
    Diffuse = {0.5,1,0.75,1},
    Position = {2,1,-1,0},
    Enabled = true
})

table.insert(defaultsceneviewer.Lights, newLight)

Or something to that effect. In this particular case, a bit of the underlying OpenGL is showing through (gl.LIGHT2), but that will get cleaned up as the lighting system becomes more generic. But you could do it if you wanted to. You could even change the default lighting system by going into SceneViewer.lua and just changing the lights to be whatever you want.

This is one of those things that hardcore tinkerers might be interested in, but someone like my mother would just leave well enough alone.

At the moment, I’m working on the Animation system.  That’s pretty cool because that picture above can actually move!  Yep, you got it, the little moon can be shown to orbit the earth.  Of course, if the moon were actually proportionally that big and close, our skin would probably be ripping off our bodies from the tidal forces, but it makes for a nice visualization nonetheless.

Animation can only be shown if you’re actually playing with the script though.  I could of course add a “Make Movie” feature, but that’s a bit of work to do cross platform.  I can certainly generate a bunch of .png images that can then be stitched together by another piece of software.

At any rate, Banate CAD is first and foremost a 3D modeling/printing piece of software.  That doesn’t mean it has to be boring though.  By adding visualizations and animations, I believe it will make the 3D design experience that much more approachable and interesting, and thus more people will do it because it looks like a video game.