The Plot for American Politics

I’ve been scratching my head quite a bit (perhaps due to hair loss) trying to figure out what’s going on with American politics.  Then, last night, it finally occurred to me.

Whoever is scripting the thing is a fan of Jim Carrey movies!

I mean, here are the titles of some of the movies which I think apply the most to what I’ve seen over the past few months.

  • Dumb and Dumber
  • Liar Liar
  • Yes Man
  • The Mask
  • Kick-Ass 2
  • The Truman Show

Dumb and Dumber (1994) is exceptional because it even has a sequel, “Dumb and Dumber To”.  That one really gets me because the first movie had such an amazingly unbelievable premise that I just kind of stared at the screen, dumbfounded at what I was watching, but unable to pull myself away.  Then it was followed up 20 years later by the same shenanigans.  So far I’ve managed to resist watching the sequel.  It was amazing to see that it was made at all, and every time I saw the trailer I thought, ‘What?  Did these guys have children who are reprising the roles of their parents?’, but no, it was the same guys again.

Liar Liar… Well, they are aspiring politicians.  Only, in the Jim Carrey movie, the protagonist always has a heart of gold, love wins out, and everything is alright.  But really, doesn’t it even bother people how many absolute falsehoods are thrown about during these elections?  I don’t mean the typical bending of the truth, I mean absolute fabrications and denials of ‘facts’.  In this age of Twitter, WikiLeaks, and cell phones on every corner, it’s becoming harder and harder to hide behind manufactured truth.

Yes Man.  There’s nothing extraordinary about Yes Man in terms of the movie, other than the title.  I mean, isn’t it obvious?  The candidates are surrounded by apologists, supporters, and frankly “Yes Men”.  That’s how they’re goaded on and enabled in whatever actions they take.  There’s always someone standing by, ready to support their actions with a hearty, heartfelt “Yes sir/ma’am!  You’re absolutely right!”.  Although, the opposite also seems true in the current election cycle, as one candidate has had a stream of supporters abandon him over his missteps while his most fervent supporters are still saying “Yes!”.  It’s kind of like when the Sith lord is telling Luke Skywalker (in a slow, slithery voice) “yeesssss…  Feel the power of the dark side coursing through you…”

The Mask – Well, this one is too obvious, just like Yes Man.  Who are we really?  We are all trying to hide something.  The power of the mask is to amplify who we really are.  In recent weeks, thanks to the amplification power of modern media and digital archives, we’ve been able to see who our candidates really are.  One of them turns out to be this horndog monster of a character who thinks “there’s always time for one last kiss!”.  The other’s true self is almost overwhelmed by access to power, but really means well.

Kick-Ass 2 – I’ve never seen this one, but the synopsis seems to be in keeping with our troubled politicians.  Basically, someone is trying to do good, but the nasty villain keeps rearing their ugly head at inopportune times.  In this case, the villain might be in the form of foreign powers.  Just as we are trying to reap the rewards of the end of the Cold War, they just keep showing up to ruin our basking in the glow of our own importance.

The Truman Show – This one, more than any of the others, really does put things in perspective.  In 1998, this movie was about a guy who grows up in a false reality.  This movie predates Survivor by two years, and “The Apprentice” by six, so I’ll call it the first true reality show.  It’s the ultimate in voyeurism.  A guy grows up, owned by a corporation, knowing no truths other than those within his little bubble of existence.  The whole world is in on the joke, except Truman himself.  This has so many parallels to where we are and where we’re headed.  I have no doubt there will be a reality show about getting to the presidency.  All emails, all contacts, all meetings will be recorded and broadcast as we follow our “candidates” as they race towards their date with destiny, and the chance to serve the American people.  The ultimate politician will simply be an entertainer.  The governing part of politics will be executed by various “producers” and a “staff”, which rotates through the show.

I get it.  Good joke!

All that remains for election 2016 is to see if the finishing plot is more in line with “Bruce Almighty” or “The Grinch”.

 


Building a Tower PC – 2016, part 1

Last time around, I outlined what would go into my build.  This time, I’ve actually placed the order for the parts.  I was originally going to place it with Newegg, but the motherboard was out of stock.  This forced me to consider Amazon instead.  Amazon had everything, and at fairly decent prices.  That, plus Prime shipping and a good return policy, made it a relative no-brainer (sorry, Newegg).

I did a hand wave on some of the parts in the last post, so I’ll round out the inventory in detail here.

RAM 

This item used to require a ton of thought in the past, but today you can spit in generally the right direction and things will likely work out.  I wanted to outfit my rig with 64GB of RAM total.  I wanted RAM that was reliable and looked good.  I probably should have gone for some red-colored stuff, but I went with the black G.SKILL Ripjaws V Series DDR4 PC4-25600 3200MHz parts (model F4-3200C16D-32GVK).


They come in sets of two (32GB per set), so I ordered two sets.  Who knows, maybe I’ll get lucky and they’ll be red.

SSD Storage

I know from my laptop and my current Shuttle PC that having an SSD as your primary OS drive is an absolute must these days.  Please, no 5400 RPM spinning rust!  For this item, I chose the Samsung V-NAND SSD 950 Pro M.2 NVM Express 256GB.

What’s this NVMe thing anyway?  Well, it turns out that flash disks are way faster than spinning rust (go figure), and yet we’ve been constrained to the spinning rust interfaces and data transfer rates of old for quite some time.  NVMe represents a different interface to the flash memory, going way beyond what SATA can provide.  Luckily, the chosen motherboard supports this interface, so I should be able to boot from this super fast thing.  I probably should have gone for the 512GB version, but things being the way they are, I can probably install a much larger 1TB version in three years’ time for the same price.  This will be more than good enough for now, and for the foreseeable future.
Mass Storage
I did get some spinning rust to go in the box as well.  Western Digital Black 2TB – WD2003FZEX (7200 RPM SATA 6 Gb/s).
I have a pair of these spinning away in my Synology NAS, and they haven’t failed in the past four years, so I think I’m good with this.  I could have gone with a bigger size, like 6TB, but I’m thinking, why put so much storage on a single disk?  Better to spread the load across several disks.  And as long as I’m spreading the load across several disks, why not just use a giant NAS with an optical link, or 10Gbit Ethernet, or something?  As this machine is going to find multiple uses with multiple OSes, I didn’t feel the need to make it a storage monster.  Rather, it is a showpiece workstation with decent performance.  More specialization can come through additional equipment outside the box.
I have yet to consider my cooling options.  When the boxes arrive, I’ll assemble once just to make sure all the parts work.  I’ve been eyeing some cool looking Thermaltake liquid cooling gear.  I’m considering the whole reservoir/pump/tubing thing.  It looks cool, and there are ready made kits that look fairly easy to assemble.  The open case I’ve chosen just begs to be mod’d with the liquid cooling stuff.
At any rate, boxes should arrive in a week.  I took advantage of the Amazon offer to get $5.99 gift certificates towards using their pantry service instead of getting next day delivery.  How crazy is that!  I figure I won’t be able to assemble until next week anyway, so why not get some free stuff from Amazon in payment for my patience.
Having this kit arrive will be an incentive to further clean up my office (man cave) so that I’ll have enough desk and floor space to spread things out, take pictures, and assemble without losing any of the pieces.

Building a tower PC – 2016

Well, since I’m no longer interested in building the ultimate streaming PC, I’ve turned my attention to building a more traditional tower PC.  What?  Those are so 1980!  It’s like this.  What I’m really after is using the now not-so-new Vulkan API for graphics programming.  My current challenge is that my nice tidy Shuttle PC doesn’t have the ability to run a substantial enough graphics card to try the thing out!  I do in fact have a tower PC downstairs, but it’s circa 2009 or something like that, and I find this a rather convenient excuse to ‘upgrade’.

I used to build a new PC every two years, back in the day, but tech has moved along so fast that it hardly makes sense to build so often; you’d be building every couple of months to keep pace.  The machines you build today will last longer than an old hardware hacker cares to admit, but sometimes you’ve just got to bite the bullet.

Trying to figure out what components to use in a current build can be quite a challenge.  It used to be that I’d just go to AnandTech, look at that year’s different builds, pick a mid-range system, and build something like that.  Well, AnandTech is no longer what it used to be, and TomsHardware seems to be the better place for the occasional consumer such as myself.

The first thing to figure out is the excuse for building the machine, then the budget, then the aesthetics.

Excuse:  I want to play with the Vulkan graphics API

Budget: Less than $2,000 (to start ;-))

Aesthetics: I want it to be interesting to look at, probably wall or furniture mounted.

Since the excuse is being able to run the Vulkan API, I started contemplating the build based on the graphics card.  I’m not the type of person to go out and buy any of the most current, most expensive graphics cards, because they come out so fast that if you simply wait nine months, that $600 card will be $300.  The top choice in this category would be an NVidia GTX 1080.  Although a veritable beast of a consumer graphics card, at $650+ it’s quite a budget buster.  Since I’m not a big gamer, I don’t need super duper frame rates, but I do want the latest features, like support for DirectX 12, Vulkan, OpenGL 4.5, etc.

A nice AMD alternative is the AMD Radeon RX 480.  That seems to be the cat’s meow at the more reasonable $250 price point.  It will do the trick as far as being able to run Vulkan, but since it’s AMD and not NVidia, I would not be able to run CUDA.  Why limit myself, when NVidia will also run OpenCL?  So, I’ve opted for an NVidia-based MSI GeForce GTX 1060.

The specialness of this particular card is the 6GB of GDDR5 RAM that comes on it.  From my past history with OpenGL, I learned that the more RAM on the board the better.  I also chose this particular one because it has some red plastic on it, which will be relevant when I get to the aesthetics.  Comparisons of graphics cards abound.  You can get stuck in a morass trying to find that “perfect” board.  This board is good enough for my excuse, and at a price that won’t break the build.

Next most important after the graphics card is the motherboard you’re going to stick it in.  The motherboard is important because it’s the skeleton upon which future additions will be placed, so a fairly decent board that will support your intended expansions for the next 5 years or so would be good.

I settled on the GIGABYTE G1 Gaming GA-Z170X-Gaming GT (rev. 1.0) board.

It’s relatively expensive at $199, but it’s not outrageous like the $500+ boards.  This board supports up to three graphics cards of the variety I’m looking at, which gives me expansion on that front if I ever choose to use it.  Beyond that, it takes at least 64GB of DDR4 RAM.  It has a ton of peripherals, including USB 3.1 with a Type-C connector.  That’s good, since that standard is just emerging.  On top of all that, it has good aesthetics, with white molding and red highlights (sensing a theme?).

To round out the essentials, you need a power supply.  For this, I want ‘enough’, not overkill, and relatively silent.

The Seasonic Snow Silent 750 is my choice.  Besides getting relatively good reviews, it’s all white on the outside, which just makes it look more interesting.

And last, but not least, the CPU to match.  Since the GPU is what I’m actually interested in, the CPU doesn’t matter as much.  But, since I’m not likely to build another one of these for a few years, I might as well get something reasonable.

I chose the Intel i7-6700K for the CPU.


At $339, it’s not cheap, but again, it’s not $600.  I chose the ‘K’ version to support overclocking.  I’ll probably never actually do that, but it’s a useful option nonetheless.  I could have gone with a less expensive i5 solution, but I think you lose out on hyper-threading or something, so I might as well spend $100 more and be slightly future proof.

Now, to hold all these guts together, you need a nice case.  I already have a very nice case housing the circa 2009 machine.  I can’t bring myself to take it apart, and besides, I tell myself, it doesn’t have the I/O access on the front panels required of a modern machine.  Since part of my aesthetic is to be able to show the guts of the machine (nicely themed colors), I went with something a bit more open.

The Thermaltake core P5 ATX Open Frame case is what I have chosen.

Now, I’m more of a throw-it-together-and-start-using-it kind of builder, but putting a little bit of flash into the build could make it a tad more interesting.  Fewer heat dissipation problems, and if I ever do that cool liquid cooling piping stuff, I’ll be able to show it off.  This case also has options to mount it against the wall/furniture, and I’ll probably take advantage of that.  I can imagine having a home office desk with a couple of these mounted on the front just for kicks.  Throw in a few monitors for surround, and…  Oh, but I’m not a gamer.

The rest of the kit involves various memory, storage, etc.  The motherboard has M.2 as well as mSATA, so I’ll probably put an SSD on one of those interfaces as the primary OS drive.  Throw in a few terabytes of spinning rust, and 64GB of RAM, and it’s all set.

The other nice thing about the motherboard is its dual NICs.  One is for gaming, the other (Intel) is for more pedestrian networking.  This can be nothing but goodness, and I’m sure I can do some nice experimenting with that.

Well, that’s what I’m after.  I added it all up on newegg.com, and it came out to about $1,500, which is nicely under budget, and will give me a machine I can be happy with for a few years to come.

 

 


Home Theatre PC Redux

A number of years ago I purchased my first barebones Shuttle PC.  At the time, it was about the size of two stacked shoe boxes, which was quite compact compared to the behemoth desktops circa 2005.  I had ideas of streaming media from it, using it as a home media center.  Microsoft even had a home centric OS, and attendant software.

It never really took off in that regard, and I ended up just using the machine as a standard browsing desktop for a few years.  Now, it finds itself in the garage, holding up various bits and pieces, not getting any action.

There has been a whole thing in the industry about creating quiet PCs.  From power supplies to fans, to specialized cases, motherboards, and the like.  All in search of that perfect PC that can sit in the living room, unobtrusively, serving up media to the giant glass TV monitor above it.

Then along came XBMC.  Oddly enough, it was first introduced on the Xbox to stream media content.  Soon enough, XBMC found its way to the standard PC, and subsequently to operating systems other than Windows.  XBMC became Kodi, and here we sit today.

A couple of years back, I purchased a Minix X8-H.  Again, for the day, it was quite a nifty little device that could stream media.  But ‘streaming media’, and servicing home media content needs, has boiled down to a couple of things.  First of all, Netflix, and thus a Roku or other standard media device, is the norm these days.  For roughly $50 you can get a device that will stream all the standard network-based streams that exist, from Hulu, to Netflix, TED Talks, the NFL, or whatever.  Of course, these media devices are essentially the new “set top box” for the age where cable bundles are dwindling, and you get to pay $5-$10 per month per channel you really want.

Well, there you go, problem solved, we can all go home now…

In summary, media consumption has turned into an internet based thing, where the differentiators are things like 4K streams vs HD, amount of memory (to minimize stalls), and the quality of the sound output.  It’s no longer a question of CPUs (ARM is dominant), nor the OS (Android is dominant).  It’s not even a matter of the software (proprietary or Kodi, and that’s it).

There is a tributary off this main stream though.  That is, once you get into Kodi as your player, you’ve opened up a world of possibilities.  I can stream all of the DVDs that I backed up to my NAS.  I can get all sorts of media content from the internet, I can stream live events, watch local television, etc.  Even better, I can watch whatever content I want, pay whatever price I want, and not have a single concern for the quality of the content, nor the cost of the device.  That’s all great.

So, I recently went back to the Minix site just to see how they were getting along.  Lo and behold, the media players are no longer front and center; instead, there are ‘miniature’ PCs, like the NGC-1.  This is a Windows 10 PC in the same form factor as those tiny media player boxes.  I found it on Amazon for $299.  Given the price of tablets and laptops these days, this is right in there with a typical low end machine.  It is loaded with features though, like dual-AC wireless, 4K video, a 128GB SSD, and the like.  It’s no slouch, even if it’s not the best bitcoin mining device.  This, paired with a couple of reasonable monitors, makes for a great interactive PC for toddlers (who destroy laptops in a second).

This is a new breed.  I’m thinking of getting one to act as my desktop “command” computer.  You know, stick it on the top of my desk, or the back of one of my monitors, and just use it to remote desktop into other machines as I need to.

As a long time PC builder, my first reaction is, “I’m sure I could throw this together cheaper”, but the truth is I can’t.  I can’t even purchase the components any cheaper, and they put it all in a nice solid metal case, which I could not manufacture.  I think we’ve reached the state where PCs are almost a commodity, and you can pretty much purchase one every year and just attach it to whatever display you happen to have.  For those 17″ displays that you find in your work room, put in a media stick (Roku stick or Chromecast or whatever).  For the bigger glass, like your Costco special 60″ TV, put in one of the larger media PCs that are capable of 4K display and have a bit more media handling capability.  For your main desktop machine, the one you use in your cave for viewing lots of different kinds of content other than movies, put one of these new nano-scale PCs.  Stick a console gaming rig, or a heavy duty PC, on your midrange display for gaming.

My journey with media center PCs began roughly 12 years ago, and I can say that journey has pretty much ended today.  I’ll still fiddle about with the likes of an Odroid C2 for media streaming, but really, when it comes time to watch football, or the latest Netflix binge-watching thing, it’s going to be a standard media device (likely a Roku) on the 50″ in the living room.

Media PC pursuit, rest in peace, long live the media PC!


Jumped on Twitter… Again

I’m not a twitch communicator.  140 characters, or whatever, doesn’t really do it for me, but this is the way the world works now.  So, I joined Twitter: @LeapToTech

I had a previous Twitter account, but since I never used it, I can’t even remember what it was.  But, at the behest of Laura Butler @LauraCatPJs, I put up another account so that I could retweet a tweet.  It’s the future, man!  The internet is going to be big some day.

So, now I’m learning about the value of #HashTags, and @callsigns, and that sort of stuff.  Really, I only wrote this post so I could figure out how to stick a tweet at the bottom…

 



schedlua – refactor compactor

The subject of scheduling and async programming has been a long running theme in my blog.  From the very first entries related to LJIT2Win32, through the creation of TINN, and most recently (within the past year) the creation of schedlua, I have been exploring this subject.  It all kind of started innocently enough.  When node.js was born, and libuv was ultimately released, I thought to myself, ‘what prevents anyone from doing this in LuaJIT without the usage of any external libraries whatsoever?’

It’s been a long road.  There’s really no reason for this code to continue to evolve.  It’s not at the center of some massively distributed system.  These are merely bread crumbs left behind, mainly for myself, as I explore and evolve a system that has proven itself to be useful at least as a teaching aid.

In the most recent incarnation of the schedlua kernel, I was able to clean up my act with the realization that you can implement all higher level semantics using a very basic ‘signal’ mechanism within the kernel.  That was pretty good, as it allowed me to easily implement the predicate system (when, whenever, waitForTruth, signalOnPredicate).  In addition, it allowed me to reimplement the async IO portion with the realization that a task waiting on IO is no different than a task waiting on any other kind of signal, so I could simply build the async IO atop the signaling.
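
To make the layering concrete, here’s a toy sketch of the idea.  The names (spawn, waitForSignal, signalOne, whenever) mirror the schedlua flavor, but this is an illustration of the technique, not the actual kernel code: a tiny cooperative scheduler where signals are the only primitive, and ‘whenever’ is just a loop built on top of them.

```lua
-- Toy cooperative scheduler: signals are the only primitive.
local unpack = table.unpack or unpack   -- Lua 5.1/5.2+ compatibility

local tasks = {}      -- simple run queue of { co = coroutine, args = {...} }
local waiting = {}    -- signal name -> list of suspended coroutines

local function spawn(fn, ...)
  table.insert(tasks, { co = coroutine.create(fn), args = { ... } })
end

-- suspend the current task until someone signals 'name'
local function waitForSignal(name)
  local co = coroutine.running()
  waiting[name] = waiting[name] or {}
  table.insert(waiting[name], co)
  return coroutine.yield()
end

-- wake one task waiting on 'name', handing it any payload
local function signalOne(name, ...)
  local queue = waiting[name]
  if queue and #queue > 0 then
    table.insert(tasks, { co = table.remove(queue, 1), args = { ... } })
  end
end

-- 'whenever' is then just a loop: when the predicate returns a
-- non-false value, hand that value to the action, then re-check
-- on every scheduler tick.
local function whenever(pred, action)
  spawn(function()
    while true do
      local value = pred()
      if value then
        action(value)
      end
      waitForSignal("tick")
    end
  end)
end

-- drive the whole thing: each tick wakes the waiters and drains
-- the run queue
local function run(nticks)
  for _ = 1, nticks do
    signalOne("tick")
    while #tasks > 0 do
      local t = table.remove(tasks, 1)
      coroutine.resume(t.co, unpack(t.args))
    end
  end
end
```

The point of the sketch is the shape: once wait/signal exists, ‘whenever’ needs no special machinery of its own, which is exactly the economy described above.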

schedlua has largely been a Linux based project, until now.  The crux of the difference between Linux and Windows comes down to two things in schedlua.  The first thing is timing operations.  Basically, how do you get a microsecond-accurate clock on the system?  On Linux, I use the ‘clock_gettime()’ system call.  On Windows, I use ‘QueryPerformanceCounter/QueryPerformanceFrequency’.  In order to isolate these, I put them into their own platform specific timeticker.lua file, and they both just have to surface a ‘seconds()’ function.  The differences are abstracted away, and the common interface is that of a stopwatch class.
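
The stopwatch idea can be sketched like this.  This is a minimal illustration, not the actual timeticker.lua; here os.clock() stands in for clock_gettime()/QueryPerformanceCounter, and the constructor takes the platform clock function as a parameter:

```lua
-- A platform-neutral stopwatch: each platform only has to supply a
-- clock function returning seconds; everything else is common code.
local StopWatch = {}
StopWatch.__index = StopWatch

function StopWatch.new(tick)
  -- 'tick' is the platform-specific clock; os.clock() is a stand-in
  local sw = setmetatable({ tick = tick or os.clock }, StopWatch)
  sw:reset()
  return sw
end

function StopWatch:reset()
  self.startTime = self.tick()
end

-- seconds elapsed since construction, or since the last reset
function StopWatch:seconds()
  return self.tick() - self.startTime
end
```

Injecting the clock function keeps the class platform neutral, and as a side benefit makes it trivially testable with a fake clock.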

That was good for time, but what about alarms?

The functions in schedlua related to alarms are: delay, periodic, runningTime, and sleep.  Together, these allow you to run things based on time, as well as delay the current task as long as you like.  My first implementation of these routines, going all the way back to the TINN implementation, was to run a separate ‘watchdog’ task, which in turn maintained its list of tasks that were waiting, and scheduled them.  Recently, I thought, “why can’t I just use the ‘whenever’ semantics to implement this?”.

Now, the implementation of the alarm routines comes down to this:

 

local function taskReadyToRun()
	local currentTime = SWatch:seconds();

	-- look at the head of the list of fibers that are
	-- waiting on time; only the first entry can be due next
	local task = SignalsWaitingForTime[1];
	if not task then
		return false;
	end

	if task.DueTime <= currentTime then
		return task;
	end

	return false;
end

local function runTask(task)
    signalOne(task.SignalName);
    table.remove(SignalsWaitingForTime, 1);
end

Alarm = whenever(taskReadyToRun, runTask)

The Alarm module still keeps a list of tasks that are waiting for their time to execute, but instead of using a separate watchdog task to keep track of things, I simply use the schedlua built-in ‘whenever’ function. This basically says, “whenever the function ‘taskReadyToRun()’ returns a non-false value, call the function ‘runTask()’ passing the parameter from taskReadyToRun()”. Convenient, end of story, simple logic using words that almost feel like an English sentence to me.

I like this kind of construct for a few reasons. First of all, it reuses code. I don’t have to code up that specialized watchdog task time and time again. Second, it wraps up the async semantics of the thing. I don’t really have to worry about explicitly calling spawn, or anything else related to multi-tasking. It’s just all wrapped up in that one word ‘whenever’. It’s relatively easy for me to explain this code, without mentioning semaphores, threads, conditions, or whatever. I can tell a child “whenever this is true, do that other thing”, and they will understand it.

So, that’s it. First I used signals as the basis to implement higher order functions, such as the predicate based flow control. Now I’m using the predicate based flow control to implement yet other functions such as alarms. Next, I’ll take that final step and do the same to the async IO, and I’ll be back to where I was a few months back, but with a much smaller codebase, and cross platform to boot.


Note To Self – VS Code seems reasonable

No secret, I still work for Microsoft…

Over the past 17 years of working for the company, my go-to editor has largely been Visual Studio.  Since about 2000, it was Visual C#.  Then around 2011 I switched things up, started doing a lot of JavaScript, Lua, and other languages, and my editor went from Notepad++ to a combination of Sublime Text and vim.

Most recently, I’ve had the opportunity to try and enable some editing on Windows 10 tablets, and I chose a new editor, Visual Studio Code.  I am by no means a corporate apologist, but I will certainly point out when I think my company is doing something good.  Visual Studio Code is an easy replacement for Sublime Text, at least for my needs and tastes.  I’ve been trying it out on and off for the past few months, and it just keeps improving.

Like all modern editors, it has an ‘add-on’ capability, with a huge community of add-on builders adding on stuff.  Of course, there’s some Lua syntax highlighting, which makes it A-number-one in my book already.  But there are other built-in features that I like as well.  It has a simple and sane integration with git repositories right out of the box.  So, I just open up my favorite projects, start editing, and it shows which files are out of sync.  A couple of clicks, type in my credentials, and the sync/push happens.  I’m sure there’s an extension for that in all modern editors, including Sublime Text, but here it’s just built into the base editor.

One item that struck me as a pleasant surprise the other day was the built-in support for Markdown.  I was refreshing the documentation files for schedlua, and I was putting in code block indicators (```lua).  After I put in one such indicator, I noticed the quoted code suddenly had Lua syntax highlighting!  Yah, well, ok, getting excited about not much.  But I had never seen that with Sublime Text, so it was new for me.  That was one of those features that just made me go ‘sold’.

The editor has other features, such as being able to run a command line from within the editor, but it’s not a full blown IDE like Visual Studio.  That’s good, because the tablets I’m running it on don’t have the 4-8GB of RAM needed to run Visual Studio comfortably.  So, it’s just enough editor to replace the likes of Sublime Text.  I also like the fact that it’s backed by a large company that is dedicated to improving it over time with regular updates.  The community being built up around add-ons seems fairly robust, which is another good sign.  Given Microsoft’s current penchant for open sourcing things, I would not be surprised if it showed up on GitHub some day in the future, which would just make it that much more interesting.

So, for now (future self), I will be using VS Code as my editor on Windows, MacOS, and Linux.  It has the stability and feature set that I need, and it continues to evolve, adding more stability and features that I find to be useful.

 


What is IT?

WP_20160319_001

IT’s coming…


SVG And Me – Don’t tell me, just another database!

A picture is worth 175Kb…

grapes

So, SVG, right?  Well, the original was, but this image was converted to a .png file for easy embedding in WordPress.  The file size of the original grapes.svg is 75K.  That savings in space is one of the reasons to use .svg files whenever you can.

But, I digress. The remotesvg project has been moving right along.

Last time around, I was able to use Lua syntax as a stand-in for the raw .svg syntax.  That has some benefits because, since you’re in a programming language, you can use programming constructs such as loops, references, functions, and the like to enhance the development of your svg.  That’s great when you’re creating something from scratch programmatically, rather than just using a graphical editing tool such as Inkscape to construct your .svg.  If you’re constructing a library of svg handling routines, though, you need a bit more.

This time around, I’m adding in some parsing of svg files, as well as general manipulation of the same from within Lua.  Here’s a very simple example of how to read an svg file into a lua table:

 

local parser = require("remotesvg.parsesvg")

local doc = parser:parseFile("grapes.svg");

That’s it! You now have the file in a convenient lua table, ready to be manipulated. But wait, what do I have exactly? Let’s look at a section of that file and see what it gives us.

    <linearGradient
       inkscape:collect="always"
       id="linearGradient4892">
      <stop
         style="stop-color:#eeeeec;stop-opacity:1;"
         offset="0"
         id="stop4894" />
      <stop
         style="stop-color:#eeeeec;stop-opacity:0;"
         offset="1"
         id="stop4896" />
    </linearGradient>
    <linearGradient
       inkscape:collect="always"
       xlink:href="#linearGradient4892"
       id="linearGradient10460"
       gradientUnits="userSpaceOnUse"
       gradientTransform="translate(-208.29289,-394.63604)"
       x1="-238.25415"
       y1="1034.7042"
       x2="-157.4043"
       y2="1093.8906" />

This is part of the definitions, which later get used on the portions representing the grapes.  A couple of things to notice.  As a straight ‘parsing’, you’ll get a bunch of text values.  For example, y2 = “1093.8906” will turn into a value in the lua table like this: {y2 = “1093.8906”}; the ‘1093.8906’ is still a string value.  That’s useful, but a little less than perfect.  Sometimes, depending on what I’m doing, retaining that value as a string might be just fine, but sometimes I’ll want that value to be an actual lua number.  So, there’s an additional step I can take to parse the actual attribute values and turn them into a more native form:

local parser = require("remotesvg.parsesvg")

local doc = parser:parseFile("grapes.svg");
doc:parseAttributes();

doc:write(ImageStream)

That line with doc:parseAttributes() tells the document to go through all its attributes and parse them, turning them into more useful values from the Lua perspective.  In the case above, the representation of ‘y2’ would become: {y2 = 1093.8906}, where the value is now a number.

This gets very interesting when you have values where the string representation and the useful lua representation are different.

<svg>
<line x1="10" y1="20" width="10cm" height="12cm" />
</svg>

This will turn into:

{
  x1 = {value = 10},
  y1 = {value = 20},
  width = {value = 10, units = 'cm'},
  height = {value = 12, units = 'cm'}
}

Now, in my Lua code, I can access these values like so:

local doc = parser:parseFile("grapes.svg");
doc:parseAttributes();
print(doc.svg[1].x1.value);

When I subsequently want to write this value out as valid svg, it will turn back into the string representation with no loss of fidelity.
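Mechanically, that round trip could look something like the sketch below; ‘attrToString’ is an illustrative name of mine, not necessarily what remotesvg does internally.

```lua
-- Hedged sketch of the round trip: a parsed attribute table such as
-- {value = 10, units = "cm"} turns back into its original string form.
local function attrToString(attr)
    if type(attr) == "table" then
        -- units are optional; a bare number simply has none
        return tostring(attr.value) .. (attr.units or "")
    end
    return tostring(attr)
end

print(attrToString({ value = 10, units = "cm" }))  -- "10cm"
print(attrToString({ value = 20 }))                -- "20"
```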

Hidden in this example is a database query. How do I know that doc.svg[1] is going to give me the <line> element that I’m looking for? In this particular case, it’s only because the svg is so simple that I know for a fact that the <line> element is going to show up as the first child in the svg document. But, most of the time, that is not going to be the case.

In any SVG of substance, various ‘id’ fields are in use, and an ‘id’ is typically what you search on to find an element. So, how to do that in remotesvg? If we look back at the example svg, we see this ‘id’ attribute on the first gradient: id="linearGradient4892".

How could I possibly find that gradient element based on the id field? Before that though, let’s look at how to enumerate elements in the document in the first place.

local function printElement(elem)
    if type(elem) == "string" then
        -- don't print content values
        return 
    end
    
    print(string.format("==== %s ====", elem._kind))

    -- print the attributes
    for name, value in elem:attributes() do
        print(name,value)
    end
end

local function test_selectAll()
    -- iterate through all the nodes in
    -- document order, printing something interesting along
    -- the way
    for child in doc:selectAll() do
        printElement(child)
    end
end

Here is a simple test case where you have a document already parsed, and you want to iterate through the elements, in document order, and just print them out. This is the first step in viewing the document as a database, rather than as an image. The working end of this example is the call to ‘doc:selectAll()’. This amounts to a call to an iterator that is on the BasicElem class, which looks like this:

--[[
	Traverse the elements in document order, returning
	the ones that match a given predicate.
	If no predicate is supplied, then return all the
	elements.
--]]
function BasicElem.selectElementMatches(self, pred)
	local function yieldMatches(parent, predicate)
		for idx, value in ipairs(parent) do
			if predicate then
				if predicate(value) then
					coroutine.yield(value)
				end
			else
				coroutine.yield(value)
			end

			if type(value) == "table" then
				yieldMatches(value, predicate)
			end
		end
	end

  	return coroutine.wrap(function() yieldMatches(self, pred) end)	
end

-- A convenient shorthand for selecting all the elements
-- in the document.  No predicate is specified.
function BasicElem.selectAll(self)
	return self:selectElementMatches()
end

As you can see, ‘selectAll()’ just turns around and calls ‘selectElementMatches()’, passing no parameters. The selectElementMatches() function then does the actual work. In Lua, there are a few ways to create iterators. In this particular case, where we want to recursively traverse down a hierarchy of nodes (document order), it’s easiest to use this coroutine method. You could instead keep a stack of nodes, pushing as you go down the hierarchy and popping as you come back up, but the coroutine method is much more compact to code, if a bit harder to understand when you’re not used to coroutines. The end result is an iterator that traverses a document hierarchy in document order.
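For comparison, here’s a hedged sketch of the stack-based alternative just mentioned: the same document-order traversal, with an explicit stack instead of a coroutine. The function name is mine, not part of the library.

```lua
-- Stack-based document-order traversal, no coroutines.
-- Each stack frame remembers a parent table and the index
-- of the next child to visit in it.
local function selectAllStack(root)
    local stack = { { parent = root, index = 1 } }
    return function()
        while #stack > 0 do
            local top = stack[#stack]
            local value = top.parent[top.index]
            top.index = top.index + 1
            if value == nil then
                table.remove(stack)      -- this level is exhausted; pop
            else
                if type(value) == "table" then
                    -- descend into this child on the next call
                    table.insert(stack, { parent = value, index = 1 })
                end
                return value
            end
        end
        -- stack empty: iteration finished (implicitly returns nil)
    end
end

-- usage: behaves like the coroutine version, strings included
local tree = { "a", { "b", { "c" } }, "d" }
local visited = {}
for value in selectAllStack(tree) do
    table.insert(visited, type(value) == "table" and "<table>" or value)
end
-- visited: "a", "<table>", "b", "<table>", "c", "d"
```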

Notice also that the ‘selectElementMatches’ function takes a predicate. A predicate is simply a function that takes a single parameter, and will return ‘true’ or ‘false’ depending on what it sees there. This will become useful.

So, how to retrieve an element with a particular ID? Well, when we look at our elements, we know that the ‘id’ field is one of the attributes, so essentially, what we want to do is traverse the document looking for elements that have an id attribute that matches what we’re looking for.

function BasicElem.getElementById(self, id)
    local function filterById(entry)
        if entry.id == id then
            return true;
        end
    end

    for child in self:selectElementMatches(filterById) do
        return child;
    end
end

Here’s a convenient function to do just that. And to use it:

local elem = doc:getElementById("linearGradient10460")

That will retrieve the second linear gradient of the pair of gradients from our svg fragment. That’s great! And the syntax is looking very much like what I might write in javascript against the DOM. But, it’s just a database!

Given selectElementMatches(), you’re not just limited to querying against attribute values. You can get at anything, and form queries as complex as you like. For example, you could find all the elements that are deep green, and turn them purple, with a simple query loop.
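As a concrete sketch of that green-to-purple idea: the loop below runs against the plain nested-table form, with a small stand-in for the iterator so the snippet is self-contained. In real use you’d call selectElementMatches on a parsed document instead.

```lua
-- Stand-in for the selectElementMatches iterator shown earlier,
-- so this snippet runs on its own.
local function selectMatches(root, pred)
    local function yieldMatches(parent)
        for _, value in ipairs(parent) do
            if type(value) == "table" then
                if pred(value) then coroutine.yield(value) end
                yieldMatches(value)
            end
        end
    end
    return coroutine.wrap(function() yieldMatches(root) end)
end

-- A tiny mock document in the same nested-table shape.
local doc = {
    { _kind = "rect", fill = "#006400" },           -- dark green
    { _kind = "g",
        { _kind = "circle", fill = "#006400" },     -- dark green
        { _kind = "circle", fill = "red" },
    },
}

-- The query loop: find every dark green element, turn it purple.
for elem in selectMatches(doc, function(e) return e.fill == "#006400" end) do
    elem.fill = "purple"
end
```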

Here’s an example of finding all the elements of a particular kind:

local function test_selectElementMatches()
    print("<==== selectElementMatches: entry._kind == 'g' ====>")
    for child in doc:selectElementMatches(function(entry) return entry._kind == "g" end) do
        print(child._kind)
    end
end

Or finding all the elements that have a ‘sodipodi’ attribute of some kind:

local function test_selectAttribute()
    -- select the elements that have an attribute
    -- with the name 'sodipodi' in them
    local function hasSodipodiAttribute(entry)
        if type(entry) ~= "table" then
            return false;
        end

        for name, value in entry:attributes() do
            --print("hasSodipodi: ", entry._kind, name, value, type(name))
            if name:find("sodipodi") then
                return true;
            end
        end

        return false
    end

    for child in doc:selectElementMatches(hasSodipodiAttribute) do
        if type(child) == "table" then
            printElement(child)
        end
    end
end

Of course, just finding these elements is one thing. Once found, you can use the same mechanism to filter out the elements you don’t want. For example, eliminating the ones that are Inkscape specific.
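As a sketch of that filtering, assuming the plain nested-table form (string keys for attributes, array entries for children), one could strip the editor-specific attributes in place. ‘stripEditorAttrs’ is my name for it, not the library’s.

```lua
-- Recursively remove editor-specific attributes in place.
local function stripEditorAttrs(elem)
    for name in pairs(elem) do
        if type(name) == "string" and
           (name:find("sodipodi") or name:find("inkscape")) then
            -- clearing an existing key during pairs() is allowed in Lua
            elem[name] = nil
        end
    end
    for _, child in ipairs(elem) do
        if type(child) == "table" then
            stripEditorAttrs(child)
        end
    end
end

-- usage on a small mock element
local path = {
    _kind = "path",
    d = "M0,0 L10,10",
    ["sodipodi:nodetypes"] = "cc",
    ["inkscape:label"] = "grapes",
}
stripEditorAttrs(path)
-- path now holds only _kind and d
```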

Well, there you have it. First, you can construct your SVG programmatically using Lua syntax. Alternatively, you can simply parse an SVG file into a Lua structure. Finally, you can query your document, no matter how it was constructed, for fun and profit.

Of course, the real benefit of being able to parse and find elements and the like is that it makes manipulating the SVG that much easier. Find the node that represents a graph of values, for example, and change those values over time for some form of animation…


SVG And Me

[Image: a simple linear gradient, green blending to gold]

That’s a simple linear gradient, generated from an SVG document that looks like this:


<svg viewBox='0 0 120 120' version='1.1' width='120' height='120'
     xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
  <defs>
    <linearGradient id='MyGradient'>
      <stop stop-color = 'green' offset = '5%' />
      <stop stop-color = 'gold' offset = '95%' />
    </linearGradient>
  </defs>
  <rect x = '10' y = '10' height = '100' fill = 'url(#MyGradient)' width = '100' />
</svg>


Fair enough. And of course there are a thousand and one ways to generate .svg files. For various reasons, I am interested in generating .svg files on the fly in a Lua context. So, the code I used to generate this SVG document looks like this:

require("remotesvg.SVGElements")()
local FileStream = require("remotesvg.filestream")
local SVGStream = require("remotesvg.SVGStream")

local ImageStream = SVGStream(FileStream.open("test_lineargradient.svg"))

local doc = svg {
	width = "120",
	height = "120",
	viewBox = "0 0 120 120",
    ['xmlns:xlink'] ="http://www.w3.org/1999/xlink",

    defs{
        linearGradient {id="MyGradient",
            stop {offset="5%",  ['stop-color']="green"};
            stop {offset="95%", ['stop-color']="gold"};
        }
    },

    rect {
    	fill="url(#MyGradient)",
        x=10, y=10, width=100, height=100,
    },
}

doc:write(ImageStream);

This comes from my remotesvg project. If you squint your eyes, the two look fairly similar, I think. The second case is definitely valid Lua script; mostly it’s nested tables with some well-known types. But where are all the parentheses, and how can you just put a name in front of ‘{‘ and have that do anything?

OK, so Lua has some nice syntactic tricks up its sleeve that make certain things a bit easier. For example, when the sole argument to a function is a table constructor or a string literal, you can leave off the ‘()’ entirely. I’ve mentioned this before, way back when I was doing some Windows code and supporting the “L” prefix thing for Unicode literals.

In this case, it’s about tables, and later we’ll see about strings. The following two things are equivalent:

local function myFunc(tbl)
  for k,v in pairs(tbl) do
    print(k,v)
  end
end


myFunc({x=1, y=1, id="MyID"})

-- Or this slightly shorter form

myFunc {x=1, y=1, id="MyID"}
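The same sugar applies when the sole argument is a string literal, which is what the embedded-literal examples later on rely on. A small demonstration (the function here is just illustrative):

```lua
-- Parentheses can also be dropped for a lone string-literal argument,
-- including the [[ ]] long form.
local function greet(name)
    return "hello " .. name
end

print(greet "world")    -- same as greet("world")
print(greet [[multi-line
argument]])
```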

OK. So that’s how we get rid of those pesky ‘()’ characters, which don’t add to the conversation. In Lua, since tables are a basic type, I can easily include tables in tables, nesting as deeply as I please. So, what’s the other trick here then? The fact that all those things before the ‘{‘ are simply the names of functions, each taking a single table as its argument. This is one area where a bit of trickery goes a long way. I created a ‘base type’, if you will, which knows how to construct these element tables, do the nesting, and ultimately print out SVG. It looks like this:

--[[
	SVGElem

	A base type for all other SVG Elements.
	This can do the basic writing
--]]
local BasicElem = {}
setmetatable(BasicElem, {
	__call = function(self, ...)
		return self:new(...);
	end,
})
local BasicElem_mt = {
	__index = BasicElem;
}

function BasicElem.new(self, kind, params)
	local obj = params or {}
	obj._kind = kind;

	setmetatable(obj, BasicElem_mt);

	return obj;
end

-- Add an attribute to ourself
function BasicElem.attr(self, name, value)
	self[name] = value;
	return self;
end

-- Add a new child element
function BasicElem.append(self, name)
	-- based on the obj, find the right object
	-- to represent it.
	local child = nil;

	if type(name) == "table" then
		child = name;
	elseif type(name) == "string" then
		child = BasicElem(name);
	else
		return nil;
	end

	table.insert(self, child);

	return child;
end

function BasicElem.write(self, strm)
	strm:openElement(self._kind);

	local childcount = 0;

	for name, value in pairs(self) do
		if type(name) == "number" then
			childcount = childcount + 1;
		else
			if name ~= "_kind" then
				strm:writeAttribute(name, tostring(value));
			end
		end
	end

	-- if we have some number of child nodes
	-- then write them out 
	if childcount > 0 then
		-- first close the starting tag
		strm:closeTag();

		-- write out child nodes
		for idx, value in ipairs(self) do
			if type(value) == "table" then
				value:write(strm);
			else
				-- write out pure text nodes
				strm:write(tostring(value));
			end
		end
		
		strm:closeElement(self._kind);
	else
		strm:closeElement();
	end
end

And further on in the library, I have things like this:

defs = function(params) return BasicElem('defs', params) end;

So, ‘defs’ is a function, which takes a single parameter (typically a table), and it constructs an instance of the BasicElem ‘class’, handing in the name of the element, and the specified ‘params’. And that’s that…
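One way such wrappers could be generated for the whole element vocabulary, rather than written out one by one, is a sketch like the following. The minimal BasicElem stand-in here mirrors the constructor shown earlier, so the snippet runs on its own.

```lua
-- Minimal stand-in for the BasicElem constructor shown earlier.
local BasicElem = setmetatable({}, {
    __call = function(_, kind, params)
        local obj = params or {}
        obj._kind = kind
        return obj
    end,
})

-- Generate one constructor function per SVG element name.
local elementNames = { "svg", "g", "defs", "rect", "line", "circle" }

local E = {}
for _, name in ipairs(elementNames) do
    E[name] = function(params) return BasicElem(name, params) end
end

-- usage: same shape as the inline-table document style
local r = E.rect { x = 1, y = 2, width = 100, height = 100 }
-- r._kind == "rect", r.width == 100
```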

BasicElem has a function ‘write(strm)’, which knows how to turn the various values and tables it contains into correct looking SVG elements and attributes. It’s all right there in the write() function. In addition, it adds a couple more tidbits, such as the attr() and append() functions.

Now that these basic constructs exist, what can be done? Well, first of all, every one of the SVG elements is covered with the simple construct we saw with the ‘defs’ element. How might you use this?

	local doc = svg {
		width = "12cm",
		height = "4cm",
		viewBox = "0 0 1200 400",
	}

	doc:append('rect')
		:attr("x", 1)
		:attr("y", 2)
		:attr("width", 1198)
		:attr("height", 398)
		:attr("fill", "none")
		:attr("stroke", "blue")
		:attr("stroke-width", 2);

	local l1 = line({x1=100, y1=300, x2=300, y2=100, stroke = "green", ["stroke-width"]=5});
	local l2 = line({x1=300, y1=300, x2=500, y2=100, stroke = "green", ["stroke-width"]=20});
	local l3 = line({x1=500, y1=300, x2=700, y2=100, stroke = "green", ["stroke-width"]=25});
	local l4 = line({x1=700, y1=300, x2=900, y2=100, stroke = "green", ["stroke-width"]=20});
	local l5 = line({x1=900, y1=300, x2=1100, y2=100, stroke = "green", ["stroke-width"]=25});

	doc:append(l1);
	doc:append(l2);
	doc:append(l3);
	doc:append(l4);
	doc:append(l5);
In this case, instead of the ‘inlined table document’ style of the first example, I’m doing more of a ‘programmatic progressive document building’ style. I create the basic ‘svg’ element and save it in the doc variable. Then I use the ‘append()’ function to create a ‘rect’ element. On that same element, I can use a shorthand to add its attributes. Then, I can create separate ‘line’ elements and append them onto the document as well. That’s pretty special when you need to construct the document based on some data you’re seeing, and you can’t use the embedded table style up front.

There are some special elements that get extra attention, though. Aside from the basic table construction and attribute setting, the ‘path’ element has a special retained-mode graphics building capability.

	local p1 = path {
		fill="red", 
		stroke="blue", 
		["stroke-width"]=3
	};
	
	p1:moveTo(100, 100);
	p1:lineTo(300, 100);
	p1:lineTo(200, 300);
	p1:close();

	local doc = svg {
		width="4cm", 
		height="4cm", 
		viewBox="0 0 400 400",
		
		rect {
			x="1", y="1", 
			width="398", height="398",
        	fill="none", stroke="blue"};
	
		p1;
	}

In this case, I create my ‘path’ element, and then I use its various path construction functions such as ‘moveTo()’ and ‘lineTo()’. The full set of arcs, Bézier curves, and the like is there, so you have all the available path construction commands. Again, this works out fairly well when you are trying to build something on the fly based on previously unknown data.

There’s one more important construct, and that’s string literals. There are cases where you might want to do something that this easy library just doesn’t make simple. In those cases, you might want to embed some literal text into the output document. Luckily, Lua makes it fairly easy to write single- or multi-line string literals, and the BasicElem object knows what to do when it sees one.

    g {
      ['font-family']="Arial",
      ['font-size']="36",

      [[
      <text x="48" y="48">Test a motion path</text> 
      <text x="48" y="95" fill="red">'values' attribute.</text> 
      <path d="M90,258 L240,180 L390,180" fill="none" stroke="black" stroke-width="6" /> 
      <rect x="60" y="198" width="60" height="60" fill="#FFCCCC" stroke="black" stroke-width="6" /> 
      <text x="90" y="300" text-anchor="middle">0 sec.</text> 
      <rect x="210" y="120" width="60" height="60" fill="#FFCCCC" stroke="black" stroke-width="6" /> 
      <text x="240" y="222" text-anchor="middle">3+</text> 
      <rect x="360" y="120" width="60" height="60" fill="#FFCCCC" stroke="black" stroke-width="6" /> 
      <text x="390" y="222" text-anchor="middle">6+</text> 
      ]];

      path {
        d="M-30,0 L0,-60 L30,0 z", 
        fill="blue", 
        stroke="red", 
        ['stroke-width']=6, 
        
        animateMotion {values="90,258;240,180;390,180", begin="0s", dur="6s", calcMode="linear", fill="freeze"} 
      } 
    }

Notice that the portion after the ‘font-size’ attribute is a Lua multi-line string literal. This section will be included in the final document verbatim. Another thing to notice here is that ‘path’ element. Although path is specialized, it still has the ability to have attributes, and even to have child nodes of its own, such as for animation.

Another case where the literals may come in handy is for CSS style sheets.

	defs {
		style {type="text/css",
[[
			.land
			{
				fill: #CCCCCC;
				fill-opacity: 1;
				stroke:white;
				stroke-opacity: 1;
				stroke-width:0.5;
			}
]]
		};
	};

The ‘style’ element is well known, but the format of the actual content is a bit too specific to translate into a Lua form, so it can simply be included as a literal.

Well, that’s the beginning of this journey. Ultimately I want to view some live graphics generated from data, and send some commands back to the server to perform some functions. At this point, I can use Lua to generate the SVG on the fly, and there isn’t an SVG parser or JavaScript interpreter in sight.