Building a tower PC – 2016

Well, since I’m no longer interested in building the ultimate streaming PC, I’ve turned my attention to building a more traditional tower PC.  What?  Those are so 1980!  It’s like this.  What I’m really after is using the now not-so-new Vulkan API for graphics programming. My current challenge is, my nice tidy Shuttle PC doesn’t have the ability to run a substantial enough graphics card to try the thing out! I do in fact have a tower PC downstairs, but it’s circa 2009 or something like that, and I find this a rather convenient excuse to ‘upgrade’.

I used to build a new PC every two years, back in the day, but tech has moved along so fast that it hardly makes sense to build so often; you’d be building every couple of months to keep pace. The machines you build today will last longer than an old hardware hacker cares to admit, but sometimes you’ve just got to bite the bullet.

Trying to figure out what components to use in a current build can be quite a challenge. It used to be that I’d just go to AnandTech and look at this year’s different builds, pick a mid-range system, and build something like that. Well, AnandTech is no longer what it used to be, and TomsHardware seems to be the better place for the occasional consumer such as myself.

The first thing to figure out is the excuse for building the machine, then the budget, then the aesthetics.

Excuse:  I want to play with the Vulkan graphics API

Budget: Less than $2,000 (to start ;-))

Aesthetics: I want it to be interesting to look at, probably wall or furniture mounted.

Since the excuse is being able to run the Vulkan API, I started contemplating the build based on the graphics card.  I’m not the type of person to go out and buy any of the most current, most expensive graphics cards, because they come out so fast that if you simply wait 9 months, that $600 card will be $300.  The top choice in this category would be an NVidia GTX 1080.  Although a veritable beast of a consumer graphics card, at $650+, that’s quite a budget buster.  Since I’m not a big gamer, I don’t need super duper frame rates, but I do want the latest features, like support for DirectX 12, Vulkan, OpenGL 4.5, etc.

A nice AMD alternative is the AMD Radeon RX 480.  That seems to be the cat’s meow at the more reasonable $250 price range.  This will do the trick as far as being able to run Vulkan, but since it’s AMD and not NVidia, I would not be able to run CUDA.  Why limit myself, when NVidia will also run OpenCL?  So, I’ve opted for an NVidia based MSI GeForce GTX 1060.

The specialness of this particular card is the 6GB of GDDR5 RAM that comes on it.  From my past history with OpenGL, I learned that the more RAM on the board the better.  I also chose this particular one because it has some red plastic on it, which will be relevant when I get to the aesthetics.  Comparisons of graphics cards abound.  You can get stuck in a morass trying to find that “perfect” board.  This board is good enough for my excuse, and at a price that won’t break the build.

Next most important after the graphics card is the motherboard you’re going to stick it in.  The motherboard is important because it’s the skeleton upon which future additions will be placed, so a fairly decent board that will support your intended expansions for the next 5 years or so would be good.

I settled on the GIGABYTE G1 Gaming GA-Z170X-Gaming GT (rev. 1.0) board.

It’s relatively expensive at $199, but it’s not outrageous like the $500+ boards.  This board supports up to three graphics cards of the variety I’m looking at, which gives me expansion on that front if I ever choose to use it.  Other than that, it supports up to 64GB of DDR4 RAM.  It has a ton of peripherals, including USB 3.1 with a Type-C connector.  That’s good, since that standard is just emerging.  Other than all that, it has good aesthetics with white molding and red highlights (sensing a theme).

To round out the essentials, you need a power supply.  For this, I want ‘enough’, not overkill, and relatively silent.

The Seasonic Snow Silent 750 is my choice.  Besides getting relatively good reviews, it’s all white on the outside, which just makes it look more interesting.

And last, but not least, the CPU to match.  Since the GPU is what I’m actually interested in, the CPU doesn’t matter as much.  But, since I’m not likely to build another one of these for a few years, I might as well get something reasonable.

I chose the Intel i7-6700K for the CPU.

At $339, it’s not cheap, but again, it’s not $600.  I chose the ‘K’ version, to support overclocking.  I’ll probably never actually do that, but it’s a useful option nonetheless.  I could have gone with a less expensive i5 solution, but I think you lose out on hyper-threading or something, so might as well spend $100 more and be slightly future proof.

Now, to hold all these guts together, you need a nice case.  I already have a very nice case housing the circa 2009 machine.  I can’t bring myself to take it apart, and besides, I tell myself, it doesn’t have the I/O access on the front panels required of a modern machine.  Since part of my aesthetic is to be able to show the guts of the machine (nicely themed colors), I went with something a bit more open.

The Thermaltake Core P5 ATX Open Frame case is what I have chosen.

Now, I’m more of a throw-it-together-and-start-using-it kind of builder, but putting a little bit of flash into the build could make it a tad more interesting.  Fewer heat dissipation problems, and if I ever do that cool liquid cooling piping stuff, I’ll be able to show it off.  This case also has options to mount it against the wall/furniture, and I’ll probably take advantage of that.  I can imagine having a home office desk with a couple of these mounted on the front just for kicks.  Throw in a few monitors for surround, and… Oh, but I’m not a gamer.

The rest of the kit involves various memory, storage, etc.  The motherboard has M.2 as well as mSATA.  So, I’ll probably put an SSD on one of those interfaces as the primary OS drive.  Throw in a few terabytes of spinning rust, and 64GB of RAM, and it’s all set.

The other nice thing about the motherboard is dual NICs.  One is for gaming, the other (Intel) is for more pedestrian networking.  This can be nothing but goodness, and I’m sure I can do some nice experimenting with that.

Well, that’s what I’m after.  I added it all up on newegg.com, and it came out to about $1,500, which is nicely under budget, and will give me a machine I can be happy with for a few years to come.


Home Theatre PC Redux

A number of years ago I purchased my first barebones Shuttle PC.  At the time, it was about the size of two stacked shoe boxes, which was quite compact compared to the behemoth desktops circa 2005.  I had ideas of streaming media from it, using it as a home media center.  Microsoft even had a home-centric OS, and attendant software.

It never really took off in that regard, and I ended up just using the machine as a standard browsing desktop for a few years.  Now, it finds itself in the garage, holding up various bits and pieces, not getting any action.

There has been a whole thing in the industry about creating quiet PCs.  From power supplies to fans, to specialized cases, motherboards, and the like.  All in search of that perfect PC that can sit in the living room, unobtrusively, serving up media to the giant glass TV monitor above it.

Then along came XBMC.  Oddly enough, first introduced on the Xbox to stream media content.  Soon enough XBMC found its way to the standard PC, and subsequently to Operating Systems other than Windows.  XBMC became Kodi, and here we sit today.

A couple of years back, I purchased a Minix X8-H.  Again, for the day, it was quite a nifty little device that could stream media.  But ‘streaming media’, and servicing home media content needs, has boiled down to a couple of things.  First of all, Netflix, and thus a Roku or other standard media device, is the norm these days.  For roughly $50 you can get a device that will stream all the standard network-based streams that exist, from Hulu, to Netflix, TED Talks, NFL, or whatever.  Of course, these media devices are essentially the new “set top box” for the age where cable bundles are dwindling, and you get to pay $5-$10 per month per channel you really want.

Well, there you go, problem solved, we can all go home now…

In summary, media consumption has turned into an internet based thing, where the differentiators are things like 4K streams vs HD, amount of memory (to minimize stalls), and the quality of the sound output.  It’s no longer a question of CPUs (ARM is dominant), nor the OS (Android is dominant).  It’s not even a matter of the software (proprietary or Kodi, and that’s it).

There is a tributary off this main stream though.  That is, once you get into Kodi as your player, you’ve opened up a world of possibilities.  I can stream all of my DVDs that I backed up to my NAS.  I can get all the media content from the internet, I can stream live events, watch local television, etc.  This is even greater as I can watch whatever content I want, pay whatever price I want, and not have a single concern for the quality of the content, nor the cost of the device.  That’s all great.

So, I recently went back to the Minix site just to see how they were getting along.  Lo and behold, the media players are no longer front and center, but instead, there are ‘miniature’ PCs, like the NGC-1.  This is a Windows 10 PC in the same form factor as those tiny media player boxes.  I found it on Amazon for $299.  Given the price of tablets and laptops these days, this is right in there with a typical low end machine.  It is loaded with features though, like dual-AC wireless, 4K video, a 128GB SSD, and the like.  It’s no slouch, even if it’s not the best bitcoin mining device.  This, paired with a couple of reasonable monitors, makes for a great interactive PC for toddlers (who destroy laptops in a second).

This is a new breed.  I’m thinking of getting one to act as my desktop “command” computer.  You know, stick it on the top of my desk, or the back of one of my monitors, and just use it to remote desktop into other machines as I need to.

As a long time PC builder, my first reaction is, “I’m sure I could throw this together cheaper”, but the truth is I can’t.  I can’t even purchase the components any cheaper, and they put it in a nice solid metal case, which I could not manufacture.  I think we’ve reached the state where PCs are almost commodity, and you can pretty much purchase one every year, and just attach them to whatever display you happen to have.  For those 17″ displays that you find in your work room, add a media stick (Roku stick or Chromecast or whatever).  For the bigger glass, like your Costco special 60″ TV, put one of the larger media PCs that are capable of 4K display and have a bit more media handling capability.  For your main desktop machine, the one you use in your cave for viewing lots of different kinds of content other than movies, put one of these new nano scale PCs.  Stick a console gaming rig, or heavy duty PC, on your midrange display for gaming.

My journey with media center PCs began roughly 12 years ago, and I can say, that journey has pretty much ended today.  I’ll still fiddle about with the likes of an Odroid C2 for media streaming, but really, when it comes time to watch football, or the latest Netflix binge-watching thing, it’s going to be a standard media device (likely Roku) on the 50″ in the living room.

Media PC pursuit, rest in peace, long live the media PC!


Jumped on Twitter… Again

I’m not a twitch communicator.  140 characters, or whatever, doesn’t really do it for me, but this is the way the world works now.  So, I joined twitter: @LeapToTech

I had a previous twitter account, but since I never used it, I can’t even remember what it was.  But, at the behest of Laura Butler @LauraCatPJs, I put up another account so that I could retweet a tweet.  It’s the future man!  The internet is going to be big some day.

So, now I’m learning about the value of #HashTags, and @callsigns, and that sort of stuff.  Really I only wrote this post so I could figure out how to stick a tweeter at the bottom…



schedlua – refactor compactor

The subject of scheduling and async programming has been a long running theme in my blog.  From the very first entries related to LJIT2Win32, through the creation of TINN, and most recently (within the past year), the creation of schedlua, I have been exploring this subject.  It all kind of started innocently enough.  When node.js was born, and libuv was ultimately released, I thought to myself, ‘what prevents anyone from doing this in LuaJIT without the usage of any external libraries whatsoever?’

It’s been a long road.  There’s really no reason for this code to continue to evolve.  It’s not at the center of some massively distributed system.  These are merely bread crumbs left behind, mainly for myself, as I explore and evolve a system that has proven itself to be useful at least as a teaching aid.

In the most recent incarnation of the schedlua kernel, I was able to clean up my act with the realization that you can implement all higher level semantics using a very basic ‘signal’ mechanism within the kernel.  That was pretty good, as it allowed me to easily implement the predicate system (when, whenever, waitForTruth, signalOnPredicate).  In addition, it allowed me to reimplement the async io portion with the realization that a task waiting on IO to occur is no different than a task waiting on any other kind of signal, so I could simply build the async io atop the signaling.

schedlua has largely been a Linux based project, until now.  The crux of the difference between Linux and Windows comes down to two things in schedlua.  The first thing is timing operations.  Basically, how do you get a microsecond-accurate clock on the system?  On Linux, I use the ‘clock_gettime()’ system call.  On Windows, I use ‘QueryPerformanceCounter’ and ‘QueryPerformanceFrequency’.  In order to isolate these, I put them into their own platform specific timeticker.lua file, and they both just have to surface a ‘seconds()’ function.  The differences are abstracted away, and the common interface is that of a stopwatch class.

That was good for time, but what about alarms?

The functions in schedlua related to alarms are: delay, periodic, runningTime, and sleep.  Together, these allow you to run things based on time, as well as delay the current task as long as you like.  My first implementation of these routines, going all the way back to the TINN implementation, was to run a separate ‘watchdog’ task, which in turn maintained its list of tasks that were waiting, and scheduled them.  Recently, I thought, “why can’t I just use the ‘whenever’ semantics to implement this?”.
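
Just to ground that list, here’s roughly what those routines read like in use.  This is a sketch from memory rather than schedlua’s documented surface; the argument order in particular is illustrative, not gospel:

-- illustrative usage only; actual schedlua signatures may differ
spawn(function()
	sleep(1000);	-- park the current task for about a second
	print("awake");
end)

periodic(500, function() print("tick") end)	-- roughly every half second
delay(5000, function() print("later") end)	-- once, five seconds from now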

Now, the implementation of the alarm routines comes down to this:

local function taskReadyToRun()
	local currentTime = SWatch:seconds();

	-- look at the head of the list of fibers that are
	-- waiting on time; the list is assumed to be in
	-- due-time order, so only the head needs checking
	local task = SignalsWaitingForTime[1];
	if not task then
		return false;
	end

	if task.DueTime <= currentTime then
		return task;
	end

	return false;
end

local function runTask(task)
    signalOne(task.SignalName);
    table.remove(SignalsWaitingForTime, 1);
end

Alarm = whenever(taskReadyToRun, runTask)

The Alarm module still keeps a list of tasks that are waiting for their time to execute, but instead of using a separate watchdog task to keep track of things, I simply use the schedlua built-in ‘whenever’ function. This basically says, “whenever the function ‘taskReadyToRun()’ returns a non-false value, call the function ‘runTask()’ passing the parameter from taskReadyToRun()”. Convenient, end of story, simple logic using words that almost feel like an English sentence to me.
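
To make that concrete, here is how a ‘sleep’ could sit atop that same list.  This is just a sketch: ‘waitForSignal’ (suspend the current task until a named signal fires) and the signal-name generation are assumptions on my part, while ‘DueTime’, ‘SignalName’, and ‘SignalsWaitingForTime’ are the same fields used above.

local function sleep(millis)
	local task = {
		DueTime = SWatch:seconds() + millis/1000;
		SignalName = "sleep-"..tostring(SWatch:seconds());
	}

	-- keep the list in due-time order, so taskReadyToRun()
	-- only ever needs to examine the head of the list
	table.insert(SignalsWaitingForTime, task);
	table.sort(SignalsWaitingForTime, function(a,b)
		return a.DueTime < b.DueTime;
	end)

	-- park the current task until the Alarm fires our signal
	return waitForSignal(task.SignalName);
end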

I like this kind of construct for a few reasons. First of all, it reuses code. I don’t have to code up that specialized watchdog task time and time again. Second, it wraps up the async semantics of the thing. I don’t really have to worry about explicitly calling spawn, or anything else related to multi-tasking. It’s just all wrapped up in that one word ‘whenever’. It’s relatively easy for me to explain this code, without mentioning semaphores, threads, conditions, or whatever. I can tell a child “whenever this is true, do that other thing”, and they will understand it.

So, that’s it. First I used signals as the basis to implement higher order functions, such as the predicate based flow control. Now I’m using the predicate based flow control to implement yet other functions such as alarms. Next, I’ll take that final step and do the same to the async IO, and I’ll be back to where I was a few months back, but with a much smaller codebase, and cross platform to boot.


Note To Self – VS Code seems reasonable

No secret, I still work for Microsoft…

Over the past 17 years of working for the company, my go-to editor had largely been Visual Studio.  Since about 2000, it was Visual C#.  Then around 2011, I switched up, and started doing a lot of Javascript, Lua, and other languages, and my editor went from Notepad++ to a combination of Sublime Text and vim.

Most recently, I’ve had the opportunity to try and enable some editing on Windows 10 tablets, and I chose a new editor, Visual Studio Code.  I am by no means a corporate apologist, but I will certainly point out when I think my company is doing something good.  Visual Studio Code is an easy replacement for Sublime Text, at least for my needs and tastes.  I’ve been trying it out on and off for the past few months, and it just keeps improving.

Like all modern editors, it has an ‘add-on’ capability, which has a huge community of add-on builders adding on stuff.  Of course, there’s some lua syntax highlighting, which makes it A number one in my book already.  But, there are other built in features that I like as well.  It has a simple and sane integration with git repositories right out of the box.  So, I just open up my favorite projects, start editing, and it shows which files are out of sync.  A couple of clicks, type in my credentials, and the sync/push happens.  I’m sure there’s an extension for that in all modern editors, including Sublime Text, but here it’s just built into the base editor.

One item that struck me as a pleasant surprise the other day was built-in support for Markdown.  I was refreshing the documentation files for schedlua, and I was putting in code block indicators (```lua).  After I put in one such indicator, I noticed the quoted code suddenly had the lua syntax highlighting!  Yah, well, ok, getting excited about not much.  But, I had never seen that with Sublime Text, so it was new for me.  That was one of those features that just made me go ‘sold’.

The editor has other features, such as being able to run a command line from within the editor, but it’s not a full blown IDE like Visual Studio, which is good because the tablets I’m running it on don’t have the 4–8GB of RAM to run Visual Studio comfortably.  So, it’s just enough editor to replace the likes of Sublime Text.  I also like the fact that it’s backed by a large company that is dedicated to continue improving it over time with regular updates.  The community that’s being built up around add-ons seems fairly robust, which is also another good sign.  Given Microsoft’s current penchant for Open Sourcing things, I would not be surprised if it showed up on GitHub some day in the future, which would just make it that much more interesting.

So, for now (future self), I will be using VS Code as my editor on Windows, MacOS, and Linux.  It has the stability and feature set that I need, and it continues to evolve, adding more stability and features that I find to be useful.


What is IT?

IT’s coming…


SVG And Me – Don’t tell me, just another database!

A picture is worth 175Kb…

[grapes.png – rendered from grapes.svg]

So, SVG right? Well, the original was, but this image was converted to a .png file for easy embedding in WordPress. The file size of the original grapes.svg is 75K. That savings in space is one of the reasons to use .svg files whenever you can.

But, I digress. The remotesvg project has been moving right along.

Last time around, I was able to use Lua syntax as a stand-in for the raw .svg syntax.  That has some benefits, because since you’re in a programming language, you can use programming constructs such as loops, references, functions and the like to enhance the development of your svg.  That’s great when you’re creating something from scratch programmatically, rather than just using a graphical editing tool such as Inkscape to construct your .svg.  If you’re constructing a library of svg handling routines, you need a bit more though.

This time around, I’m adding in some parsing of svg files, as well as general manipulation of the same from within Lua.  Here’s a very simple example of how to read an svg file into a lua table:

local parser = require("remotesvg.parsesvg")

local doc = parser:parseFile("grapes.svg");

That’s it! You now have the file in a convenient lua table, ready to be manipulated. But wait, what do I have exactly? Let’s look at a section of that file and see what it gives us.

    <linearGradient
       inkscape:collect="always"
       id="linearGradient4892">
      <stop
         style="stop-color:#eeeeec;stop-opacity:1;"
         offset="0"
         id="stop4894" />
      <stop
         style="stop-color:#eeeeec;stop-opacity:0;"
         offset="1"
         id="stop4896" />
    </linearGradient>
    <linearGradient
       inkscape:collect="always"
       xlink:href="#linearGradient4892"
       id="linearGradient10460"
       gradientUnits="userSpaceOnUse"
       gradientTransform="translate(-208.29289,-394.63604)"
       x1="-238.25415"
       y1="1034.7042"
       x2="-157.4043"
       y2="1093.8906" />

This is part of the definitions, which later get used on portions representing the grapes. A couple of things to notice. As a straight ‘parsing’, you’ll get a bunch of text values. For example, y2="1093.8906" will turn into a value in the lua table like this: {y2 = "1093.8906"}; the ‘1093.8906’ is still a string value. That’s useful, but a little less than perfect. Sometimes, depending on what I’m doing, retaining that value as a string might be just fine, but sometimes, I’ll want that value to be an actual lua number. So, there’s an additional step I can take to parse the actual attribute values and turn them into a more native form:

local parser = require("remotesvg.parsesvg")

local doc = parser:parseFile("grapes.svg");
doc:parseAttributes();

doc:write(ImageStream)

That line with doc:parseAttributes() tells the document to go through all its attributes and parse them, turning them into more useful values from the Lua perspective. In the case above, the representation of ‘y2’ would become: {y2 = 1093.8906}, where the value is now an actual lua number.

This gets very interesting when you have values where the string representation and the useful lua representation are different.

<svg>
<line x1="10" y1="20" width="10cm" height="12cm" />
</svg>

This will turn into:

{
  x1 = {value = 10},
  y1 = {value = 20},
  width = {value = 10, units = 'cm'},
  height = {value = 12, units = 'cm'}
}

Now, in my Lua code, I can access these values like so:

local doc = parser:parseFile("grapes.svg");
doc:parseAttributes();
print(doc.svg[1].x1.value);

When I subsequently want to write this value out as valid svg, it will turn back into the string representation with no loss of fidelity.
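
Putting the round trip together: assuming the little ‘line’ document above was saved out to a hypothetical ‘line.svg’ file, a small edit looks like this (‘ImageStream’ is the same stand-in output stream as in the earlier write example):

local parser = require("remotesvg.parsesvg")

-- 'line.svg' is hypothetical; assume it holds the <line> fragment above
local doc = parser:parseFile("line.svg");
doc:parseAttributes();

-- nudge the line's starting x, using the parsed numeric value
doc.svg[1].x1.value = doc.svg[1].x1.value + 5;

-- on output, the values turn back into their string representations
doc:write(ImageStream);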

Hidden in this example is a database query. How do I know that doc.svg[1] is going to give me the ‘line’ element that I’m looking for? In this particular case, it’s only because the svg is so simple that I know for a fact that the ‘line’ element is going to show up as the first child in the svg document. But, most of the time, that is not going to be the case.

In any svg that’s of substance, there is the usage of various ‘id’ fields, and that’s typically what is used to find an element. So, how to do that in remotesvg? If we look back at the example svg, we see this ‘id’ attribute on the first gradient: id=”linearGradient4892″.

How could I possibly find that gradient element based on the id field? Before that though, let’s look at how to enumerate elements in the document in the first place.

local function printElement(elem)
    if type(elem) == "string" then
        -- don't print content values
        return 
    end
    
    print(string.format("==== %s ====", elem._kind))

    -- print the attributes
    for name, value in elem:attributes() do
        print(name,value)
    end
end

local function test_selectAll()
    -- iterate through all the nodes in 
    -- document order, printing something interesting along
    -- the way

    for child in doc:selectAll() do
	   printElement(child)
    end
end

Here is a simple test case where you have a document already parsed, and you want to iterate through the elements, in document order, and just print them out. This is the first step in viewing the document as a database, rather than as an image. The working end of this example is the call to ‘doc:selectAll()’. This amounts to a call to an iterator that is on the BasicElem class, which looks like this:

--[[
	Traverse the elements in document order, returning
	the ones that match a given predicate.
	If no predicate is supplied, then return all the
	elements.
--]]
function BasicElem.selectElementMatches(self, pred)
	local function yieldMatches(parent, predicate)
		for idx, value in ipairs(parent) do
			if predicate then
				if predicate(value) then
					coroutine.yield(value)
				end
			else
				coroutine.yield(value)
			end

			if type(value) == "table" then
				yieldMatches(value, predicate)
			end
		end
	end

  	return coroutine.wrap(function() yieldMatches(self, pred) end)	
end

-- A convenient shorthand for selecting all the elements
-- in the document.  No predicate is specified.
function BasicElem.selectAll(self)
	return self:selectElementMatches()
end

As you can see, ‘selectAll()’ just turns around and calls ‘selectElementMatches()’, passing in no parameters. The selectElementMatches() function then does the actual work. In Lua, there are a few ways to create iterators. In this particular case, where we want to recursively traverse down a hierarchy of nodes (document order), it’s easiest to use this coroutine method. You could instead keep a stack of nodes, pushing as you go down the hierarchy, popping as you return back up, but this coroutine method is much more compact to code, if a bit harder to understand if you’re not used to coroutines. The end result is an iterator that will traverse down a document hierarchy, in document order.
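
For comparison, the explicit-stack version alluded to above might look something like this. It’s a sketch: ‘selectAllByStack’ is a name I just made up, and it performs the same document-order walk without coroutines.

function BasicElem.selectAllByStack(self)
	local stack = {{parent = self, idx = 1}};

	return function()
		while #stack > 0 do
			local top = stack[#stack];
			local value = top.parent[top.idx];
			top.idx = top.idx + 1;

			if value == nil then
				-- finished this level, pop back up to the parent
				table.remove(stack);
			else
				if type(value) == "table" then
					-- visit this node's children on subsequent calls
					table.insert(stack, {parent = value, idx = 1});
				end
				return value;
			end
		end
	end
end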

Notice also that the ‘selectElementMatches’ function takes a predicate. A predicate is simply a function that takes a single parameter, and will return ‘true’ or ‘false’ depending on what it sees there. This will become useful.

So, how to retrieve an element with a particular ID? Well, when we look at our elements, we know that the ‘id’ field is one of the attributes, so essentially, what we want to do is traverse the document looking for elements that have an id attribute that matches what we’re looking for.

function BasicElem.getElementById(self, id)
    local function filterById(entry)
        --print("filterById: ", entry.id, id)
        if entry.id == id then
            return true;
        end
    end

    for child in self:selectElementMatches(filterById) do
        return child;
    end
end

Here’s a convenient function to do just that. And to use it:

local elem = doc:getElementById("linearGradient10460")

That will retrieve the second linear gradient of the pair of gradients from our svg fragment. That’s great! And the syntax is looking very much like what I might write in javascript against the DOM. But, it’s just a database!

Given selectElementMatches(), you’re not just limited to querying against attribute values. You can get at anything, and form queries as complex as you like. For example, you could find all the elements that are deep green, and turn them purple with a simple query loop, something like the sketch below.
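
The assumption in this sketch (and it is an assumption) is that the color lives in a plain ‘fill’ attribute, rather than buried inside a ‘style’ string the way Inkscape tends to write it:

-- turn the deep green elements purple
local function isDeepGreen(entry)
    return type(entry) == "table" and entry.fill == "#008000";
end

for child in doc:selectElementMatches(isDeepGreen) do
    child.fill = "#800080";
end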

Here’s an example of finding all the elements of a particular kind:

local function test_selectElementMatches()
    print("<==== selectElementMatches: entry._kind == 'g' ====>")
	for child in doc:selectElementMatches(function(entry) if entry._kind == "g" then return true end end) do
		print(child._kind)
	end
end

Or finding all the elements that have a ‘sodipodi’ attribute of some kind:

local function test_selectAttribute()
    -- select the elements that have an attribute
    -- with the name 'sodipodi' in them
    local function hasSodipodiAttribute(entry)
        if type(entry) ~= "table" then
            return false;
        end

        for name, value in entry:attributes() do
            --print("hasSodipodi: ", entry._kind, name, value, type(name))
            if name:find("sodipodi") then
                return true;
            end
        end

        return false
    end

    for child in doc:selectElementMatches(hasSodipodiAttribute) do
        if type(child) == "table" then
            printElement(child)
        end
    end
end

Of course, just finding these elements is one thing. Once found, you can use this to filter out those elements you don’t want. For example, eliminating the ones that are Inkscape-specific, as in the following sketch.
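
This builds on the ‘hasSodipodiAttribute’ predicate above, and assumes attributes are stored as plain keys on the element table (which is how ‘entry.id’ was accessed earlier), so clearing one is just an assignment to nil:

for child in doc:selectElementMatches(hasSodipodiAttribute) do
    -- collect the names first, then remove, so we don't
    -- mutate the attribute set while iterating it
    local doomed = {};
    for name in child:attributes() do
        if name:find("sodipodi") or name:find("inkscape") then
            doomed[#doomed+1] = name;
        end
    end

    for _, name in ipairs(doomed) do
        child[name] = nil;
    end
end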

Well, there you have it. First, you can construct your svg programmatically using Lua syntax. Alternatively, you can simply parse a svg file into a lua structure. Last, you can query your document, no matter how it was constructed, for fun and profit.

Of course, the real benefit of being able to parse, and find elements and the like, is it makes manipulating the svg that much easier. Find the node that represents the graph of values, for example, and change those values over time for some form of animation…