Tinkerer’s Closet – Selecting hardware

Well, there’s no better reason to clean out your closet or workspace than to fill it with new stuff! I’ve been spending a fair amount of time clearing out the old, and I’ve gotten far enough along that I feel it’s safe to poke my head up and see what’s new and exciting in the world of hardware components today.

What to look at, though, so I don’t just fill up with a bunch more stuff that doesn’t get used for the next 10 years? Well, this time around I’m going project-based. That means I will limit my searching to stuff that can help a project at hand. Yes, it’s useful to get some items just for the learning of it, but for a hoarder, it’s better to have an actual project in mind before making purchases.

On the compute front, I’ve been standardizing the low end around ESP32 modules. I’ve mentioned this in the past, but it’s worth a bit more detail. The company behind them, Espressif, came along within the past decade and just kind of took the maker community by storm. Low cost, communications built in (Wi-Fi, Bluetooth), capable 32-bit processors. They are a decent replacement at the low end of things, taking the place of the venerable Arduino, which itself was a watershed in its day.

The keen thing about the Espressif modules is how programmable they are. You can use the Arduino IDE, or PlatformIO (an extension for Visual Studio Code), or Espressif’s own ESP-IDF toolchain. You can program it like a single CPU with full control of everything, or you can run a real-time OS (FreeRTOS) on it. This makes it super easy to integrate into anything from simple servo motor control to full-on robotics.

As for form factor, I’m currently favoring the Adafruit ‘feather’ forms. The ‘feather’ form factor is a board specification, which puts pins in certain locations, regardless of which actual processor is on the board. This makes it a module that can be easily integrated into designs, because you have known patterns to build around. I have been using the ESP32 Feather V2 primarily.

It’s just enough. USB-C connector for power and programming. Battery connector for easy deployment (the battery charges when USB-C is plugged in). STEMMA QT connector (a tiny 4-pin connector) for easy I2C connection of things like joysticks, sensors, anything on the I2C bus. Antenna built in (Wi-Fi/Bluetooth radio on the right, with the black PCB antenna next to it).

It’s just a handy little package, and my current best “computicle”. You can go even smaller, and get ESP 32 modules in different packages, but this is the best for prototyping in my lab.

As an aside, I want to mention Adafruit, the company, as a good source for electronic components. You can check out their about page to get their history. Basically, they were created in 2005, and have been cranking out the hits in the maker space ever since. What attracted me to them initially was their tutorials on the components they sell. They have kits and tutorials on how to solder, as well as how to integrate motors into an ESP32 design. Step by step, detailed specs, they’re just good people. They also pursue quality components. I mean, every USB cable is the same, right? Nope, and they go through the myriad options and only sell the best ones. So, if you’re in the market, check them out, at least for their tutorials.

Going up the scale from here, you have “Single Board Computers”. The mindshare leader in this space is definitely the Raspberry Pi. When they sprang onto the scene (2012), there really wasn’t any option in the sub-$50 range. Since then, there has been an entire renaissance and explosion of single board computers. They are typically characterized by: an ARM-based processor, 4–8GB of RAM, USB power, HDMI output, a couple of rows of IO pins, running Linux (typically Ubuntu).

I’ve certainly purchased my share of Raspberry Pi boards, and variants. I tend to favor those coming from Hardkernel. I find their board innovations over the years to be better than what the Pi Foundation is typically doing. Also, they are more readily available. Hardkernel has commercial customers that use their boards in embedded applications, so they tend to have long-term support for them. Their boards are typically ARM-based and meant to run Linux, but they also have x86 boards that can run Windows as well.

Here’s a typical offering: the Odroid M1S.

The one thing that’s critical to have in a single board computer is software support. There are as many single board computers available in the world as there are grains of sand on a beach. What differentiates them is typically the software support, and the community around it. This is why the Raspberry Pi has been so popular. They have core OS support, and a super active community that’s always making contributions.

I find the Odroid boards to be similar, albeit a much smaller community. They do have core OS support, and typically get whatever changes they make integrated into the mainline Linux development tree.

I am considering this M1S as a brain for machines that need more than what the ESP32 can handle. A typical situation might be a CNC machine, where I want to have a camera to watch how things are going and make adjustments if things are out of whack. For example, the camera sees that the cutting bit has broken, and will automatically stop the machine. Or, it can see how the material is burring or burning, and make adjustments to feeds and speeds automatically.

For such usage, it’s nice to have the IO pins available, but communicating over I2C, CANBus, or other means, should be readily available.

This is reason enough for me to purchase one of these boards. I will look specifically for pieces I can run on it, like OpenCV or some other visual module for the vision stuff. I have another CNC router that I am about to retrofit with new brains, and this could be the main brain, while the ESP32 can be used for the motor control side of things.
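To make the vision idea concrete, here’s a toy sketch of the core check, in pure Python rather than OpenCV: compare a live grayscale frame against a known-good reference, and flag the machine for a stop if the difference is too large. The frame format, values, and threshold are all invented for illustration; a real version would pull frames from a camera and lean on OpenCV.

```python
# Toy stand-in for the vision check: flag an anomaly (e.g. a broken bit)
# by comparing a "live" grayscale frame against a reference frame.
# Frames here are just nested lists of pixel brightness values.

def mean_abs_diff(ref, live):
    """Mean absolute pixel difference between two equally-sized frames."""
    total = 0
    count = 0
    for row_r, row_l in zip(ref, live):
        for a, b in zip(row_r, row_l):
            total += abs(a - b)
            count += 1
    return total / count

def looks_anomalous(ref, live, threshold=20.0):
    """True when the live frame differs enough to warrant stopping."""
    return mean_abs_diff(ref, live) > threshold

reference = [[100, 100], [100, 100]]
normal    = [[102,  99], [101, 100]]
broken    = [[100, 100], [10,  10]]   # tool region suddenly dark

print(looks_anomalous(reference, normal))   # small drift: carry on
print(looks_anomalous(reference, broken))   # big change: stop the machine
```

The same comparison could be restricted to just the region around the tool, which is where a broken bit would actually show up.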

Last is the dreamy stuff.

The BeagleV-Fire

This is the latest creation of BeagleBoard.org. This organization is awesome because they are dedicated to creating “open source” hardware designs. That’s useful to the community because it means various people will create variants of the board for different uses, making the whole ecosystem more robust.

There are two special things about this board. One is that it uses a RISC-V chip, instead of ARM. RISC-V is an open, royalty-free instruction set architecture. It is a counter to the ARM chips, which come with license fees and various restrictions. RISC-V will likely take over the low end of the CPU market in all sorts of applications that typically had ARM-based chips.

The other feature of these boards is an onboard integrated FPGA (Field Programmable Gate Array). An FPGA is a chip whose logic fabric is itself programmable: instead of fixed-function circuitry, you configure the gates and routing to implement whatever hardware you need. If you did not have a USB port, or you wanted another one, you could program the FPGA to provide that kind of port. You can even program an FPGA to emulate a CPU, or other kinds of logic chips. Very flexible stuff, although challenging to program.

I’ve had various FPGA boards in the past, and even ones that are integrated with a CPU. This particular board is another iteration of the theme, done by a company that has been a strong contributor in the maker community for quite some time.

Why won’t I buy this board, as much as I want to? I don’t have an immediate need for it. I want to explore FPGA programming, and this may or may not be the best way to learn it. Getting an Odroid for creating a smarter CNC makes sense right now, so one of those boards is a more likely purchase in the near term. It might be that in my explorations of CNC, I find myself saying “I need the programmability the BeagleV-Fire has to offer”, but it will be a discovery based on usage, rather than raw “I want one!”, which is a departure from my past tinkerings.

At this point, I don’t need to explore above Single Board Computers. They are more than powerful enough for the kinds of things I am exploring, so nothing about rack-mountable servers and Kubernetes clusters.

At the low end, ESP32 as my computicles. At the high end, Hardkernel Single Board Computers for brains.


Tinkerer’s Closet – Hardware Refresh

I am a tinkerer by birth. I’ve been fiddling about with stuff since I was old enough to take a screwdriver off my dad’s workbench. I’ve done mechanical things, home repair, woodworking, gardening, 3D printing, lasering, just a little bit of everything over the years. While my main profession for about 40 years has been software development, I make the rounds through my various hobbies on a fairly regular basis.

Back around 2010, it was the age of 3D printers and IoT devices. I should say, it was the genesis of those movements, so things were a little rough. 3D printers, for example, are almost formulaic at this point. Kits are easily obtained, and finished products can be had for $300–$400 for something that would totally blow away what we had in 2010.

At that time, I was playing around with tiny devices as well. How to make a light turn on from the internet. How to turn anything on from a simple radio controller. As such, I was into Arduino microcontrollers, which were making the rounds of popularity, and much later, the Raspberry Pi and other “Single Board Computers”. There were also tons of sensor modules (temperature, accelerometers, light, moisture, motion, etc.), and little radio transmitters and receivers. The protocols were things like Zigbee, and just raw radio waves that could be decoded to ASCII streams.

As such, I accumulated quite a lot of kit to cover all the bases. My general motto was: “Buy two of everything, because if one breaks…”

The purchasing playground for all this kit was limited to a few choice vendors. In the past it would have been Radio Shack and HeathKit, but in 2010, it was:

Adafruit

Seeed Studio

SparkFun

There were myriad other creators coming up with various dev boards, like the low-power JeeLabs, or Dangerous Prototypes and their Bus Pirate product (still going today). But mostly their stuff would end up at one of these reliable vendors, along with their own creations.

Lately, and why I’m writing this missive, I’ve been looking at the landscape of my workshop, wanting to regain some space and make room for new projects. As such, I started looking through those hidey-holes where electronic components tend to hide and hang out for generations. I’ve started going through the plastic bins, looking for things that are truly out of date, no longer needed, never going to find their way into a project, no longer supported by anyone, and generally just taking up space.

To wit, I’ve got a growing list of things headed for the scrap heap:

433MHz RF link kit, 915MHz RF link kit, various versions of Arduinos, various versions of Raspberry Pi, TV-B-Gone kit (built one, tossing the other, maybe save it for soldering practice for the kids), various Zigbee modules, Parallax Propeller (real neat stuff), SIM card reader, Gadget Factory FPGA boards and wings, trinkets, wearables, and myriad other things, either as kits, boards, and what have you.

I’m sad to see it go, knowing how lovingly I put it all together over the years. But most of that stuff is from 13 years ago. Things have advanced since then.

It used to be the “Arduino” was the dominant microcontroller and form factor for very small projects. Those boards could run $30, and weren’t much compared to what we have today. Nowadays, the new kids in town are the ESP32 line of compute modules, along with form factors such as the Adafruit-supported “Feather”. A lot of the modules you used to buy separately, like Wi-Fi, are just a part of the chip package, along with Bluetooth. Even the battery charging circuitry, which used to be a whole separate board, is just a part of the module now. I can buy a Feather board for $15, and it will have LiPo charging circuitry, USB-C connectivity for power and programming, Wi-Fi (b/g/n), and Bluetooth LE. The same board will have 8 or 16MB of flash, and possibly even dual cores! That’s about $100 worth of components from 2010, all shrunken down to a board about the size of my big thumb. Yes, it’s definitely time to hit refresh.

So, I’m getting rid of all this old stuff, with a tear in my eye, but a smile on my face, because there’s new stuff to be purchased!! The hobby will continue.

I’m happily building new machines, so my purchases are more targeted than the general education I was pursuing back then. New CPUs, new instruction sets, new data sheets, new capabilities, dreams, and possibilities. It’s both a sad and joyous day, because some of the new stuff at the low end even has the words “AI Enabled” on it, so let’s see.


Embodied AI – Software seeking hardware

The “AI” space, writ large, covers an array of different topics. At this moment in time, the Large Language Models (LLMs) have captured everyone’s imagination, due to their uncanny ability to give seemingly good answers to a number of run-of-the-mill questions. I have been using ChatGPT specifically for the past year or so, and have found it to be a useful companion for certain tasks. The combination I use is GitHub Copilot, in my Visual Studio development environment, and ChatGPT on the side. Copilot is great for doing very sophisticated copy and paste based on comments I type in my code. ChatGPT is good for exploring new areas I’m not familiar with, and making suggestions as to things I can try.

That’s great stuff, and Microsoft isn’t the only game in town now. Google, with their Bard/Gemini, is coming right along the same path, and Meta isn’t far behind with their various Llama-based offerings. I am currently exploring beyond what the LLM models provide.

One of the great benefits I see of AI is the ability to help automate various tasks. Earlier in the 20th century, we electrified and motorized a lot of tasks, which accelerated the industrial revolution, giving us everything from cars to tractors, trains, airplanes, and rockets. Now we sit at a similar nexus. We have the means to not just motorize everything, but to give everything a little bit of intelligence as well. What I’m really after in this is the ability to create more complex machines without having to spend the months and years to develop the software to run them. I want them to ‘learn’. I believe this can make the means of production of goods accessible to a much broader base of the population than ever before.

What I’m talking about is manufacturing at the speed of thought. A facility where this is done is a manufactory.

In my idealized manufactory, I have various semi-intelligent machines that are capable of learning how to perform various tasks. At a high level, I want to simply think about, and perhaps visualize a piece of furniture, turn to my manufactory and say “I need a queen sized bed, with four posts, that I can assemble using a screwdriver”. What ensues is what you might expect from a session with ChatGPT, a suggestion of options, some visualization with some sort of Dall-E piece, and ultimately an actual plan that shows the various pieces that need to be cut, and how to assemble them. I would then turn these plans over to the manufactory and simply say “make it so”, and the machinery would spring into life, cutting, shaping, printing, all the necessary pieces, and delivering them to me. Bonus if there is an assembly robot that I can hire to actually put it together in my bedroom.

Well, this is pure fantasy at this moment in time, but I have no doubt it is achievable. To that end, I’ve been exploring various kinds of machines from first principles to determine where the intelligence needs to be placed in order to speed up the process.

I am interested in three kinds of machines:

CNC Router – Essentially a router, or spindle, which has a spinning cutting bit. Typically rides on a gantry across a flat surface, and is capable of carving pieces.

3D Printer – An automated hot glue gun, and the workhorse of plastic part generation. Basically a hot glue gun mounted to a tool head that can be moved in a 3D space to additively create a workpiece.

Robotic Arm – Typically with 5 or 6 joints, can have various tools mounted to the end. Good for many different kinds of tasks from welding, to picking stuff up, to packing items into a box.

There are plenty of other base machines, including laser cutters, milling machines, lathes, and presses, but I’ve chosen these three because they represent different enough capabilities, but they’re all relatively easy to build using standard tools that I have on hand. So, what’s interesting, and what does AI have to do with it?

Let’s look at the 3D Printer.

the100 – This is a relatively small 3D printer where most of the parts are 3D printed. The other distinction it holds is that it’s super fast when it prints, rivaling anything in the consumer commercial realm. The printability is what drew me to this one, because it means all I need to get going is another relatively inexpensive ($300) 3D printer. And of course, once the100 is built, it can 3D print the next version, even faster, and so on and so forth.

The thing about this, and all tools, is they have a kinematic model. That is, they have some motors, belts, pulleys, etc. Combined, these guts determine that this is a machine capable of moving a toolhead in a 3D space in a certain way. I can raise and lower the print bed in the Z direction. I can move the tool head in the XY direction. The model also has some constraints, such as speed limits based on the motors and other components I’m using. There’s also constraints as to the size of the area within which it can move.
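As a sketch of what that kinematic model amounts to in practice, here’s roughly how firmware describes one axis: steps-per-mm for each motor, travel limits, and a speed constraint. The numbers below are assumptions for illustration, not the100’s actual specs.

```python
# A minimal, hand-rolled kinematic description for a Cartesian printer.
# All values are illustrative assumptions, not real machine specs.

STEPS_PER_MM  = {"x": 80, "y": 80, "z": 400}   # motor steps per mm of travel
LIMITS_MM     = {"x": (0, 165), "y": (0, 165), "z": (0, 165)}
MAX_FEED_MM_S = 300                             # speed constraint

def move_to_steps(axis, target_mm):
    """Convert a target position in mm to motor steps, enforcing travel limits."""
    lo, hi = LIMITS_MM[axis]
    if not lo <= target_mm <= hi:
        raise ValueError(f"{axis}={target_mm}mm is outside the travel limits")
    return round(target_mm * STEPS_PER_MM[axis])

print(move_to_steps("x", 100.0))   # 8000 steps
print(move_to_steps("z", 10.5))    # 4200 steps
```

Everything the firmware does, from homing to printing, bottoms out in conversions like this plus the timing of the step pulses.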

The way this is all handled today is that clever people come up with the programs that tie all this stuff together. We hard-code the kinematic model into the software and run something like Klipper, or Marlin, or various others, which take all that information, are fed a stream of commands (G-code), and know how to make the motors move in the right way to execute the commands.
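G-code itself is just text, which is part of why this whole pipeline is so hackable. Here’s a minimal sketch of parsing the common G1 linear-move command; this is my own toy parser for illustration, not Marlin’s or Klipper’s actual code.

```python
# Toy G-code line parser: handles simple word/value commands like
# "G1 X10.5 Y20 F1500" and strips end-of-line comments.

def parse_gcode_line(line):
    """Return (command, params) for one G-code line, or None for blank/comment lines."""
    line = line.split(";", 1)[0].strip()   # drop ";" comments
    if not line:
        return None
    words = line.split()
    command = words[0].upper()
    # Each remaining word is a letter followed by a number, e.g. "X10.5".
    params = {w[0].upper(): float(w[1:]) for w in words[1:]}
    return command, params

cmd, params = parse_gcode_line("G1 X10.5 Y20 F1500 ; move over")
print(cmd, params)   # G1 {'X': 10.5, 'Y': 20.0, 'F': 1500.0}
```

A real interpreter adds modal state (units, absolute vs. relative moves) on top of this, but the core is exactly this word-by-word split.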

There is typically a motherboard in these machines that has a combination of motor control and motion control, all wrapped up in a tight package.

I want to separate these things. I want motor control to be explicit, and here I want to inject a bit of AI. In order to ‘embody’ AI, I need to teach a model about its kinematics. From there, I want to train it on how to move based on those kinematics. I don’t want to write the code telling it every step of how to move from point A to B, which is what we do now. I want to let it flop around, giving it positive reinforcement when it does the right thing, and negative when it doesn’t. Just like we do with cars, just like we do with characters in video games. This is the first step of embodiment. Let the machine know its senses and actuators, and encourage it to learn how to use itself to perform a task.
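As a tiny taste of what “letting it flop around” means, here’s a toy tabular Q-learning run on a one-dimensional axis: the “machine” knows only that it has two actions (step left, step right), pays a small penalty per move, gets a reward for reaching the target, and learns the behavior itself. All the numbers here (track size, rewards, rates) are illustrative assumptions, a long way from a real motion-learning setup.

```python
import random

# Tabular Q-learning on a 6-position track: learn to walk right to the target.
TRACK, TARGET = 6, 5
ACTIONS = (-1, +1)                 # step left, step right
q = {(s, a): 0.0 for s in range(TRACK) for a in ACTIONS}
random.seed(1)

for _ in range(500):               # training episodes ("flopping around")
    s = 0
    for _ in range(50):
        # epsilon-greedy: explore 20% of the time, otherwise act greedily
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), TRACK - 1)
        r = 1.0 if s2 == TARGET else -0.01     # reward at target, small move cost
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2
        if s == TARGET:
            break

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK)]
print(policy)   # the learned "keep stepping right toward the target" behavior
```

Nobody told it the route; the reward signal shaped it. Scaling that idea from a 6-cell track to real motor kinematics is the hard, interesting part.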

Basic motor control is something the model needs to be told, as part of the kinematic model. Motion control is the next level up. Given a task, such as ‘draw a curved line from here to there’, which motors to engage, for how long, in which sequence, when to accelerate, how fast, how long, best deceleration curve, that’s all part of the motion control, and something a second level of intelligence needs to learn.
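That acceleration and deceleration piece of motion control has a classic textbook shape: the trapezoidal velocity profile. Here’s a sketch of computing one, with illustrative numbers; any motion controller, hand-written or learned, has to respect exactly these kinds of constraints.

```python
import math

# Trapezoidal velocity profile for a 1D move: accelerate to v_max,
# cruise, then decelerate. Short moves become a triangle (no cruise).

def trapezoid(distance, v_max, accel):
    """Return (t_accel, t_cruise, t_decel) in seconds for a 1D move."""
    d_ramp = v_max * v_max / (2 * accel)    # distance needed to reach v_max
    if 2 * d_ramp >= distance:              # can't reach v_max: triangle profile
        v_peak = math.sqrt(distance * accel)
        t = v_peak / accel
        return (t, 0.0, t)
    t_ramp = v_max / accel
    t_cruise = (distance - 2 * d_ramp) / v_max
    return (t_ramp, t_cruise, t_ramp)

print(trapezoid(100.0, 50.0, 500.0))   # long move: accel, cruise, decel
print(trapezoid(4.0, 50.0, 500.0))     # short move: triangle, no cruise
```

Real planners layer jerk limits and lookahead across many queued moves on top of this, but the trapezoid is the primitive underneath.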

On top of all that, you want to layer an ability to translate from one domain to another. As a human, or perhaps another entity in the manufacturing process, I’m going to hand you a ‘.stl’ or ‘.step’ or various other kinds of design files. You will then need to translate that into the series of commands you understand you can give to your embodied self to carry out the task of creating the item.
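That translation layer starts with simply reading the design file at all. ASCII STL, for instance, is just a soup of triangles (“facets”), simple enough to sketch a reader for by hand. This is a toy parser for illustration, not a production one; real-world STL files are usually the binary variant.

```python
# Minimal ASCII STL reader: collect each facet's three vertices.

def parse_ascii_stl(text):
    """Return a list of facets; each facet is a list of three (x, y, z) tuples."""
    facets, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts[:1] == ["vertex"]:
            current.append(tuple(float(v) for v in parts[1:4]))
        elif parts[:1] == ["endfacet"]:
            facets.append(current)
            current = []
    return facets

sample = """solid toy
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
endsolid toy"""

print(len(parse_ascii_stl(sample)))    # 1 facet
print(parse_ascii_stl(sample)[0][1])   # (1.0, 0.0, 0.0)
```

Once the geometry is in hand, slicing it into toolpaths is where the real domain translation happens.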

But, it all starts down in motor control, and kinematic modeling.

Next up is the CNC Router

This is the LowRider 3 by V1 Engineering. What’s special here, again, is the ease of creating the machine. It has mostly 3D printed parts, and uses standard components that can be found at a local hardware store. At its core is a motor controller very similar to the ones used in the 3D printer case. Here again, the machine is running in a pretty constrained 3D space, and the motor control is very similar to that of the 3D printer. These two devices run off different motherboards, but I will be changing that so they essentially run with the same brain when it comes to their basic motor control and kinematic understanding.

Whereas the 3D printer is good for small parts (like the ones used to construct this larger machine), the CNC router, in this case, is good for cutting and shaping sheet goods, like large 4ft x 8ft sheets of plywood for cabinet and furniture making. Giving this platform intelligence gives us the ability to send it a cut list for a piece of furniture and have it figure that out and just do it.
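“Send it a cut list and have it figure that out” ultimately includes nesting: deciding which parts come out of which sheet. Real nesting is a hard 2D problem; here’s a deliberately tiny 1D version (first-fit-decreasing cuts from standard-length stock) just to show the shape of the decision. The stock length and kerf values are assumptions for illustration.

```python
# Toy cut-list packing: fit part lengths onto stock boards, 1D only.
# First-fit-decreasing: biggest parts first, each onto the first board with room.

def pack_cuts(cuts_mm, stock_mm=2440, kerf_mm=3):
    """Return a list of boards; each board is the list of cuts assigned to it."""
    boards = []
    for cut in sorted(cuts_mm, reverse=True):
        for board in boards:
            used = sum(board) + kerf_mm * len(board)   # material plus saw kerf
            if used + cut <= stock_mm:
                board.append(cut)
                break
        else:
            boards.append([cut])                       # start a new board
    return boards

parts = [1200, 1200, 800, 600, 600, 400]
print(pack_cuts(parts))   # parts grouped so each board stays within 2440mm
```

The learned version of this would trade the greedy heuristic for something that also accounts for grain direction, clamping, and toolpath efficiency.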

Of course, these capabilities exist in larger industrial machines, that have typically been programmed, and are tied to CAD/CAM software. Here though, I’m after something different. I don’t want to “program” it, I want to teach it, starting from the base principles of its own kinematics.

Last is the venerable Robot Arm

Here, I am building a version of the AR4 MK2 robot arm from Annin Robotics.

This machine represents a departure from the other two, with 6 degrees of freedom (shoulder, elbow, wrist, etc). The motors are larger than those found in the 3D printer or CNC router, but how to control and sense them is relatively the same. So, again, ultimately I want to separate sense and motor control from motion control. I will describe a kinematic model, and have the bot learn how to move itself based on reinforcement learning on that model.
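The kinematic model the arm would be trained against starts with forward kinematics: chain the joint rotations and link lengths together to find where the end of the arm is. A full 6-DOF chain uses the same idea as this 2-joint planar sketch; the link lengths here are made-up numbers, not AR4 specs.

```python
import math

# Forward kinematics for a 2-link planar arm (a simplified stand-in for
# a 6-DOF chain). Link lengths are illustrative, not real specs.
L1, L2 = 300.0, 250.0   # link lengths in mm

def forward(theta1, theta2):
    """End-effector (x, y) for shoulder angle theta1 and elbow angle theta2 (radians)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

print(forward(0.0, 0.0))           # fully extended along x: (550.0, 0.0)
print(forward(math.pi / 2, 0.0))   # fully extended straight up
```

The reinforcement-learning piece is essentially the inverse problem: given a target (x, y, z) pose, discover the joint angles and motions that get there, without being handed the closed-form inverse kinematics.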

All of this is possible now because of the state of the technology. Microcontrollers, or very small computers, are more than capable of handling the complex instructions to control a set of motors. This is a departure from just 10 years ago, when I needed a complete real-time Linux PC with a parallel port just to control the motors. Now I can do it with an ESP32-based device that costs less than $20 and can run off a hobby battery. Similarly, the cost of ‘intelligence’ keeps dropping. There are runtimes such as llama.cpp that can run LLMs on a Raspberry Pi-class machine, which can be easily incorporated into these robot frames.

So, my general approach to creating the manufactory is to create these robot frames from first principles, and embody them with AI as low as we can go, then build up intelligence from there.

At this time, I have completed the AR4 arm and the LowRider CNC. the100 printer is in progress, and should be complete in a couple of weeks. Then begins the task of creating the software to animate them all, run simulations, train models, and see where we get to.


Obsolete and vulnerable?

For the past few years, I’ve had this HP Photosmart printer.  It’s served me well, with nary a problem.  Recently, I needed to replace ink, so I spent the usual $60+ to replace all the cartridges, and then it didn’t work…

An endless cycle of “check the ink” ensued, at which point I thought, OK, I can buy some more cartridges, rinse, repeat, or I can buy another printer.  This is the problem with printers these days.  Since all the money is made on the consumables, buying a new printer is anywhere from a rebated ‘free’, to a few hundred dollars.  Even laser printers, which used to cost $10,000 when Apple came out with their first one back in the day, are a measly $300 for a color laser!

So, I did some research.  In the end I decided on the HP Color LaserJet Pro MFP M277dw.

It’s a pretty little beast.  It came with an installation CD, which is normal for such things.  But, since my machine doesn’t have a CD/DVD/BFD player in it, I installed software from their website instead.

It’s not often that I install hardware in my machine, so it’s a remarkable event.  It’s kind of like those passwords you only have to use once a year.  You’ll naturally try to follow the most expedient path.  So, I download and install the HP installer appropriate for this device and my OS.  No MD5 checksum available, so I just trust that the download from HP (at least over HTTPS) is good.  But, these days, any compromise to that software is probably deep in the firmware of the printer already.
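For what it’s worth, had HP published a checksum, verifying the download would only take a few lines. Here’s a sketch using SHA-256; the filename and the idea of a vendor-published digest are hypothetical here.

```python
import hashlib

# Compute a file's SHA-256 digest in chunks, so large installers
# don't have to fit in memory all at once.

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage, comparing against a digest published by the vendor:
#   expected = "...digest copied from the vendor's download page..."
#   assert sha256_of("hp-installer.exe") == expected
```

It only works if the vendor publishes the digest over a channel you trust, which is exactly what was missing here.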

The screens are typical, a list of actions that are going to occur by default.  These include automatic update, customer feedback, and some other things that don’t sound that interesting to the core functioning of my printer.  The choice to turn these options off is hidden behind a blue colored link at the bottom of the screen.  Quite unobtrusive, and if I’m color blind, I won’t even notice it.  It’s not a button, just some blue text.  So, I click the text, which turns on some check boxes, which I can check to turn off various features.

So, further with the installation, “Do I want HP Connect?”  Well, I don’t know, I don’t know what that is.  So, I leave that checked.  Things rumble along, and a couple of test pages are printed.  One says: “Congratulations!” and proceeds to give me the details on how I can send email to my printer for printing from anywhere on the planet!  Well, that’s not what I want, and I’m sure it involves having the printer talk to a service out on the internet looking for print requests, or worse, it’s installed a reverse proxy on my network, punching a vulnerability hole in the same.  It just so happens a web page for printer configuration shows up as well, and I figure out how to turn that particular feature off.  But what else did it do?

Up pops a dialog window telling me it would like to authenticate my cartridges, giving me untold riches in the process.  Just another attempt to get more information on my printer, my machines, and my usage.  I just close that window, and away we go.

I’m thinking, I’m a Microsoft employee.  I’ve been around computers my entire life.  I probably upgrade things more than the average user.  I know hardware, identity, security, networking, and the like.  I’m at least an “experienced” user.  It baffles me to think of how a ‘less experienced’ user would deal with this whole situation.  Most likely, they’d go with the defaults, just clicking “OK” when required to get the darned thing running.  In so doing, they’d be giving away a lot more information than they think, and exposing their machine to a lot more outside vulnerabilities than they’d care to think about.  There’s got to be a better way.

Ideally, I think I’d have a ‘home’ computer, like ‘Jarvis’ for Tony Stark.  This is a home AI that knows about me, my family, our habits and concerns.  When I want to install a new piece of kit in the house, I should just be able to put that thing on the network, and Jarvis will take care of the rest, negotiating with the printer and manufacturer to get basic drivers installed where appropriate, and only sharing what personal information I want shared, based on knowing my habits and desires.  This sort of digital assistant is needed even more by the elderly, who are awash in technology that’s rapidly escaping their grasp.  Heck, forget the elderly; even for average computer users, whose interaction with a ‘computer’ extends to their cell phones, tablets, and console gaming rigs, this stuff is just not getting any easier.

More than anything, this lesson in hardware installation reminds me that the future of computing doesn’t always lie in the shiny new stuff.  Sometimes it’s just about making the mundane work in an easier, more secure fashion.

 


Have Computicles Arrived?

So, I’ve written quite a lot about computicles over the past few years.  In most of those articles, I’m talking about the software implementation of tiny units of computation.  The idea for computicles stems from a conversation I had with my daughter circa 2007, in which I was laying out a grand vision of the world where units of computation would be really small, fit-in-your-hand sized, and able to connect and do stuff fairly easily.  That was my envisioning of ubiquitous computing.  And so, yesterday, I received the latest creation from Hardkernel, the Odroid HC1 (HC – Home Cloud).


Hardkernel is a funny company.  I’ve been buying at least one of everything they’ve made in the past 5 years or so.  They essentially make single board computers in the “credit card” form factor.  What you see in the picture is the HC1, with an attached 120GB SSD.  The SSD is a standard 2.5″ drive, so that gives you a sense of the size of the thing.  The black part is aluminum, and it’s the heat sink for the unit.

The computer bit of it is essentially a reworked Odroid XU4, which all by itself is quite a strong computer in this category.  Think Raspberry Pi, but 4 or 5 times stronger.  The HC1 has a single SATA connector to slot in whatever storage you choose.  No extra ribbon cables or anything like that.  The XU4 itself can run variants of Linux, as well as Android.  The uSD card sticking out the right side provides the OS.  In this particular case I’m using OMV (Open Media Vault), because I wanted to try the unit out as a NAS in my home office.

One of the nice things about the HC1 is that it’s stackable.  So, I can see piling up 3 or 4 of these to run my local server needs.  Of course, when you compare to the giant beefy 64-bit super server that I’m currently typing at, these toy droids give it very little competition in the way of absolute compute power.  They even did an analysis of bitcoin mining and determined the number of years it would take to get a return on the investment.  But computing, and computicles, aren’t about absolute concentrated power.  Rather, they are about distributed processing.

Right now I have a Synology, probably the equivalent of today’s DS1517.  That thing has RAID up the wazoo, redundant power, multiple NICs, and it’s just a reliable workhorse that won’t quit, so far.  The base price starts at $799, before you actually start adding storage media.  The HC1 starts at $49.  Of course, there’s no real comparison in terms of reliability, redundancy, and the like, or is there?

Each HC1 can hold a single disk.  You can throw on whatever size and variety you like.  This first one has a Samsung SSD that’s a couple years old, at 120GB.  These days you can get 250GB for $90.  You can go up to 4TB with an SSD, but that’s more like a $1,600 proposition.  So, I’d be sticking with the low end.  That makes a reasonable storage computicle roughly $150.

You could of course skip the SSD or HDD and just stick in a largish USB thumb drive, or a 128GB uSD for $65, but the speed of that interface isn’t going to be nearly as fast as the SATA III interface the HC1 is sporting.  So, great for small-time use, but for relatively constant streaming and downloads, the SSD, and even HDD, solutions will be more robust.

So, what’s the use case?  Well, besides the pure geekery of the thing, I’m trying to imagine more appliance like usage.  I’m imagining what it looks like to have several of these placed throughout the house.  Maybe one is configured as a YouTube downloader, and that’s all it does all the time, shunting to the larger Synology every once in a while.

How about video streaming?  Right now that’s served up by the Synology running a Plex server, which is great for most clients, but sometimes, it’s just plain slow, particularly when it comes to converting video clips from ancient cameras and cell phones.  Having one HC1 dedicated to the task of converting clips to various formats that are known to be used in our house would be good.  Also, maybe serving up the video itself?  The OMV comes with a minidlna server, which works well with almost all the viewing kit we have.  But, do we really have any hiccups when it comes to video streaming from the Synology?  Not enough to worry about, but still.

Maybe it’s about multiple redundant distributed RAID.  With 5–10 of these spread throughout the house, each one could fail in time, be replaced, and nothing would be the wiser.  I could load each with a couple of terabytes, and configure some form of pleasant redundancy across them, and be very happy.  But then, there’s the cloud.  I actually do feel somewhat reassured having the ability to back up somewhere else.  As recent flooding in Texas shows, as well as wildfires, no matter how much redundancy you have locally, it’s local.

Then there’s compute.  Like I said, a single beefy x64 machine with a decent GPU is going to smoke any single one of these, and likewise a small cluster of them.  But that doesn’t mean they’re not useful.  Odroid boards are ARM based, which makes them inherently lower in power consumption than their Intel counterparts.  If I have some relatively light loads that are trivially parallelizable, then having a cluster of a few of these might make some sense.  Again with the ubiquitous computing: if I want the Star Trek style “computer, where’s my son?”, or “turn on the lights in the garage”, without having to send my voice to the cloud constantly, then performing operations such as speech recognition on a little cluster might be interesting.

The long and short of it is that having a compute/storage module in the $150 range makes for some interesting thinking.  It’s surely not the only storage option in this range, but the combination of darned good hardware, tons of software support, low price and easy assembly, gives me pause to consider the possibilities.  Perhaps the hardware has finally caught up to the software, and I can start realizing computicles in physical as well as soft form.


What happens when PC cooling fails?

So, a few months back, I finished the ultra-cool tower PC build.  A strong motivator for building that system was to use a liquid cooling loop, because I had never done so before.  So, how has it fared a few months on?

Well, it started with some strange sound coming from the pump on the reservoir.  It was making some clicking sound, and I couldn’t really understand why.  Then I felt the tubing coming out of the top of the CPU, and it was feeling quite warm.  Basically, the liquid cooling system was not cooling the system.

Bummer.

But, I’m a tinkerer, so I figured I’d just take it apart and figure out what was going on.  I took apart all the tubing, and took the CPU cooling block off the CPU as well.  I opened up that block, and what did I see?  A bunch of gunk clogging the very fine fins within the cooling block.  It was this white, chalky looking stuff, and it was totally preventing the water from flowing through.  As it turns out, the Thermaltake system that I installed came with some Thermaltake liquid coolant, and that stuff turns out to be total crap.  After reading some reviews, this appears to be a common affliction: the Thermaltake C1000 red coolant will eventually leave a white residue clogging the very fine parts of your cooling loop, forcing you to flush and refill, or worse.

Well, that’s a bummer, but I would have been OK after that discovery.  The problem is, along the way, I put the system back together, turned it on to re-flush the loop, and walked away for a bit…

As luck would have it, the Android phone I used to take the various pictures factory reset itself, so I no longer have the evidence of my hasty failure.  It so happens that when the CPU cooling block was still clogged and I put the system back together, I didn’t tighten the tube connecting to the block enough.  Enough pressure built up that the tube popped off.  Needless to say, I’ll need to replace the carpet in my home office.  And I can tell you, the effect of spilling about a half gallon of water on the innards of your running computer (motherboard, power supply, and all the rest) is almost certain death for those components…

Sigh.

So, I started by drying everything off as best I could.  I used alcohol and Q-tips to dab up the obvious stuff.  The motherboard simply would not turn on again.  There are a lot of things that could be wrong, but I thought I’d start with the motherboard.

I ordered a new motherboard.  This time around I got the Gigabyte GA-Z170X-Gaming 7.  This is not the exact same motherboard as the original.  It doesn’t have the option to flash the BIOS without RAM installed, and it doesn’t have as many power phases, but for my needs, saving $120 was fine, since I had lost the motivation to go all out on this replacement.

The motherboard was same day delivery (which is why Amazon is great).  It installed without a flaw.  Turned it on and… glitchy internal video!  Aagghh.  OK, return it, and in another day get another of the same.  This time… no problems.  Liquid cooling system back together, killer video card installed, monitors hooked up, and it all works as flawlessly as before, if not better.

This time around, I’m not installing any fancy cooling liquid.  I’ve done my homework, and everyone who actually builds these systems to run for the long haul simply uses distilled water, and perhaps some biocide.  I chose to get one of those silver spirals to act as the biocide.  That way there are no chemicals to deal with.

When the silver coil arrives, I’ll have to drain the pipes one more time to install it in the reservoir.  I’ll also take the opportunity to use pipe cleaners on the tubes, which have become a bit milky looking due to the sediment from the C1000 cooling liquid.  I now have a checklist for assembling the cooling system, to ensure I tighten all the right fittings, and hopefully avoid another spillage mishap.

Thankfully the CPU, memory sticks, video card, power supply, and NVMe storage were all spared spoilage from the flooding incident.  That would have effectively been a new PC build (darn, those CPUs are expensive).

Lesson learned.  There’s quite a difference between building a liquid cooling system for looks, vs building one that will actually function for years to come.  I will now avoid Thermaltake like the plague, as I’ve found much better parts.  Next machine I build will likely not use liquid cooling at all, because it won’t be as visible, so the aesthetic isn’t critical, and the benefits are fairly minimal.  Enthusiasm is great, because it leads to doing new things.  But, I have to temper my enthusiasm with more research and caution.  I don’t mind paying more, piecing things together, rather than going for the all-in-one kit.

Lessons learned.

Now, back to computing!


Decommissioning Makerbot Cupcake

This was the first 3D printer I ever had.

[Image: wp_20170215_010-2]

This picture shows the machine after its last Frankenstein operation, circa 2011.  I purchased it as a kit so that I could ultimately print simple connectors like this: http://www.thingiverse.com/thing:11255, which join drinking straws so my daughter and I could build structures like geodesic domes.

Well, this machine never printed more than one or two objects in its wacky, storied life until it was replaced with the original Up! machine, which just worked out of the box.

Those were heady days in the 3D printing industry.  RepRap, and the notion of printers printing parts for themselves, was still an ideal, and the likes of Ultimaker, Zortrax, and even Prusa were just glimmers in their creators’ eyes.

The hot end for this thing (that mass of acrylic and steel sitting on the 5″×5″ platform in the middle there) probably weighed nearly a pound, consumed 3mm filament, and just didn’t really work.

[Image: wp_20170220_026]

All those nuts and bolts, tons of acrylic, funky resistors, and even a piece of Delrin.  It was all well intentioned, and all very experimental, and it all just didn’t quite work for me.  Compared to a modern extruder/hot end combo, this might seem relatively stone age, but it did have all the basics that we take for granted today.

I’m happy we built this machine.  It was a great bonding experience, and it was then that my daughter and I cemented ourselves as ‘makers’.  We went to a MakerFaire, played with electronics, sewed leds into a dress, and generally carried ourselves into the modern age of making.

I have since purchased an original Up!, an early Prusa Mendel, and an original Ultimaker.  Then I jumped into another realm with a ZCorp 650 and ZCorp 660, then back down to earth with an Afinia Up Box, and lately a Type A Machines Hub and a Prusa i3 MK2.  That’s a lot of plastic, powder, glue, and frustration right there in all that madness.

I purchased that first kit to make a little something for me and my daughter to play with.  I’ve since explored the various ways in which these devices may or may not be utilized in the realm of custom on-demand manufacturing.  That journey continues.

This Cupcake was both fun and frustrating as all heck.  I’m a bit nostalgic to see it go, but now that its real value is in its various M3 screws and nuts, I’m happy to let this particular nightmare in our printing history go.

RIP, Cupcake.  You served us well.



Home Automation – Choosing Bulbs

With a New Year’s resolution to replace all incandescent bulbs in the house with LEDs, I actually started the process back in December.  I purchased a ton of these:

Sylvania Ultra LED Light Bulb

These bulbs were already cheap at the local Lowe’s Home Improvement store.  But for Christmas, they were $2.20 each!  Well, I only needed 7 more to finish the flood light replacement job I had started, so I got them.  At this price they’re cheaper than incandescents by a long shot, so why not?

For my particular house, the vast majority of bulbs in common areas are these floods, so replacing them all will make us feel good about the environment.

In most cases, these bulbs are in sets of three or more, so there’s a question of the light switch that goes with them.  In two cases, the family room and kitchen, there are mechanical dimmer switches.  Those are older Lutron dimmers, which were fine for the older floods, but not tuned to the all-new LED floods just installed.  They work, but in a kind of clunky way.  When you dim really low, the lights can start to flicker, becoming unbearable to be under.  So, some new dimmers are required.

There’s a whole story on dimmers waiting to be written, but there are basically two ways to go.  Either stick with another simple mechanical dimmer, with no automation capability, but at least LED savvy, or go with an automation capable dimmer.

This is as much a cost concern as anything, so I went with both, depending on the room.

[Image: lutroncfldimmer]

This is a typical mechanical dimmer.  I chose Lutron models that are pretty much the same as the old ones, except they handle CFLs and LEDs much better.  This is a good choice when you’re not going to do any automation in the area; you just want to flip that switch on or off when you enter and exit the room, simple and sweet.  So, in my kitchen nook, which has 3 lights, I put this one in.  I also put it in for the 9 lights in the kitchen, but after some thought, I decided I want to do some automation there, so I need an automatable switch instead.

[Image: lutronautodimmer]

In this case, it’s a dimmer that works with the Lutron Caseta automation system.  There are myriad automation systems from all sorts of companies.  I went with Lutron because that’s what was already in the house, I’ve known the name for at least 40 years, the reviews on them seem fairly decent, and they work with the Alexa thing.

These are great because they work with the LEDs, they’re automatable, and you can still just use them locally by pushing the buttons for brighter, dimmer, on, off.

So, that covers most of the lights.  But what about all those others, like the bathrooms, bedrooms, entry way, porch, etc?

Well, in most cases, you can just replace a typical 60 watt bulb with the equivalent 9–11W LED, choosing a color temperature to taste (2700–3000K is probably best).  These still work with standard light switches, so there’s nothing more to be done.  It’s probably not worth installing a $50 automated dimmer on each of these lights, but you could if you wanted to.
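The arithmetic behind “why not?” is easy to sketch: swapping a 60W incandescent for a 10W LED saves 50W whenever the light is on. A quick calculator; the electricity rate and daily hours below are assumptions, so plug in your own:

```python
def annual_savings_usd(watts_old: float, watts_new: float,
                       hours_per_day: float, usd_per_kwh: float = 0.12) -> float:
    """Yearly cost difference between two bulbs at a given daily usage."""
    kwh_saved = (watts_old - watts_new) / 1000 * hours_per_day * 365
    return kwh_saved * usd_per_kwh

# 60 W incandescent -> 10 W LED, on 4 hours a day, at an assumed $0.12/kWh:
print(f"${annual_savings_usd(60, 10, 4):.2f} per year per bulb")
```

At a few dollars per bulb purchase price, each bulb pays for itself within the first year under these assumptions.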

Now, there are some spots where you actually want to do a little something with color.  In my house, perhaps on the balcony (3 lights), or a play room, or prayer nook.  In these cases, you can install something like the Philips Hue.

[Image: huecolor]

This is a bulb that is individually addressable.  It requires yet another hub device, this time from Philips.  What you get, though, is the ability to set the bulb to a wide range of colors, as well as the general dimness.  You can set scenes, and if you want to write a little code, you can even hook up a Raspberry Pi to change the color to match the natural daylight.
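For the “write a little code” part, the Hue bridge exposes a simple REST API: you PUT a JSON state document to a light’s endpoint. A minimal sketch from the Pi side; the bridge IP and API username are placeholders you’d obtain from your own bridge:

```python
import json
import urllib.request

def hue_state(on: bool = True, brightness: int = 254, hue: int = 8402) -> dict:
    """Build a Hue light state: bri is 1-254, hue is 0-65535 on the color wheel."""
    return {"on": on, "bri": brightness, "hue": hue}

def set_light(bridge_ip: str, username: str, light_id: int, state: dict) -> None:
    """PUT the state to the bridge's REST API (IP and username are placeholders)."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    req = urllib.request.Request(url, data=json.dumps(state).encode(),
                                 method="PUT")
    urllib.request.urlopen(req)

# Hypothetical usage: dim light 1 to half brightness on a bridge at 192.168.1.50
# set_light("192.168.1.50", "your-api-username", 1, hue_state(brightness=128))
```

A cron job on the Pi calling this every half hour with a hue/brightness curve keyed to the time of day gets you the daylight-matching trick.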

At $50 a bulb, this is a very spendy option, ranking up there with the choice between mechanical and automation-ready dimmer switches.  In this case, you get the automation without having to install an automated dimmer, but you pay the automation cost for every single bulb you buy.  So, for my balcony, it would cost $150 for three lights, or I could go the standard LED and dimmer route for more like $60, assuming I already have the appropriate hub in either case.  What you lose with the standard bulb/dimmer approach is the ability to change the color.  For my balcony, I don’t need to change the color.

So, these automated colored lights make more sense for something like a bathroom, or an office space, or somewhere else where you spend time and care about what the lighting color is doing.

And there you have it.  No matter what you choose, they MUST be LEDs; at least that’s the mantra of the day.  Then you are free to choose a mix of automated dimmers/switches and automated color changing lights.  In the future, all the lighting in new homes will be LED at least, because it’s becoming the cheaper choice for builders.  For higher end homes, I’d expect hubs, automated dimmers, and colored lights to be a standard set of choices the homeowner can make, just like carpet, paint color, and cabinetry.



Building a Tower PC – The Furniture

This is what home computing should look like…

[Image: WP_20161215_001.jpg]

Reminiscent of a Memorex commercial (for those who can remember that iconic commercial with the fellow sitting in his lounger and being blown away).

There’s no point in building out a kick ass liquid cooled blinky light PC if you’re not going to show off your work.  So, I got to thinking about the piece of furniture that was going to showcase the build, and I came up with this design.  It’s built out of 2×4 lumber and MDF, because that’s the stock I had in the garage, and I needed to get rid of it to make room for more…

My design goal was a workbench like thing whose sole purpose would be to act as a computer work table/cabinet thing.  I don’t need a ton of drawers, I can simply stack plastic bins in there, or outside, if I feel I really need them.  I wanted an ample keyboard/mouse surface, because sometimes I need to place another laptop on the surface, or write stuff, and it’s nice to have the room to just push the keyboard back and use the worktop as a worktop.

I started out with a fairly standard looking garage workbench carcass.

[Image: WP_20161213_007.jpg]

I put that power strip in there because it’s totally hidden when the workbench top is on, and it provides enough outlets, spaced far enough apart, that I can plug in the computer, two or three monitors, extra lights, speakers, and other stuff that happens to be sitting on the work top.

The thing is roughly 36″ on a side, with the worktop being 36″x33″ if memory serves correctly.

[Image: WP_20161215_008.jpg]

This is in my ‘home office’ room, so there is carpet.  I had the dilemma of how to cart the thing around, because fully loaded, it’s quite heavy, and unwieldy.  I had a package of those furniture moving pads in a drawer, so I whipped those out, and they work a treat!  Each pad has a vinyl plate bottom, with a rubber top.  The 2×4 lumber sits nicely in the rubber, and I can easily move this thing all over my office all by myself.

With the demands of family, this took roughly two days to assemble.  Now that it all works, I can think about actually finishing it.  I want to make it more like furniture, and less like something you’d find in the garage.  That means some sanding, mahogany staining, varnish, and the like.  I’ll top the 3/4″ MDF top with a 1/8″ piece of hardboard, and put some trim around the edge to act as a buffer and to hide the seam between the hardboard and MDF.  This makes for a nice durable surface that I can tape paper to every once in a while if I happen to do any gluing or other craft work.

I’ve added the speaker system to the workbench, but right now it’s just kind of there, with the wires hanging all over.  I’ll have to drill a couple of circular holes for wire pass through.  To further make it kid proof, I’ll add some plexiglass siding, to keep their delicate little fingers out of the silently whirring fans.

Putting the computer in the corner like this is a pretty good thing.  It’s not taking up main floor space like the desk I was using.  That gives me a ton of space to do other stuff, like set up a mini 3D printer farm.  There’s a corner over by the window ready for exactly that.

In a fit of inspiration, I also removed the couch and chairs, which more often than not were collection places for junk.  Now I have an entirely open wall, ready for yet another workbench something or other.  Oddly enough, the wall on that side of the room is totally bare, and would be a perfect place to receive a 150″ micro projected image, as a large book case is on the opposite wall.  Perhaps that would be good for video conferencing in the large?

At any rate, the killer PC is getting a custom built piece of furniture.  I’m getting a new perspective on my home work space, and life is grand.


Building a Tower PC – one month on

The tower PC has found itself sidled up next to the desk in my office.  It’s not actually the best placement of the beast, as you can’t really admire the innards from that position.  It’s really nice, though, because it’s fairly quiet, save for a faint rumbling in the floor from the cooling reservoir.  You don’t really notice it until you turn it off.

As this thing is fairly quiet, even the occasional click-click of the actual spinning rust disk drive becomes noticeable, and slightly annoying.  So, I decided to make my first mod to the beast.  I took out the Western Digital 2TB drive and put in a Samsung SSD 850 EVO 1TB.  There are a couple of reasons for this replacement.  SSDs are great for speed, silence, and low energy usage.  All good things.  They’re still a bit spendy, though.  The 2TB version would have been twice as much, and then some.  So, 1TB is fine for now, as this machine is not intended to be a storage powerhouse, just enough to handle local stuff fairly fast.

It may not seem like much of a change, but how has it worked out?  Well, when I had the spinning rust in there, I put all my repos on the D: drive, so downloading things from GitHub had a noticeable lag.  So too, compiling stuff with Visual Studio felt a bit sluggish.  My thinking was: why on earth would my laptop (all SSD, all the time) be much faster at fairly simple compilation tasks, when this desktop beast is so much more powerful?  Well, I’ve just done a totally subjective test of compilation after installing the SSD and putting my repos on it.  Conclusion: the snappiness level now meets my expectations.  SSDs truly are a beneficial thing.

Now that I’ve got the snappy beast humming along, I’ll need to reconfigure my home office, build some new worktops, so that I can better display it, and have a much better work surface than my currently crowded desk.  One thing leads to another…