Knowledge Surfing

Surviving in the age of AI

As is fitting of the times, I spent a few minutes with ChatGPT/DALL·E to prompt my way to that image. In the distant past (last year), I would have used a search engine, scouring the internet for such an image. If I could even find anything close to what I wanted, I’d then be worried about copyright, and the fact that the image was probably also being used by someone else. Now, I know this image is “my own”, at least I think so, and it represents exactly what I wanted.

I went through several iterations before I got to this image, and I could go through several more rounds, tweaking along the way, but it was less than 5 minutes’ worth of work. Such is “the power of AI”.

This image represents a couple of things that are going on right now. Between advances in hardware tech and the rise of new things such as LLMs, quantum computing, and humanoid robotics, there’s a wave of change that threatens to swamp our human understanding of the world and the way things work. I recently had a discussion with a friend who’s working in the core of all this AI hullabaloo, and they are clearly thinking “what does this all mean? It’s moving so fast we haven’t truly thought about the implications for humanity…”. Just let that sink in for a moment. If you thought you were being overwhelmed as an outsider, the people on the inside are having the same thoughts.

OK, so… What to do?

The other thing the image represents is a coping mechanism. When faced with a wave, the best thing to do is learn how to surf. I’ve had many a discussion with people where the topic is the tech of today, and how overwhelming it can be. My advice is always “We are being handed magic wands, learn how to be a magician”. Whether it be magic wands or surfboards, the advice is about finding the right way to consume the knowledge, and use the tools, rather than being in awe and swamped by them.

Absorbing knowledge quickly is hard. What’s particularly hard is trying to have complete understanding of a bunch of new things all at once. Our brains just aren’t made for that. We’re much slower, taking years to absorb the simplest concepts such as using linear algebra to transform from 3D space to 2D space. I mean, understanding how consuming the internet can turn into humanoid robotics? That’s a bridge too far I think. And just as you finally grok what you think an LLM is, the models are becoming tiny instead of large, and Apple is telling you inference is best done on your mobile device, rather than in the cloud.

How to cope?

Well, way back in 1986, I actually wrote a story in a self-published magazine that deals with this exact topic. I’m going to have to scan those old stories in and re-publish them, because they seem relevant today. The gist of it was: you have to lower your comprehension quotient in order to understand the bigger picture. Essentially, relax your mind, and your body follows (to misquote a movie line).

Another reference, for the Berkeley hippies such as myself: Abhyanga. It’s an Ayurvedic massage technique that uses oil, and many touches at the same time. The general idea is, you can’t possibly focus on all the stimuli at once, so at some point you just kind of give up, take all the individual touches as one, and relax into the overall experience.

Alright hippy, what does that have to do with the tech of today? Well, everything. But let’s be practical and structured about it.

How to become a knowledge surfer

  • Find a short list of information outlets that you can refer to constantly. These outlets need to be low on drama and gaslighting, and present you with a steady stream of factual information
  • Pick a short list of topics that you’re going to track. It’s hard to follow everything, so just choose two or three things, such as “green energy”, “humanoid robotics”, “quantum communications”
  • Read your info outlets only once a day
  • Write a summary of what you’ve learned at the end of each week
  • Once a month, write a “note to self” style of thing in which you summarize the learnings for the month, and some conclusions about what you think about what you’ve learned
  • Adjust your list of topics, or information outlets at the beginning of the month, but keep the list short
  • Interact with other humans, and share your monthly summaries, to get their reactions, and possibly new perspectives

It’s all about structure and balance. You need to know which waves you’re riding, and you need to have a solid foundation (the surf board) upon which to do your information surfing. “Knowledge” comes from gathering data, testing a hypothesis, learning, and moving ahead. If you don’t have such structure, then you get swamped by the data, always grasping and gasping, and definitely not riding the crest of the wave.

This is my practice on a daily basis. I gather information, I summarize in the form of “Hot Takes”, I disseminate to small audiences and get feedback, I move on to the next week’s worth of news. There’s a lot going on in tech, and the world in general. I find that having a structured approach to information gathering makes knowledge surfing easier.


Manufactory – CNC Router Table

Well, there it is. An actual functioning CNC router table.

Being one to invent random words, I came up with “manufactory”, to mean “manufacturing at the speed of thought”. What am I trying to get at? In general, I’m on a crusade for tequity: having an equity share in technology, for intergenerational wealth creation. Whether it be owning stocks in a company, or patents, or other artifacts, being able to ride the rising tide of tech wealth requires owning a piece of it, and not just being a consumer of tech.

So, what’s this manufactory business about? Software production is one kind of intellectual property, but not all things are software. Everything we interact with in the world was created by someone, or some machine, somewhere in the world. Being able to think up a design, produce the goods economically, and sell them into open markets, is tequity.

A couple blogs back (Embodied AI – Software seeking hardware) I mentioned three machines:

  • 3D printer
  • CNC router
  • Robot arm

There are myriad machines that are used to manufacture goods of all kinds. I am choosing to focus on these three forms because they are immediately approachable, relatively inexpensive, easy to build, and can be used to both create immediately useful and sellable goods, as well as create the parts necessary to build more and different machines.

The machine I built is the LowRider CNC v3, by V1 Engineering. This machine is very simple, primarily able to cut sheet goods, with an emphasis on full sheets (8′×4′) of plywood, MDF, and the like. This is NOT the machine you’re going to use to cut an engine block out of a billet of aluminum. There are many ways to get started on this one. A number of parts are 3D printed. It also uses ‘rails’, which are nothing more than tubing you can buy at any hardware store (electrical conduit tubing). There are bits and bobs of hardware (screws, nuts, bolts, timing belts, motors, electrical board, linear rails) which you can source yourself, or you can just buy the hardware kit, for $306 USD. For this first one, I purchased the hardware kit, and printed all the necessary parts myself. That saves a little money, at the expense of a fair amount of time spent printing. After gaining experience with the first one, I’ve embarked on building a second one.

Same machine, same printed parts, this time in PETG instead of PLA for some of them. I’d say the printed parts cost roughly $50 in plastic, and a couple days of printing, depending on what kind of machine you have. Mostly I print on my Prusa Mini+, because it’s relatively fast. The largest part I had to print on the larger Prusa MK3, because it is too big for the Mini.

Once you’ve got all your printed parts and the hardware kit, you’re ready to assemble. The instructions are very easy to follow, and step by step. You don’t need really fancy tools: a screwdriver, a couple of socket wrenches, and that’s about it. Following the instructions, I was able to assemble the machine in about 3 days of casual time. It could easily go together within a few hours in a single day, assuming you’re well organized with your tools and have a nice workspace.

Alright then, you’ve got a basic machine. Next up are the electronics.

This is the Jackpot board. It was designed specifically to be a low-cost brain for various kinds of CNC machines. In this case, we’re driving a large format CNC router table, but it can also run a laser cutter, or any kind of machine with up to six motors. It’s all you need when you’ve got small motors (up to NEMA 17, realistically). It has the motor drivers onboard, and an ESP32 compute module is the heart of its intelligence.

The board runs a firmware called FluidNC. FluidNC does two things on this board. First, it takes a machine description file, and uses that to understand how to move the motors based on various commands. Second, it interprets the G-code commands that make up your design, and generates the appropriate movements based on those commands: “cut this arc, move this much in that direction, lift the cutter…”.
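To make those commands concrete, here’s a small, hand-written fragment of the kind of Grbl-style G-code FluidNC interprets. The coordinates and feed rates are made up for illustration, not from any real job:

```gcode
G21                      ; units are millimeters
G90                      ; absolute positioning
G0 Z5                    ; lift the cutter clear of the work
G0 X10 Y10               ; rapid move to the start point
G1 Z-3 F300              ; plunge into the material
G2 X50 Y10 I20 J0 F1200  ; cut a clockwise arc around center offset (I, J)
G0 Z5                    ; lift the cutter again when done
```

A CAM program generates files full of lines like these; the firmware’s job is turning each one into coordinated stepper motion.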

Another thing this board does is support a web interface.

This web interface is enough to get you going. You can upload your design to be cut, over the network, press the button to start, and away it goes.

The ESP32 compute module is doing a pretty heavy lift here. It’s running the brains of the machine, sending movement commands to the motors, plus putting up a web interface, plus responding to commands from that interface. All of that from a compute module that costs about $10!! That’s what I call a true computicle!

OK, this is all great. I can build a machine rather quickly, and inexpensively. It has a brain, and it can cut stuff. Now what?

Well, I’m a software guy, so all of this is ultimately in service to a software solution right? I want to put “AI” to use. Ideally, I’d be able to articulate some design, working with something like ChatGPT, and send that design to the machine to be manufactured. To that end, I’ve started creating my own GPT (powered by ChatGPT).

I’ve been training CNC Buddy for a while now. I’ve told it about FluidNC, how you create a configuration file, what G-codes it supports, etc. I’ve gone back and forth, telling it the challenges I had while building my machine, and how I overcame them. CNC Buddy, being a “GPT”, knows language stuff. You could actually ask it to help you generate a G-code file, if you’re really good at prompt engineering. More than likely though, it will point you at a CAD/CAM program for that.

I find CNC Buddy to be most useful in answering questions I might typically post in a build forum, or asking some experienced person. This is great for classroom environments, as the collective can enhance the knowledge base of CNC Buddy with new experiences, making it better at answering questions in the future.

So, that’s where we are today. The basic machine is done, and capable of doing some cuts.

Where do we go from here? Well, after the excitement of “just get it running”, I will now go back and clean up this machine. The wiring can mostly be tidied up, and I need to make some improvements to the table this sits on. In the meanwhile, I will build another one, pretty much the same as this one, but for a larger, full-sized table.

The beauty and benefit of this large format gantry CNC is that other things can be done with it. I’m not going to try and cut metal with this, although it can cut aluminum. I’m going to do things like mount a laser diode, and possibly a 3D printing head. Then the machine becomes more than it was intended for. This is a base platform for enabling all sorts of automated tasks, where having a tool of some sort mounted to a low gantry is useful. I will also be looking at FluidNC, with an eye towards making it smaller and simpler. It’s great that it hosts a web site as part of its duties, but I don’t actually want that UI running there, so I’ll be exploring alternatives.

At any rate, this is the first of the 3 machines we’re currently building. If we want to manufacture at the speed of thought, we need to start with being able to affordably build machines that give us manufacturing capabilities, so here we are.


About that robot…

  • A robot may not injure humanity or, through inaction, allow humanity to come to harm.
  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • A robot must know it is a robot.

Isaac Asimov popularized the “Three Laws of Robotics” in his Robot series, starting in 1940. Asimov is a pillar of modern day science fiction. Not “just a writer”, he was a full on scientist, philosopher, and all around renaissance man. Asimov gave us the word “robotics”, and here we are!

What I find most interesting about his earliest explorations in this area is that he put a lot of time (50 years) into thinking about how these automata would interact with humanity, and how humans would interact with them. Written into his “Three Laws of Robotics” was an obvious attempt to keep humans relevant and unharmed by the advancement of robotics. You might think he had a deference to humans and only viewed robots as slaves, but that is not the case. He ends up with a master, universe-scale robot that guides and protects “humanity’s” evolution over the span of millennia.

If you’re wondering what all the hubbub is about, I strongly suggest you read the “Foundation Trilogy”, or even start with “I, Robot”. Then you’ll say “Hey, wasn’t that the Will Smith movie?”, or “Hey, isn’t ‘Foundation’ on Apple TV right now?” Yup, this is where it comes from alright.

Right now, in 2024, we’re marching through AI and robots at a very rapid clip. The ink has barely dried on our understanding of how to program computers, and we’ve launched full tilt into the creation of autonomous driving, Large Language Models, and humanoid robotics. Of course, “automation” writ large is nothing new. Humanity has been automating since we first picked up tools to smash insects open on rocks. Now is something new though. Now, particularly with LLMs, we’re getting closer to the automation of intellect, and not just labor. All the tools to date have helped us smash bugs better. Food production with automated farm equipment generates yields our grandparents could only dream of. We can produce goods in factories, turning out cars in hours, rather than days. We can even produce rocket engines by the hundreds a year at this point. All simple automation improvements.

But now, we’re automating intellect, meaning, you don’t need that human in the loop to generate the quarterly report. You don’t need the human in the loop to come up with the letter to send to employees wishing them happy holidays. In some cases, you don’t need humans in the loop to come up with the design of a new product, or an original score for a movie, or the entirety of a movie. With emerging tools, such as Sora, you can prompt your way to creating video scenes that are unique, and completely derived from the dream state of an AI.

I do ask myself this though: where are the three-plus laws of robotics?

Are we rushing headlong down this path to automation without any concern for whether humanity is served, or to be served up?

Humanoid robots are particularly interesting, because they embody all the physical automation stuff, while simultaneously being able to embody the intellectual automation as well.

At this moment, there are several efforts under way to create humanoid robots. In some cases, they’re just toys for the kids, something to kick a little soccer ball, or play nursery rhymes. In other cases, they are serious affairs, meant to replace humans on assembly lines in factories. In some cases, they don’t have humanoid features, just a head with some lights. In others, they are striving for human realism, with full-on facial expressions and “skin”.

I don’t think there are any universal “three laws of robotics” though, and there probably should be. Having worked at Microsoft, I know there was some effort to put “ethics” into the AI development space, but those efforts were largely around where and how to source the raw data to feed the emerging AI systems; they weren’t close to addressing how humanoid robots should behave in society. It’s probably similar in other places.

I have gathered a watch list of companies that are engaged in the development of humanoid robotics. I want to watch how they do, not just on the hardware evolution, but on this question of ‘ethics’, or generally how we’re going to evolve these capable beings. Do they become our servants, an evolved form of humans, standing next to us, or our masters and commanders?

A few companies to watch for humanoid robot development

  • Tesla – Optimus
  • Figure AI – Figure 01
  • Boston Dynamics – Atlas
  • Unitree – H1

Why now? There is a confluence of factors making humanoid robotics more of a thing now than ever in the past. One is the myriad breakthroughs in machine learning. The LLM market has certainly shown the way, but essentially, we’ve reached a point where ‘training’ is the way to evolve a system, rather than ‘programming’. It’s easier to show a robot videos of a task, and say “do this”, than to write a ton of code with error conditions, branches, corner cases, and the like. So, that’s a breakthrough.

On the mechanical side, there are advances in batteries (smaller, higher power density, cheaper), driven by the revolution in the Electric Vehicle industry. At the same time, compute density continues to increase every year, and becomes cheaper. And lastly, new kinds of electric motors, actuators, and sensors are being created, partly because of EVs again, and partly because MEMS technologies continue to miniaturize and become more ubiquitous.

The last bit of this is the training models. Being able to simulate an environment as accurately as possible is critical to the rapid training evolution of these new systems. NVIDIA has been providing the compute density, and leading the way in simulation of all manner of environments for AI training.

Couple all this with the likes of Microsoft, Facebook, Apple, Google, and NVIDIA, and you have the world’s most valuable companies investing in a space that is the stuff of science fiction. There is no doubt: we will have practical humanoid robots capable of accomplishing typical human tasks within 2024, or 2025 at the latest. There will be tons of side benefits along the way from the continued evolution of electronics, battery, and materials tech. The real question is, will we evolve humanity to match?

This is the essential question. Asimov thought about, articulated, and refined the laws of robotics because he was concerned with how humanity would evolve with these new tools. Now that we’re on the precipice of actually delivering on the humanoid robot promise, are we putting in the same consideration, or are we in a headlong rush to push product out the door, no matter the cost or consequence to humanity?

This is one area I want to give some focus to. I think there’s tremendous benefit to be had from continuing down this evolutionary path. I want to do it in such a way that humanity doesn’t end up on the evolutionary scrap heap.


Home Automation – 5 years of experience

It was 5 years ago when I wrote about my home automation efforts: Home Automation – Choosing Bulbs

Back then, I was enthusiastic about replacing the myriad halogen bulbs in my house with LED based bulbs, and while I was at it, ‘automating’ them as well. I installed Philips Hue, some Lutron dimmers, and hooked it all up to Alexa so I could say “downstairs lights on!”. So, 5 years on, how has it gone?

Well, aside from doing the occasional demo of turning the lights on and off by talking to the ever present speaker, we really don’t use the automation at all. Why not? It’s far easier to just flick the light switch on or off when you enter/exit a room, and we never really got into ‘automation’, like setting up on/off patterns when on vacation.

These past few days, I decided to have another go at it. Throughout the house, I’ve been retiring various consumer electronics related to TVs, lights, computers, and the like. I won’t say that I’m an Apple fan boi, but they have a lot of kit in their ecosystem that “just works”. So, what have I done?

Lighting

Well, I’ve replaced the Philips Hue lights with those from Nanoleaf. Beyond just basic light bulbs, they have lots of panels, strips, shapes, to play with. I just bought a 3 pack of Matter A19 bulbs from the Apple store.

Simple, wifi enabled, you add them to the Apple Home app, and then you can control them in all the usual ways. I’ve re-purposed an older iPad to play the role of “home control” tablet, so I can change light values and whatnot.

We’re not walking through the house saying “Siri, lights on”, but I can set their color/temperature, so that when they do come on, they’ll be a certain value.

This is easy enough, and allows me to eliminate one bit of kit from the house: the Philips hub, which was located in my office. That location was not actually ideal, because lights that are far from that spot might have trouble connecting.

Video Streaming

Over the years, we’ve used all manner of video streaming devices. Way back in the day, it was ripping DVDs to the Synology NAS, and streaming through Plex on the laptop. Then streaming from the same on the advanced DVD player. Commercial stuff was first streamed on various incarnations of Roku, then along came Amazon Fire sticks, then Google TV, then finally Apple TV.

Given current prices of cheap LED TVs in the 55″ category, these Apple TV boxes are not cheap ($130–$150), but they are pretty darned good, currently capable of 4K video. With the right HDMI cable and a 4K screen, you get such a super sharp picture. I know that technology continues its inevitable march of progress, but I’m OK with 4K output on the streaming box for a while. The only 4K screens we have in the house right now are in my office, and they’re not even our main viewing screens. Upgrading our main screen to 4K will be a big upgrade, so these things will have life for a few years now that they are installed. The other benefit is that they tie into the same Apple Home app that the lights do, so I can control them from one place as well.

The last part is the ‘smart speaker’. We’ve had the Echo Dot from Amazon since as early as they were available. We bought the dot, the cylinder, and even the one with a display (a short-lived experiment). What we’ve found through the years is that the only real use case for us has been playing music at bed time. Other than that one hour of the day, and the occasional kids’ session on the weekends, we leave this thing unplugged, not wanting to allow Amazon to snoop on our conversations, and make purchasing suggestions based on what it heard.

Nope. Instead, I’m going to plug in the Apple HomePod mini I bought a couple years back.

I’m already paying for Apple Music anyway (for listening on my bike rides), so why not use it more? It will sit in my office, not out in the house. I trust Apple more than Amazon when it comes to not selling me to advertisers.

This all seems like a hard core Apple commercial, and until now I hadn’t realized how much Apple kit had replaced a bunch of other stuff. But, I guess they’re the largest consumer electronics company in the world (by market value) for a reason. I’ve gotta admit, when you’re not wanting to get into the guts of how it works, and tweak to the nth degree, it just works, and provides a coherent ecosystem.

So, 5 years on, what’s happened? We’ve replaced all the random experimental stuff, from companies with varying degrees of support, with a full-on Apple home, and I have no regrets. We’re still not turning lights on with our voices, but it’s a relief to just de-commission random bits of tech that served half purposes. We’ll see what things look like in the next 5 years.

I am thinking I want a whole home “virtual assistant”, I mean, given the march of AI and all. So, I’m curious who will be the purveyor of that. As far as the home is concerned, Apple might just take that crown. Until then though, I’m sure I’ll be experimenting with a lot of stuff to shake out the winner that works for us.


Hello Scene – All the pretty little things

Wait, what? Well, yah, why not?

One of the joys I’ve had as a programmer over the years has been to read some paper, or some article, and try out the code for myself. Well, ray tracing has been a love of mine since the early 90s, when I first played with POV-Ray.

Back in the day, Peter Shirley introduced ray tracing to an audience of eager programmers through the book Ray Tracing in One Weekend. Two follow-on books explored various optimizations and improvements. For the purposes of my demo scene here, I wanted to see how hard it was to integrate, and how big the program would be. So, here’s what the integration looks like:


#include "scene_final.h"

#include "gui.h"

scene_final mainScene;


void onFrame()
{
    if (!mainScene.renderContinue())
    {
        recordingStop();
        halt();
    }
}

void setup() 
{
    setCanvasSize(mainScene.image_width, mainScene.image_height);

    mainScene.renderBegin();
    recordingStart();
}

Just your typical demo scene, using the gui.h approach. I implement setup() to set the canvas size, initialize the raytrace renderer, and begin the screen recording.

In ‘onFrame()’, I tell the renderer to continue, as it will render only one scanline at a time on the canvas. It will return false when there are no more lines to be rendered, and that’s when I just stop the program. How did I get the screen capture? Just comment out that ‘halt()’ for one run, then take a screen snapshot.

I did have to make two alterations to the original Ray Tracing in One Weekend code, both in the scene renderer. I had to split out the initialization code (for convenience), and I had to break the ‘render()’ into two parts, ‘renderBegin()’ and ‘renderContinue()’.

class scene {
public:
    hittable_list world;
    hittable_list lights;
    camera        cam;

    double aspect_ratio = 1.0;
    int    image_width = 100;
    int    image_height = 100;
    int    samples_per_pixel = 10;
    int    max_depth = 20;
    color  background = color(0, 0, 0);

    int fCurrentRow = 0;

  public:
      void init(int iwidth, double aspect, int spp, int maxd, const color& bkgd)
      {
          image_width = iwidth;
          aspect_ratio = aspect;
          image_height = static_cast<int>(image_width / aspect_ratio);
          samples_per_pixel = spp;
          max_depth = maxd;
          background = bkgd;
      }

      bool renderContinue()
      {
          if (fCurrentRow >= image_height)
              return false;


          int j = image_height - 1 - fCurrentRow;

          color out_color;

          for (int i = 0; i < image_width; ++i) {
              color pixel_color(0, 0, 0);
              for (int s = 0; s < samples_per_pixel; ++s) {
                  auto u = (i + random_double()) / (image_width - 1);
                  auto v = (j + random_double()) / (image_height - 1);
                  ray r = cam.get_ray(u, v);
                  pixel_color += ray_color(r, max_depth);
              }

              fit_color(out_color, pixel_color, samples_per_pixel);
              //write_color(std::cout, pixel_color, samples_per_pixel);
              gAppSurface->copyPixel(i, fCurrentRow, PixelRGBA(out_color[0], out_color[1], out_color[2]));
          }

          fCurrentRow = fCurrentRow + 1;

          return true;
      }

      void renderBegin()
      {
          cam.initialize(aspect_ratio);

          // Leftover from the book's file-based renderer, which wrote a PPM
          // header to stdout; harmless here, since pixels go to the canvas.
          std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";
      }



  private:
    color ray_color(const ray& r, int depth) {
        hit_record rec;

        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        // If the ray hits nothing, return the background color.
        if (!world.hit(r, interval(0.001, infinity), rec))
            return background;

        scatter_record srec;
        color color_from_emission = rec.mat->emitted(r, rec, rec.u, rec.v, rec.p);

        if (!rec.mat->scatter(r, rec, srec))
            return color_from_emission;

        if (srec.skip_pdf) {
            return srec.attenuation * ray_color(srec.skip_pdf_ray, depth-1);
        }

        auto light_ptr = make_shared<hittable_pdf>(lights, rec.p);
        mixture_pdf p(light_ptr, srec.pdf_ptr);

        ray scattered = ray(rec.p, p.generate(), r.time());
        auto pdf_val = p.value(scattered.direction());

        double scattering_pdf = rec.mat->scattering_pdf(r, rec, scattered);

        color color_from_scatter =
            (srec.attenuation * scattering_pdf * ray_color(scattered, depth-1)) / pdf_val;

        return color_from_emission + color_from_scatter;
    }
};

These are the guts of the ray tracer, with the private ‘ray_color()’ function doing the brunt of the work. But, I’m not really dissecting how the ray tracer works, just what was required to incorporate it into my demo scene.

Right there in ‘renderContinue()’, you can see how we go from whatever the raytracer was doing before (writing out a .ppm file), to converting the color to something we can throw onto our canvas:

fit_color(out_color, pixel_color, samples_per_pixel);
//write_color(std::cout, pixel_color, samples_per_pixel);
gAppSurface->copyPixel(i, fCurrentRow, PixelRGBA(out_color[0], out_color[1], out_color[2]));

The ‘fit_color’ routine takes the oversaturated color value the ray tracer had created, and turns it into an RGB value in the range of 0..255. We then simply copy that to the canvas with copyPixel(). The effect this has is to very slowly refresh the application window every time a single line is ray traced. With this particular image, it is slower than watching grass grow. This image took several hours (8) to render on my 5-year-old i7-based desktop machine. Even if you imagined it took half that time, it’s still slow. There are ways to speed it up, but that’s another story.

What I’m interested in are a couple of things: how big is that program, and where’s the movie?

This little demo program is 173 kilobytes in size. Just think about that. Go to a typical web page, and the banner image might be bigger than that. Given that our machines, even cell phones, come with gigabytes of RAM, who cares how big a program is? Well, small size still means more efficient, if you’ve chosen proper algorithms. I like the challenge of small, because it means I’m parsimonious, using as few external dependencies as possible. This also means that when I want to port to another platform, beyond Windows, I have less baggage to carry around.

This points to another design point.

I’m using C/C++ here. That’s not the only language I ever use, but it’s OK for these demos. I’m a big fan of C#, as well as my favorite, LuaJIT. Of course, you can also just use JavaScript and browsers, but here we are. You’ll also notice that in my usage of the language you don’t see a lot of memory management. You don’t typically see new/delete. That’s not because I’m using some garbage collection system. It’s because of a careful choice of data structures, calling conventions, and object lifetime management. Most things are held on ‘the stack’, because they’re temporary. Then, things like the canvas object are initialized internally, so the programmer doesn’t have to worry about how that’s occurring, and doesn’t need to manage any associated memory.

I like this. It gives me a relatively easy programming API without forcing me to deal with memory management, which is easily the biggest bug generator in this particular language. This is great for short demos. It’s a lot harder to maintain in larger, more involved applications, although I’d argue it can be done with proper composition and super-tight adherence to a coding methodology. It’s not realistic with large teams of programmers, though.

OK, so small size, simple to write code, simple to integrate stuff you see on the internet. What about the movie?

scene_full movie

There you go. 8 hours of rendering, condensed down to a few seconds for your viewing pleasure. In this particular case, since the renderer updates the canvas after every scan line, each frame of the movie captures a single scan line’s worth of progress. As the image is 800 scan lines tall, there are 800 frames. You can pick whatever frame rate you like to play it back at whatever speed.
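The arithmetic behind “pick whatever frame rate you like” is just frame count divided by playback rate. A trivial, hypothetical helper, only to make that concrete:

```cpp
// Playback duration in seconds for a capture of `frames` frames
// shown at `fps` frames per second.  Hypothetical helper, not
// part of the original demo.
double movie_seconds(int frames, double fps)
{
    return frames / fps;
}

// 800 frames at 30 fps runs about 26.7 seconds; at 200 fps it
// compresses down to 4 seconds.
```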

If you were really clever, and had a few machines lying around, you’d create a render farm and make an animated short with motion, each machine rendering a single frame, ultimately stitching it all together. But, on my meager dev box, I just get this little movie, and that’s my demo.

This just goes to show you. If you see something interesting out there in the graphics world, maybe a new line drawing algorithm, or a real-time renderer, it’s not hard to try those things out when you’ve got the proper setup. Of course, there are other frameworks out there, like SDL, or Qt, which do this kind of thing. If you look at them, just see how big they are, how complex their build environment is, and how much framework you must learn to do basic things. If they’re ok for you, then go that route. If they’re a bit much, then you might pursue this method, which is fairly minimal.

Next time around, screen capture, for fun and profit.


Microsoft, the Musical?

Well, it was primarily driven by Microsoft summer 2019 interns

I saw reference to this while browsing through some Teams channels, and I thought, Oh, ok, one of those shaky cell phone productions, let’s see…

Oh boy was I wrong. This is full on “ready for broadway” style musical entertainment. My takeaways: We have cool interns, look at that diverse bunch, they can sing, they can produce, they did this as a passion project while still maintaining their day jobs…

I’ve never been a corporate apologist, and I’ve poked the bear in the eye more times than I’ve stroked it, but this made me feel really happy. I was entertained, and proud that the company I work for can show such fun and mirth while doing a tongue-in-cheek sendup. Tech is supposed to be fun, and this was fun.

I’m sure other companies will follow suit in years to come, or they already do this at other companies and I just haven’t seen it.

Watching this was a well spent 10 minutes to start my week.


Did I really need 3 desktops?

It’s been about 3 years since I built a monster of a desktop machine with water cooling, fans, LEDs and all that. Long time readers will remember the disastrous outcome of that adventure: the liquid leaked, drowned my electronics, and caused me to essentially abandon the thing. I somewhat recovered from the fiasco by purchasing a new motherboard and trying to limp along with various components from that original build. Recently, I decided to throw in the towel and start from scratch. This time around, I went with an AMD build, because AMD seems to be doing some good stuff with their Ryzen and beyond chippery. So, I put together a rig around a Ryzen 7, 32GB of RAM, and the same old nVidia 1060 video card, and upgraded the SSD to the latest Samsung 980, or whatever it is.

That system was acting a bit flaky, to the point I thought it was defective, so I built another one, this time with a Ryzen 5 and nothing from the old builds: new power supply, SSD, RAM, and video card. That one worked, and it turns out the Ryzen 7 based system worked as well. It only needed a BIOS update to fix the network adapter’s handling of the Windows 10 sleep state.

So, now I have two working desktop machines. But wait, the second motherboard from the previous disastrous PC build probably still works? Maybe it just needs a new power supply and other components, and I can resurrect it? And hey, how about that Athlon system sitting over there? That was my daily driver from 2010 until I decided to build the Intel water-cooled disaster. I think that machine will make for a good build for the workshop in the garage. I need something out there to run the CNC machine, or at least play some content when I’m out there.

I did finally decommission one machine. The Shuttle PC I built with my daughter circa 2005 finally gave up the ghost. Tried to start it, and the 2TB hard drive just clicks… Too bad. That machine was quite a workhorse when it came to archiving DVDs and other disks over the years. May it rest in peace.

There was one bit of surgery I did on an old laptop. I had a ThinkPad X1 Carbon from work which succumbed to the elements last year some time. I had tech support take the SSD out of it so I could transfer the data somewhere else. Given the machine is about 4+ years old, it wasn’t as simple as a standard NVMe SSD. Oh no, it was some special sort of thing which required quite a lot of searching about to find an appropriate adapter. I finally found one, plugged the SSD into it, plugged that into an external enclosure, connected it over USB 3.0, and finally got the old stuff off of it! So, now I have this awesome adapter card that I could only use once, awaiting the next old X1 Carbon someone needs to back up.

All ramblings aside, I’ve recently been engaged in writing some code related to parsing. Two bits in particular, a gcode parser and a streaming JSON parser, are things I’ll be writing about.

And so it goes.


As the Tech Turns

I joined the Office of the CTO at Microsoft just over two years ago. I was a ‘founding member’ as Kevin Scott was a new CTO for Microsoft. I have now moved on to another job, in a different CTO Office (Mark Russinovich in Azure).

I noticed one thing while I was in OCTO: I essentially stopped blogging. Why was that? Well, probably the main reason is that when you’re in that particular office, you’re privy to all sorts of stuff, most of it very private, either to Microsoft or to other partners in the industry. Not really the kind of stuff you want to write about in a loud way. My colleague Mat Velosso managed to maintain a public voice while in the office, but I didn’t find I could do it. Now as it turns out, my new job is all about having a voice, and helping to make engineering at Microsoft that much better.

But, before I get into all that, I want to reflect on tech.

I’m in my home office, and I’m looking around at all this clutter. I’ve got 8GB SD cards sitting around with who knows what on them. I’ve got motherboards sitting openly on my bench. I’ve got more single board computers than you can shake a stick at. Various bits and bobs, outdated tech books, mice and keyboards galore, laptops that have long since been abandoned, and five 23″ LCD displays sitting on the floor.

That’s just in my office. My other cave has similar amounts of computers, displays, tvs, and other tech leavings from the past 10 years of hobbying around, trying to stay current. What to do?

Well, donate all you can to good places. Don’t give working displays to PC recycle; they’ll just tear them apart. Find a school, non-profit, or deserving person. Then, all those Raspberry Pi versions you never took out of their boxes, send them to the PC recycler, or give them to a school. If there’s one thing I’ve learned about single board computers, it’s that if you don’t have an immediate use for them, they’re not worth buying.

Books, to the library, or if you have a local “Half Price Books” maybe you can get some money back. More than likely, if they’re older than 5 years, they’re headed to the compost pile.

I have saved one set of PS/2 style keyboard/mouse, because, they’re actually useful.

I want to reconfigure my office. Now that 4K UHD monitor/TVs are $250, it makes sense to purchase them as decorations for a room. A couple of those 55″ versions up on the walls gives me connectivity for any computers, as well as the ability to do things like use them as a false window. I need more workspace. My current configuration is sets of drawers, which hide who knows what; countertop, which is too narrow; and bookshelves, which can’t hold very much weight. So, out it goes, and in come the wire rack shelving units, 24″ deep.

Copy off all the stuff from those random SD cards, and throw away the ones smaller than 32GB, because you know you’ll lose them before you ever use them again, and you’ll always be wondering what’s on them. Digitize those photo albums, use one of your many SBCs to set up a NAS, copy everything there, and back it up to a cloud service.

For me: new job, new tech, new office space. Time to blab and share again.


Commemorating MLK’s Passing

Dr. Martin Luther King Junior was assassinated April 4th, 1968. That was 51 years ago today. In order to commemorate his passing, I penned the following and shared it with my coworkers at Microsoft.

On this very important day in history, I am contemplative.  As we consider the importance of naming our ERG, I am reflective upon how we got here.

I was only 4 years old on that fateful day when “they killed Martin!”, so I don’t remember much, other than adults crying, smoking, drinking, talking adult talk, nervous wringing of hands, and us kids playing outside.

In my tech career of roughly 35 years, I’ve often been “the only…”.  It’s been an interesting walk.  In most recent years, as I become more Yoda and less Samuel Jackson, I have contemplated these things:

Would I have died fighting rather than be put on the ship

Would I have jumped into the ocean rather than be taken

Would I have fought back upon first being whipped, choosing pride and honor over subjugation

Would I have had the bravery to run away

Would I have had the bravery to help those who ran away

Would I have had the courage to learn to read

Would I have had the strength to send my children to school

Would I have had the strength to drink from the water fountain not meant for me

Would I have had the courage to simply sit

Would I have had the tenacity to face the smoke bombs, water cannons and dogs

Would I have had the conviction to carry on a struggle, long after my inspirational leader was lost…

And here I sit today, and I contemplate.  Will I recognize my calling?  Will I recognize my civil rights moment?  Will I be able to throw off my golden handcuffs and do what’s right?

If we collectively think “Black” means anything, we collectively can’t ignore the passing of this particular day.  I encourage us all to reflect on who we are, where we come from, and where we intend to go in the future.


My First LEAP Video

Here it is, the first video that I’ve done related to LEAP: