When Scripts Roamed the Earth

Way way back in the day, I played with Tcl.  What a nice little compact thing that was.  Then along came this thing called Python.  Kind of funky with its indentation thing, but wow, what it has become!  I was cutting my CS chops when ‘p-code’ meant something.  Then along came this JavaScript thing.  For the longest time, I think it kind of puttered along, until BAM!  The internet exploded, and more recently node.js happened.  Now suddenly it’s becoming the de facto go-to language of the day.

But, another thing has happened recently as well.  With the V8 JavaScript engine comes JIT compilation.  Then along come Lua, and Go, and Python again, and suddenly ‘script’ is becoming as fast as, if not faster than, statically compiled ‘C’, which has been the mainstay of computer programming for a few decades now.

And now, two other things are happening.  LuaJIT has this thing called DynASM.  This dynamic assembler quickly turns what looks like embedded assembly instructions into actual machine instructions at ‘runtime’.  This is kind of different from what NASM does.  NASM is an assembler proper.  It takes assembly instructions and turns them into machine-specific code, as part of a typical ‘compile/link/run’ chain.  DynASM just generates a function in memory, and then you can call it directly, while your program is running.

This concept of dynamic machine code generation seems to be a spreading trend, and all JIT runtimes do it.  I just came across another tool that helps you embed such a JIT thing into your C++ code.  AsmJit does something similar to what LuaJIT’s DynASM does.
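To give a flavor of what this looks like, here is a minimal sketch in the style of AsmJit's own introductory example; it assumes a reasonably recent version of the library, and the exact setup calls have shifted a bit between releases, so treat it as illustrative rather than canonical.

#include <asmjit/asmjit.h>
#include <cstdio>

using namespace asmjit;

typedef int (*Func)(void);

int main() {
  JitRuntime rt;                       // owns the executable memory for JIT'd code
  CodeHolder code;
  code.init(rt.environment());         // target the environment we're running in

  x86::Assembler a(&code);             // emit x86 instructions straight into 'code'
  a.mov(x86::eax, 42);                 // return value goes in eax
  a.ret();

  Func fn;
  if (rt.add(&fn, &code) != kErrorOk)  // turn the emitted bytes into a callable function
    return 1;

  printf("%d\n", fn());                // prints 42, from code generated at runtime
  rt.release(fn);
  return 0;
}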

These of course are not unique, and I’m sure there are countless projects that can be pointed to that do something somewhat similar.  And that’s kind of the point.  This dynamic code generation and execution thing is rapidly leaving the p-code phase, and entering the direct machine execution phase, which is making dynamic languages all the more usable and performant.

So, what’s next?

Well, that got me to thinking.  If really fast code can be delivered and executed at runtime, what kinds of problems can be solved?  Remote code execution is nothing new.  There are always challenges with marshaling, versioning, different architectures, security, and the like.  Some of the problems that exist are due to the typically static nature of the code that is being executed on both ends.  Might things change if both ends are more dynamic?

Take the case of TLS/SSL.  There are all these certificate authorities, which is an inherently fragile and error-prone arrangement.  Then there’s the negotiation of the strongest mutually supported parameters for the exchange of data.  Well, what if this whole mess were given over to a dynamic piece?  Rather than negotiating the specifics of the encryption mechanism, the two parties could simply negotiate and possibly transfer a chunk of code to be executed.

How can that work?  The client connects to the server, using some mechanism to identify itself (possibly anonymously, possibly this is handled higher up in the stack).  The server then sends a bit of code that the client will use to pass through every chunk of data that’s headed to the server.  Since the client has DynASM embedded, it can compile that code and continue operating.  Whoever wrote the client doesn’t know anything about the particulars of communicating with the server.  They didn’t mess up the cryptography, and they didn’t have to keep up to date with the latest Heartbleed.  The server can change and customize the exchange however it sees fit.
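To make the shape of that concrete, here is a toy C++ sketch of the client side.  Everything in it is hypothetical: the XOR ‘transform’ just stands in for whatever behavior the server would actually supply, and jit_compile stands in for the embedded DynASM step that would turn the received code into something executable.

#include <cstdint>
#include <cstdio>
#include <vector>

// The shape of the function the server-supplied code has to provide:
// take an outbound chunk of data, return the bytes that go on the wire.
using Transform = std::vector<uint8_t> (*)(const std::vector<uint8_t>&);

// Toy stand-in for server-supplied behavior: a fixed XOR "cipher".
static std::vector<uint8_t> xor_transform(const std::vector<uint8_t>& chunk) {
    std::vector<uint8_t> out(chunk);
    for (auto& b : out) b ^= 0x5A;          // placeholder for whatever the server chose
    return out;
}

// Hypothetical: in the real idea this is where an embedded dynasm/asmjit
// compiles the received blob into machine code, inside a sandbox.
static Transform jit_compile(const std::vector<uint8_t>& /*codeBlob*/) {
    return xor_transform;
}

int main() {
    std::vector<uint8_t> codeBlob;            // ...would be received from the server...
    Transform wrap = jit_compile(codeBlob);   // compile once, at connection time

    std::vector<uint8_t> chunk = {'h', 'i'};  // every outbound chunk passes through it
    std::vector<uint8_t> wire = wrap(chunk);
    printf("%zu bytes ready for the wire\n", wire.size());
    return 0;
}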

The worst case scenario is that the parties cannot agree on anything interesting, so they fall back to using plain old TLS.  This seems useful to me.  A lot of code that has a high probability of being done wrong is eliminated from the equation.  If certificate authorities are desired, they can be used.  If something more interesting is desired, it can easily be encoded and shared.  If things need to change instantly, it’s just a change on the server side, and move along.

Of course each side needs to provide an appropriate sandbox so the code doesn’t just execute something arbitrary.  Each side also needs to provide some primitives, like the ability to grab certificates and access to crypto libraries, if needed.

If the server wants to use a non-centralized form of identity, it can just code that up, and be on its way.  The potential is high for extremely dynamic communications, as well as mischief.

And what else?  Well, I guess just about anything that can benefit from being dynamic.  Learning new gestures, voice recognition, image recognition, learning to walk, learning new algorithms for searching, sorting, filtering, etc.  Just about anything.

Following this line of reasoning, I’d expect my various machines to start talking with each other using protocols of their own making.  Changing dynamically to fit whatever situation they encounter.  The communications algorithms go meta.  We need algorithms to create algorithms.  Threats and intrusions are perceived, and dealt with dynamically.  No waiting for a rev of the OS, no centrally distributed patches, no worrying about incompatible versions of this that and the other thing.  The machines, and their communications, become individual, dynamic, and non-static.

This could be interesting.

 


How goes that home data center?

So, last time around, I was using an old ASUS EeeBox PC as a proxy server.  That was actually working pretty well, to the point where I had forgotten that it was even running.  That’s an interesting lesson in how easy it is to forget any egregious thing that anyone does to your technology behind your back.  Eventually you’ll forget about it.

All was well until… the thunder/lightning/wind storm.  The house experienced several black/brown outs over the course of a couple of hours.  During the first blackout, most of the electronics in the house simply shut down, and didn’t come back when the power did (requiring manual resets).  The EeeBox probably took one shot too many, and in the end simply would not boot up, not even getting to the BIOS POST screen.  So much for that.  Of course, at a couple hundred dollars, it’s not too big a loss compared to losing a workstation, say, but the proxy experiment came to an abrupt end.

I have another EeeBox sitting right next to it, and I can similarly build it up with Arch Linux, but I take a pause here and consider.

The ASUS router lived through the same storm and didn’t skip a beat.  The Synology NAS box fared similarly.  With its redundant power supplies and forever-spinning disks, nothing keeps that spinning rust offline for long.  But this consumer-grade repurposed PC, not so much.

I think for the home data center to work, the hardware needs to be carefully selected for robustness.  It needs the kind of robustness you get out of your TV, or NAS box, or cable box, or router, or whatever.  They work most of the time, with a failure every few years, at which time you replace the thing with the latest and greatest, and move along your merry way.  For this case of the proxy server, I think there are two options: rely on the proxy capabilities built into the router or the NAS box, or create a custom proxy box which has better reliability.  Going the route of relying on an already reliable piece of equipment is a no-brainer, so I’ll ignore that.  What will it take to build a proxy box that is reliable, and doesn’t cost more than a standard home PC?

I’m not sure, but now I’m thinking about it.  The base might start with a piece of kit that already does most of what I want.  The WrtNode, for example, is in fact a router core, but you can add stuff to it.  Since I like Odroid though, that might be an interesting core to start from, because it can deal with some standard PC peripherals.  The question is whether the core is robust enough, or can be easily made so.

For now, I’m back into the research phase.  I have another EeeBox, and another similar box (Inspire), so more little boxen to fry.

I might also move the kit to the garage where it can be closest to the electrical and the cable coming in from the street.  Then I’ll have a better chance of controlling the quality of the electrical line, including robust backup for all the sensitive little bits.

And so it goes.

 


Configuring a home data center

I had this old school thought.  I need a 48u rack in the garage.  I’ll put a gigabit switch at the top, and load it up with all these gigabit fast machines, and just party like a data center fool.

Then I went to Fry’s electronics to replace a failed hard disk in a very old Atom based machine.  One terabyte, for $59…  Because these are tiny little 5400 RPM laptop drives.

This made me rethink my rack madness.  First thought: what is storage all about these days?  First of all, there’s the cloud, with all its infinite amounts of storage at somewhat reasonable prices (if you’re a business).  But what about the average home user?  What do you really store?  Well, there are the gobs and gobs of images that will likely never leave your phone.  Then there’s your aging DVD and CD collection, if you haven’t already gone fully over to the streaming side of media consumption.  Scanned documents? (All 1GB of them.)  What else is there?  Not much that I can think of, really.  1 or 2 terabytes is plenty, and a NAS box that you never think about is probably the best way to go for most of that.

But, I want to do more with the bits and pieces of compute that I have laying around.  Alright, so long time back, I purchased two ASUS EeeBox EB1006-B machines.  Probably got them off Woot at a decent price.  Back then I wasn’t sure what I’d do with them, but I knew cheap was good.  I took one and eventually put it in my workshop, just to browse the internet on occasion.  The other sat in the box, until just recently.

These little boxen come with 1GB of RAM and a 160GB hard disk.  The processor is an Atom N270, with who knows what kind of graphics capabilities.  I upgraded the RAM to 2GB, because $35.  I added the 1TB drive, because $59.  Now what?

The OS.  Well, they originally came with Windows XP, and it didn’t make sense to stick with that particular choice.  Nor did it make sense to upgrade to Windows 8.1, because that’s just not a match made in heaven.  So, I turned to… Linux.  I don’t really know what I’m going to use each box for, but I know that I can pretty much dedicate a single box to each feature I might want.  So, I decided the first box will be a proxy server for my home (outbound).  I have been playing with proxy servers at work for the past couple of years, so I thought it was high time that I actually use one at home for kicks.

Box1 – After some gnashing of teeth and wringing of hands, I settled on installing Arch Linux on the first box.  Why Arch?  Because I wanted a fairly minimal install.  I’ve installed Ubuntu on various machines in the past, and that’s a good enough environment.  Works great with just about any hardware I have.  But for this proxy server, all I need is network, a disk drive, and CPU cycles, and that’s about it.  I figure an Atom is a good enough processor for the types of proxying that are typical of home usage, so I don’t need some honking beefy server CPU here.  The 2GB of RAM is plenty to hold the OS and most stuff that’s likely to be cached.  But, in case I want to cache large chunks of the internet, there’s the 1TB drive sitting there doing mostly nothing most of the time.

I installed Arch, then I installed:

alsa-utils – for audio, which I’m not using

git – just in case I want to pull down and compile other interesting stuff

openssh – so I can manage the box without having an attached monitor

sudo – so I can sudo

nodejs – just in case I want to run some simple web server

squid – because that’s the actual proxy server that I need on the box

 

Realistically, I don’t need anything more than SSH and Squid, and if I reimage the machine, which is just a USB stick away, I’ll configure it with just those two packages.

After installing all the stuff and configuring Squid (primarily the cache location and a couple of ACLs), I booted up.  I started by pointing Firefox from my desktop machine at the proxy.  That seemed to work.  Then I pointed the MacBook at it, and that worked.  Then it was the iPad, which also worked.  To check and see if things were actually working as expected, I took a look at the Squid access log files, and sure enough, there were the expected entries for the web traffic.  Well, big woot!  Now I can go through the rest of the devices in the house and start pointing them at the proxy.
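For reference, the Squid side of that amounts to just a handful of lines in squid.conf; something along these lines (the paths, sizes, and subnet here are typical guesses for illustration, not my exact config):

# /etc/squid/squid.conf (abbreviated)
http_port 3128

# put the on-disk cache on the big 1TB drive (size in MB)
cache_dir ufs /var/cache/squid 20000 16 256

# only machines on the home network get to use the proxy
acl localnet src 192.168.1.0/24
http_access allow localnet
http_access deny all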

Now that the proxy machine is up and running, I can think about doing enforced, automatic proxy settings and the like, just like the big secure companies.  Then I want to play with fun ways to visualize the web accesses.  It would be really cool if I could integrate with Microsoft’s Cloud App Discovery service.  That would make it extra useful in terms of ready-made visualizations.

The machine is nice and silent, just sitting there under a desk with its blue power indicator light, silently proxying the internet.  I just put the other machine right next to it.  For this one I think I’ll go with TinyCore Linux.  It’s even more stripped down than Arch Linux; almost nothing more than the kernel, shell, and package manager.  But when you’re going single purpose per device, that’s often enough.  For this machine, I’m thinking of making it a git server.  It’s a toss-up though, because my Synology box has git services as well, and for storage related stuff, the NAS is better equipped for dealing with redundancy, failures, and the like.  So, if not a git server, then perhaps it will become the DHCP server for my network, relieving the router of that particular duty.  Something like a Pogoplug might be even more reasonable; very little compute is required to serve this particular purpose.  If not, then it might just become a generalized compute node, perhaps serving as a Docker host, or as a TINN experimental server.

Besides these couple of older boxes, I have a couple of Odroid XUs, some even more ancient x86 machines, and a beefy server from a bygone era (I just put a modern graphics card in it).  Each one of these devices can serve a single purpose.  This raises the question for me: do I need beefy multi-purpose machines in my home data center?  I think the answer is, I need a few beefy special-purpose machines for certain purposes (storage, compute, graphics), and I need some more general purpose machines to do much lighter weight stuff (browsing, emailing, editing documents).

So, thus far, the home data center has gained a proxy server, recovered from a long decommissioned device.  I’m sure more specialized servers will come online over time, and I probably won’t be purchasing that 48u rack.


Microsoft Service Achievement Award

So, if you’ve been at Microsoft long enough, and you’ve done favorable work, and you’re of a certain level, you might be granted this MSAA. It’s basically time off, where you can think, rejuvenate, and come back swinging.  Some might call it a sabbatical, but you’re not headed off to another company to teach computing.

I was given one of these awards way back in the day, but never took the time… until now!

I’ve got 8 weeks, off the hook to play around, play with my kid, do some traveling, and of course some tinkering around with code, 3D printing, landscaping, and the inevitable home improvement projects.

I gave my coworkers the link to this blog so that they could follow along my exploits if they so choose.  The clock starts ticking on Sept. 29th, but I’ve already got a list of 20 things, which I know will not all get done in any way shape or form.  We’ll see.

For now, my short list is:

Write a simple graphics system in C (for what, the 3rd or 4th time?)

Play around with FPGAs

Construct some cabinetry in the garage

Teach my son to walk, and the true meaning of ‘inside voice’

 

I’ve been at MS since Oct/Nov 1998, so coming on 16 years now.  I was recently doing some phone screens for college hires, and they invariably asked me the same question: “What motivates you to stay at Microsoft?”

There were two core answers that seemed to come to me easily.

1) Whenever we do anything at Microsoft, it has the potential to impact a great many people around the world.  One key example I gave was, ‘we all Google, but Microsoft runs the ATMs and cash registers’.

2) I have been able to grow and learn a great many things within the company.  I’ve been a large scale manager, an individual contributor, worked internationally, worked on core frameworks, and worked on whole cloud systems.  I’ve been able to switch teams and divisions, and the whole time, I’ve managed to keep a paycheck, and gather stock which is actually worth something.  Of course, I’m not a multi-billionaire, but I’m perfectly happy with the lifestyle my MS-generated income affords me.

And so, instead of taking the payout for my sabbatical, I took the time off.  I’m looking forward to rejuvenating, ideating, and ultimately going back to work renewed and ready to kick some more serious computing butt!

 


Goodbye to colleagues

July 17, 2014 – some have called it Black Thursday at Microsoft.

I’ve been with the company for more than 15 years now, and I was NOT given the pink slip this time around.

Over those years, I have worked with tons of people, helped develop some careers, shipped lots of software, and generally had a good time.  Some of my colleagues were let go, and I feel fairly sad about it.  This is the second time I’ve known of colleagues being let go.  These are not people who are low performers.  In fact, last time around, the colleague found another job instantly within the company.

I remember back in the day Apple Computer would go through these fire/hire binges.  They’d let go a bunch of people, due to some change in direction or market, and then within 6 months end up hiring back just as many because they’d figured out something new which required those skilled workers.

In this case, it feels a bit different.  New head guy, new directions, new leadership, etc.

I’ve done some soul searching over this latest cull.  It’s getting lonely in my old Microsoft.  When you’ve been there as long as I have, the number of people you started with becomes very thin.  So, what’s my motivation?

It’s always the same I think.  I joined the company originally to work on the birth of XML.  I’ve done various other interesting things since then, and they all have the same pattern.  Some impossible task, some new business, some new technical challenge.

This is just the beginning of the layoffs, and I don’t know if I’ll make the next cull, but until then, I’ll be cranking code, doing the impossible, and lamenting the departure of some very good engineering friends.  Mega corp is gonna do what mega corp’s gonna do.  I’m an engineer, and I’m gonna do some more engineering.

 


Fast Apps, Microsoft Style

Pheeeuuww!!

That’s what I exclaimed at least a couple of times this morning as I sat at a table in a makeshift “team room” in building 43 at Microsoft’s Redmond campus. What was the exclamation for? Well, over the past 3 months, I’ve been working on a quick strike project with a new team, and today we finally announced our “Public Preview”.  Or, if you want to get right to the product: Cloud App Discovery

I’m not a PM or marketing type, so it’s best to go and read the announcement for yourself if you want to get the official spiel on the project.  Here, I want to write a bit about the experience of coming up with a project, in short order, in the new Microsoft.

It all started back in January for me.  I was just coming off another project, and casting about for the next hardest ‘mission impossible’ to jump on.  I had a brief conversation with a dev manager who posed the question: “Is it possible to reestablish the ‘perimeter’ for IT guys in this world of cloud computing?”  An intriguing question.  The basic problem was, if you go to a lot of IT guys, they can barely tell you how many of the people within their corporation are using SalesForce.com, let alone DropBox from a cafe in Singapore.  Forget the notion of even trying to control such access.  The corporate ‘firewall’ is almost nothing more than a quartz space heater at this point, preventing very little, and knowing about even less.

So, with that question in mind, we laid out 3 phases of development.  Actually, they were already laid out before I joined the party (by a couple of weeks), so I just heard the pitch.  It was simple: the first phase of development was to see if we could capture network traffic, using various means, and project it up to the cloud, where we could use some machine learning to give an admin a view of what’s going on.

Conveniently sidestepping any objections actual employees might have with this notion, I got to thinking on how it could be done.

For my part, we wanted to have something sitting on the client machine (a Windows machine that the user is using) which would inspect all network traffic coming and going, and generate some reports to be sent up to the cloud.  Keep in mind, this is all consented-to activity; the employee gets to opt in to being monitored in this way.  All in the open and up front.

At the lowest level, my first inclination was to use a raw socket to create a packet sniffer, but Windows has a much better solution these days, built for exactly this purpose.  The Windows Filtering Platform allows you to create a ‘filter’ which you can configure to call out to a function whenever there is traffic.  My close teammate implemented that piece, and suddenly we had a handle on network packets.

We fairly quickly decided on an interface between that low level packet sniffing, and the higher level processor.  It’s as easy as this:

 

int WriteBytes(char *buff, int bufflen);                 // raw network packets go in
int ReadBytes(char *buff, int bufflen, int &bytesRead);  // JSON reports come out

I’m paraphrasing a bit, but it really is that simple. What’s it do? Well, the fairly raw network packets are sent into ‘WriteBytes’, some processing is done, and a ‘report’ becomes available through ‘ReadBytes’. The reports are JSON-formatted strings, which then get turned into the appropriate thing to be sent up to the cloud.
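Wired together, the pump between the two sides is little more than a loop like the following sketch; the capture and upload calls, the buffer sizes, and the assumption that a zero return from ReadBytes means success are all mine, for illustration only.

// Sketch of the pump between the packet capture side and the parser.
// WriteBytes/ReadBytes are the interface above; everything else is hypothetical.
int  capturePacket(char *buff, int bufflen);          // hypothetical: bytes from the WFP callout
void sendReportToCloud(const char *json, int len);    // hypothetical: uploader

void pump() {
    char packet[2048];
    char report[8192];

    for (;;) {
        int packetLen = capturePacket(packet, (int)sizeof(packet));
        if (packetLen <= 0)
            break;

        WriteBytes(packet, packetLen);                // feed raw bytes to the parser

        int reportLen = 0;                            // assuming 0 == success here
        while (ReadBytes(report, (int)sizeof(report), reportLen) == 0 && reportLen > 0) {
            sendReportToCloud(report, reportLen);     // JSON report, ready to go up to the cloud
        }
    }
}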

The time it took from hearing about the basic product idea, to a prototype of this thing was about 3 weeks.

What do I do once I get network packets? Well, the network packets represent a multiplexed stream of packets, as if I were a NIC. All incoming, outgoing, all TCP ports. Once I receive some bytes, I have to turn them back into individual streams, then start doing some ‘parsing’. Right now we handle HTTP and TLS. For HTTP, I do full parsing, separating out headers, reading bodies, and the like. I did that by leveraging the HTTP parsing work I had done for TINN already. I used C++ in this case, but it’s all relatively the same.

TLS is a different story. At this ‘discovery’ phase, it was more about simple parsing: reading the record layer, decoding client_hello and server_hello, the certificate, and the like. This gave me a chance to implement TLS processing using C++ instead of Lua. One of the core components that I leveraged was the byte-order-aware streams that I had developed for TINN. That really is the crux of most network protocol handling. If you can make heads or tails of what the various RFCs are saying, it usually comes down to doing some simple serialization, but getting the byte ordering right is the hardest part. 24-bit big-endian integers, anyone?
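Those 24-bit lengths show up in the TLS handshake layer, and in a byte-order-aware stream they end up as a little helper along these lines (a sketch of the idea, not the actual TINN or C++ code from the project):

#include <cstdint>

// Read a 24-bit big-endian unsigned integer (e.g. a TLS handshake message length)
// from the three bytes starting at 'p'.
static inline uint32_t read_u24_be(const uint8_t *p) {
    return (uint32_t(p[0]) << 16) |
           (uint32_t(p[1]) << 8)  |
            uint32_t(p[2]);
}

// Example: the bytes 0x00 0x01 0x2C decode to 300.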

At any rate, http parsing, fairly quick. TLS client_hello, fast enough, although properly handling the extensions took a bit of time. At this point, we’d be a couple months in, and our first partners get to start kicking the tires.

For such a project, it’s very critical that real world customers are involved really early, almost sitting in our design meetings. They course corrected us, and told us what was truly important and annoying about what we were doing, right from day one.

From the feedback, it becomes clear that getting more information, like the amount of traffic flowing through the pipes, is as interesting as the meta information, so getting full support for flows becomes a higher priority. For the regular HTTP traffic, no problem. The TLS becomes a bit more interesting. In order to deal with that correctly, it becomes necessary to pull in more of the TLS implementation: read the server_hello, and the certificate information. Well, if you’re going to read in the cert, you might as well get the subject common name out so you can use that bit of meta information. Now comes ASN.1 (DER) parsing, and X.509 parsing. That code took about 2 weeks, working “nights and weekends” while the other stuff was going on. It took a good couple of weeks not to integrate, but to write enough test cases, with real live data, to ensure that it was actually working correctly.
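The DER part is mostly walking tag-length-value triples, and the length encoding is the only fiddly bit. A simplified sketch of that header decode (definite-form lengths only, not the project’s actual parser):

#include <cstdint>
#include <cstddef>

// Decode one DER tag-length-value header from 'p'. Returns the number of header
// bytes consumed, or 0 on error. Handles definite-form lengths up to 4 bytes;
// multi-byte tags are ignored for simplicity.
static size_t der_read_header(const uint8_t *p, size_t avail,
                              uint8_t &tag, uint32_t &length) {
    if (avail < 2) return 0;
    tag = p[0];

    uint8_t first = p[1];
    if ((first & 0x80) == 0) {              // short form: length fits in 7 bits
        length = first;
        return 2;
    }

    size_t numBytes = first & 0x7F;         // long form: next N bytes hold the length
    if (numBytes == 0 || numBytes > 4 || avail < 2 + numBytes) return 0;

    length = 0;
    for (size_t i = 0; i < numBytes; ++i)
        length = (length << 8) | p[2 + i];  // big-endian, like everything else in DER
    return 2 + numBytes;
}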

The last month was largely a lot of testing, making sure corner cases were dealt with and the like. As the client code is actually deployed to a bunch of machines, it really needed to be rock solid, no memory leaks, no excessive resource utilization, no CPU spiking, just unobtrusive, quietly getting the job done.

So, that’s what it does.

Now, I’ve shipped at Microsoft for numerous years. The fastest cycles I’ve usually dealt with are on the order of 3 months. That’s usually for a product that’s fairly mature, has plenty of engineering system support, and a well laid out roadmap. Really you’re just turning the crank on an already laid out plan.

This AppDiscovery project has been a bit different. It did not start out with a plan that had a 6 month planning cycle in front of it. It was a hunch that we could deliver customer value by implementing something that was challenging enough, but achievable, in a short amount of time.

So, how is this different than Microsoft of yore? Well, yes, we’ve always been ‘customer focused’, but this is to the extreme. I’ve never had customers this involved in what I was doing this early in the development cycle. I mean literally, before the first prototypical bits are even dry, the PM team is pounding on the door asking “when can I give it to the customers?”. That’s a great feeling actually.

The second thing is how much process we allowed ourselves to use. Recognizing that it’s a first run, and recognizing that customers might actually say “mehh, not interested”, it doesn’t make sense to spin up the classic development cycle which is meant to maintain a product for 10-14 years. A much more streamlined lifecycle which favors delivering quality code and getting customer feedback, is what we employed. If it turns out that customers really like the product, then there’s room to fit the cycle to a cycle that is more appropriate for longer term support.

The last thing that’s special is the amount of open source we are allowing ourselves to leverage these days. Microsoft has gone full tilt on open source support. I didn’t personally end up using much myself, but we are free to use it elsewhere (with some legal guidelines). This is encouraging, because for crypto, I’m looking forward to using things like SipHash and ChaCha20, which don’t come natively with the Microsoft platform.

Overall, as Microsoft continues to evolve and deliver ‘customer centric’ stuff, I’m pretty excited and encouraged that we’ll be able to use this same model time and again to great effect. Microsoft has a lot of smart engineers. Combined with some new directives about meeting customer expectations in the market, we will surely be cranking out some more interesting stuff.

I’ve implemented some interesting stuff while working on this project, some of which I’ll share here.


Microsoft Part II

I joined Microsoft in 1998 to work on MSXML. One of the reasons I joined way back then is because MS was in trouble with the DOJ, and competitors were getting more interesting. I thought “They’re either going down, or they’re going to resurge, either way, it will be a fun ride”.

Here it is, more than 15 years later, and I find my sentiment about the same. Microsoft has been in trouble the past few years. Missing a few trends, losing our way, catching our breath as our competitors run farther and faster ahead of us…

In the past 4 years, I’ve been associated with the rise of Azure, and most recently associated with our various identity services. In the past couple of months, I’ve been heads down working in an internal startup, which is about to deliver bits to the web. That’s 2 months from conception to delivery of a public preview of a product. That’s fairly unheard of for our giant company.

But, today, I saw a blizzard of news that made me think ye olde company has some life yet left in it.

The strictly Microsoft related news…
Windows Azure Active Directory Premium
C# Goes Open Source
TypeScript goes 1.0
Windows 8.1 is FREE for devices less than 9″!!

Of all of these, I think the Windows 8.1 going for free is probably the most impactful from a ‘game changer’ perspective. Android is everywhere, probably largely because it is ‘free’. I can’t spit in the wind without hitting a new micro device that runs Android, and doesn’t run Windows. Perhaps this will begin to change somewhat.

Then there’s peripheral news like…
Intel Galileo board ($99) is fully programmable from Visual Studio
Novena laptop goes for crowdfunding

The Novena laptop is very interesting because it’s a substantial offering created by a couple of hardcore engineers. It is clearly a MUST HAVE machine for any self respecting hard/software hacker. It’s not the most powerful laptop in the world, and that’s beside the point. What it does represent is that some good engineers, hooked up with a solid supply chain, can produce goods that are almost price competitive with commodity goods. That and the fact that this is just an extraordinary hack machine.

I find the Galileo interesting because other than some third party support for Arduino programming from MSVC, this is a serious support drive for small things, from Microsoft. Given the previous news about the ‘free’, this Galileo support bodes well. You could conceivably get a $99 ‘computer’ with some form of Windows OS, and use it at the heart of your robot, quadcopter, art display, home automation thing…

Of course, the rest of the tinker market is heading even lower priced with things like the Teensy 3.1 at around $20. No “OS” per se, but surely capable hardware that could benefit from a nicely integrated programming environment and support from Microsoft. But, you don’t want Windows on such a device. You want to leverage some core technologies that Microsoft has in-house, and just apply it in various places. Wouldn’t it be great if all of Microsoft’s internal software was made available as installable packages…

Then there’s the whole ‘internet of things’ angle. Microsoft actually has a bunch of people focused in this space, but there’s no public offerings as yet. We’re Microsoft though, so you can imagine what the outcomes might look like. Just imagine lots of tiny little devices all tied to Microsoft services in some way, including good identities and all that.

Out on the fringe, non-Microsoft, there is Tessel.io, with their latest board back from manufacturing. A microcontroller that runs node.js (and TypeScript for that matter), and is WiFi connected. That is bound to have a profound impact for those who are doing quick and dirty web-connected physical computing.

Having spent the past few weeks coding in C++, I have been feeling the weight of years of piled on language cruft. I’ve been longing for the simplicity of the Lua language, and my beloved TINN, but that will just have to wait a few more weeks. In the meanwhile, I did purchase a Mojo FPGA board, in the hopes that I will once again get into FPGA programming, because “hardware is the new software”.

At the end of the day, I am as excited about the prospects of working at Microsoft as I was in 1998. My enthusiasm isn’t constrained by the possibilities of what Microsoft itself might do, rather I am overjoyed at the pace of development and innovation across the industry. There are new frontiers opening up all the time. New markets to explore, new waves to catch. It’s not all about desktops, browsers, office suites, search engines, phones, and tablets. Every day, there’s a new possibility, and the potential for a new application. Throw in 3D printing, instant manufacturing, and a smattering of BitCoin, and we’re living in a braver new world every day!!

