New Baby in the house!!

Of course you’d expect to see a picture here, or on Facebook, or Instagram, or whatever, but we don’t roll like that.

New baby girl arrived 9/30 at 9:30am.  September 28th was apparently a ‘supermoon’, we’d just had a ‘blood moon’, the pope was visiting, the president of China was visiting, and…

Seems like an interesting confluence, or near confluence.  I especially like the 9/30, 09:30 thing.  That will make it easier to remember.

Having newborns around means a couple of things.  First and foremost, not a lot of sleep, although this one seems to sleep at least a couple hours here and there.  Seems like we’re constantly up from 10pm to 4 am, although it’s probably not true.  The second thing it means is that a lot of fairly shoddy code gets written.  During those wee hours, between feedings, changes, and general soothing walks around the house, one or the other of the laptops is actually on my lap, and I’m slinging out code.  It’s funny though.  Something that might take 2 hours to figure out during these late night sessions can take a mere minute to realize in the bright light of regular day hours.  Nonetheless, I’m heeding the call of our President and making it possible for everyone to write code!

And so it goes.  A new reason to write inspired code.  There’s someone new to reap the benefits of my labors.


LAPHLibs gets a makeover

Quite a while ago (it looks like about 3 years), I created the LAPHLibs repository.  It was an outgrowth of various projects I was doing, and an experiment in open licensing.  The repo is full of routines varying from hash functions to bit banging.  Not a ‘library’ as such, just a curated collection of routines, all written in pure LuaJIT.

Well, after I spun it out, I didn’t show it much love.  I made a couple of updates here and there as I found fixes while using the routines in other projects.  Recently though, I found that this is the most linked-to of all my GitHub-based projects.  As such, I thought it might be useful to give it a makeover, because having all that bad code out there doesn’t really speak well of the Lua language, nor of what I’ve learned of it over the past few years.

So, I recently spent a few hours cleaning things up.  Most of the changes are documented in the new CHANGELOG.md file.

If you are one of the ten people who happen to read this blog, and are a user of bits and pieces from that library, you might want to take a look.

One of the biggest sins I fixed is that in a lot of cases I was polluting the global namespace with my functions.  It was inconsistent: some functions were global, some local, sometimes even in the same file!  Now everything is local, and there are proper exports.
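Just to make the before-and-after concrete, the pattern everything now follows looks roughly like this (the function name is made up for illustration, not lifted from LAPHLibs): declare everything local, and hand back one explicit table of exports.

local bit = require("bit")

-- everything is declared local; nothing leaks into the global namespace
local function popcount32(x)
    local count = 0
    while x ~= 0 do
        count = count + bit.band(x, 1)
        x = bit.rshift(x, 1)
    end
    return count
end

-- the module returns a single, explicit table of exports
return {
    popcount32 = popcount32,
}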

I also got rid of a few things, like the implementation of strtoul().  The native tonumber() function is much more correct, and deals with all cases I had implemented.
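A quick illustration of why the hand-rolled version isn’t needed; these are all standard tonumber() behaviors:

print(tonumber("0x1F"))      --> 31    hex prefix handled
print(tonumber("1010", 2))   --> 10    explicit base supported
print(tonumber("  42  "))    --> 42    surrounding whitespace tolerated
print(tonumber("bogus"))     --> nil   failure is just nil, not an error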

There are a few places where I was doing idiomatic classes, and I cleaned those up by adding proper-looking ‘constructors’.
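The shape of those ‘constructors’, roughly (the class name here is invented for the example):

local Cursor = {}
Cursor.__index = Cursor

-- construct an instance with sensible defaults
function Cursor.new(size)
    local obj = setmetatable({}, Cursor)
    obj.size = size or 0
    obj.position = 0
    return obj
end

function Cursor:seek(pos)
    self.position = pos
    return self
end

return Cursor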

Overall, the set of routines stands a little taller than it did before.  I can’t say when I’ll do another overhaul.  I did want to play around a bit more with the bit banging stuff, and perhaps I can add a little more from recent projects, like the schedlua scheduler and the like.

Bottom line, sometimes it’s actually worth revisiting old code, if for no other reason than to recognize the sins of your past and correct them if possible.


William Does Linux on Azure!

What?

You see, it’s like this.  As it turns out, a lot of people want to run code against a Linux kernel in the cloud.  Even though Windows might be a fine OS for cloud computing, the truth is, many customers are simply Linux savvy.  So, if we want to make those customers happy, then Linux needs to become a first class citizen in the Azure ecosystem.

Being the kind of person who jumps on technological and business-related grenades, I thought I would join the effort within Microsoft to make Linux a fun place to be on Azure.  What does that mean?  Well, you can already get a Linux VM on Azure pretty easily, just like with everyone else.  But what added value can Microsoft bring so this isn’t just a simple commodity play?  Microsoft does in fact have a rich set of cloud assets, and not all of them are best accessed from a Linux environment.  This might mean anything from providing better access to Azure Active Directory, to creating new applications and frameworks altogether.

One thing is for sure.  As the Windows OS heads for the likes of the Raspberry Pi, and Linux heads for Azure, the world of computing is continuing to be a very interesting place.


TINN Reboot

I always dread writing posts that start with “it’s been a long time since…”, but here it is.

It’s been a long time since I did anything with TINN.  I didn’t actually abandon it, I just put it on the back burner as I was writing a bunch of code in C/C++ over the past year.  I did do quite a lot of experimental stuff in TINN, adding new interfaces, trying out new classes, creating a better coroutine experience.

The thing with software is, a lot of testing is required to ensure things actually work as expected and fail gracefully when they don’t.  Some things I took from the ‘experimental’ category are:

fun.lua – A library of functional routines built specifically for LuaJIT and its great handling of tail recursion (a small sketch of that style follows below).

msiterators.lua – Some handy iterators that split out some very MS-specific string types.

Now that msiterators is part of the core, it makes it much easier to do things like query the system registry and get the list of devices, or batteries, or whatever, in a simple table form.  That opens up some of the other little experiments, like enumerating batteries, monitors, and whatnot, which I can add in later.
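As for fun.lua, the tail recursion point deserves the promised illustration.  This is not the fun.lua API itself, just the style it leans on: because the recursive call below is a proper tail call, LuaJIT reuses the stack frame, so the fold runs in constant stack space no matter how long the list is.

-- a tail-recursive left fold; the recursive call is in tail position
local function foldl(f, acc, t, i)
    i = i or 1
    if i > #t then return acc end
    return foldl(f, f(acc, t[i]), t, i + 1)
end

print(foldl(function(a, b) return a + b end, 0, {1, 2, 3, 4, 5}))  --> 15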

These are not earth-shattering, and don’t represent a year’s worth of waiting, but soon enough I’ll create a new package with new goodness in it.  That raises the question: what is TINN useful for?  I originally created it for doing network programming, like you could do with node.  Then it turned into a way of doing Windows programming in general.  Since TINN provides scripted access to almost all the interesting low level APIs in Windows, it’s very handy for trying out how an API works, and whether it is good for a particular need.

In addition to giving ready access to low level Windows APIs, it serves as a form of documentation as well.  When I look at a Windows API, it’s not obvious how to handle all the parameters.  Which ones do I allocate, which ones come from the system, which special function do I call when I’m done?  Since I read the docs when I create the interface, the wrapper code captures that reading of the documentation, and acts as a source of knowledge sitting right there with the code.  Quite handy.
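To make that concrete, here is a small sketch in that spirit; it is not the actual TINN wrapper, just an illustration of the kind of knowledge that ends up baked into the code: the caller allocates the buffer, the size parameter is in/out, and failure shows up in the return value rather than an exception.

local ffi = require("ffi")

ffi.cdef[[
typedef unsigned long DWORD;
int GetComputerNameA(char *lpBuffer, DWORD *nSize);
]]

local function getComputerName()
    -- the caller allocates the buffer; the API writes the actual length back through nSize
    local size = ffi.new("DWORD[1]", 255)
    local buff = ffi.new("char[256]")
    if ffi.C.GetComputerNameA(buff, size) == 0 then
        return nil, "GetComputerNameA failed"
    end
    return ffi.string(buff, size[0])
end

print(getComputerName())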

At any rate, TINN is not dead, long live TINN!


SawStop Contractor Saw Cabinet – Part 1

I have this great table saw, the SawStop contractor saw.  The great thing about it is the ability to stop the saw blade instantly if it ever touches flesh.  Considering that I’m an occasional woodworker, this sounded like a good idea, and it actually came as a recommendation from a regular woodworker with half a sawn-off finger.

Besides being a finger saver, the saw itself is quite a nice saw.  Mine is configured with the nice T-square fence system, rather than the regular contractor’s saw fence.  In addition, it has the 52″ rail, which means the overall length of the thing is 85″.  That’s a pretty big and unwieldy piece of equipment for the garage.

[photo: WP_20141127_009]

Moving, and thus using, the saw involves lifting it up with the foot lift on the saw’s base and shoving it around, hoping the action doesn’t knock the thing out of alignment while I’m doing it.  So, I’ve scoured the interwebs looking for inspiration on what to do about the situation.  There are quite a few good examples of cabinetry built around table saws out there.

There are myriad other examples if you just do a search for ‘table saw cabinet’.

Many of these designs are multi-purpose, in that they include a router extension as well as the table saw.  I don’t need that initially, as I have a separate router table that’s just fine.  So, my design criteria are:

  • Must support the entire length of the saw and fence system
  • Must provide some onboard storage
  • Must be easily mobile
  • Must be stable when not mobile
  • Must support adding various extensions

A fairly loose set of constraints (looking just like software), but good enough to help make some decisions.

The very first step is deciding what kind of mobility I’m going to design for.  I considered many options, but they roughly boil down to locking swivels on at least 2 corners.  For the wheels themselves, I chose a 5″ wheel, where each wheel has a 750lb capacity.  That seems heavy duty enough for this particular purpose.  I could have gone with 3″ wheels, but that seemed too small, and I read from other efforts that the bigger the better, considering the resulting weight of the cabinet could be several hundred pounds, and moving that with small wheels might have a lot of friction and be difficult.

I chose to use 4 locking swivel wheels, one at each corner.  The overall length of the cabinet is 86″, which had me thinking about sagging.  Perhaps I should stick more wheels mid-span just in case.  But, I chose instead to go with an engineered solution.  The base is built out of a torsion box.  The torsion box consists of 1×4″ lumber forming the internal supports.  That is trimmed by 1×4″ on the outside, and it’s skinned top and bottom by 3/4″ oak plywood.

[photo: WP_20141214_003]

I studied many different options for constructing this beast.  Probably the best would have been to cut slots in crosswise members and lay them uniformly down the length of the base.  But I don’t currently have a dado blade for my SawStop, so I went with these smaller cross pieces instead.  I think it actually turns out better, because I get the offsets, which allow me to fasten the cross members to the long runners individually.

Also in this picture, you can see that the corners have been filled in with blocks.  This is where the wheels will mount, once the skin is on.  I didn’t want to have bolts protruding with nuts and washers on the ends, so I went with lag screws into these thick chunks instead.  The chunks are formed by cutting plywood pieces and gluing them together down in the hole.  That basically forms a nice 3.5″ chunk of wood that is glued through and through from skin to skin.

Here is the base with the skin and wheels on it.

[photo: WP_20141216_002]

It may not look like it, because the base is sitting atop an assembly table which itself is pretty long, but this thing is pretty big.  It’s also fairly solid.  When I put it on the floor, I stood on it, tried to kick it around and the like, and even without any other supports on it, it’s not moving, bending, flexing, or what have you.  I believe the torsion box will do a nice job.  One deviation I made from the typical cabinets I’ve seen is that they will typically have the wheels touching the ‘top’ skin, with the rest of the torsion box hanging down towards the floor.  Well, I wanted to get the wheels solidly under the whole thing, with no potential for a shearing force breaking the plywood along the mounting plate of the wheel, so I went this direction.  But that raises a question.  There is now roughly 5.5″ of space between the bottom skin and the floor.  What can be done with that?

[photo: WP_20141216_003]

I thought, well, I can put some drawers down below of course.  I could have just put some hanging drawer sliders down there and called it a day, but I went with a slightly different design.  I wanted something that could change easily over time, so I went with a French cleat system, which can take all manner of attachments, starting with some drawers I had lying around from a cabinet that wasn’t being used.

[photo: WP_20141222_005]

So, two side-by-side sections of hanging French cleats, the one on the left with a drawer installed.

And finally, the whole mess turned right side up, with some junk thrown into the drawer.

[photo: WP_20141223_003]

With the offset from the base, the drawers are about 4 inches tall, leaving around 1.5″ to the floor.  That’s a great usage of space as far as I’m concerned.  With this setup, I can keep some things that are commonly used with the table saw, or assembly, or just things that don’t quite have anywhere else to live at the moment.

So, this is phase one.  The non-sagging base, ready for the cabinetry work to be set atop, which will actually hold the saw and table surfaces.


Fast Apps, Microsoft Style

Pheeeuuww!!

That’s what I exclaimed at least a couple of times this morning as I sat at a table in a makeshift “team room” in building 43 at Microsoft’s Redmond campus. What was the exclamation for? Well, over the past 3 months, I’ve been working on a quick strike project with a new team, and today we finally announced our “Public Preview”.  Or, if you want to get right to the product: Cloud App Discovery.

I’m not a PM or marketing type, so it’s best to go and read the announcement for yourself if you want to get the official spiel on the project.  Here, I want to write a bit about the experience of coming up with a project, in short order, in the new Microsoft.

It all started back in January for me.  I was just coming off another project, and casting about for the next hardest ‘mission impossible’ to jump on.  I had a brief conversation with a dev manager who posed the question: “Is it possible to reestablish the ‘perimeter’ for IT guys in this world of cloud computing”?  An intriguing question.  The basic problem was, if you go to a lot of IT guys, they can barely tell you how many of the people within their corporation are using SalesForce.com, let alone DropBox from a cafe in Singapore.  Forget the notion of even trying to control such access.  The corporate ‘firewall’ is almost nothing more than a quartz space heater at this point, preventing very little, and knowing about even less.

So, with that question in mind, we laid out 3 phases of development.  Actually, they were already laid out before I joined the party (by a couple of weeks), so I just heard the pitch.  It was simple: the first phase of development was to see if we could capture network traffic, using various means, and project it up to the cloud, where we could use some machine learning to give an admin a view of what’s going on.

Conveniently sidestepping any objections actual employees might have with this notion, I got to thinking on how it could be done.

For my part, we wanted to have something sitting on the client machine (a Windows machine that the user is using), which would inspect all network traffic coming and going, and generate some reports to be sent up to the cloud.  Keep in mind, this is all consented activity; the employee gets to opt in to being monitored in this way.  All in the open and up front.

At the lowest level, my first inclination was to use a raw socket to create a packet sniffer, but Windows has a much better solution these days, built for exactly this purpose.  The Windows Filtering Platform allows you to create a ‘filter’ which you can configure to call out to a function whenever there is traffic.  My close teammate implemented that piece, and suddenly we had a handle on network packets.

We fairly quickly decided on an interface between that low level packet sniffing, and the higher level processor.  It’s as easy as this:

 

int WriteBytes(char *buff, int bufflen);                  // raw network packet bytes go in
int ReadBytes(char *buff, int bufflen, int &bytesRead);   // a JSON-formatted report comes back out

I’m paraphrasing a bit, but it really is that simple. What’s it do? Well, the fairly raw network packets are sent into ‘WriteBytes’, some processing is done, and a ‘report’ becomes available through ‘ReadBytes’. The reports are a JSON formatted string which then gets turned into the appropriate thing to be sent up to the cloud.

The time it took from hearing about the basic product idea, to a prototype of this thing was about 3 weeks.

What do I do once I get network packets? Well, the network packets represent a multiplexed stream of packets, as if I were a NIC. All incoming, outgoing, all TCP ports. Once I receive some bytes, I have to turn them back into individual streams, then start doing some ‘parsing’. Right now we handle http and TLS. For http, I do full http parsing, separating out headers, reading bodies, and the like. I did that by leveraging the http parsing work I had done for TINN already. I used C++ in this case, but it’s all relatively the same.

TLS is a different story. At this ‘discovery’ phase, it was more about simple parsing. So, reading the record layer, decoding client_hello and server_hello, certificate, and the like. This gave me a chance to implement TLS processing using C++ instead of Lua. One of the core components that I leveraged was the byte-order-aware streams that I had developed for TINN. That really is the crux of most network protocol handling. If you can make heads or tails of what the various RFCs are saying, it usually comes down to doing some simple serialization, but getting the byte ordering right is the hardest part. 24-bit big endian integers?
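For the curious, here is the LuaJIT flavor of that particular oddity, roughly how the TINN-style stream deals with it (the function name is mine, for illustration): there is no native 24-bit type, so you assemble the value from three bytes yourself.

local bit = require("bit")

-- read a 24-bit big-endian integer from a string, 1-based offset
local function readUInt24BE(buff, offset)
    local b1, b2, b3 = buff:byte(offset, offset + 2)
    return bit.bor(bit.lshift(b1, 16), bit.lshift(b2, 8), b3)
end

-- e.g. the 3-byte length field in a TLS handshake header
print(readUInt24BE(string.char(0x00, 0x01, 0x2C), 1))  --> 300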

At any rate, http parsing, fairly quick. TLS client_hello, fast enough, although properly handling the extensions took a bit of time. At this point, we’d be a couple months in, and our first partners get to start kicking the tires.

For such a project, it’s very critical that real world customers are involved really early, almost sitting in our design meetings. They course corrected us, and told us what was truly important and annoying about what we were doing, right from day one.

From the feedback, it became clear that getting more information, like the amount of traffic flowing through the pipes, is as interesting as the meta information, so getting full support for flows became a higher priority. For the regular http traffic, no problem. The TLS becomes a bit more interesting. In order to deal with that correctly, it becomes necessary to suck in more of the TLS implementation. Read the server_hello, and the certificate information. Well, if you’re going to read in the cert, you might as well get the subject common name out so you can use that bit of meta information. Now comes ASN.1 (DER) parsing, and x509 parsing. That code took about 2 weeks, working “nights and weekends” while the other stuff was going on. It took a good couple of weeks not to integrate it, but to write enough test cases, with real live data, to ensure that it was actually working correctly.
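The DER parsing is less mysterious than it sounds once you see the shape of it.  The production code was C++, but a sketch of the very first step, reading one tag-length header, looks like this in Lua (the function name is made up for illustration):

-- each DER element is a tag byte, a length, then that many content bytes;
-- lengths over 127 set the high bit and give a count of big-endian length bytes
local function readTLVHeader(buff, offset)
    local tag = buff:byte(offset)
    local len = buff:byte(offset + 1)
    local pos = offset + 2
    if len > 0x7F then
        local nbytes = len - 0x80
        len = 0
        for i = 0, nbytes - 1 do
            len = len * 256 + buff:byte(pos + i)
        end
        pos = pos + nbytes
    end
    return tag, len, pos   -- tag, content length, offset of the first content byte
end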

The last month was largely a lot of testing, making sure corner cases were dealt with and the like. As the client code is actually deployed to a bunch of machines, it really needed to be rock solid, no memory leaks, no excessive resource utilization, no CPU spiking, just unobtrusive, quietly getting the job done.

So, that’s what it does.

Now, I’ve shipped at Microsoft for numerous years. The fastest cycles I’ve usually dealt with are on the order of 3 months. That’s usually for a product that’s fairly mature, has plenty of engineering system support, and a well laid out roadmap. Really you’re just turning the crank on an already laid out plan.

This AppDiscovery project has been a bit different. It did not start out with a plan that had a 6 month planning cycle in front of it. It was a hunch that we could deliver customer value by implementing something that was challenging enough, but achievable, in a short amount of time.

So, how is this different than Microsoft of yore? Well, yes, we’ve always been ‘customer focused’, but this is to the extreme. I’ve never had customers this involved in what I was doing this early in the development cycle. I mean literally, before the first prototypical bits are even dry, the PM team is pounding on the door asking “when can I give it to the customers?”. That’s a great feeling actually.

The second thing is how little process we allowed ourselves to use. Recognizing that it’s a first run, and recognizing that customers might actually say “mehh, not interested”, it doesn’t make sense to spin up the classic development cycle which is meant to maintain a product for 10-14 years. We employed a much more streamlined lifecycle, one which favors delivering quality code and getting customer feedback. If it turns out that customers really like the product, then there’s room to shift to a cycle that is more appropriate for longer term support.

The last thing that’s special is how much open source we allow ourselves to leverage these days. Microsoft has gone full tilt on open source support. I didn’t personally end up using much myself, but we are free to use it elsewhere (with some legal guidelines). This is encouraging, because for crypto, I’m looking forward to using things like SipHash and ChaCha20, which don’t come natively with the Microsoft platform.

Overall, as Microsoft continues to evolve and deliver ‘customer centric’ stuff, I’m pretty excited and encouraged that we’ll be able to use this same model time and again to great effect. Microsoft has a lot of smart engineers. Combined with some new directives about meeting customer expectations in the market, we will surely be cranking out some more interesting stuff.

I’ve implemented some interesting stuff while working on this project, some of which I’ll share here.


Jobs at Microsoft – Working on iOS and Android

Catchy title, isn’t it?  Microsoft, where I am employed, is actually doing a fair bit of iOS and Android work.  In days of yore, “cross platform” used to mean “works on multiple forms of Windows”.  These days, it actually means things like iOS, Android, Linux, and multiple forms of Windows.

I am currently working in the Windows Azure Group.  More specifically, I am working in the area of identity, which covers all sorts of things from Active Directory to single sign-on for Office 365.  My own project, the Application Gateway, has been quite an experience in programming with node.js, Android OS, iOS, embedded devices, large world-scale servers, and all manner of legal wrangling to ship open source for our product.

Recently, my colleague Rich Randall came by and said “I want to create a group of excellence centered around iOS and Android development, can you help me?”.  Of course I said “sure, why not”, so here is this post.

Rich is working on making it easier for devices (non-windows specific) to participate in our “identity ecosystem”.  What does that mean?  Well, the job descriptions are here:

iOS Developer – Develop apps and bits of code to make it relatively easy to leverage the identity infrastructure presented by Microsoft.

Android Developer – Develop apps and bits of code to make it relatively easy to leverage the identity infrastructure presented by Microsoft.

I’m being unfair; these job descriptions are well crafted and more precisely convey the actual needs.  But what’s more interesting to me is to give a shout out to Rich, and some support for his recruiting efforts.

As Microsoft is “in transition”, it’s worth pointing out that although we may be considered old and stodgy by today’s internet standards, we are still a hotbed of creativity, and actually a great place to work.  Rich is not alone in putting together teams of programmers who have non-traditional Microsoft skillsets.  Like I said, there are plenty who now understand that as a “services and devices” company, we can’t just blindly push the party line and platform components.  We have to meet the market where it is, and that is in the mobile space, with these two other operating systems.

So, if you’re interested in leveraging your iOS and Android skills, delivering code that is open source, being able to do full stack development, and working with a great set of people, please feel free to check out those job listings, or send mail to Rich Randall directly.  I’d check out the listings first, then send to Rich.

Yes, this has been a shameless jobs plug.  I do work for the company, and am very interested in getting more interesting people in the door to work with.