You're not just buying hardware with Oxide

Reader, heads up!
This post is going to come across as a bit of a love letter to what Oxide has been able to do; I'm a big fan even if I haven't used the product. Read on to understand why.

Background

I decided to write this post because I am a big fan of the Oxide podcast. I have been listening to it for a while now and I think it's a great way to learn about the company and the product. Now, onwards to the post.

I worked with networking, compute, and storage as a builder, operator, and integrator for Blizzard (the video game company). I haven't worked there in over 10 years, so things may have come a long way, but what I wouldn't have given for some of the stuff Oxide has been doing.

Be warned that I haven't used the Oxide product, but the point of this post is that I REALLY wish I could.

Working with blade systems

I have worked with racks of blade systems. I have also worked with IBM Cloud [1], formerly known as SoftLayer [2]. Both had pretty leaky interfaces, and I'm being generous here. You could connect to some of them over SSH, HTTP, and iLO if I remember correctly. But programmatic access was frankly a horror show - I spent hours I will never get back trying to do the most basic things. It's hard to believe a POST /halt would struggle to halt a machine. But that happened. Lots.

Q: What is iLO?
A: iLO stands for Integrated Lights-Out, and at the time it still required a Java applet. It was the most egregious piece of software I have ever touched. I curse a fair amount, but I don't think I have ever cursed as much in my life as when I had to work through iLO. I hope newer versions of iLO are better.
Anyway, at Blizzard we built what we could to get good command and control over those blade systems.
And it worked - I have yet to see a command-and-control system as good as the one Blizzard built: a nice, single pane of control for thousands of servers. It covered hardware _and_ service control - I even got to write code so it could control Windows services.

There was a lot of plumbing we had to do to get it all connected. I'm proud of having been a part of such machinery, but not proud of some of the spit-and-gum code it took to make it all work.
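To give a flavour of that spit and gum, here is a minimal sketch of the kind of defensive wrapper we had to write around management endpoints that would happily return success while the machine kept running. Everything in it - the URL, the /halt and /state paths, the "halted" value - is a hypothetical stand-in rather than any particular vendor's API; the point is the retry-and-verify dance itself.

```python
# A sketch (not any real vendor's API) of the kind of defensive wrapper we
# ended up writing: issue the halt, then poll to confirm the machine actually
# halted, because the call "succeeding" did not mean the machine stopped.
import time
import requests


def halt_and_verify(bmc_url: str, auth, attempts: int = 3, wait: float = 10.0) -> bool:
    """POST /halt and poll /state until the machine reports halted.

    bmc_url, /halt, /state, and the "halted" value are all hypothetical;
    every vendor spelled this differently, which was half the problem.
    """
    for _ in range(attempts):
        requests.post(f"{bmc_url}/halt", auth=auth, timeout=10)
        time.sleep(wait)
        state = requests.get(f"{bmc_url}/state", auth=auth, timeout=10)
        if state.ok and state.json().get("power") == "halted":
            return True
    return False
```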

So, where did that leave us? Well, I thought we were missing something: a way to have a single pane of control over the entire rack. Not just a "single pane of control", but one that was also programmable.

Out of nowhere, SeaMicro came along - I cannot recall how we found out about it. But it was a thing.

SeaMicro [3]

So what was SeaMicro?

I visited SeaMicro, I think in 2012. I actually thought what they were trying to achieve was awesome, and I got my hands on either an SM15000 or an earlier iteration. Forgive me for not remembering the details anymore.

So, SeaMicro, to me, came across as an integrated piece of hardware, with networking, compute, and storage (familiar to anyone?), that let people enjoy the “simplicity” of it all.

It had a decent (if memory serves) RESTful API. You can still figure out its interface by looking at the python-seamicroclient codebase [4].
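For a rough idea of what driving it looked like, here is a minimal sketch of talking to a SeaMicro-style chassis over plain HTTP. The base URL, credentials, and endpoint paths are assumptions I'm making for illustration - the real resource layout is what the python-seamicroclient codebase [4] wraps - but the shape of it, ordinary REST calls to enumerate servers and drive their power state, is what made it feel like a step up.

```python
# Illustrative sketch only: the chassis URL, credentials, and paths below are
# assumptions, not the documented SeaMicro API. The real interface is what
# python-seamicroclient [4] wraps.
import requests

CHASSIS = "https://seamicro-chassis.example.com/v2.0"  # hypothetical base URL
AUTH = ("admin", "secret")  # placeholder chassis credentials


def list_servers():
    """Enumerate the server cards the chassis knows about."""
    resp = requests.get(f"{CHASSIS}/servers", auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()


def power_off(server_id: str) -> None:
    """Ask the chassis to power a single server card off."""
    resp = requests.post(f"{CHASSIS}/servers/{server_id}/power-off", auth=AUTH, timeout=10)
    resp.raise_for_status()


if __name__ == "__main__":
    for server in list_servers():
        print(server)
```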

AMD also thought it was a good idea: it purchased SeaMicro and rebranded it as AMD SeaMicro. (It then proceeded to kill it [5], but that's another story.)

SeaMicro was an evolution in my eyes. The sky was the limit, and even if I didn't quite think the quality and expertise were there (when I say expertise in this context, I don't mean hardware, I mean REST APIs), it was a step up.

It felt a lot more modern. Unfortunately, we didn't get to see it through - in hindsight, that was for the best.

Cloud

Fast forward a couple of years to 2014, when I was working at GoCardless and we were using SoftLayer. It was the worst of both worlds: not well integrated, inconsistent APIs, still stuck with the likes of an iLO (but who would want it), and not enough control over the servers. It was hardware as a service that I wouldn't vouch for. There's no better way to put it: it sucked.

Which meant we had to move - and so we moved to Google Cloud Platform. This was a good call (I made it, so take this part with a pinch of salt): it had (and still has) good developer APIs (don't come at me - in comparison to an iLO they are incredible, a unicorn on a rainbow), provided a decent tradeoff between cost and the ability to go fast, and let us build for the future of the company.

The “Cloud” feels modern because it has APIs that for the most part deliver. Trust me, they do.

So, what does this have to do with Oxide?

Oxide

Assuming you're looking for on-prem, well, Oxide is building an incredible rack that has the best of (most of) those worlds. The only part I don't like, if I'm honest, is that I currently don't work for a company that needs to buy one. Because if I am ever in charge of a company where I think buying a cloud computer (a computer that provides a cloud, i.e. Oxide) makes sense, I'm going to get in touch with Oxide.

When buying something, you can think of the purchase either as a transaction or as the beginning of a relationship. Being able to read pieces like this one makes me think whoever buys an Oxide computer is also building a relationship with Oxide.

One part I left out of all of the above is that those vendors sold you either a piece of hardware or a service. With Oxide, I think you're not getting just that. You're buying a piece of hardware, and you're buying the whole team, their culture, their openness, and their expertise.

And that's a different ball game. When you're buying a relationship, you're buying the right thing.

It's refreshing to go to a webpage for a piece of hardware and read through the specs, the blog posts about culture and compensation philosophy, and the team and how they think and operate. You obviously get an API, a CLI, SDKs, and even an OpenAPI spec. They also do a lot of open source when it's not core to their business model. This is how a computer should be built and, honestly, how companies should operate.
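To make "you obviously get an API" concrete, here is a minimal sketch of hitting the rack's HTTP API directly from Python. The host, token handling, endpoint path, query parameter, and response fields are assumptions on my part - the OpenAPI spec Oxide publishes is the authoritative contract, and the CLI and SDKs wrap all of this for you - but compare it with a Java-applet iLO and you can see why I'm writing this post.

```python
# Illustrative sketch, not the authoritative Oxide API: consult the published
# OpenAPI spec (or use the official CLI/SDKs) for the real contract. The path,
# query parameter, auth header, and response fields below are assumptions.
import os

import requests

OXIDE_HOST = os.environ.get("OXIDE_HOST", "https://oxide.example.com")  # your rack/silo URL
OXIDE_TOKEN = os.environ["OXIDE_TOKEN"]  # an API token you generated


def list_instances(project: str):
    """List instances in a project, assuming a /v1/instances-style endpoint."""
    resp = requests.get(
        f"{OXIDE_HOST}/v1/instances",
        params={"project": project},
        headers={"Authorization": f"Bearer {OXIDE_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])


if __name__ == "__main__":
    for instance in list_instances("my-project"):
        print(instance.get("name"), instance.get("run_state"))
```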

IBM doesn't have that. HP doesn't have that. Dell doesn't have that. Oxide does.

If anyone from Oxide ever reads this, keep at it. Even if you fail, you've built what you have in a way that is stellar for a company.


P.S.: They also write a lot of Rust, a language I adore. I had to mention it, even if I don't want to come across as another Rust fanboy.

References

[1]: It seems IBM wants us to forget it ever purchased SoftLayer - it's now called IBM Cloud.
[2]: https://www.ibm.com/support/pages/softlayer <- I love how the question is just "Softlayer".
[3]: https://en.wikipedia.org/wiki/SeaMicro
[4]: https://github.com/ubuntu/python-seamicroclient/tree/master/seamicroclient/v2
[5]: https://www.datacenterdynamics.com/en/news/amd-kills-off-seamicro-server-business/
~ fin ~