Release 1.1.7
jducoeur wrote in querki_project
I just cut a new release. From the user's point of view not much has changed (one small fix to a bug in the titles of some pages), but internally it's been one *heck* of a couple of weeks. In preparation for the upcoming move of the systems to Amazon (hopefully next month), I've been massively rearchitecting a lot of things. So today's post is mainly for the techies -- I'm going to ramble a little about what's going on internally. Anyone not interested in code should give this one a pass.


As I've mentioned before, Querki is primarily built on top of Akka, a powerful system for building Actor-based applications. Quick summary for those who are new to this approach: Actors are sort of like object-oriented programming done right. Your "objects" are much more rigorously separated than in conventional programs -- they can only communicate via messages, which are always exchanged asynchronously. This requires some serious rethinking of how you build your applications, but it comes with one *gigantic* advantage: assuming you play by the rules, you can write massively-multithreaded applications with almost no chance of the usual problems of threads stepping on each other, deadlocking, or anything like that. Given that threading issues are probably the most pervasive and hardest-to-debug problems in modern software, this is huge.
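For anyone who'd like to see what that looks like in practice, here's a tiny Scala sketch (hypothetical names, nothing to do with Querki's actual code) of two Actors talking purely by asynchronous messages:

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}

    // Messages are plain immutable values -- the only way Actors interact.
    case class Greet(name: String)
    case class Greeting(text: String)

    class Greeter extends Actor {
      def receive = {
        // Each message is processed one at a time; there is no shared
        // mutable state to lock, so there is nothing to deadlock on.
        case Greet(name) => sender() ! Greeting(s"Hello, $name")
      }
    }

    class Caller(greeter: ActorRef) extends Actor {
      greeter ! Greet("world") // fire-and-forget; the reply arrives asynchronously
      def receive = {
        case Greeting(text) => println(text)
      }
    }

    object Demo extends App {
      val system = ActorSystem("demo")
      val greeter = system.actorOf(Props[Greeter], "greeter")
      system.actorOf(Props(classOf[Caller], greeter), "caller")
    }

Note that the Caller never gets its hands on the Greeter itself, only an ActorRef -- a handle it can send messages through. That indirection is exactly what makes the next step possible.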

There's a corollary to that, which the Akka team gradually realized over the past five years: if you take this architecture seriously, there's nothing restricting it to a single machine. After all, these Actors are just passing messages back and forth -- they can do that between machines as easily (in principle) as they can between threads. This has gradually become the raison d'être of Akka: building software that scales simply by adding more machines, with no changes other than minor configuration. Over the years, the Akka team has deprecated and removed all of the features that are fundamentally single-machine-centric (such as the elegant but unscalable Software Transactional Memory system they once had), in favor of focusing on making it easier and easier to scale.


Querki has always been intended to scale like this -- the notion of distributing across many machines has always been in the design. But designs are one thing, and actual code is quite another; in practice, Querki has to date been running on a single node in a colo in Marblehead. And of course, that has allowed me to be lazy, not sanity-checking whether the system *can* scale properly. Any programmer will immediately intuit that that means the system *doesn't* scale.

So my project for the summer is fixing that: the plan is that, in another month or so, we'll be moving Querki from that one hard node to a virtual cluster running at Amazon. That should be more resilient in a lot of ways, but most importantly, if I can get the system running well on three machines, that will mean that it *does* scale properly. As we get more users, we can just add more servers.


Step one, which is now done, is to *much* more rigorously separate the front and back ends of the code. While Querki's servers are going to be homogeneous for the time being, there's a clear distinction between the front end (written in Play, dealing with the HTTP layer) and the back (written in Akka, doing the actual guts of Querki). There were two problems with the messages sent from the front to the back: (a) they contained an unserializable field (the pointer to the Ecology, which is fundamentally local), and (b) they were just plain enormous. The latter problem is significant; the former is fatal. I believe I've now addressed both: the message being sent is now a tiny fraction of its former size, which should make the system noticeably faster.
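To illustrate the shape of that fix (with hypothetical names -- these aren't Querki's actual classes), the essence is to stop shipping anything that only means something on the local node, and send plain identifiers instead:

    // Before (sketch): the request dragged along a local-only pointer. An
    // Ecology reference is meaningless on another machine, so this message
    // could never legally cross a node boundary -- and it was huge besides.
    //   case class OldSpaceRequest(ecology: Ecology, user: User, payload: Payload)

    // After (sketch): nothing but small, serializable values go over the
    // wire; the receiving node looks up its *own* local Ecology on arrival.
    case class SpaceRequest(
      userId: String,  // who is asking
      spaceId: String, // which Space they're asking about
      payload: String  // stands in for the real request payload
    ) extends Serializable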


The other big change over the past two weeks was the fun one. Querki is, when you get right down to it, basically a big cloud of Spaces. Each active Space is represented by a small troupe of Actors -- one for the current state of the Space itself (an enormous immutable data structure), one mediating the conversations, one for each user currently interacting with the Space (with their own private view of the Space, stripped of any Things they aren't allowed to see, which is why I have so much confidence in Querki's security), etc. The question has always been, given an incoming request for a particular Space, how do we find it? It's *somewhere* out in that cloud, but where?
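In Akka terms, that troupe might be sketched something like this (hypothetical names, drastically simplified -- the real actors are much more involved): a parent owns the canonical state, and spawns one child per active user on demand:

    import akka.actor.{Actor, ActorRef, Props}

    case class UserRequest(userId: String, request: String)

    // Parent: owns the canonical (immutable) state of one Space.
    class SpaceParent extends Actor {
      var sessions = Map.empty[String, ActorRef]

      def receive = {
        case msg @ UserRequest(userId, _) =>
          // One child per active user, created lazily on first contact.
          val session = sessions.getOrElse(userId, {
            val s = context.actorOf(Props(classOf[UserSession], userId), s"session-$userId")
            sessions += (userId -> s)
            s
          })
          session forward msg
      }
    }

    // Child: holds this user's filtered view of the Space -- Things the
    // user isn't allowed to see are stripped before they ever land here.
    class UserSession(userId: String) extends Actor {
      def receive = {
        case UserRequest(_, request) =>
          // ... answer the request from the filtered view ...
      }
    }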

This is where procrastination has been on my side. While I worked on the single-node version of Querki, the good folks at Typesafe went and solved the problem for me, with the new Cluster Sharding mechanism. This is one of the most deliciously useful tools I've ever played with, frankly. You simply tell it how, given a message, to extract the name of the target Actor; how to derive a "shard" name from the message; and how to construct an Actor of the target sort. And it deals with the rest: it doles out the Shards among the nodes, keeps track of where each one lives, builds the Actor if it doesn't already exist, shuts down Actors that haven't been used recently, and so on.
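Concretely, with the 2.3-era akka-contrib API (a hedged sketch: the names are hypothetical, and the API has since been renamed and moved into a proper akka-cluster-sharding module), the whole setup is about this much code:

    import akka.actor.{Actor, ActorRef, ActorSystem, Props}
    import akka.contrib.pattern.{ClusterSharding, ShardRegion}

    case class SpaceMessage(spaceId: String, payload: String)

    class SpaceEntry extends Actor {
      // A sharded Actor learns its identity from its own path name.
      val spaceId = self.path.name
      def receive = {
        case SpaceMessage(_, payload) => // ... do the actual Space work ...
      }
    }

    object ShardingSetup extends App {
      // How to extract the target Actor's name (and the message to deliver):
      val idExtractor: ShardRegion.IdExtractor = {
        case msg @ SpaceMessage(spaceId, _) => (spaceId, msg)
      }

      // How to derive a shard name -- shards are the unit doled out to nodes:
      val shardResolver: ShardRegion.ShardResolver = {
        case SpaceMessage(spaceId, _) => (math.abs(spaceId.hashCode) % 30).toString
      }

      val system = ActorSystem("ClusterSystem")

      // Register the entry type; the returned ActorRef routes each message to
      // the right Space wherever it lives in the cluster, creating it if needed.
      val spaceRegion: ActorRef = ClusterSharding(system).start(
        typeName = "Space",
        entryProps = Some(Props[SpaceEntry]),
        idExtractor = idExtractor,
        shardResolver = shardResolver)

      // From then on, you just send to the region and let it worry about "where":
      spaceRegion ! SpaceMessage("space-123", "render the front page")
    }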

It doesn't solve every problem, but *boy* is it well-suited to Querki. Switching to this architecture was remarkably easy -- less than a day's work. I even adapted the same system to distribute several other internal Querki caches around the cluster. The result should be pretty close to optimally efficient, and easy to scale up.


All of this is a little theoretical so far -- we're still running on one node. But we've gone from "a single-node program" to "a clustered program that just happens to have a one-node cluster". The project for the next two weeks is the next step: starting to build a prototype three-node cluster, the model of the real AWS cluster to come. To help with *that*, Querki has become an official customer of Typesafe, so that we have access to their new ConductR system.

ConductR is a neat idea whose time has come. It is billed as "Cluster as a Service", which is about right. The thing is, the hardest part of running a cluster is ongoing maintenance. It's one thing to distribute new releases by hand (as I've been doing) when it's just one machine. But that becomes tiresome and difficult to do right with three machines, and increasingly impossible beyond that. Keeping the machines up to date is just plain *hard*.

There are lots of approaches to that problem around, but ConductR's is one of the simplest and most elegant. It is itself a clustered Akka application: you load it onto the nodes of the cluster, it stays running at all times, and it mediates everything else. You don't roll out new releases by hand -- you just hand ConductR a JAR file and some configuration information, and tell it how you want to distribute that release around the cluster. It keeps the releases around, lets you spread specific services more or less widely, deals with warming up releases and then hot-swapping them to avoid downtime, and so on.

It's neat stuff, and I *think* it makes it plausible for me to deal with the Ops side of Querki as well as the Dev side, at least while we're still small.


None of which should ever be visible to you, but it's all essential to the long-term plan -- how we can reasonably hope to survive the rapid growth I hope to kickstart late this year. It's exciting, scary and rather fun.

So that's what I'm learning right now. No end of new tech needed to build something like this...

Comments:

• I am always tickled when I see "shard" used in this sense :-)

• Hmm? Not sure I know why...

• Ah, sure. I sort of take it for granted that sharding came out of the MMO world, so it doesn't even occur to me that that's not universally understood.

• Hadn't realized that the origin was quite *that* literal, though -- thanks for the pointer. I expect that the theory is correct; certainly the timing is suggestive. (I wish I could be half as confident that I invented the term "portal", but the evidence is *much* less clear there...)
