I've been working on infrastructure for real-time notifications for a high-traffic site on and off for a few years, and recently been contributing to Centrifuge.
This post is an attempt to sum up how I see the state of the relevant technologies at the start of 2016.
I'll walk through the various techniques for delivering real-time messages to browsers. There are good resources for the details of each, so I'll instead focus on the gotchas and incompatibilities I've come across that need to be accounted for in the wild.
This information is a mixture of first-hand experience and second-hand reading, mostly of well-tested libraries such as SockJS, socket.io and MessageBus.
WebSockets
It's 2016. We are officially in the future. WebSockets are a real standard and are supported in all recent major browsers.
That should really be the end of the article but, as always, it isn't.
Sam Saffron (the author of MessageBus and Co-Founder of Discourse) recently blogged about why WebSockets are not necessarily the future. I found his post truly refreshing as I've run into almost all of the pain points he describes.
That said, Sam's post is focusing on the case where your WebSocket/streaming/polling traffic is served by the same application and same servers as regular HTTP traffic.
There are many reasons I've experienced which suggest this might not be the best approach at scale. Sam even mentions this in his article. I can't say it's always a bad one - Discourse itself is proof that his model can work at scale - but I've found that:
Long-lived requests are very different to regular HTTP traffic, whether they are WebSockets, HTTP/1.1 chunked streams or just boring long-polls. In one real-life test we increased the number of sockets open on our load balancer by a factor of 5 or more in steady state, with orders-of-magnitude higher peaks when errors caused mass reconnects. For most websites, real-time notifications are a secondary feature; failure in a socket server, or overload due to a client bug, really shouldn't be able to take out your main website, and the best way to ensure that is to route the traffic to a totally different load balancer at the DNS level (i.e. on a separate subdomain).
If your web application isn't already an efficient event-driven daemon (or has equivalent functionality like Rack Hijack), long-lived connections in the main app are clearly a bad choice. In our case the app is PHP on Apache, so handling long-lived connections must happen in separate processes (and, in practice, on separate servers) using technology suited to that job.
Scaling real-time servers and load balancing independently of your main application servers is probably a good thing. While load balancing tens or hundreds of thousands of open connections might be a huge burden to your main load balancer as in point 1, you can probably handle that load with an order of magnitude or two fewer socket servers than are in your web server cluster if you are at that scale.
But with those points aside, the main thrust of Sam's argument that resonates strongly with my experience is that most apps don't need bidirectional sockets, so the cons of using WebSockets listed below can be a high price for a technology you don't really need. Sam's article goes into more detail on some of the issues and covers others that are less relevant to my overview here, so it's worth a read.
WebSocket Pros
- Now supported in all modern browsers.
- Efficient low-latency and high-throughput transport.
- If you need low-latency, high-throughput messaging back to the server they can do it.
- Super easy API - can make a toy app in an hour.
WebSocket Cons
- Despite wide browser support, they are still not perfect: IE 8 and 9 and some older mobile browsers need fallbacks anyway if you care about wide compatibility.
- The WebSocket protocol went through many revisions and false starts; in practice you still have to support old, quirky protocol versions on the server for wide browser coverage. Mostly this is handled for you by libraries, but it's unpleasant baggage nonetheless.
- Even when the browser supports them, there are many restrictive proxies and similar middleboxes in the wild that don't support WebSockets, or that close connections after some short time regardless of ping activity. Using SSL improves things a lot since proxies can't mangle the actual protocol, but it's still not perfect.
- Due to the two issues above, you almost certainly need to implement fallbacks to other methods anyway, probably using a well-tested library as discussed below.
- They work against HTTP/2. Most notably, with WebSockets multiple tabs always mean multiple sockets, whereas the older fallbacks can all benefit from sharing a single HTTP/2 connection, even between tabs. In the next few years this will become more and more significant.
- You have to choose a protocol over the WebSocket transport. If you do write data back with them, this can end up duplicating your existing REST API.
WebSocket Polyfills
One of the big problems with WebSockets then is the need to support fallbacks. The sensible choice is to reach for a tried and tested library to handle those intricate browser quirks for you.
The most popular options are SockJS and socket.io.
These are both fantastic pieces of engineering, but once you start digging into the details, there are plenty of (mostly well-documented) gotchas and quirks you might still have to think about.
My biggest issue with these options though is that they aim to transparently provide full WebSocket functionality which we've already decided isn't actually what we need most of the time. In doing so, they often make design choices that are far from optimal when all you really want is server to client notifications. Of course if you actually do need bi-directional messaging then there is not a lot to complain about.
For example, it is possible to implement subscription to channel-based notifications with a single long-poll request:
- Client sends a request for /subscribe?channels=foo,bar
- The server holds the request until there is data, then the request returns (or times out)
- Authentication and resuming on reconnect can be handled by passing headers or query params in a stateless way
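To make that concrete, here's a rough sketch of the client side of that loop. The /subscribe endpoint, its parameters and the response shape are hypothetical:

function longPoll(channels, cursor) {
  var url = '/subscribe?channels=' + encodeURIComponent(channels.join(','));
  if (cursor) url += '&since=' + encodeURIComponent(cursor);

  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.timeout = 60000; // a bit longer than the server's long-poll timeout

  xhr.onload = function () {
    if (xhr.status !== 200) return retry();
    var body = JSON.parse(xhr.responseText); // e.g. { cursor: "42", events: [...] }
    body.events.forEach(handleEvent);
    longPoll(channels, body.cursor);         // reconnect straight away with the new cursor
  };
  xhr.onerror = xhr.ontimeout = retry;
  xhr.send();

  function retry() {
    // jitter the reconnect so a blip doesn't turn into a thundering herd
    setTimeout(function () { longPoll(channels, cursor); }, 2000 + Math.random() * 3000);
  }
}

function handleEvent(evt) { console.log('event', evt); }
longPoll(['foo', 'bar'], null);

Because every request is self-contained (the channels and cursor travel with it), any server behind the load balancer can answer it, so no sticky sessions are needed.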
Yet if you are using a WebSocket polyfill, it's likely that you use some sort of PubSub protocol on top of the abstracted WebSocket-like transport. Usually that means you connect, then send some sort of handshake to establish authentication, then one or more subscribe requests, and then wait for messages. This is how most open source projects I've seen work (e.g. the Bayeux protocol).
All is fine on a real WebSocket, but when the transport transparently reverts to plain old long-polls this gets significantly more complicated than the optimal, simple long-poll described above. The handshake and subscribe messages might each need to be sent in separate requests. SockJS sends on a separate connection from the one it listens on.
Worse is that many require that you have sticky-sessions enabled for the polling fallback to work at all since they are trying to model a stateful socket connection over stateless HTTP/1.1 requests.
The worst part is the combination: most popular load balancers have poor support for balancing WebSockets at Layer 7, which may force you to use Layer 4 (TCP/TLS) balancing, but then you can't ensure session stickiness. So SockJS and the like just can't work behind that kind of load balancer. HAProxy is the only one of the most popular load balancing solutions I know of that can handle Layer 7 WebSocket balancing right now, which is a pain in AWS, where ELBs give you auto-scaling and remove the need to mess with keepalived or other HA mechanisms for your load balancer.
To be clear, the benefits of not reinventing the wheel and getting on with dev work probably outweigh these issues for many applications, even if you don't strictly need bi-directional communication. But when you are working at scale the inefficiencies and lack of control can be a big deal.
WebSocket Polyfill Pros
- For the most part they just work, almost everywhere.
- The most widely used ones are now battle hardened.
- Leave you to write your app and not think about the annoying transport quirks described here.
WebSocket Polyfill Cons
- Usually require sticky sessions for fallbacks to work.
- Usually less efficient and/or far more complex than needed for simple notification-only applications when falling back due to emulating bi-directional API.
- Can run into issues like exhausting the per-domain connection limit in older browsers and deadlocking if you make other XMLHttpRequests to the same domain.
- Usually don't give you good control of reconnect timeouts and jitter which can limit your ability to prevent thundering herds or reconnections during incidents.
Server Sent Events/EventSource
The EventSource API has been around a while now and enjoys decent browser support - on a par with WebSockets. It is the browser side of a server protocol named Server-Sent Events. I'll just refer to both as "EventSource" from now on.
At first glance it looks ideal for the website-notification use case I see being so prevalent. It's not a bidirectional stream; it uses only HTTP/1.1 under the hood so it works with most proxies and load balancers; a long-lived connection can deliver multiple events with low latency; it has a mechanism for assigning message ids and sending a cursor on reconnect; and browser implementations transparently perform reconnects for you.
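As a quick illustration, here's roughly what using it looks like; the /events endpoint and its channels parameter are made up for the example:

// The server responds with Content-Type: text/event-stream and frames like:
//
//   id: 42
//   data: {"comment": "hello"}
//
// (a blank line terminates each event). The browser reconnects automatically
// and re-sends the last id it saw in a Last-Event-ID header, so the server
// can replay anything missed.
var source = new EventSource('/events?channels=foo,bar');

source.addEventListener('message', function (e) {
  console.log('id:', e.lastEventId, 'data:', e.data);
});

source.addEventListener('error', function () {
  // the browser handles the retry/back-off itself (see the cons below)
  console.log('connection dropped; the browser will reconnect');
});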
What more can you want? Well...
EventSource Pros
- Plain HTTP/1.1 is easy to load balance at Layer 7.
- Not stateful, no need for sticky sessions (if you design your protocol right).
- Efficient for low-latency and high-throughput messages.
- Built in message delimiting, ids, and replay on reconnect.
- No need for the workarounds for quirks with plain chunked encoding.
- Can automatically take advantage of HTTP/2 and share a connection with any other requests streaming or otherwise to the same domain (even from different tabs).
EventSource Cons
- No IE support, not even IE 11 or Edge.
- It never made it past a draft proposal, is no longer being actively worked on, and remains only in the WHATWG living standard. While it should keep working in supported browsers for some time, this doesn't feel like a tech that people are betting on for the future.
- Browser reconnect jitter/back-off policy is not under your control which could limit your ability to mitigate outages at scale just as with WebSocket Polyfills above.
- Some older browser versions have incorrect implementations that look the same but don't support reconnecting or CORS.
- Long-lived connections can still be closed early by restrictive proxies.
- Streaming fallbacks below are essentially the same (with additional work to implement reconnect, data framing and message ids) with better browser support - you probably need them anyway for IE.
XMLHttpRequest/XDomainRequest Streaming
This uses the same underlying mechanism as EventSource above (HTTP/1.1 chunked encoding on a long-lived connection), but without the browser handling the connection directly. Instead, XMLHttpRequest is used to make the connection. For cross-domain connections CORS must be used; in IE 8 and 9, which lack CORS support, the non-standard XDomainRequest is used instead.
These techniques are often referred to as "XHR/XDR streaming".
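A rough sketch of the client side, assuming a hypothetical /stream endpoint that sends newline-delimited JSON messages:

function openStream(onMessage) {
  var xhr = new XMLHttpRequest();
  var offset = 0;   // how much of responseText we've already consumed
  var buffer = '';

  xhr.open('GET', '/stream?channels=foo,bar');
  xhr.onprogress = function () {
    var chunk = xhr.responseText.substring(offset);
    offset = xhr.responseText.length;
    buffer += chunk;

    var lines = buffer.split('\n');
    buffer = lines.pop();   // keep any partial trailing line for next time
    lines.forEach(function (line) {
      if (line.length) onMessage(JSON.parse(line));
    });
  };
  xhr.onloadend = function () {
    // the server closed the connection (or it dropped): reconnect with jittered back-off
    setTimeout(function () { openStream(onMessage); }, 2000 + Math.random() * 3000);
  };
  xhr.send();
}

openStream(function (msg) { console.log('message', msg); });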
XHR/XDR Streaming Pros
- Plain HTTP/1.1 is easy to load balance at Layer 7.
- Not stateful, no need for sticky sessions (if you design your protocol right).
- Efficient for low-latency and high-throughput messages.
- Can automatically take advantage of HTTP/2 and share a connection with any other requests streaming or otherwise to the same domain (even from different tabs).
- Works in vast majority of browsers right back to IE 8 and relatively early versions of other major browsers.
XHR/XDR Streaming Cons/Gotchas
- Still doesn't work (cross-domain) in really old browsers like IE 7.
- XDomainRequest, used as the fallback for IE 8 and 9, doesn't support cookies, so you can't use this if you need both cross-domain connections and cookies for auth or sticky sessions.
- XDomainRequest also doesn't support custom headers. Possibly not a big deal, but notable if you are trying to emulate EventSource, which uses custom headers for the retry cursor.
- Long-lived connections can still be closed early by restrictive proxies.
- Even well-behaved proxies will close idle connections, so your server needs to send a "ping" or heartbeat packet. Usually this is sent every 25-29 seconds to defeat 30 second timeouts reliably.
- There is a subtle memory leak: the body text of the HTTP response grows and grows as new messages come in, consuming more memory over time. To mitigate this, set a limit on the body size and, once it's exceeded, close and re-open the connection.
- IE 8 and 9 don't fire the XDR progress handler until the response reaches some threshold size. This means you may need to send 2KB of junk padding when the request starts to avoid blocking your real events. Some other older browsers had similar issues with XMLHttpRequest, but they are so rare they probably aren't worth supporting. In practice you can check whether the browser supports XMLHttpRequest.onprogress as an indicator that it will work without padding.
- Some browsers require the X-Content-Type-Options: nosniff header, otherwise they delay delivering messages until they have enough data to sniff the content type (I've seen reports this is 256 bytes for Chrome).
- You may have to encapsulate messages in some custom delimiters since proxies might re-chunk the stream and you need to be able to recover original message boundaries to parse them. Google seems to use HTTP/1.1 chunked encoding scheme inside the response text to delimit "messages" (i.e. chunked encoding inside chunked encoding).
- The big one: some proxies buffer chunked responses, delaying your events uncontrollably. You may want to send a ping message immediately on a new connection and then revert to non-streaming in the client if you don't get that initial message through soon enough.
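To tie several of those gotchas together, here is a minimal Node.js sketch of the server end of an XHR stream: the nosniff header, ~2KB of initial padding for XDR, newline-delimited messages, an immediate first message and a 25-second heartbeat. Auth, channel handling and the actual message source are left out.

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'application/octet-stream',
    'X-Content-Type-Options': 'nosniff',   // stop browsers waiting to sniff the content type
    'Cache-Control': 'no-cache'
  });

  res.write(new Array(2049).join(' ') + '\n');   // ~2KB of padding so IE 8/9 XDR fires onprogress

  // send something immediately so the client can detect buffering proxies early
  sendEvent({ type: 'hello' });

  var heartbeat = setInterval(function () {
    sendEvent({ type: 'ping' });                 // defeat ~30 second idle timeouts
  }, 25000);

  // wherever your real messages come from (Redis pub/sub, an internal bus, ...)
  // would call sendEvent too
  function sendEvent(event) {
    res.write(JSON.stringify(event) + '\n');     // newline-delimited so the client can re-frame
  }

  req.on('close', function () { clearInterval(heartbeat); });
}).listen(8080);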
XMLHttpRequest/XDomainRequest Long-polling
Same as XHR/XDR streaming except without chunked encoding on response. Each connection is held open by server as long as there is no message to send, or until the long-poll timeout (usually 25-60 seconds).
When an event arrives at the server that the user is interested in, a complete HTTP response is sent and the connection closed (assuming no HTTP keepalive).
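Sketched as a Node.js handler, where waitForEvents(channels, cursor, timeoutMs, callback) is a hypothetical helper backed by your pub/sub system that calls back as soon as there is something to deliver, or with an empty list at the timeout:

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
  var query = url.parse(req.url, true).query;
  var channels = (query.channels || '').split(',');

  // hold the request open for up to 25s waiting for events past the client's cursor
  waitForEvents(channels, query.since, 25000, function (events, cursor) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ cursor: cursor, events: events }));   // one complete response, then done
  });
}).listen(8080);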
XHR/XDR Long-polling Pros
- Plain HTTP/1.0 (no chunked encoding) is easy to load balance at Layer 7.
- Can automatically take advantage of HTTP/2.
- Works even when proxies don't like long-lived connection, or buffer chunked responses for too long.
- No need for server pings: the natural long-poll timeout is short enough.
XHR/XDR Long-polling Cons
- Same cross-domain/browser support issues as XHR/XDR streaming.
- Overhead of whole new HTTP request and possibly TCP connection every 25 seconds or so.
- To achieve high throughput you have to start batching heavily, either on the server or by not reconnecting right away in the client to allow a bigger batch of events to queue. If you have lots of clients all listening to a high-throughput channel, this adds up to a huge number of HTTP requests unless you ramp batching up to the order of 20-30 seconds. But then you are trading off latency - is 30 seconds of latency acceptable for your "real-time" app?
JSONP Long-polling
The most widely supported cross-domain long-polling technique is JSONP or "script tag" long-polling. This is just like XHR/XDR long-polling except that we are using JSONP to achieve cross-domain requests instead of relying on CORS or XDR support. This works in virtually every browser you could reasonably want to support.
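The mechanics are simple enough to sketch in a few lines; the endpoint and parameter names here are hypothetical, and error handling/back-off is omitted for brevity:

var jsonpCounter = 0;

function jsonpPoll(channels, cursor) {
  var cbName = '__poll_cb_' + (jsonpCounter++);
  window[cbName] = function (body) {
    delete window[cbName];
    document.head.removeChild(script);
    body.events.forEach(function (evt) { console.log('event', evt); });
    jsonpPoll(channels, body.cursor);   // reconnect with the new cursor
  };

  // the server responds with something like: __poll_cb_0({"cursor": "42", "events": [...]})
  var script = document.createElement('script');
  script.src = 'https://push.example.com/subscribe?channels=' + channels.join(',') +
               '&since=' + (cursor || '') + '&callback=' + cbName;
  document.head.appendChild(script);
}

jsonpPoll(['foo', 'bar'], null);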
JSONP Long-polling Pros
- All the pros of XHR/XDR long-polling.
- Works in ancient browsers too.
- Supports cookies everywhere (although if your long-poll server is on a totally separate root domain, browsers' third-party cookie restrictions might prevent that).
JSONP Long-polling Cons
- Largely the same as the XHR/XDR Long-polling Cons above, minus the cross-domain issues.
Polling
Periodically issuing a plain old XHR (or XDR/JSONP) request to a backend which returns immediately.
Polling Pros
- Very simple, always works, no proxy or load balancing issues
- Can benefit from HTTP/2
Polling Cons
- Not really "real-time" with an average of half the poll interval latency per message. That might be 5 or 10 seconds.
- The perception is that this is expensive if you service the requests via your regular web app - it can easily create orders of magnitude more HTTP requests on your web servers. In practice you can use a highly optimised path on a separate service to mitigate this.
Others
There are many other variants I'm missing out as this is already fairly long. Most of them involve using a hidden iframe. Inside the iframe HTML files with individual script blocks served with chunked encoding or one of the above transports receive events and call postMessage or a fallback method to notify the parent frame.
These variants are generally only needed if you have a requirement to support both streaming and cookie-enabled transports for older browsers, for example. I won't consider them further.
The Future(?)
You may have noticed if you use Chrome that Facebook can now send you notifications even when you have no tab open. This is a Chrome feature that uses a new Web Push standard currently in draft status.
This standard allows browsers to subscribe to any compliant push service and monitor for updates even when your site isn't loaded. When updates come in, service workers can be invoked to handle the notification.
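For a flavour of what this looks like in Chrome today (the details are still shifting, and Chrome additionally requires a manifest with a gcm_sender_id because it routes pushes through GCM), the page registers a service worker and subscribes; the /save-subscription endpoint is a hypothetical part of your own backend:

navigator.serviceWorker.register('/sw.js').then(function (reg) {
  return reg.pushManager.subscribe({ userVisibleOnly: true });
}).then(function (subscription) {
  // hand the subscription endpoint to your server; the server POSTs to that
  // endpoint (the push service) whenever it wants to wake the service worker
  return fetch('/save-subscription', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription)
  });
});

And the service worker (sw.js here) handles the push even when no tab is open:

self.addEventListener('push', function (event) {
  // in current Chrome the payload usually isn't available, so you'd fetch the
  // notification details from your own server here instead
  event.waitUntil(
    self.registration.showNotification('New activity', { body: 'Something happened' })
  );
});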
Great! Soon we won't have to worry about this transport stuff at all. All browsers will support this, and we'll have lovely open-source libraries to easily implement the backend. (But see the update below.)
But that's some way off. Currently Chrome only supports a modified version that doesn't follow the standard, because it uses their existing proprietary Google Cloud Messaging platform (although they claim to be working with Mozilla on a standards-compliant version).
Firefox is working on an implementation (in Nightlies), but it will be some years yet before there is enough browser support for this to replace any of the other options for the majority of users.
I came across this standard after writing most of the rest of this post and I would like to pick out a few points that reinforce my main points here:
- It's a push-only technology.
- The transport is HTTP/2 server push which can fallback to regular HTTP poll. No WebSockets. No custom TCP protocol. Presumably if you have push enabled over HTTP/2 in your browser, then your actual site requests could be made over it too meaning that in some cases it might even cut down on connection overhead for your main page loads... That's pure speculation though.
- The spec explicitly recommends against application-level fallbacks although clearly they will be needed until this spec is supported virtually everywhere which will be at least a few years away.
Update 13th Jan 2016
After reading the spec closely and trying to think about how to use this technology it became clear that it might not be a good fit for general purpose in-page updates.
I clarified with the authors on the mailing list (resulting in this issue). The tl;dr: this is designed similarly to native mobile push - it's device-centric rather than general pub/sub and is intended for infrequent messages that are relevant to a user outside of a page context. Right now implementations limit or forbid its use for anything that doesn't display browser notifications. If that's all you need, you may be able to use it in-page too, but for live-updating comment threads in your app, where you only care about updates for the thread visible on the page, it won't be the solution.
Do you need bi-directional sockets?
My thoughts here have a bias towards real-time notifications on websites which really don't require bi-directional low-latency sockets.
Even applications like "real-time" comment threads probably don't - submitting content as normal via POST and then getting updates via streaming push works well for Discourse.
It's also worth noting that GMail uses XHR Streaming and Facebook uses boring XHR long-polls even on modern browsers. Twitter uses even more unsexy short polls every 10 seconds (over HTTP/2 if available). These sites for me are perfect examples of the most common uses for "real-time" updates in web apps and support my conclusion that most of us don't need WebSockets or full-fidelity fallbacks - yet we have to pay the cost of their downsides just to get something working easily.
Sam Saffron's MessageBus is a notable exception that follows this line of thinking; however, it's only aimed at Ruby/Rack server apps.
I find myself wishing for a generalisation of MessageBus' transport that can be made portable across other applications, something like SockJS or Socket.io but without the goal of bi-directional WebSocket emulation. Eventually it could support Web Push where available and pave the way for adopting that in the interim before browsers support it. Perhaps an open-source project in the making.
Thanks to Sam Saffron, Alexandr Emelin and Micah Goulart who read through a draft of this very long post and offered comments. Any mistakes are wholly my own - please set me straight in the comments!
Tyler Treat just published another great article about Distributed Systems and the limited value of strong guarantees they might claim to provide.
I'll start with a word of thanks to Tyler - his blog is a great read and well recommended for his articulation and clarity on many computer science subjects that are often muddled by others.
Tyler's article focuses specifically on distributed messaging guarantees but at least some of the discussion is relevant to or even intimately tied to other distributed problems like data consistency and consensus.
I agree with all of his points and I hope this article is a complement to the discussion on trade-offs in the distributed design space.
The article got me thinking about the inverse question - when is it sensible to incur the overhead of working around reduced guarantees, assuming doing so is non-trivial?
When is Strong Consistency worth it?
Let's consider Google's database evolution (of which I know nothing more than you can read for yourself in these papers).
In 2006 Google published a paper on BigTable. Unlike Dynamo and others, BigTable made some attempt to guarantee strong consistency per row. But it stopped there: no multi-row atomic updates, certainly no cross-node transactions. Five years later a paper on their MegaStore database was published. The motivation includes the fact that "[NoSQL stores like BigTable's] limited API and loose consistency models complicate application development".
A year later details of Spanner emerged, and in the introduction we discover that despite the high performance cost, engineers inside Google were tending to prefer to use MegaStore over BigTable since it allowed them to get on with writing their apps rather than re-inventing solutions for consistent geographical replication, multi-row transactions, and secondary indexing (my gloss on the wording there).
Google's cream-of-the-crop engineers with all the resources available to them chose to trade performance for abstractions with stronger guarantees.
That doesn't mean that MegaStore and Spanner can never fail. Nor (I guess) that Google's internal users of those systems are blindly assuming those guarantees hold in all cases in application code. But, at least for Google, providing stronger guarantees at the shared datastore level was a win for their engineering teams.
Just because it's Google doesn't make it right, but it is worth noting nonetheless.
Commutativity and Idempotence
A sound conclusion of Tyler's post is that you are better served if you can shape your problems into commutative and idempotent actions which can naturally tolerate relaxed guarantees correctly.
This is undoubtedly true.
But in cases where idempotence or commutativity are not naturally available, at least some of the possible solutions vary little between applications.
For example, de-duplicating events based on a unique ID is a common requirement. It is equivalent to attempting to provide exactly-once delivery. Tyler points out that this is impossible to do perfectly; nonetheless, some applications require at least some attempt at de-duplicating event streams.
Isn't it better, given a case that requires it, to have an infrastructure that has been specifically designed and tested by distributed systems experts that provides clear and limited guarantees about de-duplicated delivery? Certainly if the alternative is to re-solve that hard problem (and possibly incur the significant storage cost) in every system we build that is not trivially idempotent. The centralised version won't be perfect, but in terms of development cost it might well be a cheaper path to "good enough".
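As an illustration of the kind of limited, explicit guarantee I mean, here is a sketch of a consumer that de-duplicates on a unique event id, but only within a bounded window of recently seen ids - so the guarantee is "no duplicates within the last N events" rather than the impossible "exactly once":

function Deduper(windowSize) {
  this.windowSize = windowSize;
  this.seen = new Set();   // fast membership check
  this.order = [];         // FIFO so the oldest ids can be evicted
}

Deduper.prototype.accept = function (eventId) {
  if (this.seen.has(eventId)) return false;   // duplicate within the window: drop it
  this.seen.add(eventId);
  this.order.push(eventId);
  if (this.order.length > this.windowSize) {
    this.seen.delete(this.order.shift());     // evict the oldest; older duplicates can slip through
  }
  return true;
};

// usage: only process messages whose id hasn't been seen recently
var dedupe = new Deduper(100000);
function handleMessage(msg) {
  if (!dedupe.accept(msg.id)) return;
  // ... application logic ...
}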
Trade-offs
The title of Tyler's article is about "Understanding Trade-Offs". This really is key.
To me the conclusion of the article is spot on: it's better to consider up-front the real guarantees required and what cost they come at. I just want to add the (probably obvious) idea that application code complexity, re-solving similar problems, and the extent to which every application developer needs to be a distributed systems expert are real costs to throw into the trade-off.
The Google school of thought argues that it's better to favour simplicity for application developers and pay the performance cost of that simpler model until the application really needs the increased performance - at this point the additional application complexity can be justified.
This is orthogonal to Tyler's point that the application developer needs to have clarity on where and how the claimed guarantees of a system break down and how that affects the correctness or availability requirements of their own application. To me that's a given, but I don't think it devalues systems that do attempt to provide higher-level guarantees, provided they are properly understood.
Google AdSense (and other advertising products) appear to have turned on a new detection system for violations of their PII policy. Here are a couple of easy steps to fall foul of it without meaning to.
Google's support document for their privacy policy describes the issue well.
Specifically the line:
In particular, please make sure that pages that show ads by Google do not contain your visitors' usernames, passwords, email addresses or other personally-identifiable information (PII) in their URLs.
Easy right? Just don't be dumb and pass the visitors info in URL.
Here are a couple of ways to fail at that automatically - you may well be doing them yourself right now.
Fail #1: Your Search Results Page
If, like every other site on the Internet, you have a search feature, and, like every other search engine (copying Google themselves), you present the search results at a URL with the user's query in a GET parameter, e.g. search/?q=ponies, then you will probably violate AdSense policy eventually.
All but one of the handful of breach notifications I've come across from Google are due to people searching for an email address. That obviously results in a URL that looks like search/?q=user@example.com, and Google's apparently new automated detection of PII violations flags it.
As an aside, it's quite likely that they are not searching for their own email address, so it's not really PII, but who can know? Google will see an email address in the URL and call it a breach.
But that's pretty standard behavior for a search box right? We'll see what we can do in a bit.
Fail #2: Users Saving Your Content For "Offline Use"
Less obviously, you may have users who like to "save" pages on your site for "offline" use. Of course if they really are offline when they view it they will not make any ad requests to Google and you'll be fine.
But in at least one case I've come across, a user saved a page from a site that uses Google AdSense to a folder on their machine like /Users/name@example.com/stuff/your-page.htm. They must have loaded it in their browser while still online, and the resulting ad calls from the JS embedded in the saved page fired off to Google with ?url=file:///Users/name@example.com/... in the URL, violating the policy.
Anti-Solution
Actually, in both of these cases it's a little hard to find a good solution. For the first you could perhaps have the search JS specially encode email addresses, but that goes against the spirit of the policy - the email info is still there if it is in fact PII (probably not if the visitor is searching for it, but there you go). Not to mention it means relying on JS instead of a regular GET form submission.
It's a nasty hack anyway, and it misses many other cases. The second example is one of the many scenarios you would probably never think of that wouldn't be covered by sticking plasters like that.
Solution
It would seem the most pragmatic solution is to adjust your ad-serving logic to scan the URL and referrer for email addresses and simply not show ads on that page if you find any. Hopefully it's rare enough not to lose you too much revenue, and the alternatives are likely more expensive.
One thing to note though: checking for email addresses on the server side is probably not enough. It wouldn't have caught the second case above, and there are many other cases where it might not work.
So if you rely entirely on Google's provided libraries to display your ads, you may need to write a small wrapper to handle this case on the client side. It seems like something Google's JS should do automatically, or at least something you could opt in to.
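A sketch of what such a client-side wrapper might do: check the page URL and referrer for anything that looks like an email address, and skip the ad call entirely if one is found. loadAdSense here stands in for whatever actually pulls in the ad JS, and the regex is deliberately loose since a false positive only costs one ad impression:

var EMAIL_RE = /[^\/?&=#\s]+(@|%40)[^\/?&=#\s]+\.[a-z]{2,}/i;

function urlLooksLikePII() {
  return EMAIL_RE.test(window.location.href) || EMAIL_RE.test(document.referrer);
}

function maybeShowAds() {
  if (urlLooksLikePII()) return;   // skip the ad request entirely on this page view
  loadAdSense();                   // hypothetical wrapper around the usual ad snippet
}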
I've been quiet for a while on this blog, busy with many projects, but I just had to comment on my recent discovery of Lightning Memory-Mapped Database (LMDB). It's very impressive, but left me with some questions.
Disclaimer
Let me start out with this full acknowledgement that I have not yet had a chance to compile and test LMDB (although I certainly will). This post is based on my initial response to the literature and discussion I've read, and a quick read through the source code.
I'm also very keen to acknowledge the author Howard Chu as a software giant compared to my own humble experience. I've seen other, clearly inexperienced developers online criticising his code style and I do not mean to do the same here. I certainly hope my intent is clear and my respect for him and this project is understood throughout this discussion. I humbly submit these issues for discussion for the benefit of all.
Understanding the Trade-offs
First up, with my previous statement about humility in mind, the biggest issue I ran up against when reviewing LMDB is partly to do with presentation. The slides and documentation I've read do a good job of explaining the design, but not once in what I've read was there any more than a passing mention of anything resembling a trade-off in the design of LMDB.
My engineering experience tells me that no software, especially software claiming "high performance", comes without some set of assumptions and some trade-offs. So far everything I have read about LMDB has been so positive that I'm left with a slight (emphasis important) feeling of the "silver bullet" marketing hype I'd expect from commercial database vendors and have come to ignore.
Please don't get me wrong, I don't think the material I've reviewed is bad; it just seems to lack any real discussion of the downsides - the areas where LMDB might not be the best solution out there.
On a personal note, I've found the apparent attitude towards leveldb and Google engineers a little off-putting too. I respect the author's opinion that the LSM tree is a bad design for this purpose, but the lack of respect toward it and its authors that comes across in some presentations seems detrimental to the discussion of the engineering.
So to sum up the slight gripe here: engineers don't buy silver-bullet presentations. A little more clarity on the trade-offs is important to convince us to take the extraordinary benchmark results seriously.
[edit] On reflection the previous statement goes too far - I do take the results seriously - my point was more that they may seem "too good to be true" without a little more clarity on the limitations. [/edit]
My Questions
I have a number of questions that I feel the literature about LMDB doesn't cover adequately. Many of these are things I can and will find out for myself through experimentation but I'd like to make them public so anyone with experience might weigh in on them and further the community understanding.
Most of these are not really phrased as questions; they're more thoughts I had that the literature does not address. Assume I'm asking the author, or anyone with insight, for their thoughts on the issues discussed.
To reiterate, I don't claim to be an expert. Some of my assumptions or understanding that lead to the issues below may be wrong - please correct me. Some of these issues may not be at all important in many use cases either. But I'm interested to understand these areas more, so please let me know if you have thoughts or, preferably, experience with any of this.
Write Amplification
It seems somewhat skimmed over in the LMDB literature that the COW B-tree design writes multiple whole pages to disk for every single row update. That means that if you store a counter in each entry, then an increment operation (i.e., changing 1 or 2 bits) will result in some number of pages (each 4KB by default) of DB written to disk. I've not worked out the branching factor given the page size for a certain average record size, but I'd guess that in realistic large DBs it could be on the order of 3-10 4KB pages written for a single-bit change in the data.
All that is said is that "it's sequential IO so it's fast". I understand that, but I'd like to understand more of the qualifiers. For leveldb in synchronous mode you only need to wait for the WAL to have the single update record appended. Writing tens of bytes vs tens or hundreds of kilobytes for every update surely deserves a little more acknowledgement.
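A back-of-envelope illustration, with numbers of my own choosing rather than anything from the LMDB docs:

// 4KB pages and roughly 40 bytes per branch entry (key + page pointer)
var pageSize = 4096;
var fanout = Math.floor(pageSize / 40);                       // ~102
var keys = 100e6;
var height = Math.ceil(Math.log(keys) / Math.log(fanout));    // ~4 levels for 100M keys

console.log(height * pageSize);   // ~16KB rewritten for a 1-bit change,
                                  // versus tens of bytes appended to a WAL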
In fact if you just skimmed the benchmarks you might have missed it but in all write configurations (sync, async, random, sequential, batched) except for batched-sequential writes, leveldb performs better, occasionally significantly better.
Given that high update throughput is a strong selling point for leveldb, and that LMDB was designed initially for a high read-ratio use case, I feel that despite being present in the stats, this trade-off is ignored by the rest of the literature as if it wasn't there at all.
File Fragmentation
The free-list design for reclaiming disk space without costly garbage collection or compaction is probably the most important advance here over other COW B-tree designs. But it seems to me that the resulting fragmentation of data is also overlooked in discussion.
It's primarily a problem for sequential reads (i.e. large range scans). In a large DB that has been heavily updated, presumably a sequential read will on average end up having to seek backwards and forwards for each 4k page as they will be fragmented on disk.
One of the big benefits of LSM Tree and other compacting designs is that over time the majority of the data ends up in higher level files which are large and sorted. Admittedly, with leveldb, range scans require a reasonable amount of non-sequential IO as you need to switch between the files in different levels as you scan.
I've not done any thorough reasoning about it, but my intuition is that with leveldb the relative amount of non-sequential IO needed will at least remain somewhat linear as more and more data ends up in the higher levels, where it is actually sequential on disk. With LMDB it seems to me that large range scans are bound to perform increasingly poorly over the life of the DB, even if the data doesn't grow at all but just updates regularly.
But also, beyond the somewhat specialist case of large range scans, it seems to be an issue for writes. The argument given above is that large writes are OK because they are sequential IO but surely once you start re-using pages from the free list this stops being the case. What if blocks 5, 21 and 45 are next free ones and you need to write 3 tree pages for your update? I'm aware there is some attention paid to trying to find contiguous free pages but this seems like it can only be a partial solution.
The micro benchmarks show writes are already slower than leveldb but I'd be very interested to see a long-running more realistic benchmark that shows the performance over a much longer time where fragmentation effects might become more significant.
Compression
The LMDB benchmarks simply state that "Compression support was disabled in the libraries that support it". I understand why but in my opinion it's a misleading step.
The author states "any compression library could easily be used to handle compression for any database using simple wrappers around their put and get APIs". But that is totally missing the point. Compressing each individual value is a totally different thing to compressing whole blocks on disk.
Consider a trivial example: each value might look like {"id": 1234567, "referers": ["http://example.com/foo", "https://othersite.org/bar"]}. On its own, gzipping that value is unlikely to give any real saving (the repetition of 'http' possibly, but the gzip headers are bigger than the saving there), whereas compressing a 4KB block of such records is likely to give a significant reduction, even if it is only in the JSON field names repeated each time.
This is a trivial example I won't pursue, and better serialisation could fix it, but in my real-world experience most data, even with highly optimised binary serialisation, often ends up with a lot of redundancy between records - even if it's just in the keys. Block compression is MUCH more effective for the vast majority of data types than the LMDB author implies with that comment.
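If you want to see the effect for yourself, a quick Node.js experiment (using gzip as a stand-in for snappy) makes the point:

var zlib = require('zlib');

var record = JSON.stringify({
  id: 1234567,
  referers: ['http://example.com/foo', 'https://othersite.org/bar']
});

var block = [];
for (var i = 0; i < 50; i++) {
  block.push(JSON.stringify({
    id: 1234567 + i,
    referers: ['http://example.com/foo' + i, 'https://othersite.org/bar' + i]
  }));
}
var blockText = block.join('\n');

console.log('one record:', record.length, '->', zlib.gzipSync(record).length);
console.log('50 records:', blockText.length, '->', zlib.gzipSync(blockText).length);
// typically the single record barely shrinks (or even grows), while the block
// compresses to a fraction of its size thanks to the repeated field names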
Leveldb's file format is specifically designed so that compression is possible and effective, and it seems Google intends it to be a key part of the performance of the data structure. Their own benchmarks show performance gains of over 40% with compression enabled. And that ignores the on-disk size entirely, which for many will be a fairly crucial part of the equation, especially if relatively expensive SSD space is required.
One argument might be that you could apply compression at the block level in LMDB too, but I don't think it would be easy at all. The design seems to rely on a fixed block size for its addressing, and compressing contents while leaving blanks gives no disk-space saving and probably no IO saving either, since the whole 4KB is likely still read from disk.
I'm pretty wary of the benchmarks where leveldb has compression off, since I see compression-friendliness as a fairly fundamental feature of leveldb's design. Any real implementation would surely have compression on, since there are essentially no downsides given the design. It's also baked in (provided you have the snappy lib) and on by default for leveldb, so using compression isn't some advanced bit of tuning or modification over a basic implementation.
Maybe I'm wrong and it's trivial to add effective compression to LMDB, but if so, and if doing it would give a ~40% performance increase, why is it not already done and compared?
I'd like to see the benchmarks re-run with compression on for leveldb. Given writes are already quicker for leveldb this more realistic real-world comparison might well give a better insight into the tradeoffs of the two designs. If I get a chance I will try this myself.
Large Transactions Amplify Writes Even Further
LMDB makes a big play of being fully transactional. It's a great feature and implemented really well. My (theoretical) problem is write performance: we've already seen how writes can be slower due to the COW design, but what about the case where you update many rows in one transaction?
Consider worst case that you modify 1 row in every leaf node, that means that the transaction commit will re-write every block in the database file. I realise currently that there is a limit on how many dirty pages can be accumulated by a single transaction but I've also read there are plans to remove this.
Leveldb by contrast can do an equivalent atomic batch write without anywhere near the same disk IO in the commit path. It would seem this is a key reason leveldb is so much better in random batch write mode. Again I'd love to see the test repeated with leveldb compression on too. [edit] On reflection, probably not such a big deal - writes to the WAL in leveldb won't be affected by compression. [/edit]
It may not be a problem for your workload, but it might be. Having certain writes use so much IO could cause real latency issues and, given the single-writer lock, could give you IO-based stalls similar to the ones leveldb is known for due to its background compaction.
I'll repeat that this is all theoretical, but I'd want to understand a lot more detail like this before I used LMDB in a critical application.
Disk Reclamation
Deleting a large part of the DB does not free any disk space for other DBs or applications in LMDB. Indeed, there is no built-in feature, nor any tools I've seen, that will help you re-optimise the DB after a major change, or help migrate one DB to another to reclaim the space.
This may be a moot point for many but for practical systems, having to solve these issues in the application might add significant work for the developer and operations teams where others (leveldb) would eventually reclaim the space naturally with no effort.
Summary
I feel that, to counter the potentially negative tone I may have struck here, I should sum up by saying that LMDB looks like a great project. I'm genuinely interested in the design and performance of all the options in this space.
I would suggest that a real understanding of the strengths and weaknesses of each option is an important factor in making real progress in the field. I'd humbly suggest that, if the author of LMDB was so inclined, including at least some discussion of some of these issues in the docs and benchmarks would benefit all.
I'll say it again: if Howard or anyone else who's played with LMDB would like to comment on any of these issues, I'm looking forward to learning more.
So is LMDB a leveldb killer? I'd say it seems good, but more data required.
In making this blog I ended up using Yehuda Katz's Handlebars.js for templating. It has some interesting features I'll introduce here, but it arguably dilutes Mustache's basic philosophy somewhat.
I found Handlebars to be a powerful extension to Mustache, but I want to note up front that it quite possibly isn't the best option in every case. Certainly if you need implementations outside of JavaScript it's not (yet) for you. I'm also aware that the extra power comes at a potential cost: you can certainly undo many of the benefits of separating logic and template.
With that note in place, I'll introduce the library.
Why Handlebars?
Yehuda has already outlined his rationale for creating Handlebars so I won't go into too much detail here. The important goals can be summed up as:
Global, contextual helpers — Mustache allows helper methods in views, but they must be defined in the view object (Yehuda calls this the "context" object, so I'll keep that terminology from now on). Further, there is intentionally no way to pass arguments to these methods, so even if they are defined globally and "mixed-in" or inherited into each view, they are fairly limited in scope.
More flexibility with accessing data from parent contexts — inside blocks, Mustache makes it tricky to access properties of the parent scope outside the block.
Precompilation support — you can pre-compile templates into native JS code. In a browser context this saves the client the string-parsing overhead.
I encourage you to read his article for a lot more detail and explanation of those points but we'll crack on for now.
I won't cover all the features here. You can read them in the documentation. For now I want to highlight the power (and possible danger) of helpers.
In the case of my static site generation system, my main goal was to have a very thin layer of logic on top of simple content-with-meta-data files with some simple naming conventions. I wanted flexibility in the templating system so that I could generate menus or listings of content without writing extra code for each case.
With Mustache, this flexibility had to live in the view layer, and it became a little clumsy to express, in a general and extensible way, the data sets required for any page.
It turned out to be much neater, and to require a lot less "magic" code, to make the templates themselves a little more expressive. Helpers were the key.
Helpers
Handlebars adds to Mustache the ability to register helpers that can accept contextual arguments. Helpers are simply callbacks that are used to render {{mustaches}} or {{#blocks}}{{/blocks}}. They can be registered globally or locally in a specific view. We'll use global registration here to keep the examples clearer.
Here's a basic example of a block helper that could be used for rendering list markup.
<h1>{{title}}</h1>
{{#list links}}
<a href="{{url}}">{{name}}</a>
{{/list}}
Here's the context used
{
title: 'An example',
links: [
{url: 'http://example.com/one', name: 'First one'},
{url: 'http://example.com/two', name: 'Second one'}
]
}
And here is the list helper definition:
Handlebars.registerHelper('list', function(links, options){
var html = "<ul>\n";
for (var i = 0; i < links.length; i++) {
html += "\t<li>" + options.fn(links[i]) + "</li>\n";
}
return html + "</ul>\n";
});
When you compile this and render with the context data above, you would get the following output:
<h1>An example</h1>
<ul>
<li><a href="http://example.com/one">First one</a></li>
<li><a href="http://example.com/two">Second one</a></li>
</ul>
You can read similar examples in the documentation which have much more complete explanations of the details here but the basics should be clear:
- The helper is called like a regular Mustache expression — in this case it's a block but non-blocks work too.
- The links array from the context data is passed as an argument. Handlebars allows passing more than one argument or no arguments. The options arg is always present as the last one; any others before it are positional, passed through from the template expression. (You can also use non-positional hash arguments.)
- The options arg is a hash containing a few things. Most significant here is options.fn, which is the compiled template function for the block's content. That means you can call it with either the current context this or some other data context. In this example we are passing links[i], which means the inner block can use {{url}} and {{name}} directly from the current link's context.
With that very brief overview example, I want to move on to more interesting examples. If you want to read more about the specifics about what happened there then I'd encourage reading the block helpers documentation.
Helpers for Content Selection
Before I continue, I need to acknowledge that what follows breaks everything you know about MVC separation of concerns. I know. Bear with me for now.
My site generation system builds the site files based on filesystem naming conventions. For things like the blog home page I wanted to show the 5 most recent blog posts.
Internally the system reads the whole content file structure and builds an in-memory model of the content. Each directory has two indices: one for all articles with a date in the file name (most recent first) and an index of all other article files in alphabetical order. You can then get the object representing that directory and list the articles in either the date-based or name-based index.
For convenience, I developed an internal API that makes this easy using "content URLs", for example Content.get('/p/?type=date&limit=5'), which returns the most recent 5 dated articles in the /p/ directory.
From there it is pretty simple to be able to make a block helper that allows templates like this:
<ul>
{{#pages "/p/?type=date&limit=5"}}
<li><a href="{{url}}">{{title}}</a></li>
{{/pages}}
</ul>
- The pages helper accepts a string argument (the internal content URL) and uses it to fetch the relevant page objects from the content model.
- In this case options.fn is passed the page object itself, so the block can render any property of the page.
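A minimal sketch of how a helper like this can be implemented, with Content.get being the internal API mentioned above:

Handlebars.registerHelper('pages', function (contentUrl, options) {
  var pages = Content.get(contentUrl);   // e.g. '/p/?type=date&limit=5'
  var html = '';
  for (var i = 0; i < pages.length; i++) {
    html += options.fn(pages[i]);        // render the block with each page object as its context
  }
  return html;
});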
Next and Previous
But listings aren't the only case this is useful. On the bottom of each blog article I have links to next/previous articles (if they exist) and these need the URL and title of the neighbouring items in the dated index.
I did this with another couple of block helpers. The blog template looks a bit like this:
<h1>{{title}}</h1>
{{{content}}}
<footer>
{{#prev_page}}
<a href="{{url}}">« {{title}}</a>
{{/prev_page}}
{{#next_page}}
<a href="{{url}}">{{title}} »</a>
{{/next_page}}
</footer>
The helper itself uses this, which is the current context (in this case the main blog article being displayed). It looks up the article's parent directory in the content index and locates the previous or next item in the index relative to the current one. It then calls options.fn with the neighbouring article object as context.
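A sketch of how prev_page can work, based on that description; this (inside the helper) is the current article, and datedIndexFor is a hypothetical stand-in for the content-model lookup described earlier:

Handlebars.registerHelper('prev_page', function (options) {
  var siblings = datedIndexFor(this.dir);   // dated index of the article's directory, newest first
  var i = siblings.indexOf(this);
  var prev = siblings[i + 1];               // the next-oldest article, if there is one
  return prev ? options.fn(prev) : '';      // render the block with it as context, or nothing
});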
Pushing the Boundaries
From here there are a lot of grey areas you could probe with this powerful construct. For example, let's assume you have different modules of your app rendering themselves and then being combined by some layout controller and rendered into a layout.
What if you wanted the module's external CSS or JS requirements actually defined in the template that really has the dependency? Right off the bat, I'll say I can't think of a real reason you'd want this and not have it taken care of outside of the templating layer, but…
You could have a helper for ensuring the correct CSS is loaded up-stream in the template like:
{{add_css 'widget.css'}}
<div class="widget">
...
</div>
And then have the helper defined such that it adds the arguments passed to the layout controller and returns nothing to be rendered.
Then the layout rendering might link those CSS assets in the head.
You're right. This is almost certainly a bad idea. I mention it because it was something that occurred to me for a second before I recognised that it was an example of probably dangerous usage. When you get the hang of a powerful concept like this, it's easy to start seeing every problem as a good candidate for it.
As with all powerful programming concepts and libraries, there are many things you can do with Handlebars helpers that are really bad ideas. Hence my note of caution at the start.
Conclusion
I'm quite happy with the extra power Handlebars has given me in this context. But I'm certain that with the extra power comes the inevitable responsibility. It is certainly possible to write crazy and unmaintainable code if you get too creative with helpers without thought.
The examples here are probably not best practice for an MVC web-app context. But here in a site generation script with an already in-memory content model, it allowed me to extend the expressiveness of the system without hard-coding a lot of specific logic for different cases in the model layer.
Handlebars.js has many more features than I have touched on here. Check it out. It may just be what you are looking for if you really like Mustache's philosophy but have a need (and the discipline) to make more expressive helpers.