Welcome back! Last time we talked about flow control and
latency; today let's talk about some aspects of RabbitMQ's
performance, and in particular how different features affect
the performance we see. There are a huge number of variables
that feed into the overall level of performance you can get
from a RabbitMQ server, and today we're going to try tweaking
some of them and see what we can see. Here are some simple
scenarios. As before, they're all variations on the theme of
one publisher and one consumer, each publishing or consuming
as fast as they can.
Since the new persister arrived in RabbitMQ 2.0.0 (yes, it’s not so
new anymore), Rabbit has had a relatively good story to tell about
coping with queues that grow and grow and grow and reach sizes that
preclude them from being able to be held in RAM. Rabbit starts writing
out messages to disk fairly early on, and continues to do so at a
gentle rate so that by the time RAM gets really tight, we’ve done most
of the hard work already and thus avoid sudden bursts of
writes. Provided your message rates aren’t too high or too bursty,
this should all happen without any real impact on any connected
clients.
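The gradual write-out described above can be sketched as follows. This is purely illustrative and is not RabbitMQ's actual persister: the `PagingQueue` class, its watermark, and the "gentleness" factor are all invented names for this sketch, but they capture the idea of ramping the paging rate up with memory pressure instead of bursting when RAM runs out.

```python
from collections import deque

class PagingQueue:
    """Illustrative sketch: page messages to disk at a gentle,
    pressure-proportional rate rather than in one sudden burst."""

    def __init__(self, ram_limit, start_paging_at=0.5):
        self.ram_limit = ram_limit    # max messages we want held in RAM
        self.start = start_paging_at  # fraction of limit at which paging begins
        self.in_ram = deque()
        self.on_disk = []             # stand-in for the on-disk message store

    def publish(self, msg):
        self.in_ram.append(msg)
        self._maybe_page()

    def _maybe_page(self):
        # Pressure rises from 0 (at the watermark) to 1 (at the limit);
        # the number of messages paged out per publish grows with that
        # pressure, so disk writes ramp up gently as RAM gets tight.
        used = len(self.in_ram)
        watermark = self.ram_limit * self.start
        if used <= watermark:
            return
        pressure = (used - watermark) / (self.ram_limit - watermark)
        to_page = max(1, int(pressure * 4))  # tunable "gentleness" factor
        for _ in range(min(to_page, len(self.in_ram))):
            self.on_disk.append(self.in_ram.popleft())

q = PagingQueue(ram_limit=100)
for i in range(200):
    q.publish(i)
print(len(q.in_ram), len(q.on_disk))  # paging began well before RAM filled
```

Because paging starts at the watermark rather than at the limit, by the time the queue is large most messages are already on disk and only the newest remain in RAM.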
Some recent discussion with a client made us return to what we'd
thought was a fairly solved problem and prompted us to make some
improvements.
Among other things, lately we have been preoccupied with improving RabbitMQ's routing performance. In particular we have looked into speeding up topic exchanges by using a few well-known algorithms as well as some other tricks. We were able to reach solutions many times faster than our current implementation.

In our previous blog post we talked about a few approaches to topic routing optimization and described the two most important of these in brief. In this post, we will talk about a few things we tried when implementing the DFA, as well as some performance benchmarking we have done on the trie and the DFA.
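To make the trie approach concrete, here is a minimal sketch of trie-based topic matching. This is not RabbitMQ's implementation (which is in Erlang); the function names and the dict-based trie are invented for illustration. It follows AMQP topic semantics: `*` matches exactly one word, `#` matches zero or more words.

```python
def add_binding(trie, pattern, dest):
    """Insert a binding pattern like 'a.*.c' into a nested-dict trie."""
    node = trie
    for word in pattern.split('.'):
        node = node.setdefault(word, {})
    node.setdefault('_dests', []).append(dest)

def match(node, words):
    """Return all destinations whose pattern matches the routing-key words."""
    results = []
    if '#' in node:
        # '#' matches zero or more words: try every possible split point.
        for i in range(len(words) + 1):
            results += match(node['#'], words[i:])
    if not words:
        results += node.get('_dests', [])
        return results
    head, tail = words[0], words[1:]
    if '*' in node:
        results += match(node['*'], tail)   # '*' consumes exactly one word
    if head in node:
        results += match(node[head], tail)  # literal word match
    return results

trie = {}
add_binding(trie, 'a.#', 1)
add_binding(trie, 'a.*.c', 2)
add_binding(trie, '#', 3)
print(sorted(match(trie, 'a.x.c'.split('.'))))  # → [1, 2, 3]
```

The trie's appeal is that matching cost depends on the routing key's length and the bindings it overlaps, not on the total number of bindings; the backtracking at `#` nodes is what the DFA approach tries to eliminate.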