Posts written by Matthew Sackman
May 11, 2012
by Matthew Sackman
You have a queue in Rabbit. You have some clients consuming from that
queue. If you don’t set a QoS setting at all (basic.qos), then
Rabbit will push all the queue’s messages to the clients as fast as
the network and the clients will allow. The consumers will balloon in
memory as they buffer all the messages in their own RAM. The queue may
appear empty if you ask Rabbit, but there may be millions of messages
unacknowledged as they sit in the clients, ready for processing by the
client application. If you add a new consumer, there are no messages
left in the queue to be sent to the new consumer. Messages are just
being buffered in the existing clients, and may sit there for a long
time, even if other consumers become free and could process them
sooner. This is rather suboptimal.
So, the default QoS prefetch
setting gives clients an unlimited
buffer, and that can result in poor behaviour and performance. But
what should you set the QoS prefetch
buffer size to? The goal is to
keep the consumers saturated with work, but to minimise the client’s
buffer size so that more messages stay in Rabbit’s queue and are thus
available for new consumers or to just be sent out to consumers as
they become free.
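As a rough illustration of that trade-off, here is a back-of-the-envelope heuristic (my own sketch, not a rule from RabbitMQ): to keep a consumer saturated, the prefetch buffer needs to hold roughly as many messages as the consumer can process during one network round trip, plus one in flight.

```python
import math

def suggested_prefetch(round_trip_ms: float, process_ms: float) -> int:
    """Back-of-the-envelope prefetch estimate: enough messages to keep a
    consumer busy for one network round trip, plus one in flight."""
    return max(1, math.ceil(round_trip_ms / process_ms) + 1)

# e.g. a 50ms round trip and 5ms of processing per message
print(suggested_prefetch(50.0, 5.0))  # → 11
```

Any figure derived this way is only a starting point: measure your actual round-trip and processing times, then tune.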
February 21, 2012
by Matthew Sackman
AtomizeJS is a JavaScript library for writing distributed programs that run in the browser, without having to write any application-specific logic on the server.
Here at RabbitMQ HQ we spend quite a lot of time arguing. Occasionally it’s about important things, like what messaging really means, and the range of different APIs that can be used to achieve messaging. RabbitMQ and AMQP present a very explicit interface to messaging: you very much have the verbs send and receive, and you need to think about what your messaging patterns are. There’s a lot of (often quite clever) stuff going on under the bonnet, but nevertheless the interface is quite low-level and explicit, which gives a good degree of flexibility. Sometimes, though, that style of API is not the most natural fit for the problem you’re trying to solve: do you really reach an impasse and think “What I need here is an AMQP message broker”, or do you, from pre-existing knowledge, realise that you could choose to use an AMQP message broker to solve your current problem?
October 27, 2011
by Matthew Sackman
Since the new persister arrived in RabbitMQ 2.0.0 (yes, it’s not so
new anymore), Rabbit has had a relatively good story to tell about
coping with queues that grow and grow and grow to sizes that
preclude them from being held in RAM. Rabbit starts writing
out messages to disk fairly early on, and continues to do so at a
gentle rate so that by the time RAM gets really tight, we’ve done most
of the hard work already and thus avoid sudden bursts of
writes. Provided your message rates aren’t too high or too bursty,
this should all happen without any real impact on any connected
clients.
Some recent discussion with a client made us return to what we’d
thought was a fairly solved problem and has prompted us to make some
changes.
October 25, 2011
by Matthew Sackman
In RabbitMQ 2.6.0 we introduced Highly Available queues. These necessitated a new extension to AMQP, and a fair amount of documentation, but to date little has been written on how they work.
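As a sketch of what that AMQP extension looked like from the client side: in RabbitMQ 2.6.x, mirroring was requested via optional queue.declare arguments (x-ha-policy, plus x-ha-policy-params for the “nodes” policy). The helper below merely builds that arguments table; the function name is mine, and actually declaring the queue needs an AMQP client and a running broker.

```python
def ha_queue_arguments(policy: str = "all", nodes=None) -> dict:
    """Build the queue.declare arguments that request a mirrored ("HA")
    queue in RabbitMQ 2.6.x: mirror on all nodes, or on named nodes."""
    args = {"x-ha-policy": policy}
    if policy == "nodes":
        # Mirror only on the named cluster nodes, e.g. ["rabbit@node1"]
        args["x-ha-policy-params"] = list(nodes or [])
    return args

print(ha_queue_arguments())  # → {'x-ha-policy': 'all'}
```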
September 24, 2011
by Matthew Sackman
One of the problems we face at RabbitMQ HQ is that whilst we may
know lots about how the broker works, we don’t tend to have a large
pool of experience of designing applications that use RabbitMQ and
which need to work reliably, unattended, for long periods of time. We
spend a lot of time answering questions on the mailing list, and we do
consultancy work here and there, but in some cases it’s as a result of
being contacted by users building applications that we’re really made
to think about long-term behaviour of RabbitMQ. Recently, we’ve been
prompted to think long and hard about the basic performance of queues,
and this has led to some realisations about provisioning Rabbits.
May 17, 2011
by Matthew Sackman
Most of us at RabbitMQ HQ have spent time working in a number of functional languages in addition to Erlang, such as Haskell, Scheme, Lisp, OCaml or others. Whilst there is lots to like about Erlang, such as its VM/emulator, there are inevitably features that we all miss from other languages. In my case, having spent a couple of years working in Haskell before returning to the RabbitMQ fold, all sorts of features are “missing”, such as laziness, type classes, additional infix operators, the ability to specify the precedence of functions, fewer parentheses, partial application, more consistent standard libraries and do-notation. That’s a fair list, and it’ll take me a while to get around to implementing them all in Erlang, but here are two for starters.
January 20, 2011
by Matthew Sackman
From time to time, on our mailing list and elsewhere, the idea comes up of using a different backing store within RabbitMQ. The backing store is
the bit that’s responsible for writing messages to disk (a message can
be written to disk for a number of reasons) and it’s a fairly frequent
suggestion to see what RabbitMQ would look like if its own backing
store was replaced with another storage system.
Such a change would permit functionality that is not currently
possible, for example out-of-band queue browsing, or distributed
storage, but there is a fundamental difference in the nature of data
storage and access patterns between a message broker such as RabbitMQ
and a generic database. Indeed RabbitMQ deliberately does not store
messages in such a database.
October 19, 2010
by Matthew Sackman
Arriving in RabbitMQ 2.1.1 is support for bindings between exchanges. This is an extension of the AMQP specification, and making use of this feature will (currently) result in your application only functioning with RabbitMQ, and not the myriad other AMQP 0-9-1 broker implementations out there. However, this extension brings a massive increase in the expressivity and flexibility of routing topologies, and solves some scalability issues at the same time.
Normal bindings allow exchanges to be bound to queues: messages published to an exchange will, provided the various criteria of the exchange and its bindings are met, pass through those bindings and be appended to the queue at the end of each binding. That’s fine for a lot of use cases, but there’s very little flexibility there: it’s always just one hop, with the message being published to one exchange, with one set of bindings, and consequently one possible set of destinations. If you need something more flexible then you’d have to resort to publishing the same message multiple times. With exchange-to-exchange bindings, a message published once can flow through any number of exchanges, of different types, with vastly more sophisticated routing topologies than previously possible.
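To make the multi-hop idea concrete, here is a toy in-memory model (not a RabbitMQ client, and deliberately simplified to direct-style exact-match routing): one publish traverses an exchange-to-exchange binding and then an ordinary exchange-to-queue binding.

```python
class Exchange:
    """Toy direct-style exchange: exact routing-key match only."""

    def __init__(self, name):
        self.name = name
        self.bindings = []  # (routing_key, target): an Exchange or a queue (list)

    def bind(self, routing_key, target):
        self.bindings.append((routing_key, target))

    def publish(self, routing_key, message):
        for key, target in self.bindings:
            if key == routing_key:
                if isinstance(target, Exchange):
                    target.publish(routing_key, message)  # hop to the bound exchange
                else:
                    target.append(message)  # deliver to the queue

upstream = Exchange("upstream")
downstream = Exchange("downstream")
queue = []
upstream.bind("task", downstream)  # exchange-to-exchange binding
downstream.bind("task", queue)     # ordinary exchange-to-queue binding
upstream.publish("task", "hello")  # published once, delivered after two hops
print(queue)  # → ['hello']
```

In a real broker the second exchange could of course be a fanout or topic exchange, which is where the extra expressivity comes from.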