Message queues — what are they?

Mindaugas Nakrosis · 4 min read · Oct 16, 2021

To understand what message queues are, we first need to understand how services usually communicate with each other. There are synchronous and asynchronous ways of communicating, and I will try to explain how both of them work.

Synchronous way

Let’s start with a real-world example of two services trying to communicate synchronously.

We have a Todo service that wants to send a request to a Notification service once a todo has been completed. The Todo service sends a synchronous request to the Notification service over TCP (for example, an HTTP call).

Synchronous means that the Todo service has to wait for the Notification service to respond to the request. If the Notification service takes a long time to respond, the connection is held open for the whole time, meaning we get extra latency.

Another possible problem: if the Notification service crashes or fails to respond, the Todo service might have to retry the request multiple times until the Notification service finally answers.
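To make this concrete, here is a minimal sketch of the synchronous approach in Python. The endpoint URL, payload, timeout and retry count are assumptions made up for illustration; the point is simply that the Todo service blocks on every call and has to handle retries itself.

```python
# Hypothetical synchronous call from the Todo service to the Notification service.
# The URL, payload and retry policy below are illustrative assumptions.
import requests

def notify_todo_completed(todo_id, retries=3):
    for _ in range(retries):
        try:
            # The Todo service blocks here until the Notification service answers
            # (or the timeout expires), which is where the extra latency comes from.
            response = requests.post(
                "http://notification-service/notify",
                json={"todo_id": todo_id, "event": "todo_completed"},
                timeout=2,
            )
            response.raise_for_status()
            return  # success
        except requests.RequestException:
            continue  # a slow or crashed Notification service forces a retry
    raise RuntimeError("Notification service did not respond after several retries")
```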

From these points you can see that the Todo service is dependent on the behaviour of the Notification service, so the two of them are coupled. Most of the time (not always) that is not a good word to hear in software development; microservices should generally be as isolated as possible.

This is where asynchronous requests come in to help.

If the Todo service were to send an asynchronous request instead, it would not need to care about the response.
Asynchronous communication is kind of like a milkman delivering bottles to your door in the morning. He just leaves the bottle and carries on with his round!

In order to understand how asynchronous requests work, we need to understand message queueing.

From the previous points we can conclude that the Todo service does not like being kept waiting for the Notification service’s response. What can it do instead to make the situation better?

Please leave a message!

The Todo service can leave a message in the message queue and go on doing the stuff it generally likes (what do services do in their free time, anyway?). It does not need to worry about who actually receives the message.

Since the Notification service is not too fond of the Todo service either, it can avoid direct contact with it (during these troubling pandemic times) by picking the message up from the message queue. This way everyone is happy and has tons of free time.

In message queue terms, the Todo service is the producer and the Notification service is a consumer.
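Here is a rough sketch of the producer side using pika, a Python client for RabbitMQ. The queue name, payload and connection details are assumptions made for illustration; any broker and client would do.

```python
# Producer side (the Todo service): drop a message in the queue and move on.
# The queue name, payload and broker address are illustrative assumptions.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="todo_completed", durable=True)

channel.basic_publish(
    exchange="",                    # the default exchange routes by queue name
    routing_key="todo_completed",
    body=json.dumps({"todo_id": 42, "event": "todo_completed"}),
)
connection.close()
# Note that the Todo service never waits for the Notification service here;
# it only waits for the broker to accept the message.
```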

What problems does this solve?

Services are not tightly coupled

When you start writing API calls from one service to another, you often unintentionally start creating API contracts. This means the services become somewhat coupled to one another.

Performance

Queues are beneficial in cases where we generate messages faster than we can actually consume them. With a queue in between, a consumer bottleneck does not impact the rate at which we produce messages.

Scalability

This is kind of related to the previous point. If we want to scale the rate at which we consume messages, we can spread them across multiple consumers reading from the same queue. This is often called the competing consumers pattern.
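As a sketch of that idea, the consumer below could be started several times as separate processes, and RabbitMQ would spread the messages from the same queue across them. The queue name matches the producer sketch above, and the prefetch setting is an assumption that stops one busy worker from grabbing more messages than it can handle.

```python
# Consumer side (the Notification service): several copies of this script can
# run in parallel against the same queue (competing consumers).
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="todo_completed", durable=True)
channel.basic_qos(prefetch_count=1)  # hand each worker one message at a time

def handle_message(ch, method, properties, body):
    event = json.loads(body)
    print("Sending notification for todo", event["todo_id"])

channel.basic_consume(queue="todo_completed",
                      on_message_callback=handle_message,
                      auto_ack=True)  # acknowledgements are covered further down
channel.start_consuming()
```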

Although message queues are, in general, an architectural pattern rather than a single technology, I have to mention RabbitMQ, which is one of the most popular message brokers (and the one the sketches above assume).

RabbitMQ

RabbitMQ is a message broker, and it has a ton of interesting features on top of the ones I have already mentioned. I will go through several of the more interesting ones.

Delivery acknowledgement

This is one of RabbitMQ’s core reliability features (similar mechanisms exist in other brokers as well).

Let’s say the Todo service puts a message in the queue. The Notification service picks up the message from the queue and then, for some unknown reason, crashes. If RabbitMQ is configured with delivery acknowledgements, it expects the consumer to send an acknowledgement back after processing the message.
In this example the Notification service fails to do so, so RabbitMQ automatically puts the message back on the queue, where another consumer can pick it up and acknowledge it.

This adds resilience to message queues.
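Building on the consumer sketch above, switching to manual acknowledgements is a small change: the consumer acks each message only after processing it, so a crash before the ack makes RabbitMQ redeliver the message. The queue name and the send_notification helper are hypothetical, just like in the earlier sketches.

```python
# Manual acknowledgements with pika: a message is only removed from the queue
# after basic_ack is called. If the consumer crashes before acking, RabbitMQ
# puts the message back on the queue for another consumer.
import json
import pika

def send_notification(event):
    print("Notifying about todo", event["todo_id"])  # hypothetical processing step

def handle_message(ch, method, properties, body):
    send_notification(json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)   # tell RabbitMQ we are done

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="todo_completed", durable=True)
channel.basic_consume(queue="todo_completed",
                      on_message_callback=handle_message,
                      auto_ack=False)  # acknowledge manually, after processing
channel.start_consuming()
```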

Distributed deployment

Multiple RabbitMQ instances can be joined and set up as a cluster, making the broker highly available. It can also be run on top of something like Kubernetes in the cloud.

Management & monitoring

RabbitMQ integrates with tools like Prometheus, so you can monitor your queues from a Prometheus instance.

Other popular message brokers worth mentioning

Apache Kafka, Redis, Amazon SQS, and Amazon SNS

Thanks for reading!
