In the last post in the series about Azure Service Bus message completion, I briefly talked about how message batch completion could benefit from a more reactive approach. Before I dive into the inner workings of the reactive structure that I’m going to outline in the next few posts, I need to briefly recap the functional and non-functional requirements we need from the component.

The component shall

Make sure messages are only completed on the receiver they came from

Reduce the number of threads used when the number of clients is increased

Autoscale up under heavy load

Scale down under light load

Minimise the contention on the underlying collections used

Be completely asynchronous

Implement a push-based model from the producer and consumer perspective

Respect the maximum batch size defined by the client of the component or a predefined push interval

Provide FIFO semantics instead of LIFO

Be as lock-free as possible

From a design viewpoint, the MultiProducerConcurrentConsumer (MPCC) component will look like this:

The MPCC internally manages N slots that can contain items of type TItem. Whenever the push interval elapses or the batch size is reached, the component fans out the completion callback, invoking it concurrently up to a provided concurrency per slot.

class MultiProducerConcurrentConsumer<TItem>
{
    public MultiProducerConcurrentConsumer(int batchSize, TimeSpan pushInterval, int maxConcurrency, int numberOfSlots) { }

    public void Start(Func<List<TItem>, int, object, CancellationToken, Task> pump) { }
    public void Start(Func<List<TItem>, int, object, CancellationToken, Task> pump, object state) { }

    public void Push(TItem item, int slotNumber) { }

    public async Task Complete(bool drain = true) { }
}

Shown above is the class outline that we are going to use for the component. The constructor allows specifying the batch size, the push interval, the number of slots and the concurrency per slot. Items are handed to the component with the Push method. As soon as the component is started, the provided pump delegate will be called whenever the batch size is reached or the push interval elapses. The concurrency per slot is probably best explained by giving an example.
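To make the shape of the API concrete, here is a minimal usage sketch against the outline above. Note that the method bodies are still empty stubs at this point in the series, so the pump is never actually invoked; the example only shows how a client would wire the component up.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Outline from the post; the bodies are filled in over the next posts.
class MultiProducerConcurrentConsumer<TItem>
{
    public MultiProducerConcurrentConsumer(int batchSize, TimeSpan pushInterval, int maxConcurrency, int numberOfSlots) { }
    public void Start(Func<List<TItem>, int, object, CancellationToken, Task> pump) { }
    public void Start(Func<List<TItem>, int, object, CancellationToken, Task> pump, object state) { }
    public void Push(TItem item, int slotNumber) { }
    public async Task Complete(bool drain = true) { }
}

class Program
{
    public static async Task Main()
    {
        var consumer = new MultiProducerConcurrentConsumer<Guid>(
            batchSize: 500,
            pushInterval: TimeSpan.FromSeconds(2),
            maxConcurrency: 3,
            numberOfSlots: 2);

        // The pump receives the batch, the slot it was taken from,
        // optional state and a cancellation token.
        consumer.Start((lockTokens, slotNumber, state, token) =>
        {
            Console.WriteLine($"Completing {lockTokens.Count} tokens from slot {slotNumber}");
            return Task.CompletedTask;
        });

        // Producers push items into the slot matching the receiver they came from.
        consumer.Push(Guid.NewGuid(), slotNumber: 0);
        consumer.Push(Guid.NewGuid(), slotNumber: 1);

        // Deterministic shutdown: drain remaining items by default,
        // or pass drain: false to throw them away.
        await consumer.Complete();
    }
}
```

Using one slot per receiver is what lets the component satisfy the first requirement: items pushed to slot N are only ever handed to the pump together with slot number N.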

Let’s assume we have a batch size of 500 and a push interval of 2 seconds. When producers produce 1500 items per second, the batch size is reached three times per second. In that case MPCC will auto-scale up to the provided concurrency. So in the above example, it will take out 500 items and invoke the callback, then concurrently try to fetch another 500 items, and again another 500 items, until the slot is empty or the maximum concurrency is reached.
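As a rough illustration of that fan-out, here is a simplified sketch for a single slot. This is not the actual MPCC internals (the name PushInBatches and the use of ConcurrentQueue are assumptions made for this example); it only demonstrates the idea of starting up to maxConcurrency batch pushes before awaiting them all.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

static class SlotFanOut
{
    // Drain one slot in batches of batchSize, invoking the pump for up to
    // maxConcurrency batches concurrently before awaiting them all.
    public static async Task PushInBatches<TItem>(
        ConcurrentQueue<TItem> slot,
        int batchSize,
        int maxConcurrency,
        Func<List<TItem>, Task> pump)
    {
        var inFlight = new List<Task>();
        while (!slot.IsEmpty && inFlight.Count < maxConcurrency)
        {
            var batch = new List<TItem>(batchSize);
            while (batch.Count < batchSize && slot.TryDequeue(out var item))
            {
                batch.Add(item);
            }
            if (batch.Count > 0)
            {
                inFlight.Add(pump(batch)); // started, not yet awaited
            }
        }
        await Task.WhenAll(inFlight); // the batch pushes complete concurrently
    }
}
```

With 1500 items in the slot, a batch size of 500 and a maximum concurrency of 3, the pump is invoked three times with 500 items each, and the three invocations are in flight at the same time.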

The Complete method is available for deterministic stopping of the MPCC. It allows the caller either to asynchronously wait for all slots to be drained (the default) or to stop immediately and throw away all items that haven’t been processed.

In the next post, we are going to look at the inner workings of the MultiProducerConcurrentConsumer. I hope you’ll enjoy it.