So I have the following setup:

Web API (entry point)

MSSQL DB (single)

Worker service (listens on RabbitMQ)

The general idea is to send a POST to the Web API, which would put a message on a single queue (always the same). The listening worker service would then pick it up and process it.

The problem arises when I need to ensure that two or more worker services can be deployed, all listening on the same queue. This matters because the service's job is to process multiple Document entities. These are stored in a shared MSSQL database, where the schema for Document looks something like this:

+----+---------+------------+-----+
| ID | DocName | DocVersion | ... |
+----+---------+------------+-----+
| 1  | Doc_ABC | 1          | ... |
| 2  | Doc_DEF | 3          | ... |
| 3  | Doc_GHI | 1          | ... |
| 4  | Doc_ABC | 2          | ... |
| 5  | Doc_DEF | 4          | ... |
| 6  | Doc_GHI | 2          | ... |
+----+---------+------------+-----+

Assuming the above schema, this is the run-down of what the worker is supposed to do after receiving a message:

1. Fetch the "latest" (version-wise) document entries. This means grouping on DocName and taking the row with the MAX DocVersion for each distinct DocName (the names are more or less fixed).
2. For each Document fetched, build a new document (business-logic stuff...) with an incremented version.
3. Send the document to an external API (this will be rejected if the document version isn't higher than the previously sent version).
4. Save the new document to the DB.
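To make step 1 concrete, here is the "latest per DocName" selection run against the example rows above, in plain Python (in SQL this would typically be a GROUP BY or ROW_NUMBER query; the tuple layout here is just an assumption for illustration):

```python
# Rows mirror the example table: (ID, DocName, DocVersion).
rows = [
    (1, "Doc_ABC", 1),
    (2, "Doc_DEF", 3),
    (3, "Doc_GHI", 1),
    (4, "Doc_ABC", 2),
    (5, "Doc_DEF", 4),
    (6, "Doc_GHI", 2),
]

def latest_per_name(rows):
    """Return the row with the highest DocVersion for each distinct DocName."""
    latest = {}
    for row in rows:
        _id, name, version = row
        if name not in latest or version > latest[name][2]:
            latest[name] = row
    return sorted(latest.values())

print(latest_per_name(rows))
# -> [(4, 'Doc_ABC', 2), (5, 'Doc_DEF', 4), (6, 'Doc_GHI', 2)]
```

Note that any two workers running this query at the same time see the same three rows, which is exactly where the race starts.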

This obviously introduces a race condition, which is what this question is about. If a request is sent to the API multiple times in quick succession (each triggering a worker), two messages are published, both workers fetch the same "latest" documents, and both will eventually try to upload the same versions.

In the example table above, both workers would get Doc_ABC v2, Doc_DEF v4 and Doc_GHI v2. This results in 6 attempts to send a document, and 3 of those will be rejected because their versions have already been sent.

I've tried wrapping the whole process (steps 1-4) in an isolated transaction scope, but I could still observe the race condition. I've also tried other approaches, such as changing the order of some of the calls, without success, but I still believe there should be a logical solution to this.
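One pattern worth considering (a hedged sketch, not something from the question) is to let the database arbitrate: put a unique constraint on (DocName, DocVersion) and insert the new version *before* calling the external API, so whichever worker loses the insert simply skips the upload. The sketch below simulates this with SQLite standing in for MSSQL; the table and constraint names are assumptions:

```python
import sqlite3

# SQLite stands in for MSSQL here; in MSSQL the equivalent would be a
# unique index, e.g. CREATE UNIQUE INDEX UX_Document ON Document(DocName, DocVersion).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Document (
        ID INTEGER PRIMARY KEY,
        DocName TEXT NOT NULL,
        DocVersion INTEGER NOT NULL,
        UNIQUE (DocName, DocVersion)
    )
""")

def try_claim_version(conn, name, version):
    """Insert the new version first; only the worker that wins the insert
    goes on to call the external API. The loser hits the unique constraint
    and skips the document instead of sending a duplicate version."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO Document (DocName, DocVersion) VALUES (?, ?)",
                (name, version),
            )
        return True   # we own this version; safe to upload
    except sqlite3.IntegrityError:
        return False  # another worker already claimed this version

print(try_claim_version(conn, "Doc_ABC", 3))  # first worker wins: True
print(try_claim_version(conn, "Doc_ABC", 3))  # second worker loses: False
```

This flips the order of steps 3 and 4 (save, then send), which trades the duplicate-upload race for a possible gap if the external API call fails after the insert, so it only fits if that failure case can be retried or compensated.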

Does anyone have an idea for solving this race condition?

EDIT: One thing I should have mentioned is that I do not need the parallelism; I only need the processing to be deterministic with 2+ queue consumers, which exist purely for fail-over.
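Given that requirement, one relevant RabbitMQ feature (again a hedged suggestion, not part of the original setup) is "single active consumer", available since RabbitMQ 3.8: the queue delivers messages to only one of its consumers at a time and automatically fails over to another if that consumer drops. With a client such as pika, it is enabled via a queue argument; the queue name below is an assumption:

```python
# Queue argument enabling RabbitMQ's single-active-consumer behaviour
# (requires RabbitMQ 3.8+). Only one consumer receives messages at a time;
# the others are idle standbys, which matches the fail-over-only requirement.
SAC_ARGS = {"x-single-active-consumer": True}

# With pika, the queue would be declared roughly like this
# (connection/channel setup omitted; "documents" is an assumed queue name):
# channel.queue_declare(queue="documents", arguments=SAC_ARGS)
```

This removes the concurrency at the consumer level entirely, so the race between workers cannot occur, while still keeping the extra instances around as standbys.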