First In, First Out (FIFO) is often requested in integration scenarios. Many integration systems are built to process a huge number of messages in parallel, so a FIFO requirement forces you to go against the natural grain of your integration hub. This blog covers advice, scenarios and solutions in case you have to deal with FIFO.

Introduction

When the FIFO requirement pops up during the analysis phase, the first thing you should do is challenge the customer to check whether FIFO is really needed. What are the consequences if messages are processed out of order? FIFO only makes sense if there is an unwanted business impact when messages are not processed in the right order. Don't do FIFO just because "we always do it like that". On the other hand, it's important to identify sequencing-related requirements in integrations where you would not expect them at first sight.

Once you've concluded that FIFO is required, it's important to determine the minimal scope for FIFO. As an example: not all patient information must be processed in order; it can be sufficient to process all messages related to a single patient in the right order. Finding the minimal FIFO scope can give your integration solution a serious performance boost, as FIFO is inherently slow.
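To make the patient example concrete, here is a minimal sketch of such a scoped FIFO on Azure Service Bus, using sessions where the session ID is the patient identifier: messages within one session are delivered in order, while different patients can be processed in parallel. The queue name and connection string are placeholders, and the queue is assumed to be session-enabled.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder
QUEUE = "patient-updates"                     # assumed session-enabled queue

client = ServiceBusClient.from_connection_string(CONN_STR)

# Sender side: the session ID defines the FIFO scope (one patient).
with client.get_queue_sender(queue_name=QUEUE) as sender:
    sender.send_messages(
        ServiceBusMessage(b"<patient update payload>", session_id="patient-42")
    )

# Receiver side: lock one session and process its messages in order;
# other sessions (patients) can be handled in parallel by other receivers.
with client.get_queue_receiver(queue_name=QUEUE, session_id="patient-42") as receiver:
    for message in receiver:
        # handle the update, then settle it
        receiver.complete_message(message)
```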

The next question to tackle is how the integration layer will know what the desired order of processing is. There are multiple options: the sequence in which messages are submitted to the integration layer (preferably via a protocol that supports FIFO), or a sequence number, version or timestamp inside the message. Another important question: what must happen in case a message is out of order? Should it be ignored, or must we queue it aside until it's allowed to be processed? And what if a missing message blocks the further processing of other messages?

Scenarios & solutions

This section describes FIFO-related scenarios I've encountered throughout my career and their potential solutions. Remember that there are multiple solutions for each problem; I just provide my preferred one.

Import Products and Bill of Materials

Scenario: related entities must be processed first

In this customer case, we were dealing with nightly synchronizations from one system towards SAP. First, all the available products and materials (with their corresponding metadata) were synchronized towards SAP. Second, the Bills of Materials (BOMs), which reference related products or materials, were imported into SAP. These products and materials must be known in SAP before a BOM can be imported.

Solution: retry policies based on the returned error code

We did not want to introduce state at the integration layer just to cover an edge case where the material to which the Bill of Materials relates was not yet created in SAP. This potential issue could occur when SAP couldn't process newly created materials fast enough. Instead of holding back the BOMs until all materials were processed, we just processed the messages as they came in. The edge cases were handled by configuring retry policies based on the returned SAP error code. If the error code meant that a material was not known by SAP, we just retried after a configurable timespan. An easy, stateless and high-performing solution!
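As an illustration, here is a minimal sketch of such a retry policy in Python. The SapError type, the error code and the send_to_sap callable are hypothetical stand-ins; in a real integration engine the retry would typically be a declarative policy rather than a sleep loop.

```python
import time

class SapError(Exception):
    """Hypothetical error type carrying the SAP error code."""
    def __init__(self, code: str, message: str):
        super().__init__(message)
        self.code = code

RETRYABLE_CODES = {"MATERIAL_NOT_FOUND"}  # assumed SAP error code
RETRY_DELAY_SECONDS = 60                  # the configurable timespan
MAX_ATTEMPTS = 10

def import_bom(bom, send_to_sap) -> None:
    """Try to import a BOM; retry only when SAP reports a retryable code."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            send_to_sap(bom)
            return
        except SapError as err:
            if err.code not in RETRYABLE_CODES or attempt == MAX_ATTEMPTS:
                raise  # non-retryable error or retries exhausted: escalate
            time.sleep(RETRY_DELAY_SECONDS)  # material may not exist in SAP yet
```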

Synchronize customer records

Scenario: synchronize data entities

In this scenario we were dealing with a synchronization of customers from one master CRM system towards multiple target applications. Every time a business user modified a customer record in the master CRM, the entity had to be synchronized towards the target systems. Multiple modifications in a short timespan were possible, because business users could click the Save button several times, for example when they spotted a typo. We needed to ensure that the target systems always had the latest modification applied.

Solution: smart endpoints

We were lucky that the master CRM system maintained an increasing version number for every customer entity. This could be used to determine whether we should process or ignore a message. The target systems that needed to be updated could easily store this version number together with the customer record. That way, they were able to build smart import modules that ignored old modifications. This solved the problem! Such smart endpoints are very powerful, but sometimes require modifications to the target systems that are not possible or not desirable from a design perspective. In those scenarios, the last processed version number (or modification time) can be stored on the integration layer.
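A minimal sketch of such a smart import module, assuming an in-memory dictionary as a stand-in for the target system's own storage and a hypothetical save_customer persistence call:

```python
# Last applied version per customer; stand-in for the target system's storage.
last_applied_version: dict[str, int] = {}

def save_customer(customer_id: str, payload: dict) -> None:
    """Hypothetical persistence call of the target system."""
    ...

def apply_customer_update(customer_id: str, version: int, payload: dict) -> bool:
    """Apply the update only if it is newer than the last processed version."""
    if version <= last_applied_version.get(customer_id, 0):
        return False  # stale modification: safe to ignore
    save_customer(customer_id, payload)
    last_applied_version[customer_id] = version  # remember the newest version
    return True
```

In a real implementation the version check and the update must happen atomically (for example via a conditional update in the database), otherwise two concurrent imports could still race each other.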

Publish price updates

Scenario: process a low number of messages in the received order

Here, we needed to publish received product price updates to a cash desk system. Price updates were made regularly, but this didn't result in a high message volume. The speed of synchronizing the product prices was not the customer's biggest concern. The most attention needed to go to ordered delivery: processing price updates of the same product out of order had drastic financial consequences.

Solution: singleton pattern

The sequence of the price updates is only important per product, so this is our minimal FIFO scope. For every product that we receive a price update for, a single-threaded process is instantiated. This singleton process is responsible for sending the price update to the cash desk system and for checking afterwards whether there is another price update of the same product that needs to be handled. If no more messages are available, the singleton instance can be shut down. The integration engine must ensure that price updates are routed to an existing singleton process, or that a new instance is spun up in case no singleton is running yet for that specific product.
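The sketch below shows one way to implement this routing in plain Python with threads; the send_to_cash_desk callable is a hypothetical stand-in for the actual delivery. Per product there is at most one worker, and the worker shuts itself down once its queue runs dry.

```python
import threading
from queue import Queue, Empty

class PerProductSingleton:
    """Route price updates so that, per product, at most one worker runs."""

    def __init__(self, send_to_cash_desk):
        self._send = send_to_cash_desk
        self._queues: dict[str, Queue] = {}
        self._lock = threading.Lock()

    def submit(self, product_id: str, price_update) -> None:
        with self._lock:
            queue = self._queues.get(product_id)
            if queue is None:
                # No singleton running for this product: spin one up.
                queue = Queue()
                self._queues[product_id] = queue
                threading.Thread(target=self._run, args=(product_id, queue)).start()
            queue.put(price_update)

    def _run(self, product_id: str, queue: Queue) -> None:
        while True:
            try:
                update = queue.get_nowait()
            except Empty:
                with self._lock:
                    if queue.empty():  # re-check under the lock to avoid races
                        del self._queues[product_id]  # shut the singleton down
                        return
                continue
            self._send(product_id, update)  # strictly one update at a time
```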

Process inventory transactions

Scenario: process a high number of messages in the received order

At this customer, we needed to process a high volume of inventory transactions in the sequence in which they were submitted to the middleware. Before we were able to insert the messages into the backend system, quite some time-consuming validation and processing logic was needed. The transactions were pushed to the integration layer via web services, one by one, to keep the right order. Performance was very important, because too much latency in the near real-time synchronization could lead to an undesired business impact.

Solution: Ticket Dispenser / Gatekeeper pattern

Here, we opted for the Ticket Dispenser / Gatekeeper pattern. This pattern ensures that the integration layer can process many messages in a multi-threaded way and that they get re-sequenced before being submitted to the target system. This pattern is a lot faster than the simpler singleton pattern. A ticket dispenser component assigned an increasing sequence number to the messages as they entered the integration layer (before we returned an HTTP 202). All messages were processed individually and sent afterwards to the gatekeeper component. This component re-sequenced all messages, to make sure that they were sent in the desired order to the destination application.
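A minimal sketch of both components, assuming a hypothetical send_to_target callable; workers would process messages in parallel between taking a ticket and submitting the result to the gatekeeper.

```python
import itertools
import threading

class TicketDispenser:
    """Hands out an increasing sequence number as each message enters."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._lock = threading.Lock()

    def next_ticket(self) -> int:
        with self._lock:
            return next(self._counter)

class Gatekeeper:
    """Re-sequences processed messages before they leave the integration layer."""

    def __init__(self, send_to_target):
        self._send = send_to_target
        self._next_expected = 1
        self._waiting = {}  # processed but out-of-order messages, keyed by ticket
        self._lock = threading.Lock()

    def submit(self, ticket: int, message) -> None:
        with self._lock:
            self._waiting[ticket] = message
            # Flush every consecutive message that is now ready to go out.
            while self._next_expected in self._waiting:
                self._send(self._waiting.pop(self._next_expected))
                self._next_expected += 1
```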

Distribute product information

Scenario: process messages according to their sequence number

In the last example, we had the requirement to distribute product updates from one master system to multiple B2B partners. This was needed to keep the product information (description, price, pictures, etc.) in sync on several eCommerce platforms. The messages needed to be processed in sequence towards every single eCommerce platform. This ensured that the latest product information was always available on each eCommerce site.

Solution: Re-sequencer pattern

The source system could only export messages in batch mode, which didn't allow submitting the messages to a queue in the correct order. Luckily, they could easily add a per-product sequence number to every update. In front of every target eCommerce partner, we had a re-sequencer component that ensured that the messages were submitted to the destination in line with their incremental sequence number. This was actually the same component as the gatekeeper described in the previous example. In these scenarios, you need to make sure you get alerted when one missing message blocks the complete integration for a long time. Discuss with the customer what automated or manual procedures should be in place to cover these cases.
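As a sketch of such alerting, the monitor below could be called periodically with the re-sequencer's state; the alert callable and the threshold are illustrative assumptions, not part of the original solution.

```python
import time

class StallMonitor:
    """Alert when a re-sequencer waits too long for the same missing message."""

    def __init__(self, alert, max_stall_seconds: float = 300.0):
        self._alert = alert               # e.g. sends a mail or ops notification
        self._max_stall = max_stall_seconds
        self._blocked_on = None           # sequence number we are waiting for
        self._blocked_since = None

    def check(self, next_expected: int, waiting_count: int) -> None:
        """Call periodically with the re-sequencer's current state."""
        if waiting_count == 0:
            self._blocked_on = None       # nothing buffered: no stall possible
            return
        if next_expected != self._blocked_on:
            self._blocked_on = next_expected
            self._blocked_since = time.monotonic()
        elif time.monotonic() - self._blocked_since > self._max_stall:
            self._alert(f"Message {next_expected} missing for over "
                        f"{self._max_stall}s; the integration is blocked.")
```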

Conclusion

Ordered delivery and FIFO requirements can pop up in many variations. On the one hand, try to avoid FIFO wherever possible; on the other hand, try to detect scenarios where sequencing can avoid an unexpected business outcome. In any case: try to limit the scope of FIFO and aim for solutions that keep your integration layer stateless. However, the latter is not always possible!

If the source system can inject versioning or sequencing information into the message, you don't need a FIFO-enabled protocol. If this is not possible, messages need to be submitted to the integration layer in the right order, which puts some responsibility on the source system. Go beyond the happy path and figure out what's needed in case something fails.

Have you encountered other ways to deal with any form of sequencing? Don't hesitate to discuss it in the comments or to reach out on Twitter.
