Integration projects fail to deliver end user value when they are not designed specifically around delivering end user value. That was a tongue twister...so, what does that mean?
Here's how many (most?) integration projects go:
The partner manager or sales person or customer success manager (or someone connected to the customer) convinces the product manager "we need an integration between our product and x". The product manager tells an engineer, "Time to build an integration to x. Here's the API documentation. Go to town." The engineer is then tasked with figuring it out.
There may be some general statements about what the integration needs to do for the end user, but this end user experience is not at the forefront of the conversation. It's a conversation about making two APIs talk.
In reality, the end user experience that an integration intends to create is the most important part. Delivering that experience efficiently and effectively requires you to design the integration around the end user's considerations. This post dives into what a best practice workflow for doing just that looks like.
Integration projects tend to be complex.
They are complex because of the number of variables you cannot control. They are complex because of the foundational knowledge you must be equipped with before even starting the project. They are complex because they expose interpersonal, inter-departmental, and inter-organizational misalignment.
If you are part of a team tasked with deploying a software integration, you'll face the reality that these projects are just hard.
Think about what goes into an integration project...
The goal is to move data, usually bidirectionally (unidirectionally for ETL or data migration use cases), between two different software products. This is to happen in service of creating an inter-product user experience.
Ideally, both products have APIs.
Ideally, they both model objects similarly (e.g. customer records in both systems look similar).
Ideally, those APIs provide sufficient functionality for both their listening and talking roles in the integration.
Ideally, both APIs are well documented.
Ideally, both APIs are accessible, with minimal "partner fee" gates, custom configuration, walled gardens, or access credential restrictions.
But, this is all "ideal". You accept that some of the ideal scenario won't be there--perhaps none of it will. This is before you even get into specific requirements for the specific project. This is before even asking what stakeholders want an integration to do.
You have to align the needs of those multiple stakeholders, internal and external to your organization, who are impacted or served by the integration. You have to define how the integration will support those needs. Then you have to collaborate with an engineering team to make that integration a real thing.
Remember: engineers tend to be the most expensive and least available people on your team.
There are so many places to fail. There are so many times during the project where something might take longer than expected, work differently than expected, or cost more than expected. If you're a team that must deliver many integration projects over time, it is imperative that you reduce this risk.
Building a discipline about how you design an integration does exactly this.
You can reduce the impact of these unavoidable realities by executing a best practice integration design workflow.
Integration projects contain variance. They are always unpredictable to some extent, because they are necessarily the technical output of two or more teams' collaborative ideas about what two or more separately built software products should do together. The points of potential misalignment are too numerous to treat an integration project as a straight line from start to finish.
To reduce how long that line from start to finish is, you must manage variance--the opportunities for failure. In other words:
You can't change the fact that integration projects are hard. You can control how you and your team absorb that baked-in difficulty.
A best practice integration design workflow standardizes the steps that go into defining an integration, the artifacts that are created, and the points of involvement for all stakeholders. It also avoids rigidity, providing the right amount of bend to absorb the inevitable surprises that show up during the project.
This reduces the questions, points of confusion, rework, and bug fixing required once you pass the baton to the engineers--the most expensive and least available people on your team--who will build the integration.
Sounds amazing! What does that actually look like?
A best practice integration design workflow follows two basic rules:
Designing the integration big to small means defining each layer of the integration, from the overall business case down to the individual field mapping. Ideally, you work top-down/big-to-small as much as possible.
There will inevitably be rework in the opposite direction as you discover the unknown unknowns of the project. The goal, however, is to only move on to the next layer down when you have a complete understanding of the current layer (i.e. don't define data flows for a currently undefined use case).
The layers, top-down, are as follows:
The business case represents the overarching mission and theme of the integration project. It need not get into the details about what data moves where, why, or how, but it should generally describe the integration. Most importantly, it should articulate the value the integration will provide to stakeholders.
Business cases come in all shapes and sizes, and there probably isn't a wrong answer for how to define one. Covering business cases for integration projects in detail is a topic for a future post, but you can use these guidelines when assessing how one should look:
The business case is the first artifact that is defined, and it happens during the "explore" phase of a best practice integration program. Once an integration project is triaged and prioritized to the top of the list, you define the remaining artifacts so the engineering team can make it real.
Example: X marketing automation platform should integrate with Y customer relationship management platform. This will help salespeople understand what brand interactions have occurred prior to or outside of the direct conversations they have with qualified prospects.
A business case can be broken down into one or more use cases (almost always more than one).
Use cases take the business case and break it into more tangible, tactical units of value for stakeholders. While these don't have to be structured as an Agile user story, it's an easy structure to use. It also helps to shape the size and scope of what a use case should be.
Use cases are the features of an integration. For product integrations or templatized integrations that will get built once and used by many, use cases are what you'll provide the sales and marketing teams to describe what the integration does.
Use cases are not technical. They are business-focused. They define value to stakeholders, but more granularly than a business case would. Use cases should be independent of one another, so you can descope one without impacting the others. (You may need to do this.) They should also be short and sweet. If you're writing paragraphs to define a use case, you're going too deep.
Here are some recommendations to consider for how to define a use case:
The specific format is less important than the scope and purpose. If your team already has standard formats and processes for defining use cases (features, stories, etc.), it's probably sufficient. If you are starting from square one, any of the above are acceptable.
Example: When a "conversion" event fires via the marketing platform, a customer record should be created in the CRM, so a sales person can engage with the newly converted prospect.
Use cases are defined further as one or more data flows. If you find yourself defining more than three data flows for a use case, it's a sign that your use case may be too broad. Consider breaking it up.
Data flows are where requirements start to transform from business value to technical specification. This translation between stakeholder needs and technical artifacts should be the core skillset for your team's integration analyst. The data flow is the top-level entity that defines such a translation.
Defining a data flow requires you to understand what entities (the types of records that may flow between systems) are handled by both endpoint systems (the two pieces of software being integrated). Oftentimes, those will align very closely: two systems that both leverage customer records. Sometimes two endpoints will manage very different entities, and the analyst will have to decide how those entities relate in service of delivering a use case.
Usually API documentation is a good place to start to understand what entities are defined in each system. APIs are not always well defined, well documented, or truly representative of a system's entities, though. In some cases, entities can be dynamic, based on end-user specific configuration within an endpoint system. This also adds complexity to defining a data flow.
The data flow definition should express the following:
A data flow should be unidirectional. If you find yourself trying to define a bidirectional one, you've stumbled upon a use case that requires more than one data flow to deliver stakeholder value. Again, if you go beyond three of them, you should consider whether your use case is too broad. This might be too big a bite from the elephant.
The data flow represents what things come from here and what they look like when they go to there. It does not include when that happens or details about how it happens. Those come next.
Example: Marketing Event + Customer Record from X Marketing Platform to Y CRM.
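Captured as a design artifact, a data flow definition like the example above might look something like the following sketch. The structure and field names here are illustrative assumptions, not a prescribed schema:

```python
# A hypothetical way to capture a data flow definition as structured data.
# All names and values are illustrative, mirroring the example above.
data_flow = {
    "name": "Conversion to CRM Customer",
    "use_case": "Create a customer record when a conversion event fires",
    "source_system": "X Marketing Platform",
    "target_system": "Y CRM",
    "direction": "unidirectional",  # data flows should be unidirectional
    "entities": {
        "source": ["MarketingEvent", "CustomerRecord"],
        "target": ["Customer"],
    },
}
```

Even a lightweight structure like this forces the analyst to name the source, target, direction, and entities explicitly before any code is written.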
Triggers define when a data flow should actually do its thing--when data should flow from source system to target system. It is inefficient, usually impossible, and impractical for all data to flow everywhere at all times, so triggers define data movement rules that ensure the use case is achieved.
The following activities are commonly defined as triggers:
Sometimes these are combined into a single trigger (receiving an event via webhook but filtering out some of those events). It's also common that a data flow has more than one trigger configured. This can be useful if you need a "backup" to deal with the occasional failure for a record to make it from source to target.
What you can use as a trigger is typically dependent on what is available via the API or whichever type of data interface the source system provides. The integration analyst will have to decide which trigger(s) most effectively meet the needs of the use case. If you don't also have control over a source system's functionality, you may have to design around endpoint limitations. (These are the types of things that make integration projects hard.)
Example: Listen for the conversion event webhook.
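A webhook trigger often pairs listening with filtering, as noted above. Here's a minimal sketch of that filtering decision; the event shape and field names are assumptions for this example:

```python
def should_trigger(event: dict) -> bool:
    """Decide whether an incoming webhook event should start the data flow.

    Only 'conversion' events start this flow; everything else is ignored.
    The event structure is hypothetical.
    """
    return event.get("type") == "conversion"


# Only the conversion event passes the filter and triggers the flow.
events = [
    {"type": "page_view", "contact_id": "c-1"},
    {"type": "conversion", "contact_id": "c-2"},
]
triggered = [e for e in events if should_trigger(e)]
```

A scheduled polling job could reuse the same filter as the "backup" trigger mentioned earlier, so both paths apply identical data movement rules.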
If triggers define what happens at the beginning of a data flow executing, outcomes define what happens at the other end of the flow. What message or status code does the API provide? What should happen as a result of the outcome?
Every data flow certainly has at least one outcome: what happens when the data flow successfully executes. Often this is something like a REST API returning a 200 status code. It might be as simple as the lack of an error message returned.
However, "outcomes" is plural intentionally. It's more difficult and more important to describe the negative outcomes--what happens when data doesn't flow along the happy path. This is overlooked in most integration projects.
Integrations can move a lot of data, and usually it's on someone else's behalf. If an entity fails to make it from the source to the target system, somebody's job is disrupted. An order didn't make it to be fulfilled. A customer's details didn't show up for a salesperson. An email didn't get sent.
A reliable integration is one that maximizes successful transactions, but also provides the proper support for failed transactions. No integration is 100% successful. Therefore, it's important to understand what could go wrong when pushing data to the target system. Think: validation errors, server outages, invalid data, etc.
What error messages does the target system give you? Does it give you an error or fail silently? Then what happens?
You likely won't think of everything, and you'll discover more of these over time. But, building the "what could happen and what can you do about it" perspective into the integration early will make it more reliable for stakeholders.
Example: Successful API calls will respond with a standard 200 response.
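One common way to make outcomes concrete is to classify the target system's response into success, retryable failure, or permanent failure. This is a sketch under assumed categories; a real integration should key off the target API's documented error responses:

```python
def classify_outcome(status_code: int) -> str:
    """Map a target system's HTTP status code to an integration outcome.

    The three categories here are illustrative assumptions:
    - success: the record made it to the target
    - retry:   transient problem (outage, rate limit, timeout)
    - fail:    permanent problem (validation error, bad data, auth)
    """
    if 200 <= status_code < 300:
        return "success"
    if status_code in (408, 429) or status_code >= 500:
        return "retry"
    return "fail"
```

For example, a 200 is a success, a 503 (server outage) is worth retrying, and a 422 (validation error) needs human attention, because retrying the same bad data will fail the same way.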
A data flow defines what entity(ies) flow from which system to which system. Triggers define when that happens. Outcomes define what happens when data reaches the target system. How entities match up between the source and target systems is the last piece.
Even when two systems maintain the same entity (e.g. two systems that both contain customer records), those systems have different schemas for representing that entity. Put more simply, a customer record in one system, while containing much of the same information, is structured differently than the other.
Think of it like having a conversation with someone who speaks another language. You both may understand the same concepts and know how to solve problems together, but without a translator between you, the communication breaks down. A big part of what the integration ultimately does is perform this translation of entities between two systems. Field mappings define how each individual field relates to a comparable field on the other end.
Oftentimes these fields map one-to-one. If both systems have a "first name" field, it should be obvious that "first name" goes to "first name". Sometimes you have to consider how nested structures and arrays map as well.
Perhaps the source system has a field Customer.BillingAddress1, but that must map to Customer.BillingAddress.Address1 in the target system. Or, perhaps there is an array on one end... Customer.BillingAddress1 must map to Customer.Addresses.Address1 and Customer.Addresses.Type = "Billing".
The goal with field mappings is to logically describe how records relate to one another across two systems, in service of a data flow. It is the meat of the integration. It's where most of the requirements churn, questions, and bugs are going to come from.
In a simple integration, you may have as few as a dozen or so fields mapped for a data flow. For a more complex data flow, you may have over 100.
Defining an accurate, complete field mapping requires some amount of knowledge of both systems, domain knowledge for the use case being delivered, and some basic understanding of how APIs and integrations work. Strong API documentation helps as well.
Field mapping takes some work, but "building" the integration on paper is less expensive than having an engineer build the wrong thing. Your goal is to use this design process, most of which will be spent on field mapping, to answer questions, uncover challenges, and bring stakeholders to agreement prior to engineers building anything.
Sometimes entities are different enough between systems that you are required to define more complex logic about how to get from one to the other. This can include numerical calculations, if-then logic about how properties map, and virtually any other kind of logic you can come up with.
This is where field mapping really gets tricky. Your goal is to codify these logic hoops that must be jumped through without literally coding them.
An easy way to do this is to write pseudocode--basically describing what to do in a procedural manner, but without having to understand or adhere to any specific syntax or format.
Example: You need to map a subtotal and a tax amount from a source system to a target system, but only add that tax amount if the order was not placed in Ohio. That could look something like: TargetOrder.Amount = If SourceOrder.State is not "Ohio" then SourceOrder.Subtotal + SourceOrder.TaxAmount otherwise SourceOrder.Subtotal
This isn't executable code by any means, but someone with a reasonable amount of business domain expertise can read and understand the logic that is specified. It will also serve the engineer down the road, who can read it and clearly understand what logic to build to meet the specification.
Transformation logic may also require input from an end user, especially for a templated integration that gets deployed as a product feature. In these cases, it's not logically possible to guess which field's value should go where between source and target. The end user must fill in the blanks.
Now, you're not just thinking about how to define the functional behavior of the integration. You're also defining how different end users will have to set up the integration to be used. How do you articulate the gaps that need to be filled to that user in a way they can understand? This depends on their role in their business and to what extent they are involved in setting up the integration.
Clear definition for these inputs will benefit the engineer who needs to build the integration. It'll also benefit whoever on your team writes product documentation. Ultimately, they'll enable the end users of the integration to "fill in the blanks" to make the integration function--to deliver the business value it promises.
Given all of the specification discussed so far, it's worth defining what "done" looks like. Test cases allow you to give the engineering team a clear set of criteria that says, "Yes, you built this integration according to how we designed it."
Writing those test cases up front is a best practice for two reasons:
Test cases should cover all of the field mappings, including all the variations described by transformations. It's a good idea to build test cases around inputs as well, including what happens when a user enters something incorrect.
When you execute the test cases, it's to verify that you don't deliver bugs or incorrect functionality to an end user. Catching mistakes at design time is significantly cheaper/faster than catching them while the engineering team is building an integration. Likewise, catching mistakes during testing is significantly cheaper/faster than catching them when end users are trying to push real data through the integration. Test cases help with the latter.
Test cases are a way to quality control the integrations you deploy, so you have confidence the integration will provide the best possible end user experience.
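In practice, test cases for transformation logic can be as simple as a pair of assertions covering each branch. Here's a sketch using the Ohio tax pseudocode from earlier (the function and field names are hypothetical, restated here so the example is self-contained):

```python
def map_order_amount(source_order: dict) -> float:
    """The transformation under test: add tax unless the order is from Ohio."""
    if source_order["State"] != "Ohio":
        return source_order["Subtotal"] + source_order["TaxAmount"]
    return source_order["Subtotal"]


def test_tax_added_outside_ohio():
    order = {"State": "New York", "Subtotal": 100.0, "TaxAmount": 8.0}
    assert map_order_amount(order) == 108.0


def test_no_tax_in_ohio():
    order = {"State": "Ohio", "Subtotal": 100.0, "TaxAmount": 8.0}
    assert map_order_amount(order) == 100.0


test_tax_added_outside_ohio()
test_no_tax_in_ohio()
```

Writing these before the build means the engineer inherits an executable definition of "done" for each field mapping, not just a prose description.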
This post only briefly introduces the concepts that go into an effective integration design workflow. Each of them individually are topics for future posts (subscribe below to stay tuned!).
The first step is recognizing the importance of a structured workflow for building integrations. Then look at what you might be already doing. Is there a design process or a standard way to document requirements? Where could it be improved? What pain points is it causing the team?
If you find yourself waiting in line for integrations to be built, or having to lobby for them to even be considered, then integration is too burdensome for your overall team. It's likely that a large part of that burden comes from a lack of (or ineffective) integration design. It doesn't have to be that way.
Use this post as a template for improving what you're doing (or maybe not doing) today.
Blended Edge is working on an end-to-end suite of products for defining, deploying, and managing product integrations. If this post resonated with you, consider joining our waitlist to be an early adopter of our integration design studio. It will help you execute exactly this workflow in an efficient manner, including helpers like automatic field mapping.
Sign up here to join the list!