
What does an integration project actually look like?

Integration projects are uniquely challenging. While in many ways they share attributes of any software project, they have some pretty significant differences.

We find that many times, people who end up playing a key role in an integration project have never worked on one before. Or, perhaps they have some exposure to integration, but have not worked on high performing integration teams.

In this post, we'll give a general overview of what to expect in an integration project. Certainly, every team and every project is different, but this content should be universal. While all of this is useful for engineers who may find themselves on an integration team, this is a non-technical post that should help anyone understand what these projects look like.

To do so, we'll discuss the following:

  • What roles are typically involved in an integration project?
  • What prerequisites will be required to execute the project?
  • What are the basic phases of the project?

We'll also discuss why product integrations in particular are a little different. At the end of this post, you should feel equipped to go into your next integration project, knowing what's coming.

Roles Involved

Team sizes and the roles on those teams vary, but there are some pretty standard roles that'll come into play for an integration project. If you're a big team, there may be multiple people in each role. If you're smaller, you may have some people filling multiple roles. Either way, it's good to understand their general place in the project.

The typical roles in an integration project are:

  • Analyst
  • Engineer
  • Project Manager
  • End User (Business Stakeholder)
  • QA Analyst (Tester)

Analyst

The analyst's job is to translate the overall theme of the project, usually described as something like "integrate system X to system Y," into a tangible specification that can be used to actually build it.

The analyst will usually have a job title like Business Systems Analyst, Technical Analyst, or even Product Manager. They are technical, but not necessarily a programmer (though some coding skills might help). They can read API documentation and understand the mechanics of interfaces and data. They also usually possess business domain expertise that enables them to connect business objectives with technical deliverables.

Engineer

The engineer's job is to build the integration. What that means will vary among the different technical approaches you can take for an integration. The engineer's required skills also depend on what technology is being used.

If the integration is to be deployed on a low code Integration Platform-as-a-Service, the engineer doesn't need to be a hardcore programmer. In fact, their skills probably look close to those of an analyst. If they are using a framework or an approach that requires heavy customization, they may need to look more like a software developer.

Many teams will have a technical lead in the mix as well. This is typically a senior engineer who has overall ownership of the technology. They may manage the engineering team. They may also be considered an architect.

Project Manager

The project manager keeps the wheels on the bus and the bus moving forward. They maintain timelines, dependencies (internal and external), and remove barriers preventing the rest of the team from succeeding. Being a project manager on an integration team is pretty similar to what the role is for any software project. That said, project managers with experience running integrations specifically are quite valuable.

End User

The end user or business stakeholder plays an important role in an integration project. They represent the people who will receive value from the integration. Often they are the people asking for it. They may not have the tools to articulate exactly what they need or how it'll work, but that's why the analyst works with them to define those things.

Quality Assurance Analyst

The quality assurance (QA) analyst is the tester. They make sure that what gets built aligns with the requirements and that it works with a high degree of quality. This also includes stress testing the integration (using high data volumes) and finding potentially unconsidered edge cases.

Prerequisites

To execute a successful integration project, you'll need to do some prep work. While not every one of these prerequisites is an absolute requirement, they all improve the team's ability to deliver the project.

In advance of the project, the following are recommended:

  • Sandbox environments for the integrated systems
  • Knowledge of the integrated systems
  • Access credentials for the systems

Sandbox Environments

An integration project is typically not a simple, "get it right the first time" kind of project. There will be churn in requirements, and you'll likely get the integration wrong a few times. You also need to actually pass data through the integration to test it, even though it's not ready for production.

All of this is much easier if either or ideally both systems provide sandbox or testing environments that can be used to build and test the integration. This gives the integration team a low risk way to mimic what will happen in production without having to touch real data.

Sandbox environments should:

  • Be configured exactly like or as closely as possible to the production environment
  • Have smaller amounts of data so that edge cases don't get lost in the size of the data set
  • Provide the ability to simulate system outages, maintenance operations, and other events that will impact the production integration
  • Ideally, not add cost for the team or end user, to remove the pressure to get it done quickly

System Expertise

Having expertise in both integrated systems helps make the project go smoothly. That expertise may be on the team, or it may come from someone external whom the team is given access to. Either way, it helps limit the questions that will come up during the requirements phases of the project.

System expertise doesn't necessarily mean you need someone who knows every nook and cranny. You want someone with a functional, overall understanding of the system--what it does and how it does it. You ideally also want someone who is close to the use cases the integration will address.

It is possible and often necessary to complete an integration project successfully without expertise in one or even both systems. There are common patterns and approaches that help you navigate such a challenge. It's just not your ideal way to go.

Access Credentials

In order to pull data from and/or write data to a software system, you need to authenticate with that system. You need to have permission to do so. You often log into a product with a username and password. Integrations use similar mechanisms.
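
To make that concrete, here's a minimal, hypothetical sketch of what authenticating an API call might look like. The URL, environment variable, and header format are placeholders; every system documents its own scheme, so treat this as an illustration rather than a recipe.

```python
import os
import requests

# Hypothetical example: call a REST API using a bearer token.
# The base URL and environment variable name are placeholders.
API_BASE = "https://api.example-crm.com/v1"
TOKEN = os.environ["EXAMPLE_CRM_API_TOKEN"]  # the credential your security/admin team grants

response = requests.get(
    f"{API_BASE}/contacts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()  # fail loudly if the credentials are wrong or access is missing
print(response.json())
```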

Mechanically, securing access credentials is not that complicated, but organizational challenges may slow it down. You may have to work with a security or administrative team who has permission to grant you access. And, if they have many things on their plate, it won't happen immediately. That's why you want to work ahead here.

Building an integration with sandbox environments also tends to speed this process up, because there are fewer barriers to gaining access to a non-production system.

Phases of an Integration Project

No two software projects are ever exactly the same. This is true of integration projects as well. However, there are some pretty universal phases that all integration projects will include.

These phases might be really big or really small. They likely aren't performed in perfect sequence, though they typically happen in about the order listed here, with some overlap between phases. Those phases are:

  1. Use Case Development
  2. Field Mapping
  3. Construction
  4. Testing
  5. Deployment & Go-Live

Use Case Development

At the highest level, an integration requires a definition of what is to be done. Certainly, "integrate system X to system Y" is a start, but what does that mean? There are a bunch of ways you could go with that general requirement.

In the first phase of an integration project the goal is to take "integrate X to Y" and distill it down into use cases that describe what that integration will help an end user accomplish. We call this use case development, but there are plenty of valid terms for it.

Use cases are non-technical. If you are talking about specific API endpoints or object properties, you're too far in the weeds. Use cases define the cross-product experience that the integration's user will get when enabling the integration. They describe the integration in the user's context.

Put another way, use cases are business requirements. They describe the outcomes of integrating the two systems.

These use cases tie the thread between "integrate X to Y" and those specific and tactical specifications that come later. They keep context around all the low-level technical details, which helps the overall team make good decisions when the integration gets difficult.

Use case development is the last step before significant work gets done. This is usually a good place to estimate (or re-estimate) the work that'll go into the project. It's just enough detail to be thoughtful, but with relatively little analyst time spent.

Who's involved?

The project manager keeps the team on task, tied to the broader strategy that is driving the integration, and out of the technical details. Their job is to try to steer the team away from analysis paralysis while also giving the team a framework to make thoughtful design decisions.

The analyst pulls more detailed requirements out of "X to Y" and articulates them verbally and on paper in ways that unify the team's understanding of the project. Use case development and field mapping (coming next) are primarily owned by the analyst, and all conversations the analyst has with the rest of the team should draw back to these use cases.

The business stakeholder or end user should be involved to help clarify what they need. This might also uncover what they cannot clarify, which is often where the most helpful conversations end up. A masterful analyst can pull use cases out of an end user and describe them in ways that the engineer can understand.

While not required, it might be helpful to have a technical lead or a senior engineer involved in this conversation. You definitely don't want their involvement to drag the conversation into technical nuance, but it's helpful to have a representative of the people for whom the requirements are written. They can provide a technologist's perspective and make sure the use cases actually say what the analyst thinks the use cases say.

What happens?

Use case development is mostly conversational. Maybe there's some whiteboarding and UI mocking as well, especially if you're building a product integration that has a SaaS-embedded user experience. Basically, use case development is about getting on the same page.

There are a ton of frameworks and exercises out there for how exactly to organize those conversations. That's a big topic for a different day. Basically, the following should be included in what is discussed and eventually written down:

  • What are the specific outcomes the integration will help an end user achieve?
  • How do those outcomes align with the overall product or business strategy?
  • How can those outcomes be measured?
  • (Sometimes) What outcomes will the integration not help with?

Field Mapping

Use cases are great for making sure everyone understands what the integration is supposed to help an end user accomplish. They don't provide enough detail, however, for an engineer to put hands on keyboard and construct the integration.

Field mapping takes the requirements to a level of detail that will support the engineer. Field mapping defines exactly what needs to be implemented (relative to the use cases).

Generally field mapping defines the following, per use case:

  • What API endpoints (or interface objects) need to be retrieved?
  • What API endpoints (or objects) will that data need to go to?
  • Given that those two APIs/objects have different names, properties, and data structures, how does one translate to the other?
  • What things need to happen along the way if the data needs to be processed in a series of steps?
  • When should this data be moved? How frequently? How much data at once?
  • What should happen when data successfully makes it to the destination? What happens if it fails to?

Describing how the source data objects translate into the destination data objects is usually the most cumbersome piece. If you're dealing with large, complex data objects, there can be a lot of detail in field mapping documents. It can look a little scary when all is said and done.

Field mapping is often skimmed or even skipped completely. If you simply hand an engineer the task to "build this integration" and there is no field mapping specified, it's not likely that the engineer will define one before building.

For very simple integrations, that's probably fine. Sometimes the field mapping requirements are pretty obvious if you generally understand what the integration has to do. As the complexity ticks up, specifying a field map gets important very quickly.

Field mapping might seem pretty simple: first name -> fname, last name -> lname, provinceCode -> state, etc. (there's a small sketch after the list below). It gets complicated in situations like:

  • Fields exist in one system's objects, but not the other, or similar fields are represented very differently
  • The objects themselves are similar but still fundamentally different (e.g. you aren't mapping customers to customers)
  • One API has strict validation rules and the other doesn't
  • Numerical values have to be recalculated to fit how they are represented in the destination API
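
To make the simple case concrete, here's a minimal, hypothetical sketch of what one piece of a field map might look like once implemented. The field names, the cents-to-dollars recalculation, and the default value are all invented for illustration; a real field map would come from the two APIs' documentation.

```python
def map_contact(source: dict) -> dict:
    """Translate a hypothetical source 'contact' record into the destination's 'customer' shape."""
    return {
        "fname": source["first_name"],            # simple rename
        "lname": source["last_name"],
        "state": source["provinceCode"],          # like fields with different names
        # A recalculation: the source stores amounts in cents, the destination expects dollars.
        "lifetime_value": source["lifetime_value_cents"] / 100,
        # A gap: the destination requires a field the source may not have, so a default is used.
        "customer_type": source.get("segment", "unknown"),
    }

# Example usage with a made-up record:
print(map_contact({
    "first_name": "Ada",
    "last_name": "Lovelace",
    "provinceCode": "ON",
    "lifetime_value_cents": 125000,
}))
```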

This is far from an exhaustive list. Certainly, it's important to work through all of these issues, many of which you won't discover until you get into it. You do need to hand the engineer a specification that is logical and complete. But, it's more than that!

When you run into these data mismatches, which often seem really specific and particular, that's usually where you stumble upon limitations in the use case.

Maybe it was underdefined.

Maybe you've discovered that the end user wants something different from what they thought they asked for.

Maybe they want something that isn't possible.

Going through the exercise of discovering, talking through, and designing around these challenges ensures that the integration and everyone's understanding of it evolves together. It also allows this all to happen "on paper", which is far cheaper and far simpler than it happening "in code".

Who's involved?

The analyst drives field mapping, because it's really just a deeper dive into the use cases defined previously. An engineer or the technical lead may be involved, as well, to back them up. This helps to make sure the analyst produces a field mapping spec that the tech team can use with minimal questions.

The end user has a much more diminished role. It's not typically necessary for them to weigh in on deep technical field mapping decisions, but occasionally the analyst will have to bring them in. Often the analyst will contextualize the problem at hand for the end user.

Like in the Use Case Development phase, the project manager's job is to keep things on track. Now, though, the project is supposed to get into the weeds. That means the PM's job switches from preventing too much detail to removing barriers in the way of the team defining the detail.

What happens?

The basic thing that happens in field mapping is that the analyst gets the documentation for both APIs side by side and then writes out how one translates into the other, again, to serve each use case.

This assumes each API has documentation. It also assumes that documentation is complete and accurate. You shouldn't assume this to be the case, though software companies seem to be increasingly adept at maintaining well documented APIs.

The analyst, sometimes with help from the tech team, may use other info as well. Some systems provide schema files that help describe the objects. The analyst may use sample records to understand how the system interface/API works. They may use tools like Postman or SOAPUI to call those services themselves. The analyst will use any information available to understand each API well enough to map one to the other.

The way this field map is "written down" varies too. Many analysts just use a spreadsheet, because it's very easy to do a basic specification in table format. Sometimes a spreadsheet's simplicity makes it hard to describe complex mapping scenarios, though. Data modeling and data mapping tools are also sometimes used.

(We use a homegrown tool that kind of works like a spreadsheet on steroids.)

Regardless of the medium, the output must be a specification that, when handed to an engineer with little context, enables that engineer to build the integration correctly.

Construction

In the Construction phase, it's time for the rubber to meet the road. The engineer gets to build the integration, as it is specified by the field mapping requirements.

"Building the integration" can mean different things to different people who have different integration technology approaches at their disposal. Some options might include:

  • Integration Platform-as-a-Service (iPaaS) or an Extract-Transform-Load (ETL) tool
  • Integration Framework
  • Custom Coding

An Integration Platform-as-a-Service is a piece of software that helps (usually) business users create data integrations between systems. ETL tools are similar, but usually oriented toward one-way operations that move large data sets into things like data warehouses.

Functionally, though, the tools work in similar ways. Building an integration with an iPaaS or ETL tool usually means configuring the platform to perform the integration. In some cases, new connectors or components will also need to be built, which may require custom code.

An integration framework is a little different. It serves some of the same purposes as an iPaaS, giving you building blocks to more rapidly build the integration, but it's going to be a less packaged way to go. This can make things more challenging for simple use cases, but it affords the engineer a lot more flexibility to handle the less simple cases. There is generally a heavier technical burden when using a framework.

Some engineers simply code integrations from scratch. This is the most technically complex but the most flexible. It requires the engineer to think through a lot of what comes for free with an iPaaS or integration framework. That said, if you don't have either of those or they won't meet some unique needs, custom coding is always an option.
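
As a rough illustration of what custom coding an integration can look like, here's a minimal sketch of a one-way sync: pull records from a source API, transform them according to the field map, and push them to the destination API. The endpoints, credentials, and field names are all hypothetical, and real integrations add error handling, retries, and logging on top of this skeleton.

```python
import os
import requests

# Placeholder endpoints for two hypothetical systems.
SOURCE_API = "https://api.source-system.example/v1"
DEST_API = "https://api.destination-system.example/v1"


def fetch_invoices() -> list[dict]:
    """Pull invoices from the source system."""
    resp = requests.get(
        f"{SOURCE_API}/invoices",
        headers={"Authorization": f"Bearer {os.environ['SOURCE_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["invoices"]


def transform(invoice: dict) -> dict:
    """Apply the field mapping specification to one record."""
    return {
        "externalId": invoice["id"],
        "total": invoice["amount_cents"] / 100,      # recalculation from the field map
        "currency": invoice.get("currency", "USD"),  # default for a field the source may omit
    }


def push_invoice(payload: dict) -> None:
    """Write the transformed record to the destination system."""
    resp = requests.post(
        f"{DEST_API}/bills",
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['DEST_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    for invoice in fetch_invoices():
        push_invoice(transform(invoice))
```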

Especially if the field map is well specified, most of Construction is an engineer's heads-down work. This phase has the least amount of communication and collaboration. It's just a matter of time and attention, building the integration as it is described.

Who's involved?

The vast majority of the work during construction is the engineer's. However, they will involve the analyst as required to clarify the field mapping specification. The engineer may also discover logical gaps that aren't addressed by those requirements and collaborate with the analyst to fill them.

The project manager's job is still to keep the project moving and remove barriers (notice a pattern?), but now they take on a new responsibility. They must also start the feedback loop on the expected completion date so they can coordinate future phases and the launch of the integration. Most of that is a big question mark until now, but engineers should be able to articulate "when it'll be done" with increasing accuracy as the project moves forward.

Depending on how the team wants to work, the quality assurance analyst (i.e. tester) may enter the mix here as well. Simply because it can be a lot of work, they may start writing test cases, using the field map and use cases, in parallel with the engineer's work.

The end user has little to no role in construction. It's often at a much lower level of detail than they want to be involved in or can understand.

What happens?

Not a lot happens during construction, but also a lot is produced. Most of Construction involves the engineer toiling away in whatever integration technology is to be used. This may happen very quickly. It may take a long time for complex requirements.

Testing

Like any software deliverable, once built the integration must be tested. There are a lot of details that go into an integration, so there are a lot of opportunities for mistakes and miscommunication. Nothing wrong with that! But, testing is in place as a filter to catch most of it.

Testing an integration is not like testing a typical piece of software with a user interface. You're generally testing something that you can't see--a series of server-side operations that retrieve, transform, and save data. It's hard to touch it, so it's hard to define how/what to test.

However, use cases come to the rescue again!

For context, most Quality Assurance Analysts (i.e. testers) will define test cases and then run that battery of test cases against the integration. This serves the purpose of getting everyone together on what was tested and how. It also gives you a regression suite of tests to run down the road as changes are made. These test cases should directly relate to the use cases. It's how you tie "what was tested" back to "what it should do" (the use cases).

Test cases look like use cases, but they tend to be really specific. For example, if a use case is something like "invoices should sync from system A to system B every hour", a test case will include how many line items, what kind of values were used on the address, what prices were used, taxes, shipping costs, etc. The test cases get deeper, so they can validate that everything that was designed and then built actually does what everyone needs it to do.
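
To make that a bit more concrete, here's a minimal, hypothetical sketch of how one such test case might be written down, whether it's executed by hand or wired into an automated harness. The ID, values, and expected results are invented for illustration.

```python
# One test case, expressed as structured data tied back to its use case.
# All values here are made up for illustration.
test_case = {
    "id": "TC-014",
    "use_case": "Invoices should sync from system A to system B every hour",
    "setup": {
        "line_items": 3,
        "shipping_cost": 12.50,
        "tax_rate": 0.08,
        "billing_state": "ON",
    },
    "steps": [
        "Create the invoice above in the system A sandbox",
        "Wait for (or manually trigger) the hourly sync",
    ],
    "expected": [
        "The invoice appears in system B within one hour",
        "All 3 line items are present",
        "The total matches, including tax and shipping",
    ],
}
```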

Testing is also the biggest lever you have for expanding or shrinking the project. You can get very detailed with test cases. It may take a lot of person-hours to design and execute all those tests.

The project team can decide how much testing they want to do and how detailed that testing is. It's a balancing act between the risk of a problem not being caught by testing and the cost/time put into testing.

Just remember, there is no such thing as perfect test coverage. You can do a lot, but eventually the returns diminish.

Who's involved?

The quality assurance analyst runs the show here. They will author test cases (if they haven't already) and execute them. They will always find things that need to be fixed. Sometimes they will find many. This is to be expected and usually is not indicative of a broader systemic problem.

The QA analyst will communicate those issues back to the engineer so they can be fixed. Many times they are the result of a missed detail or a mundane mistake. Sometimes, yet again, a failed test case is actually a previously undiscovered gap or misalignment in the integration requirements. In this case, the analyst steps in to help resolve the issue.

The project manager's role becomes more about getting to the finish line. That also means preventing analysis paralysis in the testing phase. Left unchecked, a QA analyst can write and execute test cases until the end of time, and the PM should own deciding the right balance of quality and time. They may collaborate with end users on those principles as well.

The PM also can be very helpful organizationally. An integration can have many test cases with many executions of those test cases, a bunch of which will be failed test cases. Those failed test cases all require work from the engineer and possibly the analyst, and then they must be re-tested to validate the solution.

This churn can cause the team to spin out from the regimented, somewhat linear workflow that has happened so far. Disorganization can turn to chaos. The PM is best positioned to reduce the risk of this happening.

What happens?

First, if not done already, the QA analyst has to write the test cases. Ideally these are authored in the same medium as the use cases and field map, or at least in something comparable or compatible. Ideally test cases are very directly correlated to use cases and even specific parts of the field map.

Then the QA analyst will run the tests. This will probably involve them getting into the sandbox environments (if provided) for both systems and doing the things that will cause the integration to move data. Usually this means creating or deleting records in certain ways. For each use case, the QA analyst will inspect the data in the destination system to decide if it moved there according to spec. They may also use logs and other "under the hood" practices to validate data.

This phase of the project can get a little punchy. The QA analyst's job is basically to tell the others on the team that they "did it" wrong. As righteous as the QA analyst's intentions are, humans are humans. This message is not always well received. The best QA analysts understand this and are experts at navigating these difficult conversations with engineers.

Deployment & Go-Live

Once the integration is designed, built, and tested adequately, it's time to deploy it. It's time to put it in the hands of the end users to achieve the goals defined by the use cases. Here it all comes full circle.

Like any software project, deploying and going live are more complicated than just flipping on the switch. Depending on your integration technology it may be close to that. It may be something more sophisticated.

While "flipping the switch" sounds appealing. There is actually value to a more elaborate deployment and go-live process.

The integration may have been tested in a sandbox or staging environment, and it'll need to be moved to a production environment for actual use. This is to prevent the mistakes and issues that naturally arise during construction from impacting live users or other integrations.

Your integration may also come along with changes to one of the integrated systems. In these cases, both the integration and the updates to the system must be pushed live together. If only one is live, it may not function.

Moving to a "live" state is still different from what you simulated during testing. Most teams will run a lightweight version of their QA tests in the live environment just to make sure the final production deliverable works as expected.

There's a great deal of variability in what the deployment and go-live process looks like from project to project and team to team. Much depends on the underlying technology. The nature of the integration's requirements also makes each one unique.

That said, the fundamental goal is the same: get the integration into end users' hands as safely and cleanly as possible.

Who's involved?

Usually deployment is in the hands of the engineer and/or technical lead. It tends to be a more technical set of work, usually involving deploying or copying code or configuration.

The project manager is there to oversee the final stage of the project, and to coordinate with the rest of the organization. This may include working with early adopter end users.

The QA analyst may be involved in running a live test suite, if that's part of the team's process.

The end user has no role beyond using the new integration. Sometimes that'll happen in a controlled fashion, as a "wait and see" period.

What happens?

The activity for launching an integration depends on how it was built. It might include code deployment, configuring servers, configuring an application, or migrating data. It also may be as simple as an "on" button.

This is usually the shortest phase of the project, albeit a very important one. Most of the work is "behind the scenes". But, at the end, you've got a working, value-driving integration!

How Product Integrations are Different

Everything in this post applies to any kind of integration project, and there are many kinds out there. But, on this blog (and with the services we provide), we focus on SaaS product integration. How does that change any of this?

What do we mean by "product integration"?

The term "product integration" describes a project where a software product team (usually, but not exclusively a SaaS team) builds an integration to another system for their users. This integration is a reusable feature, available to many or all customers, and not something bespoke for one customer.

This has become the expected experience for most SaaS users. Bespoke integrations still exist, especially in the enterprise, but productized integrations enable SaaS users to add a product into their own tech stack more quickly and easily.

Sometimes these integrations are as simple as flipping the switch to turn them on. Often they expose some amount of configurability to the SaaS user. This could be just a few optional fields, or it could be the ability to fundamentally change the logic of the integration.

What makes a product integration different is that, despite whatever configurability is provided, it's fundamentally the same integration for all users. This enables scale and allows the SaaS vendor to market the integration as a feature.

The Differences

Product integrations are different from your typical integration project in a few noteworthy ways.

The Third Party

The fundamental difference with a product integration is that it's a three-party project, not a two-party one.

In a typical integration project, there's the team building the integration and the person or team for whom they're building it. They're probably in the same company or maybe even on the same team, but there's a clear line between the builders and users.

In a product integration, there's actually a third party in the mix. There's the team building the integration. There's the end user. Then, there's also the team that builds the other system. They too are a software company with their own objectives, product roadmaps, and resource constraints. If this is a product integration being built to power a partnership, all of that comes into the mix.

Working through the nuance of this third party in the relationship is more than a section in a blog post, so for now, we'll just acknowledge that it changes some of the project's dynamics.

The end user is the customer.

In a typical integration project, the end user (or team) is probably internal to the organization building the integration. Think about an integration team integrating systems within the enterprise to automate business processes or unify data.

There are exceptions to this, certainly. Maybe the integration is pulling in data from a vendor or a partner. Maybe the integration is actually interacting with a customer on a bespoke basis.

For a product integration, your end user is almost certainly your customer--the people paying to use your product. In this case, you are building a feature that customers can use, just like any other feature you'd include in your UI.

You're also building something for that customer that isn't really what they are paying you for (your product's core value proposition), but it's a necessary and secondary capability.

This means your end customer may only be partially equipped (if at all) to represent themselves in the project. They may not have time. They may not know how to be valuable. You may not have that kind of access to them. It's often the case that someone on your team, like a product manager, has to play the role of the "voice of the customer". This does mean you're a degree of separation away from the actual customer.

The integration is a template.

A product integration isn't actually an integration. It's a template for an integration.

If you are to build a one-time integration for one stakeholder, you just build it. You can hardcode into it things like credentials and business logic. It needs to work for one stakeholder, so you make it work for one stakeholder.

A product integration needs to work for any customer who might want to use it. This changes the principles for how you design it. It also requires that the end user will "plug in" additional information to actually make it work.

That plugged in information will include credentials to the other system at the very least. It likely includes optionality and other required information.
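
As a sketch of what that plugged-in information might look like per customer, here's a minimal, hypothetical example. The specific settings (sync interval, record types, tax code) are invented; the point is that the template's logic is shared while each customer supplies their own credentials and options.

```python
from dataclasses import dataclass


@dataclass
class IntegrationConfig:
    """Hypothetical per-customer settings that turn the integration template into a working integration."""
    api_key: str                          # credentials for the other system
    sync_interval_minutes: int = 60       # optionality: how often to sync
    sync_invoices: bool = True            # optionality: which record types to move
    default_tax_code: str = "STANDARD"    # fills a data-model gap the template can't infer


# One customer's configuration; the global mapping logic is the same for everyone.
acme_config = IntegrationConfig(
    api_key="***",
    sync_interval_minutes=15,
    default_tax_code="EU-VAT",
)
```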

Your design process for the integration has to consider more than "how to make it work for one stakeholder". Instead, you have to think through which parts of the functionality will be global, and therefore baked into the integration. You must also decide which parts need to be configurable.

Some of that configurability will be required to fill in data model gaps between the systems that can't be automatically or logically filled. But, most of that configurability is to provide flexibility to the end user.

How much flexibility will they need? How much can the typical end user actually handle? How much flexibility is required to cover enough possible use cases?

These are not questions you have to ask for a one-time integration project. If you don't ask them during a product integration, you are likely to build an integration that only works for one or a few customers.

This is why it's important to build product integrations on technology that is built for and with people who understand this specific type of integration.

*****

No two projects are the same. No two integrations are the same. But, understanding the fundamentals of how most integration projects go will equip you to feel more comfortable as you enter what might be your first integration project. This post is a good place to start!