Managing an integration program at scale has many challenges, and you can't manage what you don't measure.
Traditional product management metrics like churn, LTV, and customer counts aren't specific enough on their own to tell you how effective your integration program is. You need more targeted metrics to do that.
In this post, we'll cover ten metrics you should consider. They speak to the health and success of your ecosystem integration program, and they should help inform ongoing decisions about where to invest, where not to, and how to prioritize work.
These metrics are broken into three categories:

- Metrics that measure the value integrations provide to your users
- Metrics that inform your team's internal, operational processes
- Metrics that correlate integration adoption with broader business outcomes
Not every product team can or will track all of these metrics, nor will everyone track them exactly the same. Use this post as a jumping off point to define your own strategy.
The first set of integration metrics help to measure the value your integrations are providing your users. These are the most important metrics to track, because if you are building integrations that aren't providing value, what’s the point?
Consider the following metrics for understanding the value of integrations for your users.
Let's start with the easy one: how many people are using each integration?
It's important to understand what the adoption rate for each of your integrations is. You put in a lot of effort to prioritize, design, and build integrations. You don't want them sitting out there waiting to be used.
Low adoption could be an indicator that you misread the market. While not fantastic news, this is still good to know. It'll help you avoid continuing to sink investment into an integration with no future.
But, low adoption may not necessarily mean that the integration shouldn't have been built or that anything is wrong with it.
Instead, it could be a leading indicator of problems with how the partnership is running or how the integration is being marketed. If your integration activation process is difficult across all integrations, that will also keep adoption rates down. These are helpful things to know.
You can measure "number of users" by actual individual users and/or by accounts (which may have many users). How you decide which is right will depend on the specifics of your application and your business.
Ideally your integration software should have reports or APIs that you can use to pull this data directly. If not, or you've built something homegrown, a good way to query for this information is to count how many authentication tokens you have for a given integration. This indicates how many times a user has entered credentials to enable an integration.
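If you've built something homegrown, a minimal sketch of that token-counting approach might look like the following (the `auth_tokens` table, its columns, and the integration names are all hypothetical; substitute your own schema):

```python
import sqlite3

# Hypothetical schema: one row per stored credential set for an integration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_tokens (integration TEXT, account_id TEXT)")
conn.executemany(
    "INSERT INTO auth_tokens VALUES (?, ?)",
    [("salesforce", "acct-1"), ("salesforce", "acct-2"), ("quickbooks", "acct-1")],
)

# Each token represents an enabled integration, so counting tokens
# per integration approximates adoption.
rows = conn.execute(
    "SELECT integration, COUNT(*) AS accounts"
    " FROM auth_tokens GROUP BY integration ORDER BY accounts DESC"
).fetchall()
print(rows)  # [('salesforce', 2), ('quickbooks', 1)]
```

Counting distinct `account_id` values instead would give you account-level adoption, per the user-versus-account distinction above.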
Knowing the adoption rates of each of your integrations is the most fundamental and most important metric you can collect.
On its own, the metric gives you a lot of information, like:

- Which integrations are your most and least popular
- Whether an integration you invested in is actually being used
- Where low adoption might point to a misread market, a marketing gap, or a difficult activation process
You can get even more creative, slicing this data with details about those users and/or details about the integrated system. This can help you identify things like:
Basic user adoption for integrations is the easiest to collect, and you can pull a lot of insight out of that simple metric.
Just knowing how many users turned on each integration is valuable, but it doesn't really say much about what those users are doing with each integration. Not all enabled integrations are created equally.
Therefore, you should also track the amount of data moving through each integration to draw conclusions about the criticality of that integration in your users' eyes. In general, more data flow equates to more user value.
This metric is useful on its own, but it's even more useful when paired with business value metrics, explained next.
How you measure data flow is a little dependent on the specifics of the integration itself, but usually, your goal is to measure how much data volume passes through. This could be something as simple as number of transactions or number of records. It could be based on data size, instead of just a pure count.
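As a rough sketch, assuming you have a per-transaction log that includes payload sizes (the log format and names here are made up), you can tally both a pure record count and a byte volume per integration:

```python
from collections import defaultdict

# Hypothetical transaction log: (integration, payload_bytes)
transactions = [
    ("salesforce", 1_200),
    ("salesforce", 800),
    ("quickbooks", 15_000),
]

record_counts = defaultdict(int)
byte_volumes = defaultdict(int)
for integration, size in transactions:
    record_counts[integration] += 1    # pure count of records/transactions
    byte_volumes[integration] += size  # data size, for uneven payloads

print(dict(record_counts))  # {'salesforce': 2, 'quickbooks': 1}
print(dict(byte_volumes))   # {'salesforce': 2000, 'quickbooks': 15000}
```

Tracking both views matters when payload sizes vary widely: a low record count can still represent a heavy data flow.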
Again, your integration technology should provide this information through basic reporting. If not, or if your integrations are homegrown, try hooking into one of the key points in the technology architecture with observability tools.
For example, we often observe AMQP queues that sit between each step of a data flow to measure how much data is flowing through. (That definitely gets into technical weeds, quickly.)
Data flow per integration helps you understand "how much" each adopted integration is being used.
Context is important. Just because integration A has twice as much data flow as integration B, it doesn't mean A is more important. But, all else kept equal, it could mean that Integration A is more heavily used and possibly more mission critical in your users' eyes.
Understanding data flow volumes helps you target both compute resources and internal support focus. On average, more data flow = more opportunities for data problems = more support tickets. It also literally just means you are processing more data.
If you're running in a cloud computing environment, it’ll help you understand if you need to scale up or start queuing up requests.
You should mostly be looking at data flow metrics in aggregate, but looking at one user's data flow for specific integrations can be helpful, too. Consider it insight for your sales or account management teams to relate more closely to that user's needs.
The picture is still incomplete if you only know how many times an integration has been enabled and how much data is flowing through it. How do you know what the value of that data flow is?
By measuring the business value per integration, you can triangulate a complete vision of how important the integration is to your users. Business value times data flow volume at the individual and aggregate level is a powerful data set.
Your product team will need to be thoughtful about how to assign business value to an integration. Sure, sometimes you have actual dollar amounts in the data flowing through the integration, but it may not represent the value of that piece of data making it safely to its destination.
Consider what the cost would be to your user if that data doesn't get there.
Imagine the integration doesn't exist. How would your user accomplish what your cross-product experience (a.k.a. integration) is automating? Do they have to manually enter data? How many mistakes are made? Is there a physical process like delivery of a document involved?
Fundamentally, what does all of that cost your user in real dollars? It might not be a huge amount at an individual transaction level, but $2 of processing cost times millions of records will add up fast.
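As a back-of-the-envelope sketch of that math (every figure below is hypothetical):

```python
# Hypothetical estimate of what manual processing would cost without
# the integration: data entry time, error correction, rework, etc.
cost_per_record = 2.00
records_per_month = 250_000  # volume flowing through the integration

# Value delivered = avoided cost per record times record volume.
monthly_value = cost_per_record * records_per_month
print(f"${monthly_value:,.0f}/month")  # $500,000/month
```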
Try to be as consistent as you can about how you assign value to an integration's data flow. When you must deviate from consistency, try to at least stay consistent among like integrated systems (e.g. all your CRM integrations). And, always remember, you are looking at value for your user, not value for yourself.
Business value per integration, combined with data flow per integration and adoption metrics, will give you the detail you need to produce an ROI for your users.
This is useful for your sales and marketing teams. It helps them build a business case for your product as an integrated part of their technology stack, not just a standalone solution.
This is also useful for your product team when deciding how to prioritize different initiatives. For example, if you have a very well adopted, high flow, and high value integration that is not functioning well, investing in solving those problems might be more important to your company than adding that next integration or even that next feature.
No one likes to clean up tech debt, so it helps that you can attach a dollar amount to the integration-related issues that need to be addressed.
You should also track metrics that help inform your team and their operational processes. This is especially important as you start to scale your integration program, when you can no longer babysit each integration manually.
Consider the following metrics that help you to operationalize internal processes related to integration.
No integration works 100% of the time. Failed transactions, API outages, bad input data, and even bugs are all normal parts of doing business. But, even though they are expected, you should still seek to minimize them.
If a user enables an integration that you manage, that user is trusting you to handle some part of their business process. It might be a critical part.
That trust is a good thing! The benefit is that you have a stickier user--one who has more deeply embedded your product into their operations. The tradeoff for that benefit is that you now take on the responsibility to maintain a high-quality integration.
That responsibility should be taken seriously, and you should have ways to measure failures over time. The simplest is counting data failures per integration over time.
Measuring data failures is very dependent on your underlying integration technology. Anything mature and off-the-shelf should make this data available through its reporting.
If your integration technology is homegrown or open source, you'll probably have to do a bit more work to make sure the necessary data is made available.
In all integration technology, the execution of a data flow can be broken down as follows:

- A trigger fires to initiate a new transaction
- One or more steps process the data (fetching, transforming, pushing records)
- The transaction ends in an outcome, successful or otherwise
Sometimes integration tech will describe these very clearly within the architecture. Sometimes they are more "baked in" and up for interpretation, but all data movement follows this basic pattern.
To measure data failures, you want to count how many times a trigger fires to initiate a new transaction that does not end with a successful outcome. “Successful” is up for interpretation.
If your integration is simple (get a record, transform it, push it to the other system), the outcome is easy to identify. Did the "push" step, which is last in the flow, complete successfully? Anything else, including a transaction quietly stopping somewhere along the way, is a failure.
Measuring outcomes is a little trickier if your flow is more complex. What if that record gets pushed to two different systems and one fails? What if the record is pushed to the same system, but it requires two API calls, the second of which fails?
In most cases, anything but absolute success should be considered a failure. If that happens, count it! There are cases where a partial completion should still be considered success, though.
And, all of this data is more useful if you also track reason codes or metadata about the failures.
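As a sketch of that counting, assume a hypothetical execution log where every trigger firing is recorded with its final outcome and, for failures, a reason code:

```python
from collections import Counter

# Hypothetical execution log: one entry per trigger firing.
executions = [
    {"integration": "salesforce", "outcome": "success"},
    {"integration": "salesforce", "outcome": "failure", "reason": "api_timeout"},
    {"integration": "salesforce", "outcome": "success"},
    {"integration": "quickbooks", "outcome": "failure", "reason": "bad_input"},
]

# Anything that didn't end in absolute success counts as a failure.
failures = [e for e in executions if e["outcome"] != "success"]
failure_rate = len(failures) / len(executions)
reason_codes = Counter(e["reason"] for e in failures)

print(f"failure rate: {failure_rate:.0%}")  # failure rate: 50%
print(reason_codes.most_common())  # [('api_timeout', 1), ('bad_input', 1)]
```

(The 50% rate is an artifact of the four-row toy log; in practice you'd compute this per integration, over a rolling window.)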
Your ratio of failed transactions to successful ones is a measure of an integration's quality. Usually you have the ability to reduce that failure rate by improving the design or technical implementation of your integration, but sometimes there are things you can't control. For example, some external APIs are just unreliable.
Your failure rate should be very low, well under 1%. That said, you'll still have variations among your integrations, which gives you an indication of where you could invest in improving quality. Lining this up next to the overall business value and adoption metrics for each integration will show you where that investment is worthwhile.
Likewise, knowing which integrations have higher or lower failure rates can help you plan for where to direct technical support resources. It can also give your partner team information to provide that integrated partner, which benefits you both. (Maybe they didn't know their API was unreliable.)
Measuring data failures gives you a pulse on the health and quality of integrations. From there, you can decide how you want to improve them.
Knowing how many users have activated each integration at a given time is helpful, but it doesn't speak to the directionality of that adoption.
Knowing that an integration's adoption is accelerating can help you get in front of scale challenges and invest appropriately. Seeing that an integration is being abandoned can be a helpful canary in the coal mine.
To see that directionality, you'll want to measure activation and deactivation velocity separately. Over a given time interval, how many times was the integration activated? Likewise, how many times was it deactivated, whether by those same users or others?
The net of those measurements over time indicates the adoption velocity.
You should already be measuring how many users have activated an integration, but that's likely coming from reporting in your integration technology. It also probably represents the net of activations and deactivations. (If this is all you can get, you can do a half-baked version of velocity by tracking the trend line on that "net".)
To capture activation/deactivation velocity you have to listen for both events to occur and plot them daily as they occur. Some integration technologies may have APIs to support such an activity. Something more homegrown may require you to build that event listener.
When you plot those events by day, you can slice that data by different time intervals as well as whatever other metadata you might have included. This helps you visualize the ongoing stream of adoption and abandonment.
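A minimal sketch of that daily tally, using a made-up activation/deactivation event stream for a single integration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event stream captured by a listener: (day, event)
events = [
    (date(2024, 3, 1), "activated"),
    (date(2024, 3, 1), "activated"),
    (date(2024, 3, 1), "deactivated"),
    (date(2024, 3, 2), "activated"),
]

# Net activations per day; a sustained positive run means adoption is growing.
net_by_day = defaultdict(int)
for day, event in events:
    net_by_day[day] += 1 if event == "activated" else -1

print(sorted(net_by_day.items()))
# [(datetime.date(2024, 3, 1), 1), (datetime.date(2024, 3, 2), 1)]
```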
Adoption velocity helps you predict the future. Just because your adoption went up or down this week, doesn't mean that trend will continue. Velocity is basically a trend analysis that helps you understand likely future adoption numbers.
Use this information to:

- Get ahead of scale challenges for integrations whose adoption is accelerating
- Investigate integrations that users are abandoning
- Plan where to direct future integration investment
Integration adoption velocity, like any kind of trend analysis, is imperfect, but it's more helpful than flying blind.
Integrations can be very simple to set up (one click, enter credentials, and poof!). They can also be highly customizable--even bespoke per end user.
The time it takes a user to actually set up the integration, whether alone or with professional services help, indicates the integration's time to value. In other words, how long does the user have to invest time and money into the integration, before it starts providing value?
Time to value is important to measure how effective a self-configurable integration experience is. This is likely relevant if you provide simple configuration for templated integrations, like many SaaS applications do.
However, if you provide a full white-label iPaaS capability or you offer bespoke integrations as part of your professional services offering, this time to value isn't just a user experience indicator. The stakes are much higher than an inconvenient setup.
These bigger, long-running integration projects cost a lot of money. Likewise, they severely delay the value the user receives from the integration. At worst, long time-to-value numbers can lose customers.
Simply put, integration time to value is the time between when a user begins setting up an integration and when they start receiving value from it.
For simple, self-configured integrations, this can probably be measured using your product analytics tool. You'll need to track whatever you reasonably designate as the start and end events for the "set up time". Make sure to slice these by specific integration, because this metric isn't useful in the aggregate.
For longer running, and project-based integrations, you're probably measuring project milestones. At a minimum you'll want to measure project start and completion, but you might want to measure time gaps to stage gates on the way to completion too. This will help you optimize your project approach more specifically.
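Either way, the arithmetic is the same: subtract the start event's timestamp from the value-received event's timestamp. A tiny sketch with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical analytics events for one account setting up one integration.
setup_started = datetime(2024, 3, 1, 9, 0)     # user begins configuration
value_received = datetime(2024, 3, 4, 15, 30)  # e.g. first successful data sync

time_to_value = value_received - setup_started
print(time_to_value)  # 3 days, 6:30:00
```

For project-based integrations, the same subtraction applies between milestone dates; you'd just record more of them along the way.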
For what it's worth, GuideCX (no affiliation) offers a project management tool that is particularly well suited for these types of projects.
Put simply, your goal should be to optimize user experiences and/or project processes to reduce time to value. The easier it is to enable an integration and start receiving value from it, the more integrations your users will enable and the more valuable those users will be.
Integrations should create stickier, higher value users. But how do you know that your product's integrations are actually doing that?
You should correlate some of the integration metrics, specifically around adoption, with the indicators of customer value you are tracking more broadly in the business. Correlating data is imperfect (see the pitfalls of causation vs correlation), but it can still be helpful.
Consider the following correlations between integration adoption metrics and important overall business metrics.
You should consider how your integration program, in aggregate, contributes to driving revenue for the business. This impact on revenue can come in the form of curbing churn or enabling expansion revenue, but the biggest potential impact is contributing new revenue.
The rest of the correlation metrics are useful at the individual integration level. However, it's also important to correlate revenue impact with the investment you make in the integration portfolio as a whole. Use this as a general indicator that you are going in the right direction.
You can correlate the integration portfolio against revenue in the following ways:

- New revenue: bookings from customers where integrations were part of winning the deal
- Expansion revenue: upsells and upgrades driven by integration adoption
- Retained revenue: churn avoided among highly integrated customers
In all cases, you have the option to consider a portion of the revenue as "attributed" to the integrations. If you choose an attribution model, just make sure it's applied consistently across the portfolio, so you aren't cherry picking numbers.
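A sketch of what consistent application might look like (the 30% rate and the revenue figures are assumptions, not a recommendation):

```python
# One fixed attribution rate applied across every integration,
# rather than a hand-picked rate per deal.
ATTRIBUTION_RATE = 0.30

revenue_by_integration = {"salesforce": 120_000, "quickbooks": 45_000}
attributed = {
    name: round(revenue * ATTRIBUTION_RATE, 2)  # round to cents
    for name, revenue in revenue_by_integration.items()
}
print(attributed)  # {'salesforce': 36000.0, 'quickbooks': 13500.0}
```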
A highly integrated product, assuming quality in the integrations, should create happier, stickier users. One of the best metrics we have for tracking how satisfied users are in the aggregate is the net promoter score (NPS).
NPS can be tracked at the aggregate level, but it's also interesting to look at specific customer segments or even specific customers. Lining up integration adoption metrics with that NPS will help you see to what extent integration may be contributing to customer satisfaction. You may even see the opposite--that low quality integrations are detracting from satisfaction.
Customer lifetime value (LTV) is a measure of what a customer or user is worth to you. In some regards, it's a measure of how sticky your user base is. Remember, integrations are supposed to create stickier users!
Correlating customer LTV against integration adoption metrics in aggregate, by segment, or individually will help you assess the impact integrations might have on the stickiness and overall value of your customers.
Hopefully, you'll see that more integrations means higher value customers.
Given that many integration requests to your product team originate from the sales team, it's helpful to track how each integration enables additional sales. This is closely related to correlating integration to revenue, but not exactly the same. In accounting terms, it's more like correlating integrations to bookings.
To measure this, count the dollar value of sales that could not have closed without the existence of an integration--in other words, that integration was a "must have" requirement for your new customer. Count the bookings revenue, optionally applying an attribution model if so desired.
You can get a little more detailed by introducing a category of sales that were influenced by the integration, but where an integration was a “nice to have” requirement. In those cases, you would just allocate less of the bookings revenue to the integration. You decide the percentage, but don't overthink what is inherently imperfect.
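One way to sketch that weighting (the weights and deal data are made up; pick percentages that fit your sales motion):

```python
# Full credit when the integration was a "must have" for the deal,
# partial credit when it was only a "nice to have".
WEIGHTS = {"must_have": 1.0, "nice_to_have": 0.25}

closed_deals = [
    ("salesforce", "must_have", 100_000),
    ("quickbooks", "nice_to_have", 40_000),
]

influenced_bookings = sum(
    WEIGHTS[role] * amount for _, role, amount in closed_deals
)
print(influenced_bookings)  # 110000.0
```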
While it is definitely possible to overdo it, most product teams are underutilizing opportunities to measure the impact their integration program has for their users and the business overall. This is a big miss!
Start small, but develop a discipline for measuring integration effectiveness and for incorporating that into strategic product (and other kinds of) conversations.