Your marketing automation system can give you more metrics than you know what to do with, from email statistics to closed-loop revenue reporting. You probably wonder whether your nurtures are actually driving leads through the funnel and creating opportunities, and how best to use all that data to find out.
Let’s look at three areas that will help you audit your nurture performance:
- Setting up reporting correctly
- Deciding which metrics really matter
- What the metrics are really telling you
- Setting up Reporting Correctly
If you want to track a metric such as opportunities created from nurture members, but the data integration between your MA and CRM systems isn’t set up to support it, you won’t be able to. So it pays to do it right when you are setting up nurtures: build the assets and programs to support the reporting you need, such as custom reporting fields or data objects. You’ll find it’s not only easier but also faster and less expensive to put thought into reporting up front, while you are building the nurture. I can tell you from experience that bolting particular reporting onto a nurture after the fact is a tremendous hassle, not to mention that getting historical data out of your systems ranges from moderately difficult to impossible.
Besides making sure your technical integrations are correct, there are two points to call out that save a lot of trouble, especially when exporting data outside of the MA tool:
- Create an understandable asset naming convention
- Create an easily usable asset file structure
When you export data on nurture campaigns, your MA tool will export it under email and campaign names, so when you analyze the results, you will be rolling up data based on those names. If the asset and campaign names are vague, it becomes time-consuming to standardize names for reporting, or even to see what’s actually happening. A good approach is to number each email in the nurture plus a short description of its content or subject line, such as “EM1-HowWeHelpYourBusiness,” then “EM2-PlansAndPricing.” A code like the date the asset was created may seem good enough while you are building the nurture, but trust me: two months later you will glance at the report and have no idea what “LP021315” is!
And here’s a hint: settle on the naming convention at the beginning so you are not switching it halfway through your program. You’ll avoid having to manually map asset names back and forth, because “EM2-PlansAndPricing” is not the same in your report as “EM2-CheckOutOurPlansAndPricing.”
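As a rough illustration, here is a small Python sketch of how you might sanity-check an export against a naming convention before rolling up a report. The “EMn-Description” pattern and the example names are my own assumptions, not the format of any particular MA tool’s export:

```python
import re

# Hypothetical convention: "EM<number>-<Description>", e.g. "EM1-HowWeHelpYourBusiness"
NAME_PATTERN = re.compile(r"^EM(\d+)-([A-Za-z0-9]+)$")

def check_asset_names(names):
    """Split exported asset names into those matching the convention and those that don't."""
    valid, cryptic = {}, []
    for name in names:
        match = NAME_PATTERN.match(name)
        if match:
            valid[name] = int(match.group(1))  # email number, used to sort the nurture sequence
        else:
            cryptic.append(name)               # names like "LP021315" land here for cleanup
    return valid, cryptic

exported = ["EM1-HowWeHelpYourBusiness", "EM2-PlansAndPricing", "LP021315"]
valid, cryptic = check_asset_names(exported)
```

Running a check like this each time you export means naming drift gets caught immediately, instead of surfacing as mystery rows in a quarterly report.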
Having a usable file structure will pay off when you go to look at an email, set up reporting segments, and so on. You’ll have thousands of different assets. File them into logical folders, such as by nurture, so that, for example, a Suspect Nurture folder contains all the assets associated with that nurture. I have seen companies that skipped this step, and it makes it extremely difficult to find where anything is hiding. (You’re probably saying, “Duh!” but it happens way more than you might think. . .and is a bigger time waster than you would suspect.)
- Deciding Which Metrics Really Matter
There are many schools of thought on which metrics matter. It’s worth putting time and thought into these two questions:
- What metrics are important to my company’s sales cycle management?
- What metrics can we actually measure?
Two short examples and an observation will illustrate these points:
I had a Marketo client who wanted to know how many MQLs a recycle nurture created: how well that nurture worked, including not only the direct action from the nurture or website activity, but also any people who were part of the nurture and became MQLs by contacting a salesperson directly. Marketo didn’t measure that natively, and even with a correct integration the data couldn’t be pulled from Marketo alone. So we needed to pull reports from both Salesforce and Marketo and combine the data to tell the whole story.
An Eloqua client wanted to measure the way Eloqua was helping impact opportunities and leads, but their CRM integration did not support reporting for that data. So we had to figure out other indirect ways using lead scoring snapshots to measure the impact of the nurture.
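Combining reports from two systems, as in the Marketo/Salesforce example above, often comes down to joining exports on a shared key such as email address. Here is a minimal Python sketch; the column names, stage value, and CSV layout are hypothetical, not the actual Marketo or Salesforce export formats:

```python
import csv
import io

# Hypothetical exports: the MA tool knows who was in the nurture;
# the CRM knows who became an MQL.
marketo_csv = "email\nana@example.com\nbob@example.com\n"
salesforce_csv = "email,stage\nana@example.com,MQL\ncarla@example.com,MQL\n"

nurture_members = {row["email"] for row in csv.DictReader(io.StringIO(marketo_csv))}
mqls = {row["email"] for row in csv.DictReader(io.StringIO(salesforce_csv))
        if row["stage"] == "MQL"}

# MQLs attributable to the nurture = people in both reports
nurture_mqls = nurture_members & mqls
```

In practice you would read real export files and deduplicate addresses first, but the principle is the same: neither system alone has the full answer, so the join is where the story lives.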
Out of the box, neither Marketo nor Eloqua has deep support for measuring email- or campaign-level metrics over time. You can compare individual emails and campaigns, but you can’t really compare a nurture track over a long period. Neither tool really gives you the trending data to answer the questions you want answered: “What do I do with my assets and nurtures?” and “What has been effective to date?” Lead nurtures run over a long period of time, and people enter them at all points along the way. So if you aren’t looking at trends over time, you aren’t seeing the full picture or where things correlate.
As a consultant, my recommendations come down to these:
- It’s all well and good to implement best-practice KPIs (Key Performance Indicators) like measuring marketing ROI—but if you don’t first check what you can actually measure, you may need a phased rollout to build up your capabilities. So put together a wish list of the metrics you’d like to have to answer your business questions, then check whether you can actually get those metrics out of your system right now. If you can’t, determine the roadmap to get to where you want to be.
- If you want to determine correlations over time between nurture email activity performance and other metrics such as opportunities, you need to start taking snapshots of the information from the beginning and set up some level of automated reporting, whether it’s importing the snapshots into Excel or setting up an automated data feed to a BI tool.
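The snapshot idea above can be sketched in a few lines of Python. The snapshot fields and figures here are invented for illustration; the point is simply that a dated series of cumulative stats turns a single point-in-time rate into a trend:

```python
from datetime import date

# Hypothetical weekly snapshots of one nurture email's cumulative stats
snapshots = [
    {"date": date(2015, 3, 1),  "sent": 400, "opens": 120},
    {"date": date(2015, 3, 8),  "sent": 650, "opens": 170},
    {"date": date(2015, 3, 15), "sent": 900, "opens": 210},
]

def open_rate_trend(snaps):
    """Open rate at each snapshot date, so you see the trend, not one number."""
    ordered = sorted(snaps, key=lambda s: s["date"])
    return [(s["date"].isoformat(), round(s["opens"] / s["sent"], 3)) for s in ordered]

trend = open_rate_trend(snapshots)
```

Whether the snapshots land in Excel or a BI tool, the key is that they are captured on a schedule from day one; you can’t reconstruct a trend you never recorded.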
- What the Metrics Are Really Telling You
Let’s say you’ve got the right metrics being reported. Now what do they mean?
Opens and clicks
Opens, clicks, form submits and their rates are your standard form and email tracking metrics. You could say that marketing automation was basically invented to track these things, but they’re only the tip of the iceberg. An open rate on a single send of a single email doesn’t actually tell you very much.
Here’s an example of why it’s best not to base a decision on a limited set of data. A client of mine had a nurture that was going along fine, and suddenly open rates fell dramatically. If you have the same targeted people entering into a nurture, which usually you do, that shouldn’t occur: typically leads are coming through a channel and are segmented in an established way. By looking at the data over time, we found the point of drop-off and discovered that some purchased lists had been pushed into the nurture at that time. “Oh, OK, purchased lists usually have much worse performance,” would be a common reaction, and action would be taken based on that assumption.
But that would have been a wrong assumption. Once we followed the metrics through the entire nurture and the one after it, deeper analysis revealed that even though this segment had a very low engagement rate of clicks and opens in the first nurture, those who did engage went on to engage at a much higher rate as they traveled through the full set of nurtures, with much higher click-through and open rates overall.
This type of situation demonstrates very clearly the importance of looking at all metrics together to get the complete picture. For this client, a very wrong decision might have been made if they didn’t have all the facts.
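The analysis in that story amounts to segmenting engagement by lead source and nurture stage instead of averaging everything together. A toy Python version, with invented sources, nurtures, and activity rows, might look like this:

```python
from collections import defaultdict

# Hypothetical activity rows: (lead_source, nurture, engaged?)
activity = [
    ("purchased_list", "nurture_1", False), ("purchased_list", "nurture_1", False),
    ("purchased_list", "nurture_1", True),
    ("purchased_list", "nurture_2", True),  ("purchased_list", "nurture_2", True),
    ("inbound",        "nurture_1", True),  ("inbound",        "nurture_1", False),
]

def engagement_by_cohort(rows):
    """Engagement rate per (source, nurture) cohort, not one blended average."""
    totals, engaged = defaultdict(int), defaultdict(int)
    for source, nurture, did_engage in rows:
        totals[(source, nurture)] += 1
        engaged[(source, nurture)] += did_engage
    return {key: engaged[key] / totals[key] for key in totals}

rates = engagement_by_cohort(activity)
```

In this made-up data, the purchased-list cohort looks weak in the first nurture but strong in the second, which is exactly the pattern a blended average would have hidden.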
Performance over time
The value of metrics is not a one-shot thing. The most useful insights come from looking at one or two quarters’ worth of data: that’s how you’ll get the best perspective on what is happening with your nurture. Some nurture campaigns are as short as 3 or 4 emails, and we’ve had some run as long as 34; but on average, you don’t know for five or six weeks whether a person has taken some action based on the nurture or has moved into the next nurture. Trying to analyze results when only two emails have been sent is not going to give you good data for decision making.
Your definition of “success”
You need to define what “nurture success” means to your team. It’s more than just “engagement” as seen in opens, clicks, and web activity. Look at the nurture exit criteria and determine how to measure them: is it a funnel stage change, an increased lead score, or becoming an MQL? Once you know that, you can look at individual email performance and tweak those emails to increase overall nurture success. It all depends on your business, goals, and strategies. Yes, it may sound like a life philosophy! But it’s true: you can’t know if you’ve succeeded until you know your definition of success.
As a final note, there are often dimensions to the story that don’t show up in the reports and metrics you initially laid out. That’s why it’s vital to do a regular nurture audit: check whether your reporting is really answering the questions you need and providing actionable insights, and then refine it until it does. A different visualization, a different BI tool, or a different way of pulling the data may be what leads to actionable conclusions. Only through systematic use and optimization of your reporting do you find this out.
The whole point of nurture metrics is to provide you with the information to make course corrections and strategic decisions. Keep that in mind when setting up your nurture reporting; it will make your testing and optimization efforts much more successful.
Just getting started with your nurture campaigns? Download our 8-step framework to get your nurtures up and running.
If you want to learn more about our lead nurturing services, schedule a call or just drop a note to our team.
Ryan Johnson develops and implements marketing automation strategies for DemandGen clients. As a DemandGen Consultant, he has helped clients across a wide range of industries to streamline and optimize their marketing and sales processes to drive measurable success and ROI.