Once you begin collecting these large troves of data, the next step is to begin analyzing them. This may seem like a daunting task at first. It’s called “Big Data” for a reason, after all. There is a methodical way to approach it, however, and this begins with identifying which metrics are going to be of the most value to your company.
KPIs and Dashboards
Key performance indicators (KPIs, or sometimes key success indicators) are the first things you need to identify before you can start making sense of your data. These are metrics that are directly linked to your organizational goals. For example, a sales team might have KPIs of new revenue, total revenue, new customer capture, and deal pipeline size. Monitoring these metrics gives the team a good idea of how it is progressing toward revenue targets at any given time. A customer support team, on the other hand, might have KPIs like on-hold time and customer feedback ratings. The important part is to choose KPIs that are measurable indicators of progress toward company goals.
These KPIs are then visually displayed on dashboards where employees can see exactly where they stand. There is a multitude of software products on the market today that analyze KPIs and display dashboards, such as Tableau and Qlik Sense. The main point, though, is that a dashboard is a digital visualization tool. At Kloeckner’s metal service centers, for example, digital scoreboards have recently been introduced. These are digital displays, visible on the warehouse floor, that show KPIs in real time. By being able to see progress as it happens, employees can impact the outcome. They can also adjust midstream when the need arises.
Tracking KPIs on dashboards alone, however, is not enough to fully capture the value of available data. Customer-facing initiatives also have to be analyzed. A good way to accomplish this is through A/B testing. Also referred to as split testing and bucket testing, this method allows you to test new versions of web pages, ad copy, and more against a control.
With A/B testing, you begin with the original version of the item in question, a landing page for example. Then you identify the change you want to test, such as a new font or layout for the page. It is best to limit the test to one specific variable so you can better understand what is working and what isn’t. You then launch the test, serving half of your users the original page and the other half the modified page.
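The split itself is usually done deterministically, so a returning user always sees the same version. As an illustrative sketch (the function and experiment names here are hypothetical, not from any particular testing tool), hashing the user ID together with an experiment name gives a stable 50/50 assignment:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str = "landing-page-v2") -> str:
    """Deterministically assign a user to 'control' or 'variant'.

    Hashing the user id together with the experiment name keeps each
    user's assignment stable across visits, while different experiments
    split users independently of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"
```

Because the assignment depends only on the inputs, there is no need to store which page each visitor saw; recomputing the hash reproduces it.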
The A/B test will analyze the impact of the change on a metric you want to measure: click-through rate, for example, or time spent on the page. The test will determine whether the change had a statistically significant impact. This is a way to confirm that a new direction will work, or to demonstrate that a new feature is counterproductive. It is then important to take that information and use it going forward, creating a feedback loop of additional tests and tweaks.
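For a click-through metric, the standard significance check is a two-proportion z-test comparing conversion rates between the two groups. A minimal sketch, assuming counts of clicks and visitors for each version (the sample numbers below are invented for illustration):

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for an A/B comparison of conversion rates.

    Returns (z, p_value) using the pooled-proportion normal
    approximation; a small p_value suggests the difference between
    the two versions is statistically significant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 500 clicks from 10,000 control visitors (5%)
# versus 600 clicks from 10,000 variant visitors (6%).
z, p = two_proportion_z_test(500, 10000, 600, 10000)
```

Here the one-point lift produces a p-value well below 0.05, so the variant would be judged a genuine improvement rather than noise; with far fewer visitors, the same lift might not reach significance.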
Big Versus Small Testing
Now that you are analyzing both your internal metrics and your external efforts, there are different strategies for making the most of that data. Every company operates on underlying assumptions about what is most effective. Now we have the data and analytic tools to test those assumptions.
Big tests usually come first. These are ways to test entire workflows, value propositions, and outreach approaches. The point of big tests is that they give you something conclusive to point to when something isn’t working. Small tests, alternatively, are for tweaking. They can be as simple as modifying a call script or changing the color of a button on a web page. While these smaller tests probably won’t be huge drivers of growth, they have a cumulative effect.
After analyzing all this data, your next step will be to create actionable business plans based upon it. That will be the subject of the next article in the Thought Leadership Series: Acting on Data.
Jonathan Toler is a digital product and innovation leader with over 17 years of industry experience at both Fortune 500 companies and early-stage startups. He is the Head of Product & Innovation at Kloeckner Metals, where he is tasked with researching and implementing new technologies and processes that will transform Kloeckner into a digital metals company.
Prior to Kloeckner, Jonathan led product management teams at Owners.com, Bridgevine, Triton Digital, and Autotrader.com. He has previously spoken at ProductCamp Atlanta, Open Mobile Summit, and B2B Online.
Jonathan holds an MBA and BBA in Finance from Kennesaw State University and is a Certified Scrum Product Owner.