
As the business world has become increasingly metric-focused over the years, it has become ever more apparent that simply having more metrics at your disposal does not make a business better.
I say this with the greatest of respect to numbers and metrics. After all, I’ve had a 30+ year career on the back of being good with numbers and metrics.
But, in the hands of people who take them too literally, and apply them formulaically without really understanding what’s going on, numbers and metrics can be dangerous and lead to poor management decisions.
This problem has been compounded by today’s computerised monitoring and tracking systems, which can tell you pretty much anything you want to know about your business in an instant.
You can set up systems to ping you with a daily report on anything you want. Weekly meetings with your direct reports probably start with a trawl through their KPIs. Packs for your monthly board meetings are filled with more numbers than ever.
But is business getting harder or easier on the back of all these metrics?
Well, I don’t often come across people who think business is easier to manage today than it was a couple of decades ago.
And I suspect a large part of this is that businesses today try to manage complex outcomes using methods which only work for simple processes.
Sometimes it’s fine
The way most organisations manage their metrics is fine for single-dimensional issues. It’s just not a good way to manage anything more strategic.
A single-dimensional issue is one where all the elements are under your control and you can assess performance with a couple of simple yes/no questions.
Let me give an example. Some years ago I worked with a business which despatched packages to their clients overnight. On-time delivery was important to their clients’ business model, so the business I worked for used a well-known, and not-inexpensive, delivery service which promised to get packages delivered to our clients by 10am the following day if they were available for collection from our factory by 6pm the evening before.
Now, for the courier company, I’m sure there were enormous practical challenges to be overcome to deliver on that promise. But for us, it was really simple.
Since this particular business “absolutely, positively” promised to deliver by 10am, the only question we needed to know the answer to was “did they?”.
To be fully transparent, on very rare occasions, they didn’t. But we had something like a 99.8%+ on-time delivery track record and that’s what we told any new clients coming on board with us.
The key point is that on-time delivery was a simple metric to track for this business. An important metric for our clients, for sure, but a simple one for us to track and manage.
Nobody in that business spent a lot of time or energy worrying about whether deliveries were going to get to their intended destination on time. They just did. And we tracked the deliveries on a weekly basis to make sure we stayed pretty close to 100%.
But for us, there were very few moving parts to manage. As long as the boxes were on the loading dock by 6pm, they’d be collected by the courier company and delivered to our client the following morning.
That’s a single-dimensional metric. All we needed to know was “did the parcel arrive by 10am?”, which we knew from the signed paperwork we got back from the courier company with their invoices as proof of delivery.
(To be fair, on the rare occasions deliveries weren’t made by 10am, we normally had someone from our client on the phone shouting at us long before the paperwork turned up from the courier company.)
In this context, “single-dimensional” doesn’t mean “unimportant”. On-time delivery was very important to our clients.
It just means that the elements were under our control.
Deciding whether or not the box got to the loading dock by 6pm was a simple yes/no question. As was “did it get to the client by 10am the next day?”
We could probably have tracked several dozen more metrics if we’d put our mind to it, but the reality was that none of those other metrics would have done anything other than corroborate what we already knew – that our client got their deliveries on-time very nearly 100% of the time.
That, in turn, meant that the time, cost and effort which would go into tracking any other metrics in this area would not add any more value to what we already knew about our clients’ on-time delivery experience.
So we didn’t track any other metrics.
We had everything we needed, and nothing we didn’t need.
It was also the lowest-cost way to assess our on-time delivery performance. Anything else we tracked would have made the process more expensive while conveying no additional useful information.
Multi-dimensional issues
As a way to manage single-dimensional metrics, the approach above is fine.
The problem is that most of the issues you spend your time dealing with in a leadership role are not as simple to manage as our on-time delivery performance was.
And the “right answer” isn’t always obvious. At least, not at first glance.
Imagine your call centre is going crazy. All the calls are backed up, and your customers are furious.
All your call centre’s performance metrics look terrible.
If this was a single-dimensional issue, you’d probably just fire whoever was looking after your call centre and bring in someone who knew what they were doing.
However, call centre metrics alone are unlikely to give you the full picture in this scenario.
Assuming you haven’t hired a complete idiot to run your call centre, it’s much more likely that something else has gone badly wrong in your business. But it’s pretty unlikely that any of the metrics you currently collect across your business will tell you what it is.
Perhaps the marketing department launched a new national campaign and forgot to mention it to the call centre, leading to understaffing relative to demand in the call centre, even if the staffing levels were perfectly sensible based on the activity the call centre management team were aware of.
Perhaps the last batch of product you manufactured was terrible, so all your customers are calling to complain and demand their money back. Refund requests used to be so infrequent that they had no real impact on call centre resourcing; now, hours of staff time are being spent processing all the refunds and product replacements. That means the regular customer call-load is being neglected, which prompts those customers to call in and complain…and so on, and so on…
Perhaps the IT department’s recent software upgrade has made the call centre’s CRM system run much slower than it used to. IT focused on the technology aspects but never really tested the impact on a call centre agent’s desktop “because the supplier said we needed to upgrade to the latest version of their software”.
Except now a 2-3 minute average call has turned into a 10-12 minute average call which means nobody’s calls are being answered in the usual timeframe.
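A back-of-the-envelope capacity calculation shows how quickly that kind of slowdown swamps a call centre. The agent count below is hypothetical, chosen purely to match the handle times in the scenario above:

```python
# Rough capacity check (hypothetical figures): how many calls per hour
# can the same team handle before and after the average handle time
# jumps from ~2.5 minutes to ~11 minutes?

def calls_per_hour(agents: int, avg_handle_minutes: float) -> float:
    """Rough hourly capacity: agents * (60 / average handle time)."""
    return agents * (60 / avg_handle_minutes)

before = calls_per_hour(agents=20, avg_handle_minutes=2.5)   # 480 calls/hour
after = calls_per_hour(agents=20, avg_handle_minutes=11.0)   # ~109 calls/hour

print(f"Capacity before upgrade: {before:.0f} calls/hour")
print(f"Capacity after upgrade:  {after:.0f} calls/hour")
print(f"Effective capacity lost: {1 - after / before:.0%}")
```

With the same headcount, roughly three-quarters of the team’s capacity evaporates – and nothing in the call centre’s own metrics would point back to the IT upgrade as the cause.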
The point here, though, is that none of the metrics you’re collecting for your call centre – or anywhere else in your business, probably – will give you the insight you need to work out what’s happened in any of the scenarios above.
Twice the number of metrics won’t tell you any more than you already know. You really are deep into diminishing-returns territory if you layer more metrics into your call centre reporting.
Finding the signal amongst the noise
The skill most organisations need more of is finding the signal amongst the noise: the one thing that really matters out of the thousands of pieces of information coming your way.
Originally an engineering term, the concept of “signal vs noise” is popular among financial traders, where the challenge is finding the one true nugget of information amid the overload from all your data platforms and news feeds.
Yet this concept is not applied to the metrics organisations use to manage themselves as often as it might be.
By collecting more and more datapoints, all most organisations are doing is increasing the amount of noise and making the signal even harder to spot. Oh, and spending a lot more money into the bargain.
Some people might claim that “AI will fix that”. I’m doubtful – and if it fixes the problem at all, it’ll be years before it does this well.
Like all computer systems, AI works on a purely logical level. If this, then that. If A, do this, if B do that.
Sure, AI is a flashier interface, but all IT-based systems have to work entirely logically because no-one has found any other way to make computers work yet. The supposedly smart things AI does happen only because some people have worked out a way to apply a logical process to produce a seemingly “beyond logic” result.
But whether or not you think AI will help, it suffers from exactly the same issue as most human reviewers of metrics.
It starts with looking at an organisation as a series of discrete silos, and concentrating any review process on looking deeper into those silos. That’s the logical approach, after all…drilling down into all the available information.
“Problem in the call centre? Let’s call up even more call centre metrics and see if we can work out what happened.”
More likely, the real problem is somewhere else in the business and it’s just the call centre’s misfortune to be “downstream” of whatever that problem was. To solve the problem you need to look more widely, not more narrowly, and you’ll probably find that the metrics you collect, whether in the call centre or anywhere else, on a daily/weekly/monthly basis will only get you so far.
The signal you’re looking for is unlikely to be found within an avalanche of purely call centre-based statistics.
Your mission…should you choose to accept it…
The objective for any performance management system should be to have as few metrics as possible. Extra metrics, especially ones which just corroborate information some other metric already provides, are a waste of time, effort, and money.
Just because your IT system tracks a feature automatically, and can produce a report on it, doesn’t mean you should track, measure, and manage your business using it.
Every metric over and above the minimum required increases the cost of running your business. It doesn’t make your business better, or easier to manage – usually the opposite, in fact.
And you need to be rigorous about the RoI on any metric.
Many times I’ve seen organisations able to get 95% of the information they’d ideally like pretty simply, easily, and inexpensively. They then spend six or seven figures a year to get that up to perhaps 97% or 98%, accepting that 100% is unachievable as a goal for a whole host of technical and practical reasons.
In reality, are they going to make any different decisions with 97% or 98% of the information they’d ideally like vs the decisions they would have made with 95% of it?
I’m not saying it’s impossible, but I’m deeply sceptical.
More likely, if the business spent just a fraction of that extra cash on factoring in, and budgeting for, a little extra risk management in its decisions and processes, it would get a better bottom-line outcome – and avoid an over-proliferation of metrics which add very little to what was already there, at huge expense.
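To make those diminishing returns concrete, here is some back-of-the-envelope arithmetic. All the figures are invented for illustration (broadly in line with the “six or seven figures” range mentioned above), not taken from any real business:

```python
# Hypothetical figures illustrating the diminishing returns of chasing
# the last percentage point or two of information.
base_cost = 100_000      # annual cost to get ~95% of the information you'd like
extra_cost = 1_000_000   # extra annual spend to push that to ~97%

cost_per_point_base = base_cost / 95    # ~ $1,053 per percentage point
cost_per_point_extra = extra_cost / 2   # $500,000 per extra percentage point

print(f"Baseline: ~${cost_per_point_base:,.0f} per point of information")
print(f"Marginal: ~${cost_per_point_extra:,.0f} per extra point")
print(f"Each marginal point costs ~{cost_per_point_extra / cost_per_point_base:,.0f}x more")
```

Even with generous assumptions, each marginal point of information costs hundreds of times more than the baseline information did – which is exactly why budgeting for a little extra risk tends to beat buying more data.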
If you want to get your business off to a strong start for the new year, take a look at the metrics your organisation collects and go through them with a fine-tooth comb.
When you really challenge the assumption about more data necessarily being a good thing, you might find your organisation tracks a remarkable number of metrics which add significant cost and complexity for remarkably little bottom-line benefit…if any at all.
Then ask yourself: if anyone had presented a budget proposal to you for collecting that metric, now you know the true RoI, would you have green-lit the project?
You might be surprised by how often the answer, in the cold light of day, is “no”.