I have an odd relationship with management reporting. I know it’s a necessity and quite often see clear value in what’s packaged for senior management and board review. But a significant piece of the reporting content comes in the form of metrics and, well, whenever I hear the term it conjures up this ghastly image of good and decent people sinking slowly to their deaths in the quicksand that such efforts often become.
Now I’ve designed and supported more than my fair share of related content. I understand that sometimes the best way to tell a story is to paint it in the form of a picture; I get that part. But way too many times I’ve witnessed such initiatives spiral out of control to the point where it takes an army of people working ridiculous hours to pull together a deck of metrics that either fails to answer anyone’s questions or, even worse, generates requests for more metrics to provide clarity. And once a metric becomes a standard part of any reporting package it often stays there until management changes, and sometimes even beyond.
But I think there’s a bigger issue with metrics, one that goes beyond my simply not thinking they’re “all that and a bag of chips”. Where are the controls around generating them?
Seriously, we have this vastly complex framework wrapped around financial reporting (SOX) to provide reasonable assurance that what management is reporting to its investors is accurate. We have industry, federal and state legislation requiring all manner of controls around sensitive information. There are auditors (internal and external) and regulators from all over the place who go over everything with a fine-tooth comb (or at least claim to) to make sure everything being done is done right – but in my nearly fifteen years in the audit/assurance industry I have never heard of a finding or issue regarding the veracity of metrics. Which is only a problem if the people running an institution or company rely on them to make decisions.
The reason it’s a problem is that so many of the metrics in circulation are pulled together from disparate sources, cobbled together in spreadsheets or non-production databases, and generated manually. There’s no easy way to verify the source data, to know that it hasn’t been altered in some way, or even to know whether it’s the right information. And even when the data comes from a secured, production-like environment, there’s still no real auditing conducted to ensure the information is accurate or, better yet, that it’s even the right information to be measuring.
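To make that concrete, here’s a minimal sketch of what even a lightweight integrity control could look like: record a cryptographic hash of the extract at the moment it’s pulled, then verify it before anyone builds a metric from it. The file names and workflow here are hypothetical; the point is simply that tamper-evidence is cheap to add.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the raw bytes of a file so any later change is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_hash(extract: Path) -> None:
    """Run at extraction time, before anyone touches the data."""
    extract.with_suffix(".sha256").write_text(sha256_of(extract))

def verify_extract(extract: Path) -> None:
    """Run at reporting time; refuse to report on altered data."""
    expected = extract.with_suffix(".sha256").read_text().strip()
    if sha256_of(extract) != expected:
        raise ValueError(f"{extract} changed since extraction; do not report this metric")

# Hypothetical usage:
# record_hash(Path("releases_extract.csv"))    # when the data is pulled
# verify_extract(Path("releases_extract.csv")) # before the deck is built
```

It doesn’t prove the data was right to begin with, but it does prove nobody quietly “fixed” it in a spreadsheet along the way, which is more assurance than most reporting packages have today.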
I once took over a change management process and assumed responsibility for a series of reports generated for the Managing Director, who in turn used them as part of his reporting package shared with the CIO. One of the key metrics being reported was scheduled releases and the IT department’s on-time implementation percentage. The numbers looked great, showing that they were on time more than ninety-five percent of the time over a two-year period. The only problem I could see with the metric was that it was misleading to the point of being almost a lie. The scheduled release date was being pulled from the system used to migrate changes into production, and that date was only entered once the development team had completed all of their work. In other words, the scheduled implementation date was chosen once they already knew they were ready to move into production. Of course the on-time numbers looked great; they never committed to a date until they knew they could hit it. The Managing Director incorrectly assumed that there was a legitimate release schedule with forecasted dates and that the on-time numbers reflected a well-run process; wrong. No one ever questioned the numbers or their source, and had I not inserted myself into what was described as a well-honed, efficient process the problem might never have been identified; and there were a few more just like it. My trust in metrics was permanently altered after that.
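A hypothetical reconstruction shows just how much the choice of baseline matters. The dates below are invented, but the mechanics match what I described: measured against a “scheduled” date that’s only entered once the work is done, on-time performance is nearly guaranteed; measured against the date forecast at planning time, it’s a very different number.

```python
from datetime import date

# Invented release records illustrating the flaw: "committed" is the date
# entered into the migration tool once the work was already done, while
# "forecast" is the date promised back at planning time.
releases = [
    {"forecast": date(2012, 3, 1),  "committed": date(2012, 4, 10), "actual": date(2012, 4, 10)},
    {"forecast": date(2012, 5, 15), "committed": date(2012, 5, 14), "actual": date(2012, 5, 14)},
    {"forecast": date(2012, 6, 1),  "committed": date(2012, 7, 20), "actual": date(2012, 7, 20)},
]

def on_time_pct(rows, baseline):
    """Percentage of releases whose actual date met the chosen baseline."""
    hits = sum(r["actual"] <= r[baseline] for r in rows)
    return 100.0 * hits / len(rows)

# Against the post-hoc committed date the process looks flawless;
# against the original forecast it tells a very different story.
print(f"vs committed date:    {on_time_pct(releases, 'committed'):.0f}%")  # 100%
print(f"vs original forecast: {on_time_pct(releases, 'forecast'):.0f}%")   # 33%
```

Same releases, same outcomes, two wildly different stories; the metric was never wrong, it was just measuring the wrong baseline.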
Metrics are an excellent way for decision makers to quickly understand status and identify problems. I’ve quoted here before how someone I respect quite a bit was fond of asking her team, “If you can’t measure it, how can you manage it?” and she’s absolutely right. Metrics are management’s ultimate means of measuring the key activities and issues within their world. But how far do you go, and how much effort do you expend pulling the related reports together? And even if you’re able to automate the process and shorten the time needed to generate the reports, how do you know that you’re measuring the right things, or that the underlying data is unaltered? Ultimately I think senior managers should be provided with something akin to a cost-benefit analysis for each metric they’re given. Have them understand the degree of complexity and the amount of effort required to generate a number before deciding whether or not it’s worth it. Perhaps I’m being naive, but I’d like to think that most C-level executives would eliminate a significant amount of their reporting if they could see how much it was really costing them.
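Here’s a rough, back-of-the-envelope illustration of what such a cost-benefit view might look like. Every figure is invented, but attaching even an approximate annual price tag to each metric in the deck would give its sponsor something concrete to weigh.

```python
# A back-of-the-envelope view of what a single metric costs to produce.
# All figures are hypothetical; the point is to put a price tag next to
# each number in the deck so its sponsor can decide if it earns its keep.
metrics = [
    {"name": "On-time release %",   "hours_per_cycle": 6,  "cycles_per_year": 12},
    {"name": "Open audit findings", "hours_per_cycle": 2,  "cycles_per_year": 12},
    {"name": "Project RAG status",  "hours_per_cycle": 20, "cycles_per_year": 12},
]
LOADED_HOURLY_RATE = 120  # assumed fully loaded cost of staff time

for m in metrics:
    annual_cost = m["hours_per_cycle"] * m["cycles_per_year"] * LOADED_HOURLY_RATE
    print(f"{m['name']}: ${annual_cost:,}/year to produce")
```

Even at these made-up rates, a metric that takes twenty hours a month to assemble costs nearly $29,000 a year; I suspect plenty of slide-deck fixtures would not survive that conversation.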
Here’s the part that should really concern you the most, though: metrics are a key component of Board reporting, and boards make all sorts of decisions based on what these reports tell them. How can that be allowed unless the process used to generate them is locked down and audited? Where are the regulators in all of this?