Companies are always benchmarking themselves against others to see what they're good or bad at. Occasionally, or perhaps fairly often, this exercise is really an effort to make themselves feel good or to find some silver lining in their efforts. And I don't see anything wrong with this. Why shouldn't you? Everyone wants to feel good, right? And if no one else is going to tell you how great your organization is, why not go out and say it yourself? But unfortunately, it's not that easy. You can't just walk up to senior management and announce how great you are. You have to dress it up, and this is where benchmarking is critical. Benchmarking gives you data, and data impresses people. Armed with benchmarking data, you can stake your claims of superiority.
But how do you get to this point? Here's how it generally works. In this fictitious example, let's assume you want to understand how much companies spend on IT as a percent of revenue and compare your spending against theirs. So you reach out to some consultant type, generally one who has benchmarked tens or hundreds of IT organizations as part of an "exhaustive research study" of some kind. Or, in some cases, you find a research paper to use instead. You then go in one of two directions:
1. You apply their framework to your company yourself to see how you stack up (the self-directed approach)
2. You hire the consultant to benchmark your organization (the be-directed approach)
In the self-directed approach, you don't know exactly how the consultant did their study, so you make some assumptions to come up with your number. This obviously leads to some uncertainty about the validity of your figure, but you did the best you could with what you had. If the data makes you look good, great. If it doesn't, you can explain it away by pointing to that very uncertainty: you never knew exactly what the consultant did, and you never fully understood their methodology.
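The self-directed exercise boils down to a few lines of arithmetic. Here's a minimal sketch of it — every figure below (the expense, the revenue, the peer percentages) is hypothetical and made up for illustration, not drawn from any real study:

```python
# A sketch of the self-directed approach: compute your own
# IT-expense-as-a-percent-of-revenue figure, then see where it
# lands within a benchmark sample. All numbers are hypothetical.

def it_expense_pct(it_expense, revenue):
    """IT expense expressed as a percent of revenue."""
    return 100.0 * it_expense / revenue

def percentile_rank(value, sample):
    """Share of the benchmark sample at or below `value`."""
    at_or_below = sum(1 for s in sample if s <= value)
    return 100.0 * at_or_below / len(sample)

# Hypothetical peer figures lifted from the "exhaustive research study"
peer_pcts = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2, 5.0, 6.3]

ours = it_expense_pct(it_expense=45_000_000, revenue=1_200_000_000)
print(f"Our IT spend: {ours:.2f}% of revenue")          # 3.75%
print(f"Rank in peer set: {percentile_rank(ours, peer_pcts):.0f}th percentile")
```

Of course, the hard part isn't this arithmetic — it's deciding what counts as "IT expense" in the first place, which is exactly the assumption the consultant never fully discloses.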
And in the be-directed approach, the consultant comes in and, because they understand their own methodology and how things are measured, they can provide you with an "apples to apples" comparison of your organization versus others. What they don't reveal is that every organization measures said metrics (IT expense in this case) so differently that even they're guessing part of the time. They may say they adjust or normalize for these differences, but those are fancy words for "we fudged it." What many also won't reveal is that they know your organization will likely emerge in the middle of the benchmarked pack. It won't be the worst and it won't be the best. If they say you're the best, they can't sell you additional services to 'fix' your issues, and if they say you're the worst, they might offend you. So you're average, or maybe slightly above.
After going down one of these paths, you've concluded the inane and generally useless benchmarking process. At this point, you have a number to compare against the larger data set. What happens next? Check out the next posting...
P.S. I'm sure there are consultants out there who will disagree with my generalizations. But I've talked to enough consultants 'off the record' to know this is what happens. If you're not among the type doing this, I applaud you.