How do you do a cost benefit analysis
Great question. Thanks for the A2A. The below answer turned into a bit of a tome, but I hope it's helpful.
You can't do a proper cost-benefit analysis on an intangible benefit, so the first thing you need to do is identify a tangible benefit. It might be easy: there are many, many things we can measure besides quantities of money. If you can identify an immediate non-monetary goal, such as a reduction in the amount of time needed to finish routine office tasks, then you can do your cost-benefit in those terms.
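To make the "cost-benefit in those terms" idea concrete, here's a minimal sketch of a time-based analysis. Every number in it (tool cost, hours saved, hourly rate, working weeks) is a made-up illustrative assumption, not data from any real case:

```python
# Hypothetical cost-benefit analysis expressed in time saved on routine
# office tasks, converted to money only at the end. All figures are
# made-up illustrative assumptions.

tool_cost = 5000.0          # one-time cost of the new tool (assumed)
hours_saved_per_week = 6.0  # routine-task time saved per week (assumed)
hourly_rate = 25.0          # loaded cost of one staff hour (assumed)
weeks_per_year = 48         # working weeks per year (assumed)

# Benefit measured first in hours, then translated into dollars.
annual_hours_saved = hours_saved_per_week * weeks_per_year
annual_benefit = annual_hours_saved * hourly_rate
payback_years = tool_cost / annual_benefit

print(f"Hours saved per year: {annual_hours_saved:.0f}")
print(f"Annual benefit: ${annual_benefit:,.0f}")
print(f"Payback period: {payback_years:.2f} years")
```

The point is that the unit of benefit is hours, which you can actually measure, rather than some intangible like "efficiency."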
But what if you want to do a cost-benefit analysis against your organization's overall goals? Even if the goal itself is intangible, you can try to identify something tangible and measurable that relates to your goal. These are called performance metrics or performance measures. A good performance metric has a valid, reliable connection to your underlying goals, and can be measured with reasonable effort.
Let's use the example of a local campaign to encourage children to read. The obvious metric is the amount of time children spend reading; it's tangible and directly related to your goal. But it's hard to collect. You can't assign someone to follow each child around all day to find out whether they're reading.
You could convince the schools to ask students to keep reading logs - some schools do this. But it requires a lot of work by a lot of people, and it means trusting students to keep accurate records. So what other metrics could you use? Perhaps you could ask local school libraries to send you monthly records of the number of books checked out. School libraries are only one possible source of books for kids, but if that number goes up, it does suggest kids are reading more.
Here's a harder one: let's say you run a shelter for battered women. How would you measure success?
First off, the number of women you help might sound like a good metric, but it's not. Why? Well, when you first open and you're trying to get word out about your services, more women arriving means your message is being received. But imagine you've been in operation for a few years, and most of the time you have a lot of excess capacity. One day the local factory unexpectedly lays off most of its workers, who then go home and take out their frustrations on their wives. Now you're full to capacity and more. Do you feel like today has been an unusually successful day for your non-profit? I certainly hope not.
A better metric for an already-established shelter would be the number of women you turn away. If you never have to turn anyone away, that suggests you're able to help all the women who need it. However, you'll never be quite sure whether there are women who need help but either don't know about you or for some reason opt not to use your services. Also, this metric says nothing about preventing abuse in the first place. Now, that could be a valid position for a shelter: let other domestic violence non-profits tackle prevention, and focus on your core goal of providing a safe haven.
But maybe you offer counseling to women staying with you, and try to encourage women to leave unhealthy relationships. How would you know whether that's having an effect? Well, you could look at the number of women who stay with you more than once. If that number goes down, it might suggest you've been successful in convincing women to find lasting solutions to their problems.
There's plenty of information about performance metrics out there, so now that you know what to look for, I won't go into any more detail. Instead, I want to introduce a cautionary note: metrics can be worse than useless if they aren't done right. There's an old saying in management that you get what you measure. The implication is that getting what you measure often turns out to be different from getting what you want.
One problem is that the very act of tracking a metric can cause it to lose its validity as a measure of progress toward your goals. Take our first example of metrics on student reading. If students are self-reporting their reading times, and the schools offer incentives to students to read more, the students now have a reason to inflate their reports. Tracking library check-outs would be less subject to direct bias: under normal conditions it would be safe to assume that students wouldn't check out a book they didn't intend to read. But if the school were trying to encourage reading and library check-outs were the metric it tracked, a likely result is that teachers would start encouraging students to check out books, even books the students are unlikely to read. So you get an increase in check-outs, but not necessarily an increase in reading. This type of bias doesn't have to be consciously deceptive: it simply results from people trying to meet the goals they're judged against.
A more insidious problem is that in reality, there are probably many different aspects of the final outcome that you care about. Important things you never thought about could be sacrificed to improve the metrics you chose to focus on. I remember reading in another Quora answer that the productivity of Soviet truck factories was measured by the total weight of all the trucks produced (rather than the number of trucks produced). This was presumably intended to correct a bias that would make factories producing larger numbers of smaller trucks look more productive. But no one thought to impose standards on the weight of each individual truck. As a result, Soviet factories produced trucks that were ridiculously heavy, improving their "productivity" at the cost of good vehicle design.
Going back to the example of the battered women's shelter: you're dealing with complex social phenomena in that scenario, and there are a lot of ways that simplistic metrics could go badly wrong. Say you chose to treat declining numbers of repeat visits as a positive sign. You might discover that your staff - trained to think of repeat visits as a sign of failure on their own part - was unconsciously (or even deliberately) discouraging women from returning, regardless of whether the women's domestic situations had improved. At a minimum it would be important to consider more than one metric in this scenario, and to monitor the situation directly to ensure numbers meant what you thought they meant.
In short, if you're going to do a formal cost-benefit analysis, do it carefully. If a formal analysis tells you something that conflicts with your intuitive sense of the best option, would you trust your intuition, or the analysis? It's not obvious that you should trust the analysis. The output of a formal cost-benefit analysis can never be any better than its inputs, so if any bad assumptions went into the analysis, or any important factors were overlooked, the results could be totally skewed.
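The "no better than its inputs" point is easy to demonstrate numerically. Here's a small sketch, again with entirely hypothetical numbers, showing how a modest error in one estimated input can flip the conclusion of an otherwise identical analysis:

```python
# Sketch of input sensitivity in a simple cost-benefit calculation.
# All figures are hypothetical; the point is that a 25% error in one
# estimate turns an apparent gain into a loss.

def net_benefit(annual_benefit: float, annual_cost: float, years: int) -> float:
    """Simple undiscounted net benefit over a fixed time horizon."""
    return (annual_benefit - annual_cost) * years

# Analysis with the original (optimistic) benefit estimate.
base = net_benefit(annual_benefit=10_000, annual_cost=8_000, years=5)

# Same analysis with the benefit estimate revised down by 25%.
shaky = net_benefit(annual_benefit=7_500, annual_cost=8_000, years=5)

print(f"Original estimate: {base:+,.0f}")  # a positive net benefit
print(f"Revised estimate:  {shaky:+,.0f}") # the same project now loses money
```

If a single soft estimate can swing the verdict like this, the formal-looking output deserves no more trust than the guesswork behind it.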
I'd argue that many organizations - especially smaller ones - would be better off saving the time and effort of formal quantitative cost-benefit analyses and instead focusing on careful, thorough qualitative planning and evaluation. I know a lot of people would disagree with me on that, but that's my take.
103 views • Written 47w ago • Not for Reproduction • Asked to answer by Alaa Batayneh • Source: www.quora.com