Those few of you out there who are loyal readers know that I oscillate between being a cheerleader for economics and economic/market solutions to problems, and criticizing the tendency of economics to reduce everything to models and dollars. I find myself squarely in the middle of these two poles after reading Nick Kristof’s most recent editorial on aid. Kristof is clearly infatuated with economics in a way only a non-economist could be. To quote at length, we apparently possess:
“…a rigor that other fields in the social sciences don’t – and often greater relevance as well. That’s why [we’re] shaping national debates about everything from health care to poverty, while political scientists often seem increasingly theoretical and irrelevant. Economists are successful imperialists of other disciplines because they have better tools.”
While I’m flattered that I possess such great tools and, apparently, unique intellectual rigor, I think that Kristof is a bit off the mark (perhaps that’s his political science background coming through?).
Too often, both within the field of economics and, perhaps even more, outside of it, the clean mathematical answers that economic models provide are confused with intellectual rigor. As some notable economists (Summers and Krugman among others) have recently pointed out, math does not have a monopoly on intellectual rigor. Legal opinions are extremely rigorous, and most of my attorney friends haven’t taken a math class since high school algebra. This is not to say that economics is not a rigorous field; it most definitely is, and its models can provide elegant solutions to certain problems. But it is a field with limits like any other, and it is certainly not the only rigorous field in the social sciences. I’d also suggest that it does not have the best tools and could learn a great deal from fields such as geography, ecology, and the natural sciences.
Turning to science for inspiration as an economist is not a new idea. Much of the strength of economic thinking, as well as most of its weaknesses, stems from its obsession with physics. Economists, by and large, love mathematical models. If they could reduce everything to Latin symbols and equal signs they’d be all the happier. They, like physicists, like to come up with elegant models that explain why things happen and then go out and test them. And here is where economists run into a problem. The universe, and in turn physics, is governed by certain laws. We may not know what they are, but they certainly exist, and they typically don’t change from day to day. Gravity worked yesterday, it worked today, and it will continue to work for the foreseeable future. So when a physicist comes up with a new model, he or she can go out and test it, see if it works, and know that whether it works or fails today, the conditions that led to that result will not change much tomorrow.
Obviously this only goes so far. As our knowledge of the universe expands, our understanding of the “laws” of physics will continue to change. But the point remains: basic laws, once we get them right, don’t change. The same cannot be said of economics, and there’s the rub. An economist can come up with a model, test it today and find that it works, and then test it tomorrow and find that it suddenly doesn’t. This is because economists don’t model the laws of the universe; they model human behavior. And humans are nothing if not irrational and inconsistent. So attempts by economists to neatly express how humans will behave fall back on assumptions that are not broadly applicable. Thus, economic models give us ideas as to how people may behave in certain circumstances, but these models are not absolutes and should not be treated as such. Human behavior is not gravity. It changes often and for reasons that are not in the models. Economists broadly understand this and don’t claim that their models are absolutely predictive. But the field as a whole remains too heavily reliant on mathematical models of behavior. More work examining individual motivations behind behavior, and less attempting to model a ‘representative agent’, would be helpful. At the same time, recognition outside the field that economic models are not the word of God would be nice as well.
Expecting the general public to have a greater understanding of the limits of economic models brings me to a second problem, however. The public and policy makers are too concerned with “knowing” the absolute correct answer. This is the real reason for the rise of economics in national debates that Kristof points out. The only feature inherent to economics that allows it to dominate current policy debates is its ability to provide one “right” answer. Look at the end of Kristof’s article: “What kind of aid works best? For those who want to be sure to get the most bang for your buck, there is also a ‘proven impact fund’…” These are the questions of a generation raised on cost-benefit analysis that expects immediate, measurable returns for every dollar spent. Characterized by some as the “engineering” style of management, this view sees the world as a series of cause-and-effect relationships. Do A and B will happen; if C happens, then you did something wrong. The real world doesn’t work that way, but it is the dominance of this view that leads to the dominance of economics.
There is real danger in the dominance of both economics and the engineering mode of management. Yes, we should expect that our aid programs work to solve the problems set out before them, and to that end data collection and measurement are important. My brother just spent six months demonstrating this with respect to aid organizations in Afghanistan. So in this regard I agree with Kristof. Randomized field trials can be helpful, and the finding that de-worming kids is more cost-effective than building schools in some areas is important. But maximizing the distance each dollar goes toward accomplishing an aid goal is not the only important thing, and taking the view that it is can dangerously obscure other, equally important, goals.
With respect to aid, first among these is increased understanding of issues. In contrast to the engineering style of management, which calls for specific models of a situation and strict control of the process and results, adaptive management calls for a much more expansive and integrative approach. Critically, adaptive management acknowledges that any approach to a complex problem must cope with substantial amounts of uncertainty (distinct from risk: risk implies we know an outcome is possible and can gauge how likely it is, while uncertainty implies we don’t even know what outcomes exist) and builds in mechanisms to evolve and respond to new information. Adaptive management is much better suited to dealing with problems in the real world. Problems arise, however, when funders take an engineering approach and demand specific models, a strict process, and clear success metrics when they give aid money. These strict processes and success metrics remove the opportunity for adaptation to new information and for research into the roots of problems. It may not be glamorous, but solving these problems requires long-term funding commitments to projects that will not have clear results for many years, if ever.
So while Kristof is right, despite his subpar political science background, that economics possesses some neat tools for solving these problems and that statistical examination of aid programs can improve their effectiveness, he suffers from the same mindset that has given rise to the dominance of economics. An expansion of what is defined as rigorous and what qualifies as good management, along with an acceptance that there are not always clear metrics for success in solving these problems, would serve both economics and the field of humanitarian aid well.