Misleading Evidence and Evidence-Led Policy: Making Social Science More Experimental
Research evidence can and should play an important role in shaping public policy. Just as much of the medical community has embraced the concept of "evidence-based medicine," increasing numbers of social scientists and government agencies are calling for an evidence-based approach to determining which social programs work and which do not. It is an irony not lost on the social scientists writing for the September volume of The Annals that the first use of experimental methods in medicine (to test the effects of streptomycin on tuberculosis in the late 1940s) was actually conducted by an economist. Yet while more than one million clinical trials have been conducted in medicine since that time, only about 10,000 have been conducted to evaluate whether social programs achieve their intended effects.

The authors of the September volume argue that this level of investment in the "gold standard" of research designs is insufficient, for a wide range of reasons. Randomized controlled trials are far better at controlling selection biases and chance effects than observational methods, and the econometric and statistical techniques that seek to correct for such biases fall short of their promise. The volume dramatically demonstrates that alternative methods generate different (and often substantially wrong) estimates of program effects. Research based on nonexperimental designs can actually mislead policy makers and practitioners into supporting programs that don't work while ignoring others that do.

The authors also directly address critiques of experimental designs, which range from questions about their practicality to their ethics. Some of these arguments are well taken but addressable; others they reject as unfounded and damaging to social science. Policy makers will find these articles invaluable in better understanding how alternative research methods can mislead as much as enlighten.
Students and researchers will be confronted with powerful arguments that question the use of nonexperimental techniques to estimate program effects. This volume throws down the gauntlet. We challenge you to pick it up.