For about 70 years economists have implored statistics agencies to publish measures of statistical error together with their point estimates. To little avail. Early supplicants were Simon Kuznets and Oskar Morgenstern. More recently, Charles Manski, an economist at Northwestern University, has joined the petitioners at the gates of governments' data fortresses. His most recent complaint about the absence of measures of uncertainty in government statistics is his paper "Communicating uncertainty in policy analysis", which recently appeared in PNAS.
In this paper, which focuses on the US, Manski presents his "typology of practices that contribute to incredible certitude," discusses examples of the six different types of practices, and distinguishes between transitory, permanent, and conceptual statistical uncertainty. Such classifications are excellent time fillers for lectures and useful for exams. But what else can we do with them? Do they help to convince policy makers to take uncertainty measures into account? We can't be sure. Manski deplores policy makers' disregard for measures of uncertainty. What he doesn't do is show that policy making which takes measures of uncertainty into account would consistently lead to better policy outcomes, and not merely to a better informed policy-making process. Unless there is evidence of bad policy outcomes caused by disregard for measures of uncertainty, policy makers will have little demand for such measures, and statistical service organisations will supply them only scantily.
Perhaps we should learn more about what happened in weather forecasting, where we regularly get probability forecasts. Why do we get a probability forecast for rain tomorrow but no probability forecast for GDP growth in the next quarter? What made the meteorological offices adopt such forecasts? I wouldn't expect it was abstract, enlightened insight into the economic value of such uncertainty measures. I would expect it was demand from some interest groups, probably expressed through votes and party donations.
As an aside, the cost of providing uncertainty information can hardly explain its scant supply. Only this week Wolfram Language 12.0 was launched. The language now includes the object Around[x, delta], which represents a value around x with uncertainty delta. Combined with other functions of the Wolfram Language, Around can do many useful things related to the measurement and communication of statistical uncertainty. Using the Wolfram Language is cheap. Perhaps using R is even cheaper.
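To illustrate, here is a minimal sketch assuming Wolfram Language 12.0 or later; the numbers are purely hypothetical and stand in for any official estimate published with a standard error:

    (* Two hypothetical estimates, each with a standard error *)
    realGrowth = Around[2.1, 0.4];  (* e.g. real GDP growth of 2.1% with uncertainty 0.4 *)
    inflation  = Around[1.8, 0.3];  (* e.g. inflation of 1.8% with uncertainty 0.3 *)

    (* Arithmetic on Around objects propagates the uncertainties (assuming independence) *)
    nominalGrowth = realGrowth + inflation
    (* => Around[3.9, 0.5], since Sqrt[0.4^2 + 0.3^2] = 0.5 *)

In other words, the machinery for carrying uncertainty along with point estimates is already available off the shelf; whatever holds back statistical agencies, it is not the cost of the tools.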