
Sensitivity auditing


Sensitivity auditing is an extension of sensitivity analysis for use in policy-relevant modelling studies.[1] Its use is recommended, for instance in the European Commission's impact assessment guidelines[2] and by the European science academies,[3] when a sensitivity analysis (SA) of a model-based study is intended to demonstrate the robustness of the evidence provided by the model, in contexts where the inference feeds into a policy or decision-making process.

Approach

In settings where scientific work feeds into policy, the framing of the analysis, its institutional context, and the motivations of its authors may become highly relevant, and a pure SA, with its focus on quantified uncertainty, may be insufficient. The emphasis on framing derives, among other things, from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by different stories about 'what the problem is' and, foremost, about 'who is telling the story'. Most often the framing includes implicit assumptions, which may range from the political (e.g. which group needs to be protected) to the technical (e.g. which variable can be treated as a constant).

To take these concerns into due consideration, sensitivity auditing extends the instruments of sensitivity analysis to provide an assessment of the entire knowledge- and model-generating process. It takes inspiration from NUSAP,[4] a method used to communicate the quality of quantitative information through the generation of 'pedigrees' of numbers. Likewise, sensitivity auditing has been developed to provide pedigrees of models and model-based inferences. Sensitivity auditing is especially suitable in an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated with the evidence, is the subject of partisan interests. These are the settings considered in post-normal science[5] or in Mode 2[6] science. Post-normal science (PNS) is a concept developed by Silvio Funtowicz and Jerome Ravetz,[5][7][8] which proposes a methodology of inquiry that is appropriate when "facts are uncertain, values in dispute, stakes high and decisions urgent" (Funtowicz and Ravetz, 1992:[8] 251–273). Mode 2 science, a term coined in 1994 by Gibbons et al., refers to a mode of production of scientific knowledge that is context-driven, problem-focused and interdisciplinary. Sensitivity auditing consists of a seven-point checklist:

1. Use Math Wisely: Ask if complex math is being used when simpler math could do the job. Check if the model is being stretched beyond its intended use.

2. Look for Assumptions: Find out what assumptions were made in the study, and see if they were clearly stated or hidden.

3. Avoid Garbage In, Garbage Out: Check if the data used in the model were manipulated to make the results look more certain than they really are, or if they were made overly uncertain to avoid regulation.

4. Prepare for Criticism: It's better to find problems in your study before others do. Do robust checks for uncertainty and sensitivity before publishing.

5. Be Transparent: Don't keep your model a secret. Make it clear and understandable to the public.

6. Focus on the Right Problem: Ensure your model is addressing the correct issue and not just solving a problem that isn't really there.

7. Do Thorough Analyses: Conduct in-depth tests to measure uncertainty and sensitivity using the best methods available (a minimal illustration is sketched after this list).
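
As an illustration of point 7, the sketch below shows one common way to carry out such an analysis: propagating assumed input uncertainties through a model by Monte Carlo sampling, and ranking the inputs with a simple variance-based measure (here, squared standardized regression coefficients). The model, the input ranges and the distributions are hypothetical and serve only to show the mechanics; a real audit would use the study's own model and its documented uncertainty ranges.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical model: a stand-in for the policy model under audit.
    def model(x1, x2, x3):
        return 2.0 * x1 + x2 ** 2 + 0.1 * x3

    # Assumed input uncertainties (uniform ranges chosen for illustration only).
    n = 10_000
    x1 = rng.uniform(0.5, 1.5, n)
    x2 = rng.uniform(0.0, 2.0, n)
    x3 = rng.uniform(-1.0, 1.0, n)
    y = model(x1, x2, x3)

    # Uncertainty analysis: spread of the output induced by the input ranges.
    print(f"output mean = {y.mean():.2f}, std = {y.std():.2f}")

    # Sensitivity analysis: squared standardized regression coefficients (SRC^2),
    # a simple variance-based ranking that is meaningful when the model is
    # roughly linear over the sampled ranges.
    X = np.column_stack([x1, x2, x3])
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, b in zip(["x1", "x2", "x3"], beta):
        print(f"{name}: SRC^2 = {b ** 2:.2f}")

In practice this quantitative step is often performed with more elaborate variance-based methods (for example Sobol' indices), and in a full sensitivity audit it is complemented by the qualitative checks in points 1 to 6, for instance by stating explicitly why the chosen ranges and distributions are defensible.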

Questions addressed by sensitivity auditing

These rules are meant to help an analyst anticipate criticism, in particular of model-based inference feeding into an impact assessment. What questions and objections might the modeller receive? A possible list follows; a minimal quantitative check addressing the first two objections is sketched after it:

  • You treated X as a constant when we know it is uncertain by at least 30%
  • A 5% error in X would be enough to make your statement about Z fragile
  • Your model is but one of the plausible models; you neglected model uncertainty
  • You have instrumentally maximized your level of confidence in the results
  • Your model is a black box; why should I trust your results?
  • You have artificially inflated the uncertainty
  • Your framing is not socially robust
  • You are answering the wrong question
  • Your scenarios capture only a limited set of the possible evolutions of the system
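
The first two objections are the easiest to anticipate quantitatively. The sketch below is a minimal, hypothetical robustness check: a parameter X that a study treated as a constant is perturbed by 5% and by 30%, and the policy-relevant claim (here, that an output Z stays below a threshold) is re-evaluated. The model, the nominal value and the threshold are invented for illustration and are not taken from any real study.

    import numpy as np

    # Hypothetical model linking the contested input X to the policy-relevant output Z.
    def z_of_x(x):
        return 100.0 / x

    x_nominal = 2.0    # value treated as a constant in the original analysis
    threshold = 55.0   # hypothetical policy threshold: the claim is Z < threshold

    for rel_err in (0.05, 0.30):   # the 5% and 30% perturbations raised by critics
        lo, hi = x_nominal * (1 - rel_err), x_nominal * (1 + rel_err)
        # z_of_x is monotone, so the extremes over the interval sit at its endpoints;
        # a non-monotone model would require sampling the whole interval instead.
        z_range = [z_of_x(lo), z_of_x(hi)]
        claim_holds = max(z_range) < threshold
        print(f"X uncertain by {rel_err:.0%}: Z in [{min(z_range):.1f}, {max(z_range):.1f}] "
              f"-> claim 'Z < {threshold}' {'holds' if claim_holds else 'is fragile'}")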

Sensitivity auditing in the European Commission Guidelines

Sensitivity auditing is described in the European Commission guidelines for impact assessment.[2] Relevant excerpts (p. 392) include:

"[… ]where there is a major disagreement among stakeholders about the nature of the problem, … then sensitivity auditing is more suitable but sensitivity analysis is still advisable as one of the steps of sensitivity auditing."
"Sensitivity auditing, […] is a wider consideration of the effect of all types of uncertainty, including structural assumptions embedded in the model, and subjective decisions taken in the framing of the problem."
"The ultimate aim is to communicate openly and honestly the extent to which particular models can be used to support policy decisions and what their limitations are."
"In general sensitivity auditing stresses the idea of honestly communicating the extent to which model results can be trusted, taking into account as much as possible all forms of potential uncertainty, and to anticipate criticism by third parties."

SAPEA report

The European academies' consortium for science advice to policy, SAPEA, describes sensitivity auditing in detail in its 2019 report "Making sense of science for policy under conditions of complexity and uncertainty".[3]

References

  1. ^ Saltelli, Andrea; Guimarães Pereira, Ângela; van der Sluijs, Jeroen P.; Funtowicz, Silvio. 2013. 'What Do I Make of Your Latinorum? Sensitivity Auditing of Mathematical Modelling'. International Journal of Foresight and Innovation Policy 9 (2/3/4): 213–34. https://doi.org/10.1504/IJFIP.2013.058610.
  2. ^ a b European Commission. 2021. “Better Regulation Toolbox.” November 25.
  3. ^ a b Science Advice for Policy by European Academies, Making sense of science for policy under conditions of complexity and uncertainty, Berlin, 2019.
  4. ^ Van der Sluijs JP, Craye M, Funtowicz S, Kloprogge P, Ravetz J, Risbey J (2005) Combining quantitative and qualitative measures of uncertainty in model-based environmental assessment: the NUSAP system. Risk Analysis 25(2):481–492
  5. ^ a b Funtowicz, S. O. & Ravetz, J. R. 1993. Science for the post-normal age. Futures, 25(7), 739–755.
  6. ^ Gibbons, Michael; Camille Limoges; Helga Nowotny; Simon Schwartzman; Peter Scott; Martin Trow (1994). The new production of knowledge: the dynamics of science and research in contemporary societies. London: Sage. ISBN 0-8039-7794-8.
  7. ^ Funtowicz, S.O. and Jerome R. Ravetz (1991). "A New Scientific Methodology for Global Environmental Issues." In Ecological Economics: The Science and Management of Sustainability. Ed. Robert Costanza. New York: Columbia University Press: 137–152.
  8. ^ a b Funtowicz, S. O., & Ravetz, J. R. 1992. Three types of risk assessment and the emergence of postnormal science. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 251–273). Westport, CT: Greenwood.