Business analytics model risk (part 0 of 5): framing model risk - the complexity genie and the challenge of deciding on decision models
Introduction to a series of five articles on model risk
Here we introduce a series of five articles seeking to frame, define, and categorize business analytics model risk. The intention is to propose processes and practices for strengthening organizational decision model risk mitigation. The five articles treat a structured sequence of topics, beginning here with a framing of the problem.
The topic of model risk has rapidly come to the fore as a central concern for large organizations. Growing complexity in business decision making and an increasing reliance on IT-based decision systems, many of which become ‘black boxes’, have raised the stakes concerning model risk. The topic has been of particular concern in the finance and banking industries, where poor models have been identified as a central factor in the U.S. mortgage crisis and the subsequent Global Financial Crisis. The still-unwinding Global Financial Crisis has graphically demonstrated the serious repercussions of ‘bad’ (i.e. incomplete, faulty, or misleading) business decision making models.
As industries and organizations beyond banking and finance rapidly adopt complex model-based decision making methods, we are concerned with model risk more generally. In particular, the growth of ‘business analytics’ and ‘Big Data’ as structured approaches to complex business decision making has raised the stakes for improving decision model quality. Complex decision models often become ‘baked into’ systems, where subsequent overreliance can compound errors. ‘Bad’ models are quickly hidden, or subsumed, inside the complex systems and procedures of the modern large enterprise.
Model risk is here specified as ‘business analytics (BA) model risk’ to distinguish it from financial model risk (market and economic decision and risk models specific to finance and banking industry applications), which otherwise dominates the current discourse. This recognizes that much of the literature focuses on model risk in the finance industry, but that the scope of the model risk problem is larger and broader, spanning all industries, and thus deserves a more general discussion and treatment.
Thus, when speaking of model risk, we are referring to organizational decision making in large, complex organizations generally. Regardless of industry, organizational decision models often come down to financial risk, finance being the near-universal measure of organizational health and performance. And although decision model implementation may be purely organizational, that is, not associated with IT systems specifically, we are particularly concerned with decision models as encoded into IT systems: business intelligence (BI), decision support systems (DSS), manufacturing control systems, predictive machine learning, and the like.
In particular, we are concerned here with highly complex ‘analytics’ decision models which become encoded in IT systems, whether algorithmically or otherwise, as automated computational data processing and procedures. This recognizes that decision making in large, complex organizations is increasingly automated by IT systems which encode and embed decision models. The term ‘black box’ refers to the tendency of such systems to trap and hide potentially risky assumptions within models.
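As a minimal sketch of how an assumption becomes ‘trapped’ inside a system, consider the hypothetical loan-approval function below. The function name, the scenario, and the 3.0 leverage factor are all illustrative assumptions, not anything from a real system; the point is that once a modeling choice is buried in deployed code, downstream users of the ‘black box’ no longer see that it was ever a choice.

```python
def approve_loan(income: float, debt: float) -> bool:
    """Approve if annual income covers outstanding debt at least 3x over.

    The factor 3.0 encodes a modeling assumption about acceptable
    leverage that held when the model was built; the deployed system
    keeps applying it silently even if conditions change.
    """
    ASSUMED_LEVERAGE_LIMIT = 3.0  # hidden model assumption ('baked in')
    return income >= ASSUMED_LEVERAGE_LIMIT * debt

print(approve_loan(90_000, 20_000))  # True: 90k >= 3 * 20k
print(approve_loan(50_000, 20_000))  # False: 50k < 3 * 20k
```

Callers of such a function see only a yes/no answer; the risky assumption is invisible unless someone deliberately audits the model inside.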
Organizational decision making is a topic which is difficult to delimit. There are many modes and methods for decision making in large organizations. In particular, some champion the role of intuition over process-focused decision making. Kahneman and Klein have addressed this topic by specifying the conditions under which intuition-based decision making can be useful in their article ‘Conditions for intuitive expertise: a failure to disagree’. They stipulate that “evaluating the likely quality of an intuitive judgment requires an assessment of the predictability of the environment in which the judgment is made and of the individual’s opportunity to learn the regularities of that environment. Subjective experience is not a reliable indicator of judgment accuracy.”
Kahneman and Klein assert that intuition is valuable in very specific venues: environments where experience trumps available data. A related implication is that such venues are rapidly disappearing: the growing availability of data, combined with swelling business complexity, increasingly creates environments where intuition is a poor alternative to structured, data-focused insight.
Multi-venue complexity is increasingly the status quo for large institutions. Business complexity entails, among other things, combinations and permutations of many interacting factors.
We arrive thus at a situation, propelled by globalization and technological development, in which it is difficult to ‘put the complexity genie back in the bottle’. There is a temptation to retreat to intuition, yet intuition itself is increasingly ineffective given the complex of factors which transcend the ability of individuals to make sound decisions. We must progress in the effort to make better decisions in inherently complex environments, yet the decision methods of the past are no longer adequate to the challenge ahead.
The theme of this series thus becomes: we are faced with increasingly difficult business decisions which can only be attacked with structured decision approaches, particularly those which combine large dataset analysis with computational methods. This sentiment has been roughly popularized as ‘Big Data’: the structured practice of ‘business analytics’ applied to large, complex datasets. However, applying structured decision making itself requires decision making via models. The problem is thus one of ‘deciding upon decision models’. The growing challenge for the large enterprise is to specify robust methods for designing, validating, and implementing ‘analytical’ decision models in order to contend with the ‘complexity genie’.
Undergirding this assessment of BA model risk are two key, and troubling, assertions: 1) models are by nature ‘wrong’, and 2) no model can be comprehensively ‘proven to be right’ (validated). Quoting George Box, “essentially, all models are wrong, but some are useful”. In addition to being ‘wrong’ at some level, models can never be fully validated: comprehensive formal validation, in the sense of resolute scientific proof, is methodologically and epistemologically impossible (Balci, 1998; Pidd, 2004). The implied objective is to determine where models are ‘useful enough’ while understanding and managing their inherent ‘wrongness’: the limitations implied by and inherent to their boundary conditions as willful abstractions.
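Box's dictum can be made concrete with a small, self-contained sketch (the data-generating process and all numbers below are illustrative assumptions, not results from this series). A linear model is fit to data from a gently quadratic process: the model is ‘wrong’ everywhere, yet useful near the regime it was fit on, and it fails badly far outside that regime, which is precisely the boundary-condition ‘wrongness’ described above.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# The 'real' system, unknown to the modeler: mildly nonlinear.
true_process = lambda x: x + 0.05 * x ** 2

# Data is only observed in a narrow regime (x = 0..10).
xs = [float(x) for x in range(0, 11)]
ys = [true_process(x) for x in xs]

a, b = fit_line(xs, ys)
predict = lambda x: a + b * x

err_in = abs(predict(5) - true_process(5))      # inside the observed range: small
err_out = abs(predict(100) - true_process(100)) # far outside: the model breaks down
print(f"in-range error: {err_in:.2f}, out-of-range error: {err_out:.2f}")
```

The fitted line is a willful abstraction: ‘useful enough’ where its boundary conditions hold, and dangerously misleading where they do not.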
These assertions are important, but perhaps not immediately intuitive, and so will be ‘unwrapped’ carefully in this series. From them follow the central problems concerning business decision models that the series will explore and treat.
The following article begins with a detailed exploration of the impossibility of comprehensive model validation. This presents a challenge for business analytics practitioners: how do we establish ‘usefulness’, or general, practical ‘good enough-ness’, which is the overarching objective of this series? How do we best decide on our decision models, given that models are ‘wrong’ and cannot be comprehensively validated? This core challenge sets the base conditions for accommodation: understanding and admitting the problem fully is the first step towards addressing it in practice.
End of introduction to a series of five articles on model risk
Ansoff, H. I., & Hayes, R. L. (1973). Roles of models in corporate decision making. Paper presented at the Sixth IFORS International Conference on Operational Research, Amsterdam, Netherlands.
Balci, O. (1998). Verification, Validation and Testing: Principles, Methodology, Advances, Applications, and Practice. In J. Banks (Ed.), Handbook of Simulation. New York: John Wiley & Sons.
Derman, E. (1996). Model Risk. Quantitative Strategies Research Notes. Goldman Sachs. http://www.ederman.com/new/docs/gs-model_risk.pdf
Hubbard, D. W. (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. New York: John Wiley & Sons.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, D., & Klein, G. (2009). Conditions for Intuitive Expertise: A Failure to Disagree. American Psychologist, 64(6), 515–526.
Morini, M. (2011). Understanding and Managing Model Risk: A Practical Guide for Quants, Traders and Validators (The Wiley Finance Series). Chichester: John Wiley & Sons.
Pidd, M. (2004). Computer Simulation in Management Science. New Jersey: John Wiley & Sons, Ltd.
Sargent, R. G. (1996). Verifying and Validating Simulation Models. Paper presented at the 1996 Winter Simulation Conference, Piscataway, NJ.
Shannon, R. E. (1975). Systems Simulation: The Art and Science. Englewood Cliffs, NJ: Prentice-Hall.