The greatest failure of the last two years is not the collapse of the securitized mortgage markets, nor the housing collapse that spawned it, but the faulty risk analysis that continues to permeate the financial industry.
The financial industry became enamored of quantitative modeling as the sole arbiter of risk, and it largely collapsed as a result. Wall Street embraced modeling techniques that attempted to quantify risk by placing a numeric value on the exposure a firm was holding. The investment banks, and the “quants” who wrote the programs, truly believed in the “simplification” of risk analysis that these models offered.
Finally, it seemed, there was an evaluation technique that anyone could understand. Just run the “program,” and a daily benchmark would assess the entity’s risk, rendering it comparable and quantifiable. These models became the Holy Grail of risk analysis on Wall Street. One such model, VaR, or Value at Risk, expresses risk as a single number, a dollar figure to be precise. VaR, and the hundreds of models like it, became a crutch: a lazy method of quantifying risk and compartmentalizing that part of the operation.
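Part of VaR’s seduction was how little code it takes. A one-day parametric (variance-covariance) VaR can be sketched in a few lines; the return series, portfolio value, and confidence level below are hypothetical illustrations, not any firm’s actual model:

```python
import statistics

def parametric_var(daily_returns, portfolio_value, z=1.645):
    """One-day parametric VaR at roughly 95% confidence (z = 1.645).
    Assumes returns are normally distributed -- the very
    simplification the models relied on."""
    mu = statistics.mean(daily_returns)
    sigma = statistics.stdev(daily_returns)
    # Dollar loss not expected to be exceeded on ~95% of trading days.
    return portfolio_value * (z * sigma - mu)

# Hypothetical daily returns, for illustration only.
returns = [0.012, -0.008, 0.004, -0.015, 0.007, -0.002, 0.009, -0.011]
print(round(parametric_var(returns, 10_000_000), 2))
```

A single dollar figure comes out, and that is precisely the appeal the article describes: the number looks authoritative regardless of whether the normality assumption, or the inputs, hold.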
It also became a convenient way of transferring blame should the risks materialize. No one, after all, could be faulted when the “quant models” incorrectly assessed the firm’s exposure. Reliance on these models was intellectually lazy, a function of ignorance and of management’s inability to perform critical analysis. It may have been unrealistic to expect that many of these Wall Street analysts, most trained as political science, history, or English majors, could understand the dynamics of corporate risk and perform in-depth analysis of their firms’ balance sheets or profit and loss statements.
Models are subject to the same limitations that auditors face: reliance on “cookbook” audit programs and sampling techniques that provide only limited assurance about an entity’s exposure or risk of financial misstatement. The parallels between the audit profession’s failures of the past 20 years and the current failures are striking. Audit firms, especially the “Big 8,” then “Big 6,” and now “Big 4,” have spent the last 30 years developing audit “tools” that attempt to quantify risk at a basic level: just plug the numbers and the other miscellaneous variables into the model, and the resulting output quantifies the risk, without the need for any independent judgment or critical thinking.
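The mechanical “plug and project” character of these tools is easy to caricature in code. The sketch below is a deliberately naive ratio projection with invented figures; real sampling standards, such as monetary-unit sampling, are considerably more involved, but the point stands: the arithmetic runs whether or not anyone exercised judgment in choosing the sample.

```python
def projected_misstatement(population_value, sample):
    """Naive ratio projection: extrapolate the misstatement found in a
    sample across the whole population. Illustrative only. Each sample
    item is a dict with 'book' (recorded value) and 'error'
    (misstatement found on examination)."""
    sample_value = sum(item["book"] for item in sample)
    error_rate = sum(item["error"] for item in sample) / sample_value
    return population_value * error_rate

# Invented figures: three sampled invoices from a $1,000,000 ledger.
sample = [
    {"book": 1_000, "error": 50},   # $50 overstatement found
    {"book": 2_000, "error": 0},
    {"book": 1_000, "error": 10},   # $10 overstatement found
]
print(projected_misstatement(1_000_000, sample))  # projects $15,000
```

The output is a tidy projected misstatement, and nothing in the calculation asks whether the sample was representative or the population homogeneous.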
The beauty of these audit models was that any junior auditor could “run” them and divine the financial statement impact or risk. Auditors became cogs, interchangeable and completely replaceable. If, after all, anyone could run the models or perform the required sampling analysis without further analysis, the audit firm could claim that it had established the “knowledge base” required to properly assess risk. Human analysis and due diligence were divorced from risk quantification; the models could purportedly assess risk independently of their human operators. We know how this story ended: an avalanche of audit failures that ultimately led to the collapse of Arthur Andersen, the storied audit house.
What did we learn from these repeated audit failures? Not much, it seems, because Wall Street likewise attempted to compartmentalize risk and develop “programs” that neatly assessed exposure. Why no one has drawn these parallels and linked the two failures is perplexing. Enterprise risk, whether operational or financial, cannot be isolated into unique and disparate fragments of information, analyzed independently of the whole, or divorced from human judgment.
The increasing sophistication of today’s business landscape, and its financial statement ramifications, demands due diligence that defies easy quantification. The failure of the Wall Street models was a failure to understand the impact of human behavior on financial statements, and a failure to acknowledge that if even a few “input” variables were materially incorrect, the entire model and its analysis would be rendered meaningless.
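The input-sensitivity problem is easy to demonstrate. In a parametric VaR calculation, an understated volatility input understates the reported risk in direct proportion, and nothing in the model flags the error. A minimal sketch with hypothetical numbers:

```python
def parametric_var(portfolio_value, sigma, z=1.645):
    # One-day parametric VaR at ~95% confidence, zero-mean simplification.
    return portfolio_value * z * sigma

true_sigma = 0.02    # hypothetical "true" daily volatility
model_sigma = 0.01   # mis-estimated input, half the true value

reported = parametric_var(10_000_000, model_sigma)  # what the model shows
actual = parametric_var(10_000_000, true_sigma)     # the real exposure
# The reported figure understates risk in exact proportion to the
# volatility error: here, by a factor of two.
```

The model produces a confident-looking dollar figure either way; only outside scrutiny of the inputs reveals that the reported number is half the real exposure.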
Humans have always been fascinated by “universal” theories that attempt to unite and explain behavior, and these models merely mirrored that desire. Until we accept that no single theory fully captures all behavior, both audit failures and Wall Street failures will continue to occur. Unfortunately, audits have become highly mechanized and automated, given the economic incentives of the audit business model, and the high turnover at the Big 4 firms provides an even greater incentive to automate them as much as possible. I am not hopeful that this behavior will change significantly.