Source: Austhink, http://www.austhink.org/monk/ltcm.htm
By Paul Monk
An address at a dinner to open an Australian Taxation Office conference on Intelligence and Risk Management, Canberra, 10 February 2004.
***
“When the tide goes out, we’ll find out who’s been swimming butt naked.” - Warren Buffett
***
On 29 August 1994, Business Week ran a feature article entitled ‘Dream Team’. It was about a man called John W. Meriwether and the team of brilliant quants he had gathered around him, in 1993-94, to conduct what the magazine called ‘the biggest gamble of his life’. The gamble was to see whether, in a partnership calling itself Long Term Capital Management (LTCM), they could outperform his old firm, Salomon Brothers, in the bond markets, with a fraction of Salomon’s capital and staff.
Most of his team had worked for him in bond arbitrage at Salomons for up to a decade, raking in up to 90% of the firm’s net profit. They included Lawrence Hilibrand, Victor Haghani, Eric Rosenfeld, Gregory Hawkins and William Krasker. All, except Haghani, had graduate degrees from Harvard or MIT in mathematical economics, finance or computer science. Haghani’s degree was from the London School of Economics.
They were joined, at LTCM, by David Mullins Jr., erstwhile Vice Chairman of the US Federal Reserve, and two brilliant mathematical economists, Myron Scholes and Robert C. Merton, by then at Stanford and the Harvard Business School respectively. Those two were to win the Nobel Prize, in 1997, for their work on the pricing of options and other derivatives, and they had taught many of the leading figures on Wall Street since the 1970s. Meriwether had gone to the very top of academe in putting together his dream team.
From 1994 to 1997, LTCM made breathtaking profits on its trades and the whole of Wall Street was seduced. In 1997, LTCM made a profit of $2.1 billion – more than corporate giants like McDonald’s, Nike, Disney, Xerox, American Express or Merrill Lynch. Then, in 1998, the market began to turn against them. Unwaveringly confident in their models and their judgment, the quants increased their exposure, keeping all of their own capital in the fund.
At the beginning of 1998, LTCM had $4.7 billion in capital, of which $1.9 billion belonged to the partners. It had ridden out the Asian financial crisis of late 1997 with barely a murmur. It had $100 billion in assets, virtually all borrowed, but through thousands of derivative contracts linking it to every bank on Wall Street, it had over $1 trillion worth of exposure.
In August 1998, the Russian economy went into a tailspin. This was to be the trigger for LTCM’s demise, but none of the geniuses there realized what was about to happen. They actually decided to buy big in Russian bonds, calculating that others would sell in a panic, but Russia would stabilize. It didn’t. And the IMF refused to bail it out. The consequent shock to what amounted to a moral hazard bubble in fragile markets around the world triggered a global securities crisis for which LTCM was utterly unprepared and which left it perilously exposed. You see, it had bought the riskiest bonds everywhere, believing that such diversification would buffer it against any given one going bad.
Not only had the geniuses got Russia wrong, they had believed unquestioningly that the wide diversification of their bets would shield them from any crisis. They found out, with stunning swiftness, in late August 1998, that this, too, had been an error. Once the panic set in, it became global. The quants had calculated mathematically that they would never lose more than $35 million in a day. On Friday 21 August 1998, in the wake of Russia defaulting on its debts, LTCM lost $553 million of its capital in one day. Over the next five weeks, three quarters of its capital evaporated, as the markets refused to do what the quants’ models dictated.
During those five weeks, a notable feature of LTCM’s behaviour was its strenuous effort to hold its ground. The partners repeated to all and sundry that the problem was the irrationality of the markets: the markets were bound to correct, vindicating the models, and if the fund held its ground it would reap huge profits. Meriwether and his quants really believed that a great harvest was just around the corner. But the markets were in meltdown.
The partners lost everything. They created a $1 trillion financial hole that threatened to suck the whole of Wall Street into it. The Federal Reserve Bank of New York stepped in and induced the banks to buy LTCM out, in order to head off a wider crisis, but the models were still not vindicated over the following twelve months and LTCM was closed down. As Roger Lowenstein sums up the case, in his book on the subject: “In net terms, the greatest fund ever – surely the one with the highest IQs – had lost 77% of its capital, while the ordinary stock market investor had been more than doubling his money.”
One dollar invested in LTCM had risen to $4.00 by April 1998. By September, it had fallen to 23 cents. I find that story endlessly fascinating. A dream team of brilliant economists, mathematicians and computer scientists, certain of their positions, with years of experience in arbitrage and the whole of Wall Street behind them, lose all their capital and cause $1 trillion to evaporate in five weeks! That’s interesting, don’t you think? How was it possible?
I have undertaken to talk to you, this evening, about intelligence and risk management. As I would see it, risk management is about hedging our bets against possible divergences between our calculations and the way the world works. Intelligence is about assessing how the world is, in fact, working and adjusting our calculations accordingly, to the best of our ability.
I’ve called my talk this evening ‘Meriwether and Strange Weather’ because I want to use the LTCM debacle to reflect more broadly on how our calculations relate to the way the world works and the consequences when what happens is seriously at odds with commitments based on our calculations – when, as it were, merry weather gives way to strange weather. But I won’t confine my remarks to the LTCM case, interesting though it is. Rather, I want to share with you a few other cases, from outside the world of business and finance, in order to show that the underlying issues are common, across a wide field of human endeavors.
Those underlying issues are not to do with the mysteries of stocks and prices, or bond markets and derivatives. They are to do with the mysteries of the human mind. They are to do with the roots of error in perception and judgment, which vitiate our intelligence analyses and our risk management practices. Such errors are clearly at work in the LTCM case, to which I shall return, but they will be seen in deeper perspective if we look, first, at a couple of other cases - from the dramatic world of geopolitics.
The first case I’d like to share with you is that of the fall of France to Germany in 1940. You have all heard, I would think, of the Maginot Line – the famous defensive redoubt that the French built, after the First World War, to ensure that never again would Germany be able to invade France. It is famous chiefly because, in May 1940, the Germans bypassed the Maginot Line and overran France in six weeks. This is so widely known as to be almost proverbial. What is not at all widely understood, however, are the reasons why this was possible.
Let me quote to you the words of one of our leading analysts of intelligence and statecraft, Harvard University’s Ernest R. May, from his ground-breaking study, Strange Victory, published in 2000:
“More than anything else, this happened because France and its allies misjudged what Germany planned to do. If leaders in the Allied governments had anticipated the German offensive through the Ardennes, even as a worrisome contingency, it is almost inconceivable that France would have been defeated when and as it was. It is more than conceivable that the outcome would have been not France’s defeat but Germany’s and, possibly, a French victory parade on the Unter den Linden in Berlin.”
So shocking was the collapse of France, however, that it led, after the Second World War, to a conventional wisdom that France had been doomed to defeat in 1940; that it was always going to buckle under the German assault.
In May’s words:
“Three conclusions were thought obvious. First, Germany must have had crushing superiority, not only in modern weaponry but in an understanding of how to use it. Second, France and its allies must have been very badly led. Third, the French people must have had no stomach for fighting…Now, sixty years later, in light of what is known about the circumstances of France’s defeat, none of these conclusions holds up well.”
Germany did not have a crushing superiority. Far from it. Indeed, in computer simulations of the campaign of 1940, the Allies always win, unless human error on their part or an arbitrary force advantage for the German side is added to the historical equation. Nor was leadership lacking in France; and, as for morale, on closer examination it turns out to have been the German public and the German generals, far more than the French or British, who had no appetite for war and expected to lose it.
How, then, do we explain what happened? Quite simply by looking at the failures of Allied intelligence and risk management. And we need to begin by realizing that, in May’s words:
“the whole history of the period 1938-40 is misunderstood if one fails to keep in mind the high level of confidence that prevailed in France and Britain prior to May 1940…Confidence that France had superiority and that Germany recognized this superiority made it difficult for French and British leaders to put themselves in the place of German planners, whom Hitler had commanded to prepare an offensive no matter what their opinions about its wisdom or feasibility might have been. Imagination was not paralyzed; far from it…But the possibility that the Germans might use ingenuity to shape a surprise version of a frontal offensive seemed too fanciful for consideration.”
As it happens, over-confidence in the accuracy and reliability of our judgments is an endemic human failing. This should not be confused with stupidity or ignorance or laziness. It is more interesting than all those things. It is a sort of blind-spot to which we are all prone, and it is especially likely to set in just when there are actually good grounds for believing that our track record of judgment has, indeed, been sound. We develop mind-sets which filter reality and blindside us, screening out phenomena that do not accord with our fixed opinions, perceptions or assumptions.
Ironically, the conquest of France turned out to be the high point of Hitler’s success. From that point forward – indeed, beginning at Dunkirk, in June 1940 – he made one mistake after another. As Bevin Alexander explains, in his book How Hitler Could Have Won World War II, Hitler won an incredible series of victories up to June 1940, but “his victory over France convinced him that he was an infallible military genius. He did not see that the victory came not from his own vision, but from that of two generals, Erich von Manstein and Heinz Guderian.” Possessed by this illusion of infallibility, he made error after error and brought catastrophe on himself and his country.
Alexander remarks:
“It did not have to be. Hitler’s strategy through mid-1940 was almost flawless. He isolated and absorbed state after state in Europe, gained the Soviet Union as a willing ally, destroyed France’s military power, threw the British off the continent and was left with only weak and vulnerable obstacles to an empire covering most of Europe, North Africa and the Middle East.”
But he abandoned the strategy of attacking weakness and threw his forces into a gigantic attack on the Soviet Union, which proved to be his undoing.
This brings me to the second geopolitical case study that I’d like to share with you this evening – Stalin’s failure, in the first half of 1941, to heed the signs that Hitler was preparing an all-out attack on the Soviet Union. You see, the irony of that situation was that Stalin judged Hitler to be more rational than in fact he was. Stalin believed Hitler would never attack the Soviet Union unless or until he had settled accounts with the British Empire, because to do so would expose Germany to a war against too many enemies at once. In Hitler’s position, Stalin reasoned, he himself would not do this; and Hitler, so he believed, was like him and saw the world in the same way.
In an acclaimed study of this error on Stalin’s part, published in 1999, Gabriel Gorodetsky pointed out that Stalin had overwhelming intelligence warning him of German preparations and intentions, but simply refused to accept what this intelligence was telling him. When his top generals briefed him on the subject and pleaded with him to allow them to put the Red Army on alert, he dismissed them as fools. Just ten days before the German offensive began, Stalin told his supreme commanders, Zhukov and Timoshenko, “Hitler is not such an idiot and understands that the Soviet Union is not Poland, not France and not even England.”
What was happening here was not simply that Stalin was himself completely blind or an idiot. It’s more interesting than that. He was assuming that he understood the way Hitler’s mind worked; that Hitler saw the Soviet Union the way he himself did; that he could cut a deal with Hitler. He also had an alternative interpretation of the mobilization of German forces along his borders. He believed Hitler was engaging in a gigantic game of bluff. He would demand concessions and these Stalin could negotiate over; but Hitler’s real strategy would be to move against the Middle East.
As it turned out, Stalin was mistaken on every count. The consequences were calamitous. When the Nazi juggernaut was unleashed against the Soviet Union, the Red Army was wholly unprepared. Within weeks, half its air force was destroyed, three million soldiers were killed or captured and vast areas of the western Soviet Union were overrun. Stalin was stunned and speechless, when informed that the offensive had begun. When Zhukov and Timoshenko encountered him at the Kremlin, they found him pale, bewildered and desperately clinging to his illusion that such an attack would not happen. It must be a provocation by German officers who wanted war, he asserted. “Hitler surely does not know about it.”
This last incident is fascinating. It shows Stalin engaging in what cognitive scientists call Belief Preservation. Actually, he’d been doing so for weeks, if not months before the war started. Belief Preservation is a form of cognitive behaviour in which we cling to an opinion, ignoring or devaluing evidence which conflicts with it and grasping at straws to defend it. It’s actually alarmingly common. So common, in fact, that it should be understood to be a default tendency of human beings generally. It is the flip side of another endemic cognitive tendency called Confirmation Bias, which leads us to seek or select information that supports our fixed ideas.
Confirmation Bias kicks in, for the most part, in a wholly subconscious manner. It can be shown in simple exercises that we all have a propensity to seek to confirm our hunches or hypotheses, rather than to test and refute them. Our brains simply work that way by default. In cases where the data we have to analyse are complex and allow of more than one possible interpretation, this has real, practical consequences. It plainly has implications for both intelligence and risk management, all the more so because it is at its most insidious when wholly intuitive and unselfconscious. Recall May’s observation that, from an Allied viewpoint in early 1940, the possibility that the Germans might use ingenuity to shape a surprise version of a frontal offensive seemed “too fanciful for consideration.”
Good intelligence work and risk management must take factors such as these into account. Much of the time they do. In the geopolitical cases that I have briefly mentioned, however, critical review of assumptions was crucially missing. The Allies failed to cross-examine their blithe assumption that the Germans would not attempt a surprise version of a frontal offensive. Had they been less presumptuous, May shows, the evidence needed for them to correct their assumption was in plain view. They simply did not look. Similarly, had Stalin paused to ask himself whether his assumptions were correct, about Hitler’s good sense or about his perception of the risk in attacking the Soviet Union, he would have been able to look at the intelligence available to him in a whole new light.
It is striking that the Germans, in 1940, went to great lengths to war game and test their assumptions and plans before attacking France. Hitler gave his generals the freedom to do this, especially at the big command bunker at Zossen, just outside Berlin, in late 1939 and early 1940. In 1941 and after, he restricted this freedom and his errors mounted quickly. Even so, during the war games of early 1940, the top German generals came to realize that their plan of attack was a gamble.
The Army Chief of Staff, General Franz Halder, informed his colleagues that the Wehrmacht would “have to resort to extraordinary methods and accept the accompanying risks…For only this can lead to defeat of the enemy.” It was a finely calculated gamble and it paid off – but there is a sense in which the Germans ‘lucked out’ – and Hitler, in particular, drew the wrong conclusion entirely from this: that his intuitive judgment had been flawless and should be backed in future with less critical review.
Overconfidence, blind-spots, mind-sets, unexamined assumptions, belief preservation and confirmation bias are just some of the ways in which our judgment can be flawed. They happen, as these few examples show, at the highest levels and can have the most catastrophic consequences. In 1940, these phenomena blind-sided the Allies, leaving them open to a surprise offensive which ruptured what they believed were impregnable defences and led to the fall of France within six weeks. They then infected Hitler’s already unstable mind and led him into a long series of misperceptions and strategic errors that brought about his downfall and the devastation of Germany. In 1941, they blind-sided Stalin, leaving the Soviet Union wide open to attack and costing the Red Army millions of casualties before Hitler could be brought to a halt outside Moscow.
Now, using these geopolitical cases as a kind of background, adding depth of perspective, let’s return to the 1998 Long Term Capital Management debacle. In that case, we find all the same phenomena at work. The fact that bonds and derivatives, rather than tanks and planes, and investors’ money, rather than soldiers’ lives, were at stake makes almost no difference, cognitively speaking. I believe that awareness of the geopolitical cases helps us to see more clearly what was going on at LTCM in 1998.
You see, the key to the whole LTCM case was that a group of very bright people persuaded themselves, and many others, that they had a strategy that could not fail. It was a strategy based on brilliantly conceived mathematical models of stock and bond price movements. Grounded in the work of Fischer Black and Myron Scholes (the so-called Black-Scholes model), these models allowed the arbitrageur to place bets on those movements with a confidence unavailable to more traditional, non-quantitative professionals and unfathomable to the proverbial man in the street.
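For those of you who like to see the machinery, the standard Black-Scholes price of a European call option can be written as follows. I give it only as background; nothing in my argument depends on the algebra:

    C = S_0\,N(d_1) - K e^{-rT} N(d_2), \qquad
    d_1 = \frac{\ln(S_0/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad
    d_2 = d_1 - \sigma\sqrt{T}

Here S_0 is the current price of the underlying security, K the strike price, r the risk-free interest rate, T the time to expiry, \sigma the volatility (assumed constant) and N the cumulative distribution function of the standard normal distribution, the bell curve. Notice how much of the formula’s apparent precision rests on that final symbol.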
John Meriwether and his quants believed that they had, for practical purposes, eliminated the uncertainty in market movements by quantifying risk as a function of the past volatility of a stock or bond price. In short, LTCM was a deliberate and confident experiment in managing risk by the numbers. And such were the reputations of the partners – the traders from Salomons and the academics from Harvard and MIT – that all the biggest financial institutions on Wall Street and others besides invested heavily in the experiment.
The models, however, were built on a number of fundamental assumptions. First, that stock and bond price movements, like other random events, form a classic bell curve, with extreme movements regressing to the mean. Second, that the volatility of a security is constant and that prices move in continuous time, which is to say without discontinuous jumps. Third, that the market is efficient, which is to say that it accurately and rapidly corrects fluctuations and draws prices back to the correct value of the security.
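To make the first of those assumptions concrete, here is a small illustrative calculation. I stress that it is mine, not anything LTCM ran, and the ten-standard-deviation daily move it uses is purely illustrative. It compares the chance of such a move under a bell-curve model of returns with the chance under a fat-tailed alternative:

    # A minimal sketch, not LTCM's actual risk model: how likely is a one-day
    # move of ten standard deviations if returns follow a bell curve, versus a
    # fat-tailed Student-t distribution? The numbers are illustrative only.
    from scipy.stats import norm, t

    move = 10  # size of the one-day move, in standard deviations

    p_bell_curve = norm.sf(move)     # tail probability under the normal model
    p_fat_tailed = t.sf(move, df=3)  # tail probability under a Student-t with 3 degrees of freedom

    print(f"Bell-curve model: {p_bell_curve:.1e}")  # roughly 8e-24: effectively 'cannot happen'
    print(f"Fat-tailed model: {p_fat_tailed:.1e}")  # roughly 1e-03: about once in a few years of trading days

The particular numbers do not matter; the shape of the comparison does. On a bell-curve view, a day like LTCM’s $553 million loss is an event the models say should effectively never occur; on a fat-tailed view of markets, it is merely rare.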
Now, there are two interesting things to observe about these assumptions. The first is that, for almost four years after the founding of LTCM, the fund’s performance was consistent with their being true. Hence both the extraordinary returns the fund earned and the relentless increase in the confidence of the partners that their models and even their intuitive judgment could not be faulted. But the second interesting thing about the three key assumptions is that they were all actually false. What is even more interesting is that they had been shown to be false by the time LTCM was formed. Yet the geniuses who set up the fund clung to their beliefs not only as the fund prospered, but even in the wake of its spectacular downfall.
Decades before LTCM was set up, John Maynard Keynes had remarked mordantly, “Markets can remain irrational longer than you can remain solvent.” The geniuses at LTCM found this out the hard way. None other than Paul Samuelson, a mentor of LTCM partner Robert Merton and himself the first American to win a Nobel Prize in economics, told Lowenstein that he’d had doubts about the LTCM venture right from the start. Why? Because the Black-Scholes formula required something that was improbable at best – that stock and bond price movements actually did form regular bell curves over time. And none other than Lawrence Summers remarked that “The efficient market hypothesis is the most remarkable error in the history of economic theory.”
In other words, the LTCM partners set up a highly ambitious experiment in risk management on the basis of a dubious hypothesis to which they were wedded as if it were indubitably true; they suffered from overconfidence on account of their brilliance, their certitude and their track records; they had blind-spots where evidence contrary to their beliefs was concerned; they suffered from confirmation bias, seeing their success as proof of their hypothesis rather than as good fortune that might be consistent with their hypothesis actually being mistaken; they suffered from belief preservation, such that even when the market turned against them in 1998, they increased their exposure rather than reining it in; and such was their belief preservation that they insisted, even after the total collapse of their fortunes, that their hypothesis had been correct.
These are surely rather remarkable facts. They give a whole new meaning to the title Roger Lowenstein used for his book, When Genius Failed. His own analysis of the problem was that the LTCM partners suffered from greed and hubris. His account hints at various other things going on, but the language of cognitive science is missing from it more or less completely. Yet he did want to make a more profound analytical point.
That point is that you cannot manage risk by the numbers, at the end of the day, because markets are not dice and prices do not behave like inanimate random phenomena. Markets are collectivities of human beings, making emotional decisions under conditions of irreducible uncertainty, which are then reflected in price movements. Eugene Fama, he points out, had shown in the 1960s that prices do not form elegant bell curves. There are discontinuous price changes and what he called ‘fat-tails’, which is to say more, longer and less predictable swings to extremes than in the case of coin flips or dice rolls.
This claim will discomfort many an actuary or mathematician. After all, sophisticated risk management only became possible from the seventeenth century, with the development of modern probability theory, and it has contributed enormously to modern insurance, banking and investment. Some of you will be familiar, I’m sure, with Peter Bernstein’s marvelous history of this subject, Against the Gods. So Lowenstein cannot, surely, have been suggesting that we should throw mathematics overboard and do risk management by intuition. I don’t think he was, though at times he seems to veer a little in that direction.
He does make an intermediate observation, however, which is worth reflecting on and which may be of professional interest to those of you here this evening. It is this:
“…Every investment bank, every trading floor on Wall Street, was staffed by young, intelligent PhDs, who had studied under Merton, Scholes or their disciples. The same firms that spent tens of millions of dollars per year on expensive research analysts – ie stock pickers – staffed their trading desks with finance majors who put capital at risk on the assumption that the market was efficient, meaning that stock prices were ever correct and therefore that stock picking was a fraud.”
This is the sort of observation that really interests me. There is a contradiction here at the most fundamental level in the thinking at all the major financial institutions on Wall Street. Yet, plainly, this contradiction is not apparent to those who run those institutions. Why, if it was, would they spend the tens of millions of dollars to which Lowenstein refers?
I don’t want to digress, this evening, into exploring this particular contradiction. I want to make a larger, more general point. You see, such contradictions, at quite a fundamental level, exist in a great many institutions. They exist in many areas of public policy. They exist, like computer viruses wormed deep into the code, within the standard operating procedures of many organisations and within the tenets of political ideologies and religious systems of belief. And they account, I suggest, for many of our most deeply embedded frustrations, puzzlements and institutional failures.
When intelligence collection or analysis is being done, when risk management is being undertaken, such deeply embedded contradictions, I suggest, are seldom even visible to those who work within institutional frameworks and sets of assumptions which they have come to take entirely for granted. Frequently, our most basic assumptions about what constitutes an intelligence requirement or a risk are of this deep and unexamined character.
It is the same with the mindsets of groups of professionals, or of top decision-makers, which is why they are able to be blind-sided in the manner that the Allies were by Hitler in 1940 and Stalin was in 1941. And here’s the rub: not only are such contradictions, such assumptions, such mindsets deeply embedded and largely invisible, but institutions and professionals, both as groups and as individuals, tend strongly to resist any effort to expose and challenge them. The consequences are not only occasional disasters, but chronic inefficiencies and transaction costs that, for the most part, we’d be better off without.
Well, I’ve covered a fair bit of ground very quickly. Let me attempt to bring things together with reference to a different case again – the current debate about real or apparent intelligence failures, including failures of risk assessment, in the war last year against the regime of Saddam Hussein. I won’t address the debate as a whole. It’s far too broad for that. I just want to draw your attention to an article in The New York Times last week by regular opinion columnist David Brooks.
Brooks begins by pointing out that the CIA does not seem to have been politically pressured into reaching the intelligence findings favored by the White House. Yet it reached false findings, and this he finds even more disturbing than the idea that the findings were politicized. The problem, he claims, goes far deeper than who is in the White House: it is the CIA’s scientific methodology itself which has led to flawed intelligence work and consequent errors in risk assessment and strategic policy.
In his own words:
“For decades, the US intelligence community has propagated the myth that it possesses analytical methods that must be insulated pristinely from the hurly burly world of politics. The CIA has portrayed itself, and been treated as, a sort of National Weather Service of global affairs. It has relied on this aura of scientific objectivity for its prestige, and to justify its large budgets, despite a record studded with error…If you read CIA literature today, you can still see scientism in full bloom…On the CIA website you can find a book called Psychology of Intelligence Analysis, which details the community’s blindspots. But the CIA can’t correct itself by being a better version of itself. The methodology is the problem…”
You might detect a certain parallel here between Brooks’s critique of the CIA’s alleged methodology and Lowenstein’s critique of LTCM’s methodology. There is no question that those who lock themselves into a fixed way of reading reality are riding for a fall, because, as Eugene Fama put it, “Life always has a fat tail.” But Brooks is otherwise swinging rather wildly.
The book to which he refers, Psychology of Intelligence Analysis, was written by a long-time analyst and instructor at the CIA’s Sherman Kent School, Richards J. Heuer Jr. I can strongly recommend it to you, and I suggest that the problem at the CIA has not been slavish adherence to some dogmatic methodology, but rather the failure to inculcate in the agency the principles of critical thinking outlined in the book. In an illuminating Foreword to the book, Douglas MacEachin, former Deputy Director for Intelligence at the CIA, asks rhetorically, “How many times have we encountered situations in which completely plausible premises, based on solid expertise, have been used to construct a logically valid forecast – with virtually unanimous agreement – that turned out to be dead wrong?”
That was written in 1999. We’ve just seen what looks like a remarkable example of such errant forecasting. Brooks implies that the reason for the failure is a scientific methodology that rules out imagination and intuition. MacEachin, writing before the event, and drawing on Heuer’s book, has a different take on the matter. Here’s what he wrote:
“…too often, newly acquired information is evaluated and processed through the existing analytic model, rather than being used to reassess the premises of the model itself.”
This is crucial. What Brooks fails to see is that the kind of intuitions and imaginative leaps he hungers to see in intelligence and risk assessment work will, of necessity, arise out of mental models, the premises of which may or may not be correct. If such intuitions deliver accurate forecasts, that is merely good luck, because there is no clear basis for assuming that, as circumstances change, those invisible and unexamined premises will continue to yield accurate forecasts.
It is the explicit statement and critical examination of the premises themselves, the assumptions underlying our mental models, that is the key to really sound analysis. We saw this in the earlier case studies I briefly shared with you. The Allied assumption that Hitler would do anything but launch a frontal assault against France; the thorough testing by the German high command at Zossen of their own assumption that such an offensive could not succeed; Stalin’s assumption that Hitler would not be so irrational as to launch a war against the Soviet Union; Hitler’s assumption that his triumph in France vindicated his intuitive judgment and genius; the assumption by the LTCM geniuses that their Black-Scholes-type models were infallible and that their own intuitive judgment, even going beyond the models, was reliable: all are cases in point.
In case after case, we see the critical importance of making the premises, the underlying assumptions, of one’s mental models explicit and testing them, insofar as one wishes to hedge against uncertainty and the consequences of turning out to be mistaken. Brooks implies that the CIA does too much of this and that this is what causes its errors. He could not be more mistaken himself. MacEachin’s testimony on this point is trenchant:
“I know from first hand encounters that many CIA officers tend to react skeptically to treatises on analytic epistemology. This is understandable. Too often, such treatises end up prescribing models as answers to the problem. These models seem to have little practical value to intelligence analysis, which takes place not in a seminar but rather in a fast-breaking world of policy. But that is not the main problem Heuer is addressing.
What Heuer examines so clearly and effectively is how the human thought process builds its own models through which we process information. This is not a phenomenon unique to intelligence…it is part of the natural functioning of the human cognitive process, and it has been demonstrated across a broad range of fields from medicine to stock market analysis.
The commonly prescribed remedy for shortcomings in intelligence analysis and estimates – most vociferously after intelligence ‘failures’ – is a major increase in expertise…The data show that expertise itself is no protection from the common analytic pitfalls that are endemic to the human thought process…A review of notorious intelligence failures demonstrates that the analytic traps caught the experts as much as anybody. Indeed, the data show that when experts fall victim to these traps, the effects can be aggravated by the confidence that attaches to expertise – both in their own view and the perception of others.”
I trust you will see that this last remark is wonderfully illustrated by the LTCM case. Yet it is true, also, of the case of the Allies in 1940. Brooks wrote last week that we need not game theorists and rational analysts so much as street-smart politicians, Mafia bosses and readers of Dostoevsky. Well, Hitler and Stalin surely fit that description as well as any contemporary figure is likely to; and what of all the hard-edged traders and bank CEOs who sank billions into LTCM?
Brooks, I suggest, has missed what is really going on, both in the CIA and in intelligence and risk analysis more generally. What is needed is not more street smarts or intuition, though these certainly have their place, but the institutionalization, among analytic professionals, of habits of critical thinking, which are the only systematic way to keep a check on the root causes of error to which all of us are prone.
MacEachin asked whether the CIA devotes sufficient resources to the study of analytic science. I believe all serious analytic organizations would do well to ask themselves the same question. For the cognitive blind-spots and biases I have illustrated here this evening cannot be eliminated from the human mental process. What can be done is to train people to look for and recognize such pitfalls, and to follow procedures that adjust for them. That, of course, is Austhink’s raison d’être. Let me, therefore, conclude here and open the floor to questions.