Tuesday, 09 April 2013

Poking the hive of DSGE (Distinctly Sensitive Group of Economists)



The other day when I wrote my recent post What you can learn from DSGE, I expected that maybe 6 or 8 people would read it. I mean, only a tiny fraction of people really want to read about the methodology of economic modelling, even if some people like myself insist on writing about it occasionally. So I was surprised that the post drew considerable attention, especially from economists (apparently) writing on the forum econjobrumors. An economist I know told me about this site a while back, describing it as a hornet's nest of vicious criticism and name calling.

Now I know this first hand: the atmosphere there is truly dynamic and stochastic, choked with the smog of blogosphere-style vitriol (one commenter even suggesting that I should be shot!). Some comments were amusing and rather telling. For example, writing anonymously, one reader commented that....
I like how this blogger cites a GMU Ph.D. student as an example of someone considering alternatives to rational expectations. The author has no idea that such work has been going on for decades. He doesn't know s**t.
Actually, I never implied that alternatives had never been considered before. In any event, I guess the not-so-hidden message here is that grad students from GMU -- and not even in a Department of Economics, tsk! tsk! -- shouldn't be taken seriously. Maybe the writer was just irritated that the graduate student in question, Nathan Palmer, was co-author on the paper, recently published in the American Economic Review, that I just wrote about in Bloomberg. AER is a fairly prominent outlet, I believe, taken seriously in the profession. It seems that some real economists must agree with me that this work is pretty interesting.

Most of the other comments were typical of the blog-trashing genre, but one did hit on an interesting point that deserves some further comment:
...the implication that physicists or other natural scientists would never deploy the analytic equivalent of a representative agent when studying physical processes is not quite correct.
Mean Field Theory:
In physics and probability theory, mean field theory (MFT, also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
The ideas first appeared in physics in the work of Pierre Curie[1] and Pierre Weiss to describe phase transitions.[2] Approaches inspired by these ideas have seen applications in epidemic models,[3] queueing theory,[4] computer network performance and game theory.[5]
This is a good point, although I definitely never suggested that this technique is not used in physics. The mean field approach in physics is indeed the direct analogy to the representative agent technique. Theorists use it all the time, as it is simple and leads quickly to results that are sometimes reasonably correct (sometimes even exact). And sometimes not correct.

In the case of a ferromagnet such as iron, the method essentially assumes that each elementary magnetic unit in the material (for simplicity, think of it as the magnetic moment of a single atom that is itself like a tiny magnet) acts independently of every other. That is, each one responds to the overall mean field created by all the atoms throughout the entire material, rather than to, for example, its closest neighbors. In this approximation, the magnetic behavior of the whole is simply a scaled up version of that of the individual atoms. Interactions between nearby magnetic elements do not matter. All is very simple.

Build a model like this -- you'll find this in any introductory statistical mechanics book -- and you get a self-consistency condition for the bulk magnetization. Lo and behold, you find a sharp phase transition with temperature, much like what happens in real iron magnets. A piece of iron is non-magnetic above a certain critical temperature, and spontaneously becomes magnetic when cooled below that temperature. So, voila! The mean field method works, sometimes. But this is only the beginning of the story.
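For anyone who wants to see this concretely, here's a minimal sketch of that calculation -- standard textbook material, with parameters chosen purely for illustration. Solve the self-consistency condition m = tanh(zJm/T) by fixed-point iteration (setting Boltzmann's constant to 1), and a nonzero magnetization appears below Tc = zJ:

```python
# Mean-field Ising magnet: solve m = tanh(z*J*m / T) (k_B = 1) by fixed-point
# iteration. Below T_c = z*J a nonzero magnetization survives; above it, m -> 0.
import numpy as np

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-10, max_iter=10_000):
    """Iterate m -> tanh(z*J*m/T) from a nonzero starting guess."""
    m = 0.5
    for _ in range(max_iter):
        m_new = np.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new

Tc = 4.0  # z*J for z = 4 neighbours, J = 1
for T in [0.5 * Tc, 0.9 * Tc, 0.99 * Tc, 1.01 * Tc, 1.5 * Tc]:
    print(f"T/Tc = {T/Tc:.2f}  ->  m = {mean_field_magnetization(T):.4f}")
```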

Curie and Weiss wrote down theories like this in the early 1900s and this way of thinking remained in fashion into the 1950s. Famed Russian physicist Lev Landau developed a much more general theory of phase transitions based on the idea. But here's the kicker -- since the 1960s, i.e. for half a century now, we have known that this theory does not work in general, and that the mean field approximation often breaks down badly, because different parts of a material aren't statistically independent. Especially near the temperature of the phase transition, you get strong correlations between different magnetic moments in iron, so what one is doing strongly influences what others are likely to be doing. Assume statistical independence now and you get completely incorrect results. The mean field trick fails, and sometimes very dramatically. As a simple example, a string of magnetic elements in one dimension, held on a line, does not undergo any phase transition at all, in complete defiance of the mean field prediction.
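For the record, the exact transfer-matrix solution of the one-dimensional Ising chain (coupling J, applied field h, β = 1/k_BT) is the standard counterexample:

```latex
m(h,T) \;=\; \frac{\sinh(\beta h)}{\sqrt{\sinh^{2}(\beta h) + e^{-4\beta J}}}
\;\longrightarrow\; 0 \quad \text{as } h \to 0,\ \text{for every } T > 0,
```

so there is no spontaneous magnetization at any nonzero temperature, while the mean field argument above would predict a transition at k_BT_c = 2J for the chain's two neighbours.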

An awful lot of the most interesting mathematical physics over the past half century has been devoted to overcoming this failure, and to learning how to go beyond the mean field approximation, to understand systems in which the correlations between parts are strong and important. I believe that it will be crucial for economics to plunge into the same complex realm, if any serious understanding is to be had of the most important events in finance and economics, which typically do involve strong influences acting between people. The very successful models that John Geanakoplos developed to predict mortgage prepayment rates only worked by including an important element of contagion -- people becoming more likely to prepay when many others prepay, presumably because they become more aware of the possibility and wisdom of doing so.

Unfortunately, I can't write more on this now as I am flying to Atlanta in a few minutes. But this is a topic that deserves a little further examination. For example, those power laws that econophysicists seem to find so fascinating? These also seem to really irritate those writing on econjobrumors. But what we know about power laws in physical systems is that they are often (though not always) the signature of strong correlations among the different elements of a system... so they may indeed be trying to tell us something.

Sunday, 07 April 2013

Mortgage dynamics

My latest Bloomberg column should appear sometime Sunday night 7 April. I've written about some fascinating work that explores the origins of the housing bubble and the financial crisis by using lots of data on the buying/selling behaviour of more than 2 million people over the period in question. It essentially reconstructs the crisis in silico and tests which factors had the most influence as causes of the bubble, e.g. leverage, interest rates and so on.

I think this is a hugely promising way of trying to answer such questions, and I wanted to point to one interesting angle in the history of this work: it came out of efforts on Wall St. to build better models of mortgage prepayments, using any technique that would work practically. The answer was detailed modelling of the actual actions of millions of individuals, backed up by lots of good data.

First, take a look at the figure below:



This figure shows the actual rate of prepayment (solid line) for a pool of mortgages originally issued in 1986. It also shows the predictions (dashed line) for this rate made by an agent-based model of mortgage prepayments developed by John Geanakoplos while working for two different Wall St. firms. There are two things to notice. The first, obvious, is that the model works very well over the entire period up to 1999. The second, not obvious, is that the model works well even over a period to which it was never fitted. The sample of data used to build the model ran from 1986 through early 1996, yet the model continues to work well out of sample over the final three years shown, roughly 30% beyond the fitting period. (The model did not work in subsequent years and had to be adjusted, owing to major changes in the market itself after 2000, especially new possibilities to refinance and take cash out of mortgages that had not existed before.)

How was this model built? Almost all mortgages give the borrower the right in any month to repay the mortgage in its entirety. Traditionally, models aiming to predict how many borrowers would do so worked by trying to guess or develop some function describing the aggregate behavior of all the mortgage holders, reflecting ideas about individual behavior in some crude way at the aggregate level. As Geanakoplos et al. put it:
The conventional model essentially reduced to estimating an equation with an assumed functional form for prepayment rate... Prepay(t) = F(age(t), seasonality(t), old rate – new rate(t), burnout(t), parameters), where old rate – new rate is meant to capture the benefit to refinancing at a given time t, and burnout is the summation of this incentive over past periods. Mortgage pools with large burnout tended to prepay more slowly, presumably because the most alert homeowners prepay first. ...

Note that the conventional prepayment model uses exogenously specified functional forms to describe aggregate behavior directly, even when the motivation for the functional forms, like burnout, is explicitly based on heterogeneous individuals.
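Just to make the contrast with what follows concrete, here's a toy version of what such an aggregate prepayment function might look like. The functional form and every number in it are invented for illustration; this is not the model used by any of the firms involved.

```python
# A hypothetical aggregate prepayment function in the spirit of the conventional
# approach quoted above: one formula for the whole pool, with burnout entering as
# the cumulative refinancing incentive. All parameters are made up.
import math

def prepay_rate(age_months, month_of_year, old_rate, new_rate, burnout,
                base=0.005, ramp=30, k_incentive=2.0, k_burnout=0.3):
    seasoning = min(age_months / ramp, 1.0)           # prepayments ramp up as the pool ages
    seasonality = 1.0 + 0.2 * math.sin(2 * math.pi * (month_of_year - 3) / 12)
    incentive = max(old_rate - new_rate, 0.0)         # benefit of refinancing right now
    burnout_factor = math.exp(-k_burnout * burnout)   # heavily burnt-out pools prepay slower
    return base * seasoning * seasonality * (1 + 100 * k_incentive * incentive) * burnout_factor

# Example: a seasoned pool in June facing a one-point rate drop, with moderate burnout
print(prepay_rate(age_months=48, month_of_year=6, old_rate=0.08, new_rate=0.07, burnout=2.0))
```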

There is of course nothing wrong with this. It's an attempt to do something practically useful with the data then available (which generally wasn't detailed at the level of individual loans). The contrasting approach seeks instead to start from the characteristics of individual homeowners and to model their behavior, as a population, as it evolves through time:
the new prepayment model... starts from the individual homeowner and in principle follows every single individual mortgage. It produces aggregate prepayment forecasts by simply adding up over all the individual agents. Each homeowner is assumed to be subject to a cost c of prepaying, which include some quantifiable costs such as closing costs, as well as less tangible costs like time, inconvenience, and psychological costs. Each homeowner is also subject to an alertness parameter a, which represents the probability the agent is paying attention each month. The agent is assumed aware of his cost and alertness, and subject to those limitations chooses his prepayment optimally to minimize the expected present value of his mortgage payments, given the expectations that are implied by the derivatives market about future interest rates.

Agent heterogeneity is a fact of nature. It shows up in the model as a distribution of costs and alertness, and turnover rates. Each agent is characterized by an ordered pair (c,a) of cost and alertness, and also a turnover rate t denoting the probability of selling the house. The distribution of these characteristics throughout the population is inferred by fitting the model to past prepayments. The effects of observable borrower characteristics can be incorporated in the model (when they become available) by allowing them to modify the cost, alertness, and turnover.
By way of analogy, this is essentially modelling the prepayment behavior of a population of homeowners as an ecologist might model, say, the biomass consumption of some population of insects. The idea would be to follow the density of insects as a function of their size, age and other features that influence how, when and how much they tend to consume. The more you model such features explicitly as a distribution of influential factors, the more likely your model is to take on aspects of the real population, and the more likely it is to make accurate predictions about the future, because it has captured real aspects of the causal factors at work in the past.

Models of this kind also capture in a more natural way, with no extra work, things that have to be put in by hand when working only at the aggregate level. In this mortgage example, this is true of the "burnout" -- the gradual lessening of prepayment rates over time (other things being equal):
... burnout is a natural consequence of the agent-based approach; there is no need to add it in afterwards. The agents with low costs and high alertness prepay faster, leaving the remaining pool with slower homeowners, automatically causing burnout. The same heterogeneity that explains why only part of the pool prepays in any month also explains why the rate of prepayment burns out over time.
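Here's a minimal sketch of that mechanism, with invented numbers (my own toy, not the actual Geanakoplos model): give a pool of simulated homeowners heterogeneous alertness and costs, let the alert, low-cost ones prepay first, and burnout appears on its own.

```python
# Toy pool of mortgage holders with heterogeneous (cost, alertness). The most alert,
# lowest-cost agents prepay first, so the monthly prepayment rate of the remaining
# pool declines over time -- burnout -- with no extra assumption.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
alertness = rng.uniform(0.0, 0.3, n)   # monthly probability of paying attention
cost = rng.uniform(0.0, 0.03, n)       # prepayment cost, as a fraction of balance
incentive = 0.02                       # fixed refinancing benefit, kept constant for simplicity
active = np.ones(n, dtype=bool)        # mortgages still in the pool

for month in range(1, 25):
    alert = rng.random(n) < alertness
    prepay = active & alert & (incentive > cost)   # alert agents prepay if it's worth it
    rate = prepay.sum() / active.sum()
    active &= ~prepay
    if month % 6 == 0:
        print(f"month {month:2d}: prepayment rate = {rate:.2%}, pool remaining = {active.mean():.1%}")
```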
One other thing worth noting is that those developing this model found that to fit the data well they had to include an effect of "contagion", i.e. the spread of behavior directly from one person to another. When prepayment rates go up, it appears they do so not solely because people have independently made optimal decisions to prepay. Fitting the data well demands an assumption that some people become aware of the benefit of prepaying because they have seen or heard about others who have done so.

This is how it was possible, going back up to the figure above, to make accurate predictions of prepayment rates three years out of sample. In a sense, the lesson is that you do better if you really try to make contact with reality, modelling as many realistic details as you have access to. Mathematics alone won't perform miracles, but mathematics based on realistic dynamical factors, however crudely captured, can do some impressive things.

I suggest reading the original, fairly short paper, which was eventually published in the American Economic Review. That alone speaks to at least grudging respect on the part of the larger economics community for the promise of agent-based modelling. The paper takes this work on mortgage prepayments as a starting point and an inspiration, and tries to model the housing market in the Washington DC area in a similar way through the period of the housing bubble.

Friday, 05 April 2013

What you can learn from DSGE

                                       *** UPDATE BELOW ***

Anyone who has read much of this blog would expect my answer to the above question to be "NOTHING AT ALL!!!!!!!!!!!!!!!!!" It's true, I'm not a fan at all of Dynamic Stochastic General Equilibrium models, and I think they offer poor tools for exploring the behaviour of any economy. That said, I also think economists should be ready and willing to use any model whatsoever if they honestly believe it might give some real practical insight into how things work. I (grudgingly) suppose that DSGE models might sometimes fall into this category.

So that's what I want to explore here, and I do briefly below. But first a few words on what I find objectionable about DSGE models.

The first thing is that the agents in such models are generally assumed to be optimisers. They have a utility function and are assumed to maximize this utility by solving some optimization problem over a path in time. [I'm using as my model the well-known Smets-Wouters model as described in this European Central Bank document written, fittingly enough, by Smets and Wouters.] Personally, I find this a hugely implausible account of how any person or firm makes decisions when facing anything but the simplest problems. So it would seem like a miracle to me if the optimal behaviors predicted by the models turned out to resemble, even crudely, the behavior of real individuals or firms.
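For concreteness, the kind of optimization problem I mean is the canonical intertemporal consumption problem -- this is the generic textbook version, not the full Smets-Wouters specification:

```latex
\max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t)
\quad \text{s.t.} \quad a_{t+1} = (1+r_t)\,a_t + y_t - c_t ,
% whose first-order condition is the familiar consumption Euler equation:
u'(c_t) \;=\; \beta\, \mathbb{E}_t\!\left[(1+r_{t+1})\, u'(c_{t+1})\right].
```

Every household in the model is assumed to solve something like this, every period, looking out over an infinite horizon.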

Having said that, if I try to be generous, I can suppose that maybe, just maybe, the actual behaviour of people, while it isn't optimizing anything, might in the aggregate come out to something that isn't too far away from the optimal behavior, at least in some cases. I would guess there must be armies of economists out there collecting data on just this question, comparing the actions of real individuals and firms to the optimal predictions of the models. Maybe it isn't always bad. If I twist my arm, I can accept that this way of treating decision making as optimization sometimes leads to interesting insights (for people facing very simple decisions, this would of course be more likely).

The second thing I find bad about DSGE models is their use of the so-called representative agent. In the Smets-Wouters model, for example, there is essentially one representative consumer who makes decisions regarding labor and consumption, and then one representative firm which makes decisions on investment, etc. If you read the paper you will see it mention "a continuum of households" indexed by a continuous parameter, and this makes it seem at first like there is actually an infinite number of agents. Not really, as the index only refers to the kind of labor. Each agent makes decisions independently to optimize their utility; there are no interactions between the agents, no one can conduct a trade with another or influence their behavior, etc. So in essence there is really just one representative laborer and one representative firm, who interact with one another in the market. This I also find wholly unconvincing as the real economy emerges out of the complex interactions of millions of agents doing widely different things. Modelling an economy like this seems like modelling the flow of a river by thinking about the behaviour of a single representative water molecule, bouncing along the river bed, rather than thinking about the interactions of many which create pressure, eddies, turbulence, waves and so on. It seems highly unlikely to be very instructive.

But again, let me be generous. Perhaps, in some amazing way, this unbelievably crude approximation might sometimes give you some shred of insight. Maybe you can get lucky and find that a collective outcome can be understood by simply averaging over the behaviors of the many individuals. In situations where people do make up their own minds, independently and by seeking their own information, this might work. Perhaps this is how people behave in response to their perceptions of the macroeconomy, although it seems to me that what they hear from others, what they read and see in the media, probably has a huge effect and so they don't act independently at all.
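Here's a toy simulation of the distinction I have in mind -- entirely my own, with made-up numbers. When choices are independent, the aggregate hugs the average and a representative agent isn't a terrible stand-in; add a common influence, like everyone reacting to the same news, and the aggregate fluctuations stay large no matter how many agents you have.

```python
# N agents each decide to spend (1) or not (0) with probability p. Independent
# choices: aggregate fluctuations shrink like 1/sqrt(N). Add a shared shock to
# everyone's probability (herding, media) and the aggregate stays noisy.
import numpy as np

rng = np.random.default_rng(1)
N, p, periods = 100_000, 0.5, 2_000

agg_independent = rng.binomial(N, p, size=periods) / N      # no interactions

common_shock = rng.uniform(-0.2, 0.2, size=periods)         # one shared influence per period
agg_correlated = rng.binomial(N, p + common_shock) / N

print(f"independent: mean {agg_independent.mean():.3f}, std {agg_independent.std():.4f}")
print(f"correlated : mean {agg_correlated.mean():.3f}, std {agg_correlated.std():.4f}")
```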

But maybe you can still learn something from this approximation, sometimes. Does anyone out there know if there is research exploring this matter of when or under what conditions the representative agent approximation is OK because people DO act independently? I'm sure this must exist and it would be interesting to know more about it. I guess the RBC crowd must have an extensive program studying the empirical limits to the applicability of this approximation? 

So, those are my two biggest reasons for finding it hard to believe the DSGE framework. To these I might add a disbelief that the agents in economy do rapidly find their way to an equilibrium in which "production equals demand by households for consumption and investment and the government." We might stay well away from that point, and things might generally change so quickly that no equilibrium ever comes about. But let's ignore that. Maybe we're lucky and the equilibrium does come about.

So then, what can we learn from DSGE, and why this post? If I toss aside the worries I've voiced above, I'm willing to entertain the possibility that one might learn something from DSGE models. In particular, while browsing the web site of Nathan Palmer, a PhD student in the Department of Computational Social Science at George Mason University, I came across mention of two lines of work within the context of the DSGE formalism that I do think are interesting. I think more people should know about them.

First is work exploring the idea of "natural expectations." A nice example is this fairly recent paper by Andreas Fuster, David Laibson, and Brock Mendel. Most DSGE models, including the Smets-Wouters model, assume that the representative agents have rational expectations, i.e. they process information perfectly and have a wholly unbiased view of future possibilities. What this paper does is to relax that assumption in a DSGE model, assuming instead that people have more realistic "natural" or "intuitive" expectations. Look at the empirical literature and you find lots of evidence that investors and people of all kinds tend to put too much weight on recent trends in time series and expect them to continue. This paper reviews some of this empirical literature, but then turns to its main purpose -- to incorporate these trend-following expectations into a DSGE model.
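Here's a toy example of my own (not the model in the paper) of what trend-following expectations do. Take a series with short-run momentum but long-run mean reversion, and compare the true conditional expectation several steps ahead with a naive extrapolation of the latest change:

```python
# An AR(2) process around a mean of zero: momentum at short lags, reversion later.
# After a recent run-up, the true conditional expectation reverts toward the mean,
# while a "recent trends continue" forecast keeps going up. All numbers invented.
phi1, phi2 = 1.3, -0.4
x_now, x_prev = 2.0, 1.0    # the series has just risen by 1

def true_forecast(x1, x0, h):
    """Iterate the AR(2) conditional expectation x_{t+1} = phi1*x_t + phi2*x_{t-1}."""
    for _ in range(h):
        x1, x0 = phi1 * x1 + phi2 * x0, x1
    return x1

def trend_forecast(x1, x0, h):
    """Simply extrapolate the most recent change h more steps."""
    return x1 + h * (x1 - x0)

h = 8
print("true conditional expectation :", round(true_forecast(x_now, x_prev, h), 2))   # ~0.66, reverting
print("trend-following extrapolation:", round(trend_forecast(x_now, x_prev, h), 2))  # 10.0, overshooting
```

An agent forecasting the second way will systematically overestimate how long booms last, which is the kind of ingredient the paper feeds into the DSGE machinery.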

As they note, a long-standing failure of rational expectations DSGE models is that they struggle "to explain some of the most prominent facts we observe in macroeconomics, such as large swings in asset prices, in other words “bubbles”, as well as credit cycles, investment cycles, and other mechanisms that contribute to the length and severity of economic contractions." These kinds of things, in contrast, emerge quite readily from a DSGE model once the expectations of the agents are made a little more realistic. From the paper:
.....we embed natural expectations in a simple dynamic macroeconomic model and compare the simulated properties of the model to the available empirical evidence. The model’s predictions match many patterns observed in macroeconomic and financial time series, such as high volatility of asset prices, predictable up‐and‐down cycles in equity returns, and a negative relationship between current consumption growth and future equity returns.   
That is interesting, and all from a DSGE model. Whether you believe it or not depends on what you think about the objections I voiced above concerning the components of DSGE models, but it is at least encouraging that this single step towards realism pays some nice dividends in giving more plausible outcomes. This is a useful line of research.

Related work, equally interesting, is that of Paolo Gelain, Kevin J. Lansing and Caterina Mendicino, described in this working paper of the Federal Reserve Bank of San Francisco. This paper does much the same thing as the one I just discussed, though in the context of the housing market. It uses a DSGE model with trend-following expectations for some of the agents to explore how a government might best try to keep housing bubbles in check: through changes in interest rates, restrictions on leverage (i.e. how much a potential home buyer can borrow relative to the house value), or restrictions on how much they can borrow relative to income. The latter seems to work best. As they summarize:
Standard DSGE models with fully-rational expectations have difficulty producing large swings in house prices and household debt that resemble the patterns observed in many industrial countries over the past decade. We show that the introduction of simple moving-average forecast rules for a subset of agents can significantly magnify the volatility and persistence of house prices and household debt relative to otherwise similar model with fully-rational expectations. We evaluate various policy actions that might be used to dampen the resulting excess volatility, including a direct response to house price growth or credit growth in the central bank’s interest rate rule, the imposition of a more restrictive loan-to-value ratio, and the use of a modified collateral constraint that takes into account the borrower’s wage income. Of these, we find that a debt-to-income type constraint is the most effective tool for dampening overall excess volatility in the model economy. 
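The intuition behind that last finding can be seen with some rough, made-up numbers: a loan-to-value cap loosens automatically as prices rise, while a debt-to-income cap stays anchored to income.

```python
# Back-of-the-envelope comparison (my numbers, not the paper's model): as house
# prices run up 50%, the maximum loan under an LTV cap rises with them, while the
# maximum loan under a DTI cap, tied to income, does not budge.
def max_loan_ltv(price, ltv_cap=0.9):
    return ltv_cap * price

def max_loan_dti(income, dti_cap=4.5):
    return dti_cap * income

income = 60_000
for price in [200_000, 250_000, 300_000]:
    print(f"price {price:>7,}:  LTV limit {max_loan_ltv(price):>9,.0f}   DTI limit {max_loan_dti(income):>9,.0f}")
```

Borrowing capacity that expands along with the bubble feeds the bubble; borrowing capacity tied to income does not.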
Again, this is really interesting stuff, worthwhile research, economics that is moving, to my mind, in the right direction, showing us what we should expect to be possible in an economy once we take the realistic and highly heterogeneous behaviour of real people into account.

So there. I've said some not so nasty things about DSGE models! Now I think I need a stiff drink.

*** UPDATE ***

One other thing to mention. I'm happy to see this kind of work, and I applaud those doing it. But I do seriously doubt whether embedding the idea of trend following inside a DSGE model does anything to teach us about why markets often undergo bubble-like phenomena and have quite strong fluctuations in general. Does the theoretical framework add anything?

Imagine someone said the following to you:
 "Lots of people, especially in financial markets and the housing market, are prone to speculating and buying in the hope of making a profit when prices go up. This becomes more likely if people have recently seen prices rising, and their friends making profits. This situation  can lead to herding type behavior where many people act similarly and create positive feedbacks and asset bubbles, which eventually crash back to reality. The problem is generally made worse, for obvious reasons, if people can borrow very easily to leverage their investment..." 
I think most people would say "yes, of course." I suspect that many economists would also. This explanation, couched in words, is for me every bit as convincing as the similar dynamic wrapped up in the framework of DSGE. Indeed, it is even more convincing as it doesn't try to jump awkwardly through a series of bizarre methodological hoops along the way. In this sense, DSGE seems more like a straitjacket than anything else. I can't see how it adds anything to the plausibility of a story.

So, I guess, sorry for the title of this post. Should have been "What you can learn from DSGE: things you would be much better off learning elsewhere."

Monday, 25 March 2013

FORECAST: new book to be published tomorrow!



That's right, my new book, the cover of which you've seen off to the right of this blog for some time now, will FINALLY be in bookstores in the US tomorrow, March 26. Of course, it is also available at Amazon and other likely outlets on the web. Who knows when reviews and such will begin trickling in. The book was featured in Nature on Thursday in their "Books in brief" section (sorry, you'll need a subscription), but the poor writers of those reviews (I've been one) really have almost no space to say anything. The review does make very clear that the book exists and purports to have some new ideas about economics and finance, but it makes no judgement on the usefulness of the book at all.

Anyone in the US, if you happen to be in a physical bookstore in the next few days, please let me know if you 1) do find the book and 2) where it was located. I've had the unfortunate experience in the past that my books, such as Ubiquity or The Social Atom, were placed by bookstore managers near the back of the store in sections with labels like Mathematical Sociology or Perspectives in the Philosophy of History, where perhaps only 1 or 2 people venture each day, and then probably only because they got lost while looking for the rest room. If you do find the book in an obscure location, feel completely free -- there's no law against this -- to take all the copies you find and move them up to occupy prominent positions in the bestsellers' section, or next to the check out with the diet books, etc. I would be very grateful!

And I would very much like to hear what readers of this blog think about the book.

Friday, 22 March 2013

Quantum Computing, Finally!! (or maybe not)



Today's New York Times has an article hailing the arrival of superfast practical quantum computers (weird thing pictured above), courtesy of Lockheed Martin, which purchased one from a company called D-Wave Systems. As the article notes,
... a powerful new type of computer that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In that infinitesimal neighborhood, common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between — all at the same time. ...  Lockheed Martin — which bought an early version of such a computer from the Canadian company D-Wave Systems two years ago — is confident enough in the technology to upgrade it to commercial scale, becoming the first company to use quantum computing as part of its business.
The article does mention that there are some skeptics. So beware.

Ten to fifteen years ago, I used to write frequently, mostly for New Scientist magazine, about research progress towards quantum computing. For anyone who hasn't read something about this, quantum computing would exploit the peculiar properties of quantum physics to do computation in a totally new way. It could potentially solve some problems very quickly that computers running on classical physics, as today's computers do, would never be able to solve. Without getting into any detail, the essential thing about quantum processes is their ability to explore many paths in parallel, rather than just doing one specific thing, which would give a quantum computer unprecedented processing power. Here's an article giving some basic information about the idea.
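To give a flavour of where that power comes from -- standard textbook quantum mechanics here, nothing specific to any company's hardware -- here's a tiny numpy sketch: an n-qubit register is described by 2^n amplitudes, and a single layer of Hadamard gates puts it into a superposition of all 2^n classical bit strings at once.

```python
# Build the n-qubit Hadamard transform and apply it to |00...0>: the result has
# 2**n equal amplitudes, one for every classical bit string.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def uniform_superposition(n):
    state = np.zeros(2**n)
    state[0] = 1.0                              # start in |00...0>
    gate = H
    for _ in range(n - 1):
        gate = np.kron(gate, H)                 # Hadamard on every qubit
    return gate @ state

for n in [1, 2, 3, 10]:
    psi = uniform_superposition(n)
    print(f"{n:2d} qubits: {len(psi)} amplitudes, each equal to {psi[0]:.4f}")
```

The catch, of course, is that getting a useful answer back out of all those amplitudes is precisely what makes designing quantum algorithms hard.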

I stopped writing about quantum computing because I got bored with it -- not the ideas, but the achingly slow progress in bringing the idea into reality. To make a really useful quantum computer you need to harness quantum degrees of freedom, "qubits," in single ions, photons, the spins of atoms, etc., and have the ability to carry out controlled logic operations on them. You would need lots of them, say hundreds or more, to do really valuable calculations, but to date no one has managed to create and control more than about 2 or 3. I wrote several articles a year noting major advances in quantum information storage, in error correction, in ways to transmit quantum information (which is more delicate than classical information) from one place to another and so on. Every article at some point had a weasel phrase like ".... this could be a major step towards practical quantum computing." They weren't. All of this was perfectly good, valuable physics work, but the practical computer receded into the future just as quickly as people made advances towards it. That seems to be true today.... except for one company: D-Wave Systems.

Around five years ago, this company started claiming that it had achieved quantum computing and built functioning devices with 128 qubits, using superconducting technology. Everyone else in the field was aghast at such a claim, given this sudden staggering advance over what anyone else in the world had achieved. Oh, and D-Wave didn't release sufficient information for the claim to be judged. Here is the skeptical judgement of IEEE Spectrum magazine as of 2010. But more up to date, and not quite so negative, is this assessment by quantum information expert Scott Aaronson just over a year ago. The most important point he makes is about D-Wave's failure to demonstrate that its computer is really doing something essentially quantum, which is why it would be interesting. This would mean demonstrating so-called quantum entanglement in the machine, or carrying out some calculation so vastly superior to anything achievable by classical computers that one would have to infer quantum performance. Aaronson asks the obvious question:
... rather than constantly adding more qubits and issuing more hard-to-evaluate announcements, while leaving the scientific characterization of its devices in a state of limbo, why doesn’t D-Wave just focus all its efforts on demonstrating entanglement, or otherwise getting stronger evidence for a quantum role in the apparent speedup?  When I put this question to Mohammad Amin, he said that, if D-Wave had followed my suggestion, it would have published some interesting research papers and then gone out of business—since the fundraising pressure is always for more qubits and more dramatic announcements, not for clearer understanding of its systems.  So, let me try to get a message out to the pointy-haired bosses of the world: a single qubit that you understand is better than a thousand qubits that you don’t.  There’s a reason why academic quantum computing groups focus on pushing down decoherence and demonstrating entanglement in 2, 3, or 4 qubits: because that way, at least you know that the qubits are qubits!  Once you’ve shown that the foundation is solid, then you try to scale up.   
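For readers wondering what "demonstrating entanglement" involves in the simplest case, here's a small numpy sketch -- again textbook physics, nothing to do with D-Wave's actual hardware. Two qubits in a singlet state produce a CHSH correlation value above 2, something no classically correlated, unentangled system can do:

```python
# CHSH test on a two-qubit singlet state: the quantum prediction is 2*sqrt(2) ~ 2.83,
# above the bound of 2 that any local, unentangled description must satisfy.
import numpy as np

sz = np.array([[1, 0], [0, -1]])
sx = np.array([[0, 1], [1, 0]])

def spin(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

psi = np.array([0, 1, -1, 0]) / np.sqrt(2)     # singlet state (|01> - |10>)/sqrt(2)

def E(theta_a, theta_b):
    """Correlation <psi| spin(theta_a) (x) spin(theta_b) |psi>."""
    return psi @ np.kron(spin(theta_a), spin(theta_b)) @ psi

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print("CHSH value |S| =", abs(S))              # ~2.83 > 2
```

Doing something of this sort convincingly on real hardware is the kind of evidence Aaronson is asking for.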
So there's a finance and publicity angle here as well as the science. The NYT article doesn't really get into any of the specific claims of D-Wave, but I recommend Aaronson's comments as a good counterpoint to the hype.

Wednesday, 20 March 2013

Third (and final) excerpt...

The third (and, you'll all be pleased to hear, final!) excerpt of my book was published in Bloomberg today. The title is "Toward a National Weather Forecaster for Finance," and it explores (briefly) what might be possible in economics and finance if we created national (and international) centers devoted to data-intensive risk analysis and forecasting of socioeconomic "weather."

Before anyone thinks I'm crazy, let me make very clear that I'm using the term "forecasting" in its general sense, i.e. of making useful predictions of potential risks as they emerge in specific areas, rather than predictions such as "the stock market will collapse at noon on Thursday." I think we can all agree that the latter kind of prediction is probably impossible (although Didier Sornette wouldn't agree), and certainly would be self-defeating were it made widely known. Weather forecasters make much less specific predictions all the time, for example, of places and times where conditions will be ripe for powerful thunderstorms and tornadoes. These forecasts of potential risks are still valuable, and I see no reason similar kinds of predictions shouldn't be possible in finance and economics. Of course, people make such predictions all the time about financial events already. I'm merely suggesting that with effort and the devotion of considerable resources for collecting and sharing data, and building computational models, we could develop centers acting for the public good to make much better predictions on a more scientific basis.

As a couple of early examples, I'll point to the recent work on complex networks in finance which I've touched on here and here. These are computationally intensive studies demanding excellent data which make it possible to identify systemically important financial institutions (and links between them) more accurately than we have in the past. Much work remains to make this practically useful.

Another example is this recent and really impressive agent-based model of the US housing market, which has been used as a "post mortem" experimental tool to ask all kinds of "what if?" questions about the housing bubble and its causes, helping to tease out a better understanding of controversial questions. As the authors note, macroeconomists really didn't see the housing market as a likely source of large-scale macroeconomic trouble. This model has made it possible to ask and explore questions that cannot be explored with conventional economic models:
 Not only were the Macroeconomists looking at the wrong markets, they might have been looking at the wrong variables. John Geanakoplos (2003, 2010a, 2010b) has argued that leverage and collateral, not interest rates, drove the economy in the crisis of 2007-2009, pushing housing prices and mortgage securities prices up in the bubble of 2000-2006, then precipitating the crash of 2007. Geanakoplos has also argued that the best way out of the crisis is to write down principal on housing loans that are underwater (see Geanakoplos-Koniak (2008, 2009) and Geanakoplos (2010b)), on the grounds that the loans will not be repaid anyway, and that taking into account foreclosure costs, lenders could get as much or almost as much money back by forgiving part of the loans, especially if stopping foreclosures were to lead to a rebound in housing prices.

There is, however, no shortage of alternative hypotheses and views. Was the bubble caused by low interest rates, irrational exuberance, low lending standards, too much refinancing, people not imagining something, or too much leverage? Leverage is the main variable that went up and down along with housing prices. But how can one rule out the other explanations, or quantify which is more important? What effect would principal forgiveness have on housing prices? How much would that increase (or decrease) losses for investors? How does one quantify the answer to that question?

Conventional economic analysis attempts to answer these kinds of questions by building equilibrium models with a representative agent, or a very small number of representative agents. Regressions are run on aggregate data, like average interest rates or average leverage. The results so far seem mixed. Edward Glaeser, Joshua Gottlieb, and Joseph Gyourko (2010) argue that leverage did not play an important role in the run-up of housing prices from 2000-2006. John Duca, John Muellbauer, and Anthony Murphy (2011), on the other hand, argue that it did. Andrew Haughwout et al (2011) argue that leverage played a pivotal role.

In our view a definitive answer can only be given by an agent-based model, that is, a model in which we try to simulate the behavior of literally every household in the economy. The household sector consists of hundreds of millions of individuals, with tremendous heterogeneity, and a small number of transactions per month. Conventional models cannot accurately calibrate heterogeneity and the role played by the tail of the distribution. ... only after we know what the wealth and income is of each household, and how they make their housing decisions, can we be confident in answering questions like: How many people could afford one house who previously could afford none? Just how many people bought extra houses because they could leverage more easily? How many people spent more because interest rates became lower? Given transactions costs, what expectations could fuel such a demand? Once we answer questions like these, we can resolve the true cause of the housing boom and bust, and what would happen to housing prices if principal were forgiven.

... the agent-based approach brings a new kind of discipline because it uses so much more data. Aside from passing a basic plausibility test (which is crucial in any model), the agent-based approach allows for many more variables to be fit, like vacancy rates, time on market, number of renters versus owners, ownership rates by age, race, wealth, and income, as well as the average housing prices used in standard models. Most importantly, perhaps, one must be able to check that basically the same behavioral parameters work across dozens of different cities. And then at the end, one can do counterfactual reasoning: what would have happened had the Fed kept interest rates high, what would happen with this behavioral rule instead of that.

The real proof is in the doing. Agent-based models have succeeded before in simulating traffic and herding in the flight patterns of geese. But the most convincing evidence is that Wall Street has used agent-based models for over two decades to forecast prepayment rates for tens of millions of individual mortgages.
This is precisely the kind of work I think can be geared up and extended far beyond the housing market, augmented with real time data, and used to make valuable forecasting analyses. It seems to me actually to be the obvious approach.
 

Tuesday, 19 March 2013

Second excerpt...

A second excerpt of my forthcoming book Forecast is now online at Bloomberg. It's a greatly condensed text assembled from various parts of the book. One interesting exchange in the comments from yesterday's excerpt:
Food For Thought commented....Before concluding that economic theory does not include analysis of unstable equilibria check out the vast published findings on unstable equilibria in the field of International Economics.  Once again we have someone touching on one tiny part of economic theory and drawing overreaching conclusions. 

I would expect a scientist would seek out more evidence before jumping to conclusions.
to which one Jack Harllee replied...
Sure, economists have studied unstable equilibria. But that's not where the profession's heart is. Krugman summarized rather nicely in 1996, and the situation hasn't changed much since then:
"Personally, I consider myself a proud neoclassicist. By this I clearly don't mean that I believe in perfect competition all the way. What I mean is that I prefer, when I can, to make sense of the world using models in which individuals maximize and the interaction of these individuals can be summarized by some concept of equilibrium. The reason I like that kind of model is not that I believe it to be literally true, but that I am intensely aware of the power of maximization-and-equilibrium to organize one's thinking - and I have seen the propensity of those who try to do economics without those organizing devices to produce sheer nonsense when they imagine they are freeing themselves from some confining orthodoxy. ...That said, there are indeed economists who regard maximization and equilibrium as more than useful fictions. They regard them either as literal truths - which I find a bit hard to understand given the reality of daily experience - or as principles so central to economics that one dare not bend them even a little, no matter how useful it might seem to do so."
This response fairly well captures my own position. I argue in the book that the economics profession has been fixated far too strongly on equilibrium models, and much of the time simply assumes the stability of such equilibria without any justification. I certainly don't claim that economists have never considered unstable equilibria (or examined models with multiple equilibria). But any examination of the stability of an equilibrium demands some analysis of dynamics of the system away from equilibrium, and this has not (to say the least) been a strong focus of economic theory.