Tuesday, 09 April 2013

Poking the hive of DSGE (Distinctly Sensitive Group of Economists)



The other day when I wrote my recent post What you can learn from DSGE, I expected that maybe 6 or 8 people would read it. After all, only a fairly tiny fraction of people really want to read about the methodology of economic modelling, even if some people like myself insist on writing about it occasionally. So I was surprised that the post seems to have drawn considerable attention, especially from economists (apparently) writing on the forum econjobrumors. An economist I know told me about this site a while back, describing it as a hornet's nest of vicious criticism and name calling.

Now I know this first hand: the atmosphere there is truly dynamic and stochastic, choked with the smog of blogosphere-style vitriol (one commenter even suggested that I should be shot!). Some comments were amusing and rather telling. For example, writing anonymously, one reader commented:
I like how this blogger cites a GMU Ph.D. student as an example of someone considering alternatives to rational expectations. The author has no idea that such work has been going on for decades. He doesn't know s**t.
Actually, I never implied that alternatives had never been considered before. In any event, I guess the not-so-hidden message here is that grad students from GMU -- and not even in a Department of Economics, tsk! tsk! -- shouldn't be taken seriously. Maybe the writer was just irritated that the graduate student in question, Nathan Palmer, was co-author on the paper, recently published in the American Economic Review, that I just wrote about in Bloomberg. AER is a fairly prominent outlet, I believe, taken seriously in the profession. It seems that some real economists must agree with me that this work is pretty interesting.

Most of the other comments were typical of the blog-trashing genre, but one did hit on an interesting point that deserves some further comment:
...the implication that physicists or other natural scientists would never deploy the analytic equivalent of a representative agent when studying physical processes is not quite correct.
Mean Field Theory:
In physics and probability theory, mean field theory (MFT, also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
The ideas first appeared in physics in the work of Pierre Curie[1] and Pierre Weiss to describe phase transitions.[2] Approaches inspired by these ideas have seen applications in epidemic models,[3] queueing theory,[4] computer network performance and game theory.[5]
This is a good point, although I definitely never suggested that this technique is not used in physics. The mean field approach in physics is indeed the direct analogy to the representative agent technique. Theorists use it all the time, as it is simple and leads quickly to results that are sometimes reasonably correct (sometimes even exact). And sometimes not correct.

In the case of a ferromagnet such as iron, the method essentially assumes that each elementary magnetic unit in the material (for simplicity, think of it as the magnetic moment of a single atom that is itself like a tiny magnet) acts independently of every other. That is, each one responds to the overall mean field created by all the atoms throughout the entire material, rather than to, for example, its closest neighbors. In this approximation, the magnetic behavior of the whole is simply a scaled up version of that of the individual atoms. Interactions between nearby magnetic elements do not matter. All is very simple.

Build a model like this -- you'll find this in any introductory statistical mechanics book -- and you get a self-consistency condition for the bulk magnetization. Lo and behold, you find a sharp phase transition with temperature, much like what happens in real iron magnets. A piece of iron is non-magnetic above a certain critical temperature, and spontaneously becomes magnetic when cooled below that temperature. So, voila! The mean field method works, sometimes. But this is only the beginning of the story.

Curie and Weiss wrote down theories like this in the early 1900s and this way of thinking remained in fashion into the 1950s. Famed Russian physicist Lev Landau developed a much more general theory of phase transitions based on the idea. But here's the kicker -- since the 1960s, i.e. for half a century now, we have known that this theory does not work in general, and that the mean field approximation often breaks down badly, because different parts of a material aren't statistically independent. Especially near the temperature of the phase transition, you get strong correlations between different magnetic moments in iron, so what one is doing strongly influences what others are likely to be doing. Assume statistical independence now and you get completely incorrect results. The mean field trick fails, and sometimes very dramatically. As a simple example, a string of magnetic elements in one dimension, held on a line, does not undergo any phase transition at all, in complete defiance of the mean field prediction.
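To see how the trick works in miniature, here is a small numerical sketch of the mean field self-consistency condition for an Ising-style magnet, m = tanh(zJm/kT), with z nearest neighbours and coupling J (the parameter values are purely illustrative):

```python
import numpy as np

def mean_field_magnetization(T, z=4, J=1.0, kB=1.0, tol=1e-10, max_iter=20000):
    """Solve the mean field self-consistency condition m = tanh(z*J*m/(kB*T))
    by fixed-point iteration, seeded away from zero so the iteration finds
    the spontaneously magnetized branch whenever one exists."""
    m = 0.5
    for _ in range(max_iter):
        m_new = np.tanh(z * J * m / (kB * T))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Mean field theory predicts spontaneous magnetization below Tc = z*J/kB = 4.0
for T in [1.0, 2.0, 3.0, 3.9, 4.1, 5.0]:
    print(f"T = {T:3.1f}   m = {mean_field_magnetization(T):.4f}")
```

The output shows the magnetization vanishing above Tc = zJ/kB and growing continuously from zero below it. But notice that the prediction depends only on the number of neighbours z, not on how they are arranged in space: set z = 2 for a one-dimensional chain and the same calculation still insists on a transition, while the exact answer is that none occurs at any nonzero temperature.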

An awful lot of the most interesting mathematical physics over the past half century has been devoted to overcoming this failure, and to learning how to go beyond the mean field approximation, to understand systems in which the correlations between parts are strong and important. I believe that it will be crucial for economics to plunge into the same complex realm, if any serious understanding is to be had of the most important events in finance and economics, which typically do involve strong influences acting between people. The very successful models that John Geanakoplos developed to predict mortgage prepayment rates only worked by including an important element of contagion -- people becoming more likely to prepay when many others prepay, presumably because they become more aware of the possibility and wisdom of doing so.

Unfortunately, I can't write more on this now as I am flying to Atlanta in a few minutes. But this is a topic that deserves a little further examination. For example, those power laws that econophysicists seem to find so fascinating? These also seem to really irritate those writing on econjobrumors. But what we know about power laws in physical systems is that they are often (though not always) the signature of strong correlations among the different elements of a system... so they may indeed be trying to tell us something.

Sunday, 07 April 2013

Mortgage dynamics

My latest Bloomberg column should appear sometime Sunday night, 7 April. I've written about some fascinating work that explores the origins of the housing bubble and the financial crisis by using lots of data on the buying/selling behaviour of more than 2 million people over the period in question. It essentially reconstructs the crisis in silico and tests which factors had the most influence as causes of the bubble, e.g. leverage, interest rates and so on.

I think this is a hugely promising way of trying to answer such questions, and I wanted to point to one interesting angle in the history of this work: it came out of efforts on Wall St. to build better models of mortgage prepayments, using any technique that would work practically. The answer was detailed modelling of the actual actions of millions of individuals, backed up by lots of good data.

First, take a look at the figure below:



This figure shows the actual (solid line) rate of repayment of a pool of mortgages that were originally issued in 1986. It also shows the predictions (dashed line) for this rate made by an agent-based model of mortgage repayments developed by John Geanakoplos working for two different Wall St. firms. There are two things to notice. First, obviously, the model works very well over the entire period up to 1999. The second, not obvious, is that the model works well even over a period it was not fitted to. The sample of data used to build the model ran from 1986 through early 1996. The model continues to work well out of sample over the final three years shown, roughly 30% beyond the fitting period. (The model did not work in subsequent years and had to be adjusted due to major changes in the market itself after 2000, especially new possibilities to refinance and take cash out of mortgages that were not there before.)

How was this model built? Almost all mortgages give the borrower the right in any month to repay the mortgage in its entirety. Traditionally, models aiming to predict how many borrowers would do so worked by trying to guess or develop some function describing the aggregate behavior of all the mortgage owners, reflecting ideas about individual behavior in some crude way at the aggregate level. As Geanakoplos et al. put it:
The conventional model essentially reduced to estimating an equation with an assumed functional form for prepayment rate... Prepay(t) = F(age(t), seasonality(t), old rate – new rate(t), burnout(t), parameters), where old rate – new rate is meant to capture the benefit to refinancing at a given time t, and burnout is the summation of this incentive over past periods. Mortgage pools with large burnout tended to prepay more slowly, presumably because the most alert homeowners prepay first. ...

Note that the conventional prepayment model uses exogenously specified functional forms to describe aggregate behavior directly, even when the motivation for the functional forms, like burnout, is explicitly based on heterogeneous individuals.

There is of course nothing wrong with this. It's an attempt to do something practically useful with the data then available (which wasn't generally detailed at the level of individual loans). The contrasting approach seeks instead to start from the characteristics of individual homeowners and to model their behavior, as a population, as it evolves through time:
the new prepayment model... starts from the individual homeowner and in principle follows every single individual mortgage. It produces aggregate prepayment forecasts by simply adding up over all the individual agents. Each homeowner is assumed to be subject to a cost c of prepaying, which include some quantifiable costs such as closing costs, as well as less tangible costs like time, inconvenience, and psychological costs. Each homeowner is also subject to an alertness parameter a, which represents the probability the agent is paying attention each month. The agent is assumed aware of his cost and alertness, and subject to those limitations chooses his prepayment optimally to minimize the expected present value of his mortgage payments, given the expectations that are implied by the derivatives market about future interest rates.

Agent heterogeneity is a fact of nature. It shows up in the model as a distribution of costs and alertness, and turnover rates. Each agent is characterized by an ordered pair (c,a) of cost and alertness, and also a turnover rate t denoting the probability of selling the house. The distribution of these characteristics throughout the population is inferred by fitting the model to past prepayments. The effects of observable borrower characteristics can be incorporated in the model (when they become available) by allowing them to modify the cost, alertness, and turnover.
By way of analogy, this is essentially modelling the prepayment behavior of a population of homeowners as an ecologist might model, say, the biomass consumption of some population of insects. The idea would be to follow the density of insects as a function of their size, age and other features that influence how and when and how much they tend to consume. The more you model such features explicitly as a distribution of influential factors, the more your model will take on aspects of the real population, and the more likely it will be to make accurate predictions about the future, because it has captured real aspects of the causal factors at work in the past.

Models of this kind also capture in a more natural way, with no extra work, things that have to be put in by hand when working only at the aggregate level. In this mortgage example, this is true of the "burnout" -- the gradual lessening of prepayment rates over time (other things being equal):
... burnout is a natural consequence of the agent-based approach; there is no need to add it in afterwards. The agents with low costs and high alertness prepay faster, leaving the remaining pool with slower homeowners, automatically causing burnout. The same heterogeneity that explains why only part of the pool prepays in any month also explains why the rate of prepayment burns out over time.
One other thing worth noting is that those developing this model found that to fit the data well they had to include an effect of "contagion", i.e. the spread of behavior directly from one person to another. When prepayment rates go up, it appears they do so not solely because people have independently made optimal decisions to prepay. Fitting the data well demands an assumption that some people become aware of the benefit of prepaying because they have seen or heard about others who have done so.
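To make these ingredients concrete, here is a bare-bones sketch of such a pool in code. It is emphatically not the Geanakoplos model: the distributions, the rate numbers, and the crude "prepay if the rate saving beats your personal cost" rule are all invented for illustration.

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(0)

@dataclass
class Homeowner:
    cost: float       # c: personal cost of refinancing, in rate terms (0.01 = 100 basis points)
    alertness: float  # a: probability of paying attention in any given month
    turnover: float   # monthly probability of selling the house
    active: bool = True

def make_pool(n=20_000):
    # Heterogeneity enters as assumed distributions of cost and alertness
    return [Homeowner(cost=rng.uniform(0.0, 0.03),
                      alertness=rng.uniform(0.05, 0.6),
                      turnover=0.005) for _ in range(n)]

def simulate(pool, contract_rate=0.08, market_rate=0.06, months=48, contagion=0.0):
    """Monthly prepayment rate of the pool. A homeowner prepays if she happens
    to be paying attention this month and the rate saving exceeds her personal
    cost (a crude stand-in for the optimal exercise decision). The contagion
    term bumps everyone's attention in proportion to last month's prepayment rate."""
    rates, last_rate = [], 0.0
    for _ in range(months):
        survivors = [h for h in pool if h.active]
        prepaid = 0
        for h in survivors:
            if rng.random() < h.turnover:          # sold the house, leaves the pool
                h.active = False
                continue
            attention = min(1.0, h.alertness + contagion * last_rate)
            if rng.random() < attention and (contract_rate - market_rate) > h.cost:
                h.active = False                   # refinanced
                prepaid += 1
        last_rate = prepaid / len(survivors)
        rates.append(last_rate)
    return rates

base = simulate(make_pool())
spread = simulate(make_pool(), contagion=0.5)
print(f"no contagion:   month 1 rate {base[0]:.3f},  month 36 rate {base[35]:.3f}")
print(f"with contagion: month 1 rate {spread[0]:.3f}, month 36 rate {spread[35]:.3f}")
# Burnout appears with no extra assumptions: alert, low-cost homeowners
# refinance early, so the pool's prepayment rate decays over time even
# though the rate incentive never changes.
```

Even in this toy version, burnout falls out of the heterogeneity automatically, and the contagion term changes how quickly the alert homeowners are flushed out of the pool.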

This is how it was possible, going back up to the figure above, to make accurate predictions of prepayment rates three years out of sample. In a sense, the lesson is that you do better if you really try to make contact with reality, modelling as many realistic details as you have access to. Mathematics alone won't perform miracles, but mathematics based on realistic dynamical factors, however crudely captured, can do some impressive things.

I suggest reading the original, fairly short paper, which was eventually published in the American Economic Review. That alone speaks to at least grudging respect on the part of the larger economics community for the promise of agent-based modelling. The paper takes this work on mortgage prepayments as a starting point and an inspiration, and tries to model the housing market in the Washington DC area in a similar way through the period of the housing bubble.

Friday, 05 April 2013

What you can learn from DSGE

                                       *** UPDATE BELOW ***

Anyone who has read much of this blog would expect my answer to the above question to be "NOTHING AT ALL!!!!!!!!!!!!!!!!!" It's true, I'm not a fan at all of Dynamic Stochastic General Equilibrium models, and I think they offer poor tools for exploring the behaviour of any economy. That said, I also think economists should be ready and willing to use any model whatsoever if they honestly believe it might give some real practical insight into how things work. I (grudgingly) suppose that DSGE models might sometimes fall into this category.

So that's what I want to explore here, and I do briefly below. But first a few words on what I find objectionable about DSGE models.

The first thing is that the agents in such models are generally assumed to be optimisers. They have a utility function and are assumed to maximize this utility by solving some optimization problem over a path in time. [I'm using as my model the well-known Smets-Wouters model as described in this European Central Bank document written, fittingly enough, by Smets and Wouters.] Personally, I find this a hugely implausible account of how any person or firm makes decisions when facing anything but the simplest problems. So it would seem like a miracle to me if the optimal behaviors predicted by the models turned out to resemble, even crudely, the behavior of real individuals or firms.
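For concreteness, the generic textbook form of such an intertemporal problem (a schematic statement only -- the actual Smets-Wouters specification adds features like habit formation in consumption, a labour supply choice, and capital with adjustment costs) is something like

$$ \max_{\{c_t\}} \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \, u(c_t) \quad \text{subject to} \quad a_{t+1} = (1+r_t)\,a_t + w_t - c_t , $$

that is, the household is assumed to choose an entire contingent consumption path, weighing every future period at once.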

Having said that, if I try to be generous, I can suppose that maybe, just maybe, the actual behaviour of people, while it isn't optimizing anything, might in the aggregate come out to something that isn't too far away from the optimal behavior, at least in some cases. I would guess there must be armies of economists out there collecting data on just this question, comparing the actions of real individuals and firms to the optimal predictions of the models. Maybe it isn't always bad. If I twist my arm, I can accept that this way of treating decision making as optimization sometimes leads to interesting insights (for people facing very simple decisions, this would of course be more likely).

The second thing I find bad about DSGE models is their use of the so-called representative agent. In the Smets-Wouters model, for example, there is essentially one representative consumer who makes decisions regarding labor and consumption, and then one representative firm which makes decisions on investment, etc. If you read the paper you will see it mention "a continuum of households" indexed by a continuous parameter, and this makes it seem at first like there is actually an infinite number of agents. Not really, as the index only refers to the kind of labor. Each agent makes decisions independently to optimize their utility; there are no interactions between the agents, no one can conduct a trade with another or influence their behavior, etc. So in essence there is really just one representative laborer and one representative firm, who interact with one another in the market. This I also find wholly unconvincing, as the real economy emerges out of the complex interactions of millions of agents doing widely different things. Modelling an economy like this seems like modelling the flow of a river by thinking about the behaviour of a single representative water molecule, bouncing along the river bed, rather than thinking about the interactions of many molecules, which create pressure, eddies, turbulence, waves and so on. It seems highly unlikely to be very instructive.

But again, let me be generous. Perhaps, in some amazing way, this unbelievably crude approximation might sometimes give you some shred of insight. Maybe you can get lucky and find that a collective outcome can be understood by simply averaging over the behaviors of the many individuals. In situations where people do make up their own minds, independently and by seeking their own information, this might work. Perhaps this is how people behave in response to their perceptions of the macroeconomy, although it seems to me that what they hear from others, what they read and see in the media, probably has a huge effect and so they don't act independently at all.

But maybe you can still learn something from this approximation, sometimes. Does anyone out there know if there is research exploring this matter of when or under what conditions the representative agent approximation is OK because people DO act independently? I'm sure this must exist and it would be interesting to know more about it. I guess the RBC crowd must have an extensive program studying the empirical limits to the applicability of this approximation? 

So, those are my two biggest reasons for finding it hard to believe the DSGE framework. To these I might add a disbelief that the agents in an economy rapidly find their way to an equilibrium in which "production equals demand by households for consumption and investment and the government." We might stay well away from that point, and things might generally change so quickly that no equilibrium ever comes about. But let's ignore that. Maybe we're lucky and the equilibrium does come about.

So then, what can we learn from DSGE, and why this post? If I toss aside the worries I've voiced above, I'm willing to entertain the possibility that one might learn something from DSGE models. In particular, while browsing the web site of Nathan Palmer, a PhD student in the Department of Computational Social Science at George Mason University, I came across mention of two lines of work within the context of the DSGE formalism that I do think are interesting. I think more people should know about them.

First is work exploring the idea of "natural expectations." A nice example is this fairly recent paper by Andreas Fuster, David Laibson, and Brock Mendel. Most DSGE models, including the Smets-Wouters model, assume that the representative agents have rational expectations, i.e. they process information perfectly and have a wholly unbiased view of future possibilities. What this paper does is to relax that assumption in a DSGE model, assuming instead that people have more realistic "natural" or "intuitive" expectations. Look at the empirical literature and you find lots of evidence that investors and people of all kinds tend to over-extrapolate recent trends in time series, expecting them to continue. This paper reviews some of this empirical literature, but then turns to its main purpose -- to incorporate these trend-following expectations into a DSGE model.

As they note, a notable failure of rational expectations DSGE models is that they struggle "to explain some of the most prominent facts we observe in macroeconomics, such as large swings in asset prices, in other words “bubbles”, as well as credit cycles, investment cycles, and other mechanisms that contribute to the length and severity of economic contractions." These kinds of things, in contrast, emerge quite readily from a DSGE model once the expectations of the agents are made a little more realistic. From the paper:
.....we embed natural expectations in a simple dynamic macroeconomic model and compare the simulated properties of the model to the available empirical evidence. The model’s predictions match many patterns observed in macroeconomic and financial time series, such as high volatility of asset prices, predictable up‐and‐down cycles in equity returns, and a negative relationship between current consumption growth and future equity returns.   
That is interesting, and all from a DSGE model. Whether you believe it or not depends on what you think about the objections I voiced above about the components of DSGE models, but it is at least encouraging that this single step towards realism pays some nice dividends in giving more plausible outcomes. This is a useful line of research.
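As a toy illustration of why trend extrapolation has this effect -- my own bare-bones sketch, not the model in the paper -- suppose prices respond to fundamental news plus a moving-average forecast of recent price changes, with a parameter theta measuring how strongly people extrapolate:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
news = rng.normal(0.0, 1.0, T)   # fundamental shocks

def simulate(theta, window=12):
    """Price changes = news + theta * (moving average of recent price changes).
    theta = 0 is the no-extrapolation benchmark; theta > 0 is trend-following."""
    p = np.zeros(T)
    for t in range(1, T):
        past = p[max(0, t - window - 1):t]
        trend = np.diff(past).mean() if len(past) > 1 else 0.0
        p[t] = p[t - 1] + news[t] + theta * trend
    return p

for theta in (0.0, 0.5, 0.9):
    dp = np.diff(simulate(theta))
    print(f"theta = {theta:.1f}   volatility of price changes = {dp.std():.2f}"
          f"   lag-1 autocorrelation = {np.corrcoef(dp[:-1], dp[1:])[0, 1]:+.2f}")
# Extrapolation both amplifies volatility and makes price changes positively
# autocorrelated, producing long, drawn-out up-and-down swings of the kind
# that fully rational-expectations versions struggle to generate.
```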

Related work, equally interesting, is that of Paolo Gelain, Kevin J. Lansing and Caterina Mendicino, described in this working paper of the Federal Reserve Bank of San Francisco. This paper does much the same thing as the one I just discussed, though in the context of the housing market. It uses a DSGE model with trend-following expectations for some of the agents to explore how a government might best try to keep housing bubbles in check: through changes in interest rates, restrictions on leverage (i.e. how much a potential home buyer can borrow relative to the house value), or restrictions on how much they can borrow relative to income. The latter seems to work best. As they summarize:
Standard DSGE models with fully-rational expectations have difficulty producing large swings in house prices and household debt that resemble the patterns observed in many industrial countries over the past decade. We show that the introduction of simple moving-average forecast rules for a subset of agents can significantly magnify the volatility and persistence of house prices and household debt relative to otherwise similar model with fully-rational expectations. We evaluate various policy actions that might be used to dampen the resulting excess volatility, including a direct response to house price growth or credit growth in the central bank’s interest rate rule, the imposition of a more restrictive loan-to-value ratio, and the use of a modified collateral constraint that takes into account the borrower’s wage income. Of these, we find that a debt-to-income type constraint is the most effective tool for dampening overall excess volatility in the model economy. 
Again, this is really interesting stuff, worthwhile research, economics that is moving, to my mind, in the right direction, showing us what we should expect to be possible in an economy once we take the realistic and highly heterogeneous behaviour of real people into account.
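A back-of-the-envelope calculation shows why the income-based constraint is the one that bites during a boom (the cap values below are purely illustrative, not taken from the paper):

```python
def borrowing_limits(house_price, income, ltv_cap=0.9, dti_cap=4.5):
    """Maximum loan under a loan-to-value cap versus a debt-to-income cap."""
    return ltv_cap * house_price, dti_cap * income

income = 60_000
for house_price in (200_000, 300_000, 450_000):   # prices rising, income flat
    ltv_limit, dti_limit = borrowing_limits(house_price, income)
    print(f"house price {house_price:>7,}:  LTV limit {ltv_limit:>9,.0f}"
          f"   DTI limit {dti_limit:>9,.0f}   binding limit {min(ltv_limit, dti_limit):>9,.0f}")
# During a boom the collateral-based (LTV) limit rises one-for-one with house
# prices, so rising prices relax the constraint and feed back into yet more
# borrowing; the income-based limit stays put, which is the intuition for
# why the debt-to-income rule damps the cycle in the model.
```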

So there. I've said some not so nasty things about DSGE models! Now I think I need a stiff drink.

*** UPDATE ***

One other thing to mention. I'm happy to see this kind of work, and I applaud those doing it. But I do seriously doubt whether embedding the idea of trend following inside a DSGE model does anything to teach us about why markets often undergo bubble-like phenomena and have quite strong fluctuations in general. Does the theoretical framework add anything?

Imagine someone said the following to you:
 "Lots of people, especially in financial markets and the housing market, are prone to speculating and buying in the hope of making a profit when prices go up. This becomes more likely if people have recently seen prices rising, and their friends making profits. This situation  can lead to herding type behavior where many people act similarly and create positive feedbacks and asset bubbles, which eventually crash back to reality. The problem is generally made worse, for obvious reasons, if people can borrow very easily to leverage their investment..." 
I think most people would say "yes, of course." I suspect that many economists would also. This explanation, couched in words, is for me every bit as convincing as the similar dynamic wrapped up in the framework of DSGE. Indeed, it is even more convincing as it doesn't try to jump awkwardly through a series of bizarre methodological hoops along the way. In this sense, DSGE seems more like a straitjacket than anything else. I can't see how it adds anything to the plausibility of a story.

So, I guess, sorry for the title of this post. Should have been "What you can learn from DSGE: things you would be much better off learning elsewhere."