When things like this [cycles] happen in nature - like the Earth going around the Sun, or a ball bouncing on a spring, or water undulating up and down - it comes from some sort of restorative force. With a restorative force, being up high is what makes you more likely to come back down, and being low is what makes you more likely to go back up. Just imagine a ball on a spring; when the spring is really stretched out, all the force is pulling the ball in the direction opposite to the stretch. This causes cycles.

I think this is interesting and deserves some further discussion. Take an ordinary pendulum. Give such a system a kick and it will swing for a time, but eventually the motion will damp away. For a while, high now does portend low in the near future, and vice versa. But this pendulum won't start swinging this way on its own, nor will it persist in swinging over long periods of time unless repeatedly kicked by some external force.
It's natural to think of business cycles this way. We see a recession come on the heels of a boom - like the 2008 crash after the 2006-7 boom, or the 2001 crash after the late-90s boom - and we can easily conclude that booms cause busts.
So you might be surprised to learn that very, very few macroeconomists think this! And very, very few macroeconomic models actually have this property.
In modern macro models, business "cycles" are nothing like waves. A boom does not make a bust more likely, nor vice versa. Modern macro models assume that what looks like a "cycle" is actually something called a "trend-stationary stochastic process" (like an AR(1)). This is a system where random disturbances ("shocks") are temporary, because they decay over time. After a shock, the system reverts to the mean (i.e., to the "trend"). This is very different from harmonic motion - a boom need not be followed by a bust - but it can end up looking like waves when you graph it...
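To see the difference concretely, here's a minimal sketch (in Python, with made-up parameter values) contrasting the two kinds of process: a trend-stationary AR(1), where deviations from trend simply decay after a shock, and a damped, occasionally kicked oscillator, where a restorative force drags the system back through the trend and beyond.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
shocks = rng.normal(scale=1.0, size=T)

# Trend-stationary AR(1): deviations from trend decay geometrically,
# so a boom (a positive deviation) simply fades -- it does not cause a bust.
rho = 0.9                      # persistence of shocks (assumed value)
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = rho * ar1[t - 1] + shocks[t]

# Damped harmonic oscillator with occasional kicks: a restorative force
# pulls the state back through zero, so high today genuinely portends low later.
k, damping = 0.1, 0.05         # spring constant and friction (assumed values)
x, v = np.zeros(T), np.zeros(T)
for t in range(1, T):
    accel = -k * x[t - 1] - damping * v[t - 1] + 0.1 * shocks[t]
    v[t] = v[t - 1] + accel
    x[t] = x[t - 1] + v[t]

# The autocorrelation at roughly half an oscillator period tells them apart:
# positive for the AR(1), negative for the oscillator.
lag = int(np.pi / np.sqrt(k))
print("AR(1) autocorrelation:      ", round(np.corrcoef(ar1[:-lag], ar1[lag:])[0, 1], 2))
print("Oscillator autocorrelation: ", round(np.corrcoef(x[:-lag], x[lag:])[0, 1], 2))
```

Both series can look wavy on a chart, but only the oscillator has the "boom portends bust" signature; the AR(1) just drifts back to trend.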
Minsky conjectures that financial markets begin to build up bubbles as investors become increasingly overconfident about markets. They begin to take more aggressive positions, and can often start to increase their leverage as financial prices rise. Prices eventually reach levels which cannot be sustained either by correct or by any reasonable forecast of future income streams on assets. Markets reach a point of instability, and the overextended investors must now begin to sell, and are forced to quickly deleverage in a fire-sale-like situation. As prices fall, market volatility increases, and investors further reduce risky positions. The story that Minsky tells seems compelling, but we have no agreed-on approach for how to model this, or whether all the pieces of the story will actually fit together. The model presented in this paper tries to bridge this gap.

The model is, in crude terms, like many I've described earlier on this blog. The agents are adaptive and try to learn the most profitable ways to behave. They are also heterogeneous in their behavior -- some rely more on perceived fundamentals to make their investment decisions, while others follow trends. The agents respond to what has recently happened in the market, and the market reality emerges out of their collective behavior. That reality, in some of the runs LeBaron explores, shows natural, irregular cycles of bubbles and subsequent crashes of the sort Minsky envisioned. The figure below, for example, shows the stock price, weekly returns and trading volume as they fluctuate over a 10-year period of the model:
The large amount of wealth in the adaptive strategy relative to the fundamental is important. The fundamental traders will be a stabilizing force in a falling market. If there is not enough wealth in that strategy, then it will be unable to hold back sharp market declines. This is similar to a limits-to-arbitrage argument. In this market without borrowing, the fundamental strategy will not have sufficient wealth to hold back a wave of self-reinforcing selling coming from the adaptive strategies.

Another important point, which LeBaron mentions in the paragraph above, is that there's no leverage in this model. People can't borrow to amplify investments they feel especially confident of. Leverage of course plays a central role in the instability mechanism described by Minsky, but it doesn't seem to be absolutely necessary to get this kind of instability. It can come solely from the interaction of different agents following distinct strategies.
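LeBaron's model is considerably richer than anything I can reproduce here, but a bare-bones sketch conveys the kind of interaction involved: fundamentalist demand pulls the price toward a perceived fundamental value, trend-following demand pushes it in the direction of recent moves, and the relative wealth of the two groups decides how far the market can stray. All numbers below are assumptions chosen purely for illustration.

```python
import numpy as np

def simulate(w_fund, w_trend, T=2000, fundamental=100.0, impact=0.1, seed=1):
    """Toy market with two strategy types; returns the price path."""
    rng = np.random.default_rng(seed)
    price = np.full(T, fundamental)
    for t in range(2, T):
        # Fundamentalists buy below fundamental value, sell above (stabilizing).
        demand_fund = w_fund * (fundamental - price[t - 1])
        # Trend followers extrapolate the latest price move (destabilizing).
        demand_trend = w_trend * (price[t - 1] - price[t - 2])
        # Excess demand moves the price, plus some noise-trader buffeting.
        price[t] = (price[t - 1]
                    + impact * (demand_fund + demand_trend)
                    + rng.normal(scale=0.5))
    return price

for w_fund, w_trend in [(0.7, 0.3), (0.1, 0.9)]:
    p = simulate(w_fund, w_trend)
    print(f"fundamental wealth {w_fund:.1f}: "
          f"typical deviation from fundamental = {np.std(p - 100.0):.2f}")
```

In the actual model the weights are not fixed: agents shift wealth toward whatever strategy has recently performed well, and it is that adaptation which generates the long, slow rises and sudden crashes. The fixed-weight version above just isolates the stabilizing role of the fundamental strategy's wealth.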
The dynamics are dominated by somewhat irregular swings around fundamentals that show up as long, persistent changes in the price/dividend ratio. Prices tend to rise slowly, and then crash fast and dramatically with high volatility and high trading volume. During the slow, steady price rise, agents using similar volatility forecast models begin to lower their assessment of market risk. This drives them to be more aggressive in the market, and sets up a crash. All of this is reminiscent of the Minsky market instability dynamic, and of other more modern approaches to financial instability.
Instability in this market is driven by agents steadily moving to more extreme portfolio positions. Much, but not all, of this movement is driven by risk assessments made by the traders. Many of them continue to use models with relatively short horizons for judging market volatility. These beliefs appear to be evolutionarily stable in the market. When short-term volatility falls, they extend their positions into the risky asset, and this eventually destabilizes the market. Portfolio composition varying from all cash to all equity yields very different dynamics in terms of forced sales in a falling market. As one moves more into cash, a market fall generates natural rebalancing: stabilizing purchases of the risky asset. This disappears as agents move more of their wealth into the risky asset, and it would reverse if they began to leverage this position with borrowed money. In that case a market fall will generate the typical destabilizing fire-sale behavior shown in many models, and part of the classic Minsky story. Leverage can be added to this market in the future, but for now the important point is that leverage per se is not necessary for market instability; it is part of a continuum of destabilizing dynamics.
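That last point, about the continuum from stabilizing rebalancing to destabilizing forced selling, is easy to make concrete. Here's a small sketch (all numbers assumed) of the trade a portfolio with a fixed target exposure makes after a 10% price fall: a half-cash portfolio buys into the decline, an all-equity portfolio does nothing, and a leveraged portfolio has to sell.

```python
def trade_after_fall(equity_weight_target, wealth=100.0, price_drop=0.10, leverage=1.0):
    """Shares bought (+) or sold (-) to restore a target risky-asset weight
    after a price fall. leverage > 1 means the position is partly financed
    with borrowed money (the debt stays fixed as the price moves)."""
    price0, price1 = 1.0, 1.0 - price_drop
    risky_value0 = wealth * equity_weight_target * leverage
    debt = risky_value0 - wealth * equity_weight_target   # borrowed portion
    cash = wealth - wealth * equity_weight_target
    shares0 = risky_value0 / price0

    # Mark to market after the fall.
    risky_value1 = shares0 * price1
    net_wealth1 = risky_value1 + cash - debt
    # Restore the target exposure relative to (possibly leveraged) net wealth.
    target_value1 = net_wealth1 * equity_weight_target * leverage
    return (target_value1 - risky_value1) / price1        # shares to trade

for label, w, lev in [("50/50, unlevered", 0.5, 1.0),
                      ("all equity",       1.0, 1.0),
                      ("2x levered",       1.0, 2.0)]:
    print(f"{label:18s}: trade = {trade_after_fall(w, leverage=lev):+.2f} shares")
```

The sign of the trade flips exactly as LeBaron describes: the same rebalancing rule that stabilizes a market of cash-heavy investors destabilizes one full of leveraged ones.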
Q (Baum): So how did adaptive expectations morph into rational expectations?
A (Phelps): The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.
Q: And what's the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted -- thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.
Q: So rather than live with variability, write a formula in stone!
A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.

I think this is exactly the issue: "fear of uncertainty". No science can be effective if it aims to banish uncertainty by theoretical fiat. And this is what really makes rational expectations economics stand out as crazy when compared to other areas of science and engineering. It's a short interview, well worth a quick read.
Since the beginning of banking, the ability of a lender to assess the riskiness of a potential borrower has been essential. In a rational world, the result of this assessment determines the terms of a lender-borrower relationship (risk premium), including the possibility that no deal would be established in case the borrower appears to be too risky. When a potential borrower is a node in a lending-borrowing network, the node’s riskiness (or creditworthiness) not only depends on its financial conditions, but also on those who have lending-borrowing relations with that node. The riskiness of these neighboring nodes depends on the conditions of their neighbors, and so on. In this way the concept of risk loses its local character between a borrower and a lender, and becomes systemic.

In this connection, recall Alan Greenspan's famous admission that he had trusted in the ability of rational bankers to keep markets working by controlling their counterparty risk. As he put it in 2008,
The assessment of the riskiness of a node turns into an assessment of the entire financial network [1]. Such an exercise can only be carried out with information on the asset-liability network. This information is, up to now, not available to individual nodes in that network. In this sense, financial networks – the interbank market in particular – are opaque. This opacity makes it impossible for individual banks to make rational decisions on lending terms in a financial network, which leads to a fundamental principle: opacity in financial networks rules out the possibility of rational risk assessment, and consequently, transparency, i.e. access to system-wide information, is a necessary condition for any systemic risk management.
"Those of us who have looked to the self-interest of lending institutions to protect shareholder's equity -- myself especially -- are in a state of shocked disbelief."The trouble, at least partially, is that no matter how self-interested those lending institutions were, they couldn't possibly have made the staggeringly complex calculations required to assess those risks accurately. The system is too complex. They lacked necessary information. Hence, as Thurner and Poledna point out, we might help things by making this information more transparent.
In most developed countries interbank loans are recorded in the ‘central credit register’ of Central Banks, that reflects the asset-liability network of a country [5]. The capital structure of banks is available through standard reporting to Central Banks. Payment systems record financial flows with a time resolution of one second, see e.g. [6]. Several studies have been carried out on historical data of asset-liability networks [7–12], including overnight markets [13], and financial flows [14].

I wrote a little about this DebtRank idea here. It's a computational algorithm applied to a financial network which offers a means to assess systemic risks in a coherent, self-consistent way; it brings network effects into view. The technical details aren't so important, but the original paper proposing the notion is here. The important thing is that the DebtRank algorithm, along with the data provided to central banks, makes it possible in principle to calculate a good estimate of the overall systemic risk presented by any bank in the network.
Given this data, it is possible (for Central Banks) to compute network metrics of the asset-liability matrix in real time, which, in combination with the capital structure of banks, allows one to define a systemic risk-rating of banks. A systemically risky bank in the following is a bank that – should it default – will have a substantial impact (losses due to failed credits) on other nodes in the network. The idea of network metrics is to systematically capture the fact that, by borrowing from a systemically risky bank, the borrower also becomes systemically more risky, since its default might tip the lender into default. These metrics are inspired by PageRank, where a webpage that is linked to a famous page gets a share of the ‘fame’. A metric similar to PageRank, the so-called DebtRank, has been recently used to capture systemic risk levels in financial networks [15].
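The full DebtRank algorithm is spelled out in the papers cited above, but the flavour of such a network metric is easy to convey. Below is a simplified, DebtRank-style sketch (not the actual algorithm, and run on a made-up four-bank network): the systemic importance of a bank is the total capital loss that its default would propagate, through interbank exposures, to everyone else.

```python
import numpy as np

# Toy interbank network (all numbers assumed):
# exposure[j, i] = amount bank j has lent to bank i, i.e. what j loses if i fails.
exposure = np.array([[0., 10.,  0.,  5.],
                     [2.,  0.,  8.,  0.],
                     [0.,  4.,  0.,  6.],
                     [3.,  0.,  2.,  0.]])
capital = np.array([12.0, 9.0, 7.0, 5.0])        # equity capital of each bank

# Relative loss j suffers if i defaults, capped at a total wipe-out of j.
W = np.minimum(1.0, exposure / capital[:, None])

def systemic_impact(start, W, capital, rounds=20):
    """Capital destroyed elsewhere in the network if bank `start` defaults,
    estimated by propagating distress along interbank exposures."""
    n = len(capital)
    h = np.zeros(n)            # accumulated distress of each bank, in [0, 1]
    delta = np.zeros(n)        # distress not yet passed on to creditors
    h[start] = delta[start] = 1.0
    for _ in range(rounds):
        new_delta = np.zeros(n)
        for i in range(n):
            if delta[i] == 0:
                continue
            for j in range(n):
                if j == start or W[j, i] == 0:
                    continue
                inc = min(delta[i] * W[j, i], 1.0 - h[j])   # distress capped at 1
                h[j] += inc
                new_delta[j] += inc
        delta = new_delta
        if delta.sum() < 1e-9:
            break
    return float(np.sum(h * capital) - capital[start])      # damage to the others

for bank in range(len(capital)):
    print(f"bank {bank}: capital at risk elsewhere if it defaults = "
          f"{systemic_impact(bank, W, capital):.1f}")
```

Running this for every bank gives a ranking of systemic riskiness, which is all the proposed regulation needs.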
The idea is to reduce systemic risk in the IB network by not allowing borrowers to borrow from risky nodes. In this way systemically risky nodes are punished, and an incentive is established for nodes to be low in systemic riskiness. Note that lending to a systemically dangerous node does not increase the systemic riskiness of the lender. We implement this scheme by making the DebtRank of all banks visible to those banks that want to borrow. The borrower sees the DebtRank of all its potential lenders, and is required (that is the regulation part) to ask the lenders for IB loans in the order of their inverse DebtRank. In other words, it has to ask the least risky bank first, then the second least risky one, etc. In this way the most risky banks are kept from (profitable) lending opportunities until they reduce their liabilities over time, which makes them less risky. Only then will they find lending possibilities again. This mechanism has the effect of distributing risk homogeneously through the network.

The overall effect in the interbank market would be -- in an idealized model, at least -- to make systemic banking collapses much less likely. Thurner and Poledna ran a number of agent-based simulations to test out the dynamics of such a market, with encouraging results. The model involves banks, firms and households and their interactions; details are in the paper for those interested. The bottom line, as illustrated in the figure below, is that cascading defaults through the banking system become much less likely. Here the red shows the statistical likelihood, over many runs, of banking cascades of varying size (number of banks involved) when borrowing banks choose their counterparties at random; this is the "business as usual" situation, akin to the market today. In contrast, the green and blue show the same distribution if borrowers instead sought counterparties so as to avoid those with high values of DebtRank (green and blue for slightly different conditions). Clearly, system-wide problems become much less likely.
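The borrower-side rule itself is almost trivially simple to state. Here's a hypothetical sketch (function name and numbers my own) of how a borrower would work down the published risk scores, least risky lender first, until the requested amount is covered.

```python
def order_lenders(risk_score, requested, available):
    """Lenders a borrower should approach, least systemically risky first,
    stopping once the requested amount is covered.
    risk_score: dict bank -> systemic risk score (e.g. a DebtRank-like metric)
    available:  dict bank -> credit that bank is willing to extend"""
    plan, remaining = [], requested
    for bank in sorted(available, key=lambda b: risk_score[b]):
        if remaining <= 0:
            break
        take = min(available[bank], remaining)
        plan.append((bank, take))
        remaining -= take
    return plan

# Illustrative scores and credit limits (assumed numbers).
scores = {"A": 14.2, "B": 6.1, "C": 9.8, "D": 3.5}
limits = {"A": 10.0, "B": 4.0, "C": 8.0, "D": 5.0}
print(order_lenders(scores, requested=12.0, available=limits))
# -> [('D', 5.0), ('B', 4.0), ('C', 3.0)]: the riskiest bank, A, gets no business.
```

In Thurner and Poledna's simulations the analogous rule is what starves the riskiest banks of interbank business until they deleverage.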
Financial products are socially beneficial when they help people insure risks, but when these same products are used for gambling they can instead be socially detrimental. The difference between insurance and gambling is that insurance enables people to reduce the risk they face, whereas gambling increases it. A person who purchases financial products in order to insure herself essentially pays someone else to take a risk on her behalf. The counterparty is better able to absorb the risk, typically because she has a more diversified investment portfolio or owns assets whose value is inversely correlated with the risk taken on. By contrast, when a person gambles, that person exposes herself to increased net risk without offsetting a risk faced by a counterparty: she merely gambles in hopes of gaining at the expense of her counterparty or her counterparty's regulator. As we discuss below, gambling may have some ancillary benefits in improving the information in market prices. However, it is overwhelmingly a negative-sum activity, which, in the aggregate, harms the people who engage in it, and which can also produce negative third-party effects by increasing systemic risk in the economy.

Of course, putting such a thing into practice would be difficult. But it's also difficult to clean up the various messes that occur when financial products lead to unintended consequences. Difficult isn't an argument against doing something worthwhile. The paper I mentioned makes considerable effort to explore how the insurance benefits and the gambling costs associated with a new instrument might be estimated. And maybe a direct analogy to the FDA isn't the right thing at all. Can a panel of experts estimate the costs and benefits accurately? Maybe not. But there might be sensible ways to bring certain new products under scrutiny once they have been put into wide use. And products needn't be banned either -- merely regulated, and their use reviewed, to avoid the rapid growth of large systemic risks. Of course, steps were taken in the late 1990s (by Brooksley Born, most notably) to regulate derivatives markets much more closely. Those steps were quashed by the finance industry through the actions of Larry Summers, Alan Greenspan and others. Had there been something like an independent FDA-like body for finance, things might have turned out less disastrously.
This basic point has long been recognized, but has had little influence on modern discussions of financial regulation. Before the 2008 financial crisis, the academic and political consensus was that financial markets should be deregulated. This consensus probably rested on pragmatic rather than theoretical considerations: the U.S. economy had grown enormously from 1980 to 2007, and this growth had taken place at the same time as, and seemed to be connected with, the booming financial sector, which was characterized by highly innovative financial practices. With the 2008 financial crisis, this consensus came to an end, and since then there has been a significant retrenchment, epitomized by the passage of the Dodd-Frank Act, which authorizes regulatory agencies to impose significant new regulations on the financial industry.
The most important task of monetary policy is surely to help avert the worst outcomes of macroeconomic instability – prolonged depression, financial panics and high inflations. And it is here that central banks are most in need of help from modern macroeconomic theory. Central bankers need to understand what are the limits to stability of a modern market economy, under what circumstances is the economy likely to spin out of control without active intervention on the part of the central bank, and what kinds of policies are most useful for restoring macroeconomic stability when financial markets are in disarray.

Right on, in my opinion, although I think Peter is perhaps being rather too kind to the macroeconomic learning work, which it seems to me takes a rather narrow and overly restricted perspective on learning, as I've mentioned before. At least it is a small step in the right direction. We need bigger steps, and more people taking them. And perhaps a radical and abrupt defunding of traditional macroeconomic research (theory, not data, of course, and certainly not history) right across the board. The response of most economists to critiques of this kind is to say, well, ok, we can tweak our rational expectations equilibrium models to include some of this stuff. But this isn't nearly enough.
But it is also here that modern macroeconomic theory has the least to offer. To understand how and when a system might spin out of control we would need first to understand the mechanisms that normally keep it under control. Through what processes does a large complex market economy usually manage to coordinate the activities of millions of independent transactors, none of whom has more than a glimmering of how the overall system works, to such a degree that all but 5% or 6% of them find gainful employment, even though this typically requires that the services each transactor performs be compatible with the plans of thousands of others, and even though the system is constantly being disrupted by new technologies and new social arrangements? These are the sorts of questions that one needs to address to offer useful advice to policy makers dealing with systemic instability, because you cannot know what has gone wrong with a system if you do not know how it is supposed to work when things are going well.
Modern macroeconomic theory has turned its back on these questions by embracing the hypothesis of rational expectations. It must be emphasized that rational expectations is not a property of individuals; it is a property of the system as a whole. A rational expectations equilibrium is a fixed point in which the outcomes that people are predicting coincide (in a distributional sense) with the outcomes that are being generated by the system when they are making these predictions. Even blind faith in individual rationality does not guarantee that the system as a whole will find this fixed point, and such faith certainly does not help us to understand what happens when the point is not found. We need to understand something about the systemic mechanisms that help to direct the economy towards a coordinated state and that under normal circumstances help to keep it in the neighborhood of such a state.
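Howitt's distinction between individual rationality and a system-level fixed point can be made concrete with a toy model (my own illustration, not his). Suppose the outcome depends on the average forecast, say p_t = a + b * forecast_t + noise. The rational expectations equilibrium is the fixed point p* = a / (1 - b), where forecasts and outcomes coincide. Whether simple adaptive learners actually find that fixed point depends on the system parameter b, not on how clever any individual forecaster is.

```python
import numpy as np

def learn(a, b, T=200, gain=0.1, seed=0):
    """Toy self-referential economy: the realized outcome depends on the
    forecast, p_t = a + b * forecast + noise, and agents nudge their
    forecast toward whatever they last observed (constant-gain learning)."""
    rng = np.random.default_rng(seed)
    forecast = np.zeros(T)
    for t in range(1, T):
        outcome = a + b * forecast[t - 1] + rng.normal(scale=0.1)
        forecast[t] = forecast[t - 1] + gain * (outcome - forecast[t - 1])
    return forecast

a = 1.0
for b in (0.5, 1.5):   # assumed values; the REE fixed point is a / (1 - b)
    path = learn(a, b)
    print(f"b = {b}: fixed point = {a / (1 - b):+.2f}, "
          f"forecast after learning = {path[-1]:+.2f}")
```

With b = 0.5 the learners settle on the fixed point; with b = 1.5 the same learners, each behaving just as sensibly, spiral away from it. Whether the fixed point is reached is a property of the system as a whole, which is exactly Howitt's point.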
Of course the macroeconomic learning literature of Sargent (1999), Evans and Honkapohja (2001) and others goes a long way towards understanding disequilibrium dynamics. But understanding how the system works goes well beyond this. For in order to achieve the kind of coordinated state that general equilibrium analysis presumes, someone has to find the right prices for the myriad of goods and services in the economy, and somehow buyers and sellers have to be matched in all these markets. More generally someone has to create, maintain and operate markets, holding buffer stocks of goods and money to accommodate other transactors’ wishes when supply and demand are not in balance, providing credit to deficit units with good investment prospects, especially those who are maintaining the markets that others depend on for their daily existence, and performing all the other tasks that are needed in order for the machinery of a modern economic system to function.
Needless to say, the functioning of markets is not the subject of modern macroeconomics, which instead focuses on the interaction between a small number of aggregate variables under the assumption that all markets clear somehow, that matching buyers and sellers is never a problem, that markets never disappear because of the failure of the firms that were maintaining them, and (until the recent reaction to the financial crisis) that intertemporal budget constraints are enforced costlessly. By focusing on equilibrium allocations, whether under rational or some other form of expectations, DSGE models ignore the possibility that the economy can somehow spin out of control. In particular, they ignore the unstable dynamics of leverage and deleverage that have devastated so many economies in recent years.
In short, as several commentators have recognized, modern macroeconomics involves a new ‘‘neoclassical synthesis,’’ based on what Clower and I (1998) once called the ‘‘classical stability hypothesis.’’ It is a faith-based system in which a mysterious unspecified and unquestioned mechanism guides the economy without fail to an equilibrium at all points in time no matter what happens. Is there any wonder that such a system is incapable of guiding policy when the actual mechanisms of the economy cease to function properly as credit markets did in 2007 and 2008?
Conventional economic modelling tools can extrapolate forward existing trends fairly well – if those trends continue. But they are as hopeless at forecasting a changing economic world as weather forecasts would be, if weather forecasters assumed that, because yesterday’s temperature was 29 degrees Celsius and today’s was 30, tomorrow’s will be 31 – and in a year it will be 395 degrees.
Of course, weather forecasters don’t do that. When the Bureau of Meteorology forecasts that the maximum temperature in Sydney on January 16 to January 19 will be respectively 29, 30, 35 and 25 degrees, it is reporting the results of a family of computer models that generate a forecast of future weather patterns that is, by and large, accurate over the time horizon the models attempt to predict – which is about a week.
Weather forecasts have also improved dramatically over the last 40 years – so much so that even an enormous event like Hurricane Sandy was predicted accurately almost a week in advance, which gave people plenty of time to prepare for the devastation when it arrived:
Almost five days prior to landfall, the National Hurricane Center pegged the prediction for Hurricane Sandy, correctly placing southern New Jersey near the centre of its track forecast. This long lead time was critical for preparation efforts from the Mid-Atlantic to the Northeast and no doubt saved lives.
Hurricane forecasting has come a long way in the last few decades. In 1970, the average error in track forecasts three days into the future was 518 miles. That error shrunk to 345 miles in 1990. From 2007-2011, it dropped to 138 miles. Yet for Sandy, it was a remarkably low 71 miles, according to preliminary numbers from the National Hurricane Center.
Within 48 hours, the forecast came into even sharper focus, with a forecast error of just 48 miles, compared to an average error of 96 miles over the last five years.
Meteorological model predictions are regularly attenuated by experienced meteorologists, who nudge numbers that experience tells them are probably wrong. But they start with a model of the weather that is fundamentally accurate, because it is founded on the proposition that the weather is unstable.
Conventional economic models, on the other hand, assume that the economy is stable, and will return to an 'equilibrium growth path' after it has been dislodged from it by some 'exogenous shock'. So most so-called predictions are instead just assumptions that the economy will converge back to its long-term growth average very rapidly (if your economist is a Freshwater type) or somewhat slowly (if he’s a Saltwater croc).
Weather forecasters used to be as bad as this, because they too used statistical models that assumed the weather was in or near equilibrium, and their forecasts were basically linear extrapolations of current trends.

How did weather forecasters get better? By recognizing, of course, the inherent role of positive feedbacks and instabilities in the atmosphere, and by developing methods to explore and follow the growth of such instabilities mathematically. That meant modelling in detail the actual fine-scale workings of the atmosphere and using computers to follow the interactions of those details. The same will almost certainly be true in economics. Forecasting will require both lots of data and also much more detailed models of the interactions among people, firms and financial institutions of all kinds, taking the real structure of networks into account, using real data to build models of behaviour, and so on. All this means giving up tidy analytical solutions, of course, and even computer models that insist the economy must exist in a nice tidy equilibrium. Science begins by taking reality seriously.