
Cambridge Endowment for Research in Finance (CERF)

 

Scott Guernsey, CERF Research Associate, December 2018

Reinvesting Market Power for the Betterment of Shareholders

On the supply side, highly competitive industries are generally characterized as having many firms and low barriers to entry. The first condition implies that existing firms cannot dictate or influence prices, and the second that new firms can enter markets at any time and at relatively low cost when incentivized to do so. Taken together, in equilibrium, this setting suggests that existing firms earn only enough revenue to remain competitive and cover their total costs of production.

Yet, in reality, most industries in the United States have become increasingly less competitive. For example, in the article “Are U.S. Industries Becoming More Concentrated?”, forthcoming in the Review of Finance, Gustavo Grullon (Rice University), Yelena Larkin (York University), and Roni Michaely (University of Geneva) find that more than 75% of U.S. industries experienced an increase in concentration over the past two decades.[1] As such, these industries are now composed of fewer firms, are less at risk of entry by newcomers, and earn “economic rents”, i.e., revenues in excess of what would be economically sufficient in a competitive environment. Given these developments, it is important for shareholders to understand how a reduction in competition might affect their holdings.

 In the article “Product Market Competition and Long-Term Firm Value: Evidence from Reverse Engineering Laws”, CERF Research Associate Scott Guernsey examines the value and investment policy implications of decreased product market competition for equity holders in the U.S. manufacturing industry.

To empirically analyze the relationship between competition and firm outcomes, Dr. Guernsey centers his study on anti-plug-mold (APM) laws, which were adopted by 12 U.S. states from 1978 to 1987 and subsequently overturned by a U.S. Supreme Court ruling in 1989. APM laws directly influenced the intensity of competition in product markets by protecting firms headquartered in the law-adopting states from competitors copying their products using a specific type of reverse engineering (RE)[2] – the “direct molding process”.

The direct molding process enabled competitors to circumvent the R&D and manufacturing costs incurred by the originating firm by using an already finished product to create a mold, which would then be used to produce duplicate items. For example, a boat manufacturer using this RE process would buy an existing boat on the open market, spray it with a mold-forming substance (e.g., fiberglass), and remove the original boat from the hardened substance, which would then become the mold used to produce replica boats. Under the protection of APM laws, however, firms were given legal recourse to stop competitors in any U.S. state from using the direct molding process to compete with their products.

Using the staggered adoptions of APM laws by different states in different years, Dr. Guernsey finds that firms located in states with RE protection experienced increases in their value, when compared to firms operating in the same industry but located in states without the laws. Moreover, when the APM laws were later overturned by a U.S. Supreme Court ruling, which found the state laws in conflict with federal patent law, he finds all of the previous value gains subside.

Next, Dr. Guernsey explores a possible economic explanation for the increase in value experienced by firms in less competitive industries. He finds evidence for the “innovation incentives” hypothesis, which posits that the economic rents APM-protected firms earn from increased market power are allocated to investments in new and existing production technologies. For instance, relative to industry rivals, firms located in APM-enacting states increase their investments in R&D and organizational capital.

Overall, Dr. Guernsey shows that a reduction in competition is value-enhancing for a subset of shareholders in the manufacturing industry, as it leads their firms to reinvest the spoils of market power back into the company.

References mentioned in this post

Grullon, G., Y. Larkin, and R. Michaely. 2018. Are US industries becoming more concentrated? Review of Finance, Forthcoming.

Gutiérrez, G., and T. Philippon. 2017. Declining competition and investment in the US. Unpublished Working Paper, National Bureau of Economic Research.

Kahle, K. M., and R. M. Stulz. 2017. Is the US public corporation in trouble? Journal of Economic Perspectives 31:67–88.

[1] Gutiérrez and Philippon (2017) and Kahle and Stulz (2017) also document evidence confirming the recent trend in rising U.S. industry concentration.

[2] The standard legal definition of reverse engineering in the U.S. is described as “starting with the known product and working backward to divine the process which aided in its development or manufacture.”

Adelphe Ekponon, CERF Research Associate, November 2018

Emerging Market Economies' Debt Is Growing... What to Expect?

After the 2008 financial crisis, central banks implemented accommodative monetary policies with the objective of revitalizing economic activity. As a consequence, many countries increased their borrowing in dollar- and euro-denominated debt, leading to an increase in debt-to-GDP ratios around the world. For example, this ratio averaged about 82% in Europe by the end of 2017, compared to 60% before the crisis, according to Eurostat.

The prime concern, however, currently lies with Emerging Market Economies (EMEs), for at least two reasons.

First, many emerging countries have increased their exposure to foreign debt (especially debt in hard currencies like the dollar or the euro). Their overall government debt as a percentage of GDP went from 41% to 51% between 2008 and 2017 (BIS Quarterly Review, September 2017). Over the same period, the government debt of EMEs doubled to reach $11.7 trillion, with foreign-currency debt also rising. The problem with foreign-currency debt is that governments cannot inflate it away, and difficulties in servicing it may be transmitted to the local-currency debt market.

Second, the US Federal Reserve and the European Central Bank are ending their accommodative monetary policies, which implies that interest rates will now be on the rise, and EMEs' borrowing costs along with them. Past experience shows that rising interest rates, in the US in particular, have been a trigger of many emerging-country debt crises. Before the EME debt crises in Latin America in 1980, Mexico in 1994 and Asia in 1997, US interest rates were rising after a period of remaining low.

Other factors, such as contagion or capital outflows, may worsen the situation even further.

In their paper “Macroeconomic Risk, Investor Preferences, and Sovereign Credit Spreads”, CCFin research associate Adelphe Ekponon and his co-authors explore the mechanism through which macroeconomic conditions, combined with global investors' risk aversion, drive countries' borrowing costs. According to this study, the link between economic conditions in the US and sovereign debt yields originates from the existence of a global business cycle, as countries tend, on average, to be in good or bad times around the same periods. They find that this global business cycle increases not only the risk of default but also governments' unwillingness to repay. The other mechanism is that investors' higher risk aversion amplifies these effects: risky-asset sell-offs are more pronounced in recessions, leading to a lower risk-free rate on average, to which governments optimally respond by issuing more debt.

It is likely that countries are going to discipline themselves in the coming months or years as borrowing costs surge… if there is no sudden switch to a global economic downturn. 

Pedro Saffi, CERF Fellow, November 2018

Predicting House Prices with Equity Lending Market Characteristics

Investors in financial markets must cope with a myriad of news arriving relentlessly every day. This information must be interpreted and used as efficiently as possible to update investment strategies. Many academics also spend their careers trying to identify variables (e.g. GDP growth, retail sales, unemployment) that can help forecast the behavior of financial market variables (e.g. stock returns, risk, and exchange rates). While less common, many articles show how financial market data can be used to predict the behavior of variables in the real economy.[1]

In the article “The Big Short: Short Selling Activity and Predictability in House Prices”, forthcoming in Real Estate Economics, CERF Fellow Pedro Saffi and research collaborator Carles Vergara-Alert (IESE Business School) look at how U.S. house prices can be better understood using a previously unexplored set of financial variables.

Investors can speculate on a decrease in prices using a strategy known as “short selling”. This involves borrowing the security from another investor, selling it at the current price, and repurchasing it in the future – hopefully at a lower price, to make a profit. The market to borrow shares is known as the equity lending market, a trillion-dollar part of the financial system that allows investors to borrow and lend the securities needed for short selling. While investors cannot bet on house price decreases by shorting houses directly, they can use a wide range of financial securities to do so. Dr Saffi uses data on short selling activity for a specific type of security whose returns are closely related to house prices – Real Estate Investment Trusts (REITs) – which are essentially portfolios of underlying real estate properties.

The authors' main hypothesis is that REITs are strongly correlated with housing market fundamentals, so an increase in REIT short selling activity can forecast decreases in housing prices – which is exactly what the authors find in the data. Furthermore, REITs invested in properties located in areas that experienced a housing boom during the expansion cycle of the 2000s are more sensitive to increases in short selling activity than REITs invested in properties located in areas that did not. The study divides the US property market into four regions – Northeast, Midwest, South and West – and classifies each month in each region as a “boom,” “average” or “downturn” period. Although during boom and average periods there is little correlation between REIT short selling and the subsequent month's housing prices, “the correlation is significantly positive during housing market downturns.”
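
To make the test concrete, here is a minimal sketch (my illustration, not the authors' code) of the regime-dependent predictability check described above: for each region, the short-selling measure in one month is correlated with the house price return in the following month, separately for boom, average and downturn periods. The column names (`month`, `region`, `regime`, `on_loan`, `hpi_return`) are assumptions for the example.

```python
import pandas as pd

def lagged_corr_by_regime(df: pd.DataFrame) -> pd.Series:
    """Correlation between this month's REIT short-selling measure and next
    month's house price return, computed separately for each market regime.

    Expected columns: 'region', 'month', 'regime' ('boom'/'average'/'downturn'),
    'on_loan' (short-selling proxy), 'hpi_return' (house price index return).
    """
    df = df.sort_values(["region", "month"]).copy()
    # Next month's house price return within each region
    df["next_hpi_return"] = df.groupby("region")["hpi_return"].shift(-1)
    # Correlation of short-selling activity with next-month returns, by regime
    return (
        df.dropna(subset=["next_hpi_return"])
          .groupby("regime")
          .apply(lambda g: g["on_loan"].corr(g["next_hpi_return"]))
    )
```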

Using these research findings, Dr. Saffi constructs a hedging strategy based on short selling intensity to reduce the downside risk of housing price decreases, showing that investors can limit their losses using REITs' equity lending data. The figure below (Figure 4 in the article) shows the cumulative returns of the trading strategy (based on using the On Loan variable as a proxy for short selling activity) relative to the performance of FHFA Housing Price index returns from July 2007 through July 2013. These results show the usefulness of the hedging strategy in limiting investor losses during the 2008 financial crisis in regions that experienced large house price run-ups in the years prior to 2007, i.e., the Northeast and West. Its performance is satisfactory for the South and absent for the Midwest, where a smaller house price run-up was observed over the same period. Panel B shows similar results when the performance is examined using diversified REITs to hedge against price decreases in the aggregate FHFA index.

Overall, short selling can be a useful tool for market participants to hedge against future price decreases. Regulators can track measures from the equity lending market to improve forecasts of house prices and implement policies to prevent real estate bubbles. Furthermore, imposing short selling constraints on stocks like REITs—which invest in assets subject to high transaction costs—matters for price efficiency and the dissemination of information.

References mentioned in this post

Ang, A., G. Bekaert and M. Wei. 2007. Do Macro Variables, Asset markets, or Surveys Forecast Inflation Better? Journal of Monetary Economics 54: 1163–1212.

Bailey, W. and K.C. Chan. 1993. Macroeconomic Influences and the Variability of the Commodity Futures Basis. Journal of Finance 48: 555–573.

Koijen, R.S., O. Van Hemert and S. Van Nieuwerburgh. 2009. Mortgage Timing. Journal of Financial Economics 93: 292–324.

Liew, J. and M. Vassalou. 2000. Can Book-to-Market, Size and Momentum be Risk Factors that Predict Economic Growth? Journal of Financial Economics 57: 221–245.

[1] For example, Liew and Vassalou (2000), Ang, Bekaert and Wei (2007), Koijen, Van Hemert and Van Nieuwerburgh (2009) and Bailey and Chan (1993) use financial market data to forecast economic growth, inflation, mortgage choices and commodities, respectively.

  

Scott B. Guernsey, CERF Research Associate, October 2018

Guaranteed Bonuses in High Finance: To Reward or Retain?

Public distaste for high finance reached an all-time high in March 2009, when the American International Group (AIG) insurance corporation announced it had paid out roughly $165 million in bonuses to employees of its London-based financial services division (AIG Financial Products). Only months earlier, the same company had received roughly $170 billion in U.S. taxpayer-funded bailout money and suffered a quarterly loss of $61.7 billion – the largest corporate loss on record. The then Chairman of the U.S. House Financial Services Committee, Barney Frank, remarked that payment of these bonuses was “rewarding incompetence”.

AIG countered, arguing that the bonuses had been pledged well before the start of the financial crisis and that it was legally committed to make good on the promised compensation. Additionally, Edward Liddy, who had been appointed chairman and CEO of AIG by the U.S. government, said the company could not “attract and retain” highly skilled labor if employees believed “their compensation was subject to continued…adjustment by the U.S. Treasury.” And AIG wasn't the only financial firm paying out large bonuses in 2009: at least nine other large financial institutions, which had similarly received U.S. government assistance, distributed bonuses in excess of $1 million each to nearly 5,000 of their bankers and traders.

But why would these financial corporations risk their reputational capital to pay out bonuses? And why not condition the size and timing of bonus payments on circumstances like those experienced during the 2008 financial crisis, rather than simply guaranteeing large bonuses a year or more in advance?

A recent research article presented at this year's Cambridge Corporate Finance Theory Symposium by Assistant Professor Brian Waters (University of Colorado Boulder) offers some interesting insight into these questions. To begin, the paper highlights three unique features of bonuses in the financial industry. First, unlike in most other industries, bonus payments to high finance professionals (e.g., traders, bankers, analysts) comprise a large share of their total compensation. In fact, as described in the paper, more than 35% of a first-year analyst's total pay is in the form of a bonus. This is further evidenced by the hefty bonuses of $1 million or more dispensed to bankers, traders and executives by large financial institutions (AIG included) in 2009.

Second, it seems as if bonus payments are largely guaranteed. For example, according to the paper, third-year analysts expect to receive a bonus of at least $75,000, with the possibility of earning a higher $95,000 bonus only if they perform exceptionally well. Moreover, as summarized above, AIG defended payment of its bonuses in March 2009 by arguing that they had been committed in advance and that it was obligated by law to fulfil this pledge. Third, observation of practice suggests financial institutions coordinate the timing of their bonuses by geography. For instance, in Europe almost all big banks determine bonuses in late February and early March, while U.S. banks do so in January. Again, this is consistent with AIG, although an American insurer, distributing bonuses to its London-based Financial Products division in March.

Considering these three stylized facts, Professor Waters (and co-author, Professor Edward D. Van Wesep) construct a mathematical model to explain why bonuses in high finance are both large and guaranteed. The general set-up of the model flows in the following manner. First, the authors assume that financial firms might find it difficult to recruit employees during certain months of the year (e.g., perhaps it is easy to replace employees in March, but difficult to do so in October). Second, in response to this periodic scarcity of labor, firms design contracts whereby large bonuses are paid during months with an abundance of talent (e.g., March), but condition the contracts such that employees must remain with the company until bonuses are paid to be eligible for this form of compensation.

Third, since financial firms operating in the same geography face similar labor market conditions, many of them will respond similarly, paying bonuses at the same time. Fourth, because employees are incentivized to remain with the firm until bonuses are paid, they will delay quitting until this point in time (i.e., this is when most employees leave their employers). Finally, this suggests labor markets will be flooded with talent after bonuses are paid (e.g., in March) but relatively shallow in other months (e.g., October). Hence we arrive back at the initial step of the model and the game repeats, providing an intuitive explanation for why large, guaranteed bonuses are observed in high finance, irrespective of macroeconomic conditions and a firm's own performance.

Yuan Li, CERF Research Associate, July 2018

How (in)efficient is the stock market?

In 2013, the Nobel committee split the economics prize between Eugene Fama – the pioneer of the efficient market hypothesis (EMH) – and Robert Shiller – a prominent critic of the EMH. This decision indicated that the Nobel committee agreed with both Fama and Shiller. Was the committee right? The answer is yes, according to my findings from a recent research project.

Fama explains the EMH as “the simple statement that security prices fully reflect all available information”. The empirical implication of this hypothesis is that, except for beta (a measure of a firm's systematic risk), no other publicly available information can be used to predict stock returns. However, the finance literature has found that many easily available firm characteristics, such as market capitalisation and the book-to-market ratio, are related to future stock returns. These are the so-called anomalies. Does the discovery of anomalies reject the EMH? Not necessarily, because no one knows what a firm's beta should be, and those firm characteristics may simply be proxies for beta. This is known as the joint hypothesis problem: we can say nothing about the EMH unless we know what the correct asset pricing model is. Sadly, we do not.

In this project, I get around the joint hypothesis problem. I assume that a firm's stock return is composed of two parts: a risk-induced return and a mispricing-induced return. Because of the joint hypothesis problem, we do not know what the risk-induced return is. However, we can estimate the mispricing-induced return (if there is any) using the forecasts issued by financial analysts. Analysts' earnings forecasts represent investors' expectations. More importantly, we know the actual earnings of a firm, and hence we can calculate the errors in analysts' forecasts, which represent investors' errors-in-expectations. We can then estimate the returns generated by investors' errors-in-expectations, that is, the mispricing-induced return. If the market were perfectly efficient, the mispricing-induced return would be zero. I calculate the fraction of an anomaly explained by mispricing as the ratio of the mispricing-induced return to the observed return. The fraction of an anomaly explained by risk is thus one minus this ratio.
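
In symbols (the notation here is mine, introduced only to summarise the description above), the decomposition and the two fractions are:

```latex
r_{\text{observed}} \;=\; r_{\text{risk}} + r_{\text{mispricing}},
\qquad
f_{\text{mispricing}} \;=\; \frac{r_{\text{mispricing}}}{r_{\text{observed}}},
\qquad
f_{\text{risk}} \;=\; 1 - f_{\text{mispricing}}
```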

I examine 195 anomalies. On average, the fraction explained by mispricing is 17.51%, suggesting that the major fraction of anomalies is not anomalous at all. This result may be disappointing to EMH critics, who seem to think that the stock market is extremely inefficient and that it is very easy to profit from anomalies. However, the good news for EMH critics is that the fraction explained by mispricing varies widely across anomalies. In particular, the momentum anomalies are almost completely explained by mispricing. Hence, trading on momentum anomalies is likely to generate abnormal returns. In contrast, the high returns from value strategies are almost entirely compensation for bearing high risk.

Dr. Hui Xu, CERF Research Associate, June 2018.

Contingent Convertibles: Do they do what they are supposed to do?

When Lehman Brothers was in deep water in September 2008, the U.S. federal government and the Federal Reserve decided not to bail it out, and several days later the company filed for Chapter 11 bankruptcy protection. Global markets immediately plummeted after the bankruptcy filing, and both the government and the central bank were accused of exacerbating investors' panic with that decision. Yet had they bailed Lehman out, they would have been accused for a different reason: using taxpayers' money to bail out a greedy and aggressive Wall Street giant.

The example illustrates the controversy and dilemma of bailouts faced by policymakers. Since the financial crisis, one priority for regulators has been to design a bail-in: an internal way to recapitalize distressed financial institutions and strengthen their balance sheets. Regulators hope it will become a substitute for bailouts. One way to deliver a swift and seamless bail-in is through the conversion of contingent convertible capital securities (CoCos).

CoCos are bonds issued by banks that either convert to new equity shares or experience a principal write-down following a triggering event. Because Basel III allows banks to meet part of the regulatory capital requirements with CoCo instruments, banks around the world issued a total of $521 billion in CoCos through 732 different issues between Jan 2009 and Dec 2015.

That being said, CoCos are still at an early stage, in the sense that there is no consensus on how to design them. Moreover, little research has studied the response of market participants. Studying that response can shed light on optimal CoCo design.

A recent research project by CERF research associate Hui (Frank) Xu studies the response of incumbent equity holders when CoCos are in place. It considers two types of CoCos: those that convert to common shares when the stock price falls below a pre-set target, and those that convert when the market capital ratio falls below a pre-set threshold. Surprisingly, the research shows that if conversion dilutes incumbent equity holders' security value, they will have a strong incentive to issue a large amount of debt just before the pre-set trigger point and accelerate the trigger of CoCo conversion. The intuition is that, since their equity value is diluted at conversion, they issue a large amount of debt and distribute the proceeds via dividends or share repurchases just before conversion, leaving the new equity holders and debt holders with much lower security values. Thus, the incumbent equity holders collect a one-time big payout at the cost of the new equity holders and debt holders.
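
As a minimal sketch (my illustration, not the paper's model, with hypothetical parameter names), the two conversion triggers discussed above can be written as simple boolean checks:

```python
def price_trigger_fires(stock_price: float, trigger_price: float) -> bool:
    """Type 1 CoCo: converts to common shares when the stock price falls
    below a pre-set target."""
    return stock_price < trigger_price

def capital_ratio_trigger_fires(equity_value: float, asset_value: float,
                                trigger_ratio: float) -> bool:
    """Type 2 CoCo: converts when the market capital ratio (market value of
    equity over total asset value) falls below a pre-set threshold."""
    return equity_value / asset_value < trigger_ratio

# Example with hypothetical numbers: a share price of 4.8 against a trigger of 5.0,
# and a 7% market capital ratio against an 8% threshold
print(price_trigger_fires(4.8, 5.0))                  # True
print(capital_ratio_trigger_fires(7.0, 100.0, 0.08))  # True
```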

This is certainly contrary to regulators' expectations. Regulators expect equity holders to improve their corporate management, risk-taking strategies and financial policies under the threat of CoCo conversion; equity holders benefiting themselves by destroying firm value under that threat is the last thing they want to see. The research therefore highlights the complexity of contingent convertible design, and the importance of taking market participants' responses into account when regulators propose a CoCo design.

Dr. Alex Tse, CERF Research Associate, May 2018.

Embrace the randomness

Excerpt from the CBS sitcom “The Big Bang Theory”, S05 E04:

Leonard: Are we ready to order?
Sheldon: One moment. I’m conducting an experiment.
Howard: With Dungeons and Dragons dice?
Sheldon: Yes. From here on in, I’ve decided to make all trivial decisions with a throw of the dice, thus freeing up my mind to do what it does best, enlighten and amaze. Page 14, item seven.
Howard: So, what’s for dinner?
Sheldon: A side of corn succotash. Interesting……

It sounds insane to let a die decide your fate. But we all know that our beloved physicist Dr Sheldon Cooper is not crazy (his mother had him checked!), so there must be some wisdom behind it. To a mainstream economist, adopting randomisation in a decision task seems to violate a fundamental economic principle – more is better. By surrendering to Tyche, the goddess of chance, we are essentially forgoing the valuable option to make a choice.

A well-known situation where randomised strategies are relevant is the game-theoretic setup where strategic interactions among players matter. A right-footed striker has a better chance of scoring a goal if he kicks left. A pure strategy of kicking left may not work out well though because the goalie who understands the striker’s edge will simply dive left. The optimal decisions of the two players thus always involve mixing between kicking/blocking left, right and middle etc. However, a very puzzling phenomenon is that individuals may still exhibit preference for deliberate randomisation even when there is no strategic motive. An example is a recent experimental study (Agranov and Ortoleva, Journal of Political Economy, 2017) which documents that a sizable fraction of lab participants are willing to pay a fee to flip a virtual coin to determine the type of lotteries to be assigned to them.

While the psychology literature offers a number of explanations (such as omission bias) to justify randomised strategies, how can we understand deliberate randomisation from an economic perspective? The golden paradigm of decision making under risk is the expected utility criterion, where a prospect is evaluated by the linear probability-weighted average of the utility values associated with its outcomes. There is no incentive to randomise the decision, as the linear expectation rule guides an agent to pick the highest-value option with 100% chance. However, when the agent's preference deviates from linear expectation, a stochastic mixture of prospects can be strictly better than the static decision of sticking to the highest-value prospect (Henderson, Hobson and Tse, Journal of Economic Theory, 2017). The rank-dependent utility model and prospect theory, commonly used in behavioural economics, are two notable non-expected utility frameworks under which randomised strategies are internally consistent with the agent's preference structure.
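
For concreteness, the contrast between the two evaluation rules can be written as follows (these are the textbook definitions, not formulas specific to the papers cited). With outcomes ordered x_1 ≤ x_2 ≤ … ≤ x_n and probabilities p_i, expected utility is linear in the probabilities, whereas rank-dependent utility builds decision weights from a probability weighting function w applied to cumulative probabilities:

```latex
EU(X) = \sum_{i=1}^{n} p_i \, u(x_i)

RDU(X) = \sum_{i=1}^{n}
  \left[ w\!\left(\sum_{j \ge i} p_j\right) - w\!\left(\sum_{j > i} p_j\right) \right] u(x_i)
```

When w is the identity function, the decision weights reduce to p_i and the two rules coincide; it is the non-linearity of w that opens the door to strict gains from randomisation.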

Incorporating non-linear probability weighting and randomised strategies leads to many potential economic implications. For example, consider a dynamic stopping task where an agent decides at each point in time whether to sell an asset. In a classical expected utility setup, there is no incentive for the agent to randomise between stopping and continuing. This implies the optimal trading strategy must be a threshold rule, where a sale occurs only when the asset price first breaches a certain upper or lower level. In reality, investors do not necessarily adopt this kind of threshold strategy, even in a well-controlled laboratory environment. For example, the asset price may have visited the same level multiple times before a participant decides to sell the asset (Strack and Viefers, SSRN working paper, 2014). While expected utility theory struggles to explain trading rules that go beyond the simple “stop-loss stop-gain” style of order, non-linear expectation and randomisation provide a modelling foundation for the more sophisticated investment strategies adopted by individuals in real life.

Dr. Yuan Li, CERF Research Associate, April 2018

Are analysts whose forecast revisions correlate less with prior stock price changes better information producers and monitors?

Financial analysts are important information intermediaries in the capital markets because they engage in private information search, perform prospective analyses aimed at forecasting firms' future earnings and cash flows, and conduct retrospective analyses that interpret past events (Beaver [1998]). The information produced by analysts is disseminated to capital market participants via analysts' research outputs, which mainly include earnings forecasts and stock recommendations. Prior academic studies suggest that the main role of an analyst is to supply private information that is useful to parties such as investors and managers. Therefore, an analyst's ability to produce relevant private information that is not already known to other parties is an important determinant of the analyst's value to the capital markets. Based on this notion, CERF research associate Yuan Li and her co-authors propose a simple and effective measure of analyst ability.

Our measure of analyst ability is calculated as one minus the correlation coefficient between the analyst’s forecast revisions and prior stock price changes within successive forecasts. Since prior stock price changes capture the incorporation of information that is already known to investors, any information in an analyst’s forecast revisions that is not correlated with prior stock price changes reflects the analyst’s private information. In other words, our measure captures the ability of an analyst to produce information that is not already incorporated into stock prices.
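
A minimal sketch of this computation, assuming a panel of forecasts with hypothetical column names (`analyst`, `forecast_revision`, `prior_price_change`), could look like this:

```python
import pandas as pd

def analyst_ability(forecasts: pd.DataFrame) -> pd.Series:
    """Ability = 1 - corr(forecast revision, stock price change since the prior
    forecast), computed per analyst. Higher values indicate revisions that are
    less explained by information already impounded in prices."""
    corr = forecasts.groupby("analyst").apply(
        lambda g: g["forecast_revision"].corr(g["prior_price_change"])
    )
    return 1.0 - corr
```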

We find that the stock price impact of forecast revisions issued by superior analysts identified by our measure is greater. We also find that firms covered by more superior analysts are less likely to engage in earnings management. These findings suggest that superior analysts identified by our measure are better information producers and monitors.

Dr. Jisok Kang, CERF Research Associate, March 2018

The Granular Effect of Stock Market Concentration on Market Portfolio Volatility

Ever since the Capital Asset Pricing Model (CAPM) was first introduced in 1964, a well-accepted notion in modern portfolio theory has been that the market portfolio contains only market, or systematic, risk, as firm-specific, non-systematic risk is diversified away.

Meanwhile, Xavier Gabaix, in a paper published in Econometrica in 2011 titled “The Granular Origins of Aggregate Fluctuations,” argues that idiosyncratic firm-specific shocks to large firms in an economy can explain a great portion of the variation in macroeconomic movements if the firm size distribution is fat-tailed. His argument implies that firm-specific shocks to large firms are granular in nature and may not be easily diversified away. He shows empirically that idiosyncratic movements by the largest 100 firms in the U.S. can explain roughly one third of the variation in the country's GDP growth, a phenomenon he dubs “the granular effect.”

Jisok Kang, a CERF research associate, shows in a recent research paper that stock market concentration – the degree to which the largest firms dominate the stock market – increases the volatility of the market portfolio. This finding implies that the idiosyncratic, firm-specific risk of large firms is granular in nature and not diversified away in the market portfolio. The finding is robust whether market portfolio volatility is defined using a value-weighted or an equal-weighted index.
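
As a rough illustration of the kind of relationship examined (a sketch under assumed inputs, not the paper's methodology), one can measure concentration each period as the combined market-cap share of the largest firms and compare it with the realized volatility of the market portfolio:

```python
import numpy as np
import pandas as pd

def concentration_top_k(market_caps: pd.Series, k: int = 100) -> float:
    """Combined market share of the k largest firms in one period's cross-section."""
    caps = market_caps.sort_values(ascending=False)
    return caps.iloc[:k].sum() / caps.sum()

def realized_volatility(market_returns: pd.Series) -> float:
    """Realized volatility of the (value- or equal-weighted) market return in a period."""
    return float(np.sqrt((market_returns ** 2).sum()))

# With one observation of each per period, the direction of the effect would show up
# as a positive time-series association:
# pd.Series(volatilities).corr(pd.Series(concentrations))
```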

In addition, stock market concentration causes other stock prices to co-move, which increases market portfolio volatility further. The incremental volatility caused by stock market concentration is “bad” volatility, in that the effect is more severe when the market portfolio return is negative.

Dr. Hui (Frank) Xu, February 2018

What caused the leverage cycle run-up to 2008 financial crisis?

The 2008 financial crisis had a far-reaching impact on financial markets and the real economy. Although academic researchers and public policymakers have reached a consensus that the financial crisis was rooted in a leverage cycle, they continue to debate the causes that led to that cycle. Initially, it was widely accepted that financial innovation and deregulation exacerbated agency problems, incentivizing financial intermediaries to issue consumer credit, including mortgage debt, without proper screening and monitoring (the “credit supply” channel). Recently, however, a growing empirical literature has proposed a “distorted beliefs” view of the crisis, demonstrating that investor over-optimism may have led to a rapid expansion of the credit market and increased asset prices in the run-up to the crisis (the “credit demand” channel). The financial crisis, like any other major economic event, probably has more than one cause, and both the credit demand and credit supply channels contributed to it. Indeed, the two views are not entirely mutually exclusive, and may reinforce each other.

However, one might still ask to what extent distorted beliefs caused the crisis. This question is interesting for both theoretical and practical reasons. First, economists have long known that distorted beliefs have important effects on the prices of financial assets, e.g., the risk-free rate and stock prices, but they still lack a full understanding of why distorted beliefs could cause massive defaults in 2008. Second, understanding what caused the financial crisis helps to create effective changes in policy. If it is largely an agency problem, policies to prevent similar crises would include requiring financial intermediaries to “put more skin in the game” and enforcing stricter screening and monitoring. If it is primarily a problem of distorted expectations and beliefs, preventative measures would include implementing macroprudential, financial-stability policies and improving information transparency.

One way to quantify the role of distorted beliefs in the financial crisis is to construct a dynamic general equilibrium model that features credit use and risk-taking by households based purely on distorted beliefs, effectively shutting down the agency-problem channel. The explanatory power of the model can then be examined by comparing the output of the calibrated model to real data. This is a research project by CERF research associate Hui (Frank) Xu.

The main findings of the paper support the distorted beliefs view of the financial crisis. The distorted beliefs view can explain the run-up in household leverage before the financial crisis. Quantitatively, distorted beliefs can account for more than half of the variation in the real interest rate during the crisis period.

Dr. Alex Tse, CERF Research Associate, February 2018

Transaction costs, consumption and investment

The theoretical modelling of individuals' consumption and investment behaviour is an important micro-foundation of asset pricing. Although this is a classical problem in the portfolio selection literature, analytical progress is very limited once the model is extended to a more realistic economy featuring transaction costs. The key obstacle thwarting our understanding in the frictional setup is the highly non-linear differential equation associated with the problem.

Using a judicious transformation scheme, CERF research associate Alex Tse and his collaborators David Hobson and Yeqi Zhu show that the underlying equation can be greatly simplified to a first-order system. Investigation of the optimal strategies can then be facilitated by a graphical representation involving a simple quadratic function that encodes the underlying economic parameters.

The approach offers a powerful tool to unlock a rich set of economic properties behind the problem. Under what economic conditions can we expect a well-defined trading strategy? How does the change in the market parameters affect the purchase and sale decisions of an individual? What are the quantitative impacts of transaction costs on the critical portfolio weights? While some features are known in the literature, there are also a number of surprising phenomena that have not been formally studied to date. For example, the transaction cost for purchase can be irrelevant to the upper boundary of the target portfolio weight in certain economic configurations.
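
For orientation, a standard benchmark from this literature (not the paper's specific characterisation) is the frictionless Merton weight: with constant relative risk aversion R, expected return μ, volatility σ and risk-free rate r, the optimal fraction of wealth held in the risky asset is constant, and the well-known qualitative result under proportional transaction costs is that trading occurs only at the boundaries of a no-trade region containing a point near this weight:

```latex
\pi^{*} \;=\; \frac{\mu - r}{R\,\sigma^{2}}
\qquad \text{(frictionless benchmark)}

\underline{\pi} \;\le\; \pi_t \;\le\; \overline{\pi}
\qquad \text{(no-trade region under proportional costs)}
```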

In a follow-up project, the methodology is further extended to a market consisting of a liquid asset and an illiquid asset, where transaction costs are payable on the latter. The research findings could serve as useful building blocks towards a more general theory of investment and asset pricing.