

The CERF blog features short articles about current research and other relevant topics, written by CERF’s Fellows and researchers.

Scott B. Guernsey, CERF Research Associate, March 2019

As described in the article “The Choice between Formal and Informal Intellectual Property: A Review”, published in the Journal of Economic Literature by Bronwyn Hall (University of California, Berkeley), Christian Helmers (Santa Clara University), Mark Rogers (Oxford University), and Vania Sena (University of Essex), the UK Community Innovation Survey suggests that most UK-based companies consider trade secrets one of the most effective mechanisms for protecting their intellectual property. Further, the recent enactment of the “Trade Secrets (Enforcement, etc.) Regulations 2018” (SI 2018 No. 597) indicates that UK policymakers are also concerned with protecting domestic trade secrets.

Loosely defined, trade secrets are configurations of closely held, confidential information (e.g., devices, formulas, methods, processes, programs, techniques), which are used in a firm’s operations, are not easily ascertainable by outside parties, and have commercial value for the holder because they are secret. Common examples include detailed information about a firm’s customer contact and price lists, computer algorithms, cost information, and business plans for future products and services.[1] Despite the simplicity of these examples, however, the opaque and intangible nature of trade secrets makes it challenging for investors to appropriately assess the risk profiles and fundamental values of companies that rely more heavily on secrecy.

As explained in the legal article “Bankruptcy in the Age of ‘Intangibility’: The Bankruptcies of Knowledge Companies” by Mathieu Kohmann (Harvard Law School), the difficulty in assessing the risk and value of trade secrets is even more alarming for creditors of financially distressed or defaulted firms. For one, trade secrets cannot generally be collateralized in debt contracts. Second, even if the secrets were pledgeable to lenders, they have no active secondary markets, making their redeployment and liquidation in bankruptcy costly and largely infeasible. Prior theoretical work in the financial economics literature further suggests that firms composed primarily of intangible assets (e.g., trade secrets) sustain less debt financing because these types of assets decrease the value that can be captured by lenders in the event of default.[2]

Motivated by the increasing importance of secrecy for firms and governments, and the corresponding difficulties borne by creditors of these types of firms, in the article “Keeping Secrets from Creditors: The Uniform Trade Secrets Act and Financial Leverage”, CERF Research Associate Scott Guernsey, and research collaborators Kose John (New York University) and Lubomir Litov (University of Oklahoma), examine the impact of stronger trade secrets protection on firms’ capital structure decision-making.

To empirically analyze the relationship between trade secrets protection and financial leverage, Dr. Guernsey focuses his study on the adoption of the Uniform Trade Secrets Act (UTSA) by 46 U.S. states from 1980 to 2013. The UTSA, much like the recent “Trade Secrets (Enforcement, etc) Regulations 2018” in the UK, improves the protection of trade secrets by codifying existing common law, standardizing its legal definition, detailing what constitutes illegal misappropriation (e.g., bribery, theft, espionage), and clarifying the rights and remedies of victimized firms (e.g., injunctive relief, damages, reasonable royalties). Using the staggered adoptions of the UTSA by different states in different years, the authors find that firms located in states with enhanced trade secrets protection reduce (increase) their use of debt (equity) financing, compared to firms operating in the same U.S. Census region[3] and sharing similar industry trends but headquartered in states without the laws’ protection.
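The research design can be summarized by a generalized difference-in-differences regression; the specification below is a sketch of the approach, not necessarily the authors’ exact model:

```latex
\mathrm{Leverage}_{ist} = \beta \,\mathrm{UTSA}_{st} + \gamma' X_{ist} + \alpha_i + \delta_{rt} + \varepsilon_{ist}
```

where $\mathrm{UTSA}_{st}$ equals one once state $s$ has adopted the act by year $t$, $X_{ist}$ collects firm-level controls, $\alpha_i$ are firm fixed effects, and $\delta_{rt}$ are Census-region-by-year fixed effects that absorb regional trends. A negative estimate of $\beta$ corresponds to the reported decline in debt financing.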

Next, Dr. Guernsey explores a possible economic explanation for the reduction in debt ratios experienced by firms located in states with the UTSA. The authors find evidence for the “asset pledgeability hypothesis”, which conjectures that stronger trade secrets protection incentivizes firms to increase their reliance on secrecy (and away from patents), which, correspondingly, increases intangibility and worsens contracting problems with creditors – since such assets are more difficult to redeploy and liquidate in secondary markets – ultimately leading to less borrowing. For instance, relative to industry rivals operating in similar geographical regions, firms located in UTSA-enacting states increase their investments in intangible assets and research and development (R&D), and experience decreases in the liquidation value of their assets and in their reliance on patents.

Overall, Dr. Guernsey’s findings provide important insights into how greater reliance on trade secrets affects corporate leverage decisions – indicating that companies with stronger protection choose to keep their secrets from creditors.


References mentioned in this post

Hall, B., C. Helmers, M. Rogers, and V. Sena. 2014. The choice between formal and informal intellectual property: A review. Journal of Economic Literature 52: 375-423.

Kohmann, M. 2017. Bankruptcy in the age of “intangibility”: The bankruptcies of knowledge companies. Unpublished Working Paper, Harvard Law School.

Long, M. S., and I. B. Malitz. 1985. Investment patterns and financial leverage. In: Corporate capital structures in the United States. University of Chicago Press, Chicago, IL, pp. 325-352.

Shleifer, A., and R. W. Vishny. 1992. Liquidation values and debt capacity: A market equilibrium approach. Journal of Finance 47: 1343-1366.

Williamson, O. E. 1988. Corporate finance and corporate governance. Journal of Finance 43: 567-591.

[1] For instance, the Coca-Cola soft drink recipe, Google’s search algorithm, McDonald’s Big Mac special sauce, and the New York Times Bestseller List are among the most famous examples of trade secrets.

[2] For example, see, Long and Malitz (1985), Williamson (1988), and Shleifer and Vishny (1992).

[3] The U.S. Census Bureau groups states into four census regions: Northeast, Midwest, South, and West.

Dr. Adelphe Ekponon, CERF Research Associate, February 2019

Long-term Economic Outlook and Equity Prices


The earliest asset pricing models (Capital Asset Pricing Models, or CAPM) postulated that the only risk needed to characterize a stock price is the contemporaneous correlation between the firm’s and the market portfolio’s returns. This implies that investors pay attention mostly to information about current economic conditions. Yet models that incorporate only this correlation risk tend to be unable to capture the dynamics of equity returns. Fama and French (1992) demonstrate empirically that the CAPM cannot explain the cross-section of average stock returns on portfolios sorted by size and book-to-market equity ratios.
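In its standard textbook form, the CAPM’s single-risk statement is:

```latex
E[R_i] - r_f = \beta_i \bigl( E[R_m] - r_f \bigr),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},
```

where $r_f$ is the risk-free rate and $R_m$ the return on the market portfolio; the contemporaneous correlation with the market, captured by $\beta_i$, is the only priced risk.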


An important strand of the literature has developed models that improve on the pricing performance of the CAPM via a consumption-based approach (CCAPM). The main innovation of CCAPM models lies in the introduction of macroeconomic conditions into asset pricing. According to these models, risk premia should be proportional to the consumption beta (the correlation between the firm’s profit and aggregate consumption). However, this line of CCAPM models is known to produce very low equity risk premia – less than 1% for reasonable levels of risk aversion – and these models are also rejected by several empirical tests.
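A textbook statement of the CCAPM’s central prediction (a standard formulation, not a formula taken from the papers cited here) replaces the market beta with a consumption beta:

```latex
E[R_i] - r_f \approx \gamma \, \operatorname{Cov}(R_i, \Delta c),
```

where $\gamma$ is the agent’s relative risk aversion and $\Delta c$ is aggregate consumption growth. Because consumption growth is smooth, the covariance term is tiny, which is why reasonable values of $\gamma$ deliver premia below 1%.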


Since then, two new features have been introduced into asset pricing. The first comes from the observation by Hamilton (1989) that shocks to US economic growth are not i.i.d., as growth rates may also shift between periods of high and low levels. Secondly, a new class of utility functions introduced by Epstein and Zin (1989) makes it possible to separate aversion to future economic uncertainty from aversion to the current correlation risk.


Bansal and Yaron (2004) and subsequent papers have successfully developed consumption-based models in which the representative agent has Epstein–Zin preferences. These models pave the way for disentangling the impact of long-run versus current correlation risks in stock prices. Additionally, they generate reasonable levels of equity risk premium and are able to explain several key asset pricing phenomena. Here, long-run risk (LRR) captures the unforecastable and persistent nature of future economic conditions and has two components: expected growth rate and volatility.


Building on this last strand of papers, Dorion, Ekponon, and Jeanneret (2019) propose a consumption-based structural approach, with endogenous default and debt policies, that allows them to investigate long-run and correlation risks both individually and in tandem. It is the first study to isolate and quantify, conditional on the state of the economy, the impact of LRR on equity prices.


They find an average equity risk premium of 1% in expansion versus 6% in recession. The paper also predicts that long-run risk accounts for about three-quarters of this premium and that its impact is countercyclical, exceeding 90% in recession. To reduce the impact of LRR, managers lessen the optimal amount of debt to issue and lower the default barrier. Despite these adjustments, LRR still governs the equity premium, leading to the above predictions.


Using U.S. stock prices, consumption growth (correlation risk), and expected economic growth rate and volatility (long-run risk) over the period from 1952 to 2016, the study confirms that LRR is priced in U.S. firms, particularly in bad times. The data show that the compensation for LRR represents around 70% of excess returns in a zero-investment portfolio that shorts stocks whose returns have a low correlation with expected growth rates (or a high correlation with expected growth volatility) and buys stocks with a high correlation with expected growth rates (or a low correlation with expected growth volatility). These results imply that LRR is a priced risk factor for equity.


Hence, investors are compensated for trading and holding stocks based on their sensitivity to future economic conditions. This result provides strong evidence that the long-run economic outlook is an important driver of the equity premium in the cross-section.



References mentioned in this post


Bansal, R. and Yaron, A. (2004), Risks for the long run: A potential resolution of asset pricing puzzles, Journal of Finance 59(4), 1481-1509.


Epstein, L. G. and Zin, S. E. (1989), Substitution, risk aversion, and the temporal behavior of consumption and asset returns: A theoretical framework, Econometrica 57(4), 937-69.


Fama, E. F. and French, K. R. (1992), The cross-section of expected stock returns, Journal of Finance 47(2), 427-65.


Hamilton, J. (1989), A new approach to the economic analysis of nonstationary time series and the business cycle, Econometrica 57(2), 357-84.




Dr. Hui Xu, CERF Research Associate, January 2019

Brexit: Investor Paranoia and the Financing Cost of Firms

Financial markets faced a bumpy ride in 2018. The Financial Times reports that global bond and equity markets shrank by $5tn last year. Two major risks disrupted the markets during the past year: the US–China trade dispute and Brexit. The two risks are essentially the same in nature: both would introduce new frictions and impediments into existing trade frameworks and unsettle investors’ nerves.

The risks may affect firms’ financing costs for real reasons. Take a no-deal Brexit as an example. First, a firm’s revenue can decline due to friction in the product market, especially for British firms that depend heavily on European markets. Second, friction in the labor market may increase a firm’s production costs. Both will adversely affect a firm’s cash flow and, consequently, its financing costs. However, Brexit might also increase a firm’s financing cost simply because investors become paranoid and exaggerate the adverse impacts of Brexit.

Yet to what extent does investor paranoia affect a firm’s financing cost? The question is interesting for two reasons. First, although economists have long assumed that investors are rational, empirical evidence has challenged this view. Answering this question not only adds to the evidence on irrationality but also quantifies its real impact on firms. Second, irrationality drives valuations away from fundamentals and, de facto, creates opportunities for arbitrage.

A work in progress by Dr. Xu, a research associate at the Cambridge Endowment for Research in Finance (CERF), and his co-authors examines the question via the yield difference between British corporate bonds maturing before and after March 29th, 2019, the date on which Great Britain is set to leave the European Union. The idea is simple. Take a corporate bond that matures one day before March 29th and an otherwise identical bond that matures one day after; if the yield of the latter is significantly higher, we can conclude that the yield difference captures the impact of investor paranoia on the firm’s debt financing cost. Even if Great Britain crashes out of the EU without a deal on March 29th, this can hardly affect a firm’s fundamentals, such as revenue and cost, within one day. Therefore, the only explanation for such a yield difference lies in investor paranoia.
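The bond-pair comparison described above can be sketched in a few lines of code. The issuers and yields below are invented for illustration; only the design – comparing otherwise-similar bonds maturing just before and just after March 29th, 2019 – comes from the study.

```python
from datetime import date

BREXIT_DATE = date(2019, 3, 29)  # scheduled EU exit date

# Hypothetical bond pairs: same issuer, near-identical bonds whose
# maturities straddle the Brexit date (illustrative numbers only).
bonds = [
    # (issuer, maturity date, yield in %)
    ("IssuerA", date(2019, 3, 28), 1.10),
    ("IssuerA", date(2019, 3, 30), 1.45),
    ("IssuerB", date(2019, 3, 27), 0.95),
    ("IssuerB", date(2019, 4, 2), 1.20),
]

def paranoia_spread(bonds):
    """Mean yield of bonds maturing after the Brexit date minus the mean
    yield of otherwise-similar bonds maturing before it.  A positive
    spread is the premium attributed to investor paranoia, since firm
    fundamentals can hardly change within a day or two."""
    pre = [y for _, maturity, y in bonds if maturity < BREXIT_DATE]
    post = [y for _, maturity, y in bonds if maturity >= BREXIT_DATE]
    return sum(post) / len(post) - sum(pre) / len(pre)
```

With these toy numbers the spread is 0.30 percentage points; in the actual study the analogous difference is estimated from a sample of British corporate bonds.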

Guided by this empirical design, the authors collect a small sample of British corporate bonds. The preliminary analysis does show that bonds maturing after the Brexit date have higher yields than similar bonds maturing before it, indicating a real financing cost imposed on firms by investor paranoia about Brexit risk. The authors are in the process of collecting more data, and a working paper with further results will be published soon.

Scott Guernsey, CERF Research Associate, December 2018

Reinvesting Market Power for the Betterment of Shareholders

On the supply side, highly competitive industries are generally characterized as having many firms and low barriers to entry. The first condition implies that existing firms cannot dictate or influence prices, and the second that new firms can enter markets at any time, at relatively low cost, when incentivized to do so. Taken together, in equilibrium, this setting suggests that existing firms earn only enough revenue to remain competitive and cover their total costs of production.

Yet, in reality, most industries in the United States have become increasingly less competitive. For example, in the article “Are U.S. Industries Becoming More Concentrated?”, forthcoming in Review of Finance, Gustavo Grullon (Rice University), Yelena Larkin (York University), and Roni Michaely (University of Geneva), find that more than 75% of U.S. industries experienced an increase in concentration over the past two decades.[1] As such, these industries are now composed of fewer firms, are less at risk of entry by newcomers, and earn “economic rents” or revenues in excess of that which would be economically sufficient in a competitive environment. Given these new developments, it is important for shareholders to understand how a reduction in competition might affect their holdings.

In the article “Product Market Competition and Long-Term Firm Value: Evidence from Reverse Engineering Laws”, CERF Research Associate Scott Guernsey examines the value and investment policy implications of decreased product market competition for equity holders in the U.S. manufacturing industry.

To empirically analyze the relationship between competition and firm outcomes, Dr. Guernsey centers his study on anti-plug-mold (APM) laws, which were adopted by 12 U.S. states from 1978 to 1987 and subsequently overturned by a U.S. Supreme Court ruling in 1989. APM laws directly influenced the intensity of competition in product markets by protecting firms headquartered in the adopting states from competitors copying their products using a specific type of reverse engineering (RE)[2] – the “direct molding process”.

The direct molding process enabled competitors to circumvent the R&D and manufacturing costs incurred by the originating firm by using an already finished product to create a mold which would then be used to produce duplicate items. For example, a boat manufacturer using this RE process would buy an existing boat on the open market, spray it with a mold forming substance (e.g., fiberglass), remove the original boat from the hardened substance, which would then become the mold used to produce replica boats. However, under the protection of APM laws, firms were given legal recourse to stop competitors in any U.S. state from using the direct molding process to compete with their products.

Using the staggered adoptions of APM laws by different states in different years, Dr. Guernsey finds that firms located in states with RE protection experienced increases in their value, when compared to firms operating in the same industry but located in states without the laws. Moreover, when the APM laws were later overturned by a U.S. Supreme Court ruling, which found the state laws in conflict with federal patent law, he finds all of the previous value gains subside.

Next, Dr. Guernsey explores a possible economic explanation for the increase in value experienced by firms in less competitive industries. He finds evidence for the “innovation incentives” hypothesis, which posits that the economic rents APM-protected firms earn from increased market power are allocated to investments in new and existing production technologies. For instance, relative to industry rivals, firms located in APM-enacting states increase their investments in R&D and organizational capital.

Overall, Dr. Guernsey shows that a reduction in competition is value-enhancing for a subset of shareholders in the manufacturing industry, as it leads their firms to reinvest the spoils of market power back into the company.


References mentioned in this post

Grullon, G., Y. Larkin, and R. Michaely. 2018. Are US industries becoming more concentrated?. Review of Finance, Forthcoming.

Gutiérrez, G., and T. Philippon. 2017. Declining competition and investment in the US. Unpublished Working Paper, National Bureau of Economic Research.

Kahle, K. M., and R. M. Stulz. 2017. Is the US public corporation in trouble?. Journal of Economic Perspectives 31:67–88.

[1] Gutiérrez and Philippon (2017) and Kahle and Stulz (2017) also document evidence confirming the recent trend in rising U.S. industry concentration.

[2] The standard legal definition of reverse engineering in the U.S. is described as “starting with the known product and working backward to divine the process which aided in its development or manufacture.”

Adelphe Ekponon, CERF Research Associate, November 2018

Emerging Market Economies’ Debt Is Growing… What to Expect?

After the 2008 financial crisis, central banks implemented accommodative monetary policies with the objective of revitalizing economic activity. As a consequence, many countries increased their borrowing in dollar- and euro-denominated debt, leading to a rise in debt-to-GDP ratios around the world. For example, this ratio averaged about 82% in Europe by the end of 2017, compared to 60% before the crisis, according to Eurostat.

The prime concern, however, is currently on the Emerging Markets Economies (EMEs) side, at least for two reasons.

First, many emerging countries have increased their exposure to foreign debt (especially in hard currencies such as the dollar or euro). Their overall government debt as a percentage of GDP went from 41% to 51% between 2008 and 2017 (BIS Quarterly Review, September 2017). Over the same period, the government debt of EMEs doubled, reaching $11.7 trillion, with foreign-currency debt also rising. The problem with foreign-currency debt is that the government cannot inflate it away, and difficulties in servicing it may be transmitted to the local-currency debt market.

Second, the US Federal Reserve and the European Central Bank are ending their accommodative monetary policies, which implies that interest rates will now rise, and EMEs’ borrowing costs with them. Past experience shows that rising US interest rates in particular have triggered many emerging-country debt crises. Before EME debt crises such as Latin America in 1980, Mexico in 1994, and Asia in 1997, interest rates in the US were rising after a period of remaining low.

Other factors, such as contagion or capital outflows, may worsen the situation further.

In their paper “Macroeconomic Risk, Investor Preferences, and Sovereign Credit Spreads”, CERF Research Associate Adelphe Ekponon and his co-authors explore the mechanism through which macroeconomic conditions, combined with global investors’ risk aversion, drive countries’ borrowing costs. According to this study, the link between economic conditions in the US and sovereign debt yields originates from the existence of a global business cycle, as countries tend, on average, to be in good or bad times around the same periods. They find that this global business cycle increases both the risk of default and the government’s unwillingness to repay. The other mechanism is that investors’ higher risk aversion amplifies these effects: risky-asset sell-offs are more pronounced in recessions, leading to a lower risk-free rate on average, to which the government optimally responds by issuing more debt.

It is likely that countries will discipline themselves in the coming months or years as borrowing costs surge… provided there is no sudden switch to a global economic downturn.



Pedro Saffi, CERF Fellow, November 2018

Predicting House Prices with Equity Lending Market Characteristics

Investors in financial markets must cope with a myriad of news arriving relentlessly every day. This information must be interpreted and used as efficiently as possible to update investment strategies. Many academics also spend their careers trying to identify variables (e.g., GDP growth, retail sales, unemployment) that can help forecast the behavior of financial market variables (e.g., stock returns, risk, and exchange rates). While less common, many articles show how financial market data can be used to predict the behavior of variables in the real economy.[1]

In the article “The Big Short: Short Selling Activity and Predictability in House Prices”, forthcoming at Real Estate Economics, CERF Fellow Pedro Saffi and research collaborator Carles Vergara-Alert (IESE Business School) look at how U.S. house prices can be better understood using a previously unexplored set of financial variables.

Investors can speculate on a decrease in prices using a strategy known as “short selling”. This involves borrowing the security from another investor, selling it at the current price, and repurchasing it in the future – hopefully at a lower price, to make a profit. The market for borrowing shares is known as the equity lending market, a trillion-dollar part of the financial system that allows investors to borrow and lend the securities needed for short selling. While investors cannot bet on house price decreases by shorting houses directly, they can use a wide range of financial securities to do so. Dr. Saffi examines data on short-selling activity for a specific type of security whose returns are closely related to house prices – Real Estate Investment Trusts (REITs) – which are essentially portfolios of underlying real estate properties.

The authors’ main hypothesis is that REITs are strongly correlated with the fundamentals of housing markets. Thus, an increase in REIT short-selling activity can forecast decreases in housing prices, which is exactly what the authors find in the data. Furthermore, REITs invested in properties located in areas that experienced a housing boom during the expansion cycle of the 2000s are more sensitive to increases in short-selling activity than REITs invested in properties located in areas that did not. The study divides the US property market into four regions – Northeast, Midwest, South, and West – and classifies each month in each region as a “boom,” “average,” or “downturn” period. Although during boom and average periods there is little correlation between REIT short-selling and the subsequent month’s housing prices, “the correlation is significantly positive during housing market downturns.”
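The regime-conditional relationship at the heart of this result can be illustrated with a toy calculation. The numbers below are invented: in this sketch, higher short interest lines up with larger next-month price declines in downturns (a strong negative correlation between short interest and price changes) and with essentially nothing in booms.

```python
import statistics

# Invented monthly observations: REIT short-selling intensity and the
# NEXT month's regional house-price change (%), tagged by regime.
data = [
    # (regime, short_interest, next_month_price_change_pct)
    ("downturn", 0.08, -1.2),
    ("downturn", 0.06, -0.7),
    ("downturn", 0.09, -1.5),
    ("downturn", 0.05, -0.4),
    ("boom", 0.03, 1.3),
    ("boom", 0.04, 0.7),
    ("boom", 0.02, 0.9),
    ("boom", 0.05, 1.1),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def regime_corr(regime):
    """Correlation between short interest and next-month price change,
    computed separately within each regime."""
    xs = [s for r, s, _ in data if r == regime]
    ys = [p for r, _, p in data if r == regime]
    return pearson(xs, ys)
```

Here `regime_corr("downturn")` is strongly negative while `regime_corr("boom")` is close to zero, mirroring the paper’s finding that short-selling is informative about subsequent house prices mainly in downturns.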

Using these findings, Dr. Saffi constructs a hedging strategy based on short-selling intensity to reduce the downside risk of housing price decreases, showing that investors can limit their losses using REITs’ equity lending data. The figure below (Figure 4 in the article) shows the cumulative returns of the trading strategy (based on the On Loan variable as a proxy for short-selling activity) relative to the performance of FHFA Housing Price index returns from July 2007 through July 2013. The results show the usefulness of the hedging strategy in limiting investor losses during the 2008 financial crisis in regions that experienced large house price run-ups in the years before 2007, i.e., the Northeast and West. Its performance is satisfactory for the South and absent for the Midwest, where the house price run-up over the same period was smaller. Panel B shows similar results when diversified REITs are used to hedge against price decreases in the aggregate FHFA index.

Overall, short selling can be a useful tool for market participants to hedge against future price decreases. Regulators can track measures from the equity lending market to improve forecasts of house prices and implement policies to prevent real estate bubbles. Furthermore, imposing short selling constraints on stocks like REITs—which invest in assets subject to high transaction costs—matters for price efficiency and the dissemination of information.

References mentioned in this post

Ang, A., G. Bekaert and M. Wei. 2007. Do Macro Variables, Asset markets, or Surveys Forecast Inflation Better? Journal of Monetary Economics 54: 1163–1212.

Bailey, W. and K.C. Chan. 1993. Macroeconomic Influences and the Variability of the Commodity Futures Basis. Journal of Finance 48: 555–573.

Koijen, R.S., O. Van Hemert and S. Van Nieuwerburgh. 2009. Mortgage Timing. Journal of Financial Economics 93: 292–324.

Liew, J. and M. Vassalou. 2000. Can Book-to-Market, Size and Momentum be Risk Factors that Predict Economic Growth? Journal of Financial Economics 57: 221–245.


[1] For example, Liew and Vassalou (2000), Ang, Bekaert and Wei (2007), Koijen, Van Hemert and Van Nieuwerburgh (2009) and Bailey and Chan (1993) use financial market data to forecast economic growth, inflation, mortgage choices and commodities, respectively.



Scott B. Guernsey, CERF Research Associate, October 2018

Guaranteed Bonuses in High Finance: To Reward or Retain?

Public distaste for high finance reached an all-time high in March 2009, when the American International Group (AIG) insurance corporation announced it had paid out roughly $165 million in bonuses to employees of its London-based financial services division (AIG Financial Products). Only months earlier, the same company had received roughly $170 billion in U.S. taxpayer-funded bailout money and suffered a quarterly loss of $61.7 billion – the largest corporate loss on record. The then Chairman of the U.S. House Financial Services Committee, Barney Frank, remarked that the payment of these bonuses was “rewarding incompetence”.

AIG countered, arguing that the bonuses had been pledged well before the start of the financial crisis and that it was legally committed to make good on the promised compensation. Additionally, Edward Liddy, who had been appointed chairman and CEO of AIG by the U.S. government, said the company could not “attract and retain” highly skilled labor if employees believed “their compensation was subject to continued…adjustment by the U.S. Treasury.” And AIG wasn’t the only financial firm paying out large bonuses in 2009: at least nine other large financial institutions, which had similarly received U.S. government assistance, distributed bonuses in excess of $1 million each to nearly 5,000 of their bankers and traders.

But why would these financial corporations risk their reputational capital to pay out bonuses? And why not condition the size and timing of bonus payments on circumstances like those experienced during the 2008 financial crisis, rather than simply guaranteeing large bonuses a year or more in advance?

A recent research article presented at this year’s Cambridge Corporate Finance Theory Symposium by Assistant Professor Brian Waters (University of Colorado Boulder) offers some interesting insight into these questions. To begin, the paper highlights three unique features of bonuses in the financial industry. First, unlike in most other industries, bonus payments to high finance professionals (e.g., traders, bankers, analysts) comprise a large share of their total compensation. In fact, as described in the paper, more than 35% of a first-year analyst’s total pay is in the form of a bonus. This is further evidenced by the hefty bonuses of $1 million or more dispensed to bankers, traders, and executives by large financial institutions (AIG included) in 2009.

Second, it seems as if bonus payments are largely guaranteed. For example, according to the paper, third-year analysts expect to receive a bonus of at least $75,000, with the possibility of earning a higher $95,000 bonus only if they performed exceptionally well. Moreover, as summarized above, AIG defended payment of its bonuses in March of 2009 by arguing they had been committed in advance and were obligated by law to fulfil this pledge. Third, observation of practice suggests financial institutions coordinate the timing of their bonuses by geography. For instance, in Europe almost all big banks determine bonuses in late February and early March, while U.S. banks do so in January. Again, this is consistent with AIG, although an American insurer, distributing bonuses to its London-based Financial Products division in March.

Considering these three stylized facts, Professor Waters (and co-author, Professor Edward D. Van Wesep) construct a mathematical model to explain why bonuses in high finance are both large and guaranteed. The general set-up of the model flows in the following manner. First, the authors assume that financial firms might find it difficult to recruit employees during certain months of the year (e.g., perhaps it is easy to replace employees in March, but difficult to do so in October). Second, in response to this periodic scarcity of labor, firms design contracts whereby large bonuses are paid during months with an abundance of talent (e.g., March), but condition the contracts such that employees must remain with the company until bonuses are paid to be eligible for this form of compensation.

Third, since financial firms operating in the same geography face similar labor market conditions, many will respond similarly, paying bonuses at the same time. Fourth, because employees are incentivized to remain with the firm until bonuses are paid, they will delay quitting until that point in time (i.e., this is when most employees leave their employers). Finally, this suggests labor markets will be flooded with talent just after bonuses are paid (e.g., March), but will be relatively shallow in other months (e.g., October). Hence, we arrive back at the initial step of the model and the game repeats, providing an intuitive explanation for why large, guaranteed bonuses are observed in high finance irrespective of macroeconomic conditions and a firm’s own performance.

Yuan Li, CERF Research Associate, July 2018

How (in)efficient is the stock market?


In 2013, the Nobel committee split the economics prize between Eugene Fama, the pioneer of the efficient market hypothesis (EMH), and Robert Shiller, a critic of the EMH. This decision indicated that the committee agreed with both Fama and Shiller. Was the committee right? The answer is yes, according to my findings from a recent research project.

Fama explains the EMH as “the simple statement that security prices fully reflect all available information”. The empirical implication of this hypothesis is that no publicly available information other than beta (the measure of a firm’s systematic risk) can be used to predict stock returns. However, the finance literature has found that many easily available firm characteristics, such as market capitalisation and the book-to-market ratio, are related to future stock returns. These are the so-called anomalies. Does the discovery of anomalies reject the EMH? Not necessarily, because no one knows what a firm’s beta should be, and those firm characteristics may simply be proxies for beta. This is known as the joint hypothesis problem: we can say nothing about the EMH unless we know what the correct asset pricing model is. Sadly, we do not know what the correct asset pricing model is.

In this project, I get around the joint hypothesis problem. I assume that a firm’s stock return is composed of two parts: risk-induced return and mispricing-induced return. Because of the joint hypothesis problem, we do not know what the risk-induced return is. However, we can estimate the mispricing-induced return (if there is any) using the forecasts issued by financial analysts. Analysts’ earnings forecasts represent investors’ expectations. More importantly, we know the actual earnings of a firm, and hence we can calculate the errors in analysts’ forecasts, which represent investors’ errors-in-expectations. We can then estimate the returns generated by investors’ errors-in-expectations, that is, the mispricing-induced return. If the market is perfectly efficient, the mispricing-induced return should be zero. I calculate the fraction of an anomaly explained by mispricing as the ratio of mispricing-induced return over the observed return. The fraction of an anomaly explained by risk is thus one minus the above ratio.
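The decomposition can be illustrated with a short numeric sketch. The function and the toy numbers below are hypothetical illustrations, not taken from the paper:

```python
def anomaly_decomposition(observed_return, mispricing_return):
    """Split an anomaly's observed return into mispricing and risk shares.

    observed_return: the anomaly's average return (e.g., on a long-short
    portfolio); mispricing_return: the part traced to analysts' forecast
    errors, i.e., investors' errors-in-expectations. Returns the fractions
    explained by mispricing and by risk, which sum to one.
    """
    frac_mispricing = mispricing_return / observed_return
    return frac_mispricing, 1.0 - frac_mispricing


# Toy example: an anomaly earning 1% per month, of which 0.2% is
# mispricing-induced, leaving 0.8% attributed to risk.
frac_m, frac_r = anomaly_decomposition(0.01, 0.002)
```

In a perfectly efficient market the mispricing-induced return, and hence the first fraction, would be zero.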

I examine 195 anomalies. On average, the fraction explained by mispricing is 17.51%, suggesting that the major part of most anomalies is not anomalous at all. This result may be disappointing to EMH critics, who seem to think that the stock market is extremely inefficient and that it is very easy to profit from anomalies. However, the good news for EMH critics is that the fraction explained by mispricing varies widely across anomalies. In particular, the momentum anomalies are almost completely explained by mispricing. Hence, trading on momentum anomalies is likely to generate abnormal returns. In contrast, the high returns from value strategies are almost entirely compensation for bearing high risk.


Dr. Hui Xu, CERF Research Associate, June 2018.

Contingent Convertibles: Do they do what they are supposed to do?


When Lehman Brothers was in deep water in September 2008, the U.S. Federal government and the Federal Reserve decided not to bail it out, and several days later the company filed for Chapter 11 bankruptcy protection. Global markets plummeted immediately after the bankruptcy filing, and both the government and the central bank were accused of exacerbating investors’ panic with that decision. Yet had they bailed Lehman out, they would have been accused for a different reason: using taxpayers’ money to rescue a greedy and aggressive Wall Street giant.


The example illustrates the controversy and dilemma of bailouts faced by policymakers. Since the financial crisis, one priority for regulators has been to design a bail-in, an internal way to recapitalize distressed financial institutions and strengthen their balance sheets. Regulators hope it will become a substitute for the bailout. One way to deliver a swift and seamless bail-in is through the conversion of contingent convertible capital securities (CoCos).


CoCos are bonds issued by banks that either convert to new equity shares or experience a principal write-down following a triggering event. Because Basel III allows banks to meet part of the regulatory capital requirements with CoCo instruments, banks around the world issued a total of $521 billion in CoCos through 732 different issues between Jan 2009 and Dec 2015.


That being said, CoCos are still at an early stage in the sense that there is no consensus on how to design them. Moreover, little research has studied the response of market participants, even though studying that response can shed light on the optimal CoCo design.


A recent research project by CERF research associate Hui (Frank) Xu studies the response of incumbent equity holders when CoCos are in place. It considers two types of CoCo: those that convert to common shares when the stock price falls below a pre-set target, and those that convert when the market capital ratio falls below a pre-set threshold. Surprisingly, the research shows that if conversion dilutes incumbent equity holders’ security value, they have a strong incentive to issue a large amount of debt just before the pre-set trigger point, accelerating CoCo conversion. The intuition is that, since their equity value is diluted at conversion, they issue the debt and distribute the proceeds via dividends or share repurchases just before conversion, leaving the new equity holders and debt holders with much lower security values. The incumbent equity holders thus collect a one-time payout at the expense of new equity holders and debt holders.


This is certainly contrary to regulators’ expectations. Regulators expect equity holders to improve their corporate management, risk-taking strategies and financial policies under the threat of CoCo conversion; equity holders enriching themselves by destroying firm value under that same threat is the last thing regulators want to see. The research therefore highlights the complexity of contingent convertible design, and the importance of taking market participants’ responses into account when regulators propose a CoCo design.


Dr. Alex Tse, CERF Research Associate, May 2018.

Embrace the randomness

Excerpt from the CBS sitcom “The Big Bang Theory”, S05 E04:

Leonard: Are we ready to order?
Sheldon: One moment. I’m conducting an experiment.
Howard: With Dungeons and Dragons dice?
Sheldon: Yes. From here on in, I’ve decided to make all trivial decisions with a throw of the dice, thus freeing up my mind to do what it does best, enlighten and amaze. Page 14, item seven.
Howard: So, what’s for dinner?
Sheldon: A side of corn succotash. Interesting……

It sounds insane to let a die decide your fate. But we all know that our beloved physicist Dr Sheldon Cooper is not crazy (his mother had him checked!), so there must be some wisdom behind it. To a mainstream economist, adopting randomisation in a decision task seems to violate a fundamental economic principle – more is better. By surrendering to Tyche, the goddess of chance, we are essentially forgoing the valuable option to make a choice.

A well-known situation where randomised strategies are relevant is the game-theoretic setup where strategic interactions among players matter. A right-footed striker has a better chance of scoring a goal if he kicks left. A pure strategy of kicking left may not work out well though because the goalie who understands the striker’s edge will simply dive left. The optimal decisions of the two players thus always involve mixing between kicking/blocking left, right and middle etc. However, a very puzzling phenomenon is that individuals may still exhibit preference for deliberate randomisation even when there is no strategic motive. An example is a recent experimental study (Agranov and Ortoleva, Journal of Political Economy, 2017) which documents that a sizable fraction of lab participants are willing to pay a fee to flip a virtual coin to determine the type of lotteries to be assigned to them.

While the psychology literature offers a number of explanations (such as omission bias) to justify randomised strategies, how can we understand deliberate randomisation from an economic perspective? The golden paradigm of decision making under risk is the expected utility criterion, where a prospect is evaluated by the linear probability-weighted average of the utility values associated with each outcome. There is no incentive to randomise the decision, as the linear expectation rule guides an agent to pick the highest-value option with 100% chance. However, when the agent’s preference deviates from linear expectation, a stochastic mixture of prospects can be strictly better than the static decision of sticking to the highest-value prospect (Henderson, Hobson and Tse, Journal of Economic Theory, 2017). The rank-dependent utility model and prospect theory, which are commonly used in behavioural economics, are two notable non-expected utility frameworks under which randomised strategies are internally consistent with the agent’s preference structure.
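To make the non-linear weighting concrete, here is a minimal sketch of rank-dependent utility using the Tversky-Kahneman weighting function. The parameter value and the toy prospect are illustrative assumptions, not taken from the cited papers:

```python
def tk_weight(p, gamma=0.65):
    """Tversky-Kahneman probability weighting: overweights small p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)


def rdu_value(prospect, utility=lambda x: x, gamma=0.65):
    """Rank-dependent utility of a prospect given as [(outcome, prob), ...].

    Outcomes are ranked from worst to best; each outcome's decision
    weight is a difference of weighted decumulative probabilities.
    """
    ranked = sorted(prospect, key=lambda op: op[0])
    value, tail = 0.0, 1.0  # tail = probability of this outcome or better
    for x, p in ranked:
        value += (tk_weight(tail, gamma) - tk_weight(tail - p, gamma)) * utility(x)
        tail -= p
    return value


# A 1%-chance "jackpot" of 100 has expected value 1, but its RDU value
# exceeds 1 because the small winning probability is overweighted.
lottery_value = rdu_value([(0.0, 0.99), (100.0, 0.01)])
```

Once the evaluation rule is non-linear in probabilities like this, a coin-flip mixture of two prospects is no longer just the average of their values, which is what opens the door to strictly beneficial randomisation.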

Incorporation of non-linear probability weighting and randomised strategies leads to many potential economic implications. For example, consider a dynamic stopping task where an agent decides whether to sell an asset at each time point. In a classical expected utility setup, there is no incentive for the agent to randomise the decision between to stop and to continue. This implies the optimal trading strategy must be a threshold-rule where sale only occurs when the asset price first breaches a certain upper or lower level. In reality, investors do not necessarily adopt this kind of threshold strategy even in a well-controlled laboratory environment. For example, the asset price could have visited the same level multiple times before a participant decides to sell the asset (Strack and Viefers, SSRN working paper, 2014). While expected utility theory struggles to explain trading rules that go beyond the simple “stop-loss stop-gain” style order, non-linear expectation and randomisation provide a modelling foundation to justify more sophisticated investment strategies adopted by individuals in real life.

Dr. Yuan Li, CERF Research Associate, April 2018

Are analysts whose forecast revisions correlate less with prior stock price changes better information producers and monitors?

Financial analysts are important information intermediaries in the capital markets because they engage in private information search, perform prospective analyses aimed at forecasting firms’ future earnings and cash flows, and conduct retrospective analyses that interpret past events (Beaver [1998]). The information produced by analysts is disseminated to capital market participants via analysts’ research outputs, which mainly include earnings forecasts and stock recommendations. Prior academic studies suggest that the main role of an analyst is to supply private information that is useful to parties such as investors and managers. Therefore, an analyst’s ability to produce relevant private information that is not already known to other parties is an important determinant of the analyst’s value to the capital markets. Based on this notion, CERF research associate Yuan Li and her co-authors propose a simple and effective measure of analyst ability.

Our measure of analyst ability is calculated as one minus the correlation coefficient between the analyst’s forecast revisions and prior stock price changes within successive forecasts. Since prior stock price changes capture the incorporation of information that is already known to investors, any information in an analyst’s forecast revisions that is not correlated with prior stock price changes reflects the analyst’s private information. In other words, our measure captures the ability of an analyst to produce information that is not already incorporated into stock prices.
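As a sketch, the measure could be computed per analyst along these lines (a hypothetical illustration; the variable construction in the actual paper may differ):

```python
import numpy as np


def analyst_ability(forecast_revisions, prior_price_changes):
    """Ability = 1 - corr(forecast revisions, prior stock price changes).

    Each element pairs one forecast revision with the stock's price change
    over the window between the two successive forecasts. A score near 1
    (low correlation) suggests the revisions carry private information
    rather than echoing what prices have already impounded.
    """
    r = np.corrcoef(forecast_revisions, prior_price_changes)[0, 1]
    return 1.0 - r


# An analyst who mechanically revises in lockstep with past price moves
# earns a score near 0.
score = analyst_ability([0.10, 0.20, 0.30, 0.15], [0.02, 0.05, 0.07, 0.03])
```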

We find that the stock price impact of forecast revisions issued by superior analysts identified by our measure is greater. We also find that firms covered by more superior analysts are less likely to engage in earnings management. These findings suggest that superior analysts identified by our measure are better information producers and monitors.

Dr. Jisok Kang, CERF Research Associate, March 2018

The Granular Effect of Stock Market Concentration on Market Portfolio Volatility

Ever since the Capital Asset Pricing Model (CAPM) was first introduced in 1964, a well-accepted conception in modern portfolio theory has been that the market portfolio contains only market risk, or systematic risk, as firm-specific (non-systematic) risk is diversified away.

Meanwhile, Xavier Gabaix, in a paper published in Econometrica in 2011 titled “The Granular Origins of Aggregate Fluctuations,” argues that idiosyncratic firm-specific shocks to large firms in an economy can explain a great portion of the variation in macroeconomic movements if the firm size distribution is fat-tailed. His argument implies that firm-specific shocks to large firms are granular in nature and may not be easily diversified away. He shows empirically that idiosyncratic movements by the largest 100 firms in the U.S. can explain roughly one third of the variation in the country’s GDP growth, a phenomenon he dubs “the granular effect.”

Jisok Kang, a CERF research associate, shows in a recent research paper that stock market concentration, the degree to which the largest firms dominate the stock market, increases the volatility of the market portfolio. This finding implies that the idiosyncratic, firm-specific risk of large firms is granular in nature and is not diversified away in the market portfolio. The finding is robust whether market portfolio volatility is defined with a value-weighted or an equal-weighted index.

In addition, stock market concentration causes other stock prices to co-move, which increases market portfolio volatility further. The incremental volatility caused by stock market concentration is bad volatility, in that the effect is more severe when the market portfolio return is negative.
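Concentration can be proxied in several ways; a minimal sketch of two common choices (the paper's exact measure may differ) is:

```python
def market_concentration(market_caps, top_n=10):
    """Two simple concentration proxies for a stock market.

    Returns (combined market-cap share of the top_n largest firms,
    Herfindahl index of market-cap shares). Both rise as a handful
    of giant firms come to dominate the index.
    """
    total = float(sum(market_caps))
    shares = sorted((c / total for c in market_caps), reverse=True)
    top_share = sum(shares[:top_n])
    herfindahl = sum(s * s for s in shares)
    return top_share, herfindahl


# A market of three firms where the largest holds half the capitalization:
top_share, hhi = market_concentration([50.0, 30.0, 20.0], top_n=1)
```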

Dr. Hui (Frank) Xu, February 2018

What caused the leverage cycle run-up to 2008 financial crisis?

The 2008 financial crisis had a far-reaching impact on financial markets and the real economy. Although academic researchers and public policymakers have reached a consensus that the crisis was rooted in a leverage cycle, they continue to debate the causes of that cycle. Initially, it was widely accepted that financial innovation and deregulation exacerbated agency problems, incentivizing financial intermediaries to issue consumer credit, including mortgage debt, without proper screening and monitoring (the “credit supply” channel). More recently, however, a growing empirical literature has proposed a “distorted beliefs” view of the crisis, demonstrating that investor over-optimism may have led to a rapid expansion of the credit market and increased asset prices in the run-up to the crisis (the “credit demand” channel). The financial crisis, like any other major economic event, probably has more than one cause, and both the credit demand and credit supply channels contributed to it. Indeed, the two views are not entirely mutually exclusive, and may reinforce each other.

However, one might still ask to what extent distorted beliefs caused the crisis. This question is interesting for both theoretical and practical reasons. First, economists have long known that distorted beliefs have important effects on the prices of financial assets, e.g., the risk-free rate and stock prices, but they still lack a good understanding of why distorted beliefs could cause massive default in 2008. Second, understanding what caused the financial crisis helps to design effective policy changes. If it was largely an agency problem, policies to prevent similar crises would include requiring financial intermediaries to “put more skin in the game” and enforcing stricter screening and monitoring. If it was primarily a problem of distorted expectations and beliefs, preventative measures would include implementing macroprudential, financial-stability policies and improving information transparency.

One way to quantify the role of distorted beliefs in the financial crisis is to construct a dynamic general equilibrium model in which households’ credit use and risk-taking are driven purely by distorted beliefs, effectively shutting down the agency-problem channel. The model’s explanatory power can then be examined by comparing the output of the calibrated model to real data. This is a research project by CERF research associate Hui (Frank) Xu.

The main findings of the paper support the distorted-beliefs view of the financial crisis. This view can explain the run-up in household leverage before the crisis. Quantitatively, distorted beliefs can account for more than half of the variation in the real interest rate during the crisis period.

Dr. Alex Tse, CERF Research Associate, February 2018

Transaction costs, consumption and investment

The theoretical modelling of individuals’ consumption and investment behaviours is an important micro-foundation of asset pricing. Despite being a classical problem in the literature of portfolio selection, analytical progress is very limited when we extend the model to a more realistic economy featuring transaction costs. The key obstacle thwarting our understanding in the frictional setup originates from the highly non-linear differential equation associated with the problem.

Using a judicious transformation scheme, CERF research associate Alex Tse and his collaborators David Hobson and Yeqi Zhu show that the underlying equation can be greatly simplified to a first order system. Investigation of the optimal strategies can then be facilitated by a graphical representation involving a simple quadratic function encoding the underlying economic parameters.

The approach offers a powerful tool to unlock a rich set of economic properties behind the problem. Under what economic conditions can we expect a well-defined trading strategy? How does the change in the market parameters affect the purchase and sale decisions of an individual? What are the quantitative impacts of transaction costs on the critical portfolio weights? While some features are known in the literature, there are also a number of surprising phenomena that have not been formally studied to date. For example, the transaction cost for purchase can be irrelevant to the upper boundary of the target portfolio weight in certain economic configurations.

In a follow-up project, the methodology is further extended to a market consisting of a liquid asset and an illiquid asset, where transaction costs are payable on the latter. The research findings could serve as useful building blocks towards a more general theory of investment and asset pricing.

Dr. Yuan Li, CERF Research Associate, December 2017

Book-to-market ratio and inflexibility: The effect of unrecorded R&D capital

R&D investment has been playing an increasingly important role in the economy. However, accounting standards require firms to expense R&D immediately as incurred, so R&D investment is not capitalized on the balance sheet. Could this unrecorded R&D capital affect our assessment of a firm’s risk? The answer is affirmative, according to findings from a research project conducted by CERF research associate Yuan Li.

Finance theory suggests that a firm’s risk is negatively related to its flexibility to adjust capital investment. The more flexibility a firm has in this regard, the less its cash flows are affected by economic-wide conditions, and the lower its risk. Flexibility is hard to observe directly, but it can be inferred from the book-to-market ratio (BM). High-BM firms are generally burdened with more unproductive capital and hence less flexible to downsize in bad times. Thus, according to the theory, high-BM firms are riskier than low-BM firms, especially in bad times.

However, results from this project suggest that the above theory should not be followed blindly, because the book-to-market ratio calculated from balance sheet data increasingly misrepresents inflexibility and risk: book value is understated by the unrecorded R&D capital, which is even less flexible to adjust than physical capital. The results also suggest that considering the book-to-market ratio and R&D capital together is a better way to evaluate a firm’s inflexibility and risk.
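One standard way to put unrecorded R&D capital back on the balance sheet is the perpetual-inventory method, amortizing past R&D spending linearly. The 20% annual rate below is a common convention in the literature, not necessarily the project's choice, and the numbers are purely illustrative:

```python
def rd_capital(rd_history, amortization=0.20):
    """Estimate off-balance-sheet R&D capital by perpetual inventory.

    rd_history: annual R&D expense, oldest first. Each vintage is
    written off linearly at `amortization` per year of age.
    """
    capital = 0.0
    for age, spend in enumerate(reversed(rd_history)):  # age 0 = latest year
        capital += spend * max(0.0, 1.0 - amortization * age)
    return capital


def adjusted_bm(book_value, market_value, rd_history):
    """Book-to-market with estimated R&D capital added back to book value."""
    return (book_value + rd_capital(rd_history)) / market_value


# A firm with two years of 100 in R&D carries 180 of unrecorded capital,
# lifting a naive 0.40 book-to-market ratio to 0.58.
bm = adjusted_bm(400.0, 1000.0, [100.0, 100.0])
```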

Dr. Edoardo Gallo, CERF Fellow, November 2017

Financial networks and systemic collapse

In the aftermath of the 2008 crisis, Haldane – the Chief Economist at the Bank of England – stated that “the regulation of the network is needed to ensure appropriate control of large, interconnected institutions […] the financial network should be structured so as to reduce the chances of future systemic collapse”.

A project by CERF Fellow Edoardo Gallo and his research collaborators Syngjoo Choi (Seoul National University) and Brian Wallace (UCL) investigates what types of network structure cause financial contagion. In a lab experiment, participants can buy or sell assets in an artificial market, knowing that one participant has been hit by a monetary shock and that it may spill over to others because all participants are connected by a network of liabilities. Each participant faces a trade-off between selling assets to raise liquidity in the short term and avoid bankruptcy, and holding on to them to realize a return in the long term. The researchers vary the network of liabilities and the size of the shocks.

The results show that contagion is particularly prevalent in core-periphery networks, formed by a small number of highly connected participants – the core – with the remaining participants at the sparsely connected periphery. The dynamics of contagion involve sharp falls in asset prices because all participants are trying to sell to raise liquidity, and this leads to systemic collapse even for moderately sized shocks. The researchers also find that a participant’s ability to comprehend network-driven risk predicts how likely they are to go bankrupt.

Core-periphery networks are ubiquitous in financial markets, and the results of this project suggest they may be particularly susceptible to systemic collapse.

The paper is available here.


Dr. Alex S.L. Tse, CERF Research Associate, September 2017

Probability weighting and stock trading behaviours

Humans are far from being perfect decision-making machines, especially in the face of uncertainty. One prevalent phenomenon is that individuals tend to overweight probabilities associated with extreme events. Examples include lottery punters’ optimism about winning the jackpot and air passengers’ anxiety about a plane crash. In the context of finance, what are the implications of such a psychological bias for investment decisions?

CCFin research associate Alex Tse and his collaborators Vicky Henderson and David Hobson investigated the effect of probability weighting on stock trading behaviour through a theoretical model of asset sale. They found that agents with probability weighting adopt trading strategies in the form of stop-loss but not gain-exit: on the one hand, overweighting the probability of the worst scenario encourages investors to offload a losing stock; on the other hand, magnifying the probability of the best outcome encourages investors to stay in a rally. This provides a potential justification for the popular use of stop-loss orders among retail investors.

Probability weighting is also useful for explaining the “disposition effect”, a well-documented financial anomaly whereby investors sell winning stocks much more often than losing stocks. Existing models typically generate a very extreme disposition effect. With the inclusion of probability weighting, however, investors are more incentivised to hold a winning stock relative to a losing stock, as they find a lottery-like payoff with positive skewness attractive. This enables the model to deliver a level of the disposition effect much closer to what the empirical literature suggests.


Dr. Yuan Li, CERF Research Associate, July 2017

In his best-selling book, Thinking, Fast and Slow, Nobel Memorial Prize in Economics laureate Daniel Kahneman describes anchoring as “one of the most reliable and robust results of experimental psychology”. Using data from real financial markets, CERF research associate Yuan Li and her research collaborators Thomas George and Chuan-Yang Hwang find evidence suggesting that anchoring impedes investors’ interpretation of earnings news.

Anchoring is the tendency for individuals to base their forecasts of an unknown quantity on a salient statistic (the anchor) that may have nothing to do with the quantity being forecast. The classic example is an experiment in which individuals observe the generation of a random number and are then asked to estimate the percentage of African nations in the UN, having first judged whether it is higher or lower than that random number. The estimates are higher (lower) for individuals who observe higher (lower) random numbers. The random number is the anchor in this experiment.


In real financial markets, investors anchor on the 52-week high price (52WH), which is often featured on financial websites and in the financial press. If the stock price prior to a positive (negative) earnings announcement is already close to (far from) the 52WH, investors think the positive (negative) news has already been incorporated into the price, and hence are reluctant to bid the price higher (lower). In other words, investors behave as if future price levels are constrained not to deviate too far from the 52WH.
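The anchor can be captured by a simple ratio, in the spirit of the 52-week-high literature (an illustrative sketch, not the authors' exact variable):

```python
def nearness_to_52wh(price, high_52w):
    """Proximity of the current price to its 52-week high.

    Values near 1 mean the stock already trades close to the salient
    anchor; the argument above predicts underreaction to further good
    news for such stocks (and to bad news for stocks far below the 52WH).
    """
    return price / high_52w


# A stock at 95 against a 52-week high of 100 sits at 0.95 of its anchor.
nearness = nearness_to_52wh(95.0, 100.0)
```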


Dr. Jisok Kang, CERF Research Associate, June 2017


Does the Stock Market Benefit the Economy?


A research project carried out by CERF research associate Jisok Kang and his co-author Kee-Hong Bae finds evidence that a functionally efficient stock market does promote economic growth.

Finance researchers have extensively investigated the role of the stock market in the real economy. For instance, whether well-functioning stock markets promote economic growth has received a great deal of attention from academics and policymakers. However, measuring the functionality of stock markets has been a big empirical challenge. Researchers have typically used size measures (e.g., total stock market capitalization) as a proxy for stock market functionality and have not found robust evidence that stock market development is associated with future economic growth.

The research proposes a new measure of the functional efficiency of the stock market: stock market concentration. It shows that concentrated stock markets dominated by a small number of large firms negatively affect economic growth: in countries with concentrated stock markets, capital is allocated inefficiently, resulting in sluggish IPO activity, innovation, and economic growth. These findings suggest that a concentrated stock market offers insufficient funds to emerging, innovative firms; discourages entrepreneurship; and is ultimately detrimental to economic growth.



Dr. Chryssi Giannitsarou, CERF Fellow, May 2017


Our social interactions are informative of our investment decisions.


When we are investing, we don’t mindlessly copy our peers, according to new research carried out by CERF fellow Chryssi Giannitsarou and her research collaborators Luc Arrondel, Hector Calvo Pardo and Michael Haliassos. Instead, we are more likely to participate in the stock market if we believe that our immediate social circle is more informed about it.

The authors surveyed a representative sample of French households in 2014 and 2015 to capture measures of stock market participation and social connectedness, but also beliefs and perceptions of stock market returns. They wanted to find out whether those households invested by mindless copying, which may lead to stock market bubbles and fads, or by processing information and trying to copy good practice.

The results show that people who perceive a higher share of their financial circle as being informed about the stock market or participating in it are more likely to invest in stocks themselves. The conditional portfolio share invested in stocks is influenced by social interactions only to the extent that social interactions influence perceptions of past stock market performance and, through them, stock market expectations. There is a trace of mindless copying of behaviour, but only in the decision of whether or not to participate at all in the stock market.

All in all, their research findings suggest that social interactions tend to reduce rather than exacerbate financial literacy limitations, and to affect financial decision-making by being informative rather than ‘contagious’.

If you would like to read the relevant paper, it is available here