

The CERF blog features short articles about current research and other relevant topics, written by CERF’s Fellows and researchers.

Firms' Capital Structure Dynamics, Market Competition, and Industry Dynamics


Shiqi Chen, CERF Research Associate

February 2021


Debt-equity conflict is undoubtedly one of the core paradigms of corporate finance research. It is well known that, once debt is in place, the misalignment of interests between debt holders and equity holders can lead to asset substitution and inefficient underinvestment, problems first identified by Jensen and Meckling (1976) and Myers (1977), respectively. More recently, studies by Admati et al. (2018) and DeMarzo and He (2020) highlight another consequence of debt-equity conflicts: the leverage ratchet effect. When firms cannot commit to future debt levels, once debt is in place, equity holders are not only reluctant to reduce leverage voluntarily but also have an incentive to increase the firm's leverage to the detriment of debt holders. These studies show that debt-equity conflicts arising from non-commitment give rise to leverage dynamics that differ substantially from those implied by the static trade-off model, and may help to explain many empirical phenomena that otherwise remain puzzling, such as the reluctance of distressed firms to recapitalize.


However, it is important to note that an individual firm is not a solo player in the market. Firms' entry, exit, production and financing decisions need to be considered in a broader context. Indeed, firms' capital structure dynamics and product market decisions are closely intertwined. If debt-equity conflicts persist across firms, their influence can aggregate rapidly at the industry level and have a profound impact on product market competition, output prices and other dimensions of industry dynamics. In turn, the degree of product market competition and equilibrium output prices are key determinants of industry players' profitability and survivorship. Therefore, product market behaviour affects the evolution of debt-equity conflicts and funding choices at the firm level. This interaction is highlighted in Zingales (1998): "in the absence of a structural model, we cannot determine whether it is the product market competition that affects capital structure choices or a firm's capital structure that affects its competitive position and its survival".



Events in the oil industry in 2020 mirror these interactions. In 2020, the price war between Russia and Saudi Arabia, coupled with the prolonged pandemic and the subsequent collapse in global demand, toppled many heavily indebted oil and gas producers. West Texas Intermediate even traded in negative territory for the first time in April. Since 2008, surges in crude exports have fuelled a shale boom in North America, as well as an all-time high aggregate debt level in the oil and gas sector. According to a report by Haynes and Boone, 46 North American oil and gas producers filed for Chapter 11 bankruptcy in 2020, of which 14 were billion-dollar bankruptcies. The imminent consolidation and reshuffle within the industry will create further fluctuations in oil prices and variations in the survivors' capital structures. These events underscore the intriguing interdependencies between firms' financial decisions, market competition and industry dynamics.


In the article "Industry Dynamics and Capital Structure (Non)Commitment", CERF Research Associate Shiqi Chen and collaborator Hui Xu (University of Lancaster) address these interactions. The paper develops a competitive equilibrium model to understand how the debt-equity conflicts arising from equity holders' inability to commit to future debt levels affect industry dynamics, and the corresponding feedback effect on firms' financial decisions.


The article shows that shareholders' resistance to leverage reductions and their incentive to increase leverage make debt financing more expensive. As a result, entry into the industry becomes harder, which reduces the degree of market competition and raises the equilibrium output price. The higher output price, in turn, improves the profitability of industry incumbents and makes shareholders willing to wait longer before shutting down firms, thereby alleviating inefficient liquidation and the agency costs generated by non-commitment. Looking at the stationary industry distribution of firms in terms of debt-scaled cash flow, the authors find that, compared with the commitment case, non-commitment and the resultant higher output price increase the number of firms in the high-leverage region and the overall average industry leverage. More firms now stand close to the exit boundary. This distributional effect gives rise to a higher frequency of entry and exit and, consequently, a higher market turnover rate in equilibrium. The results suggest that debt-equity conflicts at the firm level can aggregate and have profound implications for industry dynamics.




References mentioned:

Admati, A. R., P. M. DeMarzo, M. F. Hellwig, and P. Pfleiderer (2018): “The leverage ratchet effect,” Journal of Finance, 73(1), 145–198.

DeMarzo, P., and Z. He (2020): “Leverage dynamics without commitment,” Journal of Finance, Forthcoming.

Jensen, M. C., and W. H. Meckling (1976): “Theory of the firm: Managerial behavior, agency costs and ownership structure,” Journal of Financial Economics, 3(4), 305–360.

Haynes and Boone (2020): “Haynes and Boone, LLP Oil Patch Bankruptcy Monitor (31 December),” Available at:  (Accessed 11 February 2021).

Myers, S. C. (1977): “Determinants of corporate borrowing,” Journal of Financial Economics, 5(2), 147–175.

Zingales, L. (1998): “Survival of the fittest or the fattest? Exit and financing in the trucking industry,” Journal of Finance, 53(3), 905–938.



Do Firm Locations Affect Stock Prices?

Mehrshad Motahari, CERF Research Associate
15 January 2021


A large body of literature documents how firms' geographical locations can affect their stock returns. For example, Pirinsky and Wang (2006) show that the stock returns of firms headquartered in the same geographical area comove with each other. They argue that this comovement is not related to economic fundamentals but to the trading patterns of local investors. Various papers attribute this excess comovement to local bias, which induces local investors to take larger positions in local stocks. Bernile et al. (2015) suggest that even institutional investors overweight firms whose 10-Ks frequently mention the investors' state. In contrast, Kumar et al. (2013) show that retail trades cause comovement in local stocks, whereas institutional trades mitigate the issue.

Local bias also leads to the incorporation of the behaviours and preferences of local investors in the prices of local stocks. Korniotis and Kumar (2013) highlight that local risk tolerance affects the returns of local stocks. Specifically, they argue that US state-level heterogeneity in economic conditions leads to variations in investor risk tolerance across states, and heterogeneous risk tolerance results in variations in the cross section of stock returns. In other words, the economic conditions of the region in which a firm is based can affect its stock price, irrespective of the firm’s fundamentals.

In the article ‘Geographic Heterogeneity, Local Sentiment, and Market Anomalies’, CERF Research Associate Mehrshad Motahari shows that market anomalies (i.e. strategies that beat the market, such as momentum) perform differently for stocks headquartered in different US states. In other words, if we split the US cross section into states in which anomalies have recently worked well and those in which they have worked poorly, the first group continues to perform better in the future. Using a well-known anomaly variable such as momentum, the study shows that we can predict how well momentum forecasts future returns by taking a firm’s headquarters into account. To illustrate, if we construct the momentum strategy (i.e. going long on high-momentum stocks and short on low-momentum ones) for stocks headquartered in either California or Texas in 2020 and find that this strategy works better for Californian stocks, it will likely continue to generate higher alphas for stocks in California in 2021. A minimal sketch of this state-by-state construction is given below.
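To make the construction concrete, here is a minimal sketch in Python of how state-level momentum spreads might be formed. The DataFrame and its columns (date, state, mom for the past 12-2 month return, ret for the next-month return) are hypothetical placeholders, not the study's actual data or code.

```python
# Minimal sketch: equal-weighted long-short momentum spread within each
# headquarters state, computed month by month from a hypothetical panel.
import pandas as pd

def state_momentum_spread(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: 'date', 'state', 'mom' (past 12-2 return), 'ret' (next-month return)."""
    def spread(group: pd.DataFrame) -> float:
        deciles = pd.qcut(group["mom"], 10, labels=False, duplicates="drop")
        longs = group.loc[deciles == deciles.max(), "ret"].mean()
        shorts = group.loc[deciles == deciles.min(), "ret"].mean()
        return longs - shorts
    return (df.groupby(["date", "state"]).apply(spread)
              .rename("mom_spread").reset_index())

# Usage idea: average each state's spread over the past year, split states into
# "recently strong" and "recently weak" groups, and compare the two groups'
# spreads over the following year.
# spreads = state_momentum_spread(stock_panel)
```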

This pattern can be explained by arguing that investors in different regions have different levels of sentiment. Local investors in states experiencing a relatively higher level of sentiment are more likely to buy excessively or overpay for local stocks. In the presence of local bias, short-selling impediments and information uncertainty, this behaviour exacerbates stock overpricing. The resulting mispricing is more severe in states experiencing higher sentiment and will persist due to limits to arbitrage.

The study also looks at analyst forecast errors as a proxy for the information uncertainty surrounding stocks. The idea is that investor biases and sentiment levels are more likely to be reflected in prices when a stock is hard to value. In line with this, the findings show that geography predicts the performance of anomalies only for stocks with higher levels of analyst forecast errors.

Overall, the findings of this research and other papers in this area imply that it is preferable to tilt a portfolio towards stocks in specific geographic regions when devising systematic trading strategies to exploit mispricing. More importantly, studies on this subject establish that firms’ locations have more extensive effects on stock prices than previously documented. That is, the location of a stock can determine how the fundamentals of the stock will be priced in the cross section and relative to other local and non-local firms.



Bernile, G., Kumar, A., and Sulaeman, J. (2015) ‘Home away from home: Geography of information and local investors’, Review of Financial Studies, vol. 28, pp. 2009–2049.

Korniotis, G. M. and Kumar, A. (2013) ‘State-level business cycles and local return predictability’, Journal of Finance, vol. 68, pp. 1037–1096.

Kumar, A., Page, J. K., and Spalt, O. G. (2013) ‘Investor sentiment and return comovements: Evidence from stock splits and headquarters changes’, Review of Finance, vol. 17, pp. 921–953.

Pirinsky, C. and Wang, C. (2006) ‘Does corporate headquarters location matter for stock returns?’, Journal of Finance, vol. 61, pp. 1991–2015.

The Rise of the Special Purpose Acquisition Company

Sunwoo Hwang, CERF Research Associate, December 2020 

2020 has been the year of special purpose acquisition companies, or SPACs. A SPAC is a blank-check shell company designed to take private companies public without going through the traditional initial public offering (IPO) process[1]. 2020 saw 230 SPAC IPOs and $77 billion raised in the United States alone at the time of this writing on December 13, 2020. These 230 SPAC IPOs account for 54% of all IPOs and 45% of total IPO proceeds (source: SPAC Analytics). Both the number of SPAC IPOs and the amount of capital they raised hit record highs, exceeding the totals for annual US IPOs in each year since 2014. However, SPACs have received little attention in the academic literature, presumably because they were almost invisible in the IPO market until a few years ago. Capital raised by SPACs made up less than 7% of total IPO proceeds before 2015, except during the financial crisis. But in 2020 the SPAC became the main gateway to public markets, outnumbering traditional IPOs and raising 45% of IPO proceeds. SPACs appear to deserve more attention and research by financial economists.

The very first question to ask is why a SPAC exists. On the demand side of the SPAC product market, the SPAC offers several benefits to capital-starved private firms. First, it offers a faster timeline: if a company chooses a SPAC over a traditional IPO, going public takes three to four months rather than two to three years. Second, it involves greater certainty about the firm's valuation and the amount of capital raised. The company negotiates only with a SPAC, instead of myriad investors, and receives the IPO proceeds already raised upon the merger's approval by SPAC shareholders. Third, the SPAC route is popular among young private firms and mature private firms such as unicorns[2], which suffer a valuation disadvantage compared to mid-aged (i.e., 6-10 years old) private firms that public investors prefer[3]. On the flip side, a SPAC is more expensive than a traditional IPO, which typically charges 7% in underwriting fees. A target company cedes equity to sponsors, which is 20% of SPAC shares pre-merger and is diluted by the exchange ratio to 1 to 5%[4] post-merger, plus 5.5% for underwriters. The dilution arithmetic is illustrated below.
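As a rough illustration of this dilution (all numbers below are assumptions chosen for the example, not figures from the article or from SPAC Analytics), the sponsor's 20% pre-merger promote can shrink to a low single-digit stake once a much larger target is merged in:

```python
# Illustrative arithmetic with assumed numbers: sponsor "promote" dilution.
spac_trust = 300e6                                  # cash raised in the SPAC IPO (assumed)
share_price = 10.0                                  # customary $10 per SPAC share
public_shares = spac_trust / share_price            # 30m public shares
sponsor_shares = 0.25 * public_shares               # founder shares = 20% of post-IPO total
target_equity_value = 3e9                           # assumed value of the private target
target_shares = target_equity_value / share_price   # shares issued to target owners

pre_merger_stake = sponsor_shares / (public_shares + sponsor_shares)
post_merger_stake = sponsor_shares / (public_shares + sponsor_shares + target_shares)
print(f"Sponsor stake pre-merger:  {pre_merger_stake:.0%}")   # 20%
print(f"Sponsor stake post-merger: {post_merger_stake:.1%}")  # roughly 2% under these assumptions
```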

On the supply side, SPAC sponsors obtain easier access to capital, as a SPAC faces fewer regulatory obstacles than a traditional IPO. The SPAC IPO process is relatively simple and easier than, say, raising a new venture capital fund[5]. Furthermore, SPAC sponsors enjoy significant upside, receiving 20% of SPAC shares post-IPO, yet low risk, investing in late-stage private firms. The SPAC has additional appeal for private equity (PE) sponsors. SPACs can serve as co-investment vehicles, allowing PE sponsors to do side-by-side transactions with less leverage and more equity, and they provide greater liquidity certainty than the private portfolio firms in which PE firms hold illiquid interests[6].

In the capital market, where SPAC sponsors stand on the demand side, SPACs entice investors with the following merits. Institutional investors invest in the initial SPAC, which offers common shares and warrants combined as units, and use the warrants to purchase additional shares following a successful merger, with little downside. Initial investors receive their money back if a SPAC fails to find a target or if they do not like the selected target. Also, retail investors can invest in PE-type transactions[7]. These benefits are not without costs, however. Because a SPAC has a limited lifespan of two years, sponsors approaching the deadline may rush to merge with any (potentially low-quality) firm. In doing so, sponsors may overvalue the target to pass the minimum transaction amount threshold[8].

The natural next question is why SPACs have become stunningly popular in recent years. One obvious possibility is that the pandemic has increased uncertainty. In uncertain times, private firms in need of financing may fear that they cannot raise additional rounds of capital from private investors. Even so, they may not go for traditional IPOs, which take years and raise an uncertain amount of capital. But uncertainty is unlikely to explain the whole story, as the trend precedes the pandemic. SPACs' share of IPOs increased monotonically from 12% in 2015 to 28% in 2019, making each year a record year since the financial crisis (source: SPAC Analytics). There is little evidence of a significant surge in economic uncertainty or stock market volatility, as measured by the CBOE Volatility Index, during those five years.

The other possibility, which may also explain the pre-COVID trend, is the growing sophistication of the SPAC market, where an increasing fraction of sponsors and investors have sectoral focus and expertise. A recent example is Chardan Healthcare Acquisition Corp., which closed a merger with BiomX in December 2019; the SPAC was sponsored by the health-care-focused investment bank Chardan, and several high-profile biotech investors anchored the deal[9]. In addition, the investment bank Jefferies finds that an increasing number of SPAC sponsors are industry executives[10].

Beyond these, numerous questions remain open. First, structural complexity may create perverse incentives for the different players involved, beyond the one caused by the limited lifespan mentioned above. Second, in the case of PE sponsors, limited partners may worry that general partners will become distracted or, worse, usurp deal opportunities in favor of SPACs[11]. Third, the discrepancy between control and cash flow rights may engender corporate governance problems. In a typical pre-merger SPAC, sponsors own 20% of cash flow rights and 100% of voting rights through their class B common shares, which are issued only to sponsors and are entitled to vote pre-merger. A nominally majority-independent board of directors elected solely by the sponsors may not be independent in practice.




Hormoz Ramian, CERF Research Associate, November 2020

Welfare Implications of Bank Valuation Disagreement 


Regulatory interventions have always been accompanied by heated debates. In the years after the financial crisis reached its darkest moment, the academic literature and legislative chambers were inundated with discussions of risk-based capital requirements. Opponents often expressed dissatisfaction with the intervention, arguing that capital holdings above the laissez-faire outcome are expensive for banking institutions, leading to lower lending and ultimately suppressed economic growth. Proponents of the regulation, by contrast, argued that the fragility of the banking sector, which is associated with a significant economic cost to society, rationalizes the intervention.


Despite the prolonged arguments presented by the opposing sides, these debates rarely reached an agreeable conclusion. An important but often ignored reason for the disagreement was that the arguments emanated from incomparable bases. More specifically, the opponents' view, presented by the banking institutions, weighed heavily on the cost of equity as the central reason to rail against capital holdings above the laissez-faire outcome (Basel Committee; Acharya et al., 2017). Setting aside the underlying merits of their argument for the moment, their perspective focused on the role of asset prices as the main reason to advocate capital deregulation. This stance, however, was not readily reconcilable with the proponents' social perspective, whose arguments mainly built on a welfare analysis concerned with the negative externalities associated with costly bank failure (Allen et al., 2011, 2015; Gersbach et al., 2017).


Much of the academic and legislative discussion in this context has understandably been devoted to the welfare implications of bank failure. For instance, James (1991) provides a comprehensive survey of loss given default across financial and non-financial sectors, showing that the ex-post asset recovery rate may fall to 70 cents per dollar. Nonetheless, the opponents' view on the cost of capital has remained a consistent defence that has blunted further arguments for increasing capital holdings above Basel III. Recent empirical studies provide evidence that, even in the presence of capital buffers on top of the risk-based capital requirement, banking institutions remain significantly undercapitalized (Piskorski et al., 2020).


The lack of a rigorous quantitative basis for evaluating the cost of capital for banking institutions at an aggregate level is among the core reasons why the proponents have failed to discredit the merits behind capital deregulation. The methodological framework in this study first develops a foundation for a realistic valuation of bank capital in a general equilibrium setting with aggregate uncertainty. The framework integrates the asset pricing and banking regulation disciplines to provide a mapping between the cost of capital and the welfare implications of bank failure. This connection serves as a way to reconcile the two counterarguments for and against bank capital holdings.


A comprehensive capital regulation that enhances welfare considers three simple components: (i) how is the bank funded? (ii) what is the risk profile of the bank's assets? (iii) what is the valuation of bank net worth? Existing studies focusing on bank funding show that government guarantees provide welfare gains by preventing self-fulfilling runs on bank debt, even when such runs are not justified by fundamentals (Diamond and Dybvig, 1983). Nonetheless, government guarantees break the link between the cost of debt and the borrower's default risk and lead to the under-capitalization of the banking system. This gives rise to an alternative distortion generated by more frequent bank failure and motivates capital regulation, which provides gains by lowering socially undesirable defaults. However, studies that concentrate on liabilities provide limited predictions about the importance of the composition of bank assets. My research finds that the effectiveness of optimal capital regulation depends on the asset side of the bank balance sheet, particularly when the monetary policy targets reserves management. A large strand of literature focusing on the asset side of the bank balance sheet shows that conditioning the risk profile on capital provides welfare gains. However, this literature assumes that households, as the ultimate providers of financing in the form of debt or equity, play a limited role, or that the supply of financing is fixed. The findings here uncover that households' optimal consumption-saving behaviour has important implications for the equilibrium cost of debt, which is a determinant of the banking sector's default risk. This equilibrium mechanism predicts that as the cost of debt falls, the capital constraint becomes effectively overburdening and hence socially costly.


These shortcomings motivate the following two questions. First, what is the optimal capital regulation of the banking system in an environment where the cost of financing (in the form of debt or equity) and the risk profile of the asset side arise endogenously? Second, how does the effectiveness of this optimal capital regulation depend on the interest on excess reserves (IOER), which is decided separately by the monetary authority? I address these questions by developing a general equilibrium model in which banks finance themselves by accepting deposits and raising equity from households, and invest their funds in excess reserves and loans subject to non-diversifiable risk.


The analysis in my research takes IOER as a given policy and shows that the optimal risk-weighted capital requirement offers welfare gains by lowering the likelihood of bank failure and the associated distortions that are ultimately borne by society (Admati and Hellwig, 2014). Nonetheless, the general equilibrium provides an additional important prediction. When the bank is required to raise more capital to satisfy the capital constraint, its demand for debt financing falls. This channel leads to a lower equilibrium deposit rate. Given any lending level, lower interest expenses expand the bank's ability to meet its debt liabilities and enhance the bank's solvency. The optimal risk-weighted capital regulation, even in general equilibrium, fails to take this effect into account and hence becomes socially costly.


I show that when IOER is above the zero bound, a marginal decrease in this rate is accompanied by a proportional decrease in the equilibrium deposit rate. Because the proportion of deposits in liabilities always exceeds that of reserves on the asset side of the balance sheet, a lower IOER leads to a faster fall in interest expenses than in interest income. As a result, the social cost of the optimal capital constraint, which is decided in isolation from the IOER policy, increases as IOER falls towards the zero bound. This finding is an important motivation for jointly deciding capital regulation and IOER. In particular, a lower IOER accompanied by a looser capital constraint can expand the credit flow to the real economy while keeping the bank's default likelihood constant. The stylized arithmetic below illustrates the balance-sheet channel.
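As a purely stylized numerical illustration of this channel (the balance-sheet proportions below are assumptions, not calibrated values from the paper), deposits fund most of the balance sheet while reserves are only a fraction of assets, so a proportional rate cut lowers interest expenses by more than it lowers reserve income:

```python
# Stylized example with assumed proportions: a 50bp cut in IOER, passed
# one-for-one to the deposit rate, improves the bank's net interest position
# because deposits (liabilities) exceed reserves (assets).
assets = 100.0
reserves = 0.15 * assets        # reserves: a modest share of assets
deposits = 0.85 * assets        # deposits: the bulk of liabilities

rate_cut = 0.005                                 # 50 basis points
lost_reserve_income = rate_cut * reserves        # 0.075
saved_deposit_expense = rate_cut * deposits      # 0.425

improvement = saved_deposit_expense - lost_reserve_income
print(f"Net interest position improves by {improvement:.3f} per 100 of assets")
```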


This general equilibrium framework provides a secondary prediction: the relationship between the optimal capital regulation and IOER reverses when IOER becomes very low or falls below zero. This finding matters for effective policy analysis in the current low or negative interest rate environment. The research shows that any further rate reduction in this territory is met by a nonresponsive equilibrium deposit rate, because depositors always require strictly positive compensation for their time preference to forgo consumption. This nonproportional transmission from IOER to the deposit rate means that the bank's interest income from reserves falls faster than its interest expenses on deposits. Given any lending level, the bank's solvency worsens; nonetheless, the capital regulation fails to take this effect into account. An interactive policy initiative provides social value when a falling IOER, below the zero bound, is accompanied by a stricter capital constraint.


Mehrshad Motahari, CERF Research Associate, October 2020

Machine Learning Challenges in Finance

Machine learning (ML) is the most important branch of artificial intelligence (AI), providing tools with wide-ranging applications in finance. My previous blog posts (‘Can robots beat the market?’ and ‘Artificial intelligence in asset management: hype or breakthrough?’) discuss some of the most important ML applications in finance. The success of ML is often linked to its three key capabilities: providing flexible functional forms which can capture nonlinearities in data, selecting relevant model features without pre-specification, and capturing information from non-numerical data sources such as texts. However, recent studies including Israel et al. (2020) and Karolyi and Van Nieuwerburgh (2020) outline several challenges involved with using ML in finance. What follows provides a summary of these challenges.

Finance is often thought of as a field awash with applicable data, ranging from financial and economic sources to more recent unstructured data such as online news and social media posts. While the breadth of data that can be used in finance is quite large, the time series are often very short by ML standards. A limited number of time-series observations means that any model using the data is constrained to be proportionally small. The consequence is that data-hungry ML tools cannot operate anywhere near their full potential. Finance also does not allow data to be produced through experiments, as is done in other fields. For example, in image recognition, a successful area of ML application, scientists can simply produce millions of images for models to train on. In finance, however, one has no alternative but to wait for financial data to be produced over time.

There are exceptional cases in finance where data is available at high frequency, such as high-frequency trading (HFT) records, providing ML tools with a larger number of observations across time to learn from. However, even in these cases, ML faces its second-biggest challenge: a low signal-to-noise ratio. ML tools are highly dependent on data quality, and poor-quality, noisy data lead to unreliable ML models. It is no surprise that financial data is considerably noisy, especially at high frequencies. The reason, of course, is that under the Efficient Market Hypothesis (EMH), only one variable should be predictable in fully efficient financial markets: the risk premium, which is small and difficult to capture over short horizons. In the absence of large and reliable databases, ML tools in finance are essentially tasked with finding a needle in a haystack.

Another difference in finance, compared with other areas in which ML is applied, is data evolution. Taking image recognition again as an example, images of humans always have the same features; using these features, ML tools can learn to recognise images. In contrast, financial data changes and evolves over time, as do financial markets. It is therefore difficult to imagine that financial variables have the same meaning they had several decades ago. There are, of course, economic mechanisms that do not change over time and that underlie the markets' behaviour. However, most ML models are so-called black boxes and do not provide any insight into how they produce specific results. This lack of interpretability makes it difficult to understand whether an ML model is capturing economically meaningful patterns or pure noise.

ML tools now have essential applications in finance. The three main ML challenges of limited data, a low signal-to-noise ratio, and a lack of model interpretability define the current frontier of research in finance. A growing number of papers attempt to find novel and creative solutions to these issues (see Israel et al., 2020). These developments can pave the way for a stronger presence of ML in finance in the years to come.


Israel, R., Kelly, B.T. and Moskowitz, T.J. 2020. Can Machines 'Learn' Finance? Available at SSRN 3624052.

Karolyi, G.A. and Van Nieuwerburgh, S. 2020. New methods for the cross-section of returns. The Review of Financial Studies, 33(5), pp.1879–1890.


Sunwoo Hwang, CERF Research Associate, September 2020

Contingent employment and innovation 

There has been a rapid increase in contingent employment worldwide. As of 2015, it accounted for 15.8% of the U.S. labor force, up from 10.7% in 2005 (U.S. Bureau of Labor Statistics). In Europe, contingent workers make up an average of 43.3% across the 28 European Union countries (OECD Labor Force Statistics). Contingent work is an umbrella term covering numerous non-permanent employment arrangements. Despite these trends, little is known about the implications of contingent employment for firm outcomes. There are certainly potential benefits, such as flexibility in the use and reallocation of labor, which may lower operating leverage and fuel investment and growth. However, there may also be costs. Job insecurity, compared with regular employment contracts, may discourage employees from engaging in value-enhancing activities such as innovation, which typically requires a long-term commitment.

Hwang (2020) asks whether contingent employment affects the innovation incentives of employees. The paper finds that converting temporary contracts to permanent ones has a positive effect on corporate innovation, conditional on long-term rewards being in place. The intuition behind these findings is that excessive termination following short-term failure and few rewards for long-term success, faced by contingent workers, discourage innovation (Manso, 2011). Note that this paper focuses on the contingent workforce in core functions. It speaks neither of low-skilled contingent workers who may not innovate (e.g., janitors) nor of high-skilled (voluntary) ones who may innovate yet be insensitive to job security given their superior outside options (e.g., consultants).

To answer the question, the paper exploits a novel experiment from Korea. It allows us to compare firms that shifted contingent contracts to regular contracts with an otherwise identical set of firms that continued to use contingent labor. The experiment is composed of a contingent arrangement unique to the country, under which contingent workers do the same core tasks as regular employees hired when the labor market was strong, and a Supreme Court ruling against the arrangement.

The affected contingent workers are the so-called in-house subcontracted (IS) workers. They are similar to agency temps yet different in that they are hired through in-house subcontractors, not staffing agencies, and work for the main contractor almost permanently. Note that in-house subcontractors are often created for the sole purpose of supplying the IS workers. The IS workers are unsecured because they cannot be reallocated to other firms if the main contractor stops subcontracting, which in turn closes the in-house subcontractor; business failure is a legitimate reason for discharge. The IS workers are likely innovators because the vast majority of patented innovations come from traditional manufacturing industries, these innovations depend on the basic education the IS workers have received (D’Acunto, 2014), and the IS workers have gradually replaced their secured colleagues as the labor market weakened over time.

To my knowledge, Hwang (2020) provides the first evidence that contingent employment negatively affects the rate at which R&D investment translates into patented innovation at the employee level. Moreover, it shows that an optimal innovation-motivating scheme (Manso, 2011), characterized by tolerance for short-term failure and rewards for long-term success, also governs the employees who execute innovation; prior research has focused on the managers who finance innovation. The paper's findings inform debates on the costs and benefits of contingent employment and, specifically, corporate decisions about the management of the human capital that produces innovation. The findings have timely policy implications, as the labor market representation of contingent labor is large and growing. Furthermore, the pandemic has hit contingent workers harder, with harsher pay cuts or layoffs, and the post-corona era is likely to demand more of both contingent labor and innovation.

Hormoz Ramian, CERF Research Associate, August 2020  

Negative Interest Rates: The Interaction between Monetary and Financial Regulatory Policies 

Negative interest rates have been among the frontier policies used to counter recent economic downturns. The 2020 pandemic revived a policy originally deployed to assuage the prolonged slowdown that followed the 2008 Financial Crisis. While lowering the cost of financing is a well-established policy response to adverse economic outcomes, how effectively negative interest rates pass through financial intermediaries remains an open question.


Policymakers examine how negative interest rates lead to real economic effects through financial intermediaries. The interest rate policy (alternatively known as the bank rate or the federal funds rate) is primarily a monetary policy. Nevertheless, its tight relationship with the interest on excess reserves (IOER), paid on the oversized excess reserves held by banking institutions, generates substantial impacts on the overall performance of banks and their lending. This provides the motivation, first, to investigate how the negative interest rate policy is translated into an ultimate lending rate for the real economy through the banking institutions. Second, the tight relationship between the main monetary policy and IOER motivates an examination of how the interaction among policy initiatives by the monetary and financial regulatory authorities leads to welfare implications.


Over the past decade, the oversized excess reserves of the banking system have comprised over one-third of the total assets of the major central banks in charge of 40% of the world economy. Between January 2019 and October 2019, depository institutions in the United States held $1.41T of funds in excess reserves, accounting for over 40% of the total balance sheet size of the Federal Reserve. Over the same period, excess reserves held at the European Central Bank exceeded €1.9T, forming a slightly smaller share relative to the consolidated balance sheet of the Eurosystem. A similar pattern holds for the Danish National Bank, the Swiss National Bank, the Sveriges Riksbank, and the Bank of Japan.


Evidence shows that in July 2020, excess reserves of depository institutions accounted for nearly 30% of the total assets of the Federal Reserve and the ECB. The cross-dependency between IOER and bank capital regulation is an important consideration with welfare implications, because conflicting effects between the two policies may lead to over-regulation of the banking sector and disruptions in the credit flow to the real sector. Alternatively, the two policies may lead to under-regulation and re-expose the banking system to heightened default risk and possibly failures with socially undesirable outcomes. The aftermath of the 2008 financial crisis highlighted the lack of analytical frameworks to integrate multiple policies and assess their real economic implications. Policymakers constantly address distortions in each part of the economy with individual policies. Nonetheless, the policymaker's ability to provide welfare gains through a broad range of levers is limited by our understanding of the channels connecting these policies.


A quintessential feature of IOER is its dual role. The policy is decided by the monetary authority and, historically, has been heavily correlated with the main monetary policy. In the United States, the federal funds rate and IOER are heavily correlated, and this strong relationship is a stylised fact that holds across other advanced economies, with correlations ranging between 94% and 99%. When the monetary authority targets reserves management, IOER simultaneously affects banking institutions' balance sheets to a great extent, which strengthens the connection between the main monetary policy and capital regulation. Existing studies in the macro-finance and banking literature investigating the implications of negative interest rate policy often focus on the interconnections between the policy and the asset side of the banking institutions. This strand of the literature provides limited predictions about how the negative interest rate policy is passed through to the real sector because, when rates are negative, the exceedingly steep marginal utility of consumption of depositors limits banks' ability to pass negative rates on to their depositors. An alternative strand of literature has tackled this shortcoming through partial equilibrium approaches and shows that, given exogenous deposit holdings, the negative interest rate policy leads to a lower cost of borrowing for the real sector. Nonetheless, such approaches fail to consider the downsides of the negative interest rate policy, as the rate may fall indefinitely.






These shortcomings motivate incorporating both sides of the banking institutions' balance sheets into the policy initiative, to determine how interest rate policies pass through to the bank's borrowers (businesses) and its lenders (depositors). When interest rates are positive, the policymaker's decision to lower IOER is followed by an almost proportional fall in the bank deposit rate. Because the banking sector invests only a fraction of its deposits in reserves, a proportional decrease in the deposit rate in response to falling IOER leads to a faster drop in interest expenses on deposits than the loss of interest income from reserves. The banking sector extends lending to borrowers as a result of lower default risk when IOER falls, and subsequently bank capital regulation tightens to adjust for the added risk to banks' assets.


However, when IOER becomes very low, or possibly negative, the deposit rate exhibits an increasingly flat response to further changes in IOER, because deposit investors require a marginally positive compensation for time preference to forgo current consumption. When the bank deposit rate is unresponsive to further reductions in the policymaker's negative interest rate, the loss of interest income from reserves exceeds the reduction in interest expenses on deposits. Banking institutions respond to the increased default risk from higher net interest expenses by lowering lending to maintain shareholder value, and bank capital regulation then loosens. This indicates that a lower IOER dissuades the banking sector from over-relying on idle excess reserves, with an expansionary effect on real economic output, only when lower rates lead to lower default risk; otherwise, lowering IOER generates counterproductive results by worsening this over-reliance problem and has a contractionary economic impact. This finding provides a motivation for the monetary and financial regulatory policymakers to act jointly to provide welfare gains to society.


Adeplhe Ekponon, CERF Research Associate, July 2020 

Managerial Commitment and Long-Term Firm Value 


Motivated by preliminary empirical evidence showing that firms with more committed managers tend to suffer less during downturns, CERF Research Associate Adelphe Ekponon and collaborator Scott Guernsey (University of Tennessee) propose a model to help understand the mechanisms behind this phenomenon.


Economic crises bring about periods of prolonged turmoil. During such periods, shareholders have difficult decisions to make, in particular regarding the retention or firing of the incumbent management team (assuming also that a change of CEO is likely to be followed by a reshuffle of the managing team). There exists a labor market for executives with two possible statuses: an executive team can be either the incumbent or an entrant. The framework assumes that the labor market for executives is not only competitive but also highly restrictive, as managers do not have many outside options. Thus, managers lack diversification in the labor market; they are "all-in" on the firm. This incites executives to be committed to the firm and to exert more effort. In practice, the firm can grant managers part of their performance-related compensation in derivatives such as stock options or deep out-of-the-money options.


In their model, shareholders optimally derive both the cost of replacing and the probability of retaining the incumbent that maximize their value. Shareholders derive these optimal decisions such that they are indifferent between keeping the incumbent and hiring an entrant after considering all firing/hiring costs. They also ensure the participation of both the incumbent and the entrant, where participation is defined as the gap between pay for performance and the disutility of effort. Hence, their model differs from several strands of the literature, such as corporate structural models (Leland, 1994), models with agency conflicts (Jensen, 1986), macroeconomic risk (Hackbarth et al., 2006), contract incentives (Laffont and Tirole, 1988), and governance and the business cycle (Philippon, 2006; Ekponon, 2020).


Managers’ level of effort depends on the cost of replacement and the likelihood that their tenure will be extended for the subsequent period. The incumbent chooses the level of effort to exert and, under perfect information, selects a higher level of effort when the combination of the two (cost of replacement and probability of retention) is higher, because the labor market for executives is restrictive and higher replacement costs indicate a longer tenure. So managers commit to the firm knowingly, but the firm does not necessarily have to commit to managers.


In bad times, earnings are hit by poor macroeconomic conditions. To limit the losses, if shareholders adopt a higher pay-for-performance strategy, the model predicts a lower probability of retention, proxied by a lower governance index (i.e., good governance), but they have to face higher replacement costs. When the latter effect dominates, executives choose to exert more effort (reducing the impact of low profitability) and shareholders are better off keeping the incumbent team.


References mentioned in this post 


Ekponon, A. (2020) "Agency conflicts, macroeconomic risk, and asset prices." Social Science Research Network, No. 3440168. 


Hackbarth, D., Miao, J., and Morellec, E. (2006) "Capital structure, credit risk, and macroeconomic conditions." Journal of Financial Economics, 82(3): 519-50. 


Jensen, M. C. (1986) “Agency costs of free cash flow, corporate finance, and takeovers.” American Economic Review, 76(2): 323–29.  


Laffont, J.-J., and Tirole, J. (1988) "The dynamics of incentive contracts." Econometrica, 56(5): 1153-75. 


Leland, H. E. (1994) "Corporate debt value, bond covenants, and optimal capital structure." The Journal of Finance, 49(4): 1213-52.  


Philippon, T. (2006) "Corporate governance over the business cycle." Journal of Economic Dynamics and Control, 30(11): 2117-41. 



Mehrshad Motahari, CERF Research Associate

Can Robots Beat the Market? 

June 2020 

The growing trend of replacing active investment managers with computer algorithms (The Economist, 2019) has led to a surge in the use of artificial intelligence (AI) in investing. This means that more AI-based algorithms (alpha algos) are being used to devise investment strategies. In most cases, the algorithm itself tests the viability of these strategies and even executes trades while keeping transaction costs to a minimum. A common question, however, is whether these algorithms can generate profitable investments. The following is a summary of findings from a number of recent studies on the issue.

AI-based investment strategies often use forecasts of future asset performance metrics, the most popular being returns. AI models utilise a range of data inputs, including technical and fundamental indicators, economic measures, and texts (such as online posts and news articles), to predict future returns (Bartram, Branke, and Motahari, 2020). These predictions then form the basis of an investment strategy by rebalancing portfolio weights towards stocks expected to outperform and away from those expected to underperform.

In a recent hallmark study, Gu, Kelly, and Xiu (2020) investigate a variety of AI models that can be used to forecast future stock returns. The study looks at 30,000 US stocks from 1957 to 2016 and includes a large set of predictors: 94 stock characteristics, interactions of each characteristic with eight aggregate time-series variables, and 74 industry sector dummy variables.

According to the results, the best-performing investment strategy is based on the return predictions of the neural network model; a value-weighted long-short decile spread strategy using neural network predictions generates an annualised out-of-sample Sharpe ratio of 1.35. This is more than double the Sharpe ratio of a regression-based strategy from the literature. The out-of-sample performance of the AI approaches is robust across a range of specifications. A minimal sketch of such a decile spread construction appears below.
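As a rough sketch of how such a strategy is typically evaluated (using hypothetical inputs, not Gu, Kelly, and Xiu's actual code or data), a value-weighted long-short decile spread and its annualised Sharpe ratio might be computed as follows:

```python
# Sketch: value-weighted long-short decile spread from monthly return forecasts,
# with the monthly spread series annualised into a Sharpe ratio.
import numpy as np
import pandas as pd

def decile_spread_sharpe(panel: pd.DataFrame) -> float:
    """panel columns: 'date', 'pred' (forecast), 'ret' (realised return), 'mktcap'."""
    def month_spread(g: pd.DataFrame) -> float:
        d = pd.qcut(g["pred"], 10, labels=False, duplicates="drop")
        vw = lambda x: np.average(x["ret"], weights=x["mktcap"])
        return vw(g[d == d.max()]) - vw(g[d == d.min()])
    spread = panel.groupby("date").apply(month_spread)
    return np.sqrt(12) * spread.mean() / spread.std()

# Usage: sharpe = decile_spread_sharpe(forecast_panel)
```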

Why do AI approaches predict returns better than classic tools such as ordinary linear regressions? Gu, Kelly, and Xiu (2020) argue that this is due to the ability of most AI techniques to capture nonlinear relationships between dependent and independent variables. Such relationships are often missed by linear regressions. Moreover, many AI techniques are able to select the most relevant variables from a large set of predictors. This allows the model inputs to shrink while keeping the most important variables. In another recent paper, Freyberger, Neuhierl, and Weber (2020) show that an investment strategy based on a model with this feature-selection property can generate a Sharpe ratio 2.5 times larger than that of an ordinary linear regression model.

Despite the remarkable success of AI models in predicting returns, some doubt the feasibility of investment strategies based on these predictions. Avramov, Cheng, and Metzker (2020) look at the neural network methodology used in Gu, Kelly, and Xiu (2020) and show that the return of the investment strategy based on this approach is largely driven by subsamples of microcaps, firms with no credit rating coverage, and distressed stocks. In addition, the strategy tends to be profitable mostly during periods of high limits to arbitrage, including high market volatility and low liquidity.

It appears that AI models have improved upon return forecasting, due to their flexible structure and ability to capture complex relationships from vast amounts of data. However, the jury is still out on whether the predictions do, in fact, lead to investments that outperform conventional benchmarks in practice. What is clear for now is that AI provides us with the best tools for forecasting returns empirically. 



Avramov, D., Cheng, S. and Metzker, L. 2020. Machine Learning versus Economic Restrictions: Evidence from Stock Return Predictability, Available at SSRN 3450322. 

Bartram, S. M., Branke, J. and Motahari, M. 2020. Artificial Intelligence in Asset Management, CEPR Discussion Paper No. DP14525, Available at SSRN 35603330. 

The Economist, 2019. The Stockmarket Is Now Run by Computers, Algorithms and Passive Managers. Available from: 

Freyberger, J., Neuhierl, A. and Weber, M. 2020. Dissecting Characteristics Nonparametrically, The Review of Financial Studies, Volume 33, Issue 5, May 2020, Pages 2326–2377. 

Gu, S., Kelly, B. and Xiu, D. 2020. Empirical Asset Pricing via Machine Learning, The Review of Financial Studies, Volume 33, Issue 5, May 2020, Pages 2223–2273. 


Scott B. Guernsey, CERF Research Associate

April 2020

Coronavirus and Finance: Early Evidence on Household Spending, and Investor Expectations 

Sparked by the coronavirus disease 2019 (COVID-19) pandemic, in a televised broadcast on 23 March 2020, U.K. Prime Minister Boris Johnson gave the following instruction: "You must stay at home." Like most of the world's governments, the U.K. continues to implement strict lockdown restrictions on households and businesses in order to limit the spread of the disease. Early signs imply these measures are working: within the past week, U.K. Health Minister Matt Hancock confirmed that "[social distancing] is making a difference. [The U.K. is] at the peak." Moreover, The Economist recently published an article suggesting that "coronavirus infections have peaked in much of the rich world."

Putting the global economy on indefinite hold, however, has likely created a different set of problems and unknowns, many of which are more financial in nature. In this short article, CERF Research Associate Scott Guernsey reviews some recent early-stage finance research that explores the impact of COVID-19 on important financial outcomes, such as household spending and investor expectations.

In the first article, "How Does Household Spending Respond to an Epidemic? Consumption During the 2020 COVID-19 Pandemic", Professors Scott Baker (Northwestern University), Robert Farrokhnia (Columbia University), Steffen Meyer (University of Southern Denmark), Michaela Pagel (Columbia University), and Constantine Yannelis (University of Chicago) investigate how U.S. households altered their consumption behavior in response to the COVID-19 outbreak. Using transaction-level household financial data, the paper finds that households' spending increased markedly as initial news about the spread of COVID-19 in their local area intensified. The initial increase in spending suggests that households were attempting to stockpile essential goods in anticipation of current and future disruptions to their ability to frequent local retailers.

Meanwhile, as COVID-19 spread, more households remained at home, sharply decreasing their spending at restaurants and retail stores, and their purchases of air travel and public transportation. These effects are magnified for households in states that issued "shelter-in-place" orders, with the increases in grocery spending nearly three times larger, and the decreases in discretionary spending (i.e., restaurants, retail, air travel, and public transport) twice as large, relative to households located in states without such orders. Lastly, the paper finds (perhaps surprisingly) that Republican households, though reporting to (Axios') pollsters that they perceived the COVID-19 threat as generally exaggerated,1 actually outspent Democrat households in the early days of the virus on stockpiling groceries and (less surprisingly) reduced restaurant and retail expenditure less.

The second article, "Coronavirus: Impact on Stock Prices and Growth Expectations", by Professors Niels Gormsen (University of Chicago) and Ralph Koijen (University of Chicago), helps to quantify some of the economic costs associated with COVID-19. Employing data from the aggregate equity market and dividend futures,2 the paper explores how E.U. and U.S. investors' expectations about economic growth have changed in response to the spread of the COVID-19 virus and subsequent actions by policymakers. The authors forecast that annual growth in dividends is down 28% in the E.U. and 17% in the U.S. Further, their forecasts imply GDP growth is down by 6.3% in the E.U. and 3.8% in the U.S. The lower bound for the E.U. (U.S.) on the change in expected dividends is forecasted to be realized at the two-year horizon at about negative 46% (30%). On the bright side, their estimates imply signs of catch-up growth over the three- to seven-year horizon.3 Finally, they document that news about economic relief programs and fiscal stimulus tends to increase long-term growth expectations but does very little to improve expectations about short-term growth.

Adeplhe Ekponon, CERF Research Associate, March 2020 

Are Cryptocurrencies Priced in the Cross-Section? A Portfolio Approach


Most papers that study the determinants of cryptocurrency prices find no relation to existing market factors. In a work in progress, CERF Research Associate Adelphe Ekponon and Kassi Assamoi (liquidity analyst at MUFG Securities and University of Warwick) use a portfolio approach to explore cross-sectional pricing within the crypto-market. At its inception, Bitcoin was meant to be an alternative to fiat currencies. Yet high returns in this market may also have attracted conventional investors looking for additional investment and diversification venues. Since Bitcoin, the number of cryptocurrencies has grown to more than 6,000 as of the beginning of 2020. Hence, investors have more choices when they decide to enter the crypto-market, and they have an incentive to understand how the crypto-market interacts with their current investments.


Their paper relates to two strands of literature. The first explores portfolio strategies and cross-sectional pricing to study factors embedded in major asset classes: stocks and/or bonds, as in Fama and French (1989, 1992), Cochrane and Piazzesi (2008), and Koijen, Lustig, and Van Nieuwerburgh (2017); currencies, as in Lustig and Verdelhan (2005); and commodities, as in Fama and French (1987) and Bakshi, Gao, and Rossi (2015). The second strand examines the determinants of cryptocurrency prices and returns; see, among others, Canh et al. (2019), Liu and Tsyvinski (2018), Balcilar et al. (2017), Bouri et al. (2016), and Yermack (2015). These papers find that cryptocurrencies have no exposure to most market and macroeconomic factors or to currency and commodity markets.


In closely related papers, Adam Hayes (2014) uses data from 66 of the most active cryptocurrencies and notes that three of the main price drivers come from the blockchain technology itself. Bouri et al. (2016) explore, in time-series analysis, the ability of Bitcoin to hedge against risk embedded in leading stock markets, bonds, oil, gold, the commodity index, and the US dollar index. They conclude that Bitcoin's ability to hedge is weak but that it is suitable for diversification purposes. Moreover, its hedging and safe-haven properties depend on the horizon.


In their study, Ekponon and his co-author examine ten factors from the equity, currency, and commodity markets. The study uses daily quotes for more than 95 cryptocurrencies, from July 17, 2010, to September 9, 2019. They estimate cryptocurrencies' exposures (betas) to these factors and run cross-sectional regressions of cryptocurrencies' average returns on the exposures. They alternatively build portfolios sorted on the exposures to each factor. A minimal sketch of this two-pass procedure is given below.
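A minimal sketch of that two-pass procedure, using NumPy and hypothetical return and factor matrices (the variable names and dimensions are assumptions, not the authors' code), might look like this:

```python
# Two-pass estimation: (1) time-series regressions of each crypto's returns on
# the factors to get exposures (betas); (2) a cross-sectional regression of
# average returns on those betas to estimate factor risk premia.
import numpy as np

def two_pass(returns: np.ndarray, factors: np.ndarray):
    """returns: T x N matrix of crypto returns; factors: T x K matrix of factor returns."""
    T, N = returns.shape
    X = np.column_stack([np.ones(T), factors])            # intercept + factors
    coefs, *_ = np.linalg.lstsq(X, returns, rcond=None)   # (K+1) x N coefficients
    betas = coefs[1:].T                                    # N x K exposures
    avg_ret = returns.mean(axis=0)                         # N average returns
    Z = np.column_stack([np.ones(N), betas])
    lambdas, *_ = np.linalg.lstsq(Z, avg_ret, rcond=None)
    return betas, lambdas[1:]                              # exposures, factor risk premia

# Usage: betas, premia = two_pass(crypto_returns, factor_returns)
```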


Their findings confirm most of the previous results and produce some novel insights. Two out of the ten factors, size and the commodity index, have a negative and highly significant correlation with the cross-section of cryptocurrency returns. Long-short strategies do not deliver significant returns across the ten factors. Yet they might provide excellent investment opportunities for commodity portfolios and for the size factor: for example, buying (selling) cryptos with a negative (positive) correlation to diversify a commodity portfolio or investments in blue-chip stocks. As the crypto-market is uncorrelated with market volatility (VIX), these strategies would likely be effective in any state of the economy. Finally, these results support the market participants' view that cryptocurrencies are still too volatile to serve as a store of value, in the sense that cryptos with a negative sensitivity to safe-haven assets, such as gold or precious metals, are appreciated by investors.


References mentioned in this post


Bouri, E., Azzi, G., and Dyhrberg A. H. (2016) "On the return-volatility relationship in the Bitcoin market around the price crash of 2013." Available at SSRN 2869855.


Hayes, A. (2016) “What Factors Give Cryptocurrencies Their Value: An Empirical Analysis.” Available at SSRN 2579445.


Mehrshad Motahari, CERF Research Associate 
February 14, 2020

Artificial Intelligence in Asset Management: Hype or Breakthrough?

Artificial intelligence (AI) has become a major trend and has disrupted most industries in recent years. The financial services sector has not been an exception to this development. With the advent of FinTech, which has had an emphasis on the use of AI, the sector has experienced a revolution in some of its core practices. Asset management is probably the most affected practice and is expected to suffer the highest number of job cuts in the foreseeable future. A sizeable proportion of asset management companies are now using AI instead of humans to develop statistical models and run trading and investment platforms.

In a recent article entitled ‘Artificial Intelligence in Asset Management’, CERF Research Associate Mehrshad Motahari and co-authors Söhnke M. Bartram and Jürgen Branke (Warwick Business School, University of Warwick) provide a systematic overview of the wide range of existing and emerging AI applications in asset management and set out some of the key debates. The study focusses on three major areas of asset management in which AI can play a role: portfolio management, trading, and portfolio risk management.

Portfolio management involves making decisions on the allocation of assets to build a portfolio with specific risk and return characteristics. AI techniques improve this process by facilitating fundamental analysis through the processing of quantitative and textual data and by generating novel investment strategies. Essentially, AI helps produce better estimates of asset returns and risks and solve portfolio optimisation problems under complex constraints. Together, these can deliver portfolios with better out-of-sample performance than traditional approaches.
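As a stylised illustration of the optimisation step, the sketch below solves a long-only mean-variance allocation numerically; the expected returns, covariance matrix and risk-aversion parameter are invented for the example and are not taken from the article.

```python
# Minimal mean-variance portfolio optimisation under long-only, fully-invested constraints.
import numpy as np
from scipy.optimize import minimize

np.random.seed(0)
n_assets = 5
mu = np.random.uniform(0.02, 0.10, n_assets)            # hypothetical expected returns
A = np.random.randn(n_assets, n_assets)
cov = A @ A.T / n_assets + 0.01 * np.eye(n_assets)       # hypothetical covariance matrix
risk_aversion = 5.0

def neg_utility(w):
    # Negative mean-variance objective: w'mu - (lambda/2) * w'Sigma w
    return -(w @ mu - 0.5 * risk_aversion * w @ cov @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # weights sum to one
bounds = [(0.0, 1.0)] * n_assets                                 # long-only

result = minimize(neg_utility, x0=np.full(n_assets, 1.0 / n_assets),
                  bounds=bounds, constraints=constraints)
print("Optimal weights:", result.x.round(3))
```

In practice, the AI-based approaches discussed above aim to improve the estimated inputs and to handle far richer constraints than this toy problem.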

Another popular area for AI applications is trading. The speed and complexity of modern trading have made AI techniques an essential part of trading practice. Algorithms can be trained to execute trades automatically on the basis of trading signals, which has given rise to a whole new industry of algorithmic (or algo) trading. In addition, AI techniques can help minimise transaction costs. Many traders have started using algorithms that automatically analyse the market and identify the best time and size for a trade at any point in time.

 Since the 2008 financial crisis, risk management (and compliance) have been at the forefront of asset management practices. With the increasing complexity of financial assets and global markets, traditional risk models may no longer be sufficient. Here, AI techniques that learn and evolve through the use of data can improve the tools required for monitoring risk. Specifically, AI approaches can extract information from various sources of structured or unstructured data more efficiently and produce more accurate forecasts of bankruptcy and credit risk, market volatility, macroeconomic trends, financial crises, etc. than traditional techniques. AI also assists risk managers in the validation and back-testing of risk models.

AI techniques have also started gaining popularity in new practices, such as robo-advising. This area has gained significant public interest in recent years. Robo-advisers are computer programs that provide investment advice tailored to the needs and preferences of investors. The popularity of robo-advisers stems from their success in democratising investment advisory services by making them less expensive and more accessible to unsophisticated individual investors. It is a particularly attractive tool for young (millennial) and tech-savvy investors. AI can be considered the backbone of robo-advising algorithms, relying heavily on the applications of AI in asset management discussed above.

With all the above advantages, there are also costs associated with the use of AI approaches. These models are often opaque and complex, making them difficult, if not impossible, for managers to scrutinise. AI models are also highly sensitive to data. They may be improperly trained as a result of using poor quality or inadequate data. Insufficient human supervision can result in systematic crashes, the inability to identify inference errors, and a lack of understanding of investment practices and attribution of performance by investors. Last but not least, asset managers need to ask whether the benefits associated with AI can justify their considerable development and implementation costs.

AI is still in its early days in finance and has a long way to go before it can replace humans in all aspects of asset management. What AI does today is limited to automating specific tasks within asset management, often with some form of human intervention at the implementation stage. In fact, there is not much new about the AI techniques used in finance, and they have been around as part of statistics for a long time. Instead, what has led to the recent hype is the availability of vast new data sources and the computing power to extract information from them. AI’s ability to capture complex and nonlinear relationships from the ever-growing volumes of data, including textual ones that are relatively time-consuming for humans to analyse, has proven to be highly beneficial. One can imagine that AI’s footprint will only increase as asset managers compete for more information at higher speeds. Hype or not, AI is here to stay, and its heyday is yet to come.


Bartram, Söhnke M., Jürgen Branke, and Mehrshad Motahari. 2020. Artificial Intelligence in Asset Management, Cambridge Judge Business School Working Paper No. 01/2020.


Argyris Tsiaras, CERF Research Associate, January 2020

Understanding the Cross-Section of International Equity Markets

A large literature in international finance has established the relevance of a wide array of frictions in financial investments across borders leading to the concentration of equity investments within national borders (home bias in equity portfolios) and to large biases in the composition of investors’ foreign equity portfolios (foreign bias). Moreover, despite increasing integration of international equity markets in recent decades, asymmetries in bilateral return comovement between equity markets remain large. In a working paper entitled “Asset Pricing of International Equity under Cross-Border Investment Frictions” and recently presented at the 2020 American Finance Association meetings in San Diego, CERF research associate Argyris Tsiaras and collaborator Thummim Cho (LSE) undertake a systematic theoretical investigation of how the cross-sections of equity returns and portfolio holdings across countries are jointly shaped by investment frictions and other characteristics of individual countries or equity markets, such as market size or the comovement of cash-flow fundamentals.

Overall, the authors argue that cross-country variation in the degree of cross-border investment frictions is the most important determinant of the cross-sections of equity return moments and of cross-border equity portfolio allocations. The paper investigates the implications of this observation for the literature on international asset pricing models, most of which are still tested under the assumption of frictionless cross-border investing.

The authors establish three robust empirical regularities (stylized facts) in the cross-section of international equities. First, equity markets whose returns are more highly correlated with the global equity market also have greater foreign investor presence. In particular, the share of a stock market held by U.S. investors, henceforth referred to as the U.S. investor (cross-border) position, has strong explanatory power for the cross-country variation in correlations of an equity market’s excess return with the U.S. market return. In their sample of 40 countries, the U.S. investor position in a country averaged over 2000-2017 explains about 40% of the cross-sectional variation in the return correlations over the same period. Importantly, the relative size of the equity markets and indicators of real-sector comovement, such as the size of bilateral trade and the GDP correlation between the country and the U.S., are unable to account for the cross-section of return comovement. These patterns are hard to reconcile with standard portfolio choice models under frictionless access to international equity markets, which typically predict that investors wish to avoid large positions in assets that are highly correlated with their overall portfolio return.

Second, equity markets whose returns comove less with the global (or U.S.) equity market appear to have larger pricing errors with respect to the global Capital Asset Pricing Model (CAPM) and other multi-factor international asset pricing models. As a result, the security market line (average returns versus betas) in global equity markets appears to be flat or even negative, pointing to a puzzlingly low, or even negative, price of global market risk. Combining this regularity with the first stylized fact, international equity investors have low market positions in markets with high apparent expected returns and low global risk, an observation hard to reconcile with the predictions of frictionless portfolio choice models. Third, investors based in countries that comove less with the global (or U.S.) equity market have equity portfolios that are more biased towards domestic stocks (greater “home bias”).

 To rationalize these empirical patterns, the authors develop a general-equilibrium model of the global economy featuring heterogeneity across countries in cross-border financial investment frictions, modeled in reduced form as proportional holding costs, as well as rich heterogeneity in other potentially relevant aspects, such as risk preferences or cash-flow fundamentals. In the model, the activity of foreign investors in a country’s equity market amplifies return volatility relative to volatility in cash-flow fundamentals and causes fluctuations in countries’ valuation ratios. Importantly, the magnitude of this amplification is decreasing in the holding cost incurred by foreign investors, so that heterogeneity in holding cost across countries translates into heterogeneity in the degree of equity market return comovement with the large market (first stylized fact).


The model also explains the negative relationship between CAPM alphas and betas (second stylized fact), because the high apparent average returns on the stock markets of countries with low return correlations are not in fact attainable by foreign investors in these countries. Because countries with high holding costs, and thus high CAPM alphas, have endogenously low return correlations with global equity markets, a test of the standard market model, which only allows for a uniform intercept across all equity markets, yields a flat security market line and a deceptively low, or even negative, price of global market risk. Finally, high holding costs in a country’s equity market imply a large degree of home bias in the equity portfolio of investors based in that country mainly because high frictions to foreign investors in the local market in equilibrium translate into a comparative advantage of the local market relative to foreign markets as a financial investment for local investors. The impact of holding costs on the endogenous wealth of local investors amplifies the negative impact of local-investor home bias on the foreign position in the local equity market.

 Reference mentioned in this post

 Cho, Thummim and Tsiaras, Argyris (2020). “Asset Pricing of International Equity under Cross-Border Investment Frictions”. Working Paper.


Scott B. Guernsey, CERF Research Associate, December 2019

The Shareholder Value of Stakeholder Orientation


Ever since Milton Friedman’s celebrated 1970 article – The Social Responsibility of Business is to Increase its Profits – the shareholder model of the corporation has commanded widespread acceptance amongst finance academics and practitioners. Under this model, share value maximization provides the exclusive yardstick for managerial performance (Friedman, 1970), while discretion to consider the interests of other stakeholders (e.g., employees, suppliers, customers, creditors, and local communities) is interpreted as enabling managers to rationalize any self-serving action (i.e., “managerial moral hazard”) (Tirole, 2001).

More recently, however, both finance scholars and business leaders have started paying renewed attention to the interests of stakeholders (i.e., “stakeholder orientation”). For example, the article “A Theory of the Stakeholder Corporation”, published in Econometrica by Michael Magill (University of Southern California), Martine Quinzii (University of California, Davis), and Jean-Charles Rochet (University of Geneva), shows theoretically that firms that focus exclusively on shareholder value maximization are more exposed to certain risks that arise from their own investment and production decisions (i.e., “endogenous risks”). Further, these risks generate negative externalities on stakeholders (e.g., lower wages for employees or higher product prices for consumers), leading them to underinvest in their relationship with the firm and, ultimately, decrease its long-term value. Additionally, institutional investors and CEOs of the largest U.S. corporations seem increasingly willing to accept and even advocate for a corporate model with greater stakeholder orientation (Sorkin, 2018). For instance, on the 19th of August 2019, the Business Roundtable released a trailblazing statement, signed by the nearly 200 CEOs who are its members, redefining the purpose of a corporation and calling for a governance model that benefits not only shareholders but all stakeholders.[1]

Motivated by these developments, in the article “Stakeholder Orientation and Firm Value”, CERF Research Associate Scott Guernsey, and collaborators Martijn Cremers (University of Notre Dame) and Simone Sepe (University of Arizona), study the firm-level implications of increased stakeholder orientation in director decision-making by examining how the enactment of U.S. state-level directors’ duties laws (DDLs) affects shareholder value. DDLs – also known as “corporate constituency statutes” or “stakeholder laws” – increase stakeholder orientation by permitting directors to consider the impact of corporate decisions (such as whether to accept an acquisition offer) on an expanded set of stakeholder interests.

Their main finding is that the enhanced stakeholder orientation enabled by the passage of DDLs results in an increase in the shareholder value of firms incorporated in the adopting states. They show that this value improvement is more pronounced for firms where stakeholder investments are more relevant (e.g., firms that are more reliant on employees, customers, strategic alliance partners, and creditors) or firms that are more engaged in innovative activity (e.g., R&D spending and patenting outputs). The authors also find that, after these laws are passed, employees gain in job security, creditors gain from DDL-firms being more financially sound, and DDL firms increase innovative activity (where stakeholders’ firm-specific investments are key inputs to the firm’s innovation). These benefits, however, tend to be offset in firms with more severe agency problems (e.g., firms with longer tenured CEOs, stronger union influence on management, underutilized assets, and with higher operating expenses or more free cash-flows) where it is more likely that increased director discretion might be abused in the exclusive interest of management.

Overall, their results suggest that shareholders can benefit from greater stakeholder orientation in director decision-making (via DDLs) as it improves the commitment toward stakeholders and reduces contracting costs in many firms, but one size does not fit all.

References mentioned in this post

Friedman, M. 1970. The social responsibility of business is to increase its profits. New York Times Magazine. September 13.

Magill, M., M. Quinzii, and J-C. Rochet. 2015. A theory of the stakeholder corporation. Econometrica 83:1685-1725.

Sorkin, A. 2018. BlackRock’s message: Contribute to society, or risk losing our support. New York Times. January 16.

Tirole, J. 2001. Corporate governance. Econometrica 69:1-35.


Adelphe Ekponon, CERF Research Associate, November 2019

Is Firms’ Debt Financing Good for Economic Growth? 

In addition to internal funds, firms have two main sources of financing: equity and debt (in general, a mix of both). The latter comes with tax benefits, and its cost has historically been lower. However, relying heavily on debt financing increases a firm’s bankruptcy risk. This points to the existence of an optimal leverage level (debt over the total value of the firm), as predicted by the trade-off theory, e.g., Leland (1994).

There is still some debate as to whether firms should use more debt than they do. According to Miller (1977), taxes are large and certain, whereas bankruptcy is rare and its dead-weight costs are low; thus, firms should have higher leverage levels than what we observe. Myers (1977) argues that, in the presence of risky debt, equity holders underinvest (debt overhang) because a significant fraction of the value generated by new investments accrues to debt holders. Debt overhang has also been shown to curb firms’ innovation and investment (see Chava and Roberts, 2008). What about the effects of debt financing at the industry or aggregate level?

Research that follows the trade-off theory treats financing decisions as independent of investment choices, as in Modigliani and Miller (1958), by assuming that the dynamics of the firm’s assets are exogenously given. More generally, very few models consider both financing and investment decisions, particularly when agents are risk averse. Lambrecht and Myers (2017) show that different specifications of managers’ preferences produce different predictions about the interactions between financial decisions. With power utility, investment and financing decisions are connected, but with exponential utility, managers separate investment from financing decisions. In both cases, managers underinvest because of risk aversion, confirming the debt overhang phenomenon.

A recent study by Geelen, Hajda, and Morellec (2019) shows that even if debt financing can have a negative effect on innovation and investment at the firm level, it also stimulates entry of new firms in the capital markets, thereby fostering innovation and growth at the aggregate level. What makes this new finding important? Recent productivity growth and job creation are the handiwork of start-ups and tech companies, particularly big tech. However, these firms heavily rely on R&D investments, which are now higher than CAPEX at the aggregate level for public firms (Doidge, Kahle, Karolyi, and Stulz, 2018). Debt is a key source of financing for large and small firms as well as for start-ups (Robb and Robinson, 2014). 

To capture these empirical observations, Geelen, Hajda, and Morellec (2019) developed a Schumpeterian growth model (innovation makes existing products obsolete) in which firms’ dynamic R&D, investment, and financing choices are jointly and endogenously determined. The paper shows that although debt financing hampers investment at the firm level (debt overhang), it increases aggregate investment by stimulating creative destruction and entry of new firms.

References mentioned in this post 

Chava, S. and Roberts M. R. (2008) "How does Financing Impact Investment? The Role of Debt Covenants." Journal of Finance, 63: 2085-2121 

Doidge, C., Kahle, K. M., Karolyi, G. A. and Stulz R. M. (2018) “Eclipse of the Public Corporation or Eclipse of the Public Markets?” Journal of Applied Corporate Finance, 30: 8-16 

Geelen, Hajda, and Morellec (2019) “Debt, Innovation, and Growth.” Working Paper, EPFL 

Lambrecht, B. M. and Myers, S. C. (2017) “The Dynamics of Investment, Payout and Debt.” Review of Financial Studies 30: 3759–3800 

Robb, A. M., and Robinson, D. T. (2014) “The capital structure decisions of new firms.” Review of Financial Studies 27: 153-179

Oğuzhan Karakaş, CERF Fellow, October 2019

To Vote, or Not to Vote, That is the Question

With the advent of financial engineering and technology, the fabric of financial securities is changing. While this change has certain advantages, such as bringing costs down, it also has unintended consequences, such as impairing the voting rights associated with the securities. A potential reason underlying this issue is an oversight in the design of new financial securities and the underlying regulations: in contrast with cash flow rights, the non-cash-flow-related contractual rights of securities, including the right to vote or the right to sue, tend to be overlooked.

Contemporaneously, the U.S. Securities and Exchange Commission (SEC) has been inquiring and debating about the “proxy plumbing” – extensive problems, ranging from over-voting to under-voting, associated with the complex, dated, and inefficient infrastructure supporting the proxy voting system. A recent recommendation of the SEC Investor Advisory Committee on proxy plumbing argues that SEC intervention is necessary for the overhaul of the system.[1]

In the article “Phantom of the Opera: ETF Shorting and Shareholder Voting”, CERF Fellow Oğuzhan Karakaş and research collaborators Richard Evans (University of Virginia), Rabih Moussawi (Villanova University), and Michael Young (University of Virginia), find that short-selling of Exchange Traded Funds (ETFs) leads to “phantom shares” of the underlying that are not voted. This unintended consequence arises because the underlying shares are held as collateral or as a hedge by securities lenders or authorized participants/broker-dealers. The authors show that phantom shares (i) are costly, since they do not convey voting rights to the ETF owners but are sold at the full price of a share, which reflects both cash flow rights and voting rights; (ii) create inefficiencies within the voting process by leading to under-voting; (iii) are positively related to the voting premium, particularly during contentious votes; and (iv) are associated with poor governance, such as value-reducing acquisitions.

Regulatory concerns regarding the above-mentioned findings would arguably be even more pronounced during times when the market is bearish and/or when corporate votes are very valuable. A solution could be to incorporate distributed ledger technology (commonly known as “blockchain”) into the proxy system, as also discussed in the recommendation of the SEC Investor Advisory Committee on proxy plumbing.

References mentioned in this post

  • Evans, R.B., O. Karakaş, R. Moussawi, and M. Young. 2019. Phantom of the Opera: ETF Shorting and Shareholder Voting. Working Paper, University of Virginia, University of Cambridge and Villanova University.


Scott B. Guernsey, CERF Research Associate, September 2019

FinTech Disruption: Is it Good or Bad for Consumers? 

Financial technology (“FinTech”) is a rapidly growing industry that applies recent digital innovations and technology-enabled business model innovations to financial services. A common example is its application of smartphone technologies to banking. For instance, from the convenience of a mobile phone, FinTech consumers can access depository accounts, transfer funds, request loans, and pay monthly bills. Correspondingly, the emergence of the FinTech industry has expanded the accessibility of many financial services to the general public.

Recent regulation in the EU (the Second Payment Services Directive – PSD2) and the UK (the Open Banking initiative) suggests that policy makers generally regard FinTech’s entrance into the financial services industry favourably.[1] Mandated by these respective legislative actions, traditional banks must release data on their customers’ accounts to authorized FinTech firms, with the aim of opening “up payment markets”, “leading to more competition, greater choice and better prices for consumers” (Summary of Directive (EU) 2015/2366 on EU-wide payment services). But is the competition/disruption created by FinTech firms in financial services’ markets always in the interests of consumers? And what role does the portability of data – as required by the PSD2 and the Open Banking initiative – play in these markets?

A recent research article presented at this year’s Cambridge Corporate Finance Theory Symposium by Professor Uday Rajan (University of Michigan) demonstrates the complex effects that may arise when a FinTech entrant and an incumbent bank compete in the market for payments processing. The paper begins by underscoring two important functions that a bank provides to consumers: (i) it processes their everyday payments (e.g., recurring bills), and (ii) it offers them loans when requested. Intuitively, these two financial services are interconnected, as the transaction data created from processing payments enables the bank to be informed about its consumers’ credit quality. This information externality makes the bank better off and incentivizes it to bundle payment services and consumer loans. More surprisingly, the paper finds that consumers can also gain from the bank having their information, as more creditworthy consumers are offered better interest rates on their loans.

From this starting point, Professor Rajan (and co-authors, Professors Christine Parlour and Haoxiang Zhu) then show that competition from FinTech firms, which act purely as payment processors, can disrupt the bank’s information flow. Consequently, the bank loses market share and consumer information, and becomes less profitable. Additionally, consumers that might need a loan can also suffer from this lost information. Moreover, the entrance of a FinTech firm can either decrease or, quite surprisingly, increase the price the bank charges for its payment services. The latter instance occurs if the bank opts to focus its payment business on the population of consumers that are more reliant on (or have a greater affinity for) brick-and-mortar banks, and thus are more tolerant of higher prices. Conversely, the consumer population that is more technologically sophisticated and willing to use FinTech services experiences the greatest gains, as its costs for payment services are reduced by the added competition.

The authors then apply their model to a world in which consumers are given complete ownership and portability of their payment data. They show that this policy effectively unbundles a bank’s payment services from its bank loans, which in turn has different ramifications for different consumers. On the one hand, a certain subset of the consumer population that is more technologically sophisticated and less reliant on a traditional banking relationship is made better off via more choice and lower prices. On the other hand, consumers that have a greater affinity for banks and that are less technologically sophisticated can be hurt by policies that mandate portability of their data because the bank will exploit this smaller group of bank-reliant consumers, charging a higher price for its payment services. These key results underline both the good and bad of FinTech disruption and the likely heterogeneous effects of PSD2 and the Open Banking initiative on consumer welfare.


Adelphe Ekponon, CERF Research Associate, August 2019

Agency Conflicts and Costs of Equity

The agency problem, in the context of separation in ownership (shareholders or the principals) and control (managers or the agents), is one of the most important issues in corporate finance.

This separation may induce conflicts of interest inherent in any relationship where an agent is expected to work in the best interests of a principal. In the case of a company, these conflicts of interest arise when executives, or more generally insiders, which could include controlling shareholders, favour their own interests at the expense of the company's goals.

There are various manifestations of this behaviour. Managers may appropriate part of the profits, sell the firm’s output or assets below fair value to their own businesses, divert profitable growth options, or recruit unqualified relatives into senior positions. See Jensen and Meckling (1976), La Porta et al. (2000, 2002), and Lambrecht and Myers (2008).

The impact of self-interested management on corporate choices and asset prices has been extensively described in several theoretical and empirical works. These document that entrenched managers tend to underinvest and choose lower leverage. In response, shareholders may force them to increase leverage, because coupon payments reduce the firm’s free cash flow, which limits the amount available for cash diversion. Debt can therefore be used as a tool to discipline managers. Entrenched managers can also resist hostile takeovers and push for the adoption of provisions that reduce shareholders’ rights.

All these frictions reduce not only profits but also operational efficiency, and they affect equity prices and volatility. To measure the impact of agency costs on equity prices, Gompers, Ishii, and Metrick (2003) and Bebchuk, Cohen, and Ferrell (2009) constructed indexes, the G-index and the E-index respectively, that measure the balance of power between shareholders and managers. High index levels (extensive management power) translate into high agency costs. They document that increases in these index levels are associated with economically significant reductions in firm value, profits, and equity prices during the 1990s.

Most theoretical papers that study the impact of agency conflicts on asset prices do not emphasize its influence on the cost of equity. Empirical papers, meanwhile, focus only on the level of severity of the conflict.

A working paper by CERF Research Associate Adelphe Ekponon proposes a theoretical approach and provides empirical evidence that time-series fluctuations in this conflict also have the potential to explain cross-sectional differences in equity prices. Specifically, the difference between average index values in bad times and in normal periods is positively correlated with the cost of equity, even after controlling for prominent market factors. The data cover 1990 to 2006.

The most important economic implications of this result are twofold: firms with a countercyclical governance policy (better governance in bad times) have a lower cost of equity, and changes in governance practices in bad versus good times are a pricing factor for stocks.

Interestingly, the paper shows that these results are closely linked to manager-shareholder conflicts, as it documents a U-shaped relationship between changes in the G-index and the cost of equity (too many restrictions in bad times create conflicts and impede managers’ ability to run the company efficiently), while this relationship is linear for the E-index. The latter index is constructed from a subset of the G-index provisions that focus on managerial entrenchment.

References mentioned in this post

Bebchuk, L., Cohen, A. and Ferrell, A. (2009), What matters in corporate governance?, Review of Financial Studies 22(2), 783–827.

Gompers, P., Ishii, J. and Metrick, A. (2003), Corporate governance and equity prices, Quarterly Journal of Economics 118(1), 107–156.

Jensen, M. C. and Meckling, W. H. (1976), Theory of the firm: Managerial behavior, agency costs and ownership structure, Journal of Financial Economics 3(4), 305–360.

Lambrecht, B. M. and Myers, S. C. (2008), Debt and managerial rents in a real-options model of the firm, Journal of Financial Economics 89(2), 209–231.

La Porta, R., Lopez-de-Silanes, F., Shleifer, A. and Vishny, R. (2000), Investor protection and corporate governance, Journal of Financial Economics 58(1-2), 3–27.

La Porta, R., Lopez-de-Silanes, F., Shleifer, A. and Vishny, R. (2002), Investor protection and corporate valuation, Journal of Finance 57(3), 1147–1170.


Dr. Hui Xu, CERF Research Associate, July 2019

What determines a cryptocurrency’s expected return?

Since their emergence, cryptocurrencies have quickly become a focus for asset managers. Although many ongoing debates about cryptocurrencies remain to be settled, e.g. whether their value can be justified and how they relate to the fiat money issued by central banks, they do offer investors an alternative opportunity to diversify their portfolios. Yet before constructing a portfolio that includes cryptocurrencies, a question has to be answered: what is their risk and return profile, and what are its determinants?

Since stock markets are well developed and thoroughly studied, it is intuitive to ask whether the factors that successfully account for stock returns also apply to the cryptocurrency market. Although shares and cryptocurrencies are fundamentally different, they share a fair number of similarities. In particular, some cryptocurrencies (digital tokens) represent a claim on the issuer. Eugene F. Fama and Kenneth R. French found in 1992 that size risk and value risk account for stock returns in addition to the well-known market beta, based on the evidence that value and small-cap stocks outperform the market on a regular basis. Mark Carhart augmented the three-factor model with a momentum factor that describes the tendency for a stock price to continue rising if it is going up and to continue declining if it is going down.
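For reference, the standard Carhart four-factor specification (a textbook statement, not quoted from the papers discussed here) regresses an asset's excess return on the market excess return and the size (SMB), value (HML) and momentum (MOM) factors:

```latex
\begin{equation}
R_{i,t} - R_{f,t} = \alpha_i
  + \beta_i\,(R_{m,t} - R_{f,t})
  + s_i\,\mathit{SMB}_t
  + h_i\,\mathit{HML}_t
  + m_i\,\mathit{MOM}_t
  + \varepsilon_{i,t}
\end{equation}
```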

A recent NBER working paper by Aleh Tsyvinski et al. tested this idea and showed that most of the powerful explanatory factors in the stock market, namely market beta, size, and momentum, also capture the cross-section of expected cryptocurrency returns. Since the advent of cryptocurrencies, many studies have examined their expected returns and many explanatory factors have been suggested. Interestingly, the authors showed that all of the excess returns generated by the trading strategies implied by previous studies can, in fact, be accounted for by the cryptocurrency three-factor model.

A further inquiry is whether there exists a “twin” value factor in the cryptocurrency market, why the size and momentum factors are so mysteriously powerful, and whether they affect cryptocurrency returns in the same way they affect stock returns. One thing is for sure: as the cryptocurrency market continues to burgeon, these questions will eventually be answered.


Shadow Pills and Visible Value

By: Scott B. Guernsey, CERF Research Associate, June 2019

The “poison pill” (formally known as a “shareholder rights plan”) has a long and contentious history in the United States as a tactic to deter takeovers.[1] While details can vary across implementations, the key defensive mechanism of the pill provides existing shareholders with stock purchase rights that entitle them to acquire newly issued shares at a substantial discount in the “trigger” event that a hostile bidder obtains more than a pre-specified percentage of the company’s outstanding shares (e.g., 10-15%).[2] As a result, poison pills give a firm’s board of directors the ability to substantially dilute the ownership stake of a hostile bidder, de facto giving the board veto power over any hostile acquisition.
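The dilution mechanics can be illustrated with a stylised calculation; the numbers below are hypothetical and not taken from any of the studies cited in this post.

```python
# Stylised "flip-in" pill illustration: how exercising discounted purchase rights
# dilutes a hostile bidder that has crossed the trigger threshold.
shares_outstanding = 100_000_000
bidder_stake = 0.15                                    # bidder crosses a 15% trigger
bidder_shares = int(bidder_stake * shares_outstanding)
other_shares = shares_outstanding - bidder_shares

# Assume every shareholder except the bidder may buy one new share per share held
# at a deep discount, and that all of them exercise this right.
new_shares = other_shares
total_after = shares_outstanding + new_shares

print(f"Bidder stake before trigger: {bidder_shares / shares_outstanding:.1%}")   # 15.0%
print(f"Bidder stake after dilution: {bidder_shares / total_after:.1%}")          # ~8.1%
```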

Correspondingly, law and finance scholars generally agree that the poison pill is perhaps the most powerful anti-takeover defense (e.g., Malatesta and Walkling 1988; Ryngaert 1988; Comment and Schwert 1995; Coates 2000; Cremers and Ferrell 2014). However, whether a firm’s managers use the poison pill to the benefit or detriment of its shareholders is the subject of an enduring debate in both the corporate finance literature and in U.S.’ state courts.

Prior empirical studies have attempted to investigate the value implications of a firm’s decision to employ a poison pill as a strategy to deter takeovers. While earlier findings were mixed, over the past decade most studies have found that the adoption of a pill is negatively associated with firm value (e.g., Bebchuk, Cohen and Ferrell 2009; Cuñat, Gine, and Guadalupe 2012; Cremers and Ferrell 2014). Unfortunately, however, this result is challenging to interpret, as the choice to adopt a pill is endogenous – meaning, for example, that the finding might imply that a firm was losing value and decided to adopt a pill in response rather than the conclusion that the adoption of the pill led to lowered firm value. Adding to the difficulty of researchers, since poison pills can be unilaterally adopted by a firm’s board of directors, even firms that do not currently have a poison pill in place still have the right to adopt a pill at any time – this right is termed by scholars as a “shadow pill” (Coates 2000).

In the article “Shadow Pills and Long-Term Firm Value”, CERF Research Associate Scott Guernsey, and research collaborators Martijn Cremers (University of Notre Dame), Lubomir Litov (University of Oklahoma), and Simone Sepe (University of Arizona), contribute to the debate on the value implications of the poison pill by shifting the focus from “visible” (or realized) pills to shadow pills – that is, studying the effect that arises from the right to adopt a poison pill rather than its actual adoption. To do this empirically, the study’s tests focus on U.S. state-level poison pill laws (“PPLs”) – enacted by 35 states between 1986 and 2009 – which legally validated the use of the pill, hence strengthening these firms’ shadow pill.

Using the staggered enactments of PPLs by different states in different years, the authors find that firms incorporated in states with a stronger shadow pill experience significant increases in firm value, and especially for firms with stronger stakeholder relationships (e.g., with a large customer or in a strategic alliance) and more engaged in innovation (e.g., R&D investments or with patents). Additionally, the study confirms the prior literature’s results on a negative correlation between firm value and actual pill adoption.

Overall, the authors’ findings suggest that a stronger shadow pill can benefit certain firms’ shareholders even if a visible pill does not. For these firms, the right to adopt a pill could serve as an instrument of good corporate governance: it credibly signals the firm’s commitment to more stable stakeholder relationships and/or longer-term investment projects by protecting them against potential disruptions from short-term shareholder interference via the takeover market.

References mentioned in this post

Bebchuk, L., A. Cohen, and A. Ferrell. 2009. What matters in corporate governance? Review of Financial Studies 22:783-827.

Coates IV, J.C. 2000. Takeover defenses in the shadow of the pill: A critique of the scientific evidence. Texas Law Review 79:271-382.

Comment, R., and G.W. Schwert. 1995. Poison or placebo? Evidence on the deterrence and wealth effects of modern antitakeover measures. Journal of Financial Economics 39:3-43.

Cremers, M., and A. Ferrell. 2014. Thirty years of shareholder rights and firm value. Journal of Finance 69:1167-96.

Cuñat, V., M. Gine, and M. Guadalupe. 2012. The vote is cast: The effect of corporate governance on shareholder value. Journal of Finance 67:1943-77.

Malatesta, P.H., and R.A. Walkling. 1988. Poison pill securities: Stockholder wealth, profitability, and ownership structure. Journal of Financial Economics 20:347-76.

Ryngaert, M. 1988. The effect of poison pill securities on shareholder wealth. Journal of Financial Economics 20:377-417.

Slaughter and May. 2010. A guide to takeovers in the United Kingdom.

[1] The use of the poison pill is not permitted in the U.K. because: (i) it is viewed as a breach of fiduciary duty, and (ii) it is disallowed by General Principle 3 and Rule 21 of the City Code (Slaughter and May 2010).

[2] This describes the “flip-in” poison pill, which has become the predominant form in the U.S.; for other methods see: “preferred stock plans,” “flip-over” poison pills, “back-end rights plans,” “golden handcuffs,” and “voting plans.”


Adelphe Ekponon, CERF Research Associate, May 2019

A corporate finance model for Cryptocurrencies 

After the credit crunch of 2009, you may have heard of Bitcoin or of cryptocurrencies in general. Bitcoin and altcoins (the terminology used to refer to all other cryptocurrencies) are digital currencies built on distributed ledger technologies such as blockchain, and so are not regulated by a central authority. Whether cryptocurrencies are perceived as currencies by authorities, treated as financial assets by investors and regulators, or used as security or utility tokens, it is clear that the digital currencies’ market is ‘a small but growing market’, as commented by Christopher Woolard, Executive Director of Strategy and Competition at the FCA (UK Financial Conduct Authority) [1].

Since Bitcoin, invented by the so-called Satoshi Nakamoto, more than 2,000 other altcoins have been created for various purposes [2]. The market capitalization of the largest 100 cryptocurrencies has increased from 1.5 billion in 2013 to 250 billion in May 2019, with a peak of 795 billion in January 2018. With such interest from not only individual consumers but also business users, authorities in countries such as the UK, France, Switzerland, South Korea, the United States and others have initiated regulatory sandboxes, either to educate consumers (as with the UK guidance) or to create high-level government task forces to investigate the technology and its regulatory implications.

Authorities around the world, including governments and central banks, often remain skeptical about digital currencies, rightly so, because many questions, whether about the scalability of the underlying technology [3] or about the nature of the crypto assets, require investigation and clarification before any wider adoption of the technology.

As mentioned above, the prices of cryptoassets on exchanges show extremely high volatility compared with, for instance, the equity market. Authorities and trading exchanges often warn that prices can fall to zero overnight. This raises the question of whether any cryptoasset has a fundamental value that can sustain its market price, or whether crypto prices follow a completely different pricing model that needs to be investigated.

In ongoing work, CERF Research Associate A. Ekponon and K. Assamoi (Liquidity analyst at MUFG Securities) propose a corporate finance model for the pricing of cryptocurrencies.

First, they model the scale level of a cryptofirm, e.g. Bitcoin or Ethereum, following Bhambhwani et al (2019) and Hayes (2015). This scale is assumed to be constant but may change (infrequently) up or down over time following fundamentals. How cryptofirms should be classified as investments remains to be clarified; this paper takes the view that a cryptocurrency is a financial asset [4].

If so, the overall activity around a cryptofirm can be mapped into a standard firm setting. Fundamental values represent the initial cash-flow level whenever the firm changes scale. Miners, whose work consists of validating peer-to-peer transactions, represent labour. Successful crypto mining produces new coins for miners (block rewards) and improves the trust in, and security of, the technology, making the cryptofirm more valuable. Validating transactions is also rewarded by users through transaction fees. Rewards to miners constitute wages, and the computation costs incurred by miners are also accounted for.

Second, the article assumes that a cryptocurrency corresponds to the firm’s equity and that its cash flow evolves around its fundamental level, driven by standard Brownian motions.

Third, the optimal levels of the firm’s fundamentals, among which the difficulty of transaction validation, the rate of unit production, the cryptographic algorithm employed and the aggregate computing power, as well as the cryptocurrency price, are derived using methods from dynamic corporate finance models (see Strebulaev, 2007).
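A minimal simulation sketch of the kind of cash-flow dynamics described above is given below; it is illustrative only, with made-up parameters, and is not the authors' model.

```python
# Cash flow fluctuating around a fundamental level, with the level itself
# re-scaling up or down at infrequent random times.
import numpy as np

np.random.seed(2)
T, dt, sigma = 1000, 1 / 252, 0.3        # horizon, time step, cash-flow volatility
jump_prob, jump_size = 0.005, 0.5        # hypothetical re-scaling frequency and size

level = np.empty(T)
cash_flow = np.empty(T)
level[0] = cash_flow[0] = 1.0
deviation = 0.0

for t in range(1, T):
    # Infrequent up/down shift in the fundamental (scale) level
    if np.random.rand() < jump_prob:
        level[t] = level[t - 1] * (1 + np.random.choice([-1.0, 1.0]) * jump_size)
    else:
        level[t] = level[t - 1]
    # Brownian deviation of the cash flow around the current fundamental level
    deviation += sigma * np.sqrt(dt) * np.random.randn()
    cash_flow[t] = level[t] + deviation

print(cash_flow[-5:].round(3))
```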




The paper’s findings are tested with daily prices of more than 100 of the most actively traded cryptocurrencies. Results from the model’s implications and from the empirical tests will be detailed in a future blog post.

References mentioned in this post

Bhambhwani, S., Delikouras, S. and Korniotis, G. M., Do Fundamentals Drive Cryptocurrency Prices? (May 9, 2019). Available at SSRN:

Hayes, Adam, What Factors Give Cryptocurrencies Their Value: An Empirical Analysis (March 16, 2015). Available at SSRN:

Strebulaev, I. A. (2007), Do tests of capital structure theory mean what they say? Journal of Finance 62, 1747–1787.



[3] EU Blockchain Observatory, Overview and Guiding on Blockchain Scalability and Security Topics, Working Group Blockchain/ICO, 2018, recommendations on future regulations.

[4] 2nd Global Cryptoasset Benchmarking Study, Cambridge Centre for Alternative Finance

Thies Lindenthal, CERF Fellow, Land Economy, May 2019

Machine Learning, Building Vintage and Property Values

Sometimes, all you need is a bit of luck. Erik Johnson (University of Alabama) and I had explored a new way to integrate images from Google Street View as additional input to automatic real estate valuation systems. Writing up the working paper[1], we were looking for relevant policy implications beyond the mundane goal of boosting price prediction accuracy. We struggled. But then the head of UK’s Building Better, Building Beautiful Commission went on the record, claiming that Britain’s housing supply constraints will evaporate if only developers build “as our Georgian and Victorian forebearers built [. . . ] All objections to new building would slip away in the sheer relief of the public”[2]. The research we had done enabled us to put this refreshing view to the test (and to add a policy dimension to the paper).

In a nutshell, our approach automates a process that those of us who have been trying to find a place to rent or buy are surely familiar with: to learn more about a potentially interesting home, one looks it up on Google Street View and tries to infer additional information from the images of the building itself, and also to get a feeling for the neighbourhood. Street-level images are a rich data source, answering many questions: How big is the property and garden? How old is it? Is the exterior well-kept? Does the house have charm? Is its architecture pleasing to the subjective eye? And much more. The challenge is to automatically identify the correct building on Street View, take the best possible picture and classify the property along several dimensions using computer vision (CV) and machine learning (ML) techniques.

Extracting images of individual buildings from Street View was a bigger challenge than expected. Google’s address information is often a relatively broad guess in the UK. Try finding e.g. “84 Vinery Road, Cambridge, CB1 3DT” on Street View to experience the problem yourself. Based on exact maps from the Ordnance Survey, we solve this more technical first step and collect front images of practically all residential homes in Cambridge.

In the ML application, we initially focus on training a classifier for the vintage of buildings. According to colleagues from the architecture department, local houses can be classified into seven broad eras: Georgian (c1714–1837) houses feature key characteristics such as sash windows, fan lights above doors, the use of stucco on facades, and often wrought work grilles, railings etc. In the Early Victorian era (c1837–c1870s), a growing taste for individualized embellishment led to the development of elaborate features such as carved barge boards or finials. The development of sheet glass led to sash windows becoming more affordable and, increasingly, wider. In the Late Victorian era (c1870s–1901), bay windows became more and more widespread, and increasingly substantial. Edwardian architecture (1901–1910) tends to be less ornate than late Victorian architecture. The Interwar period (1918–1939) saw the cost of building construction fall, amidst a drive to provide better housing for the working classes, and new housing types were favoured. The Postwar (1950–1980) era continued on this path, with an embrace of high-rise as well as low-rise housing; facades vary greatly between brick, tiling, pebbledash and render. Our cut-off year for the Contemporary era to begin is 1980. Revival buildings are contemporary buildings that try to emulate historical architecture. It should be self-evident that the sheer amount of detail and variation defies a simplistic classification approach.

We suggest a transfer learning approach in which the images are first translated into high-dimensional feature vectors using an existing CV model (Inception V3[3]). A classifier is then trained to categorise the buildings into vintages based on these feature vectors (softmax). A true innovation of our approach is that we include information on neighbouring buildings in the classification, exploiting the spatial dependency in building vintages.


Note: Feature vectors generated by Inception V3 have 2,048 dimensions, which favours an ML approach (in contrast to, e.g., multinomial logit regressions) in the classification step.
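The sketch below illustrates the general recipe of such a pipeline: a frozen, pre-trained Inception V3 network as a feature extractor feeding a softmax classifier over the seven vintage classes. The labels and training images are placeholders, and the neighbouring-building (spatial) features that are the actual innovation of the paper are omitted.

```python
# Transfer-learning sketch: Inception V3 features + softmax vintage classifier.
import numpy as np
import tensorflow as tf

VINTAGES = ["georgian", "early_victorian", "late_victorian", "edwardian",
            "interwar", "postwar", "contemporary"]

# Pre-trained Inception V3 without its top layer; global average pooling
# yields a 2,048-dimensional feature vector per image.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         pooling="avg")
base.trainable = False

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) with pixel values in [0, 255]."""
    x = tf.keras.applications.inception_v3.preprocess_input(images)
    return base.predict(x, verbose=0)

# Softmax classifier trained on the extracted feature vectors.
clf = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),
    tf.keras.layers.Dense(len(VINTAGES), activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# Placeholder training data standing in for labelled street-view images.
images = np.random.randint(0, 256, size=(32, 299, 299, 3)).astype("float32")
labels = np.random.randint(0, len(VINTAGES), size=32)
clf.fit(extract_features(images), labels, epochs=3, batch_size=8)
```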

Two final-year architecture students classified a large sub-sample of approximately 25,000 images from our data set of Cambridge houses. This is a much larger sample than ultimately needed: in our case, each category requires fewer than 250 samples before additional observations add almost nothing to training accuracy. We greatly exceed this number so that we can compare the out-of-sample convolutional neural network predictions to the ground truth assigned by the experts. This allows us to examine the power and size of the assignment tests. In addition, having both human and machine classifications for a large sample of the data allows for robustness checks on the machine comparisons. The accuracy of the automatic prediction is high (Table 1): a machine can relatively reliably tell different building vintages apart, and even Revival styles are detected. All of this comes at modest cost: classifying the universe of buildings in Cambridge takes only seconds on a contemporary laptop.

Table 1: Confusion matrix – Predicted vintage vs. ground truth


Note: Recall is the share of buildings from a ground truth category that are predicted correctly (the diagonal in the middle panel), and Precision is the share of buildings predicted to belong to a category that are indeed from that category. The F1-score is the harmonic mean of Precision and Recall: F1 = 2 × Recall × Precision / (Recall + Precision).

Coming back to the claim made by Building Better, Building Beautiful that historic aesthetics are valued by the people: if that were true, buyers should prefer revival architecture over more contemporary designs, and buildings with adjacent buildings of historic or revival appearance should command a price premium. However hard we look, we cannot find any evidence for such a preference in real transaction data. After controlling for a house’s location, size and quality, modern designs are as sought after as replicas of old styles. Not surprisingly, reviving the good old times will not solve the housing shortage.

We have to speed up the publication of our paper as much as we can, or we risk losing our policy relevance again: The chairman of the helpful government commission has been fired in the meantime – for reasons not related to our research, though.

[2]     Scruton, Roger. 2018. “The Fabric of the City.” Colin Amery Memorial Lecture. Policy Exchange.

[3]     Szegedy, Christian, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. “Rethinking the Inception Architecture for Computer Vision.”



Scott B. Guernsey, CERF Research Associate, March 2019

As described in the article “The Choice between Formal and Informal Intellectual Property: A Review”, published in the Journal of Economic Literature by Bronwyn Hall (University of California, Berkeley), Christian Helmers (Santa Clara University), Mark Rogers (Oxford University), and Vania Sena (University of Essex), the UK Community Innovation Survey suggests that most UK-based companies consider trade secrets one of the most effective mechanisms to protect their intellectual property. Further, the recent passing of the “Trade Secrets (Enforcement, etc) Regulations 2018” (SI 2018 No. 597) act by Parliament indicates that UK policymakers are also concerned with protecting domestic trade secrets.

Loosely defined, trade secrets are configurations of closely held, confidential information (e.g., devices, formulas, methods, processes, programs, techniques, etc.) that are used in a firm’s operations, are not easily ascertainable by outside parties, and have commercial value for the holder precisely because they are secret. Common examples include detailed information about a firm’s customer contacts and price lists, computer algorithms, cost information, and business plans for future products and services, among others.[1] Despite the simplicity and straightforwardness of these examples, however, the opaque and intangible nature of trade secrets makes it challenging for investors to appropriately assess the risk profiles and fundamental values of companies that rely more heavily on secrecy.

As explained in the legal article “Bankruptcy in the Age of ‘Intangibility’: The Bankruptcies of Knowledge Companies” by Mathieu Kohmann (Harvard Law School), the difficulty in assessing the risk and value of trade secrets is even more alarming for creditors of financially distressed or defaulted firms. For one, trade secrets cannot generally be collateralized in debt contracts. And second, even if the secrets were pledgeable to lenders, they do not have active secondary markets, making their redeployability and liquidation in bankruptcy costly and largely infeasible. Prior theoretical work in the financial economics literature further suggests that firms composed primarily of intangible assets (e.g., trade secrets) sustain less debt financing because these types of assets decrease the value that can be captured by lenders in the event of default.[2]

Motivated by the increasing importance of secrecy for firms and governments, and the corresponding difficulties borne by creditors of these types of firms, in the article “Keeping Secrets from Creditors: The Uniform Trade Secrets Act and Financial Leverage”, CERF Research Associate Scott Guernsey, and research collaborators Kose John (New York University) and Lubomir Litov (University of Oklahoma), examine the impact of stronger trade secrets protection on firms’ capital structure decision-making.

To empirically analyze the relationship between trade secrets protection and financial leverage, Dr. Guernsey focuses his study on the adoption of the Uniform Trade Secrets Act (UTSA) by 46 U.S. states from 1980 to 2013. The UTSA, much like the recent “Trade Secrets (Enforcement, etc) Regulations 2018” in the UK, improves the protection of trade secrets by codifying existing common law, standardizing its legal definition, detailing what constitutes illegal misappropriation (e.g., bribery, theft, espionage), and clarifying the rights and remedies of victimized firms (e.g., injunctive relief, damages, reasonable royalties). Using the staggered adoptions of the UTSA by different states in different years, the authors find that firms located in states with enhanced trade secrets protection reduce (increase) their use of debt (equity) financing, compared to firms operating in the same U.S. Census region[3] and sharing similar industry trends but headquartered in states without the laws’ protection.
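To make the research design concrete, the sketch below runs a generic two-way fixed-effects (difference-in-differences) regression on simulated firm-year data; the variable names, sample and effect size are invented for illustration and do not reflect the paper's actual data or specification.

```python
# Generic staggered-adoption difference-in-differences sketch on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(3)
n_firms = 200
years = list(range(1980, 2014))

rows = []
for firm in range(n_firms):
    adoption_year = np.random.choice(years + [9999])   # 9999 = never treated
    for year in years:
        treated = int(year >= adoption_year)
        # Hypothetical data-generating process: the law lowers leverage by 3 points.
        leverage = 0.30 - 0.03 * treated + np.random.normal(0, 0.05)
        rows.append({"firm": firm, "year": year, "utsa": treated, "leverage": leverage})
panel = pd.DataFrame(rows)

# leverage_it = b * UTSA_it + firm fixed effects + year fixed effects + error
model = smf.ols("leverage ~ utsa + C(firm) + C(year)", data=panel).fit()
print(model.params["utsa"])   # estimated effect of the law on leverage
```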

Next, Dr. Guernsey explores a possible economic explanation for the reduction in debt ratios experienced by firms located in states with the UTSA. The authors find evidence for the “asset pledgeability hypothesis”, which conjectures that stronger trade secrets protection incentivizes firms to shift their reliance towards secrecy (and away from patents); this, in turn, increases intangibility and exacerbates contracting problems with creditors – since such assets are more difficult to redeploy and liquidate in secondary markets – ultimately leading to less borrowing. For instance, relative to industry rivals operating in similar geographical regions, firms located in UTSA-enacting states increase their investments in intangible assets and research and development (R&D), and experience decreases in the liquidation value of their assets and in their reliance on patents.

Overall, Dr. Guernsey’s findings provide important insights into how greater reliance on trade secrets affects corporate leverage decisions – indicating that companies with stronger protection choose to keep their secrets from creditors.

References mentioned in this post

Hall, B., C. Helmers, M. Rogers, and V. Sena. 2014. The choice between formal and informal intellectual property: A review. Journal of Economic Literature 52: 375-423.

Kohmann, M. 2017. Bankruptcy in the age of “intangibility”: The bankruptcies of knowledge companies. Unpublished Working Paper, Harvard Law School.

Long, M.S., and Malitz, I.B. 1985. Investment patterns and financial leverage. In: Corporate capital structures in the United States. University of Chicago Press, Illinois, pp. 325-352.

Shleifer, A., and Vishny, R.W. 1992. Liquidation values and debt capacity: A market equilibrium approach. Journal of Finance 47: 1343-1366.

Williamson, O.E. 1988. Corporate finance and corporate governance. Journal of Finance 43: 567-591.

[1] For instance, the Coca-Cola soft drink recipe, Google’s search algorithm, McDonald’s Big Mac special sauce, and the New York Times Bestseller List are among the most famous examples of trade secrets.

[2] For example, see, Long and Malitz (1985), Williamson (1988), and Shleifer and Vishny (1992).

[3] The U.S. Census Bureau groups states into four census regions: Northeast, Midwest, South, and West.


Dr. Adelphe Ekponon, CERF Research Associate, February 2019

Long-term Economic Outlook and Equity Prices

The earliest asset pricing models (the Capital Asset Pricing Model, or CAPM) postulate that the only risk needed to characterize a stock’s price is the contemporaneous correlation between the firm’s returns and the market portfolio’s returns. This implies that investors pay attention mainly to information about current economic conditions. Yet models that only incorporate this correlation risk tend to be unable to capture the dynamics of equity returns. The empirical asset pricing work of Fama and French (1992) demonstrates that the CAPM cannot explain the cross-section of average stock returns on portfolios sorted by size and book-to-market equity ratios.
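
As a minimal illustration of the correlation risk the CAPM prices, the sketch below estimates a stock's beta from a time-series regression of its excess returns on the market's excess returns (the inputs are simulated and purely hypothetical):

```python
# Minimal sketch of the CAPM logic: beta is cov(r_i, r_m) / var(r_m), and the
# CAPM-implied premium is beta times the market premium. Inputs are simulated.
import numpy as np

def capm_beta(stock_excess: np.ndarray, market_excess: np.ndarray) -> float:
    """Contemporaneous correlation risk, summarised as beta."""
    return np.cov(stock_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)

rng = np.random.default_rng(0)
r_m = rng.normal(0.005, 0.04, 600)          # monthly market excess returns (simulated)
r_i = 1.2 * r_m + rng.normal(0, 0.06, 600)  # a stock with a true beta of 1.2

beta = capm_beta(r_i, r_m)
print(beta, beta * r_m.mean())  # estimated beta and CAPM-implied expected excess return
```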

An important strand of the literature has developed models to improve the pricing performance of the CAPM via a consumption-based approach (CCAPM). The main innovation of CCAPM models lies in the introduction of macroeconomic conditions into asset pricing. According to these models, risk premia should be proportional to the consumption beta (the correlation between the firm's profit and consumption). However, this line of CCAPM models is known to produce very low levels of equity risk premium, less than 1% for reasonable levels of risk aversion, and these models are also rejected by several empirical tests.

Since then, two new features have been introduced into asset pricing. The first comes from the observation by Hamilton (1989) that shocks to US economic growth are not i.i.d., as growth rates may shift between periods of high and low levels. Second, a new class of utility functions introduced by Epstein and Zin (1989) makes it possible to isolate aversion to future economic uncertainty from aversion to the current correlation risk.

Bansal and Yaron (2004) and more recent papers have successfully developed consumption-based models in which the representative agent has Epstein-Zin preferences. These models pave the way to disentangling the impact of long-run versus current correlation risks in stock prices. Additionally, they generate reasonable levels of equity risk premium and are able to explain some key asset pricing phenomena. Here, long-run risk (LRR) captures the unforecastable and persistent nature of future economic conditions and has two components: the expected growth rate and its volatility.

Building on this strand of papers, Dorion, Ekponon, and Jeanneret (2019) propose a consumption-based structural approach, with endogenous default and debt policies, that allows long-run and correlation risks to be investigated both individually and in tandem. This is the first study to isolate and quantify, conditional on the state of the economy, the impact of LRR on equity prices.

They find an average risk premium of 1% in expansion against 6% in recession. The paper also predicts that long-run risk represents about three-quarters of this risk premium and that its impact is countercyclical, exceeding 90% in recession. To reduce the impact of LRR, managers lower the optimal amount of debt to issue and lower the default barrier. Despite these adjustments, LRR still governs the equity premium, leading to the above predictions.

Using U.S. stock prices, consumption growth (correlation risk), and expected economic growth rate and volatility (long-run risk) over the period from 1952 to 2016, the study confirms that LRR is priced in U.S. firms, particularly in bad times. The data show that the compensation for LRR represents around 70% of excess returns in a zero-investment portfolio that shorts stocks whose returns have a low correlation with expected growth rates (or a high correlation with expected growth volatility) and buys stocks whose returns have a high correlation with expected growth rates (or a low correlation with expected growth volatility). These results imply that LRR is a priced risk factor for equity.
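
The sketch below illustrates the zero-investment portfolio described above: rank stocks by the correlation of their returns with expected economic growth, buy the most exposed and short the least exposed. Data files, column names, and the quintile cut-offs are hypothetical simplifications of the study's construction.

```python
# Illustrative LRR spread portfolio. Data and column names are hypothetical.
import pandas as pd

returns = pd.read_csv("stock_returns.csv", index_col="date", parse_dates=True)      # one column per stock
growth = pd.read_csv("expected_growth.csv", index_col="date", parse_dates=True)["exp_growth"]

# Sensitivity to long-run risk, proxied here by each stock's correlation with expected growth.
lrr_exposure = returns.corrwith(growth)

high = lrr_exposure.nlargest(int(len(lrr_exposure) * 0.2)).index   # top quintile
low = lrr_exposure.nsmallest(int(len(lrr_exposure) * 0.2)).index   # bottom quintile

# Long high-exposure stocks, short low-exposure stocks; the spread proxies the LRR premium.
lrr_premium = returns[high].mean(axis=1) - returns[low].mean(axis=1)
print(lrr_premium.mean() * 12)  # annualised average spread
```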

Hence, investors are compensated for trading and holding stocks based on their sensitivity to future economic conditions. This result provides strong evidence that the long-run economic outlook is an important driver of the equity premium in the cross-section.

References mentioned in this post

Bansal, R. and Yaron, A. (2004), Risks for the long run: A potential resolution of asset pricing puzzles, Journal of Finance 59(4), 1481-1509.

Epstein, L. G. and Zin, S. E. (1989), Substitution, risk aversion, and the temporal behavior of consumption and asset returns: A theoretical framework, Econometrica 57(4), 937-69.

Fama, E. F. and French, K. R. (1992), The cross-section of expected stock returns, Journal of Finance 47(2), 427-65.

Hamilton, J. (1989), A new approach to the economic analysis of nonstationary time series and the business cycle, Econometrica 57(2), 357-84.


Dr. Hui Xu, CERF Research Associate, January 2019

Brexit: Investor Paranoia and the Financing Cost of Firms

Financial markets faced a bumpy ride in 2018. The Financial Times reports that global bond and equity markets shrank by $5tn last year. Two major risks disrupted markets during the past year: the US-China trade dispute and Brexit. The two risks are, however, essentially the same: both would create new frictions and impediments to existing trade frameworks and unsettle investors’ nerves.

The risks may affect firms’ financing costs for real reasons. Take a no-deal Brexit as an example. First, a firm’s revenue can decline due to friction in the product market, especially for British firms that depend heavily on European markets. Second, friction in the labor market may increase a firm’s production costs. Both will adversely affect a firm’s cash flow and, consequently, its financing costs. However, Brexit might also increase a firm’s financing costs simply because investors become paranoid and exaggerate the adverse impacts it brings.

Yet to what extent does investor paranoia affect a firm’s financing cost? The question is interesting for two reasons. First, although economists have long assumed investors to be rational, empirical evidence has challenged this view. Answering this question not only adds to the evidence on irrationality, but also quantifies the real impact of investor irrationality on firms. Second, irrationality drives valuations away from fundamentals and, de facto, creates the possibility of arbitrage.

A work in progress by Hui (Frank) Xu, a research associate at the Cambridge Endowment for Research in Finance (CERF), and his co-authors addresses the question by examining the yield difference between British corporate bonds maturing before and after 29 March 2019, the date on which Great Britain is set to leave the European Union. The idea is simple. Take a corporate bond that matures one day before 29 March and an otherwise identical bond that matures one day after: if the yield of the latter is significantly higher, the yield difference captures the impact of investor paranoia on the firm’s debt financing cost. Even if Great Britain crashes out of the EU without a deal on 29 March, this can hardly affect a firm’s fundamentals, such as revenue and cost, within one day. Therefore, the only explanation for such a yield difference lies in investor paranoia.
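
A minimal sketch of this matched-maturity comparison is shown below: the average yield of British corporate bonds maturing shortly after 29 March 2019 minus that of bonds maturing shortly before. The bond data file, column names, and the 90-day window are hypothetical choices, not the authors' exact design.

```python
# Sketch of the Brexit yield-gap comparison. Data and column names are hypothetical.
import pandas as pd

bonds = pd.read_csv("gb_corporate_bonds.csv", parse_dates=["maturity_date"])
brexit = pd.Timestamp("2019-03-29")
window = pd.Timedelta(days=90)  # keep bonds maturing within roughly 3 months of the date

near = bonds[(bonds["maturity_date"] - brexit).abs() <= window]
after = near[near["maturity_date"] > brexit]["yield_to_maturity"]
before = near[near["maturity_date"] <= brexit]["yield_to_maturity"]

# A positive difference is consistent with investors demanding extra compensation
# purely for holding debt across the Brexit date.
print(after.mean() - before.mean())
```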

Guided by this empirical design, the authors collect a small sample of British corporate bonds. The preliminary analysis does show that bonds maturing after the Brexit date have a higher yield than similar bonds maturing before the date, indicating a real financing cost for firms due to investor paranoia about Brexit risk. The authors are collecting more data, and a working paper with further results will be published soon.

Scott Guernsey, CERF Research Associate, December 2018

Reinvesting Market Power for the Betterment of Shareholders

On the supply side, highly competitive industries are generally characterized as having many firms and low barriers to entry. The first condition implies that existing firms cannot dictate or influence prices, and the second that new firms can enter markets at any time and at relatively low cost when incentivized to do so. Taken together, in equilibrium, this setting suggests that existing firms earn only enough revenue to remain competitive and cover their total costs of production.

Yet, in reality, most industries in the United States have become increasingly less competitive. For example, in the article “Are U.S. Industries Becoming More Concentrated?”, forthcoming in Review of Finance, Gustavo Grullon (Rice University), Yelena Larkin (York University), and Roni Michaely (University of Geneva), find that more than 75% of U.S. industries experienced an increase in concentration over the past two decades.[1] As such, these industries are now composed of fewer firms, are less at risk of entry by newcomers, and earn “economic rents” or revenues in excess of that which would be economically sufficient in a competitive environment. Given these new developments, it is important for shareholders to understand how a reduction in competition might affect their holdings.

 In the article “Product Market Competition and Long-Term Firm Value: Evidence from Reverse Engineering Laws”, CERF Research Associate Scott Guernsey examines the value and investment policy implications of decreased product market competition for equity holders in the U.S. manufacturing industry.

To empirically analyze the relationship between competition and firm outcomes, Dr. Guernsey centers his study on anti-plug-mold (APM) laws, which were adopted by 12 U.S. states from 1978 to 1987 and subsequently reversed by a U.S. Supreme Court ruling in 1989. APM laws directly influenced the intensity of competition in product markets by protecting firms headquartered in the law-adopting states from competitors copying their products using a specific type of reverse engineering (RE)[2] – the “direct molding process”.

The direct molding process enabled competitors to circumvent the R&D and manufacturing costs incurred by the originating firm by using an already finished product to create a mold which would then be used to produce duplicate items. For example, a boat manufacturer using this RE process would buy an existing boat on the open market, spray it with a mold forming substance (e.g., fiberglass), remove the original boat from the hardened substance, which would then become the mold used to produce replica boats. However, under the protection of APM laws, firms were given legal recourse to stop competitors in any U.S. state from using the direct molding process to compete with their products.

Using the staggered adoptions of APM laws by different states in different years, Dr. Guernsey finds that firms located in states with RE protection experienced increases in their value, when compared to firms operating in the same industry but located in states without the laws. Moreover, when the APM laws were later overturned by a U.S. Supreme Court ruling, which found the state laws in conflict with federal patent law, he finds all of the previous value gains subside.

Next, Dr. Guernsey explores a possible economic explanation for the increase in value experienced by firms in less competitive industries. He finds evidence for the “innovation incentives” hypothesis, which posits that the economic rents APM-protected firms earn from increased market power are allocated to investments in new and existing production technologies. For instance, relative to industry rivals, firms located in APM-enacting states increase their investments in R&D and organizational capital.

Overall, Dr. Guernsey shows a reduction in competition is value enhancing for a subset of shareholders in the manufacturing industry as it leads their firms to reinvest the spoils of market power back into the company.

References mentioned in this post

Grullon, G., Y. Larkin, and R. Michaely. 2018. Are US industries becoming more concentrated?. Review of Finance, Forthcoming.

Gutiérrez, G., and T. Philippon. 2017. Declining competition and investment in the US. Unpublished Working Paper, National Bureau of Economic Research.

Kahle, K. M., and R. M. Stulz. 2017. Is the US public corporation in trouble?. Journal of Economic Perspectives 31:67–88.

[1] Gutiérrez and Philippon (2017) and Kahle and Stulz (2017) also document evidence confirming the recent trend in rising U.S. industry concentration.

[2] The standard legal definition of reverse engineering in the U.S. is described as “starting with the known product and working backward to divine the process which aided in its development or manufacture.”

Adelphe Ekponon, CERF Research Associate, November 2018

Emerging Markets Economies Debt Is Growing... What to expect?

After the 2008 financial crisis, central banks implemented accommodative monetary policies with the objective of revitalizing economic activity. As a consequence, many countries increased their borrowing in dollar- and euro-denominated debt, leading to a rise in debt-to-GDP ratios around the world. For example, this ratio averaged about 82% in Europe by the end of 2017, compared to 60% before the crisis, according to Eurostat.

The prime concern, however, is currently on the Emerging Markets Economies (EMEs) side, at least for two reasons.

First, many emerging countries have increased their exposure to foreign debt (especially in hard currencies such as the dollar or euro). Their overall government debt as a percentage of GDP went from 41% to 51% between 2008 and 2017 (BIS Quarterly Review, September 2017). Over the same period, the government debt of EMEs doubled to reach $11.7 trillion, with foreign-currency debt also rising. The problem with foreign-currency debt is that the government cannot inflate it away, and difficulties in servicing it may be transmitted to the local-currency debt market.

Second, the US Federal Reserve and the European Central Bank are ending their accommodative monetary policies, which implies that interest rates, and therefore EMEs’ borrowing costs, will now be on the rise. Past experience shows that rising US interest rates in particular have been a trigger of many emerging-country debt crises. Before EME debt crises such as Latin America in 1980, Mexico in 1994 and Asia in 1997, interest rates in the US were rising after a period of remaining low.

Other factors, such as contagion or capital outflows, may worsen the situation even further.

In their paper “Macroeconomic Risk, Investor Preferences, and Sovereign Credit Spreads”, CCFin research associate Adelphe Ekponon and his co-authors explore the mechanism through which macroeconomic conditions, combined with global investors’ risk aversion, drive countries’ borrowing costs. According to this study, the link between economic conditions in the US and sovereign debt yields originates from the existence of a global business cycle, as countries tend, on average, to experience good or bad times around the same periods. They find that this global business cycle increases both the risk of default and the government’s unwillingness to repay. The other mechanism is that investors’ higher risk aversion amplifies these effects: sell-offs of risky assets are more pronounced in recessions, leading to a lower risk-free rate on average, to which governments optimally respond by issuing more debt.

It is likely that countries will discipline themselves in the coming months or years as borrowing costs surge… provided there is no sudden switch to a global economic downturn.

Pedro Saffi, CERF Fellow, November 2018

Predicting House Prices with Equity Lending Market Characteristics

Investors in financial markets must cope with a myriad of news arriving relentlessly every day. This information must be interpreted and used as efficiently as possible to update investment strategies. Many academics also spend their careers trying to identify variables (e.g., GDP growth, retail sales, unemployment) that can help forecast the behavior of financial market variables (e.g., stock returns, risk, and exchange rates). While less common, many articles show how financial market data can be used to predict the behavior of variables in the real economy.[1]

In the article “The Big Short: Short Selling Activity and Predictability in House Prices”, forthcoming at Real Estate Economics, CERF Fellow Pedro Saffi and research collaborator Carles Vergara-Alert (IESE Business School) look at how U.S. house prices can be better understood using a previously unexplored set of financial variables.

Investors can speculate on a decrease in prices using a strategy known as “short selling”. This involves borrowing the security from another investor, selling it at the current price, and repurchasing it in the future – hopefully at a lower price, to make a profit. The market for borrowing shares is known as the equity lending market, a trillion-dollar part of the financial system that allows investors to borrow and lend the securities needed for short selling. While investors cannot bet on house price decreases by shorting houses directly, they can use a wide range of financial securities to do so. Dr Saffi uses data on short selling activity in a specific type of security whose returns are highly related to house prices – Real Estate Investment Trusts (REITs) – which are essentially portfolios of underlying real estate properties.

The authors’ main hypothesis is that REITs are strongly correlated with the fundamentals of housing markets. Thus, an increase in REIT short selling activity can forecast decreases in housing prices, which is exactly what the authors find in the data. Furthermore, REITs invested in properties located in areas that experienced a housing boom during the expansion cycle of the 2000s are more sensitive to increases in short selling activity than REITs invested in properties located in areas that did not. The study divides the US property market into four regions – Northeast, Midwest, South and West – and classifies each month in each region as a “boom”, “average” or “downturn” period. Although during boom and average periods there is little correlation between REIT short selling and the subsequent month’s housing prices, “the correlation is significantly positive during housing market downturns.”
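
A minimal sketch of this predictive relationship appears below: next month's regional house price change regressed on this month's REIT short-selling activity, with a different slope allowed in downturn months. The data file, column names, and the simple interaction specification are hypothetical simplifications of the paper's tests.

```python
# Sketch of a predictive regression of house prices on REIT short-selling activity.
# Hypothetical panel: one row per region-month.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("region_month_panel.csv")  # columns: region, month, on_loan, hpi_return, downturn

df = df.sort_values(["region", "month"])
df["next_hpi_return"] = df.groupby("region")["hpi_return"].shift(-1)

# The interaction term lets the short-selling signal matter differently in downturn months.
model = smf.ols("next_hpi_return ~ on_loan * downturn", data=df.dropna()).fit()
print(model.params)
```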

Using these findings, Dr. Saffi constructs a hedging strategy based on short selling intensity to reduce the downside risk of housing price decreases, showing that investors can limit their losses using REITs’ equity lending data. The figure below (Figure 4 in the article) shows the cumulative returns of the trading strategy (based on the On Loan variable as a proxy for short selling activity) relative to the performance of FHFA Housing Price index returns from July 2007 through July 2013. The results show the usefulness of the hedging strategy in limiting investor losses during the 2008 financial crisis in the regions that experienced large house price run-ups in the years prior to 2007, i.e., the Northeast and West. Its performance is satisfactory for the South and absent for the Midwest, where the house price run-up over the same period was smaller. Panel B shows similar results when the performance is examined using diversified REITs to hedge against price decreases in the aggregate FHFA index.

Overall, short selling can be a useful tool for market participants to hedge against future price decreases. Regulators can track measures from the equity lending market to improve forecasts of house prices and implement policies to prevent real estate bubbles. Furthermore, imposing short selling constraints on stocks like REITs—which invest in assets subject to high transaction costs—matters for price efficiency and the dissemination of information.

References mentioned in this post

Ang, A., G. Bekaert and M. Wei. 2007. Do Macro Variables, Asset markets, or Surveys Forecast Inflation Better? Journal of Monetary Economics 54: 1163–1212.

Bailey, W. and K.C. Chan. 1993. Macroeconomic Influences and the Variability of the Commodity Futures Basis. Journal of Finance 48: 555–573.

Koijen, R.S., O. Van Hemert and S. Van Nieuwerburgh. 2009. Mortgage Timing. Journal of Financial Economics 93: 292–324.

Liew, J. and M. Vassalou. 2000. Can Book-to-Market, Size and Momentum be Risk Factors that Predict Economic Growth? Journal of Financial Economics 57: 221–245.

[1] For example, Liew and Vassalou (2000), Ang, Bekaert and Wei (2007), Koijen, Van Hemert and Van Nieuwerburgh (2009) and Bailey and Chan (1993) use financial market data to forecast economic growth, inflation, mortgage choices and commodities, respectively.


Scott B. Guernsey, CERF Research Associate, October 2018

Guaranteed Bonuses in High Finance: To Reward or Retain?

Public distaste for high finance reached an all-time high in March of 2009, as the American International Group (AIG) insurance corporation announced it had paid out roughly $165 million in bonuses to employees of its London-based financial services division (AIG Financial Products). Only months earlier, the same company had received roughly $170 million in U.S. taxpayer-funded bailout money and suffered a quarterly loss of $61.7 billion – the largest corporate loss on record. The then Chairman of the U.S. House Financial Services Committee, Barney Frank, remarked that the payment of these bonuses was “rewarding incompetence”.

AIG countered, arguing that the bonuses had been pledged well before the start of the financial crisis and that it was legally committed to make good on the promised compensation. Additionally, Edward Liddy, who had been appointed chairman and CEO of AIG by the U.S. government, said the company could not “attract and retain” highly skilled labor if they believed “their compensation was subject to continued…adjustment by the U.S. Treasury.” And AIG wasn’t the only financial firm paying out large bonuses in 2009: at least nine other large financial institutions, which had similarly received U.S. government assistance, distributed bonuses in excess of $1 million each to nearly 5,000 of their bankers and traders.

But why would these financial corporations risk their reputational capital to pay out bonuses? And why not condition the size and timing of bonus payments on circumstances like those experienced during the 2008 financial crisis, rather than simply guaranteeing large bonuses a year or more in advance?

A recent research article presented at this year’s Cambridge Corporate Finance Theory Symposium by Assistant Professor Brian Waters (University of Colorado Boulder) offers some interesting insight on these questions. To begin, the paper highlights three unique features of bonuses in the financial industry. First, unlike in most other industries, bonus payments to high finance professionals (e.g., traders, bankers, analysts) comprise a large share of their total compensation. In fact, as described in the paper, more than 35% of a first-year analyst’s total pay is in the form of a bonus. This is further evidenced by the hefty bonuses of $1 million or more dispensed to bankers, traders and executives by large financial institutions (AIG included) in 2009.

Second, it seems as if bonus payments are largely guaranteed. For example, according to the paper, third-year analysts expect to receive a bonus of at least $75,000, with the possibility of earning a higher $95,000 bonus only if they perform exceptionally well. Moreover, as summarized above, AIG defended the payment of its bonuses in March 2009 by arguing that they had been committed in advance and that it was obligated by law to fulfil this pledge. Third, observation of practice suggests financial institutions coordinate the timing of their bonuses by geography. For instance, in Europe almost all big banks determine bonuses in late February and early March, while U.S. banks do so in January. Again, this is consistent with AIG, although an American insurer, distributing bonuses to its London-based Financial Products division in March.

Considering these three stylized facts, Professor Waters (and co-author, Professor Edward D. Van Wesep) construct a mathematical model to explain why bonuses in high finance are both large and guaranteed. The general set-up of the model flows in the following manner. First, the authors assume that financial firms might find it difficult to recruit employees during certain months of the year (e.g., perhaps it is easy to replace employees in March, but difficult to do so in October). Second, in response to this periodic scarcity of labor, firms design contracts whereby large bonuses are paid during months with an abundance of talent (e.g., March), but condition the contracts such that employees must remain with the company until bonuses are paid to be eligible for this form of compensation.

Third, since financial firms operating in the same geography face similar labor market conditions, many of the firms will respond similarly, paying bonuses at the same time. Fourth, because employees are incentivized to remain with the firm until bonuses are paid, they will delay quitting until this point in time (i.e., this is when most employees leave their employers). Therefore, finally, this suggests labor markets will be flooded with talent after bonuses are paid (e.g., March), but will be relatively shallow in other months (e.g., October). Hence, arriving back at the initial step in the model and the game repeats, providing an intuitive explanation for why large and guaranteed bonuses are observed in high finance, irrespective of macroeconomic conditions and own firm performance.

Yuan Li, CERF Research Associate, July 2018

How (in)efficient is the stock market?

In 2013, the Nobel committee split the economics prize between Eugene Fama, the pioneer of the efficient market hypothesis (EMH), and Robert Shiller, a critic of the EMH. This decision indicated that the Nobel committee agreed with both Fama and Shiller. Was the committee right? The answer is yes, according to my findings from a recent research project.

Fama explains the EMH as “the simple statement that security prices fully reflect all available information”. The empirical implication of this hypothesis is that, apart from beta (the measure of a firm’s systematic risk), no other publicly available information can be used to predict stock returns. However, the finance literature has found that many easily available firm characteristics, such as market capitalisation and the book-to-market ratio, are related to future stock returns. These are the so-called anomalies. Does the discovery of anomalies reject the EMH? Not necessarily, because no one knows what a firm’s beta should be, and those firm characteristics may simply be proxies for beta. This is known as the joint hypothesis problem: we can say nothing about the EMH unless we know the correct asset pricing model. Sadly, we do not know what the correct asset pricing model is.

In this project, I get around the joint hypothesis problem. I assume that a firm’s stock return is composed of two parts: a risk-induced return and a mispricing-induced return. Because of the joint hypothesis problem, we do not know the risk-induced return. However, we can estimate the mispricing-induced return (if there is any) using the forecasts issued by financial analysts. Analysts’ earnings forecasts represent investors’ expectations. More importantly, we know a firm’s actual earnings, and hence we can calculate the errors in analysts’ forecasts, which represent investors’ errors-in-expectations. We can then estimate the returns generated by investors’ errors-in-expectations, that is, the mispricing-induced return. If the market is perfectly efficient, the mispricing-induced return should be zero. I calculate the fraction of an anomaly explained by mispricing as the ratio of the mispricing-induced return to the observed return; the fraction explained by risk is one minus this ratio.
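
The sketch below is an illustrative, heavily simplified version of this decomposition: the part of an anomaly's return spread traced to analysts' forecast errors is treated as mispricing-induced, and the remainder is attributed to risk. The data file, column names, and the single-regression estimation step are hypothetical and not the project's exact method.

```python
# Illustrative decomposition of an anomaly return into mispricing- and risk-induced parts.
# Hypothetical data: one row per firm-year with returns, forecasts, earnings, anomaly decile.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_year_anomaly.csv")

# Errors-in-expectations: realised earnings minus the consensus forecast, scaled by price.
df["forecast_error"] = (df["actual_eps"] - df["consensus_forecast_eps"]) / df["price"]

# Return generated by errors-in-expectations (the mispricing-induced return).
fit = smf.ols("ret ~ forecast_error", data=df).fit()
df["mispricing_return"] = fit.predict(df)

spread = df.groupby("anomaly_decile")[["ret", "mispricing_return"]].mean()
observed = spread["ret"].iloc[-1] - spread["ret"].iloc[0]
mispriced = spread["mispricing_return"].iloc[-1] - spread["mispricing_return"].iloc[0]
print(mispriced / observed)  # fraction of the anomaly explained by mispricing
```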

I examine 195 anomalies. On average, the fraction explained by mispricing is 17.51%, suggesting that the major fraction of anomalies is not anomalous at all. This result may be disappointing to EMH critics, who seem to think that the stock market is extremely inefficient and that it is very easy to profit from anomalies. However, the good news for EMH critics is that the fraction explained by mispricing varies widely across anomalies. In particular, momentum anomalies are almost completely explained by mispricing. Hence, trading on momentum anomalies is likely to generate abnormal returns. In contrast, the high returns from value strategies are almost entirely compensation for bearing high risk.

Dr. Hui Xu, CERF Research Associate, June 2018.

Contingent Convertibles: Do they do what they are supposed to do?

When Lehman Brothers was in deep water in September 2008, the U.S. federal government and the Federal Reserve decided not to bail it out, and several days later the company filed for Chapter 11 bankruptcy protection. Global markets immediately plummeted after the bankruptcy filing, and both the government and the central bank were accused of exacerbating investors’ panic with that decision. However, had they bailed the firm out, they would have been accused for a different reason: using taxpayers’ money to rescue a greedy and aggressive Wall Street giant.

The example illustrates the controversy and dilemma of bailouts faced by policymakers. Since the financial crisis, one priority for regulators has been to design a bail-in, an internal way to recapitalize distressed financial institutions and strengthen their balance sheets, which they hope will become a substitute for the bailout. One way to deliver a swift and seamless bail-in is through the conversion of contingent convertible capital securities (CoCos).

CoCos are bonds issued by banks that either convert to new equity shares or experience a principal write-down following a triggering event. Because Basel III allows banks to meet part of the regulatory capital requirements with CoCo instruments, banks around the world issued a total of $521 billion in CoCos through 732 different issues between Jan 2009 and Dec 2015.

That being said, CoCos are still at an early stage in the sense that there is no consensus on how to design one. Moreover, little research has studied the response of market participants. Studying this response can shed light on the optimal CoCo design.

A recent research project by CERF research associate Hui (Frank) Xu studies the response of incumbent equity holders when CoCos are in place. It considers two types of CoCos: those that convert into common shares when the stock price falls below a pre-set target, and those that convert when the market capital ratio falls below a pre-set threshold. Surprisingly, the research shows that if conversion dilutes incumbent equity holders’ security value, they have a strong incentive to issue a large amount of debt just before the pre-set trigger point and thereby accelerate the trigger of CoCo conversion. The intuition is that, since their equity value is diluted at conversion, they issue a large amount of debt and distribute the proceeds via dividends or share repurchases just before conversion, leaving the new equity holders and debt holders with much lower security values. Thus, the incumbent equity holders collect a one-off large payout at the cost of the new equity holders and debt holders.
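
A toy illustration of the two trigger rules and the dilution mechanism at the heart of this argument is given below. All numbers, the conversion terms, and the function names are hypothetical and purely for exposition; they are not taken from the research project.

```python
# Toy sketch: a CoCo converts into new shares when either the stock price or the
# market capital ratio falls below a pre-set threshold, diluting incumbent shareholders.
def coco_triggered(stock_price, price_trigger, market_equity, assets, capital_ratio_trigger):
    """Return True if either trigger condition is breached."""
    return stock_price < price_trigger or (market_equity / assets) < capital_ratio_trigger

def dilution_at_conversion(shares_outstanding, coco_face_value, conversion_price):
    """Fraction of the firm owned by new (CoCo) shareholders after conversion."""
    new_shares = coco_face_value / conversion_price
    return new_shares / (shares_outstanding + new_shares)

# Example: a price trigger of 5 and a 6% market capital ratio trigger (hypothetical numbers).
if coco_triggered(stock_price=4.5, price_trigger=5.0,
                  market_equity=45.0, assets=1000.0, capital_ratio_trigger=0.06):
    print(dilution_at_conversion(shares_outstanding=10.0, coco_face_value=20.0,
                                 conversion_price=4.0))  # incumbents lose this fraction of the firm
```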

This is certainly contrary to regulators’ expectations. Regulators expect equity holders to improve corporate management, risk-taking strategies and financial policies under the threat of CoCo conversion; equity holders benefiting themselves by destroying the firm’s value under that threat is the last thing they want to see. The research therefore highlights the complexity of contingent convertible design, and the importance of taking market participants’ responses into account when regulators propose a CoCo design.

Dr. Alex Tse, CERF Research Associate, May 2018.

Embrace the randomness

Excerpt from the CBS sitcom “The Big Bang Theory”, S05 E04:

Leonard: Are we ready to order?
Sheldon: One moment. I’m conducting an experiment.
Howard: With Dungeons and Dragons dice?
Sheldon: Yes. From here on in, I’ve decided to make all trivial decisions with a throw of the dice, thus freeing up my mind to do what it does best, enlighten and amaze. Page 14, item seven.
Howard: So, what’s for dinner?
Sheldon: A side of corn succotash. Interesting……

It sounds insane to let a die decide your fate. But we all know that our beloved physicist Dr Sheldon Cooper is not crazy (his mother had him checked!), so there must be some wisdom behind it. To a mainstream economist, adopting randomisation in a decision task seems to violate a fundamental economic principle: more is better. By surrendering to Tyche, the goddess of chance, we are essentially forgoing the valuable option to make a choice.

A well-known situation where randomised strategies are relevant is the game-theoretic setup in which strategic interactions among players matter. A right-footed striker has a better chance of scoring a goal if he kicks left. A pure strategy of kicking left may not work out well, though, because a goalkeeper who understands the striker’s edge will simply dive left. The optimal decisions of the two players thus always involve mixing between kicking or diving left, right and centre. A more puzzling phenomenon, however, is that individuals may exhibit a preference for deliberate randomisation even when there is no strategic motive. An example is a recent experimental study (Agranov and Ortoleva, Journal of Political Economy, 2017), which documents that a sizable fraction of lab participants are willing to pay a fee to flip a virtual coin to determine which type of lottery will be assigned to them.

While the psychology literature offers a number of explanations (such as omission bias) to justify randomised strategies, how can we understand deliberate randomisation from an economic perspective? The golden paradigm of decision making under risk is the expected utility criterion, where a prospect is evaluated by the linear probability-weighted average of the utility values associated with each outcome. There is no incentive to randomise the decision, as the linear expectation rule guides an agent to pick the highest-value option with 100% chance. However, when the agent’s preference deviates from linear expectation, a stochastic mixture of prospects can be strictly better than the static decision of sticking to the highest-value prospect (Henderson, Hobson and Tse, Journal of Economic Theory, 2017). The rank-dependent utility model and prospect theory, two notable non-expected utility frameworks commonly used in behavioural economics, are settings under which randomised strategies are internally consistent with the agent’s preference structure.
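
As a minimal sketch of one of the non-expected-utility frameworks mentioned above, the code below evaluates a lottery under rank-dependent utility: outcomes are ranked and decision weights are differences of a probability weighting function applied to cumulative probabilities. The weighting function is the Tversky-Kahneman (1992) form; the lottery, the square-root utility, and the parameter value are hypothetical choices for illustration.

```python
# Rank-dependent utility of a simple lottery with an inverse-S probability weighting function.
import numpy as np

def tk_weight(p, gamma=0.65):
    """Tversky-Kahneman weighting: overweights small tail probabilities."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rdu_value(outcomes, probs, utility=np.sqrt, gamma=0.65):
    """Rank-dependent utility of a lottery over non-negative outcomes."""
    order = np.argsort(outcomes)[::-1]                  # rank outcomes from best to worst
    x, p = np.asarray(outcomes)[order], np.asarray(probs)[order]
    cum = np.cumsum(p)                                  # P(outcome at least this good)
    weights = np.diff(np.concatenate(([0.0], tk_weight(cum, gamma))))
    return float(np.sum(weights * utility(x)))

# A lottery-like payoff: a small chance of a large gain.
print(rdu_value(outcomes=[100.0, 1.0], probs=[0.05, 0.95]))
```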

Incorporating non-linear probability weighting and randomised strategies leads to many potential economic implications. For example, consider a dynamic stopping task in which an agent decides at each time point whether to sell an asset. In a classical expected utility setup, there is no incentive for the agent to randomise between stopping and continuing. This implies the optimal trading strategy must be a threshold rule, where a sale occurs only when the asset price first breaches a certain upper or lower level. In reality, investors do not necessarily adopt this kind of threshold strategy, even in a well-controlled laboratory environment. For example, the asset price may have visited the same level multiple times before a participant decides to sell the asset (Strack and Viefers, SSRN working paper, 2014). While expected utility theory struggles to explain trading rules that go beyond a simple “stop-loss, stop-gain” order, non-linear expectation and randomisation provide a modelling foundation for the more sophisticated investment strategies adopted by individuals in real life.

Dr. Yuan Li, CERF Research Associate, April 2018

Are analysts whose forecast revisions correlate less with prior stock price changes better information producers and monitors?

Financial analysts are important information intermediaries in the capital markets because they engage in private information search, perform prospective analyses aimed at forecasting firms’ future earnings and cash flows, and conduct retrospective analyses that interpret past events (Beaver [1998]). The information produced by analysts is disseminated to capital market participants via analysts’ research outputs, which mainly include earnings forecasts and stock recommendations. Prior academic studies suggest that the main role of an analyst is to supply private information that is useful to parties such as investors and managers. Therefore, an analyst’s ability to produce relevant private information that is not already known to other parties is an important determinant of the analyst’s value to the capital markets. Based on this notion, CERF research associate Yuan Li and her co-authors propose a simple and effective measure of analyst ability.

Our measure of analyst ability is calculated as one minus the correlation coefficient between the analyst’s forecast revisions and the stock price changes between successive forecasts. Since prior stock price changes capture the incorporation of information that is already known to investors, any information in an analyst’s forecast revisions that is not correlated with prior stock price changes reflects the analyst’s private information. In other words, our measure captures the ability of an analyst to produce information that is not already incorporated into stock prices.
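
A minimal sketch of how such a measure could be computed is shown below. The data file and column names are hypothetical, and the pairing of each revision with the price change since the previous forecast is a simplification of the authors' construction.

```python
# Sketch of the ability measure: 1 minus the correlation between an analyst's forecast
# revisions and the stock price changes preceding each revision.
import pandas as pd

df = pd.read_csv("analyst_forecasts.csv")
# Expected columns: analyst_id, firm_id, forecast_date, forecast_eps, prior_price_change

df = df.sort_values(["analyst_id", "firm_id", "forecast_date"])
df["forecast_revision"] = df.groupby(["analyst_id", "firm_id"])["forecast_eps"].diff()

ability = (
    df.dropna(subset=["forecast_revision"])
      .groupby("analyst_id")
      .apply(lambda g: 1 - g["forecast_revision"].corr(g["prior_price_change"]))
)
print(ability.sort_values(ascending=False).head())  # analysts producing the most private information
```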

We find that the stock price impact of forecast revisions issued by superior analysts identified by our measure is greater. We also find that firms covered by more superior analysts are less likely to engage in earnings management. These findings suggest that superior analysts identified by our measure are better information producers and monitors.

Dr. Jisok Kang, CERF Research Associate, March 2018

The Granular Effect of Stock Market Concentration on Market Portfolio Volatility

Ever since the Capital Asset Pricing Model (CAPM) was first introduced in 1964, a well-accepted notion in modern portfolio theory has been that the market portfolio contains only market risk, or systematic risk, because firm-specific (non-systematic) risk is diversified away.

Meanwhile, Xavier Gabaix, in a paper published in Econometrica in 2011 titled “The Granular Origins of Aggregate Fluctuations”, argues that idiosyncratic firm-specific shocks to large firms in an economy can explain a large portion of the variation in macroeconomic movements if the firm size distribution is fat-tailed. His argument implies that firm-specific shocks to large firms are granular in nature and may not be easily diversified away. He shows empirically that idiosyncratic movements by the largest 100 firms in the U.S. can explain roughly one third of the variation in the country’s GDP growth, a phenomenon he dubs “the granular effect”.

Jisok Kang, a CERF research associate, shows in a recent research paper that stock market concentration, the degree to which the largest firms dominate the stock market, increases the volatility of the market portfolio. This finding implies that the idiosyncratic, firm-specific risk of large firms is granular in nature and is not diversified away in the market portfolio. The finding is robust whether market portfolio volatility is defined using a value-weighted or an equal-weighted index.
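
The sketch below computes illustrative concentration measures of the kind used in this line of work: a Herfindahl-style index of market-capitalisation shares and the combined share of the largest firms. Whether either matches the paper's exact definition is an assumption, and the data file and column names are hypothetical.

```python
# Illustrative stock market concentration measures by year.
import pandas as pd

caps = pd.read_csv("market_caps.csv")  # columns: year, firm_id, market_cap

def concentration(group):
    shares = group["market_cap"] / group["market_cap"].sum()
    return pd.Series({
        "herfindahl": (shares**2).sum(),           # Herfindahl index of market-cap shares
        "top10_share": shares.nlargest(10).sum(),  # combined share of the 10 largest firms
    })

by_year = caps.groupby("year").apply(concentration)
print(by_year.tail())  # these series could then be related to market portfolio volatility
```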

In addition, stock market concentration causes other stock prices to co-move, which increases market portfolio volatility further. The incremental volatility caused by stock market concentration is bad volatility, in that the effect is more severe when the market portfolio return is negative.

Dr. Hui (Frank) Xu, February 2018

What caused the leverage cycle in the run-up to the 2008 financial crisis?

The 2008 financial crisis has had a far-reaching impact on financial markets and the real economy. Although academic researchers and public policymakers have reached a consensus that the financial crisis was rooted in a leverage cycle, they continue to debate the causes that led to that cycle. Initially, it was widely accepted that financial innovation and deregulation exacerbated agency problems, incentivizing financial intermediaries to issue consumer credit, including mortgage debt, without proper screening and monitoring (the “credit supply” channel). More recently, however, a growing empirical literature has proposed a “distorted beliefs” view of the crisis, demonstrating that investor over-optimism may have led to the rapid expansion of the credit market and increased asset prices in the run-up to the crisis (the “credit demand” channel). The financial crisis, like any other major economic event, probably has more than one cause, and both the credit demand and credit supply channels contributed to it. Indeed, the two views are not entirely mutually exclusive, and may reinforce each other.

However, one might still ask to what extent distorted beliefs caused the crisis. This question is interesting for both theoretical and practical reasons. First, economists have long known that distorted beliefs have important effects on the prices of financial assets, e.g., the risk-free rate and stock prices, but they still lack a clear understanding of why distorted beliefs could cause massive default in 2008. Second, understanding what caused the financial crisis helps in designing effective policy changes. If it is largely an agency problem, policies to prevent similar crises would include requiring financial intermediaries to “put more skin in the game” and to enforce stricter screening and monitoring. If it is primarily a problem of distorted expectations and beliefs, preventative measures would include implementing macroprudential, financial-stability policies and improving information transparency.

One way to quantify the role of distorted beliefs in the financial crisis is to construct a dynamic general equilibrium model featuring credit use and risk-taking by households based purely on distorted beliefs, effectively shutting down the agency-problem channel, and then to examine the model’s explanatory power by comparing the output of the calibrated model with real data. This is a research project by CERF research associate Hui (Frank) Xu.

The main findings of the paper support the distorted beliefs view of the financial crisis. The distorted beliefs view can explain the run-up in household leverage ahead of the crisis. Quantitatively, distorted beliefs can account for more than half of the variation in the real interest rate during the crisis period.

Dr. Alex Tse, CERF Research Associate, February 2018

Transaction costs, consumption and investment

The theoretical modelling of individuals’ consumption and investment behaviour is an important micro-foundation of asset pricing. Although this is a classical problem in the portfolio selection literature, analytical progress is very limited once the model is extended to a more realistic economy featuring transaction costs. The key obstacle thwarting our understanding of the frictional setup is the highly non-linear differential equation associated with the problem.

Using a judicious transformation scheme, CERF research associate Alex Tse and his collaborators David Hobson and Yeqi Zhu show that the underlying equation can be greatly simplified to a first order system. Investigation of the optimal strategies can then be facilitated by a graphical representation involving a simple quadratic function encoding the underlying economic parameters.

The approach offers a powerful tool to unlock a rich set of economic properties behind the problem. Under what economic conditions can we expect a well-defined trading strategy? How does the change in the market parameters affect the purchase and sale decisions of an individual? What are the quantitative impacts of transaction costs on the critical portfolio weights? While some features are known in the literature, there are also a number of surprising phenomena that have not been formally studied to date. For example, the transaction cost for purchase can be irrelevant to the upper boundary of the target portfolio weight in certain economic configurations.

In a follow-up project, the methodology is further extended to a market consisting of a liquid asset and an illiquid asset, where transaction costs are payable on the latter. The research findings could serve as useful building blocks towards a more general theory of investment and asset pricing.

Dr. Yuan Li, CERF Research Associate, December 2017

Book-to-market ratio and inflexibility: The effect of unrecorded R&D capital

R&D investment has been playing an increasingly important role in the economy. However, accounting standards require firms to expense R&D immediately as it is incurred, so R&D investment is not capitalized on the balance sheet. Could the unrecorded R&D capital affect our assessment of a firm’s risk? The answer is yes, according to the findings of a research project conducted by CERF research associate Yuan Li.

Finance theory suggests that a firm’s risk is negatively related to its flexibility to adjust capital investment. The more flexibility a firm has in this regard, the less its cash flows are affected by economy-wide conditions, and the lower its risk. Flexibility is hard to observe directly, but it can be inferred from the book-to-market ratio (BM). High-BM firms are generally burdened with more unproductive capital and hence are less flexible to downsize in bad times. Thus, according to the theory, high-BM firms are riskier than low-BM firms, especially in bad times.

However, results from this project suggest that the above theory should not be followed blindly, because the book-to-market ratio calculated from balance sheet data increasingly misrepresents inflexibility and risk. The reason is that book value is understated by the unrecorded R&D capital, which is even less flexible to adjust than physical capital. The results also suggest that considering the book-to-market ratio and R&D capital together is a better way to evaluate a firm’s inflexibility and risk.
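
A minimal sketch of the adjustment implied by this argument is given below: capitalise past R&D spending with a perpetual-inventory rule and add the resulting R&D capital to book equity before computing book-to-market. The 20% amortisation rate, data file, and column names are illustrative assumptions rather than the study's exact choices.

```python
# Capitalising R&D and recomputing an adjusted book-to-market ratio.
import pandas as pd

df = pd.read_csv("firm_year_accounting.csv")  # columns: firm_id, year, rd_expense, book_equity, market_equity
df = df.sort_values(["firm_id", "year"])

def rd_capital(rd_series, amortisation=0.20):
    """Perpetual-inventory R&D stock: K_t = (1 - amortisation) * K_{t-1} + R&D_t."""
    stock, out = 0.0, []
    for rd in rd_series.fillna(0.0):
        stock = (1 - amortisation) * stock + rd
        out.append(stock)
    return pd.Series(out, index=rd_series.index)

df["rd_capital"] = df.groupby("firm_id")["rd_expense"].transform(rd_capital)
df["adjusted_bm"] = (df["book_equity"] + df["rd_capital"]) / df["market_equity"]
```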

Dr. Edoardo Gallo, CERF Fellow, November 2017

Financial networks and systemic collapse

In the aftermath of the 2008 crisis, Haldane – the Chief Economist at the Bank of England – stated that “the regulation of the network is needed to ensure appropriate control of large, interconnected institutions […] the financial network should be structured so as to reduce the chances of future systemic collapse”.

A project by CERF Fellow Edoardo Gallo and his research collaborators Syngjoo Choi (Seoul National University) and Brian Wallace (UCL) investigates what types of network structure cause financial contagion. In a lab experiment, participants can buy or sell assets in an artificial market, knowing that one participant has been hit by a monetary shock and that it may spill over to others because all participants are connected by a network of liabilities. Each participant faces a trade-off between selling to raise liquidity in the short term to avoid bankruptcy and holding on to assets to realize a return in the long term. The researchers vary the network of liabilities and the size of the shocks.

The results show that contagion is particularly prevalent in core-periphery networks, formed by a small number of highly connected participants – the core – with the remaining participants at the sparsely connected periphery. The dynamics of contagion involve sharp falls in asset prices because all participants try to sell to raise liquidity, and this leads to systemic collapse even for moderately sized shocks. The researchers find evidence that a participant’s ability to comprehend network-driven risk is predictive of how likely they are to go bankrupt.

Core-periphery networks are ubiquitous in financial markets, and the results of this project suggest they may be particularly susceptible to systemic collapse.

The paper is available here.


Dr. Alex S.L. Tse, CERF Research Associate, September 2017

Probability weighting and stock trading behaviours

Humans are far from being perfect decision-making machines, especially in the face of uncertainty. One prevalent phenomenon is that individuals tend to overweight the probabilities associated with extreme events. Examples include lottery punters’ optimism about winning a jackpot and air passengers’ anxiety about plane crashes. In the context of finance, what are the implications of such psychological bias for investment decisions?

CCFin research associate Alex Tse and his collaborators Vicky Henderson and David Hobson investigate the effect of probability weighting on stock trading behaviour through a theoretical model of asset sale. They find that agents with probability weighting adopt trading strategies in the form of stop-loss but not gain-exit orders: on the one hand, probability overweighting of the worst scenario encourages investors to offload a losing stock; on the other, probability magnification of the best outcome encourages investors to stay invested in a rally. This provides a potential justification for the popular use of stop-loss orders among retail investors.

Probability weighting is also useful for explaining the “disposition effect”, a well-documented financial anomaly whereby investors sell winning stocks much more often than losing stocks. Existing models typically generate a very extreme disposition effect. With the inclusion of probability weighting, however, investors are more incentivised to hold a winning stock relative to a losing stock, as they find a lottery-like payoff with positive skewness attractive. This enables the model to deliver a level of the disposition effect much closer to what the empirical literature suggests.


Dr. Yuan Li, CERF Research Associate, July 2017

In his best-selling book Thinking, Fast and Slow, Nobel Memorial Prize in Economics laureate Daniel Kahneman describes anchoring as “one of the most reliable and robust results of experimental psychology”. Using data from real financial markets, CERF research associate Yuan Li and her research collaborators Thomas George and Chuan-Yang Hwang find evidence suggesting that anchoring impedes investors’ interpretation of earnings news.

Anchoring is the tendency for individuals to base their forecasts of an unknown quantity upon a salient statistic (the anchor) that might have nothing to do with the quantity being forecasted. The classic example is an experiment in which individuals observe the generation of a random number, after which they are asked to estimate the percentage of African nations in the UN as an increment to the random number. The estimates are higher (lower) for individuals who observe higher (lower) random numbers. This random number is the anchor in this experiment.

In real financial markets, investors anchor on the 52-week high price (52WH), which is often featured on financial websites and in the financial press. If the stock price prior to a positive (negative) earnings announcement is already close to (far from) the 52WH, investors think the positive (negative) news has already been incorporated into the price, and hence are reluctant to bid the price higher (lower). In other words, investors behave as if future price levels are constrained not to deviate too far from the 52WH.
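
A minimal sketch of the anchor variable discussed above is given below: how close the pre-announcement price sits to its 52-week high. The data file, column names, and the 252-trading-day window are hypothetical illustrations, not the authors' exact construction.

```python
# Nearness to the 52-week high as an anchoring measure.
import pandas as pd

prices = pd.read_csv("daily_prices.csv", parse_dates=["date"])  # columns: firm_id, date, close
prices = prices.sort_values(["firm_id", "date"])

# Rolling 52-week (roughly 252 trading day) high for each firm.
prices["high_52w"] = (
    prices.groupby("firm_id")["close"].transform(lambda s: s.rolling(252, min_periods=60).max())
)

# Values near 1 mean the price is close to the anchor before an announcement; by the argument
# above, good news then gets a muted reaction, while values far below 1 mute bad news.
prices["nearness"] = prices["close"] / prices["high_52w"]
```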


Dr. Jisok Kang, CERF Research Associate, June 2017

Does the Stock Market Benefit the Economy?

A research project carried out by CERF Research Associate Jisok Kang and his co-author, Kee-Hong Bae, provides evidence that a functionally efficient stock market does promote economic growth.

Finance researchers have extensively investigated the role of the stock market in the real economy. For instance, whether well-functioning stock markets promote economic growth has received a great deal of attention from academics and policymakers. However, measuring the functionality of stock markets has been a big empirical challenge. Researchers have so far typically used size measures (e.g., total stock market capitalization) as a proxy for stock market functionality and have not found robust evidence that stock market development is associated with future economic growth.

The research proposes a new measure of the functional efficiency of the stock market: stock market concentration. It shows that concentrated stock markets dominated by a small number of large firms negatively affect economic growth; in countries with concentrated stock markets, capital is allocated inefficiently, which results in sluggish IPO activity, innovation, and economic growth. These findings suggest that a concentrated stock market offers insufficient funds for emerging, innovative firms, discourages entrepreneurship, and is ultimately detrimental to economic growth.


Dr. Chryssi Giannitsarou, CERF Fellow, May 2017

Our social interactions are informative of our investment decisions.

 When we are investing, we don’t mindlessly copy our peers, according to new research carried out by CERF fellow Chryssi Giannitsarou and her research collaborators Luc Arrondel, Hector Calvo Pardo and Michael Haliassos. Instead, we are more likely to participate in the stock market if we believe that our immediate social circle is more informed about it.

The authors surveyed a representative sample of French households in 2014 and 2015 to capture measures of stock market participation and social connectedness, but also beliefs and perceptions of stock market returns. They wanted to find out whether those households invested by mindless copying, which may lead to stock market bubbles and fads, or by processing information and trying to copy good practice.

The results show that people who perceive a higher share of their financial circle as being informed about the stock market or participating in it are more likely to invest in stocks themselves. The conditional portfolio share invested in stocks is influenced by social interactions only to the extent that social interactions influence perceptions of past stock market performance and, through them, stock market expectations. There is a trace of mindless copying of behaviour, but only in the decision of whether or not to participate at all in the stock market.

All in all, their research findings suggest that social interactions tend to reduce rather than exacerbate financial literacy limitations, and to affect financial decision-making by being informative rather than ‘contagious’.

If you would like to read the relevant paper, it is available here