
Cambridge Endowment for Research in Finance (CERF)

 

The Rise of Special Purpose Acquisition Companies

Sunwoo Hwang, CERF Research Associate, December 2020 

2020 has been the year of special purpose acquisition companies, or SPACs. A SPAC is a blank-check shell company designed to take private companies public without going through the traditional initial public offering (IPO) process[1]. 2020 saw 230 SPAC IPOs and $77 billion raised in the United States alone as of this writing on December 13, 2020. Those 230 SPAC IPOs account for 54% of all IPOs and 45% of total IPO proceeds (source: SPAC Analytics). Both the number of SPAC IPOs and the capital they raised hit record highs, exceeding the totals for all US IPOs in every year since 2014. Yet SPACs have received little attention in the academic literature, presumably because until recently they were almost invisible in the IPO market: capital raised by SPACs made up less than 7% of total IPO proceeds before 2015, except during the financial crisis. In 2020, however, the SPAC has become the main gateway to public markets, outnumbering traditional IPOs and raising 45% of IPO proceeds. SPACs appear to deserve more attention and research by financial economists.

The very first question to ask is why a SPAC exists. On the demand side of the SPAC product market, the SPAC offers several benefits to capital-starved private firms. First, it offers a faster timeline: a company that chooses a SPAC over a traditional IPO goes public in three to four months rather than two to three years. Second, it involves greater certainty about the firm’s valuation and the amount of capital raised, since the company negotiates only with a SPAC, instead of myriad investors, and receives the IPO proceeds already raised once SPAC shareholders approve the merger. Third, the SPAC route is popular among young private firms and mature private firms such as unicorns[2], which suffer a valuation disadvantage relative to the mid-aged (i.e., 6-10 years old) private firms that public investors prefer[3]. On the flip side, a SPAC is more expensive than a traditional IPO, which charges roughly 7% of proceeds to pay an underwriter. A target company cedes equity to sponsors, amounting to 20% of the SPAC pre-merger and diluted by the exchange ratio to roughly 1 to 5% of the combined company post-merger[4], plus about 5.5% for underwriters.
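The dilution arithmetic can be made concrete with a back-of-the-envelope sketch. All inputs below (trust size, target valuation, fee rates) are hypothetical round numbers chosen for illustration, not figures from the sources cited above:

```python
# Back-of-the-envelope SPAC cost illustration (all inputs are hypothetical round numbers).
spac_trust = 300e6            # cash raised in the SPAC IPO and held in trust
target_equity_value = 3.0e9   # agreed pre-money equity value of the target

# Sponsor "promote": 20% of SPAC shares pre-merger, i.e. 25% on top of the public capital.
sponsor_promote_value = 0.20 / 0.80 * spac_trust           # $75m claim against the trust

# Post-merger, the promote is diluted by the exchange ratio into the combined company.
combined_value = target_equity_value + spac_trust
sponsor_stake_post = sponsor_promote_value / combined_value

spac_underwriting = 0.055 * spac_trust                      # ~5.5% SPAC underwriting fee
traditional_ipo_fee = 0.07 * spac_trust                     # ~7% fee on a same-sized traditional IPO

print(f"post-merger sponsor stake: {sponsor_stake_post:.1%}")   # ~2.3%, inside the 1-5% range
print(f"SPAC cost (promote + fee): {(sponsor_promote_value + spac_underwriting) / 1e6:.0f}m")
print(f"traditional IPO fee:       {traditional_ipo_fee / 1e6:.0f}m")
```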

On the supply side, SPAC sponsors obtain easier access to capital, as a SPAC faces fewer regulatory obstacles than a traditional IPO. The SPAC IPO process is relatively simple, and easier than, say, raising a new venture capital fund[5]. Furthermore, SPAC sponsors enjoy significant upside, receiving 20% of SPAC shares post-IPO, at comparatively low risk, since they invest in late-stage private firms. The SPAC has additional appeal to private equity (PE) sponsors. SPACs can serve as co-investment vehicles, allowing PE sponsors to do side-by-side transactions with less leverage and more equity, and they provide greater liquidity certainty than the private portfolio firms in which PE firms hold illiquid interests[6].

In the capital market, where SPAC sponsors stand on the demand side, SPACs entice investors with the following merits. Institutional investors invest in the initial SPAC offering, which bundles common shares and warrants into units, and use the warrants to purchase additional shares following a successful merger, with little downside. Initial investors receive their money back if a SPAC fails to find a target or if they do not like the target it selects. Also, retail investors gain access to PE-type transactions[7]. These benefits are not without costs, however. Because a SPAC has a limited lifespan of two years, sponsors may rush to merge with any (potentially low-quality) firm as the deadline approaches. In so doing, sponsors may overvalue the target to pass the minimum transaction amount threshold[8].

The natural next question is why SPACs have become so stunningly popular in recent years. One obvious possibility is that the pandemic has increased uncertainty. In uncertain times, private firms in need of financing may fear they cannot raise additional rounds of capital from private investors. Even so, they may not opt for traditional IPOs, which take years and raise an uncertain amount of capital. But uncertainty is unlikely to explain the whole story, as the trend precedes the pandemic. SPACs’ share of IPOs increased monotonically from 12% in 2015 to 28% in 2019, making each year a record year since the financial crisis (source: SPAC Analytics). There is little evidence of a significant surge in economic uncertainty or stock market volatility, measured by the CBOE Volatility Index, during those five years.

The other possibility, which may also explain the pre-COVID trend, is the growing sophistication of the SPAC market, where an increasing fraction of sponsors and investors have sectoral focus and expertise. A recent example is Chardan Healthcare Acquisition Corp., which closed a merger with BiomX in December 2019: the SPAC was sponsored by the healthcare-focused investment bank Chardan, and several high-profile biotech investors anchored the deal[9]. Also, the investment bank Jefferies finds that an increasing number of SPAC sponsors are industry executives[10].

In addition, numerous open questions remain. First, structural complexity may create perverse incentives for the different players involved, beyond the one created by the limited lifespan noted above. Second, in the case of PE sponsors, limited partners may be concerned that general partners become distracted or, worse, usurp deal opportunities in favor of SPACs[11]. Third, the discrepancy between control and cash flow rights may engender corporate governance problems. In a typical pre-merger SPAC, sponsors own 20% of cash flow rights and 100% of voting rights through their class B common shares, which are issued only to sponsors and are the only shares entitled to vote pre-merger. A nominally majority-independent board of directors elected solely by sponsors may not be majority independent in practice.

 

 

 


[1] https://www.cbinsights.com/research/report/what-is-a-spac/

[2] https://news.crunchbase.com/news/spac-unicorn-ipos-luminar-quantumscape/

[3] https://hbr.org/2016/01/how-unicorns-grow

[4] https://www.cbinsights.com/research/report/what-is-a-spac/

[5] https://www.cbinsights.com/research/report/what-is-a-spac/

[6] https://privateequityreport.debevoise.com/the-private-equity-report-fall-2017-vol-17-no-2/pe-jumps-into-the-spac-markets

[7] https://en.wikipedia.org/wiki/Special-purpose_acquisition_company

[8] https://www-sciencedirect-com.ezp.lib.cam.ac.uk/science/article/pii/S0165410116300660

[9] https://www-barrons-com.ezp.lib.cam.ac.uk/articles/2019-was-a-record-year-for-blank-check-companies-here-are-the-biggest-trends-51581016401

[10] https://www.jefferies.com/OurFirm/2/1616

[11] https://privateequityreport.debevoise.com/the-private-equity-report-fall-2017-vol-17-no-2/pe-jumps-into-the-spac-markets

 

Hormoz Ramian, CERF Research Associate, November 2020

Welfare Implications of Bank Valuation Disagreement 

 

Regulatory interventions have always been followed by heated debates. In the years after the financial crisis reached its darkest moment, the academic literature and legislative chambers were inundated with discussions of risk-based capital requirements. Opponents have often expressed dissatisfaction with the intervention, arguing that holding capital above the laissez-faire outcome is expensive for banking institutions, leading to lower lending and ultimately suppressed economic growth. Proponents of the regulation, by contrast, have argued that the fragility of the banking sector, which carries a significant economic cost to society, justifies the intervention.

 

Despite prolonged arguments presented by the opposing sides, these debates rarely reached an agreeable conclusion. An important but often ignored reason for the disagreement is that the arguments emanated from incomparable bases. More specifically, the opponents’ view, presented by the banking institutions, weighed heavily on the cost of equity as the central reason to rail against capital holdings above the laissez-faire outcome (Basel Committee on Banking Supervision; Acharya et al., 2017). Setting aside the underlying merits of their argument for the moment, this perspective focused on the role of asset prices as the main reason to advocate capital deregulation. This stance, however, was not readily reconcilable with the proponents’ social perspective, whose arguments were mainly built on a welfare analysis concerned with the negative externalities associated with costly bank failure (Allen et al., 2011, 2015; Gersbach et al., 2017).

 

Much of the discussion in the academic and legislative literature on this topic has understandably been devoted to the welfare implications of bank failure. For instance, James (1991) provides a comprehensive survey of loss given default across financial and non-financial sectors, showing that the ex-post asset recovery rate may fall to 70 cents per dollar. Nonetheless, the opponents’ view on the cost of capital has remained a consistent defence that has stalled further arguments for increasing capital holdings above Basel III. Recent empirical studies provide evidence that, even in the presence of capital buffers on top of the risk-based capital requirement, banking institutions remain significantly undercapitalized (Piskorski et al., 2020).

 

The lack of a rigorous quantitative basis for evaluating the cost of capital for banking institutions at an aggregate level is among the core reasons why the proponents have failed to discredit the case for capital deregulation. The methodological framework in this study first develops a foundation for a realistic valuation of bank capital in a general equilibrium setting under aggregate uncertainty. The framework integrates the asset pricing and banking regulation disciplines to provide a mapping between the cost of capital and the welfare implications of bank failure. This connection serves to reconcile the two arguments for and against bank capital holding.

 

A comprehensive capital regulation that enhances welfare considers three simple components: (i) how is the bank funded? (ii) what is the risk profile of the bank’s assets? (iii) what is the valuation of bank net worth? Existing studies focusing on bank funding show that government guarantees provide welfare gains by preventing self-fulfilling runs on bank debt, even when runs are not justified by fundamentals (Diamond and Dybvig, 1983). Nonetheless, government guarantees break the link between the cost of debt and the borrower’s default risk and lead to under-capitalization of the banking system. This gives rise to an alternative distortion, more frequent bank failure, and motivates capital regulation, which provides gains by lowering socially undesirable defaults. However, studies that concentrate on liabilities provide limited predictions about the importance of the composition of bank assets. My research finds that the effectiveness of optimal capital regulation depends on the asset side of the bank balance sheet, particularly when monetary policy targets reserves management. A large strand of literature focusing on the asset side of the bank balance sheet shows that conditioning the risk profile on capital provides welfare gains. However, this literature assumes that households, as the ultimate providers of financing in the form of debt or equity, play a limited role, or that the supply of financing is fixed. The findings in this context uncover that households’ optimal consumption-saving behaviour has important implications for the equilibrium cost of debt, which is a determinant of the banking sector’s default risk. This equilibrium mechanism predicts that, as the cost of debt falls, the capital constraint becomes effectively overburdening and hence socially costly.

 

These shortcomings motivate the following two questions. First, what is the optimal capital regulation of the banking system in an environment where the cost of financing (in the form of debt or equity) and the risk profile of the asset side arise endogenously? Second, how does the effectiveness of this optimal capital regulation depend on the interest on excess reserves (IOER), which is decided separately by the monetary authority? I address these questions by developing a general equilibrium model in which banks finance themselves by accepting deposits and raising equity from households, and invest their funds in excess reserves and loans subject to non-diversifiable risk.

 

The analysis in my research takes IOER as a given policy and shows that the optimal risk-weighted capital requirement offers welfare gains by lowering the likelihood of bank failure and the associated distortions that are ultimately borne by society (Admati and Hellwig, 2014). The general equilibrium provides an additional important prediction, however. When the bank is required to raise more capital to satisfy the capital constraint, its demand for debt financing falls. This channel leads to a lower equilibrium deposit rate. Given any lending level, lower interest expenses expand the bank’s ability to meet its debt liabilities and enhance its solvency. The optimal risk-weighted capital regulation, even in general equilibrium, fails to take this effect into account and hence becomes socially costly.

 

I show that when IOER is above the zero bound, a marginal decrease in this rate is accompanied by a proportional decrease in the equilibrium deposit rate. Because the proportion of deposits in liabilities always exceeds the proportion of reserves on the asset side of the balance sheet, a lower IOER leads to a faster fall in interest expenses than in interest income. As a result, the social cost of the optimal capital constraint, when it is decided in isolation from the IOER policy, increases as IOER falls towards the zero bound. This finding is an important motivation for deciding capital regulation and IOER jointly. In particular, a lower IOER accompanied by a looser capital constraint can expand the credit flow to the real economy while the bank’s default likelihood remains constant.
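A two-line numerical check of this proportion argument; the balance-sheet shares and the 25-basis-point cut below are illustrative assumptions rather than values from the model:

```python
# Illustrative balance sheet: deposits fund most of the bank, reserves are a small share of assets.
reserves, deposits, cut = 15.0, 90.0, 0.0025   # a 25bp IOER cut, passed one-for-one to the deposit rate

lost_income = reserves * cut      # interest income from reserves falls by 0.0375
saved_expense = deposits * cut    # interest expenses on deposits fall by 0.2250
print(saved_expense - lost_income > 0)  # True: net interest income rises, supporting solvency
```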

 

This general equilibrium framework provides a second prediction: the relationship between the optimal capital regulation and IOER reverses when IOER becomes very low or falls below zero. This finding matters for effective policy analysis in the current low or negative interest rate environment. The research shows that any further reduction in this territory is accompanied by a nonresponsive equilibrium deposit rate, because depositors always require strictly positive compensation for their time preference to forgo consumption. This nonproportional transmission from IOER to the deposit rate means that the bank’s interest income from reserves falls faster than its interest expenses on deposits. Given any lending level, the bank’s solvency worsens; nonetheless, the capital regulation fails to take this effect into account. An interactive policy initiative provides social value when a falling IOER, below the zero bound, is accompanied by a stricter capital constraint.

 

When Sheer Predictive Power is not Good Enough: Towards Accountability in Machine Learning Applications
by Thies Lindenthal and Wayne Xinwei Wan

The law is clear: housing-related decisions must be free of discrimination, at least in terms of gender, age, race, ethnicity, or disability. That is easier said than done for the plethora of machine learning (ML) empowered systems for mortgage evaluation, tenant screening, i-buying schemes or other ‘disruptions’. A rapidly expanding literature explores the potential of ML algorithms, introducing novel measurements of the physical environment or using these estimates to improve traditional real estate valuation and urban planning processes (Glaeser et al., 2018; Johnson et al., 2020; Karimi et al., 2019; Lindenthal & Johnson, 2019; Liu et al., 2017; Rossetti et al., 2019; Schmidt & Lindenthal, 2020). These studies have demonstrated, again and again, the undisputed power of ML systems as prediction machines. Still, it remains difficult to establish causality or for end-users to understand the internal mechanisms of the models.

An “accountability gap” (Adadi & Berrada, 2018) remains: How do the models arrive at their predictions? Can we trust them not to bend rules or cut corners? This accountability gap holds back the deployment of ML-enabled systems in real-life situations (Ibrahim et al., 2020; Krause, 2019). If system engineers cannot observe the inner workings of the models, how can they guarantee reliable outcomes? Further, the accountability gap leads to obvious dangers: flaws in prediction machines are not easily discernible by classic cross-validation approaches (Ribeiro et al., 2016). Traditional ML validation metrics, such as the magnitude of prediction errors or F1-scores, can evaluate a model’s predictive performance, but they provide limited insight for addressing the accountability gap.

Training ML models is a software development process at heart. We believe that ML developers should therefore follow best practices and industry standards in software testing. In particular, the system testing stage of software test regimes is essential: it verifies whether an integrated system performs the exact function required in the initial design (Ammann & Offutt, 2016). For ML applications, this system testing stage can help to close the accountability gap and improve the trustworthiness of the resulting models. After all, thorough system testing verifies that the system is not veering off into dangerous terrain but stays on the pre-defined path. System testing should be conducted before evaluating the model’s prediction accuracy, which corresponds to the acceptance testing stage in the software testing framework.

In recent years, several model interpretation algorithms have been developed that attempt to reduce this complexity by providing an individual explanation that justifies the prediction for one specific instance (Lei et al., 2018; Lundberg & Lee, 2017; Selvaraju et al., 2017; Ribeiro et al., 2016). However, most of the current local interpretation tools are qualitative and require human inspection of each individual sample. Thus, these tools for model verification do not easily scale up to large samples.

One example – to demonstrate the general approach
In this paper, we develop an explicit system-testing stage for an ML-powered classifier of residential real estate images. To formalize a novel model verification test, we first define categories of relevant and irrelevant information in the training images that we are interested in testing. Then we identify the elements of the input images that are most relevant for classification by the ML model (i.e., which pixels matter most?), using a local model interpretation algorithm. Finally, we calculate what proportion of this interpretable information originates from our defined categories of relevant and irrelevant information, and we use this proportion as the model verification test score. High scores imply that the model bases its predictions on meaningful attributes and not on irrelevant information, e.g. in the background of the images.
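As a rough illustration of how such a verification score could be computed, the sketch below takes the pixel-level attribution and the object masks as given boolean arrays; the toy masks and the function name are hypothetical placeholders, whereas the paper obtains them from LIME and an object detector:

```python
import numpy as np

def verification_score(attribution, relevant, irrelevant):
    """Share of attributed pixels that fall inside relevant vs. irrelevant objects.

    attribution: boolean mask of pixels the interpretation algorithm marks as decisive
    relevant:    boolean mask for facades, windows, doors (from an object detector)
    irrelevant:  boolean mask for trees and cars
    """
    attributed = attribution.sum()
    if attributed == 0:
        return 0.0, 0.0
    relevant_share = (attribution & relevant).sum() / attributed
    irrelevant_share = (attribution & irrelevant).sum() / attributed
    return relevant_share, irrelevant_share

# Toy 4x4 "image": the explainer highlights 4 pixels, 3 on the facade and 1 on a car.
lime_mask = np.array([[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]], dtype=bool)
facade    = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=bool)
car       = np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 1]], dtype=bool)

print(verification_score(lime_mask, facade, car))   # (0.75, 0.25): mostly relevant pixels
```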
Specifically, we augment an off-the-shelf image classifier that has been re-trained to detect architectural styles of residential buildings in the UK (see my previous blog post for CERF). This type of computer-vision-based classifier is selected as an illustration because of its popularity in real estate and urban studies (Naik et al., 2016), although our approach also extends to other ML classifiers, e.g. in text mining (Fan et al., 2019; Shen, 2018).

Following architects’ advice, we define the facades of houses, windows, and doors as the most relevant attributes for classifying building styles, and we treat trees and cars as irrelevant information. These objects are detected in the input images using object detection algorithms. Further, we implement the local interpretable model-agnostic explanation algorithm (LIME), one of the popular local model interpretation tools, to find the areas in the input images that best explain the predictions.

Finally, by comparing these interpretable areas with the areas of the detected objects, we calculate the verification test score for this exemplar model. Our results reveal that the classifier indeed selects information from facades, windows, and doors when predicting building vintages, and it also excludes irrelevant information from trees, as we hoped. More importantly, these findings improve the trustworthiness of the prediction results, as well as of the associated links between building vintages and real estate values (Johnson et al., 2020; Lindenthal & Johnson, 2019; Schmidt & Lindenthal, 2020). However, we also find that the model considers information from cars in its predictions.
Our study contributes to the growing literature applying ML in real estate and urban studies in two ways. First, we propose an ML application framework with an additional system testing stage, which aims to address the accountability gap and improve the trustworthiness of the results. Using a commonly applied computer vision model from the literature as an example, we demonstrate the capability of our approach to check whether a model is at risk of relying on undesirable information for its predictions. Second, we extend existing qualitative model-interpretation techniques into a formal quantitative test. Methodologically, this helps scale up model interpretation analyses to large samples, which is essential for most applications in real estate and urban studies. In summary, our proposed method extends to other ML models and, given the importance of closing the accountability gap, this study has implications for ML applications in real estate and urban studies, as well as in other fields beyond.
Fig 1: First, find areas that are relevant when e.g. describing a home’s vintage.


Fig 2: Second, compare to the image areas that actually lead to a specific classification: How good is the overlap?

Mehrshad Motahari, CERF Research Associate, October 2020

Machine Learning Challenges in Finance

Machine learning (ML) is the most important branch of artificial intelligence (AI), providing tools with wide-ranging applications in finance. My previous blog posts (‘Can robots beat the market?’ and ‘Artificial intelligence in asset management: hype or breakthrough?’) discuss some of the most important ML applications in finance. The success of ML is often linked to three key capabilities: providing flexible functional forms that can capture nonlinearities in data, selecting relevant model features without pre-specification, and capturing information from non-numerical data sources such as text. However, recent studies, including Israel et al. (2020) and Karolyi and Van Nieuwerburgh (2020), outline several challenges involved in using ML in finance. What follows is a summary of these challenges.

Finance is often thought of as a field awash with applicable data, ranging from financial and economic sources to more recent unstructured data such as online news and social media posts. While the breadth of data that can be used in finance is quite large, the time series are often very short by ML standards. A limited number of time-series observations means that any model using the data is also constrained to be proportionally small. The consequence is that data-hungry ML tools cannot operate anywhere near their full potential. Finance also does not allow data to be produced through experiments, as is done in other fields. For example, in image recognition, a successful area of ML application, scientists can simply produce millions of photos for models to train on. In finance, however, one has no alternative but to wait for financial data to be produced over time.

There are exceptional cases in finance where data is available at high frequency, such as high-frequency trading data, providing ML tools with a larger number of observations across time to learn from. However, even in these cases, ML faces its second-biggest challenge: a low signal-to-noise ratio. ML tools are highly dependent on data quality, and poor-quality, noisy data leads to unreliable ML models. It is to no one’s surprise that financial data is considerably noisy, especially at high frequencies. The reason, of course, is that under the Efficient Market Hypothesis (EMH), the only predictable component of returns in fully efficient financial markets is the risk premium, which is small and difficult to capture over short horizons. In the absence of large and reliable databases, ML tools in finance are essentially tasked with finding a needle in a haystack.

Another difference in finance, compared with other areas in which ML is applied, is data evolution. Taking image recognition again as an example, images of humans always have the same features; using these features, ML tools can learn to recognise images. In contrast, financial data changes and evolves over time, as do the financial markets. It is therefore difficult to assume that financial variables have the same meaning they had several decades ago. There are, of course, economic mechanisms that do not change over time and that underlie market behaviour. However, most ML models are so-called black boxes and do not provide insight into how they produce specific results. This lack of interpretability makes it difficult to tell whether an ML model is capturing economically meaningful patterns or pure noise.

ML tools now have essential applications in finance. The three main ML challenges, lack of data, a low signal-to-noise ratio, and the absence of model interpretability, define the current frontier of research in finance. A growing number of papers attempt to find novel and creative solutions to these issues (see Israel et al., 2020). These developments can pave the way for a stronger presence of ML in finance in the years to come.

References

Israel, R., Kelly, B.T. and Moskowitz, T.J. 2020. Can Machines 'Learn' Finance? Available at SSRN 3624052.

Karolyi, G.A. and Van Nieuwerburgh, S. 2020. New methods for the cross-section of returns. The Review of Financial Studies, 33(5), pp.1879–1890.

 

Sunwoo Hwang, CERF Research Associate, September 2020

Contingent employment and innovation 

There has been a rapid increase in contingent employment worldwide. As of 2015, it accounts for 15.8% of the U.S. labor force, up from 10.7% in 2005 (U.S. Bureau of Labor Statistics). In Europe, contingent workers make up an average of 43.3% of the workforce across the 28 European Union countries (OECD Labor Force Statistics). Contingent work is an umbrella term covering numerous non-permanent employment arrangements. Despite these trends, little is known about the implications of contingent employment for firm outcomes. There are certainly potential benefits, such as flexibility in the use and reallocation of labor, which may lower operating leverage and fuel investment and growth. However, there may also be costs. Job insecurity, compared with regular employment contracts, may discourage employees from engaging in value-enhancing activities such as innovation, which typically requires a long-term commitment.

Hwang (2020) asks whether contingent employment affects the innovation incentives of employees. The paper finds that converting temporary contracts to permanent ones has a positive effect on corporate innovation, conditional on long-term rewards being in place. The intuition behind these findings is that the combination of excessive termination following short-term failure and few rewards for long-term success, which contingent workers face, discourages innovation (Manso, 2011). Note that the paper focuses on the contingent workforce in core functions. It speaks to neither low-skilled contingent workers who may not innovate (e.g., janitors) nor high-skilled (voluntary) ones who may innovate yet be insensitive to job security given their superior outside options (e.g., consultants).

To answer the question, the paper exploits a novel experiment from Korea. It allows a comparison of firms that shifted contingent contracts to regular contracts with an otherwise identical set of firms that continued to use contingent labor. The experiment combines a contingent arrangement unique to the country, under which contingent workers do the same core tasks as regular employees hired when the labor market was strong, with a Supreme Court ruling against the arrangement.

The affected contingent workers are the so-called in-house subcontracted (IS) workers. They are similar to agency temps but differ in that they are hired through in-house subcontractors, not staffing agencies, and work for the main contractor almost permanently. Note that in-house subcontractors are often created for the sole purpose of supplying IS workers. IS workers lack job security because they cannot be reallocated to other firms if the main contractor stops subcontracting, which in turn closes the in-house subcontractor; business failure is a legitimate reason for discharge. IS workers are likely innovators because the vast majority of patented innovations come from traditional manufacturing industries, these innovations depend on the basic education IS workers have received (D’Acunto, 2014), and IS workers have gradually replaced their secured colleagues as the labor market has weakened over time.

To my knowledge, Hwang (2020) provides the first evidence that contingent employment negatively affects the rate at which R&D investment translates into patented innovations at the employee level. Moreover, it shows that an optimal innovation-motivating scheme (Manso, 2011), characterized by tolerance for short-term failure and rewards for long-term success, also governs the employees who execute innovation; prior research has focused on the managers who finance it. The paper’s findings inform debates on the costs and benefits of contingent employment and, specifically, corporate decisions about managing the human capital that produces innovation. The findings have timely policy implications, as the labor market share of contingent labor is large and growing. Furthermore, the pandemic has hit contingent workers harder, with harsher pay cuts or layoffs, and the post-corona era is likely to demand more of both contingent labor and innovation.

Hormoz Ramian, CERF Research Associate, August 2020  

Negative Interest Rates: The Interaction between Monetary and Financial Regulatory Policies 

The negative interest rate has been among the frontier policies for countering recent economic downturns. The 2020 pandemic resurfaced a policy originally deployed to assuage the prolonged slowdown that followed the 2008 financial crisis. While lowering the cost of financing is a well-established policy response to adverse economic outcomes, how effectively negative interest rates pass through financial intermediaries to the real economy remains an open question.

 

Policymakers examine how negative interest rates lead to real economic effects through financial intermediaries. The interest rate policy (alternatively known as the bank rate or the federal funds rate) is primarily a monetary policy. Nevertheless, its tight relationship with the interest on excess reserves (IOER), paid on the oversized excess reserves held by banking institutions, generates substantial impacts on the overall performance of banks and their lending. This motivates, first, investigating how the negative interest rate policy is translated into an ultimate lending rate for the real economy through the banking institutions. Second, the tight relationship between the main monetary policy and IOER motivates examining how the interaction among policy initiatives by the monetary and financial regulatory authorities affects welfare.

 

Over the past decade, oversized excess reserves of the banking system have comprised over one-third of the total assets of the major central banks in charge of 40% of the world economy. Between January 2019 and October 2019, depository institutions in the United States held $1.41T of funds in excess reserves, accounting for over 40% of the total balance sheet of the Federal Reserve. Over the same period, depository institutions held over €1.9T in excess reserves at the European Central Bank, a slightly smaller share relative to the consolidated balance sheet of the Eurosystem. A similar pattern holds for the Danish National Bank, the Swiss National Bank, the Sveriges Riksbank, and the Bank of Japan.

 

Evidence shows that in July 2020, excess reserves of depository institutions accounted for nearly 30% of the total assets of the Federal Reserve and the ECB. The cross-dependency between IOER and bank capital regulation is an important consideration with welfare implications, because conflicting effects between the two policies may lead to over-regulation of the banking sector and disruptions in credit flow to the real sector. Alternatively, the two policies may lead to under-regulation and re-expose the banking system to heightened default risk and possibly failures with socially undesirable outcomes. The aftermath of the 2008 financial crisis highlighted the lack of analytical frameworks that integrate multiple policies and assess their real economic implications. Policymakers routinely address distortions in each part of the economy with individual policies. Nonetheless, the policymaker's ability to provide welfare gains through a broad range of levers is limited by our understanding of the channels connecting those policies.

 

A quintessential feature of IOER is its dual role. The policy is decided by the monetary authority and, historically, has been heavily correlated with the main monetary policy. In the United States, the federal funds rate and IOER are heavily correlated, a stylised fact that also holds among other advanced economies, with correlations ranging between 94% and 99%. When the monetary authority targets reserves management, IOER simultaneously affects banking institutions' balance sheets to a great extent, which strengthens the connection between the main monetary policy and capital regulation. Existing studies in the macro-finance and banking literature investigating the implications of the negative interest rate policy often focus on the interconnections between the policy and the asset side of banking institutions. This strand of the literature provides limited predictions about how the negative interest rate policy passes through to the real sector because, when rates are negative, the exceedingly steep marginal utility of consumption of depositors limits the banks' ability to pass the negative rates on to their depositors. An alternative strand of literature has tackled this shortcoming through partial equilibrium approaches and shows that, given exogenous deposit holdings, the negative interest rate policy leads to a lower cost of borrowing for the real sector. Nonetheless, such approaches fail to consider the downsides of the negative interest rate policy when the rate falls indefinitely.

 

 

 

 

 

These shortcomings motivate incorporating both sides of the banking institutions' balance sheets into the policy initiative, so as to determine interest rate policies that pass through to both the bank's borrowers (businesses) and its lenders (depositors). When interest rates are positive, a policymaker's decision to lower IOER is followed by an almost proportional fall in the bank deposit rate. Because the banking sector invests only a fraction of its deposits in reserves, a proportional decrease in the deposit rate in response to falling IOER leads to a faster drop in interest expenses on deposits than the loss of interest income from reserves. The banking sector extends lending to borrowers as a result of lower default risk when IOER falls, and bank capital regulation subsequently tightens to adjust for the added risk on banks' assets.

 

However, when IOER becomes very low, or possibly negative, the deposit rate exhibits an increasingly flat response to further changes in IOER, because deposit investors require a marginally positive compensation for their time preference to forgo current consumption. When the bank deposit rate is increasingly nonresponsive to further reductions in the policymaker's negative interest rate, the loss of interest income from reserves exceeds the reduction in interest expenses on deposits. The banking institutions respond to the increased default risk from higher net interest expenses by lowering lending to maintain shareholder value, and bank capital regulation then loosens. This indicates that a lower IOER dissuades the banking sector from over-relying on idle excess reserves, with an expansionary effect on real economic output, only when lower rates lead to lower default risk; otherwise, lowering IOER generates counterproductive results by worsening this overreliance problem and has a contractionary economic impact. This finding provides a motivation for the monetary and financial regulatory policymakers to act jointly to provide welfare gains to society.
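The reversal can be illustrated with a stylised pass-through rule in which the deposit rate tracks IOER one-for-one until it hits a small positive floor; the balance-sheet figures, spread, and floor below are illustrative assumptions, not the paper's calibration:

```python
# Stylised illustration of the reversal once the deposit-rate floor binds (illustrative numbers only).
reserves, deposits = 30.0, 90.0       # reserves are a smaller share of the balance sheet than deposits
FLOOR = 0.0005                        # depositors demand marginally positive compensation

def deposit_rate(ioer):
    return max(ioer - 0.005, FLOOR)   # one-for-one pass-through until the floor binds

def net_interest_income(ioer):
    return reserves * ioer - deposits * deposit_rate(ioer)

for ioer in (0.0100, 0.0050, 0.0000, -0.0050):
    print(f"IOER {ioer:+.2%} -> deposit rate {deposit_rate(ioer):.2%}, "
          f"net interest income {net_interest_income(ioer):+.3f}")
# Net interest income improves while pass-through still works, then deteriorates once the
# deposit rate is stuck at the floor and only the income earned on reserves keeps falling.
```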

 

Adelphe Ekponon, CERF Research Associate, July 2020 

Managerial Commitment and Long-Term Firm Value 

 

Motivated by preliminary empirical evidence showing that firms with more committed managers tend to suffer less during downturns, CERF Research Associate Adelphe Ekponon and collaborator Scott Guernsey (University of Tennessee) propose a model to help understand the mechanisms underlying this phenomenon.

 

Economic crises bring about periods of prolonged turmoil. During such periods, shareholders have difficult decisions to make, in particular regarding the retention or firing of the incumbent management team (assuming also that a change of CEO is likely to be followed by a reshuffle of the managing team). There exists a labor market for executives with two possible statuses: an executive team can be either the incumbent or an entrant. The framework assumes that this labor market is not only competitive but also highly restrictive, as managers do not have many outside options. Thus, there is a lack of diversification for managers in the labor market, i.e., they are “all-in” on the firm. This gives executives an incentive to commit to the firm and exert more effort. In practice, the firm can grant managers part of their performance-related compensation in derivatives such as stock options or deep out-of-the-money options.

 

In their model, shareholders optimally derive both the cost of replacing the incumbent and the probability of retaining them, so as to maximize shareholder value. Shareholders derive these optimal decisions such that they are indifferent between keeping the incumbent and hiring an entrant after accounting for all firing and hiring costs. They also ensure the participation of both incumbent and entrant, where participation is defined as the gap between the pay for performance and the disutility of effort. Hence, their model differs from several strands of the literature, such as corporate structural models (Leland, 1994) and models with agency conflicts (Jensen, 1986), macroeconomic risk (Hackbarth et al., 2006), contract incentives (Laffont and Tirole, 1988), and governance over the business cycle (Philippon, 2006; Ekponon, 2020).

 

Managers’ level of effort depends on the cost of replacement and the likelihood that their tenure will be extended for the subsequent period. The incumbent chooses the level of effort to exert and, under perfect information, selects a higher level of effort when the combination of the two (cost of replacement and probability of retention) is higher, because the labor market for executives is restrictive and higher replacement costs indicate longer tenure. So managers commit to the firm knowingly, but the firm does not necessarily have to commit to managers.

 

In bad times, earnings are hit by poor macroeconomic conditions. To limit the losses, shareholders may adopt a higher pay-for-performance strategy; the model then predicts a lower probability of retention, proxied by a lower governance index (i.e., better governance), but shareholders face higher replacement costs. When the latter effect dominates, executives choose to exert more effort (reducing the impact of low profitability) and shareholders are better off keeping the incumbent team.

  

References mentioned in this post 

 

Ekponon, A. (2020) "Agency conflicts, macroeconomic risk, and asset prices." Social Science Research Network, No. 3440168. 

 

Hackbarth, D., Miao J., and Morellec E. (2006) "Capital structure, credit risk, and macroeconomic conditions." Journal of Financial Economics, 82(3): 519-50. 

 

Jensen, M. C. (1986) “Agency costs of free cash flow, corporate finance, and takeovers.” American Economic Review, 76(2): 323–29.  

 

Laffont, J.-J., and Tirole J. (1988) "The dynamics of incentive contracts." Econometrica, 56(5): 1153-75. 

 

Leland, Hayne E. (1994) "Corporate debt value, bond covenants, and optimal capital structure." The Journal of Finance, 49(4): 1213-52. 

 

Philippon, T. (2006) "Corporate governance over the business cycle." Journal of Economic Dynamics and Control, 30(11): 2117-41. 

 

 

Mehrshad Motahari, CERF Research Associate

Can Robots Beat the Market? 

June 2020 
 

The growing trend of replacing active investment managers with computer algorithms (The Economist, 2019) has led to a surge in the use of artificial intelligence (AI) in investing. This means that more AI-based algorithms (alpha algos) are being used to devise investment strategies. In most cases, the algorithm itself tests the viability of these strategies and even executes trades while keeping transaction costs to a minimum. A common question, however, is whether these algorithms can generate profitable investments. The following is a summary of findings from a number of recent studies on the issue. 

AI-based investment strategies often use forecasts of future asset performance metrics, most popularly returns. AI models utilise a range of data inputs, including technical and fundamental indicators, economic measures, and texts (such as online posts and news articles), to predict future returns (Bartram, Branke, and Motahari, 2020). These predictions then form the basis of an investment strategy that rebalances portfolio weights towards stocks expected to outperform and away from those expected to underperform. 

In a recent hallmark study, Gu, Kelly, and Xiu (2020) investigate a variety of AI models that can be used to forecast future stock returns. The study covers 30,000 US stocks from 1957 to 2016 and includes a set of predictor variables comprising 94 stock characteristics, interactions of each characteristic with eight aggregate time-series variables, and 74 industry sector dummy variables. 

According to the results, the best-performing investment strategy is based on the return predictions of the neural network model: a value-weighted long-short decile spread strategy using neural network predictions generates an annualised out-of-sample Sharpe ratio of 1.35. This is more than double the Sharpe ratio of a regression-based strategy from the literature. The out-of-sample performance of the AI approaches is robust across a range of specifications. 
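For readers unfamiliar with how such a strategy is formed, here is a minimal one-period sketch of a value-weighted long-short decile spread built from model predictions. The column names, the random data, and the single-date setup are illustrative assumptions, not the exact procedure of Gu, Kelly, and Xiu (2020):

```python
import numpy as np
import pandas as pd

def decile_spread(df):
    """Value-weighted long-short decile spread for one rebalancing date.

    Expects columns 'pred' (model-predicted return), 'ret' (realised next-period return)
    and 'mcap' (market capitalisation used as the value weight). Names are illustrative.
    """
    df = df.copy()
    df["decile"] = pd.qcut(df["pred"], 10, labels=False)         # 0 = lowest prediction, 9 = highest
    vw = lambda g: np.average(g["ret"], weights=g["mcap"])        # value-weighted realised return
    return vw(df[df["decile"] == 9]) - vw(df[df["decile"] == 0])  # long the top decile, short the bottom

# One month of hypothetical predictions and realised returns for 500 stocks.
rng = np.random.default_rng(0)
panel = pd.DataFrame({
    "pred": rng.normal(0.00, 0.05, 500),
    "ret":  rng.normal(0.01, 0.10, 500),
    "mcap": rng.lognormal(10, 1, 500),
})
print(decile_spread(panel))
# Repeating this each month and annualising the mean and volatility of the spread
# gives the Sharpe ratio quoted in the study.
```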

Why do AI approaches predict returns better than classic tools such as ordinary linear regressions? Gu, Kelly, and Xiu (2020) argue that this is due to the ability of most AI techniques to capture nonlinear relationships between dependent and independent variables, relationships that linear regressions often miss. Moreover, many AI techniques are able to select the most relevant variables from a large set of predictors, shrinking the model inputs while keeping the most important variables. In another recent paper, Freyberger, Neuhierl, and Weber (2020) show that an investment strategy based on a model with this feature-selection property can generate a Sharpe ratio 2.5 times larger than that of an ordinary linear regression model. 

Despite the remarkable success of AI models in predicting returns, some doubt the feasibility of investment strategies based on these predictions. Avramov, Cheng, and Metzker (2020) examine the neural network methodology used in Gu, Kelly, and Xiu (2020) and show that the return of the investment strategy based on this approach is largely driven by subsamples of microcaps, firms with no credit rating coverage, and distressed stocks. In addition, the strategy tends to be profitable mostly during periods of high limits to arbitrage, including high market volatility and low liquidity. 

It appears that AI models have improved upon return forecasting, due to their flexible structure and ability to capture complex relationships from vast amounts of data. However, the jury is still out on whether the predictions do, in fact, lead to investments that outperform conventional benchmarks in practice. What is clear for now is that AI provides us with the best tools for forecasting returns empirically. 

 

References 

Avramov, D., Cheng, S. and Metzker, L. 2020. Machine Learning versus Economic Restrictions: Evidence from Stock Return Predictability, Available at SSRN 3450322. 

Bartram, S. M., Branke, J. and Motahari, M. 2020. Artificial Intelligence in Asset Management, CEPR Discussion Paper No. DP14525, Available at SSRN 35603330. 

The Economist, 2019. The Stockmarket Is Now Run by Computers, Algorithms and Passive Managers. Available from: https://www.economist.com/briefing/2019/10/05/the-stockmarket-is-now-run...

Freyberger, J., Neuhierl, A. and Weber, M. 2020. Dissecting Characteristics Nonparametrically, The Review of Financial Studies, Volume 33, Issue 5, May 2020, Pages 2326–2377. 

Gu, S., Kelly, B. and Xiu, D. 2020. Empirical Asset Pricing via Machine Learning, The Review of Financial Studies, Volume 33, Issue 5, May 2020, Pages 2223–2273. 

 

Scott B. Guernsey, CERF Research Associate

April 2020

Coronavirus and Finance: Early Evidence on Household Spending, and Investor Expectations 

Sparked by the coronavirus disease 2019 (COVID-19) pandemic, in a televised broadcast on 23 March 2020, U.K. Prime Minister Boris Johnson gave the following instruction: “You must stay at home.” Like most of the world’s governments, the U.K. continues to implement strict lockdown restrictions on households and businesses in order to limit the spread of the disease. Early signs imply these measures are working, as within the past week U.K. Health Minister Matt Hancock confirmed that “[social distancing] is making a difference. [The U.K. is] at the peak.” Moreover, The Economist recently published an article suggesting “coronavirus infections have peaked in much of the rich world.”  

Putting the global economy on indefinite hold, however, has likely created a different set of problems and unknowns, many of which are more financial in nature. In this short article, CERF Research Associate Scott Guernsey reviews some recent early-stage finance research that explores the impact of COVID-19 on important financial outcomes, such as household spending and investor expectations. 

In the first article, “How Does Household Spending Respond to an Epidemic? Consumption During the 2020 COVID-19 Pandemic”, Professors Scott Baker (Northwestern University), Robert Farrokhnia (Columbia University), Steffen Meyer (University of Southern Denmark), Michaela Pagel (Columbia University), and Constantine Yannelis (University of Chicago) investigate how U.S. households altered their consumption behavior in response to the COVID-19 outbreak. Using transaction-level household financial data, the paper finds that households’ spending markedly increased as initial news about the spread of COVID-19 in their local area intensified. The initial increase in spending suggests that households were attempting to stockpile essential goods in anticipation of current and future disruptions in their ability to frequent local retailers.  

Meanwhile, as COVID-19 spread, more households remained at home, sharply decreasing their spending at restaurants and retail stores and their purchases of air travel and public transportation. These effects are magnified for households in states that issued “shelter-in-place” orders, with increases in grocery spending nearly three times larger, and decreases in discretionary spending (i.e., restaurants, retail, air travel, and public transport) twice as large, relative to households located in states without such orders. Lastly, the paper finds (perhaps surprisingly) that Republican households, despite telling (Axios’) pollsters that they perceived the COVID-19 threat as “generally exaggerated”, actually outspent Democrat households in the early days of the virus on stockpiling groceries and (less surprisingly) reduced restaurant and retail expenditure less. 

The second article, “Coronavirus: Impact on Stock Prices and Growth Expectations”, by Professors Niels Gormsen (University of Chicago) and Ralph Koijen (University of Chicago), helps to quantify some of the economic costs associated with COVID-19. Employing data from the aggregate equity market and dividend futures, the paper explores how E.U. and U.S. investors’ expectations about economic growth have changed in response to the spread of the COVID-19 virus and subsequent actions by policymakers. The authors forecast that annual growth in dividends is down 28% in the E.U. and 17% in the U.S. Further, their forecasts imply GDP growth is down by 6.3% in the E.U. and 3.8% in the U.S. The lower bound on the change in expected dividends for the E.U. (U.S.) is forecasted to be realized at the two-year horizon at about negative 46% (30%). On the bright side, their estimates imply signs of catch-up growth over the three- to seven-year horizon. Finally, they document that news about economic relief programs and fiscal stimulus tends to increase long-term growth expectations but does very little to improve expectations about short-term growth. 

Adelphe Ekponon, CERF Research Associate, March 2020 

Are Cryptocurrencies Priced in the Cross-Section? A Portfolio Approach

 

Most papers that study the determinants of cryptocurrency prices find no relation to existing market factors. In work in progress, CERF Research Associate Adelphe Ekponon and Kassi Assamoi (liquidity analyst at MUFG Securities and University of Warwick) take a portfolio approach to explore cross-sectional pricing within the crypto-market. At its inception, Bitcoin was meant to be an alternative to fiat currencies. Yet high returns in this market may also have attracted conventional investors looking for new investment and diversification venues. Since Bitcoin, the number of cryptocurrencies has grown to more than 6,000 as of the beginning of 2020, according to coingecko.com. Investors therefore have more choices when they decide to enter the crypto-market, and an incentive to understand how the crypto-market interacts with their existing investments.

 

Their paper relates to two strands of research. The first explores portfolio strategies and cross-sectional pricing to study factors embedded in major asset classes: stocks and/or bonds, as in Fama and French (1989, 1992), Cochrane and Piazzesi (2008), and Koijen, Lustig, and Van Nieuwerburgh (2017); currencies, as in Lustig and Verdelhan (2005); and commodities, as in Fama and French (1987) and Bakshi, Gao, and Rossi (2015). The second examines the determinants of cryptocurrency prices and returns; see, among others, Canh et al. (2019), Liu and Tsyvinski (2018), Balcilar et al. (2017), Bouri et al. (2016), and Yermack (2015). These studies find that cryptocurrencies have no exposure to most market and macroeconomic factors or to currency and commodity markets.

 

In closely related papers, Adam Hayes (2016) uses data from 66 of the most active cryptocurrencies and notes that three of the main price drivers come from the blockchain technology. Bouri et al. (2016) explore, in a time-series analysis, the ability of Bitcoin to hedge against risk embedded in leading stock markets, bonds, oil, gold, the commodity index, and the US dollar index. They conclude that Bitcoin’s ability to hedge is weak but that it is suitable for diversification purposes. Moreover, its hedging and safe-haven properties depend on the horizon.

 

In their study, Ekponon and his co-author examine ten factors from the equity, currency, and commodity markets. The study uses daily quotes for more than 95 cryptocurrencies, from July 17, 2010, to September 9, 2019. They estimate cryptocurrencies’ exposures (betas) to these factors and perform cross-sectional regressions of the cryptocurrencies’ average returns on those exposures. They also build portfolios sorted on the exposure to each factor.
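A compact sketch of that two-step procedure, time-series betas followed by a cross-sectional regression of average returns on the betas, plus a simple beta-sorted spread. The data are randomly generated and the dimensions are hypothetical; the actual study uses the ten named factors and the 95+ coins described above:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 500, 60, 3                        # days, cryptocurrencies, factors (hypothetical sizes)
factors = rng.normal(0, 0.01, (T, K))       # e.g. equity, currency and commodity factor returns
returns = rng.normal(0.001, 0.05, (T, N))   # daily cryptocurrency returns

# Step 1: time-series regressions give each coin's exposures (betas) to the factors.
X = np.column_stack([np.ones(T), factors])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T        # shape (N, K)

# Step 2: cross-sectional regression of average returns on the estimated betas.
avg_ret = returns.mean(axis=0)
Xc = np.column_stack([np.ones(N), betas])
risk_premia = np.linalg.lstsq(Xc, avg_ret, rcond=None)[0][1:]   # one premium estimate per factor
print(risk_premia)

# Portfolio alternative: sort coins on their beta to one factor and compare the extreme quintiles.
order = np.argsort(betas[:, 0])
spread = returns[:, order[-N // 5:]].mean() - returns[:, order[:N // 5]].mean()
print(spread)
```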

 

Their findings confirm most of the previous results and produce some novel insights. Two of the ten factors, size and the commodity index, have a negative and highly significant correlation with the cross-section of cryptocurrency returns. Long-short strategies do not deliver significant returns for all ten factors. Yet they might provide excellent investment opportunities for commodity portfolios and for the size factor: for example, buying (selling) cryptos with a negative (positive) correlation to diversify a commodity portfolio or an investment in blue-chip stocks. As the crypto-market is uncorrelated with market volatility (VIX), these strategies would likely hold up in any state of the economy. Finally, these results support the market participants’ view that cryptocurrencies are still too volatile to serve as a store of value, in the sense that cryptos with a negative sensitivity to safe-haven assets, such as gold or precious metals, are appreciated by investors.

 

References mentioned in this post

 

Bouri, E., Azzi, G., and Dyhrberg A. H. (2016) "On the return-volatility relationship in the Bitcoin market around the price crash of 2013." Available at SSRN 2869855.

 

Hayes, A. (2016) “What Factors Give Cryptocurrencies Their Value: An Empirical Analysis.” Available at SSRN 2579445.

 

Mehrshad Motahari, CERF Research Associate 
February 14, 2020

Artificial Intelligence in Asset Management: Hype or Breakthrough?

Artificial intelligence (AI) has become a major trend and has disrupted most industries in recent years. The financial services sector has been no exception. With the advent of FinTech, which has placed an emphasis on the use of AI, the sector has experienced a revolution in some of its core practices. Asset management is probably the most affected practice and is expected to suffer the highest number of job cuts in the foreseeable future. A sizeable proportion of asset management companies now use AI instead of humans to develop statistical models and run trading and investment platforms.

In a recent article entitled ‘Artificial Intelligence in Asset Management’, CERF Research Associate Mehrshad Motahari and co-authors Söhnke M. Bartram and Jürgen Branke (Warwick Business School, University of Warwick) provide a systematic overview of the wide range of existing and emerging AI applications in asset management and set out some of the key debates. The study focusses on three major areas of asset management in which AI can play a role: portfolio management, trading, and portfolio risk management.

Portfolio management involves making decisions on the allocation of assets to build a portfolio with specific risk and return characteristics. AI techniques improve this process by facilitating fundamental analysis, processing quantitative or textual data, and generating novel investment strategies. Essentially, AI helps produce better estimates of asset returns and risk and solve portfolio optimisation problems under complex constraints. As a result, AI can achieve portfolios with better out-of-sample performance than traditional approaches.

Another popular area for AI applications is trading. The speed and complexity of trades nowadays have made AI techniques an essential part of trading practice. Algorithms can be trained to execute trades automatically on the basis of trading signals, which has given rise to a whole new industry of algorithmic (or algo) trading. In addition, AI techniques can help minimise transaction costs. Many traders have started using algorithms that automatically analyse the market and identify the best time and quantity to trade at any given moment.

Since the 2008 financial crisis, risk management (and compliance) has been at the forefront of asset management practices. With the increasing complexity of financial assets and global markets, traditional risk models may no longer be sufficient. Here, AI techniques that learn and evolve through the use of data can improve the tools required for monitoring risk. Specifically, AI approaches can extract information from various sources of structured or unstructured data more efficiently than traditional techniques and produce more accurate forecasts of bankruptcy and credit risk, market volatility, macroeconomic trends, and financial crises. AI also assists risk managers in the validation and back-testing of risk models.
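As one stylised example of such a forecasting application (simulated borrower data and an off-the-shelf learner, not a model from the paper), a classifier can be trained to predict defaults and evaluated out of sample:

```python
# Illustrative sketch: a machine-learning default-risk classifier on simulated borrower data.
# Feature names and the data-generating process below are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5_000
leverage = rng.uniform(0, 1, n)
profitability = rng.normal(0.05, 0.1, n)
liquidity = rng.uniform(0, 2, n)

# Default probability rises with leverage and falls with profitability and liquidity.
logit = -2 + 3 * leverage - 5 * profitability - 0.5 * liquidity
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([leverage, profitability, liquidity])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("out-of-sample AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```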

AI techniques have also started gaining popularity in new practices, such as robo-advising. This area has gained significant public interest in recent years. Robo-advisers are computer programs that provide investment advice tailored to the needs and preferences of investors. The popularity of robo-advisers stems from their success in democratising investment advisory services by making them less expensive and more accessible to unsophisticated individual investors. It is a particularly attractive tool for young (millennial) and tech-savvy investors. AI can be considered the backbone of robo-advising algorithms, relying heavily on the applications of AI in asset management discussed above.

Alongside these advantages, there are also costs associated with the use of AI approaches. These models are often opaque and complex, making them difficult, if not impossible, for managers to scrutinise. AI models are also highly sensitive to data and may be improperly trained as a result of using poor-quality or inadequate data. Insufficient human supervision can result in systematic crashes, an inability to identify inference errors, and a lack of understanding by investors of investment practices and performance attribution. Last but not least, asset managers need to ask whether the benefits associated with AI can justify its considerable development and implementation costs.

AI is still in its early days in finance and has a long way to go before it can replace humans in all aspects of asset management. What AI does today is limited to automating specific tasks within asset management, often with some form of human intervention at the implementation stage. In fact, there is not much new about the AI techniques used in finance, and they have been around as part of statistics for a long time. Instead, what has led to the recent hype is the availability of vast new data sources and the computing power to extract information from them. AI’s ability to capture complex and nonlinear relationships from the ever-growing volumes of data, including textual ones that are relatively time-consuming for humans to analyse, has proven to be highly beneficial. One can imagine that AI’s footprint will only increase as asset managers compete for more information at higher speeds. Hype or not, AI is here to stay, and its heyday is yet to come.

References

Bartram, S. M., Branke, J., and Motahari, M. (2020) “Artificial Intelligence in Asset Management.” Cambridge Judge Business School Working Paper No. 01/2020.

 

Argyris Tsiaras, CERF Research Associate, January 2020

Understanding the Cross-Section of International Equity Markets

A large literature in international finance has established the relevance of a wide array of frictions in cross-border financial investment, which lead to the concentration of equity investments within national borders (home bias in equity portfolios) and to large biases in the composition of investors’ foreign equity portfolios (foreign bias). Moreover, despite the increasing integration of international equity markets in recent decades, asymmetries in bilateral return comovement between equity markets remain large. In a working paper entitled “Asset Pricing of International Equity under Cross-Border Investment Frictions”, recently presented at the 2020 American Finance Association meetings in San Diego, CERF research associate Argyris Tsiaras and collaborator Thummim Cho (LSE) undertake a systematic theoretical investigation of how the cross-sections of equity returns and portfolio holdings across countries are jointly shaped by investment frictions and other characteristics of individual countries or equity markets, such as market size or the comovement of cash-flow fundamentals.

Overall, the authors argue that cross-country variation in the degree of cross-border investment frictions is the most important determinant of the cross-sections of equity return moments and of cross-border equity portfolio allocations. The paper investigates the implications of this observation for the literature on international asset pricing models, most of which are still tested under the assumption of frictionless cross-border investing.

The authors establish three robust empirical regularities (stylized facts) in the cross-section of international equities. First, equity markets whose returns are more highly correlated with the global equity market also have greater foreign investor presence. In particular, the share of a stock market held by U.S. investors, henceforth referred to as the U.S. investor (cross-border) position, has strong explanatory power for the cross-country variation in correlations of an equity market’s excess return with the U.S. market return. In their sample of 40 countries, the U.S. investor position in a country averaged over 2000-2017 explains about 40% of the cross-sectional variation in the return correlations over the same period. Importantly, the relative size of the equity markets and indicators of real-sector comovement, such as the size of bilateral trade and the GDP correlation between the country and the U.S., are unable to account for the cross-section of return comovement. These patterns are hard to reconcile with standard portfolio choice models under frictionless access to international equity markets, which typically predict that investors wish to avoid large positions in assets that are highly correlated with their overall portfolio return.
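To make the regression behind this fact concrete, the sketch below runs the cross-sectional OLS of return correlations on U.S. investor positions using simulated data (the paper, by contrast, uses actual data for 40 countries over 2000-2017):

```python
# Illustrative sketch of the first stylized fact: regress each country's return
# correlation with the U.S. market on the U.S. investor position in that market.
# All numbers below are simulated, not the paper's estimates.
import numpy as np

rng = np.random.default_rng(2)
n_countries = 40
us_position = rng.uniform(0.0, 0.4, n_countries)       # share of market held by U.S. investors
corr_with_us = 0.3 + 1.2 * us_position + rng.normal(0, 0.1, n_countries)

# Simple cross-sectional OLS: corr_i = a + b * position_i + e_i
X = np.column_stack([np.ones(n_countries), us_position])
coef = np.linalg.lstsq(X, corr_with_us, rcond=None)[0]
fitted = X @ coef
r2 = 1 - np.sum((corr_with_us - fitted) ** 2) / np.sum((corr_with_us - corr_with_us.mean()) ** 2)
print("slope:", round(coef[1], 2), " cross-sectional R^2:", round(r2, 2))
```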

Second, equity markets whose returns comove less with the global (or U.S.) equity market appear to have larger pricing errors with respect to the global Capital Asset Pricing Model (CAPM) and other multi-factor international asset pricing models. As a result, the security market line (average returns versus betas) in global equity markets appears to be flat or even negative, pointing to a puzzlingly low, or even negative, price of global market risk. Combining this regularity with the first stylized fact, international equity investors have low market positions in markets with high apparent expected returns and low global risk, an observation hard to reconcile with the predictions of frictionless portfolio choice models. Third, investors based in countries that comove less with the global (or U.S.) equity market have equity portfolios that are more biased towards domestic stocks (greater “home bias”).
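In standard notation (ours, not reproduced from the paper), the regression and security market line underlying this fact are:

```latex
% Global CAPM time-series regression for country i's equity market (standard notation):
\[
  R_{i,t} - R_{f,t} \;=\; \alpha_i + \beta_i \,\bigl(R_{m,t} - R_{f,t}\bigr) + \varepsilon_{i,t},
\]
% where R_{m,t} is the global market return. The security market line then relates
% average excess returns to betas,
\[
  \mathbb{E}\bigl[R_{i} - R_{f}\bigr] \;=\; \lambda \,\beta_i ,
\]
% and the second stylized fact is that the estimated price of global market risk
% \lambda is close to zero or negative, with \alpha_i decreasing in \beta_i.
```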

 To rationalize these empirical patterns, the authors develop a general-equilibrium model of the global economy featuring heterogeneity across countries in cross-border financial investment frictions, modeled in reduced form as proportional holding costs, as well as rich heterogeneity in other potentially relevant aspects, such as risk preferences or cash-flow fundamentals. In the model, the activity of foreign investors in a country’s equity market amplifies return volatility relative to volatility in cash-flow fundamentals and causes fluctuations in countries’ valuation ratios. Importantly, the magnitude of this amplification is decreasing in the holding cost incurred by foreign investors, so that heterogeneity in holding cost across countries translates into heterogeneity in the degree of equity market return comovement with the large market (first stylized fact).
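As a purely illustrative reading of that reduced form (the paper’s exact specification may differ), a proportional holding cost lowers the return a foreign investor earns net of costs:

```latex
% Illustrative reduced form: a foreign investor paying a proportional holding cost
% \tau_i on market i earns
\[
  R^{\text{foreign}}_{i,t} \;=\; R_{i,t} - \tau_i ,
\]
% so foreign demand for market i, and hence the amplification of its return volatility
% and comovement with the global market, is decreasing in \tau_i.
```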

 

The model also explains the negative relationship between CAPM alphas and betas (second stylized fact), because the high apparent average returns on the stock markets of countries with low return correlations are not in fact attainable by foreign investors in these countries. Because countries with high holding costs, and thus high CAPM alphas, have endogenously low return correlations with global equity markets, a test of the standard market model, which only allows for a uniform intercept across all equity markets, yields a flat security market line and a deceptively low, or even negative, price of global market risk. Finally, high holding costs in a country’s equity market imply a large degree of home bias in the equity portfolio of investors based in that country mainly because high frictions to foreign investors in the local market in equilibrium translate into a comparative advantage of the local market relative to foreign markets as a financial investment for local investors. The impact of holding costs on the endogenous wealth of local investors amplifies the negative impact of local-investor home bias on the foreign position in the local equity market.

 Reference mentioned in this post

Cho, T. and Tsiaras, A. (2020) “Asset Pricing of International Equity under Cross-Border Investment Frictions.” Working Paper.