
CERF Blog

The CERF blog features short articles about current research and other relevant topics, written by CERF’s Fellows and researchers.

Oğuzhan Karakaş – CERF Fellow – October 2019

To Vote, or Not to Vote, That is the Question

With the advent of financial engineering and technology, the fabric of financial securities is changing. While this change has certain advantages, such as bringing costs down, it also has unintended consequences, such as impairing the voting rights associated with the securities. A potential reason underlying this issue is an oversight in the design of new financial securities and the underlying regulations: in contrast with cash flow rights, the non-cash-flow-related contractual rights of securities, including the right to vote or the right to sue, tend to be overlooked.

Contemporaneously, the U.S. Securities and Exchange Commission (SEC) has been inquiring into and debating “proxy plumbing” – the extensive problems, ranging from over-voting to under-voting, associated with the complex, dated, and inefficient infrastructure supporting the proxy voting system. A recent recommendation of the SEC Investor Advisory Committee on proxy plumbing argues that SEC intervention is necessary for an overhaul of the system.[1]

In the article “Phantom of the Opera: ETF Shorting and Shareholder Voting”, CERF Fellow Oğuzhan Karakaş and research collaborators Richard Evans (University of Virginia), Rabih Moussawi (Villanova University), and Michael Young (University of Virginia) find that short-selling of Exchange Traded Funds (ETFs) leads to “phantom shares” of the underlying that are not voted. This unintended consequence is due to the underlying shares being held as collateral or a hedge by the securities lenders or authorized participants/broker-dealers. The authors show that phantom shares (i) are costly, since they do not convey voting rights to the ETF owners but are sold at the full share price, which reflects both cash flow rights and voting rights; (ii) create inefficiencies within the voting process by leading to under-voting; (iii) are positively related to the voting premium, particularly during contentious votes; and (iv) are associated with poor governance, such as value-reducing acquisitions.

Regulatory concerns regarding the above-mentioned findings would arguably be even more pronounced during times when the market is bearish and/or when corporate votes are very valuable. A solution could be to incorporate distributed ledger technology (commonly known as “blockchain”) into the proxy system, as also discussed in the recommendation of the SEC Investor Advisory Committee on proxy plumbing.

References mentioned in this post

  • Evans, R.B., O. Karakaş, R. Moussawi, and M. Young. 2019. Phantom of the Opera: ETF Shorting and Shareholder Voting. Working Paper, University of Virginia, University of Cambridge and Villanova University.

Scott B. Guernsey, CERF Research Associate, September 2019

FinTech Disruption: Is it Good or Bad for Consumers?

 

Financial technology (“FinTech”) is a rapidly growing industry that applies recent digital innovations and technology-enabled business model innovations to financial services. A common example is its application of smartphone technologies to banking. For instance, from the convenience of a mobile phone, FinTech consumers can access depository accounts, transfer funds, request loans, and pay monthly bills. Correspondingly, the emergence of the FinTech industry has expanded the accessibility of many financial services to the general public.

Recent regulation in the EU (the Second Payment Services Directive – PSD2) and the UK (the Open Banking initiative) suggests that policy makers generally regard FinTech’s entrance into the financial services industry favourably.[1] Mandated by these respective legislative actions, traditional banks must release data on their customers’ accounts to authorized FinTech firms, with the aim of opening “up payment markets”, “leading to more competition, greater choice and better prices for consumers” (Summary of Directive (EU) 2015/2366 on EU-wide payment services). But is the competition/disruption created by FinTech firms in financial services’ markets always in the interests of consumers? And what role does the portability of data – as required by the PSD2 and the Open Banking initiative – play in these markets?

A recent research article presented at this year’s Cambridge Corporate Finance Theory Symposium by Professor Uday Rajan (University of Michigan) demonstrates the complex effects that may arise when a FinTech entrant and an incumbent bank compete in the market for payments processing. The paper begins by underscoring two important functions that a bank provides to consumers: (i) it processes their everyday payments (e.g., recurring bills), and (ii) it offers them loans when requested. Intuitively, these two financial services are interconnected, as the transaction data created from processing payments enables the bank to be informed about its consumers’ credit quality. This information externality makes the bank better off and incentivizes it to bundle payment services and consumer loans. More surprisingly, the paper finds that consumers can also gain from the bank having their information, as more creditworthy consumers are offered better interest rates on their loans.

From this starting point, Professor Rajan (and co-authors, Professors Christine Parlour and Haoxiang Zhu) then shows that competition from FinTech firms, which act purely as payment processors, can disrupt the bank’s information flow. Consequently, the bank loses market share and consumer information, and becomes less profitable. Additionally, consumers that might need a loan can also suffer from this lost information. Moreover, the entrance of a FinTech firm can either decrease or, quite surprisingly, increase the price the bank charges for its payment services. The latter occurs if the bank opts to focus its payment business on the population of consumers that are more reliant on (or have a greater affinity for) brick-and-mortar banks, and thus are more tolerant of higher prices. Conversely, the consumer population that is more technologically sophisticated and willing to use FinTech services experiences the greatest gains, as its costs for payment services are reduced by the added competition.

The authors then apply their model to a world in which consumers are given complete ownership and portability of their payment data. They show that this policy effectively unbundles a bank’s payment services from its bank loans, which in turn has different ramifications for different consumers. On the one hand, a certain subset of the consumer population that is more technologically sophisticated and less reliant on a traditional banking relationship is made better off via more choice and lower prices. On the other hand, consumers that have a greater affinity for banks and that are less technologically sophisticated can be hurt by policies that mandate portability of their data because the bank will exploit this smaller group of bank-reliant consumers, charging a higher price for its payment services. These key results underline both the good and bad of FinTech disruption and the likely heterogeneous effects of PSD2 and the Open Banking initiative on consumer welfare.

Adelphe Ekponon, CERF Research Associate, August 2019

Agency Conflicts and Costs of Equity

The agency problem, in the context of the separation of ownership (shareholders, or principals) and control (managers, or agents), is one of the most important issues in corporate finance.

This separation may induce the conflicts of interest inherent in any relationship where an agent is expected to work in the best interests of a principal. In the case of a company, these conflicts of interest arise when executives, or more generally insiders (which could include controlling shareholders), favour their own interests at the expense of the company's goals.

 

There are various manifestations of this behaviour. Managers may appropriate part of the profits, sell the firm’s output or assets to their own businesses at below fair value, divert profitable growth options, or recruit unqualified relatives to high positions. See Jensen and Meckling (1976), La Porta et al. (2000, 2002), and Lambrecht and Myers (2008).

 

The impacts of self-interested management on corporate choices and asset prices have been extensively documented in theoretical and empirical work. These studies find that entrenched managers tend to underinvest and choose lower leverage. In response, shareholders may force them to increase leverage, because coupon payments reduce the firm’s free cash flow, which limits the amount available for cash diversion. Debt can therefore be used as a tool to discipline managers. Entrenched managers can also resist hostile takeovers and push for the adoption of provisions that reduce shareholder rights.

 

All these frictions reduce not only profits but also operational efficiency, and they affect equity prices and volatility. To measure the impact of agency costs on equity prices, Gompers, Ishii, and Metrick (2003) and Bebchuk, Cohen, and Ferrell (2009) constructed indexes, the G-index and the E-index respectively, to measure the balance of power between shareholders and managers. High index levels (extensive management power) translate into high agency costs. They document that increases in these index levels are associated with economically significant reductions in firm value, profits, and equity prices during the 1990s.

 

Most theoretical papers that study the impact of agency conflicts on asset prices do not emphasize its influence on the cost of equity. Empirical papers, meanwhile, focus only on the level of severity of the conflict.

 

A working paper by CERF Research Associate Adelphe Ekponon proposes a theoretical approach, and provides empirical evidence, showing that time-series fluctuations in this conflict also have the potential to explain cross-sectional differences in equity prices. Specifically, the difference between average index values in bad times and in normal periods is positively correlated with the cost of equity, even after controlling for prominent market factors. Data are from 1990 to 2006.

 

The most important economic implications of this result are twofold: first, firms with a countercyclical governance policy (better governance in bad times) have a lower cost of equity; second, changes in governance practices in bad vs. good times are a pricing factor for stocks.

 

Interestingly, the paper shows that these results are closely linked to manager-shareholder conflicts, as it documents a U-shaped relationship between changes in the G-index and the cost of equity (too many restrictions in bad times create conflicts and impede managers’ ability to run the company efficiently), while this relationship is linear for the E-index. The latter index is constructed from a subset of the G-index that focuses on managerial entrenchment.

References mentioned in this post

 

Bebchuk, L., Cohen, A. and Ferrell, A. (2009), What matters in corporate governance?, Review of Financial Studies 22(2), 783–827.

Gompers, P., Ishii, J. and Metrick, A. (2003), Corporate governance and equity prices, Quarterly Journal of Economics 118(1), 107–156.

Jensen, M. C. and Meckling, W. H. (1976), Theory of the firm: Managerial behavior, agency costs and ownership structure, Journal of Financial Economics 3(4), 305–360.

Lambrecht, B. M. and Myers, S. C. (2008), Debt and managerial rents in a real-options model of the firm, Journal of Financial Economics 89(2), 209–231.

La Porta, R., Lopez-de-Silanes, F., Shleifer, A. and Vishny, R. (2000), Investor protection and corporate governance, Journal of Financial Economics 58(1-2), 3–27.

La Porta, R., Lopez-de-Silanes, F., Shleifer, A. and Vishny, R. (2002), Investor protection and corporate valuation, Journal of Finance 57(3), 1147–1170.

Dr. Hui Xu, CERF Research Associate, July 2019

What Determines Cryptocurrencies’ Expected Returns?

Since their emergence, cryptocurrencies have quickly become a focus of asset managers. Although many ongoing debates about cryptocurrencies remain unresolved – e.g. whether their value can be justified, and how they relate to the fiat money issued by central banks – they do offer an alternative opportunity for investors to diversify their portfolios. Yet before constructing a portfolio that includes cryptocurrencies, a question has to be answered: what is their risk and return profile, and what are its determinants?

Since stock markets are well developed and thoroughly studied, it is natural to ask whether the factors that successfully account for stock returns also apply to the cryptocurrency market. Although shares and cryptocurrencies are fundamentally different, they share a fair number of similarities. In particular, some cryptocurrencies (digital tokens) represent a claim on the issuer. In 1992, Eugene F. Fama and Kenneth R. French found that size risk and value risk can account for stock returns in addition to the well-known beta, based on the evidence that value and small-cap stocks outperform the market on a regular basis. Mark Carhart augmented the three-factor model with a momentum factor that describes the tendency of a stock price to continue rising if it is going up and to continue declining if it is going down.
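For reference, the resulting Carhart four-factor model is typically written as follows (standard notation, not taken from the paper discussed below):

$$ r_{i,t} - r_{f,t} = \alpha_i + \beta_i \,(r_{m,t} - r_{f,t}) + s_i\,\mathrm{SMB}_t + h_i\,\mathrm{HML}_t + m_i\,\mathrm{MOM}_t + \varepsilon_{i,t} $$

where SMB and HML are the size and value factors and MOM is the momentum factor.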

A recent NBER working paper by Aleh Tsyvinski et al. tested this idea and showed that most of the factors with strong explanatory power in the stock market – namely beta, size, and momentum – also capture the cross-section of expected cryptocurrency returns. Since the advent of cryptocurrencies, many researchers have studied their expected returns and suggested many explanatory factors. Interestingly, the authors showed that all of the excess returns generated by the trading strategies implied by previous studies can, in fact, be accounted for by their cryptocurrency three-factor model.

Further inquiries are whether there exists a “twin” value factor in the cryptocurrency market, why the size and momentum factors are so mysteriously powerful, and whether they affect cryptocurrency returns the same way they do stock returns. One thing is for sure: as the cryptocurrency market continues to burgeon, all these questions will eventually be answered.

 

Scott B. Guernsey, CERF Research Associate, June 2019

Shadow Pills and Visible Value

The “poison pill” (formally known as a “shareholder rights plan”) has a long and contentious history in the United States as a tactic to deter takeovers.[1] While details can vary across implementations, the key defensive mechanism of the pill provides existing shareholders with stock purchase rights that entitle them to acquire newly issued shares at a substantial discount in the “trigger” event that a hostile bidder obtains more than a pre-specified percentage of the company’s outstanding shares (e.g., 10-15%).[2] As a result, poison pills give a firm’s board of directors the ability to substantially dilute the ownership stake of a hostile bidder, de facto giving the board veto power over any hostile acquisition.
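As a stylised illustration of these dilution mechanics, consider the following back-of-the-envelope arithmetic (hypothetical numbers, not taken from any actual rights plan):

```python
# Stylised flip-in pill arithmetic with hypothetical numbers.
shares_out = 100_000_000
bidder_shares = 0.15 * shares_out        # bidder crosses a 15% trigger
other_shares = shares_out - bidder_shares

# Suppose each non-bidder share carries a right to buy one new share at a
# deep discount, while the bidder's rights are void; if all rights are
# exercised, the non-bidder share count doubles.
new_total = bidder_shares + 2 * other_shares
print(f"Bidder diluted from 15.0% to {bidder_shares / new_total:.1%}")  # ~8.1%
```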

Correspondingly, law and finance scholars generally agree that the poison pill is perhaps the most powerful anti-takeover defense (e.g., Malatesta and Walkling 1988; Ryngaert 1988; Comment and Schwert 1995; Coates 2000; Cremers and Ferrell 2014). However, whether a firm’s managers use the poison pill to the benefit or detriment of its shareholders is the subject of an enduring debate in both the corporate finance literature and U.S. state courts.

Prior empirical studies have attempted to investigate the value implications of a firm’s decision to employ a poison pill as a strategy to deter takeovers. While earlier findings were mixed, over the past decade most studies have found that the adoption of a pill is negatively associated with firm value (e.g., Bebchuk, Cohen and Ferrell 2009; Cuñat, Gine, and Guadalupe 2012; Cremers and Ferrell 2014). Unfortunately, this result is challenging to interpret, as the choice to adopt a pill is endogenous – the finding might mean, for example, that a firm was losing value and adopted a pill in response, rather than that the adoption of the pill lowered firm value. Adding to researchers’ difficulty, since poison pills can be unilaterally adopted by a firm’s board of directors, even firms that do not currently have a poison pill in place retain the right to adopt one at any time – a right scholars term a “shadow pill” (Coates 2000).

In the article “Shadow Pills and Long-Term Firm Value”, CERF Research Associate Scott Guernsey, and research collaborators Martijn Cremers (University of Notre Dame), Lubomir Litov (University of Oklahoma), and Simone Sepe (University of Arizona), contribute to the debate on the value implications of the poison pill by shifting the focus from “visible” (or realized) pills to shadow pills – that is, studying the effect that arises from the right to adopt a poison pill rather than its actual adoption. To do this empirically, the study’s tests focus on U.S. state-level poison pill laws (“PPLs”) – enacted by 35 states between 1986 and 2009 – which legally validated the use of the pill, hence strengthening these firms’ shadow pill.

Using the staggered enactment of PPLs by different states in different years, the authors find that firms incorporated in states with a stronger shadow pill experience significant increases in firm value, especially firms with stronger stakeholder relationships (e.g., with a large customer or in a strategic alliance) and firms more engaged in innovation (e.g., with R&D investments or patents). Additionally, the study confirms the prior literature’s finding of a negative correlation between firm value and actual pill adoption.
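In generic terms, this identification relies on a staggered difference-in-differences design; a stylised specification (a sketch in standard notation, not the paper’s exact regression) is:

$$ Q_{i,t} = \beta\,\mathrm{PPL}_{s,t} + \gamma_i + \delta_t + \epsilon_{i,t} $$

where $Q_{i,t}$ is the value of firm $i$ in year $t$, $\mathrm{PPL}_{s,t}$ equals one once firm $i$’s state of incorporation $s$ has enacted a poison pill law, and $\gamma_i$ and $\delta_t$ are firm and year fixed effects, so that $\beta$ captures the effect of a stronger shadow pill on firm value.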

Overall, the authors’ findings suggest that a stronger shadow pill can benefit certain firms’ shareholders even if a visible pill does not. For these firms, the right to adopt a pill can serve a function of good corporate governance: it credibly signals the firm’s commitment to more stable stakeholder relationships and/or longer-term investment projects by protecting them against potential disruptions from short-term shareholder interference via the takeover market.

References mentioned in this post

Bebchuk, L., A. Cohen, and A. Ferrell. 2009. What matters in corporate governance? Review of Financial Studies 22:783-827.

Coates IV, J.C. 2000. Takeover defenses in the shadow of the pill: A critique of the scientific evidence. Texas Law Review 79:271-382.

Comment, R., and G.W. Schwert. 1995. Poison or placebo? Evidence on the deterrence and wealth effects of modern antitakeover measures. Journal of Financial Economics 39:3-43.

Cremers, M., and A. Ferrell. 2014. Thirty years of shareholder rights and firm value. Journal of Finance 69:1167-96.

Cuñat, V., M. Gine, and M. Guadalupe. 2012. The vote is cast: The effect of corporate governance on shareholder value. Journal of Finance 67:1943-77.

Malatesta, P.H., and R.A. Walkling. 1988. Poison pill securities: Stockholder wealth, profitability, and ownership structure. Journal of Financial Economics 20:347-76.

Ryngaert, M. 1988. The effect of poison pill securities on shareholder wealth. Journal of Financial Economics 20:377-417.

Slaughter and May. 2010. A guide to takeovers in the United Kingdom. http://www.slaughterandmay.com/media/39320/a_guide_to_takeovers_in_the_uk_mar_2010.pdf



[1] The use of the poison pill is not permitted in the U.K. because: (i) it is viewed as a breach of fiduciary duty, and (ii) it is disallowed by General Principle 3 and Rule 21 of the City Code (Slaughter and May 2010).

[2] This describes the “flip-in” poison pill, which has become the most common form in the U.S.; for other variants see: “preferred stock plans,” “flip-over” poison pills, “back-end rights plans,” “golden handcuffs,” and “voting plans.”

Adelphe Ekponon, CERF Research Associate, May 2019

A Corporate Finance Model for Cryptocurrencies

 

After the credit crunch of 2008-09, you may have heard of Bitcoin, or cryptocurrencies in general. Bitcoin and altcoins (the term used for all other cryptocurrencies) are digital currencies built on distributed ledger technologies such as blockchain, and so are not regulated by a central authority. Whether cryptocurrencies are perceived as currencies by authorities, treated as financial assets by investors and regulators, or used as security or utility tokens, it is clear that the digital currency market is ‘a small but growing market’, as commented by Christopher Woolard, Executive Director of Strategy and Competition at the FCA (UK Financial Conduct Authority) [1].

 

Since Bitcoin, invented by the pseudonymous Satoshi Nakamoto, more than 2,000 other altcoins have been created for various purposes [2]. The market capitalization of the largest 100 cryptocurrencies has increased from $1.5 billion in 2013 to $250 billion in May 2019, with a peak of $795 billion in January 2018. With such interest from not only individual consumers but also business users, authorities in countries such as the UK, France, Switzerland, South Korea, and the United States have initiated regulatory sandboxes, issued consumer guidance (as in the UK), or created high-level government task forces to investigate the technology and its regulatory implications.

Authorities around the world, including governments and central banks, often remain skeptical about digital currencies – rightfully so, because many questions, whether on the scalability of the underlying technology [3] or on the nature of crypto assets, require investigation and clarification before any wider adoption of the technology.

 

As mentioned above, the prices of cryptoassets on exchanges show extremely high volatility compared to, for instance, the equity market. Authorities and trading exchanges often warn that prices can fall to zero overnight. This raises the question of whether a cryptoasset has a fundamental value that can sustain its market price, or whether crypto prices follow a completely different pricing model that needs to be investigated.

 

In ongoing work, CERF Research Associate A. Ekponon and K. Assamoi (Liquidity analyst at MUFG Securities) propose a corporate finance model for the pricing of cryptocurrencies.

 

First, they model the scale level of a cryptofirm, e.g. Bitcoin or Ethereum, following Bhambhwani et al. (2019) and Hayes (2015). This scale is assumed to be constant but may change (infrequently) up or down over time, following fundamentals. How cryptofirms should be classified as investments remains to be clarified; this paper takes the view that a cryptocurrency is a financial asset [4].

If so, the overall activity around a cryptofirm can be translated into a standard firm setting. Fundamental values represent the initial cash-flow level whenever the firm changes scale. Miners, whose work consists in validating peer-to-peer transactions, represent labour. Successful crypto mining rewards miners with new bitcoins (block rewards) and improves trust in, and the security of, the technology, making the cryptofirm more valuable. Validating transactions is also rewarded by bitcoin users through transaction fees. Rewards to miners constitute wages, and the computation costs incurred by miners are also accounted for.

 

Second, the article assumes that a cryptocurrency corresponds to the firm’s equity and that its cash flow evolves around its fundamental levels following standard Brownian motions.
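One standard way to write such dynamics, as a sketch consistent with this description (the paper’s exact specification may differ), is a geometric Brownian motion that restarts from the prevailing fundamental level whenever the firm changes scale:

$$ dX_t = \mu X_t\,dt + \sigma X_t\,dW_t, \qquad X_{t_s} = X^{(s)}, $$

where $X_t$ is the cash flow, $\mu$ and $\sigma$ are its growth rate and volatility, $W_t$ is a standard Brownian motion, and $X^{(s)}$ is the fundamental level set at the latest scale change at time $t_s$.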

 

Third, the optimal levels of the firm’s fundamentals (among them the difficulty of validating transactions, the rate of unit production, the cryptographic algorithm employed, and the aggregate computing power), as well as the cryptocurrency price, are derived using methods from dynamic models of corporate finance (see Strebulaev, 2007).

The paper’s findings are tested with daily prices of more than 100 of the most actively traded cryptocurrencies. Results from the model’s implications and empirical tests will be detailed in a future blog post.

References mentioned in this post

 

Bhambhwani, S., Delikouras, S. and Korniotis, G. M., Do Fundamentals Drive Cryptocurrency Prices? (May 9, 2019). Available at SSRN: https://ssrn.com/abstract=3342842.

 

Hayes, Adam, What Factors Give Cryptocurrencies Their Value: An Empirical Analysis (March 16, 2015). Available at SSRN: https://ssrn.com/abstract=2579445.

 

Strebulaev, I. A., 2007, Do tests of capital structure mean what they say? Journal of Finance 62, 1747–1787.

 

[1] https://www.fca.org.uk/news/press-releases/fca-consults-cryptoassets-guidance

 

[2] https://coinmarketcap.com/all/views/all/

 

[3] EU Blockchain Observatory, Overview and Guiding on Blockchain Scalability and Security Topics, Working Group Blockchain/ICO, 2018, recommendations on future regulations.

 

[4] 2nd Global Cryptoasset Benchmarking Study, Cambridge Centre for Alternative Finance.

 

Thies Lindenthal, CERF Fellow, Land Economy, May 2019

Machine Learning, Building Vintage and Property Values

Sometimes, all you need is a bit of luck. Erik Johnson (University of Alabama) and I had explored a new way to integrate images from Google Street View as additional input to automatic real estate valuation systems. Writing up the working paper[1], we were looking for relevant policy implications beyond the mundane goal of boosting price prediction accuracy. We struggled. But then the head of the UK’s Building Better, Building Beautiful Commission went on the record, claiming that Britain’s housing supply constraints will evaporate if only developers build “as our Georgian and Victorian forebearers built […] All objections to new building would slip away in the sheer relief of the public”[2]. The research we had done enabled us to put this refreshing view to the test (and to add a policy dimension to the paper).

In a nutshell, our approach automates a process that those of us who have been trying to find a place to rent or buy are surely familiar with: to learn more about a potentially interesting home, one looks it up on Google Street View and tries to infer additional information from the images of the building itself, and also to get a feeling for the neighbourhood. Street-level images are a rich data source, answering many questions: How big are the property and garden? How old is the house? Is the exterior well-kept? Does the house have charm? Is its architecture pleasing to the subjective eye? And much more. The challenge is to automatically identify the correct building on Street View, take the best possible picture, and classify the property along several dimensions using computer vision (CV) and machine learning (ML) techniques.

Extracting images of individual buildings from Street View was a bigger challenge than expected. Google’s address information is often a relatively broad guess in the UK. Try finding, e.g., “84 Vinery Road, Cambridge, CB1 3DT” on Street View to experience the problem yourself. Using exact maps from the Ordnance Survey, we solve this more technical first step and collect front images of practically all residential homes in Cambridge.
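For readers curious about the mechanics of this collection step, a minimal sketch is given below, using the Google Street View Static API with a raw address string; the API key, address, and output path are placeholders, and the actual pipeline locates buildings via Ordnance Survey footprints rather than address lookups.

```python
# Minimal sketch: download one facade image via the Street View Static API.
# API key and output path are placeholders; the real pipeline resolves
# building positions from Ordnance Survey maps instead of address strings.
import requests

API_KEY = "YOUR_API_KEY"

def fetch_facade(address: str, out_path: str) -> None:
    """Save a 640x640 Street View image centred on the given address."""
    params = {
        "size": "640x640",
        "location": address,
        "fov": 80,  # a narrower field of view frames a single house
        "key": API_KEY,
    }
    resp = requests.get("https://maps.googleapis.com/maps/api/streetview",
                        params=params)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

fetch_facade("84 Vinery Road, Cambridge, CB1 3DT", "vinery_road_84.jpg")
```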

 

In the ML application, we initially focus on training a classifier for the vintage of buildings. According to colleagues from the architecture department, local houses can be classified into seven broad eras. Georgian (c1714–1837) houses feature key characteristics such as sash windows, fan lights above doors, the use of stucco on facades, and often wrought-iron grilles, railings, etc. In the Early Victorian era (c1837–c1870s), a growing taste for individualized embellishment led to the development of elaborate features such as carved barge boards or finials. The development of sheet glass led to sash windows becoming more affordable and, increasingly, wider. In the Late Victorian era (c1870s–1901), bay windows became more widespread and increasingly substantial. Edwardian architecture (1901–1910) tends to be less ornate than late Victorian architecture. The Interwar period (1918–1939) saw the cost of building construction fall, amidst a drive to provide better housing for the working classes, and new housing types were favoured. The Postwar (1950–1980) era continued on this path, with an embrace of high-rise as well as low-rise housing; facades vary greatly between brick, tiling, pebbledash and render. Our cut-off year for the Contemporary era to begin is 1980, and Revival denotes contemporary buildings that try to emulate historical architecture. It should be self-evident that the sheer amount of detail and variation defies a simplistic classification approach.

We suggest a transfer learning approach in which the images are first translated into high-dimensional feature vectors using an existing CV model (Inception V3[3]). A classifier is then trained to categorise the buildings into vintages based on the feature vectors (softmax). A true innovation of our approach is that we include information on neighbouring buildings in the classification, exploiting the spatial dependency of building vintages.

 

Note: Feature vectors generated by Inception V3 have 2,048 dimensions, which favours an ML approach (in contrast to, e.g., multinomial logit regressions) in the classification step.
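A minimal sketch of this transfer-learning step, assuming TensorFlow/Keras and scikit-learn and omitting the neighbouring-building features, might look as follows (illustrative only, not the actual research code):

```python
# Sketch of the transfer-learning step: Inception V3 as a fixed feature
# extractor (2,048-dim vectors), followed by a softmax classifier.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

NUM_CLASSES = 8  # assumption: seven broad eras plus Revival

# Inception V3 without its classification head; global average pooling
# turns each image into one 2,048-dimensional feature vector.
extractor = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")

def featurise(images):
    """Map a batch of (299, 299, 3) facade images to 2,048-d feature vectors."""
    x = tf.keras.applications.inception_v3.preprocess_input(
        images.astype("float32"))
    return extractor.predict(x, verbose=0)

# Placeholder batch standing in for the expert-labelled facade images.
images = np.random.rand(32, 299, 299, 3) * 255.0
labels = np.random.randint(0, NUM_CLASSES, size=32)

# Softmax (multinomial logistic) classifier trained on the frozen features.
clf = LogisticRegression(max_iter=1000).fit(featurise(images), labels)
predicted_vintages = clf.predict(featurise(images))
```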

 

Two final-year architecture students classified a large sub-sample of approximately 25,000 images from our data set of Cambridge houses. This is a much larger sample than ultimately needed: in our case, each category requires fewer than 250 samples before additional observations yield almost no further gains in training accuracy. We greatly exceed this number so that we can compare the out-of-sample convolutional neural network predictions to the ground truth assigned by the experts. This allows us to examine the power and size of the assignment tests. In addition, having both human and machine classifications for a large sample of the data allows for robustness checks on the machine comparisons. The accuracy of the automatic prediction is high (Table 1): a machine can relatively reliably tell different building vintages apart; even Revival styles are detected. All this comes at modest cost: classifying the universe of buildings in Cambridge takes only seconds on a contemporary laptop.

Table 1: Confusion matrix – Predicted vintage vs. ground truth

 

Note: Recall is the share of buildings from a ground-truth category that are predicted correctly (the diagonal in the mid panel), and Precision is the share of buildings predicted to belong to a category that are indeed from that category. The F1-score is the harmonic mean of Precision and Recall: F1-score = 2 × Recall × Precision / (Recall + Precision).
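For concreteness, here is how these scores would be computed from a confusion matrix (a toy 3×3 example with made-up counts, not the values in Table 1):

```python
# Recall, Precision and F1 from a confusion matrix (rows: ground truth,
# columns: predictions). Toy 3x3 counts, not the values in Table 1.
import numpy as np

cm = np.array([[50, 5, 2],
               [4, 60, 6],
               [1, 7, 40]])

recall = np.diag(cm) / cm.sum(axis=1)     # correct share per true class
precision = np.diag(cm) / cm.sum(axis=0)  # correct share per predicted class
f1 = 2 * recall * precision / (recall + precision)
print(recall.round(2), precision.round(2), f1.round(2))
```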

 

Coming back to the claim made by Building Better, Building Beautiful that historic aesthetics are valued by the people: if that were true, buyers should prefer revival architecture over more contemporary designs. Also, buildings with adjacent buildings of historic or revival appearance should command a price premium. However hard we look, we cannot find any evidence for such a preference in actual transaction data. After controlling for a house’s location, size and quality, modern designs are as sought after as replicas of old styles. Not surprisingly, then, reviving the good old times will not solve the housing shortage.

We have to speed up the publication of our paper as much as we can, or we risk losing our policy relevance again: The chairman of the helpful government commission has been fired in the meantime – for reasons not related to our research, though.



[2] Scruton, Roger. 2018. “The Fabric of the City.” Colin Amery Memorial Lecture. Policy Exchange. https://policyexchange.org.uk/wp-content/uploads/2018/11/The-Fabric-of-the-City.pdf

[3] Szegedy, Christian, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. “Rethinking the Inception Architecture for Computer Vision.” https://doi.org/10.1109/CVPR.2016.308

Scott B. Guernsey, CERF Research Associate, March 2019

As described in the article “The Choice between Formal and Informal Intellectual Property: A Review”, published in the Journal of Economic Literature by Bronwyn Hall (University of California, Berkeley), Christian Helmers (Santa Clara University), Mark Rogers (Oxford University), and Vania Sena (University of Essex), the UK Community Innovation Survey suggests that most UK-based companies consider trade secrets one of the most effective mechanisms to protect their intellectual property. Further, the recent passing of the “Trade Secrets (Enforcement, etc.) Regulations 2018” (SI 2018 No. 597) indicates that UK policymakers are also concerned with protecting domestic trade secrets.

Loosely defined, trade secrets are configurations of closely held, confidential information (e.g., devices, formulas, methods, processes, programs, techniques, etc.) that are used in a firm’s operations, are not easily ascertainable by outside parties, and have commercial value for the holder because they are secret. Common examples include detailed information about a firm’s customer contact and price lists, computer algorithms, cost information, and business plans for future products and services, among others.[1] Despite the simplicity and straightforwardness of these examples, however, the opaque and intangible nature of trade secrets makes it challenging for investors to appropriately assess the risk profiles and fundamental values of companies more reliant on secrecy.

As explained in the legal article “Bankruptcy in the Age of ‘Intangibility’: The Bankruptcies of Knowledge Companies” by Mathieu Kohmann (Harvard Law School), the difficulty in assessing the risk and value of trade secrets is even more alarming for creditors of financially distressed or defaulted firms. For one, trade secrets cannot generally be collateralized in debt contracts. And second, even if the secrets were pledgeable to lenders, they do not have active secondary markets, making their redeployment and liquidation in bankruptcy costly and largely infeasible. Prior theoretical work in the financial economics literature further suggests that firms composed primarily of intangible assets (e.g., trade secrets) sustain less debt financing because these types of assets decrease the value that can be captured by lenders in the event of default.[2]

Motivated by the increasing importance of secrecy for firms and governments, and the corresponding difficulties borne by creditors of these types of firms, in the article “Keeping Secrets from Creditors: The Uniform Trade Secrets Act and Financial Leverage”, CERF Research Associate Scott Guernsey, and research collaborators Kose John (New York University) and Lubomir Litov (University of Oklahoma), examine the impact of stronger trade secrets protection on firms’ capital structure decision-making.

To empirically analyze the relationship between trade secrets protection and financial leverage, Dr. Guernsey focuses the study on the adoption of the Uniform Trade Secrets Act (UTSA) by 46 U.S. states from 1980 to 2013. The UTSA, much like the recent “Trade Secrets (Enforcement, etc.) Regulations 2018” in the UK, improves the protection of trade secrets by codifying existing common law, standardizing the legal definition of a trade secret, detailing what constitutes illegal misappropriation (e.g., bribery, theft, espionage), and clarifying the rights and remedies of victimized firms (e.g., injunctive relief, damages, reasonable royalties). Using the staggered adoption of the UTSA by different states in different years, the authors find that firms located in states with enhanced trade secrets protection reduce (increase) their use of debt (equity) financing, compared to firms operating in the same U.S. Census region[3] and sharing similar industry trends but headquartered in states without the law’s protection.

Next, Dr. Guernsey explores a possible economic explanation for the reduction in debt ratios experienced by firms located in states with the UTSA. The authors find evidence for the “asset pledgeability hypothesis”, which conjectures that stronger trade secrets protection incentivizes firms to rely more on secrecy (and less on patents), which in turn increases intangibility and aggravates contracting problems with creditors – since such assets are more difficult to redeploy and liquidate in secondary markets – ultimately leading to less borrowing. For instance, relative to industry rivals operating in similar geographical regions, firms located in UTSA-enacting states increase their investments in intangible assets and research and development (R&D), and experience decreases in the liquidation value of their assets and in their reliance on patents.

Overall, Dr. Guernsey’s findings provide important insights into how greater reliance on trade secrets affects corporate leverage decisions – indicating that companies with stronger protection choose to keep their secrets from creditors.

 

References mentioned in this post

Hall, B., C. Helmers, M. Rogers, and V. Sena. 2014. The choice between formal and informal intellectual property: A review. Journal of Economic Literature 52: 375-423.

Kohmann, M. 2017. Bankruptcy in the age of “intangibility”: The bankruptcies of knowledge companies. Unpublished Working Paper, Harvard Law School.

Long, M.S., and Malitz, I.B. 1985. Investment patterns and financial leverage. In: Corporate capital structures in the United States. University of Chicago Press, Illinois, pp. 325-352.

Shleifer, A., and Vishny, R.W. 1992. Liquidation values and debt capacity: A market equilibrium approach. Journal of Finance 47: 1343-1366.

Williamson, O.E. 1988. Corporate finance and corporate governance. Journal of Finance 43: 567-591.



[1] For instance, the Coca-Cola soft drink recipe, Google’s search algorithm, McDonald’s Big Mac special sauce, and the New York Times Bestseller List are among the most famous examples of trade secrets.

[2] For example, see, Long and Malitz (1985), Williamson (1988), and Shleifer and Vishny (1992).

[3] The U.S. Census Bureau groups states into four census regions: Northeast, Midwest, South, and West.

Dr. Adelphe Ekponon, CERF Research Associate, February 2019

Long-term Economic Outlook and Equity Prices

 

The very first asset pricing models (the Capital Asset Pricing Model, or CAPM) postulated that the only risk needed to characterize a stock price is the contemporaneous correlation between the firm’s returns and the market portfolio’s returns. This implies that investors pay attention mostly to information about current economic conditions. Yet models that only incorporate this correlation risk tend to be unable to capture the dynamics of equity returns. The empirical asset pricing model proposed by Fama and French (1992) demonstrates that the CAPM has no power to explain the cross-section of average stock returns on portfolios sorted by size and book-to-market equity ratios.
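In its textbook form, the CAPM prices a stock solely through this co-movement with the market:

$$ E[r_i] - r_f = \beta_i \left( E[r_m] - r_f \right), \qquad \beta_i = \frac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)}, $$

where $r_i$, $r_m$ and $r_f$ are the stock, market, and risk-free returns.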

 

An important strand of the literature has developed models that improve the pricing performance of the CAPM via a consumption-based approach (CCAPM). The main innovation of CCAPM models lies in the introduction of macroeconomic conditions into asset pricing. According to these models, risk premia should be proportional to the consumption beta (the correlation between the firm's profit and consumption). However, this line of CCAPM models is known to produce very low levels of equity risk premium, less than 1% for reasonable levels of risk aversion. These models are also rejected by several empirical tests.

 

Since then, two new features have been introduced into asset pricing. The first comes from Hamilton’s (1989) observation that shocks to US economic growth are not i.i.d., as growth rates may shift between periods of high and low levels. Second, a new class of utility functions introduced by Epstein and Zin (1989) makes it possible to isolate aversion to future economic uncertainty from aversion to current correlation risk.

 

Bansal and Yaron (2004) and more recent papers have successfully developed consumption-based models in which the representative agent has Epstein-Zin preferences. These models pave the way to disentangling the impact of long-run vs. current correlation risks on stock prices. Additionally, they generate reasonable levels of equity risk premium and are able to explain some key asset pricing phenomena. Here, long-run risk (LRR) captures the unforecastable and persistent nature of future economic conditions and has two components: expected growth rate and volatility.

 

Building on this last strand of papers, Dorion, Ekponon, and Jeanneret (2019) propose a consumption-based structural approach, with endogenous default and debt policies, that allows both long-run and correlation risks to be investigated individually and in tandem. This is the first study to isolate and quantify, conditional on the state of the economy, the impact of LRR on equity prices.

 

They find an average equity risk premium of 1% in expansions against 6% in recessions. The paper also predicts that long-run risk represents about three-quarters of this risk premium and that its impact is countercyclical, exceeding 90% in recessions. To reduce the impact of LRR, managers lessen the optimal amount of debt to issue and lower the default barrier. Despite these adjustments, LRR still governs the equity premium, leading to the above predictions.

 

Using U.S. stock prices, consumption growth (correlation risk), and expected economic growth rate and volatility (long-run risk) over the period from 1952 to 2016, the study confirms that LRR is priced in U.S. firms, particularly in bad times. These data show that the compensation for LRR represents around 70% of the excess returns of a zero-investment portfolio that shorts stocks whose returns have a low correlation with expected growth rates (or a high correlation with expected growth volatility) and buys stocks with a high correlation with expected growth rates (or a low correlation with expected growth volatility). These results imply that LRR is a priced risk factor for equity.

 

Hence, investors are compensated for trading and holding stocks based on their sensitivity to future economic conditions. This result provides strong evidence that the long-run economic outlook is an important driver of the equity premium in the cross-section.

 

 

References mentioned in this post

 

Bansal, R. and Yaron, A. (2004), Risks for the long run: A potential resolution of asset pricing puzzles, Journal of Finance 59(4), 1481-1509.

 

Epstein, L. G. and Zin, S. E. (1989), Substitution, risk aversion, and the temporal behavior of consumption and asset returns: A theoretical framework, Econometrica 57(4), 937-69.

 

Fama, E. F. and French, K. R. (1992), The cross-section of expected stock returns, Journal of Finance 47(2), 427-65.

 

Hamilton, J. (1989), A new approach to the economic analysis of nonstationary time series and the business cycle, Econometrica 57(2), 357-84.

Dr. Hui Xu, CERF Research Associate, January 2019

Brexit: Investor Paranoia and the Financing Cost of Firms

Financial markets faced a bumpy ride in 2018. The Financial Times reported that global bond and equity markets shrank by $5tn last year. Two major risks disrupted the markets during the past year: the US-China trade dispute and Brexit. The two risks, however, are essentially the same: both would create new frictions and impediments to existing trade frameworks and unsettle investors’ nerves.

The risks may affect firms’ financing costs for real reasons. Take a no-deal Brexit as an example. First, a firm’s revenue can decline due to frictions in the product market, especially for British firms that depend heavily on European markets. Second, frictions in the labor market may increase a firm’s production costs. Both will adversely affect a firm’s cash flow and, consequently, its financing costs. However, Brexit might also increase a firm’s financing cost simply because investors become paranoid and exaggerate these adverse impacts.

Yet to what extent does investor paranoia affect a firm’s financing cost? The question is interesting for two reasons. First, although economists have long assumed investors to be rational, empirical evidence has challenged this view. Answering the question not only contributes to the evidence on irrationality, but also quantifies the real impact of investor irrationality on firms. Second, irrationality drives valuations away from fundamentals and, de facto, creates the possibility of arbitrage.

A work in progress by Dr. Hui Xu, a research associate at the Cambridge Endowment for Research in Finance (CERF), and his co-authors addresses the question by studying the yield difference between British corporate bonds maturing before and after March 29th, 2019, the date on which Great Britain was set to leave the European Union. The idea is simple. Take a corporate bond that matures one day before March 29th and another, otherwise identical, bond that matures one day after March 29th: if the yield of the latter is significantly higher, then the yield difference captures the impact of investor paranoia on the firm’s debt financing cost. Even if Great Britain crashes out of the EU without a deal on March 29th, that can hardly affect a firm’s fundamentals, such as revenue and cost, within one day. Therefore, the only explanation for such a yield difference lies in investor paranoia.
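In pseudo-empirical terms, the comparison might be sketched as follows, assuming a hypothetical bonds table with issuer, maturity, and yield columns (an illustration of the design, not the authors’ code):

```python
# Sketch of the matched-bond comparison around the scheduled Brexit date.
# `bonds` is a hypothetical DataFrame with 'issuer', 'maturity' (datetime)
# and 'yield' (in percent) columns.
import pandas as pd

BREXIT = pd.Timestamp("2019-03-29")

def brexit_yield_gap(bonds: pd.DataFrame, window_days: int = 180) -> pd.Series:
    """Per-issuer mean yield of bonds maturing just after Brexit
    minus that of bonds maturing just before it."""
    near = bonds[(bonds["maturity"] - BREXIT).abs()
                 <= pd.Timedelta(days=window_days)]
    after = near["maturity"] > BREXIT
    gap = (near[after].groupby("issuer")["yield"].mean()
           - near[~after].groupby("issuer")["yield"].mean())
    return gap.dropna()  # positive values suggest a Brexit paranoia premium
```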

Guided by this empirical design, the authors collect a small sample of British corporate bonds. The preliminary analysis does show that bonds maturing after the Brexit date have a higher yield than similar bonds maturing before the date, indicating a real financing cost for firms due to investor paranoia about Brexit risk. The authors are in the process of collecting more data, and a working paper with more results will be published soon.

Scott Guernsey, CERF Research Associate, December 2018

Reinvesting Market Power for the Betterment of Shareholders

On the supply side, highly competitive industries are generally characterized as having many firms and low barriers to entry. The first condition implies that existing firms cannot dictate or influence prices, and the second that new firms can enter markets at any time, and at relatively low cost, when incentivized to do so. Taken together, in equilibrium, this setting suggests that existing firms earn only enough revenue to remain competitive and cover their total costs of production.

Yet, in reality, most industries in the United States have become increasingly less competitive. For example, in the article “Are U.S. Industries Becoming More Concentrated?”, forthcoming in Review of Finance, Gustavo Grullon (Rice University), Yelena Larkin (York University), and Roni Michaely (University of Geneva), find that more than 75% of U.S. industries experienced an increase in concentration over the past two decades.[1] As such, these industries are now composed of fewer firms, are less at risk of entry by newcomers, and earn “economic rents” or revenues in excess of that which would be economically sufficient in a competitive environment. Given these new developments, it is important for shareholders to understand how a reduction in competition might affect their holdings.

In the article “Product Market Competition and Long-Term Firm Value: Evidence from Reverse Engineering Laws”, CERF Research Associate Scott Guernsey examines the value and investment policy implications of decreased product market competition for equity holders in the U.S. manufacturing industry.

To empirically analyze the relationship between competition and firm outcomes, Dr. Guernsey centers the study on the adoption of anti-plug-mold (APM) laws, which were adopted by 12 U.S. states from 1978 to 1987, and their subsequent reversal by a U.S. Supreme Court ruling in 1989. APM laws directly influenced the intensity of competition in product markets by protecting firms headquartered in the law-adopting states from competitors copying their products using a specific type of reverse engineering (RE)[2] – the “direct molding process”.

The direct molding process enabled competitors to circumvent the R&D and manufacturing costs incurred by the originating firm by using an already finished product to create a mold which would then be used to produce duplicate items. For example, a boat manufacturer using this RE process would buy an existing boat on the open market, spray it with a mold forming substance (e.g., fiberglass), remove the original boat from the hardened substance, which would then become the mold used to produce replica boats. However, under the protection of APM laws, firms were given legal recourse to stop competitors in any U.S. state from using the direct molding process to compete with their products.

Using the staggered adoptions of APM laws by different states in different years, Dr. Guernsey finds that firms located in states with RE protection experienced increases in their value, when compared to firms operating in the same industry but located in states without the laws. Moreover, when the APM laws were later overturned by a U.S. Supreme Court ruling, which found the state laws in conflict with federal patent law, he finds all of the previous value gains subside.

Next, Dr. Guernsey explores a possible economic explanation for the increase in value experienced by firms in less competitive industries. He finds evidence for the “innovation incentives” hypothesis, which posits that the economic rents APM-protected firms earn from increased market power are allocated to investments in new and existing production technologies. For instance, relative to industry rivals, firms located in APM-enacting states increase their investments in R&D and organizational capital.

Overall, Dr. Guernsey shows that a reduction in competition is value-enhancing for a subset of shareholders in the manufacturing industry, as it leads their firms to reinvest the spoils of market power back into the company.

 

References mentioned in this post

Grullon, G., Y. Larkin, and R. Michaely. 2018. Are US industries becoming more concentrated? Review of Finance, Forthcoming.

Gutiérrez, G., and T. Philippon. 2017. Declining competition and investment in the US. Unpublished Working Paper, National Bureau of Economic Research.

Kahle, K. M., and R. M. Stulz. 2017. Is the US public corporation in trouble? Journal of Economic Perspectives 31:67–88.



[1] Gutiérrez and Philippon (2017) and Kahle and Stulz (2017) also document evidence confirming the recent trend in rising U.S. industry concentration.

[2] The standard legal definition of reverse engineering in the U.S. is described as “starting with the known product and working backward to divine the process which aided in its development or manufacture.”

Adelphe Ekponon, CERF Research Associate, November 2018

Emerging Market Economies’ Debt Is Growing... What to Expect?

After the 2008 financial crisis, central banks implemented accommodative monetary policies with the objective of revitalizing economic activity. As a consequence, many countries increased their borrowing in dollar- and euro-denominated debt, leading to a rise in debt/GDP ratios around the world. For example, this ratio averaged about 82% in Europe by the end of 2017, compared to 60% before the crisis, according to Eurostat.

The prime concern, however, is currently on the Emerging Markets Economies (EMEs) side, at least for two reasons.

First, many emerging countries have increased their exposure to foreign debt (especially to hard currencies like the dollar or the euro). Their overall government debt as a percentage of GDP went from 41% to 51% between 2008 and 2017 (BIS Quarterly Review, September 2017). Over the same period, the government debt of EMEs doubled to reach $11.7 trillion, with foreign-currency debt also rising. The problem with foreign-currency debt is that a government cannot inflate it away, and difficulties in servicing it may be transmitted to the local-currency debt market.

Second, the US Federal Reserve and the European Central Bank are ending their accommodative monetary policies, which implies that interest rates, and hence EMEs’ borrowing costs, will now be on the rise. Past experience shows that rising US interest rates in particular have triggered many emerging-country debt crises. Before EME debt crises such as Latin America in 1980, Mexico in 1994, and Asia in 1997, US interest rates were rising after a period at low levels.

Other factors may worsen the situation further, e.g. contagion or capital outflows, among others.

In their paper “Macroeconomic Risk, Investor Preferences, and Sovereign Credit Spreads”, CCFin research associate Adelphe Ekponon and his co-authors explore the mechanism through which macroeconomic conditions, combined with global investors’ risk aversion, drive countries’ borrowing costs. According to this study, the link between economic conditions in the US and sovereign debt yields originates from the existence of a global business cycle, as countries tend, on average, to be in good or bad times around the same periods. They find that this global business cycle increases not only the risk of default but also a government’s unwillingness to repay. The other mechanism is that investors’ higher risk aversion amplifies these effects: sell-offs of risky assets are more pronounced in recessions, leading to a lower risk-free rate on average, to which governments optimally respond by issuing more debt.

It is likely that countries are going to discipline themselves in the coming months or years as borrowing costs surge… if there is no sudden switch to a global economic downturn.

Pedro Saffi, CERF Fellow, November 2018

Predicting House Prices with Equity Lending Market Characteristics

Investors in financial markets must cope with a myriad of news arriving relentlessly every day. This information must be interpreted and used as efficiently as possible to update investment strategies. Many academics also spend their careers trying to identify variables (e.g. GDP growth, retail sales, unemployment) that can help forecast the behavior of financial market variables (e.g. stock returns, risk, and exchange rates). While less common, many articles show how financial market data can be used to predict the behavior of variables in the real economy.[1]

In the article “The Big Short: Short Selling Activity and Predictability in House Prices”, forthcoming at Real Estate Economics, CERF Fellow Pedro Saffi and research collaborator Carles Vergara-Alert (IESE Business School) look at how U.S. house prices can be better understood using a previously unexplored set of financial variables.

Investors can speculate on a decrease in prices using a strategy known as “short selling”. This involves borrowing the security from another investor, selling it at the current price, and repurchasing it in the future – hopefully at a lower price – to make a profit. The market to borrow shares is known as the equity lending market, a trillion-dollar part of the financial system that allows investors to borrow and lend the securities needed for short selling. While investors cannot bet on house price decreases by shorting houses directly, they can use a wide range of financial securities to do so. The authors use data on short selling activity for a specific type of security whose returns are highly related to house prices – Real Estate Investment Trusts (REITs) – which are essentially portfolios of underlying real estate properties.

The authors’ main hypothesis is that REITs are strongly correlated with the fundamentals of housing markets. Thus, an increase in REIT short selling activity can forecast decreases in housing prices, which is exactly what the authors find in the data. Furthermore, REITs invested in properties located in areas that experienced a housing boom during the expansion cycle of the 2000s are more sensitive to increases in short selling activity than REITs invested in properties located in areas that did not. The study divides the US property market into four regions – Northeast, Midwest, South and West – and classifies each month in each region as a “boom,” “average” or “downturn” period. Although during boom and average periods there is little correlation between REIT short selling and the subsequent month’s housing prices, “the correlation is significantly positive during housing market downturns.”

Using these findings, the authors construct a hedging strategy based on short-selling intensity to reduce the downside risk of housing price decreases, showing that investors can limit their losses using REITs’ equity lending data. The figure below (Figure 4 in the article) shows the cumulative returns of the trading strategy (based on the On Loan variable as a proxy for short-selling activity) relative to the performance of FHFA Housing Price index returns from July 2007 through July 2013. These results show that the hedging strategy limited investor losses during the 2008 financial crisis in the regions that experienced large house price run-ups in the years prior to 2007, i.e., the Northeast and West. Its performance is satisfactory for the South and absent for the Midwest, where the house price run-up in the same period was smaller. Panel B shows similar results when the performance is examined using diversified REITs to hedge against price decreases in the aggregate FHFA index.

Overall, short selling can be a useful tool for market participants to hedge against future price decreases. Regulators can track measures from the equity lending market to improve forecasts of house prices and implement policies to prevent real estate bubbles. Furthermore, imposing short selling constraints on stocks like REITs—which invest in assets subject to high transaction costs—matters for price efficiency and the dissemination of information.

References mentioned in this post

Ang, A., G. Bekaert and M. Wei. 2007. Do Macro Variables, Asset markets, or Surveys Forecast Inflation Better? Journal of Monetary Economics 54: 1163–1212.

Bailey, W. and K.C. Chan. 1993. Macroeconomic Influences and the Variability of the Commodity Futures Basis. Journal of Finance 48: 555–573.

Koijen, R.S., O. Van Hemert and S. Van Nieuwerburgh. 2009. Mortgage Timing. Journal of Financial Economics 93: 292–324.

Liew, J. and M. Vassalou. 2000. Can Book-to-Market, Size and Momentum be Risk Factors that Predict Economic Growth? Journal of Financial Economics 57: 221–245.

[1] For example, Liew and Vassalou (2000), Ang, Bekaert and Wei (2007), Koijen, Van Hemert and Van Nieuwerburgh (2009) and Bailey and Chan (1993) use financial market data to forecast economic growth, inflation, mortgage choices and commodities, respectively.

Scott B. Guernsey, CERF Research Associate, October 2018

Guaranteed Bonuses in High Finance: To Reward or Retain?

Public distaste for high finance reached an all-time high in March 2009, when the American International Group (AIG) insurance corporation announced it had paid out roughly $165 million in bonuses to employees of its London-based financial services division (AIG Financial Products). Only months earlier, the same company had received roughly $170 billion in U.S. taxpayer-funded bailout money and suffered a quarterly loss of $61.7 billion – the largest corporate loss on record. The then Chairman of the U.S. House Financial Services Committee, Barney Frank, remarked that payment of these bonuses was “rewarding incompetence”.

AIG countered that the bonuses had been pledged well before the start of the financial crisis and that it was legally committed to make good on the promised compensation. Additionally, Edward Liddy, who had been appointed chairman and CEO of AIG by the U.S. government, said the company could not “attract and retain” highly skilled labor if employees believed “their compensation was subject to continued…adjustment by the U.S. Treasury.” And AIG wasn’t the only financial firm paying out large bonuses in 2009: at least nine other large financial institutions, which had similarly received U.S. government assistance, distributed bonuses in excess of $1 million each to nearly 5,000 of their bankers and traders.

But why would these financial corporations risk their reputational capital to pay out bonuses? And why not condition the size and timing of bonus payments on circumstances like those experienced during the 2008 financial crisis, rather than simply guarantee large bonuses a year or more in advance?

A recent research article presented at this year’s Cambridge Corporate Finance Theory Symposium by Assistant Professor Brian Waters (University of Colorado Boulder) offers some interesting insight into these questions. To begin, the paper highlights three unique features of bonuses in the financial industry. First, unlike in most other industries, bonus payments to high finance professionals (e.g., traders, bankers, analysts) comprise a large share of their total compensation. In fact, as described in the paper, more than 35% of a first-year analyst’s total pay is in the form of a bonus. This is further evidenced by the hefty bonuses of $1 million or more dispensed to bankers, traders and executives by large financial institutions (AIG included) in 2009.

Second, bonus payments seem to be largely guaranteed. For example, according to the paper, third-year analysts expect to receive a bonus of at least $75,000, with the possibility of earning a higher $95,000 bonus only if they perform exceptionally well. Moreover, as summarized above, AIG defended payment of its bonuses in March 2009 by arguing that they had been committed in advance and that it was obligated by law to fulfil this pledge. Third, observation of practice suggests financial institutions coordinate the timing of their bonuses by geography. For instance, in Europe almost all big banks determine bonuses in late February and early March, while U.S. banks do so in January. Again, this is consistent with AIG, although an American insurer, distributing bonuses to its London-based Financial Products division in March.

Considering these three stylized facts, Professor Waters (and co-author, Professor Edward D. Van Wesep) construct a mathematical model to explain why bonuses in high finance are both large and guaranteed. The general set-up of the model flows in the following manner. First, the authors assume that financial firms might find it difficult to recruit employees during certain months of the year (e.g., perhaps it is easy to replace employees in March, but difficult to do so in October). Second, in response to this periodic scarcity of labor, firms design contracts whereby large bonuses are paid during months with an abundance of talent (e.g., March), but condition the contracts such that employees must remain with the company until bonuses are paid to be eligible for this form of compensation.

Third, since financial firms operating in the same geography face similar labor market conditions, many respond similarly, paying bonuses at the same time. Fourth, because employees are incentivized to remain with the firm until bonuses are paid, they delay quitting until that point in time (i.e., this is when most employees leave their employers). Finally, this suggests labor markets will be flooded with talent just after bonuses are paid (e.g., March) but relatively shallow in other months (e.g., October). Hence, we arrive back at the initial step of the model and the game repeats, providing an intuitive explanation for why large and guaranteed bonuses are observed in high finance, irrespective of macroeconomic conditions and own-firm performance.

Yuan Li, CERF Research Associate, July 2018

How (in)efficient is the stock market?

In 2013, the Nobel committee split the economics prize between Eugene Fama – the pioneer of the efficient market hypothesis (EMH) – and Robert Shiller – a prominent critic of the EMH. This decision indicated that the Nobel committee agreed with both Fama and Shiller. Was the committee right? The answer is yes, according to my findings from a recent research project.

Fama explains the EMH as “the simple statement that security prices fully reflect all available information”. The empirical implication of this hypothesis is that, apart from beta (the measure of a firm’s systematic risk), no other publicly available information can be used to predict stock returns. However, the finance literature has found that many easily available firm characteristics, such as market capitalisation and book-to-market ratio, are related to future stock returns. These are the so-called anomalies. Does the discovery of anomalies reject the EMH? Not necessarily, because no one knows what a firm’s beta should be, and those firm characteristics may simply be proxies for beta. This is known as the joint hypothesis problem: we can say nothing about the EMH unless we know what the correct asset pricing model is. Sadly, we do not.

In this project, I get around the joint hypothesis problem. I assume that a firm’s stock return is composed of two parts: a risk-induced return and a mispricing-induced return. Because of the joint hypothesis problem, we do not know what the risk-induced return is. However, we can estimate the mispricing-induced return (if there is any) using the forecasts issued by financial analysts. Analysts’ earnings forecasts represent investors’ expectations. More importantly, we know the actual earnings of a firm, and hence we can calculate the errors in analysts’ forecasts, which represent investors’ errors-in-expectations. We can then estimate the returns generated by investors’ errors-in-expectations, that is, the mispricing-induced return. If the market is perfectly efficient, the mispricing-induced return should be zero. I calculate the fraction of an anomaly explained by mispricing as the ratio of the mispricing-induced return to the observed return; the fraction explained by risk is one minus this ratio.
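
In symbols (the notation is mine, simply restating the decomposition just described):

```latex
% r is an anomaly's observed return, split into a risk-induced part
% and a mispricing-induced part.
\[
  r = r^{\text{risk}} + r^{\text{misp}},
  \qquad
  \text{fraction explained by mispricing} = \frac{r^{\text{misp}}}{r},
  \qquad
  \text{fraction explained by risk} = 1 - \frac{r^{\text{misp}}}{r},
\]
```

where the mispricing-induced return is estimated from the returns generated by investors’ errors-in-expectations (analysts’ forecast errors); under perfect efficiency it is zero.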

I examine 195 anomalies. On average, the fraction explained by mispricing is 17.51%, suggesting that the greater part of anomaly returns is not anomalous at all. This result may be disappointing to EMH critics, who seem to think that the stock market is extremely inefficient and that it is easy to profit from anomalies. However, the good news for EMH critics is that the fraction explained by mispricing varies widely across anomalies. In particular, the momentum anomalies are almost completely explained by mispricing; hence, trading on momentum anomalies is likely to generate abnormal returns. In contrast, the high returns from value strategies are almost entirely compensation for bearing high risk.

Dr. Hui Xu, CERF Research Associate, June 2018

Contingent Convertibles: Do They Do What They Are Supposed to Do?

When Lehman Brothers was in deep water in September 2008, the U.S. federal government and the Federal Reserve decided not to bail it out, and several days later the company filed for Chapter 11 bankruptcy protection. Global markets immediately plummeted after the bankruptcy filing, and both the government and the central bank were accused of exacerbating investors’ panic by that decision. Had they bailed Lehman out, however, they would have been accused for a different reason: using taxpayers’ money to rescue a greedy and aggressive Wall Street giant.

The example illustrates the controversy and dilemma that bailouts pose for policymakers. Since the financial crisis, one priority for regulators has been to design a bail-in: an internal way to recapitalize distressed financial institutions and strengthen their balance sheets. Regulators hope it will become a substitute for bailouts. One way to deliver a swift and seamless bail-in is through the conversion of contingent convertible capital securities (CoCos).

CoCos are bonds issued by banks that either convert to new equity shares or suffer a principal write-down following a triggering event. Because Basel III allows banks to meet part of their regulatory capital requirements with CoCo instruments, banks around the world issued a total of $521 billion in CoCos through 732 different issues between January 2009 and December 2015.

That being said, CoCos are still at an early stage, in the sense that there is no consensus on how to design them. Moreover, little research has studied the response of market participants, even though studying that response can shed light on optimal CoCo design.

A recent research project by CERF research associate Hui (Frank) Xu studies the response of incumbent equity holders when CoCos are in place. It considers two types of CoCos: those that convert to common shares when the stock price falls below a pre-set target, and those that convert when the market capital ratio falls below a pre-set threshold. Surprisingly, the research shows that if conversion dilutes incumbent equity holders’ security value, they will have a strong incentive to issue a large amount of debt just before the pre-set trigger point, accelerating the trigger of CoCo conversion. The intuition is that, since their equity value is diluted at conversion, incumbents issue a large amount of debt and distribute the proceeds via a dividend or share repurchase just before conversion, leaving the new equity holders and debtholders with much lower security values. The incumbent equity holders thus collect a one-time big payout at the expense of the new equity holders and debtholders.
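
A stylized numerical sketch of this maneuver (my own toy numbers, not the paper’s model; the key assumption, flagged in the comments, is that the new debt can be sold at face value):

```python
# Toy illustration (my own numbers, not the paper's model) of the
# debt-issuance incentive described above. Key assumption: the new debt
# is sold at face value, i.e. incoming creditors do not fully price in
# the dilution -- this friction is what makes the maneuver profitable.

V = 100.0          # firm asset value
senior_debt = 60.0
coco_share = 0.5   # fraction of post-conversion equity handed to CoCo holders

# Case 1: no maneuver. Conversion simply splits the equity cushion.
equity_1 = V - senior_debt                        # 40
incumbents_1 = (1 - coco_share) * equity_1        # 20
coco_holders_1 = coco_share * equity_1            # 20

# Case 2: just before conversion, incumbents issue 30 of new debt and pay
# the proceeds out as a dividend. Assets are unchanged (cash in, cash out),
# but the equity cushion shrinks from 40 to 10, and existing creditors now
# share the same collateral with the new lenders.
new_debt = 30.0
equity_2 = V - senior_debt - new_debt             # 10
incumbents_2 = new_debt + (1 - coco_share) * equity_2   # 30 + 5 = 35
coco_holders_2 = coco_share * equity_2            # 5

print(incumbents_1, incumbents_2)        # 20.0 vs 35.0: the maneuver pays
print(coco_holders_1, coco_holders_2)    # 20.0 vs 5.0: CoCo holders lose
```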

This is certainly contrary to regulators’ expectations. Regulators expect equity holders to improve corporate management, risk-taking strategies and financial policies under the threat of CoCo conversion; equity holders benefiting themselves by destroying firm value under that threat is the last thing they want to see. The research therefore highlights the complexity of contingent convertible design, and the importance of taking market participants’ responses into account when regulators propose a CoCo design.

Dr. Alex Tse, CERF Research Associate, May 2018.

Embrace the randomness

Excerpt from the CBS sitcom “The Big Bang Theory”, S05 E04:

Leonard: Are we ready to order?
Sheldon: One moment. I’m conducting an experiment.
Howard: With Dungeons and Dragons dice?
Sheldon: Yes. From here on in, I’ve decided to make all trivial decisions with a throw of the dice, thus freeing up my mind to do what it does best, enlighten and amaze. Page 14, item seven.
Howard: So, what’s for dinner?
Sheldon: A side of corn succotash. Interesting……

It sounds insane to let a die decide your fate. But we all know that our beloved physicist Dr Sheldon Cooper is not crazy (his mother had him checked!), so there must be some wisdom behind it. To a mainstream economist, adopting randomisation in a decision task seems to violate a fundamental economic principle – more is better. By surrendering to Tyche, the goddess of chance, we are essentially forgoing the valuable option to make a choice.

A well-known situation where randomised strategies are relevant is the game-theoretic setup where strategic interactions among players matter. A right-footed striker has a better chance of scoring a goal if he kicks left. A pure strategy of kicking left may not work out well though because the goalie who understands the striker’s edge will simply dive left. The optimal decisions of the two players thus always involve mixing between kicking/blocking left, right and middle etc. However, a very puzzling phenomenon is that individuals may still exhibit preference for deliberate randomisation even when there is no strategic motive. An example is a recent experimental study (Agranov and Ortoleva, Journal of Political Economy, 2017) which documents that a sizable fraction of lab participants are willing to pay a fee to flip a virtual coin to determine the type of lotteries to be assigned to them.

While the psychology literature offers a number of explanations (such as omission bias) to justify randomised strategies, how can we understand deliberate randomisation from an economic perspective? The golden paradigm of decision making under risk is the expected utility criterion, under which a prospect is evaluated by the linear probability-weighted average of the utility values associated with each outcome. There is no incentive to randomise: the linear expectation rule guides an agent to pick the highest-value option with 100% chance. However, when the agent’s preference deviates from linear expectation, a stochastic mixture of prospects can be strictly better than the static decision of sticking to the highest-value prospect (Henderson, Hobson and Tse, Journal of Economic Theory, 2017). The rank-dependent utility model and prospect theory, two notable non-expected utility frameworks commonly used in behavioural economics, both make randomised strategies internally consistent with the agent’s preference structure.
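
To see how a stochastic mixture can strictly dominate under rank-dependent utility, here is a small self-contained example (toy numbers of my own, not from the paper, with linear utility and a square-root weighting function that overweights small probabilities of the best outcomes):

```python
# Minimal sketch of rank-dependent utility (RDU) and of a 50/50
# randomisation beating both of its component prospects.
from math import sqrt

def rdu(lottery, w=sqrt, u=lambda x: x):
    """RDU value of a discrete lottery given as {outcome: probability}.
    Outcomes are ranked best to worst; each receives the marginal
    decision weight w(P(at least this good)) - w(P(strictly better))."""
    cum, value = 0.0, 0.0
    for x, p in sorted(lottery.items(), reverse=True):
        value += u(x) * (w(cum + p) - w(cum))
        cum += p
    return value

A = {10.0: 0.1, 0.0: 0.9}   # small chance of a big gain
B = {2.0: 1.0}              # sure thing

# 50/50 coin flip over A and B, reduced to a single lottery
M = {10.0: 0.05, 2.0: 0.5, 0.0: 0.45}

print(rdu(A), rdu(B), rdu(M))   # ~3.16, 2.00, ~3.27
```

Running the sketch, the coin flip is valued at about 3.27, strictly above both A (about 3.16) and B (2.00), so the randomised choice is strictly preferred to either pure one.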

Incorporating non-linear probability weighting and randomised strategies leads to many potential economic implications. For example, consider a dynamic stopping task where an agent decides at each point in time whether to sell an asset. In a classical expected utility setup, there is no incentive for the agent to randomise between stopping and continuing. This implies the optimal trading strategy must be a threshold rule, where a sale occurs only when the asset price first breaches a certain upper or lower level. In reality, investors do not necessarily adopt this kind of threshold strategy, even in a well-controlled laboratory environment: the asset price may visit the same level multiple times before a participant decides to sell (Strack and Viefers, SSRN working paper, 2014). While expected utility theory struggles to explain trading rules that go beyond the simple “stop-loss stop-gain” style of order, non-linear expectation and randomisation provide a modelling foundation that can justify the more sophisticated investment strategies adopted by individuals in real life.

Dr. Yuan Li, CERF Research Associate, April 2018

Are analysts whose forecast revisions correlate less with prior stock price changes better information producers and monitors?

Financial analysts are important information intermediaries in the capital markets: they engage in private information search, perform prospective analyses aimed at forecasting firms’ future earnings and cash flows, and conduct retrospective analyses that interpret past events (Beaver [1998]). The information produced by analysts is disseminated to capital market participants via analysts’ research outputs, mainly earnings forecasts and stock recommendations. Prior academic studies suggest that the main role of an analyst is to supply private information that is useful to parties such as investors and managers. Therefore, an analyst’s ability to produce relevant private information that is not already known to other parties is an important determinant of the analyst’s value to the capital markets. Based on this notion, CERF research associate Yuan Li and her co-authors propose a simple and effective measure of analyst ability.

Our measure of analyst ability is calculated as one minus the correlation coefficient between the analyst’s forecast revisions and the stock price changes between successive forecasts. Since prior stock price changes capture the incorporation of information that is already known to investors, any information in an analyst’s forecast revisions that is uncorrelated with prior stock price changes reflects the analyst’s private information. In other words, our measure captures the ability of an analyst to produce information that is not already incorporated into stock prices.
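
Computationally, the measure is straightforward. A hedged sketch with hypothetical column names (not the authors’ code):

```python
# Illustrative sketch of the measure: one minus the correlation between
# an analyst's forecast revisions and the stock price changes since the
# previous forecast. Column names are hypothetical.
import pandas as pd

def analyst_ability(df: pd.DataFrame) -> pd.Series:
    """df: one row per forecast with columns 'analyst', 'revision'
    (current minus previous forecast) and 'prior_ret' (stock return
    between the two forecast dates)."""
    corr = df.groupby("analyst").apply(
        lambda g: g["revision"].corr(g["prior_ret"]))
    # Higher values = revisions carry more private information
    return 1.0 - corr
```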

We find that forecast revisions issued by the superior analysts identified by our measure have a greater stock price impact. We also find that firms covered by more of these superior analysts are less likely to engage in earnings management. These findings suggest that the superior analysts identified by our measure are better information producers and monitors.

Dr. Jisok Kang, CERF Research Associate, March 2018

The Granular Effect of Stock Market Concentration on Market Portfolio Volatility

Ever since the Capital Asset Pricing Model (CAPM) was first introduced in 1964, a well-accepted notion in modern portfolio theory has been that the market portfolio carries only market risk (systematic risk), as firm-specific risk (non-systematic risk) is diversified away.

Meanwhile, Xavier Gabaix, in a 2011 Econometrica paper titled “The Granular Origins of Aggregate Fluctuations”, argues that idiosyncratic firm-specific shocks to large firms in an economy can explain a great portion of the variation in macroeconomic movements if the firm size distribution is fat-tailed. His argument implies that firm-specific shocks to large firms are granular in nature and may not be easily diversified away. He shows empirically that idiosyncratic movements by the largest 100 firms in the U.S. can explain roughly one third of the variation in the country’s GDP growth – the phenomenon he dubs “the granular effect.”

Jisok Kang, a CERF research associate, shows in a recent research paper that stock market concentration – the degree to which the largest firms dominate the stock market – increases the volatility of the market portfolio. This finding implies that the idiosyncratic, firm-specific risk of large firms is granular in nature and is not diversified away in the market portfolio. The finding is robust whether market portfolio volatility is defined using a value-weighted or an equal-weighted index.

In addition, stock market concentration causes other stock prices to co-move, which increases market portfolio volatility further. The incremental volatility caused by stock market concentration is bad volatility, in that the effect is more severe when the market portfolio return is negative.
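
As one illustration of how a concentration measure of this kind might be built (the paper’s exact construction may differ), consider the market-cap share of the largest K firms each month:

```python
# Hedged sketch: stock market concentration as the share of total market
# capitalisation held by the largest k firms each month. Column names
# are hypothetical.
import pandas as pd

def concentration(caps: pd.DataFrame, k: int = 10) -> pd.Series:
    """caps: one row per firm-month with columns 'month' and 'mktcap'.
    Returns, per month, the market-cap share of the largest k firms."""
    def top_k_share(g: pd.Series) -> float:
        return g.nlargest(k).sum() / g.sum()
    return caps.groupby("month")["mktcap"].apply(top_k_share)

# A granular effect would show up as a positive relation between this
# series and realised market portfolio volatility in subsequent periods.
```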

Dr. Hui (Frank) Xu, February 2018

What caused the leverage cycle in the run-up to the 2008 financial crisis?

The 2008 financial crisis had a far-reaching impact on financial markets and the real economy. Although academic researchers and public policymakers have reached a consensus that the financial crisis was rooted in a leverage cycle, they continue to debate the causes of that leverage cycle. Initially, it was widely accepted that financial innovation and deregulation exacerbated agency problems, incentivizing financial intermediaries to issue consumer credit, including mortgage debt, without proper screening and monitoring (the “credit supply” channel). More recently, however, a growing empirical literature has proposed a “distorted beliefs” view of the crisis, demonstrating that investor over-optimism may have led to a rapid expansion of the credit market and increased asset prices in the run-up to the crisis (the “credit demand” channel). The financial crisis, like any other major economic event, probably has more than one cause, and both the credit demand and credit supply channels contributed to it. Indeed, the two views are not mutually exclusive and may reinforce each other.

However, one might still ask to what extent distorted beliefs caused the crisis. This question is interesting for both theoretical and practical reasons. First, economists have long known that distorted beliefs have important effects on the prices of financial assets, e.g., the risk-free rate and stock prices, but they still lack a full understanding of why distorted beliefs could cause massive defaults in 2008. Second, understanding what caused the financial crisis helps in crafting effective policy changes. If it was largely an agency problem, policies to prevent similar crises would include requiring financial intermediaries to “put more skin in the game” and to enforce stricter screening and monitoring. If it was primarily a problem of distorted expectations and beliefs, preventative measures would include implementing macroprudential, financial-stability policies and improving information transparency.

One way to quantify the role of distorted beliefs in the financial crisis is to construct a dynamic general equilibrium model in which credit use and risk-taking by households are driven purely by distorted beliefs, effectively shutting down the agency problem channel, and then to examine the explanatory power of the model by comparing the output of the calibrated model to real data. This is a research project by CERF research associate Hui (Frank) Xu.

The main findings of the paper support the distorted beliefs view of the financial crisis: distorted beliefs can explain the run-up in household leverage before the crisis. Quantitatively, distorted beliefs can account for more than half of the variation in the real interest rate during the crisis period.

Dr. Alex Tse, CERF Research Associate, February 2018

Transaction costs, consumption and investment

The theoretical modelling of individuals’ consumption and investment behaviours is an important micro-foundation of asset pricing. Although this is a classical problem in the portfolio selection literature, analytical progress is very limited once the model is extended to a more realistic economy featuring transaction costs. The key obstacle thwarting our understanding of the frictional setup is the highly non-linear differential equation associated with the problem.

Using a judicious transformation scheme, CERF research associate Alex Tse and his collaborators David Hobson and Yeqi Zhu show that the underlying equation can be greatly simplified to a first-order system. Investigation of the optimal strategies can then be facilitated by a graphical representation involving a simple quadratic function that encodes the underlying economic parameters.

The approach offers a powerful tool to unlock a rich set of economic properties behind the problem. Under what economic conditions can we expect a well-defined trading strategy? How does the change in the market parameters affect the purchase and sale decisions of an individual? What are the quantitative impacts of transaction costs on the critical portfolio weights? While some features are known in the literature, there are also a number of surprising phenomena that have not been formally studied to date. For example, the transaction cost for purchase can be irrelevant to the upper boundary of the target portfolio weight in certain economic configurations.

In a follow-up project, the methodology is further extended to a market consisting of a liquid asset and an illiquid asset, where transaction costs are payable on the latter. The research findings could serve as useful building blocks towards a more general theory of investment and asset pricing.

Dr. Yuan Li, CERF Research Associate, December 2017

Book-to-market ratio and inflexibility: The effect of unrecorded R&D capital

R&D investment plays an increasingly important role in the economy. However, accounting standards require firms to expense R&D immediately as it is incurred, so R&D investment is not capitalized on the balance sheet. Could this unrecorded R&D capital affect our assessment of a firm’s risk? The answer is affirmative, according to the findings of a research project conducted by CERF research associate Yuan Li.

Finance theory suggests that a firm’s risk is negatively related to its flexibility in adjusting capital investment. The more flexibility a firm has in this regard, the less its cash flows are affected by economy-wide conditions, and the lower its risk. Flexibility is hard to observe directly, but it can be inferred from the book-to-market ratio (BM). High-BM firms are generally burdened with more unproductive capital and are hence less flexible to downsize in bad times. Thus, according to the theory, high-BM firms are riskier than low-BM firms, especially in bad times.

However, the results of this project suggest that the above theory should not be followed blindly, because the book-to-market ratio calculated from balance sheet data increasingly misrepresents inflexibility and risk. The reason is that book value is understated by the unrecorded R&D capital, which is even less flexible to adjust than physical capital. The results also suggest that considering the book-to-market ratio and R&D capital together is a better way to evaluate a firm’s inflexibility and risk.

Dr. Edoardo Gallo, CERF Fellow, November 2017

Financial networks and systemic collapse

In the aftermath of the 2008 crisis, Haldane – the Chief Economist at the Bank of England – stated that “the regulation of the network is needed to ensure appropriate control of large, interconnected institutions […] the financial network should be structured so as to reduce the chances of future systemic collapse”.

A project by CERF Fellow Edoardo Gallo and his research collaborators Syngjoo Choi (Seoul National University) and Brian Wallace (UCL) investigates what types of network structure cause financial contagion. In a lab experiment, participants can buy or sell assets in an artificial market, knowing that one participant has been hit by a monetary shock and that it may spill over to others because all participants are connected by a network of liabilities. Each participant faces a trade-off between selling assets to raise liquidity in the short term to avoid bankruptcy and holding on to them to realize a return in the long term. The researchers vary the network of liabilities and the size of the shocks.

The results show that contagion is particularly prevalent in core-periphery networks, formed by a small number of highly connected participants – the core – with the remaining participants at the sparsely connected periphery. The dynamics of contagion involve sharp falls in asset prices as all participants try to sell to raise liquidity, and this leads to systemic collapse even for moderately sized shocks. The researchers also find that a participant’s ability to comprehend network-driven risk predicts how likely they are to go bankrupt.

Core-periphery networks are ubiquitous in financial markets, and the results of this project suggest they may be particularly susceptible to systemic collapse.

The paper is available here.

Dr. Alex S.L. Tse, CERF Research Associate, September 2017

Probability weighting and stock trading behaviours

Humans are far from perfect decision-making machines, especially in the face of uncertainty. One prevalent phenomenon is that individuals tend to overweight probabilities associated with extreme events. Examples include lottery punters’ optimism about winning a jackpot and air passengers’ anxiety about plane crashes. In the context of finance, what are the implications of this psychological bias for investment decisions?

CCFin research associate Alex Tse and his collaborators Vicky Henderson and David Hobson investigated the effect of probability weighting on stock trading behaviours through a theoretical model of asset sale. They found that agents with probability weighting adopt trading strategies of a stop-loss, but not a gain-exit, form: on the one hand, probability overweighting of the worst scenario encourages investors to offload a losing stock; on the other, probability magnification of the best outcome encourages investors to stay invested in a rally. This provides a potential justification for the popularity of stop-loss orders among retail investors.

Probability weighting is also useful for explaining the “disposition effect”, a well-documented financial anomaly whereby investors sell winning stocks much more often than losing stocks. Existing models typically generate a very extreme disposition effect. With probability weighting included, however, investors are more incentivised to hold a winning stock relative to a losing stock, as they find a lottery-like payoff with positive skewness attractive. This enables the model to deliver a level of the disposition effect much closer to what the empirical literature suggests.

Dr. Yuan Li, CERF Research Associate, July 2017

In his best-selling book, Thinking, Fast and Slow, Nobel Memorial Prize in Economics laureate Daniel Kahneman describes anchoring as “...one of the most reliable and robust results of experimental psychology”. Using data from real financial markets, CERF research associate Yuan Li and her research collaborators Thomas George and Chuan-Yang Hwang find evidence suggesting that anchoring impedes investors’ interpretation of earnings news.

Anchoring is the tendency for individuals to base their forecasts of an unknown quantity on a salient statistic (the anchor) that might have nothing to do with the quantity being forecasted. The classic example is an experiment in which individuals observe the generation of a random number and are then asked to estimate the percentage of African nations in the UN by adjusting up or down from that number. The estimates are higher (lower) for individuals who observe higher (lower) random numbers. The random number is the anchor in this experiment.

In real financial markets, investors anchor on the 52-week high price (52WH), which is often featured on financial websites and in newspapers. If the stock price prior to a positive (negative) earnings announcement is already close to (far from) the 52WH, investors think the positive (negative) news has already been incorporated into the price, and hence are reluctant to bid the price higher (lower). In other words, investors behave as if future price levels are constrained not to deviate too far from the 52WH.
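
A simple sketch of the anchoring variable, nearness to the 52-week high (the function and names are mine, for illustration only):

```python
# Sketch of the anchoring variable: a stock's nearness to its 52-week high.
import pandas as pd

def nearness_to_52wh(prices: pd.Series, window: int = 252) -> pd.Series:
    """prices: daily close series indexed by date. Returns the ratio of
    the current price to its rolling 52-week (~252 trading day) high.
    Values near 1 mean the stock already trades close to its 52WH, so
    positive news is predicted to move the price up by less."""
    high_52w = prices.rolling(window, min_periods=1).max()
    return prices / high_52w
```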

Dr. Jisok Kang, CERF Research Associate, June 2017

Does the Stock Market Benefit the Economy?

 

A research project carried out by CERF Research Associate Jisok Kang and his co-author Kee-Hong Bae provides evidence that a functionally efficient stock market does promote economic growth.

Finance researchers have extensively investigated the role of the stock market in the real economy. For instance, whether well-functioning stock markets promote economic growth has received a great deal of attention from academics and policymakers. However, measuring the functionality of stock markets has been a big empirical challenge. Researchers have typically used size measures (e.g., total stock market capitalization) as a proxy for stock market functionality and have not found robust evidence that stock market development is associated with future economic growth.

The research proposes a new measure of the functional efficiency of a stock market: stock market concentration. It shows that concentrated stock markets dominated by a small number of large firms negatively affect economic growth: in countries with concentrated stock markets, capital is allocated inefficiently, which results in sluggish IPO activity, innovation, and economic growth. These findings suggest that a concentrated stock market offers insufficient funds to emerging, innovative firms; discourages entrepreneurship; and is ultimately detrimental to economic growth.

Dr. Chryssi Giannitsarou, CERF Fellow, May 2017

Our social interactions are informative of our investment decisions.

When we are investing, we don’t mindlessly copy our peers, according to new research carried out by CERF fellow Chryssi Giannitsarou and her research collaborators Luc Arrondel, Hector Calvo Pardo and Michael Haliassos. Instead, we are more likely to participate in the stock market if we believe that our immediate social circle is more informed about it.


The authors surveyed a representative sample of French households in 2014 and 2015 to capture measures of stock market participation and social connectedness, but also beliefs and perceptions of stock market returns. They wanted to find out whether those households invested by mindless copying, which may lead to stock market bubbles and fads, or by processing information and trying to copy good practice.


The results show that people who perceive a higher share of their financial circle as being informed about the stock market or participating in it are more likely to invest in stocks themselves. The conditional portfolio share invested in stocks is influenced by social interactions only to the extent that social interactions influence perceptions of past stock market performance and, through them, stock market expectations. There is a trace of mindless copying of behaviour, but only in the decision of whether or not to participate at all in the stock market.

All in all, their research findings suggest that social interactions tend to reduce rather than exacerbate financial literacy limitations, and to affect financial decision-making by being informative rather than ‘contagious’.

If you would like to read the relevant paper, it is available here.

Latest news

CERF Fellow Dr. Bang Dang Nguyen - paper will be presented at the AFA Annual Meeting 2020 in San Diego

Sep 26, 2019

“Political Connections and Firm Value: Evidence from Close Gubernatorial Elections”, joint with Q.A. Do (SciencesPo Paris) and Yen-Teik Lee (Curtin University), has been accepted and will be presented at the AFA Annual Meeting 2020 in San Diego.

CERF Scholar Shiqi Chen presented her paper at Leeds University Business School

Jun 28, 2019

Shiqi Chen presented her paper entitled “Financial Policies and Internal Governance with Heterogeneous Risk Preferences” (joint with Bart Lambrecht) in the Accounting and Finance seminar at Leeds University Business School, the Finance seminar at Essex Business School, CERF Lunch Seminar at Cambridge Judge Business School, and at the 2019 Annual Real Options Conference in London.
