Rise of the Machines

“Music, this complex and mysterious act, precise as algebra and vague as a dream, this art made out of mathematics and air, is simply the result of the strange properties of a little membrane. If that membrane did not exist, sound would not exist either, since in itself it is merely vibration. Would we be able to detect music without the ear? Of course not. Well, we are surrounded by things whose existence we never suspect, because we lack the organs that would reveal them to us. ”
– Guy de Maupassant

“I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space. … Distress not yourself if you cannot at first understand the deeper mysteries of Spaceland. By degrees they will dawn upon you. ”
– Edwin A. Abbott, “Flatland: A Romance of Many Dimensions”

“I wanted to be a psychological engineer, but we lacked the facilities, so I did the next best thing – I went into politics. It’s practically the same thing. ”
– Salvor Hardin (“Foundation”, by Isaac Asimov)

“It is vital to remember that information – in the sense of raw data – is not knowledge, that knowledge is not wisdom, and that wisdom is not foresight. But information is the first essential step to all of these. ”
– Arthur C. Clarke

“Any sufficiently advanced technology is indistinguishable from magic. ”
– Arthur C. Clarke

“What are you doing, Dave?”
– HAL (“2001: A Space Odyssey” by Arthur C. Clarke)

I thought it was appropriate in a note focused on the evolution of machine intelligence to start with some quotes by three of the all-time great science fiction writers – Abbott, Asimov, and Clarke – and something by the father of the short story, de Maupassant, as well. All four were fascinated by the intersection of human psychology and technology, and all four were able to communicate a non-human perspective (or at least a non-traditional human perspective) in their writing – which is both incredibly difficult and completely necessary in order to understand how machines “see” the world. Asimov in particular is a special favorite of mine, as his concept of psycho-history is at the heart of Epsilon Theory. If you’ve never read the Foundation Trilogy and you don’t know who Hari Seldon or the Mule is … well, you’re missing something very special.

All of these authors succeed in portraying non-human intelligence in terms of the inevitable gulf in meaning and perception that must exist between it and human intelligence. Hollywood, on the other hand, almost always represents non-human intelligence as decidedly human in its preference and utility functions, just with a mechanical exoskeleton and scary eyes. Thus the Daleks, the original Cylons, the Terminators, the Borg, etc., etc.

[Images: movie robots]

At least the most recent version of Battlestar Galactica recognized that a non-human intelligence forced to interact with humans would perhaps choose a less menacing representational form.

[Image: robot evolution]

The way to think about machine intelligence is not in terms of a mechanical version of human intelligence, but in terms of a thermostat and an insect’s compound eye.

[Images: a thermostat and an insect]

What I mean by this is that a thermostat is a prime example of a cybernetic system – a collection of sensors and processors and controllers that represents a closed signaling loop. It might seem strange to think of the thermostat as “making a decision” every time it turns on the heat in your house in response to the environmental temperature falling below a certain level, but this is exactly what it is doing. The thermostat’s decision to turn on the heat follows, from an Information Theory perspective, precisely the same process as your decision to buy 100 shares of Apple, just a simpler and more well-defined process. The human brain is the functional equivalent of a really complex thermostat, with millions of sensors and processors and controllers. But that also means that a really complex thermostat is the functional equivalent of a human brain.
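To make the closed signaling loop concrete, here is a minimal sketch in Python. The setpoint, hysteresis band, and temperature readings are illustrative values of my own, not anything measured.

```python
# A closed signaling loop in miniature: sense the environment, process the
# signal against a setpoint, send a control signal back out. The setpoint,
# hysteresis band, and temperatures are illustrative assumptions.

def thermostat_step(sensed_temp, heater_on, setpoint=68.0, hysteresis=1.0):
    """One pass through the sense-process-control loop; returns the new heater state."""
    if sensed_temp < setpoint - hysteresis:
        return True           # too cold -> turn the heat on
    if sensed_temp > setpoint + hysteresis:
        return False          # warm enough -> turn the heat off
    return heater_on          # inside the dead band -> hold the current state

state = False
for temp in [70, 69, 67.5, 66, 66.5, 68, 69.5, 70.5]:
    state = thermostat_step(temp, state)
    print(f"sensed {temp:5.1f}F -> heater {'ON' if state else 'OFF'}")
```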

The human brain has one big advantage over a thermostat, and that is the evolutionary development of a high degree of self-awareness or consciousness. There’s nothing mystical or supernatural about consciousness, nor is it somehow external or separate from the human brain. Consciousness is simply an emergent property of the human cybernetic system, just like Adam Smith’s Invisible Hand is an emergent property of the market cybernetic system. It is an incredibly useful property, however, allowing both the construction of thought experiments that radically accelerate learning by freeing us from the ponderously slow if-then laboratory that Nature and evolution provide non-self-aware animals, and the construction of belief systems that radically promote and stabilize joint utility functions of human communities. Our proficiency as both a tool-using animal and a social animal stems entirely from the development of consciousness, and we are an incredibly robust and successful species as a result.

On the other hand, a thermostat has one big advantage over the human brain in its decision-making process, and that’s the lack of evolutionary and social constraints. As phenomenally efficient as carbon-based nerve cells and chemical neurotransmitters might be, they can’t compete on a fundamental level with silicon-based transistors and electrons. As effective as social constructs such as language and belief systems might be in creating intra-group human utility, there is no inherent tension or meaning gap or ecological divide in communications between thermostats. The concept of music is a wonderful thing, but as de Maupassant points out it is entirely dependent on “the strange properties of a little membrane.” How many other wonderful concepts are we entirely ignorant of because we haven’t evolved a sensory organ to perceive them with? Just as the two-dimensional inhabitants of Flatland find it essentially impossible to imagine a third dimension, so are we conceptual prisoners of Spaceland. At best we can imagine a fourth dimension of Time in the construction of a helix or a hypercube, but anything beyond this is as difficult as storing more than 10 digits in our short-term memory. Machines have no such evolutionary limitations, and decision-making in terms of twelve dimensions is as “natural” to them as decision-making in terms of three.

This is why it’s useful to think of machine intelligence in terms of the compound eye of an insect. Not only are most compound eyes able to sense electromagnetic radiation that is invisible to the camera eyes of most vertebrate animals, particularly in the ultraviolet end of the spectrum, but there is a multi-dimensionality to insect vision that is utterly alien to humans. It’s not that insect vision is super-human, any more than machine intelligence is super-human. In fact, in terms of image resolution or location of an object within a tight 3-dimensional field, the camera eye is enormously superior to the compound eye, which is why the camera eye evolved in the first place. But for a wide field of vision and the simultaneous detection of movements within that field, the compound eye has no equal. It’s that simultaneity of movement detection that is so similar to the parallel information processing approach of most machine intelligences and is so hard to describe in human information processing terms.

[Image: an insect compound eye]

Because the compound eye associates a separate lens with each photo-receptor, creating a perceptive unit called an ommatidium, there is no composite 3-dimensional visual image formed as with twin camera eyes. Instead there are hundreds or thousands of separate 2-dimensional visual images processed simultaneously by insects, each of which is driven by separate signals. It’s customary to describe insect vision as a mosaic, but that’s actually misleading because the human brain sees a mosaic as a single image made up of individually discrete pieces. To an insect, there is no such thing as a single visual image. Reality to an insect is hundreds of visual images processed simultaneously and separately, and there is no counterpart to this in the human cybernetic system. To a thermostat, though, with no evolutionary baggage to contend with … no problem. As a result, if a functional task is best achieved by seeing the world as an insect does – through simultaneous views of multiple individual fields – a machine intelligence can outperform a human intelligence by a god-like margin.
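If it helps to see the difference in code rather than prose, here is a toy sketch of an ommatidium-style architecture, with invented patch counts, sizes, and thresholds: many small receptor fields, each compared only against its own prior signal, and no composite image ever assembled.

```python
import numpy as np

# Each "ommatidium" watches its own small patch and reports motion independently;
# the patches are processed simultaneously and never stitched into one image.
# Patch counts, sizes, and the motion threshold are illustrative assumptions.

rng = np.random.default_rng(0)
n_ommatidia, patch = 400, 5                      # 400 tiny 5x5 receptor fields
frame_t0 = rng.random((n_ommatidia, patch, patch))
frame_t1 = frame_t0.copy()
frame_t1[37] += 0.5                              # something moves in field 37 only

motion = np.abs(frame_t1 - frame_t0).mean(axis=(1, 2))   # all fields at once
print("fields reporting motion:", np.where(motion > 0.1)[0])
```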

Over the past five to ten years, there have been three critical advances in computer science that have created extremely powerful machine intelligences utilizing a compound eye architecture.

First, information storage technology developed the capacity to store enormous amounts of data and complex data structures “in-memory”, where the data can be accessed for processing without the need to search for it on magnetic media storage devices. Again, this is a really hard concept to find a human analogy for. The best I can come up with is to envision the ability to just know – immediately and without any effort at “remembering” – the names, addresses, and phone numbers of everyone you’ve ever known in your life. Even that doesn’t really do the technology justice … it’s more like knowing the names and phone numbers of everyone in New York City, simultaneously and without any attempt to recall the information. Your knowledge vanishes the moment electrons stop powering your memory chip, so there’s still a place in the world for permanent magnetic media storage, but that place is shrinking every day.
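A crude way to feel the difference is the sketch below. It illustrates the in-memory idea generically, not HANA itself, and the data sizes are assumptions.

```python
import time

# "In-memory" data is simply known; stored data has to be hunted down.
# The dictionary stands in for an in-memory store, and the sequential scan
# stands in for searching through storage. Sizes are illustrative assumptions.

phonebook = {f"person_{i}": f"555-{i:07d}" for i in range(1_000_000)}

t0 = time.perf_counter()
number = phonebook["person_837201"]                       # hash lookup: no searching
print(f"in-memory lookup: {number} in {(time.perf_counter() - t0) * 1e6:.0f} microseconds")

t0 = time.perf_counter()
number = next(v for k, v in phonebook.items() if k == "person_837201")
print(f"sequential scan:  {number} in {(time.perf_counter() - t0) * 1e3:.0f} milliseconds")
```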

The company that commercialized this technology first, best, and most widely is SAP, in a product they call HANA. I’ve been following its development for about three years now, and it’s changing the world. Does Oracle have a version of this technology? Yes. But if you’ve built a $150 billion market cap company on the back of selling periodic upgrades for a vast installed base of traditional relational database management software applications that query (search) a vast installed base of traditional data storage resources … hmm, how to put this in a nice way … you’re probably not going to be very excited about ripping apart that installed base and re-inventing your lucrative business model. SAP had a lot less to lose and a lot more to gain, so they’ve re-invented themselves around HANA. I have no idea whether SAP the stock is a good investment or not. But SAP the company has a phenomenal asset in HANA.

Second, advances in microprocessor technology, network connectivity, and system control software created the ability to separate physical computing resources from functional computing resources. This phenomenon goes by many names and takes multiple forms, from virtualization to distributed computing to cloud computing, but the core concept is to find enormous efficiencies in information processing outcomes by rationalizing information processing resources. Sometimes this means using hardware to do something that was previously done by software; sometimes this means using software to do something that was previously done by hardware. The point is to stop thinking in terms of “hardware” and “software”. The point is to re-conceptualize a cybernetic system into fundamental terms reflecting efficient informational throughput and functionality, as opposed to traditional terms reflecting the way that humans happened to instantiate that functionality in the past. When I write about re-conceptualizing common investment practices in terms of the more fundamental language of Information, whether it’s technical analysis (“The Music of the Spheres”) or bottom-up portfolio construction (“The Tao of Portfolio Management”), I’m not pulling the idea out of thin air.  There has been just this sort of revolutionary shift in the way people think and talk about IT systems over the past decade, with incredible efficiency gains as a result, and I believe that the same sea change is possible in the investment world.

One of the most powerful aspects of this re-conceptualization of machine cybernetic systems is the ability to create the functional equivalent of an insect’s ommatidia – thousands of individual signal processors working in parallel under a common direction to complete a task that lends itself well to the architecture of a compound eye. This architecture of simultaneity is more commonly referred to as a cluster, and the most prominent technology associated with clusters is an open-source software platform called Hadoop. There are three pieces to Hadoop – a software kernel, a distributed file system (like a library catalog), and a set of procedures called MapReduce (like a traffic cop) – an architecture modeled on the storage and processing systems that Google first built and described for its own internal use. While Hadoop is freely available under an open-source license, I would estimate that Google is at least two generations ahead of any other entity (and that includes the NSA) in understanding and implementing the architecture of simultaneity. Obviously enough, search is a prime example of the sort of task that lends itself well to a machine intelligence organized along these lines, but there are many, many others. No one understands or directs machine intelligence better than Google, and this is why it is the most important company in the world.
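For readers who want to see the shape of the idea, here is a toy map/reduce in plain Python. It is not the Hadoop API, just the pattern Hadoop implements, applied to a made-up word-count task.

```python
from collections import Counter
from multiprocessing import Pool

# The MapReduce pattern in miniature: independent mappers each process their own
# shard simultaneously (the ommatidia), and a reducer folds the partial results
# together. The shards and the word-count task are illustrative stand-ins.

def map_shard(shard):
    return Counter(shard.split())

def reduce_counts(partials):
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    shards = ["value growth value", "growth growth liquidity", "value liquidity"]
    with Pool() as pool:                      # each shard mapped in parallel
        partials = pool.map(map_shard, shards)
    print(reduce_counts(partials))            # Counter({'value': 3, 'growth': 3, ...})
```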

Third, methodological advances in statistical inference and their expression in software applications have created the ability to utilize more fully these advances in memory, microprocessors, connectivity, and IT architecture. The range of these methodological tools is pretty staggering, so I will only highlight one that is of particular interest to the Epsilon Theory perspective. Last week I wrote about the problem of the ecological divide in every aspect of modern mass society (“The Tao of Portfolio Management”) and how humans were poor calculators of both aggregate characteristics derived from individual signals and individual characteristics derived from aggregate signals. Over the past 15 years, Gary King at Harvard University has pioneered the development of unifying methods of statistical inference based on fundamental concepts such as likelihood and information. I may be biased because Gary was a mentor and dissertation advisor, but I think his solutions to the problem of ecological inference can fundamentally change portfolio construction and risk management practices, especially now that there are such powerful cybernetic “engines” for these solutions to direct.

As described in “The Market of Babel”, these advanced machine intelligences based on the compound eye’s architecture of simultaneity have effectively taken over one particular aspect of modern markets and the financial services industry – the provision of liquidity. Understanding and predicting the patterns of liquidity demand are tailor-made for the massively parallel capabilities of these cybernetic systems, and there is no liquidity operation in modern markets – from high-frequency traders trying to skin a limit order book to asset managers trying to shift a multi-billion dollar exposure in the dark to bulge-bracket market-makers trying to post yet another quarter of zero days with a trading loss – that is not completely controlled by these extremely complex and powerful thermostats.

This is a problem for human investors in two respects.

The first is a small but constant problem. Whenever you take liquidity (i.e., whenever you create an exposure) in anything other than a “natural” transaction with a human seller of that exact same exposure, you are going to pay a tax of anywhere from 1/2 to 5 cents per share to the machine intelligences that have divined your liquidity intentions within 50 milliseconds of hitting the Enter button. I’m sorry, but you are, and it’s a tax you can only mitigate, not avoid. The problem is worse the more you use a limit order book and the more you use VWAP, but then again, no active manager ever got fired for “showing price discipline” with a limit and no trader ever got fired for filling an order at VWAP.
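The arithmetic of that tax is easy to sketch. The order size below is hypothetical; the per-share range comes from the paragraph above.

```python
# Rough cost of the liquidity tax at the per-share range cited above,
# for a hypothetical order size.

shares = 250_000
for tax in (0.005, 0.02, 0.05):          # 1/2 cent, 2 cents, 5 cents per share
    print(f"{shares:,} shares x {tax * 100:.1f} cents = ${shares * tax:,.0f}")
```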

The second is a giant but rare problem. All of these machine intelligences designed to optimize liquidity operations are based on the same historical data patterns of human market participation. As those patterns change – particularly if the patterns change in such a way that machine-to-machine transactions dominate or are confused for human-to-machine transactions – it creates a non-trivial chance that an event causing what would otherwise be a small liquidity shock can snowball into a market-wide liquidity seizure as the machine-to-machine transactions disappear in the blink of an eye. This is what happened in the 2010 Flash Crash, and the proportion of machine-to-machine transactions in liquidity provision is, if anything, even greater today. Moreover, the owners of these machine intelligences, especially in the HFT world, are suffering much thinner margins than in 2010, and, I suspect, are taking much larger risks and operating with much itchier trigger fingers on the off switch. I have no idea when the liquidity train wreck is going to happen, but you can clearly see how the tracks are broken, and the train whistle sure sounds like it’s getting closer.

The solution to this second and more troubling problem is not to somehow dislodge machine intelligences from market liquidity operations. It can’t be done. Nor do I have much confidence in regulatory “solutions” such as Liquidity Replenishment Points and the like (read anything by Sal Arnuk and Joe Saluzzi at Themis Trading for a much more comprehensive assessment of these issues). What we need is a resurgence in “real” trading with human liquidity-takers on at least one side of the trade.

Unfortunately, I suspect that we won’t see a return to normal levels of human market activity until the Fed begins to back down from monetary policies designed explicitly to prop up market prices. You might not sell what you own with a Fed put firmly in place, but a healthy market needs buying AND selling, it needs active disagreement on whether the price of a security is cheap or dear. Markets work best and markets work more when investors venture farther out onto the risk curve on their own volition, not when they are dragged out there kicking and screaming by ZIRP and QE.

I don’t know when the Fed will stand down enough to allow normal risk-taking to return to markets, but at some point this, too, shall pass. The trick is how to protect yourself in the current investing environment AND set yourself up to do well in the investing environment to come. Now there are a thousand facets to both aspects of pulling that trick off, and anyone who tells you that he has THE answer for this puzzle is selling snake oil. But I think that part of the answer is to bring machine intelligences out of the liquidity provision shadows and into the light of portfolio construction, risk management, and trading.

Your ability to manage the risk of a liquidity-driven market crash is improved simply by recognizing the current dynamics of liquidity provision and speaking, however haltingly or humanly accented, the machine language of Liquidity. Imagine how much further that ability could be improved if you had access to a machine intelligence designed specifically for the purpose of measuring these liquidity risks as opposed to being another machine intelligence participating in liquidity operations. I am certain that it is possible to create such a liquidity-monitoring machine intelligence, just as I am certain that it is possible to create a correlation-monitoring machine intelligence, and just as I am certain that it is possible to create a portfolio-optimizing machine intelligence. These technologies are not to be feared simply because they are as alien to us as an insect’s eye. They should be embraced because they can help us see the market as it is, rather than as we wish it were or as we thought it was.


The Tao of Portfolio Management

“Shape clay into a vessel;
It is the space within that makes it useful.
Cut doors and windows for a room;
It is the holes which make it useful.
Therefore benefit comes from what is there;
Usefulness from what is not there. ”
– Lao Tzu

“The limits of my language means the limits of my world. ”
– Ludwig Wittgenstein

“The question is not what you look at, but what you see.”
– Henry David Thoreau

“A European says: I can’t understand this. What’s wrong with me?
An American says: I can’t understand this. What’s wrong with him?”
– Terry Pratchett

I want to start this note in a manner that’s sure to annoy some readers, and that’s to reference the George Zimmerman trial. If gold is the third rail of financial commentary (“How Gold Lost its Luster”), then the Zimmerman trial must be the Death Star planet-destroying laser beam of such notes. But the shaping of the post-trial Zimmerman Narrative is a precise example of the behavioral phenomenon that I want to examine this week. So with considerable trepidation, here goes …

As discussed in last week’s note (“The Market of Babel”), groups speaking different languages – whether it’s an everyday language like English or Japanese, or an investing language like Value or Growth – have both a translation friction to overcome in inter-group communications as well as a potential dislocation of meaning in vocabulary and grammar. This latter problem is far more insidious and injurious to joint utility functions than the former, and the post-trial “conversation” between groups that support the Zimmerman verdict and groups that are appalled by the Zimmerman verdict is a perfect example of the problem of meaning. In fact, there is no conversation possible here at all, because each group is seeing the same observable data points through very different perceptual lenses. The chasm of meaning between these two groups is formed by an ecological divide, which is also a common source of meaning disparity in market communications and languages. As such, it is well worth our attention in Epsilon Theory.

An ecological divide is a difference in perception of useful signal aggregation. In the Zimmerman case, those appalled by the verdict are seeing the broad social context of the available information. How is it possible, they ask, for a society to allow an unarmed black minor walking home from a store to be shot dead with no legal sanction against his killer? Those supportive of the verdict, on the other hand, are seeing the individual instantiation of the available information in this particular case. How is it possible, they ask, to evaluate the specific evidence presented in this specific trial against the specific legal charges and fail to deliver a Not Guilty verdict?

The Western system of jurisprudence is based on liberal notions (that’s John Stuart Mill liberalism, not Walter Mondale liberalism) of the primacy of individual rights, as opposed to communitarian notions of aggregate social utility. What this means is that the rules of the trial-by-jury game, from jury instructions to allowed evidence, are set up to focus attention on specific fact patterns relevant to a specific defendant. And as a result, it makes a lot of sense (to me, anyway) that Zimmerman was found Not Guilty by virtue of reasonable doubt regarding the specific charges levied against him. On the other hand, the rules of the game for the Western system of political representation do not give a whit about individual rights, but favor the ability to mobilize like-minded groups of citizens on the basis of widely-held social grievances. So it also makes a lot of sense to me that the political dynamics outside the courtroom treat Zimmerman-like actions (and Zimmerman individually as a member of the Zimmerman-like set) as unjust and the object of sanction.

Each perspective is entirely valid within its relevant sphere of aggregation, and each perspective is extremely problematic in the other sphere. To deny the existence of racial bias in the aggregate data regarding crime and punishment in the United States – the application of the death penalty, for example – is, in my opinion, like denying that the Earth goes around the sun. However, this does NOT mean that ANY individual death penalty case, much less every death penalty case, is necessarily racially biased or that racial bias was a meaningful cause of any death penalty decision. I know this seems counter-intuitive … how can a population have a meaningful attribute in the aggregate, but no individual member of that population demonstrate that attribute in a meaningful way? … but it’s the truth. Or rather, it’s the inescapable conclusion of a consistent application of statistical logic.

Systems that demonstrate this sort of ecological divide are much more common than you might think, and are at the heart of any tool or technology that utilizes large numbers of small signals – each of which is inconsequential in its own right – to create or observe a meaningful signal in the aggregate. For example, the gamma knife technology used to shrink inoperable cancerous tumors works in this manner, by focusing hundreds or thousands of weak radiation beams from multiple directions on a cluster of cells. No single beam is meaningfully dangerous in and of itself, because otherwise the healthy cells hit by one of these rays might be injured, but the combination of many of these beams is deadly to a cell. Each individual beam of radiation is “Not Guilty” of causing irreparable harm to any individual cell, and no individual beam is biased/targeted specifically to any type of cell. But the overall system is a superb killer of cells subject to the bias/targeting of the system. The effectiveness of the gamma knife technology is entirely based on statistical assessments of probabilistic outcomes of cellular damage when exposed to a burst of radiation, both at the individual and aggregate levels. Because the radiation bursts can be reduced to multiple rays with very small individual impacts (probabilistically speaking), an ecological threshold can be calculated and implemented to create a potent cancer treatment therapy.

[Image: gamma knife]
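The threshold logic is easy to see in a toy calculation. The beam count, per-beam dose, and damage threshold are illustrative assumptions, not clinical numbers.

```python
# No single beam is dangerous to any cell it passes through, but the cell at the
# focus receives every beam at once and crosses the damage threshold.

n_beams = 200
dose_per_beam = 0.3
damage_threshold = 10.0

healthy_cell = dose_per_beam * 1          # grazed by a single stray beam
target_cell = dose_per_beam * n_beams     # sits at the focus of all of them

for name, dose in (("healthy cell", healthy_cell), ("target cell", target_cell)):
    print(f"{name}: dose {dose:5.1f} -> {'destroyed' if dose > damage_threshold else 'unharmed'}")
```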

It’s no accident that technologies like the gamma knife are largely computer-driven, because humans are remarkably poor calculators of ecological thresholds. The human brain has evolved over millions of years and we have trained ourselves for hundreds of thousands of years to be very effective social animals making ecological inferences on a scale that makes sense for small group survival on an African savannah, not the smooth functioning of a mass society that spans a continent and has hundreds of millions of members. As a result, we are hard-wired to underestimate the cumulative impact of massive numbers of small signals that form part of a consistent system, and we consistently overestimate the contribution of any one contributory signal when we focus on the aggregate outcome. That latter decision-making mistake, where individual characteristics are improperly inferred from aggregate characteristics, has a name. It’s an ecological fallacy, and it’s an inherent problem for every aspect of human society in the modern age of massive aggregation, from the effective operation of a system of justice to the effective operation of a system of market exchange.

In the case of a justice system, the meaning of Trayvon Martin’s death is different when seen through the lens of individual rights at trial than when seen through the lens of social utility at large. What happened in Sanford was an instantiation of what I believe is a demonstrably unjust and racially biased system, and it deserves political action to recalibrate the societal gamma knife machine that ends up killing black cells preferentially over white cells. But that doesn’t mean that Zimmerman the individual was necessarily guilty of any crime, and to conclude that he is racially biased to a criminal degree because his actions form part of an unjust and racially biased system is an ecological fallacy. Such a conclusion is natural and all too human, but it is also illogical and unjust. It’s also a difficult point to fit into a soundbite for Fox or MSNBC, so I imagine that the demonization of both sides and the further polarization of American society will proceed with all deliberate speed.

In the case of a system of market exchange, I want to make two points about the impact of ecological divides and the hard-wired human tendency to make poor decisions under the influence of an ecological fallacy. The first, which I’ll only note briefly today but will describe in much more detail in subsequent weeks, is that it’s crucial for any investor to understand the basics of computer-driven methodologies of ecological inference. These methodologies, which fall under the rubric of Big Data, are driving revolutionary applications in fields as diverse as medicine, oil and gas exploration, and national security (this is the technology that underpins the recently revealed NSA monitoring program of mobile telephone meta-data). The technology has made some inroads within the financial services industry, particularly in the liquidity operations of market-makers (see “The Market of Babel”), but is surprisingly absent in risk management and security selection applications. It’s coming. And when these technologies do arrive, their impact on investing and money management will be as significant as that of the telegraph or the semiconductor. My hope is that Epsilon Theory will play some role in that arrival, both as a herald and as a provider.

The second point is that there is a huge ecological divide between investors, based on – as with all ecological divides – the perceived level of useful signal aggregation. When market participants describe themselves as bottom-up or fundamental investors, they typically mean that they base their decisions on signals pertaining to individual securities. When market participants describe themselves as top-down or macro investors, they typically mean that they base their decision on signals pertaining to an aggregated set of securities, perhaps an entire asset class of securities. For both bottom-up investors and top-down investors the English language uses the same word – “portfolio” – to describe the collection of securities that they own. But there is an enormous difference in meaning between a collection of securities that is seen and understood as an aggregate collection of securities versus a collection of securities that is seen and understood in terms of the individual members of that collection. The meaning of portfolio construction and risk management is very different when seen through the lens of a bottom-up stock-picking strategy than when seen through the lens of a top-down macro strategy, and the impact of this difference is underappreciated by investors, managers, allocators, and service providers.

To a top-down investor the portfolio IS the unit of analysis. A portfolio of securities is created for the express purpose of creating some set of characteristics in the aggregate. A top-down investor is trying to make a tasty stew, and the individual components of a portfolio are nothing more than ingredients that are intended to be blended together according to whatever recipe the portfolio manager is following. Securities are chosen solely for their contribution to the overall portfolio, and their usefulness is defined solely by that contribution. Individual securities have no meaning in and of themselves to a top-down investor, as it is the portfolio itself which is vested with meaning and is the object of the investor’s behavior.

To a bottom-up investor it is tempting to think of the portfolio as the unit of analysis, because it’s the performance of the portfolio that generates a manager’s compensation. But it’s not. To a bottom-up investor a portfolio is a collection of individually-analyzed exposures, where all the meaning resides in the individual exposures. It’s a “portfolio” simply because the bottom-up investor owns several individual exposures and that’s the word the English language gives to the owning of several individual exposures, not because there was any attempt to create or achieve some set of aggregate characteristics by owning several individual exposures. To use the imagery of Lao Tzu, a portfolio is a clay vessel to a fundamental investor, a provider of empty space that holds what is meaningful, rather than something that is meaningful in and of itself. The existence of a portfolio is an epiphenomenon to the behavior of a fundamental investor, not the object of that behavior, and to treat it as more than that or differently from that is a mistake.

Okay, Ben … that’s a very poetic metaphor. But what’s the problem here in concrete terms?

Both the bottom-up and top-down perspectives are demonstrably valid and effective within their own spheres. But when those spheres blur within investment decision-making you’ve got a problem. For a top-down portfolio manager this usually takes the form of imbuing meaning to an individual security (“Hmm … I think I will choose this stock to express the characteristic I want to have in my portfolio because I heard that it might be the target of a proxy fight. It’s like a free call option, right?”), and for a bottom-up portfolio manager this usually takes the form of tinkering with individual exposures in order to adjust or mitigate some portfolio-level attribute (“Hmm … I’m 40% net beta long and I’m really worried about this market. I better cut some of my high beta longs, maybe add some S&P puts. Gotta manage risk, right?”). Both of these behaviors fall into the chasm of the ecological divide, and the latter in particular is an expression of an ecological fallacy, no different in its logical inconsistency than believing that Zimmerman the individual should have been found guilty because he is part of a large set of individuals and actions that bear responsibility in the aggregate for a significant social iniquity.

The ecological fallacy expressed by tinkering with the individual exposures of a bottom-up, stock-picking portfolio happens all the time, in large part because these portfolios are typically judged and evaluated with the same tools and the same criteria used for top-down portfolios. A bottom-up portfolio manager is absolutely inundated with signals of all sorts about the aggregate characteristics of his portfolio … scenario analyses, volatilities, betas, correlations, active weights, gross and net exposures, etc. … and everyone knows that it’s critical to manage your exposure to this factor and that factor, that you should seriously consider a “trading overlay” or a “volatility hedge” to your portfolio. Or so we are told. And so we believe, because every institutional investor asks the same questions and collects the same performance and exposure data based on aggregate portfolio characteristics. We believe that everyone knows that everyone knows that it’s critical to manage your exposure to this factor or that factor, and thus it becomes Common Knowledge. And once it becomes Common Knowledge, then even if a fundamental investor privately believes that this is all hokum for the way he manages money, it doesn’t matter. The dynamics of the game are such that the rational choice is to go along with the Common Knowledge, else you are the odd man out. The Common Knowledge game is rampant in the business of money management, in exactly the same way that it is rampant in the intrinsic market activities of managing money.

The best stock-picking portfolio managers I know ignore 99% of the portfolio-level data they are bombarded with, and good for them! A logically consistent bottom-up portfolio manager does not “manage to” some target Volatility or Sharpe Ratio or any other aggregate portfolio characteristic, because it makes no sense given what a portfolio means to a logically consistent fundamental investor. Again to refer to Lao Tzu, portfolio and risk management tools for the fundamental investor are more useful if they cut out measures and algorithms that do not make sense for the purpose or meaning of “portfolio” in the context of investing in individual securities.

But does that mean that fundamental investors are destined to fly by the seat of their pants through what is a decidedly foggy and stormy environment? Are there no effective instruments or tools that can help allocators and managers understand what makes one stock-picking portfolio different from, or better than, another? I think that there are – or rather, could be – but these instruments need to be designed on the basis of what a portfolio means to a bottom-up investor, not what a portfolio means to a top-down investor. Unfortunately, every portfolio risk management tool or concept on the market today (to my knowledge) is based on the top-down investor’s perspective of portfolio-as-tasty-stew, as the direct object of analysis for the risk management tool, rather than the bottom-up investor’s perspective of portfolio-as-clay-vessel, as the indirect object of analysis for the risk management tool.

So what is a useful way of evaluating a portfolio-as-clay-vessel? To answer that question we need to ask why a fundamental investor has a portfolio at all. Why not just have three or four very large positions in your highest conviction stock-picking investment ideas and call it a day? One answer, of course, is that this approach doesn’t scale very well. If you’re managing more than a hundred million dollars, much less several billion dollars, finding sufficient liquidity depth in your best ideas is at least as difficult a task as identifying the best ideas in the first place. But let’s leave this aside for now as a practical challenge to a highly concentrated portfolio, not a fundamental flaw.

The fundamental flaw with concentrating investment decisions in a handful of exposures is that any investment is an exercise in decision-making under uncertainty. All fundamental investors “know their companies” as well as they possibly can, but in this business you’re wrong about something every single day. And that’s fine. In fact, it’s perfectly fine to be wrong more often than you’re right, provided that you have a highly asymmetric risk/reward payoff associated with being right or wrong with your fundamental analysis. In the same way that you would think about your bets at a horse track in terms of the expected pay-off odds AND your assessment of the expected race outcome, so are the exposures within a bottom-up portfolio based on a joint view of the likelihood of being right about future events AND the pay-off for being right. Different managers have different business models and views about the types of bets and the time frames of bets that are right for them, but this is the common language for all bottom-up investment strategies.

Thinking in terms of this joint probability function reveals why a bottom-up investor owns more than three or four exposures. Your best investment idea may not be (and in fact rarely is) the one where you are simply the most confident of the horse winning the race. It’s the one where you are most confident of the horse winning the race relative to the expected pay-off for winning the race and the expected loss for losing the race. Your best investment idea may well be (and in fact often is) based on a low probability event with a very high pay-off if you’re right and a reasonably low cost if you’re wrong, but you would be a fool to have a highly concentrated portfolio based solely on low probability events because the odds are too high that you will run into a streak of bad luck where none of the low probability events occur. Instead, you want your investment ideas to be sized in a way that maximizes the total of the expected returns from all of these individual joint probability calculations, but within a framework that won’t let a run of bad luck at the individual level put you out of business. That’s what a portfolio means to a bottom-up investor.
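Here is a bare-bones sketch of that sizing logic. The win probability, payoffs, and business risk limit are illustrative assumptions, and the simulation is far cruder than anything a real manager would run, but it shows why the clay vessel needs to hold more than a handful of low-probability, high-payoff ideas.

```python
import numpy as np

# Each idea is a low-probability, high-payoff, low-cost-when-wrong bet with
# positive expected value. The question is how many such bets the book must
# hold before a run of bad luck stays inside the business risk limit.

rng = np.random.default_rng(42)
p_win, payoff_win, payoff_loss = 0.20, 1.00, -0.10   # per-position returns (assumed)
drawdown_limit = -0.08                               # tolerance for a bad year (assumed)

def worst_year(n_positions, n_sims=20_000):
    """1st-percentile annual return for an equal-weight book of n such ideas."""
    wins = rng.random((n_sims, n_positions)) < p_win
    returns = np.where(wins, payoff_win, payoff_loss).mean(axis=1)
    return np.percentile(returns, 1)

for n in (5, 15, 40):
    wy = worst_year(n)
    flag = "inside the limit" if wy > drawdown_limit else "breaches the limit"
    print(f"{n:2d} positions: 1%-worst year {wy:+.1%} ({flag})")
```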

The language I just described – assessing risk and reward as a function of the probability of various informational outcomes and the pay-off associated with those outcomes – is called Expected Utility. It is the language of both Game Theory and Information Theory, and it is the language of the Epsilon Theory algorithms. In the same way that we can describe the informational surface of a security (see “Through the Looking Glass” and “The Music of the Spheres”), where price forms an equilibrium “trough” and the height of the “walls” around that trough represent the informational strength of the signal required to move the price outcome to a new equilibrium level, so can we describe the informational value of a specific portfolio exposure, where the vector (weight and direction) of that exposure versus the informational surface of the security represents the risk/reward asymmetry of that particular exposure from an Information Theory perspective. These individual informational values can be arrayed against probability distributions of new information coming into play for each individual security, and Monte Carlo simulations can then generate the optimal exposure weights for each individual security within the context of an overall business tolerance for bad luck. The resulting portfolio should be, by definition, the perfectly sized clay vessel to hold the securities chosen by the manager for their individual characteristics within a specified framework of business risk. The portfolio is the byproduct of the risk/reward attributes of the individual securities, not a directly constructed entity, and its own attributes (while measurable by traditional top-down tools if you care to do so) are relegated to the background where they belong.

I recognize that the preceding paragraph is quite a mouthful, and the language is foreign to most readers, in particular most bottom-up investors. I mean … very few bottom-up investors read up on Simpson’s Paradox or the latest applications of negative binomial stochastic distributions in their spare time. A stock-picker reads 10-Q’s and bond covenants in his spare time. A stock-picker is fluent in the written language of financial statements and the body language of management one-on-one’s, not the mathematical language of causal inference. But unfortunately there’s no getting around the mathematical language of statistical logic and causal inference whenever you start to aggregate complex things into complex collections of things, particularly when trillions of dollars are sloshing around in these complex aggregations. Without the structure and guard rails of mathematical tools and constructs, human decision-makers tend to fall into ecological chasms whenever they turn their focus from the individual level to the aggregate level to the individual level again.

The problem is that bottom-up investors have been ill-served by those who ARE fluent in these statistical languages. The available tools for portfolio construction and risk management aren’t guard rails at all to a bottom-up investor, but actually serve to encourage ecological fallacies and poor portfolio management. That’s because these tools were all designed from a top-down investment perspective, not out of malice or spite, but out of the intellectual hegemony that Modern Portfolio Theory exercises far and wide. It’s time for a re-conceptualization of these tools, one based on the truly fundamental language of Information and a recognition of the validity of different investment perspectives. That’s what I’m trying to achieve with Epsilon Theory.


The Market of Babel

“But Achilles, weeping, quickly slipping away from his companions, sat on the shore of the gray salt sea, and looked out to the wine-dark sea.”
– Homer, “The Iliad”

The story of the Tower of Babel in the Book of Genesis, from whence we get the word “babble”, has always struck me as one of the most interesting Biblical origin myths. After the Flood, mankind is united and strong, speaking a single language. They build a great city and an even greater tower in the land of Shinar, which attracts God’s attention. God comes down from Heaven to see what Man is up to, notes that as a people with one language nothing Man sought would be out of reach, decides that this simply won’t do, and “confounds” their speech so that they no longer understand each other.


The Tower of Babel before (Pieter Bruegel the Elder) …


and the Tower of Babel after (Gustave Dore)

Construction on the tower stops, life in the city becomes untenable, the various linguistic groups scatter to the far corners of the globe, and a jealous God is safe once more from those uppity humans.

As described in a prior note (“Through the Looking Glass”), language is the quintessential example of Common Knowledge (usually called Convention in linguistic studies) in human behavior. This is what language IS … the belief that everyone knows that everyone knows a long-eared rodent that jumps around a lot is called a “rabbit” and not a “gavagai”, and the behavior that stems from that belief. If your group does not share the Common Knowledge or Conventions of another group when it comes to communicating about how to hunt long-eared rodents that jump around a lot, that’s a problem.

But as Jehovah knew all too well (and Quine rediscovered in 1948), the problem with people having different languages is not just the inconvenience of having to translate from one word that describes a long-eared rodent that jumps around a lot to another word that describes the same thing. If that were the only issue, then construction on the Tower could have proceeded, just at a slower pace and under the friction of translation. No, the lack of a shared language places a much more formidable obstacle in the path of human communication – the problem of meaning. Humans possessed of one set of Conventions, such as language, interpret and act on the world differently from humans possessed of another set of Conventions. The observed “facts” of the world will mean something different – sometimes slightly different and sometimes very different – to people possessed of different Conventions, and that difference in meaning is often entirely unbridgeable.

For example, consider another great classical text, the Iliad of Homer. One of the most famous phrases in that epic is “the wine-dark sea” that brooding Achilles contemplates after Agamemnon takes Briseis away from him, a strangely evocative image of the ocean that Homer uses several more times in his tale. But here’s the thing … throughout the Iliad and the Odyssey, Homer never describes the sea as blue. He never describes the sky as blue. He never describes anything as blue. His only use of the Greek word that would later come to mean what we think of as “blue” – kuáneos – is in describing the dark sheen of Hector’s hair and Zeus’s eyebrows. How can the greatest epic poet in human history fail to see the ocean or the sky as blue?

Caroline Alexander has a wonderful essay (“A Winelike Sea”) in the most recent issue of Lapham’s Quarterly (Vol. VI, Num. 3, Summer 2013) that examines this mystery. As she notes, the answer to this conundrum for both Goethe in “Theory of Colors” and William Gladstone (yes, the four-time British Prime Minister was also an acclaimed classicist) was simple: Homer and all the ancient Greeks were color-blind. No, really. The greatest minds of the 19th century (well, Goethe qualifies at least) concluded that most Greeks must have been color-blind until the fifth or sixth century BC when a poet named Simonides used the word kuáneos in a way that might mean “dark blue”. Other analysts came to the conclusion that, well … if Homer wasn’t color-blind, then that must mean that ancient Greek wine wasn’t red or purple, but was often blue! Right.

As Alexander points out, Homer may not have had the same words as we do today for color, but he had many more than we do today for light and the way it interacts with the world – so that the color white is never simply white, but is “glancing white” or “flashing white” or “gleaming white” or “shimmering white” depending on how the light strikes it. And when you start to read Homer’s phrasing through the lens of light and not the lens of color, it makes a big difference in how you understand the text. Unfortunately, no matter how skilled the translator (and this is not Alexander’s conclusion, as she is, after all, a very skilled translator), this means that it is ultimately impossible for us to read the Iliad as Homer intended us to read the Iliad. Homer saw the world very differently than you or I do – not because he was visually impaired or because the water was so alkaline that he had to drink blue wine – but because he and his contemporaries shared a different set of Conventions regarding how to interpret the world. And no matter how much we would like to see the ocean and sky as Homer did, as a quality of the light, we can’t stop seeing the ocean and the sky as blue. I defy anyone in the modern world to look at the picture of Santorini below and NOT use the concept of “blue” in any description of the scene.

[Image: Santorini, Greece]

Homer could. We can’t. The difference in our perception of the world and Homer’s perception is incommensurable and ultimately unbridgeable. Such is the power of language and Convention. Such is the power of Common Knowledge.

Okay, Ben, that’s very interesting and all … but how does this help us become better investors?

First, we have to realize that the two great languages of investing – Value (along with its grammar, Reversion-to-the-Mean) and Growth (along with its grammar, Extrapolation) – are just that … languages. Neither of these sets of Conventions is timeless or universal, and each conditions its speakers to interpret the observed facts of the world differently from the other. Not more truthfully. Just differently. Like any language, the primary usefulness of a shared set of Conventions is not found in inter-tribe communications, where both the friction of translation and the problem of meaning raise their ugly heads, but in intra-tribe communications. And like any language, the larger the tribe that shares the particular set of Conventions, the greater the utility for each individual member of the tribe. Calling a long-eared rodent that jumps around a lot a “rabbit” is much more useful to me if everyone I come into contact with shares the same vocabulary, grammar, and meaning for the word than if a sizable group speaks another language. In the latter case we will inevitably, to some degree, talk past each other whenever we try to communicate about long-eared rodents that jump around a lot, and that creates, by definition, a less efficient behavioral outcome for all of us.

The languages of Value and Growth are always useful to some degree in markets because the tribes that speak these languages are a significant enough proportion of pretty much any investment game to allow for meaningful intra-tribe communication. But the relative proportion of these tribes within any given market for any given security is extremely influential in shaping market game-playing, and the transition and inflection points of this relative proportion are predictive of transition and inflection points in security prices. There are consistent behavioral patterns, as expressed in security prices, associated with the waxing and waning of investment language population proportions. I have found the tools of linguistic evolution, as found in (among other places) the work of Brian Skyrms, particularly Signals: Evolution, Learning, and Information (Oxford University Press: 2010), to be very useful in understanding how the languages of Value/Reversion-to-the-Mean and Growth/Extrapolation wax and wane in their proportion of the overall population of investors for a particular security, and hence their importance in driving market outcomes. These are game theoretic tools, and they are at the core of the Epsilon Theory methodology.
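Here is a toy version of that dynamic, in the spirit of the evolutionary models Skyrms describes rather than anything taken from his book. The payoffs, starting share, and update rule are my own simplifications.

```python
# Replicator dynamics for two investing "languages": a language is more useful
# the larger the tribe that already speaks it, so small shifts in proportion
# feed on themselves. Payoffs, starting share, and step size are assumptions.

def replicator(share_growth, steps=60, rate=0.5):
    path = [share_growth]
    for _ in range(steps):
        x = path[-1]
        fitness_growth = x            # usefulness rises with the tribe's share
        fitness_value = 1.0 - x
        mean_fitness = x * fitness_growth + (1 - x) * fitness_value
        x += rate * x * (fitness_growth - mean_fitness)
        path.append(min(max(x, 0.0), 1.0))
    return path

# Start with Growth-speakers holding a slim 55% majority and watch the market tip.
path = replicator(0.55)
print([f"{p:.2f}" for p in path[::10]])
```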

For example, technology stocks tend to be much more driven by a Growth Narrative than by a Value Narrative. This is particularly true in large-cap tech stocks because the impact of Narrative in general is greater in large-cap stocks. Why? Because an informational “edge” is much harder to come by with large-cap stocks than small-cap or even mid-cap stocks, and as a result game-playing as driven by this Narrative or that is much more prevalent. Unless you are breaking the law, there is no possible way that you will know something about the fundamentals of, say, Apple that no one else knows and that is sufficient to move the stock. You either have a Growth language to speak with other Growth tribe members about Apple, or you have a Value language to speak with other Value tribe members about Apple. There are enough fellow tribe members that you will never be alone or seriously doubt your belief, but the Growth tribe is, historically speaking, a much more “enthusiastic” owner of tech stocks like Apple than the Value tribe.

Put differently, the day the dominant Apple Narrative shifted from “it’s expensive, but …” (a Growth tenet) to “it’s actually really cheap” (a Value tenet) is the day the stock stopped working, and the stock is unlikely to work again – regardless of how big a dividend Apple pays or whether it issues preferred stock (all Value tenets) – until Growth tenets reclaim control of the Apple Narrative. Evaluating how market opinion leaders talk about Apple is more important than what market opinion leaders say about Apple because it reflects the relative proportion and strength of one tribe of Apple owners, with a particular vision of what that ownership signifies and what behavior it entails, versus another tribe of Apple owners with a different vision.

Second, it is critical to recognize that there is a third language of investing in the world today, the language of Liquidity, and it’s not a human language at all. It is the language of Big Data, of computer-driven statistical inference, and if you try to “speak” the language of Value/Reversion-to-the-Mean or the language of Growth/Extrapolation to a computer on the other side of the trade, you are going to lose. Not a lot, but you are going to pay a tax whenever you take liquidity from a computer program. Why? Because algorithms, like Homer, see the world differently than you and I do.

The Conventions and the “biology” of modern computing systems make them very effective pattern recognizers of highly distributed and disparate data signals on a micro-second time horizon. They can “see” Liquidity signals in a way that is as alien to the human brain as the visual signals perceived by insects with compound eyes. Not only are human patterns of liquidity demand completely transparent to a modern liquidity provision algorithm, but also the typical effort made to hide liquidity demand – which is always some variation of chopping up a large order into smaller pieces and then injecting those pieces into the market according to a schedule determined by a sub-algorithm – only creates another sub-pattern or code that is in turn inevitably cracked by the liquidity provision algorithm. If the processing power available to crack these codes were limited to the human brain, then any of these chopping-up sub-algorithms would be sufficient to hide the pattern created by a modified VWAP order or one of its time-delineated kin. But with the processing power available to even the more modest liquidity provision algorithms, there is no hope – absolutely no hope – of creating any trading pattern that is somehow invisible or untraceable. As a result, algorithms dominate the liquidity operations of the modern market and have a significant trading advantage anytime a human decision-maker decides to create an exposure and take liquidity without another human simultaneously making a decision to provide liquidity.
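A toy illustration of why the chopping-up defense fails, with an invented order schedule, sizes, and noise level:

```python
import numpy as np

# A parent buy order sliced into child orders on a fixed schedule leaves a
# periodic footprint in signed volume, and a simple autocorrelation scan
# recovers the schedule from otherwise noisy two-sided flow.

rng = np.random.default_rng(1)
n_ticks, slice_every, child_size = 2_000, 20, 500

signed_volume = rng.normal(0, 200, n_ticks)      # ordinary two-sided background flow
signed_volume[::slice_every] += child_size       # the hidden, scheduled child orders

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

best_lag = max(range(2, 40), key=lambda k: autocorr(signed_volume, k))
print(f"strongest periodicity at a lag of {best_lag} ticks")   # recovers the schedule
```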

In the same way that the relative proportion of Value-speakers to Growth-speakers makes a big difference in the medium to long-term price trends of certain securities, so does the relative proportion of human liquidity takers and providers to non-human liquidity operators make a big difference in the short-term price movements of certain securities. There are tools available to gauge this proportion (Hurst coefficients, for example), and even a cursory awareness of the language of Liquidity can help a portfolio manager anticipate the risk of pernicious reflexive price action (see “The Music of the Spheres”), particularly within an unstable informational framework.
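The Hurst coefficient mentioned above can be estimated in a few lines. The sketch below uses one rough scaling-based estimator on simulated series; the window sizes and test data are purely illustrative.

```python
import numpy as np

# The standard deviation of returns aggregated over a window w scales roughly
# like w**H. H near 0.5 looks like a random walk; H well above 0.5 suggests
# persistent, trending flow. Window sizes and test series are assumptions.

def hurst(returns, windows=(2, 4, 8, 16, 32, 64)):
    sds = []
    for w in windows:
        usable = (len(returns) // w) * w
        agg = returns[:usable].reshape(-1, w).sum(axis=1)
        sds.append(agg.std())
    slope, _ = np.polyfit(np.log(windows), np.log(sds), 1)
    return slope

rng = np.random.default_rng(7)
noise = rng.normal(size=10_000)                  # i.i.d. increments: a random walk
persistent = np.empty_like(noise)                # strongly autocorrelated increments
persistent[0] = noise[0]
for t in range(1, len(noise)):
    persistent[t] = 0.9 * persistent[t - 1] + noise[t]

print(f"H(random walk) ~ {hurst(noise):.2f}   H(persistent) ~ {hurst(persistent):.2f}")
```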

Third, the implementation of any investment strategy can be improved by considering the common language that underpins the languages of Value, Growth, and Liquidity – the language of Information. I use the word “implementation” intentionally, because these insights of Epsilon Theory are less useful if you are buying a security, closing your eyes for three years, and then hoping to wake up and sell for a 30% gain. Epsilon Theory is most useful for investors for whom the path matters. If it matters to you whether or not this security goes down 30% before it ultimately ends up 30%, if you allow for the possibility that you might change your mind about the wisdom of holding this security at this point in time versus that point in time, then you should think about your investing in terms of Information. A concern with strategy implementation is a concern with the risk/return efficiency of exposures over time, and this is where an understanding of the common language of Information is so useful. As described in prior notes (“The Music of the Spheres” and “Through the Looking Glass”), understanding a security in terms of its informational surface (akin to a volatility surface) allows Value and Growth and Liquidity signals to be treated in a unified analytical framework. I’m not saying that those who speak the fundamental language of Information will see “nothing withheld from them which they purpose to do”. But if the power of a common language was enough to frighten God Almighty, well … that sounds like it should at least be good for a 2.0 Sharpe Ratio. Anyone else care for a bite of this apple?


The Music of the Spheres and the Alchemy of Finance

“You say that we go round the sun. If we went round the moon it would not make a pennyworth of difference to me or to my work.”
– Sherlock Holmes (from “A Study in Scarlet” by Arthur Conan Doyle)

“It doesn’t matter if the cat is black or white, as long as it catches mice.”
– Deng Xiaoping

“I could float off this floor like a soap bubble if I wish to. I do not wish to, because the Party does not wish it. You must get rid of those nineteenth-century ideas about the laws of Nature. We make the laws of Nature.”
– O’Brien (from “1984” by George Orwell)

A few million years ago – the blink of an eye in evolutionary terms – our ancestors were roaming around some African savannah in a small band. We are still that social hunter-gatherer, for better or worse, with all the advantages and disadvantages our evolutionary heritage provides. Advantages include opposable thumbs, big eyes, and lots of neurons devoted to pattern recognition … attributes that, among other things, make our species very competent at driving cars and playing video games. Disadvantages include relatively few neurons and no sensory organs for interpreting really large numbers or physical laws that are foreign to an African savannah … attributes that, among other things, make our species poor theoretical astronomers.

We are excellent observers and pattern recognizers. For thousands of years, no astronomical event visible to the naked eye, no matter how minor, has escaped our attention and overwhelming need to find its pattern. But if understanding why these celestial patterns occur as they do requires a belief that the sun is an incomprehensibly large ball of hydrogen plasma 93 million miles away that warps the time/space continuum with its gravitational force … well, it’s pretty easy to understand why a heliocentric theory wasn’t humanity’s first choice.

For thousands of years, then, Common Knowledge – what everyone knows that everyone knows – of humanity’s place in the universe was dominated by this geocentric view, supported in the distant past by various origin myths and since 384 BC and the birth of Aristotle by the Narrative of Classical Science. Like all successful Narratives, Classical Science and Aristotelian geocentrism had a ring of truth to it (“truthiness”, as Stephen Colbert would say) and worked in concert with the interests of the most powerful political and economic entities of the day, from the Alexandrian Empire to the pre-Reformation Catholic Church. For almost 2,000 years the status quo entities of the West – whether explicitly religious such as the Catholic Church or dynastic and quasi-religious such as the Rashidun Caliphate, the Byzantine Empire, and the Holy Roman Empire – were based on a geocentric origin myth. The Narrative of Classical Science was extremely useful in efforts to maintain this myth because it allowed these political institutions to present geocentrism within the “modern” and compelling framework of Greek culture and learning, as opposed to the rather grim and ancient oral traditions of a nomadic desert tribe. Charlemagne may have famously used the sword to convert entire tribes to Christianity, but over a longer period of time Aristotle proved even more effective.

Unfortunately, however, there was a problem with the Aristotelian geocentric view of all the heavenly bodies circling the Earth in unison, creating a perfect and timeless “music of the spheres” … the data didn’t fit the theory. Mars, for example, periodically interrupts its steady sweep across the sky and moves backward for a couple of months at a stretch (as do all of the planets to one degree or another), which is not what a regular orbit around the Earth would produce. Why? Because in truth both Earth and Mars go around the sun, and since Earth travels on a faster, inner orbit, it periodically overtakes Mars, so that Mars appears to move backward against the stars when viewed from Earth.

[Image: the retrograde motion of Mars as seen from Earth]

Fortunately for the Narrative of Classical Science, however, around 140 AD an Alexandrian Greek named Claudius Ptolemy figured out how to reconcile the observed astronomical patterns with Aristotelian theory by devising the notion of “epicycles” and “deferent points”.

[Images: Aristotle; the geocentric model of the heavens]

Claudius Ptolemy (c. 90 – 168 AD)

In the Ptolemaic system, planets don’t orbit around the Earth directly. Instead, they orbit around a point in space (the epicycle center) that orbits around another point in space right next to Earth (the deferent center). The result is a flower-like orbit around the Earth for every planet, generating periods of retrograde movement as seen from the Earth.

If you ever had a Spirograph as a child (my favorite toy ever!), you’ll immediately understand Ptolemy’s theory. Basically he re-conceptualized the solar system as a giant, complex Spirograph, and through that brilliant insight the Narrative of Classical Science was saved.

[Image: a Spirograph drawing]

Almost all of the observed data fit the Ptolemaic model well, and the theory was effective at predicting future astronomical events like eclipses and transits. Not perfect, but effective. For more than 1,000 years after his death in 168 AD, Ptolemy was the first and last word in everything to do with astronomy and astrology in both the Christian and Islamic worlds. Now that’s a useful Narrative!

So what went wrong with the Spirograph model of the universe? In school we learn that Copernicus “discovered” the heliocentric theory of the solar system and published a book to that effect in 1543, thus launching the Copernican Revolution. The popular implication is that it was the strength of the new ideas themselves that won the day for Truth and Reason against the narrow-minded intellectual tyranny of the Church. Yeah, right. In fact, it wasn’t until more than 70 years after Copernicus died that the Church got around to condemning his book and his theory. It took that long for his ideas to become dangerous to Rome because it took that long for his ideas to become useful to political and economic entities in Northern Europe. It also took that long because the world had to wait for Kepler and Galileo to improve on Copernicus so that his theory fit the observed data more comprehensively AND more effectively than Ptolemy’s.

[Images: portraits of Copernicus, Kepler, and Galileo]
Nicolaus Copernicus (1473-1543)
Johannes Kepler (1571-1630)
Galileo Galilei (1564-1642)

I want to focus on that last point for a minute. The original heliocentric model that Copernicus developed was a lot simpler than the Ptolemaic model, but it didn’t work very well … if you wanted to predict an eclipse or the date that Easter would occur in some future year, you were still better off using the good old Ptolemaic tables. To make the observed data fit his model, Copernicus ultimately had to take the same Spirograph approach that Ptolemy had used 1,400 years earlier, complicating his original ideas enormously by introducing epicycles and the like. The problem for Copernicus was that he was hooked on the idea of circular orbits. It wasn’t until Kepler modified the heliocentric model with the idea of elliptical planetary orbits in 1609 that everything fell into place with a single simple theoretical structure. And it wasn’t until Galileo made his telescopic observations of the phases of Venus in 1610 that the Copernican model accounted for observed facts that the geocentric model could not possibly support.

In the history of Ptolemy and Copernicus we see the three necessary and sufficient conditions for a “paradigm shift”, which is just another term for an abrupt change in the Common Knowledge surrounding some socially-constructed phenomenon:

1) new data observations that fit the new paradigm better than the old paradigm;

2) new ideas that create a simpler and more fundamental structure for the new paradigm relative to the old paradigm;

3) political and economic entities that come to see their self-interests better supported by the new paradigm than by the old paradigm.

I think it’s possible that we are on the cusp of just such a paradigm shift within the investment world, away from a narrow-minded faith in the power of Modern Portfolio Theory and its econometric foundations, and towards a more inclusive view of markets that incorporates an appreciation of historically-conditioned behavior as well as patterns of strategic decision-making under uncertainty.

Maybe that’s just wishful thinking on my part, but the necessary and sufficient conditions for change are present, including the realization by powerful political and economic entities that the current system … well, it just ain’t working. Structural changes in markets (see “How Gold Lost Its Luster”) are eroding business models left and right. The collapse in trading volumes is poison to anyone who worships at the altar of Flow, like bulge bracket sell-side firms, and rampant disintermediation is death to gatekeepers like funds-of-funds and consultants. I mean, is your view on whether to buy or sell Apple really going to be influenced by the umpteenth sell-side model of Apple’s gross margins? Do you really need a consultant to tell you how to buy market exposure through ETFs?

Of course, the same thing happened the last time we suffered through a multi-year investing environment of alpha scarcity and beta dominance, back in the 1930’s. Market makers and investment intermediaries dropped like flies throughout the decade, a process that – like today – was accelerated by sharp shifts in the regulatory environment (Glass-Steagall in 1933, Dodd-Frank in 2010). In fact, it really wasn’t until the mid-1950’s that the financial services industry began to grow dramatically again, not coincidentally with the introduction of Modern Portfolio Theory in 1952 and the expansion of retail brokerages (especially “the thundering herd” of Merrill Lynch) onto every Main Street in the U.S. These twin Narratives of the 1950’s – Everyman Stock Ownership and Modern Portfolio Theory – drove an investment paradigm shift that created modern Wall Street.

I don’t know for sure what the 21st-century equivalents of Everyman Stock Ownership and Modern Portfolio Theory will be for Wall Street, or how and when the associated Narratives will develop. But there is no more important question that Wall Street needs to answer in order to reinvent itself (again), and if I had an hour of Gary Cohn’s time this is what I’d want to talk about. I think that Epsilon Theory is a good place to start in evolving Modern Portfolio Theory into something more useful, and I think that there is already an established set of people and practices that can push paradigmatic change forward: traders and technical analysis.

Like many investors trained with a fundamental bias, for years I pooh-poohed the very idea that technical analysis had anything to offer a “true” investor. Technical analysis was the province of traders, and to call someone a “trader” was about the worst insult you could deliver in these circles. It was an entirely pejorative term, and it meant that you weren’t as serious or as thoughtful as a praiseworthy “long-term investor”. I was foolish to hold this bias, and I was a less effective portfolio manager for it. My job as a portfolio manager was to make money for my clients – to catch mice, as Deng Xiaoping would put it – not to make money in a theoretically pure way. If technical analysis could have helped me catch more mice – and it could – then I should have embraced it, not dismissed it out of hand.

Technical analysis is, at its heart, behavioral analysis, and as such is prime real estate to build a new investment paradigm that incorporates game theoretic behaviors. 

Now don’t get me wrong … there are HUGE problems with the application of technical analysis today. Technical analysis requires a Copernican Revolution. By that I mean it needs to be re-conceptualized away from the Spirograph models and the naïve empiricism that currently dominate the effort. Not because the current conceptualization is a failure, any more than the Ptolemaic conception of the solar system was a failure. The world functioned quite well for 1,400 years using Ptolemaic tables to predict the position of planets and stars, thank you very much. But technical analysis could accomplish so much more, both in terms of accuracy and of scope, if it were put on a more solid theoretical foundation. If you try to launch a rocket while believing in a geocentric model of the universe, you’re going to fail. I think that technical analysis could, in fact, “launch a rocket” in that it could drive a paradigm shift in how we think about and operate effectively within modern markets, but only if we change the conceptual center of the trading universe from Price to Information. 

When you talk with experienced options traders, it seems as if they can “see” securities in terms of volatility space, not the two-dimensional price-over-time matrix that most investors and traders use as their lens. What I want to suggest is to see ALL securities in terms of what I will call “information space”. In prior work (“Through the Looking Glass”) I laid out this methodology in detail, so I won’t repeat that here. The basic idea, though, is to describe a point in time for a security in terms of the information required to move that security’s price from its current equilibrium to a higher or lower equilibrium.

[Image: a simplified informational surface, with the current price equilibrium sitting in a trough]

Above, for example, is a simplified two-dimensional informational scenario for a broad market, say the S&P 500, with the black ball representing the current equilibrium price level for that market and the height of the trough walls representing the informational strength of the current equilibrium level. This is the informational surface. To make the ball “roll” to a new higher equilibrium level requires a strong enough signal (represented by the green arrow) to get over the right-hand trough wall, and vice versa for the market to go down. The informational surface plus the new information signal combine to create an informational scenario.
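For readers who like to see a concept in code, here is a minimal sketch of that picture: a current equilibrium with asymmetric informational barriers, and a test of whether a new signal is strong enough to roll the ball. The class name, the “generic bits” units, and every number below are hypothetical placeholders for illustration, not a calibrated model.

```python
from dataclasses import dataclass

@dataclass
class InformationalScenario:
    """Toy version of the surface above: an equilibrium in a trough whose walls
    have a given informational strength, measured here in 'generic bits'."""
    up_barrier_bits: float     # strength of the right-hand trough wall
    down_barrier_bits: float   # strength of the left-hand trough wall

    def shock(self, signal_bits: float, direction: str) -> str:
        """Does a new information signal roll the ball to a new equilibrium?"""
        barrier = self.up_barrier_bits if direction == "up" else self.down_barrier_bits
        if signal_bits > barrier:
            return f"new {direction} equilibrium (signal {signal_bits} > barrier {barrier})"
        return f"equilibrium holds (signal {signal_bits} <= barrier {barrier})"

# A market with a strong floor and a weaker ceiling: modest good news is enough
# to push it up, while bad news has to be much stronger to knock it down.
spx = InformationalScenario(up_barrier_bits=0.8, down_barrier_bits=1.5)
print(spx.shock(1.0, "up"))     # rolls to a higher equilibrium
print(spx.shock(1.0, "down"))   # the left-hand wall holds
```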

[Image: the informational scenario mapped against a price-over-time chart]

This change in price equilibrium levels within an informational scenario can be mapped against a traditional price-over-time chart as shown above, but the depiction in informational space is much more useful than the depiction in price space. Why? Because any given price outcome can be generated by multiple informational scenarios, but any given informational scenario will generate one and only one price outcome. Just knowing the price outcome gives you little idea of how or why that price outcome arose. But if you know the informational scenario you know both the unique price outcome as well as how it came to be. An informational scenario can predict price, but a price can neither predict nor explain afterwards an informational scenario.

Seeing the market in terms of information space will NOT tell you whether the market is going up or down. It shows you how the market is likely to react to new information, and it gives you tools for evaluating the potential market impact of new information. Epsilon Theory is both a methodology (a toolbox for evaluating observed data) and a theory (a conceptualization of observed data). Methodologies and theories are neither true nor false, only more or less useful than alternative toolboxes and conceptualizations, and I have no doubt that Epsilon Theory is not terribly useful for some investment strategies. Sherlock Holmes didn’t care whether the Earth went around the sun or the other way around because it made “not a pennyworth of difference” to his life or his work, and the same is probably true for Epsilon Theory and, say, private equity investing.

But here’s an example of how a common dialog between traders and portfolio managers can be much more useful when reformulated in informational terms.

When a trader tells a portfolio manager that there is “resistance” at a particular price level, and “support” at another, he is making a statement about informational structures created by historical patterns of price movements. This is true regardless of the specific methodology used to determine these resistance and support levels – Bollinger Bands, Fibonacci Series, MACD, whatever. For example, here’s a MACD price chart for Apple that I’ve marked with hypothetical resistance and support levels (NB: I have no idea whether these levels are methodologically accurate, and it really doesn’t matter for the point I’m trying to make):

[Image: Apple price chart (Bloomberg) with hypothetical resistance and support levels marked]

And here’s that same chart expressed as an informational surface:

[Image: the same Apple chart expressed as an informational surface]

The advantage of the informational surface expression over a price chart is that it is (potentially) more accurate, more comprehensive, and more understandable without being more complex.

An informational model is potentially more accurate because the resistance and support levels are not binary or categorical thresholds (resistance vs. support, strong vs. weak) but are variable representations of their inductively derived informational strength or weakness. Rather than simply saying, “there’s resistance at $446” it’s possible to say “it will require 0.6 generic bits of information to get over the resistance at $446 and 1.8 generic bits of information to get over the resistance at $492.” And because there are tools to measure the informational strength of new information, it’s possible to estimate the likelihood that this piece of new information will be sufficient to pierce the $446 resistance but that piece of information will not.
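Continuing the hypothetical numbers above (and they are hypothetical; I am not claiming these are the right barrier strengths for Apple), the ladder of resistance levels translates directly into a few lines of code. The assumption here, consistent with the example, is that each barrier's strength is measured from the current equilibrium rather than incrementally.

```python
# Hypothetical resistance ladder for Apple: (price level, barrier strength in generic bits)
resistance = [(446, 0.6), (492, 1.8)]

def highest_level_cleared(current_price, signal_bits, ladder):
    """Return the highest resistance level whose informational barrier the new signal can clear."""
    cleared = current_price
    for level, bits in sorted(ladder):
        if signal_bits >= bits:
            cleared = level
        else:
            break
    return cleared

print(highest_level_cleared(430, signal_bits=1.0, ladder=resistance))  # 446: clears the first wall only
print(highest_level_cleared(430, signal_bits=2.0, ladder=resistance))  # 492: strong enough for both
```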

An informational model is potentially more comprehensive because non-price informational barriers can be incorporated directly into the analysis.  One of the biggest weaknesses of technical analysis as it is currently constituted is that it only “sees” informational signals based on historical price outcomes. By re-conceptualizing price as information, other important types of information, such as public statements and macro data announcements, can be plugged into the same inductive analytic framework. There’s an enormous amount of intellectual firepower embedded in technical analysis that is underutilized because it is only applied to price data. Information theory provides a common language for every type of signal, and by “translating” all sorts of signals into the language of information we can significantly expand the scope of powerful inferential tools that fall under the rubric of Big Data.

An informational model is potentially more understandable because the dimension of time can be incorporated more easily, or at least more intuitively, into the analysis. All forms of technical analysis are based on some flavor of time series regressions, so it’s not that time is ignored. But by including it graphically as an additional dimension, you can create the equivalent of a volatility surface, which makes it much easier to “see” the informational dynamics at work.

[Image: informational dynamics mapped over time]

[Image: three-dimensional informational surface]
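Surfaces like the ones pictured above can be rendered with ordinary charting tools. Here is a sketch of how one might draw such a surface in matplotlib; the grid, the decaying resistance ridge near $446, and every number in it are invented purely to show the mechanics.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical grid: trading days on one axis, price levels on the other,
# barrier strength ("generic bits") on the vertical axis.
days = np.arange(60)
prices = np.linspace(400, 500, 50)
T, P = np.meshgrid(days, prices)
# Toy surface: a resistance ridge near $446 that slowly decays as time passes
Z = 1.5 * np.exp(-((P - 446) ** 2) / 50.0) * np.exp(-T / 90.0)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T, P, Z, cmap="viridis")
ax.set_xlabel("time (days)")
ax.set_ylabel("price")
ax.set_zlabel("informational strength (generic bits)")
plt.show()
```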

What I’m suggesting – to treat patterns of price data as an important informational signal within a broader theory of behavioral finance – is not original to me. The Copernicus of this story is George Soros, and the application of game theory to markets has its first (and in many ways still best) expression in his magisterial 1987 book, The Alchemy of Finance. I am not going to attempt any sort of summary of the book here, because it defies easy summary. It is, as Paul Volcker (!) writes in his Foreword to the 2003 edition, “an honest struggle by an independent and searching mind to break through a stale orthodoxy with new and meaningful insights into financial and human behavior”, which is just about the highest praise an author can receive. I’ll just add that even though Soros does not frame his core ideas such as “reflexivity” in terms of formal game theory, there is no doubt that this is his intellectual home. Everything that Soros writes about the behavior of markets can be expressed, sometimes more effectively but usually less, as a game theoretic construct.

Reflexivity is the best known of Soros’s core concepts, but also tends to be misunderstood. Rather than repeat Soros’s own words or define it with the language of game theory, let me give you an example of reflexivity as a conversation that happens in one form or another hundreds of times a day, every day, all around the world.

PM: Hey, why is XYZ down 3% all of a sudden?

Trader: I don’t know. There’s nothing on Bloomberg. Let me ask around.

[2 minutes later]

Trader: Nothing on chat. All the desks are calling trying to find out what’s going on.

PM: Is there a conference or something where management is talking?

Analyst: I don’t think so. I tried calling IR, left a message.

PM: Well, somebody must know something. I hate this stock … it could go down 10%. Sell half the position and put a tight stop on the rest. Gotta manage the risk. Let me know if either of you hear anything.

[30 minutes later]

Trader: We got stopped out. You’re flat.

[2 days later when XYZ has fully recovered to its original price]

PM: Hey, false alarm, let’s start putting XYZ back on. I really like that stock.

This is reflexivity. It’s a bitch.

Like Copernicus, though, Soros has some problems with the concept of reflexivity as expressed in Alchemy of Finance. As written (and for all I know, Soros has kept his best work secret in order to build a fortune), reflexivity is more of a heuristic – a rule of thumb or a way of looking at data – than a practical methodology that can be incorporated into a rigorous evaluation. There’s no language to reflexivity other than the language of price, and that’s a problem for Soros in the same way that it’s a problem for technical analysis … it limits the enterprise in both scope and accuracy. But in the same way that elliptical planetary orbits provided the key for translating the central insights of the original Copernican theory into an extremely powerful (i.e., useful) heliocentric model of the solar system, I believe that information theory can translate the central insights of reflexivity into an extremely powerful (i.e., useful) behavioral model of markets.

Here’s the basic idea of how to describe reflexivity in informational terms. Let’s say you have an unstable equilibrium, meaning that the informational barriers for the current price equilibrium to start moving in one direction or the other are quite low. For whatever reason, maybe just the chance result of a large number of Sell orders clustering in time, the equilibrium starts to “roll” to the left.

[Image: an unstable equilibrium beginning to roll to the left]

That price action in and of itself creates a new informational signal.

[Image: the price action itself creating a new informational signal]

And so on and so on. In retrospect it always seems obvious that the market just “had a mind of its own” and that there was nothing “real” to make the price go down. But when you’re in the middle of one of these episodes it’s not obvious at all. We are biologically hard-wired to pay attention to these signals and to interpret them as part of a larger, more meaningful pattern. These price action signals are entirely real as we experience them, and I think it’s critical to have an investing perspective that treats them as entirely real. That’s what Information Theory provides. By looking at the phenomenon of reflexivity through the lens of Information Theory, we can “see” its dynamics more clearly than if we’re just looking at price, and as a result we have the potential to anticipate and/or react more appropriately when these events occur.
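A toy simulation makes the dynamic easy to see. In the sketch below (every parameter of which is an assumption made for illustration, not a calibrated model), each broken barrier steps the price down and generates a fresh signal for the next round; with an unstable equilibrium the cascade runs for a while and then finds a level where the walls hold.

```python
import numpy as np

def reflexive_cascade(price, barrier_bits, signal_bits, amplification=0.4, max_steps=20):
    """Toy model of reflexivity: each time a sell signal breaks the current
    informational barrier, the price steps down AND the move itself becomes
    a new signal (a fraction of the barrier just broken plus an echo of the old signal)."""
    path = [price]
    for barrier in barrier_bits[:max_steps]:
        if signal_bits < barrier:            # the signal is too weak: equilibrium holds
            break
        price -= 1.0                         # arbitrary one-unit price step per broken barrier
        path.append(price)
        signal_bits = amplification * barrier + 0.5 * signal_bits
    return path

# An unstable equilibrium: barriers weaken as the price falls, so a modest initial
# signal (say, a chance cluster of sell orders) keeps the ball rolling for a while.
barriers = np.linspace(0.6, 0.2, 10)
print(reflexive_cascade(price=100.0, barrier_bits=barriers, signal_bits=0.7))
# -> [100.0, 99.0, 98.0, 97.0]: three barriers break before the cascade stops
```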

This, then, is the goal of Epsilon Theory – to develop a practical methodology to identify the securities prone to game-playing behaviors like reflexivity, and the conditions under which game-playing behavior is more or less likely to occur. By building on the insights of thinkers like George Soros, E.O. Wilson, and Brian Skyrms I think it’s a very achievable goal. Whether that ultimately sparks a new investment paradigm … who knows? But I’m pretty sure it can help us catch more mice.
