Rise of the Machines

Category: Note

“Music, this complex and mysterious act, precise as algebra and vague as a dream, this art made out of mathematics and air, is simply the result of the strange properties of a little membrane. If that membrane did not exist, sound would not exist either, since in itself it is merely vibration. Would we be able to detect music without the ear? Of course not. Well, we are surrounded by things whose existence we never suspect, because we lack the organs that would reveal them to us.”
– Guy de Maupassant

“I call our world Flatland, not because we call it so, but to make its nature clearer to you, my happy readers, who are privileged to live in Space. … Distress not yourself if you cannot at first understand the deeper mysteries of Spaceland. By degrees they will dawn upon you.”
– Edwin A. Abbott, “Flatland: A Romance of Many Dimensions”

“I wanted to be a psychological engineer, but we lacked the facilities, so I did the next best thing – I went into politics. It’s practically the same thing.”
– Salvor Hardin (“Foundation”, by Isaac Asimov)

“It is vital to remember that information – in the sense of raw data – is not knowledge, that knowledge is not wisdom, and that wisdom is not foresight. But information is the first essential step to all of these.”
– Arthur C. Clarke

“Any sufficiently advanced technology is indistinguishable from magic.”
– Arthur C. Clarke

“Just what do you think you’re doing, Dave?”
– HAL (“2001: A Space Odyssey” by Arthur C. Clarke)

I thought it was appropriate in a note focused on the evolution of machine intelligence to start with some quotes by three of the all-time great science fiction writers – Abbott, Asimov, and Clarke – and something by the father of the short story, de Maupassant, as well. All four were fascinated by the intersection of human psychology and technology, and all four were able to communicate a non-human perspective (or at least a non-traditional human perspective) in their writing – which is both incredibly difficult and completely necessary to understand how machines “see” the world. Asimov in particular is a special favorite of mine, as his concept of psychohistory is at the heart of Epsilon Theory. If you’ve never read the Foundation Trilogy and you don’t know who Hari Seldon or the Mule is … well, you’re missing something very special.

All of these authors succeed in portraying non-human intelligence in terms of the inevitable gulf in meaning and perception that must exist between it and human intelligence. Hollywood, on the other hand, almost always represents non-human intelligence as decidedly human in its preference and utility functions, just with a mechanical exoskeleton and scary eyes. Thus the Daleks, the original Cylons, the Terminators, the Borg, etc., etc.

[Images: robots]

At least the most recent version of Battlestar Galactica recognized that a non-human intelligence forced to interact with humans would perhaps choose a less menacing representational form.

[Image: robot evolution]

The way to think about machine intelligence is not in terms of a mechanical version of human intelligence, but in terms of a thermostat and an insect’s compound eye.

[Images: thermostat and insect]

What I mean by this is that a thermostat is a prime example of a cybernetic system – a collection of sensors and processors and controllers that represents a closed signaling loop. It might seem strange to think of the thermostat as “making a decision” every time it turns on the heat in your house in response to the environmental temperature falling below a certain level, but this is exactly what it is doing. The thermostat’s decision to turn on the heat follows, from an Information Theory perspective, precisely the same process as your decision to buy 100 shares of Apple – just a simpler and better-defined one. The human brain is the functional equivalent of a really complex thermostat, with millions of sensors and processors and controllers. But that also means that a really complex thermostat is the functional equivalent of a human brain.
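
For the programmers in the audience, here is a minimal sketch of that closed signaling loop – pure simulation, with a made-up read_temperature function standing in for a real sensor. Swap the temperature signal for a price signal and the decision rule for “buy below X” and you have the skeleton of a trading machine.

```python
import random

SETPOINT = 68.0   # target temperature (degrees F)
BAND = 1.0        # hysteresis band, so the furnace doesn't rapidly cycle

def read_temperature(t):
    """Stand-in for a real sensor: a noisy, slowly warming room (simulation)."""
    return 65.0 + 0.05 * t + random.uniform(-0.5, 0.5)

furnace_on = False
for t in range(10):                 # ten ticks of the sense/process/control loop
    temp = read_temperature(t)      # 1. sense: sample the environment
    if temp < SETPOINT - BAND:      # 2. process: compare signal to decision rule
        furnace_on = True           # 3. control: actuate
    elif temp > SETPOINT + BAND:
        furnace_on = False
    print(f"t={t}  temp={temp:.1f}F  furnace={'ON' if furnace_on else 'off'}")
```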

The human brain has one big advantage over a thermostat, and that is the evolutionary development of a high degree of self-awareness or consciousness. There’s nothing mystical or supernatural about consciousness, nor is it somehow external or separate from the human brain. Consciousness is simply an emergent property of the human cybernetic system, just like Adam Smith’s Invisible Hand is an emergent property of the market cybernetic system. It is an incredibly useful property, however, allowing both the construction of thought experiments that radically accelerate learning by freeing us from the ponderously slow if-then laboratory that Nature and evolution provide non-self-aware animals, and the construction of belief systems that radically promote and stabilize the joint utility functions of human communities. Our proficiency as both a tool-using animal and a social animal stems entirely from the development of consciousness, and we are an incredibly robust and successful species as a result.

On the other hand, a thermostat has one big advantage over the human brain in its decision-making process, and that’s the lack of evolutionary and social constraints. As phenomenally efficient as carbon-based nerve cells and chemical neurotransmitters might be, they can’t compete on a fundamental level with silicon-based transistors and electrons. As effective as social constructs such as language and belief systems might be in creating intra-group human utility, there is no inherent tension or meaning gap or ecological divide in communications between thermostats. The concept of music is a wonderful thing, but as de Maupassant points out it is entirely dependent on “the strange properties of a little membrane.” How many other wonderful concepts are we entirely ignorant of because we haven’t evolved a sensory organ to perceive them with? Just as the two-dimensional inhabitants of Flatland find it essentially impossible to imagine a third dimension, so are we conceptual prisoners of Spaceland. At best we can imagine a fourth dimension of Time in the construction of a helix or a hypercube, but anything beyond this is as difficult as storing more than 10 digits in our short-term memory. Machines have no such evolutionary limitations, and decision-making in terms of twelve dimensions is as “natural” to them as decision-making in terms of three.
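
A concrete illustration of that last claim: the nearest-centroid decision rule below – my choice for illustration, with purely random data – is character-for-character identical whether the signal lives in three dimensions or twelve. Only the human reader finds the second case hard to picture.

```python
import numpy as np

def nearest_centroid(signal, centroids):
    """Pick the closest 'decision' to a signal; dimensionality never appears."""
    return int(np.argmin(np.linalg.norm(centroids - signal, axis=1)))

rng = np.random.default_rng(0)
for dims in (3, 12):                        # Spaceland vs. twelve dimensions
    centroids = rng.normal(size=(4, dims))  # four candidate decisions
    signal = rng.normal(size=dims)          # one observation
    print(f"{dims:>2} dims -> decision {nearest_centroid(signal, centroids)}")
```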

This is why it’s useful to think of machine intelligence in terms of the compound eye of an insect. Not only are most compound eyes able to sense electromagnetic radiation that is invisible to the camera eyes of most vertebrate animals, particularly in the ultraviolet end of the spectrum, but there is a multi-dimensionality to insect vision that is utterly alien to humans. It’s not that insect vision is super-human, any more than machine intelligence is super-human. In fact, in terms of image resolution or locating an object within a tight 3-dimensional field, the camera eye is enormously superior to the compound eye, which is why the camera eye evolved in the first place. But for a wide field of vision and the simultaneous detection of movements within that field, the compound eye has no equal. It’s that simultaneity of movement detection that is so similar to the parallel information processing approach of most machine intelligences and so hard to describe in human information processing terms.

[Image: compound eye]

Because the compound eye associates a separate lens with each photo-receptor, creating a perceptive unit called an ommatidium, there is no composite 3-dimensional visual image formed as with twin camera eyes. Instead, the insect processes hundreds or thousands of separate 2-dimensional visual images simultaneously, each driven by its own signals. It’s customary to describe insect vision as a mosaic, but that’s actually misleading, because the human brain sees a mosaic as a single image made up of individually discrete pieces. To an insect, there is no such thing as a single visual image. Reality to an insect is hundreds of visual images processed simultaneously and separately, and there is no analogue to this in the human cybernetic system. To a thermostat, though, with no evolutionary baggage to contend with … no problem. As a result, if a functional task is best achieved by seeing the world as an insect does – through simultaneous views of multiple individual fields – a machine intelligence can outperform a human intelligence by a god-like margin.
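
Here is a sketch of what that architecture looks like to a machine: hundreds of tiny 2-dimensional fields, each examined independently and in parallel, with no composite image ever assembled. The frame-difference “movement detector” and all of the numbers below are assumptions for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def detect_motion(field_pair):
    """One ommatidium: flag a change within its own tiny 2-D field, nothing more."""
    before, after = field_pair
    return float(np.abs(after - before).mean()) > 0.4

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fields = []
    for i in range(300):                      # 300 independent 8x8 visual fields
        before = rng.random((8, 8))
        shift = 0.8 if i % 7 == 0 else 0.0    # a handful of fields see real movement
        fields.append((before, before + shift + rng.normal(0, 0.05, (8, 8))))
    with ProcessPoolExecutor() as pool:       # every field processed separately
        hits = list(pool.map(detect_motion, fields))
    print(f"{sum(hits)} of {len(hits)} fields report movement")
```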

Over the past five to ten years, there have been three critical advances in computer science that have created extremely powerful machine intelligences utilizing a compound eye architecture.

First, information storage technology developed the capacity to store enormous amounts of data and complex data structures “in-memory”, where the data can be accessed for processing without the need to search for it on magnetic media storage devices. Again, this is a really hard concept to find a human analogy for. The best I can come up with is to envision the ability to just know – immediately and without any effort at “remembering” – the names, addresses, and phone numbers of everyone you’ve ever known in your life. Even that doesn’t really do the technology justice … it’s more like knowing the names and phone numbers of everyone in New York City, simultaneously and without any attempt to recall the information. Your knowledge vanishes the moment electrons stop powering your memory chip, so there’s still a place in the world for permanent magnetic media storage, but that place is shrinking every day.
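
A toy version of the difference, in ordinary Python rather than anything specific to any database product, and with entirely synthetic data: once a city’s worth of records lives in RAM as a hash table, “recall” is a direct lookup rather than a search.

```python
import time

# A synthetic "phone book": a million name -> number pairs held entirely in RAM
book = {f"person_{i}": f"555-{i:07d}" for i in range(1_000_000)}

t0 = time.perf_counter()
hit = book["person_421307"]          # in-memory recall: a direct hash lookup
t1 = time.perf_counter()
# Simulated "search": walk the records one by one until the key turns up
scan = next(v for k, v in book.items() if k == "person_421307")
t2 = time.perf_counter()

print(f"lookup: {1e6 * (t1 - t0):.1f} microseconds   "
      f"scan: {1e3 * (t2 - t1):.1f} milliseconds   match: {hit == scan}")
```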

The company that commercialized this technology first, best, and most widely is SAP, in a product they call HANA. I’ve been following its development for about three years now, and it’s changing the world. Does Oracle have a version of this technology? Yes. But if you’ve built a $150 billion market cap company on the back of selling periodic upgrades for a vast installed base of traditional relational database management software applications that query (search) a vast installed base of traditional data storage resources … hmm, how to put this in a nice way … you’re probably not going to be very excited about ripping apart that installed base and re-inventing your lucrative business model. SAP had a lot less to lose and a lot more to gain, so they’ve re-invented themselves around HANA. I have no idea whether SAP the stock is a good investment or not. But SAP the company has a phenomenal asset in HANA.

Second, advances in microprocessor technology, network connectivity, and system control software created the ability to separate physical computing resources from functional computing resources. This phenomenon goes by many names and takes multiple forms, from virtualization to distributed computing to cloud computing, but the core concept is to find enormous efficiencies in information processing outcomes by rationalizing information processing resources. Sometimes this means using hardware to do something that was previously done by software; sometimes this means using software to do something that was previously done by hardware. The point is to stop thinking in terms of “hardware” and “software”. The point is to re-conceptualize a cybernetic system into fundamental terms reflecting efficient informational throughput and functionality, as opposed to traditional terms reflecting the way that humans happened to instantiate that functionality in the past. When I write about re-conceptualizing common investment practices in terms of the more fundamental language of Information, whether it’s technical analysis (“The Music of the Spheres”) or bottom-up portfolio construction (“The Tao of Portfolio Management”), I’m not pulling the idea out of thin air. There has been just this sort of revolutionary shift in the way people think and talk about IT systems over the past decade, with incredible efficiency gains as a result, and I believe that the same sea change is possible in the investment world.

One of the most powerful aspects of this re-conceptualization of machine cybernetic systems is the ability to create the functional equivalent of an insect’s ommatidia – thousands of individual signal processors working in parallel under a common direction to complete a task that lends itself well to the architecture of a compound eye. This architecture of simultaneity is more commonly referred to as a cluster, and the most prominent technology associated with clusters is an open-source software platform called Hadoop. There are three pieces to Hadoop – a software kernel, a file system (like a library catalog), and a set of procedures called MapReduce (like a traffic cop) – all of which are modeled on systems that Google built for itself and then described in published papers. While Hadoop is freely available under an open-source license, I would estimate that Google is at least two generations ahead of any other entity (and that includes the NSA) in understanding and implementing the architecture of simultaneity. Obviously enough, search is a prime example of the sort of task that lends itself well to a machine intelligence organized along these lines, but there are many, many others. No one understands or directs machine intelligence better than Google, and this is why it is the most important company in the world.
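
The canonical toy example of the MapReduce idea is counting words, and a single-machine sketch fits in a few lines. Real Hadoop spreads the same two phases – independent mappers, then a shuffle-and-reduce – across a cluster of commodity nodes; here the “cluster” is just a local process pool.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_phase(line):
    """Map: each worker emits (key, 1) pairs from its own shard of the data."""
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(mapped):
    """Reduce: shuffle the pairs by key, then aggregate each key independently."""
    totals = defaultdict(int)
    for pairs in mapped:
        for key, count in pairs:
            totals[key] += count
    return dict(totals)

if __name__ == "__main__":
    shards = ["information is not knowledge",
              "knowledge is not wisdom",
              "wisdom is not foresight"]
    with Pool() as pool:                    # parallel mappers, one per shard
        mapped = pool.map(map_phase, shards)
    print(reduce_phase(mapped))             # e.g. {'is': 3, 'not': 3, ...}
```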

Third, methodological advances in statistical inference and their expression in software applications have created the ability to utilize more fully these advances in memory, microprocessors, connectivity, and IT architecture. The range of these methodological tools is pretty staggering, so I will only highlight one that is of particular interest to the Epsilon Theory perspective. Last week I wrote about the problem of the ecological divide in every aspect of modern mass society (“The Tao of Portfolio Management”) and how humans were poor calculators of both aggregate characteristics derived from individual signals and individual characteristics derived from aggregate signals. Over the past 15 years, Gary King at Harvard University has pioneered the development of unifying methods of statistical inference based on fundamental concepts such as likelihood and information. I may be biased because Gary was a mentor and dissertation advisor, but I think his solutions to the problem of ecological inference can fundamentally change portfolio construction and risk management practices, especially now that there are such powerful cybernetic “engines” for these solutions to direct.
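
To see why ecological inference is hard, consider the classical deterministic bounds (the “method of bounds”) that King’s likelihood-based approach builds on and then sharpens by pooling information across units. This sketch is that century-old starting point, not King’s method itself, and the function and variable names are mine.

```python
def group_rate_bounds(X, T):
    """Bounds on a subgroup's rate given only aggregates for one unit:
    X = subgroup's share of the unit, T = the unit's overall rate.
    The subgroup rate b must satisfy T = X*b + (1 - X)*w for some 0 <= w <= 1."""
    if X == 0:
        return (0.0, 1.0)                    # no members: nothing to infer
    lower = max(0.0, (T - (1.0 - X)) / X)    # even if everyone else scored 1
    upper = min(1.0, T / X)                  # even if everyone else scored 0
    return (lower, upper)

# Two precincts with known overall turnout T and known group share X:
print(group_rate_bounds(0.30, 0.62))   # (0.0, 1.0): the aggregate tells us nothing
print(group_rate_bounds(0.80, 0.90))   # (0.875, 1.0): the aggregate pins it down
```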

As described in “The Market of Babel”, these advanced machine intelligences based on the compound eye’s architecture of simultaneity have effectively taken over one particular aspect of modern markets and the financial services industry – the provision of liquidity. Understanding and predicting the patterns of liquidity demand are tailor-made for the massively parallel capabilities of these cybernetic systems, and there is no liquidity operation in modern markets – from high-frequency traders trying to skin a limit order book to asset managers trying to shift a multi-billion dollar exposure in the dark to bulge-bracket market-makers trying to post yet another quarter of zero days with a trading loss – that is not completely controlled by these extremely complex and powerful thermostats.

This is a problem for human investors in two respects.

The first is a small but constant problem. Whenever you take liquidity (i.e., whenever you create an exposure) in anything other than a “natural” transaction with a human seller of that exact same exposure, you are going to pay a tax of anywhere from 1/2 to 5 cents per share to the machine intelligences that have divined your liquidity intentions within 50 milliseconds of hitting the Enter button. I’m sorry, but you are, and it’s a tax you can only mitigate, not avoid. The problem is worse the more you use a limit order book and the more you use VWAP, but then again, no active manager ever got fired for “showing price discipline” with a limit and no trader ever got fired for filling an order at VWAP.
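
Back-of-the-envelope, the tax adds up. The per-share range below is the note’s; every portfolio number is a made-up assumption for illustration.

```python
# Rough annual cost of the liquidity "tax" at 0.5 to 5 cents per share
portfolio = 500_000_000        # hypothetical $500mm book
turnover = 0.80                # hypothetical 80% annual turnover
avg_price = 40.0               # hypothetical average price of shares traded

shares_traded = portfolio * turnover / avg_price     # 10,000,000 shares
for tax_cents in (0.5, 5.0):
    cost = shares_traded * tax_cents / 100.0
    print(f"{tax_cents}c/share -> ${cost:,.0f} per year "
          f"({100.0 * cost / portfolio:.2f}% of the book)")
```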

The second is a giant but rare problem. All of these machine intelligences designed to optimize liquidity operations are based on the same historical data patterns of human market participation. As those patterns change – particularly if the patterns change in such a way that machine-to-machine transactions dominate or are confused for human-to-machine transactions – it creates a non-trivial chance that an event causing what would otherwise be a small liquidity shock can snowball into a market-wide liquidity seizure as the machine-to-machine transactions disappear in the blink of an eye. This is what happened in the 2010 Flash Crash, and the proportion of machine-to-machine transactions in liquidity provision is, if anything, even greater today. Moreover, the owners of these machine intelligences, especially in the HFT world, are suffering much thinner margins than in 2010, and, I suspect, are taking much larger risks and operating with much itchier trigger fingers on the off switch. I have no idea when the liquidity train wreck is going to happen, but you can clearly see how the tracks are broken, and the train whistle sure sounds like it’s getting closer.

The solution to this second and more troubling problem is not to somehow dislodge machine intelligences from market liquidity operations. It can’t be done. Nor do I have much confidence in regulatory “solutions” such as Liquidity Replenishment Points and the like (read anything by Sal Arnuk and Joe Saluzzi at Themis Trading for a much more comprehensive assessment of these issues). What we need is a resurgence in “real” trading with human liquidity-takers on at least one side of the trade.

Unfortunately, I suspect that we won’t see a return to normal levels of human market activity until the Fed begins to back down from monetary policies designed explicitly to prop up market prices. You might not sell what you own with a Fed put firmly in place, but a healthy market needs buying AND selling; it needs active disagreement on whether the price of a security is cheap or dear. Markets work best and markets work more when investors venture farther out onto the risk curve of their own volition, not when they are dragged out there kicking and screaming by ZIRP and QE.

I don’t know when the Fed will stand down enough to allow normal risk-taking to return to markets, but at some point this, too, shall pass. The trick is how to protect yourself in the current investing environment AND set yourself up to do well in the investing environment to come. Now there are a thousand facets to both aspects of pulling that trick off, and anyone who tells you that he has THE answer for this puzzle is selling snake oil. But I think that part of the answer is to bring machine intelligences out of the liquidity provision shadows and into the light of portfolio construction, risk management, and trading.

Your ability to manage the risk of a liquidity-driven market crash is improved simply by recognizing the current dynamics of liquidity provision and speaking, however haltingly or humanly accented, the machine language of Liquidity. Imagine how much further that ability could be improved if you had access to a machine intelligence designed specifically for the purpose of measuring these liquidity risks as opposed to being another machine intelligence participating in liquidity operations. I am certain that it is possible to create such a liquidity-monitoring machine intelligence, just as I am certain that it is possible to create a correlation-monitoring machine intelligence, and just as I am certain that it is possible to create a portfolio-optimizing machine intelligence. These technologies are not to be feared simply because they are as alien to us as an insect’s eye. They should be embraced because they can help us see the market as it is, rather than as we wish it were or as we thought it was.

 
