Two Discoveries


The world made two discoveries last week. Everyone is aware of the first discovery – that ISIS is not “a junior varsity team” but an able protagonist in what Pope Francis quite rightly calls “a piecemeal third World War”. Very few are aware of the second discovery – the existence of a quasipolynomial-time algorithm to determine whether two networks, no matter how complex, are structurally identical. Both are watershed events, part of a continuing destabilization of politics and science. Neither will impact markets very much today. Both will change markets forever in the years to come.

I won’t say much about the first discovery here, but will take this opportunity to reprint a note I wrote in December 2014, eerily right before the Charlie Hebdo attack: “The Clash of Civilizations”. I’d also point out that the all-too-predictable Orwellian response to events like the Paris attack, namely to rewrite history and expand government monitoring of our private lives, is in full swing.

For example, here’s a before and after France Inter headline (hat-tip to Epsilon Theory reader M.O.), as noted by The Daily Telegraph. The headline as it originally ran a few weeks back calls a potential terrorist infiltration of Syrian refugees a “fantasy” of the lunatic right. Immediately after the attack, the headline has been rewritten (and the body of the article partially rewritten as well), to suggest that of course one might question whether or not a few terrorists managed to sneak in with the refugees. France Inter – surprise! – is part of the state-owned media apparatus, now in full-throated advocacy for a “pitiless” war.


Given how this Narrative is developing within left-leaning European governments (hmm, amazingly enough, no marches this time around calling for solidarity with peace-loving Muslims), my political advice to left-leaning US politicians like Connecticut governor Dannel Malloy is that you might be getting a wee bit ahead of yourself by loudly and publicly promoting Syrian refugee relocation in your state. Just sayin’.

The second discovery – an algorithm that dramatically reduces the information processing power required to tell whether two networks are structurally the same – requires a bit more explication. Here’s a picture of two such visibly different but structurally identical (isomorphic) networks.


For a whole host of data science applications – from cryptography to genomics to nuclear physics to, yes, finance – we’d often like to know how or if two networks or two data arrays are permutations of the same underlying structure. Maybe you could eyeball an answer to an 8-node network like the example above, but it doesn’t take much imagination to realize that this problem gets very hard very quickly for the human brain as the number of nodes increases.
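To make the “very hard very quickly” point concrete, here’s a minimal sketch of the naive approach: try every possible relabeling of the nodes and see if any of them maps one edge set onto the other. (The graphs below are invented toy examples, not the 8-node networks pictured above; the point is that this brute-force method checks n! permutations, which is hopeless beyond a dozen or so nodes.)

```python
from itertools import permutations

def are_isomorphic(edges_a, edges_b, n):
    """Brute-force isomorphism test for two undirected graphs on nodes 0..n-1.
    Tries all n! relabelings -- fine for tiny graphs, hopeless as n grows."""
    set_a = {frozenset(e) for e in edges_a}
    set_b = {frozenset(e) for e in edges_b}
    if len(set_a) != len(set_b):          # different edge counts: give up early
        return False
    nodes = range(n)
    for perm in permutations(nodes):
        mapping = dict(zip(nodes, perm))
        remapped = {frozenset((mapping[u], mapping[v])) for u, v in set_a}
        if remapped == set_b:             # found a relabeling that matches
            return True
    return False

# A 4-cycle drawn two different ways (0-1-2-3-0 vs. 0-2-1-3-0), plus a path
cycle_a = [(0, 1), (1, 2), (2, 3), (3, 0)]
cycle_b = [(0, 2), (2, 1), (1, 3), (3, 0)]
path    = [(0, 1), (1, 2), (2, 3)]

print(are_isomorphic(cycle_a, cycle_b, 4))  # True: same structure, drawn differently
print(are_isomorphic(cycle_a, path, 4))     # False: a cycle is not a path
```

The early exit on edge counts doesn’t save the approach: a 25-node network already has 25! (about 1.5 × 10²⁵) relabelings to check.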

Fortunately, of course, we have non-human intelligences to help us crack these problems today. But until now, the only known algorithms for this particular problem ran in what is loosely called “non-deterministic polynomial” or NP-time, where the amount of time or information processing power required increases at a staggering, effectively exponential rate as the number of nodes increases. This is in contrast to a polynomial or P-time algorithm, where the time required increases at a far more manageable rate as the number of nodes increases. Think of it as the difference between 2^n (exponential growth) and n^2 (polynomial growth), where n is something like 1 million. 2 raised to the 1,000,000th power is an incomprehensibly large number, far greater than the number of atoms in the observable universe. If that’s your algorithm for solving the isomorphic network problem, then it’s physically impossible for any computer, no matter how powerful, to crack the problem for a network with 1 million nodes. On the other hand, 1,000,000 squared is a trivially small number (1 trillion) relative to a powerful computer’s information processing capabilities. If that’s your algorithm for solving the isomorphic network problem, then there is no network too large for you to measure and compare to another network.
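A quick sanity check of that arithmetic, in Python (the atom-count constant is a rough order-of-magnitude estimate, ~10^80):

```python
# Polynomial vs. exponential growth in required "steps".
# n^2 stays manageable; 2^n leaves the physical universe behind fast.
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # rough order-of-magnitude estimate

for n in (10, 50, 100, 300):
    print(f"n={n:>3}   n^2 = {n**2:>6}   2^n has {len(str(2**n))} digits")

# Already at n = 300, 2^n dwarfs the atom count of the observable universe,
# while n^2 = 90,000 remains trivial.
print(2 ** 300 > ATOMS_IN_OBSERVABLE_UNIVERSE)
print(1_000_000 ** 2 == 10 ** 12)   # the "trivially small" 1 trillion
```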

The isomorphic network problem was a classic example of something that most computer scientists believed could only be solved with NP-time algorithms. But last week, László Babai at the University of Chicago announced an algorithm for this class of problems that runs in quasipolynomial time – for all practical purposes, P-time. Why is this important? Because it is the modern day equivalent of discovering a new continent, one that happens to exist in cyberspace rather than human space. Because it is now by no means clear that there are ANY problems of data science that are inexorably lost in the cosmic fog of NP-time algorithms. Why will this one day change markets forever? Because the ability of computers to analyze and predict (and ultimately shape) the behavior of a complex network composed of millions of semi-autonomous agents exchanging a set of symbolic chips with each other – The Market – just took a giant step forward. If you thought that humans were marginalized participants in public capital markets today … if you thought that the casino-fication of markets had reached some sort of natural limit … well, you ain’t seen nothing yet.

Sigh. Last week was a tough week for the human team. With the loud explosions out of Paris, the illiberal left, the illiberal right, and the illiberal jihadists are now ALL in political ascendancy. And with the quiet announcement out of Chicago, we are oh-so close to the day when no human communication over any network can be shielded or kept private from a machine intelligence. God help us as the two discoveries merge into one.



The Andromeda Strain


Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.
Michael Crichton (1942-2008)


Dr. Jeremy Stone: According to this, there’ll be a super-colony of Andromeda over the entire southwest in…
Dr. Charles Dutton: Jeremy! These are biological *warfare* maps!
Dr. Jeremy Stone: Why, yes… so they are… uh… but… simulations, Charlie. Defensive… it’s just a scenario.

“The Andromeda Strain” (1971)

Sometimes you read just the right book at just the right time in your life. For me, one of those books was “The Andromeda Strain”, Michael Crichton’s first (and I think best) science thriller. The 1969 book holds up unnervingly well almost 50 years later, with its spot-on description of the Orwellian impulse of modern government, both in its language and its research programs. If you enjoy Epsilon Theory but you’ve never read “The Andromeda Strain”, do yourself a favor and check it out.

I’ve been thinking about Crichton and his books for a couple of reasons. First, I just read “Lexicon”, by Max Barry, which reminded me of Crichton’s work in its pacing and science-y hook. It’s a terrific read, even if the MacGuffin – what Hitchcock famously called “the object of desire” that motivates the plot of every human story – gets a little silly by the end. Second, I always admired Crichton’s skeptical nature regarding The Powers That Be and their use of Narrative, his ability to weave a good yarn around popularized science, and of course the fact that he made a ton of money with this particular skill set. Third, by far the most common question I get from Epsilon Theory readers (more than 60,000 email subscribers now … thank you!) is for reading recommendations, and it’s high time I updated the required reading list I put together almost 2 years ago.

In truth, the reading list hasn’t changed that much. Books by Soros, Taleb, and Mandelbrot don’t lose their charge from one year to the next. So I’ll just point out two books that I think deserve more attention, two recent books that I think are getting more attention than they deserve, and a fiction genre that I find inspiring but most will just find weird.

The first book that I wish more people would read is “Probably Approximately Correct” by Leslie Valiant. Valiant is usually described as an eminent Harvard computer scientist, and that’s all true. But what I like about his title – T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics – is the applied mathematics part. I’m a wannabe applied mathematician, as is every trader and allocator I know. Valiant is a pro. What is applied mathematics? It’s the grammar of life. Valiant’s book is a profound (sorry, there’s no better word) examination of algorithms and evolution, two topics near and dear to the Epsilon Theory heart. “Probably Approximately Correct” is a tour de force of modern science (and philosophy), as notable for its humility as for its insight.

What’s the pay-off? Keen insight into what makes for a successful process.

We live in an age of process, where the most successful among us – whether it’s a football coach like Nick Saban or Bill Belichick, or an investor like Bridgewater’s Ray Dalio – are consumed by the notion of algorithmic implementations and the primacy of process. Is a bit of, shall we say, joie de vivre lost in the translation of human behavior into reducible and repetitive machine action? No doubt, and if you’ve ever listened to a Belichick press conference you know exactly what I’m talking about. But boy does it work.


Confession: I had always thought that Ray Dalio’s “Principles” were one part brilliance, when he talks about human behavior and what makes for success, and two parts hooey, when he talks about his particular conception of Natural Law and how it informs Bridgewater’s management process. After reading Valiant’s book, I think that I understand for the first time what Dalio is trying to say. Valiant successfully breathes life into Natural Law, a concept that’s been left for dead since … I dunno … Hobbes? Aquinas? The funny thing is that I’m pretty sure Valiant isn’t thinking about philosophical conceptions of Natural Law, much less the foundations of a successful investment or management process when he’s writing about algorithms, but … whoops, there it is!

The second book I think more people should be reading here in the Golden Age of the Central Banker isn’t a single book, but the body of work of one tough lady – Hannah Arendt. I’ve just finished “Arendt and America”, by Richard King, and once again I find myself so grateful for being dunked headfirst into a big barrel of cold Arendt water. I mean, take one look at this picture and you know exactly who you’re dealing with – a true survivor, not the Larry David sort, a human being who’s seen and heard and lived through the cold fire of totalitarianism, and who’s now going to tell you exactly what she thinks about your cute and comfortable life. Yow. Read “The Origins of Totalitarianism” if you’re up for a heavy load. Read “Eichmann in Jerusalem” (available here online) if you want an introductory dose of speaking truth to power. Arendt coined the phrase “the banality of evil” in her observations of Eichmann’s trial, and that phrase is never far from my mind when I read the news of the world.


On the other hand, I’m not crazy about two books recently published to high praise – “Phishing for Phools” by George Akerlof and Robert Shiller, and “Superforecasting” by Philip Tetlock and Dan Gardner. Smart guys all, with well-deserved reputations for academic excellence. But …

The central concept of “Phishing for Phools” is that human beings often make decisions that are contrary to their best interests. We buy a big ole sugary cinnamon roll. We take out a mortgage with high fees rolled into the financing. We do these things because we are “manipulated” and “easily confused.” Fair enough. Nolo contendere. But sometimes I just want an enormous cinnamon roll. Sometimes I want an extra side of bacon, and yes, I know full well that it’s bad for me. More fundamentally, I DON’T want government “guiding” me towards George Akerlof’s and Robert Shiller’s conception of healthy life choices, no matter how much I might personally agree with those choices. Thanks but no thanks, George and Bob. Do I want to eliminate the EPA and FDA? No, I don’t. Do I think there’s a proper role for government in regulating commercial speech? Yes, I do. But I also think that a central governing algorithm based on the primacy of individual rights rather than some aggregate social benefit is more likely to get us to a policy outcome that is probably approximately correct. Hmm … see what I did there?

I’ve got a similar problem with “Superforecasting”. On the one hand, it’s hard to argue with Tetlock and Gardner’s prescription for good decisionmaking – you should “think probabilistically”, evaluate realistically how well your predictions turn out, and possess enough humility “to admit error and change course”. Again, fair enough. Again, nolo contendere. But I think there’s a classic methodological mistake here when we get – again – to the policy or prescriptive message. Inferring the best predictive characteristics of a general population of decisionmakers from a sampling of decisionmakers who most closely “fit” a backwards-looking model is fraught with danger. It’s a close relative of the error that lots of traders make when they inductively derive a set of weights to assign to a set of variables within a discrete historical sample, and then they inevitably discover that – surprise! – those variable weights are far from optimal once you go out of the historical sample and into real life. Sorry, but I feel like we’ve seen this movie way too many times.
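That in-sample/out-of-sample trap can be sketched in a few lines of Python. This is a toy illustration, not anyone’s actual trading model: the data-generating process (y = x plus noise), the sample points, and the random seed are all invented for the example. A model flexible enough to fit the historical sample perfectly – here, a Lagrange interpolating polynomial through every point – has zero in-sample error and falls apart the moment you step outside the sample.

```python
import random

random.seed(7)

def lagrange_fit(xs, ys):
    """Return a function interpolating exactly through the sample points --
    the 'perfect' in-sample model, with zero historical error."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# True process: y = x, plus a little noise the model dutifully "explains"
xs = [float(i) for i in range(8)]
ys = [x + random.gauss(0, 0.3) for x in xs]

model = lagrange_fit(xs, ys)

# In-sample: exact fit (zero error, up to rounding)
in_sample_err = max(abs(model(x) - y) for x, y in zip(xs, ys))

# Out-of-sample: extrapolate to x = 10, where the true value is near 10.
# The noise-fitted high-degree terms typically blow the prediction apart.
out_of_sample_err = abs(model(10.0) - 10.0)

print(f"in-sample error:     {in_sample_err:.2e}")
print(f"out-of-sample error: {out_of_sample_err:.2f}")
```

The weights that looked optimal on the discrete historical sample are exactly the ones that memorized its noise – which is the movie we’ve all seen too many times.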

One last point in this brief note, and that’s the important role of fiction in the bedside reading of any investment survivor. There’s no better way to sensitize yourself to the story-telling (excuse me … I mean “communication policy”) of central bankers and other politicians than to immerse yourself in the story-telling of Narrative-centric authors like Chuck Palahniuk, Haruki Murakami, and Dave Eggers. My personal fave of late: Chuck Palahniuk’s comic book (excuse me … I mean “serial graphic novel”). Yes, a comic book – “Fight Club 2” – the best exposition of the power of memes and popular narratives since … well, since Mike Carey’s comic book “The Unwritten”. I realize that I’ll get 50 unsubscribe emails just for writing about fiction in general and comic books in particular, but I bring them up in my recommended reading list for two reasons. First, I think they’re smart, authentic, and entertaining. Second, it is astonishing to me how the examination and appreciation of Narrative pervades every aspect of modern art and literature, from the high-brow to the lowest-brow. We’re not idiots. We know we’re being played. We may suffer the occasional bout of “Gell-Mann Amnesia”, to use Crichton’s description of our innate willingness to take stories outside of our direct experience at face value, but by and large we all make an effort to see behind the face and evaluate the motivations of the finger-wagging Missionary. It’s the conceit and the fatal flaw of both the illiberal left and the illiberal right to underestimate the perceptiveness of we-the-people, and so long as our art and literature – particularly our pop art and our pop literature – remind us of the game that’s being played I think we’re going to come out of this okay.
