Alibaba’s AI, JP Morgan’s Risky Language & the Nurture of Reality (by Silly Rabbit)

Video game-playing AI

AI has moved one step closer to mastering the classic video game StarCraft. Google, Facebook and now Alibaba have been working on AI StarCraft players, and last week a team from China’s Alibaba published a paper describing a system that learned to execute a number of strategies employed by high-level players without being given any specific instruction on how best to manage combat. Like many deep learning systems, the software improved through trial and error, demonstrating the ability to adapt to changes in the number and type of troops engaged in battle. Non-technical overview via The Verge here. Original and fairly accessible technical paper here.

While an AI video game ace may not be world-changing in and of itself, progress on AI inter-agent communication and coordination has potentially profound implications for markets as the approach matures, or, as the Alibaba researchers rather poetically note in their paper:

In the coming era of algorithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence start to emerge from multiple domains…[including] the trading robots gaming on the stock markets [and] ad bidding agents competing with each other over online advertising exchanges.

And how do agents behave when their game playing becomes stressful? Apparently just like their human creators: aggressively. Summary of Google's DeepMind findings on this here.

Risky language

Anyone who has ever taken general NLP algorithms, trained them on the broader world's information and then pointed them at financial markets-type text will have noticed that they get kind of sad and messed up. Partly because markets-ese is odd (try telling your doctor that being overweight is a good thing) and partly because finance folks sure do love a risk discussion…and apparently no one more so than JP Morgan Chase CEO Jamie Dimon. In his much re-published letter to shareholders:

It is alarming that approximately 40% of those who receive advanced degrees in STEM at American universities are foreign nationals with no legal way of staying here even when many would choose to do so…Felony convictions for even minor offenses have led, in part, to 20 million American citizens having a criminal record…The inability to reform mortgage markets has dramatically reduced mortgage availability.

Thanks, Jamie, my algorithm just quit and immigrated to Canada.
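
Why did it quit? Here's a toy sketch of the domain problem (a hypothetical, hand-made word list, not any real sentiment model or library), showing how a scorer trained on the broader world reads routine risk-factor and policy language as a cry for help:

```python
# Toy illustration with a hypothetical lexicon: a general-purpose sentiment
# scorer reads routine finance/policy language as alarmingly negative.
GENERAL_LEXICON = {
    "alarming": -2, "felony": -3, "criminal": -2, "inability": -2,
    "reduced": -1, "risk": -1, "growth": 2, "strong": 2,
}

# A finance-tuned lexicon might treat the same words as neutral boilerplate.
FINANCE_OVERRIDES = {"felony": 0, "criminal": 0, "risk": 0, "reduced": 0}

def score(text, lexicon):
    """Sum word-level sentiment over a lowercased, punctuation-stripped text."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(lexicon.get(w, 0) for w in words)

# A condensed, made-up stand-in for the shareholder-letter language above.
letter = ("It is alarming that felony convictions for minor offenses have "
          "reduced mortgage availability and increased risk.")

print("general-purpose score:", score(letter, GENERAL_LEXICON))   # deeply negative
print("finance-tuned score:  ", score(letter, {**GENERAL_LEXICON, **FINANCE_OVERRIDES}))  # near neutral
```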

The more serious question is this: as natural language algorithms (of various types) become ubiquitous, at what point do business leaders begin to craft their communications primarily to influence the machine, or at least to leave out detailed socio-political critiques that might accidentally trip it?

The nurture of reality

Clearly, our perception of reality, our world view, is substantially informed by our memories and the stories (links) we tell ourselves about these memories. We are now, for the first time, just starting to get an understanding of how memories are physically stored in the brain. Recollections of successive events physically entangle each other when brain cells store them, as Scientific American reports.

The Map of Physics, a joyous 8-minute video by Dominic Walliman (formerly of D-Wave quantum computing), culminates in the map below with The Chasm of Ignorance, The Future and Philosophy. Walliman points to where we must be operating if we are to break truly new ground (i.e., put the regression models down, please). And if you liked that, keep watching through to Your Quantum Nose: How Smell Works.

And, finally, a classic, epic, challenging, practical piece of prose/poetry from one of the world's greatest philosophers and orators: the late, great Tibetan Buddhist meditation master Chögyam Trungpa. Long treatise on Zen vs. Tantra as a system for nurturing the mind:

…the discovery of shunyata [emptiness of determinate intrinsic nature] is no doubt the highest cardinal truth and the highest realization that has ever been known…

Coming next week: The next generation of flash crashes; digital Darwinism and the resurgence of hardware.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16086/

Break the Wheel: Things that Don’t Matter #3

Daenerys and Tyrion

King George III:

They say George Washington’s yielding his power and stepping away
Is that true?
I wasn’t aware that was something a person could do.
I’m perplexed.
Are they gonna keep on replacing whoever’s in charge?
If so, who’s next?
There’s nobody else in their country who looms quite as large…

― “Who’s Next”, Hamilton (2015)

Sean Maguire: Hey, Gerry, In the 1960s there was a young man that graduated from the University of Michigan. Did some brilliant work in mathematics. Specifically bounded harmonic functions. Then he went on to Berkeley. He was assistant professor. Showed amazing potential. Then he moved to Montana, and blew the competition away.
Gerry Lambeau: Yeah, so who was he?
Sean: Ted Kaczynski.
Gerry: Haven’t heard of him.
Sean: [yelling to the bartender] Hey, Timmy!
Timmy: Yo.
Sean: Who’s Ted Kaczynski?
Timmy: Unabomber.
― Good Will Hunting (1997)

Chef: Oh Lord have mercy. Children, children! No no, you’ve got it all wrong. Don’t you see, children? You have the heart, but you don’t have the soul. No, no. Wait. You have the soul, but you don’t have the heart. No, no. Scratch that. You have the heart and the soul, but you don’t have the talent.

― South Park, Season 8, Episode 4

Horatio: O day and night, but this is wondrous strange!
Hamlet: And therefore as a stranger give it welcome.
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
― Hamlet, Act 1, Scene 5

Daenerys Targaryen: Lannister, Targaryen, Baratheon, Stark, Tyrell — they’re all just spokes on a wheel. This one’s on top, then that one’s on top and on and on it spins crushing those on the ground.
Tyrion Lannister: It’s a beautiful dream, stopping the wheel. You’re not the first person who’s ever dreamt it.
Daenerys: I’m not going to stop the wheel, I’m going to break the wheel.
― Game of Thrones, Season 5, Episode 8 (2015)

The King is Dead

Some six centuries ago, European monarchies adopted the practice of declaring, “The King is dead! Long live the King!” upon the death of a monarch. In films and other adaptations, we usually get only the latter half of the expression, but there is clever intent buried in the repetition: there is to be no interregnum. When the old king dies, the new king immediately ascends with all his power and majesty, and probably most of his enemies as well. It is an instantaneous change not only in the power structure of a nation, but also in the mindset of any number of subjects, who have little time to lament the amount of time and effort they had spent fawning over and currying favor with the old king. They have to reset immediately: I’m sure this king will be better, much wiser, much less murderry. That sort of thing.

Our human nature helps us adapt. As Ben has pointed out numerous times, we want to believe.

We want to believe that this king will be different, and we're usually instantly willing to re-up on our social contract with him, giving up inalienable rights for the benefit of his wisdom and authority (or something). We want to believe that President Trump will be different, that he will finally turn over the tables in the Capitol and chase corrupt, conflicted, five-term congressmen into the reflecting pool with a whip. We want to believe that this time a friend/partner/spouse is done lying/cheating/hurting us. We do all this despite every bit of evidence telling us that what we believe is so unlikely as to be unworthy of mention.

And my goodness, we want to believe that the guy running this fund is going to be loads better than that idiot we just fired.

Sure, we’ve read Murder on the Orient Express, Charlie Ellis’s brilliant 2012 FAJ submission highlighting just how badly institutions pick funds and how badly they time it. We’ve seen the statistics. We’ve seen our own P&L and those of people we think highly of. More often than not, it doesn’t matter because we want to believe. In many cases because picking these funds is our job, we have to believe.

Epsilon Theory readers, my kids eat because I’m a fund manager. Mostly hot dogs and Kraft macaroni & cheese, but they eat. So it pains me to tell you that the amount of time, personnel and attention we all spend picking, talking to, debating and stressing over fund managers is ridiculous. This is why picking fund managers comes in at #3 on our list of Things that Don’t Matter.

So why doesn’t it matter?

Because just about all of us suck at it.

I’m being a bit hyperbolic. But only a little bit.

Earlier this month, Cliff Asness from AQR wrote a beautiful rant directed mostly at Rob Arnott from Research Affiliates and maybe a bit at the fine folks over at Bloomberg. No, it wasn’t a charming comparison of their luxuriant grey beards, but a debate about claims of data mining. Arnott and the story maybe not-so-indirectly imply that Cliff and AQR are insufficiently critical of data mining techniques among fund managers, to which Cliff offered his…uh…rather pointed rejoinder.

For the record, Cliff's right on this one. I have been either a client or a competitor of AQR/AMG in every year of my career, and there's not a firm in the world that more rigorously — maybe even rigidly, at times — applies the scientific method to investing. (Hell, if I'm telling you to stop focusing on picking fund managers, I might as well pitch you on a competitor while I'm at it.)

So what’s Arnott’s beef? A legitimate one, even if AQR is about as far as you can get from being guilty of it. The idea is that a lot of fund managers out there, especially some of those of the quantitative or quantamental (ugh) persuasion, are engaging in shoddy, non-scientific research.

Properly implemented, the scientific method is a deductive process in which a researcher starts from a question he wishes to answer, forms a hypothesis around that question and then deductively produces predictions that he tests in order to validate (fine, “not reject”) the hypothesis and its related or subsequent predictions.

The very fair criticism of data mining is that it works in reverse, and in doing so, doesn’t work at all: it starts with the testing and ends with the hypothesis and predictions. This practice, whether consciously or unconsciously applied, is a big part of the replication crisis in academia and the poor performance of investment strategies that don’t bear out their backtests.
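
To see why reversing the order matters, here is a minimal sketch of the data-mined version, with purely hypothetical numbers rather than any real backtest: generate a pile of strategies that are noise by construction, keep the one with the prettiest track record, and watch the "discovery" evaporate out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 1000, 1250                 # ~5 years of daily returns

# Every candidate "strategy" is pure noise: zero true alpha by construction.
in_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))
out_of_sample = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

# Data mining: run the tests first, then write the "hypothesis" around the winner.
sharpe_in = in_sample.mean(axis=1) / in_sample.std(axis=1) * np.sqrt(252)
best = int(sharpe_in.argmax())

sharpe_out = out_of_sample[best].mean() / out_of_sample[best].std() * np.sqrt(252)
print(f"best in-sample Sharpe:        {sharpe_in[best]:.2f}")   # looks like skill
print(f"same strategy, out of sample: {sharpe_out:.2f}")        # back to noise
```

Run the test first and the winning hypothesis is guaranteed to look good in-sample; that guarantee is exactly what makes it worthless as evidence.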

Data mining was one of the earliest forms of scientification — putting scientific terms, a systematic-seeming process and a presentation with a bunch of PhDs around a framework that is… well… bullshit. This trend is something we have talked about a lot on Epsilon Theory podcasts. From "Fact Checking", to dumb ideas from brilliant men like Tyson's "Rationalia", to the fallacy-laden idea that opposing specific climate change policies as ineffectual constitutes disbelief in the fundamental science, scientification is on the rise. We are right to worry about this with our fund managers.

But here’s the real problem: as allocators, we are way, way worse. Just about every manager selection process I’ve ever seen, and some that I have even designed, is plagued by data mining and non-deductive reasoning.

The examples are many, and in almost every case they demonstrate explicit data mining. Now, usually they do so with some small modification to make it look less blatant — you know, since we’ve all read enough to at least want to not look like we’re just hiring the manager with the best performance. I’ve seen all sorts of these kinds of second-derivative screens, which are the allocator’s version of the payday lender setting rates by zip code and pretending they’re not preying on a particular demographic (zip codes are just numbers!). Instead of looking for top quartile managers, we’re looking for the ones with the best downside capture ratios. The best batting average. The best Sortino. The best Jensen’s alpha. The best residual alpha from our proprietary multi-factor model. Or my favorite, looking for good long-term performance and patting ourselves on the back about ignoring poor short-term alpha. Unfortunately, manager alpha — like many sources of returns — tends to mean-revert over longer periods (>3 years) and continues to trend over shorter ones (<1 year).
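
For the curious, here is roughly what those second-derivative screens boil down to, sketched with made-up monthly returns rather than anyone's actual methodology. Every one of them is just different arithmetic on the same historical return series:

```python
import numpy as np

rng = np.random.default_rng(1)
bench = rng.normal(0.006, 0.04, 60)              # hypothetical benchmark, 5 years of months
mgr = 0.9 * bench + rng.normal(0.001, 0.02, 60)  # hypothetical manager with a static beta tilt

beta, jensens_alpha = np.polyfit(bench, mgr, 1)  # Jensen's alpha = regression intercept
batting_average = np.mean(mgr > bench)           # share of months beating the benchmark
down = bench < 0
downside_capture = mgr[down].mean() / bench[down].mean()
downside_dev = np.sqrt(np.mean(np.minimum(mgr, 0.0) ** 2))
sortino = mgr.mean() / downside_dev * np.sqrt(12)  # simple annualized Sortino ratio

print(f"Jensen's alpha (monthly): {jensens_alpha:.4f}")
print(f"batting average:          {batting_average:.2f}")
print(f"downside capture:         {downside_capture:.2f}")
print(f"Sortino ratio:            {sortino:.2f}")
```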

It isn’t that I’m taking special issue with any one of these metrics or the many tools allocators use to build portfolios. In fact, many of these are exactly the type of tools that I have used and continue to use in portfolio construction, since the general character and correlations of excess returns can be persistent over time. But I am taking issue with their use in selecting and predicting ex ante the existence of some quantity of alpha, for which they are all mostly useless. As an industry we embrace this pretense that “Manager A has alpha” is a valid hypothesis, and that by pursuing various types of analysis of returns we are somehow scientifically testing that hypothesis.

No, no, no! That’s not how this works. That’s not how any of this works.

Source: xkcd.com.

To start with a hypothesis that Manager A has alpha is begging the question in the extreme. This is equally true if we’re approaching it from the more strictly scientific “null hypothesis” construction. There is no economic or market-related intuition underlying the theory. If we start with the same premise for every manager (i.e., whether he has alpha) and analyze the returns, whether quantitatively or qualitatively, to reject or not reject the hypothesis, we are not doing scientific research. We are data mining and putting a scientific dress on it. And when our experience doesn’t match the research, we almost always come up with the same reason for firing them: they deviated from their process.

It’s a self-preservation thing, of course. We weren’t wrong. The manager just changed! He deviated from his process! Firm disruption! How could I have known?

In most cases, we probably couldn’t. We have a lot of fun on the Epsilon Theory podcast at the expense of the low replication rates of much of the research that happens across many fields right now, but those rates have nothing on the horror show that is financial markets research. (I say that, but the University of Wisconsin did accept a dissertation that was “an autoethnographic study of used-kimono-wearing as experienced by a folklorist… after inheriting a piece that had belonged to her grandmother.” Replicate that!)

Even well-defended factors and return drivers are often not robust to modest changes in methodology, shifts in in-sample vs. out-of-sample periods and the like. If those findings, which can be tested across millions of data points across companies, markets and decades, lack robustness, how much more challenged are we in trying to scientifically and mathematically uncover who is a good manager and who is a bad one?

It’s no wonder that this process finds so many of us — financial advisors, institutional allocators and individual investors alike — repeating that old refrain again. My process was good. This manager deviated from their process. This new one will be better. The king is dead. Long live the king.

Spokes on a wheel, friends. Kings that are on top until they’re not. We’ve all tried to stop the wheel. How do we break the wheel?

A Return to Real Deduction

The first step is recognizing that a deductive process must start from real economic intuition. What does real economic intuition look like?

A theoretical belief about why you should be paid for investing in something.

This is true and rather well-accepted with respect to market exposure. Most of us have a pretty good idea why we get paid for owning stocks. We’re exposing ourselves to economic uncertainty, political systems and credit markets, inflation and all sorts of other subsidiary risks. Concluding that accepting these risks ought to earn a return is something I think most investors understand fairly intuitively. Most of us — although clearly fewer than with stocks — have a good sense of why we ought to be paid for holding bonds. Commodities? Less clear. (Something-something-backwardation, something-something-storage-premium.)

Rather than starting from returns and working backward, our goal should be to develop this kind of intuition for why we ought to get paid for the active risk our fund managers are taking. In a perfect world, before we ran a single screen, before we looked at a single slide deck, before we looked at a single performance number, we would sit down — like we’re doing here with this Code — and map out the things we believe we will or might be paid for.

Where do we start? Let’s walk down a simplified road from economic intuition through deductive reasoning to a familiar hypothesis in the illustration below.

Deductive Process for Identifying a Potentially Valid Strategy

Source: Salient Partners, L.P., as of 04/21/17. For illustrative purposes only.

The economic intuition on the right should be familiar if you’re an Epsilon Theory reader. The deductions on the left and right sides should look familiar if you’re a rabid Epsilon Theory reader, since they showed up as the two basic ways in which a stock-picker could outperform in “What a Good-Looking Question.” The hypothesis on the bottom right should be the most familiar of all: we’re basically conjecturing that buying cheap stuff works. Not our bit, but a good one!

Inserting economic intuition into those two deductions alone should get us a few dozen hypotheses. There truly are more things in heaven and earth than most of us are willing to dream of in our returns analysis-oriented philosophy. Some of those should be well-worn and familiar, like value. Some may be more novel. Many will be flawed and — hopefully — dismissed before we do anything stupid with them.

Frankly and rather unfortunately, your only way to test many of your hypotheses about fund managers is often going to be through qualitative mechanisms and through live experience. That doesn’t mean you can’t be scientific in your approach. In a perfect world you’d be able to approach a manager without knowing a lick about their performance, have an intellectual conversation about what it is that they do to make money, determine whether it lines up with one of the theoretical ways you think it may be possible to do so, and then evaluate their performance to see if it corroborates that. That’s in a perfect world.

But in an imperfect world, one of the main reasons obsessing over fund managers is one of the Things that Don’t Matter is that almost all practitioners shuffle through dozens of approaches to selecting funds. And almost all those approaches are variants of historical returns analysis, or are historical returns analysis in disguise. There’s only one way out of this, and it may be an uncomfortable one:

We’ve got to stop using historical returns analysis for anything other than portfolio fit. Not use it less. Not use it smarter. Those are attempts to stop the wheel. We’ve got to break the wheel.

If we’re going to break the wheel, we must have a robust concept of the sources of return we’re willing to believe in, that we’re willing to develop a hypothesis around. We’ve also got to develop comfort with interview and evaluation techniques that go beyond asking about stocks. If our diligence process is not capable of identifying whether the manager can access that source of return that we believe in, then we have to change our process. We must change the questions we ask.

It’s easier to understand this for systematic managers because they fit neatly into a more behaviorally driven, scientific mindset. Figuring out that we believe in value and that a manager is accessing value credibly isn’t exactly rocket science. So let’s instead consider what is probably the most ubiquitous, hardest-to-crack example: the fundamental long/short equity manager. The stock picker. Assume you haven’t seen their returns (hah!). You’ve got an hour to figure out if they’re going to fit into a working archetype, if there’s a hypothesis to be drawn here. What do you do?

Here’s what you don’t do: you don’t let them walk through their deck. You don’t quiz them about their companies to see how intelligent or knowledgeable they are. They’re all going to be smart. The Unabomber was smart. In most cases, you probably don’t even let them talk about their overall investment philosophy, because they’re going to do it on their terms. Don’t look them in the eyes and pretend you’re going to be able to out them as someone who’s going to screw you over. It’s not possible. Instead, ask three questions:

  • How do you make money? Why should you outperform the market?
  • Ignore the first part of their response. Feel free to hum your favorite song from the Hamilton soundtrack in your head (Cabinet Battle #2, obviously), and when they finally get to the part where they say “mumble… mumble… rigorous bottom-up research…”, you’re back on! Interrupt them and say, “Yes, but why? Why are you and your team better at spotting things that the market misses?”
  • Let’s assume you’re right about all that. How do you get comfortable that it will work for the stock?

Then, and only then, can we violate Things that Don’t Matter #2 and dive into a case study. Don’t let them tell you a stock story. Don’t let them give you the thesis. Not that there’s anything wrong with having a thesis (They should! They must!), but that’s the language of their process. Instead, take a position in the portfolio, and ask them how the position fits with their answers. How did you think you would make money on the stock? Was that a differentiated view? Why are you confident that your team is better at analyzing that characteristic of this company than the other 1,000,000 investors covering it? And how did you get comfortable that this thing you found would actually make the stock work, that it would influence the people who actually have to change the price of the stock by buying and selling?

And all this is not to prove a hypothesis, but to arrive at one in the first place. You see, none of this solves the problem that we all face as allocators to funds: there is almost never enough data to come to a firm statistical conclusion about whether a strategy is likely to outperform. Those of you — financial advisors and individuals, in particular — who must select funds without the benefit of meeting the people managing them are often even more hamstrung, since you are constrained to whatever information they are willing to provide about their process and strategy. Sometimes it is possible to glean from the marketing materials whether there may be an alpha generative process buried in there, and sometimes it is not.

If we approach investing deductively, however, we at least have a chance of focusing on the few things that do matter, like whether the fund manager is doing any of the things that even have a chance of outperforming. Is this more deductive approach to fund selection enough? Is it worth it?

Sometimes. But in most cases, sadly, probably not.

I still need to buy 16 years’ worth of hot dogs and Kraft dinner, but I’ll level with you. In most cases, whether you pick this fund or that fund is not even going to register in comparison to the decisions you make about risk, asset allocation and diversification.

The stock example from “What a Good-Looking Question” is instructive here as well, and in an even more exaggerated way than for stocks themselves. While a 5% tracking error stock portfolio is not rare, a portfolio of multiple actively managed funds with that level of tracking error is exceedingly rare. If you are hiring three, four or more mutual funds, ETFs or other portfolios within an asset class like, say, U.S. stocks or emerging markets stocks, the odds in my experience are very strong that your tracking error is probably closer to 2-3%, and the share of your total portfolio risk coming from your managers’ active bets is probably less than 5%.

Source: Salient Partners, L.P., as of 04/21/17. For illustrative purposes only.

In all fairness, some of this is the point of active management. Part of the reason that these numbers are so low in this hypothetical example is that “alpha” in this example is, by definition, uncorrelated to the market exposure. But remember our other lesson from “I Am Spartacus” — the tracking error of our fund managers is rarely dominated by uncorrelated sources of alpha, but comes more typically from the static biases managers have toward structural sources of risk and return.
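
The arithmetic behind both points is worth seeing once. In a stylized sketch (hypothetical numbers, assuming active returns combine like volatilities, and not Salient's actual model), four equally weighted managers each running 5% tracking error net out to roughly 2.5% at the portfolio level if their active bets are truly uncorrelated, and they barely dilute at all if the managers share the same static bias:

```python
import numpy as np

def portfolio_tracking_error(te, weights, corr):
    """Combine manager-level tracking errors given a correlation matrix of their active returns."""
    te, weights = np.asarray(te), np.asarray(weights)
    cov = corr * np.outer(te, te)          # covariance matrix of active returns
    return float(np.sqrt(weights @ cov @ weights))

te = [0.05] * 4                            # four managers, 5% tracking error each
w = [0.25] * 4                             # equal weights

independent = np.eye(4)                                  # truly uncorrelated alpha bets
shared_bias = np.full((4, 4), 0.8) + 0.2 * np.eye(4)     # mostly the same static tilt

print(f"uncorrelated active bets: {portfolio_tracking_error(te, w, independent):.2%}")  # ~2.50%
print(f"shared static bias:       {portfolio_tracking_error(te, w, shared_bias):.2%}")  # ~4.61%
```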

You could make the argument that the incremental return is worth the effort, especially in an environment where returns to capital markets are likely to be muted. And that’s a reasonable argument. But it’s all a question of degree. Is that source of return, challenging as it is to find, elusive as it has proven, worth the resources, time and focus it receives in our conversations with our constituents? Our investment committees? Our boards? Our clients?

So why bother at all? This is just an argument to go passive, right?

Oh, God, no!

First, as you all know by now, we are all active investors because we all make active decisions on the most important dimensions of portfolio construction: risk, asset class composition and secondary objectives like income. But more importantly, this is a universal issue. Those of us who use passive strategies for some of our portfolios — which is probably all of us at this point — have as much to gain from this advice as any other. Just two weeks ago, I made a minor point at a dinner about how S&P futures exposure was actually cheaper than ETFs and ended up getting bogged down in a serious 10-15 minute discussion on the topic. You’ve probably observed similar discussions over which low-cost ETF or passive mutual fund is the best way to access this market or that. This obsession really does transcend party lines on the ridiculous active vs. passive bike shed debate.

Neither should this be seen as a repudiation of active management at all. Again, investors should often be working with fund managers and advisors that do things that fall under the umbrella of active management. It does make sense to exploit behavioral sources of return. It does make sense to identify the very rare examples of information asymmetry. It does make sense to pursue active strategies in markets where the passive alternatives are poor or structurally biased themselves. It does make sense to consider market structure and the extent to which forced buyers and sellers create long-term pricing opportunities. It does make sense to pursue cost-effective active approaches that deliver characteristics (risk, yield, tax benefits) that would otherwise be part of the asset allocation process.

But in pursuing those, this code would advise you of the following:

  • Be judicious in the time and resources devoted to this exercise vs. the big questions, the Things that Matter.
  • Eschew the use of backward-looking return analysis. Really avoid it as much as humanly possible until you are testing a legitimate, deductive hypothesis about why you think a fund manager might be able to add value.
  • Apply a deductive process to everything you do.

After all, a code would not be a code at all unless we intended to pursue it with intellectual honesty. Most of the industry’s experience selecting fund managers has relied on rather less rigorous standards. And so it goes that Picking Funds is #3 on our list of Things that Don’t Matter.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16093/

AI Hedge Funds, Corporate Inequality & Microdosing LSD (by Silly Rabbit)

Machines and suchlike

DARPA has produced a 15 minute AI explainer video. A fair review: “Artificial intelligence is grossly misunderstood. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry.” Well worth watching for orientation on where we are — and where we are not — with AI today.

In case you are interested in ‘AI hedge funds’ and haven’t come across them, Sentient should be on your radar. And Walnut Algorithms, too. They look to be taking quite different AI approaches, but at some point, presumably, AI trading will become a recognized category. Interesting that the Walnut article asserts — via EurekaHedge — that “there are at least 23 ‘AI Hedge Funds’ with 12 actively trading”. Hmm …

[Ed. note — double hmm … present company excepted, there’s a lot less than meets the eye here. IMO.]

On the topic of Big Compute, I’m a big believer in the near-term opportunity of usefully incorporating quantum compute into live systems for certain tasks within the next couple of years, opening up practical solutions to whole new classes of previously intractable problems. Nice explanation of ‘What Makes Quantum Computers Powerful Problem Solvers’ here.

[Ed. note — for a certain class of problems (network comparisons, for example) which just happen to be core to Narrative and mass sentiment analysis, the power of quantum computing versus non-quantum computing is the power of 2^n versus n^2. Do the math.]
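
Or, to do the math for you (just the two growth rates, nothing about any particular quantum algorithm):

```python
# How 2^n (exponential) pulls away from n^2 (polynomial) as the problem size grows.
for n in (10, 20, 30, 40):
    print(f"n={n:>2}   n^2 = {n**2:>6,}   2^n = {2**n:>16,}")
```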

Quick overview paper on the Julia programming language here. Frankly, I’ve never come across Julia (that I know of) in the wild out here on the west coast, but I see the attraction for folks coming from a Matlab-type background, and for shops where ‘prototype research’ and ‘production engineering’ are not cleanly split. Julia seems, to some extent, to be targeting trading-type ‘quants’, which makes sense.

Paper overview: “The innovation of Julia is that it addresses the need to easily create new numerical algorithms while still executing fast. Julia’s creators noted that, before Julia, programmers would typically develop their algorithms in MATLAB, R or Python, and then re-code the algorithms into C or FORTRAN for production speed. Obviously, this slows the speed of developing usable new algorithms for numerical applications. In testing of seven basic algorithms, Julia is impressively 20 times faster than Python, 100 times faster than R, 93 times faster than MATLAB, and 1.5 times faster than FORTRAN. Julia puts high-performance computing into the hands of financial quants and scientists, and frees them from having to know the intricacies of high-speed computer science”. Julia Computing website link here.

Humans and suchlike

This HBR article on ‘Corporations in the Age of Inequality’ is, in itself, pretty flabby, but the TLDR soundbite version is compelling: “The real engine fueling rising income inequality is “firm inequality”. In an increasingly … winner-take-most economy the … most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders.” On a micro-level I think we are seeing an acceleration of this within technology-driven firms (both companies and funds).

[Ed. note — love TLDR. It’s what every other ZeroHedge commentariat writer says about Epsilon Theory!]

A great — if nauseatingly ‘rah rah’ — recent book with cutting-edge thinking on getting your company’s humans to be your moat is: Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. Warning: Microdosing hallucinogens and going to Burning Man are strongly advocated!

Finally, on the human side, I have been thinking a lot about ‘talent arbitrage’ for advanced machine learning talent (i.e., how not to slug it out with Google, Facebook et al. in the Bay Area for every hire) and went on a bit of a world tour of various talent markets over the past couple of months. My informal perspective: Finland, parts of Canada and Oxford (UK) are the best markets in the world right now — really good talent that has been way less picked over. Do bad weather and high taxes give rise to high-quality AI talent pools? Kind of, in a way, probably.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16098/

The Horse in Motion

Scared money can’t win and a worried man can’t love.

―  Cormac McCarthy, All the Pretty Horses (1992)

In 1872, noted horseracing aficionado and San Francisco rich guy Leland Stanford (yes, of university fame) commissioned noted photographer and San Francisco smart guy Eadweard Muybridge to apply his path-breaking technology of stop-action photography to settle a long-running debate — do all four hooves leave the ground at the same time when horses run? This question had bedeviled the Sport of Kings for ages, and while Stanford favored the “unsupported transit” theory of yes, all four hooves leaving the ground for a split-second in the outstretched position, allowing horses to briefly “fly”, he — as rich guys often do — really, really, really needed to know for sure.

It took Muybridge about 12 years to complete the work, interrupted in part by his murder trial. It seems that Muybridge had taken a young bride (she 21 and he 42 when they married) who preferred the company of a young dandy of a San Francisco drama critic who styled himself in faux militaristic fashion as Major Harry Larkyns. After learning that wife Flora’s 7-month-old son Florado was perhaps not biologically his, Muybridge tracked Larkyns down and shot him point-blank in the chest with the immortal words, “Good evening, Major, my name is Muybridge and here’s the answer to the letter you sent my wife.” In one of the more prominent early cases of jury nullification (Philip Glass has an opera, The Photographer, with a libretto based on the court transcripts), Muybridge was found not guilty on the grounds of justifiable homicide despite the judge’s clear instructions to the contrary. Or maybe the jurors were just bought off. Leland Stanford spared no expense in paying for Muybridge’s defense. Gotta get those horse pix.

And eventually he did. Muybridge’s work, The Horse in Motion, settled the question of unsupported transit once and for all.

Yes, all four hooves leave the ground at the same time. But it’s NOT in the outstretched flying position. Instead, it’s in the tucked position, which — because it’s not as romantic a narrative as flying — had never been widely considered as an answer. In fact, for decades after the 1882 publication of The Horse in Motion in book form (a book by Leland Stanford’s fellow rich guy friend, J.D.B. Stillman, who gave ZERO credit to Muybridge for the work … after all, Muybridge was just Stanford’s work-for-hire employee, a member of the gig economy of the 1870s), artists continued to prefer the more narrative-pleasing view of flying horses. Here, for example, is Frederic Remington’s 1889 painting A Dash for the Timber, a work that was largely responsible for catapulting Remington to national prominence, replete with a whole posse of flying horses (h/t to John Batton in Ft. Worth, who knows his Amon Carter Museum collection!).

Okay, Ben, that’s a fun story of technology, art, murder, and rich guy intrigue set in 1870s San Francisco. But what does it have to do with modern markets and investing?

This: Muybridge developed a technology that allowed for a quantum leap forward in how humans perceived the natural world. His findings flew in the face of the popular narrative for how the natural world of biomechanics worked, but they were True nonetheless and led to multiple useful applications over time. Today we are at the dawning of a technology that similarly allows for a quantum leap forward in how humans perceive the world, but with a focus on the social world as opposed to the natural world. Some of these findings will no doubt similarly fly in the face of the popular narrative for how the social world of markets and politics works, but they will similarly lead to useful applications. They already are.

The technology I’m talking about is the biggest revolution in the world today. It’s the ascendancy of non-human intelligences, which I’ve written about in lots of Epsilon Theory notes, from Rise of the Machines to First Known When Lost to Troy Will Burn – the Big Deal about Big Data to The Talented Mr. Ripley to One MILLION Dollars to Two Discoveries. It’s what most of the world calls Artificial Intelligence, which is a term I dislike for its pejorative anthropomorphism. It’s what Neville Crawley calls Big Compute, which is a great phrase, not least for its progression and distinction from the old hat notion of Big Data (h/t to Neville for turning me on to the Muybridge story, too).

The primary impact of Big Compute, or AI or whatever you want to call it, is that it allows for a quantum leap forward in how we humans can perceive the world. Powerful non-human intelligences are the modern day Oracle of Delphi. They can “see” dimensions of the world that human intelligences cannot, and if we can ask the right questions we can share in their vision, as well. The unseen dimensions of the social world that I’m interested in tapping with the help of non-human intelligences are the dimensions of unstructured data, the words and images and communications that comprise the ocean in which the human social animal swims.

This is the goal of the Narrative Machine research project (read about it in The Narrative Machine and American Hustle). That just as Eadweard Muybridge took snapshots of the natural world using his new technology, so do I think it possible to take snapshots of the social world using our new technology. And just as Muybridge’s snapshots gave us novel insights into how the Horse in Motion actually works, as opposed to our romantic vision of how it works, so do I think it likely that these AI snapshots will give us novel insights into how the Market in Motion actually works.

That’s the horse I’m betting on in Epsilon Theory.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16106/