Massively Fast Compute, AI Algorithms and Blockchain Development (by Silly Rabbit)

I’m limiting this week’s Rabbit Hole to three links which represent the rapid tick-tock of the trifecta of massively fast compute, AI algorithms and blockchain development as I believe that these are the top three technology mega-trends of the 2015 – 2025 period (ex-Life Sciences innovation). Personally, I still believe that within these three mega-trends massively fast compute (Big Compute) will be the most world-changing, but clearly big compute hardware and algorithm development are deeply intertwined, and I believe we will start to see blockchain intertwine in a meaningful, although as-yet somewhat unclear, way with these other two technologies too.

That’s a fast chip you got there, bud

Very accessible CB Insights write-up here and denser original paper here on a test of a photonic computer chip which "mimics the way the human brain operates, but at 1000x faster speeds" with much lower energy requirements than today's chips. To state the obvious, the exciting/terrifying potential of chips like this becoming reality is that machines will be able to learn rapidly and cumulatively, while we humans are still limited to learning, passing on some fraction of that learning, and then dying, which is clearly a pretty inefficient process.

The future of AI learning: nature or nurture?

IEEE Spectrum provides an overview of a recent debate between Yann LeCun and Gary Marcus at NYU's Center for Mind, Brain and Consciousness on whether or not AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve similar intelligence.

Blockchain for Wall Street

Bloomberg reports on a major breakthrough in cryptography which may have solved one of the biggest obstacles to using blockchain technology on Wall Street: keeping transaction data private. Known as a “zero-knowledge proof,” the new code will be included in an Oct. 17 upgrade to the Ethereum blockchain, adding a level of encryption that lets trades remain private.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15828/

Algorithmic Complexes, Alpha Male Brains, and Winnie the Pooh (by Silly Rabbit)

Massively complex complexes of algorithms

Let me come straight out with it and state, for the record, that I believe the best current truth we have is that we humans, along with all other living beings, are simply massively complex complexes of algorithms. What do I mean by that? Well, let’s take a passage from the terrific Homo Deus by Yuval Noah Harari, which describes this concept at length and in detail:

In recent decades life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather, emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals. What does this mean? Well, let’s begin by explaining what an algorithm is, because the 21st Century will be dominated by algorithms. ‘Algorithm’ is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is and how algorithms are connected with emotions. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation but the method followed when making the calculation.

Consider, for example, the following survival problem: a baboon needs to take into account a lot of data. How far am I from the bananas? How far away is the lion? How fast can I run? How fast can the lion run? Is the lion awake or asleep? Does the lion seem to be hungry or satiated? How many bananas are there? Are they big or small? Green or ripe? In addition to these external data, the baboon must also consider information about conditions within his own body. If he is starving, it makes sense to risk everything for those bananas, no matter the odds. In contrast, if he has just eaten, and the bananas are mere greed, why take any risks at all? In order to weigh and balance all these variables and probabilities, the baboon requires far more complicated algorithms than the ones controlling automatic vending machines. The prize for making correct calculations is correspondingly greater. The prize is the very survival of the baboon. A timid baboon — one whose algorithms overestimate dangers — will starve to death, and the genes that shaped these cowardly algorithms will perish with him. A rash baboon —one whose algorithms underestimate dangers — will fall prey to the lion, and his reckless genes will also fail to make it to the next generation. These algorithms undergo constant quality control by natural selection. Only animals that calculate probabilities correctly leave offspring behind. Yet this is all very abstract. How exactly does a baboon calculate probabilities? He certainly doesn’t draw a pencil from behind his ear, a notebook from a back pocket, and start computing running speeds and energy levels with a calculator. Rather, the baboon’s entire body is the calculator. What we call sensations and emotions are in fact algorithms. The baboon feels hunger, he feels fear and trembling at the sight of the lion, and he feels his mouth watering at the sight of the bananas. 
Within a split second, he experiences a storm of sensations, emotions and desires, which is nothing but the process of calculation. The result will appear as a feeling: the baboon will suddenly feel his spirit rising, his hairs standing on end, his muscles tensing, his chest expanding, and he will inhale a big breath, and ‘Forward! I can do it! To the bananas!’ Alternatively, he may be overcome by fear, his shoulders will droop, his stomach will turn, his legs will give way, and ‘Mama! A lion! Help!’ Sometimes the probabilities match so evenly that it is hard to decide. This too will manifest itself as a feeling. The baboon will feel confused and indecisive. ‘Yes . . . No . . . Yes . . . No . . . Damn! I don’t know what to do!’
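Harari's baboon can be reduced to a deliberately crude sketch. Everything here is invented for illustration (the weights, the thresholds, the inputs); the point is only that the "feeling" that comes out is the sign of a risk/reward calculation:

```python
def baboon_decision(hunger, lion_distance, banana_distance, lion_awake):
    """Toy version of Harari's baboon: the emotion is just the output
    of a risk/reward calculation. All weights are invented."""
    reward = hunger * (1.0 / max(banana_distance, 0.1))
    risk = (2.0 if lion_awake else 0.5) / max(lion_distance, 0.1)
    score = reward - risk
    if score > 0.5:
        return "Forward! To the bananas!"
    elif score < -0.5:
        return "Mama! A lion! Help!"
    return "Yes ... No ... I don't know what to do!"

# A starving baboon, bananas close, lion far away and asleep:
print(baboon_decision(hunger=0.9, lion_distance=50, banana_distance=1, lion_awake=False))
```

The timid and rash baboons of the passage are just miscalibrated weights in this function, and natural selection is the quality control on those weights.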

Why does this matter? I think understanding and accepting this point is absolutely critical to being able to construct certain classes of novel and interesting algorithms. “But what about consciousness?” you may ask, “Does this not distinguish humans and raise us above all other animals, or at least machines?”

There is likely no better explanation, or succinct quote, to deal with the question of consciousness than Douglas Hofstadter’s in I Am a Strange Loop:

“In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.”

Let’s accept Hofstadter’s explanation (which is — to paraphrase and oversimplify terribly — that, at a certain point of algorithmic complexity, consciousness emerges due to self-referencing feedback loops) and now hand the mic back to Harari to finish his practical thought:

“This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just an amusing pastime for philosophers, but now humans are in danger of losing their economic value because intelligence is decoupling from consciousness.”

Or, to put it another way: if what I need is an intelligent algorithm to read, parse and tag language in certain reports based on whether humans with a certain background would perceive the report as more ‘growth-y’ vs ‘value-y’ in its tone and tenor, why do I need to discriminate whether the algorithm performing this action has consciousness or not, or which parts of the algorithms have consciousness (assuming that the action can be equally parallelized either way)?
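The tagging task described above can be sketched in a few lines, consciousness nowhere required. The word lists below are invented placeholders; a real system would learn its weights from reports labeled by humans with the relevant background:

```python
# Minimal bag-of-words scorer for the 'growth-y' vs 'value-y' tagging task.
# The vocabularies are illustrative stand-ins, not a real trained lexicon.
GROWTH_WORDS = {"momentum", "disruption", "scale", "expansion", "innovation"}
VALUE_WORDS = {"dividend", "book", "margin", "yield", "discount"}

def tag_report(text):
    words = text.lower().split()
    g = sum(w in GROWTH_WORDS for w in words)
    v = sum(w in VALUE_WORDS for w in words)
    if g == v:
        return "neutral"
    return "growth-y" if g > v else "value-y"

print(tag_report("Strong momentum and rapid expansion of scale"))  # growth-y
```

Whether anything in that function "perceives" the report the way a human reader does is exactly the question the algorithm lets us ignore.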

AI vs. human performance

The Electronic Frontier Foundation has done magnificent work pulling together problems and metrics/datasets from the AI research literature in order to see how things are progressing in specific subfields of AI and in machine learning as a whole. Very interesting charts on AI versus human performance in image recognition, chess, book comprehension, and speech recognition (keep scrolling down; it's a very long page with lots of charts).

Alpha male brain switch

Researchers led by Prof. Hailan Hu, a neuroscientist at Zhejiang University in Hangzhou, China, have demonstrated that activating the dorsal medial prefrontal cortex (dmPFC) brain circuit in mice flips the neural switch for becoming an alpha male: timid mice turned bold after their 'alpha' circuit was stimulated. Results also show that the 'winner effect' lingers on and that the mechanism may be similar in humans. Profound and fascinating work.

Explaining vs. understanding

And finally, generally I find @nntaleb’s tweets pretty obnoxious and low value (unlike his books, which I find pretty obnoxious and tremendously high value), but this tweet really captured me: “Society is increasingly run by those who are better at explaining than understanding.” I pondered last week on how allocators and Funds of Funds are going to allocate to ‘AI’ (or ‘ALIS’). This quote succinctly sums up and generalizes that concern.

And finally, finally, this has nothing to do with Big Compute, AI, or investment strategies, but it is just irresistible: Winnie the Pooh blacklisted by China's online censors: "Social media ban for fictional bear follows comparisons with Xi Jinping." Original FT article here (possibly pay-walled) and lower-resolution derivative article (not pay-walled) by inUth here. As Pooh says, "Sometimes I sits and thinks, and sometimes I just sits…"

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15951/

Quantum Supremacy, Correlating Unemployment, and Buddhists with Attitude (by Silly Rabbit)

Quantum supremacy

As Ben and I have discussed before on an Epsilon Theory podcast, my view is that quantum computing is going to be truly, truly transformational by "redefining intractable", as 1QBit say, over the coming years. My conviction around quantum continues to grow and — to put a pretty big stake in the ground — I believe, at this point, the only open questions are: Which approach will dominate, and how long exactly until we get quantum machines which work on a broad set of real-world questions? I've long been a big fan of the applied, real-world progress D-Wave have made, and Rigetti too. However, the "majors" like IBM are also making substantial progress towards true "quantum supremacy" with R&D-intensive approaches, while other pieces of the ecosystem, such as the ability to "certify quantum states", continue to fall into place. In the meantime, here is a wonderful cartoon explainer on quantum computing by Scott Aaronson and Zach Weinersmith.

What web searches correlate with unemployment

Well, in order to get the answer to that question you will have to follow this link (and be prepared to blush). The findings were generated by Seth Stephens-Davidowitz using Google Correlate. “Frequently, the value of Big Data is not its size; it’s that it can offer you new kinds of information to study — information that had never previously been collected”, says Stephens-Davidowitz.

Using verbal and nonverbal behaviors to measure completeness, confidence and accuracy

I recently came across Mitra Capital in Boston, who have an interesting strategy of "using verbal indicators to judge the completeness and reliability of messages, to form predictions about company performance (via) analysis of management commentary from quarterly earnings calls and investor conferences based on a proprietary and proven framework with roots in the Central Intelligence Agency", with the underlying tech/methodology based on BIA. They're running a relatively small fund ($53m AUM in Q1 2017) and have returned an average of 8.5% over the past four years (including a +43% year and a -12.5% year). Neat NLP approach, although these returns imply more of a 'feature' than a 'product' (i.e., a valuable sub-system addition to a larger system, rather than a stand-alone system). But, hey, I said the same thing about Instagram.

Buddhists with attitude / Backtesting: Methodology with a fragility problem

Probably (hopefully!) anyone reading Epsilon Theory has already read Antifragile by Nassim Nicholas Taleb. Many things could be, and have been, said about this book, but the most important one to highlight for my narrow domain application is the massively important (although rarely discussed) distinction between machine learning/big compute approaches and regression-driven backtest approaches. The key distinction is a simple one: does your system gain from exposure to randomness and stress (within bounds), improving the longer it exists and the more events it is exposed to, OR does it perform less well under stress and decay with time? Antifragile machine learning systems are profoundly different from the fragile fitting of models.
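A toy illustration of that distinction, with synthetic data and an invented regime shift: a model fitted once and then frozen, backtest-style, accumulates error once the world changes, while a system that keeps updating from every new event it is exposed to does not:

```python
import random

random.seed(0)

# Synthetic world: the mean of the process jumps at t = 100 (a regime shift).
def regime(t):
    return 1.0 if t < 100 else 3.0

static_estimate = 1.0        # "backtest" fit: frozen on the first regime
online_estimate = 1.0        # updated with every new observation
static_err, online_err = 0.0, 0.0

for t in range(200):
    x = regime(t) + random.gauss(0, 0.1)
    static_err += abs(x - static_estimate)
    online_err += abs(x - online_estimate)
    online_estimate += 0.1 * (x - online_estimate)  # keeps learning

print(static_err > online_err)  # the frozen fit accumulates more error
```

This is a sketch, not a claim about any particular fund's models; the mechanism (frozen fit vs. continual update) is the whole point.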

And finally, since I have already invoked Taleb, and if for no other reason than the line "If someone wonders who are the Stoics I'd say Buddhists with an attitude problem", here is Taleb's commencement address to the American University of Beirut last year.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16037/

Mo’ Compute Mo’ Problems (by Silly Rabbit)

Hard problems

Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:

Source: xkcd

To be clear: YES, I AGREE

Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine which is consistently better at even a thin, discrete sliver of a complex, human-contrived domain.

The challenge, as this cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard a problem is for a machine to best a human at.

BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).

As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or some of the following characteristics:

  • Are fairly simple and discrete tasks which require repetition without error (AUTOMATION)
  • and/or are extremely large in data scale (BIG DATA)
  • and/or have calculation complexity and/or require a great deal of speed (BIG COMPUTE)
  • and where a ‘human in-the-loop’ degrades the system (AUTONOMY)

But equally, there are still many things on which machines are currently nowhere close to reaching human parity, mostly involving 'intuition', or involving many, many models with judgment on when to combine or switch between them.

Will machines eventually dominate all? Probably. When? Not anytime soon.

The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity as each sect is not fully utilizing the capabilities of the other. Good Bloomberg article from a couple of months back on Point72 and BlueMountain’s challenges in reconciling this in an existing environment.

The myth of superhuman AI

On the other side of the spectrum from our afore-referenced Tweeter are those who predict superhuman AIs taking over the world.

I find this to be a very bogus argument in anything like the foreseeable future, reasons for which are very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.

The crux of Kelly’s argument:

  • Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  • Humans do not have general purpose minds and neither will AIs.
  • Emulation of human thinking in other media will be constrained by cost.
  • Dimensions of intelligence are not infinite.
  • Intelligences are only one factor in progress.

Key quote:

Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.

(BTW: Kevin Kelly has led an amazing life – read his bio here.)

Can’t we just all be friends?

On somewhat more prosaic uses of AI, the New York Times has a nice human-angle on the people whose job is to train AI to do their own jobs. My favorite line from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!

Valley Grammar

And finally, because it just really tickles me in a funny-because-it's-true way: Benedict Evans' (@a16z) guide to the (Silicon) Valley grammar of IP development and egohood:

  • I am implementing a well-known paradigm.
  • You are taking inspiration.
  • They are rip-off merchants.

So true. So many attorneys' fees. Better rev up that AI litigator.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16065/

Future Flash Crashes, Digital Darwinism & the Resurgence of Hardware (by Silly Rabbit)

Future flash crashes

Remember a few years back when a bogus AP tweet instantly wiped $100bn off the US markets? In April 2013 the Associated Press’ Twitter account was compromised by hackers who tweeted “Breaking: Two Explosions in the White House and Barack Obama is injured.”

For illustrative purposes only.

Source: The Washington Post, 04/23/13, Bloomberg L.P., 04/23/13.

The tweet was quickly confirmed to be an alternative fact (as we say in 2017), but not before the Dow dropped 145 points (1%) in two minutes.

Well, my view is that we are heading into a far more ‘interesting’ era of flash crashes of confused, or deliberately misled, algorithms. In this concise paper titled “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos”, researchers from the University of Washington demonstrate that by inserting still images of a plate of noodles (amongst other things) into an unrelated video, they could trick a Google image-recognition algorithm into thinking the video was about a completely different topic.
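The mechanics of the trick can be sketched abstractly. The real API's internals are not public, so the "summarizer" below is an invented stand-in (classify only every Nth frame, a common cost-saving shortcut, then take the most common label), and the alignment of the inserted frames with the sampling is contrived for brevity; in the paper, inserting a still at a low fixed rate was enough:

```python
from collections import Counter

def video_label(frames, stride=25):
    # Invented stand-in for a video-level summarizer: classify only every
    # `stride`-th frame, then take the most common per-frame label.
    sampled = frames[::stride]
    return Counter(sampled).most_common(1)[0][0]

frames = ["tiger"] * 100                 # per-frame labels of a wildlife video
tampered = list(frames)
for i in range(0, 100, 25):              # periodically splice in a still of noodles
    tampered[i] = "noodles"

print(video_label(frames))               # tiger
print(video_label(tampered))             # noodles: every sampled frame is noodles
```

A handful of frames, invisible to a human viewer, completely flips the machine's summary; swap "noodles" for a market-moving headline and the flash-crash worry writes itself.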

Digital Darwinism

I’m not sure I totally buy the asserted causality on this one, but the headline story is just irresistible: “Music Streaming Is Making Songs Faster as Artists Compete for Attention.” Paper abstract:

Technological changes in the last 30 years have influenced the way we consume music, not only granting immediate access to a much larger collection of songs than ever before, but also allowing us to instantly skip songs. This new reality can be explained in terms of attention economy, which posits that attention is the currency of the information age, since it is both scarce and valuable. The purpose of these two studies is to examine whether popular music compositional practices have changed in the last 30 years in a way that is consistent with attention economy principles. In the first study, 303 U.S. top-10 singles from 1986 to 2015 were analyzed according to five parameters: number of words in title, main tempo, time before the voice enters, time before the title is mentioned, and self-focus in lyrical content. The results revealed that popular music has been changing in a way that favors attention grabbing, consistent with attention economy principles. In the second study, 60 popular songs from 2015 were paired with 60 less popular songs from the same artists. The same parameters were evaluated. The data were not consistent with any of the hypotheses regarding the relationship between attention economy principles within a comparison of popular and less popular music.

Meanwhile, in other evolutionary news, apparently robots have been ‘mating’ and evolving in an evo-devo stylee. DTR? More formal translation: Researchers have added complexity to the field of evolutionary robotics by demonstrating for the first time that, just like in biological evolution, embodied robot evolution is impacted by epigenetic factors. Original Frontiers in Robotics and AI (dense!) paper here. Helpful explainer article here.

The resurgence of hardware

As we move from a Big Data paradigm of commoditized and cheap AWS storage to a Big Compute paradigm of high-performance chips (and other non-silicon compute methods), we are discovering step-change innovation in applied processing power, driven by the Darwinian force of specialization, or, as Chris Dixon recently succinctly tweeted: "Next stage of Moore's Law: less about transistor density, more about specialized chips."

We are seeing the big guys like Google develop specialized chips custom-made for their specific big compute needs, running up to 30 times faster than today's conventional processors while using much less power, too.

Also, we are seeing increased real-world applications being developed for truly evolutionary-leap technologies like quantum computing. MIT Technology Review article on implementing the powerful Grover’s quantum search algorithm here.

And, finally, because it just wouldn't be a week in big compute-land without a machine beating a talented group of humans at one game or another: Poker-Playing Engineers Take on AI Machine – And Get Thrashed.

Key points:

  1. People have a misunderstanding of what computers and people are each good at. People think that bluffing is very human, but it turns out that’s not true. A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money.
  2. The AI didn’t learn to bluff from mimicking successful human poker players, but from game theory. Its strategies were computed from just the rules of the game, not from analyzing historical data.
  3. Also evident was the relentless decline in price and increase in performance of running advanced ‘big compute’ applications; the computing power used for this poker win can be had for under $20k.
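Points 1 and 2 above (that bluffing is arithmetic, not psychology, and falls out of the rules alone) can be sketched with one line of expected value. The pot size, bet size and fold frequency below are illustrative, not taken from the actual match:

```python
def bluff_ev(pot, bet, fold_prob):
    """Expected value of bluffing a weak hand: win the pot when the
    opponent folds, lose the bet when called (the hand is assumed to
    lose at showdown). All numbers are illustrative."""
    return fold_prob * pot - (1 - fold_prob) * bet

# With a $100 pot and a $50 bluff, bluffing is profitable whenever
# the opponent folds more than a third of the time:
print(bluff_ev(pot=100, bet=50, fold_prob=0.4))  # 10.0
print(bluff_ev(pot=100, bet=50, fold_prob=0.2))  # -20.0
```

No table reads, no poker face: a machine that has learned opponents' fold frequencies from experience bluffs exactly when the arithmetic says to.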

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16079/

Alibaba’s AI, JP Morgan’s Risky Language & the Nurture of Reality (by Silly Rabbit)

Video game-playing AI

AI has moved one step closer to mastering the classic video game StarCraft. Google, Facebook and now Alibaba have been working on AI StarCraft players, and last week a team from China’s Alibaba published a paper describing a system that learned to execute a number of strategies employed by high-level players without being given any specific instruction on how best to manage combat. Like many deep learning systems, the software improved through trial and error, demonstrating the ability to adapt to changes in the number and type of troops engaged in battle. Non-technical overview via The Verge here. Original and fairly accessible technical paper here.
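The "trial and error" learning described above is reinforcement learning, and its simplest form fits in a few lines. The three "strategies" and their hidden win rates below are invented and have nothing to do with Alibaba's actual system; the point is only that the best strategy emerges from rewards alone, with no specific instruction:

```python
import random

random.seed(42)

# Minimal trial-and-error learner (epsilon-greedy bandit). The agent is
# never told which strategy is best; it discovers this from rewards.
strategies = {"rush": 0.3, "defend": 0.5, "flank": 0.8}  # hidden true win rates
value = {s: 0.0 for s in strategies}
count = {s: 0 for s in strategies}

for step in range(2000):
    if random.random() < 0.1:                      # explore a random strategy
        s = random.choice(list(strategies))
    else:                                          # exploit the best so far
        s = max(value, key=value.get)
    reward = 1.0 if random.random() < strategies[s] else 0.0
    count[s] += 1
    value[s] += (reward - value[s]) / count[s]     # running average of reward

print(max(value, key=value.get))
```

Scaling this idea up to coordinated multi-agent combat is, very roughly, what the StarCraft papers are about.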

While an AI video game ace may not be world changing in and of itself, progress on AI intra-agent communication and coordination has potentially profound implications for markets as the approach matures, or, as the Alibaba researchers rather poetically note in their paper:

In the coming era of algorithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence start to emerge from multiple domains…[including] the trading robots gaming on the stock markets [and] ad bidding agents competing with each other over online advertising exchanges.

And how do agents behave when their game playing becomes stressful? Apparently just like their human creators: aggressively. Summary of Google DeepMind's findings on this here.

Risky language

Anyone who has ever taken general NLP algorithms, trained them on the information of the broader world and then pointed them at financial markets-type information will have noticed that they get kind of sad and messed up. Partly because markets-ese is odd (try telling your doctor that being overweight is a good thing), and partly because finance folks sure do love a risk discussion … and apparently no one more so than JP Morgan Chase CEO Jamie Dimon. In his much re-published letter to shareholders:

It is alarming that approximately 40% of those who receive advanced degrees in STEM at American universities are foreign nationals with no legal way of staying here even when many would choose to do so…Felony convictions for even minor offenses have led, in part, to 20 million American citizens having a criminal record…The inability to reform mortgage markets has dramatically reduced mortgage availability.

Thanks, Jamie, my algorithm just quit and immigrated to Canada.

The more serious question is this: as natural language algorithms (of various types) become ubiquitous, at what point do business leaders begin to craft their communications primarily to influence the machine, or at least begin to omit the detailed socio-political critiques that might accidentally trip it?
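The mismatch is easy to demonstrate with a generic lexicon-based scorer. The word lists here are a tiny invented stand-in for a real general-purpose sentiment lexicon:

```python
# A generic sentiment lexicon (tiny, invented stand-in) applied to routine
# markets language. Words like "risk" and "overweight", neutral-to-positive
# in finance, read as negative to a model trained on the broader world.
NEGATIVE = {"risk", "alarming", "felony", "criminal", "inability", "overweight"}
POSITIVE = {"growth", "opportunity", "gain"}

def naive_sentiment(text):
    words = [w.strip(".,") for w in text.lower().split()]
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(naive_sentiment("We are overweight equities, an opportunity despite risk."))  # -1
```

A finance reader hears a bullish allocation; the general-purpose score comes out negative, which is exactly why such algorithms "get kind of sad" on markets text.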

The nurture of reality

Clearly, our perception of reality, our world view, is substantially informed by our memories and the stories (links) we tell ourselves about these memories. We are now, for the first time, just starting to get an understanding of how memories are physically stored in the brain. Recollections of successive events physically entangle each other when brain cells store them, as Scientific American reports.

The Map of Physics, a joyous 8-minute video by Dominic Walliman (formerly of D-Wave quantum computing), culminates in the map below with The Chasm of Ignorance, The Future and Philosophy. Walliman points to where we must be operating if we are to break truly new ground (i.e., put the regression models down, please). And if you liked that, keep watching through to Your Quantum Nose: How Smell Works.

And, finally, a classic, epic, challenging, practical piece of prose/poetry from one of the world's greatest philosophers and orators: the late, great Tibetan Buddhist meditation master Chögyam Trungpa. A long treatise on Zen vs. Tantra as a system for nurturing the mind:

…the discovery of shunyata [emptiness of determinate intrinsic nature] is no doubt the highest cardinal truth and the highest realization that has ever been known…

Coming next week: The next generation of flash crashes; digital Darwinism and the resurgence of hardware.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16086/

AI Hedge Funds, Corporate Inequality & Microdosing LSD (by Silly Rabbit)

Machines and suchlike

DARPA has produced a 15 minute AI explainer video. A fair review: “Artificial intelligence is grossly misunderstood. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry.” Well worth watching for orientation on where we are — and where we are not — with AI today.

In case you are interested in ‘AI hedge funds’ and haven’t come across them, Sentient should be on your radar. And Walnut Algorithms, too. They look to be taking quite different AI approaches, but at some point, presumably, AI trading will become a recognized category. Interesting that the Walnut article asserts — via EurekaHedge — that “there are at least 23 ‘AI Hedge Funds’ with 12 actively trading”. Hmm …

[Ed. note — double hmm … present company excepted, there’s a lot less than meets the eye here. IMO.]

On the topic of Big Compute, I’m a big believer in the near-term opportunity of usefully incorporating quantum compute into live systems for certain tasks within the next couple of years and so opening up practical solutions to whole new classes of previously intractable problems. Nice explanation of ‘What Makes Quantum Computers Powerful Problem Solvers’ here.

[Ed. note — for a certain class of problems (network comparisons, for example) which just happen to be core to Narrative and mass sentiment analysis, the power of quantum computing versus non-quantum computing is the power of 2^n versus n^2. Do the math.]
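Taking the note's invitation to "do the math", a quick look at how fast exponential growth outruns polynomial growth:

```python
# 2**n (exponential) versus n**2 (polynomial): the gap that makes
# previously intractable problems tractable if you can get on the
# right side of it.
for n in (4, 8, 16, 32, 64):
    print(n, n**2, 2**n)

# By n = 64, n**2 is a modest 4,096 while 2**n is
# 18,446,744,073,709,551,616.
```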

Quick overview paper on the Julia programming language here. Frankly, I've never come across Julia (that I know of) in the wild out here on the west coast, but I can see the attraction for folks coming from a MATLAB-type background, where 'prototype research' and 'production engineering' are not cleanly split. Julia seems, to some extent, to be targeting trading-type 'quants', which makes sense.

Paper overview: “The innovation of Julia is that it addresses the need to easily create new numerical algorithms while still executing fast. Julia’s creators noted that, before Julia, programmers would typically develop their algorithms in MATLAB, R or Python, and then re-code the algorithms into C or FORTRAN for production speed. Obviously, this slows the speed of developing usable new algorithms for numerical applications. In testing of seven basic algorithms, Julia is impressively 20 times faster than Python, 100 times faster than R, 93 times faster than MATLAB, and 1.5 times faster than FORTRAN. Julia puts high-performance computing into the hands of financial quants and scientists, and frees them from having to know the intricacies of high-speed computer science”. Julia Computing website link here.

Humans and suchlike

This HBR article on 'Corporations in the Age of Inequality' is, in itself, pretty flabby, but the TLDR soundbite version is compelling: "The real engine fueling rising income inequality is 'firm inequality'. In an increasingly … winner-take-most economy the … most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders." On a micro-level I think we are seeing an acceleration of this within technology-driven firms (both companies and funds).

[Ed. note — love TLDR. It’s what every other ZeroHedge commentariat writer says about Epsilon Theory!]

A great — if nauseatingly ‘rah rah’ — recent book with cutting-edge thinking on getting your company’s humans to be your moat is: Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. Warning: Microdosing hallucinogens and going to Burning Man are strongly advocated!

Finally, on the human side, I have been thinking a lot about 'talent arbitrage' for advanced machine learning talent (i.e., how not to slug it out with Google, Facebook et al. in the Bay Area for every hire) and went on a bit of a world tour of various talent markets over the past couple of months. My informal perspective: Finland, parts of Canada and Oxford (UK) are the best markets in the world right now — really good talent that has been way less picked over. Does bad weather and high taxes give rise to high-quality AI talent pools? Kind of, in a way, probably.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16098/

The Horse in Motion

Scared money can’t win and a worried man can’t love.

―  Cormac McCarthy, All the Pretty Horses (1992)

In 1872, noted horseracing aficionado and San Francisco rich guy Leland Stanford (yes, of university fame) commissioned noted photographer and San Francisco smart guy Eadweard Muybridge to apply his path-breaking technology of stop-action photography to settle a long-running debate — do all four hooves leave the ground at the same time when horses run? This question had bedeviled the Sport of Kings for ages, and while Stanford favored the “unsupported transit” theory of yes, all four hooves leaving the ground for a split-second in the outstretched position, allowing horses to briefly “fly”, he — as rich guys often do — really, really, really needed to know for sure.

It took Muybridge about 12 years to complete the work, interrupted in part by his murder trial. It seems that Muybridge had taken a young bride (she 21 and he 42 when they married) who preferred the company of a young dandy of a San Francisco drama critic who styled himself, in faux military fashion, as Major Harry Larkyns. After learning that wife Flora’s 7-month-old son Florado was perhaps not biologically his, Muybridge tracked Larkyns down and shot him point-blank in the chest with the immortal words, “Good evening, Major, my name is Muybridge and here’s the answer to the letter you sent my wife.” In one of the more prominent early cases of jury nullification (Philip Glass has an opera, The Photographer, with a libretto based on the court transcripts), Muybridge was found not guilty on the grounds of justifiable homicide despite the judge’s clear instructions to the contrary. Or maybe the jurors were just bought off. Leland Stanford spared no expense in paying for Muybridge’s defense. Gotta get those horse pix.

And eventually he did. Muybridge’s work, The Horse in Motion, settled the question of unsupported transit once and for all.

Yes, all four hooves leave the ground at the same time. But it’s NOT in the outstretched flying position. Instead, it’s in the tucked position, which — because it’s not as romantic a narrative as flying — had never been widely considered as an answer. In fact, for decades after the 1882 publication of The Horse in Motion in book form (a book by Leland Stanford’s fellow rich guy friend, J.D.B. Stillman, who gave ZERO credit to Muybridge for the work … after all, Muybridge was just Stanford’s work-for-hire employee, a member of the gig economy of the 1870s), artists continued to prefer the more narrative-pleasing view of flying horses. Here, for example, is Frederic Remington’s 1889 painting A Dash for the Timber, a work that was largely responsible for catapulting Remington to national prominence, replete with a whole posse of flying horses (h/t to John Batton in Ft. Worth, who knows his Amon Carter Museum collection!).

Okay, Ben, that’s a fun story of technology, art, murder, and rich guy intrigue set in 1870s San Francisco. But what does it have to do with modern markets and investing?

This: Muybridge developed a technology that allowed for a quantum leap forward in how humans perceived the natural world. His findings flew in the face of the popular narrative for how the natural world of biomechanics worked, but they were True nonetheless and led to multiple useful applications over time. Today we are at the dawning of a technology that similarly allows for a quantum leap forward in how humans perceive the world, but with a focus on the social world as opposed to the natural world. Some of these findings will no doubt similarly fly in the face of the popular narrative for how the social world of markets and politics works, but they will similarly lead to useful applications. They already are.

The technology I’m talking about is the biggest revolution in the world today. It’s the ascendancy of non-human intelligences, which I’ve written about in lots of Epsilon Theory notes, from Rise of the Machines to First Known When Lost to Troy Will Burn – the Big Deal about Big Data to The Talented Mr. Ripley to One MILLION Dollars to Two Discoveries. It’s what most of the world calls Artificial Intelligence, which is a term I dislike for its pejorative anthropomorphism. It’s what Neville Crawley calls Big Compute, which is a great phrase, not least for its progression and distinction from the old hat notion of Big Data (h/t to Neville for turning me on to the Muybridge story, too).

The primary impact of Big Compute, or AI or whatever you want to call it, is that it allows for a quantum leap forward in how we humans can perceive the world. Powerful non-human intelligences are the modern day Oracle of Delphi. They can “see” dimensions of the world that human intelligences cannot, and if we can ask the right questions we can share in their vision, as well. The unseen dimensions of the social world that I’m interested in tapping with the help of non-human intelligences are the dimensions of unstructured data, the words and images and communications that comprise the ocean in which the human social animal swims.

This is the goal of the Narrative Machine research project (read about it in The Narrative Machine and American Hustle). That just as Eadweard Muybridge took snapshots of the natural world using his new technology, so do I think it possible to take snapshots of the social world using our new technology. And just as Muybridge’s snapshots gave us novel insights into how the Horse in Motion actually works, as opposed to our romantic vision of how it works, so do I think it likely that these AI snapshots will give us novel insights into how the Market in Motion actually works.
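As a deliberately toy illustration of what one frame of such a snapshot might look like (the word list, function name, and sample headlines below are invented for this sketch, and bear no resemblance to the project's actual methods), one could freeze a single day of headlines by tallying narrative-laden words:

```python
import re
from collections import Counter

# A tiny hand-picked vocabulary of narrative-laden words. The real
# research uses far richer NLP over far more text; this only shows
# the idea of freezing one frame of the ocean of unstructured data,
# the way Muybridge froze one frame of a galloping horse.
NARRATIVE_WORDS = {"fear", "euphoria", "bubble", "crash", "rally"}

def narrative_snapshot(headlines):
    """Count narrative-word occurrences across a batch of headlines."""
    tokens = re.findall(r"[a-z]+", " ".join(headlines).lower())
    return dict(Counter(t for t in tokens if t in NARRATIVE_WORDS))

day = [
    "Stocks rally as fear of a crash fades",
    "Is this rally a bubble?",
]
print(narrative_snapshot(day))  # {'rally': 2, 'fear': 1, 'crash': 1, 'bubble': 1}
```

String a sequence of such frames together, day after day, and you get the social-world analogue of Muybridge's motion study: the Market in Motion rather than the Horse in Motion.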

That’s the horse I’m betting on in Epsilon Theory.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16106/