Massively Fast Compute, AI Algorithms and Blockchain Development (by Silly Rabbit)

I’m limiting this week’s Rabbit Hole to three links which represent the rapid tick-tock of the trifecta of massively fast compute, AI algorithms and blockchain development as I believe that these are the top three technology mega-trends of the 2015 – 2025 period (ex-Life Sciences innovation). Personally, I still believe that within these three mega-trends massively fast compute (Big Compute) will be the most world-changing, but clearly big compute hardware and algorithm development are deeply intertwined, and I believe we will start to see blockchain intertwine in a meaningful, although as-yet somewhat unclear, way with these other two technologies too.

That’s a fast chip you got there, bud

Very accessible CB Insights write-up here and denser original paper here of a test of a photonic computer chip which “mimics the way the human brain operates, but at 1000x faster speeds” with much lower energy requirements than today’s chips. To state the obvious, the exciting/terrifying potential of chips like this becoming reality is that machines will be able to learn rapidly and cumulatively, while we humans are still limited by learning, passing on some fraction of that learning, and then dying, which is clearly a pretty inefficient process.

The future of AI learning: nature or nurture?

IEEE Spectrum provides an overview of a recent debate between Yann LeCun and Gary Marcus at NYU’s Center for Mind, Brain and Consciousness on whether or not AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve similar intelligence.

Blockchain for Wall Street

Bloomberg reports on a major breakthrough in cryptography which may have solved one of the biggest obstacles to using blockchain technology on Wall Street: keeping transaction data private. Known as a “zero-knowledge proof,” the new code will be included in an Oct. 17 upgrade to the Ethereum blockchain, adding a level of encryption that lets trades remain private.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15828/

Information Bottlenecks, Fake News and Boredom (by Silly Rabbit)

Information bottleneck

A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn:

“Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”
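
The “precise sense” Tishby has in mind can be written down compactly. This is the standard information bottleneck objective (my gloss of the idea, not a quote from the piece): find a compressed representation T of the input X that stays as informative as possible about the target Y,

    minimize over encoders p(t|x):   L = I(X;T) − β·I(T;Y)

where I(·;·) is mutual information. The first term rewards compression (squeeze X through the bottleneck); the second, weighted by the trade-off parameter β, rewards keeping only what is relevant for predicting Y.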

Quantum computers need smart software

Nature reports “The world is about to have its first (useful) quantum computers … The problem is how best to program these devices. The stakes are high — get this wrong and we will have experiments that nobody can use instead of technology that can change the world.” Related to this, I’m excited to spend some time in a couple of weeks with Scott Aaronson of QCWare who “develop hardware-agnostic enterprise software solutions running on quantum computers”.

In other “the quantum age is nigh” news:

A pair of researchers from the University of Tokyo have developed what they’re calling the “ultimate” quantum computing method. Unlike today’s systems, which can currently only handle dozens of qubits, the pair believes their model will be able to process more than a million.

Australian researchers have designed a new type of qubit — the building block of quantum computers — that they say will finally make it possible to manufacture a true, large-scale quantum computer.

Microsoft now has 8,000 AI researchers

Apparently, Microsoft now has 8,000 AI researchers. That’s a veritable army. Presumably a big chunk of the 8,000 are data mungers, infrastructure engineers, etc., just as on an aircraft carrier like the USS Nimitz, where there are, order of magnitude, the same number of personnel but most are cooks, logistics managers, medics, etc., rather than fighter pilots. But still: Eight thousand!!!

And in other “that’s a lot of engineers” news: Amazon now has 5,000 people working on the Echo / Alexa.

As I’ve noted before, in my view it is utter conceit to believe it is possible to do something ‘AI’ which is truly and sustainably novel, scaled and production-ready in a high-stakes environment (such as trading) without a decent-sized team focused on a narrowly defined problem.

Fake news and botnets

Fascinating interview with researcher Emilio Ferrara on fake news and botnets:

“We found that bots can be used to run interventions on social media that trigger or foster good behaviors,” said Ferrara. “This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection. Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time.”

Representational universality

It has been at least a month since we have had a Hofstadter quote, and this week’s Rabbit Hole column feels light on existential theory, so here’s a classic:

“In the world of living things, the magic threshold of representational universality is crossed whenever a system’s repertoire of symbols becomes extensible without any obvious limit.”

Boredom

And finally, in general I have quite a bit of reticence about sharing TED Talk links as, to quote the low-agreeability Benjamin Bratton, they can be kinda “Middlebrow Megachurch Infotainment.” Having said that, here’s a link to a terrific TED Talk on why boredom is important.

Best quote:

“As one UX designer told me, the only people who refer to their customers as “users” are drug dealers and technologists.”

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15841/

Youth, Immutable Content, and the Secondhand Scoop (by Silly Rabbit)

This week’s Rabbit Hole column is more thematic with recent links that I found interesting around the topic of ‘news,’ on which Ben wrote the defining commentary of recent years with Fiat Money, Fiat News.

Youth and news

I’ve always appreciated the quality and integrity of the work of the Knight Foundation. This report is a fascinating summary of a focus group with 52 teenagers and young adults from across the United States on how young people conceptualize and consume news in digital spaces.

A scalable blockchain protocol for publicly accessible and immutable content

This is the category of blockchain things which I think is interesting and transformative: https://steem.io/steem-bluepaper.pdf .
(NOTE: I have no connection to Steem, I just like the category)

“Compared to other blockchains, Steem stands out as the first publicly accessible database for immutably stored content in the form of plain text, along with an in-built incentivization mechanism. This makes Steem a public publishing platform from which any Internet application may pull and share data while rewarding those who contribute the most valuable content.”

The Bradd Jaffy and Kyle Griffin approach

Here I re-share a link to a Buzzfeed story about Bradd Jaffy and Kyle Griffin, who re-share links on Twitter to other people’s news stories. If only Bradd Jaffy and Kyle Griffin could then re-share this link, and then Buzzfeed could write about that … But, beyond the comical circularity potential, it is a very interesting story by Buzzfeed on the power of non-traditional distribution channels / influencers and ‘the secondhand scoop.’

The Norwegian approach

Nieman Lab reports that a Norwegian news site (the online arm of the NRK public broadcaster) requires readers to answer questions to prove they understand a story before posting comments: “We thought we should do our part to try and make sure that people are on the same page before they comment … If everyone can agree that this is what the article says, then they have a much better basis for commenting on it.”

What words ought to exist?

And finally, here is a fun paper which the author describes as “An earnest attempt to answer the following question scientifically: What words ought to exist?” using “computational cryptolexicography, n-Markov models, coinduction…”

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15857/

Revenge of the Humans, Emojis & Mushrooms (by Silly Rabbit)

Revenge of the Humans Part II

This is a really terrific (long) piece of writing: Revenge of the Humans Part II: A New Blueprint For Discretionary Management:

The meat really starts to kick in at the section ‘There Are No Shortcuts’ and reaches peak lucidity in the section ‘Organizational Structure’. Excellent work by Leigh Drogen, Founder and CEO at Estimize, laying out what I really do believe is the blueprint for success with ‘next gen’ strategies that are foundationally systematic and substantially software-encoded:

Portfolio Manager — Of all the roles this is where I think things really need to change in terms of who sits in this seat. It can no longer be hedge fund bros, they simply won’t survive here. Nor will the pure gunslingers and tape readers, gone. And you certainly don’t want the pure quants sitting in this seat. PMs of the future are going to be far more interpersonal and process driven…. This is a cross functional role, and one that needs to be based on the behavioral attributes of the person more than anything else. An MBA may be useful here, but I would even say that having experience working at the early stages of a startup as a CEO can add a lot. I’m waiting for someone to develop a firm to leverage psychometric testing for different investment strategies so that we can identify people tuned for momentum vs value. You’re talking about a completely different psychology between those two people and it’s imperative you choose the person correctly … PMs should have some training in statistical and quantitative methods in order for them to talk intelligently with the quants and trust the factor models. Without that trust, there’s simply no point in having them and you’ll only gain that by understanding how they are built. Should a PM know how to code, no. Should they understand what the code does and why, absolutely. Basic data science classes can provide this knowledge. Quantitative research methods 101 in college is a requirement … I believe that compensation structures for the PM need to change. This is no longer “his book”. He is another player on the team, who has a specific role, to coordinate the dance. But in many ways, he will have less impact on the alpha generated by the book than the analysts or the quants who create the factor models. The PM is now the offensive coordinator calling the plays, not the quarterback on the field scrambling around and throwing touchdowns. We can now compensate analysts accurately for the efficacy of their calls, and the PM for how much alpha she adds above them. The rest of the team should be bonused out based on the performance of the book.

DeepMoji

Neat work by the good folks at MIT Media Lab:

Our basic idea with our DeepMoji project is that if the model is able to predict which emoji was included with a given sentence, then it has an understanding of the emotional content of that sentence. We are training our model to predict emojis on a dataset of 1.2B tweets (filtered from 55B tweets). We can then transfer this knowledge to a target task by doing just a little bit of additional training on top with the target dataset. With this approach, we beat the state-of-the-art across benchmarks for sentiment, emotion, and sarcasm detection.

Check out the online demo here, more detailed write-up here, and full technical paper here.
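
For a sense of how that transfer step works mechanically, here is a minimal Python/Keras sketch (names and API details here are illustrative assumptions, not DeepMoji’s actual code; the team’s own “chain-thaw” procedure is a more careful variant that unfreezes and tunes layers one at a time):

    from tensorflow.keras.layers import Dense
    from tensorflow.keras.models import Model

    def transfer_to_target_task(pretrained, n_target_classes, freeze_body=True):
        # Drop the emoji softmax head and keep the learned text encoder.
        encoder_output = pretrained.layers[-2].output
        new_head = Dense(n_target_classes, activation="softmax")(encoder_output)
        model = Model(pretrained.input, new_head)
        if freeze_body:
            # Train only the new head at first; the encoder keeps the
            # emotional representations learned from the emoji pretraining.
            for layer in model.layers[:-1]:
                layer.trainable = False
        model.compile(optimizer="adam", loss="categorical_crossentropy")
        return model

The point of the approach is that the expensive part (learning emotional representations from over a billion tweets) is done once, and each new task (sentiment, emotion, sarcasm) only needs “a little bit of additional training on top.”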

Useful skills like VR, NLP and… econometrics?

This list of fastest-growing freelancer skills compiled by Upwork, a job site that matches freelancers with employers, is just so odd I feel there is either some deep pattern coded in there that explains everything, or else some intern at Upwork is having a laugh.

Growth in VR and NLP makes total sense given the relative lack of experienced talent vs growth in demand, especially for VR developers. Neural network and Docker development for the same reasons. Adobe Photoshop freelancers — sure, I guess Photoshop is still operated by a priesthood although it’s unclear why the journeyman priesthood is growing rapidly.

But then Econometrics, really??!!? — never, ever, in my life have I thought “what I really need to do is to hire a random econometrician over the internet”, and for sure that thought has not been exponentially increasing of late.

And Asana work tracking, which had only around 20,000 paying customers a year ago?!!? — that’s like having ‘Tesla car polisher’ on the list.

Anyway, I leave you to ponder. It certainly is an intriguing list — perhaps what we need is an econometric hireling to make sense of it for us…

Mushrooms

And finally and frivolously, we have this article which is pretty much a total waste of storage space as it is a 700-word, not-very-good takedown of a new not-very-good mushroom-identifying mobile app with sub-par mushroom image recognition. However, it warrants inclusion in this week’s Rabbit Hole for the one immortal line:

There’s a saying in the mushroom-picking community that all mushrooms are edible but some mushrooms are only edible once.

Surely the apothegm of the week!

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15888/

Data Access Battles, Creative Thinking & Full Script AI (by Silly Rabbit)

Data access battles

A couple of weeks back I shared a link to the story of ImageNet and the importance of data to developing algorithms. Ars Technica reports on two ‘at the coalface’ battles over data access, with hiQ and Power Ventures fighting LinkedIn and Facebook respectively. I’m not advocating a position on this but, to be sure, small — and currently obscure — court cases like these will, cumulatively, end up setting the precedents which will have a significant impact on the evolution and ownership of powerful algorithms that are increasingly driving behavior and economics.

Creative thinking

This speech from Claude Shannon at Bell Labs in 1952 has been circulating online for the past couple of weeks. It is a timeless, pragmatic speech on creative thinking which remains, 65 years later, fully relevant for developing novel computational strategies:

Sometimes I have had the experience of designing computing machines of various sorts in which I wanted to compute certain numbers out of certain given quantities. This happened to be a machine that played the game of nim and it turned out that it seemed to be quite difficult. It took quite a number of relays to do this particular calculation although it could be done. But then I got the idea that if I inverted the problem, it would have been very easy to do — if the given and required results had been interchanged; and that idea led to a way of doing it which was far simpler than the first design. The way of doing it was doing it by feedback; that is, you start with the required result and run it back until — run it through its value until it matches the given input. So the machine itself was worked backward putting range S over the numbers until it had the number that you actually had and, at that point, until it reached the number such that P shows you the correct way.

Facebook shuts down robots after they invent their own language

Facebook shuts down robots after they invent their own language has become a widely reported and wildly commentated story over the past month, referencing a story on ‘Tricky chatbots’ linked here a couple of months back. For melodramatic illustrative effect, I like switching a couple of words in the Facebook headline so that it reads ‘Lehman (doesn’t) shut down traders after they invent their own language’ as it illustrates that, in general, if you put a bunch of agents (human or machine) together and set up a narrowly defined, adversarial, multi-player game with a strong reward function, then the agents will: develop their own task-specific language and protocols; keep adding complexity; lie to each other (yes, the FB bots also learnt to do that); be tempted to obfuscate behavior in order to reduce interference and maximize the reward function; and develop models which are positive for near-term reward maximization but do not necessarily deal with longer-term consequences or long-tail events, and so become very hard for human overseers to truly assess…

DICK FULD (2008): 
I wake up every single night wondering what I could have done differently — this is a pain that will stay with me the rest of my life

FACEBOOK (2017):
Hold my beer

AI: From partial to full script

Thinking more broadly about the longer-term evolution of AI (and the nature of money and contracts, per Ethereum link last week), it has been interesting to re-read Sapiens: A Brief History of Humankind by Yuval Noah Harari which charts the rise to dominance of us Sapiens with especially interesting chapters on the development of written language and money. A concept which particularly grabbed me was that written language was initially developed as ‘partial script’ technology for narrow tasks such as tax accounting, and then evolved to be full script and so capable of much more than it was originally conceived for.

The history of writing is almost certainly a wonderful historical premonition of the trajectory of AI, except with the evolution being much faster, and with the warning that likely ‘the AI is mightier than the pen.’

Relevant excerpt from Sapiens:

Full script is a system of material signs that can represent spoken language more or less completely. It can therefore express everything people can say, including poetry. Partial script, on the other hand, is a system of material signs that can represent only particular types of information, belonging to a limited field of activity … It didn’t disturb the Sumerians (who invented the script) that their script was ill-suited for writing poetry. They didn’t invent it in order to copy spoken language, but rather to do things that spoken language failed at … Between 3000 BC and 2500 BC more and more signs were added to the Sumerian system, gradually transforming it into a full script that we today call cuneiform. By 2500 BC, kings were using cuneiform to issue decrees, priests were using it to record oracles, and less-exalted citizens were using it to write personal letters.

The beautiful mathematical explorations of Maryam Mirzakhani

And finally, at the risk of turning into The Economist, we conclude this week’s Rabbit Hole with a touching obituary of the Tehran-born, Fields Medal-winning mathematician Maryam Mirzakhani:

A bit more than a decade ago when the mathematical world started hearing about Maryam Mirzakhani, it was hard not to mispronounce her then-unfamiliar name. The strength and beauty of her work made us learn it. It is heartbreaking not to have Maryam among us any longer. It is also hard to believe: The intensity of her mind made me feel that she would be shielded from death.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15893/

AI BS Detectors & the Origins of Life (by Silly Rabbit)

Confidence levels for the Social and Behavioral Sciences

DARPA recently put out an RFI:

…requesting information on new ideas and approaches for creating (semi)automated capabilities to assign ‘Confidence Levels’ to specific studies, claims, hypotheses, conclusions, models, and/or theories found in social and behavioral science research (and) help experts and non-experts separate scientific wheat from wrongheaded chaff using machine reading, natural language processing, automated meta-analyses, statistics-checking algorithms, sentiment analytics, crowdsourcing tools, data sharing and archiving platforms, network analytics, etc.

A visionary and high-value RFI. Wired article on the same, enticingly titled DARPA Wants to Build a BS Detector for Science.

Claude Berrou on turbo codes and informational neuroscience

Fascinating short interview with Claude Berrou, a French computer and electronics engineer who has done important work on turbo codes for telecom transmissions and is now working on informational neuroscience. Berrou describes his work through the lens of information and graph theory:

My starting point is still information, but this time in the brain. The human cerebral cortex can be compared to a graph, with billions of nodes and thousands of billions of edges. There are specific modules, and between the modules are lines of communication. I am convinced that the mental information, carried by the cortex, is binary. Conventional theories hypothesize that information is stored by the synaptic weights, the weights on the edges of the graph. I propose a different hypothesis. In my opinion, there is too much noise in the brain; it is too fragile, inconsistent, and unstable; pieces of information cannot be carried by weights, but rather by assemblies of nodes. These nodes form a clique, in the geometric sense of the word, meaning they are all connected two by two. This becomes digital information…
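
As a concrete aside, “all connected two by two” is exactly the graph-theoretic definition of a clique. A minimal Python illustration (mine, not Berrou’s):

    from itertools import combinations

    def is_clique(nodes, edges):
        # edges: a set of frozensets, e.g. {frozenset((1, 2)), frozenset((2, 3))}
        # A clique requires every pair of its nodes to share an edge.
        return all(frozenset(pair) in edges for pair in combinations(nodes, 2))

    edges = {frozenset((1, 2)), frozenset((2, 3)), frozenset((1, 3))}
    print(is_clique({1, 2, 3}, edges))  # True: all three pairs are connected

In Berrou’s hypothesis, it is the activation of such fully interconnected assemblies, rather than the fine-tuned weights on individual edges, that carries a piece of mental information, which is what makes the information digital rather than analog.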

Thermodynamics in far-from-equilibrium systems

I’m a sucker for methods that try to understand and explain complex systems, such as this story by Quanta (the publishing arm of the Simons Foundation — as in Jim Simons of Renaissance Technologies fame) of Jeremy England, a young MIT associate professor, using non-equilibrium statistical mechanics to poke at the origins of life.

Game theory

And finally, check out this neat little game theory simulator which explores how trust develops in society. It’s a really sweet little application with fun interactive graphics framed around the historical 1914 No Man’s Land Ceasefire. Check out more fascinating and deeply educational games from creator Nicky Case here.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15916/

Programmable Money & Auto Public Offerings (by Silly Rabbit)

Programmable money

I’ve recently — perhaps belatedly — developed an interest in blockchain, and particularly in Ethereum. Not so much in trading crypto-currencies, but more in the realm of the type of ‘Smart Token’ protocols being developed by Bancor. As I start to process the implications of smart contracts I’m convinced that we are currently at Day Zero of a massive disruption. To quote Mike Goldin on one dimension of this disruption: “What blockchains give us, fundamentally, is programmable money. When you can program money, you can program incentives. When you can program incentives, you can kind of program people’s behavior.”

Algorithms mastering ‘human’ skills

Another week, another set of ‘human’ skills which algorithms are mastering: Google demonstrates both an algorithm for tastefully selecting landscape photography, which is almost as good as a pro photographer, and, from the DeepMind division, “a new family of approaches for imagination-based planning (and) architectures which provide new ways for agents to learn and construct plans to maximize the efficiency of a task.”

Rough translation: an AI which has the rudimentary ability to consider the potential consequences of an action (to ‘imagine’) and to plan ahead achieves a higher success rate than AIs without this ability.

ImageNet: the data that changed AI research

Long, terrific overview of the history and impact of the ImageNet data set: “One thing ImageNet changed in the field of AI is suddenly people realized the thankless work of making a dataset was at the core of AI research. People really recognize the importance — the dataset is front and center in the research as much as algorithms.”

Auto Public Offering

Generally, ‘automation of white collar work’ is such an obviously disruptive category of AI — and near-term economic earthquake for many industries — that there is not much to say about it. However, this short piece by Bloomberg a few weeks back caught my eye: Apparently Goldman has automated (or at least mapped out how to automate) half the tasks needed to prepare for an IPO, thus replacing the work previously done by associates earning $326,000 a year. As Bill Gates famously said: “Be nice to nerds. Chances are you’ll end up working for one.”

The paradox of historical knowledge

And finally, I shared a pretty hefty quote from “Homo Deus: A Brief History of Tomorrow” by Yuval Noah Harari last week related to algorithms and self. On a completely different topic, the book also contains a fantastic quote on the paradox of historical knowledge: “This is the paradox of historical knowledge: Knowledge that does not change behavior is useless. But knowledge that changes behavior quickly loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes outdated.”

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15929/

Algorithmic Complexes, Alpha Male Brains, and Winnie the Pooh (by Silly Rabbit)

Massively complex complexes of algorithms

Let me come straight out with it and state, for the record, that I believe the best current truth we have is that we humans, along with all other living beings, are simply massively complex complexes of algorithms. What do I mean by that? Well, let’s take a passage from the terrific Homo Deus by Yuval Noah Harari, which describes this concept at length and in detail:

In recent decades life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather, emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals. What does this mean? Well, let’s begin by explaining what an algorithm is, because the 21st Century will be dominated by algorithms. ‘Algorithm’ is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is and how algorithms are connected with emotions. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation but the method followed when making the calculation.

Consider, for example, the following survival problem: a baboon needs to take into account a lot of data. How far am I from the bananas? How far away is the lion? How fast can I run? How fast can the lion run? Is the lion awake or asleep? Does the lion seem to be hungry or satiated? How many bananas are there? Are they big or small? Green or ripe? In addition to these external data, the baboon must also consider information about conditions within his own body. If he is starving, it makes sense to risk everything for those bananas, no matter the odds. In contrast, if he has just eaten, and the bananas are mere greed, why take any risks at all? In order to weigh and balance all these variables and probabilities, the baboon requires far more complicated algorithms than the ones controlling automatic vending machines. The prize for making correct calculations is correspondingly greater. The prize is the very survival of the baboon. A timid baboon — one whose algorithms overestimate dangers — will starve to death, and the genes that shaped these cowardly algorithms will perish with him. A rash baboon — one whose algorithms underestimate dangers — will fall prey to the lion, and his reckless genes will also fail to make it to the next generation. These algorithms undergo constant quality control by natural selection. Only animals that calculate probabilities correctly leave offspring behind. Yet this is all very abstract. How exactly does a baboon calculate probabilities? He certainly doesn’t draw a pencil from behind his ear, a notebook from a back pocket, and start computing running speeds and energy levels with a calculator. Rather, the baboon’s entire body is the calculator. What we call sensations and emotions are in fact algorithms. The baboon feels hunger, he feels fear and trembling at the sight of the lion, and he feels his mouth watering at the sight of the bananas. Within a split second, he experiences a storm of sensations, emotions and desires, which is nothing but the process of calculation. The result will appear as a feeling: the baboon will suddenly feel his spirit rising, his hairs standing on end, his muscles tensing, his chest expanding, and he will inhale a big breath, and ‘Forward! I can do it! To the bananas!’ Alternatively, he may be overcome by fear, his shoulders will droop, his stomach will turn, his legs will give way, and ‘Mama! A lion! Help!’ Sometimes the probabilities match so evenly that it is hard to decide. This too will manifest itself as a feeling. The baboon will feel confused and indecisive. ‘Yes . . . No . . . Yes . . . No . . . Damn! I don’t know what to do!’

Why does this matter? I think understanding and accepting this point is absolutely critical to being able to construct certain classes of novel and interesting algorithms. “But what about consciousness?” you may ask, “Does this not distinguish humans and raise us above all other animals, or at least machines?”

There is likely no better explanation, or succinct quote, to deal with the question of consciousness than Douglas Hofstadter’s in I Am a Strange Loop:

“In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.”

Let’s accept Hofstadter’s explanation (which is — to paraphrase and oversimplify terribly — that, at a certain point of algorithmic complexity, consciousness emerges due to self-referencing feedback loops) and now hand the mic back to Harari to finish his practical thought:

“This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just an amusing pastime for philosophers, but now humans are in danger of losing their economic value because intelligence is decoupling from consciousness.”

Or, to put it another way: if what I need is an intelligent algorithm to read, parse and tag language in certain reports based on whether humans with a certain background would perceive the report as more ‘growth-y’ vs ‘value-y’ in its tone and tenor, why do I need to discriminate whether the algorithm performing this action has consciousness or not, or which parts of the algorithms have consciousness (assuming that the action can be equally parallelized either way)?

AI vs. human performance

Electronic Frontier Foundation has done magnificent work pulling together problems and metrics/datasets from the AI research literature in order to see how things are progressing in specific subfields and in AI/machine learning as a whole. Very interesting charts on AI versus human performance in image recognition, chess, book comprehension, and speech recognition (keep scrolling down; it’s a very long page with lots of charts).

Alpha male brain switch

Researchers led by Prof. Hailan Hu, a neuroscientist at Zhejiang University in Hangzhou, China, have demonstrated that activating the dorsal medial prefrontal cortex (dmPFC) brain circuit in mice flips a neural switch for becoming an alpha male: timid mice turned bold after their ‘alpha’ circuit was stimulated. Results also show that the ‘winner effect’ lingers on, and that the mechanism may be similar in humans. Profound and fascinating work.

Explaining vs. understanding

And finally, generally I find @nntaleb’s tweets pretty obnoxious and low value (unlike his books, which I find pretty obnoxious and tremendously high value), but this tweet really captured me: “Society is increasingly run by those who are better at explaining than understanding.” I pondered last week on how allocators and Funds of Funds are going to allocate to ‘AI’ (or ‘ALIS’). This quote succinctly sums up and generalizes that concern.

And finally, finally, this has nothing to do with Big Compute, AI, or investment strategies, but it is just irresistible: Winnie the Pooh blacklisted by China’s online censors: “Social media ban for fictional bear follows comparisons with Xi Jinping.” Original FT article here (possibly paywalled) and lower-resolution derivative article (not paywalled) by inUth here. As Pooh says, “Sometimes I sits and thinks, and sometimes I just sits…”

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15951/

AI & Video Games, Tricky Chatbots and More… (by Silly Rabbit)

AI and video games (again)

Vicarious (a buzzy Silicon Valley company developing AI for robots) say they have a new and crazy-good AI technique called Schema Networks. The Allen Institute for Artificial Intelligence and others seem pretty skeptical and demand a throw-down challenge with AlphaGo (or, failing that, some peer-reviewed papers with commonly used terms and a broader set of tests).

In other AI video game news, Microsoft released a video of their AI winning at Ms. Pac-Man, with an instructive voiceover explaining how the system works.

Tricky chatbots

I recently stumbled upon Carl Icahn’s Twitter feed which has the tag line: “Some people get rich studying artificial intelligence. Me, I make money studying natural stupidity.” Me, I think in 2017 this dichotomy is starting to sound pretty quaint. See: Overview of recent FAIR (Facebook Artificial Intelligence Research division) study teaching chatbots how to negotiate, including the bots self-discovery of the strategy of pretending to care about an item to which they actually give little or no value, just so they can later give up that item to seem to have made a compromise. Apparently, while they were at it, the Facebook bots also unexpectedly created their own language.

The quantum age has officially arrived

I’ve been jabbering on and pointing to links about quantum computing and the types of intractable problems it can solve for some time (here, here and here), but now that Bloomberg has written a long piece on quantum, we can officially declare “The quantum age has officially arrived, hurrah!” Very good overview piece on quantum computing from Bloomberg Markets here.

Your high dimensional brain

We tend to view ourselves (our ‘selfs’) through the lens of the technology of the day: in the Victorian ‘Mechanical age’ we were (and partly are) bellows and pumps, and now we are, by mass imagination, a collection of algorithms and processors, and possibly living in a VR simulation. While this ‘Silicon Age’ view is probably not entirely inaccurate, it is also, probably, in the grand scheme of things, nearly as naive and incomplete as the Victorian view was. Blowing up some of the reductions of current models, this new (very interesting, pretty dense, somewhat contested) paper points towards brain structure in 11 dimensions. Shorter and easier explainer here by Wired, or even more concisely by the NY Post: “If the brain is actually working in 11 dimensions, looking at a 3D functional MRI and saying that it explains brain activity would be like looking at the shadow of a head of a pin and saying that it explains the entire universe, plus a multitude of other dimensions.”

And in other interesting-brain-related news:

Taming the “Black Dog”

And finally, three different but complimentary technology-enabled approaches to diagnosing and fighting depression:

  • A basic algorithm with limited data has been shown to be 80-90 percent accurate when predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.
  • In a different predictive approach, researchers fed facial images of three groups of people (those with suicidal ideation, depressed patients, and a medical control group) into a machine-learning algorithm that looked for correlations between different gestures. The results: individuals displaying a non-Duchenne smile (which doesn’t involve the eyes in the smile) were far more likely to possess suicidal ideation.
  • On the treatment side, researchers have developed a potentially revolutionary treatment that pulses magnetic waves into the brain, treating depression by changing neurological structures, not the brain’s chemical balance.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16004/

Long Short-Term Memory, Algorithms for Social Justice, and External Cognition (by Silly Rabbit)

DARPA funds graph analytics processor

Last week I posted a bunch of links pointing towards quantum computing. However, there are other compute initiatives which also offer significant potential for “redefining intractable” on problems such as graph comparison: for example, DARPA’s HIVE, which aims to create a 1000x improvement in processing speed (at much lower power) on this problem. Write-up on EE Times of the DARPA HIVE program here.

Exploring long short-term memory networks

Nice explainer on LSTMs by Edwin Chen: “The first time I learned about LSTMs, my eyes glazed over. Not in a good, jelly donut kind of way. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years.” (Long, detailed and interesting blog post, but even if you just read the first few page scrolls it is still quite worthwhile for the intuition of the value and function of LSTMs.)
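
To make Chen’s “fairly simple extension” point concrete, here is a single LSTM step in plain NumPy: four small gate computations wrapped around a running cell state (a textbook sketch with assumed weight shapes, not code from Chen’s post):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_o, W_c, b_f, b_i, b_o, b_c):
        # Each W_* has shape (hidden, hidden + input).
        z = np.concatenate([h_prev, x_t])   # previous hidden state + new input
        f = sigmoid(W_f @ z + b_f)          # forget gate: what to erase from memory
        i = sigmoid(W_i @ z + b_i)          # input gate: what to write to memory
        o = sigmoid(W_o @ z + b_o)          # output gate: what to reveal
        c_tilde = np.tanh(W_c @ z + b_c)    # candidate memory content
        c_t = f * c_prev + i * c_tilde      # updated long-term cell state
        h_t = o * np.tanh(c_t)              # new hidden state (working memory)
        return h_t, c_t

A plain recurrent network only has the h_t line; everything else is the “extension”: a gated long-term memory c_t that the network learns to write to, erase from, and read out of.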

FairML: Auditing black box predictive models

Machine learning models are used for important decisions like determining who has access to bail. The aim is to increase efficiency and spot patterns in data that humans would otherwise miss. But how do we know if a machine learning model is fair? And what does fairness in machine learning mean? Here is a paper exploring these questions using FairML, a new Python library that audits black-box predictive models.
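
FairML itself ranks a model’s inputs by how much the black-box output depends on each of them (using orthogonal transformations of the features). As a rough illustration of the same auditing idea (names and details below are mine, not FairML’s API), here is a simpler permutation-based dependence check:

    import numpy as np

    def audit_feature_dependence(predict, X, n_repeats=10, seed=0):
        # predict: any black-box scoring function, e.g. model.predict
        rng = np.random.default_rng(seed)
        baseline = predict(X)
        scores = {}
        for j in range(X.shape[1]):
            deltas = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])  # break feature j's link to the output
                deltas.append(np.mean(np.abs(predict(X_perm) - baseline)))
            scores[j] = float(np.mean(deltas))
        return scores  # a large score on, say, a zip-code column is a red flag

If predictions barely move when a feature is scrambled, the model does not rely on it; if they move a lot on a protected attribute (or a proxy for one), you have a fairness problem to investigate.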

Fast iteration wins prizes

Great Quora answer on “Why has Keras been so successful lately at Kaggle competitions?” (by the author of Keras, an open-source neural net library designed to enable fast experimentation). Key quote: “You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”
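
In that spirit, here is what the cheap-experiment loop looks like in Keras (a minimal sketch; the dataset, sizes and hyperparameters are placeholders): each variant is a one-line change, so running dozens of experiments is easy:

    from tensorflow import keras

    def build_model(n_features, units=64, dropout=0.5, lr=1e-3):
        model = keras.Sequential([
            keras.layers.Dense(units, activation="relu", input_shape=(n_features,)),
            keras.layers.Dropout(dropout),          # tweak me between experiments
            keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer=keras.optimizers.Adam(lr),
                      loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # Iterate: train several small variants, keep whichever validates best.
    # for units in (32, 64, 128):
    #     model = build_model(n_features=20, units=units)
    #     model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=5)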

Language from police body camera footage shows racial disparities in officer respect

This paper presents a systematic analysis of officer body-worn camera footage, using computational linguistic techniques to automatically measure the respect level that officers display to community members.

External cognition

Large-scale brainlike systems are possible with existing technology — if we’re willing to spend the money — proposes Jennifer Hasler in A Road Map for the Artificial Brain.

Pretty well re-tweeted and shared already, but interesting nonetheless: External cognition: The Thoughts of a Spiderweb.

And somewhat related (or at least a really nice AR UX for controlling synthesizers), a demonstration of “prosthetic knowledge” — check out the two-minute video with sound at the bottom of the page. Awesome stuff!

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16010/

Mo’ Compute Mo’ Problems (by Silly Rabbit)

Hard problems

Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:

Source: xkcd

To be clear: YES, I AGREE

Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine which is consistently better at even a thin, discrete sliver of a complex, human-contrived domain.

The challenge, as this cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard a problem is for a machine to best a human at.

BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).

As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or some of the following characteristics:

  • Are fairly simple and discrete tasks which require repetition without error (AUTOMATION)
  • and/or are extremely large in data scale (BIG DATA)
  • and/or have calculation complexity and/or require a great deal of speed (BIG COMPUTE)
  • and where a ‘human in-the-loop’ degrades the system (AUTONOMY)

But equally there are still many things at which machines are currently nowhere close to reaching human parity, mostly involving ‘intuition’, or many, many models with judgment on when to combine or switch between them.

Will machines eventually dominate all? Probably. When? Not anytime soon.

The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity as each sect is not fully utilizing the capabilities of the other. Good Bloomberg article from a couple of months back on Point72 and BlueMountain’s challenges in reconciling this in an existing environment.

The myth of superhuman AI

On the other side of the spectrum from our afore-referenced Tweeter are those who predict superhuman AIs taking over the world.

I find this to be a very bogus argument in anything like the foreseeable future, reasons for which are very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.

The crux of Kelly’s argument:

  • Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  • Humans do not have general purpose minds and neither will AIs.
  • Emulation of human thinking in other media will be constrained by cost.
  • Dimensions of intelligence are not infinite.
  • Intelligences are only one factor in progress.

Key quote:

Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystem. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spacial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.

(BTW: Kevin Kelly has led an amazing life – read his bio here.)

Can’t we just all be friends?

On somewhat more prosaic uses of AI, the New York Times has a nice human-angle on the people whose job is to train AI to do their own jobs. My favorite line from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!

Valley Grammar

And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans’ (of a16z) guide to the (Silicon) Valley grammar of IP development and egohood:

  • I am implementing a well-known paradigm.
  • You are taking inspiration.
  • They are rip-off merchants.

So true. So many attorneys’ fees. Better rev up that AI litigator.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16065/

Future Flash Crashes, Digital Darwinism & the Resurgence of Hardware (by Silly Rabbit)

Future flash crashes

Remember a few years back when a bogus AP tweet instantly wiped $100bn off the US markets? In April 2013 the Associated Press’ Twitter account was compromised by hackers who tweeted “Breaking: Two Explosions in the White House and Barack Obama is injured.”

[Chart omitted. For illustrative purposes only. Source: The Washington Post, 04/23/13; Bloomberg L.P., 04/23/13.]

The tweet was quickly confirmed to be an alternative fact (as we say in 2017), but not before the Dow dropped 145 points (1%) in two minutes.

Well, my view is that we are heading into a far more ‘interesting’ era of flash crashes caused by confused, or deliberately misled, algorithms. In this concise paper, titled “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos”, researchers from the University of Washington demonstrate that by inserting still images of a plate of noodles (amongst other things) into an unrelated video, they could trick a Google image-recognition algorithm into thinking the video was about a completely different topic.

Digital Darwinism

I’m not sure I totally buy the asserted causality on this one, but the headline story is just irresistible: “Music Streaming Is Making Songs Faster as Artists Compete for Attention.” Paper abstract:

Technological changes in the last 30 years have influenced the way we consume music, not only granting immediate access to a much larger collection of songs than ever before, but also allowing us to instantly skip songs. This new reality can be explained in terms of attention economy, which posits that attention is the currency of the information age, since it is both scarce and valuable. The purpose of these two studies is to examine whether popular music compositional practices have changed in the last 30 years in a way that is consistent with attention economy principles. In the first study, 303 U.S. top-10 singles from 1986 to 2015 were analyzed according to five parameters: number of words in title, main tempo, time before the voice enters, time before the title is mentioned, and self-focus in lyrical content. The results revealed that popular music has been changing in a way that favors attention grabbing, consistent with attention economy principles. In the second study, 60 popular songs from 2015 were paired with 60 less popular songs from the same artists. The same parameters were evaluated. The data were not consistent with any of the hypotheses regarding the relationship between attention economy principles within a comparison of popular and less popular music.

Meanwhile, in other evolutionary news, apparently robots have been ‘mating’ and evolving in an evo-devo stylee. DTR? More formal translation: Researchers have added complexity to the field of evolutionary robotics by demonstrating for the first time that, just like in biological evolution, embodied robot evolution is impacted by epigenetic factors. Original Frontiers in Robotics and AI (dense!) paper here. Helpful explainer article here.

The resurgence of hardware

As we move from a Big Data paradigm of commoditized and cheap AWS storage to a Big Compute paradigm of high-performance chips (and other non-silicon compute methods), we are discovering step-change innovation in applied processing power driven by the Darwinian force of specialization, or, as Chris Dixon recently succinctly tweeted: “Next stage of Moore’s Law: less about transistor density, more about specialized chips.”

We are seeing the big guys like Google develop their own specialized chips, custom-made for their specific big compute needs, with a very significant speed increase (up to 30 times faster than today’s conventional processors) while using much less power, too.

Also, we are seeing increased real-world applications being developed for truly evolutionary-leap technologies like quantum computing. MIT Technology Review article on implementing the powerful Grover’s quantum search algorithm here.
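
For a sense of why Grover’s algorithm earns the “powerful” label: unstructured search over N items needs about N/2 lookups classically on average, while Grover’s algorithm needs on the order of √N. Roughly:

    classical search: ~N/2 queries (for N = 1,000,000, about 500,000)
    Grover search:    ~(π/4)·√N queries (for N = 1,000,000, about 785)

A quadratic speedup rather than an exponential one, but for large search spaces the difference is already dramatic.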

And, finally, because it just wouldn’t be a week in big compute-land without a machine beating a talented group of humans at one game or another: Poker-Playing Engineers Take on AI Machine – And Get Thrashed.

Key points:

  1. People have a misunderstanding of what computers and people are each good at. People think that bluffing is very human, but it turns out that’s not true. A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money.
  2. The AI didn’t learn to bluff from mimicking successful human poker players, but from game theory. Its strategies were computed from just the rules of the game, not from analyzing historical data.
  3. Also evident was the relentless decline in price and increase in performance of running advanced ‘big compute’ applications; the computing power used for this poker win can be had for under $20k.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16079/

Alibaba’s AI, JP Morgan’s Risky Language & the Nurture of Reality (by Silly Rabbit)

Video game-playing AI

AI has moved one step closer to mastering the classic video game StarCraft. Google, Facebook and now Alibaba have been working on AI StarCraft players, and last week a team from China’s Alibaba published a paper describing a system that learned to execute a number of strategies employed by high-level players without being given any specific instruction on how best to manage combat. Like many deep learning systems, the software improved through trial and error, demonstrating the ability to adapt to changes in the number and type of troops engaged in battle. Non-technical overview via The Verge here. Original and fairly accessible technical paper here.

While an AI video game ace may not be world changing in and of itself, progress on AI intra-agent communication and coordination has potentially profound implications for markets as the approach matures, or, as the Alibaba researchers rather poetically note in their paper:

In the coming era of algorithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence start to emerge from multiple domains…[including] the trading robots gaming on the stock markets [and] ad bidding agents competing with each other over online advertising exchanges.

And how do agents behave when their game playing becomes stressful? Apparently just like their human creators: aggressively. Summary of Google DeepMind’s findings on this here.

Risky language

If you have ever taken general NLP algorithms, trained them on the information of the broader world, and then pointed them at financial markets-type information, you will have noticed that they get kind of sad and messed up. That is partly because markets-ese is odd (try telling your doctor that being overweight is a good thing) and partly because finance folks sure do love a risk discussion … and apparently no one more so than JP Morgan Chase CEO Jamie Dimon. From his much re-published letter to shareholders:

It is alarming that approximately 40% of those who receive advanced degrees in STEM at American universities are foreign nationals with no legal way of staying here even when many would choose to do so…Felony convictions for even minor offenses have led, in part, to 20 million American citizens having a criminal record…The inability to reform mortgage markets has dramatically reduced mortgage availability.

Thanks, Jamie, my algorithm just quit and emigrated to Canada.

The more serious question here is: as natural language algorithms (of various types) become ubiquitous, at what point do business leaders begin to craft their communications primarily to influence the machine, or at least to avoid detailed socio-political critiques that might accidentally trip it?

The nurture of reality

Clearly, our perception of reality, our world view, is substantially informed by our memories and the stories (links) we tell ourselves about these memories. We are now, for the first time, just starting to get an understanding of how memories are physically stored in the brain. Recollections of successive events physically entangle each other when brain cells store them, as Scientific American reports.

The Map of Physics, a joyous 8-minute video by Dominic Walliman (formerly of D-Wave quantum computing), culminates in a map featuring The Chasm of Ignorance, The Future and Philosophy. Walliman points to where we must be operating if we are to break truly new ground (i.e., put the regression models down, please). And if you liked that, keep watching through to Your Quantum Nose: How Smell Works…

And, finally, a classic, epic, challenging, practical piece of prose/poetry from one of the world’s greatest philosophers and orators: the late, great Tibetan Buddhist meditation master Chögyam Trungpa. A long treatise on Zen vs. Tantra as a system for nurturing the mind:

…the discovery of shunyata [emptiness of determinate intrinsic nature] is no doubt the highest cardinal truth and the highest realization that has ever been known…

Coming next week: The next generation of flash crashes; digital Darwinism and the resurgence of hardware.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16086/

AI Hedge Funds, Corporate Inequality & Microdosing LSD (by Silly Rabbit)

Machines and suchlike

DARPA has produced a 15 minute AI explainer video. A fair review: “Artificial intelligence is grossly misunderstood. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry.” Well worth watching for orientation on where we are — and where we are not — with AI today.

In case you are interested in ‘AI hedge funds’ and haven’t come across them, Sentient should be on your radar. And Walnut Algorithms, too. They look to be taking quite different AI approaches, but at some point, presumably, AI trading will become a recognized category. Interesting that the Walnut article asserts — via EurekaHedge — that “there are at least 23 ‘AI Hedge Funds’ with 12 actively trading”. Hmm …

[Ed. note — double hmm … present company excepted, there’s a lot less than meets the eye here. IMO.]

On the topic of Big Compute, I’m a big believer in the near-term opportunity of usefully incorporating quantum compute into live systems for certain tasks within the next couple of years and so opening up practical solutions to whole new classes of previously intractable problems. Nice explanation of ‘What Makes Quantum Computers Powerful Problem Solvers’ here.

[Ed. note — for a certain class of problems (network comparisons, for example) which just happen to be core to Narrative and mass sentiment analysis, the power of quantum computing versus non-quantum computing is the power of 2^n versus n^2. Do the math.]
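
Doing the editor’s math in a few lines of Python (n here is just an abstract problem-size parameter):

    # 2**n (exponential) and n**2 (polynomial) part ways very quickly.
    for n in (10, 20, 50, 100):
        print(f"n={n}: n^2={n**2:,} vs 2^n={2**n:.3g}")
    # n=10:  100    vs 1,024
    # n=20:  400    vs ~1.05e+06
    # n=50:  2,500  vs ~1.13e+15
    # n=100: 10,000 vs ~1.27e+30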

Quick overview paper on Julia programming language here. Frankly, I’ve never come across Julia (that I know of) in the wild out here on the west coast, but I see the attraction for folks coming from a Matlab-type background and where ‘prototype research’ and ‘production engineering’ are not cleanly split. Julia seems, to some extent, to be targeting trading-type ‘quants’, which makes sense.

Paper overview: “The innovation of Julia is that it addresses the need to easily create new numerical algorithms while still executing fast. Julia’s creators noted that, before Julia, programmers would typically develop their algorithms in MATLAB, R or Python, and then re-code the algorithms into C or FORTRAN for production speed. Obviously, this slows the speed of developing usable new algorithms for numerical applications. In testing of seven basic algorithms, Julia is impressively 20 times faster than Python, 100 times faster than R, 93 times faster than MATLAB, and 1.5 times faster than FORTRAN. Julia puts high-performance computing into the hands of financial quants and scientists, and frees them from having to know the intricacies of high-speed computer science”. Julia Computing website link here.

Humans and suchlike

This HBR article on ‘Corporations in the Age of Inequality’ is, in itself, pretty flabby, but the TLDR soundbite version is compelling: “The real engine fueling rising income inequality is ‘firm inequality’. In an increasingly … winner-take-most economy the … most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders.” On a micro-level I think we are seeing an acceleration of this within technology-driven firms (both companies and funds).

[Ed. note — love TLDR. It’s what every other ZeroHedge commentariat writer says about Epsilon Theory!]

A great — if nauseatingly ‘rah rah’ — recent book with cutting-edge thinking on getting your company’s humans to be your moat is Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. Warning: microdosing hallucinogens and going to Burning Man are strongly advocated!

Finally, on the human side, I have been thinking a lot about ‘talent arbitrage’ for advanced machine learning talent (i.e., how not to slug it out with Google, Facebook et al. in the Bay Area for every hire) and went on a bit of a world tour of various talent markets over the past couple of months. My informal perspective: Finland, parts of Canada and Oxford (UK) are the best markets in the world right now — really good talent that has been far less picked over. Do bad weather and high taxes give rise to high-quality AI talent pools? Kind of, in a way, probably.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16098/

The Horse in Motion

Scared money can’t win and a worried man can’t love.

―  Cormac McCarthy, All the Pretty Horses (1992)

In 1872, noted horseracing aficionado and San Francisco rich guy Leland Stanford (yes, of university fame) commissioned noted photographer and San Francisco smart guy Eadweard Muybridge to apply his path-breaking technology of stop-action photography to settle a long-running debate — do all four hooves leave the ground at the same time when horses run? This question had bedeviled the Sport of Kings for ages, and while Stanford favored the “unsupported transit” theory of yes, all four hooves leaving the ground for a split-second in the outstretched position, allowing horses to briefly “fly”, he — as rich guys often do — really, really, really needed to know for sure.

It took Muybridge about 12 years to complete the work, interrupted in part by his murder trial. It seems that Muybridge had taken a young bride (she 21 and he 42 when they married) who preferred the company of a young dandy of a San Francisco drama critic who styled himself, in faux military fashion, as Major Harry Larkyns. After learning that wife Flora’s 7-month-old son Florado was perhaps not biologically his, Muybridge tracked Larkyns down and shot him point-blank in the chest with the immortal words, “Good evening, Major, my name is Muybridge and here’s the answer to the letter you sent my wife.” In one of the more prominent early cases of jury nullification (Philip Glass has an opera, The Photographer, with a libretto based on the court transcripts), Muybridge was found not guilty on the grounds of justifiable homicide despite the judge’s clear instructions to the contrary. Or maybe the jurors were just bought off. Leland Stanford spared no expense in paying for Muybridge’s defense. Gotta get those horse pix.

And eventually he did. Muybridge’s work, The Horse in Motion, settled the question of unsupported transit once and for all.

Yes, all four hooves leave the ground at the same time. But it’s NOT in the outstretched flying position. Instead, it’s in the tucked position, which — because it’s not as romantic a narrative as flying — had never been widely considered as an answer. In fact, for decades after the 1882 publication of The Horse in Motion in book form (a book by Leland Stanford’s fellow rich guy friend, J.D.B. Stillman, who gave ZERO credit to Muybridge for the work … after all, Muybridge was just Stanford’s work-for-hire employee, a member of the gig economy of the 1870s), artists continued to prefer the more narrative-pleasing view of flying horses. Here, for example, is Frederic Remington’s 1889 painting A Dash for the Timber, a work that was largely responsible for catapulting Remington to national prominence, replete with a whole posse of flying horses (h/t to John Batton in Ft. Worth, who knows his Amon Carter Museum collection!).

Okay, Ben, that’s a fun story of technology, art, murder, and rich guy intrigue set in 1870s San Francisco. But what does it have to do with modern markets and investing?

This: Muybridge developed a technology that allowed for a quantum leap forward in how humans perceived the natural world. His findings flew in the face of the popular narrative for how the natural world of biomechanics worked, but they were True nonetheless and led to multiple useful applications over time. Today we are at the dawning of a technology that similarly allows for a quantum leap forward in how humans perceive the world, but with a focus on the social world as opposed to the natural world. Some of these findings will no doubt similarly fly in the face of the popular narrative for how the social world of markets and politics works, but they will similarly lead to useful applications. They already are.

The technology I’m talking about is the biggest revolution in the world today. It’s the ascendancy of non-human intelligences, which I’ve written about in lots of Epsilon Theory notes, from Rise of the Machines to First Known When Lost to Troy Will Burn – the Big Deal about Big Data to The Talented Mr. Ripley to One MILLION Dollars to Two Discoveries. It’s what most of the world calls Artificial Intelligence, which is a term I dislike for its pejorative anthropomorphism. It’s what Neville Crawley calls Big Compute, which is a great phrase, not least for its progression and distinction from the old hat notion of Big Data (h/t to Neville for turning me on to the Muybridge story, too).

The primary impact of Big Compute, or AI or whatever you want to call it, is that it allows for a quantum leap forward in how we humans can perceive the world. Powerful non-human intelligences are the modern-day Oracle of Delphi. They can “see” dimensions of the world that human intelligences cannot, and if we can ask the right questions we can share in their vision, as well. The unseen dimensions of the social world that I’m interested in tapping with the help of non-human intelligences are the dimensions of unstructured data, the words and images and communications that comprise the ocean in which the human social animal swims.

This is the goal of the Narrative Machine research project (read about it in The Narrative Machine and American Hustle): just as Eadweard Muybridge took snapshots of the natural world using his new technology, so do I think it possible to take snapshots of the social world using our new technology. And just as Muybridge’s snapshots gave us novel insights into how the Horse in Motion actually works, as opposed to our romantic vision of how it works, so do I think it likely that these AI snapshots will give us novel insights into how the Market in Motion actually works.
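For a flavor of what a “snapshot” of unstructured text might look like in practice, here is a deliberately toy sketch — emphatically not the Narrative Machine’s actual method, just one conventional way to compare batches of text, using off-the-shelf TF-IDF vectors and cosine similarity (the sample headlines are invented):

```python
# A toy "narrative snapshot": vectorize two batches of (invented)
# headlines and measure how similar the second batch's language is
# to the first. A stand-in illustration only, not the Narrative
# Machine's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

week_1 = [
    "central bank signals patience on rate hikes",
    "earnings beat expectations across tech sector",
]
week_2 = [
    "central bank hints at faster tightening",
    "tech earnings disappoint as growth slows",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(week_1 + week_2)

# Average pairwise similarity between week-1 and week-2 headlines:
sim = cosine_similarity(X[:2], X[2:]).mean()
print(f"narrative overlap, week 1 vs week 2: {sim:.2f}")
```

A real pipeline would differ in almost every particular, but the shape is the same: turn words into vectors, then watch how the vectors move.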

That’s the horse I’m betting on in Epsilon Theory.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16106/