Long Short-Term Memory, Algorithms for Social Justice, and External Cognition
Last week I posted a bunch of links on quantum computing. There are, however, other compute initiatives that also offer significant potential for “redefining intractable” on problems such as graph comparison: DARPA’s HIVE program, for example, aims for a 1000x improvement in processing speed (at much lower power) on this class of problem. Write-up of the DARPA HIVE program on EE Times here.
Exploring long short-term memory networks
Nice explainer on LSTMs by Edwin Chen: “The first time I learned about LSTMs, my eyes glazed over. Not in a good, jelly donut kind of way. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years.” (It’s a long, detailed and interesting blog post, but even the first few page scrolls are worthwhile for the intuition behind the value and function of LSTMs.)
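To make the intuition concrete, here is a minimal sketch of a single LSTM time step in plain numpy. This follows the standard LSTM gate equations (input, forget, output gates plus a candidate update), not any code from Chen’s post; the weight layout is one common convention among several.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input vector (D,), h_prev/c_prev: previous hidden/cell state (H,).
    W: (4H, D), U: (4H, H), b: (4H,) hold all four gates' weights stacked.
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b        # pre-activations for all four gates
    i = sigmoid(z[0:H])               # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])           # forget gate: how much old memory to keep
    o = sigmoid(z[2 * H:3 * H])       # output gate: how much memory to expose
    g = np.tanh(z[3 * H:4 * H])       # candidate cell update
    c = f * c_prev + i * g            # new cell state ("long-term" memory)
    h = o * np.tanh(c)                # new hidden state ("short-term" output)
    return h, c
```

The additive update `c = f * c_prev + i * g` is the key difference from a vanilla RNN: gradients can flow through the cell state across many time steps without being repeatedly squashed.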
FairML: Auditing black box predictive models
Machine learning models are used for important decisions such as determining who is granted bail. The aim is to increase efficiency and spot patterns in data that humans would otherwise miss. But how do we know if a machine learning model is fair? And what does fairness in machine learning mean? A paper explores these questions using FairML, a new Python library for auditing black-box predictive models.
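The core idea behind this kind of audit can be sketched in a few lines: perturb one input attribute at a time and measure how much the black-box model’s predictions move. (This is an illustrative permutation-style sketch of the general approach, not FairML’s actual algorithm, which uses orthogonal projections to handle correlated inputs; all names here are my own.)

```python
import numpy as np

def audit_feature(model_predict, X, col, n_rounds=5, seed=0):
    """Estimate a black-box model's reliance on one input column by
    shuffling that column and measuring the average prediction shift."""
    rng = np.random.default_rng(seed)
    base = model_predict(X)
    shifts = []
    for _ in range(n_rounds):
        Xp = X.copy()
        rng.shuffle(Xp[:, col])               # break the column's link to the output
        shifts.append(np.mean(np.abs(model_predict(Xp) - base)))
    return float(np.mean(shifts))

# Toy demonstration: a "black box" that secretly depends only on column 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
black_box = lambda X: 2.0 * X[:, 0]

reliance_0 = audit_feature(black_box, X, col=0)
reliance_1 = audit_feature(black_box, X, col=1)
```

If `col=0` were a protected attribute (or a proxy for one), a large reliance score relative to other columns would be a red flag worth investigating.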
Fast iteration wins prizes
Great Quora answer on “Why has Keras been so successful lately at Kaggle competitions?” (by the author of Keras, an open source neural net library designed to enable fast experimentation). Key quote: “You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”
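The experiment-loop mindset is framework-agnostic. Here is a deliberately tiny sketch of it, using polynomial regression on synthetic data instead of Keras so it stays self-contained: each pass through the loop is one “experiment,” scored on held-out data, and the best configuration wins. (The task and all numbers here are made up for illustration.)

```python
import numpy as np

# Toy regression task: noisy samples of sin(3x).
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + rng.normal(0, 0.1, 200)
x_tr, y_tr = x[:150], y[:150]          # training split
x_va, y_va = x[150:], y[150:]          # validation split

best = None
for degree in range(1, 10):            # each iteration = one cheap experiment
    coef = np.polyfit(x_tr, y_tr, degree)
    val_err = np.mean((np.polyval(coef, x_va) - y_va) ** 2)
    if best is None or val_err < best[1]:
        best = (degree, val_err)       # keep the best-scoring configuration
```

The faster each iteration runs, the more of them you can afford, which is exactly the advantage the quote attributes to Keras users.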
Language from police body camera footage shows racial disparities in officer respect
This paper presents a systematic analysis of officer body-worn camera footage, using computational linguistic techniques to automatically measure the respect level that officers display to community members.
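As a flavor of what “computationally measuring respect” can mean at its simplest, here is a toy lexicon-based scorer. To be clear, this is a hypothetical sketch of my own, far cruder than the paper’s approach (which trains a statistical model on annotated linguistic features); the word lists below are invented for illustration.

```python
import re

# Toy marker lexicons, invented for this sketch.
POLITE = {"please", "thank", "thanks", "sir", "ma'am", "sorry", "apologize"}
DIRECTIVE = {"hands", "stop", "now", "down", "don't", "move"}

def respect_score(utterance):
    """Score an utterance as (polite - directive) marker counts,
    normalized by length. Positive means more respect markers."""
    words = re.findall(r"[a-z']+", utterance.lower())
    if not words:
        return 0.0
    polite = sum(w in POLITE for w in words)
    directive = sum(w in DIRECTIVE for w in words)
    return (polite - directive) / len(words)
```

Run over thousands of transcribed utterances and aggregated by the race of the community member, even a crude score like this would let you test for systematic disparities, which is the paper’s basic move, done far more rigorously.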
And somewhat related (or at least a really nice AR UX for controlling synthesizers): a demonstration of “prosthetic knowledge.” Check out the two-minute video with sound at the bottom of the page. Awesome stuff!