AI & Video Games, Tricky Chatbots and More… (by Silly Rabbit)

AI and video games (again)

Vicarious (a buzzy Silicon Valley company developing AI for robots) say they have a new and crazy-good AI technique called Schema Networks. The Allen Institute for Artificial Intelligence and others seem pretty skeptical and demand a throw-down challenge with AlphaGo (or, failing that, some peer-reviewed papers with commonly used terms and a broader set of tests).

In other AI video game news, Microsoft released a video of their AI winning at Ms. Pac-Man, with an instructive voiceover explaining how the system works.

Tricky chatbots

I recently stumbled upon Carl Icahn’s Twitter feed, which has the tagline: “Some people get rich studying artificial intelligence. Me, I make money studying natural stupidity.” Me, I think in 2017 this dichotomy is starting to sound pretty quaint. See: overview of a recent FAIR (Facebook Artificial Intelligence Research division) study teaching chatbots how to negotiate, including the bots’ self-discovery of the strategy of pretending to care about an item to which they actually assign little or no value, just so they can later give up that item and appear to have made a compromise. Apparently, while they were at it, the Facebook bots also unexpectedly created their own language.

The quantum age has officially arrived

I’ve been jabbering on and pointing to links about quantum computing and the types of intractable problems it can solve for some time (here, here and here), but now that Bloomberg has written a long piece on quantum, we can officially declare: “The quantum age has arrived, hurrah!” Very good overview piece on quantum computing from Bloomberg Markets here.

Your high dimensional brain

We tend to view ourselves (our ‘selves’) through the lens of the technology of the day: in the Victorian ‘Mechanical Age’ we were (and partly are) bellows and pumps, and now we are, in the mass imagination, a collection of algorithms and processors, possibly living in a VR simulation. While this ‘Silicon Age’ view is probably not entirely inaccurate, it is also, in the grand scheme of things, probably nearly as naive and incomplete as the Victorian view was. Blowing up some of the reductions of current models, this new (very interesting, pretty dense, somewhat contested) paper points towards brain structure in 11 dimensions. Shorter and easier explainer here by Wired, or even more concisely by the NY Post: “If the brain is actually working in 11 dimensions, looking at a 3D functional MRI and saying that it explains brain activity would be like looking at the shadow of a head of a pin and saying that it explains the entire universe, plus a multitude of other dimensions.”

And in other interesting brain-related news:

Taming the “Black Dog”

And finally, three different but complementary technology-enabled approaches to diagnosing and fighting depression:

  • A basic algorithm with limited data has been shown to be 80-90 percent accurate in predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.
  • In a different predictive approach, researchers fed facial images of three groups of people (those with suicidal ideation, depressed patients, and a medical control group) into a machine-learning algorithm that looked for correlations between different gestures. The results: individuals displaying a non-Duchenne smile (which doesn’t involve the eyes in the smile) were far more likely to possess suicidal ideation.
  • On the treatment side, researchers have developed a potentially revolutionary treatment that pulses magnetic waves into the brain, treating depression by changing neurological structures rather than the brain's chemical balance.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16004/

Long Short-Term Memory, Algorithms for Social Justice, and External Cognition (by Silly Rabbit)

DARPA funds graph analytics processor

Last week I posted a bunch of links pointing towards quantum computing. However, there are other compute initiatives that also offer significant potential for “redefining intractable” on problems such as graph comparison: for example, DARPA’s HIVE, which aims to deliver a 1000x improvement in processing speed (at much lower power) on this problem. Write-up on EE Times of the DARPA HIVE program here.
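To get a feel for why graph comparison sits in "redefining intractable" territory, here is a deliberately naive sketch (my own toy illustration, not HIVE's approach): brute-force graph isomorphism checking tries every one of the n! possible node relabelings, which is exactly the kind of combinatorial blow-up that specialized graph-analytics hardware aims to chip away at.

```python
from itertools import permutations

import numpy as np

def isomorphic_brute_force(A, B):
    """Check two adjacency matrices for isomorphism by trying all n! relabelings.

    Fine for toy graphs; hopeless at scale, since the candidate count grows
    factorially with node count.
    """
    n = A.shape[0]
    for perm in permutations(range(n)):
        P = np.eye(n)[list(perm)]          # permutation matrix for this relabeling
        if np.array_equal(P @ A @ P.T, B):
            return True
    return False

# A 4-cycle, and the same cycle with its nodes relabeled.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
order = [2, 0, 3, 1]
P = np.eye(4)[order]
B = (P @ A @ P.T).astype(int)
```

Even at n = 12 this loop would already be grinding through ~479 million permutations, which is the intuition behind wanting 1000x-faster graph processors.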

Exploring long short-term memory networks

Nice explainer on LSTMs by Edwin Chen: “The first time I learned about LSTMs, my eyes glazed over. Not in a good, jelly donut kind of way. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years.” (Long, detailed and interesting blog post, but even if you just read the first few page scrolls still quite worthwhile for the intuition of the value and function of LSTMs.)
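Chen's intuition compresses into a few lines of numpy. This is a minimal sketch of one standard LSTM time step (gate names and layout follow the common textbook formulation, not any particular library): the forget, input, and output gates decide what long-term memory to keep, what to write, and what to expose.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step.

    x: input (n_in,); h_prev, c_prev: previous hidden/cell state (n_hid,)
    W: stacked gate weights (4*n_hid, n_in + n_hid); b: bias (4*n_hid,)
    """
    n_hid = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0 * n_hid:1 * n_hid])    # forget gate: what old memory to keep
    i = sigmoid(z[1 * n_hid:2 * n_hid])    # input gate: how much new info to write
    o = sigmoid(z[2 * n_hid:3 * n_hid])    # output gate: what to expose downstream
    g = np.tanh(z[3 * n_hid:4 * n_hid])    # candidate cell update
    c = f * c_prev + i * g                 # long-term memory line
    h = o * np.tanh(c)                     # working memory / output
    return h, c

# Run a toy sequence through the cell with random weights.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(4):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
```

The key line for the intuition is `c = f * c_prev + i * g`: the cell state is an additive memory lane, which is what lets gradients survive over long sequences.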

FairML: Auditing black box predictive models

Machine learning models are used for important decisions, like determining who is granted bail. The aim is to increase efficiency and spot patterns in data that humans would otherwise miss. But how do we know whether a machine learning model is fair? And what does fairness in machine learning even mean? Here is a paper exploring these questions using FairML, a new Python library that audits black-box predictive models.
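This is not FairML's actual API, but as a rough sketch of the kind of audit such a library performs, here is a minimal permutation-based probe (the function `audit_black_box` and the toy model are my own illustration): scramble each input feature in turn and measure how much the black box's predictions move. A protected attribute that scores high on this influence measure is a red flag worth investigating.

```python
import numpy as np

def audit_black_box(predict, X, rng=None):
    """Rank features by how much randomly permuting each one shifts predictions.

    A feature whose permutation barely moves the output has little influence
    on the model; high influence from a sensitive attribute signals a problem.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    base = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # destroy this feature's signal
        scores.append(np.mean(np.abs(predict(Xp) - base)))
    return np.array(scores)

# Toy "black box": leans heavily on feature 0, lightly on 1, ignores 2.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]
influence = audit_black_box(model, X, rng)
```

The appeal of this family of techniques is that it needs only a `predict` function, never the model internals, which is what "black-box auditing" means in practice.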

Fast iteration wins prizes

Great Quora answer on “Why has Keras been so successful lately at Kaggle competitions?” (by the author of Keras, an open source neural net library designed to enable fast experimentation). Key quote: “You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”
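Chollet's point is about experiment throughput, not model cleverness. As an illustrative sketch (plain numpy, not Keras code), here is the shape of that loop: run many cheap experiments across a hyperparameter grid and keep whichever setting wins on held-out validation error.

```python
import numpy as np

# Toy data: y depends linearly on X with noise; split into train/validation.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + 0.3 * rng.standard_normal(200)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

def fit_ridge(X, y, lam):
    """Closed-form ridge regression: (X'X + lam*I)^-1 X'y."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# The "iterate through more experiments" loop: try many settings, keep the best.
best = (np.inf, None)
for lam in np.logspace(-4, 2, 25):
    w = fit_ridge(Xtr, ytr, lam)
    val_err = np.mean((Xva @ w - yva) ** 2)
    if val_err < best[0]:
        best = (val_err, lam)
```

Each "experiment" here costs milliseconds; the Kaggle version of the same loop just swaps in a neural net and a day of compute, which is why tooling that shortens the edit-train-evaluate cycle wins.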

Language from police body camera footage shows racial disparities in officer respect

This paper presents a systematic analysis of officer body-worn camera footage, using computational linguistic techniques to automatically measure the respect level that officers display to community members.

External cognition

Large-scale brainlike systems are possible with existing technology — if we’re willing to spend the money — proposes Jennifer Hasler in A Road Map for the Artificial Brain.

Pretty well re-tweeted and shared already, but interesting nonetheless: The Thoughts of a Spiderweb.

And somewhat related (or at least a really nice AR UX for controlling synthesizers), a demonstration of “prosthetic knowledge” — check out the two-minute video with sound at the bottom of the page. Awesome stuff!

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16010/

Quantum Supremacy, Correlating Unemployment, and Buddhists with Attitude (by Silly Rabbit)

Quantum supremacy

As Ben and I have discussed before on an Epsilon Theory podcast, my view is that quantum computing is going to be truly, truly transformational over the coming years by “redefining intractable”, as 1QBit say. My conviction around quantum continues to grow and — to put a pretty big stake in the ground — I believe, at this point, the only open questions are: which approach will dominate, and how long exactly until we get quantum machines that work on a broad set of real-world questions? I’ve long been a big fan of the applied, real-world progress D-Wave have made, and Rigetti too. However, the “majors” like IBM are also making substantial progress towards true “quantum supremacy” with R&D-intensive approaches, while other pieces of the ecosystem, such as the ability to “certify quantum states“, continue to fall into place. In the meantime, here is a wonderful cartoon explainer on quantum computing by Scott Aaronson and Zach Weinersmith.

What web searches correlate to unemployment

Well, in order to get the answer to that question you will have to follow this link (and be prepared to blush). The findings were generated by Seth Stephens-Davidowitz using Google Correlate. “Frequently, the value of Big Data is not its size; it’s that it can offer you new kinds of information to study — information that had never previously been collected”, says Stephens-Davidowitz.

Using verbal and nonverbal behaviors to measure completeness, confidence and accuracy

I recently came across Mitra Capital in Boston, which has an interesting strategy of “using verbal indicators to judge the completeness and reliability of messages, to form predictions about company performance (via) analysis of management commentary from quarterly earnings calls and investor conferences based on a proprietary and proven framework with roots in the Central Intelligence Agency”, with the underlying tech/methodology based on BIA. They’re running a relatively small fund ($53m AUM in Q1 2017) and have returned an average of 8.5% over the past four years (including a +43% year and a -12.5% year). Neat NLP approach, although these returns imply more of a “feature than a product” (i.e., a valuable sub-system addition to a larger system, rather than a stand-alone system). But, hey, I said the same thing about Instagram.

Buddhists with attitude / Backtesting: Methodology with a fragility problem

Probably (hopefully!) anyone reading Epsilon Theory has already read Antifragile by Nassim Nicholas Taleb. Many things could be, and have been, said about this book, but the most important one to highlight for my narrow domain application is the massively important (though rarely discussed) distinction between machine learning/big compute approaches and regression-driven backtest approaches. The key distinction is a simple one: does your system gain from exposure to randomness and stress (within bounds), improving the longer it exists and the more events it is exposed to, or does it perform less well under stress and decay with time? Antifragile machine learning systems are profoundly different from the fragile fitting of models.

And finally, since I have already invoked Taleb, and if for no other reason than the line “If someone wonders who are the Stoics I’d say Buddhists with an attitude problem”, here is Taleb’s commencement address at the American University of Beirut last year.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16037/

AI Hedge Funds, Corporate Inequality & Microdosing LSD (by Silly Rabbit)

Machines and suchlike

DARPA has produced a 15 minute AI explainer video. A fair review: “Artificial intelligence is grossly misunderstood. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry.” Well worth watching for orientation on where we are — and where we are not — with AI today.

In case you are interested in ‘AI hedge funds’ and haven’t come across them, Sentient should be on your radar. And Walnut Algorithms, too. They look to be taking quite different AI approaches, but at some point, presumably, AI trading will become a recognized category. Interesting that the Walnut article asserts — via EurekaHedge — that “there are at least 23 ‘AI Hedge Funds’ with 12 actively trading”. Hmm …

[Ed. note — double hmm … present company excepted, there’s a lot less than meets the eye here. IMO.]

On the topic of Big Compute, I’m a big believer in the near-term opportunity of usefully incorporating quantum compute into live systems for certain tasks within the next couple of years, opening up practical solutions to whole new classes of previously intractable problems. Nice explanation of ‘What Makes Quantum Computers Powerful Problem Solvers’ here.

[Ed. note — for a certain class of problems (network comparisons, for example) which just happen to be core to Narrative and mass sentiment analysis, the power of quantum computing versus non-quantum computing is the power of 2^n versus n^2. Do the math.]
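Taking the editor's note at face value, the gap between polynomial and exponential scaling is easy to see numerically; a few lines make the "do the math" explicit:

```python
# Exponential vs. polynomial growth: the gap the editor's note points at.
rows = [(n, n**2, 2**n) for n in (10, 20, 30, 40)]
for n, poly, expo in rows:
    print(f"n={n:2d}  n^2={poly:>5,}  2^n={expo:>16,}")
```

At n = 10 the two are within a factor of ten of each other; by n = 40, 2^n is already roughly 700 million times larger than n^2, and the ratio keeps compounding.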

Quick overview paper on Julia programming language here. Frankly, I’ve never come across Julia (that I know of) in the wild out here on the west coast, but I see the attraction for folks coming from a Matlab-type background and where ‘prototype research’ and ‘production engineering’ are not cleanly split. Julia seems, to some extent, to be targeting trading-type ‘quants’, which makes sense.

Paper overview: “The innovation of Julia is that it addresses the need to easily create new numerical algorithms while still executing fast. Julia’s creators noted that, before Julia, programmers would typically develop their algorithms in MATLAB, R or Python, and then re-code the algorithms into C or FORTRAN for production speed. Obviously, this slows the speed of developing usable new algorithms for numerical applications. In testing of seven basic algorithms, Julia is impressively 20 times faster than Python, 100 times faster than R, 93 times faster than MATLAB, and 1.5 times faster than FORTRAN. Julia puts high-performance computing into the hands of financial quants and scientists, and frees them from having to know the intricacies of high-speed computer science”. Julia Computing website link here.
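The "develop in a slow language, re-code in a fast one" problem Julia targets can be felt even inside Python, where a numeric loop is prototyping-friendly but slow until rewritten in vectorized (effectively C-backed) form. A tiny illustration, with timings left unasserted since they vary by machine; only the agreement of the two results is checked:

```python
import time

import numpy as np

def sum_of_squares_loop(xs):
    """Naive interpreted loop: the 'prototype' version."""
    total = 0.0
    for x in xs:
        total += x * x
    return total

xs = np.random.default_rng(0).standard_normal(100_000)

t0 = time.perf_counter()
loop_val = sum_of_squares_loop(xs)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
vec_val = float(np.dot(xs, xs))        # the 'production' version, C under the hood
t_vec = time.perf_counter() - t0
# Same answer; the vectorized path is typically orders of magnitude faster.
```

Julia's pitch is that the naive loop itself compiles to fast machine code, so the rewrite step disappears.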

Humans and suchlike

This HBR article on “Corporations in the Age of Inequality” is, in itself, pretty flabby, but the TLDR soundbite version is compelling: “The real engine fueling rising income inequality is ‘firm inequality’: in an increasingly … winner-take-most economy the … most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders.” On a micro level I think we are seeing an acceleration of this within technology-driven firms (both companies and funds).

[Ed. note — love TLDR. It’s what every other ZeroHedge commentariat writer says about Epsilon Theory!]

A great — if nauseatingly ‘rah rah’ — recent book with cutting-edge thinking on getting your company’s humans to be your moat is: Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. Warning: Microdosing hallucinogens and going to Burning Man are strongly advocated!

Finally, on the human side, I have been thinking a lot about ‘talent arbitrage’ for advanced machine learning talent (i.e., how not to slug it out with Google, Facebook et al. in the Bay Area for every hire) and went on a bit of a world tour of various talent markets over the past couple of months. My informal perspective: Finland, parts of Canada, and Oxford (UK) are the best markets in the world right now — really good talent that has been far less picked over. Does bad weather plus high taxes give rise to high-quality AI talent pools? Kind of, in a way, probably.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16098/