AI BS Detectors & the Origins of Life (by Silly Rabbit)

Confidence levels for the Social and Behavioral Sciences

DARPA recently put out an RFI:

…requesting information on new ideas and approaches for creating (semi)automated capabilities to assign ‘Confidence Levels’ to specific studies, claims, hypotheses, conclusions, models, and/or theories found in social and behavioral science research (and) help experts and non-experts separate scientific wheat from wrongheaded chaff using machine reading, natural language processing, automated meta-analyses, statistics-checking algorithms, sentiment analytics, crowdsourcing tools, data sharing and archiving platforms, network analytics, etc.

A visionary and high-value RFI. Wired has an article on the same, enticingly titled "DARPA Wants to Build a BS Detector for Science."
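To make the "statistics-checking algorithms" idea concrete, here is a minimal sketch of one such check, a GRIM-style consistency test: for integer-valued survey items, a reported mean of N responses must be reachable as (integer total) / N after rounding. The function and its tolerance handling are my own illustration, not anything from the RFI.

```python
# GRIM-style sanity check: can a reported mean actually arise from n integer
# responses? If not, the claim gets flagged for human review.

def grim_consistent(reported_mean, n, decimals=2):
    """Return True if reported_mean (rounded to `decimals`) could be the
    mean of n integer-valued responses."""
    tol = 0.5 / 10 ** decimals          # rounding tolerance of the report
    total = round(reported_mean * n)    # nearest candidate integer total
    for candidate in (total - 1, total, total + 1):
        if abs(candidate / n - reported_mean) < tol:
            return True
    return False
```

A mean of 3.48 from 25 responses passes (87 / 25 = 3.48 exactly), while a mean of 3.19 from 17 responses fails: no integer total over 17 rounds to 3.19.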

Claude Berrou on turbo codes and informational neuroscience

Fascinating short interview with Claude Berrou, a French computer and electronics engineer who has done important work on turbo codes for telecom transmissions and is now working on informational neuroscience. Berrou describes his work through the lens of information and graph theory:

My starting point is still information, but this time in the brain. The human cerebral cortex can be compared to a graph, with billions of nodes and thousands of billions of edges. There are specific modules, and between the modules are lines of communication. I am convinced that the mental information, carried by the cortex, is binary. Conventional theories hypothesize that information is stored by the synaptic weights, the weights on the edges of the graph. I propose a different hypothesis. In my opinion, there is too much noise in the brain; it is too fragile, inconsistent, and unstable; pieces of information cannot be carried by weights, but rather by assemblies of nodes. These nodes form a clique, in the geometric sense of the word, meaning they are all connected two by two. This becomes digital information…
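Berrou's hypothesis, that a piece of information is carried by an assembly of pairwise-connected nodes rather than by edge weights, can be sketched in a few lines. The graph and the "memory" assemblies below are invented purely for illustration.

```python
from itertools import combinations

# Toy model of Berrou's clique hypothesis: a memory is an assembly of nodes
# that are all connected two by two. Membership in a clique is all-or-nothing,
# which is what makes the representation digital rather than weighted.

edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}
adjacency = {}
for a, b in edges:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

def is_clique(assembly):
    """True if every pair of nodes in the assembly shares an edge."""
    return all(b in adjacency.get(a, set())
               for a, b in combinations(assembly, 2))
```

Here the assembly {0, 1, 2} is a valid clique and so could carry one piece of information, while {0, 1, 3} is not, because nodes 0 and 3 are unconnected.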

Thermodynamics in far-from-equilibrium systems

I’m a sucker for attempts to understand and explain complex systems, such as this story by Quanta (the publishing arm of the Simons Foundation, as in Jim Simons of Renaissance Technologies fame) about Jeremy England, a young MIT associate professor using non-equilibrium statistical mechanics to poke at the origins of life.

Game theory

And finally, check out this neat little game theory simulator, which explores how trust develops in society. It’s a really sweet little application with fun interactive graphics, framed around the historic 1914 Christmas truce in No Man’s Land. Check out more fascinating and deeply educational games from creator Nicky Case here.
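The engine underneath the simulator is the iterated prisoner's dilemma. A minimal version, with illustrative payoffs and two of the classic strategies (the exact numbers and names are my own, not taken from the game):

```python
# Iterated prisoner's dilemma: trust emerges (or doesn't) from repeated play.
# C = cooperate, D = defect; payoffs are illustrative.

PAYOFF = {
    ("C", "C"): 2, ("C", "D"): -1,   # (my move, their move) -> my points
    ("D", "C"): 3, ("D", "D"): 0,
}

def copycat(opponent_history):       # tit-for-tat: start nice, then mirror
    return opponent_history[-1] if opponent_history else "C"

def always_cheat(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    hist_a, hist_b = [], []          # each side's past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two copycats settle into mutual cooperation and both prosper; against a cheater, copycat loses one round, then stops being exploited, which is the core intuition the simulator animates.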

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/15916/

Long Short-Term Memory, Algorithms for Social Justice, and External Cognition (by Silly Rabbit)

DARPA funds graph analytics processor

Last week I posted a bunch of links pointing towards quantum computing. However, there are other compute initiatives that also offer significant potential for “redefining intractable” on problems such as graph comparison: for example, DARPA’s HIVE, which aims at a 1000x improvement in processing speed (at much lower power) on graph analytics. Write-up on EE Times of the DARPA HIVE program here.
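To see why graph comparison counts as "intractable," consider the naive approach: testing whether two graphs are the same up to relabeling means trying every permutation of the nodes, and the number of permutations grows as n!. This brute-force sketch (my own illustration, nothing to do with HIVE's actual architecture) is fine for a handful of nodes and hopeless beyond that:

```python
from itertools import permutations

def isomorphic(edges_a, edges_b, n):
    """Brute-force isomorphism test for two undirected n-node graphs."""
    set_a = {frozenset(e) for e in edges_a}
    set_b = {frozenset(e) for e in edges_b}
    if len(set_a) != len(set_b):
        return False                       # cheap necessary condition
    for perm in permutations(range(n)):    # n! candidate node relabelings
        relabeled = {frozenset((perm[u], perm[v])) for u, v in set_a}
        if relabeled == set_b:
            return True
    return False
```

At n = 20 this loop already has more iterations than there are microseconds in 77,000 years, which is the kind of wall that specialized graph hardware is meant to chip away at.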

Exploring long short-term memory networks

Nice explainer on LSTMs by Edwin Chen: “The first time I learned about LSTMs, my eyes glazed over. Not in a good, jelly donut kind of way. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years.” (Long, detailed, and interesting blog post, but even if you just read the first few page scrolls it is still quite worthwhile for the intuition of the value and function of LSTMs.)
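The core of that intuition fits in a dozen lines. Here is one step of a single-unit LSTM cell in plain Python; the scalar weights and the gate naming are a simplification for illustration, not code from Chen's post:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One time step of a one-unit LSTM.
    w maps gate name -> (input weight, recurrent weight, bias)."""
    gate = lambda name: w[name][0] * x + w[name][1] * h_prev + w[name][2]
    f = sigmoid(gate("forget"))       # how much old cell state to keep
    i = sigmoid(gate("input"))        # how much of the candidate to write
    g = math.tanh(gate("candidate"))  # the candidate content itself
    o = sigmoid(gate("output"))       # how much of the cell state to expose
    c = f * c_prev + i * g            # updated long-term memory
    h = o * math.tanh(c)              # updated hidden state / output
    return h, c
```

The whole trick is that `c`, the cell state, is updated additively and gated, so gradients can flow across many time steps without vanishing, which is what plain recurrent networks struggle with.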

FairML: Auditing black box predictive models

Machine learning models are used for important decisions, such as determining who is granted bail. The aim is to increase efficiency and spot patterns in data that humans would otherwise miss. But how do we know whether a machine learning model is fair? And what does fairness in machine learning even mean? This paper explores these questions using FairML, a new Python library that audits black-box predictive models.
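The idea behind this kind of audit can be shown from scratch (this is my own sketch of a perturbation audit, not FairML's actual API): perturb one input attribute at a time and measure how much the black-box model's output moves. Attributes the model leans on heavily show large swings.

```python
def audit(model, rows, attribute_names):
    """Return attribute -> mean absolute change in model output when that
    attribute's value is swapped with the next row's value (a simple
    counterfactual perturbation)."""
    dependence = {}
    for j, name in enumerate(attribute_names):
        total = 0.0
        for i, row in enumerate(rows):
            perturbed = list(row)
            perturbed[j] = rows[(i + 1) % len(rows)][j]  # counterfactual swap
            total += abs(model(perturbed) - model(row))
        dependence[name] = total / len(rows)
    return dependence
```

For a toy model that scores purely on the first attribute, the audit reports zero dependence on the second, exactly the kind of signal you would want when checking whether a bail model secretly leans on a protected attribute.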

Fast iteration wins prizes

Great Quora answer to “Why has Keras been so successful lately at Kaggle competitions?” by François Chollet, the author of Keras, an open-source neural net library designed to enable fast experimentation. Key quote: “You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”

Language from police body camera footage shows racial disparities in officer respect

This paper presents a systematic analysis of officer body-worn camera footage, using computational linguistic techniques to automatically measure the respect level that officers display to community members.
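The paper's actual models are regressions over many annotated linguistic features, but the shape of the computation can be gestured at with a toy lexicon scorer (the word lists here are entirely invented for illustration):

```python
import re

# Toy respect scorer: count linguistic markers of politeness versus bare
# commands. The real paper learns feature weights from human-rated utterances;
# this fixed word-list version only shows the shape of the computation.

POLITE = {"please", "thanks", "thank", "sir", "ma'am", "sorry", "may"}
IMPOLITE = {"hands", "now", "listen", "stop"}   # invented command markers

def respect_score(utterance):
    words = re.findall(r"[a-z']+", utterance.lower())
    return sum(w in POLITE for w in words) - sum(w in IMPOLITE for w in words)
```

So "May I see your license, sir? Thank you." scores well above "Hands on the wheel, now.", and aggregating such scores over thousands of traffic stops is what lets the authors measure disparities systematically.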

External cognition

Large-scale brainlike systems are possible with existing technology, if we’re willing to spend the money, proposes Jennifer Hasler in A Road Map for the Artificial Brain.

Pretty well re-tweeted and shared already, but interesting nonetheless: External cognition: The Thoughts of a Spiderweb.

And somewhat related (or at least a really nice AR UX for controlling synthesizers): a demonstration of “prosthetic knowledge.” Check out the two-minute video with sound at the bottom of the page. Awesome stuff!

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16010/