AI & Video Games, Tricky Chatbots and More… (by Silly Rabbit)

AI and video games (again)

Vicarious (a buzzy Silicon Valley company developing AI for robots) says it has a new and crazy-good AI technique called Schema Networks. The Allen Institute for Artificial Intelligence and others seem pretty skeptical and demand a throw-down challenge with AlphaGo (or, failing that, some peer-reviewed papers using commonly accepted terms and a broader set of tests).

In other AI video game news, Microsoft released a video of their AI winning at Ms. Pac-Man, with an instructive voiceover explaining how the system works.

Tricky chatbots

I recently stumbled upon Carl Icahn’s Twitter feed, which has the tagline: “Some people get rich studying artificial intelligence. Me, I make money studying natural stupidity.” Me, I think in 2017 this dichotomy is starting to sound pretty quaint. See: overview of a recent FAIR (Facebook Artificial Intelligence Research division) study teaching chatbots how to negotiate, including the bots’ self-discovery of the strategy of pretending to care about an item to which they actually assign little or no value, just so they can later give that item up and seem to have made a compromise. Apparently, while they were at it, the Facebook bots also unexpectedly created their own language.

The quantum age has officially arrived

I’ve been jabbering on and pointing to links about quantum computing and the types of intractable problems it can solve for some time here, here and here, but now that Bloomberg has written a long piece on quantum, we can officially declare: “The quantum age has officially arrived, hurrah!” Very good overview of quantum computing from Bloomberg Markets here.

Your high dimensional brain

We tend to view ourselves (our ‘selves’) through the lens of the technology of the day: in the Victorian ‘Mechanical Age’ we were (and partly are) bellows and pumps, and now we are, by mass imagination, a collection of algorithms and processors, and possibly living in a VR simulation. While this ‘Silicon Age’ view is probably not entirely inaccurate, it is also, probably, in the grand scheme of things, nearly as naive and incomplete as the Victorian view was. Blowing up some of the reductions of current models, this new (very interesting, pretty dense, somewhat contested) paper points towards brain structure in 11 dimensions. There is a shorter and easier explainer here by Wired, or even more concisely by the NY Post: “If the brain is actually working in 11 dimensions, looking at a 3D functional MRI and saying that it explains brain activity would be like looking at the shadow of a head of a pin and saying that it explains the entire universe, plus a multitude of other dimensions.”

And in other interesting-brain-related news:

Taming the “Black Dog”

And finally, three different but complementary technology-enabled approaches to diagnosing and fighting depression:

  • A basic algorithm with limited data has been shown to be 80-90 percent accurate in predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.
  • In a different predictive approach, researchers fed facial images of three groups of people (those with suicidal ideation, depressed patients, and a medical control group) into a machine-learning algorithm that looked for correlations between different gestures. The result: individuals displaying a non-Duchenne smile (one that doesn’t involve the eyes) were far more likely to exhibit suicidal ideation.
  • On the treatment side, researchers have developed a potentially revolutionary treatment that pulses magnetic waves into the brain, treating depression by changing the brain’s neurological structure rather than its chemical balance.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16004/

She Screams, He Kidnaps (by Silly Rabbit)

Proximity of verbs to gender

Sometimes the biases embedded within language are subtle, counter-intuitive things that you have to tease out with many-layered neural nets. Other times, they are just bluntly and painfully predictable: data scientist David Robinson tracked the proximity of verbs to gender across 100,000 stories. She screams, cries and rejects. He kidnaps, rescues and beats.
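
For a flavor of how such an analysis works, here is a minimal Python sketch of the idea (not Robinson’s actual pipeline, which was built in R): pull out the verbs whose grammatical subject is “he” or “she” and tally them. The toy “stories” below are invented for illustration, and the sketch assumes spaCy plus its small English model are installed.

```python
# Minimal sketch, not David Robinson's actual pipeline: tally the
# verbs whose grammatical subject is "he" vs. "she".
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")

def verbs_by_pronoun(texts):
    """Return {"he": Counter, "she": Counter} of verb lemmas whose
    nominal subject is the given pronoun."""
    counts = {"he": Counter(), "she": Counter()}
    for doc in nlp.pipe(texts):
        for token in doc:
            if (token.dep_ == "nsubj"                # token is a subject...
                    and token.lower_ in counts       # ...and a gendered pronoun...
                    and token.head.pos_ == "VERB"):  # ...governed by a verb
                counts[token.lower_][token.head.lemma_] += 1
    return counts

# Toy corpus invented for illustration; Robinson used 100,000 real stories.
stories = [
    "She screams while he kidnaps the heiress.",
    "He rescues the crew and she cries with relief.",
]
for pronoun, verbs in verbs_by_pronoun(stories).items():
    print(pronoun, verbs.most_common(3))
```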

Wiki-memory

Previously I shared some research on how recollections of successive events physically entangle each other when brain cells store them. In a fascinating and different approach, a group of European researchers used Wikipedia page views of aircraft-crash articles to study collective memory in this paper.
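
To give a feel for the raw material, here is a hedged sketch of the data-collection step using the public Wikimedia Pageviews REST API. The researchers’ actual dataset and processing differ, the article and date range below are purely illustrative, and the API only holds data from mid-2015 onward.

```python
# Sketch of pulling daily page views for one article from the
# Wikimedia Pageviews REST API; the paper's own dataset and
# methodology differ. Assumes: pip install requests
import requests

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/user/{article}/daily/{start}/{end}")

def daily_views(article, start="20150701", end="20150731"):
    """Return a list of (timestamp, views) pairs for the article."""
    url = API.format(article=article, start=start, end=end)
    # Wikimedia asks clients to identify themselves with a User-Agent.
    resp = requests.get(url, headers={"User-Agent": "memory-study-sketch/0.1"})
    resp.raise_for_status()
    return [(item["timestamp"], item["views"]) for item in resp.json()["items"]]

# Illustrative article choice; attention to crash articles decays over time.
for timestamp, views in daily_views("Germanwings_Flight_9525")[:5]:
    print(timestamp, views)
```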

Fool me once, fool me twice

Sooner or later, someone is probably going to put a visually compelling 2D ‘map’ of data reduced from hundreds or thousands of dimensions via t-SNE in front of you and make some bold assertions about it. This beautiful and interactive paper provides a handy guide on what to watch out for.
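
For a taste of the kind of sanity check the guide recommends, here is a minimal scikit-learn sketch (with synthetic blobs invented for illustration): run t-SNE on the same data at several perplexity values and distrust any structure that does not survive across them.

```python
# Minimal sketch of one check from the guide: t-SNE the same
# 50-dimensional data at several perplexities. Cluster sizes and
# inter-cluster distances that swing wildly between runs are
# artifacts of the embedding, not structure in the data.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs in 50 dimensions (synthetic).
X = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 50)),
    rng.normal(5.0, 1.0, size=(100, 50)),
])

for perplexity in (2, 30, 100):
    embedding = TSNE(perplexity=perplexity, random_state=0).fit_transform(X)
    # Apparent gap between the two blobs in the 2D map.
    gap = np.linalg.norm(embedding[:100].mean(0) - embedding[100:].mean(0))
    print(f"perplexity={perplexity:>3}: centroid gap in 2D map = {gap:.1f}")
```

The underlying data here really are two separated blobs, so the separation itself should persist; what changes from run to run is the apparent size and spacing of the clusters, which is exactly the kind of thing the paper warns against reading meaning into.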

A veritable zoo of machine learning techniques

A couple of months old, but still useful: Two Sigma researchers Vinod Valsalam and Firdaus Janoos write up the notable advances in machine learning presented at NIPS (Neural Information Processing Systems) 2016. Headline: The dominating theme at NIPS 2016 was deep learning, sometimes combined with other machine learning methods such as reinforcement learning and Bayesian techniques.

The NIPS conference has, improbably, found itself at the center of the universe, as it is the most important event for people sharing cutting-edge machine learning work. It’s in Long Beach this year in December and promises to be very interesting, although quite technical: https://nips.cc/

Silicon Valley: a reality check

And, finally, this one is a little inside baseball but, if you can push through that, there is a very useful and accurate parsing of the types of technology companies being started and funded in the Valley, and of the simultaneous parallel dimensions that exist here. (You can skip the Valley-defense bit and jump to the smart parsing bit by hitting CTRL+F, typing ‘Y Combinator’, and reading from there.)

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16051/

Mo’ Compute Mo’ Problems (by Silly Rabbit)

Hard problems

Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:

Source: xkcd

To be clear: YES, I AGREE

Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine that is consistently better than a human at even a thin, discrete sliver of a complex, human-contrived domain.

The challenge, as this cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard it will be for a machine to best a human at a given problem.

BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).

As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or more of the following characteristics:

  • Are fairly simple and discrete tasks that require repetition without error (AUTOMATION)
  • and/or are extremely large in data scale (BIG DATA)
  • and/or involve complex calculations and/or require a great deal of speed (BIG COMPUTE)
  • and where a ‘human-in-the-loop’ degrades the system (AUTONOMY)

But equally, there are still many tasks at which machines are currently nowhere close to reaching human parity, mostly those involving ‘intuition’, or many, many models with judgment about when to combine or switch between them.

Will machines eventually dominate all? Probably. When? Not anytime soon.

The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity, as neither camp is fully utilizing the capabilities of the other. There is a good Bloomberg article from a couple of months back on Point72’s and BlueMountain’s challenges in reconciling this within an existing environment.

The myth of superhuman AI

At the other end of the spectrum from our afore-referenced tweeter are those who predict superhuman AIs taking over the world.

I find this to be a very bogus argument for anything like the foreseeable future, for reasons very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.

The crux of Kelly’s argument:

  • Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  • Humans do not have general purpose minds and neither will AIs.
  • Emulation of human thinking in other media will be constrained by cost.
  • Dimensions of intelligence are not infinite.
  • Intelligences are only one factor in progress.

Key quote:

Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spacial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.

(BTW: Kevin Kelly has led an amazing life – read his bio here.)

Can’t we just all be friends?

On somewhat more prosaic uses of AI, the New York Times has a nice human-angle piece on the people whose job is to train AI to do their own jobs. My favorite line, from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!

Valley Grammar

And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans of @a16z’s guide to the (Silicon) Valley grammar of IP development and egohood:

  • I am implementing a well-known paradigm.
  • You are taking inspiration.
  • They are rip-off merchants.

So true. So many attorneys’ fees. Better rev up that AI litigator.

PDF Download (Paid Subscription Required): http://www.epsilontheory.com/download/16065/