I’m limiting this week’s Rabbit Hole to three links representing the rapid tick-tock of the trifecta of massively fast compute, AI algorithms and blockchain development, because I believe these are the top three technology mega-trends of the 2015–2025 period (ex-Life Sciences innovation). Personally, I still believe that, of the three, massively fast compute (Big Compute) will be the most world-changing. But big compute hardware and algorithm development are clearly deeply intertwined, and I believe blockchain will start to intertwine with these other two technologies in a meaningful, although as-yet somewhat unclear, way too.
That’s a fast chip you got there, bud
Very accessible CB Insights write-up here, and the denser original paper here, of a test of a photonic computer chip which “mimics the way the human brain operates, but at 1000x faster speeds” with much lower energy requirements than today’s chips. To state the obvious, the exciting/terrifying potential of chips like this becoming reality is that machines will be able to learn rapidly and cumulatively, while we humans remain limited to learning, passing on some fraction of that learning, and then dying, which is clearly a pretty inefficient process.
The future of AI learning: nature or nurture?
IEEE Spectrum provides an overview of a recent debate between Yann LeCun and Gary Marcus at NYU’s Center for Mind, Brain and Consciousness on whether or not AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve similar intelligence.
Blockchain for Wall Street
Bloomberg reports on a major breakthrough in cryptography which may have solved one of the biggest obstacles to using blockchain technology on Wall Street: keeping transaction data private. Known as a “zero-knowledge proof,” the new code will be included in an Oct. 17 upgrade to the Ethereum blockchain, adding a level of encryption that lets trades remain private.
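To give a flavor of the idea: a zero-knowledge proof lets one party convince another that it knows a secret without revealing the secret itself. Below is a minimal Python sketch of one classical interactive form, a Schnorr-style proof of knowledge of a discrete logarithm, using toy-sized parameters I chose for illustration; the zk-SNARKs deployed on Ethereum are far more sophisticated (and non-interactive), so treat this purely as a sketch of the concept.

```python
# Toy Schnorr-style zero-knowledge proof: the prover convinces the verifier
# it knows x such that y = g^x mod p, without ever revealing x.
# Parameters are illustrative; real systems use much larger, carefully
# chosen groups and non-interactive constructions.
import secrets

p = 2**61 - 1                    # a Mersenne prime (toy-sized group modulus)
g = 3                            # fixed base
x = secrets.randbelow(p - 1)     # prover's secret
y = pow(g, x, p)                 # public value derived from the secret

# 1. Prover commits to a random nonce
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# 2. Verifier issues a random challenge
c = secrets.randbelow(p - 1)

# 3. Prover responds; s on its own reveals nothing about x
s = (r + c * x) % (p - 1)

# 4. Verifier checks g^s == t * y^c (mod p); this passes iff the
#    prover really knows x, yet x was never transmitted
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing the secret")
```

The verification works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c (mod p), while the random nonce r masks x in the response.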
Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine which is consistently better at even a thin, discrete sliver of a complex, human-contrived domain.
The challenge, as this cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard it will be for a machine to best a human at a given problem.
BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).
As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or some of the following characteristics:
Are fairly simple and discrete tasks which require repetition without error (AUTOMATION)
and/or are extremely large in data scale (BIG DATA)
and/or have calculation complexity and/or require a great deal of speed (BIG COMPUTE)
and where a ‘human in the loop’ degrades the system (AUTONOMY)
But equally, there are still many tasks on which machines are nowhere close to reaching human parity, mostly those involving ‘intuition’, or those requiring many, many models plus the judgment to know when to combine or switch between them.
Will machines eventually dominate all? Probably. When? Not anytime soon.
The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented camps, particularly in the investing world, is both a challenge and an opportunity, since neither is fully utilizing the capabilities of the other. Good Bloomberg article from a couple of months back on Point72’s and BlueMountain’s challenges in reconciling this in an existing environment.
The myth of superhuman AI
On the other side of the spectrum from our afore-referenced Tweeter are those who predict superhuman AIs taking over the world.
I find this a very bogus argument for anything like the foreseeable future, for reasons very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.
The crux of Kelly’s argument:
Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
Humans do not have general purpose minds and neither will AIs.
Emulation of human thinking in other media will be constrained by cost.
Dimensions of intelligence are not infinite.
Intelligences are only one factor in progress.
Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.
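Kelly’s multi-dimensional point has a neat formal consequence: once intelligence has several independent axes, “smarter than” becomes a partial order, and two profiles can simply be incomparable. A tiny Python sketch, with invented axis names and made-up scores purely for illustration:

```python
# Illustrative only: intelligence profiles as points in a multi-dimensional
# space. With several independent axes, neither profile need dominate the
# other, so a single "smarter than" ranking is undefined between them.
axes = ["deduction", "spatial", "short_term_memory", "emotional"]

# Invented scores for two very different kinds of "intelligence"
human = {"deduction": 7, "spatial": 8, "short_term_memory": 5, "emotional": 9}
chess_engine = {"deduction": 10, "spatial": 2, "short_term_memory": 10, "emotional": 0}

def dominates(a, b):
    """True only if a is at least as good on every axis and strictly better on one."""
    return all(a[k] >= b[k] for k in axes) and any(a[k] > b[k] for k in axes)

# Neither dominates the other: the comparison is a partial order,
# so neither is simply "smarter".
print(dominates(human, chess_engine))   # False
print(dominates(chess_engine, human))   # False
```

The chess engine is far better on some axes and far worse on others, which is exactly Kelly’s picture of extreme intelligences “off in a corner of the space.”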
(BTW: Kevin Kelly has led an amazing life – read his bio here.)
Can’t we just all be friends?
On somewhat more prosaic uses of AI, the New York Times has a nice human-angle piece on the people whose job is to train AI to do their own jobs. My favorite line from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!
And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans’ (@a16z) guide to the (Silicon) Valley grammar of IP development and egohood:
I am implementing a well-known paradigm.
You are taking inspiration.
They are rip-off merchants.
So true. So many attorneys’ fees. Better rev up that AI litigator.