For the past few weeks I’ve been meaning to write Rabbit Hole notes
about, variously, Anti-metrics, Painkillers and Vitamins, The
Objective Function of the CCP, and The Amoral Exportation of Technology
(in the Cobb–Douglas sense). But, alas, instead of writing any of these things
I have been suffering under a kind of writing ennui brought about by a
misguided bet that I could stop reading on Kindle and go 100% paper books,
which in turn has led me to start hanging out at The Mechanics Institute Library.
The Mechanics Institute Library in San Francisco is a terrific, terrific place (and also the home of the oldest continuously running chess club in the US). It is so terrific that it is in fact too terrific and so a terrible place for writing with the weight of so many great words and thoughts literally towering above.
But finally, mercifully, my writing listlessness was broken by ‘Bezos Exposes Pecker’, and my faith restored that there are new and important words to write in the English language.
Thank you, New York Post!
So, I’ll try to take on the Objective Function or Anti-metrics
note soon, but in the meantime, here are some links to interesting things
written by other people:
Technology and time
The BBC has a new series about the long view of humanity, which aims to stand back from the daily news cycle and widen the lens of our current place in deep time. This long-ish piece gets into a combinatorial account of technology and time:
“From the perspective of technology, humans have been getting exponentially slower every year for the last half-century. In the realm of software, there is more and more time available for adaptation and improvement – while, outside it, every human second takes longer and longer to creep past. We – evolved creatures of flesh and blood – are out of joint with our times in the most fundamental of senses.”
It is just so important to be able to step out of our day-to-day
perception of time and be able to think about technology (and other things) on
a broad arc like this.
There’s a great Steve Jobs quote on the long view of being a technologist:
“This is a field where one does not write a principia, which holds up for two hundred years. This is not a field where one paints a painting that will be looked at for centuries, or builds a church that will be admired and looked at in astonishment for centuries. No. This is a field where one does one’s work and in ten years it’s obsolete, and really will not be usable within ten or twenty years. It’s not like the renaissance at all. It’s very different. It’s sort of like sediment of rocks. You’re building up a mountain and you get to contribute your little layer of sedimentary rock to make the mountain that much higher. But no one on the surface, unless they have X-ray vision, will see your sediment. They’ll stand on it. It’ll be appreciated by that rare geologist.”
I think the combinatorial point is most important and missing from
the Jobs analogy but, still: True words.
As a pragmatic point, in software architecture in recent years we have all adopted the combinatorial / sediment technology paradigm (although perhaps not all with such philosophical reasoning) by moving to ‘no-end-state architecture’. For a neat, pragmatic, wide-ranging talk on ‘no-end-state architecture’ in a corporate environment, take a look here.
Some years ago, Ben wrote the canonical note on a Narrative trade, “Who’s Being Naive, Kay?”, illustrating with the canonical Narrative stock Salesforce.
This blog post is a neat breakdown of the Salesforce ‘strategic narrative’ by a former Salesforce marketing person.
Realistically, we all already know perfectly well that Salesforce is doing this (telling a great story in the format of the second link, and broadcasting it well for stock price management as described by Ben in the first link) but Benioff is just so good at it, and we are hackable animals, so on it goes.
In many ways Benioff is the Arjen Robben of Software-as-a-Service,
and we are all the hapless
defensive lineup (scroll to the ‘Ode to the Hack’ link at the bottom and watch the video).
DARPA’s new “Schema” approach to understanding the world
The Defense Advanced Research Projects Agency (DARPA) has created a new program called KAIROS (Knowledge-directed Artificial Intelligence Reasoning Over Schemas) aimed at creating a machine learning system that can sift through the many, many events and pieces of media generated every day and identify any threads of connection or narrative in them.
[ Ed. note: Wheeee! ]
The approach is interesting in its use of “Schemas”, in
this case “Schema” meaning the little stories of interlinked events that
humans create to understand the world around them. For instance, when
you buy something at a store, you know that you generally walk into the store,
select an item, bring it to the cashier, who scans it, then you pay in some
way, and then leave the store. This “buying something” process is a schema we
all recognize, and could of course have schemas within it (selecting a product;
payment process) or be part of another schema (gift giving; home cooking).
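The nesting described here can be sketched as a tiny data structure. This is only an illustration of the schema idea; the class and event names are hypothetical, not anything from DARPA’s actual KAIROS representation:

```python
# Hypothetical sketch of nested event schemas: a schema is a named
# sequence of steps, where each step is either a primitive event or
# a reference to another schema.
from dataclasses import dataclass, field


@dataclass
class Schema:
    name: str
    steps: list = field(default_factory=list)  # str events or nested Schemas

    def flatten(self):
        """Expand nested schemas into a flat list of primitive events."""
        events = []
        for step in self.steps:
            if isinstance(step, Schema):
                events.extend(step.flatten())
            else:
                events.append(step)
        return events


select = Schema("select_product", ["enter store", "browse shelves", "pick item"])
pay = Schema("payment", ["cashier scans item", "pay", "receive receipt"])
buy = Schema("buying_something", [select, pay, "leave store"])

print(buy.flatten())
```

A system like KAIROS would work in the opposite direction: observe a flat stream of events and try to infer which schema, and which sub-schemas, they instantiate.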
This is interesting as, in some ways, it goes back to ur-semantic AI classification system concepts, but maybe now with the compute power to make it work.
Epsilon Theory’s odd cousin, Ribbon Farm
My favorite technologist / hemp farmer / Epsilon Theory reader turned me on to a long form blog called Ribbon Farm. It’s odd. I quite like it. It strikes me as kind of like Epsilon Theory if, instead of going into asset management, Epsilon Theory had spent 20 years as a kind of dilettante grad student, reading widely, getting stoned and arguing with other grad students.
[ Ed. note – in the trade, this is called “being an NYU professor.” Six years was enough for me. ]
“A story or narrative is a mental projection of characters and events embedded in a particular causal logic. Listening to a story seems passive, but in order to process the narrative, the listener must construct a coherent mental world out of the details provided. Unconscious predictions are made, and then winnowed and changed as more evidence is presented and conflicts resolved … As human beings, “projecting and sharing stylized model worlds in mental space” is both our ancestral job and our favorite hobby. The world that we interact in is mostly imaginary, constructed by all of us out of fantasies and guesses. As we get more intelligent, we will get more imaginary.”
And, finally, I found myself recently digging out the Donald
Rumsfeld ‘known unknowns’ quote:
“Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.”
I then found myself falling down the rabbit hole of thinking about Slavoj Žižek’s fourth category of unknown knowns – “the disavowed beliefs, suppositions and obscene practices we pretend not to know about, even though they form the background of our public values.”
As I previously wrote about in ‘Take Back Your Thinking’ I’m a big fan of, and investor in, meditation as I find it reveals to me my own unknown knowns, which I’ve found over the years are the real ‘gotchas’. More broadly, it seems that it is the unknown knowns that are currently, very clearly, getting us into trouble as a society.
[Ed. note: both Orwell’s “collective solipsism” and Žižek’s “unknown knowns” are terrifically important aspects of Common Knowledge. Dark aspects, for sure, but no less important for that.]
After tiring myself out with a longer note on ‘Building the Narrative System’ last week, this week I offer a round-up of my favorite links from recent months:
Interview with Tadashi Tokieda, collector of mathematical toys
This is a wonderful interview with Tadashi Tokieda in Quanta (the Jim Simons financed magazine with the public service philanthropic mission of “Illuminating basic science and math research through public service journalism”). Favorite quotes:
If you project on the wrong axis, something looks very complicated.
“I decided, as a personal revenge on Landau, to study the subject up to the point where I could solve this exercise. Landau said, in the biography, “Don’t waste your time on mathematicians and lectures and so on — instead, find a book with the largest number of solved exercises and go through them all. That’s how you learn mathematics.” I went back to the library and found the mathematics book with the largest number of problems.”
I’m trying to jolt myself out of my complacency. When I share, I just want to share with people. I hope that they’ll like it, but I’m not trying to educate them, and I don’t think people are complacent. People are struggling in their own ways and making efforts and trying to improve. Who am I to jolt them out of complacency?
Shockingly, auditors say the world needs more auditing (but they are probably right)
In an HBR article, several Deloitte execs write about ‘Why
We Need to Audit Algorithms’. It’s actually a really good point, though hard not to mock the source.
I had not previously encountered the clinical term ‘moral
injury’, but it makes so much sense. Explanation here by Dr. Michael D. Matthews,
Professor of Engineering Psychology at the United States Military Academy in
the Department of Behavioral Sciences and Leadership.
Here is a post on ‘Why Open Source misses the point of Free Software’ by Richard Matthew Stallman (RMS) who knows a lot about software (and also it seems has strong views on the TSA). It is an important topic and I really hope we can return to a world with a better signal to noise ratio on important topics like databases, security and free software (rather than the free-for-all silliness that blockchain seems to have dragged the conversation to … although I guess I am thankful that at least we are talking about database structures and their impact at all).
Can you guess who said it? – China edition
Can you guess who said it:
“Anybody who does business in China compromises some of their core values. Every single company, because the laws in China are quite a bit different than they are in our own country.”
The really drunk, chatty guy from Boston I was sat next to on a flight back to San Francisco last week;
The current US President;
The former President of Stanford University and current President of Alphabet;
And finally, I think this is, improbably, the finest
piece of sports journalism of 2018.
I admit that odd, amateur-ish, lowbrow millennial-Gonzo journalism
is a guilty pleasure of mine but, regardless, this article from Vice UK is a
genuinely wonderful tribute to ‘the hack’. In this case to a football (soccer)
hack, but it could equally be written about a certain rare type of
entrepreneur, a certain rare type of product person:
You know what he’s going to do but it’s impossible to stop it. Arjen Robben has been scoring the same goal for so long now that he’s gone fully bald while doing so… he is the Dr Manhattan of running really fast down the right touchline before cutting inside on his left foot, shifting it, shifting it, shifting it, and then launching the ball with barbarous force into the far corner of the net … With relentless repetition, Robben’s trick is passing beyond the realms of tedium and mild annoyance into something that is pleasing and even, in these late-autumn flushes of his career, weirdly poignant.
For full impact make sure to watch the video halfway down.
[Ed. note: Neville Crawley is more plugged-in than anyone I know, so when he offered to interview smart people on the front lines of the technology/mass society battleground as part of his Rabbit Hole series, I figured it would be good stuff. As it turns out, it’s GREAT stuff. Here’s the first in what I hope will be an ongoing feature for Epsilon Theory. – Ben]
This week I am interviewing Alex Gladstein, Chief Strategy Officer of Human Rights Foundation and guest lecturer at Singularity University. I met Alex a couple of years ago when he was moderating an exceptionally interesting and lively Human Rights Foundation (HRF) panel on identity, distributed systems and human rights. Alex’s work has helped me gain a deeper appreciation for how fundamentally identity and human rights are tied together, and the importance of considering freedom and control of the most vulnerable populations when designing technology infrastructure. Alex is a deep thinker on the intersection of technology, freedom and decentralization and so I am very pleased to welcome him to Epsilon Theory. – Neville Crawley
Welcome to Epsilon Theory, Alex. Firstly, what is the mission and
origin story of HRF?
The Human Rights Foundation was
founded in 2006 by the Venezuelan activist Thor Halvorssen. The world was 7
years into the Hugo Chávez experiment, and things weren’t going well in
Venezuela. The Chávez regime was jailing critics, cutting off independent
media, fatally compromising the independence of the legislature and judiciary,
and presiding over monstrous corruption. Thor was watching his country —
which, before Chávez, was a constitutional (if imperfect) democracy — slide
into outright authoritarianism. Today it’s easy to look at the human rights and
starvation disaster in Venezuela (now home to one of the world’s largest humanitarian
crises, producing more daily refugees than Syria) and say we should have done
something. But before Chávez’s death, the world did very little. In fact, the
mainstream political establishment at the time seemed at times to be
cheering for Chávez. Human rights groups were quiet until late in his rule. So,
with this experience in mind, Thor chose to found HRF as a non-profit
organization to focus specifically on promoting individual rights and civil
liberties in closed and closing societies. As far as I know, HRF is the world’s
only organization that focuses on authoritarianism as a global problem. We work
simultaneously on challenging and exposing the crimes of dictatorships
everywhere from Cuba to Saudi Arabia to Vietnam, while at the same time running
programs to support democracy activists, civil society organizers, at-risk
journalists, and others who labor under authoritarian regimes. Our programs
include the Oslo
Freedom Forum conference series, the Flash Drives for
Freedom initiative to smuggle outside information into North Korea,
and a range of impact litigation, technology, and educational initiatives to
support rights advocates operating in tough environments. By HRF’s count, there
are approximately 4 billion people in today’s world who live under some type of
closed society, where there is no ACLU, no Washington Post, no ability to hire
a human rights lawyer, no chance at organizing a successful public protest, no
way to safely run a pride parade or expose government corruption. HRF
specializes in helping dissidents in these conditions, the future Havels and
Mandelas of the world.
Could you give us a bit about your background and how you came to
be Chief Strategy Officer of HRF?
In 2007 I was studying in London and interning at the British Parliament. I
managed to get a summer position at HRF, and my first task was to put together
backpacks which would be brought by my Latin American colleagues into Cuba and
given to the island’s underground library movement. Inside the backpacks were
innocent-looking cases of music CDs–Britney Spears, and the like. But despite
their labels, I had secretly burned onto the discs various dubbed films ranging
from Braveheart to V for Vendetta. Cuban civil society organizations would
watch them quietly in tiny groups inside their homes on portable DVD players which
we also supplied. The program was hugely popular, and there was always a demand
for more content. In a country where the dictatorship approves all books and
educational content, a movie can act like a red pill in The Matrix. This type
of activity later became what is now known as the paquete, a Netflix-meets-milkman system where Cubans now
get video on demand, delivered to their home. I worked on a few other
meaningful programs, and in 2009 we launched the first Oslo Freedom Forum, and
I was forever hooked on HRF. Thor saw that the world had prominent, popular,
high-level gatherings for finance (the World Economic Forum), ideas (TED), and
development (the Clinton Global Initiative) — but nothing similar for human
rights. Thor’s filmmaking background and Norwegian connections led us to do a
theater production in Oslo, where dissidents would tell their personal stories
on stage to an audience of industry leaders. The goal was to find the most
effective individuals pushing peacefully for freedom in places like Russia and
China and give them a platform, media attention, resources, new technical
skills, and a global network. Over the years I worked very closely on this
project, while also working in media and development areas. In 2015 I was
appointed Chief Strategy Officer and since then have led our communications and
development efforts and helped shape our overall growth strategy.
I know that you personally think a lot about ‘anti-authoritarian
technologies’ and spend a lot of time with the blockchain community. What
projects are you particularly interested in right now and why?
Through my work at HRF I’ve gained a great and deep appreciation
for liberal democracy, or, as we could just as easily say, decentralized
government. In fact, I would argue that separation of powers is the single most
important ingredient for a liberal democracy — far more important and
fundamental than elections. All dictators have elections. And we’ve had various
forms of tyranny ever since the agricultural revolution. The real innovation in
governance — arguably first sparked by Cleisthenes in ancient Greece 2,500
years ago — was that humans should be ruled by rules, not rulers. In today’s
liberal democracies, power is distributed across executive, legislative, and
judicial institutions, and is constantly checked by the people through a free
press and by civil society organizations. In a healthy democracy, no single
person or small group of people is in charge. I am drawn to bitcoin because it
brings this same concept to money and to technology. There are other
technologies that I view as anti-authoritarian that are really interesting to
me, ranging from censorship-resistant storage (IPFS) to distributed internet
access (goTenna) to zero knowledge cryptography (ZCash) to decentralized
payment networks (Lightning) to encrypted messaging (Signal). I think they (or
their counterparts) will all eventually be used in conjunction with each other,
but to me, the most groundbreaking is bitcoin.
Bitcoin is widely debated, including amongst the Epsilon Theory
community. You have a particular view of bitcoin as ‘censorship resistant
money’ – could you talk more about that and why it is important?
In the bitcoin network, no single person or small group of people
are in charge. Power is divided in a similar way to representative democracies.
Instead of the executive branch sitting in the White House, we have the miners,
who expend enormous amounts of energy to add new blocks of transactions onto
the historical bitcoin ledger. Instead of the legislative branch, we have the
coding community, who come up with new ways to improve bitcoin, whose software
has been upgraded hundreds of times
since its inception in 2009. But just like with a Supreme Court and judicial
system, the ultimate power in the bitcoin network is in the hands of the users,
who run full nodes all around the world. Each of these nodes — who number in
the thousands and are largely unknown to each other — hold the entire
transaction history of bitcoin, and decide independently which blocks to
approve, and which coding upgrades to allow. When I look at the bitcoin
governance model from a political science perspective, it’s the power of the
users that makes the network so interesting. Miners and developers can’t simply
take over the network. A coup cannot be orchestrated by one person or branch. Power stays distributed among the users.
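The power split described here can be caricatured in a few lines of code. A minimal sketch, assuming a single made-up consensus rule; real node validation checks far more than this, and nothing below reflects Bitcoin Core’s actual implementation:

```python
# Toy model of the node/miner power split: miners propose blocks, but
# every full node applies the same consensus rules independently, so a
# block breaking the rules is rejected everywhere regardless of who mined it.
CONSENSUS_MAX_SUBSIDY = 50  # hypothetical rule: block reward cannot exceed 50


def node_accepts(block):
    """Each node checks each block against the shared rules, on its own."""
    return block["subsidy"] <= CONSENSUS_MAX_SUBSIDY


honest_block = {"height": 1, "subsidy": 50}
greedy_block = {"height": 1, "subsidy": 1000}  # a miner awarding itself extra coins

nodes = ["node_%d" % i for i in range(5)]
print(all(node_accepts(honest_block) for _ in nodes))  # True: accepted everywhere
print(any(node_accepts(greedy_block) for _ in nodes))  # False: rejected everywhere
```

The point of the caricature: no amount of mining power lets the “executive branch” change the rule itself, because the rule lives in every user’s node.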
But decentralization is only a means to an end. In politics,
decentralization in the form of liberal democracy gives us a superior society
than centralized tyranny. There are of course exceptions but generally speaking
— Estonia or Belarus? Costa Rica or Cuba? South Korea or North Korea? Tunisia
or Egypt? Ghana or Equatorial Guinea? Whether you care about innovation,
growth, entrepreneurship, equality, prosperity, long-term stability, life
expectancy, social welfare, or even peace — no two liberal democracies have
ever fought each other — you’ll want a free and open society, not a
dictatorship. In bitcoin, decentralization gives us censorship-resistance.
Because of the distributed architecture of the network, it is impossible to censor
individual transactions. They are truly peer to peer and the “ordering
service” normally done by a centralized entity at Visa or PayPal, is done
by a global competition, where someone will always process your transaction, as
long as you have enough bitcoin to complete it. This may not be very important
for those of us living in democracies where we can more or less trust our
governments and banking systems — but it’s a revolutionary development for the
billions living under authoritarian governments. For the first time, people can
transact in a global, borderless way, within minutes, with a very low fee, in a
way that cannot be stopped. So whether you are up against hyperinflation in
Venezuela or capital controls in China, bitcoin is a really important,
disruptive technology that demands to be understood. Can it be used for bad? Of
course. That’s like asking if the internet can be used for bad. But in general,
it’s going to change the world, and there are market and human impact reasons
to study it closely.
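The “global competition” doing the ordering is proof-of-work mining. A minimal sketch of the idea, assuming a toy difficulty; real Bitcoin hashes an 80-byte block header with double SHA-256 against a vastly harder target:

```python
import hashlib


def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1


# Finding the nonce is costly; verifying it takes one hash. Whoever wins
# the race gets to order (and earn fees on) the pending transactions, so
# no central party can refuse to process a particular payment.
nonce, digest = mine("alice pays bob 1 BTC")
print(digest.startswith("0000"))
```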
What problems do you think still need to be solved with Bitcoin
for it to fulfill its potential, and who is working on them?
There are social and educational problems with bitcoin, and then
there are technical challenges. Right now I’d actually say the former are more
important to tackle. First of all, very few people on this planet have ever
used bitcoin, and far fewer understand how it works or why it would be
important for someone living under a dictatorship. I’ve seen some people say
that no more than 40 million people have ever interacted with bitcoin or any
cryptocurrency. So that’s well less than 1% of the world’s population. And even
in hyper-connected places like San Francisco and London — or, honestly, even
at blockchain conferences — people generally can’t describe to you how and why
bitcoin works. We need a world-class effort to explain the technological power
and potential of bitcoin to the average person. This information needs to be
clear, fun, engaging, and in many different languages. And we must address the
conflation problem. The conflation problem is the circumstance we find
ourselves in today when everyone starts talking about cryptocurrency and
blockchain and bitcoin as if they are the same things. Bitcoin is a
decentralized money network that runs on proof of work. Ethereum aims to be a
decentralized world computer that wants to use proof of stake. Enterprise
blockchains (i.e. blockchains with backdoors) claim to bring more
transparency and accountability to corporate functions like supply chains.
Regardless of how bullish or bearish we are on these different projects, we
need to stop conflating them with one another. The “bitcoin” blockchain
has a radically different set of characteristics than any other blockchain. And
it has a particular set of characteristics that give it the unique quality of
censorship-resistance. This is why it is important for people who live under
dictatorships. So I believe that in human rights-centric educational materials,
we need to separate out bitcoin from other projects in the blockchain space,
and give it its own chapter, or own brochure, or own book. Unfortunately, and
partly because bitcoin is leaderless, there isn’t a coordinated effort to do so.
On the technical challenges side, I’m more optimistic. In order to
achieve censorship-resistance, you necessarily are going to have to sacrifice
speed and cost. So I believe that on-chain bitcoin transactions are always
ultimately going to be more expensive and slower than the competition. Also, due
to the public nature of its blockchain, bitcoin is not strictly a privacy
technology. So while it’s not easy or cheap to do chain analysis to figure out
who is sending which bitcoins to whom, it’s possible, and that’s not great if
you are living in a dictatorship. Luckily, brilliant people are working on
improvements in all of these areas. On the user side, there are wallets being
developed that help increase the privacy of bitcoin transactions. And on the
infrastructure side, there are improvements happening on the bitcoin base layer
and on “second layer” technology, like, for example, the Lightning
Network. There are a handful of companies and lots of individual developers
working on Lightning, which is a decentralized payment network that essentially
sits on top of bitcoin. The network just launched earlier this year, and is in
the early stages of its architecture, but it should eventually allow you to
transact bitcoin very fast, with a very low fee, in a very private way (it in
fact uses similar encryption technology to the Tor browser), and thus should be
very interesting to people living under closed societies. It also introduces
the concept of being able to “stake” your bitcoin into the Lightning
Network and provide a service and make a small fee, all without giving
up control over your bitcoin, which is of course interesting from a financial perspective.
You might ask why, despite all of these interesting developments,
bitcoin is tanking in price. Well, ask yourself, were there major breakthroughs
in bitcoin technology between October 2017 and December 2017? No, but the price
quadrupled. Did the bitcoin network’s technology get compromised between
January and today? Far from it — but the price has gone down by 85%. Remember
that the fluctuating price of bitcoin is not reflecting technological progress.
I remember I was in Cairo during
Tahrir Square in 2011 and Twitter was an incredibly useful tool for staying
safe and staying in contact. Are there are other bits of ‘mainstream tech’
today that are playing an important role in human rights and freedom?
I tend to agree with Yuval Noah Harari that technology today is,
generally speaking, authoritarian by nature. Big data, machine learning,
artificial intelligence — these are all being used by governments and
companies to control us. The most striking example, of course, is happening in
the world’s largest country — China. There, more than a billion people are
part of a grand social engineering experiment where the Communist Party is
vacuuming up all kinds of communication, location, behavior, health, and financial
data from citizens via apps like WeChat and Alipay and beginning to sort
through all of that data to understand who are good citizens and who are bad.
There are many different “social credit” experiments happening across
China where companies or municipalities are taking personal data and using it
to score people according not just to their financial responsibility but also
political loyalty. And this is beginning in some areas to dictate what kind of
basic goods and services you can have — fast internet, a good rate on a
mortgage, the ability to buy a plane ticket, leave the country, or send your
kids to a good school. This technology isn’t perfect — the New York Times
recently described it as more Kafkaesque than Orwellian — but Orwellian is certainly
the goal, and this centralized surveillance tech is now being exported to countries like Venezuela. So while I’m
interested in the potential of bitcoin and the other decentralized technology
that I mentioned above to provide alternative models for us to scale our
societies while preserving our freedoms and privacy, I’m fearful of most
mainstream tech from a human rights point of view. Certainly, it’s amazing that
we can communicate so effortlessly around the world, and that such a large
percentage of humans have access to a cell phone, but increasingly, the control
of all of these communications, devices, and data is being centralized and
that’s not good. In fact, Harari has said that “if you dislike the idea of
living in a digital dictatorship… then the most important contribution you
can make is to find ways to prevent too much data from being concentrated in
too few hands, and also find ways to keep distributed data processing more
efficient than centralized data processing. These will not be easy tasks. But
achieving them may be the best safeguard of democracy.” Amen to that.
Neville, I really appreciate your take on this. I also believe we
are at a crossroads, where we could head down one of these two paths, either a
very centralized world where all of our communications and transactions are
surveilled, censored, and policed; or a more decentralized one, where we
preserve some freedoms and privacy. And unfortunately, we don’t need to run a
thought experiment to see what might happen if we go down the centralized road.
There are hundreds of millions of people in China who are living through this
experiment right now. The Financial Times ran an interview with a 23-year-old
Chinese millennial, and she said that she wasn’t sure if she was living in a
futuristic society, or if she was building a cage for herself, which is about
right. I am happy to see a lot of people making noise about why our current
data infrastructure is bad — and not just in China, but here in the United
States and elsewhere, too. Obviously, centralized data storage exposes us to
many kinds of vulnerabilities, ranging from Equifax-style hacking to
Facebook-style manipulation. There are a lot of sharp minds speaking loudly
about the problems of our current system, including Tristan Harris, Jaron Lanier,
and Renee DiResta. And I do agree with you that ownership of data will be key
to providing an alternative to the WeChat model. Where I might challenge you is
to consider that bitcoin may play a key role in all of this. If bitcoin is the
world’s first censorship-resistant network — then what might we be able to
build on top of it? That’s one of the most important questions facing today’s technologists.
Could you talk about the recent ‘Flash Drives for Freedom’
project. What is it? How did it come about? What impact has it had?
10 years ago HRF started working with North Korean defectors.
People who had risked their lives to escape hell on earth in North Korea and
traveled thousands of miles through China (without speaking the language and
with all the trappings of modernity being completely alien to them) to make it
to freedom at a South Korean embassy in a country like Thailand or Mongolia.
People who had resettled in South Korea, found freedom, and then decided to
help those they left behind. After several years of working with many different
defector-led organizations, we decided that arguably the most important thing
we could do was help get more outside information into North Korea. It’s
difficult to imagine a better future for people in North Korea, but it’s
impossible to imagine a better one where they are kept under the same kind of
total brainwashing invented by the Kim dynasty. The information monopoly must
be broken. So we started supporting groups like the North Korea Strategy
Center, led by Kang Chol-hwan, whose incredible work is described in this epic
Andy Greenberg WIRED cover story. They were taking USB sticks, loading
them up with films, interviews, books, and articles, and sending them into
North Korea via the black markets on the Chinese border. In many ways, it was a
similar project to the work we once did in Cuba. But NKSC and the other
organizations had shockingly little support. To this day, they don’t receive
any money from the South Korean government. So we decided to see if we could
help. In 2014 we organized the world’s first hackathon for North Korea (as seen on
Fareed Zakaria), and, in late 2015, gathered a small group of Silicon
Valley leaders to brainstorm the best way of getting outside information in.
The solution? A flash drive drive. My colleague Jim Warnock came up with the
title “Flash Drives for Freedom”, a team at Leo Burnett did some
pro-bono creative design, and we launched at SXSW 2016. Since then, we’ve sent
more than 70,000 USB sticks into North Korea, reaching hundreds of thousands
(and possibly millions) of people. You can watch a video about the impact and
learn how to send us your flash drives here.
I read the columns Jamal Khashoggi wrote while attending HRF's Oslo Freedom Forum in
Norway just months before he died, which you then translated. They are really
powerful and important words. What, in your view, are the implications of the
murder of Khashoggi and the US and others’ response to it?
Jamal was on the one hand inspired by the Oslo Freedom Forum, and
on the other hand, depressed. He told friends that he loved hearing the stories
of so many activists and learning about so many similar struggles around the
world. From Oslo, he even called an editor friend of his to pitch an idea to
put together a new publication that would assemble investigative journalism
from across the Arab World. At the same time, he was frustrated by the fact
that so little was being done to help these people. He focused particularly on
Leyla Yunus, an incredibly brave human rights activist from Azerbaijan, who had
been jailed, tortured, and even had her home destroyed by the dictatorship for
her peaceful activism. You can watch her Oslo testimony here.
When she shows a photo of what she looked like before her arrest, and then
shows the photo of her after her release, it’s impossible not to gasp. And
Jamal was right there with us, yelling in his mind, why can’t we help this
person? The good news is that, through HRF’s work, we are helping people like
Leyla. In fact, in the past few months, one of the individuals attending the
conference decided to financially support her organization, which is wonderful
news. We aim to spark a lot more of that kind of generosity and partnership
through our work.
When we heard the news about Jamal’s disappearance — and then
later, the grisly details — we were of course devastated. It was initially
shocking that the Saudi regime would do something so brazen. As it turns out,
they were sending a loud message to all Saudi journalists and dissidents: don’t
mess with us. And now, we’ve found out, tragically, that MBS has been torturing
the women’s rights activists that he arrested earlier this summer. The world’s
response, of course, hasn’t been strong enough. Politically, the response from
the White House has been disappointing, to say the least. Unfortunately, it has
been long-standing, bi-partisan US policy to uncritically support the Saudi
dictatorship in exchange for resource and security guarantees. Realistically,
we can’t expect that to change. But maybe the private sector can help make a
difference. The business community initially made a lot of noise about not
attending a large financial conference
held in Riyadh a few weeks ago, and the CEOs of Uber, Siemens, and JP Morgan
pulled out. But many attended anyway, and it seems like it’s business as usual.
What would be great is if Western companies stopped helping the Saudi regime
build blockchain technology. Will IBM stop its collaboration with the regime to
build a blockchain
smart city in Riyadh? Will R3 allow the Saudis to remain in its blockchain consortium? Will speakers like Nick Spanos
remain on the bill for the March 2019 World
Blockchain Summit in Riyadh, or will they pull out? Will software
developers boycott the Saudi
government’s plan to make its own cryptocurrency? Now is the time to take a stand.
What makes you hopeful?
The cryptographer Wei Dai once said that “there has never
been a government that didn’t sooner or later try to reduce the freedom of its
subjects and gain more control over them, and there probably will never be one.
Therefore, instead of trying to convince our current government not to try,
we’ll develop the technology that will make it impossible for the government to
succeed.” I find some solace in that. I’ve seen what encrypted communications
can do to help us send messages in a way that preserves privacy. I’ve seen what
bitcoin can do to enable censorship-resistant money. We can start to see the
potential of zero knowledge cryptography to give people the power to own their
data and disclose it selectively to governments and companies. Necessarily, if
we believe that an alternative to the WeChat future (which the Venezuelans and
Saudis and North Koreans and maybe even the Americans will all gobble up)
exists, then it must be built on this kind of infrastructure. And what really
makes me hopeful is the persistence of humans. Defeating the surveillance state
and challenging authoritarianism might seem like daunting tasks but I wouldn’t
want to bet against the world’s dissident community. All of the people I’ve had
the honor to get to know through HRF’s work and through the Oslo Freedom Forum
have taught me one thing — people don’t give up so easily. Take Ji Seong-ho,
for instance. He dragged himself 6,000 miles on crutches to escape
from North Korea. You read that correctly. Here is his Oslo testimony. If he could do what he did,
then we can all find fuel to achieve our goals.
Finally, what can Epsilon Theory readers do to promote and
preserve open societies?
The good news is, there are many
ways. I would encourage readers to check out HRF.org and OsloFreedomForum.com and contact me if you’d like to
get involved. Attending the Oslo Freedom Forum (coming up on May 27-29 in
Norway) is a special experience that will definitely open your eyes and
introduce you to people who are making a real difference in this struggle around
the world. Is there a particular initiative or program or research project that
you’d like to see carried out in this area? Contact me at firstname.lastname@example.org and let’s see
if we can make it happen. Then there’s the technology and investment side of
things, which will come more naturally to your readers. If you’re going to
fight the surveillance state, you have to first arm yourself with knowledge. I
think it’s an extremely good idea to learn more about how bitcoin works, if you
really want to understand decentralization in practice. Maybe the best place to
start is by reading or listening to The Internet of Money by Andreas Antonopoulos, and
then diving into his remarkably educational YouTube
channel. A closing thought is that so very few people on this planet have
interacted with technology like bitcoin or encrypted messaging or
censorship-resistant storage. Now is your time to make a human impact and a
profit by investing in these areas. We talk about impact investing in
HealthTech or EdTech or CleanTech, which are all great ways to do well and do
good at the same time. What about DemTech, or Democracy Tech? Start thinking
about technology and infrastructure that can help challenge authoritarianism,
and help the world build it. That’s a fantastic legacy to leave and probably
the best thing your readers can do, given their skill set and knowledge base.
Now is the time to complement the existing impact investing space by supporting
projects that promote and protect civil liberties and open societies. And
today, that means protecting our data, money, and communications.
Riding on the coattails of Ben’s ‘Take Back Your Distance’ section of last week’s Things Fall Apart (Part 3) – Politics, I thought I would share the personal journey (with links at the bottom of this note) I have been on for the past few years to take back the ability to think about things for sustained periods of time, and to know what I am thinking about and why.
I started on the journey to ‘take back my thinking’ as I could feel myself getting caught up in the fast moving swirl of communication and ‘news’, and losing the distinction between what I was thinking and what I had simply been exposed to and thought that I was thinking.
You could say that I had developed ‘Fiat Thought’.
As a citizen and as a leader, it seems to me that mistaking Fiat
Thought for real thought is the single most dangerous thing one can do, and so
I decided to develop and deploy ‘three lines of defense’ to try to take back my thinking:
Make my personal tech somewhat inconvenient and limited.
Introduce a couple of hours per day of ‘boring’ (low external stimulation) time.
Take on daily doses of Śūnyatā.
Overall it has been a much more difficult journey than I thought it was going to be.
Most things I tried in building these defenses were at least initially quite unpleasant (the inconvenient tech irritating and friction-y; the boring time boring; the Śūnyatā doses occasionally quite disturbing), and many things I tried simply didn’t work or didn’t stick. But after a solid couple of years of sustained effort I’ve stabilized on a set of protocols for the three lines of defense that, as far as I can tell, are collectively fairly effective.
Unfortunately, the Fiat Thought defense protocols I arrived at are
more like the malaria prevention protocol (daily Malarone before, during, and
after travel, plus bug spray and nets) rather than a one-and-done (well, two-and-done)
inoculation like the measles vaccination, and so I keep them up whenever I am
in a high risk area (e.g., a major metropolitan area with unrestricted internet
access), which is kind of a chore.
As it happens, whenever I’m not in a high risk Fiat Thought area, I’m usually in a high risk malaria area, so you pick your poison, I guess.
Anyway, here are the three lines of defense I’ve stabilized on and have been running for the past couple of years:
Make my personal tech inconvenient and limited
Turned off notifications on my phone, with phone always in silent mode (switch from ‘push’ to ‘pull’).
Turned my phone to greyscale (makes apps literally dull).
Set phone screen brightness to minimum (makes apps even more dull).
Removed all non-utility apps, so just left with: SMS, calendar, clock, Google Maps, Uber, bike share app, Spotify (note: on iPhone I couldn’t actually figure out how to remove Safari but I could hide it and password protect it with a password I don’t know).
Stopped carrying my phone on the weekend.
Read the WSJ daily print edition instead of online aggregated news (The FT print edition would be better, but I’ll take what I can get).
Use an old 1st generation Kindle for reading books (instead of reading on my phone).
Use only an eight-year-old iMac desktop at home, so I have to intentionally go to the computer and then wait for it to boot instead of having an ‘always on’ device around.
Got off Facebook and Twitter (I haven’t deleted my profiles as I can’t be bothered to figure out how, but I no longer post to either, which has removed the interest for me).
Introduce a couple of hours per day of ‘boring time’
Walk to any appointment that is a 30 minute or less walk time, and take public transit for any journey where it is less than a 25% time increase vs. taking a car (Google Maps is very good at predicting this).
An hour-ish simple daily meditation practice, with some longer more intensive periods a few times a year to really stare at my thoughts.
Absorb daily doses of Śūnyatā
This one is tricky to write about.
I use the romanized Sanskrit term ‘Śūnyatā’ here in
the way it is commonly used in translations of the Tibetan Buddhist
canon, rather than the romanized Tibetan of ‘stong pa nyid’ , which is
quite a mouthful, or the typical English translation of ‘emptiness’,
which is misleading.
Regardless, whatever word we use, the exercise I ended up taking on and maintaining is something like Marcus Aurelius advocated in Meditations – daily consideration of the true nature of things (for Stoics the logos) and their impermanence, one’s own impermanence, etc. As Aurelius considers and notes in Meditations book 4.4, “The world is truly nothing but change. Our life is only perception.”
After a fair bit of exploration and experimentation, I believe that the Tibetans have far and away the most sophisticated and reliable technology for acquiring Śūnyatā, although for sure many other traditions have equivalent concepts and sophisticated methods for acquisition.
So, the culmination of this journey to ‘take back my thinking’ is that, outside of the office, I’m limited to an iPhone that I have spent a lot of time and effort turning into a greyscale dumb phone, an old desktop computer that takes a couple of minutes to boot, and a ten-year-old Kindle. I spend a fair amount of time sitting on a cushion half-staring at a blank wall. I’ve become a bad Uber customer. And I spend a bunch of time thinking about (literally) nothing.
I appreciate what an immense privilege it is to have the time / money / freedom to do this, and so the question is: Is it valuable, or is it just some next-level-tech-elite-anti-tech-BS?
I say without a doubt that it is beyond valuable.
In a world of Fiat News that quickly becomes Fiat Thought, taking back my thinking is absolutely foundational to my identity.
As Christopher Beirn commented to a recent Rabbit Hole note: homo sapiens is a hackable animal. So the only question really is whether you are hacking yourself or someone else is hacking you … and if someone else is hacking you, then you’re just the unwitting host for the program.
Why and how to make tech inconvenient:
Firstly, to know who you are competing with in the race to hack yourself, check out BJ Fogg, a leading thinker and practitioner on how computers can be designed to influence attitudes and behaviors. He is the author of the seminal book, Persuasive Technology, (subtitled: Using Computers To Change What We Think and Do). This is a somewhat overwrought, but really quite good Medium post that examines the dark side of using the type of ‘persuasive techniques’ that BJ Fogg developed.
Tristan Harris offers his take here on ‘How Technology Hijacks People’s Minds — from a Magician and Google’s Design Ethicist’.
From a practical perspective, here’s a bunch of ideas conveniently collected together by Tristan Harris’s Center For Human Technology on how to make your tech dull and inconvenient.
Why and how to have periods of limited external stimulus:
There are literally thousands of books on ‘Why meditation is great’, but 10% Happier is one of the least irritating and easiest to read. Full title: 10% Happier: How I Tamed the Voice in My Head, Reduced Stress Without Losing My Edge, and Found Self-Help That Actually Works–A True Story by Dan Harris (a TV news anchor).
If you don’t have a meditation practice and want to get one going, my best advice is to kick start it by doing a minimum of five days in a silent Vipassana-style retreat. Here is a Medium post of a pretty average experience of attending a 10 day Goenka (a common variety) one. As the writer makes very clear: It will most likely hurt.
In many ways I hesitate to offer any links or thoughts on this.
For calibration, even Pema Chödrön – arguably the leading Western light on Tibetan Buddhism, someone who has been a serious, full time student of Tibetan Buddhism for 40 years, someone who studied directly under the legendary Chögyam Trungpa – skipped commenting on the Śūnyatā chapter (chapter nine) in her commentary on the classic ‘The Way of the Bodhisattva’.
But then, in the noble spirit of Silicon Valley, after briefly
hesitating and realizing I am fundamentally unqualified, I proceed anyway:
As mentioned above, while Meditations by Marcus Aurelius covers much more ground than just concepts of impermanence and emptiness, I find it extremely accessible and inspiring as an account of a real-world struggle to integrate this type of ‘philosophic’ thinking and perception into day-to-day action. The Modern Library edition also has a terrific introduction by Gregory Hays.
For the Zen version I would go straight to the writing of Eihei Dogen (the 13th century founder of the Soto Zen school) who offers such sage advice as “When you ride on a boat and watch the shore, you might assume that the shore is moving. But when you keep your eyes closely on the boat, you can see that the boat moves. Similarly, if you examine myriad things with a confused body and mind, you might suppose that your mind and essence are permanent. When you practice intimately and return to where you are, it will be clear that nothing at all has unchanging self” … hmm …
The Rinzai sect has its own method of ‘koan study’ (“What is the sound of one hand clapping?” etc.) to push through conceptual thought to the “selfless-self”. I personally have never gotten along with koan study. Others swear by it.
The Tibetan canon is so vast it’s hard to say where to start. The writings of Chögyam Trungpa are pretty available and accessible (Trungpa was a kind of maniac rock-n-roll Buddhist meditation master with some pretty troubling behaviors but, man, could he write).
One final comment: If you decide to go get yourself some Śūnyatā, and go after it in an intensive and sustained way (say, spending more than an hour or two a day on a combination of contemplation and meditation for more than a few months), I would strongly advise working with a professional as you will likely bring about significant adaptations to your brain and nervous system. So, y’know, if you’re not a trained brain surgeon better not to self-operate.
If you want to understand self-sovereign identity I think this is
about as good a primer presentation as any. It is no more than
a 15 minute read time even if you are new to digital identity and, I think,
hits the major points in a sensible way.
Btw: don’t be put off by the ‘Bitcoin Association of Switzerland’
logo on the cover … there is no further mention of bitcoin (or Switzerland) in it.
Going Underground, as a Toaster
Despite writing in recent weeks
of the rise and reach of ‘the Chinese system’, I don’t believe we will actually
get full, unavoidable surveillance in most of the real world any time soon.
For most people in the general populace (i.e., groups who are not
specifically and intensively targeted) of most countries, the rise in
surveillance tech just means staying outside the surveillance system will be
inconvenient, not impossible, and so most people won’t bother.
For example, if you want to stay out of the video recognition
system you (sort of) just have to put an ‘adversarial perturbation’ sticker on
your head – see link here for how to get yourself
classified as a toaster. For a more technical explanation of why you can get
yourself classified as a toaster, or panda, or whatever, there is a good paper here.
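For intuition on why a small sticker can reclassify you, here is a minimal sketch in the spirit of the ‘fast gradient sign’ idea from that literature. Everything in it (the weights, the input, the class names) is invented for illustration; this is a toy linear classifier, not a real vision model:

```python
import numpy as np

# Toy sketch of an adversarial perturbation. The classifier and labels
# are made up for illustration -- this is not a real vision model.

rng = np.random.default_rng(0)
w = rng.normal(size=100)           # weights of a toy linear classifier
x = 0.1 * w / np.linalg.norm(w)    # an input the model scores as class 1

def predict(x):
    return 1 if w @ x > 0 else 0   # 1 = "panda", 0 = "toaster", say

# Nudge every input dimension by a small eps in the direction that
# most reduces the score (the sign of the gradient).
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x))       # 1: original input classified as "panda"
print(predict(x_adv))   # 0: the tiny, targeted nudge flips the decision
```

The real attacks in the linked paper do the same thing against deep networks, spreading the perturbation across image pixels (or concentrating it in a printed patch) so the change is invisible or innocuous to a human.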
This is also why I think real-world ID and app data are currently
much more effective surveillance techniques than video, as video processing is
still expensive, imprecise, and hackable. But video surveillance still gets the
headlines as it just feels so much more like surveillance.
Defining and Designing Fair Algorithms
While researching the SB-10 bill (referenced in last week’s Rabbit
Hole), I found the Stanford Computational Policy Lab, their work on SB-10, and their presentation on Defining and Designing
Fair Algorithms – check it out; it’s long (112 pages), but really worth it.
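To make one of the ideas in that literature concrete: a common fairness check is to compare error rates across groups, e.g. the false positive rate (how often people who should not be flagged get flagged anyway). The numbers below are invented purely to show the calculation:

```python
import numpy as np

# Hypothetical decisions from some risk-scoring algorithm, for two
# groups "a" and "b" (all data invented for illustration).
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1])  # true outcomes
y_pred = np.array([0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1])  # algorithm's calls
group  = np.array(list("aaaaaabbbbbb"))

def false_positive_rate(g):
    negatives = (group == g) & (y_true == 0)  # people who should NOT be flagged
    return y_pred[negatives].mean()           # ...but were flagged anyway

print(false_positive_rate("a"))   # 1/3
print(false_positive_rate("b"))   # 2/3: same algorithm, double the FPR
```

Equalizing this rate across groups is only one of several competing definitions of ‘fair’, and part of what makes the area interesting is that the definitions generally can’t all be satisfied at once.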
Sinister News Readers
And, finally, there is something deeply sinister about this Chinese AI news anchor
(scroll down to the YouTube video to see it in action). I think the sinister
thing is something so human-like presenting news with no conscience or regard
for ‘truth’ (>> insert preferred partisan joke here<<) … I don’t
know, just deeply, deeply sinister in a Minsky / Shannon Ultimate / Useless Machine kind of way.
If I may begin at the beginning? First, there is the cherry fondue. Now this…is extremely nasty. But we can’t prosecute you for that.
Mister Milton, Owner and Proprietor of Whizzo Chocolate Company:
Next, we have Number 4: Crunchy Frog. Am I right in thinking there’s a real frog in ‘ere?
Yes, a little one.
Is it cooked?
We use only the finest baby frogs, dew-picked and flown from Iraq, cleansed in the finest quality spring water, lightly killed, and sealed in a succulent Swiss, quintuple smooth, full cream, treble milk chocolate envelope, and lovingly frosted with glucose.
That’s as may be, but it’s still a frog!
What else would it be?
Well don’t you even take the bones out?
If we took the bones out, it wouldn’t be crunchy, would it?
Constable Parrot ‘et one of those!
It says “Crunchy Frog” quite clearly.
Well, never mind that. We have to protect the public. People aren’t going to think there’s a real frog in chocolate. The superintendent thought it was an almond whirl! They’re bound to think it’s some kind of mock frog.
Mock frog? We use no artificial preservatives or additives of any kind!
Nevertheless, I advise you in the future to replace the words “Crunchy Frog” with the legend “Crunchy Raw Unboned Real Dead Frog” if you want to avoid prosecution.
What about our sales?
— Monty Python Live at the Hollywood Bowl, “Crunchy Frog” sketch (1982)
It has been pointed out to us that we write rather a lot about philosophy and psychology for a website/blog/newsletter about investing.
Is this surprising? This should not be surprising. All of us are in the business of prediction. Thankfully, not all of it is explicit prediction, like saying that we think that the price of Walmart stock will be $120 in three years, or that Tesla will be bankrupt in four years. Most of it is implicit prediction, like the way that investing money in something risky implies all sorts of things about the returns we expect from it. Predictions all the same. And any activity like this relies on developing confidence in some basis for creating (or assuming) those predictions.
Philosophy, and specifically epistemology, asks how we can know the things we need to make those predictions. Are conditions, traits, features of the thing we’re predicting observable? Are their responses observable? With what confidence may we infer traits from similar things we have observed? Further, may we reason how those traits might interact with other things to allow prediction? Psychology asks how accurate those human observations might be. It asks what evolutionary processes may have colored or influenced what we know, and what we think we know. It posits heuristics that might substitute for empirically-driven reasoning, whether helpfully or harmfully. Furthermore, in a field like investing that is responsible for making predictions about human behavior itself, psychology is recursively relevant, in that it studies both the tool of the observer and the observed.
Psychology and philosophy are critical tools for the investor. But in addition to being particularly ripe fields for bullshit, they also suffer from one of the same tendencies that plagues investors: people get so hung up on terminology and conventions that they start saying and doing dumb things. As always, the shrewd investor avoids that behavior himself and for his clients and capitalizes on it in others.
The Tyranny of Terminology
Of course, that gasbag introduction was just a way to tell you that I got into a little debate about Jordan Peterson.
If you don’t know much about him, Peterson is a professor of psychology at the University of Toronto, a cultural commentator and a bit of a rabble-rouser. As a psychologist and academic, he is heavily cited and as far as I can tell (which is not very far, but judging by citations alone), well-thought-of in his field. As a cultural commentator, he is thoughtful and incisive as a proponent of self-control, advocate of free speech, and opponent of what he characterizes as Neo-Marxism and Postmodernism, especially in the American university. As a scientific historian of philosophy? Well, this is where things get a little more controversial.
You see, the piece I was discussing with a very thoughtful senior staffer at a large U.S. university endowment (don’t tell my salespeople I’m getting into philosophical debates with clients and prospects, please and thank you) made the argument that Peterson was the wrong choice for a public conservative intellectual. The argument, if I may summarize, finds fault with him because (1) he attracts an audience of mostly young white males, (2) the traits he ascribes to Postmodernism are cherry-picked and not entirely accurately derived from the history of the movement, and (3) he uses the terms “Neo-Marxist” and “Postmodern” seemingly interchangeably despite the different heritage and intellectual evolution of the terms and associated philosophical movements. The piece is a rousing little number, and almost enough to make you want to sit through that whole documentary on Jacques Derrida. (No, not really. Good Lord.)
Guess what? All the claims are pretty much true. Guess what else? None of them matter. I’ll get back to why, but first, I want to talk about another very current example.
You may have seen that Steven Pinker, cognitive psychologist at Harvard, published a new book called Enlightenment Now. Now, the reality is that the book doesn’t really undertake much discussion of the specifics of schools of enlightenment thought per se, but rather tells the story of human progress over the last 200 years. It makes the argument that these improvements are vastly underestimated and underappreciated. It also connects those achievements to specific influences of science and reason, sometimes very compellingly and sometimes somewhat less so. It is an encouraging and energizing read, even where its contentions are less well supported. I, for one, think there’s rather a lot in the 20th century alone that a purely scientific approach to curing society’s ills has to answer for. But much of the criticism has little to say about that, instead grousing that the science and reason the book discusses aren’t really about THE Enlightenment, but about principles of the Scottish Enlightenment specifically, and even then only about a subset of principles that Pinker particularly likes. After all, Marx was just a natural extension of the French Enlightenment!
Are you detecting a pattern here?
There are a lot of different kinds of talk about Enlightenment Principles right now. Ben and I write about them a lot. Ben wrote about them back in 2016 in Magical Thinking, and later in Virtue Signaling, or…why Clinton is in Trouble. I wrote about them in short last year in Gandalf, GZA and Granovetter. The remarkable new web publication Quillette provides a platform for writers who are thinking about them. The Heterodox Academy is building a strong core of support for them in universities. Pinker is talking about them. Chomsky has been speaking about them for decades. Hitchens, too, before he passed. In his own way, Taleb is talking about them (although he’d dislike the company I’ve chosen for him thus far). Peterson won’t shut up about them. Many of these same people — and some others — are simultaneously issuing criticisms of what is purported to be a diametrically opposed philosophy. In the early 2000s, the scandalous moniker applied was “Cultural Marxism.” Today this opposition is usually generalized into references to “Neo-Marxism” and “Postmodernism.”
But here’s the biggest shocker. Get out the fainting couch: they’re not all saying the exact same thing.
These are thinkers focused on many different areas, and so there are all sorts of topics where they disagree, sometimes vehemently. All would say that they believe in logic, truth and rationality, I think, but would define those things very differently. Most of the folks in the list above, for example, believe in a rationalism that inherently excludes faith. They are among the most prominent atheists of our time. They typically adhere to empiricism and the scientific method as the primary — even sole — method for transforming observations about the world into predictions. For two of them, Taleb and Peterson, rational thought means also incorporating evolved heuristics, intuition, instinct and long-surviving human traditions. This is not fringe stuff, but the logical conclusion of any serious consideration of Hayek and spontaneous order. It also means particular sensitivity to scientific techniques that end up equating absence of evidence with evidence of absence. All this means when you see many of the above names together, it’s…not always friendly. Like, stuff you can’t really walk back. Even among the two primary authors of this blog there are differences in how we see these things. I haven’t talked to Ben about it, but if I gave him the list of the above, I’d guess he’d hitch his wagon to Hitchens. Me? I’m probably closer to Taleb or Peterson.
What I doubt you’d find much of from this group is navel-gazing about terminology on the issue of postmodernism. While Voxsplainers and science historians quibble (very justifiably in the latter case) about whether there is a “discrete, well-defined thing called the Enlightenment” or whether it is fair to use “Postmodernism” in reference to a movement to esteem individual experience as peer or superior to free inquiry and free expression, the rest of us know exactly what people are talking about when they talk about this issue.
Don’t believe me? Fine. Go Full Cosmo and ask people you know these four questions:
Should governments and other important institutions abridge or allow (e.g., through Heckler’s Veto) the abridgement of some speech to protect people from speech which we think may be harmful to society, especially to historically oppressed groups?
Should we restrict the examination or evaluation of certain topics, especially when allowing them would prop up harmful social structures (especially power and class structures)?
Should we be skeptical that certain features and traits of the material, cosmological and biological world can ever be objectively true or important, considering the biased social lenses through which they are observed?
When making predictions about the world, should we consider personal experience and truths as equal or superior to whatever is uncovered through rational evaluation of the empirical merit or survival of a fact, idea or principle?
If you don’t think there’s a real thing happening in academia, in the public sphere, in politics and in creative media between those with three or four responses on opposite ends of the spectrum, I don’t know what to tell you. But I do know that this intuitive, arbitrary, subjective scale that I made up just now is going to do a lot better job telling you about what people are referring to as a conflict between “Enlightenment” and “Postmodernism” than any etymologically thorough review of the terms themselves. How do I know this? Because it asks the question we should all ask any time that we see prediction or analysis oriented around terminology, categories, benchmarks, titles and jargon:
“Yes, but what is it, really?”
What is it, really?
There isn’t a question I can think of that an investor ought to ask more often, especially when it comes to any interaction they have with a representative of a financial services company trying to sell them something. And as Ben has written, all financial innovation is either finding a new way to sell something (securitization) or a new way to borrow money on things (leverage). The name of the thing being sold isn’t always a very good representation of what the thing is, sometimes for innocent reasons, and sometimes because crunchy, raw, unboned, real, dead frog doesn’t sound very appetizing.
Now, obviously the origin of most investment terminology, conventions, and even jargon IS innocent. Usually their purpose is to reduce complicated or large sets of data or principles to like dimensions. This is pretty helpful for communication and analysis. If we were constantly redefining the generally accepted conventions for a concept like “U.S. Large Cap Stocks”, for example, we would find it difficult to do a great many things with much efficiency. Economic constructs like sectors and common investment styles also have their appeal for this reason.
The problems, however, come in one of two flavors: first, as terminology becomes convention within an industry, we get further and further removed from a fundamental understanding of what the thing actually is. When we talk about U.S. Large Cap Stocks as a sort of monolithic entity unto itself, we forget that there is a lot going on underneath the hood. Sectors are changing. Companies, even entire industries, are born and die. New IPOs, companies slipping out into small cap land, companies bought out by private equity. We forget the nature of our fractional ownership, and the limited mechanical reasons why a stock’s price might rise and fall. The nature of what you own at any given time and the underlying risks attached to it really does change rather a lot, and that’s without getting into the massive sentiment-driven influences on price variation.
One of my favorite analogues to this is the ubiquitous reference to the “Top 1%” of wage earners. The concept is interesting and useful as a simplifying term, but like an asset class, it is by no means a static construction. Consider, for example, that more than 10% of wage-earners will, at some point in their lives, be among the Top 1%! Perhaps more impressively, more than 50% of Americans will at some point be in the Top 10%. Consider the impact that this has on a wide range of policies considered and rhetoric used — not invalidating, to be sure, but relevant.
The second class of problems stemming from the long-term path from terminology into convention is the inevitable realization by market participants that they can — and once enough people do, that they must — game the system. That’s where the coyotes and raccoons come in, but also your garden-variety professionals justifiably worried about career risk. But all of these folks hope you’re hungry for some delicious Crunchy Frog.
Fight Fiercely, Harvard!
What do I mean? Well, sometimes it’s obvious. Let’s consider the curious case of the Harvard Endowment.
A week ago, multiple media outlets reported that alumni from the Class of 1969 (“an artist, a clergyman, and two professors” one article reports, but disappointingly does not finish the joke) wrote incoming Harvard University President Lawrence Bacow to encourage him to force Harvard Management Company (HMC) to move half of the $37.1 billion endowment out of “hedge funds” and into ETFs tracking the S&P 500. The reason? This passive management strategy would have worked better over the last several years, and would have saved a bunch of money in fees.
It goes without saying that the alumni recommendation is just really, really terrible. Like, Fergie-singing-the-anthem terrible. It’s terrible because it would arbitrarily change the risk posture of the endowment by a massive amount. It’s terrible because it would shift what has historically been a well-diversified portfolio into a woefully underdiversified portfolio with extraordinarily concentrated exposure to the performance of common stock in large U.S. companies. It’s terrible because the confluence of those two changes would massively increase the drawdowns of the endowment, its risk of ruin, and potentially impact the long-term strategic planning and aims of the greatest research university on the planet.
But mostly, it’s terrible because the proposal isn’t passive at all. Not even a little bit. It’s a massively active roll of the dice on a single market! While alumni, executives and investors bicker over whether the portfolio ought to be “passively managed”, the origin of the term and the nonsense they’re proposing couldn’t be more at odds.
Now, you may be saying, “It’s a silly alumni letter. Most people get this.” No, they really don’t. Remember, the goofy letter was covered throughout the financial media, and they are the same media who triumphantly report the annual difference in return between literally anything and the S&P 500, regardless of whether it is the return on a completely different type of security or vehicle with vastly different risk and diversification characteristics. This is how most of the world thinks about investing. This is how the damned Center for Economic and Policy Research thinks about investing, for God’s sake. People who are otherwise very smart think they’re making an intelligent point about fees when they’re really making a dumb point about asset allocation — about quantity and sources of risk. Even the aforementioned Steven Pinker contracted Gell-Mann Amnesia and retweeted an article attributing the Buffett bet between the S&P 500 and hedge funds to a question of cost rather than the dominating risk differences between the two.
How do we cut through terminology confusion on an issue like this?
We ask: “What is it, really?”
If you’re being sold a portfolio based on principles of “passive management”, does your advisor or manager mean “low-cost”, does he mean “not making active bets against a global market portfolio”, or both (or, y’know, neither)? If it’s a low-cost story, what is it, really? Does it have a low headline fee, but with expensive underlying implementation using swaps or external funds that don’t get included in the stated fee? Does it have a low headline fee that your advisor is layering high additional costs on top of? What is the asset allocation you’re being sold on? Is it implicitly making an active bet against a global portfolio of financial assets? Is it the right amount of risk? Is it taking sufficient advantage of the benefits of diversification?
If you’re being asked by a client or prospect about “passive management” or “indexing”, are you sure they’re asking you about low-cost investing? Are you sure they care whether the portfolio is avoiding making bets against market cap-weighted indices? Are you sure they care whether you’re in-line with some measure of a global market portfolio? Or are they asking you why you weren’t invested 100% in the S&P 500?
Because whatever the “real” definition of passive management, we all know that we all know that this is almost always what people mean.
Deeper down the Rabbit Hole
The fact that people really mean, “why don’t you just buy the S&P 500” when they say, “why don’t you just invest passively” tells us something else about most investors. When it comes to what they buy and what they own, and especially when it comes to conventions that manifest in indexes and benchmarks, they frequently haven’t given much thought to what it really is.
Try this yourself, with your boards, your financial advisor, or with your clients. Ask them, “What is it, really, that you invest in when you buy a stock?”
I’ve done it, so I’ll give you a preview: you’ll get a huge range of answers, usually relating to “ownership” of companies or businesses. So what is an investment in a stock, really? It is a fractional, juniormost claim on the cash flow of a company, usually denominated in the currency of the country where it has its headquarters, the price of which at any given moment is determined by the investor out there who is willing to pay you the most for it — and nothing else. It has no “intrinsic value”, no “fundamental” characteristic that can be evaluated without knowing how a hundred million others will value and perceive it. It is a risky and inherently speculative investment.
In my experience, this is not what most investors mean when they say to their advisor, “just buy me a portfolio of stocks.” What they really mean is “I want to own things I understand.” They believe that investments in businesses are simple and straightforward. Unfortunately, while the businesses and how they make money may seem perfectly sensible on the surface, the forces influencing the returns from ownership of a common stock are anything but simple and straightforward. Sure, diversification helps a lot, and there are decades of relevant data to help us build some confidence about some range of likely outcomes. There are also theories of varying quality about rational behavior in that spontaneous order we call a market. But what you really own is something whose value may confound any attempt at analysis or linkage to economic fundamentals over your entire investment horizon.
Think this is just a misunderstanding of individual investors? Think again. This is a systematic problem. Consider, for example, that every Series 7-trained professional — by which I mean most of your brokers and financial advisors — is told that alternative investments tend to be “riskier” than traditional investments. In isolated cases this is true, and it’s certainly true that there are strategies by which the complexity of so-called alternative strategies introduces new dimensions of risk — usually as a way for financial intermediaries to confuse people into paying them more. But by and large, it’s an unequivocally false statement. Still, the dimension of complexity vs. perceived simplicity dominates how investors think about risk, even though the relationship is rarely strong. Don’t believe me? Ask a client, or better yet, your financial advisor to rank the following in terms of their riskiness: (1) $100 invested in an S&P 500 index fund, (2) $100 invested in centrally cleared financial futures contracts on German bunds, (3) $100 invested in fully collateralized, centrally cleared credit default swaps on U.S. IG credit. My guess is that nearly all individual investors, a majority of financial media members and a plurality of financial professionals would put #1 somewhere other than the top of the risk list. And it’s Not. Even. Close.
As it intersects with familiarity bias/availability heuristics (i.e., we are biased in our analysis toward things that we think that we know), the tyranny of terminology becomes less insidious and more obvious in its influence. Terms like stocks, bonds, commodities or real estate have readily ascertainable meanings and definitions but mean something very different when they come out of the mouths of most investors. They mean familiarity or foreignness. Whether we are individuals working with advisors or advisors ourselves, we must understand that when most investors say risk, they mean complexity. When most investors say simple — or something they think of as simple — they mean “low risk.” These are dangerous misconceptions.
And friends, any time there’s a dangerous misconception, there’s someone in the financial services industry poised to weaponize it. Plenty of Crunchy Frogs to go around, you see.
In every sub-field of money management, the name of the game is benchmark arbitrage. It’s a game played in three parts: risk layering, benchmark selection and multi-benchmarking. In each case, the affinity investors have for the comfort of indices makes them susceptible to marketing and fee schemes that have the potential to cause them harm.
Risk layering is the oldest of the three games. I wrote about it last year in I am Spartacus. The basic premise here is to select a benchmark that will feel attractive, familiar and conventional, and then to take additional risk on top of it to either (1) earn a better fee for the return generated by that risk or (2) generate better-looking performance to improve marketing potential. This IS the business model of private equity buyout funds, who, since the massive fund raises and valuation increases of the mid-2000s, now take your cash, buy a company at a premium, layer on debt and sell it a few years down the line without having really done much of anything else. They’re not alone. Keying on the intellectual attraction of an “absolute return hurdle”, many so-called hedged and market neutral funds take on credit, equity and other risks beyond what exists in the benchmark, happily collecting incentive fees on garden-variety sources of return. Long-only funds do this too, of course. Most actively managed funds tend to buy higher beta, higher volatility stocks, and nearly all are smaller capitalization than the benchmark they are measured against.
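To make the mechanics concrete, here is a toy sketch of risk layering. Every number below is invented for illustration (no real fund or benchmark): the "manager" simply levers the benchmark 1.3x, and in up markets the extra beta shows up as fee-earning "outperformance" with no skill involved.

```python
# Toy illustration of risk layering: pure beta dressed up as alpha.
# All numbers are hypothetical.

benchmark_returns = [0.08, 0.12, -0.05, 0.10]   # made-up annual benchmark returns
leverage = 1.3                                   # extra risk layered on the benchmark
incentive_fee = 0.20                             # 20% of any return above the benchmark

for bm in benchmark_returns:
    fund = bm * leverage                         # the fund is just the levered benchmark
    excess = fund - bm                           # "outperformance" = beta, not skill
    fee = incentive_fee * max(excess, 0.0)       # incentive fee collected on that beta
    print(f"benchmark {bm:+.2%}  fund {fund:+.2%}  'alpha' {excess:+.2%}  fee {fee:.2%}")
```

Note the asymmetry: in the down year the client eats the levered loss, but the incentive fee in up years is never returned.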
Benchmark selection is often just a variant of risk-layering, but where the fund manager tries to control both the measurement and the measuring stick. Think of this like “venue-shopping” in the criminal justice world. Have you hired an international manager benchmarked to the MSCI EAFE Index? They do this. I don’t know who you hired, but they do this. They always have 5-10% in emerging markets stocks, don’t they? There’s a reason they didn’t select the MSCI World ex-US benchmark, folks.
As for multi-benchmarking, well…I hate to tell you, but if you have ever hired a money manager, a financial advisor or even an in-house investment team, you’ve seen this one, even if you didn’t notice it. It’s very simple: you pick two benchmarks, and then you make sure you’re always positioned between them. And that’s it. Sometimes one of the benchmarks is a peer benchmark (e.g., Morningstar, Lipper, eVestment peer group, Wilshire TUCS, Cambridge for the alts folks), or sometimes it’s a “style” benchmark (e.g., Value, Growth, High Dividend, Quality, Low Vol, etc.). But the objective is to always be able to point to something that you’re outperforming. A lot of this is well-intentioned and human, and there’s often a good reason to do it. But if you’re not looking out for it, it can confound.
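The multi-benchmarking game above can be sketched in a few lines. The benchmark names and return figures are invented for illustration: whenever the fund's return lands between its two benchmarks, there is always one it "beat" to report against.

```python
# Toy sketch of multi-benchmarking: with two benchmarks in hand, a fund positioned
# between them can always point to one it is outperforming. All numbers invented.

def reportable_benchmark(fund_return, benchmarks):
    """Return the name of the benchmark the fund beat by the most, or None."""
    beaten = {name: r for name, r in benchmarks.items() if fund_return > r}
    # Report against whichever beaten benchmark makes the fund look best.
    return min(beaten, key=beaten.get) if beaten else None

periods = [
    (0.08, {"S&P 500": 0.10, "Russell 1000 Value": 0.06}),  # trailed one, beat the other
    (0.04, {"S&P 500": 0.02, "Russell 1000 Value": 0.05}),  # and vice versa next period
]

for fund, bms in periods:
    print(reportable_benchmark(fund, bms))
```

In period one the marketing deck features the value index; in period two, the S&P 500. Same fund, permanent "outperformance."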
And that’s kinda the point.
We can’t avoid convention, or the taxonomy that emerges naturally from an industry like ours. Nor should we want to. It helps us have conversations with each other. It helps us focus on Things that Matter instead of getting bogged down in details. But if we are to be successful, we must recognize the influence it has on us, our clients, our advisors and other investors. My advice?
Try to understand what your clients really think their investments are. Know what they really mean when they ask why you are or aren’t doing something.
Know what your advisors and managers think something is. Ask questions. Don’t assume based on terminology, and don’t be steamrolled by jargon.
Know what the things you own actually are, and build a risk management program to ensure that the baser temptations of people in this industry don’t cost you or your clients money.
As we delve further into alpha in a “Three-Body Market”, this last point will come up a lot. You can’t seek alpha if you don’t really know how to measure it. Except for the Postmodernists. Y’all can still tell us how it makes you feel.
I’m limiting this week’s Rabbit Hole to three links which represent the rapid tick-tock of the trifecta of massively fast compute, AI algorithms and blockchain development as I believe that these are the top three technology mega-trends of the 2015 – 2025 period (ex-Life Sciences innovation). Personally, I still believe that within these three mega-trends massively fast compute (Big Compute) will be the most world-changing, but clearly big compute hardware and algorithm development are deeply intertwined, and I believe we will start to see blockchain intertwine in a meaningful, although as-yet somewhat unclear, way with these other two technologies too.
That’s a fast chip you got there, bud
Very accessible CB Insights write-up here, and denser original paper here, of a test of a photonic computer chip which “mimics the way the human brain operates, but at 1000x faster speeds” with much lower energy requirements than today’s chips. To state the obvious, the exciting/terrifying potential of chips like this becoming reality is that machines will be able to rapidly and cumulatively learn, while we humans are still limited by learning, passing on some fraction of that learning, and then dying, which is clearly a pretty inefficient process.
The future of AI learning: nature or nurture?
IEEE Spectrum provides an overview of a recent debate between Yann LeCun and Gary Marcus at NYU’s Center for Mind, Brain and Consciousness on whether or not AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve similar intelligence.
Blockchain for Wall Street
Bloomberg reports on a major breakthrough in cryptography which may have solved one of the biggest obstacles to using blockchain technology on Wall Street: keeping transaction data private. Known as a “zero-knowledge proof,” the new code will be included in an Oct. 17 upgrade to the Ethereum blockchain, adding a level of encryption that lets trades remain private.
A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn:
“Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”
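For what it's worth, "relevant" does get a precise definition in this framework. A one-line gloss on the standard information-bottleneck objective (my paraphrase, not a formula from the article): choose a compressed representation $T$ of the input $X$ that trades compression against preserved information about the label $Y$,

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where $I(\cdot\,;\cdot)$ is mutual information and $\beta$ sets the trade-off. Information in $X$ is "relevant" exactly to the extent that it carries information about $Y$.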
Quantum computers need smart software
Nature reports “The world is about to have its first (useful) quantum computers … The problem is how best to program these devices. The stakes are high — get this wrong and we will have experiments that nobody can use instead of technology that can change the world.” Related to this, I’m excited to spend some time in a couple of weeks with Scott Aaronson of QCWare who “develop hardware-agnostic enterprise software solutions running on quantum computers”.
In other “the quantum age is nigh” news:
A pair of researchers from the University of Tokyo have developed what they’re calling the “ultimate” quantum computing method. Unlike today’s systems, which can currently only handle dozens of qubits, the pair believes their model will be able to process more than a million.
Australian researchers have designed a new type of qubit — the building block of quantum computers – that they say will finally make it possible to manufacture a true, large-scale quantum computer.
Microsoft now has 8,000 AI researchers
Apparently, Microsoft now has 8,000 AI researchers. That’s a veritable army. Presumably a big chunk of the 8,000 are data mungers, infrastructure engineers, etc., just as on an aircraft carrier like the USS Nimitz, where there are, order of magnitude, the same number of personnel but most are cooks, logistics managers, medics, etc. rather than fighter pilots. But still: Eight thousand!!!
And in other “that’s a lot of engineers” news: Amazon now has 5,000 people working on the Echo / Alexa.
As I’ve noted before, in my view it is utter conceit to think it is possible to do something ‘AI’ that is truly and sustainably novel, scaled and production-ready in a high-stakes environment (such as trading) without a decent-sized team focused on a narrowly defined problem.
Fake news and botnets
Fascinating interview with researcher Emilio Ferrara on fake news and botnets:
“We found that bots can be used to run interventions on social media that trigger or foster good behaviors,” said Ferrara. “This milestone shatters a long-held belief that ideas spread like an infectious disease, or contagion, with each exposure resulting in the same probability of infection. Now we have seen empirically that when you are exposed to a given piece of information multiple times, your chances of adopting this information increase every time.”
It has been at least a month since we have had a Hofstadter quote, and this week’s Rabbit Hole column feels light on existential theory, so here’s a classic:
“In the world of living things, the magic threshold of representational universality is crossed whenever a system’s repertoire of symbols becomes extensible without any obvious limit.”
And finally, in general I have quite a bit of reticence about sharing TED Talk links as, to quote the low-agreeability Benjamin Bratton, they can be kinda “Middlebrow Megachurch Infotainment.” Having said that, here’s a link to a terrific TED Talk on why boredom is important.
“As one UX designer told me, the only people who refer to their customers as “users” are drug dealers and technologists.”
This week’s Rabbit Hole column is more thematic with recent links that I found interesting around the topic of ‘news,’ on which Ben wrote the defining commentary of recent years with Fiat Money, Fiat News.
Youth and news
I’ve always appreciated the quality and integrity of the work of the Knight Foundation. This report is a fascinating summary of a focus group with 52 teenagers and young adults from across the United States on how young people conceptualize and consume news in digital spaces.
A scalable blockchain protocol for publicly accessible and immutable content
This is the category of blockchain things which I think is interesting and transformative: https://steem.io/steem-bluepaper.pdf
(NOTE: I have no connection to Steem, I just like the category)
“Compared to other blockchains, Steem stands out as the first publicly accessible database for immutably stored content in the form of plain text, along with an in-built incentivization mechanism. This makes Steem a public publishing platform from which any Internet application may pull and share data while rewarding those who contribute the most valuable content.”
The Bradd Jaffy and Kyle Griffin approach
Here I re-share a link to a Buzzfeed story about Bradd Jaffy and Kyle Griffin, who re-share links on Twitter to other people’s news stories. If only Bradd Jaffy and Kyle Griffin could then re-share this link, and then Buzzfeed could write about that … But, beyond the comical circularity potential, it is a very interesting story by Buzzfeed on the power of non-traditional distribution channels / influencers and ‘the secondhand scoop.’
The Norwegian approach
Nieman Lab reports that a Norwegian news site (the online arm of the NRK public broadcaster) requires readers to answer questions to prove they understand a story before posting comments: “We thought we should do our part to try and make sure that people are on the same page before they comment … If everyone can agree that this is what the article says, then they have a much better basis for commenting on it.”
What words ought to exist?
And finally, here is a fun paper which the author describes as “An earnest attempt to answer the following question scientifically: What words ought to exist?” using “computational cryptolexicography, n-Markov models, coinduction…”
High-level, but still interesting, overview from Wired of how the Netflix recommendation system works. Short answer: “The three legs of this stool would be Netflix members; taggers who understand everything about the content; and our machine learning algorithms that take all of the data and put things together.” The tagging piece is probably the most interesting (“dozens of in-house and freelance staff who watch every minute of every show on Netflix and tag it. The tags they use range massively from how cerebral the piece is, to whether it has an ensemble cast, is set in space, or stars a corrupt cop”) and points to the continued need for ‘human-in-the-loop’ content tagging for machine learning systems.
A taxonomy of humans according to Twitter
Sam Levine, an artist and programmer from Brooklyn, scraped Twitter’s ad creation page to produce a full list of all user segments, their names, descriptions and user count: a taxonomy of human beings according to Twitter and its data brokers. My favorite tag: “Buyers of deli bulk meat.”
“People tend to think about evolution as being synonymous with population genetics. I think that’s fine, as far as it goes. But it doesn’t go far enough. Evolution was going on before genes even existed, and that can’t possibly be explained by the statistical models of population genetics alone. There are collective modes of evolution that one needs to take seriously, too. Processes like horizontal gene transfer, for example.”
The aliens on Earth
Continuing on the theme of evolution, this is a fascinating piece on ctenophores as aliens on earth:
“Leonid Moroz has spent two decades trying to wrap his head around a mind-boggling idea: even as scientists start to look for alien life in other planets, there might already be aliens, with surprisingly different biology and brains, right here on Earth. Those aliens have hidden in plain sight for millennia. They have plenty to teach us about the nature of evolution, and what to expect when we finally discover life on other worlds.”
Chinese science fiction
And finally, on the subject of aliens, I cannot recommend strongly enough Liu Cixin’s The Three-Body Problem ( 三体 ) science fiction trilogy, the first book of which recently won a Hugo Award. Deeply intelligent and expansive science fiction on the scale of Asimov’s Foundation series. Without giving too much of a spoiler, my favorite quote is from the second book in the trilogy (The Dark Forest), which turns Johann Wolfgang von Goethe’s “If I love you, what business is it of yours?” into “If I destroy you, what business is it of yours?”
The meat really starts to kick in at the section ‘There Are No Shortcuts’ and reaches peak lucidity in the section ‘Organizational Structure’. Excellent work by Leigh Drogen, Founder and CEO at Estimize, laying out what I really do believe is the blueprint for success with ‘next gen’ strategies that are foundationally systematic and substantially software-encoded:
Portfolio Manager — Of all the roles this is where I think things really need to change in terms of who sits in this seat. It can no longer be hedge fund bros, they simply won’t survive here. Nor will the pure gunslingers and tape readers, gone. And you certainly don’t want the pure quants sitting in this seat. PMs of the future are going to be far more interpersonal and process driven…. This is a cross functional role, and one that needs to be based on the behavioral attributes of the person more than anything else. An MBA may be useful here, but I would even say that having experience working at the early stages of a startup as a CEO can add a lot. I’m waiting for someone to develop a firm to leverage psychometric testing for different investment strategies so that we can identify people tuned for momentum vs value. You’re talking about a completely different psychology between those two people and it’s imperative you choose the person correctly … PMs should have some training in statistical and quantitative methods in order for them to talk intelligently with the quants and trust the factor models. Without that trust, there’s simply no point in having them and you’ll only gain that by understanding how they are built. Should a PM know how to code, no. Should they understand what the code does and why, absolutely. Basic data science classes can provide this knowledge. Quantitative research methods 101 in college is a requirement … I believe that compensation structures for the PM need to change. This is no longer “his book”. He is another player on the team, who has a specific role, to coordinate the dance. But in many ways, he will have less impact on the alpha generated by the book than the analysts or the quants who create the factor models. The PM is now the offensive coordinator calling the plays, not the quarterback on the field scrambling around and throwing touchdowns.
We can now compensate analysts accurately for the efficacy of their calls, and the PM for how much alpha she adds above them. The rest of the team should be bonused out based on the performance of the book.
Our basic idea with our DeepMoji project is that if the model is able to predict which emoji was included with a given sentence, then it has an understanding of the emotional content of that sentence. We are training our model to predict emojis on a dataset of 1.2B tweets (filtered from 55B tweets). We can then transfer this knowledge to a target task by doing just a little bit of additional training on top with the target dataset. With this approach, we beat the state-of-the-art across benchmarks for sentiment, emotion, and sarcasm detection.
Check out the online demo here, more detailed write-up here, and full technical paper here.
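The transfer recipe described above (pretrain on a proxy task with abundant labels, then fine-tune a small layer on the scarce target task) can be caricatured in a few lines. Everything below is invented for illustration: a hand-coded "encoder" stands in for the emoji-pretrained network, and a perceptron stands in for the new classification layer trained on the target task.

```python
# Toy sketch of DeepMoji-style transfer learning. The "pretrained encoder" is
# hand-coded here; in the real system its features come from emoji prediction.

# Pretend-pretrained encoder: maps text to features tracking emotional content.
POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"hate", "awful", "sad"}

def encode(text):
    words = text.lower().split()
    return [sum(w in POSITIVE for w in words),   # feature 1: positive affinity
            sum(w in NEGATIVE for w in words)]   # feature 2: negative affinity

# Target task: a tiny labeled sentiment set (1 = positive, 0 = negative).
train = [("i love this", 1), ("great stuff", 1), ("i hate it", 0), ("so sad", 0)]

# Train only a small linear head on top of the frozen encoder (perceptron updates).
w, b = [0.0, 0.0], 0.0
for _ in range(10):
    for text, label in train:
        x = encode(text)
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        if pred != label:                        # mistake-driven update
            sign = 1 if label == 1 else -1
            w = [w[0] + sign * x[0], w[1] + sign * x[1]]
            b += sign

def predict(text):
    x = encode(text)
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The point of the pattern: the expensive representation is learned once from plentiful proxy labels (emojis), and only the cheap head needs target-task data.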
Useful skills like VR, NLP and… econometrics?
This list of fastest-growing freelancer skills compiled by Upwork, a job site that matches freelancers with employers, is just so odd I feel there is either some deep pattern coded in there that explains everything, or else some intern at Upwork is having a laugh.
Growth in VR and NLP makes total sense given the relative lack of experienced talent vs growth in demand, especially for VR developers. Neural network and Docker development for the same reasons. Adobe Photoshop freelancers — sure, I guess Photoshop is still operated by a priesthood although it’s unclear why the journeyman priesthood is growing rapidly.
But then Econometrics, really??!!? — never, ever, in my life have I thought “what I really need to do is to hire a random econometrician over the internet”, and for sure that thought has not been exponentially increasing of late.
And Asana work tracking, which had only around 20,000 paying customers a year ago?!!? — that’s like having ‘Tesla car polisher’ on the list.
Anyway, I leave you to ponder. It certainly is an intriguing list — perhaps what we need is an econometric hireling to make sense of it for us…
And finally and frivolously, we have this article which is pretty much a total waste of storage space as it is a 700-word, not-very-good takedown of a new not-very-good mushroom-identifying mobile app with sub-par mushroom image recognition. However, it warrants inclusion in this week’s Rabbit Hole for the one immortal line:
There’s a saying in the mushroom-picking community that all mushrooms are edible but some mushrooms are only edible once.
A couple of weeks back I shared a link to the story of ImageNet and the importance of data to developing algorithms. Ars Technica reports on two ‘at the coalface’ battles over data access with HiQ and Power Ventures fighting with LinkedIn and Facebook over data access. I’m not advocating a position on this but, to be sure, small — and currently obscure — court cases like these will, cumulatively, end up setting the precedents which will have a significant impact on the evolution and ownership of powerful algorithms that are increasingly driving behavior and economics.
This speech from Claude Shannon at Bell Labs in 1952 has been circulating online for the past couple of weeks. It is a timeless, pragmatic speech on creative thinking which remains, 65 years later, fully relevant for developing novel computational strategies:
Sometimes I have had the experience of designing computing machines of various sorts in which I wanted to compute certain numbers out of certain given quantities. This happened to be a machine that played the game of nim and it turned out that it seemed to be quite difficult. It took quite a number of relays to do this particular calculation although it could be done. But then I got the idea that if I inverted the problem, it would have been very easy to do — if the given and required results had been interchanged; and that idea led to a way of doing it which was far simpler than the first design. The way of doing it was doing it by feedback; that is, you start with the required result and run it back until — run it through its value until it matches the given input. So the machine itself was worked backward putting range S over the numbers until it had the number that you actually had and, at that point, until it reached the number such that P shows you the correct way.
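Stripped of the relays, the trick Shannon describes (the forward computation is easy, so run it over candidates by feedback until the output matches the given value) looks like this. The function being inverted is my invention for illustration, not his nim calculation:

```python
# Inversion by forward search: Shannon's "interchange the given and required
# results" idea in miniature.

def forward(x):
    # Easy direction: cheap to compute for any candidate x.
    return x * x + 3 * x

def invert(target, search_range):
    # "Feedback": run the easy forward machine over candidates until its
    # output matches the given value; that candidate is the answer.
    for x in search_range:
        if forward(x) == target:
            return x
    return None
```

So instead of building hardware for the hard inverse computation, you build the easy forward one and let the search do the inverting.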
Facebook shuts down robots after they invent their own language
Facebook shuts down robots after they invent their own language has become a widely reported and wildly commented-upon story over the past month, referencing a story on ‘Tricky chatbots’ linked here a couple of months back. For melodramatic illustrative effect, I like switching a couple of words in the Facebook headline so that it reads ‘Lehman (doesn’t) shut down traders after they invent their own language’, as it illustrates that, in general, if you put a bunch of agents (human or machine) together and set up a narrowly defined, adversarial, multi-player game with a strong reward function, then the agents will: develop their own task-specific language and protocols; keep adding complexity; lie to each other (yes, the FB bots also learnt to do that); be tempted to obfuscate behavior in order to reduce interference and maximize the reward function; and develop models which are positive for near-term reward maximization but do not necessarily deal with longer-term consequences or long-tail events, and so become very hard for human overseers to truly assess…
DICK FULD (2008): I wake up every single night wondering what I could have done differently — this is a pain that will stay with me the rest of my life
FACEBOOK (2017): Hold my beer
AI: From partial to full script
Thinking more broadly about the longer-term evolution of AI (and the nature of money and contracts, per the Ethereum link last week), it has been interesting to re-read Sapiens: A Brief History of Humankind by Yuval Noah Harari, which charts the rise to dominance of us Sapiens, with especially interesting chapters on the development of written language and money. A concept which particularly grabbed me was that written language was initially developed as a ‘partial script’ technology for narrow tasks such as tax accounting, and then evolved to be full script and so capable of much more than it was originally conceived for.
The history of writing is almost certainly a wonderful historical premonition of the trajectory of AI, except with the evolution being much faster, and with the warning that, likely, “the AI is mightier than the pen.”
Relevant excerpt from Sapiens:
Full script is a system of material signs that can represent spoken language more or less completely. It can therefore express everything people can say, including poetry. Partial script, on the other hand, is a system of material signs that can represent only particular types of information, belonging to a limited field of activity … It didn’t disturb the Sumerians (who invented the script) that their script was ill-suited for writing poetry. They didn’t invent it in order to copy spoken language, but rather to do things that spoken language failed at … Between 3000 BC and 2500 BC more and more signs were added to the Sumerian system, gradually transforming it into a full script that we today call cuneiform. By 2500 BC, kings were using cuneiform to issue decrees, priests were using it to record oracles, and less-exalted citizens were using it to write personal letters.
The beautiful mathematical explorations of Maryam Mirzakhani
And finally, at the risk of turning into The Economist, we conclude this week’s Rabbit Hole with a touching obituary of the Tehran-born, Fields Medal-winning mathematician Maryam Mirzakhani:
A bit more than a decade ago when the mathematical world started hearing about Maryam Mirzakhani, it was hard not to mispronounce her then-unfamiliar name. The strength and beauty of her work made us learn it. It is heartbreaking not to have Maryam among us any longer. It is also hard to believe: The intensity of her mind made me feel that she would be shielded from death.
…requesting information on new ideas and approaches for creating (semi)automated capabilities to assign ‘Confidence Levels’ to specific studies, claims, hypotheses, conclusions, models, and/or theories found in social and behavioral science research (and) help experts and non-experts separate scientific wheat from wrongheaded chaff using machine reading, natural language processing, automated meta-analyses, statistics-checking algorithms, sentiment analytics, crowdsourcing tools, data sharing and archiving platforms, network analytics, etc.
Claude Berrou on turbo codes and informational neuroscience
Fascinating short interview with Claude Berrou, a French computer and electronics engineer who has done important work on turbo codes for telecom transmissions and is now working on informational neuroscience. Berrou describes his work through the lens of information and graph theory:
My starting point is still information, but this time in the brain. The human cerebral cortex can be compared to a graph, with billions of nodes and thousands of billions of edges. There are specific modules, and between the modules are lines of communication. I am convinced that the mental information, carried by the cortex, is binary. Conventional theories hypothesize that information is stored by the synaptic weights, the weights on the edges of the graph. I propose a different hypothesis. In my opinion, there is too much noise in the brain; it is too fragile, inconsistent, and unstable; pieces of information cannot be carried by weights, but rather by assemblies of nodes. These nodes form a clique, in the geometric sense of the word, meaning they are all connected two by two. This becomes digital information…
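Berrou’s “assemblies of nodes” are cliques in the graph-theoretic sense: every pair of nodes in the assembly is connected (“two by two”). A minimal check of that property (toy graph and names are my own):

```python
from itertools import combinations

def is_clique(nodes, edges):
    """True if every pair of nodes is directly connected ('two by two')."""
    return all(frozenset(pair) in edges for pair in combinations(nodes, 2))

# Tiny toy graph: a triangle {a, b, c} plus a dangling node d.
edges = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]}

is_clique({"a", "b", "c"}, edges)  # the triangle is a clique
is_clique({"a", "b", "d"}, edges)  # d is not connected to a, so not a clique
```

The hypothesis, in these terms, is that a piece of mental information is the presence or absence of such a fully connected subset, which is what makes it digital rather than a continuous synaptic weight.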
Thermodynamics in far-from-equilibrium systems
I’m a sucker for methods to try to understand and explain complex systems, such as this story by Quanta (the publishing arm of the Simons Foundation — as in Jim Simons of Renaissance Technologies fame) about Jeremy England, a young MIT associate professor using non-equilibrium statistical mechanics to poke at the origins of life.
And finally, check out this neat little game theory simulator which explores how trust develops in society. It’s a really sweet little application with fun interactive graphics framed around the historical 1914 No Man’s Land Ceasefire. Check out more fascinating and deeply educational games from creator Nicky Case here.
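The core of such trust simulators is an iterated game: strategies meet repeatedly and payoffs accumulate. A minimal sketch, assuming standard prisoner’s-dilemma-style payoffs (these numbers are illustrative, not Case’s exact ones):

```python
# Payoff to (me, them): C = cooperate, D = defect (cheat).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_cheat(opponent_history):
    return "D"

def play(a, b, rounds=10):
    """Run an iterated game; each strategy sees the other's history."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Two copycats cooperate forever and both prosper, while a cheat gets one exploitative win and then nothing, which is exactly the dynamic the simulator makes visceral.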
I’ve recently — perhaps belatedly — developed an interest in blockchain, and particularly in Ethereum. Not so much in trading crypto-currencies, but more in the realm of the type of ‘Smart Token’ protocols being developed by Bancor. As I start to process the implications of smart contracts I’m convinced that we are currently at Day Zero of a massive disruption. To quote Mike Goldin on one dimension of this disruption: “What blockchains give us, fundamentally, is programmable money. When you can program money, you can program incentives. When you can program incentives, you can kind of program people’s behavior.”
Another week, another set of ‘human’ skills which algorithms are mastering: Google demonstrates both an algorithm for tastefully selecting landscape photography, which is almost as good as a pro photographer, and, from the DeepMind division, “a new family of approaches for imagination-based planning (and) architectures which provide new ways for agents to learn and construct plans to maximize the efficiency of a task.”
Rough translation: AI which has the rudimentary ability to consider the potential consequences of an action (‘imagine’) and to plan ahead results in a higher success rate than AIs without this ability.
ImageNet: the data that changed AI research
Long, terrific overview of the history and impact of the ImageNet data set: “One thing ImageNet changed in the field of AI is suddenly people realized the thankless work of making a dataset was at the core of AI research. People really recognize the importance — the dataset is front and center in the research as much as algorithms.”
Auto Public Offering
Generally, ‘automation of white collar work’ is such an obviously disruptive category of AI — and near-term economic earthquake for many industries — that there is not much to say about it. However, this short piece by Bloomberg a few weeks back caught my eye: Apparently Goldman has automated (or at least mapped out how to automate) half the tasks needed to prepare for an IPO, thus replacing the work previously done by associates earning $326,000 a year. As Bill Gates famously said: “Be nice to nerds. Chances are you’ll end up working for one.”
The paradox of historical knowledge
And finally, I shared a pretty hefty quote from “Homo Deus: A Brief History of Tomorrow” by Yuval Noah Harari last week related to algorithms and self. On a completely different topic, the book also contains a fantastic quote on the paradox of historical knowledge: “This is the paradox of historical knowledge: Knowledge that does not change behavior is useless. But knowledge that changes behavior quickly loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes outdated.”
Let me come straight out with it and state, for the record, that I believe the best current truth we have is that we humans, along with all other living beings, are simply massively complex complexes of algorithms. What do I mean by that? Well, let’s take a passage from the terrific Homo Deus by Yuval Noah Harari, which describes this concept at length and in detail:
In recent decades life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather, emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals. What does this mean? Well, let’s begin by explaining what an algorithm is, because the 21st Century will be dominated by algorithms. ‘Algorithm’ is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is and how algorithms are connected with emotions. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation but the method followed when making the calculation.
Consider, for example, the following survival problem: a baboon needs to take into account a lot of data. How far am I from the bananas? How far away is the lion? How fast can I run? How fast can the lion run? Is the lion awake or asleep? Does the lion seem to be hungry or satiated? How many bananas are there? Are they big or small? Green or ripe? In addition to these external data, the baboon must also consider information about conditions within his own body. If he is starving, it makes sense to risk everything for those bananas, no matter the odds. In contrast, if he has just eaten, and the bananas are mere greed, why take any risks at all? In order to weigh and balance all these variables and probabilities, the baboon requires far more complicated algorithms than the ones controlling automatic vending machines. The prize for making correct calculations is correspondingly greater. The prize is the very survival of the baboon. A timid baboon — one whose algorithms overestimate dangers — will starve to death, and the genes that shaped these cowardly algorithms will perish with him. A rash baboon —one whose algorithms underestimate dangers — will fall prey to the lion, and his reckless genes will also fail to make it to the next generation. These algorithms undergo constant quality control by natural selection. Only animals that calculate probabilities correctly leave offspring behind. Yet this is all very abstract. How exactly does a baboon calculate probabilities? He certainly doesn’t draw a pencil from behind his ear, a notebook from a back pocket, and start computing running speeds and energy levels with a calculator. Rather, the baboon’s entire body is the calculator. What we call sensations and emotions are in fact algorithms. The baboon feels hunger, he feels fear and trembling at the sight of the lion, and he feels his mouth watering at the sight of the bananas. 
Within a split second, he experiences a storm of sensations, emotions and desires, which is nothing but the process of calculation. The result will appear as a feeling: the baboon will suddenly feel his spirit rising, his hairs standing on end, his muscles tensing, his chest expanding, and he will inhale a big breath, and ‘Forward! I can do it! To the bananas!’ Alternatively, he may be overcome by fear, his shoulders will droop, his stomach will turn, his legs will give way, and ‘Mama! A lion! Help!’ Sometimes the probabilities match so evenly that it is hard to decide. This too will manifest itself as a feeling. The baboon will feel confused and indecisive. ‘Yes . . . No . . . Yes . . . No . . . Damn! I don’t know what to do!’
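Harari’s baboon can be caricatured in a few lines of code: the “feeling” is just the sign of a weighted calculation over the inputs. All weights below are invented purely for illustration:

```python
def baboon_decision(hunger, lion_distance, banana_distance):
    """Toy expected-value calculation: go for the bananas or flee?

    Every weight here is made up for illustration; the point is only
    that the 'feeling' is the output of a calculation over the inputs.
    """
    reward = hunger * 1.0                 # hungrier -> bananas worth more
    risk = max(0.0, 3.0 - lion_distance)  # closer lion -> higher risk
    effort = 0.1 * banana_distance        # farther bananas -> more effort
    return "forward" if reward - risk - effort > 0 else "flee"

baboon_decision(hunger=5, lion_distance=10, banana_distance=2)  # "forward"
baboon_decision(hunger=1, lion_distance=1, banana_distance=2)   # "flee"
```

A timid baboon is one whose risk term is weighted too heavily; a rash one, too lightly. Natural selection is the process that tunes these weights.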
Why does this matter? I think understanding and accepting this point is absolutely critical to being able to construct certain classes of novel and interesting algorithms. “But what about consciousness?” you may ask, “Does this not distinguish humans and raise us above all other animals, or at least machines?”
There is likely no better explanation, or succinct quote, to deal with the question of consciousness than Douglas Hofstadter’s in I Am a Strange Loop:
“In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.”
Let’s accept Hofstadter’s explanation (which is — to paraphrase and oversimplify terribly — that, at a certain point of algorithmic complexity, consciousness emerges due to self-referencing feedback loops) and now hand the mic back to Harari to finish his practical thought:
“This raises a novel question: which of the two is really important, intelligence or consciousness? As long as they went hand in hand, debating their relative value was just an amusing pastime for philosophers, but now humans are in danger of losing their economic value because intelligence is decoupling from consciousness.”
Or, to put it another way: if what I need is an intelligent algorithm to read, parse and tag language in certain reports based on whether humans with a certain background would perceive the report as more ‘growth-y’ vs ‘value-y’ in its tone and tenor, why do I need to discriminate whether the algorithm performing this action has consciousness or not, or which parts of the algorithms have consciousness (assuming that the action can be equally parallelized either way)?
AI vs. human performance
The Electronic Frontier Foundation have done magnificent work pulling together problems and metrics/datasets from the AI research literature in order to see how things are progressing, both in specific subfields and in AI/machine learning as a whole. Very interesting charts on AI versus human performance in image recognition, chess, book comprehension, and speech recognition (keep scrolling down; it’s a very long page with lots of charts).
Alpha male brain switch
Researchers led by Prof Hailan Hu, a neuroscientist at Zhejiang University in Hangzhou, China, have demonstrated that activating the dorsal medial prefrontal cortex (dmPFC) brain circuit in mice flips the neural switch for becoming an alpha male: timid mice turned bold after their ‘alpha’ circuit was stimulated. Results also show that the ‘winner effect’ lingers on, and that the mechanism may be similar in humans. Profound and fascinating work.
Explaining vs. understanding
And finally, generally I find @nntaleb’s tweets pretty obnoxious and low value (unlike his books, which I find pretty obnoxious and tremendously high value), but this tweet really captured me: “Society is increasingly run by those who are better at explaining than understanding.” I pondered last week on how allocators and Funds of Funds are going to allocate to ‘AI’ (or ‘ALIS’). This quote succinctly sums up and generalizes that concern.
And finally, finally, this has nothing to do with Big Compute, AI, or investment strategies, but it is just irresistible: Winnie the Pooh blacklisted by China’s online censors: “Social media ban for fictional bear follows comparisons with Xi Jinping.” Original FT article here (possibly pay-walled) and lower resolution derivative article (not pay-walled) by inUth here. As Pooh says, “Sometimes I sits and thinks, and sometimes I just sits…”
Google recently shared this technical paper: “One model to learn them all” (less technical write-up here by VentureBeat). While the model in and of itself is not transformational, the approach is a pretty big deal as it lays out a template for how to create a single machine learning model that can address multiple tasks well.
And in other Google machine learning news, Google and Carnegie Mellon University ran an experiment using ‘enormous data,’ taking an unprecedentedly huge collection of 300 million labeled images (rather than a more typical one million images) to test whether it’s possible to get more accurate image recognition not by tweaking the design of existing algorithms but by feeding them much, much more data. The answer, unsurprisingly, is yes, you get better-trained models using enormous data sets and having fifty powerful GPUs grind on the data for two months solid.
Quote by Oren Etzioni, the CEO of the Allen Institute for AI (AI2), who produce Semantic Scholar: “What if a cure for an intractable cancer is hidden within the tedious reports on thousands of clinical studies? In 20 years’ time, AI will be able to read — and more importantly, understand — scientific text. These AI readers will be able to connect the dots between disparate studies to identify novel hypotheses and to suggest experiments that would otherwise be missed. AI-based discovery engines will help find the answers to science’s thorniest problems.”
AI is/isn’t taking over the world
Depending on who you ask, AI is either just about to take over the world or is embryonic and trivial in its achievements to date.
In the taking-over-the-world corner, we have this canonical article titled “How AI is taking over the global economy in one chart.” The absolute comparisons of R&D budget sizes in this article (and the oversimplified social conclusions) seem pretty dubious, but the point is most likely directionally correct on the relative size of R&D spending of ‘the big eight’ compared to smaller industrialized nations, as well as the fact that the ability to fund R&D is going to be very decisive for both companies and nations over the next few decades.
For illustrative purposes only. Source: Axios 2017.
And in the embryonic-and-trivial corner, Evolutionary biologist Phil Madgwick points out that, “Artificial intelligence does not mimic natural intelligence, and it is not clear that there have been significant developments toward anything with rabbit-like intelligence, let alone human-like intelligence.”
My view: both of these things are simultaneously true in that while we are far from human-level machines, woe betide companies and countries which are currently under-investing in applied AI R&D.
MOV37, a Fund of Funds (FoF) put out their thesis/manifesto for ALIS (Autonomous Learning Investment Strategies), which, as well as being a handy anthology of every known AI trope in the last 12 months, is also, in my opinion, a pretty accurate perspective on the next wave of AI-driven investing (except for the ‘two people and a laptop’ bit, which just doesn’t jibe with anything we’re seeing in any other machine learning field, per the ‘enormous data’ link above).
The real question this piece left me with is: who is going to decide which ALIS funds to invest in? Here in the Valley, ‘Deep tech’ investors are typically ex-tech entrepreneurs with deep engineering backgrounds, so they somewhat understand what they’re investing in. What’s unclear is how the majority of FoFs and allocators are going to arrange themselves to invest in ALIS machine learning strategies without any actual experience in developing ALIS-type machine learning strategies. Perhaps the FoF strategy will be more the Consumer-VC strategy of ‘just seed a bunch of small things with limited discrimination, let most die, and wait until a couple become scaled breakouts like Instagram/Pinterest/Snapchat and return the fund.’
Time will tell.
Kai Fu Lee, Commence!
And finally, as a genre, I really like commencement speeches. Speakers seem to push themselves to ‘tell their best truth’ as well as address the meaning of their achievements (while keeping it short and accessible).
Here is a great commencement speech to the Engineering School of Columbia University by legendary engineer Kai Fu Lee (of Apple, Microsoft and Google fame).
Vicarious (a buzzy Silicon Valley company developing AI for robots) say they have a new and crazy-good AI technique called Schema Networks. The Allen Institute for Artificial Intelligence and others seem pretty skeptical and demand a throw-down challenge with AlphaGo (or, failing that, some peer-reviewed papers with commonly used terms and a broader set of tests).
In other AI video game news, Microsoft released a video of their AI winning at Ms. Pacman, with an instructive voiceover of how the system works.
I recently stumbled upon Carl Icahn’s Twitter feed which has the tag line: “Some people get rich studying artificial intelligence. Me, I make money studying natural stupidity.” Me, I think in 2017 this dichotomy is starting to sound pretty quaint. See: Overview of recent FAIR (Facebook Artificial Intelligence Research division) study teaching chatbots how to negotiate, including the bots self-discovery of the strategy of pretending to care about an item to which they actually give little or no value, just so they can later give up that item to seem to have made a compromise. Apparently, while they were at it, the Facebook bots also unexpectedly created their own language.
The quantum age has officially arrived
I’ve been jabbering on and pointing to links about quantum computing and the types of intractable problems it can solve for some time (here, here and here), but now that Bloomberg has written a long piece on quantum, we can officially declare: “The quantum age has arrived, hurrah!” Very good overview piece on quantum computing from Bloomberg Markets here.
Your high dimensional brain
We tend to view ourselves (our ‘selfs’) through the lens of the technology of the day: in the Victorian ‘Mechanical Age’ we were (and partly are) bellows and pumps, and now we are, by mass imagination, a collection of algorithms and processors, and possibly living in a VR simulation. While this ‘Silicon Age’ view is probably not entirely inaccurate, it is also, probably, in the grand scheme of things, nearly as naive and incomplete as the Victorian view was. Blowing up some of the reductions of current models, this new (very interesting, pretty dense, somewhat contested) paper points towards brain structure in 11 dimensions. Shorter and easier explainer here by Wired, or even more concisely by the NY Post: “If the brain is actually working in 11 dimensions, looking at a 3D functional MRI and saying that it explains brain activity would be like looking at the shadow of a head of a pin and saying that it explains the entire universe, plus a multitude of other dimensions.”
And finally, three different but complementary technology-enabled approaches to diagnosing and fighting depression:
A basic algorithm with limited data has been shown to be 80-90 percent accurate in predicting whether someone will attempt suicide within the next two years, and 92 percent accurate in predicting whether someone will attempt suicide within the next week.
In a different predictive approach, researchers fed facial images of three groups of people (those with suicidal ideation, depressed patients, and a medical control group) into a machine-learning algorithm that looked for correlations between different gestures. The results: individuals displaying a non-Duchenne smile (which doesn’t involve the eyes in the smile) were far more likely to possess suicidal ideation.
On the treatment side, researchers have developed a potentially revolutionary treatment that pulses magnetic waves into the brain, treating depression by changing the brain’s neurological structure rather than its chemical balance.
Last week I posted a bunch of links pointing towards quantum computing. However, there are other compute initiatives which also offer significant potential for “redefining intractable” for problems such as graph comparison: for example, DARPA’s HIVE, which aims at a 1000x improvement in processing speed (at much lower power) on this problem. Write-up on EE Times of the DARPA HIVE program here.
Exploring long short-term memory networks
Nice explainer on LSTMs by Edwin Chen: “The first time I learned about LSTMs, my eyes glazed over. Not in a good, jelly donut kind of way. It turns out LSTMs are a fairly simple extension to neural networks, and they’re behind a lot of the amazing achievements deep learning has made in the past few years.” (Long, detailed, and interesting blog post, but even if you just read the first few page-scrolls, it’s still quite worthwhile for the intuition of the value and function of LSTMs.)
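For the core intuition, a single LSTM step is just three sigmoidal gates deciding what to forget, what to write, and what to reveal. A scalar sketch (real layers do the same arithmetic on vectors and matrices; the weight layout here is my own simplification):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step.

    w maps each gate name to an (input weight, recurrent weight, bias)
    triple for the forget/input/candidate/output gates.
    """
    gate = lambda k: w[k][0] * x + w[k][1] * h_prev + w[k][2]
    f = sigmoid(gate("f"))   # forget gate: how much old memory to keep
    i = sigmoid(gate("i"))   # input gate: how much new info to write
    g = math.tanh(gate("g"))  # candidate value to write
    o = sigmoid(gate("o"))   # output gate: how much memory to reveal
    c = f * c_prev + i * g   # the 'long-term' cell state
    h = o * math.tanh(c)     # the 'working memory' output
    return h, c
```

The whole trick is that the cell state `c` is updated additively through the forget/input gates, which is what lets gradients (and memories) survive over long sequences.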
FairML: Auditing black box predictive models
Machine learning models are used for important decisions like determining who has access to bail. The aim is to increase efficiency and spot patterns in data that humans would otherwise miss. But how do we know if a machine learning model is fair? And what does fairness in machine learning mean? Paper exploring these questions using FairML, a new Python library that audits black-box predictive models.
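The underlying idea of auditing a black box is simple to sketch: perturb one input feature and measure how much the output moves. The following is a generic perturbation audit in the spirit of FairML, not its exact orthogonalization procedure (the model and feature names are invented):

```python
def audit_feature(model, rows, feature, baseline=0.0):
    """Crude black-box dependence check: replace one feature with a
    baseline value and measure the average shift in the model's output.

    A large value means the model leans heavily on that feature.
    """
    total = 0.0
    for row in rows:
        perturbed = dict(row, **{feature: baseline})
        total += abs(model(row) - model(perturbed))
    return total / len(rows)

# Toy 'bail' model that (unfairly) leans on zip_code as a proxy.
model = lambda r: 0.7 * r["prior_arrests"] + 0.3 * r["zip_code"]
rows = [{"prior_arrests": 1.0, "zip_code": 4.0},
        {"prior_arrests": 2.0, "zip_code": 0.0}]

audit_feature(model, rows, "zip_code")  # non-trivial -> heavy reliance
```

The appeal of this style of audit is that it needs no access to the model’s internals, only the ability to query it, which is exactly the situation regulators and affected citizens are in.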
Fast iteration wins prizes
Great Quora answer on “Why has Keras been so successful lately at Kaggle competitions?” (By the author of Keras, an open source neural net library designed to enable fast experimentation). Key quote: “You don’t lose to people who are smarter than you, you lose to people who have iterated through more experiments than you did, refining their models a little bit each time. If you ranked teams on Kaggle by how many experiments they ran, I’m sure you would see a very strong correlation with the final competition leaderboard.”
Language from police body camera footage shows racial disparities in officer respect
This paper presents a systematic analysis of officer body-worn camera footage, using computational linguistic techniques to automatically measure the respect level that officers display to community members.
And somewhat related (or at least a really nice AR UX for controlling synthesizers): a demonstration of “prosthetic knowledge” — check out the two-minute video with sound at the bottom of the page. Awesome stuff!
It’s that time of year, when the kids get out of school and somehow you’re supposed to have more time to spend reading. I’m going to share a few of my current, hopefully off-the-beaten-path favorites with you. These recommendations are going to focus on good old-fashioned free email subscriptions, kind of like Epsilon Theory. If you want to read great literature, please check out the McSweeney’s store, where the books are as beautiful on the outside as the words are on the inside. And if you want the list of finance-related classics, well, Ben’s already done that work for you here (I can’t recommend Fortune’s Formula highly enough!). So, on to my email list recommendations:
Ostensibly, Bob writes about music and the music business, so this is certainly most applicable for those with an interest in music and the music scene, but Bob’s near-daily communiques are about so much more than music. I’ve been reading Bob for about three years now and his advice for artists is applicable to business leaders as well — primarily to focus on being authentic and not to worry about appearing vulnerable, which is actually humanizing and allows others to bond with you. http://lefsetz.com/wordpress/
I don’t know where I first came across Scott’s blog/newsletter, which is nominally about digital marketing strategy, but it’s now a weekly blessing. He’s a professor at NYU Stern and just sold his consulting business L2, but he’s continued to publish notes that are very much in the Lefsetz vein. Scott’s an expert in his field, and he also understands that transparency and authenticity drive the connection with the reader. His tagline or motto is “life is so rich,” and it is, especially when you’re reading his smart, beautiful, and brutally honest stuff. https://www.l2inc.com/
When it comes to technology and the VC world, my go-to used to be Bill Gurley of Benchmark Capital and his wonderful Above the Crowd (great name; Bill’s super-tall). However, Bill is down to about a post a year of late, so don’t expect much on a regular basis, but consider signing up because when he does post, it’s a must-read. In the meantime, his friend and Benchmark venture partner Scott Belsky has started doing a monthly-ish collection of his thoughts and links to interesting content in the technology and design arena, which he is calling Positive Slope, and I highly recommend it. http://digest.scottbelsky.com/
Tim’s WaitButWhy blog is tech-focused also, but his specialty seems to be explaining Elon Musk’s ambitions in relatively plain but plentiful (like 40,000 words at a time) English for those of us who aren’t engineers, using low-tech stick figure diagrams and clip art. http://waitbutwhy.com/
Lacy Hunt & Van Hoisington
OK, so this is a more straightforward investment management letter, but if you want to understand why interest rates are so stubbornly low in the face of unprecedented “money printing” by central banks around the world (spoiler alert: velocity of money!), you should be reading whatever Lacy and his partner Van Hoisington of Hoisington Asset Management in Austin, Texas are writing. Yes, they run a long-dated Treasury fund and are “talking their book,” but they’ve been so right for so long while almost everybody else in our business has used every 20-basis-point backup in rates as an excuse to call for the Death of the Bond Bull Market. http://www.hoisingtonmgt.com/newsletter
I learned to meditate a few years ago using a simple technique called passage meditation pioneered (or documented!) by Blue Mountain Center of Meditation founder, Eknath Easwaran. You can sign up for a daily dose of wisdom, taken from his book Words to Live By and delivered via email. https://www.bmcm.org/subscribe/
As Ben and I have discussed before on an Epsilon Theory podcast, my view is that quantum computing is going to be truly, truly transformational by “redefining intractable”, as 1Qbit say, over the coming years. My conviction around quantum continues to grow and — to put a pretty big stake in the ground — I believe, at this point, the only open questions are: Which approach will dominate, and how long exactly until we get quantum machines which work on a broad set of real-world questions? I’ve long been a big fan of the applied, real-world progress D-wave have made, and Rigetti too. However, the “majors” like IBM are also making substantial progress towards true “quantum supremacy” with R&D intensive approaches, while other pieces of the ecosystem, such as the ability to “certify quantum states“, continue to fall into place. In the meantime, here is a wonderful cartoon explainer on quantum computing by Scott Aaronson and Zach Weinersmith.
What web searches correlate to unemployment
Well, in order to get the answer to that question you will have to follow this link (and be prepared to blush). The findings were generated by Seth Stephens-Davidowitz using Google Correlate. “Frequently, the value of Big Data is not its size; it’s that it can offer you new kinds of information to study — information that had never previously been collected”, says Stephens-Davidowitz.
Using verbal and nonverbal behaviors to measure completeness, confidence and accuracy
I recently came across Mitra Capital in Boston who have an interesting strategy of “using verbal indicators to judge the completeness and reliability of messages, to form predictions about company performance (via) analysis of management commentary from quarterly earnings calls and investor conferences based on a proprietary and proven framework with roots in the Central Intelligence Agency”, with the underlying tech/methodology based on BIA. They’re running a relatively small fund ($53m AUM in Q1 2017) and have returned an average of 8.5% for the past four years (including a +43% year, and a -12.5% year). Neat NLP approach, although these returns imply more of a “feature than a product” (i.e., a valuable sub-system addition to a larger system, rather than a stand-alone system). But, hey, I said the same thing about Instagram.
Buddhists with attitude / Backtesting: Methodology with a fragility problem
Probably (hopefully!) anyone reading Epsilon Theory has already read Antifragile by Nassim Nicholas Taleb. Many things could be, and have been, said about this book, but the most important one to highlight for my narrow domain application is the massively important (although rarely discussed) distinction between machine learning/big compute approaches and regression-driven back-test approaches. The key distinction is a simple one: does your system gain from exposure to randomness and stress (within bounds), improving the longer it exists and the more events it is exposed to, or does it perform less well under stress and decay with time? Antifragile machine learning systems are profoundly different to the fragile fitting of models.
And finally, since I have already invoked Taleb, and if for no other reason than the line “If someone wonders who are the Stoics I’d say Buddhists with an attitude problem”, here is Taleb’s Commencement address to the American University of Beirut last year.