I started writing a follow-up note to ‘The Two Worlds of Data Infrastructure’ on some new pieces of ‘the Chinese system’ starting to click together: for example, how real-name ID is now required for playing video games, how gait-recognition surveillance systems (so begging for a Monty Python joke, but I really can’t bring myself to make one) are coming online, and how China is advocating globally to normalize the system with other countries.
However, in many ways I don’t think the story here is the ‘play-by-play’ of each system clicking into place; rather, I think the story here is twofold:
The main story: The inflection point
The main story is the increased pace and arc of the Chinese system overall, not the ‘play-by-play’.
With technology, even totalitarian surveillance technology, there typically is no ‘big bang’: just a bunch of independent systems coming online, getting adopted over time, then getting networked together, resulting in a series of subtle shifts in personal behavior, and finally a tipping point.
Having watched this system come online for nearly 20 years, I can say that deployment of the Chinese technology-driven domestic surveillance system was pretty limited even up until 2010, but it has been absolutely rip-roaring and accelerating over the last five years, thanks to the same driving forces behind most other tech advances since 2010:
- Ubiquitous handheld connected devices
- App adoption
- Cheap sensors (inc. cameras)
- Cheap massive data storage
- Sophisticated statistical algorithms
- Leaps forward in compute power and cost
All of these advances are so powerful for surveillance, with its inherently big, unstructured data, that I think we are now very close to an inflection point where the system starts to really work in a functional day-to-day way, which will then lead to a behavioral tipping point.
The sub-story: The almost total lack of interest in the West
I don’t think the main story is that controversial at this point, i.e., I don’t think anyone, even the Chinese government, denies this system is being built, the intention of it, or that it is starting to work in a practical way.
Therefore, I think the more interesting story in many ways is the sub-story of the willful ignorance of the main story by the West.
I was at an event last week where a fancy new AI-ethics think tank based here in San Francisco was presenting and expounding its tenet of “Working to protect the privacy and security of individuals”, while simultaneously welcoming Baidu into its membership.
I’m sorry, but that’s like “Working to protect the world from bananas” while signing up Del Monte as a member.
With hypocritical sprinkles.
And a big ignorant cherry on top.
Anyway, maybe we (in the West) shouldn’t care, and each country should be free to follow its own national agenda without the moralizing of other nations on its approach to technology and human rights.
But then let’s at least say so, or at least admit that “We are all Nationalists now”.
Just in case you think I am uniquely calling out the Chinese system – anyone who thinks the US is not on a path to using problematic algorithms to classify people and determine punishment has not been paying attention.
Check out the text of the recent California SB10 bill to end cash bail … it’s no more than a 10-minute read. It basically sets out that a person whose risk to public safety and risk of failure to appear is determined to be “low” would be released with the least restrictive non-monetary conditions possible. “Medium-risk” individuals could be released or held depending on local standards. “High-risk” individuals would remain in custody until their arraignment.
Contained in the bill (and pretty lightly reported on) are a bunch of clauses that set out the requirement to use a “Validated risk assessment tool” to assess high / medium / low risk, with the definition of a “Validated risk assessment tool” as:
“Validated risk assessment tool” means a risk assessment instrument, selected and approved by the court, in consultation with Pretrial Assessment Services or another entity providing pretrial risk assessments, from the list of approved pretrial risk assessment tools maintained by the Judicial Council. The assessment tools shall be demonstrated by scientific research to be accurate and reliable in assessing the risk of a person failing to appear in court as required or the risk to public safety due to the commission of a new criminal offense if the person is released before adjudication of his or her current criminal offense.
To be clear: this means an algorithm decides who is a risk.
And who knows what ‘demonstrated by scientific research’ means … perhaps it means it passed a backtest assuming normal distribution under an arbitrary definition of a cycle? … what could possibly go wrong …
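To make the concern concrete, here is a minimal, entirely hypothetical sketch of what such a tool reduces to in practice: a model score, a couple of thresholds, and a tier. The function name, the thresholds, and the score itself are my own inventions for illustration; they do not come from SB10 or from any real assessment instrument.

```python
# Hypothetical sketch of a pretrial "risk assessment tool".
# The thresholds (0.3, 0.7) and the score are invented for
# illustration; they do not come from SB10 or any real tool.

def assess_risk(score: float) -> str:
    """Map a model score in [0, 1] to SB10-style risk tiers."""
    if score < 0.3:       # arbitrary cutoff
        return "low"      # released, least restrictive non-monetary conditions
    elif score < 0.7:     # arbitrary cutoff
        return "medium"   # released or held, depending on local standards
    else:
        return "high"     # held in custody until arraignment

print(assess_risk(0.12))  # low
print(assess_risk(0.55))  # medium
print(assess_risk(0.91))  # high
```

The point of the sketch is how much weight those two arbitrary cutoffs carry: move a threshold a few hundredths and a person’s pretrial custody outcome flips, which is exactly why “demonstrated by scientific research” deserves more scrutiny than it got.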