Working to Protect the World from Bananas

I started writing a follow-up note to The Two Worlds of Data Infrastructure on some new pieces of ‘the Chinese system’ starting to click together: for example, how real-name ID is now required for playing video games, how gait-recognition surveillance systems (so begging for a Monty Python joke, but I really can’t bring myself to make one) are coming online, and how China is advocating globally to normalize the system in other countries.

However, in many ways I don’t think the story here is the ‘play-by-play’ of each system clicking in; rather, I think the story is twofold:

The main story: The inflection point

The main story is the increased pace and arc of the Chinese system overall, not the ‘play-by-play’.

With technology, even totalitarian surveillance technology, there typically is no ‘big bang’, just a bunch of independent systems coming online, getting adopted over time, then getting networked together, resulting in a series of subtle shifts in personal behavior, and then a tipping point.

Having watched this system come online for nearly 20 years, I can say its deployment was fairly limited even up until 2010, but over the last five years it has been absolutely rip-roaring and accelerating, thanks to the same driving forces behind most other tech advances since 2010:

  • Ubiquitous handheld connected devices
  • App adoption
  • Cheap sensors (inc. cameras)
  • Cheap massive data storage
  • Sophisticated statistical algorithms
  • Leaps forward in compute power and cost

All of these advances are so powerful for surveillance, with its inherently big, unstructured data, that I think we are now very close to an inflection point where the system starts to work in a functional day-to-day way, which will then lead to a behavioral tipping point.

The sub-story: The almost total lack of interest in the West

I don’t think the main story is that controversial at this point, i.e., I don’t think anyone, even the Chinese government, denies this system is being built, the intention of it, or that it is starting to work in a practical way.

Therefore, I think the more interesting story in many ways is the sub-story of the willful ignorance of the main story by the West.

I was at an event last week where a new fancy think tank on AI ethics based here in San Francisco was presenting and expounding their tenet of “Working to protect the privacy and security of individuals”, whilst simultaneously welcoming Baidu into their organization.

I’m sorry, but that’s like “Working to protect the world from bananas” while signing up Del Monte as a member.

Bananas.

With hypocritical sprinkles.

And a big ignorant cherry on top.

Anyway, maybe we (in the West) shouldn’t care, and each country should be free to follow its own national agenda without the moralizing of other nations on its approach to technology and human rights.

But then let’s at least say so, or at least admit that “We are all Nationalists now”.

Just in case you think I am uniquely calling out the Chinese system – anyone who thinks the US is not on a path to using problematic algorithms to classify people and determine punishment has not been paying attention.

Check out the text of the recent California SB10 bill to end cash bail … it’s no more than a 10-minute read. It basically sets out that a person whose risk to public safety and risk of failure to appear is determined to be “low” would be released with the least restrictive non-monetary conditions possible. “Medium-risk” individuals could be released or held depending on local standards. “High-risk” individuals would remain in custody until their arraignment.

Contained in the bill (and pretty lightly reported on) are a bunch of clauses that set out the requirement to use a “Validated risk assessment tool” to assess high / medium / low risk, with the definition of a “Validated risk assessment tool” as:

“Validated risk assessment tool” means a risk assessment instrument, selected and approved by the court, in consultation with Pretrial Assessment Services or another entity providing pretrial risk assessments, from the list of approved pretrial risk assessment tools maintained by the Judicial Council. The assessment tools shall be demonstrated by scientific research to be accurate and reliable in assessing the risk of a person failing to appear in court as required or the risk to public safety due to the commission of a new criminal offense if the person is released before adjudication of his or her current criminal offense.

To be clear: this means an algorithm decides who is a risk.

And who knows what ‘demonstrated by scientific research’ means … perhaps it means the tool passed a backtest assuming a normal distribution under an arbitrary definition of a cycle … what could possibly go wrong …
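To make “an algorithm decides who is a risk” concrete, here is a minimal sketch of how such a tool typically works: a score fitted to historical data, then bucketed into categories by cutoffs. Every feature name, weight, and cutoff below is hypothetical and purely illustrative; none comes from any real assessment tool.

```python
# Toy sketch of a pretrial risk-assessment tool. All feature names,
# weights, and cutoffs here are hypothetical, not any real tool's values.
import math

# Hypothetical weights, as if fitted to historical arrest/FTA data.
WEIGHTS = {"prior_arrests": 0.40, "prior_ftas": 0.55, "age_under_25": 0.30}
BIAS = -2.0

def risk_score(person):
    """Logistic score in (0, 1): modeled chance of FTA or re-arrest."""
    z = BIAS + sum(w * person.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def risk_category(score, low_cut=0.3, high_cut=0.6):
    """Bucket a score; the cutoffs are policy choices, not statistics."""
    if score < low_cut:
        return "low"     # released, least restrictive conditions
    if score < high_cut:
        return "medium"  # release or detain per local standards
    return "high"        # detained until arraignment

person = {"prior_arrests": 3, "prior_ftas": 1, "age_under_25": 1}
print(risk_category(risk_score(person)))
```

Note that the cutoffs in `risk_category` are policy choices: nudge `high_cut` and the size of the “high-risk” pool, and hence the detained population, changes without any change to the underlying statistics.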



Comments

  1. cbeirn says:

    In light of Yuval Harari’s “humans are hackable animals” hypothesis, the issue here is not what could go wrong, but where do we stand when it turns out that the algorithm is right – i.e., when it proves to be more accurate than parole officers, juries, judges and the rest of our “justice” apparatus?

  2. Does “Ministry of Deceptive Gaits” ring a gong? (Couldn’t resist - not sorry.)

  3. … and when would we know that “the algorithm” is right, by any quantitative, or by any, means? And who would then adjust “the algorithm”? Surely not hackable humans. No dispute with the hackable human thing - it’s obvious - but your conclusion seems a leap - quantum style, perhaps. But an interesting thought.

  4. The comments on the California proposition on cash bail miss the point, I think. Algorithms were already in place to determine bail - if you’ve got the cash or are willing to pay exorbitant rates to borrow it, you don’t have to stay in jail. That’s an algorithm controlled by “elites” - by people who already have enough and who think money is the best deterrent to flight. Bad algorithm, say many, say I. The problem with replacing the existing legal-elite-controlled algorithm is completeness - statutes try to cover all possibilities, a practical impossibility. How would YOU turn the old algorithm into a statute that combines fairness with justice and does not impact public safety?

  5. In terms of better specification, the Stanford Computational Policy Lab provides a solid discussion and recommendations on how “SB 10 should be amended with respect to the design, deployment, and validation of pretrial risk assessments” here: https://policylab.stanford.edu/media/improving-california-bail.pdf .

    To be clear, I’m all for reform of the bail system away from a cash bail model, but for sure concerned about the algorithmic component of this bill, as currently specified, for the reasons Human Rights Watch note (https://www.hrw.org/news/2018/08/24/human-rights-watch-urges-governor-brown-california-veto-senate-bill-10-california). To quote the relevant passage below:

    SB 10 will require biased, unfair risk assessment tools to determine release eligibility: The incarceration decision is also influenced and, in some cases, determined by actuarial or profile-based risk assessments.[15] These risk assessment tools take limited information about an individual, including arrest and conviction history and create a profile, then make a statistical estimate of the likelihood that individual will get re-arrested or miss a court date, based on data about other people with similar profiles. These tools tend to reinforce the system’s ingrained biases and lack transparency. The data they use, especially arrest and conviction history is greatly skewed by racial and class bias in policing and court outcomes and societal inequities.[16] Even proponents of the tools acknowledge that they will, at best, reflect these biases. Others fear that they will entrench and increase these inequities.[17] Further, the tools are not objective assessors of risk. The risk categories (high, medium and low) required in the proposed legislation, are policy choices, meaning that whoever controls the implementation of the tools can decide how large to make each category. This adjustability of scoring is significant given the proposed scheme in which anyone labelled “high risk” cannot be released pre-arraignment and will have a presumption of preventive detention.[18] The bill places control over implementation of the risk assessment tools entirely in the hands of the judiciary, again giving them unlimited discretion to expand or contract the pool of people ineligible for release.

  6. I’d venture the lack of popular outrage over the surveillance state is driven largely by our human preconception of safety-in-numbers. IOW, if individuals thought only THEIR activities were being recorded, they would be outraged. But because everyone’s activities are being recorded, individuals figure their own worst habits will fall within the bell curve and won’t attract attention. Also, the average consumer hasn’t yet experienced any significant downside to the data privacy breaches so widely reported these past few years, so they just shrug it off and say “Meh, I like the convenience of those FAANG services, so I don’t care if they know I do [embarrassing thing X].”

    As much as I wish the popular consciousness was better informed on this issue, I think we are a long, long way from reaching the requisite critical mass for anything to be done about it.

  7. cbeirn says:

    Based on the language in the legislation, the metrics for right vs. wrong are pretty straightforward: do they show up for their court appearance and do they commit a crime while they’re out on bail. As for algorithm adjustments, it’s called machine learning. How did AlphaZero learn to play chess?

  8. Machine learning is still in its infancy - do you know how long it took for machine learning to play competitive chess? Stay with reality, not with hype, and stay with reasonable timeframes when discussing matters of current legal import.

  9. My issues, my question, is with the elevation of elite decision processes over democratic ones. Not all on either side is always correct, but when the people choose a side, academic (aka elite, aka conservative with lowercase “c”) sides need to contemplate (at which we are so adept) rather than reject (even passively, even without apparent aggression). Reference to academic studies makes this comparison worse, not better. But what do I know? I’m just a cat …

  10. We may be closer than expected - the system is fragile and any significant disruption could initiate popular chaos - dependence on what is now called technology is fragile itself. To what extent is instant, limited, controlled, monitored communication necessary to a thriving and productive society?


DISCLOSURES

This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation to any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted, that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor’s individual circumstances and objectives.