Sanders in June: Polarizing…Except in Media

[Figure: Sanders Narrative Map as of May 31, 2019. Source: Quid, Epsilon Theory]



  1. Or it could be that the media’s audience is so small and unrepresentative that it doesn’t matter. For example, Fox News gets about 2.4M daily viewers. The NY Times has about 3.5M digital subscribers and maybe 1M print subscribers. AOC has 4.3M Twitter followers and 1.8M IG followers. Donald Trump has 61M Twitter followers.

    Brad Parscale, Trump’s digital campaign manager, has a goal of 80M FB/Twitter followers who, when requested, will share/post/like a post. This enables the campaign to reach followers and friends of followers with more personalized messages at virtually zero cost. That is, they don’t have to buy political ads to reach their constituents. Maybe there are restrictions placed upon political ads on FB, but they won’t affect people sharing/posting/liking posts.

    In the last election, 131M people voted and 61M voted for Trump. If Parscale can get to 80M people who will share/post/like etc. upon request, they probably vote and encourage their friends to vote. If Sanders buys media coverage/stories/publicity from organizations like the NY Times, which have maybe 4M or so people who see unpersonalized content in mass media, what does it matter? Does it matter when they see it?

    They are bringing a knife to a gun fight.

  2. This is a common talking point, but I personally don’t think it stands up.

    Most of what is shared even through those methods IS a link to or deeply related to some specific piece of content, whether it is sponsored content, misinformation, a blog, a NY Times article or something else. The DJT Facebook page built posts around a Washington Examiner article, a Daily Caller article and a NY Post article in the last day alone. Some dozen blogs riffed off of every single one of his Tweets/Facebook posts. We include all of those types of sources in our universe.

    More importantly, we think it’s important to think about common knowledge as something other than “how often is something shared.” That’s why we don’t focus on the volume of coverage. Even if we don’t have all the transmission vectors, we can still see the evidence of their influence on the structure of narratives within our dataset - and we believe there are deep feedback mechanisms between both formal and informal actors.

    But missionaries are a thing, and they matter. Focusing on subscription levels misses the point, IMHO.

  3. Does your analysis do any kind of weighting of individual articles, or does each one stand on equal footing? Whether by viewership, shares, or whatever.
    I wouldn’t try to argue that one way is better than another, but it seems like there’s a whole universe of different angles on the data. What constitutes a salient cluster could have different outcomes depending on one-article-one-vote, one-publisher-one-vote, one-reader-one-vote, etc.

  4. You make excellent points, especially about the missionary supplying original content and your analysis of it. The concept I’m struggling to understand pertains to a second measure: where content is selected for effect, personalized, slanted in a different direction, and then rebroadcast to a much larger audience.

    If the NY Times’s reach ends around 4M subscribers, at which point they lose control of their message and other people leveraging social media repurpose it and broadcast it to 63M+, isn’t that medium, its reach, and its ability to repurpose a message also important?

  5. To be sure, social media is critically important. Scraping that data has proven to be mostly trivial, but analysis of it is also rife with Type 1 errors. Because we are focused on meaning, we think it is entirely possible to understand a great deal about social media simply by acknowledging that the purpose of much content creation is transmission through those vectors, and that much of the rest is informed by the implied common knowledge from them.

    But the main thing that informed our thinking is this: social media scraping is very effective at telling you what people think (and what they want others to think about them), but that isn’t what we’re concerned about. We are concerned with common knowledge - what everyone thinks everyone else thinks. There must be a natural perception that everyone else (or at least everyone within some group) will have seen something to create a strong form of this. For this reason, we think cohesive content is either an indicator of primary missionary activity (creating common knowledge) or secondary missionary activity (repeating it because of the attractiveness of publishing things everyone thinks everyone thinks). The former captures ideas upstream of social media, and the latter captures ideas downstream, but in neither case would we be ignoring social channels’ incredible influence.

    It’s a big driver of the panopticonesque power we think has accelerated all of these things we write about.

  6. We struggle with it, in part because common knowledge - what everyone knows everyone knows - in the wild can be either inductively or deductively derived. We can observe common knowledge by identifying closeness of language, or we can hypothesize that certain parties are missionaries and observe what they are saying and how it resonates. In general, it is our theory that volume measures tend to tell you what people are seeing and what they think themselves, but not necessarily what they believe that the crowd believes. That’s a bit different when something shows up in the NY Times vs. social media, of course, and for many of our analyses (the financial research in ET Pro sector analyses, for example) we do some of what you are describing by explicitly limiting our universe of sources. It’s not ‘weighting’ per se, except inasmuch as it is weighting non-major publishers at zero. So we do combine both approaches to diversify the universe of angles at which we approach the question.

    For politics, however, our observation was that different channels have native tendencies that would bias the results if we adopted a similar approach. Anecdotally, I have looked at our results so far, and other than cutting off certain candidates entirely (due to inadequate content), view/share weighting or source importance-truncation don’t appear to really change the rank ordering on our key metrics all that much. We’ll keep eyeballing that in case they diverge, and if they do we’ll be sure to publish it (since the reasons, I suspect, would be telling us something very interesting).

  7. My aging brain struggles to fully understand the method of narrative analysis Ben and Rusty apply. However, MY interpretation is that they aren’t quite yet sure (but will get there, I think) how to interpret the Quid data on political issues either.
    For when they apply Greenspanian double and triple negatives to their conclusions, they are either 1) not sure of the interpretation or 2) want to make the reader unsure of what they mean.
    I prefer to believe the former.
    I am quite pleased that this Quid approach has been discovered and that I am (slowly, very slowly) understanding it.
    Oh, and quite happy that my lexicon gets to grow with almost every article.
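The audience arithmetic in comment 1 can be checked with a quick back-of-the-envelope comparison. This is only a sketch using the approximate figures quoted in the thread; none of these numbers are precise:

```python
# Approximate audience figures quoted in the comment (all rough estimates).
audiences = {
    "Fox News daily viewers": 2.4e6,
    "NYT subscribers (digital + print)": 3.5e6 + 1.0e6,
    "Trump Twitter followers": 61e6,
    "Parscale share-network target": 80e6,
}
voters_2016 = 131e6  # total 2016 voters, per the comment

# Print each audience as a share of the 2016 electorate, largest first.
for name, n in sorted(audiences.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {n / 1e6:.1f}M ({n / voters_2016:.0%} of 2016 voters)")
```

The point of the comparison survives the roughness of the inputs: a share network on the order of the electorate itself dwarfs any individual outlet's direct subscriber base.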
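Comment 3's question about weighting can be made concrete with a toy example. The clusters, article counts, and readership figures below are invented purely for illustration; the takeaway is that the salience ranking of clusters can flip depending on whether you count articles or readers:

```python
from collections import defaultdict

# Hypothetical (cluster, estimated readership) pairs -- made-up data.
articles = [
    ("electability", 5_000_000),  # one widely read mass-media piece
    ("socialism", 50_000),        # several small-audience pieces
    ("socialism", 40_000),
    ("socialism", 30_000),
    ("media bias", 200_000),
    ("media bias", 150_000),
]

def salience(weight_fn):
    """Rank clusters by total weight under a chosen weighting scheme."""
    totals = defaultdict(float)
    for cluster, readers in articles:
        totals[cluster] += weight_fn(readers)
    return sorted(totals, key=totals.get, reverse=True)

print(salience(lambda r: 1))  # one-article-one-vote
print(salience(lambda r: r))  # one-reader-one-vote
```

Under one-article-one-vote the three small "socialism" pieces dominate; under one-reader-one-vote the single widely read "electability" piece does, which is exactly the sensitivity the comment is asking about.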
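Comment 6 mentions observing common knowledge by "identifying closeness of language." The simplest version of that idea is cosine similarity between bag-of-words vectors; the sketch below uses invented sentences, and real narrative analysis platforms use far richer representations than raw word counts:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts as bag-of-words count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Invented headlines: two sharing language, one unrelated.
a = "sanders polls well but electability doubts persist"
b = "despite strong polls electability doubts about sanders persist"
c = "the fed held interest rates steady this quarter"

print(round(cosine(a, b), 2))  # shared language -> high similarity
print(round(cosine(a, c), 2))  # unrelated language -> near zero
```

A cluster whose members score high against each other on a measure like this is "cohesive" in the sense the comment describes, regardless of how many times any one article was shared.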


This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor’s individual circumstances and objectives.