The Fifteen Faces of Fiat News

Editor’s Note: We’re putting this note outside of our subscription paywall, because we want as many people as possible to learn about our vision – to inoculate the world against the weaponized narratives of Big Tech, Big Media and Big Politics – and the concrete steps we are taking to achieve that goal. If you share those goals and would like to support our efforts … join us!

Several years ago, we introduced the concept of fiat news on these pages.

It is a simple idea. In the same way that money created by fiat debases real money, news created by fiat debases real news.

Although it misinforms, fiat news should not be understood as misinformation, at least in the colloquial sense. News which contains false information or distorted interpretations of facts can be better thought of as counterfeit news. Like counterfeit money, enough counterfeit news can debase the real thing, too. Yet even considering how widespread counterfeit news has become, fiat news exists on such a massive scale that its power to debase is in a different category. Nor is fiat news synonymous with bias. We think bias represents a causal explanation for a very specific kind of recurring fiat news.

Most “media watchdogs” are in the business of identifying one of those two things: misinformation or bias. The problem with these efforts, beyond the fact that they do not capture the full scope of actions which debase the information content of news, is that it is practically impossible to report on misinformation or bias in a manner that is not itself colored by the opinions of the author, or else designed to shape how the reader interprets facts and events. While they may in some cases offer a useful service in the face of blatant lies published through politically invested news outlets, too often they become yet another source of fiat news.

Why? Because fiat news is the presentation of opinion as fact. Fiat news is news which is designed not to provide information for the reader to process, but to provide interpretations of information for the reader to adopt. Fiat news is the primary vector for nudging, for shaping common knowledge – what everybody knows everybody knows. Fiat news is how governments, parties, corporations and other institutions in a free and always-connected society meticulously shape that common knowledge – then tell us that it was our idea.

By design, fiat news isn’t always easy to spot. Outside of editorial pages, it is rare indeed that an expression of opinion as fact would include obvious phrases like “we believe” or “we think.” Instead, media outlets guide interpretations through more subtle means that may sometimes be as invisible to the author and editor as they are to the reader.

Several years ago, we also introduced what we called the narrative machine.

It is also a simple idea. We think that recurring patterns in language make it possible to identify narratives. We also think they make it possible to identify similar patterns indicative of the various types of fiat news. We think a revolution in mainstream natural language processing (NLP) software and techniques has made this feasible at a high level of detail. We think a revolution in the availability of low-cost compute power has made this feasible at scale.

Finally, some months ago we made a couple of quiet announcements. The first was that we would be devoting time and capital to develop tools to inoculate citizens against weaponized narrative, especially from media sources (in case you were wondering where I have been the last 6-7 months). Ben referred to the goal of this effort as the Narrative Early Warning System, or NEWS. The second was the announcement of an advisory partnership with Vanderbilt University, who through a major gift by long-time Epsilon Theory pack members Suzanne and Patrick McGee established the McGee Applied Research Center for Narrative Studies there.

This research is the first step toward our goals for both of these things. We are working to develop applications that will inform citizens about the fiat news content of their media consumption. Our plan is that these applications will do so in both real-time and, if users elect to make this data available, over longer periods of time.

We are also excited to work with the students and faculty attached to the Applied Research Center for Narrative Studies to find flaws with these methods and uncover better ones. Ultimately, we think that it is important that criticism of the current role of media be leavened with a passion for the indispensable function of the fourth estate in a democracy. We look forward to mutual accountability with our partners.

The early basis for both that research engagement and our NEWS application development will be the framework for identifying fiat news that we have been scaffolding in earnest over the last 18-24 months. Over the next several months, we will begin regularly publishing our insights about how topics, outlets and the general media environment are infected by attempts to tell you how to think about the news. In the months and years thereafter, we hope to be in a position to deliver tools to citizens that can be directed at any information consumed – not just what we decide to write about here at Epsilon Theory.

Allow me to be the first to introduce you to the Fifteen Faces of Fiat News.

1. Appeals to Authority
2. Assumptions of Causality
3. Bogeymen
4. Confidence and Doubt
5. Content Context
6. Coverage Selection
7. Generalized Attribution
8. Interpretive Language
9. Missionary Statements
10. Missionary Warfare
11. Question Begging
12. Response Coverage
13. Rhetorical Questioning
14. Superlative Language
15. Unsourced Attribution

Appeals to Authority

Photo: AP

Now, there are a great many ways to tell readers how they should interpret the facts of a story or event without actually telling them. Few are more effective, and probably none thornier to parse ethically, than telling them ‘here are the facts, and also you should know that someone smarter and more authoritative than you has this opinion about them.’

In a wide range of news stories, of course, it is the fundamental role of the responsible journalist to provide supplemental facts from credible “expert” sources. It would be practically impossible to present useful information about the James Webb Space Telescope, Large Hadron Collider or new BA.5 variant of Covid-19, for example, without information provided by scientific experts. The same goes for an especially wonkish article about an upcoming piece of legislation, a new plan to build a light rail system in a metropolitan area, or a modified K-12 curriculum standard coming before state education commissioners. This is not what we mean by fiat news, and this is not what we have designed our model to capture.

In some cases, the information being provided by an individual of authority or influence may not necessarily lean upon their authority, influence or expertise, but be included simply because they are the primary source of the information. Think something like a chief financial officer reporting earnings or a chief executive officer discussing the company’s growth plans. The news is the thing they are saying, and in some cases the news is that they are saying it. This is also not what we mean by fiat news, and this is not what we have designed our model to capture.

To be fair, even though it is obviously sensible to reference authoritative sources for either of the reasons above, there remains a clear opportunity for the author to inject opinion. The journalist often has the power to select the authority, many of whom you may be surprised to learn do not always agree with one another. If you require exaggerated examples, imagine the expert on CRT selected for a segment by Fox News and the expert on Christian Nationalism selected by the New York Times for a feature piece. Beyond the opportunities to select the expert, the journalist may also select the specific statements used. These opportunities for the injection of opinion are very real. Based on our research to-date, we also think that these techniques are nearly always revealed by other faces of fiat news. Accordingly, our approach to seeding models of linguistic patterns associated with Appeals to Authority accepts that we may underreport injected opinion (i.e. false negatives). This will be a recurring theme.

Instead, our aim is more parsimonious. NEWS seeks to identify linguistically three recurring mechanisms for injecting opinion: (1) appeals to authority which are both generic and explicitly confidence-shaping by nature (e.g. “experts agree”), (2) author summarization of the consensus of authorities and (3) author summarization of the aggregated implications of “research” or “analysis” on a topic. In other words, our language model is designed to capture not experts or authorities reporting facts or their opinions, but rather patterns that would generally only arise in the course of an attempt by the journalist to dampen or emphasize a particular interpretation by the reader of the facts presented in the article.
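To make the three mechanisms concrete, here is a minimal sketch of the kind of seed patterns such a model might start from. The pattern list is invented for illustration and is emphatically not the NEWS project’s actual model.

```python
import re

# Hypothetical seed patterns for the three mechanisms described above.
# A production model would need far richer pattern families and context.
APPEAL_PATTERNS = [
    # (1) generic, confidence-shaping appeals to authority
    r"\b(experts|scientists|economists|analysts)\s+(agree|say|warn|caution)\b",
    # (2) author summarization of the consensus of authorities
    r"\b(the\s+)?consensus\s+(among|of|is)\b",
    r"\bwidely\s+(regarded|believed|seen)\b",
    # (3) author summarization of aggregated "research" or "analysis"
    r"\b(research|studies|analysis)\s+(shows?|suggests?|indicates?)\b",
]

def appeal_to_authority_hits(text: str) -> list[str]:
    """Return phrases suggesting a generic Appeal to Authority."""
    hits = []
    for pattern in APPEAL_PATTERNS:
        hits += [m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE)]
    return hits
```

Note that a named expert reporting a fact (“the CFO reported earnings”) matches nothing here, which is exactly the parsimony described above: the patterns fire only on language that generally arises when a journalist is shaping interpretation rather than sourcing information.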

Assumptions of Causality


For those in the financial industry – our largest universe of readers – this method for the injection of opinion would take all of ten seconds to explain. You see, on a daily basis, the content published by practically every financial ‘news’ outlet is dominated by explanations for the movements in the prices of stocks, indexes and markets more broadly. I don’t think it’s taken yet, so call it Rusty’s Law: For any event above a critical threshold of significance, there will be at least one ‘news’ article describing the stock market as declining/rising as a result of that event.

This is pure fiat news, and among the most easily manipulated by those with an interest in guiding how financial market events are perceived and framed. Thankfully, once a reader begins paying attention to it, it is also among the easiest faces of fiat news to detect.

However, Assumptions of Causality exist in all forms of news. Not all of them are so easy to detect as in news about markets. That’s because other social spheres generally lack the kind of daily scoreboard for which every news consumer demands an explanation and for which no one has an explanation that could even vaguely be described as factual. But constructions suggesting implicitly or explicitly that ‘because X, then Y’ are widespread. They serve a useful function for narrative creation in that they connect facts and events to powerful memetic forces – that is, to other things that people care deeply, even innately, about.

Assigning causal relationships can be used to malign or whitewash the intent of actors in events. It can be used to establish a logical chain intended to guide a reader to a desired conclusion or opinion without explicitly stating that conclusion. It can be used to amplify or relax the perceived importance of a topic or event.

As with all of these language models, there will nearly always be a baseline of causal assumptions, many of which are benign. Abnormal density of this language at a point in time, around a certain topic, or from a particular media outlet, therefore, becomes the operative focus of the analysis.
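That operative focus, a historical baseline plus deviations from it, can be sketched in a few lines. The marker list and the numbers are invented stand-ins for whatever a real model would actually measure.

```python
import statistics

# Invented causal markers; a real model would use much richer features.
CAUSAL_MARKERS = ("because", "due to", "as a result of", "driven by")

def causal_density(text: str) -> float:
    """Causal markers per 100 words: a crude per-article density."""
    words = text.split()
    hits = sum(text.lower().count(m) for m in CAUSAL_MARKERS)
    return 100.0 * hits / max(len(words), 1)

def abnormality(today: float, history: list[float]) -> float:
    """Z-score of today's density against a historical baseline.
    Values near zero are the benign baseline; large values flag
    abnormal density around a topic, outlet or point in time."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history) or 1.0
    return (today - mu) / sigma
```

The point of the z-score framing is that no single article is graded; only a density well outside the historical distribution becomes interesting.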


Bogeymen

Source: AP

The Bogeyman face of fiat news means the knowing use of linguistic patterns in reference to institutions and individuals that everybody knows everybody knows have become pejorative. That does not mean that we want to flag every simple mention of “George Soros” or “the Koch Brothers.” These people do a lot of things, and many of those things are newsworthy.

And yet, if a journalist elects to invoke Godwin’s law and compare some event or person to the Nazi Party and/or Hitler, more often than not we feel comfortable judging that to be an injection of opinion, even if the underlying comparison (e.g. “They were both failed aspiring artists!”) is technically factual. While this is an extreme example, there are hundreds of milder examples which incorporate varying degrees of negative affect. There are no ways to reference “shadowy cabals” or “corrupt politicians” that do not convey the author’s intent to affect the state of mind of the reader as they consume the information, friends.

Bear in mind with this category (and all others, frankly) that we aren’t deeply concerned about having some baseline result from our model that indicates the presence of fiat news even when the examples might seem to some readers to be innocent. It isn’t a grade intended to be applied to a single news article in isolation, in which less is always better. Rather, it is intended to draw attention to changes in aggregate levels over time, comparative levels across outlets, comparative levels across topics, events and individuals referenced, and changes in levels experienced in an individual reader’s news consumption habits.

Confidence and Doubt

Journalists have incredible breadth of options when it comes to affecting the reader’s perception of facts and events. Explicit and implicit assessments of confidence and doubt when presenting those facts or descriptions of events are among the most effective – and common.

Many of these pass by our eyes without notice. Yet news articles describe the implications of events as clear and obvious with regularity. Future consequences of events are described as probable, likely, unlikely, inevitable without hesitation. Facts are widely acknowledged and states of the world are widely understood. Should the journalist suffer the misfortune of a hawk-eyed editor, there is still the option of a strategically timed flip into the subjunctive.

As with the other faces, there will always be a baseline of innocent usage that will trigger any language model used to identify Confidence and Doubt. But abnormal rates of attempts to instill confidence in a fact, quoted speaker or conclusion on the one hand, and doubt in a fact, quoted speaker or conclusion on the other, absolutely debase the value of news.
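A toy version of such a model is just a pair of lexicons scored with word-boundary matching. These word lists are invented examples drawn from the kinds of phrases mentioned above, not the project’s actual lexicons.

```python
import re

# Invented seed lexicons: words that instill confidence in a fact or
# speaker, and words that instill doubt. Illustration only.
CONFIDENCE = ["clearly", "obviously", "inevitable", "undoubtedly",
              "widely acknowledged", "widely understood"]
DOUBT = ["supposedly", "allegedly", "purportedly", "so-called"]

def confidence_doubt_score(text: str) -> tuple[int, int]:
    """Return (confidence hits, doubt hits) for a passage of text."""
    t = text.lower()
    conf = sum(len(re.findall(r"\b" + re.escape(w) + r"\b", t))
               for w in CONFIDENCE)
    doubt = sum(len(re.findall(r"\b" + re.escape(w) + r"\b", t))
                for w in DOUBT)
    return conf, doubt
```

As with the causal-density sketch, the interesting signal is not any single article’s score but abnormal rates relative to a baseline.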

Content Context

We haven’t figured out how to model this one yet. We think we will, but it’s a tough nut to crack.

We are still including it here because we want to present a full picture of how fiat news happens, and that’s a story that cannot be told without Content Context. We’re also including it because it’s important enough that we’d get half a dozen comments telling us that we missed something important if we didn’t.

By Content Context we mean the intentional or negligent graying of lines between news and opinion content. In online and print media, this happens in many ways. Outlets publish a news piece but distribute it on social media with an opinion-loaded tweet. They publish a nominally news piece but attach an opinion-loaded, click-bait headline. They physically nestle opinion and “analysis” articles among news pieces covering a similar topic. They neglect to flag opinion and analysis articles in headers, sub-headers, categories or meta-data. They treat “explainers” as a form of news with no disclosure. They treat “feature” pieces, which are typically just long-form opinion pieces that feel like reportage, as a form of news with no disclosure.

Language models are no good for categorizing this face of fiat news, since some news content is so rife with language indicative of opinions that identifying an unidentified opinion piece and attempting to distinguish it algorithmically from news content is often…shall we say, problematic. Likewise, there is a temporal element to how outlets massage the perception of news through Content Context that is difficult to capture. That is, on the average news webpage or social media account, placement changes. Tweets are deleted. Headlines are modified, almost always without disclosures that would be present for a similar change in the article’s body text.

It is disappointing, of course, not to be able to model this behavior, not least because outlets use this technique to such great effect. As we continue to explore a solution, we are open to ideas.

Coverage Selection

Like Content Context, Coverage Selection is not so much a problem to be solved by the development of language models. Unlike Content Context, however, we think we have more than adequate tools to identify and measure Coverage Selection.

The idea here is simple. Media outlets influence how readers think about the news by choosing what they deem newsworthy enough to cover, and what they do not. Newsworthiness is unquestionably an opinion, if an unavoidable and necessary one. Yet it is an opinion that manifests less at the micro level of the individual article, and more at the meta layer of the overall landscape of news coverage.

Volume of coverage on topics, both at an outlet level and aggregated across news sources, informs perceptions of common knowledge, what people believe everyone else believes. High volume reinforces confidence in the underlying contentions, especially if they align. Low volume reinforces doubt in the underlying contentions, whether in their veracity or in their importance.

Unlike our other measures, there can be no baseline of a topic’s coverage volume. That is, it would be dishonest to claim the ability to say what the correct amount of coverage is, and then to measure deviations from that. What we can do is identify an outlet-level relative measure (i.e. how much more or less coverage is an outlet devoting to a topic than peer outlets) and an aggregate-level measure of multi-polarity in perceptions of newsworthiness (i.e. how much the volume of coverage differs, on average, among outlets). The former measure may be but isn’t always a sign of the dreaded “bias”, since some outlets naturally report on topics more than others. The New York Post is going to cover Yankees games more than the Miami Herald. The Hill will cover the minutiae of Capitol Hill goings-on more than Bloomberg News.

As a result of this, the outlet-level measure is a thing we consider independent of our language-model driven measures for the other faces of fiat news. The aggregate measure, too, while less prone to harmless false positives, is best as a standalone assessment of divisive perceptions of newsworthiness and how that might be affecting the conclusions of citizens with narrow media exposure (i.e. basically everyone).
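As an illustration of the two measures, here is a toy calculation with invented outlet names and article counts. The specific statistics (peer-relative ratio and coefficient of variation) are our stand-ins for whatever the project actually computes.

```python
import statistics

# Hypothetical article counts by outlet for a single topic.
counts = {"Outlet A": 40, "Outlet B": 10, "Outlet C": 10}

def relative_coverage(outlet: str, counts: dict[str, int]) -> float:
    """Outlet-level measure: this outlet's volume vs. the peer average."""
    peers = [v for k, v in counts.items() if k != outlet]
    return counts[outlet] / statistics.mean(peers)

def multipolarity(counts: dict[str, int]) -> float:
    """Aggregate measure: dispersion of coverage volume across outlets
    (coefficient of variation; 0 means all outlets cover it equally)."""
    vals = list(counts.values())
    return statistics.pstdev(vals) / statistics.mean(vals)
```

In this toy example, Outlet A covers the topic at four times the peer average, and the aggregate dispersion is high: exactly the pattern that may, or as with the Yankees and the New York Post may not, indicate divisive perceptions of newsworthiness.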

Generalized Attribution

A face of fiat news that journalism schools and style guides have been trying (unsuccessfully, I might add) to eradicate for decades, Generalized Attribution is the simple trick of attributing ideas, statements and assessments to conveniently amorphous entities. In 2022, major national publications, websites and news magazines still attribute all manner of judgments to “some”, “many” or “most” of…us? Americans? Citizens? It isn’t always clear.

And that’s the point.

What is clear is that, beyond being lazy, the Generalized Attribution technique can be used by the enterprising narrative weaponizer to great effect to establish the idea of common knowledge. It creates in the mind of the reader the sense that a deviating perception would make them an outlier.
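This is among the easier faces to seed linguistically. A minimal, hypothetical pattern for catching amorphous attributions might look like this (the word lists are illustrative, not exhaustive):

```python
import re

# Illustrative pattern: an amorphous group followed by a verb of
# assertion, with no named source. Not the project's actual model.
GENERALIZED = re.compile(
    r"\b(some|many|most|critics|observers)\s+"
    r"(say|believe|think|argue|worry|fear|contend)\b",
    re.IGNORECASE,
)

def generalized_attributions(text: str) -> list[str]:
    """Return phrases attributing claims to conveniently vague groups."""
    return [m.group(0) for m in GENERALIZED.finditer(text)]
```

A named, specific source ("Senator Smith said") passes cleanly; it is only the attribution to "many" of us, or Americans, or citizens, that fires.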

Interpretive Language

Probably the second most on-the-nose version of what we mean by fiat news, Interpretive Language simply means the explicit language patterns you might associate with someone explaining the implications of a fact that they just provided or an event that they just described.

Interpretive Language is a more expansive face of fiat news than Assumption of Causality in that it does not apply narrowly to direct cause-and-effect relationships in the real world but to cause-and-effect relationships in the logical sense. That is, Interpretive Language tells you here’s why and here’s how.

Still, like Assumption of Causality and Missionary Statements (below), when Interpretive Language is present in a piece that is nominally news, it is nearly always an opinion-laden attempt by the author to shape how the reader thinks about the implications of facts and events.

Missionary Statements

Missionary Statements are the damn-the-torpedoes variant of both Interpretive Language and Assumptions of Causality. They do not imply that intelligent people would process the facts in a particular way. They do not imply a chain of logic that ought to be followed to come to the correct conclusion. Missionary Statements are explicit statements telling the reader this is how you should think about the facts and events they just heard about.

If it seems to you an odd thing to exist outside of explicit opinion articles, op-eds and letters to the editor, you would be correct. If you think that would make them rare in outright news coverage, you would be incorrect. In fact, over the last 15 years or so, the merger of explainer content with news content within our dataset has been inexorable.

Sources: Epsilon Theory NEWS Project

If you were wondering, we call them Missionary Statements because of their relationship to one of the seminal stories of the intersection of game theory and information theory – the Green-Eyed Tribe. Epsilon Theory’s application of this story is nearly ten years old at this point, but still holds up.

Missionary Warfare

A face of fiat news that has largely emerged in tandem with the development of a high-peaked, bi-modal political distribution in the United States, Missionary Warfare is the reference to the reportage of other media outlets for reasons other than sourcing. If you wanted to picture what our language model is trying to identify, you could do worse than imagining a Newsmax segment gleefully describing some dumb thing the unrepentant Marxists over at MSNBC said, or a condescending, tongue-in-cheek bit from a New York Times reporter about how the ignorant boobs at Fox News were reporting on a controversial topic.

Like Response Coverage (below), Missionary Warfare is fundamentally about news that treats the response to news as news. In other words, we believe that non-sourcing references to other outlets that treat their coverage as newsworthy in itself are generally likely to represent attempts to dampen the effect of, cast doubt on or effectively argue against alternative interpretations of facts and events. All of this would be perfectly acceptable, if a bit tacky, on op-ed pages. It’s the presence of this activity on news pages that hits our fiat news radar.

Question Begging

Capturing all question begging taking place in news articles – that is, statements which assume that identifying a feature of a premise demonstrates the premise – would be extraordinarily complicated, especially doing so systematically with an algorithm. You could argue some degree of circular logic, or at least unproven premises, in almost any statement of fact, especially those provided in a truncated fashion by design, as they are in most news content.

But the assumption of an undemonstrated premise is a fundamental feature of many kinds of fiat news. The Question Begging face of fiat news doesn’t present facts alone. It doesn’t even present facts and a suggested interpretation in order to start a chain of logic in the mind of the reader. It treats a fundamental premise as self-evident (and thus, not an opinion in need of being excised from news content), and proceeds from there.

We do think, however, that we have built a language model capable of capturing some of the most egregious examples while minimizing the risk of false positives. Make no mistake, this is our weakest model, with a very high rate of false negatives as a result of our parsimony and aversion to innocuous false positives; however, as with other faces of fiat news, we observe a measurable historical baseline, around which certain controversial topics and themes consistently demonstrate a measurable abnormal impulse of language indicative of Question Begging.

Response Coverage

The generalized cousin of Missionary Warfare, Response Coverage is the face of fiat news that exists in the cherry picking of “responses” to events – largely from social media. In the most benign cases, it treats those responses as sources in an article about another topic. In the more egregious cases, those responses are the news being covered.

The risk of abuse of Response Coverage manifests in multiple ways. In all of them, however, the objective, whether intentional or unintentional, is to frame common knowledge, to denigrate Bad Interpretations and to celebrate the internet points scored by people with Good Interpretations. We simply think that any news value that may exist in a citizen’s or pundit’s activity on, say, Instagram will nearly always be dwarfed by what it is that induced the journalist to select that one to be their chosen source or, God forbid, the core topic of their piece.

Some public figures use social media as a primary mechanism for conveying official information and statements. For that reason, our model does not presume any social media reference or embeds are fiat news. They may represent an entirely appropriate way to reflect a public statement, even if the trend is somewhat lazy and off-putting to some readers. Our language model seeks to identify generalized references and characterizations of the responses on major social media platforms, alongside the major linguistic patterns that tend to accompany the “laundry list” model of including selected Good Interpretations or Bad Interpretations of news events from Facebook or Twitter.

Rhetorical Questioning

Journalists and news outlets are supposed to ask questions. It’s what they do. There’s nothing wrong with that.

However, some journalists seem to lament that no one seems to want to ask them questions. After all, they are informed. They’ve done the research. It is a shame, and one empathizes. But the solution found by many of those journalists – to rhetorically ask a question in an article that they then go on to answer – is a prime entry point for fiat news.

There was a daily email some years ago – it might even still be around – published by the Wall Street Journal. It was a sort of news digest with a mild editorial bent. One of the best recurring segments was called “Questions Nobody Is Asking.” All it did, every day, was identify hilarious versions of the Rhetorical Questioning face of fiat news in the wild. And every day, they found three or four. And they were only looking for the most ridiculous examples, nearly all of which were found only in headlines.

The problem is much more widespread when you begin to examine the body text and don’t constrain yourself to the ridiculous or hilarious. When a journalist asks themselves a question in a news article, it suggests to the reader this is the kind of question you should be asking. It implies the existence of common knowledge, of this being the kind of question the journalist has become aware their readers are concerned about. Like other faces of fiat news which seek to establish in the mind of the reader a framing of the opinions of the masses, of experts and of Those Bad People who consume right-wing media/left-wing media, Rhetorical Questioning is nearly always a technique not for presenting facts, but for framing how those facts should be interpreted and to what inevitable conclusions they should lead the reader.
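Mechanically, this is one of the simpler faces to approximate: find sentences in a nominally-news body that end in a question mark, after crudely dropping quoted spans (a quoted speaker may legitimately ask a question). This is a sketch under that assumption, not the project’s actual model.

```python
import re

def rhetorical_questions(body_text: str) -> list[str]:
    """Return unquoted sentences in a news body that end with '?'."""
    # Crudely strip double-quoted spans so quoted questions don't fire.
    unquoted = re.sub(r'"[^"]*"', " ", body_text)
    # Naive sentence split on terminal punctuation.
    sentences = re.split(r"(?<=[.!?])\s+", unquoted)
    return [s.strip() for s in sentences if s.strip().endswith("?")]
```

Headlines would be handled separately, since a question headline is so common a device that it has its own name (Betteridge’s law).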

Superlative Language

We use the term superlative somewhat loosely in our name for this face of fiat news. We don’t only mean literal superlatives. We mean heavy adverb usage, higher order adjectives and other loaded phraseology. And yes, literal superlatives.

Unlike some of our other language models which require a bit more sophistication, this is brute force simplicity. There are words which, if used in a news article, are almost instantly fiat news on their own merits, regardless of context. Tremendous, wondrous, marvelous, atrocious, abominable, unspeakable things exist, but such descriptions simply cannot be made in a news article without what we think is brazen intent on the part of the author to color how the reader thinks about the facts reported.

This is all true with the exception of people quoted, of course. This is as good a time as any to note another principle of our project: we exclude quotations from our analysis. Again, parsimony over preciousness about false negatives. Yes, the content of quotes used in news articles is an avenue for fiat news. Yes, the manipulation of those quotes is common; however, in all of the iterations of our analysis, we nearly always found that the false positive rate contributed by including quote sources in the analyzed text was unacceptably high.

A quoted individual describing a congressman’s behavior as reprehensible may be legitimate news. A journalist doing the same with their own words in a news article is practically always fiat news.
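Since this face really is brute force simplicity, the sketch is nearly trivial: a word list matched outside quoted spans. The word set below echoes the examples given above; the quote stripping mirrors the exclusion principle just described.

```python
import re

# Seed list echoing the document's own examples; illustration only.
SUPERLATIVES = {"tremendous", "wondrous", "marvelous", "atrocious",
                "abominable", "unspeakable", "reprehensible"}

def superlative_hits(text: str) -> list[str]:
    """Return superlative/loaded words used in the author's own voice."""
    unquoted = re.sub(r'"[^"]*"', " ", text)  # drop quoted speech
    words = re.findall(r"[a-z]+", unquoted.lower())
    return [w for w in words if w in SUPERLATIVES]
```

The congressman example above falls out directly: the word fires in the journalist’s own prose and stays silent inside a quotation.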

Unsourced Attribution

There is a time and place for anonymous sources. Maybe even vague references to “sources” who “said” something, without an explicit explanation for the need for their anonymity. We aren’t here to make the argument that protecting the identity of a sensitive source is never appropriate; indeed, in order to produce institutional change of the variety we often champion here at Epsilon Theory, it may at times be indispensable.

For this reason, perhaps more than any other face of fiat news, Unsourced Attribution must nearly always be thought of against some baseline. Its presence is not a Bad Thing. We have little interest in grading a single article as “Bad! Fiat News!” because it includes unsourced attribution. We have a great deal of interest in making consumers of news aware that a topic has reached an abnormally high density of unsourced attribution. We have a great deal of interest in making consumers of news aware that an outlet has taken to a steadily increasing diet of unsourced attribution in their news content. We have a great deal of interest in making consumers of news aware that the media in general have adopted a more aggressive posture with respect to unsourced attribution.

We think the skeptical news reader, who knows that journalistic integrity is widespread but not uniform, will see the potential for sharp changes in the use of such sources to shape the perception of facts and events.

Why am I reading this now?

Those who are familiar with our work on fiat news know that our most common advice when consuming news is to ask the question: “Why am I reading this now?” So why are you reading this now?

Because we think the prevalence of fiat news is rising rapidly.

Because fiat news seems to rise sharply during election cycles.

Because fiat news seems to rise sharply during uncertain events (e.g. the immediate post-COVID period registered the highest level of fiat news in the historical period reviewed in our analysis).

Because we think narrative is being weaponized.

Consider the below nearly 15-year analysis of one of our broadest news datasets. After peaking at twice the levels of 2007 during the COVID fiat news frenzy, our measure of Fiat News density today is still at 170% of those early levels, with a steady upward trend over most of that period.

You’re also reading this now because we think we’re now at a crossover point in our capacity to analyze this credibly and offer useful ways to think about and improve news consumption.

Over the next several months, we plan to tell you more about our project. We’ll discuss our milestones. We’ll ask for feedback on tools and features you would want to see. We’ll listen to feedback. We’ll discuss our datasets. We’ll discuss new datasets we want to acquire. On some of those, we may even ask for your help.

Over the next several years, I think we can work together to inoculate ourselves against weaponized narrative.



  1. I have only begun to scratch the surface of this important piece, but as one who writes a paid-for newsletter I find there are a lot of red flags. Am I ascribing something inappropriately? Have I quoted somebody out of context? Do my facts fit? Will the reader be better informed after reading my stuff? Many places to look for shoes that fit. Thanks, Rusty.

  2. Great starting point for a hugely important public discussion.

    My point of reference is my analysis from 5-6 years ago of how Uber used these exact techniques to create massive Fiat Corporate Value (nearly $100 bn created out of thin air).

    A couple of items you might consider adding to your list:

    16: Ignore the Funding/Financial Interests of the Experts Interviewed (or the role of longstanding, well-funded think tanks in political situations). Stories quote experts as if they are dispassionate analysts whose claims are backed by the kind of rigorous, peer-reviewed research you’d see in major academic settings, while deliberately concealing that they are being paid large sums by the interests their claims support. Tiny bits overlap with “Appeals to Authority,” “Coverage Selection,” and “Missionary Statements,” but the overall problem here goes well beyond what you’ve included in those three items.

    1. Only One Side of the Story Gets Reported. Again, small bits are included in other items. For years, MSM coverage of Uber exclusively reported that they were the greatest thing since sliced bread and had succeeded because of cutting-edge technology that created huge productivity advantages. This also applies generally to MSM coverage of US overseas military actions in the last 25 years.

    2. Never Any Attempt To Review the Bigger Picture. In isolation, fiat news stories that simply repeat corporate or governmental claims are somewhat understandable given the pressures of the news cycle. But you never, ever see an MSM outlet step back after a number of years and examine whether the corporate or governmental claims it uncritically published in the past actually turned out to be true. Was Uber actually the biggest thing since sliced bread? Did claims about weapons of mass destruction actually justify 20 years of war? Did those 20 years of war actually provide major benefits for the people of Iraq/Afghanistan?

    3. Reject/Ignore Longstanding Measures of Bottom-Line Results. In 12 years Uber has lost $31 billion on its actual, ongoing taxi and delivery services, has yet to generate a single dollar of positive cash flow, and no one can explain how it could ever produce sustainable profits. These are well-understood ways to measure corporate performance, but you won’t find a single MSM story that discusses them seriously. Even the Guardian’s recent Uber “expose” completely ignored competitive economics and financial results in order to focus on stylistic/cultural issues, which provide a highly inaccurate picture of Uber’s performance. I’m sure everyone can quickly recall dozens of comparable issues in the coverage of military and political problems.

    Maybe a couple of these could be subsumed in a modified version of your first 15 warning signs, but I’m guessing that list will end up expanding.

  3. Excellent. I’d appreciate explicit examples of each “face”. Maybe even in a separate doc? I think I get them all but real-world examples of each would help.

  4. Hi Roy. Always a good practice, but I’d say most newsletters fall squarely within the “obviously opinion” camp. While the rise in the aggregate volume of this kind of content is probably tipping the scales on the fiat news spectrum, I don’t know that it’s our primary concern. There should be a place for people to attempt to convince others in the various media; we’d argue that place is “where people know someone is seeking to convince them.” My guess is your newsletter falls squarely in that camp.

  5. Hah! None whatsoever. I’m not a news outlet, and happy to assert some tendency toward self-aggrandization within certain professions. Couldn’t possibly be something that financial writers are guilty of, too. :innocent:

  6. Absolutely, Kevin! Definitely the plan over the next few pieces.

  7. Thanks, Hubert! I think these are good ideas for checks for anyone who is thinking critically about a topic. Really good ideas, actually.

    I also think the pitfalls of classifying them as fiat news are significant. In all three of these, you’re talking about framing through omission of one kind or another, which is absolutely a thing. I think you’re 100% on the right track and have zero disagreement with the specific examples. But detecting framing through omission presumes you can identify some objective baseline of “what someone should be covering” with respect to a particular topic. Not only am I not sure we can do that, I actively think that if I attempted to do it I would be programmatically introducing that unavoidably subjective assessment into the model. And I mean that both in the sense of “if I tried to do this personally in my own news consumption” and in the sense of “if I tried to incorporate this into a fixed fiat news model.”

    And there are two answers to those two senses. To the former (i.e. our personal news consumption habits), I think it is very good to be aware of each of the ways you mention that authors can frame through omission. I think it is also very good to be mindful that we are not doing question begging of our own. When we start playing the “but why aren’t you talking about” game, it is very often - or at least it very often is for me, as I can’t speak for you - because we’ve already drawn some conclusions of our own and are just a bit miffed that they aren’t actively working to support our conclusions.

    To the second sense (that is, modeling this more systematically), I presently think that the best way to track this is through the Coverage Selection face of fiat news. What we can model is the extent to which topics (e.g. actual financial results) are covered for one entity (e.g. your average S&P stock) at rates which they are not for another (e.g. Uber). We can model and show how that differs among outlets. We can show how it differs over time. We model topics and entities like this all the time, so this is a pretty vanilla part of what we’d potentially be looking at for Coverage Selection.
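    The coverage-rate comparison described above can be sketched in a few lines. This is a toy illustration only, not ET’s actual model; the article snippets, term list, and function names below are all invented for the example:

```python
def coverage_rate(articles, topic_terms):
    """Fraction of articles mentioning at least one topic term."""
    if not articles:
        return 0.0
    hits = sum(
        1 for text in articles
        if any(term in text.lower() for term in topic_terms)
    )
    return hits / len(articles)

# Hypothetical mini-corpora, invented for illustration
uber_articles = [
    "Uber expands into new markets with cutting-edge technology",
    "Uber culture under scrutiny after leaked files",
]
sp500_articles = [
    "Quarterly earnings beat estimates as free cash flow improves",
    "Operating margin compresses despite revenue growth",
]

financial_terms = {"earnings", "cash flow", "margin", "profit"}

# The "coverage gap": how much more often financial topics appear
# for the average large-cap than for Uber in this toy sample
gap = coverage_rate(sp500_articles, financial_terms) - coverage_rate(uber_articles, financial_terms)
```

    Run over real corpora, the same comparison can be split by outlet or by time window to show how coverage selection differs across them.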

    I think that’s ultimately going to get, say, 60% of what you’re talking about, which means I’m probably leaving 40% on the table to avoid false positives. Knowing that, how do you think we could improve on that to better capture the very real things you note without injecting too much subjectivity about baseline expectations of “proper” coverage ourselves?

  8. Mapping out Content Context has to involve some mechanism that reads the headline(s) and compares them to the words used to promote the story on social media, specifically Twitter. Any revisions to the social media promotion–deleting a tweet and replacing it with something more neutral, say–would provide another data point from which the magical algorithm (and to me it is indistinguishable from magic) could indicate its fiat newsy-ness.

    I’m picturing something simple like a point system where 0 is pure news and 1 is pure opinion, and each ‘face’ adds some weighted amount of points to the total score. I’m thinking this way because of course I have no bloody idea how the Hell NLP works or how this project is going to judge these things. That’s multiple levels above my comprehension. But in thinking broadly about how to judge context, which is by nature subjective, I would at least in concept try to assign a value to certain actions. So deleting a clickbait tweet and replacing it with something neutral would be worth 0.1 or whatever. (For comparison, an article that is written in the first person and has a lot of ‘I believe’ or ‘it’s time for us to do X’ would score 1.0 right off the bat.) You have to figure out how to assign a value not just to the words used but to the actions around those words. Does the headline correspond with the actual language of the piece? Does the headline have someone’s name in it but that person is only mentioned once in a 700-word story? If so, is that meaningful? If it is, then why? If it isn’t, then why not?
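    That weighted point idea can be sketched in a few lines. A toy illustration only: the face names and weights below are invented for the example, not anything ET has published.

```python
# 0.0 reads as "pure news," 1.0 as "pure opinion"; each detected
# 'face' or behavior adds a weighted amount, clamped at 1.0.
FACE_WEIGHTS = {
    "deleted_clickbait_tweet": 0.1,  # tweet swapped for a neutral one
    "first_person_advocacy": 1.0,    # "I believe" / "it's time for us to do X"
    "rhetorical_question": 0.2,      # weights are invented for illustration
    "unsourced_attribution": 0.3,
}

def fiat_score(detected_faces):
    """Sum the weights of all detected faces, clamped to [0.0, 1.0]."""
    raw = sum(FACE_WEIGHTS.get(face, 0.0) for face in detected_faces)
    return min(raw, 1.0)
```

    The clamp matters: a first-person advocacy piece already scores 1.0, so additional detections don’t push it past “pure opinion.”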

    I will add, along a different line here, that Unsourced Attribution is probably the most dangerous of the 15 faces. That stories are believed ab initio even though nobody has put their name to them is frightening. We skipped past the ‘trust but verify’ model and went right into ‘because some guy who totally exists said so’. The media is obsessed with telling us that no, we wouldn’t know their girlfriend, she goes to another school. Maybe we don’t care right now because those types of stories are only hurting the Bad People :tm:, but one day they’ll move on and something or someone you care about will be next.

  9. I want to suggest a fiat news tactic I don’t think is covered by those here, forgive me if you think it is. I call it “speculation anchoring,” although it may have another name.

    This is when an article suggests causality or a connection between two things in its headline or opening paragraphs, but then goes on to say, wait, actually, there is no real evidence of this and the connection is purely speculative. Because a lot of readers won’t read carefully past the headline or the first paragraphs, the purported connection is established in the reader’s mind simply because an article ostensibly asserting the connection was published. What they will remember is, “I saw an article a while back saying X…” and will forget the details about evidence or the lack of it. In this way the reader’s mind is “anchored” toward a false or weakly evidenced belief, even as the article may not explicitly argue for that belief. The reader thinks, “X is happening,” when in fact it is only someone speculating about X. I have sometimes even seen the article acknowledge evidence that directly refutes X, leaving it a mystery as to why the article was written - unless it was trying to establish a narrative rather than illuminate fact.

    This technique is everywhere, but I think it is most rampant in articles about climate change - something weird is happening with the weather or in the natural environment. “Scientists think” this could be due to climate change. Read on to see that there is no evidence that climate change has anything to do with it. Yet for some reason, you are still left with the impression that this is a climate change thing.

    Here is a recent example about shark attacks said to be on the rise recently on the East Coast, claiming in the subheadline that “Climate change may play a factor in sharks venturing closer to shore.” There is a heading that reads “Global warming may play a factor.” The first sentence of that paragraph, however, admits that “there is no data” to support this claim at all. Nevertheless, we still get several paragraphs of pure speculation on the topic, which include implicit appeals to authority because the speculation is coming from “experts.” Even though there are more paragraphs establishing the numerous other possible causes, “Scientists have an explanation” according to the headline - but the article actually establishes that they DON’T have an explanation.

    The article even goes on to say, there might not even be more sharks in the water, but just more people, conceding that even the apparent trend in shark behavior could be completely illusory. Moreover, anybody who has even semi-regular interaction with nature knows that wild animals sometimes deviate from the patterns we expect for no clear reason. But now there’s an article out there propagating the idea that “sharks are changing their behaviors because of climate change.”

  10. The point system you mention makes me think of college rankings in U.S. News. What if there was some (respected) entity assigning point values to journalism in terms of the percentage of its content that was Fiat News? College admins would sacrifice their firstborn for higher rankings; I’d think a similar ranking system for Fiatness could be a valuable incentive for outlets to identify and reduce their Fiat content.

  11. Hi David:

    Please forgive me here, as I have not gone back and updated my information this morning, but there was a “Shark Attack Data Base” available years ago - I don’t remember who put it out - but the analytical discussion I remember was based on the “time of day” theory (low light = greater attacks) versus more opportunity (more people in the water). There was the appearance of a trend toward a greater number of attacks taking place mid-day as the opportunity increased at those hours. Might take me a couple of days, but I will go looking.

    The reason for my response was to comment that another form of what I call “moving the goalposts” on speculation (it resembles Motte and Bailey) is when a string of speculative comments is offered under the guise of fact, with a final, wildly speculative comment, slightly distanced from the main body of comments, tacked onto the end with a disclaimer such as “but that would be entirely speculation.”

    I am sure most remember “GIGO”. Fiat news = junk food for the brain. Less filling, tastes great!

    Postscript: International Shark Attack File, est. 1958, housed at the Florida Museum of Natural History

  12. My fear is that media organizations would simply try to figure out how to keep doing exactly what they’re doing but without it being as easy to detect. In fact, I will go so far as to say that @rguinn and company should absolutely not disclose any of the proprietary methodology that will be used to identify fiat vs legitimate news.

  13. This is definitely a thing, although it’s an open question whether it deserves its own category.

    This practice may be even more pernicious than you suggest. What we have discovered in streaming this data into our dataset is that headline fishing is huge. That is, even very “reputable” outlets regularly lead with exactly this kind of click-bait connection, then rotate after they’ve pushed it on social media (which often caches the original click-bait header for its preview snippets) to something else, then something else. There seems to be ombudsman indifference to anything that has to do with headline shenanigans, no matter the source.

    I DO think we intended to capture this, and do capture it, linguistically and behaviorally in two places. It will hit our question begging category and it will hit our rhetorical questions category. I personally think those are still correct, although this does give me several ideas to expand what we are catching. But I’m curious, group: is this subset substantial enough to be its own thing?

  14. I am curious what the group thinks about this. Even though we are explicitly modeling something other than bias, it is inevitable that we will be attacked for being inherently biased. They may even be correct. Just because I don’t see how it would happen in context doesn’t mean it isn’t possible. So I get the calls for transparency.

    I also think D_Y is probably right. Both can be true.

    What do you guys think?

  15. I can see this happening, and I could see such positive effects. But to D_Y’s point, is this desirable? In the same way that restaurants hew to Michelin preferences and colleges hew to US News algorithms (i.e. arbitrarily weighted spreadsheets), does this lead to edge case optimization that doesn’t change the fundamental thing?

    Not rhetorical. Honestly don’t know.

  16. You’re right that it could be considered a specific hybrid of rhetorical questioning, click-bait, and maybe question begging. Although, if I were to argue for a distinction, I would say that speculation anchoring is a particular sleight of hand that relies on acknowledging an absence of evidence, or contrary evidence, to paradoxically create undue confidence in the original speculative assertion. It creates the illusion of “We looked into this question, and here is what we found.” If they just said, “This is because of X,” then they would open themselves to criticism that the claim was unsupported. But if the author acknowledges that, saying, “I know this is unsupported, but think about it - it would make sense!” I think it tends to lower the reader’s defenses, because it makes it seem like they investigated the question.

  17. updated original post with source on shark attack data

  18. Thanks for your response, Rusty.

    My comment was focused on your larger objective “to inoculate the world against the weaponized narratives of Big Tech, Big Media and Big Politics”

    I totally appreciate the huge effort you’ve put into building a system using natural language processing to evaluate massive quantities of text. My assumption is that the battle to educate and inoculate the world would need to be fought on many fronts. This tool has obvious value in many situations, including the graphs you’ve included tracking aggregate trends over time. But I don’t see how it can become the primary education and inoculation tool. Maybe that’s not what you meant, but that’s what your reply implies.

    I assume other fronts in this battle would include analysis of how Big Tech, Big Media and Big Politics (and Big Finance) actually develop and promulgate their narrative campaigns, how narrative promulgation has changed over time, and the incentives that get the media to create fiat news and uncritically endorse fiat news claims being pushed by politicians and corporations. As other commenters have noted, just assembling concrete examples would help get key fiat news concepts across to people who haven’t been thinking about them for years.

    I had assumed that the “Fifteen Faces” taxonomy would be a key input to all of the components of the battle. I totally understand that the additional faces I suggested (and in fact several of the Fifteen) may not lend themselves to an algorithm attempting to categorize the entire output of the media in a highly reliable manner. Counting the number of news stories featuring “bogeymen” is useful. Explaining how political/corporate interests manufacture bogeyman-based narratives and then get the media to (implicitly) endorse those narratives is also important. I assume this taxonomy will continue to evolve, but doesn’t it need to serve all of the different components of the battle?

    Up to you, but I see negligible risk that publishing methodological details would allow media types to figure out how to evade detection. As your original note suggested, journalism profs have been criticizing many of these things for years. When I’ve had direct discussions with earnest journalists from prestigious publications, they are always shocked (shocked!) to learn their stories are promulgating manufactured narratives and groupthink among a category of journalist… Meta analysis isn’t going to change that kind of individual behavior.

  19. Rusty:

    If being attacked is unavoidable I would recommend you just own it. Haters gonna hate.

    I have no doubt that you are being exhaustive in your methods and process. If a critique has merit and is helpful, acknowledge it with gratitude. Let the other stuff slide. You have more important and better uses for your time.

    I would ease into any transparency - again, more on the time management side: spend your time developing your project instead of defending it.

    Building a Better Bullshit Detector is definitely going to get some attention. Not all positive.

  20. xmj says:

    One thing that brings a smile to my face about all this is that you’re basically building as a machine learning model something we did for fun back in university:

    Identifying which logical fallacies an argument would contain. Fun times!

    Of course, we did this by memorizing the Yuge poster we had in our break room, and then pointing the fallacies out manually :wink:

  21. O.P.A says:

    I’d naively say: be transparent. With the methods at the very least if not the exact key words (though ideally that too)

    For many, the attempts to avoid getting flagged for manipulative language will involve them actually removing manipulative language.

    For those that do avoid getting flagged without removing manipulation, it will likely be so convoluted or involve weird phrases to avoid matching your models that most readers will be tipped off by the out of place phrases.

    And isn’t that the point? Not to eliminate manipulative news or identify all of it, but to make everyone (readers and journalists alike) more aware of it.

    I doubt any level of transparency/secrecy will motivate propagandists to stop publishing propaganda.

    Regardless of whether the model is transparent, if it becomes influential there will be edge case optimization. A transparent model is easier for others to help update to account for that.

    To reduce edge case optimization, perhaps you could include a score of ‘linguistic normality’ - basically flagging articles that have unusually high concentrations of unusual phrases. Such twisted phrases may be an attempt to duck the model. But this would also be prone to false positives, particularly for articles aimed at a technical niche (perhaps one could compare across a category).
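    That ‘linguistic normality’ flag could be prototyped as a baseline-frequency check. A minimal sketch under invented data, not anything ET has said it is building; a real version would use n-gram language models over a large reference corpus:

```python
from collections import Counter

def normality_score(article_tokens, baseline_counts, baseline_total):
    """Mean baseline frequency of an article's tokens; lower = stranger language."""
    if not article_tokens:
        return 0.0
    # Counter returns 0 for unseen tokens, so rare/novel words drag the score down
    freqs = [baseline_counts[tok] / baseline_total for tok in article_tokens]
    return sum(freqs) / len(freqs)

# Hypothetical baseline built from "typical" news text
baseline = Counter("the market fell the market rose".split())
total = sum(baseline.values())

typical = "the market fell".split()
strange = "zyzzyva market".split()
```

    Articles scoring far below their category’s average would be candidates for the “trying to duck the model” flag; comparing within a category, as suggested above, would reduce false positives on technical-niche writing.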

    Regardless, thank you for the fine work, and do let us know if we can help!

  22. Sounds like the chicken and the egg argument maybe. A bit of this and a bit of that.

    Though I don’t think being uneducated is the problem, it’s how education is prioritised that is.

    It’s not like I ever got a crash course in doubting news information the way we get taught to evaluate data in science. News is just taught as being factual in and of itself. It takes a lot of self-learning to pull yourself away from that idea.

    I mean, they did talk about propaganda in school, but it’s always in the past tense, like it doesn’t happen now.

    It’s a bit of a can of worms, I realise; politics being mixed into education will probably end up like religion mixed into education does: promoting an agenda rather than providing information.

    I think the more significant thing that has changed from before is the ability to leverage technology to reach a wider audience, and all the good and bad that entails. Not that people as individuals have gotten dumber; it’s just easier to craft a narrative for the herd.

  23. And here is the head exploder: school is a form of propaganda. It teaches you what the state needs you to know to be a good member of society, not necessarily what you need to know to be a better human being.

  24. Another thing is that education is secondary to its purpose in freeing the adult workforce to increase productivity.

    Even if you could empirically prove that having a stay-at-home parent improves children’s education, I don’t think the state would be interested in making that happen, because it would lock half the population out of potentially working.

    The government is incentivised for infinite growth at all costs, with the rewards of that growth siphoned to the top of the pyramid. And this system is very well optimised for that purpose.

  25. Growth of the citizens’ wealth, or growth of the governing class?
    How wealthy are you if you can only share a narrow portion of your precious time with loved ones, and your savings wither in the no-yield salt flats of the broad consumer price inflation desert?

    Academia, a governing class, pah. Let them teach cowering ears lent by command.

    How painful, to wield that power and to be able only to exercise it upon children or to claw notionally upwards upon your colleagues thus proclaimed vagaries of thought or expression. The desperation to behold a grand vision before the hour glass leaks out.

  26. Edit: Had a 3yr old future ET reader on my lap “helping” when writing the original post. I fixed a few things but will just leave the other things I feel the urge to change as I feel my point can be understood.

    I think it’s inevitable that this will end up becoming similar to the white hat, black hat and gray hat hacker game, where ET is a white hat and fiat news is the black hat. Over time - if this type of system reaches critical mass like firewalls/antivirus software did for protecting end users - the attack surface decreases significantly.

    Please note this analogy is not perfect but it’s what came to mind when reading the post by D_Y.

    Black hat hacker definition

    Black hat hackers are criminals who break into computer networks with malicious intent. They may also release malware that destroys files, holds computers hostage, or steals passwords, credit card numbers, and other personal information.

    Black hats are motivated by self-serving reasons, such as financial gain, revenge, or simply to spread havoc. Sometimes their motivation might be ideological, by targeting people they strongly disagree with.

    What is a white hat hacker?

    White hat hackers use their capabilities to uncover security failings to help safeguard organizations from dangerous hackers. They can sometimes be paid employees or contractors working for companies as security specialists who attempt to find gaps in security.

    Black hat hacker vs white hat hacker

    The main difference between the two is motivation. Unlike black hat hackers, who access systems illegally, with malicious intent, and often for personal gain, white hat hackers work with companies to help identify weaknesses in their systems and make corresponding updates. They do this to ensure that black hat hackers cannot access the system’s data illegally.

    In regards to D_Y’s second comment:

    And to somewhat continue with the internet security analogy, this feels very much like the open source software debate.

    Is proprietary/closed source software safer and more trustworthy than open source software? Microsoft Windows/MacOS vs Linux/BSD: I think it depends. I personally default to open source options these days when the option is available and convenient. However, depending on what I need the software to do and how hard it will be to get running, I don’t mind using proprietary solutions. Plus, even closed source software can be understood and manipulated without seeing the code itself.

    Regardless of which I end up choosing I can honestly say: I don’t read the source code for nearly any of the open source projects I use. I depend on others to audit the code to confirm it does what the project says it does in both cases. So I personally don’t see a huge difference between the two options since I don’t really know what either code base is really doing behind the scenes.

    However, I have found people over time that I somewhat trust who do read open source code. They write up reviews of their findings that I feel I can trust more than a closed source project which has corporate shareholders/investors who want ROI at any cost. That’s pretty much the deciding factor for me and why I try to use open source projects over closed source.

    Open source has its own issues and isn’t always secure. Whether it’s a good actor contributing poorly written code or a bad actor intentionally writing code which is exploitable, it’s not a perfect solution when it comes to trust and security.

    With that said, I do feel that if the true intent of the NEWS system is to make real change, it’s best to educate the end user rather than make them dependent on software without understanding what it does, simply trusting that it’s doing what is intended.

    My suggestion: publish papers, publish the code, and publish exactly which ‘key words/phrases’ the system uses, why those words/phrases are important, and how the system predicts what is and isn’t fiat news.

    Make it so the average person who is interested and dedicated enough to learn could identify what the NEWS system does and be able to do it manually if they wanted (outside of just reading ET, of course). Then users could trust the system is doing what it was intended to do and use it simply for the convenience factor.

    Educating the end user should be the most important mission factor, in my opinion. And yes, using open source and publishing the methods the system uses will allow fiat news to change up its methods to avoid detection in some cases, not to mention things that we can’t even think of at the moment, aka the Unknown Unknowns. NEWS will then have to update its methods. Rinse and repeat.

    Allow NEWS to become an open source project and have others contribute, build on it, and fork it to build new and different methods of detecting fiat news. This should be about changing the global perception of what news is today, what it could be, and how ET will become the leader in changing news consumption at a global scale.

  27. There is a recent article on the Unz Review about “cognitive infiltration (CI)” and Cass Sunstein (by Ron Unz). If I understand correctly, CI is about deliberately neutralizing fiat (?) news (some of which are ‘conspiracy theories’ which might actually be true) by infusing absurdities and fringe into the narratives. The article notes that the Internet was originally a Darpa project and wonders whether too-much-information overload wasn’t the original intent. Hopefully NEWS will be able to discern what is White/Black Hat as is mentioned above.

Continue the discussion at the Epsilon Theory Forum





This commentary is being provided to you as general information only and should not be taken as investment advice. The opinions expressed in these materials represent the personal views of the author(s). It is not investment research or a research recommendation, as it does not constitute substantive research or analysis. Any action that you take as a result of information contained in this document is ultimately your responsibility. Epsilon Theory will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Consult your investment advisor before making any investment decisions. It must be noted that no one can accurately predict the future of the market with certainty or guarantee future investment performance. Past performance is not a guarantee of future results.

Statements in this communication are forward-looking statements. The forward-looking statements and other views expressed herein are as of the date of this publication. Actual future results or occurrences may differ significantly from those anticipated in any forward-looking statements, and there is no guarantee that any predictions will come to pass. The views expressed herein are subject to change at any time, due to numerous market and other factors. Epsilon Theory disclaims any obligation to update publicly or revise any forward-looking statements or views expressed herein. This information is neither an offer to sell nor a solicitation of any offer to buy any securities. This commentary has been prepared without regard to the individual financial circumstances and objectives of persons who receive it. Epsilon Theory recommends that investors independently evaluate particular investments and strategies, and encourages investors to seek the advice of a financial advisor. The appropriateness of a particular investment or strategy will depend on an investor’s individual circumstances and objectives.