Complex Systems, Multiscale Information and Strange Loops (by Silly Rabbit)

Complex systems

Neat and accessible primer on complex systems, multiscale information theory and universality by Yaneer Bar-Yam, and a related paper on the conceptual applications of the same topic: From Big Data To Important Information (I suggest starting from section VII if you have read the primer, and from sub-section D on page 13 if you just want the markets application).

Machine learning software creates machine learning software

Lots of buzz about Google’s AutoML announcement at Google’s annual developer conference, I/O 2017, last week. AutoML is machine learning software that takes over some of the work of creating machine learning software and, in some cases, has come up with designs that rival or beat the best work of human machine learning experts. MIT Technology Review article on AutoML.

One-shot imitation

Also lots of buzz around one-shot imitation using two neural nets, as demonstrated by OpenAI. Personally, one-shot imitation is the one AI-type concept which gives me the fear. But if Elon’s supporting it then it must be OK… right? One-shot imitation paper here but, more to the point, watch this video and tell me you are not at least a little bit afraid.

The power of the platform

And on to the practical applications of technology: I really like the language of this recent press release by Two Sigma CEO Nobel Gulati, particularly this paragraph:

Moving forward, durable advantages will accrue to those building a substantial platform based on massive amounts of data, along with the technology and institutional expertise to use it. Building such a platform requires significant and ongoing investment in R&D, and a fundamentally different culture and mindset to apply a scientific approach to the data-rich world of today.

Personally, I believe that the 2020s will be defined more by big compute than by big data, but this is, nonetheless, a powerful statement, and there’s a key implicit point buried in here on the cultural balance of ‘researchers’ (math and physics natural genii) and ‘production engineers’ (coders who, by nurture, have seen and solved many practical problems). Specifically, the majority of quant funds have, to date, been culturally focused too heavily on the math-genius research folks, to the detriment of hiring and rewarding the more workmanlike practical folks who can build and maintain a substantial platform which, I agree, is the new durable advantage.


I was reminded last week, by China’s censorship of AlphaGo’s latest win against Ke Jie, of just how substantial a stance it was when Google shut down its Mainland search engine in 2010, and of why these kinds of bold moves (bets) are essential to developing a truly winning technology company (and also why I don’t live in China anymore!). As Rusty Guinn has written about: A man must have a code.

Strange loops

Finally, to bring us back up to the level of self and consciousness: I got ‘round to reading Douglas R. Hofstadter’s 2007 book I Am a Strange Loop. A long, winding and compelling book, summarized by the quote “In the end, we are self-perceiving, self-inventing, locked-in mirages that are little miracles of self-reference.” If you dip in and only read one section, read the section on simmballs in Chapter 3, which loops us back to where we started this column on multiscale information.

PDF Download (Paid Subscription Required):

She Screams, He Kidnaps (by Silly Rabbit)

Proximity of verbs to gender

Sometimes biases embedded within language are subtle, counter-intuitive things which you have to tease out with many-layered neural nets. Other times, they are just bluntly and painfully predictable: data scientist David Robinson tracked the proximity of verbs to gender across 100,000 stories. She screams, cries and rejects. He kidnaps, rescues and beats.
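Robinson’s method can be sketched in a few lines. Here’s a toy version (the sentences and the hand-picked verb list are invented for illustration; the real study ran a proper NLP pipeline over 100,000 plot descriptions):

```python
from collections import Counter

# A hypothetical mini-corpus standing in for Robinson's 100,000 stories.
corpus = [
    "She screams and he kidnaps her.",
    "He rescues the crew while she cries.",
    "She rejects him, so he beats the door down.",
    "He kidnaps the heir and she screams again.",
]

# A tiny hand-picked verb list; the real study tagged verbs automatically.
VERBS = {"screams", "cries", "rejects", "kidnaps", "rescues", "beats"}

def verb_gender_counts(sentences):
    """Count verbs that immediately follow a gendered pronoun."""
    counts = {"she": Counter(), "he": Counter()}
    for sentence in sentences:
        tokens = [t.strip(".,").lower() for t in sentence.split()]
        for pronoun, nxt in zip(tokens, tokens[1:]):
            if pronoun in counts and nxt in VERBS:
                counts[pronoun][nxt] += 1
    return counts

counts = verb_gender_counts(corpus)
```

Even on four sentences the split is stark; scale the same tally up to 100,000 stories and you get Robinson’s result.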


Previously I shared some research on how recollections of successive events physically entangle each other when brain cells store them. In a fascinating and different approach to studying memory, a group of European researchers used Wikipedia page views of aircraft crashes to study how we collectively remember (and forget) past events.

Fool me once, fool me twice

Sooner or later, someone is probably going to put a visually compelling 2D ‘map’ of data reduced from hundreds or thousands of dimensions via t-SNE in front of you and make some bold assertions about it. This beautiful and interactive paper provides a handy guide on what to watch out for.
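One of the guide’s core warnings — that distances and cluster sizes in a 2D map are not trustworthy — follows from simple geometry: a faithful 2D picture of high-dimensional data often cannot exist at all. A minimal stdlib-only illustration, independent of t-SNE itself:

```python
import math

def pairwise_distances(points):
    """All pairwise Euclidean distances between points."""
    return [
        math.dist(points[i], points[j])
        for i in range(len(points)) for j in range(i + 1, len(points))
    ]

# Four mutually equidistant points exist in 3-D: a regular tetrahedron.
tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
d3 = pairwise_distances(tetra)
assert max(d3) - min(d3) < 1e-9  # all six distances are identical

# But no four points in the plane are mutually equidistant, so ANY 2-D
# "map" of these points (t-SNE included) must distort some distance.
# Best-effort planar layout: equilateral triangle plus its centroid.
planar = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2),
          (0.5, math.sqrt(3) / 6)]
d2 = pairwise_distances(planar)
distortion = max(d2) / min(d2)  # ratio of most- to least-stretched pair
```

The distortion here is sqrt(3), and the counting argument scales: n+1 mutually equidistant points need n dimensions, so the richer the data, the more any 2D map must lie. The interactive paper shows how perplexity and run-to-run variation compound this.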

A veritable zoo of machine learning techniques

A couple of months old, but still useful: Two Sigma researchers Vinod Valsalam and Firdaus Janoos write up the notable advances in machine learning presented at NIPS (Neural Information Processing Systems) 2016. Headline: the dominating theme at NIPS 2016 was deep learning, sometimes combined with other machine learning methods such as reinforcement learning and Bayesian techniques.

The NIPS conference has, improbably, found itself at the center of the universe, as it is the most important event for sharing cutting-edge machine learning work. It’s in LA this year in December and promises to be very interesting, although quite technical.

Silicon Valley: a reality check

And, finally, this one is a little inside baseball but, if you can push through that, there is a very useful and accurate parsing of the types of technology companies being started and funded in the Valley and the simultaneous parallel dimensions that exist here. (You can skip the Valley defense bit and jump to the smart parsing bit by hitting ‘CTRL + F’, typing ‘Y Combinator’ and reading from there.)


Mo’ Compute Mo’ Problems (by Silly Rabbit)

Hard problems

Someone tweeted this cartoon at me last week, presumably in angry response to an Epsilon Theory post, as the Tweet was captioned “My feelings towards ‘A.I.’ (and/or machine learning) and investing”:

Source: xkcd

To be clear: YES, I AGREE

Unsurprisingly, we humans are pretty competent creatures within the domains we have contrived (such as finance) and spent decades practicing. So it is, generally, still hard (and expensive) in 2017 to quickly build a machine which is consistently better at even a thin, discrete sliver of a complex, human-contrived domain.

The challenge, as this cartoon humorously alludes to, is that it is currently often difficult (and sometimes impossible) to know in advance just how hard a problem is for a machine to best a human at.

BUT, what we do know is that once an ML/AI-driven machine dominates, it can truly dominate, and it is incredibly rare for humans to gain the upper hand again (although there can be periods of centaur dominance, like the ‘Advanced Chess’ movement).

As a general heuristic, I think we can say that tasks at which machines are now end-to-end better have one or more of the following characteristics:

  • Are fairly simple and discrete tasks which require repetition without error (AUTOMATION)
  • and/or are extremely large in data scale (BIG DATA)
  • and/or have calculation complexity and/or require a great deal of speed (BIG COMPUTE)
  • and where a ‘human in-the-loop’ degrades the system (AUTONOMY)

But equally, there are still many things at which machines are currently nowhere close to human parity, mostly involving ‘intuition’, or many, many models with judgment about when to combine or switch between them.

Will machines eventually dominate all? Probably. When? Not anytime soon.

The key, immediate, practical point is that the current over-polarization of the human-oriented and machine-oriented populations, particularly in the investing world, is both a challenge and an opportunity as each sect is not fully utilizing the capabilities of the other. Good Bloomberg article from a couple of months back on Point72 and BlueMountain’s challenges in reconciling this in an existing environment.

The myth of superhuman AI

On the other side of the spectrum from our afore-referenced Tweeter are those who predict superhuman AIs taking over the world.

I find this to be a very bogus argument in anything like the foreseeable future, reasons for which are very well laid out by Kevin Kelly (of Wired, Whole Earth Review and Hackers’ Conference fame) in this lengthy essay.

The crux of Kelly’s argument:

  • Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  • Humans do not have general purpose minds and neither will AIs.
  • Emulation of human thinking in other media will be constrained by cost.
  • Dimensions of intelligence are not infinite.
  • Intelligences are only one factor in progress.

Key quote:

Instead of a single line, a more accurate model for intelligence is to chart its possibility space. Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions. Some intelligences may be very complex, with many sub-nodes of thinking. Others may be simpler but more extreme, off in a corner of the space. These complexes we call intelligences might be thought of as symphonies comprising many types of instruments. They vary not only in loudness, but also in pitch, melody, color, tempo, and so on. We could think of them as ecosystems. And in that sense, the different component nodes of thinking are co-dependent and co-created. Human minds are societies of minds, in the words of Marvin Minsky. We run on ecosystems of thinking. We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory. The entire nervous system in our gut is also a type of brain with its own mode of cognition.

(BTW: Kevin Kelly has led an amazing life – read his bio here.)

Can’t we just all be friends?

On somewhat more prosaic uses of AI, the New York Times has a nice human-angle on the people whose job is to train AI to do their own jobs. My favorite line from the legal AI trainer: “Mr. Rubins doesn’t think A.I. will put lawyers out of business, but it may change how they work and make money. The less time they need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.” Oh, boy!

Valley Grammar

And finally, because it just really tickles me in a funny-because-it’s-true way: Benedict Evans’ (@a16z) guide to the (Silicon) Valley grammar of IP development and egohood:

  • I am implementing a well-known paradigm.
  • You are taking inspiration.
  • They are rip-off merchants.

So true. So many attorneys’ fees. Better rev up that AI litigator.


Future Flash Crashes, Digital Darwinism & the Resurgence of Hardware (by Silly Rabbit)

Future flash crashes

Remember a few years back when a bogus AP tweet instantly wiped $100bn off the US markets? In April 2013 the Associated Press’ Twitter account was compromised by hackers who tweeted “Breaking: Two Explosions in the White House and Barack Obama is injured.”

For illustrative purposes only.

Source: The Washington Post, 04/23/13, Bloomberg L.P., 04/23/13.

The tweet was quickly confirmed to be an alternative fact (as we say in 2017), but not before the Dow dropped 145 points (1%) in two minutes.

Well, my view is that we are heading into a far more ‘interesting’ era of flash crashes of confused, or deliberately misled, algorithms. In this concise paper titled “Deceiving Google’s Cloud Video Intelligence API Built for Summarizing Videos”, researchers from the University of Washington demonstrate that by inserting still images of a plate of noodles (amongst other things) into an unrelated video, they could trick a Google image-recognition algorithm into thinking the video was about a completely different topic.
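The trick works because video APIs subsample frames rather than classifying all of them. Google’s internals aren’t public, so here is a hypothetical stand-in: a ‘classifier’ that samples every Nth frame and majority-votes, which an attacker defeats by landing stills exactly on the sampled positions (changing only ~10% of frames, echoing the paper’s roughly one-image-every-two-seconds insertion rate):

```python
from collections import Counter

def classify_video(frames, sample_every=10):
    """Toy video labeler: sample every Nth frame, 'classify' each sampled
    frame (frames here are pre-labeled strings), and majority-vote."""
    sampled = frames[::sample_every]
    return Counter(sampled).most_common(1)[0][0]

clean = ["tiger"] * 100
assert classify_video(clean) == "tiger"

# The attack: periodically insert an unrelated still image, tuned so it
# lands on exactly the frames the API samples. Only 10% of frames change,
# yet every sampled frame is now a plate of noodles.
attacked = clean.copy()
for i in range(0, len(attacked), 10):
    attacked[i] = "plate of noodles"

fooled_label = classify_video(attacked)  # "plate of noodles"
```

A human watching the attacked clip would barely notice the inserted stills, which is what makes this class of confused-algorithm failure so unnerving.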

Digital Darwinism

I’m not sure I totally buy the asserted causality on this one, but the headline story is just irresistible: “Music Streaming Is Making Songs Faster as Artists Compete for Attention.” Paper abstract:

Technological changes in the last 30 years have influenced the way we consume music, not only granting immediate access to a much larger collection of songs than ever before, but also allowing us to instantly skip songs. This new reality can be explained in terms of attention economy, which posits that attention is the currency of the information age, since it is both scarce and valuable. The purpose of these two studies is to examine whether popular music compositional practices have changed in the last 30 years in a way that is consistent with attention economy principles. In the first study, 303 U.S. top-10 singles from 1986 to 2015 were analyzed according to five parameters: number of words in title, main tempo, time before the voice enters, time before the title is mentioned, and self-focus in lyrical content. The results revealed that popular music has been changing in a way that favors attention grabbing, consistent with attention economy principles. In the second study, 60 popular songs from 2015 were paired with 60 less popular songs from the same artists. The same parameters were evaluated. The data were not consistent with any of the hypotheses regarding the relationship between attention economy principles within a comparison of popular and less popular music.

Meanwhile, in other evolutionary news, apparently robots have been ‘mating’ and evolving in an evo-devo stylee. DTR? More formal translation: Researchers have added complexity to the field of evolutionary robotics by demonstrating for the first time that, just like in biological evolution, embodied robot evolution is impacted by epigenetic factors. Original Frontiers in Robotics and AI (dense!) paper here. Helpful explainer article here.

The resurgence of hardware

As we move from a Big Data paradigm of commoditized and cheap AWS storage to a Big Compute paradigm of high-performance chips (and other non-silicon compute methods), we are discovering step-change innovation in applied processing power, driven by the Darwinian force of specialization, or, as Chris Dixon recently succinctly tweeted: “Next stage of Moore’s Law: less about transistor density, more about specialized chips.”

We are seeing the big guys like Google develop specialized chips custom-made for their specific big compute needs, reporting speed increases of up to 30 times over today’s conventional processors while using much less power, too.

Also, we are seeing increased real-world applications being developed for truly evolutionary-leap technologies like quantum computing. MIT Technology Review article on implementing the powerful Grover’s quantum search algorithm here.

And, finally, because it just wouldn’t be a week in big compute-land without a machine beating a talented group of humans at one game or another: Poker-Playing Engineers Take on AI Machine – And Get Thrashed.

Key points:

  1. People have a misunderstanding of what computers and people are each good at. People think that bluffing is very human, but it turns out that’s not true. A computer can learn from experience that if it has a weak hand and it bluffs, it can make more money.
  2. The AI didn’t learn to bluff from mimicking successful human poker players, but from game theory. Its strategies were computed from just the rules of the game, not from analyzing historical data.
  3. Also evident was the relentless decline in price and increase in performance of running advanced ‘big compute’ applications; the computing power used for this poker win can be had for under $20k.
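Point 2 — bluffing falling out of the rules alone — can be seen in a toy model. This is not the counterfactual-regret machinery the real poker AI used, just a grid search for the equilibrium of a minimal one-bet bluffing game, with all payoffs invented for the sketch:

```python
def p1_ev(b, c):
    """Player 1's expected profit in a toy one-bet bluffing game.
    Both players ante 1. P1 is dealt a winning hand half the time and knows
    it; P2 knows nothing. P1 always bets 1 with a winner, bluffs a losing
    hand with probability b, and otherwise checks (forfeiting the ante).
    P2 calls a bet with probability c, otherwise folds."""
    value_bet = c * 2 + (1 - c) * 1       # called and win / steal the ante
    bluff = c * (-2) + (1 - c) * 1        # caught bluffing / steal the ante
    with_loser = b * bluff + (1 - b) * (-1)
    return 0.5 * value_bet + 0.5 * with_loser

def exploitability(b, c, grid):
    """Total gain available to either player from deviating; 0 at equilibrium."""
    best_b = max(p1_ev(bb, c) for bb in grid)   # P1's best response to c
    best_c = min(p1_ev(b, cc) for cc in grid)   # P2's best response to b
    return (best_b - p1_ev(b, c)) + (p1_ev(b, c) - best_c)

# Search a grid that contains the exact equilibrium (multiples of 1/30).
grid = [i / 30 for i in range(31)]
b_star, c_star = min(((b, c) for b in grid for c in grid),
                     key=lambda bc: exploitability(bc[0], bc[1], grid))

# Bluffing emerges from the rules alone: P1 bluffs 1/3 of losing hands and
# P2 calls 2/3 of the time -- no human hand histories required.
```

Never bluffing (b = 0) is exploitable here: P2 simply folds to every bet. The equilibrium bluffing frequency is computed from nothing but the game’s payoffs, which is exactly the point the engineers learned the hard way.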


Alibaba’s AI, JP Morgan’s Risky Language & the Nurture of Reality (by Silly Rabbit)

Video game-playing AI

AI has moved one step closer to mastering the classic video game StarCraft. Google, Facebook and now Alibaba have been working on AI StarCraft players, and last week a team from China’s Alibaba published a paper describing a system that learned to execute a number of strategies employed by high-level players without being given any specific instruction on how best to manage combat. Like many deep learning systems, the software improved through trial and error, demonstrating the ability to adapt to changes in the number and type of troops engaged in battle. Non-technical overview via The Verge here. Original and fairly accessible technical paper here.

While an AI video game ace may not be world changing in and of itself, progress on AI intra-agent communication and coordination has potentially profound implications for markets as the approach matures, or, as the Alibaba researchers rather poetically note in their paper:

In the coming era of algorithmic economy, AI agents with a certain rudimentary level of artificial collective intelligence start to emerge from multiple domains…[including] the trading robots gaming on the stock markets [and] ad bidding agents competing with each other over online advertising exchanges.

And how do agents behave when their game playing becomes stressful? Apparently just like their human creators: aggressively. Summary of Google DeepMind’s findings on this here.

Risky language

If you have ever taken general NLP algorithms, trained them on the information of the broader world and then pointed them at financial markets-type information, you will have noticed that they get kind of sad and messed up. Partly because markets-ese is odd (try telling your doctor that being overweight is a good thing) and partly because finance folks sure do love a risk discussion… and apparently no one more so than JP Morgan Chase CEO Jamie Dimon. In his much re-published letter to shareholders:

It is alarming that approximately 40% of those who receive advanced degrees in STEM at American universities are foreign nationals with no legal way of staying here even when many would choose to do so…Felony convictions for even minor offenses have led, in part, to 20 million American citizens having a criminal record…The inability to reform mortgage markets has dramatically reduced mortgage availability.

Thanks, Jamie, my algorithm just quit and emigrated to Canada.

The more serious question is this: as natural language algorithms (of various types) become ubiquitous, at what point do business leaders begin to craft their communications primarily to influence the machine, or at least to avoid detailed socio-political critiques that might accidentally trip it?
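A sketch of why generic NLP gets ‘sad and messed up’ on markets-ese. The two mini-lexicons and every score below are invented for illustration; real systems built on finance-specific word lists (Loughran-McDonald, for example) exist precisely because of this gap:

```python
# Hypothetical lexicons; every score here is invented for illustration.
GENERAL = {"overweight": -1.0, "aggressive": -1.0, "exposure": -0.5,
           "crisis": -1.0, "growth": 1.0}
FINANCE = {"overweight": 1.0,    # analyst-speak for "buy more"
           "aggressive": 0.5,    # as in an aggressive growth strategy
           "exposure": 0.0,      # neutral term of art
           "crisis": -1.0, "growth": 1.0}

def sentiment(text, lexicon):
    """Naive bag-of-words sentiment: sum the lexicon score of each word."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(lexicon.get(w, 0.0) for w in words)

note = "We are overweight equities and comfortable with aggressive exposure."
general_score = sentiment(note, GENERAL)   # negative: the algorithm despairs
finance_score = sentiment(note, FINANCE)   # positive: plain bullish language
```

Same sentence, opposite readings. Feed a general-purpose model Dimon’s felony-and-crisis-laden shareholder letter and the mismatch only gets worse.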

The nurture of reality

Clearly, our perception of reality, our world view, is substantially informed by our memories and the stories (links) we tell ourselves about these memories. We are now, for the first time, just starting to get an understanding of how memories are physically stored in the brain. Recollections of successive events physically entangle each other when brain cells store them, as Scientific American reports.

The Map of Physics, a joyous 8-minute video by Dominic Walliman (formerly of D-Wave quantum computing), culminates in the map below with The Chasm of Ignorance, The Future and Philosophy. Walliman points to where we must be operating if we are to break truly new ground (i.e., put the regression models down, please). And if you liked that, keep watching through to Your Quantum Nose: How Smell Works.

And, finally, a classic, epic, challenging, practical piece of prose/poetry from one of the world’s greatest philosophers and orators: the late, great Tibetan Buddhist meditation master Chögyam Trungpa. A long treatise on Zen vs. Tantra as a system for nurturing the mind:

…the discovery of shunyata [emptiness of determinate intrinsic nature] is no doubt the highest cardinal truth and the highest realization that has ever been known…

Coming next week: The next generation of flash crashes; digital Darwinism and the resurgence of hardware.


AI Hedge Funds, Corporate Inequality & Microdosing LSD (by Silly Rabbit)

Machines and suchlike

DARPA has produced a 15-minute AI explainer video. A fair review: “Artificial intelligence is grossly misunderstood. It’s a rare clear-eyed look into the guts of AI that’s also simple enough for most non-technical folks to follow. It’s dry, but IRL computer science is pretty dry.” Well worth watching for orientation on where we are — and where we are not — with AI today.

In case you are interested in ‘AI hedge funds’ and haven’t come across them, Sentient should be on your radar. And Walnut Algorithms, too. They look to be taking quite different AI approaches, but at some point, presumably, AI trading will become a recognized category. Interesting that the Walnut article asserts — via EurekaHedge — that “there are at least 23 ‘AI Hedge Funds’ with 12 actively trading”. Hmm …

[Ed. note — double hmm … present company excepted, there’s a lot less than meets the eye here. IMO.]

On the topic of Big Compute, I’m a big believer in the near-term opportunity of usefully incorporating quantum compute into live systems for certain tasks within the next couple of years, opening up practical solutions to whole new classes of previously intractable problems. Nice explanation of ‘What Makes Quantum Computers Powerful Problem Solvers’ here.

[Ed. note — for a certain class of problems (network comparisons, for example) which just happen to be core to Narrative and mass sentiment analysis, the power of quantum computing versus non-quantum computing is the power of 2^n versus n^2. Do the math.]
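Taking the note’s 2^n-versus-n^2 comparison at face value (purely the asymptotics, with no claim about any specific quantum algorithm), the math is quick to do:

```python
# How fast an exponential (2**n) state space outruns a polynomial (n**2)
# cost as the problem size n grows.
def gap(n):
    """Ratio of exponential to polynomial scaling at size n."""
    return 2 ** n / n ** 2

for n in (4, 10, 20, 30):
    print(f"n={n:>2}  n^2={n**2:>4}  2^n={2**n:>13,}  ratio={gap(n):,.0f}")
```

The two curves cross almost immediately: by n = 30 the polynomial cost is 900 steps while the exponential space holds over a billion configurations.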

Quick overview paper on Julia programming language here. Frankly, I’ve never come across Julia (that I know of) in the wild out here on the west coast, but I see the attraction for folks coming from a Matlab-type background and where ‘prototype research’ and ‘production engineering’ are not cleanly split. Julia seems, to some extent, to be targeting trading-type ‘quants’, which makes sense.

Paper overview: “The innovation of Julia is that it addresses the need to easily create new numerical algorithms while still executing fast. Julia’s creators noted that, before Julia, programmers would typically develop their algorithms in MATLAB, R or Python, and then re-code the algorithms into C or FORTRAN for production speed. Obviously, this slows the speed of developing usable new algorithms for numerical applications. In testing of seven basic algorithms, Julia is impressively 20 times faster than Python, 100 times faster than R, 93 times faster than MATLAB, and 1.5 times faster than FORTRAN. Julia puts high-performance computing into the hands of financial quants and scientists, and frees them from having to know the intricacies of high-speed computer science”. Julia Computing website link here.

Humans and suchlike

This HBR article on “Corporations in the Age of Inequality” is, in itself, pretty flabby, but the TLDR soundbite version is compelling: “The real engine fueling rising income inequality is “firm inequality”. In an increasingly … winner-take-most economy the … most-skilled employees cluster inside the most successful companies, their incomes rising dramatically compared with those of outsiders.” On a micro level, I think we are seeing an acceleration of this within technology-driven firms (both companies and funds).

[Ed. note — love TLDR. It’s what every other ZeroHedge commentariat writer says about Epsilon Theory!]

A great — if nauseatingly ‘rah rah’ — recent book with cutting-edge thinking on getting your company’s humans to be your moat is: Stealing Fire: How Silicon Valley, the Navy SEALs, and Maverick Scientists Are Revolutionizing the Way We Live and Work. Warning: Microdosing hallucinogens and going to Burning Man are strongly advocated!

Finally, on the human side, I have been thinking a lot about ‘talent arbitrage’ for advanced machine learning talent (i.e., how not to slug it out with Google, Facebook et al. in the Bay Area for every hire) and went on a bit of a world tour of various talent markets over the past couple of months. My informal perspective: Finland, parts of Canada and Oxford (UK) are the best markets in the world right now—really good talent that has been far less picked over. Does bad weather plus high taxes give rise to high-quality AI talent pools? Kind of, in a way, probably.


Salient and Other Just-So Origin Stories (by Jeremy Radcliffe)

I grew up in Houston wanting to be a general manager of a professional sports team. My 7th-grade buddies and I were some of the first ever fantasy sports players back in the mid-80s, except back then it was called Rotisserie Baseball (Daniel Okrent literally wrote the book on how to play, and his first league draft was held at Rotisserie Bird and Beef in NYC — here’s a great article on the origin story of what is now a multi-billion dollar industry).

Unfortunately for me, I didn’t have playing experience like Billy Beane or happen to work for a private equity gazillionaire who bought a team (Andrew Friedman, another Houstonian who ran the Devil Rays and now the Dodgers) or develop a deep understanding of statistics (Daryl Morey and Sam Hinkie of the Rockets), so I was never able to parlay my Apple IIe player value spreadsheets into a real-life GM job. However, I get to play GM in this business that we’ve built at Salient, and Ben’s not the only talent I can claim (some) credit for “drafting.” Thousands of you have already read “A Man Must Have a Code”, the fantastic debut piece from the head of Salient’s asset management business, Rusty Guinn, and we’re going to be featuring a select group of these other Ben-approved colleague-contributors.

I will never forget the first piece I read from Ben under the Epsilon Theory banner — it was called “How Gold Lost its Luster, How the All-Weather Fund Got Wet, and Other Just-So Stories.” By the end of the first page of the note, Ben had used quotes from J. Pierpont Morgan, Bob Prince of Bridgewater, and references to Rudyard Kipling, George Orwell and Stephen Colbert to highlight the power of narratives.

The asset management firm that I co-founded in 2002, Salient, manages a risk parity strategy similar to Bridgewater’s All-Weather Fund, and I’d flirted with being a gold bug for a few years, so I was naturally drawn to this note; before I’d made it to the second page, I was hooked. I felt like I was reading the pre-ESPN, pre-HBO version of Bill Simmons, when he was the Boston Sports Guy. Ben was mixing pop culture, literature, history and science, all in an effort to help his readers understand what was driving our post-crisis financial markets.

And it wasn’t flash — it worked. I finally understood why I had been so puzzled – and wrong – about gold price movements for the preceding couple of years. And Ben’s comments on the All-Weather Fund evinced a solid understanding of the strategy, which was and has remained rare for financial media types.

So I called Ben and asked him to meet with me. He knew Salient, since we had been an investor in a hedge fund he managed while at Iridian, and after we flew him down to Houston to meet with our team, we convinced him to join our firm and help our portfolio managers better understand the macro side of the markets, and to continue to write Epsilon Theory to help investors across the world with the same thing.

Somehow, we’ve been working together now for more than three years, and the new Epsilon Theory site, developed in-house by our fabulous creative team, not only includes all of Ben’s previous notes with customized image collages, but serves as a home base for a broader group of contributors and readers as Epsilon Theory develops into a community for those of us interested in understanding what drives markets.

This new Epsilon Theory site is separate from our Salient mothership, but Ben remains a bigger part of Salient than he’s ever been, whether that’s in helping some of our other portfolio managers understand these markets or managing money himself on behalf of our clients. We’re committed to growing this Epsilon Theory community as a stand-alone site and hope you’ll not only continue to read and listen to Ben, but start to sample some of the other content we’ll be adding to the site, and of course help us grow this community of truth-seekers by spreading the word and inviting others to join us.

As far as what you can expect from me going forward as a contributor to Epsilon Theory, it’s important to me to follow the advice of Bill Belichick and “do my job” — so I promise not to confuse the talent scout with the talent. However, if I have a skill set relevant to Epsilon Theory beyond talent-spotting, it’s in sharing or synthesizing some of the interesting news, articles and points of view I come across in my daily readings. I’ll be curating concise versions of my deep dives into a wide range of Epsilon Theory-esque subjects, and I hope you’ll come along for the ride.

Just to give you a taste of the type of rabbit holes I’ll be going down, check out “The War on Bad Science” starting with Wired’s profile on John Arnold. The Houston billionaire and his wife are challenging the fundamental structure of how scientific research is conducted, and their foundation’s work has broad implications across the scientific spectrum, from nutrition to psychology. This thing goes deep, and it has the potential to shatter many of our preconceived, scientifically-approved notions of the world.

Stay tuned, friends.

With gratitude,



The Rabbit Hole: The War on Bad Science (by Jeremy Radcliffe)

If questioning everything you ever thought you knew about science sends you into a downward spiral of crippling anxiety, this may not be the Rabbit Hole for you.

A methodical dissection of the peer-reviewed studies underpinning all sorts of critical science (from fields as diverse as nutrition and psychology) reveals that they are likely highly flawed due to a combination of poorly-designed incentives and non-standardized, sub-optimal review processes.

Back in 2005, John Ioannidis of Stanford shocked the scientific community when he published his paper “Why Most Published Research Findings Are False.”

The first article I read on the topic put it a little more bluntly. From The Atlantic: “Lies, damned lies, and medical science.”

The Washington Post found that many scientific studies can’t be replicated.

That is, indeed, a problem, but the scientific method along with the web could be the fix.

Houston billionaire John Arnold and his wife Laura are challenging the fundamental structure of how scientific research is conducted. To quote Arnold, “A new study shows…” are the four most dangerous words.

But before you start thinking these scientists are just a few bad apples publishing a few bad papers, consider this sting operation on the fraudulent and predatory practices of open-access scientific journals. It’s madness.

You don’t even need to publish your study’s findings to make a global impact if you’re a savvy enough journalist, as John Bohannon found out when his article on a fraudulent study he ran for a documentary on lax industry standards (chocolate aids weight loss) went viral.

Forget about fake news for a second and consider how much fake science we’re talking about here. I hope this dive down the rabbit hole of bad science inspires you to continue discovering and supporting truth-seekers in the scientific community and beyond.

