This is a reference post for the Law of No Evidence.
Scott Alexander did us all a public service this week with his post The Phrase “No Evidence” Is a Red Flag for Bad Science Communication. If you have not yet read it, I recommend doing so. It is an excellent link to have handy going forward, especially when going through studies about the severity of Omicron.
As useful as it is, he is being too kind. Not only is this ‘bad science communication,’ it is also ‘not how this works, not how any of this works,’ where ‘this’ is knowledge or actual science (as opposed to the branded scientism of Science(TM)). Most importantly, it is also evidence of bullshit, as per my proposed Law of No Evidence:
Law of No Evidence: Any claim that there is “no evidence” of something is evidence of bullshit.
‘No evidence’ should be fully up there with “government denial” or “I didn’t do it, no one saw me do it, there’s no way they can prove anything.” If there were indeed no evidence, there’d be no need to claim there was no evidence, and this is usually a move to categorize the evidence as illegitimate and irrelevant because it doesn’t fit today’s preferred form of scientism.
The context that led to the law’s formulation was people saying there was “no evidence” that the suspension of the J&J vaccine led to increased vaccine hesitancy, which was over-the-top levels of obvious nonsense. I was constantly dealing with people’s concerns about exactly that, and there was a huge dip on the vaccination chart at exactly the right time.
The context now is that there have been a lot of assessments that there is ‘no evidence’ that Omicron is less severe than Delta, often based on a particular data point not providing such evidence. This is then often flipped around into a claim that Omicron definitely isn’t less severe than Delta, and that everyone speculating otherwise is irresponsible. Which is obvious nonsense: we clearly have plenty of evidence pointing in lots of different directions, the whole thing is complicated and difficult, and it will be a while before we can draw definite conclusions either way.
Saying there is ‘no evidence’ of something isn’t merely lazy or bad science reporting (or other talk). It is definitely both of those, but that is not what it centrally is. No evidence is a magic phrase used to intentionally manipulate understanding by using a motte and bailey between ‘this is false’ and statements of the form ‘this has not been proven using properly peer reviewed randomized controlled trials with p less than 0.05.’ It makes one sound Responsible and Scientific in contrast to those who update their beliefs based on the information they acquire, no matter the source.
It purports to treat evidence the way it would be treated in a court of criminal law, where only some facts are ‘admissible’ and the defendant is to be considered innocent until proven guilty using only those facts. Other facts don’t count. In some cases, we even throw out things we know because those who discovered the facts in question were bad actors, and the information is ‘fruit of the poisoned tree.’ This is all a highly reasonable procedure when one is worried about the state attempting to imprison citizens and abusing its powers to scapegoat people, either by mistake or intentionally, and you would rather ten guilty men go free than put one innocent man in prison. In that context, when deciding whether to deny someone their freedom, I strongly feel we should keep using it.
Yet the detective often knows who did it long before they have enough formal evidence for an arrest, and should act accordingly, because they are a person who is allowed to know things and use Bayes Rule. And if the court finds the defendant not guilty, but you know things the court didn’t know, that doesn’t mean that your knowledge vanishes.
In the context of deciding how to handle a pandemic under uncertainty, or trying to model the world in the course of everyday life to make decisions, using the standards and sets of procedures of a criminal court is obvious nonsense. That goes double given those with contextual power get to choose who counts as the prosecution and who counts as the defendant, so whatever statement they dislike today requires this level of proof, and whatever they feel like asserting today is the default.
This is not an ‘honest’ mistake. This is a systematic anti-epistemic superweapon engineered to control what people are allowed and not allowed to think based on social power, in direct opposition to any and all attempts to actually understand and model the world and know things based on one’s information. Anyone wielding it should be treated accordingly.
Scott’s post eventually does point out that ‘no evidence’ is not how any of this ‘figure things out’ thing works. After pointing out how horrible and misleading it is that we say both “there is no evidence 450,000 people died of vaccine complications” (yes, the original said no evidence of 45,000 deaths, which is also true the way they are using the phrase, but I added another zero to be illustrative, because if the claim about 45,000 deaths is true then so is my claim! There’s even more no evidence for that!) and also “there is no evidence parachute use prevents death when falling from planes,” Scott gets to the real issue here, which is that knowledge is Bayesian.
I challenge anyone to come up with a definition of “no evidence” that wouldn’t be misleading in at least one of the above examples. If you can’t do it, I think that’s because the folk concept of “no evidence” doesn’t match how real truth-seeking works. Real truth-seeking is Bayesian. You start with a prior for how unlikely something is. Then you update the prior as you gather evidence. If you gather a lot of strong evidence, maybe you update the prior to somewhere very far away from where you started, like that some really implausible thing is nevertheless true. Or that some dogma you held unquestioningly is in fact false. If you gather only a little evidence, you mostly stay where you started.
I’m not saying this process is easy or even that I’m very good at it. I’m just saying that once you understand the process, it no longer makes sense to say “no evidence” as a synonym for “false”.
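To make the quoted process concrete, here is a minimal sketch of odds-form Bayesian updating in Python. The priors and likelihood ratios are invented for illustration and come from neither Scott’s post nor mine.

```python
# A minimal sketch of Bayesian updating in odds form. All numbers are
# invented for illustration.

def update(prob: float, likelihood_ratio: float) -> float:
    """Update a probability on one piece of evidence.

    likelihood_ratio = P(evidence | claim) / P(evidence | not claim).
    Any ratio other than 1 is evidence; it just may not move you far.
    """
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.001                 # start with a 1-in-1000 prior for an implausible claim
p = update(p, 2.0)        # a little weak evidence barely moves it (~0.002)
for _ in range(4):        # several pieces of strong evidence move it very far
    p = update(p, 100.0)
print(f"posterior: {p:.6f}")  # ~0.999995
```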
I would once again go much farther, on multiple fronts.
I’d say that it never ‘made sense’ to use ‘no evidence’ as a synonym for ‘false,’ and that this is not a word choice that is made in good faith. If someone uses ‘no evidence’ as a synonym for false, as opposed to a synonym for ‘you’re not allowed to claim that,’ then this is not merely evidence of bullshit. It is intentionally and knowingly ‘saying that which is not.’ It is evidence of enemy action.
I’d also assert that Scott Alexander is indeed very good at Bayesian updating. Far better than most of us. He’s saying he’s not very good because he’s comparing himself to a super high standard, a procedure which I mostly approve of for those who can psychologically handle it, but which in context is misleading. Even for those of us who have not done a bunch of explicit deliberate practice with it, you, yes you, are also very good at Bayesian updating. Not perfect, no. As Elon Musk reminded us this week, there’s tons of cognitive biases out there. Doing it exactly right is super hard. But any instinctive or reasonable attempt at approximation of this is much better than resorting to frequentism, and it is what you are doing all the time, all day, automatically, or else you would be unable to successfully put on pants.
After all, there’s ‘no evidence’ you know how. Someone really ought to do a study.
A vanishingly small minority of the general public understands the word “evidence” in such a way that “There is evidence of P” and “P is almost certainly false” could both be true statements. Using the term “evidence” in a Bayesian sense would thus confuse the hell out of everybody and lead to unfounded public perceptions of “flipflopping” type sins. With most public communication being done at anything other than the direct object level, it is no wonder that communicators often assert there is “no evidence” of something right up to the exact moment they want the reader to believe that the something in question is effectively proven true.
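In Bayesian terms that confusing combination is perfectly coherent: an observation can be genuine evidence for P (a likelihood ratio above one) while leaving P almost certainly false. A minimal sketch, with numbers invented for illustration:

```python
# Sketch (invented numbers): evidence for P that leaves P almost
# certainly false, because the prior was so low to begin with.
prior = 1e-6                # P is wildly implausible a priori
likelihood_ratio = 10.0     # the observation favors P ten to one: real evidence
odds = (prior / (1 - prior)) * likelihood_ratio
posterior = odds / (1 + odds)
print(f"{posterior:.2e}")   # ~1.0e-05: evidence of P, yet P remains almost certainly false
```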
On every crime show the police captain will say “you don’t have enough evidence!” reasonably often, or “all your evidence is circumstantial” or whatnot, which implies a common understanding that one can have evidence that P while being uncertain whether P is true or false.
I always thought the most common conceit of those scenes in shows like Law & Order is that the front line detectives have already come to the right conclusion, and the next step is to gather the conclusive evidence to show their middle managers that they can get over the procedural hurdles of judges, juries, the constitution, rule of law, or whatever. Arguably even more problematic than how I put it above.
I do think that a lot of actual detective work is this in a non-problematic way – you can get to p(they did it) ~ 0.9 or whatever fairly quickly, then you have to prove it, but the ‘prove it’ step fails much more often in the 0.1 where they didn’t do it. And my recollection of L&O (in particular, but also other similar shows) is that they’re right somewhat more often than you’d think, but they sometimes are wrong in ways they realize, and we never truly know they were right, and every so often we find out that 10 years ago they caught the wrong person.
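To see why failing to prove it is itself informative, here is a minimal sketch. The 0.9 is from the comment above; the proof rates are my own invented assumptions:

```python
# Illustrative-only numbers for the claim that the "prove it" step fails
# more often in the cases where the suspect didn't do it.
p_guilty = 0.9
p_prove_given_guilty = 0.8     # assumed: strong cases usually close
p_prove_given_innocent = 0.1   # assumed: wrong suspects rarely "prove out"

p_unproven = (p_guilty * (1 - p_prove_given_guilty)
              + (1 - p_guilty) * (1 - p_prove_given_innocent))
p_innocent_given_unproven = ((1 - p_guilty) * (1 - p_prove_given_innocent)
                             / p_unproven)
print(f"{p_innocent_given_unproven:.2f}")  # ~0.33: failures to prove skew toward innocence
```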
I do find myself in a similar position all the time, where I have high confidence that X happened, or that Y did thing Z, or that principle P is true, but I can’t ‘enter it into evidence’ properly. A lot of my knowledge, especially a lot of it that few others have, is in this category, where I don’t know how to transfer it to others yet. To some extent, I believe in ‘if you can’t teach it you don’t really know it’ which is similar, but also you kind of do, at the same time, ya know?
I’ve been reading this blog for a little while now, finding a lot of value in your covid posts, and I really enjoyed the posts about the containers and the Flexport guy’s approach to describing what was going on at the port in LA, but this is my favorite post of yours so far.
I understand that simply praising this post is basically a substance-free non-contribution, so I’ll add that I think it’s a fantastic post even though I disagree that truth-seeking is inherently or always Bayesian.
You should know that comments like this are always appreciated on current margins. It feels great and provides good feedback.
I’d prefer to get twice as many reactions like this as I currently do, but if I got 10x as many, decent chance I’d think it was too much.
Seconded, in that case. This is excellent commentary on a pervasive problem.
I’d leave a little “heart/like” emoji if this was twitter, but instead I’ll second Econymous’ second.
It’s worth noting that a) the Flexport guy was talking about the port of Long Beach, *not* LA, which never had the stacking restrictions, and b) his claims were blatant lies meant to promote his company. (October was in fact an unusually busy month for the ports, not a standstill like he claimed, and relaxing the stacking restrictions in Long Beach predictably had no effect.)
The fact that Zvi got suckered in by the hoax and never posted a retraction is a huge disservice to readers like you.
I’ve started taking “No Evidence” statements to mean “I refuse to consider alternatives to my beliefs” and I think that that works pretty well. It’s used as a defense of the default belief against any challengers.
The pandemic has really brought out how ridiculous this is because there are so many questions which didn’t have a default, and so we see the default simultaneously created and defended.
Incidentally, could you recommend a good book on Bayesian reasoning for a bright 13 year old?
I’ll ask around about a book but at a minimum there is this: https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem
Dumb example. The world is not flat. There is evidence that it is – it sure looks that way if you don’t know better. There is better evidence that it’s not flat (putting it mildly). Does that mean there is *no* evidence that it’s flat? I think people say “no evidence” when they really mean “there isn’t better evidence than the evidence that supports what I think”.
(I wish I could edit my comment – I meant that my example is the dumb example.)
I think your “dumb example” makes a good point, but I don’t think most people, or even very many, mean “there’s better evidence for the opposite” when they say or write “no evidence [for]”.
I think “no evidence” almost always means “no official reason to doubt”.
I find ‘flat earth theories’ pretty fascinating myself! I am _very_ sympathetic. What do I think is obviously true that isn’t?
(Flat-earth always reminds me of Asimov’s “The Relativity of Wrong”: https://hermiene.net/essays-trans/relativity_of_wrong.html)
I think people say “no evidence” because of social epistemology as outlined in the post. The “better evidence” is really the same kind of evidence in either case.
I think it’s a really interesting exercise to think about the evidence almost anyone could gather or create, by themselves or with a group of trusted friends, i.e. _directly_, that would convince them that the world is not flat.
I think it’s _commendable_ that quite a few of the flat-Earthers have actually performed their own experiments! Truly, it’s wonderful that they’re seriously testing their beliefs. Some of the experiments are actually really well designed. It’s – of course – sad that they seem to mostly not update their beliefs, but I suspect that would be hard to observe if it _was_ occurring – because practical epistemology, for almost everyone, is _social_, not ‘technical’.
I think it would be really cool if the flat-Earthers effectively created an ‘experimental curriculum’ that basically anyone could replicate for themselves. That’s exactly the kind of thing we need more generally to ‘bootstrap’ greater _technical_ epistemology in more people.
I’ll do a silly thing and quibble with the “detail” that knowledge is Bayesian. Furthermore, I’ll do the even sillier thing of arguing for frequentism. I think that the people who use frequentism and largely end up shooting their feet off would also largely end up in the same situation if they used Bayesian methods. (See for instance both Bem and Wagenmakers in https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/)
“Doing it properly” would amount to Solomonoff induction, which doesn’t fit into the universe. It tells you to go to the Infinite Warehouse of Possible Planets, throw away anything that doesn’t match the measurements you take of Earth, and use the average of whatever remains as your map (in the limit of an infinite series of measurements, only one planet remains: an exact replica of Earth).
To quote Scott, “You start with a prior for how unlikely something is. Then you update the prior as you gather evidence. If you gather a lot of strong evidence, maybe you update the prior to somewhere very far away from where you started, like that some really implausible thing is nevertheless true.” This is indeed the *outcome* of correct reasoning, but it fails to describe the very common situation where the reasoner starts from either strong belief in something else or ignorance, in which case they don’t yet have in mind the hypothesis they will eventually “settle” on. “Eureka!” is pretty obviously not an experience of P(H_i|E) = P(E|H_i)P(H_i) / Σ_{j=1}^{n} P(E|H_j)P(H_j) with a long-known H_i, but of a hypothesis with massive preexisting evidential support newly coming to mind.
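A minimal sketch of that point, with numbers invented here for illustration: Bayes’ rule only redistributes probability mass over the hypotheses already in the set, so no amount of evidence can surface an unrepresented hypothesis.

```python
# Sketch (invented numbers): Bayesian updating over an explicit, finite
# hypothesis set. The normalizing sum only ranges over hypotheses you
# already have in mind, so an unlisted hypothesis can never gain mass.

def posteriors(priors, likelihoods):
    """Return P(H_i | E) for each hypothesis H_i in the set."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [0.7, 0.3]          # P(H1), P(H2); the true H3 isn't represented
likelihoods = [0.01, 0.02]   # P(E|H1), P(E|H2): both explain E poorly
print(posteriors(priors, likelihoods))  # ~[0.54, 0.46]; no "Eureka!" here,
# just mass reshuffled between bad hypotheses rather than the missing one
# being generated.
```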
“Putting on pants” as an example of people doing Bayesian things is rather strange, since at the level of approximation where it counts as an example, it also counts as an example of any interpretation of statistics (even propensityism), and all interpretations count as examples for each other. As an “instinctive, reasonable attempt […] all the time, automatically”, putting-pants-on is closer to the question of hypothesis generation, or “where does the prior come from”, or “where to draw the limits of the reference class” — the parts that the various interpretations mostly don’t describe (because none can) and where they trust the reader to do something reasonable (readers who often end up wearing their pants on their heads instead).
My argument /against against/ frequentism is that what people misunderstand is cluster membership (“but is Pluto a planet?”) in general, and do stupid things with statistics because the reference class is a cluster. Ask a non-straw frequentist about a presidential election (to follow the Arbital page), and they will mentally weigh polls, past presidential elections, state elections, etc., with varying weights as they get more or less relevant to the question in their judgment, i.e. perform much the same mental work as a Bayesian would, while perhaps describing the work in terms of “the data-generating process” or something. The part about different weights, different levels of similarity/relevance, i.e. different degrees of partial membership in the reference class is what most people misunderstand (or perhaps abuse, if/when they engage in reference class tennis). But to the extent this is the case, Bayes won’t help them make more reasonable judgments (again, see Bem&Wagenmakers), nor stop them from e.g. deliberately choosing models to produce likelihood ratios in the magnitude range they desire, or whatever the common form of goodharting would be.
My argument /for/ frequentism is that given an audience that understands how clusters work, giving the above description of how they reason is much more reasonable than saying that they are performing Solomonoff induction (and if they don’t understand clusters, then trying to fix that is much more helpful). I’d say the latter entirely misunderstands the point; maps are created for the purpose of answering some class of questions faster than it would take to answer them directly in the terrain. Being worse than useless for other classes of questions (which is what other types of map and maps at different scales are for), indeed even occasionally giving wrong answers to within-category questions (e.g. someone planning a route via a copyright-trap street), are judged acceptable tradeoffs. Furthermore, as a practical matter, treating this topic as a craft makes it much more natural to advise about better and worse practices, be they common intuitions (the way most people’s physics intuitions are largely Aristotelian, or the way drawing in perspective needs to be taught), institutionalized (single-family zoning), cultural, or anything else.
Hey man, great post, thank you!