@AllergyKidsDoc: Deep Down the Rabbit Hole of Bias, Plus Two

NPR: From Camping To Dining Out: Here’s How Experts Rate The Risks Of 14 Summer Activities

This article describes the potential risks for dining out, staying at hotels, getting a haircut (ask your stylist to focus on cutting and not talking), going to the beach/pool and other activities.


Moving NY Times Graphic on coronavirus toll in U.S. (May 24, 2020): An Incalculable Loss: Remembering the Nearly 100,000 Lives Lost to Coronavirus in America


A recent lecture by Dave Stukus: Deep Down the Rabbit Hole of Biases, Conspiracies, and Echo Chambers (50 minutes). Thanks to Ben Gold for this reference.

This lecture summarizes some of the challenges of misinformation and quackery.

Some interesting points:

  • Explains common biases which lead us to faulty conclusions
  • Illustrates some far-fetched claims for Himalayan salt lamps as a treatment for asthma, as well as Dr. Oz’s unproven recommendations for the coronavirus
  • Provides several books for those interested in learning a lot more (see last slide)

Some slides:

Related blog posts:

What Does Richard Thaler’s Work Mean for Medicine?

A recent commentary (J Avorn. NEJM 2018; 378: 689-91) addresses a huge problem in medicine: “medicine’s ongoing assumption that clinicians and patients are, in general, rational decision makers.”

He points out that just as Albert Einstein upended Newtonian physics with the much more complex theory of relativity, Richard Thaler’s work in economics “explained that people often don’t make choices by acting as the rational balancers of risk and reward assumed by classic economics.” (More information about his work at Wikipedia post on Nudge).

Key points:

  • “We are disproportionately influenced by the most salient and digestible information” rather than the totality of information.  This “helps explain the power of simplistic pharmaceutical promotional materials, often delivered…with a tasty lunch.”
  • “Our beliefs are shaped by recent experiences…(Last-case bias).”
  • “We often overestimate small probabilities (such as uncommon drug risks).”  Another example would be fear of dying in a plane crash which is far less likely than dying in an auto accident.

The potential remedies to flawed decision-making include the following:

  • “Academic detailing,” a process that attempts to integrate more information to counter biases
  • Nudge concept. This is a strategy of “making a preferred alternative the default choice when several options exist.”  Computerized order entry systems could default to preferred drugs (i.e., the best drug in class)
  • Cost constraints can affect decision-making, which could include targeting copayments.  For physicians/administrators, looking at what drives revenue is crucial.  “As Upton Sinclair once noted, ‘It is difficult to get a man to understand something when his salary depends on his not understanding it.'”

My take: Addressing these ideas could help reduce unnecessary surgeries, increase high-value care, and improve outcomes.  This is why Richard Thaler’s work is important for medicine.

Related blog posts:

Short Take on Understanding Bias

A recent commentary (Rosenbaum L. NEJM 2015; 372: 1959-63) adds a couple of new terms to my lexicon regarding bias.

The author notes that there have been multiple concerns regarding industry-sponsored studies.  For example:

  • Industry-sponsored studies are more likely than government-sponsored ones to have positive results
  • Physicians who attend symposia funded by the pharmaceutical companies subsequently prescribe the featured drugs at a higher rate

While the Physician Payment Sunshine Act requires drug and device companies to disclose payments over $10, she notes that the long-term effects of this transparency are unclear.  With increased transparency, there could be a “phenomenon called ‘moral licensing’: once disclosure gets off your chest, you feel liberated and may feel licensed to behave immorally.  A corollary concern” for the audience is that this disclosure may be interpreted as a sign of honesty or a sign of expertise rather than as a warning of potential bias.

Two new terms for me:

  • “‘Self-serving bias’: when we stand to gain from reaching a certain conclusion, we unwittingly assimilate evidence in a way that favors the conclusion.”
  • “Bias blind spot”: “Studies suggest that we’re far more likely to think that drug promotions influence our colleagues than that they affect our own behavior.”

The author cautions that anti-industry bias could be detrimental as well.  If having ties to industry lessens the opportunity for individuals to voice their support (or opposition) for new drugs or devices, it could bolster individuals who may “overstate the risks and understate the benefits of these new treatments.”

Related blog posts:

Zoo Atlanta

How to Understand Scientific Studies

From John Pohl’s Twitter Feed:

Twenty tips for interpreting scientific claims http://bit.ly/1hY3nD5. Referenced article: Nature 503, 335–337 (21 November 2013) doi:10.1038/503335a

An excerpt:

…we suggest 20 concepts that should be part of the education of civil servants, politicians, policy advisers and journalists — and anyone else who may have to interact with science or scientists. Politicians with a healthy scepticism of scientific advocates might simply prefer to arm themselves with this critical set of knowledge…

Differences and chance cause variation…

No measurement is exact. Practically all measurements have some error…Results should be presented with a precision that is appropriate for the associated error, to avoid implying an unjustified degree of accuracy…

Bias is rife. Experimental design or measuring devices may produce atypical results in a given direction….

Bigger is usually better for sample size…

Correlation does not imply causation. It is tempting to assume that one pattern causes another. However, the correlation might be coincidental, or it might be a result of both patterns being caused by a third factor — a ‘confounding’ or ‘lurking’ variable…
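The confounding idea can be shown with a tiny simulation (a hypothetical sketch; all variables and numbers here are invented for illustration): two quantities that never influence each other still correlate when a hidden third factor drives both.

```python
import random

random.seed(2)

n = 2000
# A hidden "lurking" variable drives both A and B
confounder = [random.gauss(0, 1) for _ in range(n)]
a = [c + random.gauss(0, 1) for c in confounder]   # A depends only on the confounder
b = [c + random.gauss(0, 1) for c in confounder]   # B depends only on the confounder

def corr(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

print(round(corr(a, b), 2))  # near 0.5, even though A never causes B
```

A study observing only A and B would see a sizable correlation; only measuring (or randomizing away) the confounder reveals that neither causes the other.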

Regression to the mean can mislead. Extreme patterns in data are likely to be, at least in part, anomalies attributable to chance or error…
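Regression to the mean is easy to see numerically (an illustrative sketch with made-up numbers): select the subjects with the most extreme first measurement, re-measure them, and their average drifts back toward the true mean with no treatment at all.

```python
import random

random.seed(1)

# Everyone has the same true value; measurements add random noise
true_value = 50
first  = [true_value + random.gauss(0, 10) for _ in range(1000)]
second = [true_value + random.gauss(0, 10) for _ in range(1000)]

# Select the 5% of subjects with the highest first measurement
cutoff = sorted(first)[-50]
extreme = [i for i in range(1000) if first[i] >= cutoff]

avg_first  = sum(first[i]  for i in extreme) / len(extreme)
avg_second = sum(second[i] for i in extreme) / len(extreme)
print(avg_first, avg_second)  # the retest average falls back toward 50
```

Any intervention applied between the two measurements would look effective in the extreme group, which is one reason uncontrolled before-and-after comparisons mislead.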

Extrapolating beyond the data is risky. Patterns found within a given range do not necessarily apply outside that range…

Beware the base-rate fallacy. The ability of an imperfect test to identify a condition depends upon the likelihood of that condition occurring (the base rate). For example, a person might have a blood test that is ‘99% accurate’ for a rare disease and test positive, yet they might be unlikely to have the disease. If 10,001 people have the test, of whom just one has the disease, that person will almost certainly have a positive test, but so too will a further 100 people (1%) even though they do not have the disease.
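The arithmetic in the base-rate example can be worked out directly (using the excerpt’s hypothetical numbers: 10,001 people tested, one with the disease, a test that is 99% accurate in both directions):

```python
population = 10_001
sick = 1
healthy = population - sick          # 10,000

sensitivity = 0.99                   # P(test positive | sick)
specificity = 0.99                   # P(test negative | healthy)

true_positives  = sick * sensitivity             # ~1 person
false_positives = healthy * (1 - specificity)    # ~100 people

# Positive predictive value: chance a positive result means real disease
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))   # about 0.01 -- roughly a 1% chance
```

So a positive result on a “99% accurate” test still leaves the patient about 99% likely to be disease-free, because the false positives from the large healthy pool swamp the single true positive.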

Controls are important. A control group is dealt with in exactly the same way as the experimental group, except that the treatment is not applied. Without a control, it is difficult to determine whether a given treatment really had an effect…

Randomization avoids bias. Experiments should, wherever possible, allocate individuals or groups to interventions randomly…

Seek replication, not pseudoreplication. Results consistent across many studies, replicated on independent populations, are more likely to be solid…

Scientists are human. Scientists have a vested interest in promoting their work, often for status and further research funding, although sometimes for direct financial gain. This can lead to selective reporting of results and occasionally, exaggeration. Peer review is not infallible: journal editors might favour positive findings and newsworthiness. Multiple, independent sources of evidence and replication are much more convincing.

Significance is significant. Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Thus P = 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, and in truth there was no effect at all. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20).

Separate no effect from non-significance. The lack of a statistically significant result (say a P-value > 0.05) does not mean that there was no underlying effect: it means that no effect was detected. A small study may not have the power to detect a real difference…
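The power problem can be demonstrated with a simulation (a rough sketch with invented parameters and a deliberately crude two-standard-error significance rule): the same real treatment effect is routinely missed by a small study and reliably detected by a larger one.

```python
import random

random.seed(0)

def study(n, effect=0.5):
    """Simulate one two-group study; return True if the difference looks 'significant'."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(effect, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = (2 / n) ** 0.5                 # standard error of the difference
    return abs(diff) > 2 * se           # crude ~5% significance threshold

trials = 1000
power_small = sum(study(10)  for _ in range(trials)) / trials
power_large = sum(study(100) for _ in range(trials)) / trials
print(power_small, power_large)  # the small study misses the real effect most of the time
```

A non-significant result from the small study here would not mean “no effect” — the effect exists in every simulated trial; the study simply lacks the power to detect it.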

Effect size matters. Small responses are less likely to be detected…

Study relevance limits generalizations. The relevance of a study depends on how much the conditions under which it is done resemble the conditions of the issue under consideration…

Feelings influence risk perception. Broadly, risk can be thought of as the likelihood of an event occurring in some time frame, multiplied by the consequences should the event occur…

Dependencies change the risks. It is possible to calculate the consequences of individual events, such as an extreme tide, heavy rainfall and key workers being absent. However, if the events are interrelated, (for example a storm causes a high tide, or heavy rain prevents workers from accessing the site) then the probability of their co-occurrence is much higher than might be expected…

Data can be dredged or cherry picked. Evidence can be arranged to support one point of view…

Extreme measurements may mislead…

Comment: This is a really good reference to provide context for understanding scientific studies and sources of error.