I’ve been thinking about Heuristics That Almost Always Work again lately. It’s something that kind of haunts me professionally in a particular way that I conveniently have a springboard for explaining:
Our Cerner Med Manager was recently updated to include alert interrupts for all “Moderate” level drug interactions, whereas previously it was only showing “Major”. If you’re a working pharmacist you probably already see where I’m going with this, but for those of you who aren’t, the short of it is that these alerts are almost entirely a wasteful distraction. The moderates, not the majors, that is. Well, even some of the majors.
Let’s start by going back to my retail pharmacy explainer where I describe Bayesian reasoning for pharmacists:
Alex comes up to your pharmacy window—brand new patient—and hands you a prescription for amoxicillin. So you ask him if he has any medication allergies. He says he doesn’t know. You ask him if he’s taken amoxicillin or any other penicillins before. He says he doesn’t know. If you dispense the prescription, what is the percent probability that Alex has an anaphylactic reaction to the amoxicillin? Have a number in mind, it doesn’t have to be a correct number, just make a guess and keep it in mind. Alright, next.
Brenda comes up to your pharmacy window—brand new patient—and hands you a prescription for amoxicillin. So you ask her if she has any medication allergies. She says she doesn’t know. You ask her if she’s taken amoxicillin or any other penicillins before. She says, “Oh yeah, penicillin! You know last time I took that it put me in the hospital, almost completely closed my throat up, I almost died! It was awful!” If you dispense the prescription, what is the percent probability that Brenda has an anaphylactic reaction to the amoxicillin? Again, just note the number for now. Next!
Charles comes up to your pharmacy window—brand new patient—and hands you a prescription for amoxicillin. So you ask him if he has any medication allergies. He says no, he does not. You ask him if he’s ever taken amoxicillin or any other penicillins before. He says, “Oh yeah, been on amoxicillin a few times, always worked great, no side effects!” If you dispense the prescription, what is the percent probability that Charles has an anaphylactic reaction to the amoxicillin?
Now compare those numbers. This wasn’t any kind of trick question, what you should notice is that Charles’ number should be somewhere lower than Alex’s number, which should be somewhere lower than Brenda’s number. The salient point here is how more information changes the projected likelihood of the outcome. Bayes’ theorem mathematically formalizes this interaction; if you had real numbers to represent every component of the above scenarios you could insert them into the formula and get real numbers out.
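If you did want to plug real numbers in, the mechanics look like this. A minimal sketch in Python, where every probability is a made-up illustrative value (not a real clinical rate), just to show how the evidence moves the posterior in each direction:

```python
def posterior(prior, p_evidence_given_event, p_evidence_given_no_event):
    """P(event | evidence) via Bayes' theorem."""
    numerator = p_evidence_given_event * prior
    denominator = numerator + p_evidence_given_no_event * (1 - prior)
    return numerator / denominator

# Hypothetical base rate of anaphylaxis to amoxicillin.
# Alex gives no information, so his number is just the prior.
base_rate = 0.0002

# Brenda reports a prior severe reaction. Suppose such a report is far more
# likely from a truly penicillin-allergic patient than a non-allergic one.
brenda = posterior(base_rate,
                   p_evidence_given_event=0.9,
                   p_evidence_given_no_event=0.01)

# Charles reports prior uneventful courses. Suppose that history is much more
# likely from a patient who will NOT react.
charles = posterior(base_rate,
                    p_evidence_given_event=0.05,
                    p_evidence_given_no_event=0.95)

print(charles, base_rate, brenda)  # charles < base_rate (Alex) < brenda
```

The specific likelihood values are invented for the sake of the example; the point is only that the same formula pushes Charles below the base rate and Brenda well above it.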
By now you’re probably thinking, well duh, obviously knowing more changes your prediction. The point is, A) you should try to learn to think in probabilities that an event will occur for everything ever, B) you should learn to notice what kind of information changes your guess and how it changes that guess, and C) that you’re allowed to make decisions under uncertainty based on your best guess of what the probability is—as long as that’s a due-diligence good-faith best guess.
There’s a bit of a trap here, though, that I haven’t quite been able to personally reconcile: what do you do when the risks are never zero?
For example: atorvastatin & gemfibrozil. Quasi-officially, this is a major interaction; most drug references will tell you so and most pharmacy software will flag it as such. In the real world? Nobody cares. Statins and fibrates are co-prescribed like this quite frequently, and yet this interaction causes harm so rarely that it’s fairly safe to override and dispense, and this is true both on an individual basis and systematically. If 70% of the cards in the blue/red deck are blue, it is very not correct to bet blue 70% of the time and red 30% of the time; you always bet blue.
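The always-bet-blue point is easy to check numerically. A quick simulation comparing "always bet blue" against probability matching (betting blue 70% of the time and red 30%), with the deck composition as the only parameter:

```python
import random

random.seed(0)
N = 100_000
p_blue = 0.7  # 70% of the deck is blue

always_blue_wins = 0
matching_wins = 0
for _ in range(N):
    card_is_blue = random.random() < p_blue

    # Strategy 1: always bet blue.
    always_blue_wins += card_is_blue

    # Strategy 2: "probability matching" -- bet blue 70% of the time.
    bet_blue = random.random() < p_blue
    matching_wins += (bet_blue == card_is_blue)

print(always_blue_wins / N)  # ~0.70
print(matching_wins / N)     # ~0.58, i.e. 0.7*0.7 + 0.3*0.3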
This is a Heuristic That Almost Always Works, which is to say the correct bet here is to override the interaction alert every time. Which of course raises two important questions: why even have the alert, and who gets the blame when the card comes up red?
Or maybe this framing is not correct and instead we should be thinking of it as a black swan.
Taleb gives cautionary examples of what happens if you ignore this. You make some kind of beautiful model that tells you there’s only a 0.01% chance of the stock market doing some particular bad thing. Then you invest based on that data, and the stock market does that bad thing, and you lose all your money. You were taking account of the quantified risk in your model, but not of the unquantifiable risk that your model was incorrect.
Under a black swan framing, we want to take the opposite action; we want to bet red if the cost of losing is so high that even the higher win rate from betting blue is a net loss.
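Put as an expected-value calculation, with every number hypothetical: betting blue (overriding the alert) wins small and often, but if the rare loss is catastrophic enough, betting red (heeding the alert) comes out ahead despite losing more often:

```python
def expected_value(p_win, win_payoff, loss_cost):
    """Expected value of a bet that pays win_payoff with probability p_win
    and costs loss_cost otherwise."""
    return p_win * win_payoff - (1 - p_win) * loss_cost

# Symmetric stakes: blue (70% win rate) clearly dominates red (30%).
print(expected_value(0.7, 1, 1))  # +0.4
print(expected_value(0.3, 1, 1))  # -0.4

# Asymmetric stakes: losing the blue bet (the adverse event happens after
# you override) costs 100x the normal stake, while losing the red bet
# (you withhold a drug that would have been fine) costs only the usual 1.
print(expected_value(0.7, 1, 100))  # 0.7 - 0.3*100 = -29.3
print(expected_value(0.3, 1, 1))   # still -0.4, now the better bet
```

The crossover depends entirely on how catastrophic the rare loss is relative to the routine win, which is exactly the quantity the black swan framing says we are bad at estimating.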
Is there no in between? Treating everything as a black swan question has really obvious, potentially extreme trade-offs, both in terms of time and resources but also in terms of trading off against other black swans. If I refuse this guy atorvastatin because I’m scared of rhabdo, am I now (at least morally) on the hook for the heart attack he has 16 years from now?
Well, one way we can address this is to make a more pragmatic assessment of the black-swan-iness of the red-card, adverse-event-happens situation. The quoted example is about finances; how do we generalize this to healthcare? How likely is the reaction, and how severe or life-threatening or disabling? How likely and how great are the benefits? Hold on though, now we’re just going in circles, because we’re doing a Bayesian assessment, which we’ve…already done when we came up with our original heuristic conclusion! Agh!
There are interactions where this is more clear-cut, so maybe it’s enough to focus on holding those in memory? Generally, I think this is probably the best answer for most of these kinds of problems: learn and know what’s important enough to actually be concerned about. But this comes with sub-problems you need to hold in mind to make it work.

Firstly, this is sort of a meta-level learning problem: there’s an ethical trap whereby you can’t say, in an official sense, that these are the interactions you have to actually worry about without risking obvious liability (and diminished public professional credibility) for teaching ignorance of the low-risk, high-impact black swans. Likewise (and perhaps more importantly), you can’t design your workflow this way for the same ethical/legal reasons, which is why we’re stuck wading through tons of useless interaction alerts.

The obvious/scary next question, which I don’t have an answer to: if the top-down education/employment apparatus is covered by throwing every possible interaction at you, are you adequately covered in the bottom-up sense when you can’t reasonably (probabilistically or practically) be expected to divine a bad call before it happens? Best guess: make sure you have liability coverage.
Heuristics That Almost Always Work was probably intended to criticize bad heuristics, which is why it doesn’t point us toward answers to the question of what to do when the Heuristics Actually Probably Do Work At Least Almost Always. On the other hand, it may be fair to say those answers don’t exist in any kind of sound systematic sense. This means, unfortunately, that we’re stuck punting it up to the next meta level: having heuristics for handling our heuristics.