Stanford stepped in it just before Christmas when it distributed its supply of COVID vaccine according to a homegrown algorithm. When the vaccine algorithm fired, it pulled in non-patient-facing staff and only a single resident as part of the first phase of vaccination. Apparently it failed because residents, as institutional nomads, had no home "location" to identify them as front-line physicians. Despite a media-ready celebration, Stanford faced a Category 5 shiitake show before Christmas.
From ProPublica:
An algorithm chose who would be among the first 5,000 in line. Residents said they were told they were at a disadvantage because they did not have an assigned "location" to plug into the calculation, and because they are young, according to an email sent by a chief resident to his peers. Residents are the lowest-ranking doctors in a hospital. Stanford Medicine has about 1,300 across all disciplines.
As a reminder, an algorithm is a series of logical instructions that allow us to complete a task. More precisely, from Hannah Fry in her excellent book, Hello World: Being Human in the Age of Algorithms:
They might be summarized as a list of step-by-step instructions, but these algorithms are almost always mathematical objects. They take a sequence of mathematical operations (using equations, arithmetic, algebra, calculus, logic, and probability) and translate them into computer code.
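To make that definition concrete, here is a toy sketch in Python: a few step-by-step rules (logic plus arithmetic) translated into code. The rules and weights are invented purely for illustration; this is not any real vaccine formula.

```python
# A toy version of Fry's definition: step-by-step rules (logic plus
# arithmetic) translated into computer code. The rules and weights
# here are invented for illustration only.

def priority_score(age: int, patient_facing: bool) -> int:
    """Apply two simple rules and return a priority score."""
    score = 0
    if age >= 65:        # rule 1: older staff score higher
        score += 2
    if patient_facing:   # rule 2: front-line exposure scores higher
        score += 3
    return score

print(priority_score(age=34, patient_facing=True))   # -> 3
print(priority_score(age=70, patient_facing=False))  # -> 2
```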
The failure of the Stanford vaccine algorithm offers some insight into algorithms and their limitations. Here are a couple of thoughts.
A vaccine algorithm is only as good as the humans who design it
So what happened here? In the simplest terms …
- Someone built it wrong. This was flawed human work masked in code.
- Someone else trusted it. Or they didn't fully trust it, but gave it the benefit of the doubt.
From MIT Technology Review:
Irene Chen, an MIT doctoral candidate studying the use of fair algorithms in health care, suspects this is what happened at Stanford: the formula's designers chose variables they believed would serve as good proxies for a given staff member's level of COVID risk. But they did not verify that these proxies produced sensible results, or respond meaningfully to community input when the vaccination plan came to light last Tuesday. "It's not bad that people thought about it after the fact," Chen says. "It's just that there was no mechanism to fix it."
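Chen's point about unverified proxies is easy to show in code. What follows is a minimal, hypothetical sketch; the field names, weights, and sample data are all invented, since Stanford's actual formula was not published. It shows how a missing "location" field can quietly push a front-line resident below a desk-based attending.

```python
# Hypothetical sketch of the failure mode Chen describes: reasonable-looking
# proxy variables that silently misfire for a whole group. Field names,
# weights, and data are invented; this is not Stanford's actual formula.

LOCATION_RISK = {"ICU": 10.0, "ER": 9.0, "ward": 6.0, "office": 1.0}

def covid_risk_score(person: dict) -> float:
    # Proxy 1: age stands in for medical vulnerability.
    age_points = person["age"] / 10

    # Proxy 2: assigned location stands in for exposure. Residents rotate
    # across departments, so this field can be missing for them -- and a
    # missing location quietly scores zero.
    location_points = LOCATION_RISK.get(person.get("location"), 0.0)

    return age_points + location_points

staff = [
    {"name": "attending (desk-based)", "age": 55, "location": "office"},
    {"name": "resident (front line)",  "age": 29, "location": None},
]

for person in sorted(staff, key=covid_risk_score, reverse=True):
    print(person["name"], covid_risk_score(person))
# attending (desk-based) 6.5
# resident (front line) 2.9

# The missing "mechanism to fix it": a sanity check on the output
# before the list ships.
missing = [p["name"] for p in staff if p.get("location") is None]
if missing:
    print(f"warning: {len(missing)} people have no location and score zero exposure")
```

The specific weights aren't the point. Each proxy looks reasonable in isolation; the failure is that nothing downstream checked whether the ranked output matched reality.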
Implicit in those two failure points is that bias is built into the creation of any algorithm, and certainly into the mind of the beholder.
Expectations are key to understanding the limits of technology
Underlying the second point of failure are our expectations as the humans looking at this algorithm. This problem of giving the tool the benefit of the doubt may have been a problem of algorithm aversion. Also from Hello World:
But there is a paradox in our relationship with machines. While we tend to over-trust what we don't understand, as soon as we know an algorithm can make mistakes, we also have a rather annoying habit of overreacting and discarding it altogether, reverting to our own flawed judgment. Researchers know it as algorithm aversion. People are less tolerant of an algorithm's errors than of their own, even if their own errors are bigger.
So beyond understanding how an algorithm is built, we have to understand our own shortcomings.
Sherry Turkle, in Alone Together: Why We Expect More from Technology and Less from Each Other, describes our strange human tendency to "cover" for technologies as if they were accomplices. We bond with these things and like to defend them. (Alone Together is foundational reading for understanding our relationship with technology.)
Algorithms will not solve the vaccine distribution dilemma
As much as we want to reduce things to a simple equation that takes humans out of the loop, the truth is that a lot of human judgment sits behind who gets the vaccine and when. Local standards, values, vaccine supply, medical care, and staffing are among the factors that may explain differences in rollout.
The debacle of the Stanford vaccine algorithm offers a window into the future of medical decision making. And, as we have seen, algorithms do not provide the sterile objectivity that many imagine.
As we have seen here, AI failure will be the 21st century's "the dog ate my homework" (h/t David Armano on Twitter).
If you liked this, you might like the 33 charts COVID archives. It collects all of this site's writing on COVID. Check it out…
Links to Amazon are affiliate links. Photo by Markus Winkler on Unsplash.