We recall drugs with adverse effects; why not recall AI algorithms with racial bias?


Ed Ikeguchi, MD, CEO of AiCure

Over the last decade, artificial intelligence has gone from a hype-driven, futuristic tool to a part of our daily lives. We use AI for everything from unlocking our smartphones to evaluating our credit scores and helping doctors make decisions about patient care. In particular, AI's potential to transform drug development and patient care is unmatched: its ability to gather objective information can elevate clinical trial data and deliver life-saving drugs faster to the people who need them. But the data-mining capabilities that make AI so exciting are the same ones that concern industry leaders. To ensure that AI does not inadvertently perpetuate human biases and put minority populations at a disadvantage, the industry must work together to help AI reach its full potential. AI is only as strong as the data it is fed, so ensuring the quality of the data we start with must be at the core of everything: the credibility of AI databases must be assured.

When this technology is sufficiently governed, it has significant potential to automate processes across industries and take innovation to new levels. COVID-19 brought long-standing health disparities to light, and now more than ever the life sciences industry is challenged to reassess the AI fundamentals on which our drug development and patient care decisions are increasingly based. It is now both an ethical responsibility and a matter of good science to test algorithms thoroughly. Companies are responsible for ensuring that their algorithms work as expected outside of a controlled research environment. They can do this by first establishing processes that help determine which data sets are representative of the wider population and, second, by normalizing a return to the drawing board when algorithms do not work as intended, rebuilding them from the ground up.

Application of “checks and balances” to detect bias

Often in the current environment, once an AI solution receives a relatively arbitrary seal of approval, there are limited protocols for assessing how it performs in the real world. We need to be careful here, as today's AI developers do not yet have consistent access to large, diverse data sets and often train algorithms on small, single-source data samples with limited diversity. In large part, this is because many of the open-source data sets developers rely on were built by volunteer computer programmers, a predominantly white population. When these algorithms are applied in real-world scenarios to a wider population of different races, genders, ages, and more, technology that seemed highly accurate in research falls short of its promise and can lead to erroneous conclusions about a person's health.

Similar to how new drugs go through years of clinical trials with thousands of patients to identify adverse events, an AI verification process can help companies understand whether their technology will fall short in real-world scenarios. Unexpected results are common when moving from a controlled research environment to real-world populations. For example, even after a new drug is approved, once it is given to hundreds of patients outside of a clinical trial, side effects or findings that never arose during the trial often come to light. Just as there is a process for re-evaluating such a drug, there should be a similar protocol of checks and balances for AI that detects inaccuracies in real-world scenarios, revealing when it does not work for certain skin colors or carries other biases. The same governance and review should be imposed on all algorithms, as even the most robust and proven algorithm is bound to produce unexpected results. An algorithm is never done learning; it must be continuously developed and fed more data over time to improve.
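In practice, the kind of checks and balances described above often takes the form of disaggregated evaluation: measuring an algorithm's performance separately for each demographic subgroup rather than as a single aggregate number, then flagging subgroups that lag behind. The following is a minimal sketch of that idea, not AiCure's actual process; the record format, function names, and 5% tolerance threshold are illustrative assumptions.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy separately for each demographic subgroup.

    `records` is a list of (subgroup, prediction, truth) tuples --
    a hypothetical format chosen for illustration.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        if pred == truth:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparities(records, tolerance=0.05):
    """Return subgroups whose accuracy trails the best-performing
    subgroup by more than `tolerance` -- a signal to go back to the
    drawing board and retrain on more diverse data."""
    acc = subgroup_accuracy(records)
    best = max(acc.values())
    return [g for g, a in acc.items() if best - a > tolerance]
```

An aggregate accuracy score can look excellent while one subgroup performs far worse than the rest; evaluating per group is what surfaces that gap before the tool reaches real-world use.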

Identify and perfect

When companies notice that their algorithms are not working properly for the entire population, they should be encouraged to rebuild them and incorporate more diverse patients into their testing. Whether it's patients with different skin tones or people wearing hats, sunglasses, or patterned clothing, training the AI to distinguish the individual regardless of appearance, dress, or environment will produce stronger algorithms and, in turn, better patient outcomes.

As an industry, we need to be more skeptical of AI's conclusions and promote transparency. Companies should be able to easily answer basic questions: How was the algorithm trained? On what basis did it draw this conclusion? Only once we consistently interrogate and evaluate an algorithm, in common and rare scenarios and with varied populations, will it be ready to introduce into real-world situations.

Recognition that there is work to be done

The first step in solving a problem is recognizing that there is one. Many have not yet absorbed the idea that different complexions and appearances must be accounted for in algorithms for the technology to work effectively. As the AI industry continues to grow and these tools become an increasingly important part of how we research drugs and deliver new treatments, the future of the healthcare industry and patient care is very promising. We need to prioritize equality in the technology our patients and pharmaceutical companies use, to help them reach their full potential and make healthcare a more inclusive industry.

About Ed Ikeguchi, MD, CEO, AiCure

Edward F. Ikeguchi, MD, is the CEO of AiCure. Prior to joining AiCure, he was co-founder and medical director of Medidata for nearly a decade, where he also served on its board of directors. Dr. Ikeguchi was also an assistant professor of clinical urology at Columbia University, where he gained experience using health technology solutions as a clinical researcher in numerous trials sponsored by both commercial industry and the National Institutes of Health.
