How can the government implement helpful and accurate models in the healthcare industry?
By: Alejandro Moro
As Artificial Intelligence continues to be integrated into the healthcare industry, its potential to transform patient care is becoming increasingly evident. AI models can make thousands of decisions at rapid rates with high accuracy, which could revolutionize healthcare. However, patient privacy concerns, along with the scarcity of diverse, high-quality data sets, prevent it from reaching its potential. To remove this roadblock, the government should strike a careful balance between fostering innovation and protecting patient privacy while also investing in the diversification and expansion of healthcare data.
AI has the potential to improve patient care, and in some cases, it is already being implemented to do so. Precision medicine, for example, uses AI to analyze a patient's genetics and clinical information to determine the best course of treatment. Natural Language Processing (NLP) models are also being utilized to analyze patients' clinical documentation and support treatment decisions. Additionally, there is a growing use of AI-powered surgical robots, as well as virus and bacteria detectors. Despite these greatly beneficial advancements, AI is still not trusted by many doctors, hospitals, and regulatory bodies like the FDA. Much of this distrust stems from the unreliability of these tools.
This unreliability is primarily due to a lack of data, which impedes developers from creating safe, helpful, and accurate AI models. In an experiment led by Shandong Wu, Ph.D., researchers tested an AI model's ability to detect deepfakes in healthcare images. The model was fooled by 69.1% of the falsified images and achieved only 80% accuracy when detecting cancer cases. This shortfall was attributed to the limited data available for testing the model, so it is evident that, as the industry progresses, data sharing will be imperative to establishing accurate and useful AI models.
[Figure: AI usage across the different sectors of the healthcare industry]
Most countries in Europe, for example, use the "consent or anonymize" approach when it comes to sharing patient data. However, this approach fails to support data-driven AI models: most patients and hospitals aren't keen on sharing, and when they do, important data is often missing, causing the AI models' accuracy to decrease. If countries wish to integrate AI successfully into healthcare, they must be willing to invest in data access as well; investing in AI production alone will not be fruitful.
Additionally, current healthcare data sets often suffer from a lack of diversity, particularly in terms of ethnicity, leading to various unintended consequences. For instance, in 2019, researchers discovered that several AI models, including SkinVision and Google's Derm Assist, were trained on data sets heavily skewed toward male and white patients. As a result, these AI models had high error rates for women and people of color, frequently misdiagnosing conditions. To ensure that AI in healthcare benefits all citizens equitably, it is essential that governments invest in the creation of diverse data sets and take steps to address these systemic biases and injustices. In doing so, governments can foster innovation while protecting patient privacy and promoting equity in healthcare practices and outcomes.
AI holds great potential for transforming the healthcare industry, but its progress is hindered by patient privacy concerns, limited data access, and the lack of diverse data sets. As AI becomes more prevalent in healthcare, it is imperative that governments find a balance between fostering innovation and protecting patient privacy while investing in data set diversification. Only then can we create models that benefit the populace in a faster and fairer manner.