Unexpected consequences may arise if scientific conclusions formed using AI-driven data are applied to the healthcare sector, a new opinion piece warns.
The paper, ‘Learn from AI: The pursuit of objectivity’, was recently accepted for publication by Letters in Applied Microbiology, an Applied Microbiology International publication.
The international team of authors said the review aimed to explore whether introducing artificial intelligence can reduce bias and achieve objectivity in research. It is targeted at researchers and diagnosticians in the microbiological research and clinical sectors.
Excitement and concern
“Artificial intelligence has recently attracted significant attention due to its rapid development and potential influence on daily life,” said corresponding author Dr Chien-Yi Chang, BDS Stage 1 Director at the School of Dental Sciences, Newcastle University, UK.
“When applied to scientific research, AI brings both excitement and concern. In this article, we discuss the advantages and drawbacks of artificial intelligence (AI) in reducing human bias in scientific research and clinical diagnosis, while also considering its potential to introduce unexpected biases.”
Biases may arise
The review aimed to inform the integration of AI into scientific and clinical practices while emphasising the need for ongoing vigilance in its application.
“To a certain degree, AI can serve as a tool to reduce human bias and uncover objective truths. However, biases may arise from the data used to train AI models or from the design of algorithms, potentially leading researchers away from scientific truths,” Dr Chang said. “When applying these AI-driven scientific conclusions to the healthcare sector or using AI directly in clinical settings, unexpected consequences may arise.”
One surprising finding was that the development of regulations and guidelines has not kept pace with technological advancements. Ethical issues related to data acquisition for AI training, as well as concerns about AI’s credibility and accountability, need to be addressed.
Towards objectivity
“In science, we strive to pursue objectivity to inform future research. Researchers are increasingly relying on AI tools to conduct their studies and draw conclusions. This article aims to raise awareness about whether the AI tools we use today or in the near future are helping us toward scientific objectivity or steering us away from it,” Dr Chang said.
“Following the above, further discussion is needed on the development of regulations and guidelines in research ethics, data sharing, and the credibility and accountability of AI in healthcare.”
The four authors (Fengyi Wang, Angeliki Marouli, Pisit Charoenwongwatthana, Chien-Yi Chang) are all passionate about and excited by new technologies in microbiology and clinical research. CYC led the discussion, but all authors contributed to this article.
“Our research in microbiology, immunology, and infection at Newcastle University is supported by various sources, including the NIHR Newcastle Biomedical Research Centre (BRC), Newcastle University (UK), and Mahidol University (Thailand),” Dr Chang said. “The views expressed in this article are those of the authors and do not necessarily reflect those of the NIHR or the Department of Health and Social Care. The funding agencies had no role in the preparation of this manuscript.”
The paper, ‘Learn from AI: The pursuit of objectivity’ appears in Letters in Applied Microbiology, an Applied Microbiology International publication.