In diagnostics, artificial intelligence has already been used successfully: a computer can, for example, learn to categorise images with great accuracy according to whether or not they show pathological changes.
However, it is more difficult to train an artificial intelligence to examine the time-varying conditions of patients and to calculate treatment suggestions – yet this is precisely what has now been achieved at TU Wien in cooperation with the Medical University of Vienna.
With the help of extensive data from the intensive care units of various hospitals, an artificial intelligence was developed that provides suggestions for the treatment of people who require intensive care due to sepsis. Analyses show that this artificial intelligence already surpasses the quality of human decisions. Now, the scientists say, it is also important to discuss the legal aspects of such methods.
More parameters
“In an intensive care unit, a lot of different data is collected around the clock. The patients are constantly monitored medically. We wanted to investigate whether these data could be used even better than before,” says Prof. Clemens Heitzinger from the Institute for Analysis and Scientific Computing at TU Wien (Vienna). He is also Co-Director of the cross-faculty “Center for Artificial Intelligence and Machine Learning” (CAIML) at TU Wien.
Medical staff make their decisions on the basis of well-founded rules. Most of the time, they know very well which parameters they have to take into account in order to provide the best care. A computer, however, can easily take far more parameters into account than a human can – and in some cases this leads to even better decisions.
“In our project, we used a form of machine learning called reinforcement learning,” says Clemens Heitzinger. “This is not just about simple categorisation – for example, separating a large number of images into those that show a tumour and those that do not – but about a temporally changing progression, about the development that a certain patient is likely to go through. Mathematically, this is something quite different. There has been little research in this regard in the medical field.”
The computer becomes an agent that makes its own decisions: if the patient is doing well, the computer is “rewarded”; if the condition deteriorates or death occurs, the computer is “punished”. The programme’s task is to choose its actions so as to maximise its virtual “reward”. In this way, extensive medical data can be used to automatically determine a strategy that achieves a particularly high probability of success.
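To make this reward-and-punishment idea concrete, the following is a minimal, purely illustrative sketch of tabular Q-learning on a toy patient model. All names here – the states, actions, reward values and the simulate_step transition model – are hypothetical placeholders for illustration only, not the model, data or algorithm actually used in the study.

```python
import random
from collections import defaultdict

# Hypothetical, highly simplified state and action spaces (illustration only).
STATES = ["stable", "deteriorating", "critical", "recovered", "deceased"]
ACTIONS = ["standard_care", "fluids", "vasopressors"]
TERMINAL = {"recovered", "deceased"}

def reward(state):
    # The agent is "rewarded" when the patient does well and "punished"
    # when the condition worsens or death occurs, as described above.
    return {"recovered": 10.0, "deceased": -10.0, "critical": -1.0}.get(state, 0.0)

def simulate_step(state, action):
    # Toy stochastic transition model standing in for real ICU dynamics.
    if state == "critical":
        p_recover = 0.5 if action == "vasopressors" else 0.2
    elif state == "deteriorating":
        p_recover = 0.6 if action == "fluids" else 0.3
    else:  # stable
        p_recover = 0.8
    if random.random() < p_recover:
        return "recovered"
    return "deceased" if state == "critical" else "critical"

def train(episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated expected return
    for _ in range(episodes):
        state = random.choice(["stable", "deteriorating", "critical"])
        while state not in TERMINAL:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt = simulate_step(state, action)
            best_next = 0.0 if nxt in TERMINAL else max(Q[(nxt, a)] for a in ACTIONS)
            # Q-learning update towards reward plus discounted future value.
            Q[(state, action)] += alpha * (reward(nxt) + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = train()
    for s in ["stable", "deteriorating", "critical"]:
        print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```

In the research project, of course, the strategy is learned from recorded intensive-care data rather than from a hand-written simulator; the sketch only illustrates the reward-driven learning loop described above.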
Better than human
“Sepsis is one of the most common causes of death in intensive care medicine and poses an enormous challenge for doctors and hospitals, as early detection and treatment is crucial for patient survival,” says Prof. Oliver Kimberger from the Medical University of Vienna.
“So far, there have been few medical breakthroughs in this field, which makes the search for new treatments and approaches all the more urgent. For this reason, it is particularly interesting to investigate the extent to which artificial intelligence can contribute to improving medical care here. Machine learning models and other AI technologies offer an opportunity to improve the diagnosis and treatment of sepsis, ultimately increasing the chances of patient survival.”
Analysis shows that the AI already outperforms human decision-making: “Cure rates are now higher with an AI strategy than with purely human decisions. In one of our studies, the cure rate in terms of 90-day mortality was increased by about 3% to about 88%,” says Clemens Heitzinger.
Of course, this does not mean that medical decisions in an intensive care unit should be left to the computer alone. But the artificial intelligence can run alongside as an additional tool at the bedside – the medical staff can consult it and compare their own assessment with its suggestions. Such artificial intelligences can also be highly useful in education.
Discussion about legal issues is necessary
“However, this raises important questions, especially legal ones,” says Clemens Heitzinger. “The first question that probably comes to mind is who is to be held liable for any mistakes made by the artificial intelligence. But there is also the converse problem: what if the artificial intelligence had made the right decision, but the human chose a different treatment option and the patient suffered harm as a result?
“The research project shows that artificial intelligence can already be used successfully in clinical practice with today’s technology – but a discussion about the social framework and clear legal rules is still urgently needed,” Clemens Heitzinger says.