AI models in the field of stroke with Vince Madai and Michelle Livne from Charité hospital in Berlin, who work on predictive models for decision support systems in stroke treatment. Vince is a senior medical AI researcher at Charité with an M.D., a Ph.D. in Medical Neuroscience, and an M.A. in Medical Ethics. Michelle is a machine learning engineer with a Ph.D. and extensive experience in applying predictive algorithms in healthcare. After obtaining a B.Sc. in Biomedical Engineering in 2012 at the Technion – Israel Institute of Technology in Haifa, she completed her Master's degree in Neuroscience at Charité University Medicine in 2014.
Apart from the current state of stroke treatment research and development, we talked about the state of digital health in Germany compared to Israel, and about ethical issues surrounding AI such as data bias and data privacy. In healthcare, challenges in data acquisition limit the opportunity to save lives and open up many ethical dilemmas.
Some questions addressed:
The signs of a stroke are well known: numbness in the arms, problems speaking fluently. The brain is not getting enough blood. Someone calls an ambulance. What happens when the patient reaches the hospital?
How many types of strokes are there?
Time is crucial in stroke treatment. What support systems are currently available to doctors when a stroke patient is brought in? What kinds of systems are in development?
Even if you are having a stroke, it might not be visible on a CT scan, and much of today's AI is based on pattern recognition. If nothing is visible on a CT, what does this mean for the development of AI-supported decision support systems?
One of the discussion topics in AI is interpretability. Complex models are harder to understand, and in general there is a trade-off: the more accurate an AI model is, the less interpretable it tends to be. For example, a decision tree is easily interpretable but typically less accurate, while a deep neural network offers higher accuracy at the cost of interpretability. Why is that important?
Opinions are divided between "yes, interpretability is needed" and "no, interpretability is not needed if the model has proven effective". Where do you stand on that?
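To make the interpretability contrast concrete: a fitted decision tree can be printed as explicit if/else rules that a clinician could audit, which a deep neural network does not offer out of the box. A minimal sketch using scikit-learn and a standard toy dataset (not the clinical stroke data discussed in the episode):

```python
# Minimal illustration of decision-tree interpretability.
# The dataset is a generic scikit-learn example, standing in for
# hypothetical clinical features; it is not stroke data.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# A shallow tree keeps the rule set small enough to read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the model as human-readable threshold rules,
# so every prediction can be traced through explicit decisions.
rules = export_text(tree, feature_names=list(data.feature_names))
print(rules)
```

A comparable deep network would require post-hoc explanation methods (saliency maps, SHAP values, and the like) to approximate this kind of transparency.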
A lot of companies are working on AI, but most development and testing at the moment happens in retrospective studies. How big an issue, in your view, is the lack of clinical studies conducted on patients? What does this mean for the time needed for AI support systems to reach regular clinical practice, if everything must be validated through clinical studies that take years to complete?
If you wanted to apply your knowledge to another field in healthcare, what would be the next frontier closest to stroke research that you could focus on?