Last week, I had the opportunity to attend and speak at the Mayo Clinic Platform_Conference alongside government, academic, and industry leaders in health technology. We discussed how AI solutions can be leveraged to improve healthcare. Several sessions focused on how predictive AI is currently being translated into clinical practice, and of course we also dove into the potential benefits and perils of integrating generative AI into healthcare.
Here are a few highlights that resonated with me and our mission at Delfina:
- Although models are developed on historical data, we need to ensure that AI-enabled solutions do not amplify outdated practices. In the “Blueprint for Trustworthy AI in Healthcare” panel, the use of phenylephrine was given as an example of the importance of integrating up-to-date evidence into algorithms. Almost four decades after phenylephrine received FDA approval, it has been shown to be ineffective as a nasal decongestant. One could imagine that a model trained on historical data to recommend treatment for cold-like symptoms might still recommend phenylephrine, despite the currently available scientific evidence. At Delfina, we develop algorithms to identify patients who may benefit from interventions aimed at reducing the risk of pregnancy-related complications. However, we recognize that the “best” interventions will evolve over time, and our clinical and research teams ensure that recommended interventions reflect the currently available scientific evidence.
- Providers want to spend time with patients—not learning how to use new tech. In the “Better Care Delivered to Patients Through Novel Technology” panel, the panelists discussed how new technologies should be integrated within existing workflows and should not create additional work for providers. At Delfina, we’ve created a care platform that’s designed to promote better, more in-depth conversations between doctors and their patients.
- Model development is only the tip of the iceberg. In “Theory to Practice: Enabling Transparency Through AI Model Validation,” the panelists discussed how developing a high-performing prediction model is only the start – not the finish line. After the model is developed, it must be seamlessly integrated into existing workflows, monitored closely over time for distribution shift, and evaluated to see how it actually affects health outcomes. At Delfina, our Data Science team works closely with our Product team and provider partners to ensure that model results are transparent and actionable for use in clinical settings.
I was also lucky to have the opportunity to share the stage with other health innovators to discuss how "platform thinking" can enable rapid innovation and scaling. For us at Delfina, the MCP_Accelerate program's data platform enabled the development and validation of a variety of prediction models for pregnancy care. However, we gained more from this "platform" than the opportunity to validate models. The platform enabled us to connect with OB providers at Mayo Clinic to embark on research collaborations and grow our evidence-based product offering. For more details on our experience with MCP_Accelerate, please see my previous blog post!
AI solutions have the power to transform healthcare as we know it. At Delfina, we’re constantly thinking about how we can best leverage data and new technology to improve patient outcomes. If you’re excited about the future of health tech, join us.