Now she’s using AI to create new models to predict cancer, and making impressive progress on that front. Additionally, by focusing on emerging markets, she contributes a much-needed balance to the prevailing North American-centric discussions of health informatics. She warns that we should prioritize interpretability, privacy, and cost management when deploying these technologies.
Bodagala’s work focuses on building AI models that are reliable, transparent, and compliant with both regulatory and ethical standards. In high-resource environments, she follows HIPAA and PHI regulations to the letter. In settings with fewer resources, she architects privacy-first ecosystems based on federated learning or edge computing. This approach has the added value of keeping sensitive information safe, since it never moves beyond its originating source.
Her observations show that deploying AI-driven cancer risk prediction models can cost upwards of $1 million annually at first, largely due to data sourcing. Yet she identifies steps to reduce these costs and make alternative strategies more affordable. Bodagala relies on infrastructure that teams can build on, keeping up-front costs down. Her mission is to bring the power of technology to the most underserved communities in the world.
The Importance of Interpretability in AI Models
Another of Bodagala’s main areas of focus is model interpretability, which she considers essential to clinical adoption. She uses methods such as SHAP (SHapley Additive exPlanations) to interpret models as they are being developed. Because this approach makes model predictions easier to follow, clinicians can better understand and trust the results.
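To make the idea concrete: for a linear model with independent features, SHAP values have an exact closed form, phi_i = w_i * (x_i - E[x_i]), and they always sum to the model’s prediction. The weights, feature names, and patient values below are purely illustrative, not from Bodagala’s actual models.

```python
import numpy as np

# Hypothetical linear risk model; all numbers here are illustrative.
feature_names = ["age", "tumor_marker", "smoking_years"]
weights = np.array([0.03, 0.9, 0.05])            # model coefficients
background = np.array([[50, 1.2, 5],             # reference cohort used to
                       [60, 0.8, 0],             # define the expected input
                       [45, 1.0, 10]], dtype=float)
patient = np.array([67, 2.1, 20], dtype=float)

def linear_shap(w, x, background):
    """Exact SHAP values for a linear model with independent features:
    phi_i = w_i * (x_i - E[x_i])."""
    baseline = background.mean(axis=0)
    return w * (x - baseline)

phi = linear_shap(weights, patient, background)
base_value = float(weights @ background.mean(axis=0))
prediction = float(weights @ patient)

# SHAP's local-accuracy property: contributions sum to the prediction.
assert np.isclose(base_value + phi.sum(), prediction)
for name, contrib in zip(feature_names, phi):
    print(f"{name}: {contrib:+.3f}")
```

A per-feature breakdown like this is what lets a clinician see, for a single patient, which inputs pushed the predicted risk up or down. For non-linear models the `shap` library estimates the same quantities from the model and data.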
“I typically focus on three pillars: diagnostic accuracy (AUC, sensitivity, specificity), intervention cost reduction, and system-level impact,” – Bodagala
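The diagnostic-accuracy pillar she names can be computed directly from model scores and outcomes. A minimal numpy sketch, using made-up labels and scores and an arbitrary 0.5 decision threshold:

```python
import numpy as np

# Illustrative labels (1 = cancer) and model risk scores; not real data.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.3, 0.7])

# AUC via the Mann-Whitney formulation: the probability that a random
# positive case scores higher than a random negative one.
pos, neg = y_score[y_true == 1], y_score[y_true == 0]
auc = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])

# Sensitivity and specificity at an (arbitrary) 0.5 threshold.
y_pred = (y_score >= 0.5).astype(int)
tp = np.sum((y_pred == 1) & (y_true == 1))
fn = np.sum((y_pred == 0) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.4f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```

In screening contexts the threshold is usually tuned toward high sensitivity, since a missed cancer costs far more than a false alarm.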
Her focus on interpretability is especially important: it helps ensure that AI models deliver more than precise predictions and also earn the trust of healthcare practitioners. Bodagala is quick to underscore the need for clinicians to understand and trust model outputs. Absent this understanding, the promised returns, such as increased efficiency, cost savings, and better patient outcomes, will never materialize.
In addition to SHAP, Bodagala uses bioinformatics tools such as BLAST (Basic Local Alignment Search Tool) and HMMER. These resources help her pull new, actionable insights from complicated data. She also integrates multi-omics data, including genomics, transcriptomics, and metabolomics. This combination holds enormous promise for AI applications with real impact in cancer prediction.
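One common way to integrate multi-omics layers of the kind mentioned above is early fusion: standardize each layer separately, then concatenate the features per patient. A hedged sketch with synthetic data and illustrative dimensions (the block names and sizes are assumptions, not her pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient feature blocks; dimensions are illustrative.
n_patients = 4
genomics        = rng.normal(size=(n_patients, 6))          # variant scores
transcriptomics = rng.normal(10, 3, size=(n_patients, 5))   # expression levels
metabolomics    = rng.normal(100, 20, size=(n_patients, 3)) # metabolite panels

def zscore(block):
    """Standardize each feature so no single omics layer dominates by scale."""
    return (block - block.mean(axis=0)) / (block.std(axis=0) + 1e-9)

# Early-fusion integration: standardize each layer, then concatenate.
fused = np.hstack([zscore(b) for b in (genomics, transcriptomics, metabolomics)])
print(fused.shape)  # (4, 14)
```

Per-layer standardization matters because raw scales differ by orders of magnitude across omics types; without it, the highest-variance layer would dominate any downstream model.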
Navigating Regulatory Challenges
Bodagala points out that regulatory compliance becomes an obstacle in both high-resource and emerging-market environments. She illustrates that compliance, while important, also takes creativity and adaptability to implement.
“Regulatory compliance is non-negotiable, but it requires flexibility,” – Bodagala
In high-resource environments, she makes sure that her projects fit within strict guidelines such as HIPAA and PHI. In emerging markets, by contrast, her work focuses on building privacy-first architectures powered by federated learning or edge computing. This keeps data secure at its source and mitigates privacy concerns without sacrificing model effectiveness.
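The federated learning pattern she describes can be sketched in a few lines: each site trains on its own data and shares only model weights, which a central server averages (the FedAvg algorithm). The hospitals, data, and model below are synthetic stand-ins, not her deployment:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site trains logistic regression locally; raw data never leaves."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three hypothetical hospital sites holding private (here: synthetic) data.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(4)
for _ in range(10):
    # Each site returns only its updated weights, never patient records.
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # FedAvg: the server averages the weight vectors (equal site sizes here).
    global_w = np.mean(local_ws, axis=0)

print(global_w.round(2))
```

Only the weight vectors cross site boundaries; production systems typically add secure aggregation or differential privacy on top, since even weights can leak information.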
Beyond that, she folds transparency provisions into all of her projects, reinforcing her AI innovations by integrating encryption and audit trails into every level of model deployment. This process upholds the quality and reliability of her research.
“Transparency, encryption, and audit trails are baked into every layer of model deployment,” – Bodagala
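One standard way to make an audit trail tamper-evident, in the spirit of the quote above, is to chain each log entry to the hash of the previous one, so any retroactive edit breaks verification. A stdlib-only sketch of the idea, not Bodagala’s actual deployment stack:

```python
import hashlib
import json

class AuditLog:
    """Tamper-evident audit trail: each entry is chained to the previous
    entry's SHA-256 hash, so rewriting history invalidates the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
        self.entries.append({
            "event": event,
            "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest(),
        })

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps({"event": e["event"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"action": "model_inference", "model": "risk_v1", "user": "clinician_7"})
log.append({"action": "data_access", "record": "patient_123"})
print(log.verify())  # True

# Tampering with an earlier entry is detected on the next verification pass.
log.entries[0]["event"]["user"] = "attacker"
print(log.verify())  # False
```

In practice such logs are also encrypted at rest and the chain head is periodically anchored somewhere the deployment team cannot silently rewrite.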
Vision for Future AI-Driven Cancer Prediction
Bodagala envisions a future where AI-driven cancer prediction is accessible to all communities, particularly those underserved by current healthcare systems. She’s convinced that decentralized, interpretable AI models powered by emerging, privacy-preserving technologies will be key to realizing these possibilities.
“The future lies in decentralized, interpretable AI models powered by privacy-preserving technologies,” – Bodagala
She understands the need to harmonize clinical, genomic, and operational data to develop AI solutions that make a meaningful difference. This alignment can be difficult to achieve, but it is necessary to ensure care is patient-focused and health outcomes are maximized in an efficient manner.
“Aligning clinical, genomic, and operational data is tough—but essential for truly impactful AI,” – Bodagala
Bodagala works with interdisciplinary teams to develop value-based care models that treat investment in early detection as a smart cost-saving measure rather than a premium expense. By reframing cancer prediction in these terms, she encourages broad acceptance and integration of these technologies into healthcare practice.