The medical imaging community is defining AI use cases for radiologists, data scientists, researchers, and developers to improve patient care.
FREMONT, CA: According to a report in the Journal of the American College of Radiology, implementing radiology artificial intelligence (AI) technology for routine clinical procedures requires four essential considerations: data sharing methods, structured use cases, monitoring and validation tools, and new data elements and standards. The key to actualizing AI-driven clinical practice is thus an ecosystem in which radiologists, developers, researchers, and regulatory bodies share a common platform and contribute toward the practical application of AI research.
Based on an initial medical imaging AI roadmap published in Radiology, the authors stress four critical translational research priorities:
• Encouraging data sharing for the training and testing of AI algorithms, enabling their implementation across clinical practices.
• Developing structured use cases that define the clinical challenges where AI can contribute.
• Creating standardized tools for monitoring and validating the performance of AI algorithms in clinical procedures to meet regulatory requirements.
• Designing standards and common data elements for the seamless integration of AI-driven technologies into existing clinical systems.
When defining and prioritizing AI use cases, it is critical that the medical imaging community has a clear view of what is essential to radiology and of how researchers and developers can contribute to improving existing systems. The advancements must be reflected in the everyday workflow.
Adoption of standardized inputs will enable the algorithms to run on the imaging modality itself, in the cloud, or on a local server. Further, application programming interfaces (APIs) can be developed against these standardized inputs to integrate AI into a system. Moreover, data specifications in the structured use cases can be collected to inform developers about how the algorithms perform in actual clinical processes.
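To illustrate the idea of a standardized input that an API could be built against, here is a minimal sketch in Python. All field names, the `ImagingStudyInput` class, and the `validate_input` helper are hypothetical; the report does not prescribe a specific schema, and a real deployment would follow a community standard such as DICOM attributes.

```python
from dataclasses import dataclass

# Hypothetical standardized input for an imaging AI inference API.
# Field names are illustrative only; real systems would map these
# to DICOM metadata rather than invent their own schema.
@dataclass
class ImagingStudyInput:
    study_uid: str       # unique study identifier
    modality: str        # e.g. "CT", "MR", "CR"
    body_part: str       # anatomical region examined
    pixel_data_uri: str  # location of the image data

SUPPORTED_MODALITIES = {"CT", "MR", "CR", "US"}

def validate_input(payload: ImagingStudyInput) -> bool:
    """Check that a request carries the fields the algorithm expects,
    regardless of whether it runs on the modality, in the cloud, or
    on a local server."""
    return (
        bool(payload.study_uid)
        and payload.modality in SUPPORTED_MODALITIES
        and bool(payload.pixel_data_uri)
    )

request = ImagingStudyInput("1.2.840.1", "CT", "CHEST", "file:///scans/001.dcm")
print(validate_input(request))
```

Because the same validated payload shape is used everywhere, the algorithm behind the API does not need to know where it is deployed.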
The use case specifications or the algorithms themselves can later be refined based on performance variations across patient populations, equipment manufacturers, or acquisition protocols.
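The kind of stratified performance monitoring described above can be sketched as follows. This is a minimal illustration with made-up records and a hypothetical `accuracy_by_group` helper, not a validated monitoring tool; in practice the group tags would come from DICOM metadata.

```python
from collections import defaultdict

# Hypothetical log of algorithm results, each tagged with the scanner
# manufacturer (made-up data for illustration).
results = [
    {"manufacturer": "VendorA", "correct": True},
    {"manufacturer": "VendorA", "correct": True},
    {"manufacturer": "VendorB", "correct": True},
    {"manufacturer": "VendorB", "correct": False},
]

def accuracy_by_group(records, key):
    """Aggregate accuracy per subgroup to surface performance
    variations across manufacturers, protocols, or populations."""
    totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for r in records:
        totals[r[key]][0] += r["correct"]
        totals[r[key]][1] += 1
    return {group: c / n for group, (c, n) in totals.items()}

print(accuracy_by_group(results, "manufacturer"))
# → {'VendorA': 1.0, 'VendorB': 0.5}
```

A subgroup whose accuracy drifts below the others flags exactly the kind of variation that would prompt refining the use case specification or retraining the algorithm.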