The growing emphasis on trustworthy artificial intelligence (AI) in health care reflects a shift away from models optimized solely for predictive performance toward governable and auditable systems that can be adopted and sustained in clinical practice. Nonetheless, many clinical AI applications continue to privilege technical performance while giving insufficient attention to ethical, regulatory, and societal considerations, raising concerns about robustness, transparency, and clinical adoption. To address this gap, governance frameworks such as the Assessment List for Trustworthy Artificial Intelligence (ALTAI) have been proposed to operationalize trust-related requirements across the AI lifecycle. However, evidence on the practical use of these frameworks remains limited. In this Viewpoint, we describe the application of ALTAI as a procedural governance framework within the Horizon Europe AI-PROGNOSIS project, which aims to support Parkinson disease diagnosis and care through predictive models and digital biomarkers derived from everyday devices. The seven ALTAI requirements (ie, human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, nondiscrimination, and fairness; societal and environmental well-being; and accountability) were mapped to key stages of the AI lifecycle within the project, including design and specification, data preparation, model development and validation, user interface and user experience deployment, external prospective validation, and overarching management and workflow. To examine how these requirements were perceived in practice, we conducted a structured internal survey among the AI developers and data scientists involved in the AI-PROGNOSIS project (n=10). Participants rated the relevance of the 17 ALTAI subdomains on a 3-point prioritization scale. Technical accuracy, data governance, and privacy were consistently rated as highly relevant, whereas societal impact received the lowest prioritization. This pattern reflects a documented tension in AI development, whereby technical teams tend to deprioritize broader societal concerns under delivery and performance constraints. Importantly, this work should be interpreted as a context-specific case study rather than a validation of ALTAI: the small sample size and project-specific setting limit generalizability, and the findings should not be considered representative of broader clinical AI development. Overall, by making prioritization gaps explicit and embedding multidisciplinary review at checkpoints across the lifecycle, this case study illustrates how structured governance frameworks can surface implementation tensions and support accountable AI development. Although these approaches do not resolve all of the aforementioned challenges, they offer practical guidance for integrating trust-related considerations into clinical AI projects.
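As a purely illustrative aid (not part of the published study), the kind of subdomain prioritization tally described above can be sketched as follows. The response values, subdomain labels, and the summarize_ratings helper are hypothetical, and the 3-point scale is assumed to be coded 1 (low priority) to 3 (high priority); the actual survey instrument and analysis may differ.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical coding of the 3-point prioritization scale:
# 1 = low priority, 2 = medium priority, 3 = high priority.
# Each response maps an ALTAI subdomain to one rating; these
# example values are invented for illustration only.
responses = [
    {"technical accuracy": 3, "data governance": 3, "privacy": 3, "societal impact": 1},
    {"technical accuracy": 3, "data governance": 2, "privacy": 3, "societal impact": 2},
    {"technical accuracy": 2, "data governance": 3, "privacy": 3, "societal impact": 1},
]

def summarize_ratings(responses):
    """Average each subdomain's ratings and sort from highest to lowest."""
    ratings = defaultdict(list)
    for response in responses:
        for subdomain, score in response.items():
            ratings[subdomain].append(score)
    return sorted(
        ((subdomain, mean(scores)) for subdomain, scores in ratings.items()),
        key=lambda item: item[1],
        reverse=True,
    )

for subdomain, avg in summarize_ratings(responses):
    print(f"{subdomain}: {avg:.2f}")
```

A ranking produced this way would make prioritization gaps, such as the low standing of societal impact relative to technical accuracy, immediately visible for multidisciplinary review.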
Journal article. JMIR Publications Inc; April 29, 2026. Volume 28, article e85433.