
BACKGROUND: Parkinson disease (PD) is the fastest-growing neurodegenerative disorder in the world, with prevalence expected to exceed 12 million by 2040, posing significant health care and societal challenges. Artificial intelligence (AI) systems and wearable sensors hold potential for PD diagnosis, personalized symptom monitoring, and progression prediction. Nonetheless, ethical AI adoption rests on several core principles, including user trust, transparency, fairness, and human oversight.

OBJECTIVE: This study aimed to explore and synthesize the perspectives of diverse stakeholders, including individuals living with PD, health care professionals, AI experts, and bioethicists, to guide the development of AI-driven digital health solutions that emphasize transparency, data security, fairness, and bias mitigation while ensuring robust human oversight. These efforts are part of the broader Artificial Intelligence-Based Parkinson's Disease Risk Assessment and Prognosis (AI-PROGNOSIS) European project, dedicated to advancing ethical and effective AI applications in PD diagnosis and management.

METHODS: An exploratory qualitative approach was used, drawing on 2 datasets constructed from cocreation workshops that engaged key stakeholders with diverse expertise, ensuring a broad range of perspectives and enriching the thematic analysis. A total of 24 participants took part in the workshops: 11 (46%) people with PD, 6 (25%) health care professionals, 3 (13%) AI technical experts, 1 (4%) bioethics expert, and 3 (13%) facilitators. Guided by a semistructured protocol, the discussions centered on trust, fairness, explainability, autonomy, and the psychological impact of AI in PD care.

RESULTS: Thematic analysis of the cocreation workshop transcripts identified 5 main themes, each explored through corresponding subthemes. AI trust and security (theme 1) focused on data safety and the accuracy and reliability of AI systems. AI transparency and education (theme 2) emphasized the need for educational initiatives and for transparent, explainable AI technologies. AI bias (theme 3) emerged as a critical theme, addressing issues of bias and fairness and the need for equitable access to AI-driven health care solutions. Human oversight (theme 4) stressed the significance of AI-human collaboration and the essential role of human review in AI processes. Finally, AI's psychological impact (theme 5) examined the emotional impact of AI on patients and how AI is perceived in the context of PD care.

CONCLUSIONS: Our findings underline the importance of implementing robust security measures, developing transparent and explainable AI models, reinforcing bias mitigation strategies and equitable access to treatment, integrating human oversight, and considering the psychological impact of AI-assisted health care. These insights provide actionable guidance for developing trustworthy and effective AI-driven solutions for digital PD diagnosis and management.

Original publication

DOI

10.2196/73710

Type

Journal article

Journal

J Med Internet Res

Publication Date

06/08/2025

Volume

27

Keywords

Parkinson disease management, advanced care strategies, artificial intelligence, assessment, cocreation, digital health care solutions, disease risk, prognosis, stakeholder insights, trust in AI systems, Parkinson Disease, Humans, Artificial Intelligence, Qualitative Research, Trust, Stakeholder Participation