Andrey Kormilitzin outlines a new participatory study aimed at making AI take better account of LGBTQI+ people, so that their needs are better met by mental health services.

Historically, LGBTQI+ people have not been well served in terms of their mental health, for a number of reasons including societal stigma, discrimination and a lack of access to appropriate care. Stigma and discrimination have created a hostile environment in which to live, one that can result in chronic stress, anxiety and depression and so negatively affect mental health.

Studies have shown that LGBTQI+ individuals are at a higher risk of developing mental health issues due to their experiences of prejudice, discrimination, and rejection from society. 

The lack of access to appropriate care has been a significant barrier for LGBTQI+ individuals seeking mental health support. Furthermore, homosexuality and gender non-conformity have been often pathologised as a mental illness, resulting in stigma and discrimination within the mental health system, which has led to the misdiagnosis and inadequate treatment of LGBTQI+ individuals by medical professionals.

Mental health difficulties such as self-harm and suicidal distress are more prevalent among LGBTQI+ people. For example, in the UK government’s equality survey, 3% of gay and bisexual men (compared with 0.4% of men in the general UK population) attempted to end their life by suicide in 2013; over 80% of trans-identifying young people have self-harmed at some point in their lives (compared with around 10% in the general population); and 24% of trans people had accessed mental health services in the preceding year.

These marked differences in levels of mental distress are believed to be due, in part, to stressors including victimisation, social isolation and difficulty accessing LGBTQI-affirming care. In the NHS, the specific needs of the LGBTQI+ community can be overlooked because of how services are organised, and consequently many people turn instead to local charities, community groups and peer support groups.

We frequently hear that in the NHS there is potential to leverage technology and data (including data science and AI) to improve how mental health services respond to specific communities’ needs. For LGBTQI+ people, this means we require high-fidelity data on how people identify and on whether or not they access NHS services. At the same time, there is growing evidence that data-driven algorithms deployed in healthcare carry a risk of unintended harm for people belonging to under-represented racial, ethnic, sexual and gender minority communities.

By reviewing existing research, our team found that:

  • Data on sexual orientation and gender identity is rarely recorded, so any estimates of rates of mental distress and illness among LGBTQI+ people will be biased by this missing data
  • Some research found that the way people are asked about their sexual orientation or gender identity – and by whom – is often not affirming and fails to account for their specific needs
  • While there is high quality research, for example, on how people from minority ethnic and racial groups are disadvantaged by biases arising from algorithmic decisions or forecasts, there remains very little research on the same for LGBTQI+ people.
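
The first of these problems can be sketched numerically. The following toy simulation (all rates and proportions here are invented for illustration, not drawn from the studies above) shows how prevalence estimates become biased when disclosure of sexual orientation is itself related to distress:

```python
import random

random.seed(42)

# Hypothetical cohort: LGBTQI+ people have a higher underlying rate of
# mental distress, orientation is recorded only some of the time, and
# distressed LGBTQI+ patients are assumed to disclose half as often
# (i.e. the data is missing not at random).
N = 100_000
cohort = []
for _ in range(N):
    lgbtqi = random.random() < 0.10          # 10% of the cohort is LGBTQI+
    p_distress = 0.30 if lgbtqi else 0.15    # higher underlying distress rate
    distress = random.random() < p_distress
    p_record = 0.40 * (0.5 if (lgbtqi and distress) else 1.0)
    recorded = random.random() < p_record    # is orientation in the record?
    cohort.append((lgbtqi, distress, recorded))

# True distress rate among LGBTQI+ people vs the rate estimated
# naively from records where orientation happens to be present.
true_rate = sum(d for l, d, _ in cohort if l) / sum(l for l, _, _ in cohort)
observed = [d for l, d, r in cohort if l and r]
naive_rate = sum(observed) / len(observed)

print(f"true LGBTQI+ distress rate:       {true_rate:.2f}")
print(f"estimate from recorded data only: {naive_rate:.2f}")
```

Because the people most in distress are the least likely to be recorded, the naive estimate computed from recorded data alone understates the true rate; this is exactly the kind of bias that missing identity data introduces.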

It is therefore vital to address data collection, recording and fidelity issues to specifically account for the needs of LGBTQI+ people to help mitigate the disproportionate rates of mental distress and suffering. At the same time, we want to ensure LGBTQI+ people are treated fairly and equitably when data-driven tools (like AI) are used to inform mental health services and practices.

A fundamental requirement for research in data-driven, LGBTQI-affirming care is that a participatory approach is taken. Participatory design emerged in the field of human-computer interaction around 40 years ago and has parallels with what healthcare research describes as stakeholder, patient and public involvement and co-design.

We want to make sure LGBTQI+ people are front and centre of the process of improving how data is collected and used, so that their experiences and needs are properly reflected in the data, tools and technology used to support their own and the wider community’s mental health.

Our approach has been to launch a stakeholder Delphi process to arrive at a consensus toolkit that can be used in mental healthcare settings. The Delphi process is a structured, iterative research technique developed to achieve consensus among a panel of experts on a particular topic through a series of feedback rounds. Community, technology and research expert participants will drive rounds of consensus building to arrive at a toolkit for sexual orientation- and gender-affirming data collection and reuse, both for research and to improve service provision. We expect the toolkit to help industry, clinicians, the NHS and researchers design, implement and use sensitive characteristic data appropriately, avoiding the mistakes made for other under-represented communities when data-driven technologies have been deployed.
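
As a rough illustration of how such feedback rounds converge, here is a minimal, hypothetical sketch (the rating scale, consensus threshold, revision rule and the panel's ratings are all invented for illustration; real Delphi studies use validated criteria):

```python
import statistics

# Each expert rates a candidate toolkit item on a 1-9 scale; we say an
# item reaches consensus when at least 70% of ratings fall in the 7-9 band.
def has_consensus(ratings, band=(7, 9), threshold=0.70):
    """True if a sufficient share of ratings lies inside the target band."""
    in_band = sum(band[0] <= r <= band[1] for r in ratings)
    return in_band / len(ratings) >= threshold

def feedback_round(ratings):
    """One feedback round: each expert sees the panel median and moves
    their rating one point towards it (a deliberately toy revision rule)."""
    med = statistics.median(ratings)
    return [r + (1 if r < med else -1 if r > med else 0) for r in ratings]

# Invented panel ratings for one candidate item:
ratings = [9, 8, 7, 5, 6, 8, 9, 4, 6, 8]
round_no = 0
while not has_consensus(ratings) and round_no < 5:
    ratings = feedback_round(ratings)
    round_no += 1

print(f"consensus after {round_no} round(s): {has_consensus(ratings)}")
```

The point of the sketch is the structure, not the numbers: ratings are collected, summarised, fed back to the panel, and revised until the pre-agreed consensus criterion is met or a maximum number of rounds is reached.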

Specifically, the toolkit will deliver guidelines on how to structure questions and present ways for people to articulate their sexual orientation and gender identity so that the data is robust and representative. Further, it will provide guidance on how and when such routinely collected data (for example, from a clinical service) can legitimately be reused, and on what processes and safeguards need to be in place to assure communities that data is used only for benevolent purposes, in a way that meets the community’s needs and does not cause unintended harm.
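
To make the idea concrete, one could imagine a record structure along these lines (the field names and design choices here are hypothetical illustrations, not the toolkit's actual specification):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of an affirming identity record. Illustrative design
# choices: gender identity is held separately from any clinical fields,
# free-text self-description is supported alongside categories, and
# declining to answer is an explicit value rather than missing data.
@dataclass
class IdentityRecord:
    sexual_orientation: Optional[str] = None   # None means "never asked"
    gender_identity: Optional[str] = None
    self_described: Optional[str] = None       # free-text self-description
    prefers_not_to_say: bool = False           # explicit decline, not a gap

    def is_disclosed(self) -> bool:
        """Distinguish 'declined to answer' from 'never asked' so that
        downstream reuse does not conflate the two kinds of missingness."""
        return self.prefers_not_to_say or any(
            v is not None
            for v in (self.sexual_orientation,
                      self.gender_identity,
                      self.self_described)
        )

record = IdentityRecord(gender_identity="non-binary")
print(record.is_disclosed())
```

Separating "declined" from "never asked" matters for exactly the missing-data bias discussed earlier: the two cases carry different information and should not be pooled when data is reused for research.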

The Delphi method itself is intended to be reproducible (for example, in territories outside the UK), and the resulting toolkit will be open and freely available, to ensure that data collection, and the mental health support it informs, is as relevant and responsive as possible. It will also facilitate participatory development of future bespoke LGBTQI+-adapted data collection, harmonisation and use for data-driven AI applications specifically in mental healthcare settings.

Dr Andrey Kormilitzin is Senior Research Scientist in the Department of Psychiatry at the University of Oxford. Read more in Nature.