<aside> ℹ️ This presentation takes place at PAIRS 2026 Online on 17th February 2026 at 09:45 UTC. Registered participants will receive Zoom links to join the session via e-mail.

🎙️ Thread on PAIRS Discussion Server (Discord) (register first)

</aside>

Abstract

The rapid proliferation of AI-based technologies has left legislative bodies, at both national and international levels, seeking to re-situate their development and deployment within ethical governance frameworks (European Commission, 2021). To date, these technologies have been largely led by a small technical elite (Birhane et al., 2022), giving rise to ethical concerns around bias and discrimination, alongside broader risks to the public, who have thus far been excluded from meaningful participation in their development (Ahrweiler et al., 2025). While efforts to develop ethical AI have received significant attention in areas such as Explainable AI, which seeks to render the opaque logics embedded within complex neural networks more transparent and accountable (Gunning et al., 2019), there is growing scepticism that AI technologies reliant on selective datasets and advanced machine learning techniques can truly be transparent, accountable, or ethical (Amoore, 2020).

It is against this backdrop that participatory approaches to developing more ethical AI are gaining traction under the term “Participatory AI” (Ahrweiler et al., 2025). While this emerging space draws on a long tradition of participatory research developed in other fields (Halskov & Hansen, 2015), it also introduces new challenges around engaging wider publics in a domain often defined by technical complexity and intangibility (Pasquale, 2015). Adding to this is the challenge of involving vulnerable groups, whose marginalisation within institutional data-collection systems is often reproduced in public-facing AI systems (Pérez-Escolar & Canet, 2022).

As frameworks for Participatory AI emerge, alongside calls for case studies aligned with pre-agreed values and procedural guidelines to support meaningful analysis (Ahrweiler et al., 2025), the London AI Voices Archive (LAVA) project examines the role of creative practice and methods within this emerging sub-disciplinary field. It asks: (1) what is the value of creative approaches in the development of participatory-led AI technologies, and (2) how can creative approaches be embedded from the early stages within emerging frameworks of Participatory AI?

In seeking answers to these questions, the LAVA project employs public-facing design experiments (DiSalvo, 2022) informed by longer traditions of speculative and critical design (Dunne, 1999; Dunne & Raby, 2013), which invite participants to reflect on the erosion of privacy, agency, and representation in the context of AI technologies. Rather than emphasising explainability or transparency, the project works with vulnerable groups to engage directly with AI systems and collectively undertake processes of unpacking and critical reflection (Amoore, 2020). The design experiments bring together structured dialogue (Dugal et al., 2017), participatory mapping (Barbrook-Johnson & Penn, 2022), and photovoice (Wang & Burris, 1997) across different stages of Andrew Ng’s machine learning lifecycle (Ng, 2011).

The study was conducted in collaboration with community organisations across Camberwell, Peckham, and Loughborough Junction in South London, areas marked by digital exclusion (Holmes & Burgess, 2022). It stands as a proof of concept for the feasibility of creative, participatory methods in engaging vulnerable groups affected by digital exclusion in AI development, demonstrating how such approaches might help address longstanding concerns with participatory processes and contribute to more inclusive ethical AI practices (Birhane et al., 2022).