<aside> ℹ️ This presentation will take place at PAIRS 2026 Online on 17th February 2026, 09:45 UTC. Registered participants will receive Zoom links to join the session via e-mail.
Thread on the PAIRS Discussion Server (Discord) (register first)
</aside>
Participatory AI calls for involving stakeholders in AI design, development, evaluation, and deployment to attain more inclusive, transparent, and accountable AI. However, actual implementations of participatory AI remain weakly incentivized by governments, despite appeals from both academia and industry. In this work, we investigate the role of 'participation' in the obligations that the EU AI Act sets out for providers and deployers of high-risk AI systems.

(1) We analyze the gaps between the participation explicitly mentioned in the non-binding Recitals of the AI Act and the legally binding Articles of the Act itself, showing that there are no legally binding requirements for participation beyond informing specified (groups of) people. For example, Recitals 65 and 96, on risk management systems and the fundamental rights impact assessment, suggest "involv[ing] [...] relevant stakeholders". However, neither Article 9 on risk management systems nor Article 27 on the fundamental rights impact assessment mentions any form of participation. Article 95 on voluntary codes of conduct is the only enacting provision that explicitly suggests stakeholder participation. While informing constitutes the most frequently referenced mode of participation, the information provided under these requirements is generally not available to the public, offering little benefit for external collective monitoring. We argue that the AI Act thus represents a missed opportunity to incentivize stakeholder participation in AI design, development, evaluation, and deployment.

(2) Based on these results, we analyze opportunities for participation emerging from the obligations of high-risk AI system providers and deployers (AI Act, Chapter III, Sections 2 and 3). We identify and describe five clusters of obligations with participatory opportunities: risk management, data and data governance, information provision, resilience testing, and impact assessment.
(3) We provide example use cases for each of the identified opportunities for participation and reflect on them. This work contributes to a better understanding of the regulatory demands and practical opportunities associated with participatory AI in the context of the AI Act's obligations for providers and deployers of high-risk AI systems.