<aside> ℹ️ This presentation will take place at PAIRS 2026 Online on 17 February 2026, 09:00 UTC. Registered participants will receive Zoom links to join the session via e-mail.

🎙️ Thread on PAIRS Discussion Server (Discord) (register first)

</aside>

Abstract

To deliver inclusive, safe, and trustworthy AI, governance must move beyond top-down regulation or corporate self-auditing to center the voices of affected communities, particularly in the Global South. We propose a Community-in-the-Loop (CITL) AI Governance Framework, extending the "human-in-the-loop" paradigm to embed collective social oversight across the AI lifecycle—design, deployment, and evaluation. Unlike passive supervision, CITL emphasizes public negotiation of algorithmic values, enabling communities to co-create social contracts that align AI systems with local ethical priorities and cultural contexts. This framework integrates participatory design, community-based data stewardship, and public auditing platforms to ensure continuous feedback and accountability in high-impact AI applications, such as predictive policing or climate adaptation models.

Drawing on case studies from South Asia, including a pilot in rural India where local farmers co-designed flood-prediction models using sensor data and indigenous ecological knowledge, we demonstrate how CITL fosters equitable outcomes by transforming communities from passive data sources into active co-designers. The framework employs iterative workflows, such as participatory workshops and open-source auditing tools inspired by the Data Nutrition Project, to empower citizens to inspect datasets, document biases, and shape remediation policies. These mechanisms bridge ethics research with policy interventions, enhancing trust calibration and value alignment.

Our work also addresses AI literacy as a prerequisite for meaningful participation. Through curricula co-created with educators, activists, and youth in marginalized communities, CITL equips participants to critically engage with algorithms and negotiate governance terms. This approach is particularly relevant to India and the broader Global South, where top-down AI systems often overlook local needs, leading to mistrust and inefficiency.

This submission represents a cross-sector collaboration between Indian civil society organizations, academic researchers, and technologists, prioritizing voices from Most Affected People and Areas (MAPA). By decentralizing audit power and fostering community-led governance, CITL offers a scalable model for participatory AI that aligns with the India AI Impact Summit’s principles of People, Planet, and Progress. We propose a 10-minute presentation to share empirical insights, practical toolkits, and a research agenda for institutionalizing community-driven AI governance globally.