<aside> ℹ️ This presentation will take place at PAIRS 2026 Online on 17th February 2026 at 09:45 UTC. Registered participants will receive Zoom links to join the session via e-mail.

🎙️ Thread on PAIRS Discussion Server (Discord) (register first)

</aside>

Abstract

Public agencies in the UK are increasingly encouraged to take advantage of AI to improve public services, while acting as a role model for using AI responsibly and meeting public expectations around equity, fairness and transparency. We developed a novel process for AI assurance: the AI Social Readiness Assessment. It measures public confidence and trust in specific AI tools being used in UK public services, and provides easy-to-understand guidance on how to address public concerns. The Social Readiness Assessment moves beyond the current landscape of services for testing and evaluating AI systems to fill a critical gap: ensuring that the public’s views on the social acceptability of risks, and their expectations around safe deployment, are considered alongside technical assurance and legal compliance. It follows a structured, facilitated deliberative polling approach that can be replicated for new tools and use cases. All information is provided during the session through videos, created in advance and based on a technical assessment of the AI tool performed by an independent expert. The expert evaluates the tool against a standardised set of risk criteria covering both tool-specific and deployment risks, using information provided by the tool developer, such as evaluation results from pilots, or performance metrics and technical documentation from the Algorithmic Transparency Recording Standard.

We piloted the process with two tools: Consult, an AI tool to support the analysis of public consultation responses by central government, and Magic Notes, a transcription and summarisation tool for social workers in local government. For each pilot we ran 18 deliberative workshops, half of them online and half in person across three locations (Manchester, Newcastle and London). Overall, 281 people completed the Social Readiness Assessment. Our results suggest that the public moderate their views on risk depending on how a tool is implemented and on the mitigations developers have put in place. For the Consult tool, concern about risks such as Inaccuracy, Data Privacy and Unfair Outcomes dropped by 30% compared with concern about the same risks for AI in general. In contrast, concerns about Model Manipulation and Environmental Impact remained high because participants did not consider the developer’s current mitigations sufficient. A majority of people (60%) felt positive about the Consult use case, compared with 18% who felt negative, but they also had strong expectations about how safeguards such as human oversight should be implemented by public agencies to make them effective. Overall, we found that taking part in the Social Readiness Assessment led to a 24% rise in individuals’ sense of agency over decisions related to AI, a 12% increase in trust that the public sector would use AI responsibly, and a 16% increase in confidence that public sector AI would be used to benefit society. Social assurance could be an effective mechanism for providing actionable guidance on the overall acceptability of specific AI use cases and on key priorities for risk mitigation or safeguards, as well as helping to build trust between institutions and the public.