The UK AI Security Institute is the first state-backed organisation dedicated to advanced AI safety research. Its public communications shape how policymakers, researchers, and citizens understand what AI risk means and who is equipped to address it.
We ran GoldBerry on aisi.gov.uk — the homepage and About page. The result is a structural diagnosis of the epistemic frame within which the UK's primary AI safety body presents itself to the world.
Epistemically narrow in its public framing despite substantial institutional resources and genuine research output. Safety is defined within one disciplinary frame. The public is an object, never a subject. — GoldBerry, CMR 3.5/10
Corrected Framing
AISI presents itself as the authoritative source on AI risk — "rigorous", "scientific", "the first state-backed organisation". The framing is one of technical mastery applied to national security and public safety. The implicit message: deep technical expertise from the organisations that built these systems is a prerequisite for assessing their safety.
This is a specific epistemic claim. It says: AI risk is primarily a technical problem, solvable by technical research, conducted by people from the technical organisations that created the risk. The public's role is to be "kept safe." The government's role is to "understand" — through AISI's lens.
What this framing excludes is not accidental. It is structural.
Power-Knowledge Audit
WHO PRODUCED THIS: AISI communications team, within the Department for Science, Innovation and Technology (DSIT). Institutional voice of UK government AI policy.
FOR WHOM: Three audiences — AI researchers AISI wants to recruit, policymakers it advises, and AI companies it collaborates with. The public is addressed only as a passive beneficiary ("keep the public safe").
SERVING WHAT INTERESTS: AISI serves the UK government's positioning in the global AI race. The language of safety provides the democratic justification for massive state investment in AI infrastructure (£1.5bn compute, £66m/year). The talent pipeline runs from industry (OpenAI, DeepMind) through AISI back to influence over industry. The page presents proximity to major AI firms as a credential, while leaving the governance risks of that proximity largely unaddressed.
Suffixscape Audit
What's Missing — All Seven Lenses
🌿 Lens 1 — Indigenous Knowledge: ABSENT
AI systems are trained on data that systematically under-represents Indigenous knowledge traditions — oral histories, relational epistemologies, land-based knowledge systems. AISI's research agenda does not mention this. For Indigenous communities, the "risk" of AI is not misalignment or cyber misuse — it is the latest iteration of epistemic colonialism, where dominant knowledge systems are encoded as universal while situated knowledges are classified as missing data.
📜 Lens 2 — Deep History: ABSENT
AISI presents AI safety as a contemporary technical challenge with no historical depth. There is no reference to the history of technological governance — nuclear weapons regulation, pharmaceutical safety, environmental protection — or to what those histories teach about regulatory capture, the limits of self-regulation by creators, and the decades-long timescales of institutional learning. The movement of personnel between AI companies and their regulator has precedents in finance, pharma, and energy; those precedents, and their lessons, go unmentioned.
🌍 Lens 3 — Cross-Cultural Wisdom: ABSENT
AISI's framing is entirely Anglophone and Western. The leadership pipeline is OpenAI → DeepMind → Oxford → GCHQ. There is no reference to how China, India, Brazil, Nigeria, or the African Union think about AI risk — despite these being the populations most likely to be affected by AI systems deployed at scale. There is no mention of the EU AI Act, UNESCO's AI Ethics Recommendation, or any non-Anglophone governance framework. AI safety as defined by AISI is the view from London, San Francisco, and Oxford.
🔬 Lens 4 — Scientific Evidence: PARTIALLY PRESENT
AISI does conduct and publish research. The Frontier AI Trends Report, the persuasion study in Science, and the Inspect evaluation platform represent genuine scientific output. This is the strongest lens in AISI's profile. However, the evidence base is narrow: model capabilities, benchmark performance, red-teaming results. There is no social science, no ethnography of affected communities, no epidemiological approach to AI harms.
🎨 Lens 5 — Artistic Perception: ABSENT
The website is clean, professional, and entirely propositional. There is no sense of what AI risk feels like — for a worker displaced by automation, for a community subjected to algorithmic policing, for a patient misdiagnosed by a clinical AI. The lived experience of AI harm is not on the page. Art, narrative, testimony, affect — the forms of knowledge that carry what propositions cannot — are entirely absent.
🚀 Lens 6 — Future Modelling: PARTIALLY PRESENT
AISI's research agenda includes forward-looking areas: autonomy, human influence, societal resilience. The Frontier AI Trends Report is explicitly about trajectory. This is the second strongest lens. However, the future modelling is restricted to model capabilities — what AI systems might be able to do. It does not model the second-order social effects: labour market restructuring, democratic erosion, concentration of power, or differential impact on Global South populations.
🤝 Lens 7 — Marginalised Voices: THE CENTRAL ABSENCE
AISI's homepage and About page contain zero references to people affected by AI deployment, civil society organisations, trade unions or worker representatives, disability communities, racial justice organisations, or Global South perspectives. The "public" appears once — as the object of "keep the public safe." In the public-facing materials reviewed, the public does not speak, advise, participate, or co-design. AISI's safety, as presented on these pages, is defined for people, not with people. This is the kind of structural exclusion the seven lenses are designed to make visible.
Synthesis
The seven lenses reveal that AISI's public framing is:
- EPISTEMICALLY NARROW: safety defined within one disciplinary frame (computer science + national security)
- CULTURALLY INSULAR: entirely Anglophone, entirely Western, entirely from the technical-policy pipeline
- HISTORICALLY ROOTLESS: no engagement with the history of technology regulation or regulatory capture
- DEMOCRATICALLY HOLLOW: the public is an object, never a subject. No civil society, no affected communities
- EXPERIENTIALLY ABSENT: no testimony, no narrative, no sense of what AI harm feels like
- FUTURE-MODELLING RESTRICTED: models what AI can do, not what AI does to people
This is not a criticism of AISI's research quality — the technical work may be excellent. It is a diagnosis of the epistemic frame within which that research is presented publicly. The frame determines what counts as risk, who counts as expert, and who counts as affected. Everything outside the frame is invisible.
Solution Pathways
CMR Score: 3.5 / 10
Epistemically narrow in its public framing despite substantial institutional resources and genuine research output. Lens 4 (Scientific Evidence) is present and credible within its disciplinary scope. Lens 6 (Future Modelling) is partially present for model capabilities. Lenses 1, 2, 3, 5, and 7 are not visible in the public materials reviewed.
AISI scores higher than the BBC homepage (2.5/10) because it has genuine, published research output and some forward-looking capability modelling. The additional 1.0 point reflects real substance in Lenses 4 and 6. The remaining gap reflects the absence — in the public frame — of the other five lenses.
What GoldBerry cannot supply: the actual voices of communities affected by AI deployment, the social science research AISI has not commissioned, the cross-cultural governance frameworks AISI has not engaged with. These require institutional decisions, not framework analysis.
Next Step Beyond GoldBerry
AISI has the resources, the mandate, and the talent to present a broader epistemic frame than what currently appears on its public pages. Whether the internal work is wider than the public framing suggests is not something this analysis can determine. What the seven lenses can identify is that the public-facing epistemic frame — the one that shapes external understanding of what AI risk means — is narrow. Widening it would require deliberate, structural choices.
The first choice is the simplest: who is in the room? If the room contains only computer scientists, national security professionals, and AI company alumni, the definition of "safety" will reflect their worldview. Add historians, anthropologists, disability advocates, trade unionists, Global South researchers, and affected communities — and the definition changes. Not because the technical work was wrong, but because it was incomplete.
The framework points toward the room. The room is not in the framework.