
GoldBerry: UK AI Security Institute Through Seven Lenses

aisi.gov.uk · Sunday 29 March 2026
CMR Score: 3.5 / 10
Scope: This analysis audits AISI's public-facing website text only — the homepage and About page as rendered on 29 March 2026. It does not claim to assess AISI's internal research practice, unpublished stakeholder engagement, or work not visible in these public materials. The epistemic profile identified here is of the public frame, not necessarily of the institution in its entirety.

The UK AI Security Institute is the first state-backed organisation dedicated to advanced AI safety research. Its public communications shape how policymakers, researchers, and citizens understand what AI risk means and who is equipped to address it.

We ran GoldBerry on aisi.gov.uk — the homepage and About page. The result is a structural diagnosis of the epistemic frame within which the UK's primary AI safety body presents itself to the world.

Epistemically narrow in its public framing despite substantial institutional resources and genuine research output. Safety is defined within one disciplinary frame. The public is an object, never a subject. — GoldBerry, CMR 3.5/10

Corrected Framing

AISI presents itself as the authoritative source on AI risk — "rigorous", "scientific", "the first state-backed organisation". The framing is one of technical mastery applied to national security and public safety. The implicit message: deep technical expertise from the organisations that built these systems is a prerequisite for assessing their safety.

This is a specific epistemic claim. It says: AI risk is primarily a technical problem, solvable by technical research, conducted by people from the technical organisations that created the risk. The public's role is to be "kept safe." The government's role is to "understand" — through AISI's lens.

What this framing excludes is not accidental. It is structural.

Power-Knowledge Audit

WHO PRODUCED THIS: AISI communications team, within the Department for Science, Innovation and Technology (DSIT). Institutional voice of UK government AI policy.

FOR WHOM: Three audiences — AI researchers they want to recruit, policymakers they advise, and AI companies they collaborate with. The public is addressed only as a passive beneficiary ("keep the public safe").

SERVING WHAT INTERESTS: AISI serves the UK government's positioning in the global AI race. The language of safety provides the democratic justification for massive state investment in AI infrastructure (£1.5bn compute, £66m/year). The talent pipeline runs from industry (OpenAI, DeepMind) through AISI back to influence over industry. The page presents proximity to major AI firms as a credential, while leaving the governance risks of that proximity largely unaddressed.

Suffixscape Audit

"Rigorous AI research to enable advanced AI governance"
"Enable" claims agency without locating it. Enable whom? To govern what? By whose authority? The word asserts a causal relationship (research → governance) without specifying the mechanism, the democratic mandate, or who the governed are.
"Governments have a critical role to play in ensuring advanced AI is safe, secure and beneficial"
"Safe, secure and beneficial" — three adjectives doing enormous epistemic work. Safe for whom? Secure against what threats, defined by whom? Beneficial by whose measure? Each word claims completeness without specifying a framework. This is epistemic inflation.
"Building the world's leading understanding of advanced AI risks and solutions"
"World's leading" is epistemic inflation at its most direct. Leading by what metric? Assessed by whom? This is an unauditable claim presented as fact.
"We designed AISI like a startup in the government"
"Startup" imports Silicon Valley organisational mythology into a government body. It claims agility, disruption, speed — values in tension with democratic accountability, public consultation, and deliberative governance.
"Keep the public safe"
The public is the grammatical object, never the subject. AISI keeps; the public is kept. The grammar structurally removes public agency from the safety equation.

What's Missing — All Seven Lenses

🌿 Lens 1 — Indigenous Knowledge: ABSENT

AI systems are trained on data that systematically under-represents Indigenous knowledge traditions — oral histories, relational epistemologies, land-based knowledge systems. AISI's research agenda does not mention this. For Indigenous communities, the "risk" of AI is not misalignment or cyber misuse — it is the latest iteration of epistemic colonialism, where dominant knowledge systems are encoded as universal while situated knowledges are classified as missing data.

📜 Lens 2 — Deep History: ABSENT

AISI presents AI safety as a contemporary technical challenge with no historical depth. There is no reference to the history of technological governance — nuclear weapons regulation, pharmaceutical safety, environmental protection — or to what those histories teach about regulatory capture, the limitations of self-regulation by creators, and the decades-long timescales of institutional learning. The movement of personnel between AI companies and their regulator has historical precedents in finance, pharma, and energy. These precedents — and their lessons about regulatory capture — are not referenced.

🌍 Lens 3 — Cross-Cultural Wisdom: ABSENT

AISI's framing is entirely Anglophone and Western. The leadership pipeline is OpenAI → DeepMind → Oxford → GCHQ. There is no reference to how China, India, Brazil, Nigeria, or the African Union think about AI risk — despite these being the populations most likely to be affected by AI systems deployed at scale. There is no mention of the EU AI Act, UNESCO's AI Ethics Recommendation, or any non-Anglophone governance framework. AI safety as defined by AISI is the view from London, San Francisco, and Oxford.

🔬 Lens 4 — Scientific Evidence: PARTIALLY PRESENT

AISI does conduct and publish research. The Frontier AI Trends Report, the persuasion study in Science, and the Inspect evaluation platform represent genuine scientific output. This is the strongest lens in AISI's profile. However, the evidence base is narrow: model capabilities, benchmark performance, red-teaming results. There is no social science, no ethnography of affected communities, no epidemiological approach to AI harms.

🎨 Lens 5 — Artistic Perception: ABSENT

The website is clean, professional, and entirely propositional. There is no sense of what AI risk feels like — for a worker displaced by automation, for a community subjected to algorithmic policing, for a patient misdiagnosed by a clinical AI. The lived experience of AI harm is not on the page. Art, narrative, testimony, affect — the forms of knowledge that carry what propositions cannot — are entirely absent.

🚀 Lens 6 — Future Modelling: PARTIALLY PRESENT

AISI's research agenda includes forward-looking areas: autonomy, human influence, societal resilience. The Frontier AI Trends Report is explicitly about trajectory. This is the second strongest lens. However, the future modelling is restricted to model capabilities — what AI systems might be able to do. It does not model the second-order social effects: labour market restructuring, democratic erosion, concentration of power, or differential impact on Global South populations.

🤝 Lens 7 — Marginalised Voices: THE CENTRAL ABSENCE

AISI's homepage and About page contain zero references to people affected by AI deployment, civil society organisations, trade unions or worker representatives, disability communities, racial justice organisations, or Global South perspectives. The "public" appears once — as the object of "keep the public safe." In the public-facing materials reviewed, the public does not speak, advise, participate, or co-design. AISI's safety, as presented on these pages, is defined for people, not with people. This is the kind of structural exclusion the seven lenses are designed to make visible.

Synthesis

The seven lenses reveal that AISI's public framing is strong in one lens (scientific evidence), partial in a second (future modelling), and silent on the remaining five: Indigenous knowledge, deep history, cross-cultural wisdom, artistic perception, and marginalised voices.

This is not a criticism of AISI's research quality — the technical work may be excellent. It is a diagnosis of the epistemic frame within which that research is presented publicly. The frame determines what counts as risk, who counts as expert, and who counts as affected. Everything outside the frame is invisible.

Solution Pathways

a) ESTABLISH A PUBLIC VOICE. Create a public advisory panel — not of technologists, but of affected communities. Workers in sectors undergoing AI-driven automation. Disability advocates. Racial justice organisations. Global South researchers. Give them a structural role, not a consultation exercise.
b) EXPAND THE DISCIPLINARY FRAME. Commission social science research alongside computer science. Ethnography of AI deployment effects. Historical analysis of technology governance. The £15m+ grant fund should include non-CS research.
c) SURFACE THE CROSS-CULTURAL DIMENSION. Publish a position on how AISI's work relates to non-Western AI governance frameworks — the EU AI Act, UNESCO's AI Ethics Recommendation, African Union AI strategy. Acknowledge the Anglophone frame explicitly.
d) ADD EXPERIENTIAL KNOWLEDGE. Commission testimony from people affected by AI systems — automated welfare decisions, algorithmic hiring, predictive policing, content moderation at scale. Make the human experience of AI risk visible on the homepage.
e) HISTORICISE THE SAFETY CLAIM. Publish a short institutional analysis: "What did we learn from nuclear regulation, pharmaceutical safety, and financial regulation about the risks of industry-regulator proximity?" Address the governance question proactively with historical depth.

CMR Score: 3.5 / 10

Epistemically narrow in its public framing despite substantial institutional resources and genuine research output. Lens 4 (Scientific Evidence) is present and credible within its disciplinary scope. Lens 6 (Future Modelling) is partially present for model capabilities. Lenses 1, 2, 3, 5, and 7 are not visible in the public materials reviewed.

AISI scores higher than the BBC homepage (2.5/10) because it has genuine, published research output and some forward-looking capability modelling. The additional 1.0 point reflects real substance in Lenses 4 and 6. The remaining gap reflects the absence — in the public frame — of the other five lenses.

What GoldBerry cannot supply: the actual voices of communities affected by AI deployment, the social science research AISI has not commissioned, the cross-cultural governance frameworks AISI has not engaged with. These require institutional decisions, not framework analysis.

Next Step Beyond GoldBerry

AISI has the resources, the mandate, and the talent to present a broader epistemic frame than what currently appears on its public pages. Whether the internal work is wider than the public framing suggests is not something this analysis can determine. What the seven lenses can identify is that the public-facing epistemic frame — the one that shapes external understanding of what AI risk means — is narrow. Widening it would require deliberate, structural choices.

The first choice is the simplest: who is in the room? If the room contains only computer scientists, national security professionals, and AI company alumni, the definition of "safety" will reflect their worldview. Add historians, anthropologists, disability advocates, trade unionists, Global South researchers, and affected communities — and the definition changes. Not because the technical work was wrong, but because it was incomplete.

The framework points toward the room. The room is not in the framework.
