Summary
Abi’s Corpus note, inspired by Sarah Cridland’s Experimentation Elite keynote, explores why AI needs context, how plausible answers can mislead, and why human scepticism still has to stay in control.
Description
This Corpus note captures Abi’s biggest takeaway from Sarah Cridland’s Experimentation Elite keynote: AI does not replace thinking. It exposes it.
Using a murder-story framing, the piece shows how a detective can have evidence, motive and opportunity, yet still reach the wrong conclusion when critical context is missing. That becomes the central AI lesson: AI can only work with the case file it is given. It does not automatically know your intent, history, constraints, hidden evidence or previous decisions. When context is missing, it fills gaps, and fluent guessing is still guessing.
Abi then pushes into the more dangerous problem: not just missing context, but using AI to prove what you already think. The piece uses Sarah’s “hallucinated gamekeeper” example to show how leading prompts, plausible details and confirmation bias can combine into something that sounds convincing but is not evidence. As the piece puts it: plausible is not proof, and confirmation bias now has autocomplete.
The argument is not anti-AI. AI can make work better: faster drafts, better brainstorming, cleaner summaries, less grunt work and more room for judgement. But it is the scalpel, not the surgeon. The human still needs to define the problem, provide trusted sources, ban gap-filling, ask for uncertainty, verify the evidence and keep training their own brain.
Topics
- Why AI needs context to be useful
- How missing context changes the answer
- Why fluent output can still be wrong
- Hidden knowledge, internal research and “unknown knowns”
- Why experts need to make their thinking visible
- How AI can reinforce confirmation bias
- Why plausible answers are not proof
- AI hallucination and leading prompts
- Experimentation discipline as a model for responsible AI use
- AI as a thinking amplifier, not a replacement for scepticism
- Why the human should remain in control
Best for
UX researchers, experimentation teams, analysts, AI leads, digital strategists, product teams, content strategists
Background
This piece is part of Abi’s Corpus work on AI, context and better digital decision-making. It extends the Corpus focus on evidence and interpretation into day-to-day AI use: the question is not just what AI can produce, but what assumptions, gaps and biases are sitting underneath the output.
It also connects to the wider Corpus view that trust depends on proof, coherence and operational discipline. AI can accelerate useful work, but only when the human provides context, constraints and scepticism. Without that, the machine does not create clarity. It scales whatever thinking was already there, including the bad bits.

About The Author: Abi Hough
Founder UU3 / WeAreCorpus
Abi Hough is the founder of UU3 and WeAreCorpus. Through UU3, she works across UX research, optimisation, audits and digital strategy. Through Corpus, she explores the upstream web: the trust, proof, signals and contradictions that shape how humans and machines understand organisations before anyone reaches a website.