Design as Reflection: Rethinking Design Audits in the Age of AI
For most of my career, a design audit has been something you do to a piece of work.
You pause.
You step back.
You review what’s on the screen against a known set of rules.
Heuristics.
Accessibility guidelines.
Best practices.
Peer feedback, if you’re lucky and time allows.
And then you move on.
This approach has served us well. It has helped raise quality, catch obvious issues, and create a shared language for critique. But it also comes with an unspoken assumption: that audits are something we run late, once enough of the design has settled to be judged.
Lately, I’ve been questioning that assumption.
Not because audits aren’t valuable.
But because the way we think about them might be limiting how much value they can actually create.
The problem with late-stage audits
When audits happen near the end of a process, they tend to become defensive exercises.
We’re no longer asking, “What’s the right thing to build?”
We’re asking, “Is this good enough to ship?”
At that point:
- constraints are locked in
- technical decisions are already made
- timelines are tight
- and feedback often becomes negotiation rather than learning
Edge cases are noted, but deferred.
Cognitive load is acknowledged, but accepted as “necessary complexity.”
User friction is logged, but rarely rethought.
The audit still happens, but its power is diminished.
This isn’t a failure of designers or teams. It’s a structural issue in how audits are positioned in the workflow.
A shift in how I think about audits
Recently, I’ve started treating audits less as checkpoints and more as moments of reflection.
Not something you do once.
But something you return to, lightly, often, and early.
This is where AI has quietly changed my own practice.
Not as a generator of UI.
Not as a replacement for judgement.
But as a thinking partner that helps me interrogate the work while it’s still fluid.
When framed correctly, AI can act like a persistent design reviewer who never gets tired of asking uncomfortable questions.
AI as a pressure-testing tool, not an answer machine
The most useful thing AI does in a design audit context isn’t giving solutions.
It’s applying pressure.
Pressure on assumptions.
Pressure on flows we’ve looked at too long.
Pressure on decisions that feel “obvious” only because we made them weeks ago.
When you provide clear context about:
- who the user is
- what they’re trying to achieve
- the constraints they’re operating under
- and where failure would matter most
AI becomes surprisingly effective at surfacing things like:
- hidden cognitive load
- moments where users are forced to remember too much
- unclear transitions between steps
- states where the system knows something the user doesn’t
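One way to make this framing repeatable is to treat the context as structured input and the pressure-testing questions as a fixed ask. The sketch below is illustrative only: the field names, the example user, and the helper are hypothetical, not from any particular AI tool or API.

```python
from dataclasses import dataclass, field


@dataclass
class AuditContext:
    """Context for an AI-assisted design audit.

    All field names are illustrative, not drawn from any real tool.
    """
    user: str
    goal: str
    constraints: list = field(default_factory=list)
    failure_risks: list = field(default_factory=list)


def build_audit_prompt(ctx: AuditContext) -> str:
    """Assemble a pressure-testing prompt: ask for problems, not solutions."""
    lines = [
        f"You are auditing a design. The user is: {ctx.user}.",
        f"They are trying to: {ctx.goal}.",
        "Constraints they operate under:",
        *[f"- {c}" for c in ctx.constraints],
        "Where failure would matter most:",
        *[f"- {r}" for r in ctx.failure_risks],
        "Do not propose solutions. Instead, surface:",
        "- hidden cognitive load",
        "- moments where the user is forced to remember too much",
        "- unclear transitions between steps",
        "- states where the system knows something the user doesn't",
    ]
    return "\n".join(lines)


# Hypothetical example: a user, goal, and constraints invented for illustration.
ctx = AuditContext(
    user="a warehouse operator on a shared terminal",
    goal="reassign a delayed shipment to another dock",
    constraints=["gloved hands, noisy floor", "interactions must take under 10 seconds"],
    failure_risks=["shipment routed to the wrong dock"],
)
prompt = build_audit_prompt(ctx)
```

The point of the template is the last instruction: the model is asked to apply pressure, not to redesign. The context fields do the real work; the fixed question list keeps the audit honest across sessions.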
These aren’t insights we’re incapable of finding ourselves.
They’re insights we often don’t have the time or distance to uncover consistently.
From quality gate to design habit
This has led me to a reframing that’s stuck with me.
Design audits don’t need to be a late-stage quality gate.
They can be a continuous design habit.
A way of regularly stepping out of “making mode” and into “thinking mode.”
Instead of asking, “Does this meet the bar?”, we ask:
- Where could this break?
- Who might struggle here?
- What complexity are we quietly outsourcing to the user?
- What assumptions are we making about knowledge, context, or behaviour?
Run early, these questions shape the design.
Run late, they merely critique it.
Slowing down the right moments
There’s a lot of conversation about AI making design faster.
That’s rarely the benefit I care about.
The real value, for me, is that AI helps slow down the right moments. The moments where momentum might otherwise carry us past something important.
Used well, AI introduces friction where friction is healthy.
It creates space to think.
To reflect.
To notice.
And in complex systems, especially enterprise environments, that pause can be the difference between a design that technically works and one that actually supports people doing real work under real constraints.
Design as reflection
I’ve come to see this approach less as “AI-assisted auditing” and more as design as reflection.
A practice of regularly holding a mirror up to the work and asking what it reveals.
Not to seek validation.
Not to chase perfection.
But to remain intentional.
Because good design isn’t just about solving problems.
It’s about noticing them early enough that we still have the freedom to respond.
Closing thought
AI won’t replace design judgement.
But it can sharpen it.
If we treat audits not as something we perform at the end, but as a way of thinking throughout the process, we move from designing outputs to designing understanding.
And that, at least for me, feels like a meaningful shift.