An AI-assisted zoning pre-screen tool that helps municipal planners evaluate development proposals with confidence, ensuring every decision is traceable, defensible, and human.
Many tools promise efficiency: summaries, risk flags, recommendations. But when outputs arrive without clear reasoning, they introduce a new risk. In a governance context, a decision must be traceable because it must hold up under questioning.
The model might be accurate but accuracy isn't enough. I need to be able to stand up in a council meeting and explain why.
— Municipal planner, research insight

Design challenge: How might we design an AI planning tool where every output a planner acts on is something they can explain and stand behind?
Early research conversations revealed a consistent pattern: the barrier to AI adoption wasn’t skepticism about model accuracy; it was the inability to explain any individual output to a colleague, a councillor, or a community member asking difficult questions.
This reframed the design problem entirely. The goal wasn't to make AI smarter or faster. It was to make it legible, and to keep a human hand visible at every step.
The original framing
How do we speed up the proposal review process with AI?
The reframe
How might we design AI-assisted decisions that a planner can clearly explain, trace, and defend?
CAN-SIMPLAN is designed around one principle: the AI surfaces analysis but the planner makes the call.
Every interface decision was tested against that constraint.
I owned the end-to-end interface, from login through audit trail, as the sole UX designer on a four-day multidisciplinary sprint team.
When a concern is raised, the workflow pauses. The planner must record a rationale before proceeding.
Model confidence, version, and timestamp are visible at the top of each proposal.
Each simulation flag includes a plain-language explanation written for planners, not data scientists. The planner’s role is judgment. The interface’s role is translation.
Every action is logged directly within the proposal view rather than hidden in an administrative panel.
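The checkpoint-and-audit pattern described above can be sketched in code. This is a minimal illustration, not the project's implementation: the class names (`ProposalReview`, `AuditEntry`), fields, and flag names are all hypothetical, chosen only to show how a mandatory rationale can gate workflow advancement while every action lands in an inline audit log stamped with the model version and timestamp.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    action: str
    actor: str
    rationale: str
    model_version: str
    timestamp: str

@dataclass
class ProposalReview:
    proposal_id: str
    model_version: str
    flags: list[str]  # concerns raised by the simulation, in plain language
    audit_log: list[AuditEntry] = field(default_factory=list)
    resolved: set[str] = field(default_factory=set)

    def record_decision(self, actor: str, flag: str, rationale: str) -> None:
        # The human checkpoint: an empty rationale blocks the workflow.
        if not rationale.strip():
            raise ValueError("A written rationale is required before proceeding.")
        self.resolved.add(flag)
        # Every action is logged with the proposal, not in a separate panel.
        self.audit_log.append(AuditEntry(
            action=f"resolved:{flag}",
            actor=actor,
            rationale=rationale,
            model_version=self.model_version,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

    def can_advance(self) -> bool:
        # The workflow advances only once every flag has a human rationale.
        return all(f in self.resolved for f in self.flags)
```

The design choice the sketch makes visible: advancement is a derived property of the audit trail, so there is no code path where a flagged proposal moves forward without a recorded human judgment attached to it.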
A decision-support tool that strengthens a planner’s ability to explain what they decide.
The prototype was completed in four days and selected as the winning project amongst our cohort. The interface covers eight distinct screens, from proposal intake through audit trail, each passing through a mandatory human checkpoint before the workflow can advance. The sprint confirmed the central thesis: in civic AI tools, legibility isn't a nice-to-have, it's the condition under which the tool gets used at all.
The project reinforced a consistent pattern: in civic contexts, adoption depends less on performance than on whether the system can be understood. When reasoning is visible, trust becomes possible.
UX design often treats friction as something to eliminate. Here, friction was necessary: it slowed the workflow at critical moments so that decisions were considered rather than passed through.
It also required translating in both directions—making system logic legible to planners, while shaping planner input into something the system could support.
If the project continued, the next layer of work would include:
The most interesting question CAN-SIMPLAN raises isn’t “How do we make AI faster?” It’s “How do we make accountability a feature of the system itself?”