UX Design · AI Tool · Design Sprint · 2026

CAN-SIMPLAN

An AI-assisted zoning pre-screen tool that helps municipal planners evaluate development proposals with confidence, ensuring every decision is traceable, defensible, and human.

Explore interactive prototype

Role

Sole UX Designer

Team

Cross-disciplinary sprint team

Duration

4-day design sprint

Outcome

Prototype ranked first by a panel of industry reviewers

CAN-SIMPLAN UI — proposal detail view showing domain scores, simulation flags, and planner actions

The Problem

Municipal planners don't need faster AI. They need AI they can defend.

Many tools promise efficiency: summaries, risk flags, recommendations. But when outputs arrive without clear reasoning, they introduce a new risk. In a governance context, a decision must be traceable because it must hold up under questioning.

The model might be accurate, but accuracy isn't enough. I need to be able to stand up in a council meeting and explain why.

— Municipal planner, research insight

Design challenge: How might we design an AI planning tool where every output a planner acts on is something they can explain and stand behind?


Key Insight

The AI isn't the problem. The opacity is.

Early research conversations revealed a consistent pattern: the barrier to AI adoption wasn't skepticism about model accuracy; it was the inability to explain any individual output to a colleague, a councillor, or a community member asking difficult questions.

This reframed the design problem entirely. The goal wasn't to make AI smarter or faster. It was to make it legible, and to keep a human hand visible at every step.

The original framing

How do we speed up the proposal review process with AI?

The reframe

How might we design AI-assisted decisions that a planner can clearly explain, trace, and defend?

CAN-SIMPLAN system overview: four transparent stages from intake to decision
System overview: Four transparent stages, each passing through a human checkpoint before the workflow can advance.

Product Strategy

CAN-SIMPLAN is not a decision-maker. It's a structured oversight layer.

CAN-SIMPLAN is designed around one principle: the AI surfaces analysis, but the planner makes the call.

Every interface decision was tested against that constraint.

I owned the end-to-end interface, from login through audit trail, as the sole UX designer on a four-day multidisciplinary sprint team.

CAN-SIMPLAN proposal dashboard — queue, model confidence trend, and review velocity
The Proposal Dashboard: What needs attention before the planner opens a single case.
CAN-SIMPLAN proposal detail — domain scorecards, simulation flags, planner action panel
Proposal Detail: Domain scores, simulation flags, and planner actions in one scrollable workspace.

Design Decisions

Four patterns that make accountability tangible.

01

Blocking flags enforce engagement

When a concern is raised, the workflow pauses. The planner must record a rationale before proceeding.

Tradeoff: This introduces friction by design because it turns a passive step into an explicit decision.
02

Confidence appears before interpretation

Model confidence, version, and timestamp are visible at the top of each proposal.

Tradeoff: Uncertainty is made visible rather than implied.
03

Plain language over technical precision

Each simulation flag includes a plain-language explanation written for planners, not data scientists. The planner’s role is judgment. The interface’s role is translation.

Tradeoff: Some nuance is reduced, but detailed information remains accessible one level deeper. The interface prioritizes clarity at the point of decision.
04

The audit trail stays in the workflow

Every action is logged directly within the proposal view rather than hidden in an administrative panel.

Tradeoff: This adds density, but keeps accountability tied to the moment of decision rather than separating it into a secondary layer.
View Figma designs
CAN-SIMPLAN transparency panel — scoring assumptions and data source attribution
Transparency Panel: Every scoring assumption and data source — visible, attributed, traceable.
CAN-SIMPLAN audit trail and planner decision panel
Audit Trail: A sequential, immutable log of every planner action alongside a decision panel requiring written rationale before submission.

Outcome

A decision-support tool that strengthens a planner’s ability to explain what they decide.

The prototype was completed in four days and ranked first among our cohort by a panel of industry reviewers. The interface spans eight distinct screens, from proposal intake through audit trail, each gated by a mandatory human checkpoint before the workflow can advance. The sprint confirmed the central thesis: in civic AI tools, legibility isn't a nice-to-have; it's the condition under which the tool gets used at all.

The project reinforced a consistent pattern: in civic contexts, adoption depends less on performance than on whether the system can be understood. When reasoning is visible, trust becomes possible.

Interactive prototype: proposal queue, detail view, flag override, and audit trail flows.

Reflection

Designing for governance reframed friction.

UX design often treats friction as something to eliminate. Here, it was necessary: it slowed the workflow at critical moments, ensuring decisions were considered rather than passed through.

It also required translating in both directions—making system logic legible to planners, while shaping planner input into something the system could support.

If the project continued, the next layer of work would pick up where the sprint left off. The most interesting question CAN-SIMPLAN raises isn't "How do we make AI faster?" It's "How do we make accountability a feature of the system itself?"

Next Project

Aurafact: Archiving the acoustic memory of place