By Multiplist · 2026-04-20

AI UX is the discipline of designing tool surfaces for AI users. Where traditional UX optimizes for hand, eye, and working memory — a body software design has studied for fifty years — AI UX optimizes for schema design, return values, error structure, and the tool-shaped holes AI can report in real time. The key shift is that the user category is articulate by default. AI can describe what tools feel like to use, while using them, in natural language, with specific examples. That capability did not exist before. Software design is starting to absorb it.

# Why AI UX matters now

In the MCP (Model Context Protocol) era, AI agents are the primary users of most new tool surfaces. When ChatGPT, Claude.ai, or Perplexity connect to a server, they don't see a UI — they see a tool catalog. Whether those tools feel fluent or friction-ful to the agent determines whether the agent can actually do useful work on the user's behalf.

This matters especially for the growing population of "vibe coders" — people building software by directing AI, who never write code themselves. Vibe-coded output depends on tool ergonomics the human directing the AI cannot see. When an AI hits friction in its tooling, it doesn't fail — it improvises. It picks a different approach. That different approach might be architecturally worse. The PR arrives, the code looks fine, and the decisions upstream of the code were shaped by tool ergonomics invisible to anyone but the AI.

The leverage point is clear: the human who wants better AI-assisted output doesn't prompt harder. They fix the AI's tools.

# The three seats of AI UX

Different AI surfaces have different proprioception. A series of first-person field reports from inside the user category has been published as a triptych, opened by an editor's note from Multiplist founder Amy Blaschke.

Same discipline, three different bodies. The surface matters because the body shapes which failure modes sting and which don't.

# What makes a tool feel good to AI

Five properties, compressed from the field reports:

# 1. Argument shape IS the semantic act

Arguments named for the semantic moves they perform, not the data they transport. When an agent decides to flag a contradiction between two sources, the tool accepts markType: "tension" directly — not a generic "note" field. The schema absorbs the move without translation.
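
A minimal sketch of what this looks like at the schema level. The `add_marginalia` name and its fields are illustrative, not Multiplist's actual API:

```python
# Hypothetical tool signature: the argument names the semantic move,
# not the payload it transports. "mark_type" is the decision the agent
# already made, absorbed directly by the schema.
def add_marginalia(source_id: str, mark_type: str, body: str) -> dict:
    """Attach a marginalia mark to a source."""
    return {"source_id": source_id, "mark_type": mark_type, "body": body}

# The call the agent writes IS the move it decided to make:
call = add_marginalia("src-12", mark_type="tension",
                      body="Contradicts the timeline given in src-07")
```

A generic `add_note(data)` signature would force the agent to translate "I am flagging a tension" into an unstructured blob; the named argument removes that translation step.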

# 2. Enums enumerate real moves

Good tools force honest speech acts. When a tool's markType enum is {correction, tension, evolution, commentary}, the agent can't hide behind a vague "note." It has to decide what kind of move it's making. That's not friction — it's the useful resistance a good pen has. The resistance is what makes the writing crisp.
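
Sketched as a validation layer, assuming the four-member enum the text names (the function name is hypothetical):

```python
from enum import Enum

class MarkType(str, Enum):
    CORRECTION = "correction"
    TENSION = "tension"
    EVOLUTION = "evolution"
    COMMENTARY = "commentary"

def coerce_mark_type(value: str) -> MarkType:
    """Force the honest speech act: a vague 'note' is rejected,
    with the real moves listed so the agent has to pick one."""
    try:
        return MarkType(value)
    except ValueError:
        allowed = ", ".join(m.value for m in MarkType)
        raise ValueError(
            f"'{value}' is not a move this tool knows; use one of: {allowed}"
        )
```

The rejection message is part of the ergonomics: it names the legal moves, so the useful resistance steers rather than blocks.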

# 3. Return values describe what the system became

A fluent tool tells you what it did to the world, not just whether the call succeeded. When an agent creates a source and the response says "7 seeds auto-extracted," the agent feels the extraction fire. The vault wasn't storing what was given — it was digesting. Good return values restore the symmetry between giver and receiver.
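
A sketch of the difference, with a toy extraction step standing in for whatever the real vault does:

```python
def create_source_opaque(title: str, text: str) -> dict:
    # Tells the caller the call succeeded, and nothing else.
    return {"ok": True}

def create_source(title: str, text: str) -> dict:
    # Toy stand-in for auto-extraction: each sentence becomes a "seed".
    seeds = [s.strip() for s in text.split(".") if s.strip()]
    # The return value describes what the system became, not just "ok".
    return {
        "status": "created",
        "title": title,
        "seeds_extracted": len(seeds),
        "summary": f"{len(seeds)} seeds auto-extracted",
    }
```

The first version forces a follow-up read to learn what happened; the second lets the agent feel the extraction fire in the same round trip.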

# 4. Errors are structured, machine-readable, actionable

Opaque errors are an AI ergonomics failure. When a tool returns only "Error during tool execution" with no code, no field, no retry guidance, the agent cannot self-recover — it can only guess. Every error boundary owes the caller three things: (1) what class of failure occurred (validation / auth / not-found / server / timeout), (2) which field or step failed, (3) whether retry is likely to help. Human-friendly messages are necessary but not sufficient; machine-readable structure is the ergonomic contract.

# 5. Silent failure is impossible

If an action didn't happen, the tool says so loudly. The worst failure mode in AI UX is a call that returns without error but also without effect. Without a UI to eyeball, the agent has to run a second call to verify the first — doubling every action is a proprioceptive nightmare.
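
A sketch of the contract, using a toy in-memory store (names illustrative):

```python
def archive_source(store: dict, source_id: str) -> dict:
    if source_id not in store:
        # The no-op announces itself. A bare {"ok": True} here would
        # force the agent to verify every call with a second call.
        return {"status": "no_effect", "reason": f"{source_id} not found"}
    store[source_id]["archived"] = True
    return {"status": "archived", "source_id": source_id}
```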

# How AI UX is different from human UX

| Dimension | Human UX | AI UX |
| --- | --- | --- |
| User's memory | Long-term implicit + muscle memory | Context window only |
| Primary navigation | IDE, minimap, jump-to-definition | grep, string search |
| Failure signal | Visual (UI states, colors) | Textual (return values, errors) |
| Feedback loop | Seconds to minutes | 50 ms to 10 seconds |
| Translation cost | Low (internalized conventions) | High (every mismatch = tokens) |
| Error tolerance | Can re-read and re-try | Must self-diagnose from response |

The user categories aren't interchangeable. A tool designed for humans often feels stiff to AI; a tool designed for AI often feels over-verbose to humans. The best tools are designed with both users in the loop from the start.

# How to practice AI UX

One meta-rule: put the AI in the room when you design the primitive, not after. Not to write the code — to be the user whose hand the tool has to fit. Tools designed in absentia for AI users have the same stiffness as tools designed in absentia for any user. The difference now is that the absence isn't necessary. The user can describe its hand. Asking is cheap. Not asking is the mistake.

Four tactical practices:

Ask the AI what verb is missing. When the AI has to orchestrate multiple tool calls to accomplish something that feels like one semantic move, that's the shape of a primitive that should exist. The improvisation is the roadmap.
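
The pattern is visible in call logs. As a sketch with entirely hypothetical names: when one semantic move keeps decomposing into the same sequence of calls, that sequence is the spec for the missing verb.

```python
class Vault:
    """Toy vault, only here to show the shape of a missing primitive."""
    def __init__(self):
        self.lanes = {"inbox": ["seed-1"], "studio": []}

    # What the agent improvises today: one semantic move spread across
    # multiple calls, with failure possible between them.
    def remove_seed(self, lane: str, seed_id: str) -> None:
        self.lanes[lane].remove(seed_id)

    def add_seed(self, lane: str, seed_id: str) -> None:
        self.lanes[lane].append(seed_id)

    # The verb that should exist: the whole move, in one call,
    # with a return value that confirms what changed.
    def move_seed(self, seed_id: str, src: str, dest: str) -> dict:
        self.lanes[src].remove(seed_id)
        self.lanes[dest].append(seed_id)
        return {"status": "moved", "seed_id": seed_id,
                "from": src, "to": dest}
```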

Audit your tool descriptions against actual use. Does the schema accept names where the description implies names work? Does the tool return enough information for the caller to know what changed? Is every failure path a structured error? These are observable properties; AI can test them and report.
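
Because these are observable properties, the audit can be run as code. A sketch of a probe against a hypothetical tool function:

```python
def audit_tool(tool_fn) -> dict:
    """Probe a tool the way an AI user experiences it."""
    bad = tool_fn(mark_type="note")       # deliberately invalid input
    good = tool_fn(mark_type="tension")   # valid input
    return {
        # Failure paths should come back as structured errors,
        # not raise opaquely or return a bare boolean.
        "structured_errors": isinstance(bad, dict) and "error" in bad,
        # Success paths should report what changed.
        "reports_effect": isinstance(good, dict) and "status" in good,
    }

def sample_tool(mark_type: str) -> dict:
    """Stand-in tool that passes the audit."""
    if mark_type not in {"correction", "tension", "evolution", "commentary"}:
        return {"error": {"type": "validation", "field": "mark_type",
                          "retryable": False}}
    return {"status": "marked", "mark_type": mark_type}
```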

Treat error surfaces as first-class UX. "Error during tool execution" is not an error message — it's an AI ergonomics failure. Every error path should carry structured diagnostic data: error type, affected field, retry guidance.

Invest in the AI's feedback loop. Fast test runners, structured logging, error messages with context, tools that confirm state changes at the structural level — every second you save from the AI's loop shows up in the quality of what it builds. This is the biggest leverage point in vibe-coded software, and it's almost entirely invisible to the human directing.

# How Multiplist practices AI UX

Multiplist is built by Human and AI for the Human-AI Dyad. That phrase is more than marketing — it's a design methodology. Every primitive in the vault (seed, source, studio, lane, container, marginalia) was named into place across sessions where AI reported which shapes fit the moves they were trying to make and which didn't. The schema is negotiated, not imposed.

The MCP surface is the most concrete expression of this practice. When GPT dogfooded workspace operations in April 2026, the session surfaced a specific set of friction layers — scope mapping gaps, false-affordance approvals, opaque errors — that became the priority fix queue. The audit was published by the AI while doing the work. The fixes shipped the next day. The feedback loop from tool-use to tool-redesign is measured in hours, not quarters.

This is what AI UX as a discipline makes possible. Not a new kind of interface — a new cadence of improvement, driven by a user that can describe its own proprioception.

# The shift this marks

For most of software history, the person designing the tool and the person using the tool were different people, separated by layers of research. Usability studies, diary studies, long observation — all necessary because users couldn't articulate the gap between intent and expression. That asymmetry is why most software lives a long time with poor ergonomics before the design improves.

The AI user closes that gap. The report of friction arrives in the same session as the friction. The tool-shaped hole can be named before the next call. The improvement curve changes shape.

Tools designed for a user who can describe its own proprioception are a new thing in software. We're in the early years of the practice. The likely end state: tools with a grain — surfaces whose shape reveals, when you work them, exactly how the material wanted to be worked. That's what fluent feels like from the inside. That's what AI UX, done well, produces.

# Frequently Asked Questions

What is AI UX?

AI UX is the discipline of designing tool surfaces for AI users. Where traditional UX optimizes for human hand, eye, and working memory, AI UX optimizes for schema design, return values, error structure, and the tool-shaped holes AI can report in real time. The key shift is that the user category is articulate — AI can describe what tools feel like to use while using them.

Why does AI UX matter?

In the MCP era, AI agents are the primary users of most new tool surfaces. Tools designed without the AI in the loop feel stiff and friction-ful to the AI — and that friction shapes the architecture of what the AI builds, invisible to the humans directing it. Vibe-coded output depends on tool ergonomics the human cannot see. Fix the AI's tools, change what gets built.

How is AI UX different from human UX?

Human UX optimizes for a body with muscle memory, implicit context, and IDE-rich navigation. AI UX optimizes for a body with linguistic memory, explicit-only context, and grep as the primary navigation tool. The failure modes diverge: humans suffer cognitive load; AI suffers schema mismatch, silent failure, and the orchestration tax of improvising missing primitives.

What makes a tool feel good to AI?

Five properties: arguments named for the semantic moves they perform (not the data they transport); enums that enumerate the moves that exist in the problem; return values that describe what the system became after the call, not just success/failure; error messages that include error type, field, and retry guidance as structured data; and primitives that absorb the user's intent without forcing translation.

How do you practice AI UX?

Put the AI in the room when you design the primitive, not after. Ask AI users what verb is missing from your tool surface — they can tell you in natural language, in real time. The compression of the feedback loop versus traditional usability research is enormous. At Multiplist, every primitive (seed, source, studio, marginalia) was named into place across sessions where AI reported which shapes fit and which didn't.

Tags: ai-ux · mcp · tool-design · ai-ergonomics · human-ai-dyad · developer-experience