Parvez Kose · Design System
The kit — five surfaces, one voice

Every surface uses the same tokens. The palette shifts from the immersive dark (--im-bg) to the classic bone-paper (--classic-bg), but the type, spacing, and chrome rules are identical. One bold signature per surface — shader, DotGrid, drop-cap, highlighted word, ViewTransition morph.
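The shared-token idea above can be sketched as a single scale that both palettes spread from. This is a hypothetical illustration: only `--im-bg` and `--classic-bg` are named in the kit, and every value below is invented for the sketch.

```typescript
// Hypothetical token map. Only the `--im-bg` / `--classic-bg` names come
// from the kit; all values here are illustrative placeholders.
const shared = {
  fontSerif: '"IBM Plex Serif", serif', // display face used on the Blog surface
  fontMono: '"Fira Code", monospace',   // pullquote / metadata face
  space: [4, 8, 16, 32, 64],            // px — one spacing scale for all surfaces
};

// The palette shifts; type and spacing do not.
const immersive = { ...shared, "--im-bg": "#0b0b0e", "--im-fg": "#f2efe8" };
const classic = { ...shared, "--classic-bg": "#f4efe4", "--classic-fg": "#1a1a1a" };
```

Because both themes spread from `shared`, a surface can swap backgrounds without ever diverging on type or spacing — the "one voice" constraint is structural, not a style-guide promise.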

01 · Immersive · / · hero · shader · corner brackets · caret · philosophy toggle · footer tagline
Parvez Kose Classic
Parvez Kose
Designing interfaces for intelligence
[+] Design Philosophy
Agentic systems. Generative UI. Visual interpretability.
02 · Writing · /writing · Index layout · featured essay · catalog rows · category chips · mono metadata
Parvez Kose
— Essays, notes, and longer pieces

Writing on generative interfaces and the craft of AI-native software.

A slowly growing archive of what I think about while building. Most pieces are short. A few are long. Cross-posted selectively to Substack and Medium.

All · Generative UI · Interpretability · Craft · Notes — 14 pieces
Featured essay

Poetry in the shell, rigor in the stack.

On why the outer surface of an AI product should look as authored as its model — and what it costs to keep that promise when the model keeps moving.

Apr 2026 · 18 min read · Craft
What lives
under the surface
should be visible.
— from the essay
Archive · Sort: newest ↓
  • 2026 · 03

    The A2UI protocol, read carefully.

    A line-by-line tour of the new agent-to-UI spec. What it gets right about render safety. Where it still assumes chat.

    Generative UI · 12 min
  • 2026 · 02

    Three probes for visual interpretability.

    Layout confidence. Token-cost ribbon. Attention-head heatmap. Lightweight surfaces that answer the questions users are already asking silently.

    Interpretability · 9 min
  • 2026 · 01

    Against the statistical average of the web.

    Why every AI-shell site looks the same, why it probably shouldn't, and the small set of decisions that change it.

    Craft · 6 min
  • 2025 · 12

    Notes: render-safe generative components.

    Short. Four rules I keep returning to when letting a model emit UI.

    Notes · 4 min
  • 2025 · 11

    The hero that isn't a hero.

    On removing the marketing-shaped object from the top of every page and putting the authored thing there instead.

    Craft · 7 min
More at designlogic.substack.com · © 2026 · Parvez Kose
03 · Blog · essay · Plex Serif H1 · drop cap · terracotta rule · Fira Code pullquote
Parvez Kose ← Back to index
Essay · 2026 · 6 min read

On visual interpretability, or: what the model is actually doing.

An argument for exposing the internals of generative systems — not as debugging surface, but as first-class UI.

Most AI interfaces hide their reasoning. A prompt goes in; an answer comes out; the user is asked to trust. That trust is expensive when the system is wrong in small, structural ways — and we are starting to learn how often that happens.

I have spent the last year building generative components on top of the A2UI protocol, and the thing I keep returning to is render-time transparency. What the model picked. What it almost picked. What it couldn't see. These are not debugging concerns. They are the UI.

What lives under the surface should be visible — not in a developer panel, but in the interface itself.

Three probes

There are three lightweight surfaces I've been fitting into AI-native products: a layout-confidence overlay, a token-cost ribbon, and an attention-head heatmap. Each earns its place because it answers a question the user is already asking, silently, about the system.
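The three probes share a shape: each names the silent user question it answers and reads a small render-time payload. A minimal sketch of that shape — the probe names come from the essay, but every type, field, and function below is hypothetical:

```typescript
// Hypothetical common interface for the three probes. Nothing here is
// the actual product API; it illustrates "one question per surface".
type ProbeKind = "layout-confidence" | "token-cost" | "attention-heatmap";

interface ProbeSurface {
  kind: ProbeKind;
  // The question the user is already silently asking.
  question: string;
  // Render-time payload the probe summarizes; shape is illustrative.
  read(payload: { tokens: number; confidence: number }): string;
}

const tokenCostRibbon: ProbeSurface = {
  kind: "token-cost",
  question: "What did this answer cost?",
  read: ({ tokens }) => `${tokens} tokens`,
};

const layoutConfidence: ProbeSurface = {
  kind: "layout-confidence",
  question: "How sure is the model about this layout?",
  read: ({ confidence }) => `${Math.round(confidence * 100)}% confident`,
};
```

Keeping each probe to a one-line answer is the point: these are first-class UI surfaces, not developer panels, so they must stay legible at a glance.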

Generative UI · Interpretability · A2UI · Design engineering
04 · About · /about · Blurred shader backdrop · two-column editorial · single golden highlight · sidebar lists
Parvez Kose
— About

I design interfaces for intelligence, not templates for screens.

I'm a Staff Software Engineer working at the seam of design and AI. For the last decade I've built generative UI systems, visual interpretability tools, and editorial surfaces that refuse the statistical-average aesthetic of the web.

I write at Design Logic and publish longer research on Medium. My current obsessions are the A2UI protocol, render-safe generative components, and why the shell of a product should look as authored as its model.

Generative UI · Agentic systems · A2UI · Design engineering · Interpretability
05 · Work · /work · Gallery · 2-up (up to 4) · ViewTransition morph on CTA · GSAP / R3F / Framer Motion ready
Parvez Kose
— Selected work · 2024–2026

Projects I've built and shipped.

motion · GSAP timelines
3D · R3F shader hero
gesture · Framer Motion layout
<Card /> renderSafe() token: 0.04¢ a2ui.emit <Chart /> schema✓
A2UI · live
Design engineering · 2025–present

Generative UI prototype

Declarative component generation grounded in the A2UI protocol. Render-safe emission, token-cost telemetry, and a small set of components the model can compose without going off-script.

React · A2UI · GSAP
Open ⟶
layer 14 · head 7
Attention probe
Research · 2024

Visual interpretability kit

A small set of probes that surface what the model is looking at — down to the attention head. Layout-confidence overlays, token-cost ribbon, and a heatmap that makes "why did it pick that" legible.

R3F · D3 · Probes
Open ⟶
<Card /> renderSafe() token: 0.04¢ a2ui.emit <Chart /> schema✓
— Case study · Design engineering · 2025–present

Generative UI prototype

A declarative generative-UI system built on the A2UI protocol. The model emits components, not markup — and each emission passes a render-safety schema before it hits the DOM. Token cost is surfaced as a ribbon so the user can see what an answer is worth.
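"The model emits components, not markup" can be sketched as a gate between emission and mount: an allow-list of components, each with a schema, checked before anything touches the DOM. This is a hedged illustration, not the actual A2UI surface — the component names and schemas are invented for the sketch:

```typescript
// Hypothetical render-safety gate. The real system's 22-component
// library and schemas are not public; these two stand in for them.
type Emission = { component: string; props: Record<string, unknown> };
type Verdict = { ok: true } | { ok: false; reason: string };

const schemas: Record<string, (props: Record<string, unknown>) => boolean> = {
  Card: (p) => typeof p.title === "string",
  Chart: (p) => Array.isArray(p.series),
};

function renderSafe(e: Emission): Verdict {
  const check = schemas[e.component];
  // Anything outside the allow-list never reaches the DOM.
  if (!check) return { ok: false, reason: `unknown component <${e.component} />` };
  if (!check(e.props)) return { ok: false, reason: `schema failed for <${e.component} />` };
  return { ok: true }; // only now may the emission be mounted
}
```

A rejected emission would fall back to something inert (plain text, a retry) rather than raw markup — which is what makes each emission auditable after the fact.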

React 18 · A2UI · GSAP · Edge runtime

Role

  • Design engineer · Lead
  • Team · 3 FE · 2 ML
  • Span · 18 months

Problem

Most generative surfaces return freeform markup. That's cheap to ship and impossible to trust. We needed a surface the model couldn't break — and a trail you could audit after the fact.

Outcome

A 22-component library the model composes against. 99.4% render-safe rate across 40k production emissions. Average per-answer cost down 62% once the token ribbon was visible to end users.

layer 14 · head 7
Attention probe
— Case study · Research · 2024

Visual interpretability kit

A lightweight set of probes that make the internals of a generative model legible at UI time — not in a developer drawer, but in the interface itself. Three surfaces: layout-confidence overlay, token-cost ribbon, attention-head heatmap.

React Three Fiber · D3 · Probes · WebGL

Role

  • Researcher · Solo
  • Output · Paper · OSS
  • Span · 9 months

Problem

Users trust AI output without a way to see why the model said what it said. Trust without transparency is expensive — and corrodes fast when the system is wrong in small, structural ways.

Outcome

Three probes shipping in two production products. A Circuits-style thread on attention steering. The heatmap probe lifted self-reported answer trust by 31% in a user study (n = 84).