A visual workspace for shadcn/ui components. Browse, inspect, edit, build from scratch, and export production-ready code without touching a config file. Built solo in three weeks with Claude as design and engineering partner.
Self-initiated · Designer + Developer
Component Lab is a visual playground for shadcn/ui. You browse components with live previews, select any part and edit its styles visually, toggle variants, test breakpoints, build new components from scratch, and export production-ready code without touching a config file.
The tool is live at comp-lab.netlify.app.
I keep saying designers with engineering skills have an edge. Component Lab was my attempt to act on that rather than just say it, built solo with Claude as design and engineering partner.

shadcn/ui is the default component library for most Next.js projects now. But there's no good visual layer for it. You read the docs, clone the source, or figure it out by trial and error.
The gap this creates is real. Designers spec components they don't fully understand. Developers jump between docs, Figma, and their IDE to prototype something that should take minutes. And anyone who wants to build something custom is essentially starting from scratch with no guardrails.
Component Lab puts everything in one place. Think Figma meets Webflow, but built specifically for shadcn. Browse any component and see exactly how it's built. Edit it visually. Build something new from scratch with the right structure enforced at every step. Export code that's ready to use.
Before any code was written, the major scope questions were resolved in writing on day one. How should things work? What should the export look like? Where should the tool draw the line between helping and over-engineering? Each decision was captured with a reason behind it, not just the answer.
This included a proper briefing stage: the full stack was decided upfront and documented.
Having these locked in before building meant the AI wasn't making infrastructure guesses mid-session.
The repo itself was set up deliberately from the start. A CLAUDE.md file gave the AI persistent context about the project: what it was, how it should behave, what conventions to follow. Relevant MCPs were installed and configured so the AI had direct access to the right tools. Skill scripts the AI could read from were in place to guide code quality and pattern consistency from the first line written. This isn't something most people think about when building with AI, but it makes a significant difference. The quality of the output is directly tied to the quality of the environment you set up around it.
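The project's actual CLAUDE.md isn't reproduced here, but as a rough illustration of the idea (contents hypothetical), such a file might look like:

```markdown
# Component Lab — project context

## What this is
A visual workspace for browsing, editing, and building shadcn/ui components.

## Conventions
- TypeScript, strict mode; no `any`.
- Build all UI from shadcn/ui primitives — don't hand-roll components.
- Tailwind utility classes only; no inline styles.

## Process
- Every task maps to a Linear ticket; mirror decisions back as comments.
- Run the Playwright suite before calling any task done.
```

The point isn't the specific rules. It's that the AI reads this context at the start of every session, so conventions don't have to be re-explained or rediscovered.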
Decisions and rationale lived in Notion and as comments on Linear tickets before any implementation began. Notion held the master plan and milestone structure. Linear held the individual tickets with acceptance criteria written upfront.
The point wasn't that the decisions were set in stone. It was that having them resolved kept the build focused. The same questions wouldn't get re-litigated later when I was tired or the AI's context had shifted.
The whole first milestone shipped across two evening sessions. That sounds fast, and it was. But it covered a lot of ground: the component browser, the main interface, the style class analyser, the accessibility checker, and the semantic HTML checker. It moved quickly because the planning had done its job. A milestone that ships fast is usually one that was correctly scoped.

The planning wasn't just about what to build. It extended into how every session would run.
The AI followed a consistent process for every task.
Feedback and course corrections were mirrored back into Linear as comments rather than just existing in the conversation. This kept the project history honest and meant context wasn't lost between sessions.
Lessons were treated as first-class outputs. If something went wrong or a better approach was found, it got documented in Notion and the AI's memory was updated so the same mistake wouldn't surface again. Not just "noted and moved on" but genuinely captured and carried forward. Every session started a little smarter than the last.
This kind of structured workflow isn't the default when building with AI. Most people treat it as a fast back-and-forth with no paper trail. The overhead of doing it properly is low, and the payoff in consistency, quality, and context retention is significant.
Component Lab is built entirely with shadcn/ui components. That wasn't the lazy choice. It was a deliberate one.
The people using this tool are already working with shadcn. If the interface around the components looks and feels like something they've never seen before, there's a layer of friction before they've even started. But if it feels immediately familiar, that friction disappears. They can focus on the components they came to explore rather than learning a new UI.
There's a design principle behind this: people spend most of their time in other tools, so they bring those expectations with them. Meeting users where they already are is almost always the right call.
It also meant the app didn't fight itself visually. The components being inspected are the focus. The interface holding them should get out of the way.

The first milestone built the inspection layer. Load any component, see how it's put together, understand the styles, check accessibility in real time.
The second added editing. Select a part, change its styles visually, save and export. Simple approach: read the source, apply the changes, write it back. It worked. A Playwright test suite covering the core flows also landed here, something that would prove its value later.
The third was the from-scratch builder. A guided tool that kept you on the right path, making it hard to build something structurally wrong. It had its own editor and its own underlying structure.
The from-scratch builder was generating code based on the latest version of the component library. But the component files actually installed in the project had never been updated since day one. They were a much older version of the same library. One side was writing modern code. The other was reading old code. Neither was tested against the other, so it went unnoticed. It surfaced during an audit before the fourth milestone, and it's what forced everything to stop for a full reset.

The fourth milestone started with a simple backlog task: let users copy an existing component as a starting point. Straightforward. Half a day.
Scoping it out revealed a problem. The copy would need to open in the visual builder, which meant the tool needed to be able to read any component's source and understand it. Which meant knowing what patterns to expect across all 46 components. Which meant auditing every one of them. Which meant the component files needed to be up to date first. Which meant upgrading two core dependencies before anything else.
The right call was to follow it rather than cut corners. A proper milestone was scoped for the foundation work, and the original task was pushed to a follow-up. Several other backlog items that had been quietly stuck behind the same gap became straightforward once it was resolved.
The lesson: always ask what a task assumes already exists. In hindsight it feels obvious. In the moment, it wasn't.
Before the fourth milestone could start properly, the project needed to get current.
The component files hadn't been updated since the project started. This led to one of the more surprising discoveries of the build: the shadcn library doesn't work like a normal dependency. Updating your project doesn't update your component files. The philosophy is explicit: "the components are your code, not a dependency." You install them once, they land in your codebase, and they're yours. No automatic updates. The trade-off is real: full freedom to modify them, at the cost of remembering to update them manually. That had been missed.
So: upgrade React, upgrade Tailwind CSS, refresh all 46 component files from the live library, and migrate to the latest recommended configuration.
One thing learned the hard way here: don't trust automated migration tools on your own source code. The official Tailwind migration tool handled the config changes correctly, then went through 60 source files and made over 30 incorrect changes. It can't tell the difference between a style class name and the same word used elsewhere in the code for a completely different purpose. Use these tools for config only, and manually check anything they touch beyond that.
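To make the failure mode concrete, here is a toy version of what a blind find-and-replace migration does (the replacement logic is a sketch, not the real tool's; Tailwind v4 does rename the `shadow` utility to `shadow-sm`):

```typescript
// Naive migration, sketched: rename every occurrence of the word
// "shadow" to "shadow-sm", as a pure-text tool might.
const naiveMigrate = (source: string): string =>
  source.replace(/\bshadow\b/g, "shadow-sm");

const source = [
  'const shadow = computeShadow(elevation);',            // a variable, not a class
  '<div className="shadow rounded-lg" style={shadow}>',  // only this "shadow" is a class
].join("\n");

const migrated = naiveMigrate(source);
// The className rename is correct, but the variable is corrupted into
// `shadow-sm`, which isn't even a valid identifier — the tool can't
// tell a style class from the same word used for something else.
```

This is why the safe scope for such tools is the config file, where every token really is a Tailwind token, and why everything else they touch needs a manual check.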
A second issue surfaced: the production build had been silently broken. It had gone unnoticed because day-to-day development runs on a local server that works differently. Tracing the cause took far longer than fixing it. Production builds should run automatically on every code change, not just at deploy time.
The main engineering challenge of the fourth milestone was building a parser: something that could read any component file, understand its structure, and reproduce it exactly on the way out.
The quality bar was strict. Whatever gets exported has to be byte-for-byte identical to the original. Not roughly the same. Not tidied up. Identical. An automated test enforces this: every component gets put through the parser and back out again on every code change, and if anything differs, it fails.
The reason this mattered: if the tool quietly reformats things on export, even slightly, it creates problems. Future updates become messier. Hand-edits the user already made get silently changed. The tool starts working against the user rather than for them.
The rule was simple: if the parser encounters something it doesn't fully understand, preserve it exactly and move on. Never reformat. Never guess. This meant the parser could always make progress on new patterns without risking what it already handled.
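A toy version of that rule and its round-trip guarantee might look like this (all names hypothetical; the real parser handles full component source, not single lines):

```typescript
// Toy parser: it only "understands" className attributes. Every other
// line becomes an opaque node that stores its raw text untouched.
type AstNode =
  | { kind: "className"; indent: string; value: string; raw: string }
  | { kind: "opaque"; raw: string };

const parse = (source: string): AstNode[] =>
  source.split("\n").map((line) => {
    const m = line.match(/^(\s*)className="([^"]*)"$/);
    return m
      ? { kind: "className", indent: m[1], value: m[2], raw: line }
      : { kind: "opaque", raw: line }; // don't understand it? preserve it exactly
  });

// Serialisation re-emits the stored raw text, so anything the tool
// didn't edit survives byte-for-byte.
const serialize = (nodes: AstNode[]): string =>
  nodes.map((n) => n.raw).join("\n");

const source = `const Button = () => (
  <button
    className="px-4 py-2"
  >
    Click
  </button>
)`;

// The round-trip invariant the automated test enforces:
if (serialize(parse(source)) !== source) {
  throw new Error("round-trip changed the source");
}
```

An edit would rewrite the `raw` of the one node it targets and nothing else, which is what makes "identical unless deliberately changed" cheap to guarantee.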
All 46 components pass the test. Getting the thinking right upfront meant the implementation was mostly straightforward.

After the parser, the export flow, the automated test, and the deletion of the old editing code were all shipped, I called the milestone done.
I opened the app for the first time in hours and found the problem straight away. Parts of the interface were reading from the new parser. Others were still reading from a frozen snapshot of component data that predated all of this work. One panel showed eight size options for a component. The toolbar showed four. Two different answers to the same question, visible at the same time.
When I opened the from-scratch builder, it got worse. The two sides of the app weren't using the same interface. One had a navigation panel the other didn't. Different layouts, different labels for the same controls. Two different products doing the same job.
The instinct from the AI side was to patch the visible inconsistencies and call it done. I pushed back: the from-scratch version was gospel, and the inspect version needed to match it, not the other way around.
This is exactly why reading what the AI produces carefully matters, rather than accepting edits blindly. The difference between quality output that scales well and a rush job that only looks like it works is often a single moment of critical judgment. Left unchecked, the shortcut would have shipped. It would have looked fine. And it would have fallen apart the next time something needed to change.
The honest diagnosis: I had built the interesting parts and quietly skipped the tedious part. Getting the from-scratch builder's editor onto the same underlying structure as the new parser. The milestone was called "Unified Editor." I had built a new engine. I hadn't unified anything.
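In miniature, the seam looked something like this (names and values hypothetical, mirroring the eight-versus-four size mismatch above):

```typescript
// The bug: two surfaces answering the same question from two sources.
const frozenSnapshot = {
  // captured early in the project, never regenerated
  sizes: ["xs", "sm", "default", "lg", "xl", "icon", "wide", "full"],
};

// Stand-in for the new parser reading what the source declares today.
const parseComponent = () => ({
  sizes: ["sm", "default", "lg", "icon"],
});

// Before: one panel reads the snapshot, the toolbar reads the parser —
// eight options in one place, four in the other, visible at once.
const panelBefore = frozenSnapshot.sizes;
const toolbarBefore = parseComponent().sizes;

// After: every surface derives from the single parsed model,
// so they cannot disagree.
const model = parseComponent();
const panelAfter = model.sizes;
const toolbarAfter = model.sizes;
```

Unification means deleting the snapshot path entirely, not patching the visible symptoms until the two sources happen to agree.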
A proper fix was scoped out and booked as its own session the next morning. The done criterion was simple: a specific search across the codebase had to return zero results. Not "feels done." Not "tests pass." Zero. Either the old approach still had active users in the code, or it didn't.
The fix shipped as five PRs in a four-hour session. Every shortcut that came up was off the table because the check would still fail. When the criterion is mechanical, there's nothing to argue about. The milestone was actually done.
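A done criterion like that can be made mechanical in a few lines. This sketch scans source text for any remaining reference to the legacy path; the identifier and file names are hypothetical stand-ins for the real search term:

```typescript
// Mechanical "done" check: zero hits means the old approach truly
// has no remaining users in the code.
const findLegacyUsages = (
  files: Record<string, string>,
  pattern: RegExp,
): string[] =>
  Object.entries(files)
    .filter(([, source]) => pattern.test(source))
    .map(([path]) => path);

const files = {
  "src/inspect/panel.tsx": 'import { parse } from "../parser";',
  "src/builder/editor.tsx": 'import { legacyApplyStyles } from "./legacy";',
};

const offenders = findLegacyUsages(files, /legacyApplyStyles/);
// A non-empty result means the milestone is not done — no judgment
// call required, nothing to argue about.
```

In practice the same check is a one-line grep over the repo, run at the end of the session and, ideally, in CI.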
The lesson: when you're unifying two things, done doesn't mean "I built the new thing." It means "I replaced everything that relied on the old things." If users still see two different surfaces, you've built a third thing alongside the original two.
The production build broke silently. Worth restating from the foundation reset. Automated production builds should run on every code change. If they don't, you find out at the worst possible moment.
The Playwright test suite drifted. The test suite proved its value during a major dependency upgrade, where a clean baseline confirmed nothing had broken. But several tests had rotted in the meantime because they weren't updated when the UI changed. That discipline matters: update the tests in the same change as the UI, not later. Rotted tests are worse than no tests. They create noise exactly when you need a clear signal.
Calling the milestone done before it actually was. Covered above. The pattern: the interesting parts get built first. The tedious parts are where false finishes happen. Define what "done" looks like before you start, not when you think you're finished.
The landing page took too many iterations. The first version used animated wireframe graphics. It looked considered and communicated almost nothing. Screenshots of the actual product worked better. Classic trap for design-led work: more effort on how it looks than how well it does its job.
Scope creep was a constant pull. Every milestone had a moment where adding something else felt reasonable. The milestone structure kept it contained. Without it, things would have bloated and stalled.
Deciding things before building them. The decisions-before-code approach paid for itself throughout. Having a record to refer back to meant no time lost re-litigating the same questions mid-build.
The "don't touch what you don't own" principle. Building the escape valve into the parser from day one (if you don't understand it, preserve it exactly) meant it could always handle new patterns without breaking anything already working. Nothing got stuck.
The way the collaboration actually worked. "AI-assisted" can mean a lot of things. This wasn't a one-shot, accept-the-output mentality. Claude moved fast. I steered, questioned, and pushed back when things were heading in the wrong direction.
The false victory is the clearest example: Claude declared the milestone done; I opened the app and immediately found the seams. There was also a session where work on the compound component renderer spent hours heading in the wrong direction, until a single question about why a layout had unnecessary padding unlocked the whole approach. Moving fast in the wrong direction is still wrong. The value came from each of us, human and AI, doing what we're actually good at.
Design thinking as a constant. UX principles ran throughout the build, not as a layer added at the end.
That thinking happened alongside the engineering, not as an afterthought.
Following the cascade rather than cutting corners. When the half-day task revealed a missing foundation, the short-term slowdown cleared the path for everything that had been stuck behind it.
A lot of designers worry that AI is coming for their jobs. This project is my answer to that. The planning, the structure, the UX principles applied throughout (Hick's Law, Gestalt, Cognitive Load, progressive disclosure): none of that comes from the AI. That's the human layer. It's the difference between something vibe-coded and one-shotted, and something built with purpose that actually holds up. AI makes the execution faster. Design thinking is what makes it worth building in the first place.
A designer with enough patience, the right tools, and a willingness to throw away work can ship a full-stack product with genuine depth. The design background wasn't separate from the engineering decisions. It shaped them. Knowing what makes a component feel right made the quality bar easier to define. Understanding how people process information shaped how the editor was structured. The thinking isn't that different from designing a component library in Figma. The medium is just different.