Bitovi Blog - UX and UI design, JavaScript and Front-end development

Figma's MCP Just Gave AI Agents 'Write' Access. A First Look at What It Can Do.

Written by Ali Shouman | March 25, 2026

Figma recently announced that AI agents can now write directly to Figma's canvas through their MCP server. It's a big deal. For designers and developers who've struggled to keep code and designs in sync, the idea of an agent that can generate components the right way, apply variables, and work within your design system is the kind of thing that sounds too good to be true.

So I tested it.

As part of Bitovi's Figma partnership, I've been getting early access to the tool and putting it through a structured set of test cases, focusing on meaningful scenarios: building components from scratch against a design system, creating a feature screen using linked components, and restructuring output from Figma Make into something production-ready.

Here's what I found.

How to get set up

The tool is available through Figma's MCP server, which works with any MCP-compatible client. That includes Claude Code, Cursor, Copilot in VS Code, Codex, and others. To get started:

  • Open your MCP client of choice and connect the Figma MCP server
  • Use the use_figma tool to start issuing design instructions directly from your agent
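To make that first step concrete, most MCP clients accept a JSON entry pointing at the server. The sketch below is an assumption about the shape of that config, not Figma's official setup: the key name, endpoint URL, and transport can all vary by client and may change during the beta, so verify the details against Figma's own MCP documentation.

```json
{
  "mcpServers": {
    "figma": {
      "url": "https://mcp.figma.com/mcp"
    }
  }
}
```

Once the client reports the server as connected, the `use_figma` tool should appear in its tool list and you can start prompting against an open Figma file.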

It's currently free during the beta period, so now is a good time to experiment with it before it becomes a paid feature. Worth noting: write access is available for Full seats, while Dev seats can use it in drafts only.

The good stuff first

A few things impressed me. Not just that the agent could produce designs fast, but that it understood what correct output looks like. Using real components instead of plain frames, applying variables instead of hardcoding values, pulling from the library instead of rebuilding from scratch. It's exactly where most AI design tools fall short.

Building screens without a design system: Without any components to reference, the agent did something sensible: it identified what base components the feature needed, built them first, and then reused them to construct the screen. That's the right instinct.

Translating component code into Figma: When I gave the agent an existing Button component with all its props and variants, it produced an accurate Figma component set with correct variants, correct styles, and clean layer naming.

Restructuring Figma Make output: When I explicitly asked it to create proper component sets with variables before building the feature, it got it. The output was clean, structured, and production-ready in a way that Figma Make alone wouldn't produce.

Cleaning up existing designs: The same capability that lets the agent build correctly from scratch can also be pointed at existing files to fix what's already there. Hardcoded hex values swapped for proper tokens, plain frames replaced with real components from the library, and more.

Badge component generated by the agent from a prompt, showing all four variants with correct layer naming and component structure.

The agent created both desktop and mobile designs for the Customers Directory feature, using the linked design system components for the modal, input, and customer cards.

Where it needs guidance

The limitations I ran into aren't really bugs. They're more the nature of how agentic tools work, and once you understand that, they start to make sense.

Design system detection isn't automatic. The agent doesn't pick up on a linked design system unless you tell it to. Every time. In one case, it built a feature using plain frames for some components while correctly using design system components for others, as if it made the decision component by component without a consistent strategy. The result needed cleanup before it could be used in production.

Variables follow the same pattern. Left to its own defaults, the agent hardcodes values: colors, spacing, typography. Ask it explicitly to use variables, and it does. Don't ask, and you get a file full of hardcoded hex values that won't respond to theme changes or token updates.

This inconsistency isn't unique to Figma's tool. It's a pattern that shows up across agentic workflows generally. Agents are non-deterministic by nature. The same prompt can produce different results on different runs. An agent will sometimes skip steps that seem obvious to a human, especially when those steps aren't explicitly required by the task as described. The agent isn't being lazy. It's optimizing for the literal goal you gave it, not the implicit standards you have in your head.

The test cases

I ran six test cases across three scenarios, covering the prompt used in each one and how it went.

Skills change the equation

Here's where Figma's announcement gets interesting beyond the write capability itself.

A quick note on what skills actually are. In the context of the Figma MCP server, a skill is a markdown file that packages a repeatable workflow into a reusable instruction set. Instead of rewriting long prompts and re-explaining your conventions each time, a skill gives the agent a stable sequence of steps to follow. The result is more consistent output and less drift across runs. You can also go beyond a single file and include supporting scripts, reference docs, and assets to handle more complex workflows.

Alongside the use_figma tool, Figma introduced skills as a way to teach the agent to extend your existing design system rather than work around it. A well-written skill can direct the agent to search your libraries first, apply your variable naming conventions, and build with real components and auto layout that matches your team's standards.

This matters a lot for the gaps I described above. Every limitation I ran into (design system detection, variable defaults, component reuse) is addressable through a custom skill. You can write a skill that tells the agent:

  • Always check for a linked design system before generating anything
  • Always use variables for color and spacing
  • Always create component sets rather than plain frames
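As a sketch of what such a skill could look like, here's a minimal markdown file encoding the three rules above. The frontmatter fields follow the convention Claude Code uses for skills; the file name, field names, and exact format are assumptions, so check your client's documentation before relying on them.

```markdown
---
name: design-system-guardrails
description: Enforce design system conventions when writing to the Figma canvas
---

# Design System Guardrails

Before generating anything on the canvas:

1. Search the linked design system libraries for an existing component
   that matches the request. Reuse it — never rebuild it from plain frames.
2. Apply variables for every color, spacing, and typography value.
   Never hardcode a hex value or pixel size that a token already covers.
3. When a new component is genuinely needed, create a proper component
   set with named variants and auto layout, following the library's
   naming conventions.
```

With a file like this in place, the explicit reminders that the agent otherwise needs on every prompt become part of its standing instructions instead.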

Skills work across the main MCP clients. In Claude Code you invoke one with /skill-name, in Cursor via the slash command menu, and in Codex through the CLI. Figma recommends testing any new skill against a duplicate or example file before running it on anything important.

Figma has already published a set of example skills from the community, including ones for generating library components, applying design systems, and syncing design tokens. They're a good starting point for teams building their own.

Where this fits

I want to be honest that this is early testing across a focused set of scenarios. A more thorough evaluation across more complex design systems and edge cases would give a more complete picture. But even from these initial findings, the direction is clear.

The bigger picture here is that designers now have a way to use AI to create designs the right way, not just designs that look right. And teams have a real path to keeping code and designs in sync, with an agent that understands your component structure and builds to it. That's a meaningful shift from what's been available before.

The version of this tool that most teams will actually use day-to-day isn't the raw agent. It's the agent plus a set of skills that encode your team's decisions. That combination gets you much closer to reliable, design-system-aligned output than anything available before.

If you work with Figma and haven't looked at the MCP server yet, now is the time. It's free during the beta period. Figma's canvas is open to agents, and the teams that figure out how to work with that are going to move significantly faster than those that don't.