Figma Integrates AI Agents via Canvas MCP Server
- Figma launches MCP server allowing AI agents to design directly on the digital canvas.
- New 'skills' feature uses markdown files to guide agents with team-specific design context.
- Integration supports Claude Code and Codex for bidirectional code-to-design workflows.
Figma is bridging the gap between design and development by turning its canvas into an interactive environment for AI agents. By launching a Model Context Protocol (MCP) server, the platform allows agents to read and write directly to Figma files, effectively treating the design canvas as a live codebase. This shift means that instead of just generating static images, AI can now manipulate actual layers, components, and variables within a team’s established design system.
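In practice, wiring an agent to an MCP server is a client-side configuration step. As an illustration only, a Claude Code project can register MCP servers in a `.mcp.json` file like the sketch below; the server name, transport, and local URL shown here are assumptions, not Figma's documented endpoint, so check Figma's MCP documentation for the actual values.

```json
{
  "mcpServers": {
    "figma": {
      "type": "sse",
      "url": "http://127.0.0.1:3845/sse"
    }
  }
}
```

Once registered, the agent can discover the server's tools (for example, reading layers or writing component properties) without any further glue code.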
To ensure these agents do not produce generic or off-brand outputs, Figma introduced "skills." These are essentially instruction sets—written as simple markdown files—that provide agents with the necessary judgment to make decisions aligned with a team's specific standards. For example, a skill could define how to apply hierarchical spacing or how to translate a structured JSON contract into a visual component. This allows the AI to understand not just what to build, but how to build it according to existing rules and conventions.
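Since skills are described as plain markdown instruction files, a team's skill might look something like the following. This is an invented example to show the shape of the idea, not an actual Figma skill; the file name, headings, and rules are all hypothetical.

```markdown
# Skill: Card component layout (hypothetical example)

## Spacing
- Use the 8px spacing scale; never hard-code arbitrary pixel values.
- Parent-to-child padding is one step larger than sibling gaps.

## JSON contract → component
- `title` → text layer bound to the `heading/md` style.
- `badge` (optional) → instance of the `Badge` component, top-right.
- Omit layers for absent optional fields; do not hide them.
```

Because the format is markdown, skills can live in the same repository as the code they describe and be versioned alongside it.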
The update fosters a "self-healing" loop where agents can iterate on designs by comparing visual output to code structures. Currently in beta, the tool supports major AI clients like Claude Code and Codex. By opening the canvas to agentic workflows, Figma is positioning itself as the central hub where product intent and execution converge, allowing teams to move fluidly between the command line and the visual design interface without losing critical context.
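The "self-healing" loop can be pictured as a diff-and-patch cycle: render the design, compare it against the code-derived spec, and write back only the drifted properties. The toy Python sketch below illustrates that control flow; none of these function names come from Figma's API, and the dictionary stand-ins replace what would really be MCP tool calls against live canvas nodes.

```python
def diff_design_vs_spec(design: dict, spec: dict) -> set:
    """Return the property names where the canvas diverges from the spec."""
    return {key for key, value in spec.items() if design.get(key) != value}

def self_heal(design: dict, spec: dict, max_iterations: int = 5) -> dict:
    """Iteratively patch drifted properties until design and spec agree."""
    for _ in range(max_iterations):
        drift = diff_design_vs_spec(design, spec)
        if not drift:
            break  # canvas and code agree; nothing left to fix
        for prop in drift:
            design[prop] = spec[prop]  # stand-in for an MCP write call
    return design

# Hypothetical spec derived from code, and a canvas that has drifted from it.
spec = {"padding": 16, "radius": 8, "fill": "brand/primary"}
canvas = {"padding": 12, "radius": 8, "fill": "gray/500"}
print(self_heal(canvas, spec))
# → {'padding': 16, 'radius': 8, 'fill': 'brand/primary'}
```

The point of the loop structure is that the agent never regenerates the whole design; it converges by correcting only the properties that the comparison flags.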