Editor's Note: Since we published this blog post, we decided to change our sandbox implementation to an alternative approach: compiling a JavaScript VM written in C to WebAssembly. As you'll see in the blog post below, it was one of several ideas we originally weighed.

We decided to implement this alternative after a security vulnerability in the Realms shim (which our original approach uses) was privately disclosed to us. The security vulnerability was promptly fixed by the Realms shim team before the vulnerability was made public and we have no evidence it was ever exploited. Read more about the vulnerability and our approach to fixing it here.

At Figma, we recently tackled one of our biggest engineering challenges yet: supporting plugins. Our plugins API enables third-party developers to run code directly inside our browser-based design tool, so teams can adapt Figma to their own workflows. They can build accessibility checkers that measure contrast, translation apps that convert text between languages, importers that populate designs with content, and anything else their heart desires.

We knew we needed to design this plugin functionality carefully. Throughout the history of software, there have been many instances where third-party extensions negatively affected a platform. In some cases, they slowed the tool to a crawl. In other cases, the plugins broke whenever new versions of the platform were released. To the extent that we could control it, we wanted users to have a better plugin experience with Figma.

Furthermore, we wanted to make sure plugins would be safe for users to run, so we knew we wouldn’t want to simply eval(PLUGIN_CODE). That is the quintessential definition of insecure! Yet, eval is essentially what running a plugin boils down to.

To add to the challenge, Figma is built on a very unconventional stack with constraints that previous tools haven’t had. Our design editor is powered by WebGL and WebAssembly, with some of the user interface implemented in TypeScript and React. Multiple people can be editing a file at the same time. We are powered by browser technologies, but also limited by them.

This blog post will walk you through our pursuit of the perfect plugin solution. Ultimately, our efforts boiled down to one question: how do you run plugins with security, stability, and performance? Those three properties formed our core, non-trivial constraints.

We considered dozens of approaches branching into all sorts of different paths. We spent weeks discussing, prototyping, and brainstorming. This blog post will focus on just three of those attempts, which formed the most central path in our exploration.

For me, it has been the most satisfying exercise in first principles thinking. I had fun leveraging all the computer science fundamentals I learned in the classroom (the ones I never thought I’d ever use in the real world). If that idea excites you too, we’re hiring!

During our first few weeks of research, we found many interesting attempts at third-party code sandboxing. Some used techniques such as code-to-code transformation. However, most had not been validated in a production application and as such, carried some risk.

In the end, for our first attempt, we went for the closest thing to a standard sandboxing solution: the <iframe> tag. It’s used in applications that need to run third-party code, such as CodePen.

The <iframe> is not your everyday HTML tag. To understand why <iframe>s are secure, it’s necessary to think about what properties they need to guarantee. An <iframe> is typically used to embed one website into another. For example, in the screenshot below, you can see that yelp.com embeds google.com/maps to provide map functionality.

Here, you wouldn’t want Yelp to be able to read what’s inside a Google website just by virtue of embedding it. There could be private user information in there. Similarly, you wouldn’t want Google to be able to read what’s inside a Yelp website just by virtue of having been embedded.

This means communication to and from the <iframe> is heavily restricted by browsers. When an <iframe> has a different origin than its container (e.g. yelp.com vs. google.com), the two are completely isolated. The only way to communicate with the <iframe> is via message passing. These messages are, for all intents and purposes, plain strings. Upon receipt, each website is free to act on those messages, or ignore them.
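Because the messages are effectively plain strings, each side has to parse them and decide which ones to act on. A minimal sketch of that data flow, with hypothetical message shapes and handler names (in a real page the strings would travel via window.postMessage and a message event listener):

```javascript
// Hypothetical sender: serialize a structured message to a plain string,
// as would happen when a message crosses the <iframe> boundary.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

// Hypothetical receiver: parse the string and dispatch only on message
// types it recognizes; everything else is silently ignored.
function handleMessage(raw, handlers) {
  let msg;
  try {
    msg = JSON.parse(raw);
  } catch (err) {
    return undefined; // malformed messages are simply ignored
  }
  const handler = handlers[msg.type];
  return handler ? handler(msg.payload) : undefined; // unknown types ignored
}

// Example: this receiver only reacts to a whitelisted 'set-title' message.
const handlers = {
  'set-title': (payload) => `title is now ${payload}`,
};

const ok = handleMessage(encodeMessage('set-title', 'My Plugin'), handlers);
const ignored = handleMessage(encodeMessage('delete-everything', null), handlers);
console.log(ok);      // title is now My Plugin
console.log(ignored); // undefined
```

The important property is on the receiving side: nothing happens unless the receiver explicitly chooses to act on a message it understands.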

They’re so isolated, in fact, that the HTML spec allows browsers to implement <iframe>s as separate processes, if they choose to.

Now that we know how <iframe>s work, we can implement plugins by creating a new <iframe> every time a plugin runs, and sticking the plugin’s code inside the <iframe>. The plugin can do anything it wants inside the <iframe>. However, it won’t be able to interact with Figma documents except through explicit, whitelisted messages. The <iframe> is also given a null origin, which means that any attempt to make a request to figma.com will be denied by the browser’s cross-origin resource sharing policies.
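Concretely, spinning up such a sandbox could look like the sketch below. The helper names are hypothetical, not Figma's actual implementation; the key detail is the sandbox attribute, where granting allow-scripts but not allow-same-origin is what gives the frame its null origin.

```javascript
// A sketch of how a host page might create one sandbox per plugin run.
function createPluginSandbox(doc) {
  const iframe = doc.createElement('iframe');
  // 'allow-scripts' WITHOUT 'allow-same-origin' gives the frame a null
  // origin: its code can run, but requests to figma.com fail the
  // browser's cross-origin checks.
  iframe.setAttribute('sandbox', 'allow-scripts');
  // A tiny bootstrap document that waits for plugin source code to
  // arrive as a message, then evaluates it inside the sandbox.
  iframe.setAttribute('srcdoc', '<script>onmessage = (e) => eval(e.data)</script>');
  return iframe;
}

// In a browser you would pass the real `document`; this minimal stub
// lets the sketch run anywhere.
const fakeDocument = {
  createElement: (tag) => {
    const attrs = {};
    return { tag, attrs, setAttribute: (name, value) => { attrs[name] = value; } };
  },
};

const frame = createPluginSandbox(fakeDocument);
console.log(frame.attrs.sandbox); // allow-scripts
```

The host would then append the frame to the page and send it the plugin source with frame.contentWindow.postMessage(pluginCode, '*'). Notably, the dreaded eval from earlier reappears here, but now it runs inside a null-origin frame that can only reach Figma through whitelisted messages.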

Effectively, the <iframe> acts as a sandbox for the plugin. Furthermore, the security properties of the sandbox are guaranteed by browser vendors, who have spent years searching for and fixing vulnerabilities in it.

An actual plugin using this sandbox model would use an API that we add into the sandbox, and would look roughly like this:
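As one hypothetical sketch of such an API (the names and message format are illustrative, not Figma's actual plugin API): every call has to cross the message channel as a string, so the entire API is necessarily asynchronous.

```javascript
// Hypothetical plugin-side API built on message passing.
let nextId = 0;
const pending = new Map();

// `post` is the transport. Inside the sandbox it would be
// parent.postMessage(raw, '*'); here a stub host stands in so the
// sketch runs anywhere.
function request(post, method, params) {
  return new Promise((resolve) => {
    const id = nextId++;
    pending.set(id, resolve);
    post(JSON.stringify({ id, method, params }));
  });
}

// Called when a reply message arrives from the host.
function onReply(raw) {
  const { id, result } = JSON.parse(raw);
  pending.get(id)(result);
  pending.delete(id);
}

// Stub host: pretends to create a node and replies with its id.
function stubHost(raw) {
  const { id, method } = JSON.parse(raw);
  onReply(JSON.stringify({ id, result: method + ':node-1' }));
}

// Plugin code reads naturally, but every call awaits a round trip
// across the sandbox boundary.
async function examplePlugin() {
  const rect = await request(stubHost, 'createRectangle', {});
  return rect;
}

examplePlugin().then(console.log); // createRectangle:node-1
```

This asynchrony is forced by the message channel itself: the plugin and the Figma document live on opposite sides of the sandbox boundary, and strings are the only thing that can cross it.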