Hello world,

Quick questions:

Aren’t (Web / Mobile) Designers and Developers supposed to speak the same language?

Aren’t we all in pursuit of apps / sites that look pretty and work smoothly?

Can we make the whole mobile / web building process a bit more humane?

There’s clearly a gap between designers and developers. It feels all too familiar, and to my surprise we’ve been mostly trying to navigate around it instead of facing it head on.

Can we right this wrong?

About a year back, while working on a mobile app, I got stuck in a never-ending ping-pong with my designer, mainly tweaking how the app looked and behaved. Most of these changes concerned only positioning, colors, spacing and so on. Instead of spending my time implementing the real, value-bringing guts of the app, about 60% of it went into non-business-logic work, aka “pixel pushing”. Essentially, every “action” the designer performed in Sketch, I had to redo in code. That, to me, looked like a completely inefficient process and a paved road to frustration. And a missed deadline. Or two :)

My first thought was “I’m probably not using the right tools… there must be a way…”

Not being a designer myself, and fairly unfamiliar with Sketch, I started looking around for solutions, but all I could find were plugins or apps that did only part of the job. So why bother with something that would only add complexity to an already inefficient process?

Hang on, so… what are we actually looking for?

In a nutshell this is what I would like:

- The designer should be able to “preview” the designs on his device / browser in realtime. But not simulated: he should see the preview as the real deal, as if a developer had implemented it for him.
- When the designer draws a button, the developer should get a “Button” code snippet, ready to be used in the app / site.
- The developer should choose the snippet’s language and dialect or coding style.
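To make the second wish concrete, here is a minimal sketch of the idea: a drawn element comes in as structured data and goes out as a ready-to-paste snippet. The `DesignElement` shape and `toSnippet` function are hypothetical, invented for illustration, and not the API of any real plugin or tool.

```typescript
// Hypothetical shape of a design element, roughly what a design tool
// might export for a drawn control (illustrative names only).
interface DesignElement {
  type: "button" | "label";
  text: string;
  color: string; // hex value, e.g. "#FF6600"
}

// Translate one design element into a JSX-flavoured code snippet.
function toSnippet(el: DesignElement): string {
  switch (el.type) {
    case "button":
      return `<button style={{ backgroundColor: "${el.color}" }}>${el.text}</button>`;
    case "label":
      return `<span style={{ color: "${el.color}" }}>${el.text}</span>`;
  }
}

// The designer draws a button; the developer receives ready-to-use code.
const snippet = toSnippet({ type: "button", text: "Sign up", color: "#FF6600" });
console.log(snippet);
// → <button style={{ backgroundColor: "#FF6600" }}>Sign up</button>
```

A real tool would of course cover far more than two element types, and the third wish (choice of language and coding style) would mean pluggable generators instead of one hard-coded template.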

It looks like there are some excellent tools that cover parts of these requirements, and they fall into two categories: prototyping tools and target-specific “helpers”.

While tools such as InVision and, lately, Sketch’s own built-in prototyping system do a great job at prototyping, they only go as far as “prototyping”. They are mostly approximations or simulations of the end result, and they stop being useful right there. There’s not much more you can do with your prototypes once you have built them (apart from contemplating their beauty). They cannot be reused or expanded further toward a “real”, production-ready result. Hence you now need a developer to code the UI, so back to square one.

On the other hand, target-specific tools, such as Zeplin, go a long way in helping the developer “pick” styling information, but they rarely give the full context. Plus, they are exactly what they are called, “target-specific”, with hardly any configuration possible.

Another approach worth mentioning has been proposed by the team at Anima, through their Launchpad plugin for Sketch, which exports plain HTML/CSS.

While there are plenty of tools that will get you a fair way towards the goal, the designer still needs a developer to experience his designs in the “native world”, and a lot of manual tweaking is needed afterwards.

Then, the thought:

What would it take to capture the designer’s “input” and translate it to code, in real-time?

And this was the seed thought that brought together a bunch of techies whom I met at the 2017 JSHeroes conference in Cluj, Romania.