What’s your background and how did you come up with the idea for Zyl?

I’m one of those thirty-somethings who started coding quite young. Jumping on the internet trend at 12, I built my first video game website in my free time. I soon fell in love with coding because it was the quickest way to turn an idea into something real. For a teenager living in a fantasy world, it was like waving a wand to make your dreams come true.

The rest is pretty standard. I went to a French engineering school, then spent some time at Nagoya University (🇯🇵) as a researcher in computer vision. I worked on a projectable interface for the human hand (similar to MIT Media Lab’s SixthSense, but without the necklace or color markers). After that, I studied Design Thinking at Stanford University (🌲).

Between my time at Nagoya and Stanford, I attended an exchange program at King’s College London and got closer to one of my childhood friends, Mathieu Spiry. We had a lot of free time and decided to start our first company, an online education platform for medical prep. Altogether, we went on to start three companies. After graduating, we co-founded Zyl.

The idea behind Zyl is very simple: you don’t choose your phone’s gallery app. Are you even satisfied with it? We noticed that most people never pick their photo app — the default galleries (Apple’s Photos on iOS and Google Photos) come prepackaged with the phone and are simply storage apps for media files.

For us, this was a great opportunity to improve on mobile media storage — and obviously, there’s a lot of room for improvement. By combining the latest machine learning research, we wanted to build a smart photo gallery.

What does your tech stack look like and what tools did you find helpful?

Zyl is a mobile app available on both iOS and Android. We’re using the Caffe and TensorFlow frameworks for machine learning server-side, with the help of Docker to keep things clean. We train our models on an in-house deep learning rig that we built ourselves — AWS GPU instances are way too expensive.

To convert our models to run on mobile devices, we use TensorFlow for Android and Core ML for iOS. To test the integration, we use small tools we built to compare the server’s output with the device’s output on a designated dataset.
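The server-vs-device comparison described above can be sketched as a simple tolerance check. This is a minimal illustration, not Zyl’s actual tooling; the function names, logit values, and tolerance are assumptions.

```python
# Hypothetical sketch: verify that a converted on-device model produces
# (nearly) the same outputs as the original server-side model on the
# same dataset. Small numerical drift is expected after conversion, so
# we check element-wise agreement within a tolerance.

def max_abs_diff(server_out, device_out):
    """Largest element-wise deviation between the two model outputs."""
    return max(abs(s - d) for s, d in zip(server_out, device_out))

def outputs_match(server_out, device_out, tolerance=1e-3):
    """True if every prediction agrees within the given tolerance."""
    return max_abs_diff(server_out, device_out) <= tolerance

# Example: logits from the server model vs. the converted mobile model.
server_logits = [0.9123, 0.0541, 0.0336]
device_logits = [0.9121, 0.0543, 0.0336]
print(outputs_match(server_logits, device_logits))
```

Running the check over a whole validation set (rather than a single example) catches conversion bugs such as unsupported layers being silently approximated.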

What was the hardest part about building Zyl?

Device limitations! To stand by our principle of privacy by design, we run all our models on-device, so we can’t use data centers or leverage cloud computing to make predictions. Unfortunately, this does make things pretty complicated, but the hassle is worth it if it means users feel more secure and have a better experience.

The first problem we faced was app size. The first time we released the iOS app with our universal search feature, the app weighed more than 300MB! (You can download apps up to 150MB over 4G from the App Store, but for apps larger than 150MB, the phone must be on Wi-Fi.) It clearly impacted the acquisition rate 📉. So we built a remote model store that keeps the app small when users download it from the App Store, then lazily loads models later on. Our model store also helps us keep our models updated without having to push a new version of the app.
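The lazy-loading pattern behind a remote model store can be sketched as “check the local cache first, download only on first use.” This is a hedged illustration, not Zyl’s implementation: the store URL, cache path, and model names are all hypothetical.

```python
# Illustrative lazy model loader: ship a small app binary, then fetch
# models on demand and cache them locally so each is downloaded once.
import os
import urllib.request

MODEL_STORE_URL = "https://models.example.com"   # hypothetical endpoint
CACHE_DIR = os.path.expanduser("~/.model_cache")  # hypothetical cache path

def model_path(name, version):
    """Local cache location for a given model name and version."""
    return os.path.join(CACHE_DIR, f"{name}-{version}.bin")

def load_model(name, version, fetch=urllib.request.urlretrieve):
    """Return a local path to the model, downloading it only on first use."""
    path = model_path(name, version)
    if not os.path.exists(path):                  # lazy: fetch on demand
        os.makedirs(CACHE_DIR, exist_ok=True)
        fetch(f"{MODEL_STORE_URL}/{name}/{version}", path)
    return path                                   # later calls hit the cache
```

Versioning the cache filename is what lets the store push updated models without an app release: bumping the version on the server simply triggers a fresh download.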

Another problem was battery consumption and inference time. We haven’t found a silver bullet yet, so we learned to live with it and built a dispatcher that runs things as fast as possible while keeping phones cool (it’s really unpleasant to feel your phone burning up in your pocket). In our case, we’re also lucky that nothing requires real-time performance.
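Since nothing needs real-time results, such a dispatcher can trade latency for heat: pause the work queue whenever the device reports thermal pressure. This is a minimal sketch of that idea, not Zyl’s dispatcher; `thermal_pressure` stands in for a platform thermal-state API (e.g. iOS exposes a thermal state via `ProcessInfo`).

```python
# Hypothetical pacing dispatcher: run queued inference jobs in order,
# but back off whenever the device reports it is running hot.
import time

def run_dispatcher(jobs, thermal_pressure, cooldown=0.5, sleep=time.sleep):
    """Run each job, waiting out any period of thermal pressure first.

    jobs             -- iterable of zero-argument callables
    thermal_pressure -- callable returning True while the device is hot
    cooldown         -- seconds to wait between thermal checks
    """
    results = []
    for job in jobs:
        while thermal_pressure():   # back off until the device cools down
            sleep(cooldown)
        results.append(job())       # safe to burn some CPU now
    return results
```

Injecting `sleep` and `thermal_pressure` as parameters keeps the pacing logic testable without a real device.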

Do you have any advice for other developers who are looking to get started with machine learning?

Define, from the start, the environment your model will live in. It will help you make quick decisions about what can be done and how. Define your project’s rationale! And as the saying goes, don’t try to reinvent the wheel: plenty of open-source models are available that have already been trained. Pick a well-scoped project to get started on, and understand where your model will run its predictions.

My final piece of advice, if you plan to work on mobile, is to keep the device’s constraints in mind (power, storage, battery, supported layer types, …) — because contrary to a lot of machine learning posts out there, you won’t be running your model on an extensive server cluster. But getting a model to run on mobile is magical. So keep things small and integrate on mobile as soon as possible.