An ambitious project

Going deeper into privacy-preserving Machine learning 🧐

For those of you following my blog, you may have noticed that I haven’t been very active lately: my last feedback post (on the awesome OpenMined community) dates from March 2018.

The reason is that I met some great people in OpenMined working on an existing open-source library called tf-encrypted (originating from Morten Dahl’s research). We decided to work together on it toward a sub-goal of the community: building a library for privacy-preserving machine learning.

As a side note, I would like to highlight the opportunities online communities represent for anyone looking to broaden their activity and potentially do research, create a startup, find a job, etc.

What should a privacy-preserving machine learning library look like? We were not yet sure at the time, but we settled on:

Provide a nice interface for machine learning researchers to tinker with privacy options and protocols without having to understand how they work.

The same interface should be nice to cryptography researchers too, so they can add new protocols and privacy options without spending time engineering solutions to the field’s requirements (like a big-int implementation that works efficiently with ML algorithms).

The overall interface should also be nice to ML engineers without a strong research background, so they can implement state-of-the-art models and secure them easily.

From a technical and a user-experience perspective, the goal is ambitious and we were quite excited to tackle it. But doing the task well would require more commitment than you can provide when working on a problem as a side project: a team needed to be brought together, and that team needed to be able to focus on it full-time.

Dropout Lab, a research-centric startup 🤓

In a previous life, I was the CTO and co-founder of a startup. The startup world was then all about speed (“move fast and break things”, “ship, gather feedback and iterate”, etc.). Of course, that culture has some upsides, but it is tailored to a purely digital company building a product. I was not sure how we could reconcile this view with the pace needed by research.

Of course, moving fast and breaking things is out of the question when you build a tool to secure people’s work, but how fast could a startup iterate on an emerging, research-focused technology?

I expected it to be slower and less prone to the famous startup rollercoaster, but the founding team pulled off some very enlightening tricks to keep up a good velocity, and so we embraced this well-known iterative process.

Let me first introduce the constraints we had:

We would work in a fully remote environment. The team lived in multiple cities across the USA, Canada, and Europe, which meant multiple time zones too.

We would focus on open-source solutions. It was a requirement of many team members (including me) for joining the project, so it was non-negotiable.

We would work at the edge of the ML and crypto research fields. Most of the members already had multiple research projects ongoing, a blog to write on, etc. Some freedom, and therefore trust, had to be the cornerstone of the team.

The leadership would not transform its workforce into task-monkeys. Honest communication had to be the standard, and decisions had to be distributed.

How do you satisfy all those constraints while ensuring productivity? The most important thing is to understand that it has to be a team effort: the team has to grow together.

Bonding a remote team 🤜🏻🤛🏾

The overall goals of the work were set, but a company is much more than that: values, processes, tools, etc.

The first thing we did was to define values in a “one voice, one vote” manner. A little tip: values can sometimes feel distant to team members, so it’s good to add a simple, concrete use case so everyone can easily refer to them for guidance on complex decisions.

Then we defined the tools: the Google suite (Docs/Calendar/Meet, etc.), Slack, Waffle, and GitHub naturally became necessities for us. All those tools brought the most important feature for remote work across multiple time zones: asynchronicity between team members.

Asynchronicity is mandatory but not sufficient; you also need processes. But again, you need to strike the right balance. If you have none, you basically live in chaos and frustration will soon creep in. If you have too many, people will feel like they are being monitored and trust will suffer.

What you need is to add processes that simplify the work. For example, let’s say I want to contribute to the codebase: what should I do? Also, the more you can automate, the better. We leveraged continuous integration to facilitate distributed contribution. We required reviews to encourage discussion between team members, etc. GitHub issues became the go-to solution for organising discussion about anything (even strategy had a GitHub repository with issues).

Finally, regular online meetups were organised at the most acceptable hour for everybody so we could keep in sync and push forward together.

Asynchronicity is mandatory but not sufficient, you also need to build habits for your remote team to be efficient

The result was a fluid workflow and decision process.

Side effect: when we actually met in real life for the first time, we all felt like everybody was exactly the same in person as behind the internet. That might seem like a detail, but I think it reveals how successfully we had built communication and trust.

The last thing I want to point out is that, from day one, the company capitalised on every employee’s natural drive outside the project. Open source would bring everyone together in the same boat. Research papers and blog posts would be written together: welcoming everyone as contributors was galvanising for the team!

Ups and also downs🎢

Let me point out two generic experiences which could have led to rotten situations if we hadn’t handled them properly:

A project is taking way too long to conclude. It will happen, and it tends to drag down the team’s dynamics and energy. What should you do?

This is where leadership and dynamism should show up! Keep cheering your team on, help them focus on what matters, keep repeating the long-term goals and, most importantly, don’t lose faith in them.

We built a product in Go for two months which we ended up throwing out the window… How do you know when to do that?

We used internal hackathons. Using our own prototype revealed some profound performance issues, which we were able to acknowledge. Unless someone has a clear path to fixing the problems you find, or there is truly no other way to solve them, it might mean you should ditch your work too.

But all of those are actually deeply interesting moments to live through as a team. You know what they say: teams are not forged in the good moments, but when they struggle and succeed together!

I will not dive into all the processes we built at Dropout Lab to achieve this resilience to failure, because I’m sure the team will talk about it in future blog posts. 🤫

Tf-encrypted, the power of TensorFlow 💥

I can summarise it in one sentence: building a fully defined static graph is awesome. And I know people tend to hate TF exactly because of that (PyTorch is eager by default, that’s more pythonic, y’know…), so let me give it a little bit of love. 💒

We are about to dive into the requirements introduced by cryptography, on top of those already imposed by machine learning at scale. Take a deep breath and enjoy.

First requirement: most of the time, cryptography doesn’t use floats; it uses integers, and not even the usual integers you’re used to working with. It uses big ints. What does that mean, you ask? Well, it means we use numbers bigger than what fits into an int64. There exist multiple mechanisms to handle those huge numbers, but let’s keep it simple and say that all we need is multiple int64s to represent each big number.

The numbers you work with are represented by a set of native integers, because CPUs and GPUs don’t have instructions for those big numbers.
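To make this concrete, here is a minimal sketch (my own illustration, not tf-encrypted’s actual implementation) of splitting a big integer into several small “limbs”, each of which fits comfortably in an int64:

```python
import numpy as np

# Hypothetical limb parameters for illustration: 26 bits per limb keeps
# limb-wise sums and products safely inside an int64.
BITS = 26
BASE = 1 << BITS
NUM_LIMBS = 5  # 5 * 26 = 130 bits of capacity

def to_limbs(x):
    """Split a non-negative Python big int into little-endian int64 limbs."""
    limbs = []
    for _ in range(NUM_LIMBS):
        limbs.append(x % BASE)
        x //= BASE
    return np.array(limbs, dtype=np.int64)

def from_limbs(limbs):
    """Recombine limbs back into a Python big int."""
    x = 0
    for limb in reversed(limbs):
        x = x * BASE + int(limb)
    return x

a = 2**100 + 12345  # far bigger than any int64
b = 2**99 + 67890

# Addition is done limb-wise on native int64s, then carries are propagated
# so every limb stays below BASE.
s = to_limbs(a) + to_limbs(b)
for i in range(NUM_LIMBS - 1):
    s[i + 1] += s[i] // BASE
    s[i] %= BASE

assert from_limbs(s) == a + b
```

Every operation above touches only native int64 values, which is exactly what a CPU/GPU (and a TensorFlow tensor) can handle.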

Second requirement: you want to encrypt everything. To do so, we use multi-party computation, which can be understood as: we want multiple servers to be involved in the computation, and those servers can’t be friends (they won’t share their information with the other servers).

We need distributed computation.
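The core idea can be sketched with additive secret sharing, one of the building blocks behind multi-party computation (a simplified illustration, not tf-encrypted’s protocol): a secret is split into random shares so that no single server learns anything, yet the servers can still compute on the shares.

```python
import random

MODULUS = 2**64  # work in a finite ring, as MPC protocols do

def share(secret, n_parties=3):
    """Split `secret` into n random shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

x_shares = share(5)
y_shares = share(7)

# Each server adds its own two shares locally -- no communication, and no
# single share reveals anything about 5 or 7 -- yet the shares of the
# result reconstruct to the sum of the secrets.
z_shares = [(a + b) % MODULUS for a, b in zip(x_shares, y_shares)]
assert reconstruct(z_shares) == 12
```

Any one share on its own is a uniformly random number; only the combination of all servers’ shares reveals the value.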

Third requirement: you want to do machine learning at scale, which means you want to vectorise all those numbers so you can compute massively in parallel (which also leverages requirement 2).

We need to code in a concurrency-friendly way.
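Putting requirements 2 and 3 together, the sharing above can operate on whole tensors at once rather than one number at a time. A hedged sketch with NumPy (tf-encrypted does the analogous thing with TensorFlow tensors):

```python
import numpy as np

MODULUS = 2**32  # illustrative ring size; shares fit in uint64 tensors
rng = np.random.default_rng(0)

def share(tensor, n_parties=3):
    """Vectorised additive sharing: split a whole tensor into random
    share-tensors that sum to it element-wise mod MODULUS."""
    shares = [rng.integers(0, MODULUS, size=tensor.shape, dtype=np.uint64)
              for _ in range(n_parties - 1)]
    shares.append((tensor - sum(shares)) % MODULUS)
    return shares

# A whole layer's worth of values is shared in one vectorised operation.
x = np.arange(6, dtype=np.uint64).reshape(2, 3)
shares = share(x)

assert np.array_equal(sum(shares) % MODULUS, x)
```

Every element of the tensor is shared in parallel, which is what lets the secure computation keep up with ML-sized workloads.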

Concretely, this means the number of computations grows aggressively compared to your non-secured neural network, and since you want to run big neural networks containing lots of weights, you need optimised code.

So now you are stuck with the problem of optimising the order of every computation, network call and memory call, in a completely asynchronous and distributed environment, over millions of operations…

Thanks but no thanks.

This is where the TF static graph comes into play: TensorFlow is crazy good at optimising computation on CPU/GPU, memory calls and network calls out of the box, thanks to the static graph and a set of built-in heuristics.

Good guy TF needs an ovation.

What this means for us is that we only need to understand TF deeply and make good use of it. Then it will optimise a completely encrypted and distributed neural network over multiple devices and servers automatically for us! And it will do it amazingly well!

Come on, you can’t say that’s not quite a feat!

The library is open source, go play with it! And more importantly, do give feedback to the team by creating GitHub issues 😉

Future work

The experience was overall a great success; being able to share in the very creation of a startup and lend my engineering and machine learning skills to the team was a truly invigorating experience.

But it’s time for me to sail in other directions. This deep dive into cryptography was very interesting, but it is ultimately not my subject of interest. So I decided not to contribute any further; it was a real struggle to say goodbye, but it had to be done.

I wish all the best to the team. It was a great time for me, and I hope the feeling was shared. I will keep following all of you, as I’m pretty sure you will end up being major contributors to privacy-preserving machine learning. 🕊

And now, I will focus back on machine learning. I would also love to improve collaboration in the field, so if you feel like starting a working group that could potentially lead to co-authoring papers or even blog posts, reach out!

Goodbye, Dropout Lab bros!

Good guys to follow 👀