On the heels of their rumored $2B valuation, we sat down with Instacart to chat about the tech behind their popular grocery delivery service. We find out how they went from an iOS-only MVP with mostly manual processes to their current multi-platform automated system.

Highlights

The Instacart MVP

We actually launched iOS first. The iOS app looked okay. It worked; you ordered groceries from Instacart, there was no notion of stores or retailers on the site. You just ordered your bananas and milk from Instacart and we had them delivered from Safeway.



That was the first three or four months of the company. The backend infrastructure was just really terrible. You placed an order, it did a lot of client-side checks to make sure that it was okay and then pushed it through the system. That system just accepted any order you'd give it, regardless of whether or not we could get it to you on time or anything. It wasn’t smart, which worked okay when you’ve only got a few orders coming through. Then we very quickly started to expand past that, but there were a lot of times where me, Max, Apoorva, everyone there ended up going out all day Sunday delivering orders ourselves until two in the morning, because we'd taken so many orders that we didn’t have enough drivers to fulfill them.

Logistics & Backend

Probably the biggest engineering challenge that we really started to look into was late orders. We started to build a system that could actually predict how many orders we could take for a given time and not take more than we could, and make sure that every customer got a really amazing experience. We really only have one job, which is to take your groceries and deliver them to you on time, and if we can’t do that, what's the point?



We have to look at how many orders can a shopper take? How many items are in this order? A five-item order is way different from a 60-item order, and it takes way longer. Shoppers have different speeds at which they work.



Even in different markets, we have different types of shoppers, so some people have cars, and some people don’t, and we have to take that into account. We have to figure out how do we arrange all of these orders and assign them to people in the right way so that you always get your delivery between 3pm and 4pm, or it comes within one hour or two hours or whatever you’ve picked, and we do that all the time.

Mobile & Web Apps

One of the cool things about the APIs is the way Brandon had built them originally. The main web store is a Backbone.js single page web application, and so it hits API endpoints to do all the calls, and we use those same API endpoints for the consumer iPhone app, the consumer Android app, and the consumer mobile web app. It worked out really nicely, us just building one API and just all these clients, including the web client, using that.



We have all of these shoppers running these custom apps that have all these features that better enable them to scan items, to check the items, get instructions on how to pick a good melon, to check that this is the correct organic melon and this is the correct section, which items to pick in which order so that they’ll be fresh, which items to check out when, a scanning system to go through the checkout aisles faster.



Until two weeks ago or so, it was Backbone views and models, and everything was on our main store app and our mobile web app, but we just switched our mobile web app to using ReactJS for the interface. So it’s using Backbone models but ReactJS front-end components. Really, it was born out of frustration with how the Backbone model-view bindings worked; they weren’t especially performant for large views, and we had to do lots of tricks to make them performant. But swapping that out with React views meant that it could be both simpler and faster without having to spend a lot of time on that.

Inventory & Photos

Back in the day, we would grab all of the items, and then we started to open new stores, and we had to figure out how to acquire that inventory. We would do stupid things like buy an entire aisle of stuff and then set up a photography studio in our office and take photos of everything, because that product data just doesn’t exist.



Early on, we had some funny, funny things happen...there was a situation where we got this really angry phone call from a customer, and they were like, “What am I supposed to do with all these bananas?” We were like … “What do you mean? What are you talking about?” We pulled up their order. “You ordered 10 bananas. I don’t understand what's going on.” So he sent us a picture, and he had just bananas on bananas, like bags of bananas flowing out. So the guy had ordered 10 bananas, but the image of the bananas had a bushel of bananas, so the shopper when he went there bought 10 bushels of bananas and that’s what he delivered to the guy. Little stuff like that will just catch you off guard. We ended up sending him a banana bread recipe and asked him to donate it or give it to his neighbors and stuff, and it worked out.

Infrastructure Tools & Services

I’m sure a lot of people have talked about blue-green deployment, where you have one version of the code and it’s slowly rolled out to everything else. We do a much smaller-scale version of that where we have what we call our beta servers: whenever you ship code, it immediately goes to a set of beta servers, and you can test for that in your code and say, “If I’m a beta server, do this differently.” Right now, I think 10% of users get the beta experience. Once they log in, 10% of users are randomly assigned to these beta servers, and they'll get some new feature or some user-interface change or a new color scheme or something. This isn’t really for A/B testing. This is more for the rollout of a new feature. We have an entirely separate way in which we do A/B testing.
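Sticky assignment like this is usually implemented by hashing the user ID into a fixed bucket, so the same user always lands in the same group on every login. A minimal sketch of that idea (the hashing scheme and cutoff here are illustrative, not Instacart's actual implementation):

```python
import hashlib

BETA_FRACTION = 0.10  # roughly 10% of users get the beta experience

def is_beta_user(user_id: int) -> bool:
    """Deterministically bucket a user: the same ID always maps to the
    same bucket, so the beta assignment is stable across sessions."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < BETA_FRACTION * 100

# A request handler could then branch on the assignment:
# if is_beta_user(current_user.id): route to a beta server / render the new UI
```

Because the bucket is derived from the ID rather than stored, no extra database column is needed, and the rollout percentage can be raised by changing one constant.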



Blazer is a newer one. We built out a tool that lets anyone, from city managers to people in operations, query data from our database, and we'll limit access to the columns that they have access to for privacy reasons...You write a SQL query, and then you can share that SQL query with other people. Basically templated SQL queries, but in our own system, as a gem, as opposed to an external service.
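The core idea, saved SQL templates with placeholders plus a per-role column restriction, can be sketched in a few lines. Everything below (the role names, columns, placeholder syntax, and escaping) is invented for illustration and is not Blazer's actual implementation:

```python
import re

# Hypothetical allow-list: which columns each role may read.
ALLOWED_COLUMNS = {
    "city_manager": {"order_id", "city", "delivery_window", "status"},
}

def render_query(template: str, params: dict) -> str:
    """Substitute {placeholders} in a saved query template with quoted values."""
    def sub(match):
        value = str(params[match.group(1)]).replace("'", "''")  # naive escaping for the sketch
        return f"'{value}'"
    return re.sub(r"\{(\w+)\}", sub, template)

def check_columns(columns, role):
    """Reject the query if it touches columns the role may not see."""
    denied = set(columns) - ALLOWED_COLUMNS[role]
    if denied:
        raise PermissionError(f"{role} may not read: {sorted(denied)}")

check_columns(["order_id", "status"], "city_manager")  # passes silently
sql = render_query("SELECT status FROM orders WHERE city = {city}", {"city": "SF"})
```

A real system would parameterize queries through the database driver rather than string substitution; the sketch only shows the template-plus-permissions shape.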



So before, we were having … not a huge problem, but a fraud problem, where people were placing orders and they were getting fulfilled even though they were very obviously using a stolen credit card. So we started using Sift Science. Basically, we send Sift a collection of signals from users: they added this item to the cart, they tried to add a credit card but it failed, they added this address and then they submitted. So we send them the collection of signals, and they run machine learning on those signals and send us back a classification of the user, and we use that as one of the elements to decide whether we should fulfill that order or not.
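The decision flow, collect signals, get a classification back from the vendor, and combine it with your own checks, might look roughly like this. The event names, score shape, and threshold below are invented for illustration and are not Sift Science's actual API:

```python
# Illustrative only: the signal format and threshold are assumptions,
# not Sift Science's real interface.
FRAUD_THRESHOLD = 0.8

def order_decision(fraud_score: float, other_checks_ok: bool) -> str:
    """Combine the vendor's fraud classification with our own checks."""
    if fraud_score >= FRAUD_THRESHOLD:
        return "hold_for_review"
    return "fulfill" if other_checks_ok else "hold_for_review"

# Signals accumulated as the user interacts with the app:
signals = [
    {"event": "add_item_to_cart", "user_id": "u1", "item": "bananas"},
    {"event": "add_credit_card", "user_id": "u1", "success": False},
    {"event": "create_order", "user_id": "u1"},
]
# In production the signals would be posted to the fraud service, which
# returns a score; here we simply stub the response.
score_from_service = 0.93
decision = order_decision(score_from_service, other_checks_ok=True)
```

The key property is that the vendor's score is one input among several, so a borderline score alone never auto-fulfills an otherwise suspicious order.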





Yonas: I guess we can just start with introductions. Andrew, Nick, Brandon. Do you want to talk briefly about your backgrounds?

Brandon: Sure, my name is Brandon. I don't know how far back you want me to go, but coming out of college I worked at Cisco Systems. It was good; I enjoyed it there. Then I left and joined a startup, and I just haven't looked back. So basically, all I've done in my career is startups.

A few years ago, I was the first engineer at AngelList, so I helped take that from a mailing list, basically, from like Naval and Nivi’s inbox, into a Rails app, and started building out the application. Now it's this amazing app. I left after a year to do my own thing, but the seeds of it are still there, and that was a great time.

Then, two and a half years ago, I joined up with Apoorva and Max, and we started Instacart. I'm an engineer, and I've done a lot of the backend system work over the course of time.

Nick: My name is Nick Elser, and I'm probably the second engineer, hired after Brandon came, and I've worked on pretty much every end of the stack at Instacart. Before this, I came from another startup; I've only known startups. Unfortunately, that startup was acquired by a very large company, so I've also had big-company experience, but it was kind of a little startup within the big company. I personally have worked on every aspect of our system, the whole stack. I'm really excited about the opportunities here, because all our engineers are basically expected to work on every end of the stack. That’s pretty much why I joined and why I liked Instacart.

Andrew: My name's Andrew Kane, and like Nick said, I'm also an early engineer at Instacart. Some of the cooler things I've worked on here, search is one of the big ones. Now I work on our logistics system, making sure customers get their orders on time.

Y: Cool. There are probably a lot of different moving parts, and we probably won't have time to go into each aspect of the architecture, but from my understanding there are at least four major pieces: iOS, Android, web, and backend. But before we get into the specific aspects, in general, for the first version of your product, what did that look like?

The Instacart MVP

Stack: Rails • Heroku • PostgreSQL • Objective-C

B: It was terrible. It was really atrocious. Although the iOS app always looked okay.

Y: So you launched with iOS?

B: Yeah we launched iOS first. The iOS app looked okay. It worked; you ordered groceries from Instacart, there was no notion of stores or retailers on the site. You just ordered your bananas and milk from Instacart and we had them delivered from Safeway.

That was the first three or four months of the company. The backend infrastructure was just really terrible. You placed an order, it did a lot of client-side checks to make sure that it was okay and then pushed it through the system. That system just accepted any order you'd give it, regardless of whether or not we could get it to you on time or anything. It wasn’t smart, which worked okay when you’ve only got a few orders coming through. Then we very quickly started to expand past that, but there were a lot of times where me, Max, Apoorva, everyone there ended up going out all day Sunday delivering orders ourselves until two in the morning, because we'd taken so many orders that we didn’t have enough drivers to fulfill them.

Y: Okay, so it was a people problem.

B: Yeah, a lot of our problems are actually people problems.

Y: Gotcha. For that first version, by the way, what was the backend written in?

B: It’s been Rails from the beginning, and we used Redis for caching our items from the beginning too. Rails is what we were comfortable with, and we knew we wanted the front end to be really, really snappy, so we denormalized all the item attributes into Redis, and that's how it got served out.
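The denormalization trick is to flatten each item and its joined attributes into one cached blob, so rendering the storefront becomes a single key lookup instead of a SQL join. A toy sketch of the idea (a plain dict stands in for Redis here, and the key scheme and fields are invented):

```python
import json

cache = {}  # a plain dict stands in for Redis in this sketch

def denormalize_item(item: dict, department: dict) -> None:
    """Flatten an item and its joined department row into one JSON blob,
    keyed so the storefront can fetch it with a single lookup."""
    blob = {
        "name": item["name"],
        "price": item["price"],
        "department": department["name"],  # copied in from the joined row
    }
    cache[f"item:{item['id']}"] = json.dumps(blob)

def fetch_item(item_id: int) -> dict:
    """What the storefront does on each request: one key read, no joins."""
    return json.loads(cache[f"item:{item_id}"])

denormalize_item({"id": 1, "name": "Bananas", "price": 0.79},
                 {"name": "Produce"})
```

With real Redis the dict operations become `SET`/`GET` calls; the read path stays the same shape.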

We had Postgres, and we started on Heroku. When an order would come in, we would just send it to a shopper, whichever shopper had received an order least recently, so it was kind of just this revolving door of shoppers getting text messages that would lead them to a web page that wasn’t really mobile formatted, that just told them what to get, and then they would say “I got it,” and that was the extent of the smarts of it. We just iterated.

So overall, I would say it’s somewhat indicative of the way we work, which is we try to push something up as soon as we can, the smallest possible version of whatever it is and then iterate and iterate and iterate until we get it right.

Y: Cool, so that was V1, iOS only, no web, no Android, and it sounds like a lot of features weren’t there. Did you have any major engineering challenges early on, or was it all sort of on the people side?

B: We've never run into an engineering challenge that we haven't been able to solve, so looking back, it always seems like there was never really a big problem, and we’re fortunate in that we had Heroku and the site was so small that we could scale it really easily. We didn’t have to worry about infrastructure. We didn’t have to worry about all that, all we focused on was just coding and solving application problems. A lot of the interesting problems early on, or things that I found interesting personally, were things like figuring out shopper scheduling and systems like that, which are very much people problems as well as engineering.

Probably the biggest engineering challenge that we really started to look into was late orders. We started to build a system that could actually predict how many orders we could take for a given time and not take more than we could, and make sure that every customer got a really amazing experience. We really only have one job, which is to take your groceries and deliver them to you on time, and if we can’t do that, what's the point?

That was kind of our first big challenge, but we were able to push that off a lot by manually doing batching for the first couple of months, and by manually assigning orders and doing that until we got a little bit smarter about what we actually needed to do to build it.

Y: Gotcha. So you learned that you needed to do some demand forecasting and just be smarter about how you fulfill that demand.

Logistics & Backend

Stack: Ruby • Python • R • NumPy • Pandas • PostgreSQL

Y: So fast-forwarding to now. Are there any major engineering challenges that you're facing now, and how are you dealing with those?

B: You guys want to talk about anything?

A: I can go into logistics.

B: Yeah, logistics is probably the most interesting.

A: With logistics, we have a vehicle routing problem, which is an NP-hard problem, so it’s very difficult to solve, and you can’t possibly run through all the different combinations.

Just, I guess, to take a step back and describe our logistics system - essentially, we need to grab all these orders, and we have all these shoppers, and we need to figure out which shoppers should get which deliveries.

We need to figure out when we need to start these deliveries and how we should combine them together. So one shopper can be working on multiple deliveries at a time to make things more efficient so they can go to the store, shop for the items, and then drop off at a couple different customers at once to make things the most efficient.

B: A shopper is one of the personal shoppers. Customers are the people who are on Instacart buying groceries. A batch is the unit that a shopper works on, so it might be one order or one delivery might be multiple orders.

A: Logistics is a big challenge there. As the company grows, the number of combinations, the solution space that we’re working in, continues to blow up, so we have to come up with heuristics to figure out which shoppers should get which deliveries. A lot of work goes into that: coming up with the heuristics, testing different heuristics, and figuring out how we can make things really fast.
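A toy version of such a heuristic, assigning each order greedily to the shopper who can finish it soonest without exceeding their capacity, might look like this. All of the scoring, numbers, and constraints below are invented for illustration; the real engine is far more elaborate:

```python
from dataclasses import dataclass, field

@dataclass
class Shopper:
    name: str
    speed: float            # items picked per minute
    has_car: bool
    assigned: list = field(default_factory=list)

def estimated_minutes(order_items: int, shopper: Shopper) -> float:
    """Crude duration model: bigger orders and slower shoppers take longer."""
    return order_items / shopper.speed

def assign_orders(orders, shoppers, capacity_minutes=120):
    """Greedy heuristic: take orders largest-first and give each one to the
    shopper who can finish it soonest without exceeding remaining capacity."""
    for order in sorted(orders, key=lambda o: -o["items"]):
        best = None
        for s in shoppers:
            if order["needs_car"] and not s.has_car:
                continue  # market constraint: some deliveries require a car
            load = sum(estimated_minutes(o["items"], s) for o in s.assigned)
            cost = load + estimated_minutes(order["items"], s)
            if cost <= capacity_minutes and (best is None or cost < best[0]):
                best = (cost, s)
        if best:
            best[1].assigned.append(order)
    return shoppers
```

A greedy pass like this runs in seconds even for large inputs, which is what makes a complete replan every minute feasible; the trade-off is that it only approximates the NP-hard optimum.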

We actually have two systems right now. One is in Ruby and one is in Python.

N: Which Python libraries do you guys use?

A: We use kind of the typical data science stack, which is NumPy, Pandas, there are a few others that I can't think of off the top of my head.

B: The data science is what's all in Python, and the application infrastructure is still Ruby.

A: Yeah, so we have a data science team that works in both Python and R, so kind of how that code typically interacts is they'll all read things from our production database, do any computation that they need to, and either put it back in the production database in another table or put it in another data store, and from there, the Ruby app will kind of pick up where it left off to do whatever it needs to with the numbers. So in the case of demand forecasting, we have Python or R code that does the estimates, that reads all the data, comes up with how many shoppers we’re going to need for the next week or two, and then writes those values, and then the Ruby app kind of displays those values and gives our operations team the tools to schedule shoppers and things like that.
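As a stand-in for that pipeline, here is a deliberately simple demand forecast in pandas. The real models are much more sophisticated; the trailing-average approach and the orders-per-shopper conversion factor below are assumptions for illustration only:

```python
import pandas as pd

def forecast_shopper_need(daily_orders: pd.Series,
                          orders_per_shopper: float = 8.0,
                          window: int = 7) -> float:
    """Toy demand model: forecast tomorrow's orders as a trailing weekly
    average, then convert that into a shopper headcount."""
    expected_orders = daily_orders.rolling(window).mean().iloc[-1]
    return expected_orders / orders_per_shopper

# Two weeks of (made-up) daily order counts, with weekend spikes:
history = pd.Series([80, 90, 100, 110, 120, 200, 220,
                     85, 95, 105, 115, 125, 210, 230])
shoppers_needed = forecast_shopper_need(history)
```

In the flow described above, the forecasting code would write `shoppers_needed` back to a database table, and the Ruby app would read it there to drive the scheduling tools.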

We have four people doing data science, and they’re all on the logistics team right now, so the logistics team is six people. So myself, one other software engineer and then four data scientists, and we'll work on the logistics engine. We do a lot of predictive modeling, like figuring out how long a shopper’s going to be in the store, how long it takes to drive from location A to location B, so those kind of estimates go into the logistics system so it’s more intelligent.

Y: Gotcha. In terms of the database, you’re still using Postgres for everything?

A: Yup. We run that on AWS RDS.

N: Although we do use a variety of Redis instances both for data science use and for our production item use.

A: We also use Memcache mostly for web.

B: How fast do you think the plan is updated? If you think about what's going on in a given day, let’s say right now we have our existing orders for the rest of the day. We have our shoppers that have already started today, and then we have new orders being placed all the time, and all of this has to go in … You have shoppers that are starting from different locations, coming from different stores. They’re all working on different batches, and so we’re just constantly updating the plan for the day and for tomorrow.

A: Every minute we have a new plan, completely new plan written from scratch.

Y: When you say “plan”, you mean everything that needs to happen from a logistics perspective right?

A: Yeah, so we’ll plan out the entire day’s worth of deliveries. You could log into an admin page right now and see, if we didn’t end up changing the plan at all and no more deliveries came in, what we would be doing for the rest of the day, and every minute we update that with new information. This is saying which orders go to which shoppers when, basically.

And so what goes into that is, new orders are coming in, so we can’t do it yesterday. We have to do it all the time for the rest of the day. But also, we have to look at how many orders can a shopper take? How many items are in this order? A five-item order is way different from a 60-item order, and it takes way longer. Shoppers have different speeds at which they work.

Even in different markets, we have different types of shoppers, so some people have cars, and some people don’t, and we have to take that into account. We have to figure out how do we arrange all of these orders and assign them to people in the right way so that you always get your delivery between 3pm and 4pm, or it comes within one hour or two hours or whatever you’ve picked, and we do that all the time.

Y: So that’s the fulfillment process. Was it just you working on it, or what did it look like early on?

A: So Brandon did the first version of the system.

B: One of the ways we work here at Instacart is we usually like to have a metric to measure first. And so Apoorva wrote the very, very first version of it, which was just a text message immediately to a shopper, and then I wrote the first real version, which tried to match up orders and try to be a little bit smarter about which shopper to give it to, but it didn’t do any kind of capacity planning or whatever, it would just find a shopper, and then if there was no shopper available, it would still find a shopper, so it would just give it out.

Then we would iterate on that, but at some point we were starting to grow, and even with some manual batching and manual help on that page, at a certain threshold, it was just too much to do, and so the number of late orders was going up, which is, again, really terrible. So Andrew’s job was actually to fix late orders.

That was it. That was the problem, right? You have a percentage of orders that are late on a given day or whatever. Push that down, and so that’s kind of how Andrew took over and started working on that and making it a lot smarter.

Then he was just constantly iterating, and the nice thing is that you can push a change today and see immediately whether or not it makes a difference. We were able to move pretty quickly.

Mobile & Web Apps

Stack: Objective-C • React • Backbone • Firebase • Android SDK

Y: Very cool. Do you want to talk a little bit about what you focus on Nick?

N: I focus on a variety of things. I guess what's probably most interesting is talking about our front-end stack on the variety of platforms, and currently, I do a bunch of tools for our internal team, so basically building the tools that are consumed by our operations teams, our customer support teams, and a variety of admin tools and everything. We were growing so rapidly, we needed to have… There's a huge surface area of stuff that we do that we need to have the ability to control and change to better serve our customers.

But previously, I’d worked extensively on our consumer products, which are pretty interesting. Besides our apps, which are pretty traditional iOS and Android apps, I don’t think we do anything that crazy.

B: I think people are always surprised when I show candidates our shopper app. It’s something you don’t even think about, but clearly, the shoppers have an app.

Y: I was about to ask about that.

N: You mentioned the four things we were going to focus on - backend, iOS, Android, and our web interface. We also have a web, Android, and iOS app that we provide for our shoppers, so it’s a custom app, completely fully featured, and in fact, it takes as much development time as our consumer product. Then we have all of these shoppers running these custom apps that have all these features that better enable them to scan items, to check the items, get instructions on how to pick a good melon, to check this is the correct organic melon, and this is the correct section, which items to pick in which order so that they’ll be fresh, a scanning system to go through the checkout aisles faster.

All these ways in which we make the shopper’s experience more efficient, in order to deliver a better customer experience. For a while, we had a UI kit shared between our shopper apps and our consumer apps, so that they had a similar aesthetic and a similar set of background functions.

Y: You mean on the iOS side?

N: On the iOS side, and the Android side to some degree, but we moved away from that as they’ve diverged, as our consumer app became more fully featured and integrated more robust and modern technologies like Apple Pay, versus our shopper app, which doesn't need any of those things and is a much more traditional app. But it’s been interesting to see them grow and diverge and to see how many people are using them on a day-to-day basis.

Y: Yeah, so originally, was it always two separate apps, shoppers and consumers?

N: Yeah. We used to just send the shoppers to a web view, basically, a website that had the list of items.

But what happens is that, let’s say you’re at a store that has no signal. There's concrete everywhere. You don’t get a signal. If someone adds an item or something changes and they need to mark “I couldn’t find this,” they'd have to go and find some signal, hit the stupid thing, and then go back into the store. Versus now, we have an internal queuing system in the app, so it can handle loss of network connectivity, and we know where they are when they’re on shift and they’re able to do all of those other things.
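The internal queuing idea is language-agnostic: buffer actions locally while there is no signal, then flush them once connectivity returns. A minimal sketch in Python (the class and event names are invented; the real shopper app is a native mobile app):

```python
from collections import deque

class OfflineQueue:
    """Buffer shopper actions (e.g. 'item not found') while the store has
    no signal, then flush them in order once connectivity returns."""
    def __init__(self, send):
        self.send = send          # callable that posts one action to the API
        self.pending = deque()

    def record(self, action: dict) -> None:
        self.pending.append(action)

    def flush(self) -> bool:
        """Try to deliver everything queued; stop at the first failure so
        the remaining actions are retried later, still in order."""
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                return False      # still offline; keep the queue, retry later
            self.pending.popleft()
        return True

sent = []
q = OfflineQueue(send=sent.append)
q.record({"type": "item_not_found", "item_id": 42})
q.record({"type": "replacement_scanned", "item_id": 7})
q.flush()
```

Popping an action only after a successful send keeps the queue intact across failures, so nothing is lost while the shopper is deep in the concrete aisles.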

Y: Did you guys figure that out pretty quickly?

N: Well, in addition to us receiving feedback from our shoppers constantly, we actually all go on shopping trips several times per year. All the engineers at Instacart go on a shopping trip at least once or twice a year, and this means that we’re able to experience firsthand, like “Oh, this is really frustrating” or “This is great.”

And we’re all, of course, consumers at Instacart as well, so we get to see both ends of it.

N: I’m not sure what else I can add to that.

B: You could talk about how the app started.

N: Yeah, that’s actually an interesting point. Basically, the trajectory was we had our iOS app, which started out native, right? It started as a native app, and then we realized you have to go through a review process and it’s slow, and at a very early stage, it made sense for us to make it a wrapped web view. Basically, the app would open, and it would be a web view inside of it that we could iterate on quickly and change very rapidly and not have to wait for the App Store review process to change it. It wasn’t a totally native experience, but it was actually pretty good and lasted for a very long time, and was up until recently the foundation of our current mobile web experience, which is different from our app situation.

So for a long time, basically, our app store iOS Instacart app was a wrapped web view of just our store, a condensed version of our store, which meant that we could add things. We could change sales. We could change the formatting. We could change the UI really fast and not have to worry about the app store review process.

This all changed about a year ago, I would like to say, at which point it became a totally native app. We felt comfortable enough with the product and all the features that we made it a native experience and made it a fully featured app.

Now we’ve been featured on the app store countless times.

Y: Very cool, very cool. Do you want to talk about the Android side?

N: We didn’t have one for a long time, but when we did introduce it, it was in a similar time frame to when we made our iOS app native. We introduced an equivalent native Android app, and they’ve kept feature parity ever since.

Y: Was there any big difference between the Android and iOS experience, or were you pretty much aiming for…

N: They follow the paradigms of both, so it’s not an equivalent UI, but in terms of feature-wise and interaction-wise and everything else, it’s all driven by our APIs, so they’re showing the same content, and they're interacting with it pretty much the exact same way.

One of the cool things about the APIs is the way Brandon had built them originally. The main web store is a Backbone.js single page web application, and so it hits API endpoints to do all the calls, and we use those same API endpoints for the consumer iPhone app, the consumer Android app, and the consumer mobile web app. It worked out really nicely, us just building one API and just all these clients, including the web client, using that.

B: Everyone talks to the same endpoint, so the same information, just different client.

A: Even the iPhone and Android developers will add their own API endpoints with limited knowledge of Ruby. It’s cool to kind of see that. I think a lot of people are, I guess, cross-functional/disciplinary.

B: We like owning things completely, and so as a mobile developer it sucks if you have to wait for someone to build this API that you need, so our first iOS developer knew Rails before he learned iOS. He would just write his own stuff, and then our Android guys didn't, they came from a Java background, they didn’t have Rails experience. They just picked it up and started writing their own APIs, and now they’re completely self-sufficient. So if they need something done, they just go do it and get it done.

Y: Nice.

A: Same situation on the data science side, so none of the data scientists knew Ruby coming into the company, and after maybe a week or two, they were writing their own dashboards and features and stuff like that in Ruby to show the data that they had generated in either Python or R, because they wanted to put it on a dashboard and show it to the rest of the company.

N: So I know we mentioned that it’s a Backbone app, on the web side. Until two weeks ago or so, it was Backbone views and models, and everything was on our main store app and our mobile web app, but actually, we just switched our mobile web app to using ReactJS for the interface. So it’s using Backbone models but ReactJS front-end components.

Having a mobile website allows our Windows Phone users, or people with BlackBerrys, to order.

Y: Why’d you guys switch from Backbone to React?

N: Really, it was born out of frustration with how the Backbone model-view bindings worked; they weren’t especially performant for large views, and we had to do lots of tricks to make them performant. But swapping that out with React views meant that it could be both simpler and faster without having to spend a lot of time on that.

B: One other interesting thing about that is, since React actually works okay with the Backbone models and the Backbone router and stuff like that, we didn’t have to rewrite the mobile web application and update it to ReactJS. Rewrites are almost always a bad idea. We were able to upgrade pieces of it at a time, move on to React, and now the entire thing is using React and just has the Backbone router and models and stuff like that that we already had, so it's a lot faster.

Y: Gotcha. And you’re only using React on the mobile site.

N: Right, but I would like to use it in more things. It’s been a very nice experience, but in general, we don't like the idea of a rewrite for a rewrite’s sake. It’s always in terms of some other objective. In this case, it was the maintainability and the performance of our mobile web app.

Y: Okay. Cool. Was there any particular reason you went with Backbone originally?

B: You know, it was just what I knew. So we very, very early on, we were iOS only, then we thought, well we’re missing out on half of the market. We need to add Android. So we had a friend of ours start working on the Android app, and I had to build the API for him, but I was having a really hard time doing that because I didn’t know what he needed exactly, so I built the first version of the web store over the weekend because I wanted to have a client to consume myself for the API I was building.

It was just Backbone because that’s what we knew, and that was a lot more stable back then. React didn’t exist. Angular was brand, brand new. So that's what we went with.

Y: You also could’ve just used Rails, right?

B: Yeah, but … yeah, we could’ve. But I like single-page applications. They feel more responsive.

Y: So then on the website, shoppers also have their own experience?

B: Actually, shoppers interface almost exclusively through the iOS app and Android app.

N: We still use a web view to some degree in there. We have a Backbone single-page application that does a lot of this stuff, like allowing them to manage their own information, to schedule themselves, to see their paychecks, and a few other things, but for the most part, it’s all native.

Y: Gotcha. So we’re mostly looking at Rails, Backbone, of course the mobile stuff, Postgres all around.

B: Yeah we want to use Redshift, or we think Redshift will be good at some point, but we’re not there yet.

Inventory & Photos

Stack: Amazon S3 • Amazon CloudFront • CarrierWave

Y: Very cool. Do you want to share any of the metrics you guys have? Have you shared that publicly?

B: We really haven't, and we’re not going to.

Y: Fair enough.

B: The one thing I’ll say is, I work on the partner and catalogue team. That’s kind of where my main focus is, so that’s interfacing with our retailer partners and some of other partners, and then also working on our catalogue data, which is all the items that you would buy and order through Instacart. What's really interesting is that the scale of that has increased so dramatically.

Back in the day, we would grab all of the items, and then we started to open new stores, and we had to figure out how to acquire that inventory.

We would do stupid things like buy an entire aisle of stuff and then set up a photography studio in our office and take photos of everything, because that product data just doesn’t exist.

Then eventually Nick, I think, built a scanning app over a weekend or so, to let teams of shoppers go into a store and very quickly capture the inventory that was there. Now we work with our partners and have better systems to handle that, but the number of items we work with has gone from a few hundred thousand to many millions over the last few months, very quickly.

So the scale of that has grown so dramatically, but the team working on it is still basically one engineer and a few data people doing the cataloguing.

Y: I’m surprised that the grocery stores don’t have some of that to make available to you, even if you had to pay for it or something, because they have their own inventory system, right?

B: You'd be surprised at the data they have or don’t have.

So in some cases, there are certain stores where we know more about what's in their store than they do.

Y: Yeah, I’m sure. So you said it’s just one engineer working on that right?

B: There's one engineer who focuses on inventory. But no one owns any part of the code base, so you might be working on a lot of different things depending on what you need to get done.

There's one who focuses primarily on that, and we have another team that focuses on data quality: making sure that if this says Crystal Geyser Sparkling Mineral Water, the size is right and we have a good picture for it, so that when you’re on Instacart and you see an image, you’re actually buying that product.

Early on, we had some funny, funny things happen. Actually, this is really really early on in the company, but there was a situation where we got this really angry phone call from a customer, and they were like, “What am I supposed to do with all these bananas?”

We were like … “What do you mean? What are you talking about?”

“What am I going to do with all these bananas?”

We pulled up their order. “You ordered 10 bananas. I don’t understand what's going on.”

“What am I supposed to do?” So he sent us a picture, and he had just bananas on bananas, bags of bananas flowing out. What was going on? The guy had ordered 10 bananas, but the image for the bananas showed a bushel of bananas, so when the shopper went there he bought 10 bushels, and that’s what he delivered to the guy. Little stuff like that will just catch you off guard.

We ended up sending him a banana bread recipe and asked him to donate it or give it to his neighbors and stuff, and it worked out. But there's a lot of weird subtleties that you don’t even think about that happen all the time with that team.

Y: That’s funny. The photography aspect sounds interesting. The Airbnb photography set up is pretty well known now, where they’ve had professional photographers since early on. But I guess for you guys it’s maybe just as important because it feeds into how people are buying.

B: Yeah, an item without any pictures is an order of magnitude less likely to be purchased, so you have to have images. It’s just more comfortable for the customer. It matters for the shopper too: they have no idea what to pick when they go to get some random item they’ve never heard of, like granola bars; you could search forever trying to find it, but if you have an image it’s a lot easier. You narrow it down very quickly, and then you just have to pay attention: oh, they wanted the chocolate chip milk thing.

So it directly drives huge efficiencies in our business.

Y: Right, and so I’m sure you guys at this point have millions of photos. Are you just throwing them in S3?

A: Yeah, we use the CarrierWave gem to sync them to S3, and then we use CloudFront to serve them up to all the apps.
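
For reference, a CarrierWave uploader backed by S3 (via fog) and fronted by CloudFront typically looks something like the sketch below. The bucket name, CloudFront host, and environment variable names are placeholders, not Instacart’s actual configuration.

```ruby
# Minimal CarrierWave + S3 + CloudFront setup (illustrative values only).
CarrierWave.configure do |config|
  config.fog_provider    = "fog/aws"
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],     # placeholder
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]  # placeholder
  }
  config.fog_directory = "example-product-photos"                 # S3 bucket
  config.asset_host    = "https://d111111abcdef8.cloudfront.net"  # CDN host
end

class ProductPhotoUploader < CarrierWave::Uploader::Base
  storage :fog # files land in S3; URLs are generated against asset_host
end
```

With `asset_host` set, `photo.url` returns a CloudFront URL instead of a direct S3 one, which is how the images reach all the apps through the CDN.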

Infrastructure Tools & Services

Instacart Rails • Redis • Amazon EC2 • Rollbar • GitHub

Y: Let’s talk about the actual tools and some of the services that you’re using. So you went from Heroku to AWS right?

A: Yeah. We liked a lot of things about Heroku. We loved the buildpacks, and in fact we still use Heroku buildpacks, but we were frustrated by the lack of control over a lot of things.

B: It’s nice to own your own destiny.

A: It’s nice to own the complete stack, or rather as far down as AWS goes. It gave us a lot of flexibility and functionality that we didn’t have before.

Y: Gotcha. So EC2, RDS, CloudFront…

A: Yup, we use a lot of Amazon technology.

EBS. We use ElastiCache, both Redis and Memcache. Route 53. S3.

B: There's basically only one thing we have another company manage, which is our Elasticsearch clusters.

We use someone else for some Redis instances too. But it’s all hosted on AWS when you get down to it.

Search

Y: Do you want to talk a little bit about search?

B: Yeah, we've always had search. The very first version of search was just a Postgres database query. It wasn’t terribly efficient, and at some point we moved over to Elasticsearch, and since then Andrew has done a lot of work with it. Elasticsearch is amazing, but it doesn’t come configured with all the nice things out of the box; you spend a lot of time figuring out how to put it all together to add stemming, autosuggestions, spelling adjustments, all kinds of different things, even tomato/tomatoes returning different results. So Andrew did a ton of work to make it really, really nice and built a very simple Ruby gem called Searchkick.

Y: I was about to say, yeah, I've seen that. Searchkick, so you guys did that. Okay, very cool.

A: Yeah, the big epiphany for us on search was something we should’ve been doing all along: tracking what users were searching for and really looking into that data. So the biggest advice for anyone doing search would be to track what users search for and whether they convert. Once we started doing that, we had a live dashboard of what people were searching in real time. It was like, wow, this is really cool, but then you'd see that no one knows how to spell Sriracha or zucchini. It was so painful, because you would see someone struggle through all these different misspellings, and at first we couldn’t really do anything to help them. So just from looking at the search data, we came up with a few different things, and handling misspellings was one of them.

The other big thing was conversions. By default, Elasticsearch favors the shortest name that matches, so if you have products called “banana,” “banana bread,” and “organic bananas,” a search for “banana” will return them in that order. But you don’t know if that’s right; maybe you want organic bananas to be the first result, but since the name is longer, that’s not how Elasticsearch is going to rank it by default.

So we started using conversions, which made a huge difference. If someone searches for “banana,” maybe the first time they search “organic banana” is the number three result, but when they add it to their cart, that result gets a little bit of a boost, and when enough people are doing that, the good results very quickly bubble up to the top. So it kind of trains itself, which is really handy, because then we don’t have to do as much with it.
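
That conversion-boosting behavior maps onto Searchkick’s `conversions` option. A hypothetical sketch follows; the model and the `searches` association are illustrative rather than Instacart’s actual schema.

```ruby
# Hypothetical sketch of Searchkick conversion boosting. Each product's
# search_data exposes a "conversions" hash mapping past queries to how many
# distinct users converted on this product for that query; Searchkick then
# boosts those products when the same query comes in again.
class Product < ActiveRecord::Base
  has_many :searches # assumed join model recording (query, user_id) pairs

  searchkick conversions: [:conversions]

  def search_data
    {
      name: name,
      # e.g. { "banana" => 234, "organic banana" => 57 }
      conversions: searches.group(:query).distinct.count(:user_id)
    }
  end
end
```

As the counts grow, "organic bananas" can outrank the shorter "banana" result for the query "banana" without anyone hand-tuning the ranking.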

Y: Very cool, very cool. That’s awesome. Then, you just use that across all of the apps via API. Very cool. Can you talk a little bit about how you guys ship code, just your deploy process?

Build & Deploy

N: Yeah, so we use GitHub, and we basically use a variant of continuous deployment: when anyone merges in a feature they’ve finished, they ship it immediately, and we bundle it up as a buildpack and send it to all of our EC2 servers.

We use CircleCI for our tests, but not for our builds. The builds are all manually triggered right now.

B: Any developer on the team can trigger a build and deploy at any time. So on a given day, we probably deploy 20 or 30 times to prod.

Y: I’m assuming you guys have a few different environments.

N: We have a staging system that is obviously smaller in scale but mirrors our production setup exactly, so you can test anything there. For example, we have a staging database with a sanitized version of our production database that is synced over every day, in addition to a staging Redis and everything else. So basically we have an exact copy that we can test anything on to see the implications, run load testing, or whatever else.

B: You want to talk about Beta maybe?

N: Yeah, sure. One slight thing we do in terms of deployment that other people may not be doing: I’m sure a lot of people have talked about the blue-green deployment cycle, where you have one version of the code that is slowly rolled out to everything else. We do a much smaller-scale version of that with what we call our beta servers. Whenever you ship code, it immediately goes to a set of beta servers, and you can test for that in your code and say, “If I’m a beta server, do this differently.”

Right now, I think 10% of users get the beta experience. Once they log in, 10% of users are randomly assigned to these beta servers, and they’ll be the first to get some new feature, some user interface change, a new color scheme, whatever. We can see the impact from the beta servers in terms of load, user feedback, and everything else. We’ll know, “Hey, this user hit the beta server, so they’re getting the beta experience.” And if it’s positive, we’ll roll it out to everyone else.
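
A sticky percentage rollout like this is often implemented by hashing a stable user identifier into buckets, so the same user always lands on the same side. A minimal sketch of the idea, not Instacart’s actual routing code:

```ruby
require "zlib"

BETA_PERCENT = 10 # roughly 10% of users get the beta experience

# Hash the user id into one of 100 buckets; buckets 0..9 are beta.
# CRC32 is deterministic, so a given user always gets the same answer.
def beta_user?(user_id)
  Zlib.crc32(user_id.to_s) % 100 < BETA_PERCENT
end

# The login flow (or load balancer) would pin the session accordingly.
def server_pool(user_id)
  beta_user?(user_id) ? "beta" : "production"
end
```

Because the bucket comes from a hash of the id rather than a coin flip per request, a user keeps the same experience across sessions, and changing `BETA_PERCENT` widens or narrows the rollout.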

Y: You guys just pretty much built up your own A/B testing framework.

N: This isn’t really for A/B testing. This is more for rollout of a new feature. We have an entirely separate way in which we do A/B testing because this wouldn’t be a great A/B test for some things. If it’s only 10% of users, maybe we would need to do a 30% test or a 60%, 65% test.

We have a much more robust A/B testing framework that we wrote ourselves. It’s not open source yet, but it will be.

Y: Okay so you do this for all the apps.

N: Well, specifically for all the APIs. For the apps specifically, I don’t think they’re doing any A/B testing right now. I’m not sure what they usually do.

We do it at an API-level. We'll send feature flags. We'll set this user as on this test, for example, send it over via the APIs.

And testing, of course, for other unexpected problems from actual production traffic that we didn’t foresee. We’re still looking at the ramifications of that.

Utilities

Y: Gotcha. Are you guys using any service for texting?

N: We use Twilio extensively for voice call routing and for texting. In fact, we recently started using it for MMS messaging as well.

B: We do number masking, so the personal shopper never has your actual number. You call the Twilio number and it routes you intelligently to the customer, and vice versa.
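
The masking pattern works because Twilio asks a webhook what to do with each inbound call, and the webhook answers with TwiML that dials the real number while presenting only the masked one. A stand-in sketch; the phone numbers and routing table are made up:

```ruby
# Masked (Twilio) number => the real number it should connect to.
# In production this lookup would come from the live order/assignment data.
MASKED_ROUTES = {
  "+15550001111" => "+15559992222"
}.freeze

# Build the TwiML document Twilio expects back from the voice webhook.
def twiml_for(masked_number)
  target = MASKED_ROUTES.fetch(masked_number)
  # callerId keeps the masked number as the visible caller, so neither
  # party ever learns the other's real phone number.
  <<~XML
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Dial callerId="#{masked_number}">#{target}</Dial>
    </Response>
  XML
end
```

The same table can be consulted in both directions, which is how "and vice versa" falls out of one routing layer.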

We use SendGrid for email.

B: One of the cool things that other companies might not do, one of the services we use, is Sift, for fraud protection.

N: So before, we were having… not a huge problem, but a fraud problem, where people were placing orders and getting them fulfilled even though they were very obviously using stolen credit cards. So we started using Sift. Basically, we send Sift a collection of signals from users: they added this item to the cart, they tried to add a credit card but it failed, they added this address and then they submitted. Sift runs machine learning on those signals and sends us back a classification of the user, and we use that as one of the factors in deciding whether we should fulfill that order.
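
Sift’s events API accepts JSON events with `$`-prefixed fields, one per user action. A hypothetical sketch of building those signals; the event and property names follow Sift’s conventions, but the exact payloads here are illustrative:

```ruby
require "json"

SIFT_API_KEY = "YOUR_SIFT_API_KEY" # placeholder, not a real key

# Build one signal for Sift's REST events endpoint.
def sift_event(type, user_id, properties = {})
  properties.merge(
    "$type"    => type,
    "$api_key" => SIFT_API_KEY,
    "$user_id" => user_id
  ).to_json
end

# Each user action along the checkout flow becomes its own signal:
add_to_cart = sift_event("$add_item_to_cart", "user_123",
                         "$item" => { "$item_id" => "bananas" })
```

Posting a stream of these per user gives Sift enough behavioral context to return a risk score, which then feeds the fulfill-or-not decision.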

So that’s all happening in real time, without human intervention. If someone has a very high Sift score, you can say, “This person is clearly fraudulent. They’re using credit cards from six different places and ordering only Patrón.”

Y: Right, so that immediately sets off a flag, Patrón.

N: Yeah, razorblades and Patrón.

Y: Payments?

B: Stripe. Their marketplace payouts as well as plain old credit cards.

Y: And then you use Firebase right? What do you guys use it for?

N: We use it for a few things. We use it internally for a few dashboards, because it’s actually really nice to have real-time dashboard data with Firebase. We also use it extensively for live order updating. For example, when a shopper is picking your items, you can go to your order screen and see live which items were found or not found, live position updates of your shopper on the map, and live status information like “Nicole is now picking up your order,” all without having to reload the page or poll or anything. The live updates just happen through the Firebase API, which is nice.
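
Firebase also exposes a plain REST interface, so a server-side status write can be sketched without the SDK. The database URL and JSON shape here are invented for illustration, not Instacart’s actual schema:

```ruby
require "net/http"
require "uri"
require "json"

# Build a PATCH request that writes an order-status update into Firebase.
# Clients subscribed to /orders/<id> see the change without reloading.
def order_update_request(order_id, status)
  uri = URI("https://example-app.firebaseio.com/orders/#{order_id}.json")
  req = Net::HTTP::Patch.new(uri, "Content-Type" => "application/json")
  req.body = { status: status, updated_at: Time.now.to_i }.to_json
  [uri, req]
end

# To actually send it:
#   uri, req = order_update_request("abc123", "picking")
#   Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
```

PATCH updates only the listed keys, so a status change doesn’t clobber the shopper-position or found/not-found data living under the same order node.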

B: We use Rollbar for exception tracking. It’s fantastic. I've used other things, but Rollbar is just really, really fast. Their speed of development is amazing, and from the features, you can tell it’s developers building it.

Y: Anything else on the testing or monitoring front?

N: We use RSpec. For tests, I think it’s actually good to have a very boring stack, so that the testing setup itself isn’t something you have to test. It just works.

Y: From an, I guess, workflow perspective, are there any tools you guys are using day in and day out?

B: Trello and HipChat for coordination.

Y: No GitHub Issues?

B: No. They don’t really fit into our development process.

Trello is good for that. Actually, to expand on that: Trello allows the entire company to have insight into what the developers are doing, and anyone can contribute and comment on things. If we used GitHub Issues, we’d all need accounts there, and we'd have tons of people on our GitHub account who don’t need to be on it. There's no reason to manage them all there.

Open Source

Y: Gotcha. Cool. I’m trying to think about other things we didn’t cover here.

A: Yeah, so we have a couple more open source projects.

Ahoy is analytics, so you don’t have to build your own analytics framework from scratch; we built it to better understand user behavior. The API is similar to Mixpanel: you have visits, and then you have events, and each event has a name and different properties, and you send the data to your own server. From there it supports a lot of different backends, so if you want to put your analytics into Mongo, you can do that out of the box. You can put it into log files, you can put it in your production database, or you can write your own store. The reason for that is that as we grow, we’ll need to change data stores, so it should be easy to swap out.

B: It also does email tracking. It actually has a lot of really cool features that you don’t want to build yourself, and it’s nice having that data inside your app, because you can do things with it.

Y: Gotcha, and where do you guys send that data?

A: Right now, we send some of it, anything we need to query in real time, to our production database, and the rest, which we’re just tracking but not really doing much with right now, we put into log files so we can analyze them later.

Y: Speaking of logs, are you using anything for log management?

N: We use Papertrail; basically all of our instances write syslog entries that are shipped to Papertrail.

Y: Any other open source things you want to touch on?

N: Blazer is a newer one. We built a tool that lets anyone, city managers and people in operations, query data from our database. We limit access to certain columns for privacy reasons, but it essentially allows them to query their own data so they don’t have to wait for an engineer. We’re very big on letting people help themselves here.

You write a SQL query, and then you can share that query with other people. What makes it even better than a typical SQL client is that it supports variables: if you’re in one city, you can write a query with a city variable, get a drop-down of all the cities, and anyone from another city can run that exact same analysis on their city, which is really nice.

Basically templated SQL queries, but in our own system, as a gem, as opposed to an external service.

Y: That’s really cool. There are a bunch of services that do that, so that’s cool that you open sourced that.

A: Yeah, it does charts as well, so we’ll try to automatically generate charts from some of the queries.

Y: This has been awesome guys. Thanks a lot for doing this.
