by Michael Oliver

Matt Yule-Bennett is the CTO at revenue management software company Pace. He tells Client Server’s Michael Oliver about how they use Python to change the world of hotel pricing.

Michael Oliver: Tell me about what Pace does.

Matt Yule-Bennett: Pace is a software company focused on hotel revenue management. We have a pricing engine that predicts demand and reacts in real time to changes in that demand with a dynamic optimal room price. The optimal price is something that maximises revenue, i.e. makes sure the hotel is as full as possible and the hotelier is getting as much revenue from those bookings as they can.

MO: How does that work in practice?

MYB: We believe that with perfect information about demand, or at least a very good model of demand, you can know the exact right price to charge for a room at the moment it's being consumed. We’ve solved that problem with some pretty clever algorithms, statistical models and crunching of the numbers that come out of the hotel's own systems.
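As a toy illustration of the idea (not Pace's actual model), suppose demand falls linearly with price and the hotel has a fixed number of rooms; the revenue-maximising price can then be found with a simple search over candidate prices:

```python
# Toy sketch: the demand curve, capacity and prices here are all
# made-up numbers, purely to illustrate revenue maximisation.

def expected_demand(price: float) -> float:
    """Hypothetical linear demand curve: bookings fall as price rises."""
    return max(0.0, 200.0 - 1.5 * price)

def expected_revenue(price: float, capacity: int = 100) -> float:
    """Revenue is price times bookings, capped by the rooms available."""
    return price * min(expected_demand(price), capacity)

# Grid-search candidate prices for the revenue-maximising one.
candidate_prices = [p / 2 for p in range(40, 401)]  # £20.00 .. £200.00
best_price = max(candidate_prices, key=expected_revenue)
```

A real engine re-estimates the demand curve continuously from booking data, so `best_price` moves as demand shifts; the optimisation step itself stays this simple in spirit.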

MO: Why the hotel industry?

MYB: The very genesis of the company was about three years ago, when our founder Jens Munch noticed some curious behaviour in how consumers reacted to price changes in general. Some people were very price-sensitive, and any change in price would change their behaviour vastly. Other people were very insensitive, and you could change the price a lot and they would just do what they were going to do anyway. Jens thought there were some interesting unsolved problems in that, so he started thinking about how we could solve them. There were various experiments and prototypes, trying to figure out whether or not “dynamic pricing” could be solved in a generic way. It was about two years ago when they reached the conclusion that it would be smarter to focus on one industry and get very deep in that. The way forward would be to focus on a particular industry that has characteristics we like, and the hotel industry fit that bill.

MO: What does your tech stack look like?

MYB: We're predominantly Python because its data science ecosystem is so great. We have a fairly standard pandas, NumPy, SciPy science stack and then the rest of the backend is also in Python. Our API is a Flask-based monolith that we are migrating to Nameko microservices. Nameko is a framework for microservices in Python that I and a few others developed and open-sourced back when I worked at onefinestay. It’s now a fairly successful open source project with ~60k downloads a month. Several of us here at Pace are involved as core contributors. I’m a big believer in open source and will always encourage my team to get involved with projects they’re passionate about. On the front-end we have a React and Redux app. At the moment, this sits on top of REST endpoints but in the future we'll likely move to GraphQL. I’ve worked with GraphQL before, and I really liked its programming paradigm. It works really well with microservices. So I'm looking forward to adopting it when the time is right.

MO: It strikes me that you’re a very hands-on CTO?

MYB: Yeah, very hands-on. I like being in the code and architecture is really my thing. I think being a technical lead is about enabling your team, and that involves tools and architecture as well as process and direction. So I very rarely touch application-level code but I’m often trying to improve our tooling. Given Pace’s growth trajectory I won’t be able to stay hands-on forever, but I want that transition to happen in an environment where I’ve been involved in the foundations.

MO: Is Agile part of Pace’s day-to-day?

MYB: Over the years I’ve experimented with a bunch of different ways of being agile. I think different things work for different teams, but where we are right now, we're trying to go for continuous everything. We want our changes to be as small as possible and as frequent as possible. We want to move them from being an idea in someone's brain to deployed and running in production as quickly as possible. We're trying to apply those same things to our planning as well. When I first joined, we had a planning session every couple of weeks, and we loosely had sprints. But then we said, "Actually, we can do this weekly. We can make the meetings shorter, and we could have more regular touch points." But that change didn't give us what we were looking for. So we decided to do it every day. Our stand-up sometimes has a two-minute addition: a quick recalibration of what's at the top of the backlog. So our planning is super, super lightweight. We just do it very frequently.

MO: What kind of person does well at Pace?

MYB: Everybody here is very capable, and our values are to give trust and take ownership. That naturally results in a lot of independence. I don't like the term ‘cultural fit’ because it's often used in a negative way (as in, ‘we want people that look like us and behave like us’) which is the opposite of what you need for a healthy team. That said, we do have a culture here, and it's one of openness, transparency and high trust. We don't have a well-defined spec that says “You must do this and not do that”. We're looking for people who are able to see an opportunity and run with it.

"Over the years I’ve experimented with a bunch of different ways of being agile... we're trying to go for continuous everything."

MO: What are you looking forward to?

MYB: As we're going towards Series A, it's a very exciting time to be in the company. We've got a steep revenue curve that we are steadily climbing. In fact, we're ahead of target at the moment, which is a good place to be. At some point next year we will do our Series A fundraising round, and that will be like getting to the top of the mountain and then seeing a bigger mountain beyond it. I'm super excited about getting to that inflection point.

MO: And from a tech point of view?

MYB: Software architecture is my thing, and I'm really enjoying being able to adopt the latest, greatest tools out there, and making sure that our stack gives us the strongest foundation. We’re already on Kubernetes in Google Cloud, and we're migrating towards a more microservice-based architecture, away from our MVP monolith. We already talked about introducing GraphQL for the API there, and we're using as many cloud native tools as possible. One of the things we’re doing right now is improving our science tooling. Again, it's about the transition from an MVP to a more scalable architecture. Our data scientists currently write quite a lot of Python code directly, and they run their experiments on their local machines. We're starting to get to the point where we need better tooling than that, something with more leverage.

MO: What will that look like?

MYB: For example, a model validation framework that lets you run multiple versions of candidate models next to each other and compare them. That's something you could run on a local machine, but you’d quickly get to the point where people were waiting for their laptops, and you never want someone whose time is expensive to be waiting for a computer to do something. Using cloud native tools means we can easily run hundreds of these things at the same time. We’d see the resources in the cloud spike up in an elastic way and then come back down again, and just like that we’d have our results. And it’d cost us just a few pounds for the privilege. That's something that I find really exciting.
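The core of such a framework can be sketched in a few lines (the models, metric and holdout data here are entirely hypothetical): score every candidate model against the same holdout set and keep the winner. In production, each candidate's evaluation would be farmed out to its own cloud worker rather than run on a laptop.

```python
# Hypothetical sketch of comparing candidate demand models on a
# shared holdout set; all names and numbers are made up.

def mean_absolute_error(predictions, actuals):
    """Average absolute gap between predicted and actual bookings."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

# Two toy candidate models predicting bookings from lead time (days ahead).
candidates = {
    "linear": lambda lead: 50 - 0.8 * lead,
    "flat":   lambda lead: 35.0,
}

holdout = [(5, 46), (10, 43), (20, 33), (30, 27)]  # (lead_time, bookings)

scores = {
    name: mean_absolute_error([model(lead) for lead, _ in holdout],
                              [bookings for _, bookings in holdout])
    for name, model in candidates.items()
}
best_model = min(scores, key=scores.get)
```

Because each candidate's score is computed independently, the dictionary comprehension is exactly the loop that an elastic cloud setup would fan out across hundreds of workers in parallel.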
