Benn Stancil

Yeah, so internally at Mode, one of the things that we really care about is that we want tools that are easy to use for the analysts and data scientists who are actually consuming that data. So, coming back to the point from that Stitch Fix blog post, we really believe that the data scientists here at Mode should be responsible for as much data management as possible. There are a lot of great tools out there now, whether ETL tools, warehouse tools, or pipeline tools, that analysts can manage pretty well, and you don't need someone to be a dedicated, capital-E engineer to build out the initial phases of the pipeline. So for us, when we evaluate those tools internally, we want to make sure that they're things we can set up pretty easily, and that as customers of those tools who aren't fully fledged engineers ourselves, we still know how they work and can make sure they're up and running and performing the way we want.

I think the analogy we often use is that it's like buying a car: you don't necessarily need to know the ins and outs of how the car works, but you need to know that it's reliable. And if you learn not to trust the car, you don't want to learn how to fix it; you just want to buy a different car that actually works. So when we're looking for tools ourselves, we tend to focus a lot on what the experience is like for the folks who are using them. Can we rely on it? Is it something that we need a dedicated person to run, or is it something that can run in the background while the analysts and data scientists get it to work the way they like to work?

The other thing that we really look for is usability. I think this is a place where the folks building ETL tools and data pipeline tools often don't think as much as perhaps they could, which is that the surface area of those tools isn't the application itself or the web interface. I really think of the surface area of those tools as the data itself: if I'm using an ETL tool, the way that I interact with that tool day in and day out is by actually interacting with the data that the tool is providing, not by logging into the web interface and checking the status of the pipelines and things like that. In those cases, little things matter. It ends up being column names that matter: are there weird capitalization schemes, or periods in column names? Those little things that make the data more frustrating to work with day in and day out end up being what really drives our experience with those tools.

Working with customers, most of them range from small startups to much larger enterprises. For the small startups, they often look like us. For the large enterprises, the place that we really try to focus is making sure that the tools we recommend are modular, because data stacks end up becoming very complicated: they have to serve a lot of different folks across a lot of different departments, pulling data from tons of different sources. We try to steer people away from looking for one tool to rule them all. Having one pipeline, one warehouse, one analytics tool, with all of these things serving every need, sounds nice, but it's often very difficult to actually create.
And we'd rather people be able to modularize different parts of their stack, so that if something new comes along that they want to use, they can easily swap it in without having to re-architect the entire pipeline.
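[Editor's note: the column-naming frustration mentioned above — weird capitalization schemes and periods in column names — is the kind of cleanup analysts often end up scripting themselves. A minimal, hypothetical Python sketch of such a normalizer (the function name and exact rules are illustrative, not Mode's):]

```python
import re

def normalize_column(name: str) -> str:
    """Make an awkward column name friendlier to query:
    lowercase it, and turn periods, spaces, and camelCase
    boundaries into underscores."""
    # Insert an underscore at camelCase boundaries: "createdAt" -> "created_At"
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)
    # Replace periods and runs of whitespace with a single underscore
    name = re.sub(r"[.\s]+", "_", name)
    return name.lower()

# Example column names of the kind the transcript complains about
columns = ["User.ID", "createdAt", "Account Name"]
print([normalize_column(c) for c in columns])
# -> ['user_id', 'created_at', 'account_name']
```

The point stands either way: if the tool emits clean names in the first place, none of this glue code is needed.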