Michael Drogalis

That's the idea. So when you download ksqlDB, what you're able to do is add connectors to the classpath. Connectors are JARs that you add to your classpath, and once you have them on the classpath, you can use syntax in ksqlDB to say CREATE SOURCE CONNECTOR or CREATE SINK CONNECTOR. And we offer that up in two ways. We think one of the easier ways to use it out of the box is to configure nothing else: you put the connector on your classpath, and then you're just able to say "create this connector," and internally ksqlDB will actually run the connector for you on its servers. And so there's no separate cluster; there's just one cluster doing both of these things for you. We do have lots of people who run high-volume workloads, and once you get into higher-volume territory, you're very mindful of resource isolation. And so the same syntax, being able to say CREATE SOURCE CONNECTOR, actually works not only in this embedded mode, it also works against an externalized Connect cluster. And so we actually let you choose, and it's the same program for both. And so we think this is a pretty powerful way to let people go from the beginner stages to sort of intermediate, and then all the way to high-volume, mission-critical use cases.

When you're dealing with multiple different systems that are all working together, even when they're designed to be consumed as an entire unit, there are some cases where you have some weird edge cases, or some design decisions that you might not have made if you were to redo the entire system from scratch. So if you were to just rebuild the entire platform today, primarily focused on the ksqlDB use case, what are some of the things that you think you would do differently?

That's a great question.
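As a rough sketch of the syntax being described, a connector can be declared directly in ksqlDB once its JAR is on the classpath. The connector class, connection details, and topic prefix below are illustrative assumptions, not values from the conversation:

```sql
-- Illustrative ksqlDB statement: run a JDBC source connector in
-- embedded mode, so the ksqlDB servers host the connector themselves.
-- All property values here are hypothetical examples.
CREATE SOURCE CONNECTOR users_reader WITH (
  'connector.class'          = 'io.confluent.connect.jdbc.JdbcSourceConnector',
  'connection.url'           = 'jdbc:postgresql://localhost:5432/app_db',
  'mode'                     = 'incrementing',
  'incrementing.column.name' = 'id',
  'topic.prefix'             = 'jdbc_'
);
```

The same statement works unchanged when ksqlDB is pointed at an external Kafka Connect cluster instead, which is the choice between embedded and isolated modes described above.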
I think one thing we would have done is taken a much harder line about what things look like at the top level. If you read through our documentation, a very fair criticism is that we don't really make a call about how much Kafka you need to know. And so sometimes you see these low-level implementation details about Kafka sort of bubbling up all the way through. A good example is configuring auto.offset.reset, which is the lowest layer in the Kafka client libraries for deciding whether you consume from the beginning or the end of the stream. It's a super important concept, and it actually does need to surface in some form at the very top of ksqlDB, so you can really know which end of the log to consume from, but we actually do it in the most low-level way. And so I think trying to make a call about how to make that much more unified would have been better. You can probably make similar arguments about problems around partitions. Partitions are very, very important to every layer of the stack, but the way that we handle them at each layer can be a bit different and a bit confusing, and those are the challenges that we need to work through. I think we've done a pretty good job so far of making sure that you don't get too much into the weeds when you use ksqlDB, but there are certainly cases here and there where, if we were to reconstruct it from the start, that would be something we keep a very careful eye towards.
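To make the auto.offset.reset example concrete: the raw Kafka consumer property surfaces in ksqlDB almost verbatim as a session property. This is a sketch; the stream name is a made-up example:

```sql
-- auto.offset.reset is a Kafka client configuration that decides whether
-- a consumer with no committed offset starts from the beginning
-- ('earliest') or the end ('latest') of the log.
-- ksqlDB exposes it directly as a session property:
SET 'auto.offset.reset' = 'earliest';

-- Queries in this session now read from the beginning of the topic.
-- 'pageviews' is a hypothetical stream used for illustration:
SELECT * FROM pageviews EMIT CHANGES;
```

This is the "most low-level way" being criticized: the user sets a Kafka client property by its raw name rather than expressing "read from the start of the stream" in the query language itself.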