When is a container serverless? That question might sound like troll bait for infrastructure geeks, but Microsoft’s Azure Container Instances, now generally available, is a blend of two prominent trends in cloud-native software development.

Azure Container Instances was first announced last July alongside Microsoft’s decision to join the Cloud Native Computing Foundation, and after some customer feedback it’s ready for everyone to try. ACI allows Azure customers to enjoy two of the benefits of “serverless” computing — invisible infrastructure and per-second billing — for their containerized applications, said Gabe Monroy, principal product manager for containers on Azure.

“It’s not very often that we get to see the introduction of a new compute primitive in the public cloud,” Monroy said. He’s referring to how ACI runs independently of the underlying hardware infrastructure, in that customers don’t have to specify where they’d like to run their containers or spin up a virtual machine to get going.

There has been a surge of interest in containers over the last few years, in part because they allow companies to get much more out of their existing infrastructure investment, much as virtual machines did a decade ago. But until recently, if you wanted to run containers in the public cloud you still had to designate the virtual machines on which those containers would run, and virtual machines are slow to start and billed in per-hour increments whether you need the full hour or not.

ACI lets you get apps up and running in 10 to 20 seconds because you don’t have to boot a virtual machine, Monroy said. And once the app has run its course it can be shut down just as quickly, meaning you only pay Microsoft for the time it was active. This makes it ideal for “burst” workloads, such as when an Instagram celebrity posts a link to your app or service and the crowd starts filing in.
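As a rough sketch of the workflow Monroy describes, a container instance can be created and torn down with a couple of Azure CLI commands. The resource group and container names below are placeholders, and the hello-world image is just an example public image:

```shell
# Create a container instance directly from an image; no VM to provision first.
# "my-rg" and "hello-aci" are placeholder names for this sketch.
az container create \
  --resource-group my-rg \
  --name hello-aci \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --cpu 1 --memory 1.5 \
  --ports 80 --ip-address Public

# Check the container's state; billing runs per second while it is active.
az container show --resource-group my-rg --name hello-aci \
  --query instanceView.state

# Tear it down when the burst subsides, and the meter stops.
az container delete --resource-group my-rg --name hello-aci --yes
```

Note the `--cpu` and `--memory` flags: you size the container itself rather than picking a VM to host it.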

This isn’t “serverless” in the truest sense, insofar as any enterprise tech buzzword has actual meaning. There has been an equal if not stronger surge of interest in serverless computing over the last year or so; it offers the same per-second billing and invisible infrastructure, but it also asks developers to write apps in a very different fashion, geared around events and functions.

With ACI, you still write applications for containers the same way you would for any container service, but you get to skip the step where you provision hardware for your apps. This matters because your app can run into problems if you don’t assign enough hardware resources amid unexpected demand, or you can waste money if you provision too much hardware and demand flops.

ACI is akin to Amazon Web Services’ Fargate, announced last year at AWS re:Invent 2017 and available in its U.S. East region. Like ACI, Fargate removes a layer of complexity from the overall process of running containers in the cloud.

Microsoft’s version also supports Kubernetes through the Virtual Kubelet announced at KubeCon last December in Austin, Texas, while AWS plans to bring its managed Kubernetes service together with Fargate later in 2018.
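To give a sense of how the Virtual Kubelet fits in: it registers with a cluster as an ordinary-looking node that is actually backed by ACI, and pods are steered onto it with a node selector and toleration. This is a minimal sketch following the virtual-kubelet project’s conventions; the exact labels and taint key can differ by setup:

```shell
# Schedule a pod onto the ACI-backed virtual kubelet node.
# Pod name and image are placeholders for this sketch.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: aci-burst-pod
spec:
  containers:
  - name: web
    image: nginx
  # Target the virtual kubelet node rather than a regular VM-backed node.
  nodeSelector:
    type: virtual-kubelet
  # The virtual kubelet node is typically tainted so pods land on it
  # only when they explicitly tolerate it.
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
EOF
```

From the cluster’s point of view this is a normal pod; the virtual kubelet translates it into an ACI container group behind the scenes.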

There remains a broader discussion about the best way to future-proof your software development strategies in 2018, assuming such a thing is even possible.

Containers promise portability, but managing them at scale still takes a lot of skill. Serverless computing, meaning functions-as-a-service, is a whole new lightweight way of thinking about software development at scale that could also lock you into your cloud provider to a degree IBM and Oracle would recognize from back in the day.

If you’ve already invested in containers, tools like ACI and Fargate make a lot of sense. But if you’re just now starting to contemplate containers, there’s a growing drumbeat that you might be better off skipping containers entirely and investing now in serverless computing. In a few years we’ll find out who was right.

(Editor’s note: This post was updated to clarify how AWS Fargate works.)