K8s is just an abbreviation of Kubernetes ("K", followed by the 8 letters "ubernete", followed by "s"). However, when people talk about Kubernetes or K8s, they normally mean the original upstream project, designed by Google as a highly available and massively scalable platform.

For example, there's a video on YouTube of a Kubernetes cluster handling a zero-downtime update while serving 10 million requests per second.

The problem is that while you can run Kubernetes on your local developer machine with Minikube, if you're going to run it in production you very quickly get into the realm of "best practices", with advice like:

- Separate your masters from your nodes: your masters run the control plane and your nodes run your workloads, and never the twain shall meet.
- Run etcd (the database for your Kubernetes state) on a separate cluster to ensure it can handle the load.
- Ideally, have separate Ingress nodes so they can handle incoming traffic easily, even if some of the underlying nodes are slammed busy.
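As a rough sketch of what the "separate masters" advice looks like in practice: a highly available control plane is typically bootstrapped with kubeadm behind a shared, load-balanced endpoint. The hostname `cluster.example.com` below is hypothetical, and this is an illustrative outline rather than a complete runbook:

```shell
# On the first master: initialise the control plane behind a stable,
# load-balanced endpoint (hypothetical DNS name) so masters can be
# added or replaced without clients noticing.
kubeadm init \
  --control-plane-endpoint "cluster.example.com:6443" \
  --upload-certs

# On each additional master: join as a control-plane node.
# (Token and hashes come from the output of the init command above.)
kubeadm join cluster.example.com:6443 \
  --control-plane \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Workers join the same endpoint, but WITHOUT --control-plane,
# keeping the control plane and the workload separate.
kubeadm join cluster.example.com:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

The key point is the stable `--control-plane-endpoint`: it is what lets the masters be treated as a replaceable group rather than individual pets.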

Very quickly this can get you to 3 x K8s masters, 3 x etcd and 2 x Ingress, plus your worker nodes. So a realistic minimum of 8 medium instances before you even get to "how many nodes do I need for my site?".

Don't misunderstand us: if you're running a production workload, this is VERY sane advice. There's nothing worse than trying to debug an overloaded, downed production cluster late on a Friday night!

However, if you just want to learn Kubernetes, or host a development/staging cluster for non-essential things, it feels like overkill, right? At least it does to us. If I want to fire up a cluster just to check that my Kubernetes manifests (the configuration for deployments, etc.) are correct, I'd rather not incur a cost of over a hundred dollars per month to do it.
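For that "are my manifests correct?" case, a minimal local workflow might look like the following sketch, assuming Minikube and kubectl are installed and the manifests live in a hypothetical `./manifests/` directory:

```shell
# Spin up a throwaway single-node cluster on the local machine.
minikube start

# Validate the manifests client-side, without creating anything:
# --dry-run=client parses and checks the objects locally.
kubectl apply --dry-run=client -f ./manifests/

# Or go one step further and ask the API server to validate them
# (still without persisting any changes).
kubectl apply --dry-run=server -f ./manifests/

# Tear the cluster down when done.
minikube delete
```

This gets you a syntax and schema check for pennies of electricity rather than a month of cloud instances, at the obvious cost of not exercising any of the HA behaviour described above.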