This may be a bit long and contains some oversimplifications, but it should be sufficient to get the idea across.

Physical machines

Some time ago, the best way to deploy a simple application was to buy a new server, install your favorite operating system on it, and run the application there.

The cons of this model are:

The processes may interfere with each other (because they share CPU and file system resources), and one may affect the other's performance.

Scaling such a system up or down is difficult as well; setting up a new physical machine takes a lot of effort and time.

There may be differences in the hardware specifications, OS/kernel versions and software package versions of the physical machines, which make it difficult to manage these application instances in a hardware-agnostic manner.

Applications, being directly affected by the physical machine's specifications, may need specific tweaking, recompilation, etc., which means that the cluster administrator has to think of them as instances at the individual-machine level. Hence, this approach does not scale. These properties make it undesirable for deploying modern production applications.

Virtual Machines

Virtual machines solve some of the problems of the above:

They provide isolation even while running on the same machine.

They provide a standard execution environment (the guest OS) irrespective of the underlying hardware.

They can be brought up on a different machine (replicated) quite quickly when scaling (order of minutes).

Applications typically do not need to be rearchitected for moving from physical hardware to virtual machines.

But they introduce some problems of their own:

They consume large amounts of resources in running an entire instance of an operating system.

They may not start/go down as fast as we want them to (order of seconds).

Even with hardware assisted virtualization, application instances may see significant performance degradation over an application running directly on the host. (This may be an issue only for certain kinds of applications)

Packaging and distributing VM images is not as simple as it could be. (This is not as much a drawback of the approach, as it is of the existing tooling for virtualization.)

Containers

Then, somewhere along the line, cgroups (control groups) were added to the Linux kernel. Together with namespaces, this lets us isolate groups of processes, control what other processes and which parts of the file system they can see (namespaces), and limit and account for the resources they consume at the group level (cgroups).
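As a rough sketch, on a system with cgroup v2 mounted at /sys/fs/cgroup, a group is just a directory, and limits and accounting are plain files. The group name and limit below are illustrative, and most of these commands require root:

```shell
# Create a new cgroup (requires root; cgroup v2 assumed).
mkdir /sys/fs/cgroup/demo

# Cap the group's memory usage at 256 MiB.
echo 268435456 > /sys/fs/cgroup/demo/memory.max

# Move the current shell (and its future children) into the group.
echo $$ > /sys/fs/cgroup/demo/cgroup.procs

# Resource accounting is now available per group.
cat /sys/fs/cgroup/demo/memory.current
```

Container runtimes automate exactly this kind of bookkeeping (plus namespaces) so that users never have to touch these files directly.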

Various container runtimes and engines came along that make it very easy to create a "container": an environment within the OS, like a namespace, with limited visibility, resources, etc. Common examples include Docker, rkt, runc and LXC.

Docker, for example, includes a daemon that provides operations like creating an "image", a reusable entity that can be launched as a container almost instantly. It also lets one manage individual containers in an intuitive way.
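For illustration, a minimal hypothetical Dockerfile that packages a static binary into a reusable image might look like this (the binary name and base image are assumptions, not from any particular project):

```dockerfile
# Start from a small base image.
FROM alpine:3.19

# Copy our (hypothetical) application binary into the image.
COPY myapp /usr/local/bin/myapp

# The command run when a container is launched from this image.
CMD ["/usr/local/bin/myapp"]
```

Building and launching are then one command each: docker build -t myapp . followed by docker run myapp.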

The advantages of containers:

They are lightweight and run with very little overhead, as they do not need their own instance of the kernel/OS and run on top of a single host OS.

They offer some degree of isolation between the various containers and the ability to impose limits on various resources consumed by them (using the cgroup mechanism).

The tooling around them has evolved rapidly to allow easy building of reusable units (images), repositories for storing image revisions (container registries) and so on, largely due to docker.
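As a sketch, sharing an image revision through a registry amounts to tagging and pushing it; the registry host and image name below are hypothetical:

```shell
# Build the image and tag this revision.
docker build -t registry.example.com/team/myapp:1.2.0 .

# Push the revision to the registry...
docker push registry.example.com/team/myapp:1.2.0

# ...so another machine can pull and run the exact same revision.
docker pull registry.example.com/team/myapp:1.2.0
```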

It is encouraged that a single container run a single application process, so that each process can be maintained and distributed independently. The lightweight nature of containers makes this practical, and the resulting decoupling leads to faster development.

There are some cons as well:

The level of isolation provided is less than that of VMs.

They are easiest to use with stateless 12-factor applications built afresh; deploying legacy applications, clustered or distributed databases and the like takes noticeably more effort.

They need orchestration and higher level primitives to be used effectively and at scale.

Container Orchestration

When running applications in production, complexity grows: an application tends to have many different components, some of which need to scale up or down independently. Containers by themselves do not solve all our problems. We need a system that addresses the problems of real, large-scale deployments, such as:

Networking between containers

Load balancing

Managing storage attached to these containers

Updating containers, scaling them, spreading them across nodes in a multi-node cluster and so on.

When we want to manage a cluster of containers, we use a container orchestration engine; examples include Kubernetes, Mesos and Docker Swarm. They provide a host of functionality beyond the features listed above, and the goal is to reduce the effort involved in operations.
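To make this concrete, here is a minimal Kubernetes Deployment sketch. It asks the orchestrator to keep three replicas of an image running, spreading and rescheduling them across the cluster's nodes; the names and image reference are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                 # the orchestrator keeps 3 containers running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/team/myapp:1.2.0   # hypothetical image
        resources:
          limits:
            memory: 256Mi     # enforced via cgroups on the node
```

Scaling is then a declarative change (kubectl scale deployment myapp --replicas=5), and the engine handles placement, restarts and rolling updates.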

GKE (Google Container Engine) is hosted Kubernetes on Google Cloud Platform. It lets a user simply specify that they need an n-node Kubernetes cluster and exposes the cluster itself as a managed instance. Kubernetes is open source, so if one wanted to, one could also set it up on Google Compute Engine, on a different cloud provider, or on machines in one's own data center.

ECS is a proprietary container management/orchestration system built and operated by Amazon and available as part of the AWS suite.