TL;DR: Kubernetes tools from k14s (ytt, kbld, kapp, kwt), when used together, offer a powerful way to create, customize, iterate on, and deploy cloud native applications. These tools are designed to be used in various workflows, such as local development and production deployment. Each tool is single-purpose and composable, which makes it easier to integrate them into existing or new projects, and to combine them with other tools. Watch TGI Kubernetes 079: YTT and Kapp to see Joe Beda go through this blog post and share his thoughts about these tools.

The k14s (stands for "Kubernetes Tools") GitHub organization (https://github.com/k14s) contains several tools we created as a result of working with complex, multi-purpose tools like Helm. We believe that working with simple, single-purpose tools that easily interoperate with one another results in a better workflow compared to the all-in-one approach chosen by Helm. We have found this approach to be easier to understand and debug.

In this blog post we will focus on the local application development workflow; however, the tools introduced here also work well for other workflows, for example, production GitOps deployments or manual application deploys. We plan to publish additional blog posts for other workflows. Let us know what you are most interested in!

We break down local application development workflow into the following stages:

1. Source code authoring
2. Configuration authoring (e.g. YAML configuration files)
3. Packaging (e.g. Dockerfile)
4. Deployment (e.g. kubectl apply ...)
5. Repeat!

Helm arguably tries to address stages 2, 3, and 4, with configuration, packaging and deployment together in one tool. The community has varied opinions on advantages and disadvantages of using Helm. However, let's explore an alternative approach with tools from k14s.

For each stage, we have open sourced a tool that we believe addresses that stage's challenges (sections below explore each tool in detail):

configuration -> ytt for YAML configuration and templating

packaging -> kbld for building Docker images and recording image references

deployment -> kapp for deploying k8s resources

We'll use k8s-simple-app-example application as our example to showcase how these tools can work together to develop and deploy an application.

Preparation

Before getting too deep, let's get some basic preparations out of the way:

Find a Kubernetes cluster (preferably Minikube as it better fits local development; Docker for Mac/Windows is another good option as it now includes Kubernetes)

Check that the cluster works via kubectl get nodes

Install k14s tools by following instructions on https://k14s.io/

Deploying the application

To get started with our example application, clone k8s-simple-app-example locally:

git clone https://github.com/k14s/k8s-simple-app-example
cd k8s-simple-app-example

This directory contains a simple Go application that consists of app.go (an HTTP web server) and a Dockerfile for packaging. Multiple config-step-* directories contain variations of application configuration that we will use in each step.

ls -la

total 64
drwxr-xr-x  12 argonaut  staff  384 May  8 16:42 .
drwxr-xr-x   9 argonaut  staff  288 May  8 16:54 ..
drwxr-xr-x  12 argonaut  staff  384 May  9 13:54 .git
-rw-r--r--   1 argonaut  staff  241 May  9 13:09 Dockerfile
-rw-r--r--   1 argonaut  staff  360 May  9 13:43 app.go
drwxr-xr-x   3 argonaut  staff   96 May  8 16:24 config-step-1-minimal
...

Typically, an application deployed to Kubernetes will include Deployment and Service resources in its configuration. In our example, config-step-1-minimal/ directory contains config.yml which contains exactly that. (Note that the Docker image is already preset and environment variable HELLO_MSG is hard coded. We'll get to those shortly.)
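For reference, a condensed sketch of that configuration is shown below. The label names and port numbers here are illustrative; config-step-1-minimal/config.yml in the repository holds the authoritative version.

```yaml
# Sketch of config-step-1-minimal/config.yml: a Deployment and a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app
  namespace: default
spec:
  selector:
    matchLabels:
      simple-app: ""
  template:
    metadata:
      labels:
        simple-app: ""
    spec:
      containers:
      - name: simple-app
        # Preset image reference (pinned by digest)
        image: docker.io/dkalinin/k8s-simple-app@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0
        env:
        - name: HELLO_MSG
          value: stranger   # hard coded for now; templated in a later step
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    simple-app: ""
```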

Traditionally, you can use kubectl apply -f config-step-1-minimal/config.yml to deploy this application. However, kubectl (1) does not indicate which resources are affected and how they are affected before applying changes, and (2) does not yet have robust prune functionality to converge a set of resources (GH issue). kapp addresses and improves on several of kubectl's limitations, as it was designed from the start around the notion of a "Kubernetes Application" - a set of resources with the same label:

kapp separates change calculation phase (diff), from change apply phase (apply) to give users visibility and confidence regarding what's about to change in the cluster

kapp tracks and converges resources based on a unique generated label, freeing its users from worrying about cleaning up old deleted resources as the application is updated

kapp orders certain resources so that the Kubernetes API server can successfully process them (e.g., CRDs and namespaces before other resources)

kapp tries to wait for resources to become ready before considering the deploy a success

Let us deploy our application with kapp:

kapp deploy -a simple-app -f config-step-1-minimal/

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  -           -    add      -
~          simple-app  Service     -           -    add      -

2 add, 0 delete, 0 update, 0 keep
2 changes

Continue? [yN]: y

6:11:44PM: --- applying changes
6:11:44PM: add service/simple-app (v1) namespace: default
6:11:44PM: add deployment/simple-app (apps/v1) namespace: default
6:11:44PM: waiting on add service/simple-app (v1) namespace: default
6:11:44PM:  L waiting on endpoints/simple-app (v1) namespace: default ... done
6:11:44PM: waiting on add deployment/simple-app (apps/v1) namespace: default
6:11:44PM:  L waiting on replicaset/simple-app-5fb5ff9bdb (extensions/v1beta1) namespace: default ... done
6:11:44PM:  L waiting on pod/simple-app-5fb5ff9bdb-2clpj (v1) namespace: default ... in progress: Pending
6:11:45PM:
6:11:45PM: --- waiting on 1 changes
6:11:45PM: waiting on add deployment/simple-app (apps/v1) namespace: default
6:11:45PM:  L waiting on replicaset/simple-app-5fb5ff9bdb (apps/v1) namespace: default ... done
6:11:45PM:  L waiting on pod/simple-app-5fb5ff9bdb-2clpj (v1) namespace: default ... done
6:11:45PM: --- changes applied

Succeeded

Our simple-app received a unique label kapp.k14s.io/app=1557433075084066000 for resource tracking:

kapp ls

Apps in namespace 'default'

Name        Label                                 Namespaces  Last Change Successful  Age
simple-app  kapp.k14s.io/app=1557433075084066000  default     true                    1s

1 apps

Succeeded

Using this label, kapp tracks and allows inspection of all Kubernetes resources created for simple-app:

kapp inspect -a simple-app --tree

Resources in app 'simple-app'

Namespace  Name                              Kind        Managed by  Conditions  Age
default    simple-app                        Deployment  kapp        2 OK / 2    4h
default     L simple-app-6f884d8d9d          ReplicaSet  cluster     -           4h
default     L.. simple-app-6f884d8d9d-nn5ds  Pod         cluster     4 OK / 4    4h
default    simple-app                        Service     kapp        -           4h
default     L simple-app                     Endpoints   cluster     -           4h

5 resources

Succeeded

Note that it even knows about resources it did not directly create (such as ReplicaSet and Endpoints).

kapp logs -f -a simple-app

# starting tailing 'simple-app-6f884d8d9d-nn5ds > simple-app' logs
simple-app-6f884d8d9d-nn5ds > simple-app | 2019/05/09 20:43:36 Server started

The inspect and logs commands demonstrate why it's convenient to view resources in "bulk" (via a label). For example, the logs command will tail any existing or new Pod that is part of the simple-app application, even after we make changes and redeploy.

Additional kapp resources:

Accessing the deployed application

Once the application is deployed successfully, you can access it at 127.0.0.1:8080 in your browser with the help of the kubectl port-forward command:

kubectl port-forward svc/simple-app 8080:80

One downside to the kubectl command above: it has to be restarted if the application pod is recreated.

Alternatively, you can use k14s' kwt tool, which exposes cluster IP subnets and cluster DNS to your machine. This way, you can access the application without requiring any restarts.

With kwt installed, run the following command

sudo -E kwt net start

and open http://simple-app.default.svc.cluster.local/ in your browser.

Additional kwt resources:

Deploying configuration changes

Let's make a change to the application configuration to simulate a common occurrence in a development workflow. A simple observable change we can make is to change the value of the HELLO_MSG environment variable in config-step-1-minimal/config.yml:

  - name: HELLO_MSG
-   value: stranger
+   value: somebody

and re-run kapp deploy:

kapp deploy -a simple-app -f config-step-1-minimal/ --diff-changes

--- update deployment/simple-app (apps/v1) namespace: default
  ...
 29, 29       - name: HELLO_MSG
 30     -       value: stranger
     30 +       value: somebody
 31, 31       image: docker.io/dkalinin/k8s-simple-app@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0
 32, 32       name: simple-app

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    22h  mod      -

0 add, 0 delete (13 hidden), 1 update, 0 keep (1 hidden)
1 changes

Continue? [yN]: y

12:09:35PM: --- applying changes
12:09:35PM: update deployment/simple-app (apps/v1) namespace: default
12:09:35PM: waiting on update deployment/simple-app (apps/v1) namespace: default
12:09:35PM:  L waiting on replicaset/simple-app-5694b49489 (apps/v1beta2) namespace: default ... done
12:09:35PM:  L waiting on pod/simple-app-6d6d64dd54-npsnh (v1) namespace: default ... in progress: Pending
12:09:35PM:  L waiting on pod/simple-app-577499b464-874lp (v1) namespace: default ... done
...
12:09:37PM: --- waiting on 1 changes
12:09:37PM: waiting on update deployment/simple-app (apps/v1) namespace: default
12:09:37PM:  L waiting on replicaset/simple-app-5694b49489 (extensions/v1beta1) namespace: default ... done
12:09:37PM:  L waiting on pod/simple-app-6d6d64dd54-npsnh (v1) namespace: default ... done
12:09:37PM:  L waiting on pod/simple-app-577499b464-874lp (v1) namespace: default ... in progress: Deleting
12:09:38PM:
12:09:38PM: --- waiting on 1 changes
12:09:38PM: waiting on update deployment/simple-app (apps/v1) namespace: default
12:09:38PM:  L waiting on replicaset/simple-app-5694b49489 (apps/v1beta2) namespace: default ... done
12:09:38PM:  L waiting on pod/simple-app-6d6d64dd54-npsnh (v1) namespace: default ... done
12:09:38PM: --- changes applied

Succeeded

Above output highlights several kapp features:

kapp detected a single change to the simple-app Deployment by comparing the given local configuration against the live cluster copy

kapp showed changes in a git-style diff via the --diff-changes flag

since the simple-app Service was not changed in any way, it was not "touched" during the apply phase at all

kapp waited for the Pods associated with the Deployment to converge to their ready state before exiting successfully

To double-check that our change was applied, go ahead and refresh your browser window with our deployed application.

Given that kapp does not care where application configuration comes from, you can use it with any other tool that produces k8s configuration, for example, Helm's template command:

helm template my-chart --values values.yml | kapp deploy -a my-app -f- --yes

Templating application configuration

Managing application configuration is a hard problem. As an application matures, its configuration typically needs to be tweaked for different environments and different constraints. This leads to the desire to expose several, hopefully not too many, configuration knobs that can be tweaked at deploy time.

This problem is typically solved in two ways: templating or patching. ytt supports both approaches. In this section we'll see how ytt allows you to template YAML configuration, and in the next section, we'll see how it can patch YAML configuration via overlays.

Unlike many other tools used for templating, ytt takes a different approach to working with YAML files. Instead of interpreting YAML configuration as plain text, it works with YAML structures such as maps, lists, YAML documents, scalars, etc. By doing so, ytt is able to eliminate a lot of problems that plague other tools (character escaping, ambiguity, etc.). Additionally, ytt provides a Python-like language (Starlark) that executes in a hermetic environment, making it friendly, yet more deterministic, compared to using general-purpose languages directly or unfamiliar custom templating languages. Take a look at ytt: The YAML Templating Tool that simplifies complex configuration management for a more detailed introduction.
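As a small taste of that approach, here is an illustrative ytt snippet (not taken from the example repo) that defines a Starlark function returning a YAML fragment and reuses it in two places; the function and resource names are made up for this example:

```yaml
#@ def app_labels():
simple-app: ""
#@ end

apiVersion: v1
kind: Service
metadata:
  name: simple-app
  labels: #@ app_labels()
spec:
  selector: #@ app_labels()
```

Because ytt understands the YAML structure, app_labels() is spliced in as a map value in both places, with no string concatenation or indentation bookkeeping.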

To tie it all together, let's take a look at config-step-2-template/config.yml. You'll immediately notice that YAML comments (#@ ...) store templating metadata within a YAML file, for example:

env:
- name: HELLO_MSG
  value: #@ data.values.hello_msg

The above snippet tells ytt that the HELLO_MSG environment variable value should be set to the value of data.values.hello_msg. The data.values structure comes from the builtin ytt data library that allows us to expose configuration knobs through a separate file, namely config-step-2-template/values.yml. Deployers of simple-app can now decide, for example, what hello message to set without making application code or configuration changes.
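That values file follows the shape below (a sketch consistent with config-step-2-template/values.yml; the #@data/values annotation marks the document as a set of overridable defaults):

```yaml
#@data/values
---
hello_msg: stranger
```

Any value declared here can be overridden at template time, e.g. via the -v hello_msg=... flag used in the next command.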

Let's chain ytt and kapp to deploy an update, and note the -v flag which sets the hello_msg value:

ytt template -f config-step-2-template/ -v hello_msg="k14s user" | kapp deploy -a simple-app -f- --diff-changes --yes

--- update deployment/simple-app (apps/v1) namespace: default
  ...
 29, 29       - name: HELLO_MSG
 30     -       value: somebody
     30 +       value: k14s user
 31, 31       image: docker.io/dkalinin/k8s-simple-app@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0
 32, 32       name: simple-app

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    23h  mod      -

0 add, 0 delete (13 hidden), 1 update, 0 keep (1 hidden)
1 changes

1:08:49PM: --- applying changes
1:08:49PM: update deployment/simple-app (apps/v1) namespace: default
1:08:49PM: waiting on update deployment/simple-app (apps/v1) namespace: default
1:08:49PM:  L waiting on replicaset/simple-app-577499b464 (apps/v1) namespace: default ... done
1:08:49PM:  L waiting on replicaset/simple-app-5694b49489 (apps/v1) namespace: default ... done
1:08:49PM:  L waiting on pod/simple-app-78c59bd9f4-fj5sl (v1) namespace: default ... in progress: Pending
1:08:49PM:  L waiting on pod/simple-app-6d6d64dd54-npsnh (v1) namespace: default ... done
...
1:09:00PM: --- waiting on 1 changes
1:09:00PM: waiting on update deployment/simple-app (apps/v1) namespace: default
1:09:00PM:  L waiting on replicaset/simple-app-577499b464 (apps/v1) namespace: default ... done
1:09:00PM:  L waiting on replicaset/simple-app-5694b49489 (apps/v1) namespace: default ... done
1:09:00PM:  L waiting on pod/simple-app-78c59bd9f4-fj5sl (v1) namespace: default ... done
1:09:00PM:  L waiting on pod/simple-app-6d6d64dd54-npsnh (v1) namespace: default ... in progress: Deleting
1:09:01PM:
1:09:01PM: --- waiting on 1 changes
1:09:01PM: waiting on update deployment/simple-app (apps/v1) namespace: default
1:09:02PM:  L waiting on replicaset/simple-app-577499b464 (apps/v1) namespace: default ... done
1:09:02PM:  L waiting on replicaset/simple-app-5694b49489 (apps/v1) namespace: default ... done
1:09:02PM:  L waiting on pod/simple-app-78c59bd9f4-fj5sl (v1) namespace: default ... done
1:09:02PM: --- changes applied

Succeeded

We covered one simple way to use ytt to help you manage application configuration. Please take a look at examples in ytt interactive playground to learn more about other ytt features which may help you manage YAML configuration more effectively.

Additional ytt resources:

Patching application configuration

ytt also offers another way to customize application configuration. Instead of relying on configuration providers (e.g. authors of k8s-simple-app) to expose a set of configuration knobs, configuration consumers (e.g. users that deploy k8s-simple-app) can use the ytt overlay feature to patch YAML documents with arbitrary changes.

For example, our simple app configuration templates do not make the Deployment's spec.replicas configurable as a data value to control how many Pods are running. Instead of asking the authors of simple app to expose a new data value, we can create an overlay file config-step-2a-overlays/custom-scale.yml that changes spec.replicas to a new value.
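An overlay along these lines does the job (a sketch; consult the repository for the exact file). It matches the Deployment named simple-app and sets spec.replicas; the missing_ok annotation allows the key to be added even though the original document does not have it:

```yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind": "Deployment", "metadata": {"name": "simple-app"}})
---
spec:
  #@overlay/match missing_ok=True
  replicas: 3
```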

ytt template -f config-step-2-template/ -f config-step-2a-overlays/custom-scale.yml -v hello_msg="k14s user" | kapp deploy -a simple-app -f- --diff-changes --yes

--- update deployment/simple-app (apps/v1) namespace: default
  ...
 15, 15     spec:
     16 +     replicas: 3
 16, 17     selector:
 17, 18       matchLabels:

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    1d   mod      -

0 add, 0 delete (13 hidden), 1 update, 0 keep (1 hidden)
1 changes
...

Additional resources:

Building container images locally

Kubernetes embraced the use of container images to package source code and its dependencies. One way to deliver an updated application is to rebuild its container image when the source code changes. kbld is a small tool that provides a simple way to insert container image building into the deployment workflow. kbld looks for images within application configuration (currently it looks for image keys), checks whether there is associated source code, and if so builds these images via Docker (this could be made pluggable with other builders), and finally captures the built image digests and updates the configuration with the new references.

Before running kbld, let's change app.go by uncommenting fmt.Fprintf(w, "<p>local change</p>") to make a small change in our application.

config-step-3-build-local/build.yml is a new file in this config directory, which specifies that docker.io/dkalinin/k8s-simple-app should be built from the current working directory where kbld runs (root of the repo).
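Its contents follow the shape of kbld's Sources configuration; this is a sketch of what such a file could look like (check the file in the repository for the authoritative version):

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Sources
sources:
- image: docker.io/dkalinin/k8s-simple-app
  # build context: the directory kbld is run from (root of the repo)
  path: .
```

When kbld encounters the docker.io/dkalinin/k8s-simple-app image key elsewhere in the piped configuration, this mapping tells it to build that image from the given path.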

If you are using Minikube, make sure kbld has access to the Docker CLI by running eval $(minikube docker-env). If you are using Docker for Mac (or a related product that comes with Docker and Kubernetes), make sure that docker ps succeeds. If you do not have such a local environment (e.g. you are running against a remote cluster, even if you have a local Docker daemon), read on, but you may have to wait until the next section where we show how to use a remote registry.

Let's insert kbld between ytt and kapp so that images used in our configuration are built before they are deployed by kapp:

ytt template -f config-step-3-build-local/ -v hello_msg="k14s user" | kbld -f- | kapp deploy -a simple-app -f- --diff-changes --yes

docker.io/dkalinin/k8s-simple-app | starting build (using Docker): . -> kbld:1557534242219453000-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Sending build context to Docker daemon  223.7kB
docker.io/dkalinin/k8s-simple-app | Step 1/8 : FROM golang:1.12
docker.io/dkalinin/k8s-simple-app |  ---> 7ced090ee82e
docker.io/dkalinin/k8s-simple-app | Step 2/8 : WORKDIR /go/src/github.com/k14s/k8s-simple-app-example/
docker.io/dkalinin/k8s-simple-app |  ---> Using cache
docker.io/dkalinin/k8s-simple-app |  ---> cfcc02b178b8
docker.io/dkalinin/k8s-simple-app | Step 3/8 : COPY . .
docker.io/dkalinin/k8s-simple-app |  ---> 7c5f468a7d66
docker.io/dkalinin/k8s-simple-app | Step 4/8 : RUN CGO_ENABLED=0 GOOS=linux go build -v -o app
docker.io/dkalinin/k8s-simple-app |  ---> Running in 5e21b3183646
docker.io/dkalinin/k8s-simple-app | net
docker.io/dkalinin/k8s-simple-app | net/textproto
docker.io/dkalinin/k8s-simple-app | crypto/x509
docker.io/dkalinin/k8s-simple-app | internal/x/net/http/httpguts
docker.io/dkalinin/k8s-simple-app | internal/x/net/http/httpproxy
docker.io/dkalinin/k8s-simple-app | crypto/tls
docker.io/dkalinin/k8s-simple-app | net/http/httptrace
docker.io/dkalinin/k8s-simple-app | net/http
docker.io/dkalinin/k8s-simple-app | github.com/k14s/k8s-simple-app-example
docker.io/dkalinin/k8s-simple-app | Removing intermediate container 5e21b3183646
docker.io/dkalinin/k8s-simple-app |  ---> bd702bf4a0c4
docker.io/dkalinin/k8s-simple-app | Step 5/8 : FROM scratch
docker.io/dkalinin/k8s-simple-app |  --->
docker.io/dkalinin/k8s-simple-app | Step 6/8 : COPY --from=0 /go/src/github.com/k14s/k8s-simple-app-example/app .
docker.io/dkalinin/k8s-simple-app |  ---> Using cache
docker.io/dkalinin/k8s-simple-app |  ---> 753b71824c31
docker.io/dkalinin/k8s-simple-app | Step 7/8 : EXPOSE 80
docker.io/dkalinin/k8s-simple-app |  ---> Using cache
docker.io/dkalinin/k8s-simple-app |  ---> 3c5e4cdbdc38
docker.io/dkalinin/k8s-simple-app | Step 8/8 : ENTRYPOINT ["/app"]
docker.io/dkalinin/k8s-simple-app |  ---> Using cache
docker.io/dkalinin/k8s-simple-app |  ---> f999be3e0d96
docker.io/dkalinin/k8s-simple-app | Successfully built f999be3e0d96
docker.io/dkalinin/k8s-simple-app | Successfully tagged kbld:1557534242219453000-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | Untagged: kbld:1557534242219453000-docker-io-dkalinin-k8s-simple-app
docker.io/dkalinin/k8s-simple-app | finished build (using Docker)
resolve | final: docker.io/dkalinin/k8s-simple-app -> kbld:docker-io-dkalinin-k8s-simple-app-sha256-f999be3e0d96c78dc4d4c8330c8de8aff3c91f5e152f021d01cb3cd0e92a1797

--- update deployment/simple-app (apps/v1) namespace: default
  ...
 30, 30       value: k14s user
 31     -     image: docker.io/dkalinin/k8s-simple-app@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0
     31 +     image: kbld:docker-io-dkalinin-k8s-simple-app-sha256-f999be3e0d96c78dc4d4c8330c8de8aff3c91f5e152f021d01cb3cd0e92a1797
 32, 32       name: simple-app
 33, 33     status:

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    1d   mod      -

0 add, 0 delete (13 hidden), 1 update, 0 keep (1 hidden)
1 changes
...

As you can see, the above output shows that kbld received ytt's produced configuration, used the docker build command to build the simple app image, and ultimately captured a specific image reference and passed it on to kapp.

Once the deploy is successful, check out the application in your browser; it should have an updated response.

It's also worth showing that kbld not only builds images and updates references, but also annotates Kubernetes resources with the image metadata it collects, making it quickly accessible for debugging. This may not be that useful during development, but it comes in handy when investigating the state of an environment (staging, production, etc.).

kapp inspect -a simple-app --raw --filter-kind Deployment | kbld inspect -f-

Images

Image     kbld:docker-io-dkalinin-k8s-simple-app-sha256-f999be3e0d96c78dc4d4c8330c8de8aff3c91f5e152f021d01cb3cd0e92a1797
Metadata  - Path: /Users/pivotal/workspace/k14s-go/src/github.com/k14s/k8s-simple-app-example
            Type: local
          - Dirty: false
            RemoteURL: git@github.com:k14s/k8s-simple-app-example
            SHA: e877718521f7ccea0ab0844db0f86fe123a8d8ef
            Type: git
Resource  deployment/simple-app (apps/v1) namespace: default

1 images

Succeeded

Additional resources:

Building and pushing container images to a registry

The above section showed how to use kbld with a local cluster that's backed by a local Docker daemon. No remote registry was involved; however, for a production environment, or in the absence of a local environment, you will need to instruct kbld to push built images to a registry accessible to your cluster.

config-step-4-build-and-push/build.yml specifies that docker.io/dkalinin/k8s-simple-app should be pushed to a repository as specified by the push_images_repo data value.
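The push side of that configuration uses kbld's ImageDestinations kind. A simplified sketch is shown below; the actual file in the repository wires newImage to the push_images_repo data value via ytt templating rather than hard-coding it:

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: ImageDestinations
destinations:
- image: docker.io/dkalinin/k8s-simple-app
  # where the built image should be pushed (set via push_images_repo in the repo)
  newImage: docker.io/your-username/your-repo
```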

Before continuing on, make sure that your Docker daemon is authenticated to the registry where the image will be pushed, via the docker login command.

ytt template -f config-step-4-build-and-push/ -v hello_msg="k14s user" -v push_images=true -v push_images_repo=docker.io/your-username/your-repo | kbld -f- | kapp deploy -a simple-app -f- --diff-changes --yes

...
docker.io/dkalinin/k8s-simple-app | starting push (using Docker): kbld:docker-io-dkalinin-k8s-simple-app-sha256-268c33c1257eed727937fb22a68b91f065bf1e10c7ba23c5d897f2a2ab67f76d -> docker.io/dkalinin/k8s-simple-app
docker.io/dkalinin/k8s-simple-app | The push refers to repository [docker.io/dkalinin/k8s-simple-app]
docker.io/dkalinin/k8s-simple-app | 2c82b4929a5c: Preparing
docker.io/dkalinin/k8s-simple-app | 2c82b4929a5c: Layer already exists
docker.io/dkalinin/k8s-simple-app | latest: digest: sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0 size: 528
docker.io/dkalinin/k8s-simple-app | finished push (using Docker)
resolve | final: docker.io/dkalinin/k8s-simple-app -> index.docker.io/dkalinin/k8s-simple-app@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0

--- update deployment/simple-app (apps/v1) namespace: default
  ...
 30, 30       value: k14s user
 31     -     image: kbld:docker-io-dkalinin-k8s-simple-app-sha256-f999be3e0d96c78dc4d4c8330c8de8aff3c91f5e152f021d01cb3cd0e92a1797
     31 +     image: index.docker.io/your-username/your-repo@sha256:4c8b96d4fffdfae29258d94a22ae4ad1fe36139d47288b8960d9958d1e63a9d0
 32, 32       name: simple-app
 33, 33     status:

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    1d   mod      -

0 add, 0 delete (13 hidden), 1 update, 0 keep (1 hidden)
1 changes
...

As a benefit of using kbld, you will see that an image digest reference (e.g. index.docker.io/your-username/your-repo@sha256:4c8b96...) was used instead of a tagged reference (e.g. kbld:docker-io...). Digest references are preferred to other image reference forms as they are immutable, hence providing a guarantee that the exact version of the built software will be deployed.

Clean up cluster resources

Given that kapp tracks all resources that were deployed to the cluster, deleting them is as easy as running the kapp delete command:

kapp delete -a simple-app

Changes

Namespace  Name        Kind        Conditions  Age  Changed  Ignored  Reason
default    simple-app  Deployment  2 OK / 2    1d   del      -
~          simple-app  Service     -           1d   del      -

0 add, 2 delete (13 hidden), 0 update, 0 keep
2 changes

Continue? [yN]: y
...

Summary

We've seen how ytt, kbld, kapp, and kwt can be used together (ytt ... | kbld -f- | kapp deploy ...) to deploy and iterate on an application running on Kubernetes. Each of these tools has been designed to be single-purpose and composable with other tools from the k14s org and the larger k8s ecosystem.

We are eager to hear your thoughts and feedback in #k14s in Kubernetes Slack and/or via GitHub issues and PRs (https://github.com/k14s/) for each project. Don't hesitate to reach out!

Authors

Dmitriy Kalinin is a Software Engineer at Pivotal working on Kubernetes and Cloud Foundry projects. (@dmitriykalinin on twitter)

Nima Kaviani is a Senior Cloud Engineer with IBM. Nima has been a contributor to Cloud Foundry, Kubernetes, and Knative open source projects. He holds a PhD in Computer Science and tweets and blogs about distributed systems, life, and technology in general. (@nimak on twitter)