The Real Problem: Cloud-Based Development

Many companies already use cloud-based development, i.e. they give developers access to the cloud to run and test their software. There are two reasons for this: Sometimes, they simply have to because they need cloud computing resources that are not available on local machines. This is especially the case with larger applications, microservice architectures, or applications that require GPUs, such as machine learning apps. The second reason is that businesses actively want to use cloud development, e.g. because it is easier to maintain in teams or because developers without Kubernetes knowledge find it easier to use and can thus work more productively.

As described in my other post, there are two ways to give developers access to a Kubernetes cluster: giving each developer their own cluster or sharing one cluster among many developers. Giving each developer their own cluster has several disadvantages. One of them is inefficient resource usage: the core Kubernetes components have to run many times in parallel and cannot be shared, which wastes resources. Sharing a cluster is therefore more efficient in terms of computing resources, which is why I assume this approach in the following.

However, even a single shared cluster can require quite a lot of resources, especially if multiple developers develop complex software on it in parallel. There are different ways of dealing with this issue:

Approaches to Deal With High Cloud Computing Cost For Development

1. Just Pay For It

The first solution is to simply pay the cost of the cloud resources. This sounds trivial, but it can still be a good option in some cases. These include cases where you have a lot of cloud credits from the public cloud providers (in some cases you can get $100,000 or even more), so you do not actually “pay” for it. These credits are often available for startups, but even companies without credits may choose to simply pay the price without optimizing anything if other issues are more important. For example, imagine a startup that wants to release a new product as quickly as possible and thus does not want to deal with anything besides product development. Just paying for the cloud resources also makes sense if the associated cost is not very high in absolute terms, so an investment in optimization would not pay off.

However, this simplistic approach is obviously not optimized in any way and can become very expensive in the long run.

Advantages, Disadvantages and Use Cases for the “Just-Pay-For-It”-Approach

2. Establish Resource Limits

The second approach is to limit resource usage, either through technical limits or through rules that forbid using more than a pre-specified amount of resources. This sets a maximum cap for your cost, so you can be sure that no more than this amount is spent every month. Technical enforcement of resource limits can be achieved by connecting your Kubernetes cluster to DevSpace Cloud, where you can centrally set limits for individual users and namespaces in a GUI.
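To illustrate what a technical limit can look like at the Kubernetes level, here is a minimal sketch using a plain ResourceQuota object; the namespace name and all amounts are hypothetical example values, and DevSpace Cloud manages such limits for you centrally instead of requiring hand-written YAML:

```shell
# Cap CPU, memory and pod count for one developer's namespace
# ("dev-alice", "dev-quota" and all amounts are example values)
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev-alice
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
EOF

# Check current usage against the quota
kubectl get resourcequota dev-quota -n dev-alice
```

Once such a quota exists, the Kubernetes API server rejects any pod that would exceed the caps, which is exactly the kind of hard limit this approach relies on.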

With this approach, it is important to set the limit to the right amount: otherwise you could either spend too much or slow down the development flow. If you only set rules for how developers should handle the cloud resources, you also need to make sure that everybody in the team abides by these rules, as they are useless otherwise.

While this solution can save you some cost and caps your cloud bill, it is still not optimized. In some situations, more resources might be needed than allowed, e.g. to test a new feature, while in other situations, tasks could have been executed with fewer resources than the limit allows.

Overall, this approach is a good starting point: it is easy to implement and has the main advantage that your total costs become predictable. Your workflows are still not optimized, but it can be combined with one of the next two solutions.

Advantages, Disadvantages and Use Cases for the “Resource Limits”-Approach

DevSpace Cloud is available as a SaaS solution or on-premise. In either case, you simply connect your Kubernetes cluster and can invite additional users to the cluster, whose permissions and limits can be set in a graphical UI.

3. Shut Off Namespaces Manually

A third approach is to instruct all developers to delete or scale down their namespaces as soon as they no longer need them and to restart or scale them up when they resume their work. This can save a lot of cost because most of the time, e.g. at night, on weekends, on holidays or during meetings, the computing resources are not needed but you still have to pay for them. Thanks to containers and Kubernetes, it is usually possible to delete containers during development and restart them later. With this approach, your cost savings can be significant, and at the same time, it does not limit your developers in any way, as they can still scale up as far as they want when needed.
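A sketch of what this manual routine can look like with plain kubectl (the namespace name is a hypothetical example):

```shell
# End of day: scale all Deployments and StatefulSets
# in the developer's namespace to zero
kubectl scale deployment --all --replicas=0 -n dev-alice
kubectl scale statefulset --all --replicas=0 -n dev-alice

# Next morning: scale everything back up to resume work
kubectl scale deployment --all --replicas=1 -n dev-alice
kubectl scale statefulset --all --replicas=1 -n dev-alice
```

Note that scaling everything back to 1 replica discards the original replica counts; in practice, developers would have to remember (or script) the previous values, which is one more reason why this manual approach is error-prone.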

However, this solution also comes with some disadvantages: First, restarting the application after every shut-off costs some time. Second, this solution relies on the developers actually doing it. Unfortunately, scaling down and shutting off resources is a task that is easily forgotten, especially when sudden events happen, such as a spontaneous meeting before the weekend starts. It can also be seen as an “annoying” task, as the developers do not see any direct benefit from doing it; instead, it makes them wait when they want to continue working. Therefore, it might make sense to incentivize them to do this task reliably, which again means additional effort for the management.

Advantages, Disadvantages and Use Cases for the “Shut Off Namespaces”-Approach

4. Automatically Pause Namespaces

Related to the third solution is the approach of automatically pausing namespaces after some time of inactivity. This can be done with DevSpace Cloud: It will automatically scale Kubernetes ReplicaSets, Deployments and StatefulSets to zero, set the pod count in the resource quota to 0 and kill all existing pods in the namespace.
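Expressed as manual kubectl steps, the pausing described above roughly corresponds to the following sketch (namespace and quota names are hypothetical example values; DevSpace Cloud performs these steps automatically on the server side):

```shell
# Scale all workload controllers in the namespace to zero
kubectl scale deployment --all --replicas=0 -n dev-alice
kubectl scale statefulset --all --replicas=0 -n dev-alice
kubectl scale replicaset --all --replicas=0 -n dev-alice

# Set the quota's pod count to 0 so no new pods can start
kubectl patch resourcequota dev-quota -n dev-alice \
  --type merge -p '{"spec":{"hard":{"pods":"0"}}}'

# Remove any pods that are still running
kubectl delete pods --all -n dev-alice
```

The advantage of the automated sleep mode is that none of these steps have to be run by hand, and they are reverted automatically as soon as the developer becomes active again.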

It is possible to configure the period of inactivity after which a namespace is paused, so you can individually decide if namespaces should already pause during shorter breaks and meetings or only over weekends and holidays. This configuration can be set at the individual namespace, user or cluster level. As long as the user runs commands such as devspace dev, devspace deploy, devspace logs or devspace enter, they signal to DevSpace Cloud that the namespace is still active, so it will not go into sleep mode. When the developer starts working again with a paused namespace, the namespace automatically resumes.

If you configure this sleep mode appropriately, it becomes a “smart” approach to reducing your cloud computing cost that takes the characteristics of your application and the work patterns of individual developers into account. As a result, you can save up to 70% of cloud cost without limiting developers in their work. It is also fairly easy to implement, as the developers do not need to adapt their normal workflows: the cluster just has to be connected to DevSpace Cloud and the sleep mode has to be configured once in its graphical user interface.