In a previous article, I gave a brief overview of the requirements and implementation of dynamic storage increase for Apache Kafka instances.

What about VM scaling? Is there a way to check whether we are sizing our instances properly?

Even though many of us are migrating services to Kubernetes, applications that are critical to data consistency still have a long way to go. StatefulSets are a good and long-awaited addition to K8s, but there is still not enough operational experience with them for Kafka deployments.

If we are stuck using VMs for a period of time, we should optimize resource allocation as best as we can.

The first step in this process is gathering suggestions on what we can improve, and this can be done easily with the Recommender API.

Let me guide you through a small setup to get you started.

The first thing you will need to define is, of course, a role. Because security is critical, especially in the cloud, the best approach is to create the role with only these permissions:

recommender.computeInstanceMachineTypeRecommendations.list

resourcemanager.projects.get

serviceusage.services.use

compute.instances.list

compute.zones.list

Once the role is defined, also create a service account bound to it.
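For reference, if you manage IAM with the gcloud CLI, a custom role like this can be described in a YAML file and created with a command along the lines of `gcloud iam roles create recommenderReader --project=<your-project> --file=role.yaml` (the role ID and file name here are just illustrative):

```yaml
# role.yaml - minimal custom role for the recommendation script
title: "Recommender Reader"
description: "Read-only access for machine type recommendations"
stage: "GA"
includedPermissions:
- recommender.computeInstanceMachineTypeRecommendations.list
- resourcemanager.projects.get
- serviceusage.services.use
- compute.instances.list
- compute.zones.list
```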

With the prerequisites on the cloud side checked, please also make sure that you have the following Python packages installed: google-api-python-client and google-cloud-recommender.

Once those are also covered, we can start taking a look at the Python script.

```python
import mmap
import sys

from google.cloud.recommender_v1beta1 import RecommenderClient
from google.oauth2 import service_account
from googleapiclient import discovery
from prettytable import PrettyTable

if len(sys.argv) < 2:
    print("The script needs the service account file as parameter")
    sys.exit()
else:
    service_account_file = sys.argv[1]

credentials = service_account.Credentials.from_service_account_file(service_account_file)


def get_project_id(service_account_file):
    # Scan the service account JSON file for the project_id field
    with open(service_account_file) as service_account_ref:
        s = mmap.mmap(service_account_ref.fileno(), 0, access=mmap.ACCESS_READ)
        for line in iter(s.readline, b""):
            if b"project_id" in line:
                project_name = str(line).split('"')[3]
                return project_name


def get_project_zone(project_name):
    # List all zones of the project, then keep only those that contain VMs
    project_zone, project_zone_buffer, all_project_zone = [], [], []
    service = discovery.build('compute', 'v1', credentials=credentials)
    request = service.zones().list(project=project_name)
    while request is not None:
        response = request.execute()
        for zone in response['items']:
            project_zone_buffer.append(zone)
        request = service.zones().list_next(previous_request=request, previous_response=response)
    for i in range(len(project_zone_buffer)):
        all_project_zone.append(project_zone_buffer[i]['name'])
    for element in all_project_zone:
        if check_zone(project_name, element):
            project_zone.append(element)
    return project_zone


def check_zone(project_name, zone):
    # Return True if the zone contains at least one VM instance
    service = discovery.build('compute', 'v1', credentials=credentials)
    request = service.instances().list(project=project_name, zone=zone)
    while request is not None:
        response = request.execute()
        if 'items' in response.keys():
            return True
        else:
            return False


def main():
    recommender_table = PrettyTable(['Recommendation', "Instance", "Costs", "Sort"])
    recommender_table.align["Recommendation"] = "l"
    recommender_table.align["Instance"] = "l"
    project = get_project_id(service_account_file)
    locations = get_project_zone(project)
    recommender = 'google.compute.instance.MachineTypeRecommender'
    client = RecommenderClient(credentials=credentials)
    for location in locations:
        name = client.recommender_path(project, location, recommender)
        element = client.list_recommendations(name)
        for i in element:
            # The resource path ends with the instance name; extract it
            chi = i.content.operation_groups[0].operations[0].resource.rfind("/")
            ch = str(i.content.operation_groups[0].operations[0].resource)
            host = str(ch[chi + 1:])
            recommender_table.add_row([i.description, host,
                                       str(i.primary_impact.cost_projection.cost.units) + " USD", host])
    recommender_table.sortby = "Sort"
    print(recommender_table.get_string(fields=["Recommendation", "Instance", "Costs"]))


main()
```

The script takes the name of the service account file as an input parameter, reads the project name and the rest of the connectivity parameters from it, and uses them to list the available zones. It then queries each zone to check whether it contains any VMs and builds a final list of active zones.

To get the recommendations, the script then queries the Recommender API once per project/zone pair. At the end it outputs something like the picture you see below.

Since in our case the actual resizing of the VMs is a step that should be aligned with the client, we will not do it automatically or via the API.

If you would also like to see a script that does just that, please leave us a comment and we will provide a minimal working implementation.

Not everything can be scaled dynamically. If your deployment uses VMs, make sure they are properly sized.