This is the setup:

There is a Master-Worker architecture which is orchestrated via Ansible from inside the Master. The code for creating the Workers is as follows:

- name: Provisioning Spot instances
  ec2:
    assign_public_ip: no
    spot_price: "{{ ondemand4_price }}"
    spot_wait_timeout: 300
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"
    region: "{{ aws_region }}"
    image: "{{ image_instance }}"
    instance_type: "{{ large_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: "{{ ninstances }}"
    state: present
    group_id: "{{ priv_sg }}"
    vpc_subnet_id: "{{ subnet_id }}"
    instance_profile_name: 'ML-Ansible'
    wait: true
    instance_tags:
      Name: Worker
    # delete_on_termination: yes
  register: ec2
  ignore_errors: True

So the Worker instances are created with the instance profile (and associated role) 'ML-Ansible', which contains all the necessary permissions.
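As far as I understand, the attached profile can be verified from inside a Worker through the instance metadata service. A minimal check (assuming IMDSv1 is reachable and curl is installed on the Workers):

# List the role name(s) delivered by the attached instance profile (should include ML-Ansible)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary credentials issued for that role
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ML-Ansible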

However, when an AWS CLI command ( aws cloudwatch put-metric-data ... ) is executed on a Worker, it returns the following error:

"stderr": "

An error occurred (InvalidClientTokenId) when calling the PutMetricData operation: The security token included in the request is invalid.",
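
To narrow down which credentials the CLI is actually resolving on the Worker, something like the following should show whether the keys come from the shared credentials file, environment variables, or the instance role (just a diagnostic sketch; the identity call would presumably fail with the same InvalidClientTokenId if the stale keys are being picked up):

# Show where the CLI is sourcing credentials from (shared-credentials-file vs. iam-role)
aws configure list

# Ask STS which identity those credentials belong to
aws sts get-caller-identity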

We have recently rotated all our credentials, so we have a fresh set of aws_access_key_id and aws_secret_access_key values.

However, when I looked at my ~/.aws/credentials file, it still contained the previous set of credentials, even though the Ansible playbook was run today.
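
My understanding is that the shared credentials file takes precedence over the instance profile in the CLI's credential chain, so one quick test would be to move the stale file aside and retry the failing call. A sketch (the namespace and metric name below are placeholders, not our actual values):

# Temporarily move the stale credentials file out of the way so the CLI
# falls back to the ML-Ansible instance-profile credentials
mv ~/.aws/credentials ~/.aws/credentials.bak

# Retry the failing call (placeholder namespace/metric values)
aws cloudwatch put-metric-data --namespace "Custom/Workers" --metric-name "Heartbeat" --value 1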

Why is this happening? Does any change need to be made in the corresponding IAM profile as well?