AWS Certified Developer Associate Exam Study Guide

The AWS Certified Developer Associate certification is for those who are interested in building and handling cloud-based applications and services. Typically, applications developed in AWS are sold as products in the AWS Marketplace. This allows other customers to use the customized, cloud-compatible application for their own business needs. Because of this, AWS developers should be proficient in using the AWS CLI, APIs, and SDKs for application development.

The AWS Certified Developer Associate exam (or AWS CDA for short) will test your ability to:

Demonstrate an understanding of core AWS services, their uses, and basic AWS architecture best practices.

Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS.

Prior experience in programming and scripting for standard, containerized, and serverless applications will make your review much easier. Additionally, we recommend having an AWS account available to experiment with, so you can better visualize the parts of your review that involve code. For more details regarding the exam, you can check out the AWS exam blueprint.

Study Materials

If you are not well-versed in the fundamentals of AWS, we suggest that you visit our AWS Certified Cloud Practitioner review guide to get started. AWS also offers a free virtual course called AWS Cloud Practitioner Essentials that you can take in their training portal. Knowing the basic concepts and services of AWS will make your review more coherent and understandable for you.

The primary study materials you’ll be using for your review are the FREE AWS Exam Readiness video course, official AWS sample questions, AWS whitepapers, FAQs, AWS cheat sheets, and AWS practice exams.

The whitepapers include the following:

Microservices on AWS – This paper introduces the ways you can implement a microservice system on different AWS compute platforms. You should study how these systems are built and the reasoning behind the chosen services for each system.

Running Containerized Microservices on AWS – This paper talks about the best practices in deploying a containerized microservice system in AWS. Focus on the example scenarios where the best practices are applied, how they are applied, and which services are used to do so.

Optimizing Enterprise Economics with Serverless Architectures – Read up on the use cases of serverless on different platforms. Understand when it is best to use serverless versus maintaining your own servers. Also familiarize yourself with the AWS services that are part of the serverless toolkit.

Serverless Architectures with AWS Lambda – Learn about serverless and Lambda as much as you can. Concepts, configurations, code, and architectures are all important and are most likely to come up in the exam. Creating a Lambda function of your own will help you remember its features faster.

Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps – If you are a developer aiming for the DevOps track, then this whitepaper is packed with practices for you to learn. CI/CD involves many stages that allow you to deploy your applications faster. Therefore, you should study the different deployment methods and understand how each of them works. Also, familiarize yourself with the implementation of CI/CD in AWS. We recommend performing a lab of this in your AWS account.

Blue/Green Deployments on AWS – Blue/green deployment is a popular deployment method that you should learn as an AWS developer. Study how blue/green deployments are implemented and with which set of AWS services. It is also crucial that you understand the scenarios where blue/green deployments are beneficial, and where they are not. Do NOT mix up your blue environment with your green environment.

Architecting for the Cloud: AWS Best Practices – Be sure to understand the best practices in AWS since exam questions will focus their scenarios around these best practices. The whitepaper contains a number of design principles with examples for each. These will help you recognize which services are most suitable for which kinds of situations.

AWS Security Best Practices – Understand the security best practices and their purpose in your environment. Some services offer more than one form of security feature, such as multiple key management schemes for encryption. It is important that you can determine which form is most suitable for the given scenarios in your exam.

AWS Well-Architected Framework – This is one of the most important whitepapers that you should study for the exam. It discusses the different pillars that make up a well-architected cloud environment. Expect the scenarios in your exam to be heavily based upon these pillars. Each pillar has a corresponding whitepaper of its own that discusses that pillar in more detail.
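To make the blue/green idea concrete, here is a minimal, hypothetical Python sketch of a gradual traffic shift between two environments, similar in spirit to how Route 53 weighted routing or CodeDeploy traffic shifting moves users from the old (blue) fleet to the new (green) one. The function names and step counts are illustrative, not an AWS API:

```python
import random

def route_request(blue_weight, green_weight):
    """Pick an environment for one request based on weights,
    mimicking weighted routing between two target fleets."""
    total = blue_weight + green_weight
    return "blue" if random.uniform(0, total) < blue_weight else "green"

def shift_traffic(steps):
    """Move 100% of traffic from blue to green in equal increments,
    as in a linear blue/green cutover. Returns the weight plan."""
    plan = []
    for i in range(steps + 1):
        green = round(100 * i / steps)
        plan.append({"blue": 100 - green, "green": green})
    return plan

# A 4-step cutover goes 0% -> 25% -> 50% -> 75% -> 100% green;
# if the green environment misbehaves, rolling back is just
# restoring the previous weight entry.
plan = shift_traffic(4)
```

The key property the whitepaper emphasizes is that rollback is cheap: the blue environment stays running until the shift completes, so reverting is a routing change rather than a redeployment.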

Also check out this article: Top 5 FREE AWS Review Materials.

AWS Services to Focus On

AWS offers extensive documentation and well-written FAQs for all of their services. These two will be your primary sources of information when studying AWS. You need to be well-versed in a number of AWS products and services since you will almost always be using them in your work. I recommend checking out Tutorials Dojo’s AWS Cheat Sheets, which provide a summarized but highly informative set of notes and tips for your review of these services.

Services to study for:

Amazon EC2 / ELB / Auto Scaling – Be comfortable with integrating EC2 with ELBs and Auto Scaling. Study the commonly used AWS CLI commands, APIs, and SDK code for these services. Focus as well on security, maintaining high availability, and enabling network connectivity from your ELB to your EC2 instances.

AWS Elastic Beanstalk – Know when Elastic Beanstalk is more appropriate to use than other compute solutions or infrastructure-as-code solutions like CloudFormation or OpsWorks. Experiment with the service yourself in your AWS account, and understand how you can deploy and maintain your own application in Beanstalk.

Amazon ECS – Study how you can manage your own cluster using ECS. Also, figure out how ECS can be integrated into a CI/CD pipeline. Be sure to read the FAQs thoroughly since the exam includes multiple questions about containers.

AWS Lambda – The best way to learn Lambda is to create a function yourself. Also remember that Lambda allows custom runtimes that customers can provide themselves. Figure out which services can be integrated with Lambda, and how Lambda functions can capture and manipulate incoming events. Lastly, study the AWS Serverless Application Model (SAM).

Amazon RDS / Amazon Aurora – Understand how RDS integrates with your application through EC2, ECS, Elastic Beanstalk, and more. Compare RDS to DynamoDB and ElastiCache and determine when RDS is best used. Also know when it is better to use Amazon Aurora than Amazon RDS, and when RDS is more useful than hosting your own database inside an EC2 instance.

Amazon DynamoDB – You should have a complete understanding of the DynamoDB service as it is very crucial in your exam. Read the DynamoDB documentation since it is more detailed and informative than the FAQ. As a developer, you should also know how to provision your own DynamoDB table, and you should be capable of tweaking its settings to meet application requirements.

Amazon ElastiCache – ElastiCache is a caching service that you’ll be encountering often in the exam. Compare and contrast Redis with Memcached. Determine when ElastiCache is more suitable than DynamoDB or RDS.

Amazon S3 – S3 is usually your go-to storage for objects. Study how you can secure your objects through KMS encryption, ACLs, and bucket policies. Know how S3 stores your objects to keep them highly durable and available. Also learn about lifecycle policies. Compare S3 to EBS and EFS to know when S3 is preferred over the other two.

Amazon EFS – EFS is used to set up file systems for multiple EC2 instances. Compare and contrast S3 with EFS and EBS. Also read up on file encryption and optimizing EFS performance.

Amazon Kinesis – There are usually tricky questions on Kinesis, so you should read its documentation too. Focus on Kinesis Data Streams, and also have an idea of the other Kinesis services. Familiarize yourself with the Kinesis APIs, Kinesis sharding, and integration with storage services such as S3 or compute services such as Lambda.

Amazon API Gateway – API Gateway is usually used together with AWS Lambda as part of the serverless application model. Understand API Gateway’s structure, such as resources, stages, and methods. Learn how you can combine API Gateway with other AWS services such as Lambda or CloudFront. Determine how you can secure your APIs so that only a select number of people can invoke them.

Amazon Cognito – Cognito is used for mobile and web authentication. You usually encounter Cognito questions in the exam along with Lambda, API Gateway, and DynamoDB. These usually involve some mobile application requiring an easy sign-up/sign-in feature from AWS. It is highly suggested that you try using Cognito to better understand its features.

Amazon SQS – Study the purpose of the different SQS queue types, the various timeouts, and how your messages are handled inside queues. Messages in an SQS queue are not deleted when polled, so be sure to read up on that as well. There are also different polling mechanisms in SQS, so you should compare and contrast each one.

Amazon CloudWatch – CloudWatch is your primary monitoring tool for all your AWS services. Be sure to know which metrics can be found under CloudWatch monitoring, and which metrics require a CloudWatch agent installed. Also study CloudWatch Logs, CloudWatch Alarms, and billing monitoring. Differentiate the kinds of logs stored in CloudWatch from the logs stored in CloudTrail.

AWS IAM – IAM is the security center of your cloud. Therefore, you should familiarize yourself with the different IAM features. Study how IAM policies are written, and what each section in a policy means. Understand the usage of IAM user roles and service roles. You should also have read the security best practices whitepaper on securing your AWS account through IAM.

AWS KMS – KMS contains the keys you use to encrypt EBS, S3, and other services. Know which services these are. Learn the different types of KMS keys and in which situations each type of key is used.

AWS CodeBuild / AWS CodeCommit / AWS CodeDeploy / AWS CodePipeline – These are your tools for implementing CI/CD in AWS. Study how you can build applications in CodeBuild (buildspec), and how you’ll prepare configuration files (appspec) for CodeDeploy. CodeCommit is a Git repository, so having knowledge of Git will be beneficial. I suggest building a simple pipeline of your own in CodePipeline to see how you should manage your code deployments. It is also important to learn how you can roll back to your previous application version after a failed deployment. The whitepapers above should have explained in-place deployments and blue/green deployments, and how to automate them.

AWS CloudFormation – Study the structure of CloudFormation templates and how you can use them to build your infrastructure. Be comfortable with both JSON and YAML formats. Read a bit about StackSets. List down the services that use CloudFormation in the backend for provisioning AWS resources, such as AWS SAM, and processes such as CI/CD.
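Since IAM policies underpin access control for nearly every service above, here is a minimal sketch of an identity-based policy document assembled in Python. The structure (Version, Statement, Effect, Action, Resource) follows the IAM JSON policy grammar; the bucket name is a hypothetical example:

```python
import json

# A minimal IAM identity-based policy granting read-only access to one
# S3 bucket. The bucket name "my-app-assets" is a hypothetical example.
policy = {
    "Version": "2012-10-17",          # fixed policy-language version string
    "Statement": [
        {
            "Sid": "AllowReadAssets", # optional human-readable statement ID
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-app-assets",    # bucket-level (ListBucket)
                "arn:aws:s3:::my-app-assets/*",  # object-level (GetObject)
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the detail the exam likes to test: bucket-level actions such as s3:ListBucket apply to the bucket ARN, while object-level actions such as s3:GetObject apply to the `/*` object ARN.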

Aside from the concepts and services, you should study the AWS CLI, the commonly used APIs (for services such as EC2, EBS, or Lambda), and the AWS SDKs. Read up on the AWS Serverless Application Model (AWS SAM) and AWS Server Migration Service as well, as these may come up in the exam. It will also be very helpful to have experience interacting with the AWS APIs and SDKs, and troubleshooting any errors that you encounter while using them.
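One practical detail when working with the APIs and SDKs: throttled or failed calls are normally retried with exponential backoff plus jitter, which is the retry strategy the AWS SDKs implement for you. A rough, illustrative sketch of computing the delay schedule (function name and parameters are our own, not an SDK API):

```python
import random

def backoff_delays(max_retries, base=0.1, cap=20.0):
    """Compute full-jitter exponential backoff delays in seconds.

    The wait ceiling doubles on every attempt (base * 2^attempt) up to
    a hard cap, and the actual sleep is a random value below that
    ceiling, which spreads out retries from many competing clients.
    """
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, ceiling))  # full jitter
    return delays

# Ceilings for 5 attempts: 0.1s, 0.2s, 0.4s, 0.8s, 1.6s
schedule = backoff_delays(5)
```

In real code you would sleep for each delay between retries of a throttled call (e.g. a ProvisionedThroughputExceededException from DynamoDB) instead of retrying in a tight loop.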

Validate Your Knowledge

The AWS CDA exam will be packed with tricky questions. It would be great if you could get a feel of how the questions are structured through practice tests. Luckily, Tutorials Dojo offers a great set of practice questions for you to try out here. These practice tests will help validate your knowledge of what you’ve learned so far, and fill in any missing details that you might have skipped in your review. Together with the practice exams and the Tutorials Dojo AWS cheat sheets, this review guide should sufficiently prepare you for your exam.

Sample Practice Test Questions:

Question 1

A web application hosted in Elastic Beanstalk has a configuration file named .ebextensions/debugging.config which has the following content:

option_settings:
  aws:elasticbeanstalk:xray:
    XRayEnabled: true

For its database tier, it uses RDS with Multi-AZ deployments configuration and Read Replicas. There is a new requirement to record calls that your application makes to RDS and other internal or external HTTP web APIs. The tracing information should also include the actual SQL database queries sent by the application, which can be searched using the filter expressions in the X-Ray Console.

Which of the following should you do to satisfy the above task?

1. Add metadata in the segment document.
2. Add annotations in the segment document.
3. Add metadata in the subsegment section of the segment document.
4. Add annotations in the subsegment section of the segment document.

Correct Answer: 4

Even with sampling, a complex application generates a lot of data. The AWS X-Ray console provides an easy-to-navigate view of the service graph. It shows health and performance information that helps you identify issues and opportunities for optimization in your application. For advanced tracing, you can drill down to traces for individual requests, or use filter expressions to find traces related to specific paths or users.

When you instrument your application, the X-Ray SDK records information about incoming and outgoing requests, the AWS resources used, and the application itself. You can add other information to the segment document as annotations and metadata. Annotations are simple key-value pairs that are indexed for use with filter expressions. Use annotations to record data that you want to use to group traces in the console, or when calling the GetTraceSummaries API. X-Ray indexes up to 50 annotations per trace. Metadata are key-value pairs with values of any type, including objects and lists, but they are not indexed. Use metadata to record data you want to store in the trace but don’t need to use for searching traces. You can view annotations and metadata in the segment or subsegment details in the X-Ray console.

A trace segment is a JSON representation of a request that your application serves. A trace segment records information about the original request, information about the work that your application does locally, and subsegments with information about downstream calls that your application makes to AWS resources, HTTP APIs, and SQL databases.

Hence, adding annotations in the subsegment section of the segment document is the correct answer.

Adding annotations in the segment document is incorrect because, although the use of annotations is correct, you have to add them in the subsegment section of the segment document, since you want to trace the downstream call to RDS and not the actual request to your application.

Adding metadata in the segment document is incorrect because metadata is primarily used to record custom data that you want to store in the trace, not to search traces, since it can’t be picked up by the filter expressions in the X-Ray console. You have to use annotations instead. In addition, you have to add them in the subsegment section of the segment document, since you want to trace the downstream call to RDS and not the actual request to your application.

Adding metadata in the subsegment section of the segment document is incorrect because, just as mentioned above, metadata is only used to record custom data that you want to store in the trace, not to search traces.

References:

https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-annotations

https://docs.aws.amazon.com/xray/latest/devguide/xray-console-filters.html

Check out this AWS X-Ray Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-x-ray/
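To make the correct option concrete, the sketch below shows the rough shape of a segment document with the downstream RDS call recorded as a subsegment. Field names follow the X-Ray segment document format, but all values here are made up; the point is that the searchable annotation sits on the subsegment, not on the parent segment:

```python
# A trimmed, hypothetical X-Ray segment document. The downstream SQL
# call is a subsegment, and the annotation placed there (not on the
# parent segment) is what the console's filter expressions can index.
segment = {
    "name": "my-web-app",            # hypothetical service name
    "id": "70de5b6f19ff9a0a",
    "trace_id": "1-581cf771-a006649127e371903a2de979",
    "start_time": 1478293361.271,
    "end_time": 1478293361.449,
    "subsegments": [
        {
            "name": "my-rds-db",     # the downstream RDS call
            "id": "53995c3f42cd8ad8",
            "start_time": 1478293361.271,
            "end_time": 1478293361.300,
            "namespace": "remote",
            "annotations": {         # indexed -> usable in filter expressions
                "sql_query": "SELECT * FROM orders WHERE id = ?"
            },
        }
    ],
}
```

With the annotation on the subsegment, a filter expression such as `annotation.sql_query` can match this trace in the X-Ray console, which is exactly what the question's requirement asks for.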

Question 2

You have an Amazon Kinesis data stream which has 20 open shards and 5 EC2 instances running as Kinesis Client Library (KCL) workers. Upon monitoring the metrics of your application in CloudWatch, it shows that your EC2 workers are consuming all of their CPU resources when processing the shards.

Which of the following is the BEST solution to optimize your application and prevent workers from maxing out their CPU resources?

1. Launch an additional 15 instances
2. Launch an additional 20 instances
3. Launch an additional 25 instances
4. Decrease the number of shards that each instance processes

Correct Answer: 1

Amazon Kinesis Data Streams supports resharding, which lets you adjust the number of shards in your stream to adapt to changes in the rate of data flow through the stream. Resharding is considered an advanced operation. If you are new to Kinesis Data Streams, return to this subject after you are familiar with all the other aspects of Kinesis Data Streams.

There are two types of resharding operations: shard split and shard merge. In a shard split, you divide a single shard into two shards. In a shard merge, you combine two shards into a single shard. Resharding is always pairwise in the sense that you cannot split into more than two shards in a single operation, and you cannot merge more than two shards in a single operation. The shard or pair of shards that the resharding operation acts on are referred to as parent shards. The shard or pair of shards that result from the resharding operation are referred to as child shards.

Typically, when you use the KCL, you should ensure that the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard. However, one worker can process any number of shards, so it’s fine if the number of shards exceeds the number of instances.

To scale up processing in your application, you should test a combination of these approaches:

– Increasing the instance size (because all record processors run in parallel within a process)

– Increasing the number of instances up to the maximum number of open shards (because shards can be processed independently)

– Increasing the number of shards (which increases the level of parallelism)

The question asks how many additional instances should be launched to remediate the high CPU utilization issue. Since there are 20 open shards and 5 instances running, launching an additional 15 instances will make each instance process exactly one shard. Launching more than 15 will not be beneficial in any way since the extra instances will only become idle workers. Therefore, the options that suggest launching an additional 20 or 25 instances are incorrect. It is important to note that each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard.

Decreasing the number of shards that each instance processes is incorrect because, although decreasing the number of shards will allow the current 5 instances to not max out their CPU consumption, this will affect the read and write capacity of your Kinesis stream. If you are to optimize the system, it would be better to add instances to match the open shard count.

References:

https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html

https://docs.aws.amazon.com/streams/latest/dev/kinesis-record-processor-scaling.html

Check out this Amazon Kinesis Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-kinesis/

Kinesis Scaling, Resharding and Parallel Processing:

https://tutorialsdojo.com/aws-cheat-sheet-kinesis-scaling-resharding-and-parallel-processing/
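The shard-to-worker arithmetic behind this question can be sketched in a couple of lines (the function is our own, purely illustrative):

```python
def additional_workers_needed(open_shards, current_workers):
    """Workers to add to reach the KCL scaling ceiling of one worker
    per shard. Beyond that ceiling, extra instances sit idle, because
    one shard is never processed by more than one KCL worker."""
    return max(0, open_shards - current_workers)

# 20 open shards, 5 workers -> 15 additional instances (the correct answer)
additional_workers_needed(20, 5)
```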

Click here for more AWS Certified Developer Associate practice exam questions.

Check out our other AWS practice test courses here.

Additional Materials: High Quality Video Courses on Udemy

There are a few top-rated AWS Certified Developer Associate video courses on Udemy that you can check out as well. These can complement your exam preparations, especially if you are the type of person who learns better through visual courses than by reading long whitepapers.

Based on the feedback of thousands of our students in our practice test course, the combination of any of these video courses plus our practice tests and the Tutorials Dojo Study Guide and Cheat Sheets – AWS Certified Developer Associate eBook was enough to pass the exam and even get a good score.

The AWS CDA certification is one of the most sought-after certifications in the DevOps industry. It validates your knowledge of the AWS Cloud and foundational DevOps practices, and becoming AWS certified is an achievement in its own right. Hence, it is best to get proper sleep the day before your exam. Review any notes that you have written down, and go over the incorrect items in your practice tests if you took them. You should also double-check the venue, the time, and the things you need for your exam. And so, we wish you the best of luck and the best of outcomes!