A-Z - Solution Architect Associate Exam
Basic info before starting the preparation for the Solution Architect Associate exam:
- Practice exam registration fee: 20 USD
- Exam registration fee: 150 USD
- 130 minutes in length
- 60 questions (this can change)
- Multiple choice
- Results are reported on a scale of 100 to 1,000, with a passing score of 720
- Aim for 70%
- No direct questions; almost everything is scenario based
Complete info on certification details is available from the official AWS link below.
Link to schedule your certification exam (login required): https://www.aws.training/certification?src=arc-assoc
SYLLABUS
Chapter 1: Introduction to AWS
This chapter provides an introduction to the AWS Cloud computing platform. It discusses the advantages of cloud computing and the fundamentals of AWS, and it provides an overview of the AWS Cloud services that are fundamentally important for the exam.
Chapter 2: Amazon Simple Storage Service (Amazon S3) and Amazon Glacier Storage
Chapter 3: Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS)
Chapter 4: Amazon Virtual Private Cloud (Amazon VPC)
Chapter 5: Elastic Load Balancing, Amazon CloudWatch, and Auto Scaling
Chapter 6: AWS Identity and Access Management (IAM)
Chapter 7: Databases and AWS
Chapter 8: SQS, SWF, and SNS
This chapter focuses on application services in AWS, specifically Amazon Simple Queue Service (Amazon SQS), Amazon Simple Workflow Service (Amazon SWF), and Amazon Simple Notification Service (Amazon SNS). It also covers architectural guidance on using these services and the use of Amazon SNS in mobile applications.
Chapter 9: Domain Name System (DNS) and Amazon Route 53
In this chapter, you will learn about Domain Name System (DNS) and the Amazon Route 53 service, which is designed to help users find your website or application over the Internet.
Chapter 10: Amazon ElastiCache
This chapter focuses on building high-performance applications using in-memory caching technologies and Amazon ElastiCache.
Chapter 11: Additional Key Services
Services not covered in other chapters are covered here. Topics include Amazon CloudFront, AWS Storage Gateway, AWS Directory Service, AWS Key Management Service (KMS), AWS CloudHSM, AWS CloudTrail, Amazon Kinesis, Amazon Elastic MapReduce (Amazon EMR), AWS Data Pipeline, AWS Import/Export, AWS OpsWorks, AWS CloudFormation, AWS Elastic Beanstalk, AWS Trusted Advisor, and AWS Config.
Chapter 12: Security on AWS
Chapter 13: AWS Risk and Compliance
Chapter 14: Architecture Best Practices
Appendix A: Answers to Review Questions
Index
Exam Objectives
Objective Map
The following list shows each domain and its weighting in the exam, along with the chapters in the book where that domain's objectives and sub-objectives are covered.

Domain 1.0: Designing highly available, cost-efficient, fault-tolerant, scalable systems (60% of exam)
1.1 Identify and recognize cloud architecture considerations, such as fundamental components and effective designs. (Chapters 1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 14)
Content may include the following:
- How to design cloud services (Chapters 1, 2, 3, 4, 8, 9, 11, 14)
- Planning and design (Chapters 1, 2, 3, 4, 7, 8, 9, 10, 11, 14)
- Monitoring and logging (Chapters 2, 3, 8, 9, 11)
Familiarity with:
- Best practices for AWS architecture (Chapters 1, 2, 4, 7, 8, 9, 10, 14)
- Developing to client specifications, including pricing/cost (e.g., On-Demand vs. Reserved vs. Spot; RTO and RPO DR design) (Chapters 2, 7, 9)
- Architectural trade-off decisions (e.g., high availability vs. cost, Amazon Relational Database Service (RDS) vs. installing your own database on Amazon Elastic Compute Cloud (EC2)) (Chapters 2, 4, 7, 8, 9, 10)
- Hybrid IT architectures (e.g., Direct Connect, Storage Gateway, VPC, Directory Services) (Chapters 1, 2, 4, 14)
- Elasticity and scalability (e.g., Auto Scaling, SQS, ELB, CloudFront) (Chapters 1, 2, 5, 7, 8, 9, 10, 14)

Domain 2.0: Implementation/Deployment (10% of exam)
2.1 Identify the appropriate techniques and methods using Amazon EC2, Amazon S3, AWS Elastic Beanstalk, AWS CloudFormation, AWS OpsWorks, Amazon Virtual Private Cloud (VPC), and AWS Identity and Access Management (IAM) to code and implement a cloud solution. (Chapters 1, 2, 3, 4, 5, 6, 8, 11, 13)
Content may include the following:
- Configure an Amazon Machine Image (AMI). (Chapters 2, 3, 11)
- Operate and extend service management in a hybrid IT architecture. (Chapters 1, 4)
- Configure services to support compliance requirements in the cloud. (Chapters 2, 3, 4, 11, 13)
- Launch instances across the AWS global infrastructure. (Chapters 1, 2, 3, 5, 8, 11)
- Configure IAM policies and best practices. (Chapters 2, 6)

Domain 3.0: Data Security (20% of exam)
3.1 Recognize and implement secure practices for optimum cloud deployment and maintenance. (Chapters 2, 4, 10, 12, 13)
Content may include the following:
- AWS shared responsibility model (Chapters 12, 13)
- AWS platform compliance (Chapters 11, 12, 13)
- AWS security attributes (customer workloads down to physical layer) (Chapters 4, 11, 12, 13)
- AWS administration and security services (Chapters 7, 10, 11, 12)
- AWS Identity and Access Management (IAM) (Chapters 6, 12)
- Amazon Virtual Private Cloud (VPC) (Chapters 4, 12)
- AWS CloudTrail (Chapters 11, 12)
- Ingress vs. egress filtering, and which AWS services and features fit (Chapters 11, 12)
- "Core" Amazon EC2 and S3 security feature sets (Chapters 2, 4, 12)
- Incorporating common conventional security products (firewall, VPN) (Chapters 4, 12)
- Design patterns (Chapters 7, 13)
- DDoS mitigation (Chapter 12)
- Encryption solutions (e.g., key services) (Chapters 2, 11, 12)
- Complex access controls (building sophisticated security groups, ACLs, etc.) (Chapters 2, 12)
- Amazon CloudWatch for the security architect (Chapter 5)
- Trusted Advisor (Chapter 11)
- CloudWatch Logs (Chapter 5)
3.2 Recognize critical disaster recovery techniques and their implementation. (Chapters 3, 7, 9, 10)
Content may include the following:
- Disaster recovery (Chapter 3)
- Recovery time objective (Chapter 7)
- Recovery point objective (Chapter 7)
- Amazon Elastic Block Store (Chapter 3)
- AWS Import/Export (Chapter 11)
- AWS Storage Gateway (Chapter 11)
- Amazon Route 53 (Chapter 9)
- Validation of data recovery method (Chapter 3)

Domain 4.0: Troubleshooting (10% of exam)
Content may include the following:
Assessment Test

1. Under a single AWS account, you have set up an Auto Scaling group with a maximum capacity of 50 Amazon Elastic Compute Cloud (Amazon EC2) instances in us-west-2. When you scale out, however, it only increases to 20 Amazon EC2 instances. What is the likely cause?
- A. Auto Scaling has a hard limit of 20 Amazon EC2 instances.
- B. If not specified, the Auto Scaling group maximum capacity defaults to 20 Amazon EC2 instances.
- C. The Auto Scaling group desired capacity is set to 20, so Auto Scaling stopped at 20 Amazon EC2 instances.
- D. You have exceeded the default Amazon EC2 instance limit of 20 per region.
Answer: D. Auto Scaling may cause you to reach the limits of other services, such as the default number of Amazon EC2 instances you can launch within a region, which is 20.
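As an aside (not part of the original question), here is a minimal boto3 sketch of how you might compare an Auto Scaling group's maximum size against the account's default per-region EC2 instance limit; the group name "my-asg" is a made-up placeholder:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    autoscaling = boto3.client("autoscaling", region_name="us-west-2")

    # "max-instances" reports the account's default On-Demand instance limit.
    attrs = ec2.describe_account_attributes(AttributeNames=["max-instances"])
    limit = int(attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"])

    # "my-asg" is a hypothetical Auto Scaling group name.
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-asg"]
    )["AutoScalingGroups"][0]

    if group["MaxSize"] > limit:
        print(f"MaxSize {group['MaxSize']} exceeds the EC2 limit of {limit}; "
              "request a limit increase from AWS Support.")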
2. Elastic Load Balancing allows you to distribute traffic across which of the following?
- A. Only within a single Availability Zone
- B. Multiple Availability Zones within a region
- C. Multiple Availability Zones within and between regions
- D. Multiple Availability Zones within and between regions and on-premises virtualized instances running OpenStack
Answer: B. The Elastic Load Balancing service allows you to distribute traffic across a group of Amazon Elastic Compute Cloud (Amazon EC2) instances in one or more Availability Zones within a region.
3. Amazon CloudWatch offers which types of monitoring plans? (Choose 2 answers)
- A. Basic
- B. Detailed
- C. Diagnostic
- D. Precognitive
- E. Retroactive
Answer: A and B. Amazon CloudWatch has two plans: basic and detailed. There are no diagnostic, precognitive, or retroactive monitoring plans for Amazon CloudWatch.
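For context, a minimal boto3 sketch of switching an instance between the two plans (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Basic monitoring (5-minute metrics) is the default; detailed monitoring
    # (1-minute metrics) is enabled per instance.
    ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])
    # ec2.unmonitor_instances(...) would switch the instance back to basic.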
4. An Amazon Elastic Compute Cloud (Amazon EC2) instance in an Amazon Virtual Private Cloud (Amazon VPC) subnet can send and receive traffic from the Internet when which of the following conditions are met? (Choose 3 answers)
- A. Network Access Control Lists (ACLs) and security group rules disallow all traffic except relevant Internet traffic.
- B. Network ACLs and security group rules allow relevant Internet traffic.
- C. Attach an Internet Gateway (IGW) to the Amazon VPC and create a subnet route table to send all non-local traffic to that IGW.
- D. Attach a Virtual Private Gateway (VPG) to the Amazon VPC and create subnet routes to send all non-local traffic to that VPG.
- E. The Amazon EC2 instance has a public IP address or Elastic IP (EIP) address.
- F. The Amazon EC2 instance does not need a public IP or Elastic IP when using Amazon VPC.
Answer: B, C, and E. To create a public subnet with Internet access, you must: attach an IGW to your Amazon VPC; create a subnet route table rule to send all non-local traffic (for example, 0.0.0.0/0) to the IGW; and configure your network ACLs and security group rules to allow relevant traffic to flow to and from your instance. To enable an Amazon EC2 instance to send and receive traffic from the Internet, you must also assign it a public IP address or EIP address.
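To make those steps concrete, here is a minimal boto3 sketch; the VPC, route table, and instance IDs are placeholders rather than values from the question:

    import boto3

    ec2 = boto3.client("ec2")

    # Step 1: create an IGW and attach it to the VPC.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

    # Step 2: route all non-local traffic (0.0.0.0/0) to the IGW.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationCidrBlock="0.0.0.0/0",
        GatewayId=igw_id,
    )

    # Step 3: give the instance an Elastic IP address.
    allocation = ec2.allocate_address(Domain="vpc")
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",
        AllocationId=allocation["AllocationId"],
    )

Network ACL and security group rules (option B) still have to allow the traffic.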
5. If you launch five Amazon Elastic Compute Cloud (Amazon EC2) instances in an Amazon Virtual Private Cloud (Amazon VPC) without specifying a security group, the instances will be launched into a default security group that provides which of the following? (Choose 3 answers)
- A. The five Amazon EC2 instances can communicate with each other.
- B. The five Amazon EC2 instances cannot communicate with each other.
- C. All inbound traffic will be allowed to the five Amazon EC2 instances.
- D. No inbound traffic will be allowed to the five Amazon EC2 instances.
- E. All outbound traffic will be allowed from the five Amazon EC2 instances.
- F. No outbound traffic will be allowed from the five Amazon EC2 instances.
Answer: A, D, and E. If a security group is not specified at launch, then an Amazon EC2 instance will be launched into the default security group for the Amazon VPC. The default security group allows communication between all resources within the security group, allows all outbound traffic, and denies all other traffic.
6. Your company wants to host its secure web application in AWS. The internal security policies consider any connections to or from the web server as insecure and require application data protection. What approaches should you use to protect data in transit for the application? (Choose 2 answers)
- A. Use BitLocker to encrypt data.
- B. Use HTTPS with server certificate authentication.
- C. Use an AWS Identity and Access Management (IAM) role.
- D. Use Secure Sockets Layer (SSL)/Transport Layer Security (TLS) for the database connection.
- E. Use XML for data transfer from client to server.
Answer: B and D. To protect data in transit from the clients to the web application, use HTTPS with server certificate authentication. To protect data in transit from the web application to the database, use SSL/TLS for the database connection.
7. You have an application that will run on an Amazon Elastic Compute Cloud (Amazon EC2) instance. The application will make requests to Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Using best practices, what type of AWS Identity and Access Management (IAM) identity should you create for your application to access the identified services?
- A. IAM role
- B. IAM user
- C. IAM group
- D. IAM directory
Answer: A. Don't create an IAM user (or an IAM group) and pass the user's credentials to the application or embed the credentials in the application. Instead, create an IAM role that you attach to the Amazon EC2 instance to give applications running on the instance temporary security credentials. The credentials have the permissions specified in the policies attached to the role. A directory is not an identity object in IAM.
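As an illustration of that best practice, a boto3 sketch that creates such a role and exposes it to EC2 through an instance profile; the role and profile names are made up:

    import json

    import boto3

    iam = boto3.client("iam")

    # Trust policy that lets EC2 assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="app-s3-dynamodb-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # EC2 instances pick up roles through an instance profile.
    iam.create_instance_profile(InstanceProfileName="app-profile")
    iam.add_role_to_instance_profile(
        InstanceProfileName="app-profile",
        RoleName="app-s3-dynamodb-role",
    )

Permissions policies for Amazon S3 and Amazon DynamoDB would then be attached to the role, not to any user.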
8. When a request is made to an AWS Cloud service, the request is evaluated to decide whether it should be allowed or denied. The evaluation logic follows which of the following rules? (Choose 3 answers)
- A. An explicit allow overrides any denies.
- B. By default, all requests are denied.
- C. An explicit allow overrides the default.
- D. An explicit deny overrides any allows.
- E. By default, all requests are allowed.
Answer: B, C, and D. When a request is made, the AWS service decides whether it should be allowed or denied. The evaluation logic follows these rules:
1. By default, all requests are denied (in general, requests made using the account credentials for resources in the account are always allowed).
2. An explicit allow overrides this default.
3. An explicit deny overrides any allows.
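The same three rules can be sketched as a toy Python function; this models only the evaluation order described above, not the real IAM engine:

    def evaluate(statements, action, resource):
        decision = "Deny"  # Rule 1: by default, all requests are denied.
        for s in statements:
            if s["Action"] == action and s["Resource"] == resource:
                if s["Effect"] == "Deny":
                    return "Deny"   # Rule 3: an explicit deny overrides any allows.
                decision = "Allow"  # Rule 2: an explicit allow overrides the default.
        return decision

    policy = [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-bucket/report.txt"},
        {"Effect": "Deny", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-bucket/report.txt"},
    ]
    # Both an allow and a deny match, so the deny wins.
    print(evaluate(policy, "s3:GetObject", "arn:aws:s3:::my-bucket/report.txt"))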
9. What is the data processing engine behind Amazon Elastic MapReduce (Amazon EMR)?
- A. Apache Hadoop
- B. Apache Hive
- C. Apache Pig
- D. Apache HBase
Answer: A. Amazon EMR uses Apache Hadoop as its distributed data processing engine. Hadoop is an open source, Java software framework that supports data-intensive distributed applications running on large clusters of commodity hardware. Hive, Pig, and HBase are packages that run on top of Hadoop.

10. What type of AWS Elastic Beanstalk environment tier provisions resources to support a web application that handles background processing tasks?
- A. Web server environment tier
- B. Worker environment tier
- C. Database environment tier
- D. Batch environment tier
Answer: B. An environment tier whose web application runs background jobs is known as a worker tier. An environment tier whose web application processes web requests is known as a web server tier. Database and batch are not valid environment tiers.

11. What Amazon Relational Database Service (Amazon RDS) feature provides high availability for your database?
- A. Regular maintenance windows
- B. Security groups
- C. Automated backups
- D. Multi-AZ deployment
Answer: D. Multi-AZ deployment uses synchronous replication to a different Availability Zone so that operations can continue on the replica if the master database stops responding for any reason. Automated backups provide disaster recovery, not high availability. Security groups, while important, have no effect on availability. Maintenance windows are actually times when the database may not be available.
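A minimal boto3 sketch of requesting a Multi-AZ deployment at creation time; the identifier and credentials are placeholders:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="mydb",
        DBInstanceClass="db.t3.micro",
        Engine="mysql",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",
        AllocatedStorage=20,
        MultiAZ=True,  # synchronous standby in another Availability Zone
    )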
12. What administrative tasks are handled by AWS for Amazon Relational Database Service (Amazon RDS) databases? (Choose 3 answers)
- A. Regular backups of the database
- B. Deploying virtual infrastructure
- C. Deploying the schema (for example, tables and stored procedures)
- D. Patching the operating system and database software
- E. Setting up non-admin database accounts and privileges
Answer: A, B, and D. Amazon RDS will launch Amazon Elastic Compute Cloud (Amazon EC2) instances, install the database software, handle all patching, and perform regular backups. Anything within the database software (schema, user accounts, and so on) is the responsibility of the customer.

13. Which of the following use cases is well suited for Amazon Redshift?
- A. A 500TB data warehouse used for market analytics
- B. A NoSQL, unstructured database workload
- C. A high-traffic, e-commerce web application
- D. An in-memory cache
Answer: A. Amazon Redshift is a petabyte-scale data warehouse. It is not well suited for unstructured NoSQL data or highly dynamic transactional data. It is in no way a cache.

14. Which of the following statements about Amazon DynamoDB secondary indexes is true?
- A. There can be many per table, and they can be created at any time.
- B. There can only be one per table, and it must be created when the table is created.
- C. There can be many per table, and they can be created at any time.
- D. There can only be one per table, and it must be created when the table is created.
Answer: D. There can be one secondary index per table, and it must be created when the table is created.

15. What is the primary use case of Amazon Kinesis Firehose?
- A. Ingest huge streams of data and allow custom processing of data in flight.
- B. Ingest huge streams of data and store it to Amazon Simple Storage Service (Amazon S3), Amazon Redshift, or Amazon Elasticsearch Service.
- C. Generate a huge stream of data from an Amazon S3 bucket.
- D. Generate a huge stream of data from Amazon DynamoDB.
Answer: B. The Amazon Kinesis family of services provides functionality to ingest large streams of data. Amazon Kinesis Firehose is specifically designed to ingest a stream and save it to any of the three storage services listed in option B.
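For context, a minimal boto3 sketch of writing to a Firehose delivery stream whose destination (for example, Amazon S3) was configured when the stream was created; the stream name is hypothetical:

    import boto3

    firehose = boto3.client("firehose")

    # The destination is a property of the stream, not of each put.
    firehose.put_record(
        DeliveryStreamName="trades-to-s3",
        Record={"Data": b'{"symbol": "AMZN", "price": 100.0}\n'},
    )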
16. Your company has 17TB of financial trading records that need to be stored for seven years by law. Experience has shown that any record more than a year old is unlikely to be accessed. Which of the following storage plans meets these needs in the most cost-efficient manner?
- A. Store the data on an Amazon Elastic Block Store (Amazon EBS) volume attached to t2.large instances.
- B. Store the data on Amazon Simple Storage Service (Amazon S3) with lifecycle policies that change the storage class to Amazon Glacier after one year, and delete the object after seven years.
- C. Store the data in Amazon DynamoDB, and delete data older than seven years.
- D. Store the data in an Amazon Glacier Vault Lock.
Answer: B. Amazon S3 and Amazon Glacier are the most cost-effective storage services. After a year, when the objects are unlikely to be accessed, you can save costs by transferring the objects to Amazon Glacier, where the retrieval time is three to five hours.
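A boto3 sketch of such a lifecycle policy, assuming a hypothetical bucket name and approximating seven years as 2,555 days:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-trading-records",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},  # roughly seven years
            }]
        },
    )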
17. What must you do to create a record of who accessed your Amazon Simple Storage Service (Amazon S3) data and from where?
- A. Enable Amazon CloudWatch Logs.
- B. Enable versioning on the bucket.
- C. Enable website hosting on the bucket.
- D. Enable server access logs on the bucket.
- E. Create an AWS Identity and Access Management (IAM) bucket policy.
Answer: D. Server access logs provide a record of any access to an object in Amazon S3.
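Enabling server access logging with boto3 might look like the following; both bucket names are placeholders, and the target bucket must grant the log delivery service permission to write:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_logging(
        Bucket="my-data-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "my-log-bucket",
                "TargetPrefix": "access-logs/",
            }
        },
    )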
18. Amazon Simple Storage Service (Amazon S3) is an eventually consistent storage system. For what kinds of operations is it possible to get stale data as a result of eventual consistency?
- A. GET after PUT of a new object
- B. GET or LIST after a DELETE
- C. GET after overwrite PUT (PUT to an existing key)
- D. DELETE after GET of a new object
Answer: C. Amazon S3 provides read-after-write consistency for PUTs to new objects (new key), but eventual consistency for overwrite PUTs and DELETEs of existing objects (existing key). Option C changes an existing object, so a subsequent GET may fetch the previous, now-inconsistent version of the object.

19. How is data stored in Amazon Simple Storage Service (Amazon S3) for high durability?
- A. Data is automatically replicated to other regions.
- B. Data is automatically replicated to different Availability Zones within a region.
- C. Data is replicated only if versioning is enabled on the bucket.
- D. Data is automatically backed up to tape and restored if needed.
Answer: B. AWS will never transfer data between regions unless you direct it to. Durability in Amazon S3 is achieved by replicating your data to different Availability Zones within the region, regardless of the versioning configuration. AWS doesn't use tapes.

20. Your company needs to provide streaming access to videos to authenticated users around the world. What is a good way to accomplish this?
- A. Use Amazon Simple Storage Service (Amazon S3) buckets in each region with website hosting enabled.
- B. Store the videos on Amazon Elastic Block Store (Amazon EBS) volumes.
- C. Enable Amazon CloudFront with geolocation and signed URLs.
- D. Run a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances to host the videos.
Answer: C. Amazon CloudFront provides the best user experience by delivering the data from a geographically advantageous edge location. Signed URLs allow you to restrict access to authenticated users.
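One way to generate such a signed URL is botocore's CloudFrontSigner; the distribution domain, key pair ID, and key file below are placeholders, and the third-party rsa package is just one choice for the signing callback:

    import datetime

    import rsa  # third-party package used here for RSA signing
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # The private key pairs with the public key registered with CloudFront.
        with open("cloudfront-private-key.pem", "rb") as f:
            key = rsa.PrivateKey.load_pkcs1(f.read())
        return rsa.sign(message, key, "SHA-1")  # CloudFront expects SHA-1 RSA

    signer = CloudFrontSigner("KEYPAIRID12345", rsa_signer)
    url = signer.generate_presigned_url(
        "https://d1234567890.cloudfront.net/videos/intro.mp4",
        date_less_than=datetime.datetime(2030, 1, 1),
    )
    print(url)  # hand this URL only to authenticated users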
21. Which of the following are true about the AWS shared responsibility model? (Choose 3 answers)
- A. AWS is responsible for all infrastructure components (that is, AWS Cloud services) that support customer deployments.
- B. The customer is responsible for the components from the guest operating system upward (including updates, security patches, and antivirus software).
- C. The customer may rely on AWS to manage the security of their workloads deployed on AWS.
- D. While AWS manages security of the cloud, security in the cloud is the responsibility of the customer.
- E. The customer must audit the AWS data centers personally to confirm the compliance of AWS systems and services.
Answer: A, B, and D. In the AWS shared responsibility model, customers retain control of what security they choose to implement to protect their own content, platform, applications, systems, and networks, no differently than they would for applications in an on-site data center.

22. Which process in an Amazon Simple Workflow Service (Amazon SWF) workflow implements a task?
- A. Decider
- B. Activity worker
- C. Workflow starter
- D. Business rule
Answer: B. An activity worker is a process or thread that performs the activity tasks in a workflow. The decider coordinates the workflow logic, and the workflow starter merely initiates the workflow execution; neither implements the tasks themselves.
23. Which of the following is true if you stop an Amazon Elastic Compute Cloud (Amazon EC2) instance with an Elastic IP address in an Amazon Virtual Private Cloud (Amazon VPC)?
- A. The instance is disassociated from its Elastic IP address and must be re-attached when the instance is restarted.
- B. The instance remains associated with its Elastic IP address.
- C. The Elastic IP address is released from your account.
- D. The instance is disassociated from the Elastic IP address temporarily while you restart the instance.
Answer: B. In an Amazon VPC, an instance's Elastic IP address remains associated with the instance when the instance is stopped.
24. Which Amazon Elastic Compute Cloud (Amazon EC2) pricing model allows you to pay a set hourly price for compute, giving you full control over when the instance launches and terminates?
- A. Spot instances
- B. Reserved instances
- C. On-Demand instances
- D. Dedicated instances
Answer: C. You pay a set hourly price for an On-Demand instance from when you launch it until you explicitly stop or terminate it. Spot instances can be terminated when the spot price goes above your bid price. Reserved instances involve paying for an instance over a one- or three-year term. Dedicated instances run on hardware dedicated to your account and are not a pricing model.
25. Under what circumstances will Amazon Elastic Compute Cloud (Amazon EC2) instance store data not be preserved?
- A. The associated security groups are changed.
- B. The instance is stopped or rebooted.
- C. The instance is rebooted or terminated.
- D. The instance is stopped or terminated.
- E. None of the above
Answer: D. The data in an instance store persists only during the lifetime of its associated instance. If an instance is stopped or terminated, the instance store does not persist. Rebooting an instance does not shut it down; if an instance reboots (intentionally or unintentionally), data on the instance store persists. Security groups have nothing to do with the lifetime of an instance and have no effect here.