Wednesday, July 29, 2020

AWS Certified Solutions Architect – Associate C02 Tips and Tricks




I recently passed the AWS Certified Solutions Architect - Associate [SAA-C02] exam. I would like to share some of my experience with the latest exam and how I prepared for it.
You can now schedule the exam from your home (only Pearson VUE supports this option at the moment).

Requirements for taking the exam from home:

  • Windows 10 (Linux will not work)
  • I used my cousin's laptop [4 GB DDR3 RAM, 500 GB HDD, Core i3]
  • A broadband internet connection (my JIO connection failed at the start of my exam, probably because of my home location, so I had to switch connections; doing this during the exam is risky)
  • Passport or driving license.
  • Test your machine [network, audio, camera] here: https://home.pearsonvue.com/aws/onvue


I did the following to prepare for the exam:

  • A Cloud Guru's course by CEO Ryan Kroonenburg
  • Solved between 200 and 500 practice questions/dumps (at a minimum)
  • Attended AWS meetups + AWS webinars (optional) [I am an active member of AWS Users Kochi]
  • Discussed the exam with a friend who had already taken it [for me it was Muhasin-Urolime, AWS expert. Thanks, dude!]
  • You also need some LUCK anyway... keep in mind that you need to crack the AWS exam pattern [NB: for beginners and intermediates]

Some of the topics that came up in my SAA-C02 exam:
  • VPC (more than 4)
  • EFS
  • Auto Scaling (more than 5)
  • EBS (more than 2)
  • RDS (more than 5)
  • S3 (more than 3)
  • Storage Gateway (more than 2)
  • DataSync (more than 2)
  • DynamoDB
  • ElastiCache
  • Redshift
  • Kinesis
  • Serverless (more than 3)
  • SQS (more than 2)
  • CloudFront (more than 3)
  • Cognito
  • Key Management
  • and a few others I am not sure about...

Tips and Tricks

AWS Organizations
1. Single point of maintenance + limiting access to specific services or actions across all team members' AWS accounts = use Service Control Policies (SCPs) (see the sketch below)
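
To make the SCP tip concrete, here is a minimal boto3 sketch (the policy content, policy name, and OU ID are placeholders of my own, not from the exam): it creates a Service Control Policy that denies everything except S3 and EC2 and attaches it to an organizational unit, so the restriction applies to every member account underneath it.

import json

import boto3

org = boto3.client("organizations")

# Hypothetical SCP: deny every action except S3 and EC2 in member accounts.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "NotAction": ["s3:*", "ec2:*"], "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="limit-to-s3-and-ec2",               # placeholder policy name
    Description="Limit member accounts to S3 and EC2",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an OU (or a single account); the target ID is a placeholder.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)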

EFS 
2. Shared storage between multiple EC2 instances + file locking capabilities = EFS
3. High availability + POSIX-compliant + concurrent access from EC2 instances = EFS

VPC
4. For HA, we need two AZs, each containing 3 subnets (1 public for the ALB + 1 private for the web servers + 1 private for the database).
5. Private connection from a VPC to AWS services = use a VPC endpoint (see the sketch after this list)
6. Outbound-only IPv6 traffic = egress-only internet gateway
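
As a rough illustration of the VPC endpoint tip, a boto3 sketch (the VPC ID, route table ID, and region are placeholders): a gateway endpoint lets instances in private subnets reach S3 without traversing the internet.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed region

# Gateway endpoint for S3; routes to S3 are added to the given route table.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                     # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],           # placeholder route table
)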

Auto Scaling
7. High availability + scalability + web servers + session stickiness = Auto Scaling group + ALB + multiple AZs
8. Prevent any scaling delay = use scheduled scaling to scale out EC2 instances (see the sketch after this list)
9. HA = Auto Scaling group (ASG) + ELB + multiple EC2 instances in each AZ
10. Scaling based on high demand at peak times = dynamic scaling
11. Scheduled workloads = scheduled scaling
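
For the scheduled scaling tips, a minimal boto3 sketch (the Auto Scaling group name, schedule, and capacities are made-up values): the group is scaled out shortly before a known peak, so there is no scaling delay when the traffic arrives.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",                # placeholder ASG name
    ScheduledActionName="scale-out-before-peak",
    Recurrence="45 7 * * MON-FRI",                 # cron (UTC): weekdays at 07:45
    MinSize=4,
    DesiredCapacity=6,
    MaxSize=10,
)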

EBS
12. I/O-intensive + relational databases = EBS Provisioned IOPS SSD (io1)
13. Improve the performance of an EBS volume + handle heavy workloads = EBS Provisioned IOPS SSD (io1) (see the sketch after this list)
14. SAN disk = block storage = EBS
15. Log processing + sequential access + 500 MB/s throughput = EBS Throughput Optimized HDD (st1)
16. Proprietary file system = EBS
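
To show what "Provisioned IOPS" means in practice, a small boto3 sketch (the AZ, size, and IOPS figure are arbitrary example values): the io1 volume is created with an explicit IOPS number rather than one derived from its size.

import boto3

ec2 = boto3.client("ec2")

# io1 volume with provisioned IOPS for an I/O-intensive relational database.
ec2.create_volume(
    AvailabilityZone="us-east-1a",    # placeholder AZ
    VolumeType="io1",
    Size=500,                         # GiB
    Iops=10000,                       # explicitly provisioned IOPS
)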

RDS
17. For read performance = add more read replicas to Amazon RDS (see the sketch after this list)
18. Transactional + high performance + data size in the 16 TB to 64 TB range = Amazon Aurora
19. An RDS Reserved Instance's Region, DB engine, DB instance class, deployment type and term length cannot be changed later.
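
A minimal sketch of adding a read replica with boto3 (the instance identifiers and class are placeholders): once the replica is available, read traffic can be pointed at its endpoint to take load off the primary.

import boto3

rds = boto3.client("rds")

# Create a read replica of an existing RDS instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",    # placeholder replica name
    SourceDBInstanceIdentifier="app-db",        # placeholder source instance
    DBInstanceClass="db.r5.large",
)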

S3
20. Backups accessed less frequently + rapid access when needed + low cost = Amazon S3 Standard-IA
21. Restrict access = generate S3 pre-signed URLs; they are valid only for the specified duration (see the sketch after this list)
22. To restrict access to content that you serve from Amazon S3 buckets: create a special CloudFront user called an origin access identity (OAI), associate it with your distribution, and configure your S3 bucket permissions so that CloudFront can use the OAI to access and serve the files in your bucket.
23. PUT/COPY/POST/DELETE requests per second per S3 prefix = 3,500
24. Encrypting the S3 bucket + encrypting Redshift + moving the data = protecting data at rest
25. Secure + scalable + highly available = S3
26. Short-term/temporary access = Amazon S3 pre-signed URL
27. Enabling versioning on both the source and destination buckets is a prerequisite for cross-region replication in Amazon S3.
28. Object store + immutable = Amazon Glacier
29. Bypass the web servers and store files directly in the S3 bucket = pre-signed URL
30. Expedited retrieval: within 1 to 5 minutes
31. Bulk retrieval: within 5 to 12 hours
32. Vault Lock, and Standard retrieval: within 3 to 5 hours
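
Here is a minimal boto3 sketch of the pre-signed URL tips above (bucket and object keys are placeholders): one URL grants temporary download access that expires after the given duration, the other lets a client upload directly to the bucket so the file never passes through the web servers.

import boto3

s3 = boto3.client("s3")

# Temporary, expiring download link.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "reports/summary.pdf"},
    ExpiresIn=3600,       # valid for one hour only
)

# Direct-to-S3 upload link.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "example-bucket", "Key": "uploads/photo.jpg"},
    ExpiresIn=900,
)

print(download_url)
print(upload_url)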

Storage Gateway
33. Storage Gateway in cached volumes mode is the best option to migrate iSCSI storage to the cloud.
34. NFS is supported by File Gateway only.

DynamoDB
35. 50 ms latency + exponentially increasing load = Amazon DynamoDB
36. S3, DynamoDB and Lambda are all highly available.
37. Centralized database + strong consistency + scalable + cost-optimized = Amazon DynamoDB
38. Enable Amazon DynamoDB auto scaling = the FEWEST changes (see the sketch after this list)
39. Data in small chunks + low latency = NoSQL DB = DynamoDB
40. Big data + flexible schema + indexed data + scalable = Amazon DynamoDB
41. Lowest-latency data retrieval + highest scalability = DynamoDB
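
For the DynamoDB auto scaling tip, a boto3 sketch (the table name and capacity limits are invented): DynamoDB auto scaling is configured through Application Auto Scaling, so read capacity tracks utilization with no change to the application code.

import boto3

appscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
appscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",                            # placeholder table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Target-tracking policy: keep read utilization around 70%.
appscaling.put_scaling_policy(
    PolicyName="orders-read-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)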

ElastiCache
42. Repeated complex queries = caching = Amazon ElastiCache
43. Real-time data + in-memory = use Redis
44. Memcached -> simplicity; Redis -> a rich feature set

Redshift
45. High performance + big historical data + business intelligence tools = data warehouse = Amazon Redshift
46. Running different query types on big data = Amazon Redshift workload management (WLM)
47. Data warehouse + big data + fast = Amazon Redshift

Kinesis
48. 1,000 bids per second + in-order processing + no lost messages + multiple services processing each bid = Amazon Kinesis Data Streams
49. Real-time stream + large volume + AWS serverless + custom SQL = Amazon Kinesis Data Analytics
50. Real-time + BIG data streaming = Amazon Kinesis Data Streams
51. Data analytics + SQL = Amazon Kinesis Data Analytics
52. 100,000 requests per second + sequential events + clickstream analysis = use Amazon Kinesis Data Streams
53. IoT data + streams + partition by equipment + S3 = use Amazon Kinesis Data Streams (see the sketch after this list)
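
To illustrate the ordering and partitioning idea in the Kinesis tips, a boto3 sketch (the stream name and payload are placeholders): records that share a partition key, here the equipment ID, go to the same shard and are therefore consumed in order for that piece of equipment.

import json

import boto3

kinesis = boto3.client("kinesis")

reading = {"equipment_id": "pump-17", "temperature_c": 71.4}

kinesis.put_record(
    StreamName="iot-telemetry",                  # placeholder stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["equipment_id"],        # keeps per-equipment ordering
)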

Serverless [Lambda]
54. Migration to AWS + stateless application + static content + less operational overhead = serverless solution = Amazon Cognito + Amazon S3 + Amazon API Gateway + AWS Lambda
55. Lambda has a default limit of 1,000 concurrent executions.
56. AWS compute solution + no special hardware + runs in 512 MB of memory = AWS Lambda functions
57. Lambda is the best option for handling S3 events.
58. Scalable + cost-effective + serverless = Amazon API Gateway with AWS Lambda functions
59. Securely store database passwords + customer master key + Lambda function = Lambda environment variables (see the sketch after this list)
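
A sketch of the "passwords in Lambda environment variables" tip, assuming the value was encrypted with a customer master key beforehand and stored as base64 ciphertext (the variable name is my own, and the exact setup depends on how the value was encrypted): the function decrypts it with KMS at runtime instead of keeping the password in plain text.

import base64
import os

import boto3

kms = boto3.client("kms")

def lambda_handler(event, context):
    # DB_PASSWORD holds base64-encoded ciphertext produced with a CMK
    # (the variable name is a placeholder for this example).
    ciphertext = base64.b64decode(os.environ["DB_PASSWORD"])
    password = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"].decode("utf-8")
    # ... open the database connection with the decrypted password ...
    return {"statusCode": 200}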

SQS
60. SQS prevents losing orders.
61. Sell the oldest items first = use an Amazon SQS FIFO queue (see the sketch after this list)
62. MOST efficient + cost-effective = decouple the two tiers using Amazon SQS
63. Handling failed messages = Amazon SQS dead-letter queue
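
A boto3 sketch combining the FIFO and dead-letter queue tips (queue names and the retry count are placeholders): orders are processed in the order they were sent, and messages that keep failing are moved to a dead-letter queue after five receive attempts.

import json

import boto3

sqs = boto3.client("sqs")

# Dead-letter queue (a FIFO source queue needs a FIFO DLQ).
dlq = sqs.create_queue(
    QueueName="orders-dlq.fifo",                 # placeholder name
    Attributes={"FifoQueue": "true"},
)
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main FIFO queue: in-order processing plus a redrive policy to the DLQ.
sqs.create_queue(
    QueueName="orders.fifo",                     # placeholder name
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        ),
    },
)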

CloudFront
64. CloudFront offers geo-restriction, not geo-routing.
65. Custom origin (on-premises) + better download performance for static files = Amazon CloudFront

CloudFormation
66. Pilot light DR scenario = DB replication + AWS CloudFormation

CloudTrail
67. For monitoring API calls

Elastic Beanstalk
68. Easy deployment + no infrastructure management = Elastic Beanstalk
69. Simple deployment + scalable + running on IIS = AWS Elastic Beanstalk

Amazon Cognito
70. MFA + mobile = Amazon Cognito
71. Blocking suspicious sign-ins = Amazon Cognito user pools

AWS Shield
72. Protect the application from DDoS attacks = use AWS Shield

Key Management
73. AWS manages both the data key and the master key + encryption and decryption handled automatically = SSE-S3
74. Automated rotation of the encryption keys + tracking encryption key usage = use SSE-KMS (see the sketch after this list)
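
To make the SSE-S3 versus SSE-KMS distinction concrete, a boto3 sketch (the bucket, key, and CMK alias are placeholders): the object below is encrypted with SSE-KMS, which brings key rotation and key-usage auditing from KMS; using ServerSideEncryption="AES256" (and dropping SSEKMSKeyId) would give SSE-S3, where AWS manages both the data key and the master key.

import boto3

s3 = boto3.client("s3")

# Upload an object encrypted at rest with SSE-KMS.
s3.put_object(
    Bucket="example-bucket",               # placeholder bucket
    Key="data/records.csv",
    Body=b"id,value\n1,42\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-cmk",       # placeholder CMK alias
)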

Misc
75. Machine learning, high-performance computing, video processing, and financial modeling = Amazon FSx for Lustre