I’ve started studying for an AWS certification many times already. This time around, I’m not sure which one I should take, as I’m considering moving into Data Science. To avoid getting stuck on the choice, I’ll start with the most basic certification: AWS Cloud Practitioner.
These are some notes I’ve taken on my previous attempt at studying the topic.
AWS Cloud Practitioner
The exam contains 65 multiple-choice questions and lasts 90 minutes. It currently costs 100 USD (~500 BRL). The domains covered are: Cloud Concepts (26% of the exam), Security and Compliance (25%), Technology (33%), and Billing and Pricing (16%). All of this is described in the exam guide. There is a free official practice exam, and there are others on Udemy, such as this one.
Shared Responsibility Model: Anything you can configure is your responsibility; anything you cannot is AWS’ responsibility.
Glossary
A
- [AWS] Abuse Team: Previous name of the AWS Trust & Safety team.
[AWS] Acceptable Use Policy: AWS policy outlining acceptable usage of services.
Description: AWS Acceptable Use Policy provides guidelines and restrictions on the usage of AWS services. It defines the acceptable behavior of customers and prohibits activities that violate AWS terms and conditions or applicable laws. The policy helps ensure the security, reliability, and availability of AWS services for all users.
- AMI (Amazon Machine Image): The base image from which EC2 instances are created. It can run any supported OS and come with pre-installed software. An AMI belongs to a Region but can be copied to others.
[AWS] Amplify: Wrapper around a set of services to develop and deploy full stack web and mobile applications.
- Inside the Amplify Studio, you can define everything needed for the whole lifecycle of the application — from SCM and CI/CD to authentication and storage.
[Amazon] API Gateway: Fully managed service for building, deploying, and managing APIs.
Description: Amazon API Gateway is a fully managed service that makes it easy to create, deploy, and manage APIs at any scale. It provides a secure and scalable front door for applications to access backend services and resources. With features like caching, throttling, and request/response transformations, it enables developers to build robust and efficient API architectures.
Use Case: Amazon API Gateway is commonly used for building RESTful APIs, mobile backends, and serverless applications. It acts as a central hub for managing API traffic, authentication, and authorization. For example, an e-commerce application can use API Gateway to expose product catalog and order management APIs to mobile and web clients.
[AWS] Application Discovery Service: Service to gather information about your on-premises data centers and how the applications are connected.
- Agentless Discovery (AWS Agentless Discovery Connector). It can look at the Virtual Machine inventory, configuration, and performance history.
- Agent-based Discovery (AWS Application Discovery Agent). It collects more in-depth data, like system configuration, performance, running processes, and details of the network connections.
- Output can be viewed within AWS Migration Hub.
[AWS] Application Migration Service (MGN): Can perform lift-and-shift of your applications to AWS.
- Supports a wide range of platforms, OSs, and databases.
- It can be done with minimal downtime.

- [Amazon] AppStream 2.0: Streams a desktop application to a browser, letting you use hardware-heavy applications running on AWS machines from any device.
[AWS] AppSync: Used to create GraphQL and Pub/Sub APIs for web and mobile apps.
- It integrates with DynamoDB and Lambda.
- Enables offline data synchronization.
- Has fine-grained security.
[AWS] Artifact: Centralized repository for compliance-related documents and reports.
Description: AWS Artifact is a service that provides access to various compliance-related documents and reports for AWS services. It offers a centralized repository for accessing and downloading documents such as compliance reports, agreements, and security and privacy documents.
- Can be used to support internal audit or compliance analysis.
[Amazon] Athena: Serverless query service for analyzing data from many data sources (including S3 and on-premises) using SQL queries or Python.
Description: Amazon Athena is an interactive query service that enables users to analyze data stored in Amazon S3 using standard SQL queries. It is serverless, meaning there is no infrastructure to manage, and users only pay for the queries they run. Athena supports a wide range of data formats and integrates with various AWS services. It provides fast query performance by automatically parallelizing and optimizing queries. With Athena, users can gain insights from their data without the need for upfront data loading or complex ETL processes.
Use Case: Amazon Athena is commonly used for ad hoc analysis, log analysis, and business intelligence tasks. It is suitable for scenarios where users need to quickly query and analyze large amounts of data stored in S3, without the need for setting up and managing a separate database or data warehouse.
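As a concrete picture of how Athena is used, here is a minimal boto3 sketch (database, table, and results bucket are made up):

```python
import time
import boto3

athena = boto3.client("athena")

# Run a SQL query against data already sitting in S3.
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Athena is asynchronous: poll until the query finishes.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
```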
[Amazon] Aurora: A fully managed relational database engine that's compatible with MySQL and PostgreSQL.
Description: Amazon Aurora is a fully managed relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It provides up to five times better performance than MySQL and up to three times better performance than PostgreSQL with the security, availability, and reliability of a commercial database at one-tenth the cost.
Use Case: Amazon Aurora can be used for web and mobile applications, enterprise applications, gaming applications, e-commerce websites, and SaaS applications.
B
[AWS] Backup: Managed backup and restore service with configurable backup plans. Supports databases, file systems, storage volumes, and other types of resources.
- Can be used by multiple AWS services.
- Supports on-demand or scheduled backups.
- Supports Point-in-time Recovery (PITR).
- Includes Cross-Region and Cross-Account backup (when using AWS Organizations).

[AWS] Batch: Fully managed service for running batch computing workloads at scale.
Description: AWS Batch is a fully managed service that enables users to run batch computing workloads at any scale. It provides a flexible and scalable environment for executing batch jobs, such as data processing, scientific simulations, and analytics. AWS Batch automatically provisions and scales compute resources based on workload demands, allowing users to focus on their applications rather than managing infrastructure. It integrates with other AWS services and provides features like job scheduling, job dependencies, and resource allocation.
Use Case: AWS Batch is ideal for organizations and individuals who need to process large volumes of data or perform computationally intensive tasks. It is suitable for scenarios such as data transformation, image rendering, financial modeling, and genomics processing.
[AWS] Budgets: Service for tracking and managing AWS costs and usage.
Description: AWS Budgets is a service that helps you track and manage your AWS costs and usage. It allows you to set budget limits, receive cost alerts, and monitor spending across various dimensions such as services, regions, and tags. You can gain insights into your AWS spending and take proactive actions to control costs.
Use Case: AWS Budgets is useful for organizations of all sizes to monitor and manage their AWS spending. It helps control costs, optimize resource utilization, and ensure budget adherence. You can set budget thresholds and receive notifications when spending exceeds the defined limits.
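A minimal boto3 sketch of creating a monthly cost budget with an 80% alert (the e-mail address is made up):

```python
import boto3

budgets = boto3.client("budgets")
account_id = boto3.client("sts").get_caller_identity()["Account"]

budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "monthly-cost-cap",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # percent of the budget limit
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "me@example.com"}],
    }],
)
```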
C
[AWS] Certificate Manager (ACM): Provision, manage, and deploy SSL/TLS certificates.
- Supports both public and private TLS certificates.
- Automatic TLS certificate renewal.
- To get the certificates where they are needed, it integrates with ELB, CloudFront, and API Gateway.

[AWS] Cloud Development Kit (CDK): Library for declaring your AWS infrastructure using a familiar language. It is then compiled to CloudFormation.
[AWS] CloudTrail: Service for auditing and monitoring AWS API activity.
Description: CloudTrail records API calls to provide visibility into AWS account activity for governance, compliance, and auditing purposes.
Use Case: Security analysis, compliance auditing, troubleshooting.
[Amazon] CloudWatch: Monitoring and observability service for AWS resources and applications.
Description: Amazon CloudWatch collects and tracks metrics, logs, and events from various AWS resources and applications. It provides real-time monitoring, automated actions based on defined thresholds, and insights into the operational health of your AWS infrastructure.
Use Case: CloudWatch is used to monitor application performance, track resource utilization, set up alarms for specific events, troubleshoot issues, and gain insights into system behavior for optimization and operational visibility.
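A minimal boto3 sketch of the two basics, publishing a custom metric and creating an alarm (the instance ID is made up):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a data point for a custom application metric.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{"MetricName": "OrdersProcessed", "Value": 12, "Unit": "Count"}],
)

# Alarm when an instance's average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)
```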
- [AWS] CodeBuild: A fully managed build service that compiles source code, runs tests, and produces deployable artifacts.
[AWS] CodeCommit: Managed source control service that hosts secure and scalable Git repositories.
Description: AWS CodeCommit provides a secure and highly scalable Git-based version control system for hosting and managing source code repositories. It offers features such as code collaboration, pull requests, access control, and integrations with other AWS developer tools. CodeCommit ensures data encryption at rest and in transit, and it can seamlessly integrate with existing Git workflows and IDEs.
Use Case: Development teams can use AWS CodeCommit as a centralized repository for their source code, enabling collaboration, version control, and seamless integration with other AWS services. For example, a software development team can store their codebase in CodeCommit and utilize its features to manage code changes, review and approve pull requests, and trigger automated deployments using AWS CodePipeline.
[AWS] CodeDeploy: A fully managed deployment service for automating application deployments.
Description: AWS CodeDeploy automates the process of deploying applications to a variety of compute services, including Amazon EC2 instances, Lambda functions, and on-premises servers. It provides a consistent and reliable deployment mechanism, allowing developers to define deployment configurations, rollbacks, and canary deployments. CodeDeploy integrates with other AWS services and third-party tools to enable continuous integration and delivery workflows.
Use Case: Organizations can use AWS CodeDeploy to automate their application deployments, reducing the risk of errors and minimizing downtime. For example, a web application hosted on Amazon EC2 instances can be deployed using CodeDeploy. With a defined deployment configuration, developers can release new versions of the application, perform automated testing, and gradually roll out the updates across instances, ensuring smooth and reliable deployments.
- [Amazon] CodeGuru: Machine Learning service to automate code reviews and give application performance recommendations at runtime.
[AWS] CodePipeline: Managed continuous delivery service for orchestrating build, test, and deployment workflows.
Description: AWS CodePipeline is a continuous delivery service that helps automate software release processes. It allows developers to define and visualize end-to-end workflows for building, testing, and deploying applications. CodePipeline integrates with various AWS services, including CodeCommit, CodeBuild, CodeDeploy, and others, as well as popular third-party tools. It provides visualization of the pipeline stages, manual approval gates, and can trigger actions based on events or schedules.
Use Case: Development teams can use AWS CodePipeline to create automated software release pipelines. For example, a web application's code stored in CodeCommit can trigger a build using CodeBuild, followed by automated tests and deployment using CodeDeploy. CodePipeline ensures consistent and repeatable deployments, enabling teams to deliver software updates more frequently and reliably.
[AWS] CodeStar: A fully integrated service for developing, building, and deploying applications on AWS.
- [Amazon] Cognito: Customer IAM — User authentication and authorization for your applications.
[AWS] Compute Optimizer: Service that analyzes compute resources and provides optimization recommendations based on historical resource utilization.
Description: AWS Compute Optimizer is a service that analyzes the utilization of compute resources, such as Amazon EC2 instances, and provides recommendations to optimize performance and cost. It uses machine learning algorithms to analyze historical resource utilization data and generate recommendations for right-sizing instances and choosing optimal instance types.
Use Case: AWS Compute Optimizer helps organizations optimize their compute resources by identifying over-provisioned or under-utilized instances. By following its recommendations, customers can achieve better performance and cost efficiency in their AWS deployments. For example, Compute Optimizer can recommend downsizing an instance that is consistently underutilized, leading to cost savings.
[AWS] Config: Service for auditing and recording configurations of your AWS resources over time. Used to evaluate compliance requirements.
What can AWS Config answer:
- Is there unrestricted SSH access to my security groups?
- Do my buckets have any public access?
- How has my ALB configuration changed over time?
Some important characteristics:
- It is a regional service — you will have to configure the service for each AWS Region you use. But you can aggregate data from other Regions and even other accounts.
- Can store the auditing data into Amazon S3.
- Can enable SNS notification for any changes recorded.
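A minimal boto3 sketch of turning on one such check: an AWS managed rule that answers the public-bucket question (the rule name is my own):

```python
import boto3

config = boto3.client("config")

# AWS managed rule flagging S3 buckets that allow public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-no-public-read",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
    }
)

# Later: check compliance results for the rule.
compliance = config.describe_compliance_by_config_rule(
    ConfigRuleNames=["s3-no-public-read"]
)
```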
- [Amazon] Connect: Virtual contact center, where you receive calls and create contact flows.
- [AWS] Control Tower: Automates the configuration of your AWS Organization following best practices. Sets up OUs and accounts with their policies.
[AWS] Cost & Usage Report: Detailed cost and usage data export service.
Description: AWS Cost & Usage Report is a service that provides detailed cost and usage data for your AWS resources. It enables you to export comprehensive reports with granular information on resource utilization, costs, and usage patterns. The reports can be customized and scheduled for automated delivery.
Use Case: The Cost & Usage Report is used for cost analysis, budgeting, and chargeback purposes. It helps organizations gain visibility into their AWS spending, identify cost optimization opportunities, and allocate costs accurately to different departments or projects.
- [AWS] Cloud9: A cloud-based IDE that supports coding, debugging, and collaboration.
[AWS] CloudHSM (hardware security module): Hardware-based key storage and cryptographic operations service.
Description: AWS CloudHSM (Hardware Security Module) is a cloud-based service that provides dedicated hardware to securely store cryptographic keys and perform cryptographic operations. CloudHSM helps customers meet regulatory and compliance requirements by offering a tamper-resistant environment for key management. It integrates with various AWS services and applications, enabling secure key storage, encryption, and decryption of sensitive data. CloudHSM provides dedicated HSM instances that are physically isolated and protected by industry-standard security controls.
Use Case: AWS CloudHSM is often used in applications that require strong security and compliance measures for handling sensitive data. It is suitable for industries such as finance, healthcare, and government, where protecting data confidentiality and integrity is crucial. Use cases include secure key management for encrypting customer data, protecting digital assets such as certificates and private keys, and enabling secure communication channels between different systems.
D
[AWS] Database Migration Service: Managed service for migrating many different databases from another AWS Region, an on-premises environment, or even another cloud platform.
Description: AWS Database Migration Service (DMS) is a fully managed service that enables seamless migration of databases to AWS, both from on-premises environments and other cloud platforms. It supports a variety of source and target databases, including Amazon RDS, Amazon Aurora, Amazon Redshift, and self-managed databases running on EC2 instances.
With AWS DMS, you can easily and securely migrate your databases with minimal downtime and data loss. The service takes care of schema conversion, data replication, and ongoing change synchronization, ensuring data consistency during the migration process. It provides a highly available and scalable infrastructure to handle large-scale database migrations.
AWS DMS supports both one-time migrations and continuous replication for ongoing data synchronization. You can also perform database schema conversions during the migration, allowing you to migrate between different database engines.
Use Case: An example use case for AWS Database Migration Service is when an organization wants to migrate their on-premises Oracle database to Amazon Aurora. By using AWS DMS, they can set up a replication instance, configure the source and target endpoints, and initiate the migration process. AWS DMS will handle the data replication, transformation, and validation, ensuring a seamless migration to Amazon Aurora.
Data in transit (not specific to AWS): Data being transferred between systems over a network.
Description: Data in transit refers to data that is in motion and being transferred between systems or networks. It includes data transmitted over the internet, private networks, or other communication channels.
Use Case: Examples of data in transit include sending emails, browsing websites, making online transactions, and transferring files over a network.
Data at rest (not specific to AWS): Data stored and not actively being accessed or transferred.
Description: Data at rest refers to data that is stored in storage systems or devices and is not actively being accessed or transferred. It includes data stored in databases, file systems, data lakes, or physical storage devices.
Use Case: Examples of data at rest include data stored on hard drives, backups stored in storage systems, archived data, and static files stored in cloud storage.
[AWS] DataSync: Replicate large amounts of data from on-premises to AWS. Can synchronize to Amazon S3, Amazon EFS, and Amazon FSx.
- Replication tasks can be scheduled.
- Once the first full replication is done, the following replications are incremental — only the changed data is replicated.

[AWS] Device Farm: Run automated or manual tests on desktop browsers or real mobile devices. You get almost full access to the mobile device’s features.
- Can run tests concurrently on multiple devices.
- Allows the configuration of GPS, language, Wi-Fi, Bluetooth, and other mobile features.
- After running the automated tests, you get reports, logs, and screenshots.
- [Amazon] Detective: Investigates your infrastructure for suspicious activities or, more specifically, to identify the root cause of security issues.
- [AWS] Directory Services: Managed Microsoft Active Directory (AD) that simplifies the deployment and management of AD in the AWS Cloud.
[Amazon] DocumentDB: Fully managed, highly scalable, and MongoDB-compatible document database service.
Description: Amazon DocumentDB (with MongoDB compatibility) is a fully managed native JSON document database that makes it easy and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. It simplifies your architecture by providing built-in security best practices, continuous backups, and native integrations with other AWS services.
Use Case: Amazon DocumentDB can be used to store and query content management data; manage user profiles, preferences, and requests; and scale mobile and web applications.
[Amazon] DynamoDB: A fully managed NoSQL database service for any scale.
Description: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It allows you to offload the administrative burdens of operating and scaling a distributed database so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching or cluster scaling.
Use Case: Amazon DynamoDB can be used for cases such as shopping carts on Amazon.com, where inconsistency and the slow performance of joins cannot be tolerated as the data set scales.
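A minimal boto3 sketch of the key-value access pattern (table and key names are made up):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Carts")  # hypothetical table with partition key "user_id"

# Writes and reads address items directly by key; no joins involved.
table.put_item(Item={"user_id": "u-42", "items": ["sku-1", "sku-2"]})
cart = table.get_item(Key={"user_id": "u-42"}).get("Item")
```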
[Amazon] DynamoDB Accelerator: In-memory cache for accelerating read performance of Amazon DynamoDB tables.
Description: DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB. It enables you to improve the read performance of your DynamoDB applications by caching frequently accessed data and reducing the need to perform read operations directly on the DynamoDB tables. DAX integrates seamlessly with DynamoDB and provides a caching layer that can handle millions of requests per second with low latency. It automatically manages the cache, synchronizing updates with the underlying DynamoDB tables to ensure data consistency.
Use Case: A common use case for DynamoDB Accelerator is in applications that require fast read performance for frequently accessed data. By using DAX, you can significantly reduce the latency of read operations, as the data is retrieved from the in-memory cache instead of querying the DynamoDB tables directly. This can be particularly beneficial for applications with high read workloads or real-time use cases that require low-latency access to data. DAX can be easily integrated with existing DynamoDB applications, requiring minimal code changes.
E
- EC2 User Data: A script that runs once, at an instance’s first boot (see the launch sketch after this list).
- EC2 Image Builder: Pipeline for creating, testing, and distributing custom AMIs. It can run on a given schedule or on-demand.
- [AWS] Ecosystem: The collection of services, tools, and partners that make up the broader AWS environment.
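The launch sketch referenced above: passing a User Data script at launch via boto3 (the AMI ID is made up; boto3 base64-encodes the script for you):

```python
import boto3

ec2 = boto3.client("ec2")

# This script runs as root, once, on first boot.
user_data = """#!/bin/bash
dnf install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```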
[AWS] Elastic Beanstalk: Fully managed service for deploying and scaling web applications using other AWS resources. It uses CloudFormation templates to provision the required resources for you.
Description: AWS Elastic Beanstalk is a fully managed service that simplifies the deployment and management of web applications. It provides a platform for developers to upload their code, and Elastic Beanstalk automatically handles the deployment, capacity provisioning, and load balancing. It supports various programming languages and frameworks.
Use Case: Elastic Beanstalk is suitable for developers who want to focus on writing code without worrying about infrastructure management. It is used to deploy and scale web applications easily, ensuring high availability and scalability.
[Amazon] Elastic Block Store (EBS): Network block-level storage service for EC2 instances that can be attached to a single instance at a time (there is a feature that allows multi-attach for some EBS types).
- You need to specify the size of the volume and pay for the reserved capacity.
- Bound to a specific AZ, but can be moved by taking a snapshot and restoring it in another AZ.
- Can only be attached to one EC2 instance, but one EC2 instance can use multiple EBS volumes.
- EBS volumes can be detached from EC2 instances and still be “alive”. If you want to delete an EBS volume when deleting the EC2 instance, there is a toggle you can enable.
- There are certain types of EBS volumes that can be attached to multiple EC2 instances, but they are an exception (EBS Multi-Attach).
Think of it as a network USB drive.
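A minimal boto3 sketch of the lifecycle described above, creating a volume in an AZ and attaching it to an instance in the same AZ (IDs are made up):

```python
import boto3

ec2 = boto3.client("ec2")

# The volume lives in a single AZ, which must match the instance's AZ.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp3")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance in us-east-1a
    Device="/dev/sdf",
)
```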
[AWS] Elastic Disaster Recovery (AWS DRS): Recover physical, virtual, and cloud-based servers into AWS, within minutes.
You install an AWS Replication Agent in your infrastructure, and it continuously replicates applications, operating systems, and databases to AWS. If your infrastructure goes down, AWS DRS fails over to the replicated AWS infrastructure within minutes. Once your servers are back up, it fails traffic back to them.

[Amazon] Elastic File System (EFS): Network File System that can be used by multiple Linux EC2 instances at a time and is multi-AZ.
- Unlike EBS, you don’t need to specify a size upfront; you pay only for what you use.
- Similar to S3, you have the EFS Infrequent Access (EFS-IA) storage class. It has significantly lower prices for files you do not access every day.
- If enabled, EFS will automatically move files into this storage class over time.
[Amazon] Elastic Load Balancer (ELB): Managed load balancer with Application, Network, and Gateway options.
- Application Load Balancer. Works at Layer 7 (Application Layer), with protocols like HTTP, HTTPS, and gRPC.
- Network Load Balancer. Works at Layer 4 (Transport Layer), with TCP and UDP protocols. It is highly performant, up to millions of requests per second.
- Gateway Load Balancer. Works at Layer 3 (Network Layer), with the GENEVE protocol on IP packets. Used to run security appliances, like filtering and intrusion detection.
[Amazon] Elastic MapReduce (EMR): Service to manage and deploy open-source big data analytics platforms, such as Apache Spark, Apache Hive, and Presto.
Description: A fully managed big data platform that simplifies the processing and analysis of large-scale data sets. It provides a managed Hadoop framework, along with popular big data processing engines like Apache Spark and Presto. EMR automatically provisions and scales the underlying infrastructure, allowing users to focus on data processing tasks. It integrates with various AWS services and offers features like data encryption, data lake integration, and fine-grained access control.
Use Case: Amazon EMR is commonly used for big data processing, data transformation, and analytics workloads. It is suitable for scenarios where users need to process and analyze large volumes of data, such as log analysis, genomics processing, machine learning, and data warehousing.
- Elastic Network Interface (ENI): A logical networking component in a VPC that represents a virtual network card.
[Amazon] Elastic Transcoder: Used to convert Amazon S3 media files from one format into another one (usually one that will be consumed by end users).
Description: Easy to use, fully managed, and secure. It is also highly scalable, with pricing based on the duration of the transcoding.
[Amazon] EventBridge: A serverless event bus that enables event-driven architecture and integrations between AWS services.
Description: Amazon EventBridge is a serverless event bus service that simplifies the building of event-driven architectures. It provides a central hub for routing and processing events generated by various sources, such as AWS services, SaaS applications, and custom applications. EventBridge supports event filtering, transformation, and routing to target AWS services or custom event handlers. It allows decoupling of application components, enabling scalability, flexibility, and easy integration between services.
Use Case: Organizations can use Amazon EventBridge to build event-driven architectures for various use cases. For example, a cloud-native application can leverage EventBridge to connect AWS services like Lambda, SNS, and SQS, allowing seamless communication and coordination between components. Events can be generated when data is ingested, files are uploaded, or services encounter specific conditions, triggering automated workflows or real-time processing.
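A minimal boto3 sketch of publishing a custom event to the default bus (source and payload are made up; a matching rule could route it to Lambda, SNS, or SQS):

```python
import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[{
        "Source": "com.example.shop",   # hypothetical event source
        "DetailType": "OrderPlaced",
        "Detail": json.dumps({"orderId": "o-123", "total": 42.5}),
    }]
)
```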
F
[AWS] Fault Injection Simulator (FIS): Use Chaos Engineering to simulate failures around your resources and monitor how your system behaves.
- Supports only a subset of AWS services.
- You run the disruptions with (pre-built) templates.

[AWS] Fargate: A way to run Amazon ECS containers in a managed, serverless way.
Description: AWS Fargate is a serverless compute engine for containers that allows developers to run containers without the need to manage the underlying infrastructure. Fargate abstracts away the complexities of provisioning and scaling infrastructure resources, providing an efficient way to deploy and manage containerized applications. It integrates with container orchestration services like Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
Use Case: AWS Fargate is ideal for running microservices-based applications, batch processing, and other workloads that require containerization. For instance, a media streaming platform can use Fargate to deploy containers that transcode videos or process large volumes of data in a scalable and cost-effective manner. Fargate automatically handles resource provisioning and scaling, allowing developers to focus on application development and deployment.
- [Amazon] Forecast: Service that ingests historical business and related data to generate accurate time-series forecasts of different measurements.
[Amazon] FSx: Used for third-party high-performance file systems, like Windows File Server, Lustre, NetApp ONTAP, and OpenZFS.
- Can also be accessed from on-premises infrastructure.
G
[Amazon] GuardDuty: Intelligent threat detection service that looks at CloudTrail, VPC Flow Logs, and DNS logs (along with other optional data) to perform anomaly detection.
- Only looks at AWS information, so it can be easily enabled.
- Uses ML and third-party data to enhance its discovery power.
- Can notify EventBridge in case some threat has been discovered — then EventBridge can run some Lambda or send a notification through SNS (see the sketch after this list).
- Has specialized protection against cryptocurrency attacks.
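The EventBridge wiring mentioned in the list above, as a minimal boto3 sketch (the SNS topic ARN is made up):

```python
import json
import boto3

events = boto3.client("events")

# Match all GuardDuty findings on the default event bus.
events.put_rule(
    Name="guardduty-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
)

# Fan findings out to a notification topic.
events.put_targets(
    Rule="guardduty-findings",
    Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```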

- [AWS] Global Accelerator: Allows users to access your application through an edge location (and consequently through AWS’ private network), instead of going through the public network.
[AWS] Glue: Fully managed extract, transform, and load (ETL) service for data analytics.
Description: AWS Glue is a fully managed ETL service that makes it easy to prepare and load data for analytics. It provides capabilities for discovering, cataloging, transforming, and loading data into various data stores and data warehouses. Glue automatically generates ETL code and runs it at scale, simplifying the data integration process.
Use Case: AWS Glue is used for various data integration and data preparation tasks. For example, it can be used to extract data from different sources, transform it into a common format, and load it into a data warehouse for analytics. It enables organizations to build scalable and automated data pipelines.
[AWS] Ground Station: Connect your satellites to AWS. It lets you control communications and process data from ground stations located near AWS Regions.
- Download satellite data to S3 or EC2 within seconds.

H
[AWS] Health: View both the overall health status of AWS services and personalized status and recommendations regarding the health of your own AWS resources.
Description: AWS Health is a service that provides personalized information and recommendations regarding the health of AWS resources. It helps customers optimize their AWS infrastructure by notifying them of events that may impact their applications or services. AWS Health aggregates and organizes information from various AWS services, such as AWS Trusted Advisor, AWS Personal Health Dashboard, and AWS CloudWatch. It provides insights into service disruptions, security vulnerabilities, and planned maintenance activities. With AWS Health, users can proactively identify and mitigate potential issues, ensuring the availability and reliability of their AWS resources.
Use Case: AWS Health is a valuable tool for monitoring the health of AWS resources and maintaining operational excellence. It can be used by administrators, operations teams, and developers to stay informed about the status of their infrastructure, quickly identify and resolve issues, and take proactive measures to optimize resource utilization and minimize downtime.
- Hardware security module (HSM): A physical device specialized in generating, managing, and storing cryptographic keys.
I
[AWS] IAM: Identity and Access Management service for managing user permissions.
Description: AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. It allows you to create and manage users, groups, and permissions to grant or deny access to AWS services and resources.
Use Case: AWS IAM is used to control and manage access to various AWS resources and services. For example, you can create IAM users for your team members and assign appropriate permissions to each user, ensuring they only have access to the resources they need to perform their tasks.
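A minimal boto3 sketch of that user-and-permissions flow (the user name is made up; the policy is an AWS managed one):

```python
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="data-analyst")

# Grant read-only S3 access through an AWS managed policy.
iam.attach_user_policy(
    UserName="data-analyst",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```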
- IAM Access Analyzer: Find out which resources can be accessed by clients outside your Zone of Trust (a set of AWS accounts from your AWS Organization). This feature is free.
- IAM Identity Center: Service to enable your SSO to work for AWS accounts and connect to multiple AWS accounts with the same credentials.
- [AWS] IoT Core: Connects IoT devices to the AWS cloud, which you can leverage to aggregate and analyze tons of data. The AWS IoT Core itself includes a range of features like message broker and caching.
[Amazon] Inspector: Automated security threat assessments, only for Amazon EC2 instances, Amazon ECR, and Lambda functions.
- Analyzes known vulnerabilities (registered as CVEs) for EC2, ECR, and Lambda functions.
- Analyzes network accessibility for EC2 instances.
- It assigns a risk score to all vulnerabilities, so you can prioritize your work.
- Integrates with Security Hub.
- Can send findings to EventBridge.
Instance Store: Temporary block-level storage available on some Amazon EC2 instance types (e.g. m5d).
Description: Instance Store provides temporary block-level storage that is directly attached to Amazon EC2 instances. It offers high I/O performance and low latency, making it suitable for applications that require fast and temporary storage. However, the data stored in Instance Store is volatile and will be lost if the instance is stopped, terminated, or fails.
Use Case: Instance Store is commonly used for applications that need temporary storage for caching, scratch space, or other temporary data processing tasks. For example, high-performance databases or data analytics applications can benefit from the fast and local storage provided by Instance Store to improve processing speed. It's important to note that Instance Store should not be used for storing critical or persistent data, as it is not durable and does not provide data persistence beyond the lifetime of the instance.
- [AWS] IQ: A service that connects AWS customers with third-party experts for on-demand project work.
J
K
- [Amazon] Kendra: Natural language search service for unstructured data that connects to many data sources (Amazon S3, Slack, Salesforce, …).
[AWS] Key Management Service: Secure and scalable key management service for creating and managing encryption keys.
Description: AWS Key Management Service (KMS) is a managed service that enables you to create and control the encryption keys used to encrypt your data. It provides a highly secure and scalable solution for key management, allowing you to generate, store, and manage encryption keys in a centralized manner. AWS KMS integrates with various AWS services and client-side encryption libraries to help you protect your data in a wide range of use cases.
Use Case: An example use-case of AWS Key Management Service is encrypting data stored in Amazon S3 buckets. By using AWS KMS, you can create and manage encryption keys, and then configure S3 to automatically encrypt objects using these keys. This ensures that your data is encrypted at rest and provides an additional layer of security. AWS KMS can also be used to encrypt data in other AWS services, such as Amazon EBS volumes, Amazon RDS databases, and Amazon SNS messages.
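A minimal boto3 sketch of that S3 use case, creating a key and uploading an object encrypted with it (bucket and file names are made up):

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key = kms.create_key(Description="demo data key")["KeyMetadata"]

# Ask S3 to encrypt the object at rest with the KMS key.
s3.put_object(
    Bucket="my-secure-bucket",  # hypothetical bucket
    Key="report.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId=key["KeyId"],
)
```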
[AWS] Knowledge Center: A collection of articles and videos covering frequent AWS customer questions and requests.
Description: The AWS Knowledge Center is a part of AWS re:Post and includes AWS Official Knowledge Center articles and videos covering the most frequent questions and requests that AWS receives from its customers. It is a helpful resource for finding answers to common questions about AWS services and features.
L
[Amazon] Lex: Service for building conversational interfaces using voice or text (chatbots).
Description: Amazon Lex enables developers to build conversational interfaces, commonly known as chatbots or virtual assistants, that can interact with users using both voice and text inputs. It uses automatic speech recognition (ASR) and natural language understanding (NLU) capabilities to convert user speech or text into structured data that applications can process. Lex supports creating and managing chatbot conversational flows, handling user input, and integrating with various AWS services or custom back-end systems.
Use Case: Companies can leverage Amazon Lex to build interactive and intelligent chatbot applications for customer support, information retrieval, and other use cases. For instance, an e-commerce business can create a chatbot that assists customers in finding products, provides order status updates, or answers frequently asked questions. Lex's NLU capabilities allow the chatbot to understand user intents and entities, enabling it to provide accurate and personalized responses, leading to an improved customer experience.
- [Amazon] Lightsail: Easy-to-use virtual private server (VPS) service with simplified management that hides the services being used under the hood.
[AWS] Local Zones: AWS infrastructure deployments that place compute, storage, and other services closer to end-users. AWS manages the physical servers.
Description: AWS Local Zones are extensions of AWS Regions that give you the ability to run low-latency, high-bandwidth applications closer to your end-users. They are geographically distributed, small-scale data centers located in proximity to major metropolitan areas. Local Zones are designed for applications that require single-digit millisecond latency to end-users or local data processing.
Each Local Zone is connected to an AWS region through a low-latency, high-bandwidth network link. They offer a subset of AWS services and can be used in conjunction with the services and resources available in the corresponding AWS region. Local Zones can be useful for latency-sensitive applications, such as video streaming, gaming, real-time analytics, and machine learning, where proximity to end-users is critical.
Use Case: An example use case for AWS Local Zones is a gaming company that wants to provide low-latency gaming experiences to its users. By deploying their game servers in a Local Zone near a specific city, they can reduce the latency between players and the game server, resulting in a more responsive and immersive gaming experience. Another use case is an application that requires real-time data processing, such as financial market analysis or IoT sensor data processing. By leveraging a Local Zone, the application can reduce the time it takes to process and respond to incoming data, enabling near real-time decision-making.
M
[Amazon] Macie: Data security and data privacy service that uses machine learning to continuously analyze your S3 data, organize the findings, and provide insights and measures to take.
- It can identify personally identifiable information (PII), financial data, intellectual property, and other sensitive content.

- [Amazon] Managed Blockchain: Create blockchain networks with an open-source framework and select the nodes that replicate the ledger.
- [AWS] Managed Services: A fully managed service that helps offload the management of AWS infrastructure and applications.
[AWS] Migration Evaluator: Focused on building a data-driven business case for migrating to AWS.
- You install an Agentless Collector to perform a discovery process within your infrastructure. Another option is to import data you already have into the AWS Migration Evaluator.
- Takes a snapshot of your servers and their dependencies.
- It provides a clear baseline of what you currently run in your data center, from which you define a target state and then develop a migration plan.

- [AWS] Migration Hub: Central location to leverage the whole set of migration related services on AWS.
[Amazon] MQ: Managed message broker service for Apache ActiveMQ and RabbitMQ.
Description: Amazon MQ is a managed message broker service that simplifies the setup, operation, and maintenance of message queues. It supports the popular open-source messaging protocols Apache ActiveMQ and RabbitMQ, providing reliable and scalable messaging capabilities. Amazon MQ takes care of the underlying infrastructure, including provisioning, patching, and monitoring, allowing users to focus on building messaging applications. It integrates with other AWS services and offers features like message encryption, message filtering, and message replay.
Use Case: Amazon MQ is commonly used for decoupling applications, enabling asynchronous communication, and building event-driven architectures. It is suitable for scenarios where reliable and scalable messaging is required, such as order processing, inventory management, and real-time data streaming.
N
- [Amazon] Neptune: Graph database service that enables highly connected, relationship-oriented applications.
[AWS] Network Firewall: Firewall at the VPC level for traffic between VPCs, the internet, and on-premises networks. Can inspect communications from Layer 3 to Layer 7.
- Can inspect traffic from:
- VPC to VPC
- Internet outbound and inbound
- To and from Direct Connect and Site-to-Site VPN.

O
- [AWS] OpsHub: Dashboard to manage devices from the Snow family (physical data-transfer devices).
[AWS] OpsWorks: Service to manage server configurations using Chef or Puppet on EC2 or on-premises machines.
Description: AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.
Use Case: Web application hosting, mobile app backends, microservices.
[AWS] Organizations: Service for centrally managing and governing multiple AWS accounts through Organizational Units (OUs).
- It is a global service.
- Can enable Consolidated Billing to merge payment into a single bill for all accounts.
- Can benefit from discounts of aggregated usage across all accounts.
- Can share a pool of reserved EC2 instances.
- Can use an API to programmatically create AWS accounts.
- Can restrict account privileges with Service control policies (SCP).
- Can enable policies for Tag usage across accounts and OUs.
- Can enable Backup policies.
- Can enable AI services opt-out policies.
- [AWS] Outposts: Bring native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. The hardware belongs to AWS, but you are responsible for its physical security.
P
[AWS] Partner Network (APN): Global partner program that helps businesses build, market, and sell their AWS solutions.
Description: The AWS Partner Network (APN) is a comprehensive program designed to support and enable AWS partners in delivering value-added services and solutions to customers. It provides resources, training, technical support, and co-selling opportunities to help partners build successful businesses on AWS.
Use Case: APN is not a service, but rather a program that helps businesses leverage the AWS platform. It allows organizations to expand their reach, access specialized expertise, and create innovative solutions for their customers by partnering with AWS.
[AWS] Personal Health Dashboard: Service for monitoring the health of AWS services and accounts.
Description: AWS Personal Health Dashboard provides personalized views into the performance and availability of AWS services and resources that you are using. It offers alerts, notifications, and remediation guidance to help you understand and address any operational issues or planned activities impacting your AWS infrastructure.
Use Case: The Personal Health Dashboard is used to monitor the health of your AWS services and accounts, ensuring you have visibility into any service disruptions, performance issues, or scheduled maintenance events that may impact your applications. It helps you proactively address and resolve any operational issues to maintain the availability and performance of your applications.
[Amazon] Personalize: Managed machine learning service for creating personalized recommendations based on user behavior and items’ metadata.
Description: Amazon Personalize is a fully managed service that uses machine learning algorithms to generate personalized recommendations. It leverages user behavior and item metadata to create recommendation models tailored to individual users' preferences. Personalize supports real-time recommendations for personalized content delivery across applications and devices. It handles data ingestion, feature engineering, model training, and deployment, making it easier for developers to add recommendation functionality to their applications.
Use Case: Businesses can utilize Amazon Personalize to deliver personalized product recommendations, content suggestions, or targeted marketing campaigns. For example, a streaming platform can employ Personalize to recommend movies or TV shows to its users based on their viewing history, ratings, and preferences. By offering personalized recommendations, businesses can enhance user engagement, increase customer satisfaction, and drive conversions.
[Amazon] Pinpoint: Communicate with users (2-way communication) through multiple channels and collect engagement data.
- Can send and receive emails, SMS, push, voice, and in-app messages.
- Allows segmentation and personalization of the messages.
- It differs from Amazon SNS and Amazon SES in the scope of the service. Amazon Pinpoint manages the message’s audience, content, and delivery schedule. SNS and SES simply deliver the message to a specified audience.
[Amazon] Polly: Text-to-speech service that turns text into lifelike speech.
Description: Amazon Polly is a text-to-speech service that uses advanced deep learning technologies to convert text into natural-sounding speech. It supports multiple languages and offers a variety of voices with customizable speech parameters. Polly can be used to generate audio content for applications, including voice-overs, interactive responses, and accessibility features.
Use Case: Amazon Polly is widely used in applications that require speech synthesis capabilities. For example, a news aggregator app can use Polly to convert news articles into audio summaries for users to listen to. It enhances user experiences and accessibility by providing spoken content in addition to text.
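A minimal boto3 sketch (the voice choice is arbitrary):

```python
import boto3

polly = boto3.client("polly")

resp = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio comes back as a stream; save it to a file.
with open("hello.mp3", "wb") as f:
    f.write(resp["AudioStream"].read())
```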
[AWS] PrivateLink: Secure and scalable way to access AWS services privately.
Description: AWS PrivateLink is a networking service that allows you to securely access AWS services in a private manner, without requiring internet connectivity. It enables you to access AWS services over private IP addresses, keeping traffic within your own virtual private cloud (VPC) or on-premises network.
Use Case: AWS PrivateLink is used to securely access AWS services, such as AWS Lambda, Amazon S3, and Amazon EC2, from your VPC or on-premises environment. It is particularly useful when you want to access AWS services without exposing them to the public internet, ensuring enhanced security and compliance.
[AWS] Professional Services: Team you can engage for expert guidance, support, and implementation assistance.
Description: AWS Professional Services is a team of experienced consultants and solution architects who provide expert guidance, support, and implementation assistance to AWS customers. They help organizations design, deploy, and optimize their applications and infrastructure on AWS, ensuring best practices and successful outcomes.
Use Case: Architecture design, migration planning, performance optimization, and cost management.
Q
- [Amazon] Quantum Ledger Database (QLDB): Ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log.
[Amazon] QuickSight: Cloud-native business intelligence service for data visualization and analytics with data from AWS and many other sources.
Description: Amazon QuickSight is a fully managed business intelligence service that enables you to create interactive dashboards, perform ad hoc analysis, and share insights with others. It integrates with various data sources and provides powerful visualization and analytics capabilities.
Use Case: QuickSight is useful for a range of analytics use cases, such as visualizing sales data, monitoring key performance indicators (KPIs), analyzing customer behavior, and gaining insights from large datasets. It allows you to transform raw data into actionable insights and make data-driven decisions.
R
- [AWS] Resource Access Manager (RAM): Manages the sharing of resources across multiple AWS accounts when using AWS Organizations.
- [AWS] re:Post: An online community and platform for sharing best practices, architectural patterns, and success stories.
[Amazon] Redshift: Fully managed data warehousing service.
Description: Amazon Redshift is a fully managed data warehousing service that allows you to analyze large datasets with high performance and scalability. It provides columnar storage, parallel query execution, and automatic data compression, enabling fast and cost-effective data analysis. Redshift integrates with popular BI tools and data integration services.
Use Case: Amazon Redshift is commonly used for business intelligence, data analytics, and reporting purposes. It is suitable for organizations that need to process and analyze large volumes of data to gain insights and make data-driven decisions. For example, an e-commerce company can use Redshift to analyze sales data and customer behavior patterns.
[Amazon] Rekognition: Deep learning-based image and video analysis service.
Description: Amazon Rekognition is a deep learning-based service that analyzes images and videos to extract valuable information. It can identify objects, people, text, scenes, and activities within visual content. Rekognition provides capabilities like facial recognition, celebrity recognition, content moderation, and image sentiment analysis. It is powered by advanced machine learning algorithms and can be easily integrated into applications using the Rekognition API. The service is highly scalable, making it suitable for a wide range of use cases.
Use Case: Amazon Rekognition is used for various applications, including facial recognition systems, video content analysis, content moderation in social media, intelligent surveillance, and personalized user experiences based on visual data.
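A minimal boto3 sketch of label detection on an image stored in S3 (bucket and object names are made up):

```python
import boto3

rekognition = boto3.client("rekognition")

resp = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photos", "Name": "beach.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in resp["Labels"]:
    print(label["Name"], label["Confidence"])
```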
[Amazon] Relational Database Service (RDS): Managed relational database service for various database engines.
Description: AWS RDS simplifies the deployment, management, and scaling of relational databases. It supports popular database engines such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
Use Case: Web applications, e-commerce platforms, business applications.
- Rightsizing Recommendations: A feature of the Cost Explorer service for optimizing EC2 instance usage.
S
- S3 Transfer Acceleration: Instead of uploading or downloading your files directly to the S3 Region you want to use, transfer them to the closest edge location and let them travel to the bucket through the AWS network. (You can test the speedup with an AWS comparison tool.)
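A minimal boto3 sketch: enable acceleration on the bucket once, then route transfers through the accelerate endpoint (bucket and file names are made up):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads now go through the nearest edge location.
s3_fast = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_fast.upload_file("big-file.bin", "my-bucket", "big-file.bin")
```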
[AWS] Security Hub: Security service that centralizes findings about security and compliance from many AWS services, partner tools, and custom checks. It facilitates assessing and tracking security and compliance issues.
- Can manage security across multiple AWS accounts.
- It has an integrated dashboard showing current security and compliance status.
- It integrates with AWS Config, AWS GuardDuty, Amazon Inspector, Amazon Macie, IAM Access Analyzer, and many others.
- For it to work, you need to first enable AWS Config.

- [AWS] Security Token Service: A service that enables users to request temporary security credentials to securely access AWS resources.
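A minimal boto3 sketch of assuming a role and using the temporary credentials (the role ARN is made up):

```python
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnly",  # hypothetical role
    RoleSessionName="audit-session",
    DurationSeconds=3600,
)["Credentials"]

# A client built from the short-lived credentials.
readonly_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```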
[AWS] Secrets Manager: Service for securely storing and managing secrets, such as API keys and database credentials.
Description: AWS Secrets Manager is a service that helps you protect secrets needed to access your applications, services, and IT resources. It provides a secure and scalable solution for storing and managing secrets, such as database credentials, API keys, and encryption keys. Secrets Manager integrates with AWS services and offers automatic rotation capabilities.
Use Case: AWS Secrets Manager simplifies the management of secrets in distributed systems. For example, an application that requires access to a database can retrieve the database credentials securely from Secrets Manager at runtime, without storing them directly in the application code or configuration files. This improves security and reduces the risk of unauthorized access to sensitive information.
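A minimal boto3 sketch of that pattern, storing a secret once and fetching it at runtime (names and values are made up):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

secrets.create_secret(
    Name="prod/db-credentials",  # hypothetical secret name
    SecretString=json.dumps({"username": "app", "password": "s3cr3t"}),
)

# At runtime the application fetches it instead of hardcoding it.
value = json.loads(
    secrets.get_secret_value(SecretId="prod/db-credentials")["SecretString"]
)
```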
- [AWS] Service Catalog: Service to allow only an approved set of products in your organization. Admins create CloudFormation templates and define who can launch them.
Service Control Policies (SCP): A feature of AWS Organizations to restrict the privileges of Organizational Units or AWS accounts. (With IAM you can only do that for IAM users, not AWS accounts.)
- Deny SCPs take precedence over Allow ones: if an AWS account inherits a Deny rule from its Organizational Unit, it is denied even if it has a direct Allow rule of its own.
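A minimal boto3 sketch of creating and attaching an SCP (the OU ID is made up; this has to run from the Organization's management account):

```python
import json
import boto3

org = boto3.client("organizations")

# Deny all S3 actions for whatever the policy is attached to.
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Deny", "Action": "s3:*", "Resource": "*"}],
}

policy = org.create_policy(
    Name="deny-s3",
    Description="Block all S3 usage",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",  # hypothetical OU ID
)
```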
- [AWS] Shield Advanced: Premium DDoS protection against sophisticated attacks, with 24/7 access to the AWS DDoS Response team. It is a costly service with fixed monthly billing, but protects against higher fees during usage spikes due to DDoS attacks.
- [AWS] Shield Standard: Standard DDoS protection for applications living inside AWS. It protects against layer 3 and layer 4 attacks. It comes at no additional cost for every AWS customer.
[AWS] Snow Family: Physical devices provided by AWS for data transfer or computation at the edge (anywhere with limited or no internet and computing access).
You can also transfer data securely through the AWS Network by combining other AWS services.

[AWS] Step Functions: Service for orchestrating workflows in a visual way. It integrates with Lambda functions, EC2, ECS, on-premise servers, and others.
- It includes features like sequencing, parallelization, conditions, timeouts, and error handling.
- Can implement a step that requires human interaction.

[AWS] Storage Gateway: Enables on-premises applications to use storage services on AWS.
Description: AWS Storage Gateway seamlessly integrates on-premises environments with AWS cloud storage services. It provides file, volume, and tape gateways, enabling you to extend your on-premises storage infrastructure to the cloud. You can store backups, disaster recovery data, and archive data in AWS while using on-premises applications.
Use Case: Data backup, disaster recovery, archiving.
[AWS] Systems Manager: Unified interface for managing AWS resources and applications.
Description: AWS Systems Manager provides a unified interface for managing and controlling AWS resources and applications. It offers a set of tools for system and resource configuration, automation, and monitoring. Systems Manager helps organizations maintain consistent system configurations, simplify operational tasks, and improve visibility into their AWS environment.
Use Case: AWS Systems Manager is used for a wide range of tasks, including managing instances, applying patches, configuring and deploying applications, and collecting operational data. It simplifies operational activities and enables automation at scale. For example, Systems Manager can be used to automate the deployment and configuration of software across multiple instances, reducing manual effort and ensuring consistency.
T
[Amazon] Transcribe: Automatic Speech Recognition (ASR) service that converts audio to text.
Description: Amazon Transcribe is an automatic speech recognition (ASR) service that can convert speech from various audio sources into written text. It supports real-time and batch transcription, and provides accurate results even with challenging audio conditions. Transcribe is used for applications such as transcription services, voice analytics, and closed captioning.
Use Case: An example use case for Amazon Transcribe is in the healthcare industry. Transcribe can be used to transcribe medical dictations, enabling healthcare professionals to easily convert spoken notes into text, which can then be further processed or stored electronically.
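A minimal boto3 sketch of that flow, assuming the audio file already sits in a hypothetical S3 bucket:

```python
import boto3

transcribe = boto3.client("transcribe")

# Start an asynchronous transcription job for an audio file stored in S3.
transcribe.start_transcription_job(
    TranscriptionJobName="dictation-2024-01-15",                  # hypothetical job name
    Media={"MediaFileUri": "s3://example-bucket/dictation.mp3"},  # hypothetical location
    MediaFormat="mp3",
    LanguageCode="en-US",
)
# Poll get_transcription_job() later to fetch the resulting transcript URI.
```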
- [Amazon] Translate: A neural machine translation service that provides fast and accurate language translation.
[AWS] Trust & Safety: Team responsible for ensuring the security, privacy, and compliance of AWS services.
Description: The AWS Trust & Safety team plays a crucial role in maintaining the security and trustworthiness of the AWS platform. They are responsible for implementing security measures, monitoring for malicious activities, and enforcing compliance with industry standards and regulations. The team works closely with customers, partners, and internal AWS teams to address security concerns and maintain a secure cloud environment.
Use Case: The AWS Trust & Safety team works behind the scenes to protect AWS customers and their data from security threats. They continuously monitor the platform for any signs of abuse, fraud, or unauthorized access. In case of security incidents, they respond swiftly to mitigate risks and ensure the integrity and availability of AWS services. Their efforts contribute to building trust among customers and making AWS a secure and reliable cloud provider.
[AWS] Trusted Advisor: Service for providing best practices and optimization recommendations.
Description: Trusted Advisor analyzes AWS environments and provides recommendations to improve security, performance, and cost optimization.
Use Case: Security improvement, cost optimization, performance optimization.
Trusted Advisor is a standalone service and covers many topics. Rightsizing Recommendations, by contrast, is a feature of the Cost Explorer service and only looks at the cost of EC2 instances.
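The checks can also be listed programmatically through the AWS Support API, which requires a Business support plan or higher; a rough sketch:

```python
import boto3

# The Support API lives in us-east-1 and needs a Business support plan or above.
support = boto3.client("support", region_name="us-east-1")

for check in support.describe_trusted_advisor_checks(language="en")["checks"]:
    print(check["category"], "-", check["name"])
```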
U
V
- VDI: Virtual Desktop Infrastructure is used to decouple the desktop environment from the physical device.
VPC gateway endpoint: A gateway endpoint to establish a private connection to S3 or DynamoDB resources.
Description: Gateway VPC endpoints provide reliable connectivity to Amazon S3 and DynamoDB without requiring an internet gateway or a NAT device for your VPC. Gateway endpoints do not use AWS PrivateLink, unlike other types of VPC endpoints, and there is no additional charge for using them.

W
[AWS] Web Application Firewall (WAF): Protects applications from common web exploits and attacks based on request data (layer 7 only). It can identify the geographic location of IP addresses.
- Can be deployed on Application Load Balancer, API Gateway, or CloudFront.
- It defines a Web ACL (Web Access Control List):
- Rules can include IP addresses and any HTTP content.
- Protects against SQL Injection and Cross-Site Scripting.
- Can block large requests.
- Can block requests from certain countries.
- Can block based on rate of requests (e.g. a user can only make 5 requests per minute).
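As a sketch of that rate-limiting idea, here is a Web ACL with a single rate-based rule created through boto3. Note that WAF rate-based rules are evaluated over a trailing 5-minute window (so the "5 requests per minute" example above is illustrative only); all names and the limit here are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Block any single IP that exceeds 300 requests within the 5-minute window.
wafv2.create_web_acl(
    Name="rate-limit-acl",  # placeholder name
    Scope="REGIONAL",       # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "throttle-heavy-clients",
        "Priority": 1,
        "Statement": {"RateBasedStatement": {"Limit": 300, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "throttleHeavyClients",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rateLimitAcl",
    },
)
```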
[AWS] Well-Architected: Framework for designing and optimizing cloud architectures.
Description: AWS Well-Architected provides best practices for building secure, high-performing, resilient, and efficient applications on AWS.
[AWS] Wavelength: Service that allows deployment of some AWS services at the infrastructure of 5G providers that partnered with AWS.
Description: AWS Wavelength is a service that extends the capabilities of AWS to the edge of 5G networks. It enables developers to build applications that require ultra-low latency and high bandwidth by running them directly at the edge of the 5G network infrastructure. With Wavelength, developers can deploy their workloads in close proximity to end-users, reducing latency for applications like real-time gaming, augmented reality, and video streaming. Wavelength integrates with AWS services, providing a seamless development and deployment experience for edge computing applications.
Use Case: AWS Wavelength is ideal for applications that demand ultra-low latency and real-time responsiveness, leveraging the capabilities of 5G networks. Use cases include immersive gaming experiences, interactive live video streaming, connected vehicle services, and industrial IoT applications. Wavelength allows developers to deliver high-performance applications that require immediate response times by leveraging the benefits of edge computing.
- [Amazon] WorkSpaces: desktop infrastructure (VDI) where you can run Windows or Linux machines and access them through RDP software. They can stay active or be created on-demand.
X
- [Amazon] X-Ray: Service for analyzing and debugging distributed applications.
Y
Z
Shallow Dive
Architecture
What are the cloud architecture design principles about?
Cloud Architecture Design Principles are essential for designing robust and scalable infrastructure in the cloud. They include designing for failure, decoupling application components at the infrastructure level, leveraging the elasticity of the cloud, and thinking parallel by dividing application logic into individual components. These principles ensure resilience, scalability, and cost-efficiency in cloud-based systems.
What is the AWS Cloud Adoption Framework (CAF)?
The AWS CAF helps organizations to identify specific capabilities that sustain successful cloud migrations. These capabilities are grouped into six categories.
- Business: ensuring that your cloud investments accelerate your digital transformation ambitions and business outcomes.
- People: guiding the changes that the organization might need in their culture, structure, leadership, and workforce.
- Governance: orchestrating your cloud initiatives while maximizing organizational benefits and minimizing transformation-related risks.
- Platform: helping to build an enterprise-grade, scalable, and hybrid cloud platform. As well as implement new cloud-native solutions.
- Security: helps to achieve confidentiality, integrity, and availability of your data and workloads.
- Operations: guidance to deliver cloud services at the level the business needs.
For a deep dive, you can check this link.
What does “active-active” imply?
It is an approach for building reliable software by fully duplicating your systems across different data centers. It improves latency for your end-users, gives a better disaster recovery strategy, and allows you to operate in countries with strict regulations.
Let’s say your company is using the AWS us-east-1 Region for all your systems. Now, your market in Japan is expanding rapidly and the latency for those users is too high. Using the active-active architecture, you duplicate all your services to ap-northeast-3 and use a load balancing strategy to route users to the system closest to them.
Besides Amazon EC2, what other services can be reserved?
- Amazon RDS has an option to reserve instances of DBs for a one or three year term.
- Amazon DynamoDB allows reserving capacity for a one or three year term.
AWS Global Infrastructure
AWS is available across the world with different Regions. A Region is a cluster of data centers. Those data centers are grouped into Availability Zones. Besides that, AWS has an abundant amount of Points of Presence, some being Edge Locations.
- Regions have at least three Availability Zones and at most six. But most Regions have only three AZs.
- An Availability Zone is one or more data centers with redundant power, networking, and connectivity. The AZs are completely separate from each other but connected through a high-bandwidth, ultra-low-latency network. (Those data centers are called “discrete data centers” because their location and quantity are not disclosed to customers.)
- Most services have the scope of a Region. So things you do in one Region won’t be available in the others. For instance, if you create an EC2 instance in us-east-1, it won’t appear in other Regions just as is.
- You choose a Region to deploy your services considering some aspects: compliance, latency, available services, and pricing.
- AWS Points of Presence (PoP) are another network that AWS provides. This one includes more than 450 PoPs around the globe (13 being Regional Edge Caches and the others Edge Locations). A PoP hosts Amazon CloudFront, Amazon Route 53, and AWS Global Accelerator. A PoP may be a single server or a full Edge Location.
- An AWS Edge Location is a data center specialized in caching, that usually resides closer to its users. They are connected to AWS’s Regions through the AWS backbone.
Cloud Deployment Models
There are at least two different sets of definitions. One is used by the development community, the other by AWS.
Cloud Deployment Models by the community:
- Public cloud, which anyone can use on a pay-per-use basis (AWS, Cloudflare, Vercel);
- Private cloud, you own the infrastructure;
- Hybrid cloud, you use both public and private cloud;
- Community cloud, a shared infrastructure managed by organizations with common interests;
- Multi-Cloud, you use multiple public cloud providers.
And by AWS:
- Cloud, all applications are deployed in a CSP’s infrastructure;
- Hybrid, you use the services of a CSP as well as private infrastructure;
- On-premises, you use fully private infrastructure.
How to protect against a DoS or DDoS attack?
The things you can do to protect yourself against a DoS attack include:
- Reduce the attack surface. Hide every resource and port that does not need to be publicly accessible in a private network or with ACLs.
- Architect your applications and infrastructure to scale. Make sure your resources are scalable in network and server capacity. With web applications, you can also leverage CDNs and smart DNS resolution services to provide an additional layer of infrastructure for your users.
- Know the normal traffic baseline for your service. When you know the regular amount of traffic your services get, you can define rate limiting to block request overloads.
- Use specific protection services.
- Firewalls, like AWS WAF, can create rules based on information contained in the requests to deny those that seem fake.
- Use the AWS Shield service. It has two versions: Shield Standard, which comes by default on every AWS account and protects against known layer 3 and layer 4 attacks; and Shield Advanced, a paid service with more sophisticated protections against attacks from layer 3 to layer 7.
How to migrate applications and data from one AWS Region to another?
The AWS support team cannot perform the migration for you; they can only guide you on which path to take.
There is no off-the-shelf solution to migrate applications and data between AWS Regions. You need to manually create and delete the applications in their respective Regions. Here is a process you could follow:
- Assess the application. Evaluate the application and its dependencies to understand any region-specific configurations, services, or resources it relies on. Identify potential compatibility issues or regional limitations that may affect the migration.
- Set up the target Region. Create the necessary resources, such as VPCs, subnets, security groups, and any region-specific services required in the target region. Ensure that the target environment is properly configured and aligned with the requirements of the application.
- Network connectivity. Establish connectivity between the source and target regions to facilitate data transfer and application communication. This can be achieved through AWS Direct Connect, VPN connections, or other networking solutions.
- Start the data migration. Determine the best approach to migrate the application's data. This could involve options such as database replication, data synchronization, backup and restore, or using AWS services like AWS Database Migration Service (DMS) or AWS Snowball.
- Application migration. There are multiple strategies to migrate applications, depending on their architecture and complexity. Some common approaches include:
- Lift and Shift: Replicate the application infrastructure in the target region using equivalent AWS services and configurations. This involves re-creating the infrastructure and deploying the application components.
- Re-Architect: Optimize and re-architect the application to leverage region-specific services, enhance scalability, and improve performance. This may involve modifying the application code and architecture.
- Hybrid Approach: Implement a hybrid architecture where parts of the application remain in the source region while migrating specific components or functionality to the target region.
- Testing and Validation. Thoroughly test the migrated application in the target region to ensure functionality, performance, and reliability. Conduct testing and validation activities, such as load testing, functional testing, and user acceptance testing.
- Cut-Over and DNS Update. Plan for the cut-over window to switch traffic from the source region to the target region. Update DNS records or implement a traffic routing mechanism to direct users and clients to the new region.
- Monitor and optimize. Continuously monitor the migrated application and its performance in the target region. Optimize the application, infrastructure, and resource utilization based on the specific requirements and characteristics of the new region.
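As one small building block of such a migration, copying an AMI into the target Region could look like this boto3 sketch (all ids are placeholders):

```python
import boto3

# Run the copy from the *destination* Region, pulling the AMI from the source.
ec2_tokyo = boto3.client("ec2", region_name="ap-northeast-1")

copy = ec2_tokyo.copy_image(
    Name="app-server-v3",                   # placeholder
    SourceImageId="ami-0123456789abcdef0",  # placeholder
    SourceRegion="us-east-1",
)
print(copy["ImageId"])  # the new AMI id in ap-northeast-1
```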
What are the Disaster Recovery Strategies AWS can help with?
You need to use two different data centers, so it can be:
- on-premises infrastructure with AWS Cloud, or
- one AWS AZ with another AZ, or even
- one AWS Region with another AWS Region.
Once you decide which data centers you are going to use, you need to pick the strategy:
- Backup and Restore. The cheapest way to have a disaster recovery. We continuously back up the data from our data center to AWS and pay to restore them when needed.
- Pilot Light. You run only the core functions of your application in the cloud — but have it always ready to scale.
- Warm Standby. The full version of the application is ready to be used in the cloud, but with the minimum scale.
- Multi-Site/Hot-Site. The full version at full scale, always ready to be used.
How to protect against DDoS attacks on AWS?
There are a set of best practices to protect your infrastructure on AWS against DDoS attacks. By default, you already receive the AWS Shield Standard service. You can build your infrastructure to leverage that and other additional services.
- Amazon Route 53 and Amazon CloudFront can be entry points for your service. When you use them, AWS runs AWS Shield right at the first level of your application, preventing the DDoS attack from reaching your Amazon EC2 instances (or any other service).
- You can also leverage AWS WAF (Web Application Firewall) to ignore requests based on rules you build.
- If you pay extra, you can use AWS Shield Advanced.
- Your architecture should be ready to scale and handle the potential extra traffic that might leak into your applications.

Business
What are the AWS Support Plans?
AWS currently offers 5 support plans. They are described below in order of increasing support. Each plan includes the features of the previous one.
- Basic Support: is the default support plan and is included for free for anyone using AWS services.
- Has access to customer service, documentation, and the AWS re:Post community.
- Also includes limited access to AWS Trusted Advisor with their core checks.
- Developer: recommended when you are experimenting on AWS (non-production workloads).
- Allows connection with Cloud Support Associates during business hours through the web.
- Unlimited cases with 1 primary contact.
- The response time is under 24 hours for general guidance and under 12 hours for impaired systems.
- Extends the Basic plan’s limited access to AWS Trusted Advisor with Service Quota and basic Security checks.
- Business: if you have production workloads in AWS.
- Changes the access to Cloud Support Engineers to 24/7, instead of business hours. Also includes access through phone and chat.
- Access to AWS Support App in Slack.
- Increases to unlimited contacts.
- Response times no longer depend on business hours, and two new categories are added: production system impaired, response time under 4 hours; production system down, response time under 1 hour.
- Can ask for architectural guidance within the context of your use-case.
- Receives help with third-party software.
- Includes all checks from AWS Trusted Advisor.
- Access to AWS Support API for programmatic access.
- Can, for an additional fee, subscribe to AWS Managed Services.
- Enterprise On-Ramp: if you have business critical workloads in AWS.
- The response times receive another category: business-critical system down, response under 30 minutes.
- Can get one consultative review and guidance on your applications’ architecture per year.
- Access to a pool of Technical Account Managers to provide proactive guidance on improvements to your usage of AWS, as well as help you connect with subject experts.
- Receives account assistance from a Concierge Support Team.
- Enterprise: if you have business or mission-critical workload in AWS.
- Receives AWS Trusted Advisor Priority, with recommendations curated by your AWS account team.
- The response time when business/mission-critical systems are down reduces to under 15 minutes.
- Access to online self-paced labs.
- Unlimited contextual architectural guidance.
- For an additional fee, you can hire the AWS Incident Detection and Response service, which puts a team on proactive support of selected workloads. (If AWS Managed Services is hired, this is included at no additional cost.)
- Gets a dedicated Technical Account Manager to your account.
What is the AWS Cloud value proposition to businesses?
AWS proposes to take over part of the infrastructure pains that an IT organization has. It assumes responsibility for hardware and on-premises compliance requirements, and provides a range of services that only a large-scale infrastructure can offer.
It bills the customer for only what they use, so if the architecture and infrastructure are well-handled, the company can achieve the most optimal cost for infrastructure services.
How does cloud acquisition work? Is it different from on-premises infrastructure acquisition?
Cloud acquisition is completely different from on-premise infrastructure acquisition. There are six main parts to it:
- Procurement: when buying cloud resources, you don’t buy hardware, but access to standardized compute, storage, network, and other IT services, unlike on-premises infrastructure, where you buy hardware.
- Legal: since you are not buying hardware but “renting” it, you need to engage with the cloud service providers and make sure that you are well-aligned.
- Security: you should use third-party compliance standards to evaluate the cloud service providers’ security, instead of asking low-level infrastructure related questions. It just saves a lot of time for every party.
- Governance: this responsibility is shared between the customer and the cloud service provider. The cloud service provider offers a set of features so that the customer can bring their governance standards to the cloud.
- Finance: you won’t have fixed-price contracting. Instead, you’ll have pay-as-you-go pricing, which should be compared against the market pricing for cloud services.
- Compliance: before engaging with a cloud service provider’s features, ask them whether those features meet the specific compliance requirements you have.
All six parts of cloud adoption can — and should — be evaluated simultaneously. So if any of them renders the adoption of that specific provider unfeasible, you don’t waste any more time on them.
What kinds of APN Partners are there?
- APN Consulting Partners: professionals that help all types of customers to design, architect, build, migrate, and manage their workloads on AWS.
- APN Technology Partners: they provide hardware, connectivity services, or software solutions that are either hosted on, or integrated with, the AWS Cloud.
How to remove an account from the AWS Organization?
To remove an account from an AWS Organization, you need to follow these steps:
- Sign in to the AWS Management Console using the credentials of the AWS Organization's master account.
- Open the AWS Organizations console.
- In the left navigation pane, click on "Accounts."
- Locate the account you want to remove and select it.
- Click on the "Actions" button and choose "Move account."
- In the "Move account" dialog box, select "Leave organization" as the destination and click on "Move account."
- A confirmation dialog will appear. Review the details and click on "Confirm move account" to proceed with the removal.
- The account will be removed from the AWS Organization and will become a standalone account outside the organization.
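The same removal can be done programmatically from the management account; a minimal sketch (the account id is a placeholder, and the account must already have standalone billing details configured):

```python
import boto3

org = boto3.client("organizations")

# Must be called with management-account credentials.
org.remove_account_from_organization(AccountId="111122223333")  # placeholder account id
```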
What are the benefits of cloud computing?
- Agility. With easy access to infrastructure, you can innovate faster in a wide range of technologies.
- Elasticity. No need to over-provision resources; you can scale up (vertical) or out (horizontal), and back down or in, with ease.
- Cost savings. Trade fixed up-front expenses for paying only for what you use.
- Deploy globally in minutes. Expand to new geographic regions and deploy globally in minutes, leveraging the AWS Global Infrastructure.
Curiosity
Why are some services prefixed with “AWS” and others with “Amazon”?
Some services may not follow the pattern due to historical reasons, rebranding efforts, or alignment with specific product offerings. However, in general, the "AWS" prefix indicates core infrastructure and platform services provided directly by AWS, while the "Amazon" prefix denotes higher-level services or specialized offerings built on top of AWS infrastructure.
Database
What are the DynamoDB’s global tables?
They are DynamoDB table replicas distributed around the world. They automatically replicate data across the AWS Regions you choose and scale capacity on-demand. With these replica tables, your distributed applications can get single-digit millisecond (up to 9 ms) read and write performance.
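A sketch of adding a replica Region to an existing table with boto3; this assumes the table already meets the global-tables prerequisites (such as streams being enabled), and the table name is a placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Turn an existing table into a global table by creating a replica in another Region.
dynamodb.update_table(
    TableName="orders",  # placeholder
    ReplicaUpdates=[{"Create": {"RegionName": "ap-northeast-3"}}],
)
```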
Elastic Compute Cloud (EC2)
How are EC2 instances billed?
You pay for the time your instances are running and the data transfer.
Instances are billed in a per-second basis, but there is a one-minute minimum charge. So an instance that ran for 30 seconds, will be billed as 60 seconds. If it ran for 65 seconds, the charge would be for 65 seconds.
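This rule is easy to check with a quick calculation; a tiny sketch, assuming a made-up on-demand rate:

```python
# Per-second billing with a one-minute minimum charge.
def billed_seconds(runtime_seconds: int) -> int:
    return max(60, runtime_seconds)

rate_per_hour = 0.10  # imaginary on-demand rate, USD
for runtime in (30, 65):
    cost = billed_seconds(runtime) * rate_per_hour / 3600
    print(f"{runtime}s running -> billed {billed_seconds(runtime)}s -> ${cost:.6f}")
```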
What are the EC2 Instances purchasing options?
- On-Demand. Use when you are not sure if you will keep the workload active for a larger period of time.
- No up-front payment; pay only for your use.
- For Linux or Windows OSes: billing per second, after the first minute.
- For all other OSes, billing is per hour.
- Reserved. Use for long-term workloads, of 1 or 3 years commitment.
- You reserve specific instance attributes (e.g. instance type, region, tenancy, OS).
- You can choose a 1 or 3 year commitment and make an upfront payment to get bigger discounts.
- The reserved instance has a scope, either Regional or Zonal (specific to AZ).
- Reserved Instances can be bought or sold in the Reserved Instance Marketplace.
- Convertible Reserved. Use for long-term workloads where you need the ability to change the EC2 instance types.
- In exchange for a smaller discount, you are allowed to change the instance type, region, tenancy, and OS.
- Savings Plans. Use for a 1 or 3 year commitment of EC2 usage in dollars.
- Get up to the same discount as Reserved Instances, but based on a commitment to an amount of money spent.
- If you use EC2 instances more than what you committed to, you are going to be billed with on-demand pricing for the extra usage.
- You get locked into a specific instance family and Region (e.g. M5 in us-east-1), but you keep flexibility over instance size, OS, and tenancy (dedicated host, dedicated hardware, or shared).
- Spot. Use for workloads that can be interrupted.
- Up to 90% discount compared to on-demand.
- Dedicated Hosts. Use when you need to meet compliance or regulatory criteria that require an entire physical server.
- Can be purchased as on-demand or reserved for 1 or 3 years.
- Some software licenses have compliance requirements that demand knowledge of the underlying hardware, so a Dedicated Host is the option for this use case.
- It is the most expensive purchasing option.
- Dedicated Instances. Use when you need your instances to run on dedicated hardware.
- You do not have control over how instances are placed on a physical server.
- Hardware may be shared by instances in the same account.
- Pay for instance, not host.
- Capacity Reservations. Reserve on-demand instance capacity in a specific AZ for any duration.
- You pay for the reserved capacity whether the instances are running or not.
- There is no time commitment, and no discounts. You are just making sure that the specific AZ will have the required capacity waiting for you.
- If you want a discount, you can combine Capacity Reservations with Regional Reserved Instances or Savings Plans.
What are the auto-scaling groups strategies?
Auto-scaling groups have three numbers that drive the ASG’s behavior:
- Minimum. The absolute minimum number of instances that the group can have.
- Maximum. The absolute maximum number of instances that the group can have.
- Desired. The current number of instances the group has, but respecting the limits. (e.g. Let’s say desired is at 5, and maximum is 3. The number of instances will be 3.)
The scaling strategies are:
- Manual scaling. You update the desired quantity manually.
- Dynamic scaling. When a CloudWatch alarm is triggered.
- With simple or step scaling. Increases or decreases the desired amount of instances.
- With target tracking scaling. Increases or decreases the desired amount to keep a target metric value.
- Scheduled scaling. Increases or decreases the desired amount on a configured schedule.
- Predictive scaling. Increases or decreases the desired amount based on forecasts generated from historical CloudWatch data.
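As an illustration of target tracking, here is a boto3 sketch that keeps an ASG’s average CPU around 50% (the group and policy names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: the ASG adds/removes instances to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```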
Finance
What is the Shared Responsibility Model?
The Shared Responsibility Model defines the areas of responsibility for both AWS and the customer using its services. AWS is in charge of all hardware and infrastructure, and of the software services it provides as PaaS and SaaS. The customer, in turn, is fully responsible for their own applications and for their own and their users’ data.
What is consolidated billing?
Consolidated billing is a free feature of AWS Organizations that allows you to merge billing and payment of all accounts in your organization. With that feature enabled, you get some benefits:
- one bill for all accounts in your organization;
- combined tracking of charges in multiple accounts;
- option to leverage volume pricing discounts, Reserved Instances discounts, and Savings Plans;
How can credits be used on AWS?
Credits are vouchers that can be applied to bills of eligible services. So a credit of $50 for EC2 cannot be used toward S3 billing.
They are applied in the following order:
- soonest expiring;
- least number of applicable products;
- oldest credit.
For example, you have two credits available.
- Credit one is for R$ 50,00, expires in January 2024, and can be used for either Amazon S3 or Amazon EC2
- Credit two is for R$ 100,00, expires in December 2024, and can be used only for Amazon EC2.
- On the bill of December 2023 you incurred two charges: R$ 1000,00 for Amazon EC2 and R$ 500,00 for Amazon S3.
- Your credit one, which will expire sooner than credit two, will be applied to the Amazon EC2 charge. (I think that it doesn’t matter if it is applied to the EC2 or S3 charge, since you will have to pay for both.)
- Now, your bill is R$ 950,00 for Amazon EC2 and, still, R$ 500,00 for Amazon S3.
- Then, your second credit is applied to Amazon EC2 (since it can only be used for it) and your final bill is of R$ 850,00 plus R$ 500,00.
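The ordering rule itself can be expressed as a simple sort; a toy sketch with invented credit records:

```python
from datetime import date

# Toy credit records illustrating the application order.
credits = [
    {"amount": 50, "expires": date(2024, 1, 31), "products": ["EC2", "S3"], "issued": date(2023, 6, 1)},
    {"amount": 100, "expires": date(2024, 12, 31), "products": ["EC2"], "issued": date(2023, 1, 1)},
]

# Soonest expiring first, then fewest applicable products, then oldest credit.
order = sorted(credits, key=lambda c: (c["expires"], len(c["products"]), c["issued"]))
for credit in order:
    print(credit["amount"], credit["products"])
```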
What are the data transfer costs on AWS?
Data transfer charges apply based on the source, destination, and amount of traffic. The following scenarios describe the types of data transfer, but keep in mind that depending on the services, gateways, and infrastructure being used the charges may differ.
- From internet to AWS: no charge.
- From AWS to internet: depends on the service and originating Region.
- On Amazon EC2: first 10TB/Month costs $0.09 per GB; next 40TB/Month, $0.085 per GB.
- From one service to another
- within the same Region: depends on the services, if they reside in the same VPC and AZ, and through what they communicate.
- If using the NAT Gateway to communicate with other AWS services, there will be a per-hour charge. If you use the Internet Gateway, there will be no charge.
- in different Regions: there will be a charge and it depends on which Regions.
- within the same Availability Zone: no charge is applied.
- across different Availability Zones: typically there is a charge.
There are some general rules to keep data transfer costs at a minimum:
- When connecting to AWS services, use VPC endpoints instead of going through the internet.
- Use Direct Connect for sending data to your on-premises networks.
- Use resources in the same AZ whenever possible.
- Avoid cross-Region communication unless your business requires it.
- Use AWS Pricing Calculator to estimate the data transfer costs for your solution.
- Create a dashboard to better visualize data transfer charges. (Reference.)
Route 53
What are the routing policies available on Route 53?
- Simple routing policy: point your domain to a single resource.
- Failover routing policy: for active-passive failover.
- Geolocation routing policy: routing based on the location of users.
- Geoproximity routing policy: routing based on the location of your resources.
- Latency routing policy: when you have AWS resources in multiple regions and want to route to the one with the lowest latency.
- IP-based routing policy: when you know the IP addresses that originate the traffic and want to route based on them.
- Multivalue routing policy: route to a record selected at random from a pool of up to 8 different routing records.
- Weighted routing policy: route to multiple resources, sending a portion of traffic to each one.
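A sketch of one weighted record created with boto3 (the hosted zone id, domain, and IP are placeholders); a second record on the same name with SetIdentifier "eu" and Weight 30 would receive the remaining share of lookups:

```python
import boto3

route53 = boto3.client("route53")

# Upsert one of two weighted records; weights 70/30 split traffic roughly 70%/30%.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "us",
            "Weight": 70,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
        },
    }]},
)
```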
Security and Compliance
How do I know if a service is compliant with some regulation?
In AWS, different services meet different compliance standards. If you need a specific kind of compliance, look the individual service up in AWS Artifact. Artifact holds compliance information for all AWS services.
Can I run penetration tests against the AWS Cloud?
For tests that would only affect your systems, not AWS, no prior approval from AWS is needed. Only a subset of services is currently allowed, but the set grows from time to time.
For tests that might affect other AWS customers besides yourself (like DDoS, DNS zone walking, or Port/Protocol/Request flooding), you need prior approval from the AWS Security team.
The list of permitted services as well as the latest policy is outlined here.
What is the difference between the root user and the IAM user?
The root user has unrestricted access to everything in your AWS account, so you should not use it for daily activities, whether programmatic or administrative. After creating your AWS account, you should create an IAM user with administrative access for your daily activities. The only scenarios where you will actually need the root user are the following tasks:
- Change your account settings. This includes the account name, email address, root user password, and root user access keys.
- Restore IAM user permissions. If the only IAM administrator accidentally revokes their own permissions, you can sign in as the root user to edit policies and restore those permissions.
- View certain tax invoices. An IAM user with the aws-portal:ViewBilling permission can view and download VAT invoices from AWS Europe, but not AWS Inc. or Amazon Internet Services Private Limited (AISPL).
- Register as a seller in the Reserved Instance Marketplace.
- Request AWS GovCloud (US) account root user access keys from AWS Support.
How to enable encryption of data in transit and data at rest on AWS?
- Encryption of Data in Transit:
- Enable SSL/TLS: Use secure communication protocols (such as HTTPS, SSL/TLS) when accessing AWS services or transferring data over the network. Many AWS services offer SSL/TLS encryption by default.
- Use AWS PrivateLink: AWS PrivateLink enables private connectivity between VPCs and AWS services over the AWS network. It ensures data transmitted between services remains within the AWS network and doesn't traverse the public internet.
- Implement VPN or Direct Connect: Set up a Virtual Private Network (VPN) or AWS Direct Connect to establish an encrypted connection between your on-premises infrastructure and your VPC in the AWS cloud.
- Encryption of Data at Rest:
- Use AWS Managed Services: AWS provides various managed services that automatically encrypt data at rest. For example, Amazon S3 can encrypt objects using server-side encryption, and Amazon RDS can encrypt data stored in database instances.
- Implement AWS Key Management Service (KMS): AWS KMS allows you to create and manage encryption keys that can be used to encrypt data at rest. You can enable server-side encryption for services like Amazon S3 and Amazon EBS using KMS-managed keys.
- Utilize Encryption SDKs and Libraries: AWS provides Encryption SDKs and libraries that you can use to encrypt data before storing it in AWS services. These SDKs offer client-side encryption, where the data is encrypted before it reaches the AWS service.
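To make the at-rest side concrete, here is a sketch of uploading an object with server-side encryption under a KMS key (the bucket and key alias are placeholders); since boto3 talks to AWS over HTTPS endpoints by default, the in-transit side is covered as well:

```python
import boto3

s3 = boto3.client("s3")  # requests travel over HTTPS, covering data in transit

# Ask S3 to encrypt the object at rest with a KMS key instead of the default SSE-S3.
s3.put_object(
    Bucket="example-secure-bucket",  # placeholder
    Key="reports/2024-q1.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/app-data",    # placeholder key alias
)
```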
When is it advantageous to use on-premises infrastructure over the cloud?
- Security and compliance standards. In industries with strict data security and compliance requirements, such as finance, healthcare, and government sectors, organizations may prefer to keep their sensitive data on-premises. By having direct control over their infrastructure, they can implement specific security measures and ensure compliance with regulations without relying on a third-party cloud provider.
- Data Sovereignty and Control. Certain countries or organizations have strict regulations around data sovereignty, which require data to be stored and processed within specific geographical boundaries. In such cases, on-premises infrastructure allows organizations to retain full control over their data and ensure compliance with local laws.
- Latency and Performance. Applications that require real-time processing or low latency, such as high-frequency trading or certain scientific simulations, may benefit from on-premises infrastructure. By keeping the data and processing closer to the source, organizations can reduce network latency and achieve better performance compared to accessing resources over the internet.
- Cost Predictability. In some cases, on-premises infrastructure can provide cost predictability. Organizations with steady and predictable workloads may find it more cost-effective to invest in their own hardware and infrastructure upfront, rather than paying for cloud resources on a usage basis. However, this advantage may vary depending on factors such as scale, maintenance costs, and depreciation of hardware.
- Legacy Systems and Dependencies. If an organization heavily relies on legacy systems or applications that are not easily migrated to the cloud, maintaining on-premises infrastructure may be necessary. Rewriting or re-architecting such systems can be time-consuming and costly, making it more practical to continue using existing infrastructure until a more suitable solution becomes available.
- Internet Dependency and Outages. Cloud services rely on internet connectivity, and any issues with network connectivity or cloud provider outages can disrupt operations. In scenarios where uninterrupted availability is critical, such as remote areas with limited connectivity or mission-critical systems, on-premises infrastructure offers a more reliable and self-contained environment.
Does AWS have encryption SDKs?
Yes, it does: the AWS Encryption SDK. It’s free to use under the Apache 2.0 license.
Are S3 objects encrypted?
Since January 2023 all new objects uploaded to Amazon S3 are automatically encrypted with SSE-S3 (Server-Side Encryption with Amazon S3 managed keys). This comes at no additional charge or performance decrease.
What is a Customer Managed Key (CMK)?
It is a type of encryption key that can be created with AWS Key Management Service (KMS). This kind of key is, as the name implies, created and managed by the user. The other type of keys are the ones AWS manages (AWS managed keys).
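Creating such a key and using it directly takes a couple of calls in boto3; a minimal sketch (note that KMS encrypts small payloads directly, up to 4 KB):

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key, then encrypt/decrypt a small payload with it.
key_id = kms.create_key(Description="app data key")["KeyMetadata"]["KeyId"]

ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"super secret")["CiphertextBlob"]
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"super secret"
```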
Services
What is the difference between AWS Config and AWS CloudTrail?
AWS Config keeps track of resource configurations. You can, for instance, define remediation actions for when an EC2 instance has a blacklisted application installed.
CloudTrail, on the other hand, logs calls to the AWS management APIs. For example, you can store logs of all S3 API activity across your AWS Organization and query them on CloudWatch.
So they have similar purposes and some overlapping functionality, but AWS Config focuses on the configuration of resources and CloudTrail on logging calls to the AWS APIs.
Which are the differences between AWS PrivateLink and AWS Direct Connect?
You use AWS PrivateLink to privately expose a service in one VPC to other VPCs, without traffic leaving the AWS network. On the other hand, AWS Direct Connect provides a dedicated private connection between your on-premises infrastructure and the AWS Cloud.
What is the difference between Amazon EBS and Amazon EFS?
Both EBS and EFS are storage systems on AWS, but EBS is block-level storage and EFS is file-level storage. Another difference is that EBS can only be attached to EC2 instances (and to a single one at a time), while EFS can be used and shared by many different clients: EC2 instances, Amazon ECS, Amazon EKS, Lambda functions, and even on-premises infrastructure.
What are the differences between Amazon DocumentDB and Amazon DynamoDB?
Amazon DocumentDB and Amazon DynamoDB are AWS database services with the following differences:
- Data Model:
- DocumentDB: MongoDB-compatible document database.
- DynamoDB: NoSQL key-value database.
- Scalability and Performance:
- DocumentDB: Horizontal scaling, read replicas for high read scalability.
- DynamoDB: Seamless scalability, partitioning for high performance.
- Consistency Models:
- DocumentDB: Strong consistency.
- DynamoDB: Eventual consistency by default, option for strong consistency.
- Use Cases:
- DocumentDB: Flexible and scalable document database for various applications.
- DynamoDB: High scalability and low-latency access for dynamic workloads.
- Indexing and Querying:
- DocumentDB: Supports MongoDB query operations and indexing options.
- DynamoDB: Primary key-based querying, secondary indexes for flexible querying.
What are the differences between Amazon Inspector and AWS Trusted Advisor?
AWS Trusted Advisor is a tool that runs automated analyses of the security, performance, and costs of your infrastructure and makes recommendations on how to improve them. Not all of its features are available for free.
Amazon Inspector, on the other hand, is also an automated analysis tool, but one focused on security. It continuously scans your workloads for common vulnerabilities.
What are the differences between AWS Budgets, AWS Cost & Usage Report, and AWS Cost Explorer?
The services are all finance related, but each has its own use case that cannot be covered by the others.
- With AWS Budgets, you can configure alerts for when a certain budget threshold is crossed.
- With AWS Cost & Usage Report, you can generate detailed and customized reports of your usage within AWS.
- With AWS Cost Explorer, you have a web-based tool to visualize and perform some analytics on your AWS costs.
When to use AWS KMS or AWS CloudHSM?
Both services are specialized in managing the lifecycle of cryptographic keys. Their difference is that CloudHSM gives you access to the hardware, and you need to manage everything involved with the HSM, including creating users, setting their permissions, and creating the keys. On the other hand, KMS is a managed service in which you just use the cryptographic keys. KMS also uses devices of FIPS-validated HSMs, but AWS handles its complexity for you.
In a nutshell, use AWS KMS whenever you just need reliable cryptographic keys to encrypt things; if you need to manage the HSM module yourself and cannot trust AWS with that, go with AWS CloudHSM.
Storage
Can Amazon RDS automatically scale?
RDS databases can auto-scale only for storage, not for compute. Scaling computational power (CPU and RAM) can only be done by changing the instance type, which must be done manually.
What kinds of connection can AWS Storage Gateway do?
AWS Storage Gateway connects your on-premises services to AWS storage. It can currently:
- upload and download files as objects to Amazon S3, using NFS and SMB protocols with Amazon S3 File Gateway;
- connect to Amazon FSx with Amazon FSx File Gateway;
- connect to Amazon S3 for Tape Backup with Tape Gateway;
- connect to Amazon EBS with Volume Gateway.
Deep Dive
Databases
Remember
- Amazon Aurora is compatible with Postgres and MySQL.
- Amazon Aurora is built for the AWS Cloud.
- DynamoDB is a flagship AWS service.
- OLTP: Online Transaction Processing
- OLAP: Online Analytical Processing
- EMR is to run Hadoop clusters.
Notes
- Even though you can deploy your databases manually on EC2 machines, you will need to maintain them. If you’d like to just handle the data, you can use an AWS managed database.
- For relational databases, you can use Amazon RDS (Relational Database Service). It supports a variety of engines to choose from, but there is one AWS-owned solution that claims to be more performant. That solution is Amazon Aurora.
- Amazon Aurora is a database optimized for the AWS Cloud. It is compatible with Postgres and MySQL.
- With Amazon RDS, there are a couple of architecture choices you can make to improve reliability and performance:
- Read replicas: within the same AWS Region, you have multiple RDS instances to read from, which are kept up to date with the writes made on the main instance.
- Multi-AZ deployment: set up a different AZ with a failover instance. If the main AZ has an outage, RDS automatically switches to the other instance.
- Multi-Region deployment: to be resilient to Regional failures, you can create read replicas in another AWS Region. (The main instance is still used for writes.)
- You can use Amazon ElastiCache for deploying a Redis or Memcache instance. (From an architecture perspective, you can use this to reduce the load from your main database.)
- The flagship NoSQL database from AWS is Amazon DynamoDB; so AWS gives a lot of attention to it.
- Amazon DynamoDB is a key-value database. It is serverless, replicates to 3 AZs, has high performance, and has auto-scaling capabilities. For pricing, it is considered low-cost and has an Infrequent Access data category.
- DynamoDB also integrates well with IAM for authentication and authorization and has services created around it:
- DynamoDB Accelerator (DAX): managed in-memory cache that can speed queries up to 10 times.
- Global Tables: create active-active replication of DynamoDB tables on other AWS Regions. (Active-active replication means that all instances replicate among themselves.)
- For online analytical processing (OLAP) with a Postgres compatible database, you can use Amazon Redshift. It is based on Postgres but built with performance for analytical processes (data warehouse).
- Amazon Redshift is a columnar database that integrates with Amazon QuickSight and Tableau for visualizing data and facilitating queries.
- It loads data from other data sources every hour.
- It has massively parallel query execution.
- Highly scalable.
- An alternative to handle big data workloads is Apache Hadoop. Within AWS, you can run a managed cluster of Hadoop instances with Amazon Elastic MapReduce (EMR).
- It spins up EC2 instances configured with Apache Hadoop while leveraging auto-scaling and spot instances.
- If you wish to query unstructured data on S3 with SQL without setting up servers, you can use Amazon Athena. It is built upon Presto, an open-source engine with the same functionality.
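A sketch of running such a query with boto3 (the database, table, and results bucket are placeholders):

```python
import boto3

athena = boto3.client("athena")

# Athena runs the SQL against files in S3 and writes results to the output location.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",  # placeholder table
    QueryExecutionContext={"Database": "weblogs"},                           # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
print(query["QueryExecutionId"])
```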
- For business intelligence — dashboards with analytical views of your data — you can use Amazon QuickSight. It integrates with many AWS data-sources and can be embedded on your application as well as send notifications.
- QuickSight has ML powered features for forecasting and giving insights on your data.
- If you want to use MongoDB within AWS, you can use Amazon DocumentDB. It runs MongoDB in a managed manner, so you don’t have to worry about servers.
- AWS’s graph database service is Amazon Neptune.
- For a digital ledger, AWS offers two alternatives: Amazon Quantum Ledger Database (QLDB) and Amazon Managed Blockchain. Both are immutable ledger databases, but the managed blockchain is decentralized, whereas QLDB is a centralized database fully controlled by AWS.
- Data is unstructured and distributed across many sources. To perform an ETL (extract, transform, and load) process, you can use AWS Glue. It is a serverless service that can extract data from many data sources, perform transformations on that data, and load it into your data warehouse (like Redshift).
- Glue also has the AWS Glue Data Catalog, which is a database for metadata about your different datasets. It can be used by Athena, Redshift, and EMR.
- To migrate data from one database to another: AWS Database Migration Service (DMS). It is smart enough to handle migrations where the source and target are different DBMSs, as well as identical ones.
- For instance, it can migrate data from Oracle to Postgres.
Questions
🗃️ Resources
📚 Bibliography
Summary: AWS has a lot of database services, for relational ones RDS is the most appropriate and Aurora is specialized for the cloud; for key-value storage you can use its flagship database DynamoDB. When handling analytics, Glue can help to prepare data and later store it on Redshift. While studying unstructured data on your S3 buckets, you can query them with Athena. If you are already familiar with Hadoop, use Elastic MapReduce to set up a cluster.
CloudWatch
Remember
- CloudWatch Billing metric is available on us-east-1 and holds billing info from your whole account.
- Can use Alarms to trigger automatic changes to the infrastructure.
Notes
- CloudWatch includes different important features like: Metrics, Alarms, and Logs.
- CloudWatch Metrics can monitor specific variables, like CPU Utilization and Network inbound, over time. Some metrics are counter-intuitive:
- Billing: Total estimated charges for your account. Although the metric shows the total charge for the entire account, it is only available in CloudWatch in the us-east-1 Region.
- Service Limits: Measures how much you have been using the services APIs.
- You can also leverage CloudWatch Metrics to create your own metrics.
- Once you have the metrics you want, you can set up CloudWatch Alarms for them, which will trigger notifications or action.
- CloudWatch Alarms integrate with other services APIs to perform actions when a certain threshold is reached by a metric. For example, bump up the amount of desired EC2 instances for Auto Scaling groups when the traffic baseline increases.
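For example, an alarm on average CPU that notifies an SNS topic could be sketched like this (all names and the topic ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the ASG's average CPU stays above 80% for two 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",  # placeholder
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```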
- Logs can be extracted from many different places.
- ECS, Lambda, CloudTrail, EC2 machines, on-premises servers (with CloudWatch log agents), Route 53…
- By default, EC2 instances logs will not be sent to CloudWatch. You need to set up a CloudWatch agent on the EC2 instance to push the logs you want. For that to happen, the EC2 instances must have the right IAM Role.
Questions
🗃️ Resources
📚 Bibliography
Summary: CloudWatch is the observability hub of AWS. It has logs, metrics, and alarms.
VPC and Networking
Remember
- Use a NAT (Gateway or Instance) for private resources to access the internet.
- Put up a firewall for your subnet with NACL and for your instances with Security Groups.
- AWS probably created the separate VPC Endpoint types so it can charge the highest possible price where not enough people will complain.
- An Elastic Network Interface is a logical component of a VPC that represents a virtual network card.
- For complex networks, you can use Transit Gateway to simplify it.
Notes
- AWS has public and private IPv4 addresses. The public ones can be accessed through the Internet; the private ones can only be accessed from within private networks. AWS also supports public IPv6 addresses.
- EC2 instances get a new random public IPv4 address every time you stop and start them. If you do not want the IP to change on every restart, you can leverage Elastic IP.
- With Elastic IP, you can pay to have a static IPv4 address for an EC2 instance. Even if you terminate the instance, the address will still be yours.
- It is different for private IPv4 addresses. They are automatically generated once and stay static for your EC2 instance.
- A VPC (Virtual Private Cloud) is a private network linked to an AWS Region. It is partitioned in subnets.
- A subnet in AWS is a section of your VPC network specific to an Availability Zone, a Local Zone, an Outpost, or a Wavelength Zone. It can be public (accessible through the internet) or private.
- To specify access rules in the network, you use Route Tables.
- The VPC allows connection to the internet through the Internet Gateway. To use it, you create an Internet Gateway on the VPC and specify a routing rule to it, in the route table of a subnet.
- Once a subnet’s route table has a route for an Internet Gateway, it becomes a public subnet.
- For resources in a private subnet to use the internet, while remaining private, you need to use a NAT Gateway or a NAT Instance. The former is managed by AWS; the latter, by you.
- You can protect your subnet using a Network Access Control List (NACL). With the NACL firewall, you can define ALLOW and DENY rules by IP addresses and type of connection (UDP, TCP, Port specific).
- NACL by default denies all outbound traffic, but that can be overridden.
- Rules are processed by order (i.e. if your rule is not being used, there is one with higher precedence that also matches).
- If you’d like to protect specific instances, you can use Security Groups. You can only define ALLOW rules, so by default all traffic is denied, and rules can reference IP addresses or other Security Groups.
- They can be attached to EC2 instances and Elastic Network Interfaces, the logical components that represent a virtual network card and can exist detached from an EC2 instance.
- Security Groups by default allow all outbound traffic, but that can be overridden.
- Rules are processed all at once.
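An ALLOW rule added to a Security Group with boto3 might look like this sketch (the group id is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTPS from anywhere; everything not explicitly allowed stays denied.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)
```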
- AWS logs all the network traffic on your VPC with VPC Flow Logs. You can watch logs from your whole VPC, from a subnet, and even a specific Elastic Network Interface.
- This helps you troubleshoot connectivity issues from subnets, AWS services, and the internet.
- Can send the VPC Flow Logs into S3, CloudWatch Logs, and Kinesis Data Firehose.
- With VPC Peering, you can connect two VPCs in a private manner inside the AWS network. They work as if they were the same VPC.
- For the VPC Peering to be possible, there cannot be overlap in their CIDRs.
- VPC Peering is not transitive. If VPC X has a peering to VPC Y, and VPC Y has a peering to VPC Z, the VPC X cannot leverage that connection to interact with VPC Z. You need to create a direct connection from VPC X to VPC Z.
- If you need to expose a service to a number of other VPCs without leaving the AWS network, you can use AWS PrivateLink.
- It is different from the VPC Peering. You do not merge the VPCs, but connect specific services, where the application exposed has a Network Load Balancer to which the consumer application connects with an Elastic Network Interface (ENI). [Visual resource available]
- The standard connection from an instance inside your VPC to other AWS services is through the public network (the internet). For that connection to be private, you need to use VPC Endpoints.
- It comes in two different types: the VPC Gateway Endpoint, which supports only S3 and DynamoDB but is free; and the VPC Interface Endpoint, which supports most AWS services but is paid.
- The Gateway Endpoint is at the VPC level, same as the Internet Gateway. And the Interface Endpoint is at the subnet level.
- Both kinds of endpoint can only connect to a single service.
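Creating the free Gateway Endpoint for S3 is a single call; a sketch with placeholder ids:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Route S3 traffic from the given route tables through a private Gateway Endpoint.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",            # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder
)
```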
- When you require connecting the on-premises data-center to the AWS infrastructure, you have two options.
- Site to Site VPN: a logical encrypted connection between the two ends that runs through the public network. It connects a Customer Gateway to a Virtual Private Gateway.
- Direct Connect (DX): a physical private connection between the data-center to AWS. Since it requires a physical connection, it costs a lot more and takes at least a month to be established.
- For connecting your private computer to the AWS Network, you can use AWS Client VPN. It logically places your computer directly inside the AWS VPC, making the connection through the public network in an encrypted way.
- If you have a connection between your private data-center and the AWS VPC, you can leverage the same AWS Client VPN to privately connect to your on-premises services.
- If your network is getting too complex, with many different VPN services to manage, you can start using the AWS Transit Gateway. It is a centralized hub for all network connections. [Visual resource available]
Questions
- Can I use a public subnet with an ACL with forbidden inbound traffic in place of a private subnet with a NAT Gateway?
- Although it is technically possible, it goes against best practices. The ACL blocking inbound traffic does not provide the same level of secure isolation as a private subnet.
- Can I connect my AWS Client VPN directly to the AWS Transit Gateway?
- No, you cannot. You connect your AWS Client VPN to one VPC network and connect that network to the AWS Transit Gateway.
🗃️ Resources
📚 Bibliography
Summary:
Amazon Simple Storage Service (Amazon S3)
Remember
- Except for S3 One Zone-IA, all S3 Storage Classes store data in a minimum of three AZs.
- You set the Storage Class on a per-object basis when uploading it.
- Durability of 99.999999999% (11 nines) when data is stored across multiple AZs.
- Availability of 99.99% over a given year.
- By default, an Amazon S3 bucket is region-specific.
- Objects are only accessible if they have an ALLOW rule and no DENY rule.
- S3 Glacier is automatically encrypted.
Notes
- With Amazon S3, your data is stored in buckets. Those buckets are like file systems in the sense of folders and file structure.
- By default, your buckets are in a single region, but you can enable Cross-Region Replication to get your data automatically replicated into multiple ones.
- The Amazon S3 storage classes differ in aspects of data access, resiliency, and cost. The lowest possible cost depends on picking the right class for your use-case.
- Amazon S3 Intelligent-Tiering: monitors objects and automatically moves them to the most appropriate storage class depending on their access frequency. You can configure how long an object must go without being accessed before moving to the classes for more infrequent access.
- This monitoring and automatic moving of files has a monetary cost.
- You can use S3 Lifecycle policies to apply a similar transition mechanism (though they probably do not take access frequency into account).
- Amazon S3 Standard: for frequently accessed data.
- Amazon S3 Standard-Infrequent Access (S3 Standard-IA): for data that is less frequently accessed but requires rapid access when needed.
- Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA): equal to S3 Standard-IA, but data is stored in a single AZ.
- Since data is stored in a single AZ, it is recommended for data that can be easily recreated or whose loss is tolerable. For example, data replicated from another AWS Region with S3 Cross-Region Replication.
- Amazon S3 Glacier Instant Retrieval: archive for long-term data that needs rapid access (in milliseconds).
- You can save up to 68% on storage costs compared to S3 Standard-IA if your data is accessed once per quarter.
- Amazon S3 Glacier Flexible Retrieval (Formerly S3 Glacier): for data that is accessed 1–2 times per year and is with a retrieval time range from minutes to hours.
- Amazon S3 Glacier Deep Archive: for data accessed less than once a year, with retrieval of hours.
- Amazon S3 on Outposts: for data stored in your AWS Outposts on-premises environment. It enables you to use S3 APIs and other features on data residing in on-premises infrastructure.
- Security can be applied as user-based or resource-based. The former is done through IAM Policies; the latter can be done with:
- Bucket Policies: cross-account, bucket-wide rules managed from the S3 console.
- Access Control List (ACL) at bucket-level.
- ACL at object-level.
- Amazon S3 also has another layer of security: Block Public Access. It prevents data leaks from misconfigured buckets and can be set at either the bucket or the account level.
- You can enable bucket-level file versioning.
- Files are never overwritten; a new version is created.
- Files are never deleted; deletion becomes a soft delete.
- Deleting an object without specifying a version works as a soft delete (a delete marker is added); deleting a specific object version is a hard delete. See the versioning sketch after this list.
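To make the lifecycle mechanism above concrete, a minimal boto3 sketch that transitions objects to cheaper classes by age and eventually expires them. The bucket name, prefix, and day thresholds are illustrative placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to cheaper storage classes as they age, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},  # placeholder prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

And a sketch of the versioning behavior: enabling it at the bucket level, then the difference between a delete without a VersionId (soft delete) and one with a VersionId (hard delete). Bucket, key, and version ID are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder bucket name

# Enable versioning at the bucket level.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Without a VersionId, delete only adds a delete marker (soft delete).
s3.delete_object(Bucket=bucket, Key="report.csv")

# With a VersionId, that specific version is gone for good (hard delete).
s3.delete_object(
    Bucket=bucket,
    Key="report.csv",
    VersionId="EXAMPLE-VERSION-ID",  # placeholder version ID
)
```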
🗃️ Resources
📚 Bibliography
Summary: object storage with high durability and availability, with multiple pricing strategies designed as Storage Classes. It includes the option to use S3 APIs with on-premises data.
Identity and Access Management (IAM)
Remember
- An IAM user with administrator permissions is not the same as the root user.
- User groups cannot be nested. (They contain only users, not other groups.)
Notes
- Using IAM, you can create users for humans or applications to access the AWS console or API. A user consists of a username and credentials.
- IAM users can be grouped into collections called Groups. Each group can have its own permissions, which are applied to the group's users.
- Groups are a way to designate permissions, not authentication, which is still per-user.
- Both the number of groups that can exist in an AWS account and the number of groups a user can be a member of are limited.
- There is no default group that all users are assigned to. If you’d like one, you’d have to create and assign each user to it.
- To give temporary access to something on AWS, instead of manually giving a user some permissions and removing them afterward, you can use IAM roles. An IAM role has no long-term credentials. It just has temporary security credentials for a session.
- You can give IAM roles to users, inside and outside your AWS account, and resources (like EC2 instances).
- AWS security is based on permissions. They specify whether an IAM identity or resource can do something or not. But you do not give permissions directly to identities or resources; instead, you use Policies.
- The IAM identities are users, groups, and roles.
- An IAM Policy can be of a few types:
- Resource-based: inline policies attached to resources.
- Identity-based: grant permissions to an identity. They are JSON permission documents, which can be managed (standalone, assignable to any identity) or inline (linked to an identity and erased when the identity is deleted). For example:
{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "iam:CreateUser", "Resource": "*" } }
- You can define permission boundaries to limit what an IAM entity can be entitled to. For instance, if a user has a permission boundary allowing only S3 actions, they won't be able to operate on EC2 resources even if they later receive that permission. (Boundaries attach to users and roles, not groups; see the sketch below.)
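A minimal boto3 sketch of that idea. The policy name and the user "some-trusted-user" are hypothetical, and the user is assumed to already exist.

```python
import json

import boto3

iam = boto3.client("iam")

# Managed policy used as the boundary: allows S3 actions only.
boundary = iam.create_policy(
    PolicyName="S3OnlyBoundary",  # placeholder name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
    }),
)

# Cap the user's effective permissions with that boundary.
# Assumes the user "some-trusted-user" already exists.
iam.put_user_permissions_boundary(
    UserName="some-trusted-user",
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```

Even if this user is later granted EC2 permissions, the boundary keeps their effective permissions within S3 actions.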
🗃️ Resources
Summary: AWS Identity and Access Management (IAM) allows you to create users for humans or applications to access AWS services. Users can be grouped into collections called Groups, each with their own permissions. IAM roles provide temporary access to AWS resources. Permissions are managed through IAM policies, which can be identity-based or resource-based. Permission boundaries can be set to restrict entitlements for IAM entities.
Well-Architected Framework
Remember
- Operational excellence cannot be achieved in isolation; it needs to be closely connected to what it supports: business and development teams.
- It's vital to have standard processes to handle severe incidents (security breaches, application failures, …).
- There might be low-level hardware failures on cloud servers. The application should be resilient to them.
- Always use a data-driven approach for your decisions. Don’t be hasty. Benchmark your options with your current data.
- Do you want to optimize for speed to market or for cost?
- Always make sure to be making high usage of your deployed infrastructure. Not just for cost, but for the environment.
- The sustainability pillar shares many principles with both performance efficiency and cost optimization.
- Reliability has foundational requirements that must be in place before you architect your workloads. To ensure them, these services are vital: Amazon VPC, AWS Trusted Advisor, and AWS Service Quotas.
Notes
- It helps architects build secure, high-performing, resilient, and efficient infrastructure in the cloud. It's built around six pillars: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability.
- The operational excellence pillar (in-depth) includes the ability to support development and run workloads effectively. You do that by creating and evolving procedures after validating their effectiveness. It includes five design principles:
- perform operations as code (use code to define and change anything in your cloud infrastructure);
- make frequent, small, reversible changes;
- refine operations procedures frequently;
- anticipate failure;
- learn from all operational failures.
- Operational excellence is difficult to achieve when operation is perceived as a function isolated and distinct from business and development teams.
- On the security pillar (in-depth), we care about protecting data, systems, and assets leveraging cloud technologies. There are seven design principles on this pillar:
- implement a strong identity foundation;
- maintain traceability;
- apply security at all layers;
- automate security best practices;
- protect data in transit and at rest;
- keep people away from data;
- prepare for security events (always have incident management and investigation policy; run incident-response simulations and automate what you can).
- If your workload goes through a security incident, it should be quickly identified and responded to through a well-defined, practiced process.
- The reliability pillar (in-depth) encompasses the ability of a workload to perform its intended function consistently. This includes operating and testing the workload through its whole lifecycle. It contains five design principles:
- automatically recover from failure;
- test recovery procedures;
- scale horizontally to increase aggregate workload availability;
- stop guessing capacity (automate the change of capacity in regard to your demand);
- manage change in automation.
- For reliability, you should consider the service quotas available before you start architecting your workloads. For that, you can leverage AWS Service Quotas (see the sketch at the end of this list).
- Some of the important requirements that influence reliability are already handled by the cloud services provider. Others, like handling changes to your workload or its environment, are your responsibility.
- In data-centers, it is common for low-level hardware components to fail. Even though that is often abstracted away by the cloud provider, some events might affect your workload. Considering that, your workload should be architected to handle those failures.
- Performance efficiency (in-depth) is the pillar that includes using computing resources efficiently, as demand grows or shrinks, and as technologies evolve. This pillar’s design principles are:
- democratize advanced technologies;
- go global in minutes (be ready to use the multiple AWS Regions around the globe);
- use serverless architectures;
- experiment more often;
- consider mechanical sympathy (consider your use-case to select your tool).
- Gather data on all aspects of the architecture, from high-level design to monitoring performance, and review the decisions you make on a regular basis to ensure you are taking full advantage of your cloud provider services.
- Delivering business value at the lowest possible cost is the concern of the cost optimization pillar (in-depth). This pillar includes five design principles:
- implement cloud financial management;
- adopt a consumption model;
- measure overall efficiency;
- stop spending money on undifferentiated heavy lifting (since you are using cloud, stop spending money on things the cloud provider does for you);
- analyze and attribute expenditure.
- The sustainability pillar (in-depth) addresses the long-term environmental, economic, and societal impact of your business activities. This pillar includes:
- understand your impact;
- establish sustainability goals;
- maximize utilization;
- anticipate and adopt new, more efficient hardware and software offerings;
- use managed services;
- reduce the downstream impact of your cloud workloads (reduce the amount of energy or resources needed to consume your services).
- To improve sustainability goals, you can, for example, scale down the infrastructure when not needed, position resources close to where your users will access them (reducing network usage), and remove unused assets.
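As an example of checking foundations before architecting (the AWS Service Quotas point above), a small boto3 sketch. I believe the quota code below corresponds to "VPCs per Region", but treat it as an assumption and look codes up with list_service_quotas.

```python
import boto3

quotas = boto3.client("service-quotas")

# Look up a single quota before designing around it.
# "L-F678F1CE" is assumed to be the "VPCs per Region" quota code.
resp = quotas.get_service_quota(ServiceCode="vpc", QuotaCode="L-F678F1CE")
print(resp["Quota"]["QuotaName"], "=", resp["Quota"]["Value"])
```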
🗃️ Resources
📚 Bibliography
Summary: The AWS Well-Architected Framework provides guidance for architects to build secure, high-performing, resilient, efficient, and sustainable infrastructure in the cloud. It consists of six pillars with specific design principles, emphasizing collaboration, data-driven decision-making, standardized incident handling, workload resilience, and optimizing for what your business needs most.
Cloud Economics
Remember
- How operating in AWS can affect an organization’s ownership and operational costs of infrastructure.
- Identifying which operations could be done to optimize the cost of your infrastructure in the cloud is vital.
Notes
- The amount of money spent to keep your infrastructure running is treated as Total Cost of Ownership (TCO).
- The TCO has some important aspects: operational expenses (OpEx), capital expenses (CapEx), labor expenses for on-premises operations, and software licensing.
- OpEx describes the day-to-day costs of keeping your product up and running, from office pencils to office rent.
- CapEx covers long-term investments, such as buying a building or servers.
- Labor costs cover the workers you need to keep on-premises operations running.
- Some software licenses cannot be seamlessly moved to the cloud: they may cost differently when running there, or simply cannot be used in the cloud at all.
- Some of the main areas commonly targeted to reduce the cost of operations are:
- Right-sizing infrastructure. Since the cloud is elastic, you don't have to pay for your infrastructure as if it were running at peak demand all the time.
- Automation. AWS (and other cloud providers) has ways to automate scalability in accordance with current demand. You can use Infrastructure as Code to configure your infrastructure once and deploy it as many times as you wish (see the sketch after this list).
- Reduced compliance scope. Certain compliance aspects require physical controls, which you leave as the responsibility of AWS. You also have the opportunity to revisit any legacy systems.
- Managed services. AWS provides a variety of PaaS options that might do the same thing some of your services do. Leaving them to AWS reduces the scope of your concerns.
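As one concrete instance of right-sizing through automation, a minimal boto3 sketch of a target-tracking scaling policy that keeps average CPU around 50%. The Auto Scaling group name is a placeholder and the group is assumed to already exist.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking: Auto Scaling adds or removes instances to keep the
# group's average CPU near 50%, so you don't pay for peak capacity 24/7.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-web-asg",  # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```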
Summary: to evaluate whether you should move to the cloud, financially speaking, you need to weigh your current CapEx, OpEx, labor, and licensing costs. When moving to the cloud, you can apply some changes to your infrastructure and reduce your costs substantially.