Amazon AWS Certified Developer — Associate DVA-C02 Exam Dumps and Practice Test Questions Set15 Q211-225
Visit here for our full Amazon AWS Certified Developer — Associate DVA-C02 exam dumps and practice test questions.
Question 211:
What is the maximum number of concurrent executions for Lambda functions per region by default?
A) 500
B) 1000
C) 1500
D) 2000
Answer: B
Explanation:
The default concurrent execution limit for AWS Lambda functions per region is 1,000 concurrent executions, representing the total number of function instances that can run simultaneously across all functions in your AWS account within a specific region. This limit is a soft limit that can be increased by requesting a limit increase from AWS Support, but understanding the default limit is important for capacity planning and avoiding throttling in production environments.
Concurrent executions represent function instances actively processing requests at any given moment. If you have 10 functions in a region and they’re collectively executing 1,000 invocations simultaneously, you’ve reached the regional limit. Additional invocation requests beyond this limit are throttled, meaning Lambda returns errors (for synchronous invocations) or queues requests for retry (for asynchronous invocations from supported event sources).
The 1,000 concurrent execution limit is account-wide per region, shared across all Lambda functions unless you configure reserved concurrency. This shared pool model provides flexibility, allowing functions with variable workloads to borrow unused capacity from other functions. However, it also means one function experiencing high load can consume capacity needed by other functions, potentially causing widespread throttling.
Reserved concurrency allows dedicating a portion of your account’s concurrent execution capacity to specific functions, ensuring those functions always have capacity available. For example, you might reserve 200 concurrent executions for a critical API function, guaranteeing it can always scale to 200 concurrent invocations regardless of other functions’ activity. The reserved capacity is subtracted from the shared pool available to other functions.
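To make this concrete, reserved concurrency is set per function through the Lambda API. The following is a minimal boto3 sketch; the function name and the value of 200 are illustrative placeholders, not part of the question.

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserve 200 concurrent executions for a hypothetical critical function.
# The reserved amount is carved out of the account's regional pool.
lambda_client.put_function_concurrency(
    FunctionName="critical-api-handler",   # placeholder name
    ReservedConcurrentExecutions=200,
)

# Inspect the current setting (returns ReservedConcurrentExecutions).
print(lambda_client.get_function_concurrency(FunctionName="critical-api-handler"))
```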
Provisioned concurrency is a related but different concept. It pre-initializes a specified number of execution environments to eliminate cold starts, but provisioned instances still count against your concurrent execution limits. Provisioned concurrency doesn’t increase your total capacity; it ensures some capacity is pre-warmed for immediate use.
If you consistently approach or exceed the 1,000 concurrent execution limit, you should request a limit increase through AWS Support. AWS typically grants reasonable increases to accommodate legitimate workload requirements. You should also optimize function execution time to reduce concurrency consumption, since faster-executing functions complete sooner and free up concurrency capacity more quickly.
Understanding the concurrent execution limit is critical for architecting Lambda-based applications at scale. Exceeding limits causes throttling that impacts application availability and user experience. Proper capacity planning, monitoring of concurrent execution metrics, and requesting appropriate limit increases ensures your Lambda functions can scale to meet production workload demands.
Question 212:
Which CloudFormation intrinsic function is used to conditionally create resources?
A) Fn::If
B) Fn::Condition
C) Fn::Select
D) Fn::Equals
Answer: A
Explanation:
The Fn::If intrinsic function in AWS CloudFormation enables conditional creation of resources or conditional setting of resource properties based on condition evaluations defined in the template’s Conditions section. This function is fundamental to creating flexible, reusable CloudFormation templates that can adapt their behavior and resource creation based on input parameters, environment differences, or other criteria.
Fn::If evaluates a condition (defined in the Conditions section) and returns one of two values depending on whether the condition evaluates to true or false. The syntax is Fn::If: [condition_name, value_if_true, value_if_false]. You use Fn::If to conditionally set specific property values within a resource definition (including returning the AWS::NoValue pseudo parameter to omit a property entirely); to conditionally create an entire resource, you reference a condition name in the resource's Condition attribute, as described below.
CloudFormation Conditions are defined in a dedicated Conditions section of your template using logical functions like Fn::Equals, Fn::And, Fn::Or, and Fn::Not to evaluate expressions based on parameters or other values. For example, you might create a condition "IsProduction" that evaluates to true when an environment parameter equals "prod". You then reference this condition in Fn::If functions throughout your template.
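To make the structure concrete, here is a minimal template fragment sketched as a Python dict (CloudFormation accepts JSON, so the keys map directly). The parameter, condition, and resource names, as well as the AMI, subnet, and allocation IDs, are illustrative only.

```python
import json

# Hypothetical template fragment: an "IsProduction" condition drives both a
# conditional property value (Fn::If) and conditional resource creation (Condition).
template = {
    "Parameters": {
        "Environment": {"Type": "String", "AllowedValues": ["dev", "prod"]}
    },
    "Conditions": {
        "IsProduction": {"Fn::Equals": [{"Ref": "Environment"}, "prod"]}
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Fn::If chooses a property value based on the condition.
                "InstanceType": {"Fn::If": ["IsProduction", "m5.large", "t3.micro"]},
                "ImageId": "ami-12345678",  # placeholder
            },
        },
        "ProdOnlyNatGateway": {
            "Type": "AWS::EC2::NatGateway",
            # Condition (a condition name, not Fn::If) gates whole-resource creation.
            "Condition": "IsProduction",
            "Properties": {
                "SubnetId": "subnet-12345678",        # placeholder
                "AllocationId": "eipalloc-12345678",  # placeholder
            },
        },
    },
}

print(json.dumps(template, indent=2))
```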
Common use cases for Fn::If include creating resources only in certain environments (like creating a NAT Gateway only in production), choosing between different instance types or configurations based on parameters (using t3.micro in development but m5.large in production), conditionally enabling features like encryption or backup based on compliance requirements, and varying capacity settings based on expected load.
The power of Fn::If combined with Conditions enables creating single templates that work across multiple environments or scenarios. Rather than maintaining separate templates for development, staging, and production, you can use conditional logic to adapt a single template to different contexts based on parameters provided at stack creation time. This reduces template duplication and ensures consistency.
CloudFormation also supports using conditions directly on resources through a Condition property. When a resource has a Condition, CloudFormation only creates that resource if the condition evaluates to true, providing a simpler syntax for conditionally creating entire resources compared to using Fn::If for every property.
Fn::Equals compares two values for equality and is typically used within Conditions to define conditional logic, not directly for conditional resource creation. Fn::Select retrieves values from arrays. Fn::Condition doesn’t exist as a CloudFormation function. Fn::If is specifically designed for implementing conditional logic in resource creation and property values, making it essential for building flexible, environment-agnostic CloudFormation templates.
Question 213:
What is the purpose of Amazon Cognito user pools?
A) To provide temporary AWS credentials
B) To manage user authentication and authorization
C) To cache user session data
D) To synchronize user data across devices
Answer: B
Explanation:
Amazon Cognito User Pools provide fully managed user directory and authentication services for web and mobile applications, handling user registration, sign-in, account recovery, and multi-factor authentication without requiring custom authentication infrastructure. User Pools specifically focus on managing the authentication (verifying user identity) and authorization (determining user permissions and attributes) aspects of application security.
User Pools manage the complete user lifecycle including sign-up with customizable username and password requirements, email or phone verification, user profile attributes storage, password recovery through email or SMS, and account confirmation workflows. This eliminates the need to build and maintain custom user management systems, significantly reducing development time for authentication features.
The service provides multiple authentication flows including username/password authentication, OAuth 2.0 and OpenID Connect flows for social identity federation (signing in with Google, Facebook, Amazon, Apple), SAML-based enterprise identity federation, and custom authentication flows through Lambda triggers. This flexibility enables applications to support various authentication methods through a single service.
User Pools return JSON Web Tokens (JWTs) after successful authentication, including an ID token containing user identity information, an access token for API authorization, and a refresh token for obtaining new tokens without re-authentication. Applications use these tokens to verify user identity and authorize access to resources, implementing secure authentication without session cookies or server-side session storage.
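As an illustration, a server-side sign-in with boto3 might look like the sketch below. It assumes an app client with the USER_PASSWORD_AUTH flow enabled, no client secret, and no MFA challenge; the client ID and credentials are placeholders.

```python
import boto3

# Minimal sketch: username/password sign-in against a User Pool app client.
cognito = boto3.client("cognito-idp")

response = cognito.initiate_auth(
    ClientId="your-app-client-id",          # placeholder
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={"USERNAME": "alice", "PASSWORD": "S3curePassw0rd!"},  # placeholders
)

tokens = response["AuthenticationResult"]
id_token = tokens["IdToken"]            # user identity claims (JWT)
access_token = tokens["AccessToken"]    # API authorization (JWT)
refresh_token = tokens["RefreshToken"]  # obtain new tokens without re-authentication
```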
Customization capabilities include Lambda triggers that execute custom code during authentication flows, enabling custom validation, user migration from existing systems, authentication challenges, and post-authentication processing. You can also customize UI elements like hosted login pages, emails, and SMS messages to match your application branding.
Security features include adaptive authentication detecting suspicious sign-in attempts, compromised credential detection checking passwords against known breached credential databases, account takeover protection, and advanced security features like risk-based adaptive authentication requiring additional verification for risky sign-ins. These features provide enterprise-grade security without custom implementation.
While Cognito Identity Pools provide temporary AWS credentials for accessing AWS services, and Cognito Sync historically synchronized user data across devices (now superseded by AppSync), User Pools specifically focus on user directory management and authentication. Understanding that User Pools handle the "who is this user and are their credentials valid" aspect of security is essential for implementing authentication in applications using Cognito.
Question 214:
Which Step Functions state type is used to wait for a specific time period?
A) Delay
B) Wait
C) Pause
D) Sleep
Answer: B
Explanation:
The Wait state type in AWS Step Functions introduces delays into workflow execution by pausing state machine execution for a specified duration or until a specific timestamp, enabling implementation of timing requirements, rate limiting, polling intervals, and scheduled execution within workflows. This state type is essential for workflows requiring deliberate delays between operations or waiting for external processes to complete.
Wait states support multiple timing specifications. You can specify Seconds to wait for a fixed number of seconds, Timestamp to wait until a specific date and time, SecondsPath to wait for a duration specified in the input data, or TimestampPath to wait until a time specified in the input data. This flexibility enables both static waits defined in the workflow definition and dynamic waits based on runtime data.
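Because the Amazon States Language is JSON, these variants can be sketched directly as a Python dict; the state names, values, and input path below are illustrative only.

```python
import json

# Sketch of Wait state variants in Amazon States Language (ASL is JSON,
# so the dict maps directly to a state machine definition fragment).
states = {
    "WaitTenSeconds": {
        "Type": "Wait",
        "Seconds": 10,                        # fixed delay
        "Next": "CheckStatus",
    },
    "WaitUntilDeadline": {
        "Type": "Wait",
        "Timestamp": "2026-03-01T08:00:00Z",  # wait until a specific time
        "Next": "CheckStatus",
    },
    "WaitFromInput": {
        "Type": "Wait",
        "SecondsPath": "$.retryDelaySeconds", # delay taken from the state input
        "Next": "CheckStatus",
    },
}

print(json.dumps(states, indent=2))
```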
Common use cases for Wait states include implementing polling patterns where a workflow periodically checks external system status (check status, wait 30 seconds, check again), rate limiting by introducing delays between API calls to avoid overwhelming downstream services, scheduling by waiting until a specific time before executing subsequent operations, and implementing retry delays between retry attempts after failures.
Wait states are particularly valuable in combination with Choice states for implementing sophisticated polling logic. A typical pattern checks a condition, and if not met, transitions to a Wait state before looping back to check again. This enables workflows to wait for external processes like batch jobs, ETL processes, or approval workflows to complete before proceeding.
An important characteristic of Wait states is that they don’t consume compute resources while waiting. Step Functions simply schedules the workflow to resume at the appropriate time without any active execution, making Wait states cost-effective for delays ranging from seconds to up to one year (the maximum wait duration).
For Standard Workflows, executions remain in the Wait state for the entire duration, and you can see them as "in progress" in the console. The workflow resumes automatically when the wait period expires. There's no manual intervention required, and the wait time counts toward the workflow's maximum execution duration.
Step Functions doesn’t have state types called Delay, Pause, or Sleep. While these are intuitive names for delay functionality, Wait is the official state type name in Amazon States Language. Understanding Wait states and their various timing configurations enables implementing workflows with proper pacing, polling, and scheduling logic essential for many real-world automation scenarios.
Question 215:
What is the correct method to enable detailed monitoring for Lambda functions?
A) Configure CloudWatch detailed monitoring in function settings
B) Metrics are automatically sent to CloudWatch
C) Enable X-Ray tracing
D) Install CloudWatch agent in function
Answer: B
Explanation:
Lambda automatically sends metrics to CloudWatch without requiring any configuration or enablement, providing built-in monitoring for all Lambda functions by default. Every function execution generates metrics including invocation count, duration, errors, throttles, and concurrent executions, which are available in CloudWatch Metrics immediately without additional setup, unlike EC2 where detailed monitoring must be explicitly enabled.
The automatic metrics Lambda provides include Invocations (number of times the function was invoked), Errors (invocations resulting in function errors), Duration (execution time in milliseconds), Throttles (invocation requests rejected due to concurrency limits), DeadLetterErrors (failures sending events to dead-letter queues), ConcurrentExecutions (number of function instances running simultaneously), and UnreservedConcurrentExecutions (concurrent executions against the unreserved account pool).
These metrics are published to CloudWatch at one-minute granularity for all functions automatically. You can view them in the CloudWatch console, create alarms based on them, add them to dashboards, and analyze them using CloudWatch metrics math. This automatic monitoring provides immediate operational visibility into Lambda function behavior without configuration overhead.
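For example, alarming on one of these automatic metrics requires no Lambda-side configuration at all; the following is a minimal boto3 sketch, with the function name, alarm name, and threshold as placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if a (hypothetical) function reports more than 5 errors in a minute.
# The Errors metric is published automatically; nothing to enable on the function.
cloudwatch.put_metric_alarm(
    AlarmName="order-processor-errors",        # placeholder
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "order-processor"}],  # placeholder
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```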
Beyond the basic automatic metrics, Lambda also publishes detailed invocation records to CloudWatch Logs. Each function execution writes log output (from console.log, print(), or similar statements in your code) to a CloudWatch Logs log stream within a log group named after your function. These logs provide detailed execution information including function output, errors, and execution environment details.
The concept of «detailed monitoring» doesn’t apply to Lambda the way it does to EC2. With EC2, you must enable detailed monitoring to get one-minute metric granularity instead of five-minute granularity. Lambda always provides one-minute metrics automatically, and there’s no enhanced or detailed monitoring option because the standard monitoring is already comprehensive and fine-grained.
If you need even more detailed monitoring and tracing, you can enable AWS X-Ray tracing, which provides distributed request tracing showing how requests flow through your Lambda functions and other services. However, X-Ray provides tracing (request path and timing details), not basic metrics. X-Ray complements CloudWatch metrics but doesn’t replace them and serves a different monitoring purpose.
There’s no CloudWatch agent for Lambda functions — the agent is used for EC2 instances to collect system-level metrics and logs. Lambda’s execution environment automatically handles all metric collection and log forwarding to CloudWatch. Understanding that Lambda monitoring is automatic and comprehensive by default helps developers immediately leverage operational visibility without configuration complexity.
Question 216:
Which DynamoDB capacity mode is recommended for unpredictable workloads?
A) Provisioned capacity mode
B) On-demand capacity mode
C) Reserved capacity mode
D) Burst capacity mode
Answer: B
Explanation:
On-demand capacity mode is specifically designed for and recommended for unpredictable workloads where traffic patterns are unknown, highly variable, or have significant spikes that are difficult to forecast. This fully managed capacity mode automatically handles all capacity scaling decisions, eliminating capacity planning requirements while ensuring tables can accommodate any request rate from zero to peak capacity without throttling.
On-demand mode charges per request rather than for provisioned capacity, meaning you pay only for the read and write requests your application actually makes. This pay-per-use model is ideal for applications where usage is sporadic, unpredictable, or spiky — such as new applications without established traffic patterns, development and test environments with intermittent activity, or applications experiencing viral traffic spikes or seasonal variations.
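As a brief illustration, on-demand is simply a billing-mode setting on the table; here is a minimal boto3 sketch with placeholder table and attribute names.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create a (hypothetical) table in on-demand mode: no read/write capacity to specify.
dynamodb.create_table(
    TableName="Orders",                       # placeholder
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",            # on-demand capacity mode
)

# An existing provisioned table can be switched to on-demand the same way.
dynamodb.update_table(TableName="LegacyTable", BillingMode="PAY_PER_REQUEST")
```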
The mode handles virtually unlimited scaling automatically. DynamoDB accommodates traffic increases up to double your previous peak traffic within 30 minutes and continues scaling as needed for sustained higher traffic. This automatic scaling happens instantly without the delays associated with provisioned capacity auto-scaling, which takes minutes to respond to traffic changes and requires you to define target utilization and scaling policies.
For truly unpredictable workloads, on-demand mode eliminates the risk of over-provisioning (wasting money on unused capacity) or under-provisioning (experiencing throttling that impacts users). Since capacity adjusts instantly to actual demand, you don’t need to forecast traffic, set capacity values, or monitor utilization to adjust provisioning — DynamoDB handles everything automatically.
The tradeoff is cost. On-demand mode is more expensive per request than provisioned capacity when workloads are steady and predictable. However, for unpredictable workloads, the cost difference may be justified by the elimination of throttling, the reduction in operational overhead, and the ability to handle traffic spikes without pre-provisioning for peak capacity.
Provisioned capacity mode requires specifying expected read and write capacity units, making it appropriate for predictable workloads where you can accurately forecast traffic. While auto-scaling can adjust provisioned capacity automatically, it still requires traffic patterns stable enough to configure appropriate minimum, maximum, and target utilization values. Reserved capacity provides cost savings for provisioned capacity through long-term commitments but doesn't change capacity behavior. While provisioned tables can draw on accumulated burst capacity to absorb brief spikes, there is no "burst capacity mode" to select when configuring a table.
For unpredictable workloads, on-demand mode’s instant automatic scaling and pay-per-use pricing provide the simplicity and reliability needed without capacity planning complexity, making it the recommended choice for applications where traffic patterns cannot be accurately predicted.
Question 217:
What is the purpose of AWS SAM local CLI commands?
A) To deploy applications to AWS
B) To test serverless applications locally
C) To monitor production deployments
D) To manage IAM permissions
Answer: B
Explanation:
AWS SAM CLI local commands enable testing and debugging serverless applications on your local development machine before deploying to AWS, providing a local execution environment that simulates Lambda, API Gateway, and other AWS services. This local testing capability significantly accelerates development by enabling rapid iteration and debugging without deploying to AWS for every code change, reducing deployment time and AWS costs during development.
The primary local commands include "sam local start-api" which runs a local API Gateway-like HTTP server that routes requests to your Lambda functions, "sam local invoke" which directly invokes a specific Lambda function with test events, and "sam local start-lambda" which starts a local endpoint that emulates the Lambda invoke API. These commands use Docker to create containers mimicking the Lambda execution environment, ensuring local testing closely resembles actual AWS Lambda behavior.
sam local start-api is particularly valuable for API development, starting a local web server (typically on http://localhost:3000) that responds to HTTP requests by invoking your Lambda functions defined in your SAM template. You can use curl, Postman, or web browsers to send requests to your local API, testing endpoints, request validation, response formatting, and integration logic without deploying to AWS API Gateway.
sam local invoke allows testing individual functions with specific event payloads, useful for testing functions triggered by S3 events, SQS messages, DynamoDB streams, or custom events. You provide an event JSON file, and SAM invokes your function locally with that event, displaying logs and return values immediately. This enables debugging event processing logic and validating function behavior with various input scenarios.
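As a small illustration, a test event is just a JSON document you author (or generate) and pass to the CLI. The sketch below writes a trimmed, S3-style event; the bucket, key, and function logical ID are placeholders.

```python
import json

# Write a minimal, hand-rolled test event to pass to "sam local invoke".
# The structure below is a trimmed S3-style event; adjust it to your trigger.
event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "s3": {
                "bucket": {"name": "example-bucket"},      # placeholder
                "object": {"key": "uploads/photo.jpg"},    # placeholder
            },
        }
    ]
}

with open("event.json", "w") as f:
    json.dump(event, f, indent=2)

# Then, from the project directory (function logical ID from your SAM template):
#   sam local invoke ProcessUploadFunction --event event.json
```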
The local testing environment supports debugging by allowing you to attach debuggers to running functions. You can set breakpoints in your IDE, step through code execution, inspect variables, and debug issues interactively — capabilities difficult or impossible with functions running in AWS. This dramatically improves developer productivity when troubleshooting complex issues.
SAM local commands support environment variables, layers, VPC configuration (though actual VPC resources aren’t accessible locally), and other Lambda features, making the local environment representative of AWS execution. However, some AWS service integrations can’t be perfectly simulated locally, so comprehensive testing still requires deployment to AWS environments.
While SAM also provides deployment commands (sam deploy), monitoring capabilities through integration with CloudWatch, and works with IAM permissions defined in templates, the local commands specifically focus on local testing and debugging. Understanding SAM local capabilities enables developers to iterate quickly during development while maintaining confidence that locally-tested code will behave similarly when deployed to AWS.
Question 218:
Which CodePipeline action type enables manual approval before proceeding to next stage?
A) Manual action
B) Approval action
C) Review action
D) Gate action
Answer: B
Explanation:
The Approval action type in AWS CodePipeline enables manual approval gates within automated pipelines, pausing pipeline execution at specified points until designated reviewers manually approve or reject the continuation. This action type is essential for implementing human oversight in automated deployment workflows, ensuring critical changes receive appropriate review before proceeding to sensitive environments like production.
When a pipeline execution reaches an Approval action, CodePipeline pauses and sends notifications to configured SNS topics, which can distribute approval requests via email, SMS, or integration with collaboration tools like Slack. The notification includes information about what’s being deployed, the pipeline execution details, and links to review and approve or reject the continuation. Pipeline execution remains paused until someone with appropriate IAM permissions approves or rejects the action.
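As an illustration of the approval step itself, an approver (or automation acting on their behalf, with the required IAM permissions) responds through the CodePipeline API. The following boto3 sketch uses placeholder pipeline, stage, and action names and a placeholder token; the real token is available in the SNS notification and in GetPipelineState output.

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Respond to a pending approval action (all values below are placeholders).
codepipeline.put_approval_result(
    pipelineName="web-app-pipeline",
    stageName="ApproveToProd",
    actionName="ManualApproval",
    result={
        "summary": "Staging validation passed; releasing to production.",
        "status": "Approved",   # or "Rejected" to halt the pipeline
    },
    token="example-approval-token",
)
```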
Approval actions typically appear between deployment stages, such as between deploying to a staging environment and deploying to production. This pattern allows automated deployment to staging for testing, then requires explicit human approval after validating the staging deployment before proceeding to production deployment. This balances automation benefits with governance requirements for production changes.
You can configure Approval actions with custom notification messages providing context about what’s being approved, URLs linking to additional information like change tickets or test results, and specific SNS topics for different approval types. Multiple approvers can be configured through SNS topic subscriptions, though only one approval is typically required to proceed (or one rejection to halt).
The approval workflow tracks who approved or rejected continuations, when approvals occurred, and any comments provided during approval. This audit trail satisfies compliance requirements for change management, providing documented evidence that production deployments received appropriate approval before proceeding.
Approval actions integrate with IAM, allowing you to control who can approve pipeline continuations through IAM policies. This enables implementing segregation of duties where developers can commit code and initiate pipelines, but only operations or management personnel can approve production deployments. This separation enhances security and governance.
While "Manual action" might seem like a logical name, CodePipeline specifically uses "Approval action" for this functionality. Review action and Gate action aren't CodePipeline action types. Understanding Approval actions enables implementing automated pipelines with appropriate human gates ensuring critical deployments receive review and explicit approval before proceeding, balancing automation with governance.
Question 219:
What is the correct method to implement request throttling in API Gateway?
A) Configure Lambda concurrency limits
B) Set usage plans and API keys
C) Implement throttling in backend code
D) Use CloudFront rate limiting
Answer: B
Explanation:
Configuring usage plans and API keys in Amazon API Gateway provides the built-in mechanism for implementing request throttling and quota management at the API Gateway layer, enabling rate limiting, burst control, and request quotas without requiring backend code changes or external services. This native API Gateway feature allows controlling API consumption at both the overall API level and per-client level through API keys associated with usage plans.
Usage plans define throttling and quota limits that apply to API stages. You configure rate limits (steady-state requests per second), burst limits (maximum concurrent requests), and quotas (maximum requests per day, week, or month). When you create a usage plan, you associate it with one or more API stages, applying the configured limits to requests targeting those stages.
API keys identify clients making requests to your API. By requiring API keys for specific API methods and associating those keys with usage plans, you implement per-client throttling and quotas. Different clients can have different usage plans, allowing you to offer tiered service levels — perhaps free tier users get 100 requests per day while premium users get 10,000 requests per day.
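A minimal boto3 sketch of wiring these pieces together for a REST API follows; the plan name, key name, limits, API ID, and stage are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")  # REST API (v1) control plane

# Usage plan: 100 req/s steady state, burst of 200, 10,000 requests per day.
plan = apigw.create_usage_plan(
    name="premium-tier",                                   # placeholder
    throttle={"rateLimit": 100.0, "burstLimit": 200},
    quota={"limit": 10000, "period": "DAY"},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholders
)

# API key identifying one client, attached to the plan.
key = apigw.create_api_key(name="customer-acme", enabled=True)   # placeholder
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```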
When requests exceed configured limits, API Gateway automatically returns 429 Too Many Requests responses to clients, implementing throttling without requiring any backend code or Lambda function involvement. This protects backend systems from overload and enables fair resource allocation across API consumers. The throttling happens at the API Gateway layer before requests reach Lambda or other backend integrations.
API Gateway provides two levels of throttling granularity. Account-level limits apply to all APIs in your account and region, protecting against overall account-level quota exhaustion. Stage-level limits configured through usage plans provide finer control, allowing different limits for development versus production stages or implementing per-client limits through API key associations.
You can also configure method-level throttling overriding stage-level settings for specific API methods. This enables scenarios like applying stricter limits to expensive operations while allowing higher rates for lightweight operations.
While Lambda concurrency limits control how many function instances run concurrently, they don’t implement request-level throttling at the API layer. Implementing throttling in backend code is possible but requires custom logic, doesn’t prevent backend processing of throttled requests, and is less efficient than API Gateway throttling. CloudFront provides some rate limiting capabilities but is designed for content delivery rather than API throttling. API Gateway usage plans and API keys specifically implement API-level request throttling and quota management, making them the correct tool for controlling API consumption.
Question 220:
Which Lambda feature enables sharing code and dependencies across multiple functions?
A) Lambda aliases
B) Lambda versions
C) Lambda layers
D) Lambda extensions
Answer: C
Explanation:
Lambda layers enable sharing code, libraries, dependencies, custom runtimes, and configuration files across multiple Lambda functions by packaging these common components separately from function code and referencing them in function configurations. This feature eliminates code duplication, simplifies dependency management, and promotes code reuse across Lambda functions, making serverless applications easier to maintain and update.
A Lambda layer is a ZIP archive containing libraries, custom runtimes, or other function dependencies. When you create a layer, you upload the ZIP archive to Lambda, and Lambda extracts the contents to the /opt directory in the function execution environment when any function using that layer executes. Your function code can then access layer contents as if they were part of the function deployment package.
Layers support versioning, allowing you to maintain multiple versions of a layer simultaneously. Functions reference specific layer versions in their configuration, enabling controlled updates where you publish new layer versions and gradually update functions to use new versions, or immediately update all functions by changing their layer version references. This provides flexibility in dependency management and updates.
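For example, publishing a new layer version and pointing a function at it can be done with boto3 as in the sketch below; the layer name, S3 location, runtime, and function name are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish a new layer version from a ZIP previously uploaded to S3 (placeholders).
layer = lambda_client.publish_layer_version(
    LayerName="shared-utils",
    Content={"S3Bucket": "my-artifacts-bucket", "S3Key": "layers/shared-utils.zip"},
    CompatibleRuntimes=["python3.12"],
)

# Point a function at that specific layer version (replaces its current layer list).
lambda_client.update_function_configuration(
    FunctionName="order-processor",                 # placeholder
    Layers=[layer["LayerVersionArn"]],
)
```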
Common use cases for layers include sharing custom libraries or utility code across multiple functions so updates require only updating the layer rather than every function, distributing specific versions of dependencies like ML libraries or database drivers ensuring all functions use consistent versions, providing custom runtimes enabling use of programming languages not natively supported by Lambda, and sharing configuration files or certificate bundles needed by multiple functions.
Each function can reference up to five layers, and the total unzipped size of function code plus all layers cannot exceed 250 MB. Layers are regional resources: a function can only use layer versions published in its own region, so you must publish (or copy) a layer in every region where the functions that depend on it run.
AWS provides publicly available layers for common dependencies like the AWS SDK, database drivers, and observability tools. You can use these AWS-managed layers in your functions or create your own custom layers for organization-specific dependencies.
Lambda aliases provide friendly names for function versions enabling traffic routing and versioning, Lambda versions are immutable snapshots of function code and configuration, and Lambda extensions extend Lambda execution environment with monitoring and security tools. While all are valuable Lambda features, layers specifically enable code and dependency sharing across functions, addressing the need for centralized dependency management in serverless applications.
Question 221:
What is the purpose of DynamoDB conditional writes?
A) To encrypt data based on conditions
B) To write data only when specified conditions are met
C) To automatically retry failed writes
D) To replicate writes conditionally across regions
Answer: B
Explanation:
DynamoDB conditional writes enable writing, updating, or deleting items only when specified conditions are met, providing atomic conditional operations essential for implementing optimistic locking, ensuring data integrity, and preventing race conditions in concurrent access scenarios. This capability allows applications to enforce business rules and consistency requirements at the database level rather than relying solely on application logic.
Conditional writes work through condition expressions specified in PutItem, UpdateItem, or DeleteItem operations. These expressions evaluate attributes of the existing item (if any) before performing the write operation. If the condition evaluates to true, the write proceeds. If the condition evaluates to false, DynamoDB rejects the write and returns a ConditionalCheckFailedException, allowing your application to handle the failure appropriately.
Common use cases include implementing optimistic locking where you include a version number or timestamp attribute in items, read the item with its current version, and update it conditionally requiring the version hasn’t changed since you read it. This prevents lost updates in scenarios where multiple clients concurrently modify the same item. If the version changed (meaning another client updated the item), your conditional write fails, and you can re-read and retry.
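A minimal boto3 sketch of that optimistic-locking pattern follows; the table name and the "Price" and "ItemVersion" attributes are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Products")   # placeholder table

def update_price(product_id, new_price, expected_version):
    """Update only if the item's version hasn't changed since it was read."""
    try:
        table.update_item(
            Key={"ProductId": product_id},
            UpdateExpression="SET #price = :p, #ver = :new_v",
            ConditionExpression="#ver = :expected_v",
            ExpressionAttributeNames={"#price": "Price", "#ver": "ItemVersion"},
            ExpressionAttributeValues={
                ":p": new_price,
                ":new_v": expected_version + 1,
                ":expected_v": expected_version,
            },
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another writer updated the item; re-read and retry
        raise
```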
Conditional writes enable enforcing uniqueness constraints by writing items conditionally requiring the item doesn’t already exist (using attribute_not_exists condition). This prevents duplicate item creation in scenarios where primary key generation doesn’t guarantee uniqueness or where you need to enforce uniqueness on non-key attributes.
Business rule enforcement becomes straightforward with conditional writes. For example, you can decrement inventory quantities conditionally requiring the quantity remains positive, preventing negative inventory. You can update account balances conditionally requiring sufficient funds exist, preventing overdrafts. These atomic checks and updates eliminate race conditions that would occur with separate read-then-write operations.
Condition expressions support rich comparison operators including equality, inequality, greater than, less than, attribute existence checks, attribute type checks, and logical operators (AND, OR, NOT) enabling complex conditional logic. You can reference multiple attributes in conditions, implementing sophisticated validation logic executed atomically by DynamoDB.
The atomic nature of conditional writes is crucial. DynamoDB evaluates the condition and performs the write (if the condition is met) as a single atomic operation. There’s no possibility of the item changing between condition evaluation and write execution, ensuring data integrity even under high concurrency.
Conditional writes don’t relate to encryption, automatic retries (though applications should retry on throttling), or cross-region replication. They specifically enable atomic conditional data modifications ensuring writes occur only when application-defined conditions are satisfied, making them fundamental to building correct, concurrent-safe DynamoDB applications.
Question 222:
Which Step Functions integration pattern enables waiting for a callback response from external systems?
A) Request response integration
B) Wait for callback integration
C) Run a job integration
D) Asynchronous integration
Answer: B
Explanation:
The wait for callback integration pattern (also called the callback pattern with task tokens) in AWS Step Functions enables state machines to pause execution while waiting for external systems, human processes, or long-running jobs to send callback responses, providing a mechanism for integrating asynchronous external processes into Step Functions workflows. This pattern is essential for scenarios where external systems control timing and Step Functions must wait for their completion notification.
When using the callback pattern, Step Functions generates a unique task token when entering the state and passes this token to the external system along with any input data. The state machine then pauses execution, waiting for an external process to call the Step Functions API with the task token and either success data (SendTaskSuccess) or failure notification (SendTaskFailure). The workflow remains paused until receiving the callback or until a configured timeout expires.
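A minimal boto3 sketch of the callback side is shown below, assuming the external worker received the task token along with its work item; the error name and payload handling are illustrative.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

def complete_task(task_token, succeeded, payload):
    """Called by an external process once its work finishes (sketch)."""
    if succeeded:
        # Resumes the paused state with this output as the state's result.
        sfn.send_task_success(taskToken=task_token, output=json.dumps(payload))
    else:
        # Fails the paused state; Retry/Catch in the workflow can handle it.
        sfn.send_task_failure(
            taskToken=task_token,
            error="ExternalJobFailed",        # illustrative error name
            cause=json.dumps(payload),
        )
```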
This integration pattern is identified in state definitions by appending ".waitForTaskToken" to the resource ARN. For example, a Lambda function invoked with the callback pattern would use "arn:aws:states:::lambda:invoke.waitForTaskToken" as the resource. Step Functions knows to wait for the callback when it sees this pattern and includes the task token in the input passed to the integrated service.
Common use cases include integrating manual approval processes where a notification is sent to approvers containing the task token, and approvers’ responses trigger SendTaskSuccess or SendTaskFailure calls; integrating with external systems that process requests asynchronously and call back when complete; waiting for long-running jobs in external systems like data processing pipelines, third-party APIs with webhook responses, or legacy systems that can make HTTP callbacks.
The callback pattern enables workflow execution times exceeding what would be possible with synchronous integrations. For example, a Lambda function can run for a maximum of 15 minutes synchronously, but with the callback pattern, the Lambda function can start a long-running process, return immediately, and that external process can run for hours or days before sending the callback, all within a single Step Functions workflow execution.
Timeouts are configurable for callback states using TimeoutSeconds, preventing workflows from waiting indefinitely if callbacks never arrive. When timeouts occur, the state transitions to a failure state, and you can implement error handling through Retry or Catch configurations.
Request response integration (the default) waits for immediate synchronous responses, Run a job integration (.sync pattern) waits for jobs to complete but doesn't use explicit callbacks, and "asynchronous integration" isn't a specific Step Functions pattern name. The wait for callback pattern specifically implements the task token mechanism enabling external systems to control workflow progression through explicit callbacks, essential for integrating asynchronous external processes into Step Functions workflows.
Question 223:
What is the correct method to implement custom domain names for API Gateway?
A) Configure CNAME records pointing to API Gateway
B) Use Route 53 alias records with custom domain configuration
C) Update API Gateway stage settings only
D) Configure CloudFront distribution with custom domain
Answer: B
Explanation:
Implementing custom domain names for API Gateway requires creating a custom domain name resource in API Gateway with an SSL/TLS certificate, then creating Route 53 alias records (or CNAME records for edge-optimized APIs) pointing your custom domain to the API Gateway domain name provided when you configure the custom domain. This two-step process connects your custom domain (like api.example.com) to API Gateway APIs while ensuring secure HTTPS connections through proper certificate validation.
The process begins in API Gateway by creating a custom domain name resource specifying your desired domain name (like api.example.com) and providing an SSL/TLS certificate for that domain from AWS Certificate Manager (ACM). The certificate must match your custom domain name and be validated, proving you control the domain. API Gateway uses this certificate to establish HTTPS connections with clients accessing your custom domain.
After creating the custom domain name, API Gateway provides a target domain name (like d-abcdef123.execute-api.us-east-1.amazonaws.com for regional APIs or abcdef123.cloudfront.net for edge-optimized APIs). You create DNS records pointing your custom domain to this target domain. For regional APIs, use Route 53 alias records pointing to the API Gateway domain. For edge-optimized APIs, use CNAME records.
Base path mappings connect your custom domain to specific API stages. You can map different base paths to different APIs or stages, allowing multiple APIs to coexist under one custom domain. For example, api.example.com/v1 might map to your production API while api.example.com/v2 maps to a newer API version, all under the same custom domain.
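To tie the pieces together, here is a minimal boto3 sketch for a regional custom domain and one base path mapping; the domain name, certificate ARN, and API ID are placeholders, and the DNS record is noted in a comment rather than created.

```python
import boto3

apigw = boto3.client("apigateway")  # REST API (v1) control plane

# Regional custom domain backed by an ACM certificate in the same region (placeholders).
domain = apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
    endpointConfiguration={"types": ["REGIONAL"]},
)

# Map api.example.com/v1 to the "prod" stage of one API (placeholders).
apigw.create_base_path_mapping(
    domainName="api.example.com",
    basePath="v1",
    restApiId="a1b2c3d4e5",
    stage="prod",
)

# Finally, point DNS at the target: create a Route 53 alias (or CNAME) record
# for api.example.com targeting the returned regional domain name.
print(domain["regionalDomainName"])
```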
The choice between regional and edge-optimized custom domains affects configuration. Regional custom domains distribute API requests within a specific AWS region, suitable for APIs serving clients primarily in one geographic area. Edge-optimized custom domains use CloudFront distribution to serve requests from edge locations globally, providing lower latency for geographically distributed clients.
Custom domain names provide professional API URLs hiding AWS-specific domains, enable API versioning through base path mappings, allow migrating APIs between stages or implementations without changing client-facing URLs, and satisfy branding requirements for customer-facing APIs.
Simply configuring CNAME records without API Gateway custom domain configuration doesn’t work because API Gateway validates the Host header and requires proper custom domain setup. Updating stage settings alone doesn’t create custom domains. While you could manually configure CloudFront with custom domain, API Gateway’s built-in custom domain feature handles this for edge-optimized APIs automatically. The correct approach combines API Gateway custom domain configuration with appropriate DNS records, enabling custom domains with proper SSL/TLS certificate handling.
Question 224:
Which DynamoDB stream view type provides both new and old item images?
A) KEYS_ONLY
B) NEW_IMAGE
C) OLD_IMAGE
D) NEW_AND_OLD_IMAGES
Answer: D
Explanation:
The NEW_AND_OLD_IMAGES stream view type in DynamoDB Streams captures both the new item image (the item’s state after modification) and the old item image (the item’s state before modification) for write operations, providing complete before-and-after visibility into item changes. This comprehensive view type is essential for use cases requiring understanding both what changed and what the previous state was, such as audit logging, data replication with transformation, or triggering business logic based on specific attribute changes.
DynamoDB Streams capture item-level modifications to DynamoDB tables in near real-time, creating a time-ordered sequence of item-level changes. When you enable streams, you choose a view type determining what information appears in stream records for each modification. The view type significantly affects what your stream processing application can know about changes.
NEW_AND_OLD_IMAGES provides the most comprehensive information. For modify operations (updates), the stream record includes both the complete item before modification and the complete item after modification, enabling precise determination of what changed. For insert operations (new items), the record includes only the new item (no old image is present). For delete operations, the record includes only the deleted item (no new image is present).
This view type is valuable when your stream processor needs to make decisions based on specific attribute changes. For example, you might trigger notifications only when a status attribute changes from "pending" to "approved", requiring comparison of old and new values. Or you might implement audit logging tracking who changed what fields when, requiring both before and after states.
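A minimal sketch of a stream-processing Lambda handler that reacts only to that kind of transition is shown below; it assumes the stream is configured with NEW_AND_OLD_IMAGES, and the "Status" attribute and notification helper are illustrative.

```python
# Sketch of a Lambda handler for a DynamoDB stream configured with
# NEW_AND_OLD_IMAGES. Attribute names ("Status") are placeholders.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue  # inserts and deletes carry only one image

        old_image = record["dynamodb"].get("OldImage", {})
        new_image = record["dynamodb"].get("NewImage", {})

        old_status = old_image.get("Status", {}).get("S")
        new_status = new_image.get("Status", {}).get("S")

        # Act only on the specific transition; both images are needed to detect it.
        if old_status == "pending" and new_status == "approved":
            send_approval_notification(new_image)   # hypothetical helper

def send_approval_notification(item_image):
    print(f"Item approved: {item_image}")
```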
The tradeoff is stream record size and processing complexity. NEW_AND_OLD_IMAGES produces larger stream records (containing two complete item images) compared to other view types, potentially affecting stream processing throughput and Lambda function execution time when processing stream records. Your stream processing logic must also handle comparing old and new images to determine what changed.
The other view types provide less information: KEYS_ONLY includes only the item’s key attributes, suitable when you only need to know which items changed and can query the table for current state; NEW_IMAGE includes only the item after modification, suitable for maintaining read replicas or caches; OLD_IMAGE includes only the item before modification, useful for archiving deleted items.
Understanding stream view types and choosing appropriately based on your use case ensures stream processing applications have necessary information while minimizing stream record sizes and processing complexity. NEW_AND_OLD_IMAGES provides maximum visibility into changes, making it the correct choice when both before and after states are needed for stream processing logic.
Question 225:
What is the purpose of AWS CodeBuild buildspec file?
A) To define source code repository configuration
B) To specify build commands and settings
C) To configure deployment targets
D) To manage build artifacts storage
Answer: B
Explanation:
The buildspec file in AWS CodeBuild is a YAML or JSON formatted configuration file that defines the build commands, settings, environment configuration, and artifact specifications CodeBuild uses to execute your build process. This file serves as the build script describing exactly how CodeBuild should compile your code, run tests, produce artifacts, and handle build lifecycle events, making it central to CodeBuild’s build automation capabilities.
A buildspec file is organized into phases corresponding to build lifecycle stages: install (installing dependencies and runtime versions), pre_build (commands before building like logging into registries or running configuration scripts), build (actual compilation and testing commands), and post_build (commands after building like creating deployment packages or cleaning up). Each phase contains commands that CodeBuild executes sequentially, with the build failing if any command returns a non-zero exit code.
The file also specifies build environment configuration including environment variables needed during the build, runtime versions for build tools and languages (like Java 11, Node 16, Python 3.9), and parameter store or secrets manager references for secure credential access. This enables builds to access necessary tools and secrets without hard-coding sensitive information.
Artifacts configuration defines what files or directories CodeBuild should upload to S3 after successful builds. You specify artifact locations, file patterns for inclusion or exclusion, and whether to preserve directory structures. This ensures build outputs like compiled binaries, Docker images, or deployment packages are properly captured and made available for subsequent deployment steps.
Cache configuration optimizes build performance by specifying directories or files CodeBuild should preserve between builds. Common cached content includes dependency directories (like node_modules or Maven’s .m2 directory), reducing build time by avoiding re-downloading dependencies for every build.
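Pulling those pieces together, the following sketch shows a buildspec's overall shape. It generates buildspec.yml from a Python dict (assuming PyYAML is available); the commands, runtime version, and paths are placeholders for a hypothetical Node.js project.

```python
import yaml  # PyYAML, assumed available

# Sketch of a buildspec's phase, artifacts, and cache layout (placeholder commands).
buildspec = {
    "version": 0.2,
    "phases": {
        "install": {"runtime-versions": {"nodejs": 18}},
        "pre_build": {"commands": ["npm ci"]},
        "build": {"commands": ["npm test", "npm run build"]},
        "post_build": {"commands": ["echo Build completed on $(date)"]},
    },
    "artifacts": {"files": ["dist/**/*"], "base-directory": "."},
    "cache": {"paths": ["node_modules/**/*"]},
}

with open("buildspec.yml", "w") as f:
    yaml.safe_dump(buildspec, f, sort_keys=False)
```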
Buildspec files can be stored directly in your source repository’s root directory (named buildspec.yml by default) or as separate files referenced in CodeBuild project configuration. Storing buildspec files in source control ensures build process evolves with code and provides version history for build configuration changes.
The buildspec file is specifically about build execution — what commands to run, in what order, with what environment. It doesn’t configure source repositories (that’s configured in CodeBuild project settings), deployment targets (that’s CodeDeploy’s responsibility), or artifact storage locations (configured in project settings, though artifact names and contents are specified in buildspec). Understanding buildspec files is essential for implementing automated build processes in CodeBuild, as they define the complete build workflow from dependency installation through artifact creation.