Amazon AWS Certified Developer — Associate DVA-C02 Exam Dumps and Practice Test Questions Set5 Q61-75
Question 61:
Which CloudFormation section defines resources to create?
A) Parameters
B) Resources
C) Outputs
D) Mappings
Correct Answer: B
Explanation:
The Resources section in CloudFormation templates defines AWS resources to create, forming the core of infrastructure definitions. This section is the only required template section, containing resource declarations with logical IDs, types, and properties. Each resource specification tells CloudFormation what AWS service resource to create and how to configure it.
Resource declarations include a logical ID uniquely identifying resources within the template, enabling cross-referencing. The Type property specifies the AWS resource type like AWS::Lambda::Function or AWS::DynamoDB::Table. Properties define resource configuration specific to each type, such as function code locations, runtime versions, or table key schemas. Resources support additional attributes like DependsOn for controlling creation order.
CloudFormation evaluates templates, creating resources in an appropriate order based on dependencies. Some dependencies are implicit through references using the Ref or GetAtt intrinsic functions; explicit dependencies use the DependsOn attribute. CloudFormation provisions resources in parallel when possible, improving deployment speed while respecting dependency constraints.
Parameters define input values users provide during stack creation. Outputs export values from stacks. Mappings define lookup tables for conditional resource configuration. While these sections support resource configuration, only Resources actually defines what infrastructure to create. Understanding template structure enables creating comprehensive infrastructure as code definitions.
Best practices include using consistent logical ID naming, organizing resources logically within templates, adding metadata and descriptions for documentation, and avoiding hardcoded values by using parameters and mappings. Modular template design with nested stacks or stack exports creates reusable infrastructure components. CloudFormation Designer provides visual template editing and validation. Linting tools catch errors before deployment. Mastering CloudFormation template structure enables reliable, repeatable infrastructure deployments aligned with DevOps practices.
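A minimal template sketch illustrates the structure described above; the logical ID, table name, and key schema here are hypothetical, not part of any official example:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch - Resources is the only required section.
Parameters:
  TableNameParam:
    Type: String
    Default: ExampleTable        # hypothetical default
Resources:
  ExampleTable:                  # logical ID, cross-referenced via !Ref / !GetAtt
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Ref TableNameParam
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
Outputs:
  TableArn:
    Value: !GetAtt ExampleTable.Arn
```

Note how the Parameters and Outputs sections only feed into and out of the Resources section, which alone declares infrastructure.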
Question 62:
What is the maximum number of tags per Lambda function?
A) 10
B) 25
C) 50
D) 100
Correct Answer: C
Explanation:
AWS Lambda supports up to 50 tags per function, providing extensive metadata capabilities for resource organization, cost allocation, access control, and automation. Tags are key-value pairs enabling categorization across multiple dimensions like environment (dev, staging, prod), cost center, application, owner, or compliance requirements. Understanding tag limits and strategies enables effective resource management.
Tags serve multiple purposes in AWS environments. Cost allocation tags enable tracking expenses by project, department, or application across multiple functions and services. Resource groups organize functions by tags for bulk operations or monitoring. IAM policies reference tags for attribute-based access control, granting permissions based on resource tags. CloudWatch and X-Ray use tags for filtering and organizing metrics and traces.
Implementing consistent tagging strategies requires organizational standards defining required tags, valid values, and naming conventions. Common tags include Environment, Application, Owner, CostCenter, Project, and Version. Automation during function creation through CI/CD pipelines or infrastructure as code enforces tagging policies, preventing untagged resources. AWS Organizations tag policies enforce tagging requirements across accounts.
The 50-tag limit per function is sufficient for comprehensive tagging across multiple categorization dimensions. Exceeding this limit requires consolidating tags or reconsidering tagging strategy. Each tag key and value can be up to 128 and 256 characters respectively, allowing descriptive tags without excessive abbreviation.
Tag Editor in AWS Console enables bulk tagging operations across regions and services. AWS Resource Groups API enables programmatic tag management. CloudFormation and SAM templates include tag definitions for functions and other resources. Cost and Usage Reports break down costs by tags, enabling chargeback models. Service Catalog enforces tagging through product portfolios. Effective tagging transforms resource management from chaos to organized, governable infrastructure aligned with business requirements and operational practices.
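The limits discussed above (50 tags per function, 128-character keys, 256-character values) can be enforced client-side before tagging resources. A minimal sketch with a hypothetical tag set:

```python
# Lambda per-function tag limits (50 tags; key <= 128 chars, value <= 256 chars)
MAX_TAGS, MAX_KEY_LEN, MAX_VALUE_LEN = 50, 128, 256

def validate_tags(tags: dict) -> list:
    """Return a list of limit violations; an empty list means the tags are acceptable."""
    problems = []
    if len(tags) > MAX_TAGS:
        problems.append(f"too many tags: {len(tags)} > {MAX_TAGS}")
    for key, value in tags.items():
        if len(key) > MAX_KEY_LEN:
            problems.append(f"key too long: {key[:20]}...")
        if len(value) > MAX_VALUE_LEN:
            problems.append(f"value too long for key {key}")
    return problems

# Hypothetical organizational tag set following a required-tags standard
tags = {"Environment": "prod", "Application": "orders", "Owner": "platform-team"}
print(validate_tags(tags))
```

A check like this fits naturally into a CI/CD pipeline step that rejects deployments with non-compliant tags.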
Question 63:
Which API Gateway method enables clients to discover API capabilities?
A) GET
B) OPTIONS
C) HEAD
D) DESCRIBE
Correct Answer: B
Explanation:
The OPTIONS HTTP method in API Gateway enables CORS (Cross-Origin Resource Sharing) preflight requests, allowing browsers to discover which origins, methods, and headers are permitted for cross-origin API calls. When web applications hosted on different domains attempt to call APIs, browsers send OPTIONS preflight requests before actual requests to verify the API allows cross-origin access.
CORS configuration in API Gateway requires enabling CORS support and configuring allowed origins, methods, headers, and credentials. API Gateway can automatically generate OPTIONS method responses with appropriate CORS headers including Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers. Properly configured CORS enables modern web applications to securely interact with APIs across different domains.
Preflight requests occur for requests meeting certain criteria like using PUT, DELETE, or custom headers. Simple requests like basic GET requests skip preflight. Understanding when browsers send preflight requests helps debug CORS issues. Common CORS errors result from missing OPTIONS methods, incorrect header configurations, or overly restrictive origin specifications.
GET retrieves resources, HEAD retrieves resource metadata without bodies, and DESCRIBE isn’t a standard HTTP method. Only OPTIONS specifically handles CORS preflight discovery. Configuring OPTIONS methods properly is essential for browser-based API access from web applications.
API Gateway simplifies CORS configuration through console wizards or CLI commands that automatically create OPTIONS methods and mock integrations returning appropriate headers. For APIs requiring authentication, CORS configuration must allow Authorization headers. Wildcard origins (*) enable public API access but prevent credential sharing. Specific origin lists restrict access to known domains. Testing CORS with browser developer tools reveals configuration issues. Understanding CORS and OPTIONS method configuration enables building web applications that securely consume APIs across domain boundaries following browser security models.
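The preflight exchange can be sketched as a small function that builds the headers an OPTIONS integration would return; the origins and header lists here are illustrative, not an official configuration:

```python
def cors_preflight_headers(origin: str, allowed_origins: set) -> dict:
    """Build the response headers an OPTIONS (preflight) integration returns.
    Empty dict models a disallowed origin: the browser blocks the call."""
    if origin not in allowed_origins:
        return {}
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type,Authorization",
    }

# Hypothetical web app origin calling a cross-origin API
headers = cors_preflight_headers("https://app.example.com", {"https://app.example.com"})
```

Returning the Authorization entry in Access-Control-Allow-Headers is what permits authenticated browser requests, as noted above.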
Question 64:
What is the purpose of Lambda reserved concurrent executions?
A) Improve performance
B) Guarantee capacity and set limits
C) Reduce costs
D) Enable VPC access
Correct Answer: B
Explanation:
Lambda reserved concurrent executions serve dual purposes: guaranteeing minimum capacity availability for specific functions and setting maximum concurrency limits to protect downstream resources. When you configure reserved concurrency for a function, Lambda allocates that capacity exclusively to the function, preventing other functions from consuming it while limiting the function to that capacity regardless of demand.
Capacity guarantee ensures critical functions have sufficient concurrency during high account-wide demand. Without reserved concurrency, functions compete for shared unreserved capacity, and high-volume functions might exhaust capacity, throttling lower-volume functions. Reserving capacity for critical functions prevents this contention, ensuring availability for essential operations like payment processing or authentication.
Concurrency limiting protects downstream systems from overload by capping maximum concurrent executions. Functions accessing databases with limited connection pools benefit from concurrency limits preventing connection exhaustion. Similarly, functions calling rate-limited external APIs need limits matching API capacity. Setting reserved concurrency below account limits intentionally restricts function scaling to safe levels.
Reserved concurrency doesn’t directly improve performance, though guaranteed capacity prevents throttling delays. It doesn’t reduce costs; pricing remains based on actual executions. VPC access requires separate VPC configuration. Reserved concurrency specifically manages capacity allocation and limits, enabling both protective and guarantee scenarios.
Configuring reserved concurrency requires understanding function behavior, downstream dependencies, and scaling characteristics. Setting reserved concurrency to zero effectively disables functions, useful for emergency shutdowns without deleting resources. Monitoring throttles indicates when reserved concurrency is too low, requiring increases. Combining reserved concurrency with provisioned concurrency provides both capacity guarantees and reduced cold starts. Understanding concurrency management enables building reliable, well-behaved serverless applications that scale appropriately while protecting shared resources and downstream systems from overload.
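The dual behavior (capacity floor plus hard cap) can be illustrated with a toy admission check. This is a simulation of the decision, not actual Lambda internals:

```python
from typing import Optional

def admit(in_flight: int, reserved_limit: Optional[int], unreserved_available: int) -> bool:
    """Toy model: a function with reserved concurrency is throttled only by its
    own limit; a function without it competes for shared unreserved capacity."""
    if reserved_limit is not None:
        return in_flight < reserved_limit  # guaranteed floor, but also a hard cap
    return unreserved_available > 0

# Reserved concurrency of 0 acts as an emergency off switch
assert admit(0, reserved_limit=0, unreserved_available=100) is False
# A reserved function is unaffected by exhausted shared capacity
assert admit(5, reserved_limit=10, unreserved_available=0) is True
```

The two assertions mirror the shutdown and capacity-guarantee scenarios described above.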
Question 65:
Which service provides centralized logging for containerized applications?
A) Amazon CloudWatch Logs
B) Amazon S3
C) AWS CloudTrail
D) Amazon Kinesis
Correct Answer: A
Explanation:
Amazon CloudWatch Logs provides centralized logging for containerized applications running on ECS, EKS, or Fargate, automatically capturing container stdout and stderr output. Container orchestration services integrate with CloudWatch Logs through awslogs log driver configuration, streaming logs to CloudWatch without requiring custom logging code. This centralization enables comprehensive monitoring, troubleshooting, and analysis across distributed containerized environments.
Configuring CloudWatch Logs for containers involves specifying the awslogs driver in task definitions or pod specifications, defining log groups and optionally log streams. Container logs automatically stream to CloudWatch, organized by log groups representing applications or services and log streams representing individual containers or tasks. This organization facilitates filtering, searching, and analysis across container fleets.
CloudWatch Logs Insights enables querying container logs using specialized query language, aggregating data across multiple log streams and log groups. Complex queries identify error patterns, analyze request latencies, track specific transactions, or generate operational metrics from log data. Dashboards visualize query results, providing real-time operational visibility into containerized application behavior.
While S3 can store exported logs for long-term retention or analytics, it doesn’t provide real-time streaming or querying. CloudTrail logs API calls for audit, not application logs. Kinesis handles streaming data but requires custom integration for log collection. CloudWatch Logs specifically addresses centralized application logging with native container service integration.
Container logging best practices include structured logging using JSON formats for easier parsing, implementing appropriate log levels, avoiding sensitive data in logs, and configuring log retention periods balancing cost with compliance requirements. Metric filters extract metrics from logs, triggering alarms for error thresholds or performance degradation. Log subscriptions stream data to Lambda, Kinesis, or OpenSearch for real-time processing or advanced analytics. Understanding CloudWatch Logs capabilities enables comprehensive observability for containerized applications.
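The awslogs driver configuration described above lives inside a container definition in the ECS task definition. A sketch of that block, with hypothetical group, region, and prefix values:

```python
# Sketch of the logConfiguration block inside an ECS container definition;
# the log group, region, and stream prefix are hypothetical.
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/orders-service",   # one log group per service
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "orders",        # distinguishes streams per task
    },
}
```

With this in place, stdout/stderr from each container streams to CloudWatch Logs with no application-side logging code.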
Question 66:
What is the purpose of DynamoDB conditional writes?
A) Improve performance
B) Ensure data consistency and prevent conflicts
C) Reduce costs
D) Enable encryption
Correct Answer: B
Explanation:
DynamoDB conditional writes enable implementing optimistic locking and preventing conflicting concurrent updates by specifying conditions that must be true for writes to succeed. Conditions check attribute values, existence, or other characteristics before applying writes. If conditions fail, DynamoDB rejects the write with a ConditionalCheckFailedException, preventing data corruption from race conditions in distributed systems.
Conditional writes are essential for preventing lost updates when multiple clients modify the same item simultaneously. For example, inventory management systems use conditional writes ensuring quantities don’t decrease below zero or preventing double-spending. Conditional expressions check current quantity before decrementing, failing if insufficient inventory exists. This atomicity prevents common distributed system errors.
Conditions use expression syntax supporting comparisons, logical operators, and functions. Expressions can check attribute values match expected values (optimistic locking), attributes exist or don’t exist (preventing overwrites or duplicates), numeric values fall within ranges, or lists contain specific elements. Complex conditions combine multiple checks using AND, OR, and NOT operators.
Conditional writes don’t improve performance; they add processing overhead for condition evaluation. They don’t reduce costs; capacity consumption includes condition evaluation. They don’t enable encryption; encryption uses separate KMS settings. Conditional writes specifically ensure consistency and prevent conflicts through programmatic checks before data modifications.
Implementing optimistic locking typically includes version numbers or timestamps in items, incrementing versions on updates. Conditional expressions verify version numbers match expected values before updating, failing if versions changed indicating concurrent modifications. Applications retry failed conditional writes after refreshing data, ensuring latest values inform subsequent attempts. Understanding conditional writes enables building robust distributed applications maintaining data integrity despite concurrent access. Monitoring ConditionalCheckFailedException occurrences identifies high-contention items potentially requiring schema redesign or conflict resolution strategy adjustments.
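The version-number pattern above can be sketched as a builder for UpdateItem parameters; the table, key, and attribute names (status, version) are illustrative:

```python
def optimistic_update_params(table: str, key: dict, new_status: str, expected_version: int) -> dict:
    """Build UpdateItem parameters implementing optimistic locking: the write
    succeeds only if the stored version still matches what the client read,
    otherwise DynamoDB raises ConditionalCheckFailedException."""
    return {
        "TableName": table,
        "Key": key,
        "UpdateExpression": "SET #s = :s, version = version + :one",
        "ConditionExpression": "version = :expected",
        "ExpressionAttributeNames": {"#s": "status"},  # STATUS is a reserved word
        "ExpressionAttributeValues": {
            ":s": {"S": new_status},
            ":one": {"N": "1"},
            ":expected": {"N": str(expected_version)},
        },
    }
```

On a ConditionalCheckFailedException the application re-reads the item, picks up the new version number, and retries.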
Question 67:
Which Lambda feature reduces cold start latency?
A) Reserved concurrency
B) Provisioned concurrency
C) Memory allocation
D) Execution timeout
Correct Answer: B
Explanation:
Lambda provisioned concurrency initializes a requested number of execution environments before invocations arrive, keeping them warm and ready to respond immediately. This feature eliminates cold start latency for latency-sensitive applications by maintaining pre-initialized function instances. Provisioned concurrency is configurable per function version or alias, enabling selective performance optimization for production traffic.
Cold starts occur when Lambda creates new execution environments, including downloading code, initializing runtime, and executing initialization code outside handlers. This process adds latency ranging from milliseconds to several seconds depending on runtime, code size, and VPC configuration. Provisioned concurrency prevents cold starts by maintaining initialized environments continuously.
Configuring provisioned concurrency specifies how many concurrent environments to keep warm. These environments handle requests immediately without initialization delays. Provisioned concurrency works with application auto-scaling, adjusting based on schedules or metrics. For example, increasing provisioned concurrency before anticipated traffic surges ensures capacity and performance.
Reserved concurrency guarantees capacity and limits but doesn’t reduce cold starts. Memory allocation affects CPU power, not initialization behavior. Execution timeout controls how long functions run. Provisioned concurrency specifically addresses cold start latency through pre-initialization.
Provisioned concurrency incurs costs for maintaining warm environments based on configured concurrency and duration, in addition to standard invocation costs. Cost evaluation requires balancing performance requirements against budget constraints. Monitoring ProvisionedConcurrencyUtilization and ProvisionedConcurrencySpilloverInvocations metrics optimizes configuration, ensuring sufficient provisioned concurrency without over-provisioning. Combining provisioned concurrency with proper initialization code optimization, connection pooling, and minimal dependencies maximizes cold start reduction. Understanding provisioned concurrency enables meeting strict latency requirements for APIs, mobile backends, or any application where cold starts negatively impact user experience.
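Provisioned concurrency is configured against a version or alias; a sketch of the parameters for boto3's put_provisioned_concurrency_config call, with a hypothetical function and alias name:

```python
# Parameters for lambda_client.put_provisioned_concurrency_config(**params);
# the function name, alias, and count are hypothetical.
params = {
    "FunctionName": "checkout-api",
    "Qualifier": "prod",                     # alias or version; $LATEST not allowed
    "ProvisionedConcurrentExecutions": 20,   # environments kept initialized
}
# boto3.client("lambda").put_provisioned_concurrency_config(**params)
```

Pairing this with Application Auto Scaling schedules raises the count ahead of anticipated traffic surges, as described above.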
Question 68:
What is the maximum number of items in a DynamoDB batch write request?
A) 10
B) 25
C) 50
D) 100
Correct Answer: B
Explanation:
DynamoDB BatchWriteItem operations support up to 25 put or delete requests in a single batch, enabling efficient bulk operations compared to individual PutItem or DeleteItem calls. Batch operations reduce round trips to DynamoDB, improving throughput and reducing latency for applications writing multiple items. Understanding batch limits and behavior enables efficient data manipulation in high-volume scenarios.
Each batch request can include up to 25 operations across one or more tables, with total request size not exceeding 16 MB. Operations within batches can mix PutItem and DeleteItem requests freely. However, BatchWriteItem doesn’t support UpdateItem operations; updates require individual calls or creative patterns using conditional PutItem operations with complete item replacement.
Batch writes are not atomic; some items may succeed while others fail. DynamoDB returns unprocessed items when capacity limits are reached, throttling occurs, or individual item operations fail. Applications must implement retry logic for unprocessed items, typically using exponential backoff. This partial failure behavior requires careful error handling to ensure complete batch processing.
Batch operations consume write capacity proportional to item sizes and operation count, similar to individual operations. Successful operations consume capacity regardless of failures elsewhere in the batch. Monitoring consumed capacity and throttling helps size batches appropriately for provisioned capacity.
The 25-item limit balances efficiency with manageability and memory constraints. Processing larger datasets requires splitting into multiple batches. Parallel batch execution maximizes throughput when provisioned capacity supports it. Comparing batch versus individual operations shows batch operations significantly reduce overall latency and request overhead for bulk data manipulation. Understanding batch operation capabilities, limits, and error handling enables building efficient data ingestion pipelines, bulk updates, and batch processing workflows leveraging DynamoDB effectively.
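Splitting a larger dataset into 25-request batches, as described above, is a simple chunking exercise; the item shapes here are hypothetical:

```python
def chunk_requests(write_requests: list, batch_size: int = 25) -> list:
    """Split write requests into BatchWriteItem-sized chunks (25 max per call).
    Each chunk would be sent as RequestItems, and any UnprocessedItems in the
    response retried with exponential backoff."""
    return [write_requests[i:i + batch_size]
            for i in range(0, len(write_requests), batch_size)]

# 60 hypothetical put requests -> three calls of 25, 25, and 10
items = [{"PutRequest": {"Item": {"pk": {"S": f"item-{n}"}}}} for n in range(60)]
batches = chunk_requests(items)
assert [len(b) for b in batches] == [25, 25, 10]
```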
Question 69:
Which service provides secrets rotation automation?
A) AWS KMS
B) AWS Secrets Manager
C) Systems Manager Parameter Store
D) AWS IAM
Correct Answer: B
Explanation:
AWS Secrets Manager provides automated secrets rotation capabilities, automatically updating credentials in both Secrets Manager and target services on defined schedules. Rotation reduces security risks from long-lived credentials by regularly changing passwords, API keys, and other secrets without manual intervention or application downtime. This automation is a key differentiator from Parameter Store which stores secrets but doesn’t provide built-in rotation.
Secrets Manager includes pre-built rotation Lambda functions for RDS databases (MySQL, PostgreSQL, Oracle, SQL Server, MariaDB), Redshift, DocumentDB, and other AWS services. These functions handle the complete rotation process including creating new credentials, updating them in target services, and updating stored secret values. Custom rotation functions support rotating secrets for applications or third-party services without native support.
Rotation strategies include single-user rotation where credentials are changed in place, or alternating users where two sets of credentials alternate, ensuring one is always valid during rotation. The alternating strategy eliminates potential downtime during credential propagation. Secrets Manager orchestrates rotation, invoking Lambda functions and managing version staging throughout the process.
AWS KMS manages encryption keys, not secrets or rotation. Parameter Store stores configuration and encrypted secrets but lacks automatic rotation. IAM manages AWS access but doesn’t rotate application secrets. Secrets Manager specifically addresses comprehensive secret lifecycle management including rotation automation.
Enabling rotation requires specifying rotation schedules (30, 60, 90 days, or custom intervals) and selecting or creating rotation functions. Secrets Manager automatically creates necessary IAM roles and policies for rotation functions. Applications retrieve secrets using versioning labels like AWSCURRENT for current secrets, ensuring they always use valid credentials. Monitoring rotation failures through CloudWatch enables proactive remediation. Understanding Secrets Manager rotation capabilities enables meeting security compliance requirements mandating regular credential rotation while reducing operational overhead and eliminating manual rotation processes.
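Enabling rotation programmatically maps to boto3's rotate_secret call; a sketch of its parameters, with a hypothetical secret name, account ID, and rotation function:

```python
# Parameters for secretsmanager_client.rotate_secret(**rotation_params);
# the secret name, account ID, and function ARN are hypothetical.
rotation_params = {
    "SecretId": "prod/orders/db-credentials",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:rotate-db",
    "RotationRules": {"AutomaticallyAfterDays": 30},   # rotate monthly
}
# boto3.client("secretsmanager").rotate_secret(**rotation_params)
```

Applications then read the secret with the AWSCURRENT staging label so they always receive the active version, even mid-rotation.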
Question 70:
What is the purpose of Step Functions choice state?
A) Invoke Lambda
B) Implement conditional logic
C) Catch errors
D) Wait for time
Correct Answer: B
Explanation:
Step Functions Choice states implement conditional branching logic in workflows, enabling different execution paths based on input data or execution results. Choice states evaluate input against defined conditions, routing execution to appropriate next states based on which conditions match. This capability enables complex workflows with dynamic behavior adapting to data values, error conditions, or business logic requirements.
Choice states define multiple choices, each with conditions and next state specifications. Conditions use comparison operators testing string equality, numeric comparisons, boolean values, timestamps, or checking for null values and type matching. Complex conditions combine multiple checks using AND, OR, and NOT operators. A default state handles cases where no conditions match, preventing workflow failures from unexpected inputs.
Common use cases include processing workflows differently based on item types, implementing approval workflows where different stakeholders review based on request amounts, error handling where different recovery paths apply to different error types, or data validation workflows rejecting invalid inputs while processing valid ones. Choice states enable workflows to implement sophisticated business logic without Lambda functions for simple conditional routing.
Invoking Lambda uses Task states with Lambda resource types. Catching errors uses Catch clauses in states. Waiting uses Wait states. Choice states specifically implement conditional logic through branching. Understanding state types enables designing workflows leveraging appropriate states for each requirement.
Best practices include keeping Choice state logic simple and maintainable, documenting decision logic in workflow definitions, testing all branches including default paths, and using meaningful state names indicating routing logic. Choice states don’t incur state transition charges beyond entering and exiting the choice state, making them cost-effective for implementing complex conditional workflows. Combining Choice states with Parallel, Map, and Task states creates sophisticated orchestrations handling diverse scenarios efficiently while maintaining clear, understandable workflow definitions through visual representation in Step Functions console.
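The approval-routing use case above can be sketched in Amazon States Language; the state names and the $.amount field are illustrative:

```python
import json

# Sketch of a Choice state routing on a numeric input field.
choice_state = {
    "Type": "Choice",
    "Choices": [
        {"Variable": "$.amount", "NumericGreaterThan": 1000, "Next": "ManagerApproval"},
        {"Variable": "$.amount", "NumericLessThanEquals": 1000, "Next": "AutoApprove"},
    ],
    "Default": "RejectRequest",  # prevents a failure when no condition matches
}
print(json.dumps(choice_state, indent=2))
```

The Default branch is the safety net recommended above for unexpected inputs.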
Question 71:
Which DynamoDB operation updates item attributes without replacing entire items?
A) PutItem
B) UpdateItem
C) ModifyItem
D) PatchItem
Correct Answer: B
Explanation:
The DynamoDB UpdateItem operation allows modification of specific attributes within an existing item without replacing the entire item, making partial updates more efficient than performing a full read-modify-write cycle. UpdateItem accepts update expressions that specify which attributes to change and the manner of modification, enabling operations such as adding or removing attributes, incrementing or decrementing numeric values, appending to lists, and updating individual map elements. This targeted approach reduces write throughput consumption and improves application performance compared to fetching the item, modifying it locally, and using PutItem to overwrite the full item.
Update expressions employ specialized syntax with action keywords to define the modifications. The SET action is used to add new attributes or modify existing ones. REMOVE deletes specified attributes from an item. ADD can increment or decrement numeric attributes or add elements to number or string sets. DELETE removes elements from sets. To safely reference attribute names that are reserved words or contain special characters, expression attribute names and values are used, which also helps prevent injection vulnerabilities.
UpdateItem supports atomic operations such as counters, allowing numeric attributes to be incremented or decremented safely without race conditions even when multiple clients update the same attribute concurrently. Conditional expressions can be combined with updates to enforce rules, ensuring that modifications occur only when specified conditions are true. This functionality enables optimistic locking patterns to prevent conflicting updates and maintain data consistency.
Unlike UpdateItem, PutItem replaces an entire item or creates a new one, overwriting all attributes. DynamoDB does not have operations named ModifyItem or PatchItem; UpdateItem is specifically designed for targeted attribute modification and is essential for efficient and precise data updates.
UpdateItem also allows control over returned data. Return values include NONE, which returns no data; ALL_OLD, which returns the item as it existed before the update; UPDATED_OLD, which returns only the updated attributes prior to modification; ALL_NEW, which returns the entire updated item; and UPDATED_NEW, which returns only the modified attributes after the update. Selecting the appropriate return option balances the need for information with response size and consumed capacity.
Understanding UpdateItem—including update expressions, conditional updates, atomic counters, and return value options—enables developers to implement efficient, reliable, and scalable DynamoDB operations. These features simplify application code, reduce capacity usage, and provide precise control over item-level updates, ensuring high-performance and consistent data manipulation in production applications.
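The atomic-counter and return-value features above can be sketched as a parameter builder; the table, key, and attribute name (views) are illustrative:

```python
def increment_counter_params(table: str, key: dict) -> dict:
    """UpdateItem parameters for an atomic counter: ADD increments the numeric
    attribute server-side, so concurrent clients never lose updates."""
    return {
        "TableName": table,
        "Key": key,
        "UpdateExpression": "ADD views :one",
        "ExpressionAttributeValues": {":one": {"N": "1"}},
        "ReturnValues": "UPDATED_NEW",  # return only the new counter value
    }
```

Compare this with PutItem, which would require reading the current count first and would silently overwrite concurrent increments.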
Question 72:
What is the purpose of Lambda function aliases?
A) Encrypt data
B) Point to specific function versions
C) Store environment variables
D) Configure VPC access
Correct Answer: B
Explanation:
Lambda function aliases provide named pointers to specific function versions, enabling stable references for clients while simplifying version management behind the scenes. Each alias has a static Amazon Resource Name (ARN) that remains unchanged even if the alias is updated to point to a different function version. This allows integration points such as API Gateway, EventBridge, Step Functions, or IAM policies to reference aliases instead of version-specific ARNs, reducing the need to update configuration whenever new versions are deployed.
Common alias naming conventions include dev, test, staging, and prod, with each alias pointing to the appropriate function version for its environment. Updating an alias involves changing the target version it references, instantly redirecting traffic without modifying any client integrations. This indirection enables safe deployment strategies such as blue-green deployments, where a new version is deployed alongside the existing one and traffic is shifted only after validation. In case of issues, traffic can quickly revert to the previous version by updating the alias target, facilitating instant rollback.
Aliases also support weighted routing, allowing distribution of incoming requests across multiple versions simultaneously. For instance, an alias can route 90 percent of traffic to a stable production version while sending 10 percent to a newly released version for canary testing. By monitoring metrics such as invocations, errors, and latency split by version, developers can detect potential issues early before fully promoting new code. Gradually adjusting the traffic weights enables controlled rollouts that minimize risk.
It is important to note that aliases do not manage data encryption, environment variables, VPC access, or other function configurations; those are properties of the function itself. Aliases purely provide a mechanism for referencing versions and implementing deployment strategies.
Best practices include maintaining separate aliases for each environment, documenting their purpose clearly, integrating alias updates into automated CI/CD pipelines, and monitoring alias-specific metrics to track performance and error rates. Infrastructure as code tools such as CloudFormation or AWS SAM allow aliases to be managed declaratively, ensuring consistent configuration across environments. Combining aliases with versioning enables advanced deployment workflows including canary releases, blue-green deployments, and instant rollbacks, transforming Lambda from a simple execution service into a production-ready deployment platform that meets enterprise standards for reliability, operational control, and safe, controlled updates.
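The weighted-routing canary described above maps to boto3's update_alias call; a sketch of its parameters, with hypothetical function name and version numbers:

```python
# Parameters for lambda_client.update_alias(**params) implementing a canary:
# 10% of invocations through the alias go to version 8, the rest to version 7.
params = {
    "FunctionName": "checkout-api",   # hypothetical function
    "Name": "prod",
    "FunctionVersion": "7",           # primary version receives 90% of traffic
    "RoutingConfig": {"AdditionalVersionWeights": {"8": 0.10}},
}
# boto3.client("lambda").update_alias(**params)
```

Promoting the canary is then a single update setting FunctionVersion to "8" and clearing the routing config; rollback is equally a one-call change.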
Question 73:
Which API Gateway feature prevents abuse through request limits?
A) Caching
B) Throttling
C) Validation
D) Encryption
Correct Answer: B
Explanation:
API Gateway throttling prevents API abuse by enforcing request rate limits, protecting backend services from excessive traffic whether the source is legitimate activity, misconfigured clients, or malicious attacks. Throttling establishes a safety barrier between external traffic and internal systems, ensuring that sudden surges do not overwhelm compute, database, or downstream dependencies. It operates at several hierarchical levels—including account-level, stage-level, method-level, and per-client throttling through usage plans—offering granular control over how much traffic an API can receive.
API Gateway uses a token bucket algorithm to regulate traffic. At the account level, AWS imposes default throttling limits of 10,000 requests per second for steady-state throughput and a burst capacity of 5,000 requests. Burst capacity determines how many requests can be handled instantly before throttling begins, while the steady rate determines the sustained throughput. Stage-level throttling allows different environments (such as dev, test, and production) to have custom rate and burst limits. Method-level throttling goes even deeper by applying specific limits to individual API operations, ensuring that expensive or sensitive endpoints receive additional protection.
Usage plans provide per-client throttling when clients authenticate using API keys. This allows organizations to create differentiated service tiers such as free, standard, or premium plans, each with its own request limits. Individual customers or partner applications receive managed quotas, preventing a single client from consuming disproportionate resources.
When throttling limits are exceeded, API Gateway issues HTTP 429 Too Many Requests responses. Well-behaved clients are expected to implement retry strategies that use exponential backoff and jitter to avoid retry storms. These retry patterns help prevent clients from contributing to further load pressures during high-traffic events.
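The retry strategy described above is commonly implemented as "full jitter" backoff: each delay is drawn uniformly between zero and an exponentially growing ceiling, capped at a maximum. A minimal sketch (base and cap values are illustrative):

```python
import random

def backoff_delays(max_retries: int, base: float = 0.1, cap: float = 5.0):
    """Full-jitter exponential backoff: delay n is uniform in
    [0, min(cap, base * 2**n)] — the pattern a well-behaved client
    uses after receiving HTTP 429 responses."""
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0, ceiling)

delays = list(backoff_delays(6))
# Ceilings grow 0.1, 0.2, 0.4, ... but never exceed the 5-second cap.
print([round(d, 3) for d in delays])
```

The randomness (jitter) is what prevents many throttled clients from retrying in lockstep and re-creating the original traffic spike.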
Throttling is distinct from other API Gateway features. Caching reduces latency and backend load but does not restrict request volume. Request validation checks payload correctness but does not limit traffic. Encryption protects data confidentiality but offers no rate control. Only throttling directly controls request rates and protects systems from overload.
CloudWatch metrics such as Count, 4XXError, and Latency provide insight into throttling behavior; throttled requests surface as HTTP 429 responses counted within the 4XXError metric. Consistently elevated throttling rates may indicate that legitimate traffic exceeds provisioned limits, requiring limit adjustments. Throttling spikes from specific API keys can reveal client bugs, misuse, or potential attacks, prompting corrective actions or enhanced protection with AWS WAF.
Properly configuring throttling ensures that APIs remain stable, responsive, and secure even under unpredictable traffic conditions. By balancing client needs with backend safety, throttling plays a crucial role in maintaining API availability and resilience.
Question 74:
What is the maximum number of attributes in a DynamoDB Global Secondary Index?
A) 20
B) 100
C) 1600
D) No limit
Correct Answer: C
Explanation:
DynamoDB Global Secondary Indexes (GSIs) support up to 1600 total projected attributes when using the ALL projection type, which aligns with DynamoDB’s maximum item attribute count. This limit represents the total number of distinct attributes that may appear across all indexed items, not a per-item limit. Even if individual items contain fewer attributes, the union of all attributes that could appear in the index must remain within the 1600-attribute boundary. Understanding this projection limit is important for designing efficient schemas, especially for workloads using wide or semi-structured items.
Projection types determine which attributes the GSI stores in addition to the index key attributes. KEYS_ONLY projects only the table’s primary key attributes and the index’s partition and sort key attributes. This results in the smallest index footprint, minimizing both storage and write costs. INCLUDE projection allows specifying a list of non-key attributes to include, providing a balance between storage efficiency and the ability to satisfy common queries directly from the index. ALL projection copies every attribute from the source item into the index, offering maximum flexibility but at the highest storage and write amplification cost.
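The three projection types above appear in the `Projection` element of a GSI definition. The sketch below shows the structure a `create_table` (or `update_table`) call would receive for an INCLUDE projection; the index name, key attributes, and projected attributes are illustrative examples.

```python
# Hypothetical GSI definition using INCLUDE to project only the
# non-key attributes that common queries actually read.
gsi = {
    "IndexName": "status-date-index",  # example name
    "KeySchema": [
        {"AttributeName": "status", "KeyType": "HASH"},
        {"AttributeName": "orderDate", "KeyType": "RANGE"},
    ],
    "Projection": {
        "ProjectionType": "INCLUDE",   # one of KEYS_ONLY | INCLUDE | ALL
        "NonKeyAttributes": ["total", "customerEmail"],
    },
}

# KEYS_ONLY would drop NonKeyAttributes entirely; ALL would copy every
# item attribute into the index at higher storage and write cost.
print(gsi["Projection"]["ProjectionType"])
```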
Choosing the right projection type requires analyzing your read patterns. If queries consistently access the same subset of attributes, an INCLUDE projection with precisely those attributes avoids unnecessary table reads and keeps the index relatively small. For applications where query requirements vary widely or where developers need full item representation for analytics-style queries, ALL projection might be necessary. However, this increases index storage cost and can roughly double write capacity consumption, since every update to a projected attribute must also be propagated to the GSI.
Although DynamoDB technically supports 1600 projected attributes, most practical workloads use far fewer. Wide tables with hundreds of dynamic or sparse attributes must consider whether ALL projection is appropriate, as it can significantly increase index size. Sparse indexes help reduce costs because they only include items where the index keys are present, omitting irrelevant records automatically.
Monitoring GSI storage, write usage, and read patterns helps ensure projection choices remain cost-effective. If an index grows large or consumes significant write capacity, consider switching from ALL to INCLUDE projection or reducing the attribute set. Regularly reviewing access patterns can prevent unnecessary expansion of attribute counts over time. Understanding projection limits and behaviors allows teams to design GSIs that efficiently support application queries while maintaining predictable performance and optimized storage costs.
Question 75:
Which service provides real-time communication for web applications?
A) API Gateway REST APIs
B) API Gateway WebSocket APIs
C) Amazon SNS
D) Amazon SQS
Correct Answer: B
Explanation:
API Gateway WebSocket APIs enable real-time, bidirectional communication between clients and backend services, making them suitable for use cases such as chat applications, collaborative editing tools, multiplayer gaming, live dashboards, IoT device messaging, and financial trading platforms. Unlike traditional HTTP request-response interactions, WebSocket connections remain open and persistent, allowing servers to push data to clients the moment events occur. This eliminates the need for polling or long-polling mechanisms, drastically reducing latency, network overhead, and backend processing costs.
WebSocket APIs in API Gateway are structured around routes that correspond to different message types or connection lifecycle events. The $connect route triggers when a client first establishes a WebSocket connection; this is where authentication, API key validation, and connection registration typically occur. The $disconnect route runs when a connection is closed, whether intentionally or due to network issues, allowing backend systems to remove stale connection records. Custom routes are used for application-defined message types, which invoke Lambda functions, HTTP backends, or other AWS services to process inbound messages, perform business logic, and send responses or broadcasts to connected clients.
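The route dispatch described above often lands in a single Lambda function that branches on the `routeKey` API Gateway places in the request context. A minimal sketch follows; the custom route name `sendmessage` and the response bodies are illustrative assumptions, and the real persistence and fan-out steps are indicated only in comments.

```python
import json

def handler(event, context=None):
    """Sketch of one Lambda handler behind a WebSocket API,
    dispatching on the routeKey from the request context."""
    route = event["requestContext"]["routeKey"]
    conn_id = event["requestContext"]["connectionId"]
    if route == "$connect":
        # Real code would authenticate and persist conn_id (e.g., in DynamoDB).
        return {"statusCode": 200}
    if route == "$disconnect":
        # Real code would delete the stored connection record.
        return {"statusCode": 200}
    if route == "sendmessage":  # illustrative custom route
        body = json.loads(event.get("body") or "{}")
        # Real code would fan the message out via the @connections API.
        return {"statusCode": 200, "body": json.dumps({"echoed": body})}
    return {"statusCode": 400, "body": "unknown route"}

resp = handler({"requestContext": {"routeKey": "sendmessage", "connectionId": "abc="},
                "body": json.dumps({"text": "hi"})})
print(resp["statusCode"])  # → 200
```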
Connection management is a crucial part of WebSocket architectures. Each connected client receives a unique connectionId, which the backend can use to send messages through the @connections API. DynamoDB is commonly used to store connectionIds along with user attributes, channel memberships, or session data. This allows applications to target individual users, groups of users, or broadcast messages to all active connections.
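A broadcast over stored connectionIds, as described above, loops over the registry and posts to each connection, pruning any that have gone stale. The sketch below injects a client shaped like boto3's `apigatewaymanagementapi` client (`post_to_connection`) so it runs without AWS; the `LookupError` stands in for the real service's GoneException, and the fake client is purely a test double.

```python
def broadcast(client, connection_ids, payload: bytes):
    """Post payload to every connection; return ids that are gone."""
    stale = []
    for cid in connection_ids:
        try:
            client.post_to_connection(ConnectionId=cid, Data=payload)
        except LookupError:       # stand-in for the real GoneException
            stale.append(cid)     # real code would delete these from DynamoDB
    return stale

class FakeClient:
    """Test double recording deliveries; 'dead' ids raise like GoneException."""
    def __init__(self, dead):
        self.dead, self.sent = set(dead), []
    def post_to_connection(self, ConnectionId, Data):
        if ConnectionId in self.dead:
            raise LookupError(ConnectionId)
        self.sent.append(ConnectionId)

client = FakeClient(dead={"c2"})
stale = broadcast(client, ["c1", "c2", "c3"], b'{"event": "update"}')
print(client.sent, stale)  # → ['c1', 'c3'] ['c2']
```

Cleaning up stale ids on failure keeps the DynamoDB registry from accumulating dead connections between $disconnect events, which do not always fire on abrupt network loss.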
Unlike REST APIs, which require a client to initiate every request and cannot push data back to the client, WebSocket APIs support continuous two-way communication. SNS and SQS provide messaging capabilities but do not maintain client connections or deliver messages directly to browsers or mobile devices. Only WebSockets enable true real-time interaction across large numbers of users with persistent server-client communication channels.
Pricing for WebSocket APIs is based on connection minutes and message volume, making them efficient for workloads involving frequent updates. Robust implementations handle reconnection logic, network interruptions, and heartbeat messages to detect dropped connections. Authorization can be implemented using Lambda authorizers or IAM, typically evaluated on the $connect route. CloudWatch metrics help monitor connection counts, error rates, integration latency, and message throughput.
By understanding and effectively using WebSocket APIs, developers can build modern, responsive, and interactive applications that deliver immediate updates and smooth real-time experiences without the limitations or latency of traditional HTTP-based communication models.