CompTIA CV0-004 Cloud+ Exam Dumps and Practice Test Questions Set 15 Q 211-225

Question 211: 

A cloud administrator is designing a disaster recovery solution for a critical application hosted in a public cloud. The organization requires a Recovery Time Objective (RTO) of 1 hour and a Recovery Point Objective (RPO) of 15 minutes. Which of the following disaster recovery strategies would BEST meet these requirements?

A) Backup and restore

B) Pilot light

C) Warm standby

D) Multi-site active-active

Answer: C

Explanation:

Disaster recovery planning in cloud environments requires careful consideration of business requirements, cost constraints, and technical capabilities. Two critical metrics guide disaster recovery strategy selection: Recovery Time Objective, which defines the maximum acceptable time to restore services after a disaster, and Recovery Point Objective, which defines the maximum acceptable amount of data loss measured in time. Understanding how different disaster recovery strategies align with these objectives is essential for cloud administrators.

A warm standby disaster recovery strategy maintains a scaled-down but fully functional version of the production environment that runs continuously in the recovery site. Core infrastructure components including servers, databases, and network configurations are deployed and running, but typically at reduced capacity compared to production. Data replication occurs continuously or at frequent intervals to keep the standby environment synchronized with production. When disaster strikes, the recovery process involves scaling up the standby environment to full production capacity and redirecting traffic, which can typically be accomplished within the one-hour RTO requirement.

The warm standby approach effectively meets the specified RPO of 15 minutes through continuous or near-continuous data replication. Modern cloud platforms offer database replication services, storage synchronization tools, and block-level replication that can maintain recovery point objectives measured in minutes. The standby environment’s continuous operation means that failover primarily involves capacity scaling and traffic redirection rather than complete system restoration, making the one-hour RTO achievable.
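
To make the arithmetic concrete, the feasibility check for any strategy reduces to two comparisons: the replication interval bounds the worst-case RPO, and the time to scale up the standby plus redirect traffic bounds the RTO. The sketch below is a minimal illustration using assumed figures, not measured values.

```python
# Minimal feasibility sketch for a warm standby design; all figures are assumptions.
REPLICATION_INTERVAL_MIN = 5   # assumed near-continuous replication cadence (minutes)
SCALE_UP_MIN = 30              # assumed time to scale standby to full capacity
REDIRECT_MIN = 10              # assumed time to repoint DNS/traffic

RPO_TARGET_MIN, RTO_TARGET_MIN = 15, 60

worst_case_rpo = REPLICATION_INTERVAL_MIN
worst_case_rto = SCALE_UP_MIN + REDIRECT_MIN

print(f"RPO {worst_case_rpo} min vs target {RPO_TARGET_MIN}: "
      f"{'meets' if worst_case_rpo <= RPO_TARGET_MIN else 'misses'} requirement")
print(f"RTO {worst_case_rto} min vs target {RTO_TARGET_MIN}: "
      f"{'meets' if worst_case_rto <= RTO_TARGET_MIN else 'misses'} requirement")
```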

Cost considerations make warm standby an attractive middle-ground solution. While more expensive than cold standby approaches due to continuously running infrastructure, warm standby costs significantly less than full active-active configurations. Cloud platforms enable cost optimization through features like reserved instances for the baseline standby environment, auto-scaling capabilities for rapid capacity expansion during failover, and the ability to run smaller instance types in standby mode.

Backup and restore (A) is the most economical disaster recovery strategy, involving regular backups of data and configurations that are restored to new infrastructure during recovery. However, this approach typically requires several hours or even days to complete full restoration, making it unsuitable for a one-hour RTO. The restoration process includes provisioning new infrastructure, installing operating systems and applications, restoring data from backups, and reconfiguring systems. Additionally, RPO depends entirely on backup frequency, and achieving a 15-minute RPO would require extremely frequent backups that approach continuous replication, negating much of the cost advantage.

Pilot light (B) maintains minimal critical infrastructure components in a ready state, with core elements like databases continuously replicated but application servers and other components only provisioned during recovery. While pilot light can achieve the 15-minute RPO through database replication, the RTO typically ranges from several hours to half a day because significant infrastructure must be provisioned and configured during failover. The time required to launch instances, install applications, configure load balancers, and perform testing generally exceeds the one-hour RTO requirement.

Multi-site active-active (D) deploys fully operational production environments across multiple locations with live traffic distributed between sites. This strategy provides the fastest recovery capabilities with RTOs measured in seconds or minutes and RPOs approaching zero through synchronous replication. However, active-active configurations represent the most expensive disaster recovery approach, requiring full production infrastructure duplication, complex traffic management, data consistency mechanisms, and sophisticated application architecture. The specified requirements of one-hour RTO and 15-minute RPO do not justify the significant additional cost and complexity of active-active deployment.

The correct answer is C.

Question 212: 

A company is migrating its on-premises application to a cloud environment. The application requires consistent high-performance storage with low latency for database operations. Which of the following storage types would be MOST appropriate?

A) Object storage

B) Block storage

C) File storage

D) Archive storage

Answer: B

Explanation:

Understanding the different types of cloud storage and their performance characteristics is crucial for making appropriate architectural decisions when migrating applications to the cloud. Storage selection directly impacts application performance, user experience, and operational costs. Each storage type serves specific use cases and offers different performance profiles, durability guarantees, and access methods.

Block storage provides the highest performance and lowest latency among cloud storage options, making it ideal for database workloads and applications requiring consistent, predictable I/O operations. Block storage presents raw storage volumes to instances as if they were directly attached physical disks, allowing operating systems to format volumes with any file system and manage data at the block level. This direct attachment model eliminates network file system overhead and enables features like database write-ahead logging, transaction management, and cache optimization that databases rely on for performance and consistency.

Cloud block storage services typically offer provisioned IOPS capabilities, allowing administrators to specify required input/output operations per second to guarantee consistent performance regardless of other workloads. High-performance block storage options use solid-state drives and are optimized for low-latency operations measured in single-digit milliseconds. These characteristics make block storage the standard choice for relational databases, NoSQL databases, transactional applications, and any workload requiring high-throughput random access patterns.
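
As one provider-specific illustration of provisioned IOPS, the hedged boto3 sketch below creates an AWS EBS io2 volume with a guaranteed IOPS figure; the region, Availability Zone, size, and IOPS values are placeholders rather than recommendations.

```python
# Sketch: provisioning a low-latency block storage volume with guaranteed IOPS (AWS EBS via boto3).
# Region, AZ, size, and IOPS values are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,                  # GiB
    VolumeType="io2",          # provisioned-IOPS SSD class
    Iops=16000,                # guaranteed I/O operations per second
    Encrypted=True,
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Purpose", "Value": "database"}],
    }],
)
print(volume["VolumeId"], volume["State"])
```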

Block storage volumes integrate seamlessly with cloud compute instances through standard storage protocols like iSCSI or NVMe, and can be dynamically resized, snapshotted for backup and recovery, and replicated across availability zones for durability. The ability to create point-in-time snapshots without downtime is particularly valuable for database environments requiring consistent backup strategies.

Object storage (A) is designed for storing large amounts of unstructured data like documents, images, videos, backups, and log files. Object storage excels at scalability and durability, storing data as objects with metadata in a flat namespace accessible through HTTP APIs. However, object storage introduces higher latency compared to block storage and is not suitable for applications requiring file system-level operations or low-latency database access. Object storage typically exhibits latencies measured in tens or hundreds of milliseconds and does not support the random write patterns that databases generate.

File storage (C) provides network-attached storage accessible through standard file system protocols like NFS or SMB, making it suitable for shared file systems accessed by multiple instances simultaneously. While file storage supports many enterprise applications and provides good performance for sequential operations, it introduces network latency and protocol overhead that makes it less suitable than block storage for high-performance database workloads. File storage typically serves use cases like content management systems, home directories, and shared application data rather than database back-ends.

Archive storage (D) is optimized for long-term retention of infrequently accessed data with very low storage costs but significant retrieval latency measured in minutes or hours. Archive storage uses slower storage media and may require data rehydration before access. This storage class is appropriate for compliance data, historical records, and backup archives but completely unsuitable for active database operations requiring millisecond-level response times.

The correct answer is B.

Question 213: 

An organization wants to implement a cloud security solution that can automatically adjust security policies based on user behavior, location, and device posture. Which of the following would BEST meet this requirement?

A) Cloud access security broker

B) Web application firewall

C) Virtual private network

D) Intrusion prevention system

Answer: A

Explanation:

As organizations increasingly adopt cloud services and support remote work, traditional perimeter-based security models have become insufficient. Modern cloud security requires dynamic, context-aware controls that can adapt to changing risk conditions and enforce policies based on multiple factors beyond simple network boundaries. Understanding the capabilities of different security technologies is essential for implementing appropriate cloud security architectures.

A cloud access security broker is a security solution positioned between cloud service users and cloud applications to enforce security policies, monitor activity, and provide visibility into cloud service usage. CASB solutions offer comprehensive security capabilities including dynamic access control that evaluates multiple contextual factors before granting access to cloud resources. These factors include user identity and group membership, behavioral analytics that detect anomalous activities, device posture assessment evaluating security configuration and compliance, geographic location and network characteristics, application and data sensitivity classifications, and risk scores calculated from multiple inputs.

Modern CASB platforms use adaptive access control mechanisms that automatically adjust security policies based on calculated risk levels. For example, a CASB might allow full access when a user connects from a corporate-managed device on the office network but require multi-factor authentication and restrict file downloads when the same user accesses from an unknown device or suspicious location. If behavioral analytics detect unusual activity patterns like accessing resources at abnormal times or downloading excessive amounts of data, the CASB can automatically enforce additional verification steps or block access entirely.
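
Conceptually, this adaptive behavior is a risk-scoring decision. The sketch below is purely hypothetical: the factors, weights, and thresholds are invented for illustration and do not represent any specific CASB product's logic.

```python
# Hypothetical sketch of context-aware access evaluation; weights and thresholds are invented.
def evaluate_access(managed_device: bool, known_location: bool,
                    behavior_anomaly_score: float) -> str:
    """Return an access decision from a simple additive risk score (0 = low risk)."""
    risk = 0
    risk += 0 if managed_device else 40        # unmanaged device raises risk
    risk += 0 if known_location else 20        # unfamiliar location raises risk
    risk += int(behavior_anomaly_score * 40)   # behavioral anomaly score in [0, 1]

    if risk < 30:
        return "allow"
    if risk < 70:
        return "allow with MFA and download restrictions"
    return "deny"

print(evaluate_access(managed_device=True, known_location=True, behavior_anomaly_score=0.1))
print(evaluate_access(managed_device=False, known_location=False, behavior_anomaly_score=0.6))
```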

CASB solutions typically operate in multiple deployment modes including forward proxy mode where traffic routes through the CASB before reaching cloud services, reverse proxy protecting internally hosted applications, and API-based integration providing visibility into cloud service activity without inline traffic inspection. This flexibility allows organizations to implement CASB protection across sanctioned cloud applications, detect shadow IT usage, enforce data loss prevention policies, and maintain compliance with regulatory requirements.

Additional CASB capabilities include threat protection against malware and compromised accounts, data security through encryption and tokenization, compliance monitoring and reporting, and unified policy management across multiple cloud platforms. The ability to integrate with identity providers, endpoint security solutions, and security information and event management systems enables CASBs to correlate information from multiple sources and make intelligent access decisions.

Web application firewall (B) protects web applications from attacks like SQL injection, cross-site scripting, and other OWASP top ten vulnerabilities by inspecting HTTP traffic and blocking malicious requests. While some advanced WAF solutions incorporate machine learning for threat detection, they primarily focus on protecting applications from external attacks rather than enforcing dynamic access policies based on user behavior and context. WAFs operate at the application layer to filter malicious traffic but do not provide the comprehensive visibility and access control across multiple cloud services that CASBs offer.

Virtual private network (C) creates encrypted tunnels for secure remote access to network resources, protecting data in transit from eavesdropping and tampering. VPNs provide network-level connectivity and authentication but do not inherently evaluate device posture, analyze user behavior, or dynamically adjust security policies based on risk factors. Traditional VPN solutions follow an all-or-nothing access model where authenticated users gain broad network access rather than granular, context-aware access control to specific cloud resources.

Intrusion prevention system (D) monitors network traffic for malicious activity and known attack signatures, automatically blocking detected threats before they reach protected systems. IPS solutions excel at identifying and preventing network-based attacks but focus on threat detection rather than access control policy enforcement. IPS systems do not evaluate user behavior patterns, device compliance, or contextual factors to dynamically adjust security policies across cloud applications.

The correct answer is A.

Question 214: 

A cloud engineer needs to ensure that a containerized application can scale horizontally based on CPU utilization. The application should automatically add container instances when CPU usage exceeds 75% and remove instances when usage drops below 25%. Which of the following should the engineer implement?

A) Horizontal pod autoscaling

B) Vertical pod autoscaling

C) Cluster autoscaling

D) Manual scaling

Answer: A

Explanation:

Container orchestration platforms provide sophisticated automation capabilities for managing application lifecycle, resource allocation, and scaling behavior. Understanding the different types of scaling mechanisms available in container environments is essential for designing resilient, cost-effective applications that can respond to changing demand patterns. The scenario describes requirements for automatic horizontal scaling based on CPU metrics, which requires specific orchestration features.

Horizontal pod autoscaling automatically adjusts the number of container instances (pods) running in a deployment based on observed resource utilization metrics or custom metrics. HPA controllers continuously monitor configured metrics and calculate the desired number of replicas needed to maintain target utilization levels. When CPU usage exceeds the defined threshold of 75%, the HPA controller increases the replica count, distributing load across more container instances. Conversely, when utilization drops below 25%, the controller decreases replicas to avoid wasting resources.

The horizontal scaling approach is particularly effective for stateless applications where adding more identical instances improves overall capacity and performance. HPA implementations typically include safeguards against rapid scaling oscillations, such as cooldown periods that prevent scaling actions from occurring too frequently. Administrators can configure minimum and maximum replica counts to ensure adequate baseline capacity while preventing excessive resource consumption.

Modern HPA implementations support multiple metric sources beyond basic CPU utilization, including memory usage, custom application metrics exposed through monitoring systems, and external metrics from services like message queues or load balancers. This flexibility allows sophisticated scaling behaviors tailored to specific application characteristics. For example, a web application might scale based on requests per second, while a batch processing system might scale based on queue depth.

Implementation of HPA requires proper resource requests and limits defined in container specifications, as the autoscaler uses these definitions to calculate utilization percentages. The underlying container orchestration platform must have metrics collection infrastructure deployed, typically through metrics server or Prometheus integration. Once configured, HPA operates automatically without manual intervention, continuously adjusting capacity to match demand.
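
A minimal sketch of such a policy is shown below as a Python script that emits a Kubernetes autoscaling/v2 HorizontalPodAutoscaler manifest; the deployment name, namespace, and replica bounds are placeholders. Note that the HPA controller works toward a target utilization rather than separate scale-up and scale-down thresholds, so a 75% target approximates the behavior described in the question.

```python
# Sketch: generate an autoscaling/v2 HorizontalPodAutoscaler manifest.
# Deployment name, namespace, and replica bounds are illustrative placeholders.
import yaml  # pip install pyyaml

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web-api-hpa", "namespace": "production"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web-api"},
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                # The controller adds or removes pods to keep average CPU near this target.
                "target": {"type": "Utilization", "averageUtilization": 75},
            },
        }],
    },
}

print(yaml.safe_dump(hpa, sort_keys=False))  # apply the output with: kubectl apply -f -
```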

Vertical pod autoscaling (B) automatically adjusts the CPU and memory resources allocated to individual containers rather than changing the number of container instances. VPA analyzes resource usage patterns and modifies resource requests and limits to right-size containers. While VPA improves resource efficiency, it does not address the requirement for horizontal scaling where multiple instances handle increased load. Vertical scaling has limits based on node capacity and typically requires container restarts to apply new resource allocations.

Cluster autoscaling (C) operates at the infrastructure layer, automatically adding or removing worker nodes from the container cluster based on resource demands. Cluster autoscaling ensures sufficient compute capacity exists to schedule pods but does not directly control application instance counts. While cluster autoscaling often works in conjunction with horizontal pod autoscaling—providing nodes when HPA requests more pods—it addresses cluster capacity management rather than application-level scaling based on utilization metrics.

Manual scaling (D) requires human intervention to adjust the number of running container instances. Administrators manually modify replica counts through command-line tools or configuration files when they observe performance issues or anticipate demand changes. Manual scaling cannot meet the requirement for automatic response to CPU utilization thresholds and introduces delays between demand changes and capacity adjustments. Manual approaches also require continuous monitoring and prompt human action, which is impractical for dynamic workloads with unpredictable usage patterns.

The correct answer is A.

Question 215: 

A company is implementing a multi-cloud strategy using multiple public cloud providers. The security team needs to maintain consistent security policies and visibility across all cloud environments. Which of the following solutions would BEST address this requirement?

A) Cloud security posture management

B) Security information and event management

C) Network access control

D) Host-based intrusion detection

Answer: A

Explanation:

Multi-cloud strategies offer benefits like avoiding vendor lock-in, optimizing costs, leveraging best-of-breed services, and improving resilience through geographic distribution. However, managing security across multiple cloud providers introduces significant complexity due to different security models, inconsistent interfaces, varying compliance frameworks, and fragmented visibility. Organizations need specialized tools to maintain consistent security postures across heterogeneous cloud environments.

Cloud security posture management provides centralized visibility, continuous compliance monitoring, and automated security policy enforcement across multiple cloud platforms. CSPM solutions connect to cloud provider APIs to inventory resources, assess configurations against security best practices and compliance frameworks, identify misconfigurations and policy violations, and provide remediation guidance or automated correction. CSPM platforms normalize differences between cloud providers, presenting unified dashboards and reports that show security posture across the entire multi-cloud environment.

Key CSPM capabilities include continuous configuration monitoring that detects security risks like publicly exposed storage buckets, overly permissive security groups, unencrypted databases, and disabled logging. CSPM solutions map resources and configurations to compliance frameworks including CIS benchmarks, NIST guidelines, PCI DSS, HIPAA, and GDPR requirements, automatically identifying gaps and generating compliance reports. Identity and access management analysis reveals excessive permissions, dormant accounts, and privilege escalation risks across cloud environments.
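
As a simplified, single-provider illustration of the kind of detective check a CSPM runs continuously, the boto3 sketch below flags security groups that allow inbound traffic from anywhere; a real CSPM performs equivalent checks across every connected cloud account and normalizes the findings.

```python
# Simplified CSPM-style detective check (AWS-only sketch): security groups open to the internet.
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            open_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
            open_v6 = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
            if open_v4 or open_v6:
                ports = f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}"
                print(f"FINDING: {sg['GroupId']} ({sg.get('GroupName')}) allows {ports} from the internet")
```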

Advanced CSPM platforms incorporate threat detection capabilities that identify suspicious activities, anomalous resource changes, and potential security incidents. Integration with infrastructure-as-code tools enables security scanning during development and deployment phases, shifting security left in the development lifecycle. CSPM solutions also provide risk prioritization, helping security teams focus on critical issues based on potential impact and exploitability.

The unified view that CSPM provides is particularly valuable in multi-cloud environments where security teams would otherwise need to learn and monitor multiple disparate consoles. CSPM solutions eliminate blind spots by continuously scanning all cloud accounts and subscriptions, ensuring that shadow IT resources and forgotten assets are identified and brought under security management. Policy templates and automation features help maintain consistency across cloud platforms despite their different native security controls.

Security information and event management (B) systems aggregate, correlate, and analyze security events and logs from multiple sources to detect threats and support incident response. While SIEM solutions provide valuable security monitoring and can ingest logs from multiple cloud providers, they primarily focus on event analysis and threat detection rather than configuration management and compliance assessment. SIEM platforms require proper log forwarding configuration and may not provide the comprehensive resource inventory and posture assessment capabilities that CSPM offers. Additionally, SIEM solutions typically do not enforce security policies or provide automated remediation for misconfigurations.

Network access control (C) systems manage which devices can connect to network resources based on authentication, authorization, and compliance policies. NAC solutions operate primarily at the network layer to control endpoint access and are typically deployed in traditional data center or campus network environments. NAC does not address cloud-specific security challenges like resource configuration, identity and access management, or API security across multiple cloud providers. Cloud environments require security controls designed for dynamic, API-driven infrastructure rather than network admission control.

Host-based intrusion detection (D) systems monitor individual servers for malicious activity, file changes, and policy violations. While HIDS solutions provide valuable endpoint security and can be deployed on cloud instances, they operate at the host level rather than providing the centralized, multi-cloud visibility and policy management required. HIDS would need to be deployed and managed separately in each cloud environment and would not address configuration security, compliance monitoring, or many cloud-native security concerns like storage bucket permissions or serverless function configurations.

The correct answer is A.

Question 216: 

An organization is deploying a cloud-based application that must comply with data residency requirements mandating that customer data remain within specific geographic boundaries. Which of the following should the cloud architect implement to ensure compliance?

A) Content delivery network with global edge locations

B) Data encryption at rest and in transit

C) Region-specific resource deployment with data sovereignty controls

D) Database replication across multiple continents

Answer: C

Explanation:

Data residency and sovereignty requirements are increasingly common regulatory obligations that mandate certain types of data remain stored and processed within specific geographic boundaries. These requirements arise from privacy regulations like GDPR in Europe, data localization laws in countries like Russia and China, industry-specific regulations in sectors like healthcare and finance, and contractual obligations with customers or partners. Cloud architects must understand how to design compliant solutions that respect geographic data restrictions.

Region-specific resource deployment with data sovereignty controls ensures that all resources storing or processing regulated data are deployed exclusively in cloud regions within approved geographic boundaries. This approach involves careful selection of cloud regions during initial deployment, ensuring compute instances, storage services, databases, and all other data-handling resources reside in compliant locations. Data sovereignty controls include configurations that prevent data replication or backup to regions outside approved boundaries, access restrictions preventing administration from non-compliant locations, and service selections that guarantee data locality.

Cloud providers typically organize infrastructure into geographic regions composed of multiple availability zones, allowing customers to specify exactly where resources are deployed. Organizations subject to data residency requirements must explicitly select appropriate regions and verify that chosen services maintain data within those regions. Some cloud services, particularly global services like content delivery networks or certain management planes, may store metadata or operational data outside specified regions, requiring careful evaluation and possibly service exclusion.

Implementation of region-specific deployment requires documented architecture decisions, infrastructure-as-code templates with hard-coded region specifications, policy controls preventing resource creation in non-compliant regions, and regular compliance audits verifying continued adherence. Organizations should implement technical controls like service control policies in AWS, Azure Policy in Microsoft Azure, or organization policies in Google Cloud Platform that prevent users from deploying resources outside approved regions, even if they have permissions for those services.
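
In AWS, for example, an organization-level service control policy can deny any request targeted at a region outside an approved list. The sketch below builds such a policy as a Python dictionary; the approved regions and the exemptions for global services are placeholders that would vary by organization.

```python
# Sketch of an AWS service control policy denying activity outside approved regions.
# Approved regions and global-service exemptions are illustrative placeholders.
import json

region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        # Global services (IAM, CloudFront, Route 53, ...) are exempted via NotAction.
        "NotAction": ["iam:*", "organizations:*", "route53:*", "cloudfront:*", "support:*"],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}
        },
    }],
}

print(json.dumps(region_guardrail, indent=2))  # attach through AWS Organizations as an SCP
```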

Additional considerations include understanding cloud provider data handling practices, reviewing data processing agreements and data protection addendums, ensuring third-party services and integrations also comply with residency requirements, and establishing procedures for incident response that might require data access from outside approved regions. Some organizations implement additional layers of encryption where they control keys to provide extra assurance that even cloud providers cannot access data outside approved jurisdictions.

Content delivery network with global edge locations (A) distributes content across worldwide points of presence to improve performance and availability by serving users from geographically proximate locations. CDN deployment explicitly places data copies across multiple countries and continents, directly violating data residency requirements that mandate data remain within specific boundaries. While CDNs offer excellent performance benefits, they are incompatible with strict geographic data restrictions unless configured to operate only within approved regions.

Data encryption at rest and in transit (B) protects data confidentiality by rendering information unreadable without appropriate decryption keys. While encryption is an important security control and often required for compliance, it does not address data residency requirements. Encrypted data stored in non-compliant geographic locations still violates residency mandates regardless of its encrypted state. Encryption protects against unauthorized access but does not control where data physically resides.

Database replication across multiple continents (D) improves availability and disaster recovery by maintaining synchronized data copies in geographically distributed locations. While multi-region replication provides excellent resilience, it explicitly moves data across geographic boundaries, violating data residency requirements. Replication strategies must be carefully designed to keep all data copies within approved regions, potentially limiting disaster recovery options but ensuring regulatory compliance.

The correct answer is C.

Question 217: 

A DevOps team wants to implement continuous integration and continuous deployment (CI/CD) for a cloud-native application. Which of the following would BEST enable automated testing, building, and deployment of application updates?

A) Configuration management tool

B) Container orchestration platform

C) CI/CD pipeline automation tool

D) Infrastructure as code framework

Answer: C

Explanation:

Modern software development practices emphasize automation, rapid iteration, and continuous delivery of value to users. Continuous integration and continuous deployment represent advanced development methodologies where code changes are automatically tested, integrated, and deployed to production environments with minimal manual intervention. Implementing effective CI/CD requires specialized tools designed to orchestrate complex workflows spanning source code management, automated testing, artifact building, and deployment automation.

CI/CD pipeline automation tools provide comprehensive platforms for defining, executing, and monitoring automated workflows that transform source code into running applications. These tools integrate with version control systems to detect code changes, trigger automated build processes when developers commit code, execute comprehensive test suites including unit tests, integration tests, and security scans, build deployable artifacts like container images or application packages, and deploy artifacts to target environments following approval gates and deployment strategies.

Leading CI/CD platforms offer pipeline-as-code capabilities where developers define build and deployment workflows in version-controlled configuration files stored alongside application code. This approach enables workflow versioning, peer review of deployment processes, and consistent pipeline behavior across different applications. Pipeline definitions typically specify stages like build, test, security scan, staging deployment, and production deployment, with conditional logic controlling workflow progression based on test results and approval requirements.
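
At its core, a pipeline definition is an ordered set of stages with gating logic: a stage runs only if every earlier stage succeeded. The toy Python runner below illustrates that idea with placeholder commands; real CI/CD tools express the same logic declaratively in their own pipeline-as-code formats.

```python
# Toy illustration of pipeline gating; the shell commands are placeholders.
import subprocess
import sys

STAGES = [
    ("build", "docker build -t registry.example.com/web-api:latest ."),
    ("unit-tests", "pytest -q"),
    ("security-scan", "trivy image registry.example.com/web-api:latest"),
    ("deploy-staging", "kubectl apply -f k8s/staging/"),
]

for name, command in STAGES:
    print(f"--- stage: {name} ---")
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        print(f"Stage '{name}' failed; halting the pipeline.")
        sys.exit(result.returncode)

print("All stages passed; ready for production approval gate.")
```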

Modern CI/CD tools provide extensive integration ecosystems connecting to cloud platforms, container registries, Kubernetes clusters, testing frameworks, security scanning tools, and monitoring systems. These integrations enable sophisticated workflows where security vulnerabilities automatically block deployments, performance tests validate application behavior before production release, and deployment automation follows blue-green or canary strategies to minimize risk. Advanced features include parallel execution for faster pipelines, artifact management and versioning, deployment tracking and rollback capabilities, and comprehensive audit logging for compliance.

CI/CD automation fundamentally changes software delivery by reducing time between code changes and production deployment from weeks or months to hours or minutes. Automated testing provides confidence in code quality while enabling rapid iteration. Deployment automation eliminates manual errors and ensures consistent, repeatable processes. Integration with monitoring and feedback systems creates continuous improvement cycles where production metrics inform development priorities.

Configuration management tool (A) automates system configuration, ensuring servers and infrastructure maintain desired states. Configuration management tools like Ansible, Puppet, and Chef excel at configuring operating systems, installing software packages, and managing configuration files across server fleets. While these tools can be incorporated into deployment processes, they focus primarily on infrastructure configuration rather than providing comprehensive CI/CD workflow orchestration. Configuration management does not inherently include source code integration, automated testing, or build artifact creation.

Container orchestration platform (B) manages containerized application deployment, scaling, and operations across cluster infrastructure. Orchestration platforms like Kubernetes provide robust runtime environments for containerized applications but do not directly handle continuous integration, automated testing, or build processes. While orchestration platforms are common deployment targets for CI/CD pipelines, they represent the runtime environment rather than the automation framework that builds, tests, and deploys applications.

Infrastructure as code framework (D) defines infrastructure resources using declarative or imperative code that can be version controlled and automatically deployed. IaC tools like Terraform, CloudFormation, and ARM templates enable consistent, repeatable infrastructure provisioning. While IaC is often integrated into CI/CD pipelines for infrastructure deployment, IaC frameworks focus specifically on infrastructure provisioning rather than application code building, testing, and deployment. Comprehensive CI/CD requires tools that orchestrate both infrastructure provisioning and application deployment.

The correct answer is C.

Question 218: 

A cloud security analyst discovers that several cloud storage buckets containing sensitive data are publicly accessible. Which of the following should be the analyst’s FIRST step to remediate this issue?

A) Enable versioning on the storage buckets

B) Remove public access permissions from the buckets

C) Enable server-side encryption

D) Implement lifecycle policies for data retention

Answer: B

Explanation:

Misconfigured cloud storage is one of the most common causes of data breaches and unauthorized data exposure in cloud environments. Publicly accessible storage buckets have been responsible for numerous high-profile security incidents where sensitive information including customer data, financial records, intellectual property, and personally identifiable information was exposed to the internet. When discovering such misconfigurations, security professionals must prioritize immediate actions that stop active data exposure.

Removing public access permissions from storage buckets immediately eliminates the security exposure by revoking unauthorized access to sensitive data. This direct remediation addresses the root cause of the vulnerability—overly permissive access controls—and prevents further unauthorized data access. Cloud storage services provide multiple permission mechanisms including bucket policies, access control lists, and identity-based policies that may grant public access either intentionally or accidentally. The remediation process involves reviewing all permission mechanisms and removing any configurations that allow anonymous or public internet access.
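
On AWS, for instance, the immediate containment step can be scripted with boto3 by applying the S3 public access block settings to the affected buckets (the bucket names below are placeholders); other platforms provide equivalent controls.

```python
# Containment sketch: block all forms of public access on exposed S3 buckets.
# Bucket names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

exposed_buckets = ["example-customer-exports", "example-reporting-archive"]

for bucket in exposed_buckets:
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
    print(f"Public access blocked on {bucket}")
```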

Immediate permission removal follows fundamental incident response principles by prioritizing containment of active security exposures. Every moment publicly accessible sensitive data remains exposed increases the risk that unauthorized parties discover and exfiltrate information. Automated scanning tools and malicious actors continuously scan cloud storage services looking for publicly accessible buckets, meaning discovery and exploitation can occur rapidly after misconfiguration.

After removing public access, security analysts should implement additional protective measures including access logging to monitor data access patterns, security auditing to understand how the misconfiguration occurred, notification procedures to inform stakeholders of potential exposure, and preventive controls like bucket policies that explicitly deny public access and organizational policies that prevent public access at the account level. Many cloud providers offer features that block public access by default across all storage resources in an account.

Organizations should also review security practices around infrastructure deployment, implement policy-as-code scanning to detect misconfigurations before deployment, establish approval processes for public access permissions when legitimately required, and conduct regular compliance scanning to identify configuration drift. Security awareness training helps prevent future incidents by ensuring personnel understand the risks of public cloud storage exposure.

Enable versioning on storage buckets (A) creates versioned copies of objects, protecting against accidental deletion and enabling recovery of previous object versions. While versioning is valuable for data protection and provides some safeguards against malicious deletion, it does not address the immediate security exposure of publicly accessible sensitive data. Enabling versioning on publicly accessible buckets might even increase risk by preserving multiple versions of sensitive data that remain publicly accessible.

Enable server-side encryption (C) protects data confidentiality by encrypting stored objects, rendering them unreadable without appropriate decryption keys. However, encryption alone does not prevent unauthorized access to cloud storage. When storage buckets are publicly accessible, encryption provides no protection because cloud providers automatically decrypt data for authenticated requests. Public access permissions allow anonymous users to retrieve objects through standard storage APIs, which automatically handle decryption as part of normal operations.

Implement lifecycle policies for data retention (D) automates data management by transitioning objects between storage classes or deleting objects based on age and access patterns. Lifecycle policies optimize storage costs and implement retention requirements but do not restrict access permissions. Implementing lifecycle policies on publicly accessible buckets does nothing to prevent unauthorized access to data before lifecycle rules trigger deletion.

The correct answer is B.

Question 219: 

An organization needs to migrate a legacy application to the cloud. The application has hardcoded dependencies on specific server hostnames and IP addresses. Which cloud migration strategy would be MOST appropriate?

A) Replatform

B) Refactor

C) Rehost

D) Retire

Answer: C

Explanation:

Cloud migration strategies require careful consideration of application characteristics, business requirements, technical constraints, and organizational capabilities. Different approaches offer varying levels of cloud optimization, migration complexity, time to completion, and required resources. Understanding application dependencies and constraints is essential for selecting appropriate migration strategies that balance business value with practical feasibility.

Rehosting, commonly known as “lift and shift,” involves moving applications to cloud infrastructure with minimal or no modifications to application code and architecture. This strategy deploys applications on cloud virtual machines that closely replicate the on-premises environment, maintaining existing server configurations, application architectures, and operational procedures. Rehosting is particularly appropriate for applications with hardcoded dependencies, legacy architectures, limited documentation, or when rapid migration timelines are required.

For applications with hardcoded hostnames and IP addresses, rehosting minimizes migration risk by preserving the existing environment. Virtual machines can be configured with specific hostnames and private IP addresses matching on-premises configurations, allowing applications to function without code changes. Cloud networking features like virtual private clouds, DNS services, and reserved IP addresses enable recreation of familiar network environments. Some cloud providers offer automated rehosting tools that scan on-premises servers, create equivalent virtual machine configurations, and replicate data to cloud storage.

While rehosting provides the fastest path to cloud migration and reduces initial complexity, it offers limited cloud optimization benefits. Rehosted applications do not immediately leverage cloud-native services, autoscaling capabilities, managed databases, or serverless computing. However, rehosting establishes presence in the cloud, enabling incremental optimization after migration. Organizations often follow a two-phase approach: initial rehosting for rapid migration followed by gradual modernization as time and resources permit.

Rehosting addresses common migration challenges including compressed timelines, limited developer resources, risk-averse organizational cultures, and applications where modification is impractical or impossible due to vendor restrictions or missing source code. The strategy is particularly valuable for migrations driven by data center contract expirations, hardware end-of-life situations, or compliance requirements where timing is more critical than optimization.

Replatform (A) involves migrating applications with minor modifications to take advantage of cloud services without fundamental architecture changes. Replatforming might include replacing self-managed databases with managed database services, implementing cloud storage instead of file servers, or adopting cloud load balancing. While replatforming offers moderate cloud benefits, it requires application modifications to remove hardcoded dependencies and integrate with cloud services. Applications with rigid dependencies on specific hostnames and IP addresses would require code changes to implement replatforming, increasing complexity and risk.

Refactor (B) represents comprehensive application redesign to fully embrace cloud-native architectures, services, and design patterns. Refactoring involves significant code changes to implement microservices architectures, adopt containerization, leverage serverless functions, and use managed cloud services. This strategy delivers maximum cloud benefits but requires substantial development effort, extensive testing, and often results in fundamentally different applications. For legacy applications with hardcoded dependencies, refactoring would be the most complex and time-consuming option, requiring complete application restructuring.

Retire (D) involves decommissioning applications that are no longer needed or have been replaced by alternative solutions. Retirement eliminates migration costs and ongoing operational expenses for unnecessary applications. However, retiring an application assumes it provides no business value or that equivalent functionality is available elsewhere. The scenario describes migrating an application to the cloud, indicating continued business need rather than elimination.

The correct answer is C.

Question 220: 

A cloud administrator needs to ensure that all resources created in the cloud environment are properly tagged with cost center, project name, and owner information. Which of the following would BEST enforce this requirement?

A) Resource naming convention

B) Policy-based governance

C) Manual tagging process

D) Cost allocation reports

Answer: B

Explanation:

Cloud environments enable rapid resource provisioning where users can create infrastructure through web consoles, command-line tools, APIs, and infrastructure-as-code templates. This flexibility accelerates development but can lead to resource sprawl, unclear ownership, difficult cost attribution, and compliance challenges. Organizations need mechanisms to enforce standards and ensure resources are properly documented with metadata that supports operational management, financial accountability, and governance requirements.

Policy-based governance provides automated enforcement of organizational standards through programmatically defined rules that prevent non-compliant resource creation or automatically remediate violations. Cloud platforms offer native policy engines that evaluate resource configurations during creation and modification, blocking operations that violate defined policies. For tagging requirements, governance policies can mandate specific tags be present with appropriate values before resources can be created, ensuring compliance from the moment of resource provisioning.

Governance policies implement tag enforcement through various mechanisms including preventive controls that deny resource creation without required tags, detective controls that identify non-compliant existing resources, and automated remediation that applies tags based on context or notifies resource owners of violations. Policy engines support sophisticated logic including required tag keys, allowed values from predefined lists, conditional requirements based on resource type or location, and inheritance rules where tags propagate from parent resources to child resources.
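
A complementary detective check can also be scripted directly against a provider API. The hedged boto3 sketch below reports EC2 instances missing any of the three required tag keys from the scenario; a native policy engine would enforce the same requirement preventively at creation time.

```python
# Detective-control sketch: report EC2 instances missing required governance tags.
import boto3

REQUIRED_TAGS = {"CostCenter", "ProjectName", "Owner"}

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            present = {tag["Key"] for tag in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - present
            if missing:
                print(f"{instance['InstanceId']}: missing tags {sorted(missing)}")
```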

Organizations typically implement tag policies at organizational or account levels, ensuring consistent enforcement across all teams and projects. Policy definitions are version controlled, reviewed through governance processes, and updated as requirements evolve. Comprehensive governance frameworks combine tag policies with other controls including allowed services, approved regions, encryption requirements, network configurations, and identity management rules.

Beyond enforcement, policy-based governance provides visibility through compliance dashboards showing policy violations, resources out of compliance, and remediation status. Automated reporting generates regular compliance summaries for management review. Integration with infrastructure-as-code workflows enables policy validation during development, shifting compliance left and preventing non-compliant resources from ever reaching production environments.

Tag enforcement policies deliver multiple benefits including accurate cost allocation where resources are automatically associated with cost centers and projects, clear ownership enabling efficient incident response and lifecycle management, compliance reporting demonstrating adherence to internal standards and regulatory requirements, and automated operations where tags drive backup policies, security controls, and operational procedures.

Resource naming convention (A) establishes standards for naming cloud resources to improve clarity and organization. While naming conventions are valuable for resource identification, they do not enforce metadata requirements or ensure cost center, project, and owner information is captured in a structured format. Names have character limitations and cannot contain the detailed metadata that tags provide. Additionally, naming conventions typically rely on voluntary compliance rather than technical enforcement, making violations common when users prioritize speed over standards.

Manual tagging process (C) requires users to apply tags when creating resources through documented procedures and training. Manual processes are inherently unreliable because they depend on user knowledge, diligence, and prioritization of tagging among competing demands. Users frequently forget required tags, apply incorrect values, or intentionally skip tagging to save time. Manual processes lack enforcement mechanisms, allowing non-compliant resources to be created and deployed.

Cost allocation reports (D) provide visibility into cloud spending by organizing expenses based on tags, accounts, or other dimensions. While cost reports benefit from proper tagging and can identify untagged resources through missing allocations, reports are detective controls that identify problems after they occur rather than preventive controls that ensure compliance. Reports do not enforce tagging requirements or prevent creation of non-compliant resources.

The correct answer is B.

Question 221: 

A company is experiencing performance degradation in its cloud-based application during peak business hours. Monitoring shows that database queries are taking significantly longer to execute. Which of the following should the cloud engineer investigate FIRST?

A) Database connection pooling settings

B) Application code optimization

C) Network bandwidth limitations

D) Database read replica configuration

Answer: A

Explanation:

Performance troubleshooting in cloud environments requires systematic investigation of potential bottlenecks across application layers, infrastructure components, and dependencies. When database query performance degrades during peak usage periods, the issue often relates to resource contention, connection management, or capacity limitations. Effective troubleshooting prioritizes likely causes based on symptoms and investigates configuration issues before implementing more complex solutions.

Database connection pooling settings directly impact application performance by controlling how applications establish and reuse database connections. Connection pooling maintains a pool of established database connections that applications can reuse rather than creating new connections for each transaction. Establishing database connections is computationally expensive and time-consuming, involving network handshakes, authentication, and session initialization. Poor connection pooling configuration is a common cause of performance degradation during high load periods.

When connection pools are too small, applications must wait for available connections during peak usage, creating queuing delays that manifest as slow query execution. Applications may also repeatedly create and destroy connections when pools are exhausted, dramatically increasing database load and latency. Symptoms of inadequate connection pooling include connection timeout errors, increasing response times correlated with user load, and database metrics showing high connection creation rates.

Investigating connection pooling settings involves reviewing application configuration for pool size parameters, maximum connections, connection timeout values, and idle connection handling. Cloud databases typically have connection limits based on instance size, and applications must configure pool sizes appropriately to share available connections across application instances. During peak periods, connection contention becomes more severe as multiple application servers compete for limited database connections.
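
As one concrete example of the parameters involved, the SQLAlchemy snippet below shows typical pool settings an engineer would review and tune; the connection string and numbers are placeholders, and the appropriate values depend on the database's connection limit divided across all application instances.

```python
# Illustrative connection pool configuration (SQLAlchemy); DSN and values are placeholders.
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://app_user:password@db.example.internal:5432/appdb",
    pool_size=20,        # steady-state connections kept open per application instance
    max_overflow=10,     # additional connections allowed during traffic bursts
    pool_timeout=30,     # seconds a request waits for a free connection before erroring
    pool_recycle=1800,   # recycle connections periodically to avoid stale sessions
    pool_pre_ping=True,  # validate connections before use
)
```

A useful sanity check is that pool_size plus max_overflow, multiplied by the number of application instances, stays below the database's maximum connection limit.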

Connection pooling issues are relatively simple to diagnose and resolve compared to code optimization or architectural changes. Monitoring tools typically provide metrics on connection pool utilization, wait times, and connection errors. Resolution often involves adjusting configuration parameters—increasing pool sizes, tuning timeout values, or implementing connection retry logic—which can be deployed quickly without application code changes or infrastructure modifications.

Application code optimization (B) improves performance by refining algorithms, reducing database queries, implementing caching, and eliminating inefficiencies. While code optimization delivers significant long-term benefits, it requires development resources, testing, and deployment cycles. Code issues typically cause consistent performance problems rather than degradation specifically during peak hours. If queries are properly optimized but experience slowdowns only under high load, the issue more likely relates to resource contention or configuration rather than code quality.

Network bandwidth limitations (C) can cause performance issues when data transfer requirements exceed available capacity. However, database query performance degradation typically does not stem from bandwidth constraints unless transferring extremely large result sets. Database protocols are relatively lightweight, and modern cloud networks provide substantial bandwidth. Network latency issues would affect all application components, not just database queries specifically. Bandwidth problems also manifest differently, showing throughput degradation and network congestion metrics rather than specifically slow query execution.

Database read replica configuration (D) improves read performance by distributing queries across multiple database instances. Implementing read replicas represents an architectural enhancement that increases capacity but requires application modifications to route appropriate queries to replicas, introduces eventual consistency considerations, and involves infrastructure changes. Before implementing read replicas, investigating simpler configuration issues like connection pooling makes more sense. Read replicas address capacity limitations after confirming that existing resources are properly configured and utilized.

The correct answer is A.

Question 222: 

An organization wants to implement a cloud backup solution that provides immutable backups to protect against ransomware attacks. Which of the following features would BEST support this requirement?

A) Backup versioning

B) Object lock with retention policies

C) Incremental backup scheduling

D) Backup encryption

Answer: B

Explanation:

Ransomware attacks have evolved to specifically target backup systems, recognizing that organizations with intact backups can recover without paying ransom demands. Modern ransomware variants attempt to delete or encrypt backups before attacking primary data, eliminating recovery options and forcing victims into difficult decisions. Protecting backup data from ransomware requires implementing immutability features that prevent modification or deletion of backup files even by privileged accounts or compromised credentials.

Object lock with retention policies provides immutability by preventing deletion or modification of stored objects for specified time periods. When object lock is enabled with retention policies, backup files cannot be altered or removed until the retention period expires, even by administrators with full access permissions. This write-once-read-many protection ensures that ransomware cannot encrypt or delete backup data even if attackers compromise administrative credentials or gain elevated privileges in the backup environment.

Cloud storage services typically implement object lock through two retention modes: governance mode, which prevents most users from deleting objects but allows specifically authorized users to override protection, and compliance mode, which provides absolute immutability where no user can delete or modify objects until retention periods expire. Compliance mode offers stronger protection against determined attackers who may have compromised highly privileged accounts.

Retention policies define how long objects remain immutable, typically set to periods longer than ransomware dwell time—the duration between initial compromise and ransom deployment. Organizations commonly implement 30, 60, or 90-day retention periods ensuring multiple generations of backups remain protected. Some implementations use legal hold features that maintain immutability indefinitely until explicitly released, providing additional protection for critical backups.
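
On AWS S3, for example, immutability can be configured roughly as shown in the boto3 sketch below; the bucket name, region, retention mode, and retention period are placeholders, and Object Lock is typically enabled when the bucket is created.

```python
# Sketch: create a backup bucket with S3 Object Lock and a default compliance-mode retention.
# Bucket name, region, mode, and retention period are illustrative placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(
    Bucket="example-immutable-backups",
    ObjectLockEnabledForBucket=True,   # also enables versioning, which Object Lock requires
)

s3.put_object_lock_configuration(
    Bucket="example-immutable-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)
print("Backups written to this bucket are immutable for 90 days.")
```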

Implementation of immutable backups requires integration between backup software and storage services supporting object lock features. Backup applications write data to storage with object lock enabled, and retention policies are automatically applied based on backup policies. Organizations should verify that backup processes correctly implement immutability and regularly test recovery procedures to ensure protected backups can be restored successfully.

Additional ransomware protection measures complement immutable backups including air-gapped backup copies physically or logically isolated from production networks, separate administrative credentials for backup systems to prevent lateral movement from compromised accounts, monitoring and alerting for unusual backup deletion attempts, and regular backup testing to verify recoverability.

Backup versioning (A) maintains multiple versions of backed-up files, allowing recovery from various points in time. While versioning provides protection against accidental deletion and enables recovery from different time periods, versions can still be deleted or corrupted by ransomware or malicious actors with appropriate permissions. Versioning does not provide the immutability needed to protect against determined attackers who specifically target backup systems. An attacker with access to the backup system can delete all versions, eliminating recovery options.

Incremental backup scheduling (C) optimizes backup efficiency by copying only data that changed since the previous backup. Incremental backups reduce storage requirements and backup windows but do not provide protection against ransomware. Backup schedules determine how frequently data is protected and how much data may be lost between backups, but scheduling does not prevent attackers from deleting or encrypting backup files. Incremental backups still require protection through immutability features to defend against ransomware.

Backup encryption (D) protects backup data confidentiality by rendering data unreadable without decryption keys. While encryption is essential for protecting sensitive data in backups, particularly when stored in cloud environments, it does not prevent ransomware from deleting backup files or encrypting them with additional layers of encryption. Backup encryption addresses confidentiality requirements but does not provide the integrity and availability protection that immutability offers against ransomware attacks.

The correct answer is B.

Question 223: 

A cloud architect is designing a solution that requires processing streaming data from IoT devices in real-time. The solution must analyze data as it arrives and trigger actions based on specific conditions. Which of the following cloud services would be MOST appropriate?

A) Batch processing service

B) Stream processing service

C) Data warehouse service

D) Content delivery network

Answer: B

Explanation:

Modern applications increasingly require real-time data processing capabilities to extract immediate value from continuously generated data streams. Internet of Things devices, application logs, financial transactions, social media feeds, and sensor networks produce continuous data flows that organizations must analyze as the data arrives to detect patterns, identify anomalies, trigger alerts, and drive automated responses. Selecting appropriate services for real-time processing requires understanding the characteristics of different data processing paradigms.

Stream processing services are specifically designed to ingest, analyze, and act upon data in real-time as it flows through the system. These services handle continuous data streams from multiple sources, apply transformations and analytics on moving data without storing it first, identify patterns and conditions across time windows, and trigger downstream actions or route data to various destinations based on analysis results. Stream processing operates on unbounded data sets where new data continuously arrives without predetermined endpoints.

Cloud stream processing platforms provide managed services that handle infrastructure scaling, fault tolerance, and operational complexity. Key capabilities include data ingestion from numerous sources through standardized protocols; real-time analytics such as filtering, aggregation, joins, and pattern matching; windowing functions that analyze data across temporal boundaries such as sliding or tumbling windows; and output routing to databases, data lakes, notification services, or other applications based on processing results.
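
To make the windowing and triggering ideas concrete, the following standalone Python sketch groups synthetic sensor readings into fixed tumbling windows and fires an action when a window's average crosses a threshold. Device names, values, and the threshold are illustrative; a managed stream processing service applies the same logic continuously and at much larger scale.

```python
# Standalone sketch of tumbling-window stream logic: group readings into
# fixed 10-second windows, average each sensor per window, and trigger an
# action when the average exceeds a threshold.
from collections import defaultdict

WINDOW_SECONDS = 10
TEMP_THRESHOLD = 80.0  # hypothetical alert condition


def trigger_alert(device_id, window_start, avg):
    # In a real pipeline this might publish to a notification topic or queue.
    print(f"ALERT {device_id}: avg {avg:.1f} in window starting at {window_start}s")


def process_stream(readings):
    """readings: iterable of (timestamp_seconds, device_id, temperature)."""
    windows = defaultdict(list)
    for ts, device_id, temp in readings:
        window_start = ts - (ts % WINDOW_SECONDS)  # tumbling-window key
        windows[(window_start, device_id)].append(temp)

    for (window_start, device_id), temps in sorted(windows.items()):
        avg = sum(temps) / len(temps)
        if avg > TEMP_THRESHOLD:  # condition evaluation drives the action
            trigger_alert(device_id, window_start, avg)


# Usage with a few synthetic readings
process_stream([
    (0, "sensor-1", 78.0), (3, "sensor-1", 84.0), (7, "sensor-1", 85.0),
    (11, "sensor-2", 70.0), (14, "sensor-2", 72.0),
])
```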

For IoT scenarios, stream processing services excel at handling high-volume data from thousands or millions of devices, performing edge analytics to reduce data transmission, detecting anomalies requiring immediate attention like equipment failures or security breaches, and triggering automated responses such as sending alerts, adjusting device configurations, or initiating maintenance workflows. The real-time nature ensures minimal latency between data generation and action, which is critical for applications like predictive maintenance, fraud detection, or safety monitoring.
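
On the ingestion side, a device or gateway typically publishes readings into a managed stream that the processing service consumes. A minimal AWS-flavored sketch, assuming a hypothetical stream named iot-telemetry:

```python
# Minimal sketch: an IoT-side producer sending readings into a managed
# stream. "iot-telemetry" is a hypothetical stream name; downstream stream
# processing consumes and analyzes these records in real time.
import json
import time

import boto3

kinesis = boto3.client("kinesis")


def publish_reading(device_id, temperature):
    kinesis.put_record(
        StreamName="iot-telemetry",
        Data=json.dumps({
            "device_id": device_id,
            "temperature": temperature,
            "ts": time.time(),
        }).encode("utf-8"),
        PartitionKey=device_id,  # keeps each device's readings ordered per shard
    )


publish_reading("sensor-1", 84.2)
```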

Stream processing architectures implement resilient, scalable pipelines that automatically handle infrastructure failures, scale capacity based on data volume, and guarantee data processing even during system disruptions. These services integrate with other cloud offerings including message queues, databases, machine learning services, and monitoring systems to create comprehensive real-time data platforms.

Batch processing service (A) analyzes large volumes of data in scheduled or triggered jobs that process complete data sets collected over time. Batch processing excels at historical analysis, data transformations, report generation, and complex analytics requiring access to entire data sets. However, batch processing introduces latency measured in minutes, hours, or days between data generation and results availability. For IoT scenarios requiring real-time analysis and immediate action triggering, batch processing cannot meet responsiveness requirements. Batch approaches are appropriate for periodic reporting, trend analysis, and retrospective investigation but not real-time streaming analytics.

Data warehouse service (C) provides optimized storage and query capabilities for large volumes of structured data used in business intelligence and analytical reporting. Data warehouses organize data for efficient querying, support complex analytical queries across historical data, and integrate data from multiple sources into unified schemas. While essential for business analytics, data warehouses are designed for query-based analysis of stored data rather than real-time processing of streaming data. Loading data into warehouses, indexing for query performance, and executing analytical queries all introduce latency incompatible with real-time IoT processing requirements.

Content delivery network (D) distributes static and dynamic content to edge locations near users for improved performance and availability. CDNs cache content like web pages, images, videos, and application assets across geographic locations, reducing latency for end users. While CDNs handle high traffic volumes and provide excellent scalability, they focus on content delivery rather than data processing. CDNs do not provide analytics capabilities, condition evaluation, or action triggering required for IoT stream processing scenarios.

The correct answer is B.

Question 224: 

A company needs to ensure that its cloud infrastructure complies with PCI DSS requirements for handling credit card data. Which of the following controls is specifically required by PCI DSS?

A) Multi-factor authentication for administrative access

B) Annual security awareness training

C) Network segmentation to isolate cardholder data environment

D) All of the above

Answer: D

Explanation:

The Payment Card Industry Data Security Standard is a comprehensive information security framework mandating specific controls for organizations that store, process, or transmit credit card information. PCI DSS was developed by major payment card brands to reduce fraud and protect cardholder data through consistent security practices across all organizations handling payment information. Understanding PCI DSS requirements is essential for architecting compliant cloud environments that handle payment transactions.

PCI DSS organizes requirements into twelve high-level objectives covering network security, access control, monitoring, policies, and data protection. Multiple specific controls fall under these objectives, and organizations must implement all applicable requirements to achieve compliance. The standard applies regardless of whether infrastructure is on-premises, in the cloud, or in hybrid environments, though cloud deployments require careful attention to shared responsibility models.

Multi-factor authentication for administrative access is explicitly required by PCI DSS Requirement 8.3, which mandates MFA for all personnel with administrative access to the cardholder data environment. This control significantly reduces the risk of unauthorized access through compromised credentials, which represents one of the most common attack vectors against payment systems. Cloud environments must implement MFA for console access, API authentication, SSH connections, and any other administrative interfaces that could provide access to systems handling cardholder data.
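
One common cloud-native way to enforce this is a conditional identity policy that denies administrative actions unless the session was authenticated with MFA. The sketch below is AWS-flavored and uses a hypothetical policy name; other platforms provide equivalent conditional-access controls.

```python
# Sketch of an AWS-style identity policy that denies all actions unless the
# caller authenticated with MFA. The policy name is hypothetical; attach the
# resulting policy to administrative roles for the cardholder data environment.
import json

import boto3

require_mfa_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWithoutMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="cde-admin-require-mfa",
    PolicyDocument=json.dumps(require_mfa_policy),
)
```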

Annual security awareness training is required by PCI DSS Requirement 12.6, mandating formal security awareness programs for all personnel. Training must occur upon hire and at least annually thereafter, covering payment card handling procedures, security responsibilities, phishing and social engineering awareness, and incident reporting procedures. Cloud environments require training on cloud-specific security considerations including proper configuration of cloud services, understanding shared responsibility models, and secure development practices for cloud applications.

Network segmentation to isolate the cardholder data environment is addressed by PCI DSS Requirement 1.2, which requires implementing network security controls that restrict connections between untrusted networks and systems in the cardholder data environment. Segmentation reduces PCI DSS scope by isolating payment processing systems from other infrastructure, limiting the number of systems requiring full compliance controls. Cloud implementations use virtual private clouds, security groups, network ACLs, and micro-segmentation to create isolated network zones with strictly controlled traffic flows.
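
As a hedged illustration, the sketch below creates a security group for a cardholder-data database tier that accepts traffic only from a specific application-tier security group; all IDs, names, and the port are hypothetical placeholders.

```python
# Sketch: restrict the cardholder data environment's database tier so that
# only the payment application tier can reach it. VPC ID, security group ID,
# and port are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

cde_sg = ec2.create_security_group(
    GroupName="cde-db-tier",
    Description="Cardholder data environment - database tier",
    VpcId="vpc-0123456789abcdef0",  # hypothetical VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=cde_sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,   # hypothetical database port
        "ToPort": 5432,
        # Only members of the application-tier security group may connect.
        "UserIdGroupPairs": [{"GroupId": "sg-0abc1234def567890"}],
    }],
)
```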

Implementing all three controls is mandatory for PCI DSS compliance, along with numerous other requirements including encryption of cardholder data in transit and at rest, secure configuration of all systems, vulnerability management programs, access control policies, logging and monitoring, regular security testing, and maintained information security policies. Cloud organizations must carefully map PCI DSS requirements to cloud services and configurations, ensuring all controls are properly implemented in cloud-native ways.

Cloud-specific PCI DSS considerations include validating that cloud providers maintain their own PCI DSS compliance where applicable, implementing proper encryption key management with customer-controlled keys, ensuring logging captures all required events across cloud services, restricting API access through multi-factor authentication and principle of least privilege, and maintaining clear responsibility matrices defining which security controls are managed by cloud providers versus customers.
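
A minimal sketch of customer-controlled key management, assuming an AWS-style key management service and a hypothetical alias name:

```python
# Sketch: create a customer-managed encryption key and enable automatic
# rotation so that keys protecting cardholder data remain under customer
# control. The alias name is hypothetical.
import boto3

kms = boto3.client("kms")

key = kms.create_key(
    Description="Customer-managed key for cardholder data encryption",
    KeyUsage="ENCRYPT_DECRYPT",
)
key_id = key["KeyMetadata"]["KeyId"]

kms.create_alias(AliasName="alias/cde-data-key", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)
```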

Organizations undergo regular PCI DSS assessments by qualified security assessors who validate that all requirements are properly implemented. Assessment scope, frequency, and rigor depend on transaction volumes and processing methods. Cloud migrations often provide opportunities to improve PCI DSS compliance by implementing modern security controls, reducing scope through segmentation, and leveraging cloud provider security features.

Since all three options represent required PCI DSS controls, any answer selecting only one or two options would be incorrect and indicate an incomplete understanding of PCI DSS requirements. Comprehensive compliance requires implementing the full spectrum of required controls across all twelve primary requirements.

The correct answer is D.

Question 225: 

A cloud administrator is configuring auto-scaling for a web application. During testing, the administrator notices that instances are being terminated immediately after being added to the load balancer, causing service disruptions. Which of the following is the MOST likely cause?

A) Health check configuration is too aggressive

B) Scaling policy threshold is set incorrectly

C) Instance type is undersized for the workload

D) Security group rules are blocking traffic

Answer: A

Explanation:

Auto-scaling mechanisms in cloud environments automatically adjust compute capacity based on demand, improving application availability and optimizing costs. Effective auto-scaling requires proper configuration of scaling policies, health checks, load balancer settings, and instance initialization procedures. Misconfigurations can cause service disruptions, failed deployments, and capacity problems. Understanding the relationship between health checks and auto-scaling behavior is crucial for reliable application operations.

Health check configuration determines how load balancers and orchestration platforms assess whether application instances are functioning properly and capable of handling traffic. Health checks typically involve periodic connection attempts to specific ports, HTTP requests to designated endpoints, or execution of custom scripts that verify application readiness. When instances fail health checks, load balancers remove them from traffic rotation, and auto-scaling systems may terminate and replace them with new instances.

Aggressive health check configurations use short intervals between checks, low failure thresholds requiring only one or two consecutive failures before marking instances unhealthy, brief timeout periods for health check responses, and complex health check logic that may fail during normal application operations. When health checks are too aggressive, instances may be marked unhealthy before applications complete initialization, during temporary resource contention, or due to transient network issues.

The described symptom—instances terminated immediately after being added to load balancers—strongly indicates health check timing problems. When new instances launch, applications must complete initialization including operating system boot, application startup, dependency loading, cache warming, and connection establishment before they can respond to health checks. If health checks begin immediately after instance launch and fail before applications are ready, auto-scaling systems interpret this as instance failure and terminate instances, creating a cycle of continuous launch and termination.

Proper health check configuration balances responsiveness with reliability: check intervals that allow sufficient time between checks without excessive delay in detecting real failures, failure thresholds that require multiple consecutive failures before marking an instance unhealthy, timeout periods that accommodate normal response-time variation, and an initial delay that gives new instances time to complete startup before health checks begin. A configuration along these lines is sketched below.
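
The following sketch shows what such a relaxation might look like when the load balancer health check is managed through an API; the target group ARN and specific values are hypothetical and should be tuned to measured startup behavior.

```python
# Sketch: relax an overly aggressive health check on a load balancer target
# group. The target group ARN is a hypothetical placeholder.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/web-app/0123456789abcdef"
    ),
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,   # check less frequently
    HealthCheckTimeoutSeconds=10,    # allow slower responses during warm-up
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=5,       # require several consecutive failures
)
```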

Many cloud platforms provide instance warm-up or grace period settings specifically designed to prevent premature health check failures. These settings delay health check evaluation or ignore initial failures for newly launched instances, giving applications time to initialize properly. Configuration should account for actual application startup times determined through testing, including all initialization activities that occur before applications can successfully respond to health checks.
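
A minimal sketch of setting such a grace period, assuming a hypothetical auto-scaling group name:

```python
# Sketch: give newly launched instances a startup grace period before
# load-balancer health checks can mark them unhealthy. The group name is
# hypothetical; the grace period should exceed measured application startup.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,  # seconds; tune to real startup time
)
```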

Troubleshooting health check issues involves examining application logs for startup timing and errors, reviewing load balancer logs showing health check results and timing, monitoring instance lifecycle events showing launch and termination patterns, and testing health check endpoints manually to verify response behavior. Solutions typically involve increasing initial delay periods, adjusting failure thresholds, simplifying health check logic, or optimizing application startup procedures.
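
One simple way to measure actual startup time is to poll the health endpoint from the moment an instance launches; the sketch below assumes a hypothetical instance address and /health path.

```python
# Sketch: poll a new instance's health endpoint to measure how long it takes
# to start answering successfully. The address and path are hypothetical.
import time

import requests

url = "http://10.0.1.25:8080/health"  # hypothetical instance address and path
start = time.time()
while time.time() - start < 600:       # give up after 10 minutes
    try:
        response = requests.get(url, timeout=5)
        if response.status_code == 200:
            print(f"Instance became healthy after {time.time() - start:.0f}s")
            break
    except requests.RequestException:
        pass  # application not listening yet; keep polling
    time.sleep(5)
else:
    print("Instance never became healthy within the test window")
```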

Scaling policy threshold set incorrectly (B) affects when auto-scaling triggers capacity changes based on metrics like CPU utilization or request counts. Incorrect thresholds might cause excessive scaling or insufficient capacity but would not cause immediate instance termination after launch. Scaling policies determine when to add or remove capacity, while health checks determine whether individual instances are functioning properly. The symptom of immediate termination after addition indicates health check problems rather than scaling trigger issues.

Instance type undersized for workload (C) occurs when selected compute resources lack sufficient CPU, memory, or network capacity for application requirements. Undersized instances might experience performance problems, timeouts under load, or resource exhaustion, but typically would not cause immediate termination after launch unless applications completely fail to start due to insufficient resources. Undersizing would more likely manifest as slow response times or sporadic failures under load rather than systematic immediate termination.

Security group rules blocking traffic (D) would prevent network communication between load balancers and instances, causing health check failures and potentially preventing traffic routing. However, security group issues typically manifest consistently across all instances rather than specifically affecting newly launched instances. If security groups were incorrectly configured, existing instances would also fail health checks. The pattern of immediate termination after launch suggests timing-related health check configuration rather than persistent network connectivity issues.

The correct answer is A.