Amazon AWS Certified Security — Specialty SCS-C02 Exam Dumps and Practice Test Questions Set 1 Q1-15
Question 1:
A security team is designing an access control strategy for an Amazon RDS for PostgreSQL database that stores regulated financial data. The team must ensure that developers can connect to the database only through IAM authentication and must eliminate the use of static passwords. The solution must enforce short-lived authentication, centralise credential control, and allow the use of temporary tokens for session-based access. Which approach best meets these requirements?
A) Enable RDS IAM database authentication, create IAM users for each developer, and provide long-term AWS access keys for connectivity.
B) Use AWS Secrets Manager to rotate database credentials automatically and distribute the rotated passwords to developers for manual connection.
C) Enable RDS IAM database authentication, use IAM roles with AWS CLI or SDK-generated authentication tokens, and require developers to authenticate through IAM before obtaining temporary tokens for database login.
D) Store the database password in AWS Systems Manager Parameter Store (SecureString) and enforce MFA before developers can retrieve the password.
Answer:
C
Explanation:
Option A proposes enabling RDS IAM database authentication and creating IAM users for each developer, but then providing the developers with long-term AWS access keys. This immediately violates one of the key requirements: eliminating long-term credentials. By providing developers with long-term access keys, the organisation exposes itself to potential credential leaks, compromised devices, or accidental key upload to public repositories, which is a common root cause of security incidents. Additionally, long-term access keys do not enforce the temporary session model required for short-lived authentication. While RDS IAM authentication is indeed part of the correct direction, combining it with long-term access keys directly contradicts the overall security posture desired. Thus, Option A cannot be the correct answer.
Option B suggests using AWS Secrets Manager to rotate database credentials automatically, distributing the rotated passwords to developers. This improves security but does not meet the core requirement of eliminating static passwords. Even with rotation, developers still receive a password that persists until the next rotation event. This means access is not short-lived and not authenticated via IAM directly. Although Secrets Manager is valuable for automated rotation, the organisation in the scenario explicitly wants RDS IAM authentication with temporary tokens to avoid static database credentials. Furthermore, Secrets Manager distribution still creates an operational burden and a potential security gap if a password is leaked shortly after retrieval. Therefore, Option B does not satisfy the requirement.
Option C aligns exactly with AWS’s recommended secure access practices for RDS IAM authentication. With this model, the database accepts IAM-based authentication tokens generated by the AWS CLI or SDK. These tokens are time-bound (typically expiring after 15 minutes), satisfying the requirement for short-lived access. Developers do not store any passwords. Instead, they retrieve a temporary signed token after authenticating with IAM using their assigned role. This ensures centralized access control because IAM dictates which developers can generate valid tokens. Furthermore, IAM policies can restrict database user mapping, strengthen separation of duties, and require MFA before token generation. It also aligns with regulatory standards that discourage the use of passwords that cannot be centrally revoked instantly. For these reasons, Option C precisely meets all requirements and reflects best practice.
Option D uses AWS Systems Manager SecureString to store the database password and enforce MFA before retrieval. While this does add a security layer by using MFA, the organisation explicitly states that static passwords must be eliminated. Storing passwords anywhere, even in encrypted form, does not accomplish that. Additionally, SecureString does not provide session-based time-limited tokens; it merely stores secrets. Developers would still manually use the extracted password, and the authentication would not be tied to IAM's temporary credentials. MFA retrieval also does not prevent the password from being compromised once retrieved because the developer still has possession of a static credential. Therefore, Option D does not meet the requirements.
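To make the token flow in Option C concrete, here is a minimal Python sketch using boto3 and psycopg2. The host name, port, database name, and database user are placeholders rather than values from the scenario, and the PostgreSQL user is assumed to have been granted the rds_iam role:

import boto3
import psycopg2

DB_HOST = "mydb.example.us-east-1.rds.amazonaws.com"  # placeholder endpoint
DB_PORT = 5432
DB_USER = "app_developer"  # placeholder user granted the rds_iam role in PostgreSQL

rds = boto3.client("rds", region_name="us-east-1")

# Generate a signed token that is valid for 15 minutes and replaces a static password
token = rds.generate_db_auth_token(DBHostname=DB_HOST, Port=DB_PORT, DBUsername=DB_USER)

# IAM database authentication requires an SSL/TLS connection
conn = psycopg2.connect(host=DB_HOST, port=DB_PORT, user=DB_USER,
                        password=token, dbname="finance", sslmode="require")

Because the token is derived from the caller's IAM credentials, access can be revoked centrally by changing the IAM role, with no database password to rotate or leak.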
Question 2:
A company is building a multi-account architecture using AWS Organizations. The security team needs centralized visibility into all AWS CloudTrail logs across accounts, must prevent any member account from disabling or modifying logging, and must ensure that logs are stored in an immutable, secure, write-once bucket. Which solution provides the strongest centralized control?
A) Enable CloudTrail in each account manually and configure each trail to deliver logs to its own S3 bucket with bucket versioning and MFA delete.
B) Use an organization-wide CloudTrail created in the management account, store logs in a centralized S3 bucket with required bucket policies, enable S3 Object Lock in compliance mode, and restrict member accounts using Organizations service control policies.
C) Allow each member account to configure CloudTrail but require logs to be forwarded through CloudWatch Logs subscription filters to a centralized logging account.
D) Create a multi-region CloudTrail in each account and use AWS Config aggregators to collect trail configurations centrally.
Answer:
B
Explanation:
Option A is inadequate because enabling trails manually in each account results in decentralized configuration management. Each account has administrative autonomy over its trails unless locked down with SCPs, and even then the logs are stored in separate buckets. This disperses visibility, complicates auditing, and increases operational overhead. Additionally, MFA delete cannot be enabled through the S3 console; it can only be turned on by the bucket owner's root user via the AWS CLI or API, which makes it impractical to roll out consistently at scale. Even with MFA delete, accounts could still disable their local CloudTrail unless organizational guardrails are in place, which this option does not mention. Therefore, Option A does not provide the required centralized visibility or enforced configuration.
Option B outlines exactly what AWS recommends for enterprise-level governance: create an organization-wide trail that applies to all member accounts and regions automatically. By using a centralized S3 bucket, the organisation achieves complete visibility and consistency. Adding S3 Object Lock in compliance mode makes the logs immutable and tamper-proof, guaranteeing regulatory compliance for retention. Bucket policies can restrict member accounts from modifying or deleting any objects. Finally, using service control policies (SCPs) ensures that no member account can disable the organisation trail or alter its configuration. This directly fulfils every requirement listed in the scenario and thus is the correct answer.
Option C involves forwarding logs through CloudWatch Logs subscription filters. While forwarding can centralise visibility, it does not enforce CloudTrail enablement at the account level. Each account still retains control over its logs and could disable CloudTrail unless SCPs are used. Additionally, CloudWatch Logs forwarding does not offer the same level of immutability as S3 Object Lock, nor does it prevent log alteration at the source. Therefore, Option C does not meet strong tamper-protection requirements.
Option D suggests using AWS Config aggregators to collect CloudTrail configuration details. Config aggregators only gather metadata and configuration states; they do not centralise CloudTrail log data or enforce immutability. Even with multi-region CloudTrails in each account, administrators in those accounts could disable or alter their trails unless SCPs specifically prevent it. Config aggregators are helpful for compliance reporting, but not for centralised secure log storage. Thus, Option D cannot be correct.
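As a rough sketch of Option B under stated assumptions (the trail name, bucket name, and account IDs are placeholders, and the central bucket is assumed to already exist with Object Lock enabled in compliance mode), the organization trail and a guardrail SCP might look like this in Python with boto3:

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Organization trail created in the management account; covers all member accounts and regions
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName="central-audit-logs-example",  # placeholder central bucket
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-audit-trail")

# SCP attached at the organization root so member accounts cannot tamper with logging
SCP_DENY_TRAIL_CHANGES = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail", "cloudtrail:UpdateTrail"],
        "Resource": "*",
    }],
}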
Question 3:
A company operates a set of containerised workloads on Amazon ECS using Fargate. The security team must implement a solution that provides real-time threat detection, monitors container runtime behaviour, identifies suspicious system calls, detects unauthorised network connections, and sends findings to a central security account. The solution must require no agents installed manually and must operate at the cloud-native service level. Which solution best meets these needs?
A) Deploy an EC2-based intrusion detection system in a separate VPC and route all container traffic through it using VPC peering.
B) Use Amazon GuardDuty with the EKS Protection and Runtime Monitoring features enabled to monitor container activities across the environment.
C) Enable Amazon Detective and configure log ingestion from all ECS tasks for security graph analysis.
D) Configure AWS Config rules to evaluate ECS task definitions and alert on insecure parameters such as privileged mode.
Answer:
B
Explanation:
Option A suggests deploying an EC2-based intrusion detection system in a separate VPC and routing all container traffic through it using VPC peering. This is highly impractical in a modern serverless container environment such as ECS Fargate. Fargate tasks do not expose underlying hosts and do not allow traditional agent-based or network-inline security monitoring tools to function effectively because Fargate abstracts the host layer entirely. Routing all traffic through a separate EC2 IDS instance introduces significant architectural complexity, latency, and operational overhead. It also fails to capture container-specific runtime signals such as system calls, internal process behaviour, and file execution monitoring, because these require kernel-level visibility that an inline IDS cannot obtain. Therefore, Option A does not meet the requirements.
Option B is the correct answer because Amazon GuardDuty now offers container-aware protections, including EKS Protection and Runtime Monitoring that extend to ECS on Fargate. These capabilities detect suspicious container behaviour without requiring customers to install or manage agents manually. GuardDuty integrates natively with AWS CloudTrail, VPC Flow Logs, DNS logs, EKS audit logs, and the GuardDuty Runtime Monitoring component. With runtime monitoring enabled, GuardDuty inspects runtime events such as system calls, processes, file access, and anomalous container actions. It alerts on unusual network behaviours such as outbound connections to suspicious domains or lateral movement attempts. Findings are automatically delivered to a security account configured as the delegated administrator for GuardDuty. This directly fulfils the requirements of real-time detection, no manual agent installation, cloud-native monitoring, and centralised security operations. Therefore, Option B satisfies every requirement fully.
Option C proposes enabling Amazon Detective and ingesting data from ECS tasks for security graph analysis. Detective is a powerful service for investigating security incidents by correlating different types of event data into a graph model, but it does not perform real-time runtime detection or monitor container system calls. Detective consumes data from sources such as GuardDuty, CloudTrail, and VPC Flow Logs, so it supports investigation after a finding rather than generating runtime detections itself. Therefore, Option C does not meet the real-time threat detection requirement.
Option D suggests configuring AWS Config rules to evaluate ECS task definitions for insecure parameters. This is valuable for preventive security and ensuring that ECS tasks do not run with unsafe configurations such as privileged mode or broad IAM permissions. However, AWS Config is a compliance and configuration assessment tool; it does not monitor runtime system-level behaviour. It cannot detect network anomalies, suspicious processes, or unauthorised system calls. Therefore, Option D is relevant for compliance but completely insufficient for runtime threat detection. It provides proactive guardrails but not the real-time detection required by the scenario.
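As a sketch of how Option B could be enabled programmatically, the snippet below turns on GuardDuty Runtime Monitoring with AWS-managed Fargate agents via boto3. The feature and configuration names reflect the detector-feature API as generally documented and should be verified against current documentation:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Enable Runtime Monitoring and let GuardDuty manage the ECS Fargate sidecar agent
guardduty.update_detector(
    DetectorId=detector_id,
    Features=[{
        "Name": "RUNTIME_MONITORING",
        "Status": "ENABLED",
        "AdditionalConfiguration": [
            {"Name": "ECS_FARGATE_AGENT_MANAGEMENT", "Status": "ENABLED"},
        ],
    }],
)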
Question 4:
A financial services company must encrypt all data stored in Amazon S3 and ensure that only the security team can manage encryption keys. Developers must be able to read and write data, but they must never have the ability to modify key policies or disable key rotation. The company also requires access audits showing which identities used specific keys for decryption operations. Which approach best satisfies these requirements?
A) Enable default encryption with Amazon S3-managed keys (SSE-S3) on all buckets and allow developers full access to the buckets.
B) Use client-side encryption with keys generated and stored on developer workstations, uploading encrypted data to S3.
C) Use SSE-KMS with customer-managed CMKs, grant developers encrypt/decrypt permissions only, and restrict key administration to the security team while enabling CloudTrail logging for key usage.
D) Use SSE-KMS with AWS-managed KMS keys and allow all developers full key access since AWS manages key rotation automatically.
Answer:
C
Explanation:
This scenario focuses on encrypting S3 data using a secure and controlled key management strategy. The company needs to enforce strong encryption, limit key administration to the security team, provide developers the ability to use encryption without controlling keys, and maintain audit trails of key usage. Evaluating each of the four options through the lenses of key control, auditability, developer permissions, and best-practice compliance will lead us to the correct solution.
Option A suggests enabling default encryption with SSE-S3. While SSE-S3 encrypts data at rest, the keys are fully managed by Amazon S3. This means the security team cannot control key policies, cannot restrict key usage to specific identities, and cannot enforce fine-grained permissions. It also does not provide detailed CloudTrail audit logs of key usage since SSE-S3 does not generate per-request KMS usage events. Therefore, Option A fails to meet the requirements for administrative control and auditability.
Option B proposes client-side encryption with developer-generated keys. This contradicts the requirement that the security team must control the keys. Client-side encryption puts key generation, storage, and use entirely in the developer’s hands, making the security team unable to enforce governance. It also distributes sensitive cryptographic material across multiple developer devices, increasing risk. Additionally, AWS cannot audit decryption operations performed outside AWS. Therefore, Option B does not satisfy any of the security control requirements.
Option C is the correct solution. Using SSE-KMS with customer-managed CMKs provides comprehensive administrative control. The security team can define key policies, restrict administrative actions, enforce key rotation, and monitor usage via CloudTrail. Developers can be granted only the permissions to encrypt and decrypt using the CMK, while being denied access to modify key policies, disable rotation, or delete keys. Each encryption or decryption action is logged by CloudTrail, providing detailed audit trails showing which IAM principal used the key, when, and for which operation. This creates strong governance aligned with compliance frameworks such as PCI DSS and financial regulatory standards. Option C meets the requirements in full.
Option D proposes using AWS-managed KMS keys. While AWS-managed keys provide encryption and automatic rotation, administrators have no control over their policies. Developers would effectively inherit broad access because the AWS-managed key cannot be restricted through customer-managed key policies. The security team cannot control the administration of the key, cannot restrict developer access, and cannot configure custom policies. Although AWS-managed keys generate CloudTrail logs, the lack of administrative control violates one of the core requirements. Therefore, Option D is insufficient.
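A trimmed-down key policy sketch for Option C is shown below as a Python dict. The account ID and role names are placeholders; a real policy would be tailored to the organisation's actual roles:

KEY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Security team administers the key but does not need to use it for data
            "Sid": "SecurityTeamAdministersKey",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/SecurityTeamRole"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                       "kms:Update*", "kms:Revoke*", "kms:Disable*",
                       "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
            "Resource": "*",
        },
        {   # Developers may only encrypt and decrypt; no administrative actions
            "Sid": "DevelopersUseKeyOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DeveloperRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}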
Question 5:
A company uses Amazon API Gateway and AWS Lambda to host a sensitive internal API. The security team needs to ensure that only traffic originating from the company’s corporate network can invoke the API. The solution must block public internet access entirely, enforce network-level restrictions, and allow secure internal consumption without exposing endpoints publicly. Which approach best meets these requirements?
A) Protect the API with an API key and distribute the key to internal users only.
B) Use a Lambda authorizer to verify that requests originate from the corporate IP range.
C) Configure API Gateway as a private API accessible only through an interface VPC endpoint associated with the corporate network.
D) Deploy a WAF web ACL to block all traffic except the company’s IP addresses.
Answer:
C
Explanation:
Option A suggests protecting the API with an API key. While API keys add a layer of identification, they are not security credentials and do not prevent access from the public internet. Anyone with the API key—even unintentionally—can access the API from anywhere. API keys do not enforce network restrictions and can be leaked easily. Therefore, Option A does not satisfy the requirement of blocking all public access.
Option B proposes using a Lambda authorizer to check the source IP. While the Lambda authorizer can evaluate the IP address in the request context, this requires that the API be publicly exposed so the request reaches the authorizer in the first place. Network-level blocking cannot be achieved using a Lambda authorizer because evaluation happens after the request has already passed through API Gateway’s public front door. This violates the requirement to block the public entirely. Furthermore, IP addresses can be spoofed in some contexts, and relying solely on application-level logic is not considered a secure network-level restriction. Therefore, Option B cannot be correct.
Option C is the correct solution. By configuring API Gateway as a private API, the endpoint is not reachable over the public internet at all. A private API can only be accessed through an interface VPC endpoint (powered by AWS PrivateLink). The company can then allow only its corporate network—connected through VPN or Direct Connect with appropriate routing—to access the interface endpoint. This enforces strict network-level access control and ensures that the API is fully isolated from the internet. It also aligns with best practices for internal APIs that must never be exposed publicly. Therefore, Option C satisfies every requirement comprehensively.
Option D suggests using AWS WAF to block all traffic except from corporate IPs. While WAF can restrict access based on IP addresses, it is still attached to a public endpoint. The API remains publicly reachable, and the WAF only filters requests at the application layer after the endpoint is exposed. This fails the requirement to prevent public exposure entirely. WAF is valuable for Layer 7 protection, but it cannot make a public endpoint private. Thus, Option D does not meet the requirement.
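To illustrate Option C, a common resource policy pattern for a private API is sketched below as a Python dict (the VPC endpoint ID is a placeholder). The explicit Deny ensures that requests arriving from anywhere other than the approved interface endpoint are rejected:

PRIVATE_API_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Reject any call that did not come through the approved interface endpoint
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc123example"}},
        },
        {   # Allow invocation for traffic that passed the endpoint check above
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
    ],
}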
Question 6:
A company stores highly sensitive research data in several Amazon S3 buckets. The security team must ensure that no objects can ever be uploaded to these buckets unless they are encrypted using SSE-KMS with a specific customer-managed CMK. The team must also prevent developers from disabling bucket encryption, uploading unencrypted data, or modifying bucket policies. The solution must apply organisation-wide and prevent misconfigured buckets in the future. What is the MOST effective way to achieve these requirements?
A) Enable default encryption using SSE-S3 on each bucket and require developers to use S3 encryption headers.
B) Create an SCP that denies all S3 PutObject operations unless they include the correct KMS key ID, enforce bucket policies, and require encryption using SSE-KMS with the specified CMK.
C) Use S3 lifecycle rules to detect unencrypted objects and automatically re-encrypt them using the customer-managed KMS key.
D) Configure S3 EventBridge notifications triggered by PutObject events and trigger a Lambda function to reject uploads that are not encrypted.
Answer:
B
Explanation:
Option A suggests enabling default encryption using SSE-S3 and requiring developers to use encryption headers. While SSE-S3 does indeed provide encryption, it is managed by Amazon S3, not by the customer. The key requirement clearly states that encryption must specifically use SSE-KMS with a designated CMK. SSE-S3 does not meet the encryption standard required and does not provide fine-grained access controls through KMS key policies. Additionally, default encryption settings can be modified by users with S3 permissions unless restricted by a broader governance mechanism. Relying on developers to use encryption headers is risky, error-prone, and does not align with mandatory enforcement. Therefore, Option A does not satisfy the strict security and compliance requirements.
Option B is the correct answer because an SCP (Service Control Policy) can enforce encryption requirements at the organisation level across all accounts. An SCP can deny any S3 PutObject request that does not include the required SSE-KMS encryption key. This prevents the upload of unencrypted objects or objects encrypted with the wrong key. Additionally, SCPs can restrict the ability of developers to modify S3 bucket policies or disable default encryption. Because SCPs operate at the management account level, they override any permissions that users or roles might have inside member accounts. SCPs are ideal for preventing misconfiguration, enforcing encryption requirements, and ensuring organization-wide compliance, especially when handling sensitive or regulated data. This approach directly meets the requirement for strong, centralized enforcement and prevents misconfiguration both today and in the future.
Option C proposes using S3 lifecycle rules to detect unencrypted objects and re-encrypt them. Although lifecycle policies can transition or delete objects, they cannot perform re-encryption operations. They also do not block uploads of unencrypted objects. The requirement states explicitly that unencrypted uploads must be prevented, not corrected after the fact. Lifecycle rules also do not enforce organization-wide governance and can be modified by users. Therefore, Option C cannot satisfy the requirements.
Option D uses EventBridge notifications triggered by PutObject events to invoke a Lambda function that rejects unencrypted objects. While Lambda can inspect object metadata and potentially delete noncompliant objects, this is a reactive approach. It does not prevent the initial upload; the object must exist first for EventBridge to detect it. This results in potential compliance gaps, temporary exposure of unencrypted sensitive data, and a race condition between upload and deletion. It also depends on Lambda execution and the correct functioning of event triggers. This approach does not enforce governance across accounts and is too weak for sensitive data scenarios. Therefore, Option D is also not sufficient.
Given the requirements for mandatory enforcement, organization-level governance, and strict encryption using an approved CMK, Option B is the only solution that satisfies all aspects of the scenario. SCPs provide the most powerful and consistent means of preventing unencrypted object uploads and unauthorized configuration changes across all accounts.
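A minimal SCP sketch along the lines of Option B is shown below (the bucket pattern and key ARN are placeholders). Note that negated condition operators also match when the header is absent, so uploads that omit the encryption header are denied as well:

SCP_REQUIRE_APPROVED_KEY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny any upload that does not request SSE-KMS
            "Sid": "DenyNonKmsUploads",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::research-data-*/*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption": "aws:kms"}},
        },
        {   # Deny any upload that uses a key other than the approved CMK
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::research-data-*/*",
            "Condition": {"StringNotEquals": {
                "s3:x-amz-server-side-encryption-aws-kms-key-id":
                    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"}},
        },
    ],
}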
Question 7:
A company runs a high-volume authentication service using Amazon Cognito. Due to regulatory compliance, the security team must ensure that all authentication logs, token refresh events, sign-in failures, and configuration changes are captured and stored for long-term auditing. The logs must be immutable, protected from deletion, and centralized in a separate audit account. Which solution BEST meets these requirements?
A) Enable Amazon Cognito advanced security features and store logs in CloudWatch Logs within each application account.
B) Stream Cognito logs to Amazon Kinesis Data Streams and store output in S3 buckets located in each developer account.
C) Use CloudTrail data events and management events to log all Cognito-related activity, deliver the logs to a centralized S3 bucket with Object Lock, and restrict deletion through IAM and bucket policies.
D) Configure Cognito to export logs directly to an external SIEM system for long-term retention.
Answer:
C
Explanation:
Option A suggests enabling Cognito advanced security features and storing logs in CloudWatch Logs. While Cognito advanced security features add analytics around user sign-in behaviour and risk, they do not replace full authentication logging. CloudWatch Logs do not inherently provide immutability or long-term compliance-grade storage. Logs stored in CloudWatch are susceptible to deletion if improperly configured IAM policies allow it. In addition, logs remain within each application account, failing to meet the requirement for centralized auditing. Therefore, Option A is insufficient.
Option B involves streaming Cognito logs to Kinesis Data Streams and storing them in S3 buckets in developer accounts. This introduces fragmentation of logs, security risks, and unnecessary distribution. Storing logs in developer accounts violates the requirement to centralise logs in a separate audit account. Additionally, Kinesis does not guarantee immutability of logs once stored unless S3 Object Lock is enabled—and even then, granting developers write access poses a risk. Therefore, Option B does not meet compliance or immutability needs.
Option C is the correct solution because Amazon Cognito activity—including authentication attempts, token refresh activity, and configuration changes—can be logged using AWS CloudTrail. CloudTrail logs both management events and some data events. These logs can be delivered automatically to an S3 bucket in a centralized audit account. Enabling S3 Object Lock in compliance mode prevents deletion or alterations to logs, ensuring immutable storage. Combined with bucket policies and IAM restrictions, the security team can guarantee that developers cannot delete or modify logs. CloudTrail is the most complete logging system available for capturing Cognito activity, and it integrates seamlessly with centralized governance accounts. This aligns fully with compliance frameworks that require auditable, tamper-proof logging systems.
Option D proposes exporting logs directly to an external SIEM. While SIEM systems are often used for monitoring and long-term analytics, this option does not guarantee immutability within AWS. Furthermore, depending solely on an external system leaves room for integration failures, and it does not address the requirement for centralized AWS-native storage. SIEM exports complement AWS logging but cannot replace regulatory-grade log retention. Therefore, Option D is not sufficient.
Given the requirements for immutability, completeness of Cognito logging, centralization, and compliance, Option C is the only solution that meets all the criteria fully.
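As a sketch of the storage side of Option C (the bucket name and retention period are placeholders), the audit bucket could be provisioned with Object Lock as follows; Object Lock must be enabled at bucket creation time:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(
    Bucket="central-cognito-audit-logs-example",
    ObjectLockEnabledForBucket=True,
)

# Compliance mode: retention cannot be shortened or removed by any user, including root
s3.put_object_lock_configuration(
    Bucket="central-cognito-audit-logs-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)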
Question 8:
A global enterprise wants to restrict all outbound internet traffic from its AWS workloads. Only approved domains required for operational functions (such as patch repositories and monitoring services) should be allowed. The architecture must apply uniformly across multiple VPCs and accounts, with centralized governance and the ability to audit allowed and denied traffic. Which solution provides the MOST scalable and secure approach?
A) Configure security groups in each account to allow only specific domain-based rules.
B) Deploy NAT gateways in every VPC and apply custom route tables to block outbound access.
C) Use AWS Network Firewall with strict outbound rule groups, integrate with AWS Firewall Manager for organization-wide deployment, and restrict traffic using domain and IP reputation filtering.
D) Use VPC endpoints for all AWS services and disable all other outbound access in security groups.
Answer:
C
Explanation:
Option A suggests configuring security groups to allow specific domain-based rules. Security groups operate at Layer 4 (IP/Port) and do not support domain-based filtering. They cannot filter traffic based on domain names, nor can they enforce global outbound restrictions across multi-account environments without significant operational overhead. Managing security groups independently in multiple accounts fails to provide centralized governance and cannot enforce uniform outbound controls. Therefore, Option A cannot meet the requirements.
Option B proposes using NAT gateways and route tables to block outbound access. NAT gateways do not provide fine-grained filtering capability. They simply route outbound traffic to the internet. Route tables cannot enforce domain-level restrictions or provide action-based filtering. Additionally, this approach does not offer centralized management or audit capabilities. Implementing this across many accounts is labor-intensive and error-prone. Therefore, Option B does not meet the governance or filtering requirements.
Option C is the correct solution. AWS Network Firewall supports deep packet inspection, domain filtering, fully qualified domain name (FQDN) filtering, IP reputation rules, and custom rule groups. When integrated with AWS Firewall Manager, Network Firewall can be deployed centrally across all VPCs within an organization, ensuring consistent governance. Firewall Manager enforces policies across member accounts, preventing misconfiguration and enabling centralized auditing. Network Firewall provides robust visibility into allowed and denied requests, aligns with enterprise governance models, and scales efficiently. This meets all requirements: centralized enforcement, domain-based allowlists, auditing, and multi-account uniformity.
Option D suggests using VPC endpoints and disabling all other outbound access. While VPC endpoints provide secure private connectivity to AWS services, they do not filter outbound connections to external domains. Disabling all outbound access also breaks functionality that relies on external domains such as patching repositories, certificate services, container registries, and monitoring systems. This approach is too restrictive and does not support domain filtering or governance across accounts. Therefore, Option D is insufficient.
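A sketch of an outbound domain allowlist for Option C is below (the domain names and capacity are placeholders). Firewall Manager would then push a policy referencing rule groups like this across the organization:

import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

# Stateful rule group that permits only approved egress domains and drops everything else
nfw.create_rule_group(
    RuleGroupName="approved-egress-domains",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".amazonlinux.com", ".example-monitoring.com"],  # placeholder domains
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "ALLOWLIST",
            }
        }
    },
)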
Question 9:
A company uses AWS Lambda functions to process regulated healthcare data. The security team requires end-to-end encryption, including encryption in transit between Lambda and other AWS services. Additionally, all environment variables must be encrypted using a customer-managed CMK, and the team must prevent developers from accessing decrypted environment variable values. Which solution BEST satisfies these requirements?
A) Encrypt Lambda environment variables with AWS-managed KMS keys and rely on HTTPS for service communication.
B) Use customer-managed CMKs for Lambda environment variables, enforce VPC endpoints for all AWS services accessed by Lambda, and restrict IAM permissions so developers cannot decrypt environment variables.
C) Require developers to store environment variables in plaintext within code repositories and then load them at runtime.
D) Use Lambda layers to store secrets and restrict access to the layer.
Answer:
B
Explanation:
Option A suggests using AWS-managed KMS keys. While AWS-managed keys do encrypt environment variables, the organisation specifically requires customer-managed CMKs to maintain full administrative control, set rotation policies, and restrict decryption capabilities. Additionally, relying solely on HTTPS does not enforce private network paths. Option A lacks the granular control required for compliance and does not meet the customer-managed key requirement. Therefore, Option A is insufficient.
Option B is the correct solution. Using customer-managed KMS keys allows the security team full administrative control, enabling them to restrict key usage so developers cannot decrypt environment variables even though Lambda can. Lambda has the necessary permissions to decrypt values at runtime, but IAM policies can block developers from performing kms:Decrypt. Additionally, enforcing VPC endpoints ensures that communication between Lambda and services like S3, DynamoDB, or API Gateway remains entirely private and encrypted without traversing the public internet. This provides end-to-end encryption in transit and aligns with healthcare compliance requirements. This solution meets all requirements fully.
Option C proposes storing environment variables in plaintext in code repositories. This is a severe security violation. Storing secrets in code repositories risks exposure, violates compliance laws, and contradicts best practices. Secrets should never be embedded in source code. Therefore, Option C is unacceptable.
Option D suggests using Lambda layers to store secrets. Lambda layers are not designed for secrets management. Layers contain code or libraries, not encrypted secrets. Layers do not provide encryption or secure storage, and developers with access to the layer would see the secrets. This fails to meet the requirement for preventing developers from accessing decrypted values. Therefore, Option D cannot be correct.
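Two of Option B's building blocks are sketched below (the function name and key ARN are placeholders): attaching the customer-managed key to the function's environment variables, and an IAM statement for developer roles that explicitly denies decryption:

import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Encrypt environment variables at rest with the customer-managed key
lam.update_function_configuration(
    FunctionName="phi-processor",
    KMSKeyArn="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)

# Attached to developer roles: they can deploy code but never decrypt the variables
DENY_DEVELOPER_DECRYPT = {
    "Effect": "Deny",
    "Action": "kms:Decrypt",
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
}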
Question 10:
A company must detect unauthorized changes to security groups across multiple AWS accounts. The security team requires real-time alerts, centralized visibility, automatic remediation to revert changes to a known-good baseline, and enforcement of security policies across all accounts using AWS Organizations. Which solution is MOST effective?
A) Use Amazon CloudWatch Logs metric filters to detect changes and notify the security team.
B) Use AWS Config with organization-level rules to detect changes, send alerts to the security account, and use automatic remediation actions to revert unauthorized modifications.
C) Use GuardDuty to monitor for attacks that involve security group misconfigurations.
D) Use VPC Flow Logs to identify changes in allowed traffic patterns.
Answer:
B
Explanation:
Option A uses CloudWatch Logs metric filters. Although CloudTrail logs can be sent to CloudWatch and metric filters can detect changes, metric filters alone cannot perform automatic remediation or organization-wide enforcement. Additionally, CloudWatch cannot enforce guardrails across accounts. Thus, Option A does not meet the requirement.
Option B is the correct answer. AWS Config supports organization-wide rules and conformance packs. With AWS Organizations integration, Config can be deployed uniformly across accounts. Config rules can detect unauthorized changes to security groups in real time and trigger remediation actions using SSM Automation documents. These actions revert changes to a known baseline. Alerts can be sent to the security account through EventBridge. This solution provides full visibility, excellent governance, real-time detection, and automated rollback, meeting all requirements.
Option C proposes using GuardDuty. GuardDuty detects malicious activity but does not monitor configuration changes to security groups. While misconfigurations may lead to attacks that GuardDuty can alert on, it does not detect configuration drift itself. Therefore, Option C cannot satisfy the requirement.
Option D suggests using VPC Flow Logs. Flow logs capture traffic metadata, not configuration changes. They cannot detect when a security group rule is modified. Therefore, Option D does not meet the requirement.
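A sketch of the remediation wiring in Option B is shown below. The Config rule name and the SSM Automation runbook are hypothetical; in practice the rule would be deployed organization-wide (for example through a conformance pack) and the runbook would restore the approved security group baseline:

import boto3

config = boto3.client("config", region_name="us-east-1")

# Automatically run an SSM Automation runbook whenever the rule flags a security group
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "approved-security-group-baseline",  # hypothetical rule
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "RevertSecurityGroupBaseline",             # hypothetical runbook
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
    }]
)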
Question 11:
A company is running a fleet of Amazon EC2 instances that process sensitive healthcare information. The security team requires encryption of data at rest using customer-managed keys, enforcement of strict key-usage permissions, granular access control for encryption operations, and centralized auditing of all key-related activity. Which solution best meets these requirements?
A) Encrypt the EBS volumes with AWS managed keys and review CloudTrail logs in the same account.
B) Encrypt all EBS volumes with a single customer-managed KMS key and allow all developers full KMS key permissions.
C) Create a dedicated KMS customer-managed key with key policies granting minimal access, enforce IAM policies that restrict encryption/decryption permissions, enable automatic key rotation, and use CloudTrail for centralized audit logging.
D) Store encryption keys in an S3 bucket using SSE-S3 and distribute access via IAM roles.
Answer:
C
Explanation:
Option A uses AWS-managed keys, which simplifies encryption but does not provide the level of granular control that healthcare compliance demands. AWS-managed keys rotate automatically and reduce management overhead, but key policies cannot be fine-tuned in the same way customer-managed keys allow. Additionally, AWS-managed keys do not allow explicit permission boundaries around who can use the key for encrypt/decrypt operations. While CloudTrail can capture KMS usage events, the organisation loses the ability to restrict key usage at an extremely granular level. Therefore, Option A lacks the control needed for sensitive healthcare data protection.
Option B proposes encrypting all EBS volumes with a single customer-managed key while granting all developers full permissions. This gets key management wrong: allowing broad access violates the principle of least privilege. Sensitive healthcare data requires carefully controlled KMS permissions, strict role separation, and limited administrative ability to use or manage the keys. Broad developer permissions increase the likelihood of unauthorized decryption, accidental misuse, or privilege escalation. Using one key for all workloads also reduces separation of duties and increases the blast radius if the key is compromised. Thus, Option B is misaligned with the organization’s high-security requirements.
Option C fully aligns with AWS best practices for KMS governance in regulated industries. By creating a dedicated customer-managed key, the security team gains complete control over its lifecycle, key policies, access permissions, and auditing. Implementing a minimal-access key policy ensures that only approved roles or personnel can use the key for encryption or decryption. IAM policies can restrict usage even further, ensuring that no unauthorized individuals can decrypt protected data. Automatic key rotation improves overall security posture without requiring manual intervention. Finally, CloudTrail logs every KMS API call, such as Encrypt, Decrypt, GenerateDataKey, or UpdateKeyPolicy, giving the organisation complete visibility into all cryptographic events for auditing and incident investigations. This aligns perfectly with compliance frameworks like HIPAA. Therefore, Option C fulfills every requirement accurately.
Option D incorrectly treats S3 as a key store. SSE-S3 is server-side encryption with keys that Amazon S3 manages, so it does not give organizations direct access to or control over the encryption keys. It also does not provide fine-grained permissions or detailed audit trails for key usage. Furthermore, distributing access through IAM roles does nothing to provide control over the underlying key material because SSE-S3 is not intended for customer key management. This option fails nearly all of the organization’s requirements.
For these reasons, Option C is the best and only solution that provides strict control, regulatory compliance alignment, key lifecycle governance, detailed auditing, and the required level of security.
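A minimal provisioning sketch for Option C follows (the account ID and description are placeholders, and the policy shown is deliberately abbreviated; a production policy would split administration and usage as described above):

import boto3
import json

kms = boto3.client("kms", region_name="us-east-1")

# Abbreviated policy: account root retains control; add scoped admin/user statements in practice
key_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EnableRootAccountAdministration",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "kms:*",
        "Resource": "*",
    }],
}

key = kms.create_key(
    Description="EBS encryption key for healthcare workloads",
    Policy=json.dumps(key_policy),
)
kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])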
Question 12:
A company requires secure cross-account access for an AWS Lambda function in Account A that must read messages from an encrypted Amazon SQS queue in Account B. The SQS queue uses a customer-managed KMS key for encryption. The security team requires least-privilege access, prevention of key misuse, and full visibility of cross-account key usage. Which solution best achieves this?
A) Share the SQS queue with Account A and allow global KMS permissions on the customer-managed key.
B) Create an IAM user in Account B with access to the SQS queue and share the user credentials with the Lambda function in Account A.
C) Modify the KMS key policy in Account B to allow the Lambda execution role from Account A to use the key, grant the Lambda role permission to read the queue, and monitor cross-account usage with CloudTrail.
D) Copy the messages from SQS into an S3 bucket using SSE-S3, then allow Account A to access the bucket.
Answer:
C
Explanation:
Option A incorrectly proposes allowing global KMS permissions. Granting global permissions to a customer-managed key exposes it to potential unauthorized use, making it impossible to ensure the least-privilege model required by security-sensitive organizations. Such a permissive configuration violates best practices for KMS governance and introduces auditing ambiguity. While sharing the SQS queue is part of the cross-account access process, broad key permissions undermine the overall security goal.
Option B suggests creating an IAM user in Account B and sharing long-term credentials with the Lambda function in Account A. This is against AWS security best practices: long-term shared credentials increase the risk of compromise, are difficult to rotate securely, and undermine the principle of least privilege. Additionally, this does not provide granular auditing because the key usage is tied to a static credential rather than a controlled IAM role. This option should never be used for cross-account access.
Option C is the correct approach because it aligns with AWS’s recommended architecture for granting cross-account access to encrypted resources. To enable cross-account decryption, the KMS key policy in Account B must explicitly trust the Lambda execution role from Account A. This ensures that only the intended workload is allowed to use the key. Then, the Lambda role itself must be granted permission to read from the SQS queue via the appropriate queue policy. Together, these settings ensure that least privilege is enforced at both the key and resource levels. Additionally, CloudTrail logs all key usage, enabling full visibility into cross-account API calls. This satisfies the requirement for monitoring and auditing. Thus, Option C provides precise, controlled, secure access.
Option D suggests copying messages to S3 and using SSE-S3 encryption. While this may provide access, it introduces unnecessary complexity and does not meet the requirement of least privilege. It also alters the architecture and delays message processing. Because SSE-S3 does not use customer-managed keys, it does not help control key usage for the SQS queue. Therefore, Option D is not viable.
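The two policy statements at the heart of Option C are sketched below as Python dicts (account IDs, role name, and queue name are placeholders; here 111122223333 stands in for Account B and 444455556666 for Account A). The first goes in the KMS key policy in Account B, the second in the SQS queue policy in Account B:

# KMS key policy statement: trust only the specific Lambda execution role from Account A
ALLOW_LAMBDA_ROLE_DECRYPT = {
    "Sid": "AllowAccountALambdaToDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:role/OrderProcessorLambdaRole"},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
}

# SQS queue policy statement: the same role may consume messages, nothing more
ALLOW_QUEUE_READ = {
    "Sid": "AllowAccountALambdaToReadQueue",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:role/OrderProcessorLambdaRole"},
    "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
    "Resource": "arn:aws:sqs:us-east-1:111122223333:encrypted-orders-queue",
}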
Question 13:
A company needs to protect data stored in Amazon DynamoDB from unauthorized internal access. Security requirements include row-level access restrictions, limited permissions for developers, prevention of privilege escalation, and centralized governance over all access operations. Which approach meets these requirements?
A) Use IAM policies granting developers full read and write access to the DynamoDB table.
B) Enable DynamoDB point-in-time recovery and encrypt the table with AWS managed keys.
C) Use fine-grained access control by combining IAM identity policies with DynamoDB condition keys to restrict access based on item attributes, enforce least privilege, and monitor access through CloudTrail.
D) Store sensitive data in an S3 bucket and reference it from DynamoDB to simplify access control.
Answer:
C
Explanation:
Option A provides the exact opposite of what the security team requires. Granting developers full read/write access introduces unnecessary risk and violates core principles like least privilege and separation of duties. It also allows internal misuse and unauthorized access to all items within the table, making this option entirely unsuitable.
Option B deals with resilience, not access control. Point-in-time recovery protects against accidental deletion or corruption but does not restrict access permissions. Encrypting with AWS-managed keys provides basic security but lacks the fine-grained controls required. There is also no row-level access enforcement. Therefore, Option B does not fulfill the primary requirement of internal protection.
Option C accurately matches AWS best practices for securing sensitive data in DynamoDB. Fine-grained access control allows organizations to restrict access at the row or attribute level using IAM policies combined with DynamoDB condition keys such as leading keys or attribute-value conditions. Developers can be limited to only the items they are authorized to see. Additionally, IAM ensures no privilege escalation is possible because permissions are tightly scoped. CloudTrail provides visibility into all access attempts. This meets all the listed requirements precisely.
Option D introduces unnecessary architectural complexity by storing sensitive data in S3 and referencing it externally. This does not provide row-level access control for DynamoDB and actually increases the attack surface. Therefore, Option D fails to meet requirements.
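A sketch of Option C's fine-grained control is below (the table name and account ID are placeholders, and it assumes the table's partition key stores the caller's IAM unique user ID). The dynamodb:LeadingKeys condition key limits every operation to items whose partition key matches the caller:

FINE_GRAINED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/ResearchRecords",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${aws:userid}"]
            }
        },
    }],
}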
Question 14:
A company is designing a secure API hosted on Amazon API Gateway and backed by AWS Lambda. The security team requires strong authentication, prevention of unauthorized internal invocation, protection against credential theft, and centralized authorization management. Which solution best meets these requirements?
A) Use API keys with API Gateway and allow direct Lambda invocation from developers.
B) Integrate API Gateway with Cognito user pools for authentication, attach IAM authorization to Lambda, and restrict direct Lambda invoke permissions.
C) Allow Lambda to be invoked by any IAM user but enable CloudWatch logging for monitoring.
D) Use Lambda environment variables to store authentication tokens.
Answer:
B
Explanation:
Option A uses API keys, which are not meant for authentication. API keys provide no user identity context, are easily shared or leaked, and do not prevent unauthorized internal invocation of Lambda. Allowing direct Lambda invocation is a direct violation of least privilege. Option A fails multiple requirements.
Option B follows AWS best practices for secure API design. Cognito User Pools provide strong authentication, including MFA, token-based identity, and user lifecycle management. API Gateway can enforce this authentication and forward only authenticated and authorized requests. Restricting direct Lambda invocation ensures that internal actors cannot bypass API Gateway where security checks occur. IAM authorization on Lambda ensures only the API Gateway has permission to invoke the function. This creates a highly controlled access path, prevents credential theft, and centralizes authorization. Option B meets every listed requirement.
Option C violates least privilege by allowing any IAM user to invoke Lambda directly. This exposes the function to unnecessary internal risk, bypasses authentication, and cannot ensure credentials are not misused. Logging alone does not prevent unauthorized access. Thus, Option C is not appropriate.
Option D is highly insecure. Storing authentication tokens in environment variables exposes them unnecessarily, increases theft risk, and does not address authorization control paths. Therefore, Option D cannot be correct.
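Two pieces of Option B are sketched below in Python (all IDs and ARNs are placeholders): registering a Cognito User Pool authorizer on the REST API, and a Lambda resource-based policy that allows invocation only from that specific API:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")
lam = boto3.client("lambda", region_name="us-east-1")

# Cognito User Pool authorizer validates tokens before requests reach the backend
apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=["arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"],
    identitySource="method.request.header.Authorization",
)

# Only API Gateway, and only this specific API, may invoke the function
lam.add_permission(
    FunctionName="internal-api-handler",
    StatementId="AllowApiGatewayOnly",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/*/*/*",
)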
Question 15:
A financial organisation stores transaction data in Amazon S3. They must ensure protection against accidental deletion, insider misuse, and tampering. Additionally, auditors require immutable storage with compliance-grade retention. Which solution best meets these requirements?
A) Enable versioning and replication on the S3 bucket.
B) Use S3 Object Lock in governance mode, enforce bucket policies restricting deletion, and enable CloudTrail logging.
C) Use AWS Backup with scheduled backups of S3 objects.
D) Encrypt all objects with AWS-managed keys.
Answer:
B
Explanation:
Option A offers basic resilience. Versioning allows recovering deleted objects, and replication increases availability, but this does not prevent intentional deletion or tampering. An internal malicious actor could still delete all versions or alter content. Thus, Option A does not meet regulatory immutability requirements.
Option B is the correct solution because S3 Object Lock in governance mode provides write-once-read-many protection. Governance mode ensures that locked objects cannot be deleted or modified during their retention period unless a principal has been explicitly granted the special s3:BypassGovernanceRetention permission, which can be withheld from everyone except a tightly controlled break-glass role. Bucket policies further restrict delete operations, enhancing internal protection. CloudTrail logs all S3 access and deletion attempts, enabling full auditability. Together, this creates an immutable, regulatory-aligned storage model ideal for financial transaction records.
Option C provides backups but does not guarantee immutability or prevent tampering at the source. Backups can still be deleted or altered depending on permissions.
Option D encrypts data, but encryption does not protect against deletion or tampering. KMS protects confidentiality, not integrity or immutability.
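To show what Option B's governance-mode protection looks like in practice, here is a short sketch (the bucket, object key, and retention date are placeholders; the bucket must have been created with Object Lock enabled):

import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3", region_name="us-east-1")

# Governance mode blocks deletion and overwrite until the retention date,
# unless a principal holds the s3:BypassGovernanceRetention permission
s3.put_object_retention(
    Bucket="transaction-archive-example",
    Key="2024/11/txn-00001.json",
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2031, 1, 1, tzinfo=timezone.utc),
    },
)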