Amazon AWS Certified Security — Specialty SCS-C02 Exam Dumps and Practice Test Questions Set 2 Q16-30
Question 16:
A company wants to secure Amazon RDS instances containing sensitive customer data. The security team requires encryption at rest, encryption in transit, access restricted to only approved IAM roles, database credentials rotated automatically, and audit logging for all access and configuration changes. Which solution best meets these requirements?
A) Enable RDS encryption using AWS-managed keys, use SSL for connections, and allow developers full database access.
B) Use customer-managed KMS keys for RDS encryption, enforce IAM database authentication, enable SSL/TLS connections, rotate database credentials using AWS Secrets Manager, and enable CloudTrail for auditing.
C) Store database credentials in environment variables and rely on default RDS encryption.
D) Enable point-in-time recovery on the RDS instance and log access manually using application logs.
Answer:
B
Explanation:
This scenario requires securing sensitive RDS instances with a combination of encryption, access control, credential management, and auditing. Encryption at rest is critical to ensure that stored data remains unreadable even if the storage media is compromised. Using customer-managed KMS keys provides the organisation with full control over key policies, key usage permissions, and rotation, satisfying strict regulatory compliance. AWS-managed keys are simpler but offer less granular control and do not allow administrators to restrict access to only approved users. Option A is insufficient because developers retain broad access, violating least privilege principles and compliance requirements.
Encryption in transit is equally important because it protects data while moving between client applications and the database instance. Enforcing SSL/TLS connections ensures that sensitive data, such as customer information, cannot be intercepted or tampered with in transit. Combining this with IAM database authentication provides strong identity-based control without relying solely on static credentials, preventing unauthorised users from connecting even if network access is obtained.
Credential rotation is essential for compliance and security best practices. AWS Secrets Manager automates credential rotation for RDS, removing the risk associated with hard-coded or long-lived passwords. This mitigates the risk of credential compromise and reduces administrative overhead while ensuring continuous security compliance. Option C, which stores credentials in environment variables, introduces security risks because credentials can be exposed or mismanaged and does not provide automated rotation.
Audit logging is required to maintain visibility over database access and configuration changes. Enabling CloudTrail captures API calls to RDS, including instance creation, deletion, and modification, and combining this with database-native logs, such as the general or error logs, provides full traceability. Manual logging, as suggested in Option D, is error-prone, incomplete, and fails to provide the comprehensive view necessary for regulatory compliance.
Option B integrates all of these requirements. Encrypting with customer-managed KMS keys allows key usage to be tightly controlled, IAM authentication enforces identity-based access, SSL/TLS protects data in transit, Secrets Manager automates credential rotation, and CloudTrail provides centralised audit logging. This combination fully satisfies the security, compliance, and operational requirements, making it the best choice.
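As a rough illustration of how these controls come together, the boto3 sketch below provisions an encrypted instance with a customer-managed key and IAM database authentication. The instance identifier, key ARN, and engine settings are placeholders rather than values from the question, and Secrets Manager rotation and CloudTrail logging would be configured alongside it.

```python
import boto3

rds = boto3.client("rds")

# Encrypted PostgreSQL instance using a customer-managed KMS key and IAM database
# authentication. In-transit encryption is enforced separately, for example with a
# parameter group that sets rds.force_ssl = 1 for PostgreSQL.
rds.create_db_instance(
    DBInstanceIdentifier="customer-data-db",
    DBInstanceClass="db.r6g.large",
    Engine="postgres",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    MasterUserPassword="REPLACE_WITH_TEMPORARY_PASSWORD",  # rotated by Secrets Manager afterwards
    StorageEncrypted=True,                                  # encryption at rest
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    EnableIAMDatabaseAuthentication=True,                   # IAM-based access instead of static passwords
)
```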
Question 17:
A company is deploying an Amazon S3 data lake containing highly sensitive financial records. The security team must prevent accidental deletion, ensure immutability of objects for a minimum retention period, enforce access control at an organisation-wide level, and provide centralised auditing. Which solution best satisfies these requirements?
A) Enable S3 versioning and rely on application-level access control.
B) Use S3 Object Lock in compliance mode, apply bucket policies to enforce access control, enable AWS CloudTrail logging, and manage policies centrally via AWS Organisations.
C) Store objects in multiple S3 buckets without encryption and rely on MFA for deletion.
D) Periodically back up S3 objects to a separate account and manually track deletions.
Answer:
B
Explanation:
This scenario requires immutability, strong access control, organisation-wide enforcement, and centralised auditing for financial records. S3 Object Lock in compliance mode is a critical feature because it enforces write-once-read-many (WORM) protection, ensuring objects cannot be deleted or altered during the retention period, even by privileged users. This directly prevents accidental or malicious deletion and satisfies regulatory requirements for immutable storage. Option A, relying on versioning and application-level controls, does not guarantee immutability and can be circumvented by privileged users or misconfigured applications.
Applying bucket policies ensures that only authorised users or roles can perform actions on the S3 bucket, aligning with least-privilege principles. Centralised management through AWS Organisations allows consistent policy enforcement across multiple accounts, preventing misconfigurations that could lead to exposure. CloudTrail logging provides comprehensive visibility into all S3 operations, including object creation, deletion attempts, and policy changes, enabling full auditing. Option D, which suggests backups and manual tracking, introduces operational complexity and is prone to errors, while Option C fails to provide encryption or enforce immutability.
Using Object Lock with compliance mode, bucket policies, and CloudTrail ensures that all data lake objects are secure, immutable, and auditable, fulfilling organisational and regulatory requirements. This solution provides the strongest combination of security, compliance, and operational simplicity. Option B is the clear choice.
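A minimal boto3 sketch of the Object Lock portion is shown below. The bucket name and retention period are illustrative assumptions; the bucket policy, Organisations-level controls, and CloudTrail trail would be configured alongside it.

```python
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="example-financial-data-lake",
    ObjectLockEnabledForBucket=True,
)

# Default retention: every new object version is WORM-protected for 7 years in
# compliance mode and cannot be deleted or overwritten during that period.
s3.put_object_lock_configuration(
    Bucket="example-financial-data-lake",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```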
Question 18:
A company operates multiple Amazon EC2 instances that access sensitive internal APIs. The security team wants to enforce least-privilege access, centrally manage credentials, rotate them regularly, and ensure access logs are auditable. Which solution meets these requirements most effectively?
A) Store API keys in environment variables and manually rotate them.
B) Use AWS Systems Manager Parameter Store with SecureString parameters for API keys, grant EC2 instances IAM roles to retrieve them, enable automatic rotation, and monitor access using CloudTrail.
C) Hard-code credentials in EC2 application code and review access weekly.
D) Use IAM users with long-lived credentials for each EC2 instance.
Answer:
B
Explanation:
The requirements focus on least privilege, centralised credential management, automated rotation, and auditability. Option A, storing API keys in environment variables, exposes credentials to potential compromise and requires manual rotation, increasing the likelihood of human error and credential leakage. Hard-coding credentials, as in Option C, is even less secure and violates best practices. Long-lived IAM user credentials, as suggested in Option D, are difficult to rotate and provide unnecessary risk due to persistent validity.
AWS Systems Manager Parameter Store with SecureString parameters allows secure storage of secrets and integrates with IAM roles assigned to EC2 instances. Only instances with the proper IAM permissions can retrieve the credentials, ensuring least privilege. Automatic rotation can be configured through Secrets Manager, which rotates secrets natively, or through Systems Manager Automation for Parameter Store values, mitigating the risk of stale or compromised credentials. CloudTrail captures access events for auditing, providing visibility into which instances retrieved the secrets and when. Option B effectively addresses all requirements: centralised management, least privilege, automated rotation, and auditable access.
This solution is aligned with AWS security best practices for managing secrets in dynamic, distributed environments.
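To make the Parameter Store pattern concrete, here is a small sketch assuming a hypothetical parameter path, KMS key, and instance role name. It stores the secret as a SecureString and scopes the instance role to that single parameter.

```python
import json
import boto3

ssm = boto3.client("ssm")

# Store the API key as a SecureString encrypted with a customer-managed KMS key.
ssm.put_parameter(
    Name="/internal-api/prod/api-key",
    Value="REPLACE_WITH_SECRET_VALUE",
    Type="SecureString",
    KeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    Overwrite=True,
)

# Least-privilege inline policy for the EC2 instance role: read this one parameter only.
instance_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:GetParameter",
        "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/internal-api/prod/api-key",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="internal-api-ec2-role",
    PolicyName="read-internal-api-key",
    PolicyDocument=json.dumps(instance_policy),
)
```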
Question 19:
A company requires that all Lambda functions accessing sensitive customer data must be invoked only through approved API Gateway endpoints. Direct invocation of Lambda by internal staff or other services must be blocked, and invocation activity must be auditable. Which solution is most secure and compliant?
A) Allow all IAM users to invoke Lambda and rely on logging for monitoring.
B) Attach resource-based policies to Lambda functions that allow invocation only from specific API Gateway principals, and enable CloudTrail logging for all invocations.
C) Use environment variables to store invocation secrets and distribute them to approved users.
D) Protect the Lambda function with API keys and rely on developers not sharing the keys.
Answer:
B
Explanation:
This scenario requires preventing direct invocation while allowing only approved API Gateway access. Option A allows unrestricted invocation, violating least-privilege principles. Option C, using environment variables for secrets, does not prevent direct invocation and is insecure. Option D, using API keys, relies on human compliance and cannot prevent internal privilege abuse.
Attaching resource-based policies to Lambda functions provides a mechanism to enforce invocation permissions at the function level. By specifying API Gateway as the trusted principal, Lambda automatically rejects any invocation from unauthorised sources. CloudTrail logging provides a complete audit trail for all invocations, allowing security teams to monitor compliance. This solution ensures that only approved access paths are allowed, internal staff cannot bypass security controls, and invocation activity is fully auditable, meeting regulatory and internal security requirements. Option B is the most secure and compliant approach.
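Below is a minimal sketch of such a resource-based policy statement, added with the add_permission API; the function name and execute-api source ARN are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Allow only the named API Gateway stage/route to invoke the function.
lambda_client.add_permission(
    FunctionName="customer-data-processor",
    StatementId="AllowApprovedApiGatewayOnly",
    Action="lambda:InvokeFunction",
    Principal="apigateway.amazonaws.com",
    SourceArn="arn:aws:execute-api:us-east-1:111122223333:abc123defg/*/POST/customers",
)
```

Because the statement names apigateway.amazonaws.com as the principal and pins a specific SourceArn, invocations from other services or from users calling lambda:InvokeFunction directly are rejected unless some other statement explicitly allows them.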
Question 20:
A company must ensure that all sensitive S3 data is encrypted, only accessible through approved IAM roles, and that unauthorised deletion attempts are prevented. The security team also wants automated alerts for policy violations. Which solution best meets these requirements?
A) Enable S3 default encryption using SSE-KMS, create bucket policies restricting access to approved IAM roles, and configure CloudWatch Events or EventBridge rules to alert on policy violations.
B) Encrypt objects using client-side keys and distribute them to developers.
C) Enable SSE-S3 encryption and rely on developers to manage permissions manually.
D) Store objects in multiple buckets without encryption, but back them up daily.
Answer:
A
Explanation:
This scenario requires strong encryption, access control, protection against accidental or malicious deletion, and automated alerts. Option B, client-side encryption, increases the risk of key mismanagement. Option C relies on manual developer management, which is error-prone. Option D provides no encryption or access control.
Option A uses SSE-KMS for encryption, ensuring that only authorised IAM roles can access the objects. Bucket policies enforce strict access control, preventing unauthorised access or deletion. EventBridge or CloudWatch Events can detect policy violations or unauthorised attempts, generating real-time alerts for security teams. This combination provides encryption, access control, deletion protection, and automated monitoring, fully meeting security and compliance requirements. Option A is the optimal solution.
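The sketch below shows two of the building blocks under assumed names: default SSE-KMS encryption on the bucket, and an EventBridge rule that matches CloudTrail events for policy or encryption changes. A target such as an SNS topic would be attached to the rule to deliver the actual alert.

```python
import json
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Default bucket encryption with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket="example-sensitive-data",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
            },
            "BucketKeyEnabled": True,
        }]
    },
)

# EventBridge rule matching CloudTrail events for policy or encryption changes on the bucket.
events.put_rule(
    Name="s3-policy-change-alert",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventName": ["PutBucketPolicy", "DeleteBucketPolicy", "PutBucketEncryption"],
            "requestParameters": {"bucketName": ["example-sensitive-data"]},
        },
    }),
)
```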
Question 21:
A company manages multiple AWS accounts and must enforce organisation-wide encryption policies for Amazon S3 buckets. The requirement is to ensure that all objects are encrypted using customer-managed KMS keys, prevent developers from uploading unencrypted objects, and provide centralised audit logging of all bucket and object operations. Which solution best meets these requirements?
A) Enable SSE-S3 default encryption on all buckets and rely on developers to set encryption headers.
B) Implement an AWS Organisations Service Control Policy (SCP) that denies S3 PutObject requests unless the correct KMS key is used, enforce bucket policies restricting encryption changes, and enable CloudTrail logging to a centralised audit account.
C) Use S3 lifecycle policies to detect unencrypted objects and re-encrypt them automatically.
D) Create EventBridge rules that trigger Lambda functions to delete unencrypted objects after upload.
Answer:
B
Explanation:
This scenario requires strict enforcement of S3 bucket encryption across multiple accounts with centralised governance, ensuring that only approved KMS keys are used and that unencrypted uploads are blocked. AWS provides several mechanisms to achieve encryption, including default encryption, KMS integration, SCPs, and automation, but each option must be evaluated carefully.
Option A relies on SSE-S3 default encryption and developer compliance. While SSE-S3 encrypts objects at rest, it uses AWS-managed keys and does not meet the requirement for customer-managed KMS keys. Additionally, relying on developers to set encryption headers introduces human error and does not enforce organisation-wide compliance. This approach cannot guarantee that all objects are encrypted correctly, nor does it prevent future misconfigurations or non-compliant uploads.
Option B is the correct solution. SCPs in AWS Organisations allow centralised enforcement across all member accounts. By denying PutObject requests unless the correct KMS key is specified, the organisation ensures that no unencrypted or incorrectly encrypted objects can be uploaded, regardless of individual developer permissions. Bucket policies further restrict changes to encryption settings, preventing developers from disabling default encryption or modifying keys. CloudTrail logging to a centralised audit account provides comprehensive monitoring, allowing security teams to review all S3 and KMS operations, track compliance, and detect anomalies. This combination satisfies all requirements: strict enforcement, organisation-wide applicability, customer-managed KMS key usage, and auditability.
Option C, using S3 lifecycle policies to detect and re-encrypt unencrypted objects, is reactive rather than preventive. While lifecycle policies can transition or delete objects, they cannot prevent unencrypted uploads, and this approach allows temporary exposure of sensitive data. It also introduces delays and operational complexity, which may conflict with regulatory requirements for immediate encryption. Therefore, Option C is insufficient.
Option D, triggering Lambda functions to delete unencrypted objects via EventBridge, also operates reactively. Objects are uploaded and potentially exposed before Lambda deletes them. This method does not prevent unencrypted data from being stored temporarily and increases operational overhead and complexity. Additionally, this approach cannot prevent developers from disabling Lambda triggers or misconfiguring event rules. It fails to provide a preventive and organisation-wide enforcement mechanism, making it unsuitable.
Option B remains the only solution that guarantees preventive enforcement, centralised auditability, adherence to customer-managed KMS key usage, and protection against human error, making it the best choice.
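As an illustration, the sketch below creates and attaches an SCP that denies s3:PutObject unless SSE-KMS with the approved key is specified on the request. The key ARN and OU identifier are placeholders, and the exact condition keys may need adjusting to how uploads are performed in practice (for example, requests that rely on bucket default encryption do not carry these headers).

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
        {
            "Sid": "DenyWrongKmsKey",
            "Effect": "Deny",
            "Action": "s3:PutObject",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id":
                        "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
                }
            },
        },
    ],
}

response = org.create_policy(
    Name="require-approved-kms-key-for-s3",
    Description="Deny S3 uploads that do not use the approved customer-managed KMS key",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",
)
```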
Question 22:
A healthcare company is using Amazon RDS to store electronic health records. Security policies require encryption at rest and in transit, automated rotation of database credentials, strict IAM-based access control, and comprehensive auditing of all database operations and configuration changes. Which solution best addresses these requirements?
A) Enable RDS encryption with AWS-managed keys, use SSL for connections, and allow developers full access to the database.
B) Use customer-managed KMS keys for encryption, enable SSL/TLS for all database connections, enforce IAM authentication for RDS users, rotate credentials automatically using AWS Secrets Manager, and enable CloudTrail logging for all RDS operations.
C) Store database credentials in environment variables and rely on default RDS encryption.
D) Enable point-in-time recovery on the RDS instance and monitor logs manually.
Answer:
B
Explanation:
The requirements emphasise strong data protection, credential management, access control, and auditability for sensitive healthcare data. Option A relies on AWS-managed keys and developer-controlled access, which does not provide the granularity required for compliance or prevent accidental or malicious access. Developers having unrestricted access could compromise the database, violating the principle of least privilege. While SSL/TLS protects data in transit, the lack of fine-grained key control and automated credential rotation makes this approach inadequate.
Option B provides a comprehensive solution aligned with AWS security best practices. Customer-managed KMS keys give organisations full control over encryption policies, usage permissions, and rotation, ensuring regulatory compliance for sensitive healthcare data. Enforcing SSL/TLS ensures all data in transit is encrypted, preventing interception or tampering. IAM-based authentication integrates with identity management policies, eliminating the reliance on static database credentials and providing role-based access enforcement. Automatic rotation of credentials using AWS Secrets Manager minimises the risk of stale or compromised credentials, ensuring continuous security. CloudTrail logging records all RDS API calls, including database creation, configuration changes, and access events, providing a centralised audit trail for compliance and security monitoring.
Option C, storing credentials in environment variables, is insecure and prone to accidental disclosure. Environment variables may be exposed through misconfigurations, logs, or insider misuse. Default RDS encryption alone is insufficient because it does not provide customer-controlled key management or detailed auditability. Option C fails to meet the security and compliance requirements.
Option D focuses on recovery and manual monitoring, which does not enforce encryption, credential rotation, or access controls. Manual monitoring is error-prone and lacks centralised governance, making it unsuitable for compliance with healthcare regulations.
Option B fully satisfies all security requirements for sensitive healthcare data by providing strong encryption, automated credential management, fine-grained access control, and comprehensive auditability. It ensures compliance with regulatory frameworks and aligns with AWS security best practices.
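To illustrate the IAM authentication piece specifically, the sketch below generates a short-lived authentication token that replaces a static password; the endpoint, port, and database user are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Generate a short-lived IAM authentication token (valid for 15 minutes). The token is
# passed as the password over an SSL/TLS connection, for example with a PostgreSQL
# driver configured with sslmode="require".
token = rds.generate_db_auth_token(
    DBHostname="ehr-db.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="app_user",
    Region="us-east-1",
)
print(token[:40], "...")
```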
Question 23:
A financial organization uses Amazon S3 to store transaction data that must remain immutable for audit and regulatory purposes. The security team requires protection against accidental deletion, insider threats, and tampering, while enabling real-time monitoring of access and modifications. Which solution is most suitable?
A) Enable S3 versioning and rely on developer discipline to prevent deletion.
B) Use S3 Object Lock in compliance mode, enforce bucket policies to restrict deletion, and enable CloudTrail logging to monitor all access and attempts to modify objects.
C) Store backups in a separate S3 bucket without encryption and track changes manually.
D) Encrypt S3 objects with SSE-S3 and manage access manually without centralised logging.
Answer:
B
Explanation:
Financial data is subject to strict compliance regulations that require immutability, access control, and auditability. Option A relies on developer discipline, which is insufficient because human error or insider threats could result in accidental or malicious deletion. Versioning alone does not prevent deletion of all versions or modification of objects.
Option B is the correct solution. S3 Object Lock in compliance mode ensures that objects cannot be deleted or modified during the retention period, even by privileged users. Bucket policies restrict access to authorised personnel only, preventing accidental or malicious tampering. CloudTrail provides centralised logging for all S3 API calls, including attempts to delete or alter objects. This combination guarantees immutability, access control, and auditability, meeting regulatory and organisational requirements.
Option C, relying on backups, introduces operational overhead and cannot prevent tampering at the source. Manual tracking is error-prone and inadequate for compliance reporting. Option D, while encrypting data, does not prevent deletion or provide comprehensive monitoring, leaving data vulnerable to internal threats.
Option B is the only approach that ensures compliance-grade immutability, centralised monitoring, and protection against both accidental and intentional threats.
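A sketch of the access-control layer, using a hypothetical bucket name and custodian role: the bucket policy denies deletion and Object Lock configuration changes to every principal except the approved role (enabling Object Lock itself was shown in the earlier sketch).

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteExceptCustodian",
        "Effect": "Deny",
        "Principal": "*",
        "Action": [
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:PutBucketObjectLockConfiguration",
        ],
        "Resource": [
            "arn:aws:s3:::example-transaction-data",
            "arn:aws:s3:::example-transaction-data/*",
        ],
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/data-custodian"
            }
        },
    }],
}

s3.put_bucket_policy(Bucket="example-transaction-data", Policy=json.dumps(policy))
```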
Question 24:
A company uses AWS Lambda to process sensitive healthcare data. The security team requires that Lambda functions are invoked only through approved API Gateway endpoints, that internal direct invocation is blocked, and that all invocation events are auditable. Which solution best meets these requirements?
A) Allow all IAM users to invoke Lambda and rely on logging for monitoring.
B) Attach resource-based policies to Lambda functions allowing invocation only from specific API Gateway principals and enable CloudTrail logging for all Lambda invocations.
C) Use environment variables to store invocation secrets and distribute them to developers.
D) Protect Lambda with API keys and rely on developers not to share keys.
Answer:
B
Explanation:
The security requirements are centered on restricting invocation paths, enforcing least privilege, and ensuring auditability. Option A allows unrestricted invocation and cannot prevent internal misuse, violating least-privilege principles. Option C, using environment variables for secrets, does not enforce access restrictions and exposes sensitive tokens. Option D relies on API keys, which can be shared, copied, or misused, and does not provide centralised enforcement.
Option B is the correct solution. Lambda resource-based policies allow organizations to define which principals can invoke the function. By restricting invocation to API Gateway, the security team ensures that only approved workflows can trigger sensitive processing. CloudTrail captures all Lambda invocations, including successful and unauthorised attempts, providing full auditability and compliance reporting. This approach enforces strict access control, prevents internal abuse, and satisfies regulatory requirements. Option B fulfills all requirements effectively and securely.
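One detail worth showing in code: individual Lambda invocations are recorded by CloudTrail as data events, which a management-events-only trail does not capture. The sketch below adds a Lambda data-event selector to an existing trail whose name is a placeholder.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Record Invoke calls for all Lambda functions as CloudTrail data events.
cloudtrail.put_event_selectors(
    TrailName="organisation-audit-trail",
    EventSelectors=[{
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [{
            "Type": "AWS::Lambda::Function",
            "Values": ["arn:aws:lambda"],   # all functions; narrow to specific ARNs if preferred
        }],
    }],
)
```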
Question 25:
A company operates multiple Amazon EC2 instances that need to access sensitive internal APIs. Security requirements include least-privilege access, centralised credential management, automatic rotation of secrets, and auditable access logs. Which solution best meets these requirements?
A) Store API keys in environment variables and rotate them manually.
B) Use AWS Systems Manager Parameter Store with SecureString parameters, grant EC2 instances IAM roles to access secrets, enable automatic rotation, and monitor access using CloudTrail.
C) Hard-code credentials in EC2 applications and review access logs weekly.
D) Use IAM users with long-lived credentials for each EC2 instance.
Answer:
B
Explanation:
The organization requires secure credential management, automated rotation, least-privilege access, and auditability. Option A relies on manual rotation, which is error-prone and exposes secrets. Option C introduces high risk by hard-coding credentials, violating security best practices. Option D, long-lived IAM user credentials, is difficult to rotate and increases exposure risk.
Option B is the correct solution. AWS Systems Manager Parameter Store allows secure storage of sensitive parameters using SecureString encryption. EC2 instances can retrieve credentials via IAM roles, enforcing least-privilege access without exposing secrets to developers. Automatic rotation ensures that credentials remain fresh and reduces the likelihood of compromise. CloudTrail monitors all access to the parameters, providing a complete audit trail for compliance and security purposes. This approach meets all security and operational requirements efficiently.
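On the instance side, retrieval is a single call made with the credentials the attached IAM role supplies automatically; the parameter name below is the same hypothetical path used in the earlier sketch.

```python
import boto3

ssm = boto3.client("ssm")

# The instance role must allow ssm:GetParameter on this parameter and kms:Decrypt on its key.
api_key = ssm.get_parameter(
    Name="/internal-api/prod/api-key",
    WithDecryption=True,
)["Parameter"]["Value"]
```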
Question 26:
A company needs to enforce organization-wide encryption for Amazon EBS volumes attached to EC2 instances. The security team requires that only approved IAM roles can create and modify volumes, all volumes must use customer-managed KMS keys for encryption, and any unencrypted volumes should be automatically flagged and remediated. Which solution best meets these requirements?
A) Enable default EBS encryption using AWS-managed keys and rely on developers to set proper permissions.
B) Implement AWS Organizations Service Control Policies (SCPs) that deny EBS volume creation or modification unless a customer-managed KMS key is used, enforce IAM policies granting access only to approved roles, and create EventBridge rules that detect unencrypted volumes and trigger remediation actions.
C) Use manual audits to check for unencrypted volumes and request developers to fix them.
D) Store unencrypted EBS snapshots in S3 and encrypt them later using SSE-S3.
Answer:
B
Explanation:
The organization’s requirements emphasize preventive enforcement, strict role-based access control, and automatic remediation of non-compliant EBS volumes. AWS provides multiple mechanisms, such as default encryption, KMS integration, IAM policies, SCPs, and event-driven automation, but each option must be carefully evaluated.
Option A relies on default encryption with AWS-managed keys. While this ensures that new EBS volumes are encrypted, it does not allow the organization to control key policies or restrict access to approved IAM roles. AWS-managed keys cannot enforce least-privilege access or organization-wide compliance policies. Additionally, default encryption does not detect existing unencrypted volumes or trigger remediation, leaving gaps in compliance and operational risk.
Option B is the correct solution because it combines preventive, detective, and corrective controls. SCPs enforce organization-wide policies, denying any attempt to create or modify EBS volumes without specifying a customer-managed KMS key. This ensures encryption with organization-approved keys across all accounts. IAM policies further enforce least-privilege access by granting volume creation and modification permissions only to approved roles, minimizing exposure and preventing unauthorised changes. EventBridge rules can detect non-compliant or unencrypted volumes and trigger Lambda functions or Systems Manager Automation to remediate the issue, such as encrypting or alerting responsible teams. This combination ensures full compliance, enforces encryption, maintains strict access control, and provides automatic remediation, fully satisfying security requirements.
Option C relies on manual audits, which are reactive, error-prone, and cannot prevent non-compliance in real time. Manual processes are resource-intensive and may fail to meet regulatory or operational standards, making this approach insufficient.
Option D, storing unencrypted EBS snapshots in S3 and encrypting later, is operationally cumbersome and does not prevent exposure of sensitive data during the unencrypted phase. This reactive approach introduces risk and fails to enforce organization-wide preventive encryption policies.
Option B is the only option that provides preventive enforcement, least-privilege control, automated detection, and remediation for unencrypted EBS volumes, fully aligning with security best practices and organisational compliance requirements.
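The sketch below illustrates two of the pieces under assumed identifiers: account-level default encryption with the approved customer-managed key, and an EventBridge rule matching CloudTrail CreateVolume calls that request an unencrypted volume (a remediation Lambda or Systems Manager Automation document would be attached as the rule target). The exact CloudTrail field names in the event pattern should be verified against real events before relying on them.

```python
import json
import boto3

ec2 = boto3.client("ec2")
events = boto3.client("events")

# Guardrail: encrypt all new EBS volumes by default with the approved customer-managed
# key. These settings are per account and per Region.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/example-key-id"
)

# Detective control: flag CreateVolume calls that explicitly request an unencrypted volume.
events.put_rule(
    Name="unencrypted-ebs-volume-detected",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventName": ["CreateVolume"],
            "requestParameters": {"encrypted": [False]},
        },
    }),
)
```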
Question 27:
A company manages sensitive financial data in Amazon S3 and must ensure that objects are immutable for a regulatory retention period. They require protection against insider threats, accidental deletion, and unauthorised modifications, along with audit logging for all object access and policy changes. Which solution best meets these requirements?
A) Enable S3 versioning and rely on application-level controls to prevent deletion.
B) Use S3 Object Lock in compliance mode, apply bucket policies restricting access, and enable CloudTrail logging for all operations.
C) Store backups in a separate S3 bucket and track deletions manually.
D) Encrypt objects using SSE-S3 and allow developers to manage access.
Answer:
B
Explanation:
Financial data retention requires strict immutability, controlled access, and comprehensive auditing. Option A, enabling versioning, only allows recovery of prior object versions and does not prevent deletion or modification by privileged users. Relying on application-level controls is insufficient because developers or administrators may bypass controls, introducing significant compliance risks.
Option B is the correct solution. S3 Object Lock in compliance mode enforces a write-once-read-many (WORM) model, ensuring objects cannot be deleted or modified during the retention period, even by privileged users. Bucket policies enforce least-privilege access, restricting deletion and modification rights to authorised personnel only. CloudTrail logging records all object operations, including failed or unauthorised access attempts, providing a full audit trail necessary for regulatory compliance. Together, these mechanisms provide immutable storage, access control, and auditing, meeting all organisational and regulatory requirements.
Option C, using separate S3 backups, introduces operational overhead, does not prevent tampering at the source, and relies on manual tracking, which is error-prone and non-compliant with strict financial regulations. Option D, relying solely on SSE-S3 encryption with developer-managed access, does not enforce immutability or prevent deletion and lacks comprehensive audit visibility.
Option B is the only approach that fully ensures compliance-grade immutability, access control, and auditable activity for sensitive financial data.
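Beyond the bucket-level default retention shown earlier, compliance-mode retention can also be set or extended per object; the sketch below uses a hypothetical bucket, key, and retain-until date. Compliance mode allows retention to be extended but never shortened or removed.

```python
import datetime
import boto3

s3 = boto3.client("s3")

# Extend the compliance-mode retention of a single object beyond the bucket default.
s3.put_object_retention(
    Bucket="example-transaction-data",
    Key="2024/02/ledger-export.csv",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.datetime(2031, 12, 31, tzinfo=datetime.timezone.utc),
    },
)
```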
Question 28:
A company needs to secure cross-account access to an Amazon SQS queue encrypted with a customer-managed KMS key. Lambda functions in Account A must read messages from a queue in Account B, but internal users should not bypass these controls. The security team requires least-privilege access, prevention of key misuse, and full auditing. Which solution best achieves this?
A) Grant global KMS permissions to the queue and share IAM credentials with the Lambda function.
B) Modify the KMS key policy in Account B to allow the Lambda execution role from Account A to use the key, grant the Lambda role permission to read the SQS queue, and monitor all activity with CloudTrail.
C) Copy messages from SQS to S3 and grant Account A access to the bucket.
D) Allow all IAM users in Account A to access the queue directly.
Answer:
B
Explanation:
Cross-account access to encrypted SQS queues requires precise control over KMS key usage, resource policies, and auditing. Option A, granting global KMS permissions, exposes the key to misuse and violates least-privilege principles. Sharing IAM credentials is insecure and violates best practices, introducing risk of compromise.
Option B is correct. By modifying the KMS key policy to trust the Lambda execution role in Account A, the organization ensures that only authorised workloads can use the key. IAM permissions grant the Lambda role the ability to read messages from the queue. CloudTrail logging provides centralised audit visibility into all key usage and queue access, meeting compliance and security requirements. This approach prevents unauthorised access, enforces least-privilege access, and provides full auditability.
Option C, copying messages to S3, introduces operational complexity and potential data exposure, as messages exist in multiple locations, and access control is split across services. Option D allows unrestricted access, violating the principle of least privilege and exposing the queue to internal misuse.
Option B provides the most secure, compliant, and auditable approach, fully meeting the organization’s requirements.
Challenges of Cross-Account Access
Accessing encrypted SQS queues across AWS accounts involves multiple layers of security considerations. Unlike plain queues, encrypted queues rely on AWS KMS (Key Management Service) to protect the data at rest. Simply granting permissions at the queue level is insufficient because the encryption key itself enforces access controls. Therefore, a secure cross-account solution requires both IAM permissions and key policies that explicitly allow the authorised entity to use the encryption key. Without properly configured policies, even if the Lambda function can access the queue, it cannot decrypt the messages, leading to operational failure.
Risks of Granting Global KMS Permissions
Option A, which suggests granting global KMS permissions and sharing IAM credentials with the Lambda function, introduces severe security vulnerabilities. Global KMS permissions would allow any principal in any account to use the key, completely bypassing the principle of least privilege. This approach significantly increases the risk of unauthorised access, misuse, or accidental exposure of sensitive data. Sharing IAM credentials between accounts is highly insecure. Credentials could be intercepted, misused, or leaked, leading to potential compromise of both the queue and other resources in Account B. Additionally, this method creates challenges for auditing and accountability because actions performed by the shared credentials cannot be easily traced back to the individual or role responsible. From both a security and compliance perspective, this option is unacceptable.
Operational Complexity of Copying Messages to S3
Option C suggests copying messages from the SQS queue to an S3 bucket and granting Account A access to the bucket. While technically feasible, this approach introduces unnecessary operational complexity and risks. First, messages are duplicated, creating multiple points where sensitive data is stored. Each storage location must be secured and monitored, increasing administrative overhead. Second, access control becomes fragmented: one set of permissions governs the SQS queue, while another governs the S3 bucket. This separation can lead to misconfigurations and accidental exposure. Third, the data pipeline introduces potential latency, operational errors, and increased cost due to additional storage and transfer operations. From a compliance standpoint, duplicating sensitive data across services increases the attack surface and complicates audit reporting.
Why Modifying the KMS Key Policy is Optimal
Option B is the most secure and operationally efficient solution for cross-account encrypted SQS access. By modifying the KMS key policy in Account B to explicitly trust the Lambda execution role in Account A, you enforce least-privilege access directly at the encryption layer. This ensures that only the intended Lambda function can decrypt messages, even if other IAM users in Account A attempt access. The approach leverages AWS’s resource-based access controls, which are inherently secure and auditable.
In addition to the key policy, the Lambda execution role in Account A must be granted permissions to read messages from the SQS queue. This separation of responsibilities—key access versus queue access—aligns with best practices for cloud security, ensuring that no single permission provides more access than necessary.
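As a concrete sketch, the two statements might look like the following. The account IDs, role name, key, and queue identifiers are placeholders, and the KMS statement must be merged into the key's existing policy rather than replacing it.

```python
import json
import boto3

# Both statements live in Account B (the queue and key owner).
lambda_role_arn = "arn:aws:iam::111111111111:role/order-consumer-lambda-role"  # Account A role

# 1) Statement to merge into the KMS key policy so the Account A role may use the key.
key_policy_statement = {
    "Sid": "AllowAccountALambdaToUseThisKey",
    "Effect": "Allow",
    "Principal": {"AWS": lambda_role_arn},
    "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
    "Resource": "*",
}
print(json.dumps(key_policy_statement, indent=2))

# 2) Queue policy granting the same role read access to the encrypted queue.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAccountALambdaToReadMessages",
        "Effect": "Allow",
        "Principal": {"AWS": lambda_role_arn},
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
        "Resource": "arn:aws:sqs:us-east-1:222222222222:orders-queue",
    }],
}

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/222222222222/orders-queue",
    Attributes={"Policy": json.dumps(queue_policy)},
)
```

The Lambda execution role in Account A additionally needs identity-based permissions for the same sqs and kms actions; cross-account access works only when both sides allow it.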
Audit and Compliance Considerations
CloudTrail logging completes the secure configuration by providing centralised visibility into all actions. Every invocation, message retrieval, and decryption attempt is recorded, allowing security teams to monitor for unauthorised activity. CloudTrail logs can also be integrated with SIEM solutions to generate alerts for suspicious behavior or anomalies. This level of auditing is critical for compliance frameworks such as PCI DSS, HIPAA, or ISO 27001, which require strong controls over access to sensitive data and full traceability of all operations. Option B ensures that cross-account access is both controlled and auditable, satisfying regulatory and security requirements.
Operational Benefits
From an operational perspective, Option B is clean and maintainable. It avoids unnecessary data duplication, reduces administrative overhead, and simplifies the monitoring of cross-account activity. Updates to the Lambda function or queue configuration in Account A do not require changes to other storage systems or workflows, maintaining a clear and secure architecture. Furthermore, any future security audits or compliance assessments can rely on KMS and CloudTrail logs as definitive proof of controlled access.
Question 29:
A healthcare organization is deploying Amazon RDS instances containing protected health information. The security team requires encryption at rest and in transit, automated credential rotation, least-privilege access, and centralised auditing. Which solution best meets these requirements?
A) Enable RDS encryption using AWS-managed keys, allow developers full access, and use SSL for connections.
B) Use customer-managed KMS keys for encryption, enforce IAM-based RDS authentication, enable SSL/TLS connections, rotate credentials automatically using AWS Secrets Manager, and enable CloudTrail logging.
C) Store credentials in environment variables and rely on default RDS encryption.
D) Enable point-in-time recovery and monitor logs manually.
Answer:
B
Explanation:
Option A relies on AWS-managed keys and developer access, which does not enforce least-privilege access or fine-grained control over encryption keys. While SSL protects data in transit, developers having full access introduces significant risk.
Option B is the correct solution. Customer-managed KMS keys provide control over encryption, key rotation, and permissions, ensuring regulatory compliance. IAM-based RDS authentication enforces least-privilege access without relying on static credentials. SSL/TLS encrypts data in transit, preventing interception. AWS Secrets Manager automates credential rotation, reducing the risk of credential compromise. CloudTrail logs all API calls, configuration changes, and access events, providing centralised auditing for compliance and security purposes.
Option C exposes credentials to risk and lacks key management control. Option D is reactive and lacks preventive enforcement of encryption, access control, and credential management.
Option B fully addresses all security and compliance requirements for sensitive healthcare data.
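For the rotation requirement specifically, a minimal Secrets Manager sketch is shown below; the secret ARN and rotation Lambda (for example one created from the AWS-provided RDS rotation templates) are placeholders.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Enable automatic rotation of the database credential every 30 days.
secrets.rotate_secret(
    SecretId="arn:aws:secretsmanager:us-east-1:111122223333:secret:ehr-db-credentials-AbCdEf",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRDSRotation",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```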
Question 30:
A company wants to secure API access to Lambda functions processing sensitive financial transactions. The security team requires strong authentication, restricted invocation paths, prevention of internal bypass, and centralised auditing. Which solution best meets these requirements?
A) Allow all IAM users to invoke Lambda and rely on logging.
B) Attach resource-based policies to Lambda, allowing invocation only from approved API Gateway endpoints and enabling CloudTrail logging.
C) Use environment variables to store invocation secrets and distribute them.
D) Protect Lambda with API keys and rely on developers not to share them.
Answer:
B
Explanation:
Option A permits unrestricted invocation and cannot enforce least privilege or prevent internal misuse. Option C, storing secrets in environment variables, does not prevent unauthorised access. Option D relies on API key secrecy, which is ineffective against internal threats.
Option B enforces strict access control by attaching resource-based policies that permit invocation only from approved API Gateway endpoints. CloudTrail captures all Lambda invocations, including unauthorised attempts, ensuring full auditability. This approach enforces secure invocation paths, prevents internal bypass, and meets compliance requirements. Option B is the only solution that satisfies all security requirements effectively.
Security Risks of Unrestricted Access
Option A, which allows all IAM users to invoke a Lambda function and relies solely on logging, introduces significant security risks. In modern cloud environments, organisations often have multiple teams with varying levels of access. Allowing unrestricted invocation violates the principle of least privilege, which dictates that users should have only the access necessary to perform their duties. Without proper restrictions, internal actors—either intentionally or accidentally—can invoke Lambda functions in ways that compromise application integrity, leak sensitive data, or disrupt critical processes. Relying on logs alone, such as CloudTrail entries, is insufficient as a preventative measure; logs are retrospective, meaning they record events after they occur, which does not stop unauthorised actions from taking place in real-time. This approach exposes the organisation to compliance violations, operational risk, and potential security breaches.
Limitations of Environment Variable Secrets
Option C proposes storing invocation secrets in Lambda environment variables and distributing them to clients or internal systems. While Lambda environment variables can be encrypted with AWS KMS, this method does not inherently enforce access restrictions on who can invoke the function. Anyone who obtains the secret can trigger the Lambda function, rendering this approach ineffective for secure invocation management. Additionally, secrets stored in environment variables may be inadvertently exposed through misconfigured logs, versioning, or developer mishandling. This method also does not provide fine-grained control over invocation sources and does not integrate directly with AWS’s resource-based access control mechanisms, making it insufficient for meeting strict security and compliance requirements.
Drawbacks of API Key Protection
Option D relies on API keys to control access to Lambda and trusts developers not to share them. This method is inherently weak because API keys are a shared secret and can be easily misused if exposed. Internal actors with legitimate access to the key can bypass security controls, intentionally or unintentionally, without leaving immediate traces of misuse. Furthermore, API keys do not provide contextual access controls, such as source IP validation or resource-level restrictions. While API keys can complement a broader security strategy, relying solely on them for Lambda invocation security is inadequate, especially in environments subject to regulatory audits or internal security policies.
Benefits of Resource-Based Policies with Logging
Option B is the most secure and compliant approach. By attaching resource-based policies to the Lambda function, you can explicitly define which API Gateway endpoints are permitted to invoke the function. This ensures that only approved sources have access, effectively enforcing least privilege. The resource-based policy acts as a strong access control mechanism that prevents unauthorised invocations from both internal and external actors. Additionally, enabling CloudTrail logging ensures full auditability. CloudTrail records every invocation attempt, including failed or unauthorised requests, allowing security teams to monitor and investigate suspicious activity. This combination of access control and auditing satisfies key security principles: prevention, detection, and accountability.
Operational and Compliance Advantages
Beyond technical security, Option B also simplifies operational management and regulatory compliance. By controlling invocation at the resource level, administrators can centrally manage access, reduce human error, and maintain a clear separation of duties. CloudTrail integration provides an auditable trail necessary for regulatory reporting and internal governance. This setup ensures that the Lambda function cannot be invoked outside approved channels, mitigating risks of data leakage, unauthorised processing, and accidental disruptions.