CompTIA CAS-005 (SecurityX) Exam Dumps and Practice Test Questions, Set 1: Q1-15
Question 1:
Which of the following best describes the primary purpose of a security information and event management (SIEM) system in an enterprise environment?
A) To automatically block malicious traffic at the network edge
B) To collect, correlate, and analyze log data for detection and response
C) To manage user identities and enforce authentication policies
D) To encrypt data at rest across enterprise servers
Answer: B)
Explanation:
A SIEM is designed to centralize visibility and improve detection and response capabilities across an environment. Below is a focused analysis of each answer and why the correct selection is the best fit.
A) The capability to block malicious traffic at the network edge is typically a function of network security devices such as next-generation firewalls, intrusion prevention systems, or web application firewalls. These systems act in-line to inspect and block traffic based on rules or signatures. While some SIEM solutions can trigger automated playbooks that influence blocking (for example, by instructing a firewall to block an IP), active blocking is not the core, built-in purpose of a SIEM.
B) Collecting, correlating, and analyzing log data for detection and response captures the essential value proposition of a SIEM. SIEMs ingest log and event data from many sources — servers, endpoints, network devices, cloud services, and applications — normalize those logs, apply correlation rules, and surface alerts when patterns indicative of security incidents emerge. They provide dashboards, search capabilities, and often include case management, threat intelligence integration, and support for incident response workflows. This aggregation and correlation enable detection of complex, distributed attacks that would be difficult to identify by looking at a single data source.
C) Identity management and authentication policy enforcement are core functions of identity and access management (IAM) systems, directory services, and identity providers. These systems handle provisioning, single sign-on, multifactor authentication, and access control policies. While a SIEM can receive authentication logs and help detect anomalous authentication behavior — such as abnormal login times or impossible travel — it does not primarily manage identities or enforce authentication.
D) Encrypting data at rest is a data protection control implemented through disk encryption, file-level encryption, database encryption, or storage-layer encryption. Encryption protects confidentiality of stored data but is not a primary function of SIEM solutions. SIEMs may log encryption events (for example, key usage or access attempts), helping auditors and defenders, but they do not perform encryption across enterprise servers as their main role.
The reasoning for the correct answer rests on the difference between centralized telemetry and active enforcement. A SIEM’s strongest value is in centralized telemetry: continuous collection from heterogeneous sources, normalization for consistent interpretation, correlation to find multi-step intrusion patterns, and analysis to prioritize and support incident response. It acts as the organization’s nerve center for security monitoring rather than as an enforcement or identity management tool. While modern SIEM platforms increasingly incorporate automation, orchestration, and integrations that can feed defensive controls (pushing a blocking rule to a firewall, disabling an account), those are extensions of its analytics role. Therefore, the description that specifies collecting, correlating, and analyzing log data for detection and response precisely matches the SIEM’s core purpose.
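To make the correlation idea concrete, the following minimal Python sketch (a toy illustration, not any SIEM product's rule language) flags a source IP that generates several failed logins followed by a success within a short window, the kind of multi-step pattern a correlation rule is meant to surface.

```python
from datetime import datetime, timedelta

# Toy event stream in the shape a SIEM might see after normalization.
events = [
    {"ts": "2024-05-01T10:00:01", "src_ip": "203.0.113.7", "user": "alice", "outcome": "failure"},
    {"ts": "2024-05-01T10:00:05", "src_ip": "203.0.113.7", "user": "alice", "outcome": "failure"},
    {"ts": "2024-05-01T10:00:09", "src_ip": "203.0.113.7", "user": "alice", "outcome": "failure"},
    {"ts": "2024-05-01T10:00:20", "src_ip": "203.0.113.7", "user": "alice", "outcome": "success"},
]

def correlate(events, threshold=3, window=timedelta(minutes=5)):
    """Alert when >= threshold failures from one source precede a success within the window."""
    alerts = []
    failures = {}  # src_ip -> list of failure timestamps
    for ev in events:
        ts = datetime.fromisoformat(ev["ts"])
        ip = ev["src_ip"]
        if ev["outcome"] == "failure":
            failures.setdefault(ip, []).append(ts)
        elif ev["outcome"] == "success":
            recent = [t for t in failures.get(ip, []) if ts - t <= window]
            if len(recent) >= threshold:
                alerts.append(f"Possible brute force then success from {ip} for user {ev['user']}")
    return alerts

print(correlate(events))
```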
Question 2:
A security team is designing a backup strategy for critical systems and must ensure recovery within two hours after a disruption. Which metric should they primarily use to set recovery objectives?
A) Recovery Time Objective (RTO)
B) Recovery Point Objective (RPO)
C) Mean Time Between Failures (MTBF)
D) Mean Time To Repair (MTTR)
Answer: A)
Explanation:
A clear understanding of continuity and recovery metrics is essential to design backups that meet business requirements. Each term describes a different dimension of availability and recovery performance.
A) Recovery Time Objective defines the maximum allowable time to restore system functionality after an outage or disruption. It answers the question: how long can the business tolerate downtime for a specific process or system? If the requirement is to be operational within two hours, RTO directly sets that goal. Backups, restoration processes, infrastructure redundancies, and resource allocation are planned to meet the RTO target.
B) Recovery Point Objective specifies the maximum acceptable amount of data loss measured in time. It dictates how frequently backups or replication must occur so that, in the event of failure, the data restored will be no older than the RPO window (for example, 15 minutes of acceptable data loss). Though RPO is critical for data currency, it does not answer how quickly operations must be restored; it focuses on data recency.
C) Mean Time Between Failures measures the average operational time between inherent failures for a system or component. MTBF is typically used for reliability engineering and capacity planning, helping to estimate expected failure rates and maintenance needs. It is not a recovery planning metric and does not set targets for how quickly recovery must occur.
D) Mean Time To Repair is an operations metric that measures the average time required to repair or restore a failed component or system. MTTR relates to operational efficiency and can be an internal measure used to determine whether recovery processes meet RTO targets. However, MTTR is a measured performance statistic rather than the planning objective itself.
The reasoning that supports selecting the correct metric focuses on the difference between objectives and measurements. Recovery objectives (RTO and RPO) are business-driven requirements set during continuity planning; they define acceptable tolerance for downtime and data loss. Recovery Time Objective is the explicit specification of allowable downtime and therefore is used to size and design backups, failover mechanisms, and operational playbooks to meet the two-hour recovery requirement. RPO complements RTO by specifying how much data loss is tolerable, and operations teams must align both metrics: achieving a two-hour RTO with an RPO of, say, 15 minutes requires rapid backup/replication and fast restore capabilities. MTBF and MTTR are valuable operational metrics used to monitor system reliability and repair performance, but they are not the contractual or planning metric that sets the recovery deadline required by the business. Consequently, using RTO as the primary metric when planning for a two-hour recovery window is the correct approach.
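The relationship between the planning objectives and the measured figures can be expressed with simple arithmetic. The sketch below uses hypothetical numbers to check whether a measured restore time and a backup interval satisfy a two-hour RTO and a 15-minute RPO.

```python
from datetime import timedelta

# Hypothetical business targets set during continuity planning.
rto = timedelta(hours=2)      # maximum tolerable downtime
rpo = timedelta(minutes=15)   # maximum tolerable data loss

# Hypothetical operational measurements.
measured_restore_time = timedelta(hours=1, minutes=40)  # how long restoration actually takes (akin to MTTR)
backup_interval = timedelta(minutes=10)                 # worst-case data loss if failure hits just before the next backup

print("RTO met:", measured_restore_time <= rto)
print("RPO met:", backup_interval <= rpo)
```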
Question 3:
Which cryptographic algorithm is best suited for encrypting large volumes of data quickly while using minimal CPU resources?
A) RSA
B) AES
C) SHA-256
D) Diffie-Hellman
Answer: B)
Explanation:
Performance, purpose, and computational characteristics vary widely across cryptographic primitives; choosing the right algorithm depends on whether confidentiality, integrity, or key exchange is the goal, and on performance constraints.
A) RSA is an asymmetric algorithm typically used for key exchange, digital signatures, and encrypting small amounts of data (like keys). RSA uses large integer arithmetic and modular exponentiation, which is computationally expensive and slow for large payloads. For bulk data encryption, RSA is impractical due to performance and size limitations.
B) AES is a symmetric block cipher widely adopted for bulk data encryption. It is designed for high performance and low computational overhead; modern hardware often includes AES instruction set support (e.g., AES-NI) to accelerate encryption and decryption. AES operates on fixed-size blocks using efficient substitution-permutation steps and can be used in modes that support streaming, authenticated encryption, and parallelization. For encrypting large volumes of data quickly while minimizing CPU load, AES is the standard choice.
C) SHA-256 is a cryptographic hash function used for producing fixed-size digests for integrity verification, password hashing, or as part of digital signature schemes. It is not an encryption algorithm; it provides one-way hashing, not confidentiality. While it has performance characteristics, it cannot be used to encrypt and decrypt data in the conventional sense.
D) Diffie-Hellman is a key exchange protocol that allows two parties to securely establish a shared secret over an insecure channel. It is not used for bulk encryption of data. Its computational cost is higher than symmetric ciphers for similar workloads and its purpose is establishing keys for subsequent symmetric encryption (often AES) rather than encrypting large datasets directly.
The reasoning for selecting the correct cipher emphasizes the symmetric/asymmetric distinction and operational needs. Bulk encryption requires a symmetric cipher because symmetric algorithms use much smaller key sizes and far less complex math per byte of data, which translates to better throughput and lower CPU utilization. AES’s adoption as a global standard demonstrates its balance of security and performance; it’s used in disk encryption, TLS bulk ciphers, VPNs, and storage encryption. In practice, secure protocols use asymmetric algorithms like RSA or Diffie-Hellman to establish shared symmetric keys, which are then used by AES to encrypt the actual data payloads efficiently. Hash functions like SHA-256 are complementary for integrity checks and are not encryption mechanisms. Therefore, for fast, CPU-efficient encryption of large volumes of data, AES is the appropriate choice.
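As a hedged illustration, the sketch below uses the third-party Python cryptography package (an assumption; install with pip install cryptography) to encrypt a bulk payload with AES-256-GCM, an authenticated mode; in a real protocol the symmetric key would typically be established via RSA or Diffie-Hellman rather than generated locally.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # symmetric key; in practice delivered via RSA/DH key exchange
aesgcm = AESGCM(key)

plaintext = os.urandom(1024 * 1024)         # 1 MiB of sample "bulk" data
nonce = os.urandom(12)                      # unique nonce per encryption under the same key

ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # third argument is optional associated data
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```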
Question 4:
An organization wants to implement a least-privilege model for service accounts used by applications. Which approach best supports reducing unnecessary privileges while maintaining availability?
A) Use a single domain-wide service account with administrative rights for all applications
B) Assign each application its own service account with narrowly scoped privileges and secrets rotation
C) Use the built-in LocalSystem account for all Windows services to simplify management
D) Create shared service accounts for groups of applications and never change their credentials
Answer: B)
Explanation:
Least privilege aims to give only the permissions needed to perform required tasks. The right account design reduces attack surface and limits blast radius if credentials are compromised.
A) A single domain-wide service account with administrative rights centralizes privilege but massively violates least-privilege principles. If that account is compromised, an attacker gains broad administrative access across the environment. Management may be simpler, but security risk and exposure are unacceptable; auditing and accountability are also degraded.
B) Assigning a dedicated service account per application with narrowly scoped privileges aligns with least-privilege principles. Each account gets only the permissions needed for its specific application functions. This approach supports isolation, easier auditing, and more granular control. Coupling this with secret rotation (regular password or key updates) reduces credential lifetime and limits the window of opportunity for attackers. Automation — using secrets management systems or managed identities — can rotate credentials without manual disruption, maintaining availability while improving security.
C) LocalSystem is a built-in, high-privilege Windows account with extensive local rights; on the network it authenticates as the computer (machine) account. Using LocalSystem for all services simplifies setup but grants excessive privileges. Services running as LocalSystem can be exploited to achieve elevated access on the host, and it becomes hard to ensure isolation between services. It undermines least privilege and makes attack containment difficult.
D) Shared service accounts across multiple applications degrade isolation and traceability. If the account is compromised or abused, multiple applications and services are affected. Never changing credentials further increases risk; static credentials are prone to leakage, and long-lived secrets are attractive attack targets. Best practice is to minimize sharing and to rotate credentials periodically.
The rationale for the selected approach rests on balancing security and operational continuity. Dedicated, narrowly privileged accounts ensure that an attacker who breaches one service cannot easily pivot to others, and auditors can attribute actions to a specific account. Secrets rotation, ideally automated via a secrets manager, ensures credentials are changed frequently enough to limit exposure without causing downtime. Managed identities (cloud providers) or certificate-based authentication can provide even stronger, more automated mechanisms for least-privilege service identities. Therefore, creating per-application service accounts with least privilege and secret rotation provides the best combination of security and availability.
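As one possible, hedged illustration of automated secret rotation for a per-application service account, the sketch below assumes an AWS environment with boto3 and uses a hypothetical secret name; a complete rotation flow would also update the credential on the target system before publishing the new version.

```python
import secrets
import boto3  # pip install boto3; assumes AWS credentials come from the environment or an instance role

SECRET_ID = "app/payments/db-credential"  # hypothetical per-application secret name

def rotate_password(secret_id: str) -> None:
    """Generate a fresh random credential and store it as the secret's new version."""
    client = boto3.client("secretsmanager")
    new_password = secrets.token_urlsafe(32)   # high-entropy replacement credential
    # In a full rotation flow the new credential is first set on the database/service,
    # then published here so the application picks it up on its next fetch.
    client.put_secret_value(SecretId=secret_id, SecretString=new_password)

# rotate_password(SECRET_ID)  # typically run on a schedule, not interactively
```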
Question 5:
Which control provides non-repudiation by ensuring that a sender cannot deny having sent a message?
A) Symmetric encryption using a shared secret
B) Digital signature using the sender’s private key
C) Access control list on the message store
D) Transport layer encryption (TLS)
Answer: B)
Explanation:
Non-repudiation is the assurance that someone cannot credibly deny an action — typically sending a message or signing a document. Different cryptographic and access controls provide varying assurances.
A) Symmetric encryption with a shared secret provides confidentiality and, depending on mode, may provide integrity if combined with appropriate MACs. However, using a shared key means both parties can encrypt and decrypt messages; attribution to a specific sender is weak because the other party could have forged the message. Therefore, symmetric keys do not provide strong non-repudiation.
B) Digital signatures created with the sender’s private key provide non-repudiation. The private key is assumed to be exclusively controlled by the sender; signing a message produces a signature that can be verified by anyone with the corresponding public key. Because only the private key holder could have created that signature, the signer cannot plausibly deny having signed the message. When combined with appropriate key management and certificate authorities, digital signatures are a strong basis for non-repudiation in electronic transactions.
C) An access control list on the message store restricts who can read or modify stored messages but does not inherently prove origin. It helps with confidentiality and integrity at the storage level but cannot demonstrate that a specific individual created or sent a message at a specific time — especially if storage system administrators have wide privileges or if credentials are shared.
D) Transport layer encryption (TLS) protects data in transit from eavesdropping and tampering between endpoints, and can provide server and optionally client authentication. However, TLS sessions are ephemeral, and while mutual TLS provides endpoint authentication during the session, it does not create a persistent, attributable signature on the message that independently demonstrates origin after the session ends. TLS helps secure communications but is not a standalone non-repudiation mechanism.
The reasoning behind digital signatures as the right choice centers on unique private-key control and verifiability. A digital signature binds the signer to the signed content in a verifiable way. Public key infrastructure (PKI) and certificate authorities can strengthen the trust model by validating the linkage between a public key and an identified individual or entity. Evidence for non-repudiation typically includes the signed message, the signer’s digital certificate, and logs showing private key use or secure key storage (e.g., hardware security modules). Therefore, for non-repudiation where the sender cannot deny having sent a message, digital signatures produced with the sender’s private key are the appropriate control.
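A minimal signing-and-verification sketch using the Python cryptography package is shown below; it uses Ed25519 for brevity (an assumption, the same principle applies to RSA signatures): only the private-key holder can produce a signature that the corresponding public key verifies.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender generates (and exclusively controls) the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()          # distributed to verifiers, e.g., via a certificate

message = b"Transfer 100 units to account 42"
signature = private_key.sign(message)          # only the private-key holder can produce this

try:
    public_key.verify(signature, message)      # raises InvalidSignature if message or signature changed
    print("Signature valid: the sender cannot plausibly deny signing this message")
except InvalidSignature:
    print("Signature invalid")
```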
Question 6:
Which of the following best describes a zero-trust security model?
A) Trust all internal network traffic by default but monitor external traffic closely
B) Implicitly trust users logged into the corporate domain and grant broad access
C) Never trust, always verify — enforce continuous authentication and least privilege
D) Use perimeter defenses only and segment internal networks occasionally
Answer: C)
Explanation:
Zero trust is an architectural approach that changes the baseline assumption about trust. Each choice represents a different stance toward trust and access.
A) Trusting all internal network traffic by default is the opposite of zero trust. Traditional perimeter models assume internal safety and focus on securing the edge; zero trust rejects that static internal trust. Monitoring external traffic closely while implicitly trusting internal flows leaves significant blind spots and creates lateral movement opportunities for attackers.
B) Implicitly trusting users just because they’re logged into the corporate domain undermines zero-trust principles. Authentication at login is a starting point, but zero trust requires continuous verification of identity, device posture, and context before granting or maintaining access to resources. Broad access based solely on domain login does not implement least privilege.
C) The phrase “never trust, always verify” succinctly captures zero trust. It enforces continuous authentication, authorization, and device validation for each access request, regardless of location. Policies are dynamic and context-aware (user, device, location, time, risk), and least privilege is applied. Microsegmentation, strong identity controls, continuous monitoring, and just-in-time access are common zero-trust elements. This approach limits lateral movement and reduces impact from compromised credentials or devices.
D) Relying only on perimeter defenses with occasional segmentation describes a traditional network security posture and is insufficient to achieve zero trust. While segmentation helps, zero trust requires fundamental changes to the access model, continuous validation, and identity-centric controls rather than perimeter-only defense.
The reasoning for choosing the correct description rests on the core tenets of zero trust: treat every access request as untrusted, verify identity and device posture continuously, apply least privilege, and log and inspect all traffic. Zero trust does not mean denying everything; it means basing decisions on explicit verification and context, and granting the minimum required access. It shifts focus from an assumed secure internal network to granular, verifiable controls with strong auditing. Therefore, the encapsulation “never trust, always verify — enforce continuous authentication and least privilege” aligns with the zero-trust model.
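The decision logic can be illustrated with a toy policy-evaluation sketch (purely illustrative, not any vendor's policy engine): every request is evaluated on identity, second factor, device posture, requested role, and contextual risk, and network location alone never grants access.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool     # e.g., patched, disk-encrypted, EDR agent present
    risk_score: float          # 0.0 (low) to 1.0 (high), derived from context such as location/behavior
    requested_role: str

def decide(req: AccessRequest, allowed_roles: set[str]) -> str:
    """Evaluate every request explicitly; never grant based on network location alone."""
    if not (req.user_authenticated and req.mfa_passed):
        return "deny"
    if not req.device_compliant:
        return "deny"
    if req.requested_role not in allowed_roles:          # least privilege
        return "deny"
    if req.risk_score > 0.7:
        return "step-up"       # require re-authentication or additional verification
    return "allow"

print(decide(AccessRequest(True, True, True, 0.2, "read-reports"), {"read-reports"}))
```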
Question 7:
During a penetration test, an ethical hacker wants to determine if a public-facing web server is vulnerable to SQL injection. Which method is the safest first step to identify possible SQL injection without causing damage?
A) Execute automated SQL payloads that attempt to drop tables
B) Perform input validation checks using benign test strings and observe responses
C) Send malformed packets to crash the database server intentionally
D) Inject time-delay payloads that lock tables for prolonged periods
Answer: B)
Explanation:
Penetration testing must balance thorough vulnerability discovery with safety and rules of engagement to avoid disrupting production systems.
A) Executing automated payloads that attempt to drop tables is destructive and would violate safe testing practices unless explicitly authorized and performed in a controlled test environment. Such destructive tests can lead to data loss and significant downtime. Even in authorized tests, destructive payloads are typically avoided or limited to non-production systems.
B) Using benign test strings to check for input validation and observing responses is the safest initial step. This approach uses non-destructive probes (for example, single quotes, logical operators, or specially crafted harmless strings) to see whether the application echoes input, produces errors, or returns unexpected behavior that might indicate improper input handling. This reconnaissance provides valuable evidence without risking damage, and it can guide more in-depth testing if permitted.
C) Sending malformed packets designed to crash the database would be intentionally disruptive and reckless for a production environment. Denial-of-service or crash-inducing tactics are not appropriate as first steps and are often excluded from standard penetration testing scopes unless specifically agreed upon.
D) Injecting time-delay payloads to lock tables can create performance issues and denial-of-service conditions. Time-based blind SQL injection techniques are a valid testing method when safe and authorized, but they can negatively affect availability and should not be used initially without coordination and permissions.
The underlying logic of the safe-first approach emphasizes minimizing risk while gathering information. Initial testing should focus on passive and non-invasive checks — input validation probes, error message analysis, and reviewing publicly accessible resources and application behavior. If these probes indicate possible vulnerabilities, then further testing (including controlled, non-destructive verification) can be proposed to the stakeholders with explicit authorization. In some cases, testing teams will use mirrored or staging environments for more aggressive exploitation attempts. Therefore, the benign validation method best aligns with responsible testing and the principle of causing no harm while still identifying potential SQL injection weaknesses.
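For an authorized, in-scope target only, a benign probe might look like the hedged sketch below, which uses the requests package and placeholder URL/parameter names: it compares a baseline response against a single-quote probe and looks for common database error signatures without sending any destructive payload.

```python
import requests  # pip install requests

TARGET = "https://staging.example.com/products"   # placeholder: an in-scope, authorized target
PARAM = "id"

ERROR_SIGNATURES = ["sql syntax", "sqlstate", "odbc", "ora-", "unterminated quoted string"]

def probe(value: str) -> requests.Response:
    return requests.get(TARGET, params={PARAM: value}, timeout=10)

baseline = probe("1")            # benign, expected value
test = probe("1'")               # benign single-quote probe; no destructive payloads

suspicious = (
    test.status_code != baseline.status_code
    or abs(len(test.text) - len(baseline.text)) > 500
    or any(sig in test.text.lower() for sig in ERROR_SIGNATURES)
)
print("Possible improper input handling; report for authorized follow-up" if suspicious else "No obvious indicator")
```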
Question 8:
Which security control is primarily preventive and specifically designed to stop unauthorized changes to critical system files?
A) Intrusion detection system (IDS)
B) File integrity monitoring (FIM) with real-time alerts and enforcement
C) Security information and event management (SIEM) for long-term log analysis
D) Backup and archival system for restorative purposes
Answer: B)
Explanation:
Preventive and detective controls play different roles in a security program; controls focused on preventing unauthorized modification of critical files must be able to monitor and, ideally, prevent tampering.
A) An intrusion detection system typically detects network-based or host-based intrusions and generates alerts. It is primarily detective and may not itself prevent changes. Network IDS detects suspicious traffic patterns, while host-based IDS can detect certain host anomalies. However, IDS generally alerts administrators after the fact rather than preventing specific file modifications.
B) File integrity monitoring is explicitly designed to detect changes to files — such as system binaries, configuration files, and critical data — by calculating and comparing file hashes or checksums, monitoring permissions, and reviewing attributes. Advanced FIM solutions can operate in real time and integrate with prevention mechanisms or response playbooks to block or revert unauthorized changes. When configured to enforce policies (e.g., via host-based agents that prevent modification or quarantine processes), FIM can be a preventive control as well as a detective one.
C) A SIEM performs aggregation and analysis of log data and is valuable for investigating incidents and identifying trends over time. While SIEMs can correlate events, including file changes reported by FIM, and launch automated responses, SIEMs are primarily for detection, correlation, and response orchestration rather than direct prevention of file modifications themselves.
D) Backup and archival systems are restorative controls: they enable recovery after unauthorized changes or data loss. Backups do not prevent modifications but provide the ability to restore systems and data to a prior good state. While crucial for resilience, backups are not a preventive mechanism to stop unauthorized file changes in real time.
The reasoning behind choosing file integrity monitoring is its direct mapping to the requirement: stopping or detecting unauthorized modifications to critical files. FIM verifies the integrity of file contents and metadata, triggers alerts on deviations, and, when paired with configuration management and endpoint controls, can prevent or quickly remediate unauthorized changes. An effective defense-in-depth approach might combine FIM (to detect/prevent), IDS and SIEM (to detect and contextualize), and backups (to recover), but FIM is the primary control tailored to the specific need of protecting critical system files from unauthorized change.
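The core integrity check behind FIM can be sketched with the standard library: compute baseline hashes of critical files once on a known-good system, then recompute and compare on a schedule. This is a minimal illustration; commercial FIM adds real-time monitoring, enforcement, and central reporting.

```python
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # illustrative critical files
BASELINE_FILE = "fim_baseline.json"

def sha256_of(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline() -> None:
    baseline = {p: sha256_of(p) for p in WATCHED if Path(p).exists()}
    Path(BASELINE_FILE).write_text(json.dumps(baseline, indent=2))

def verify() -> list[str]:
    baseline = json.loads(Path(BASELINE_FILE).read_text())
    changed = []
    for path, expected in baseline.items():
        if not Path(path).exists() or sha256_of(path) != expected:
            changed.append(path)
    return changed

# build_baseline()  # run once on a known-good system
# print(verify())   # run periodically; alert (or trigger a response playbook) on any changes
```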
Question 9:
Which of the following best characterizes a rainbow table attack against password hashes?
A) Brute-forcing every possible password on the target system in real time
B) Precomputed reverse lookup tables that map hashes to probable plaintexts to speed cracking
C) Intercepting plaintext passwords in transit using a Man-in-the-Middle attack
D) Using social engineering to trick users into revealing passwords
Answer: B)
Explanation:
Password-cracking techniques include brute force, precomputation, network interception, and social engineering — each works differently and has distinct defenses.
A) Brute-forcing every possible password in real time involves generating candidate passwords, hashing them, and comparing them to the stored hash. While brute force eventually yields results for short or weak passwords, it is computationally intensive and time-consuming when done against properly salted and iterated hashes.
B) Rainbow tables are precomputed tables that map plaintext passwords to their hash values to enable fast reverse lookup. Attackers generate large tables of possible passwords and their corresponding hashes ahead of time. When they obtain a set of hashed passwords, they can quickly search the precomputed table to find matching plaintexts. The efficiency gain comes from avoiding repeated hashing computations at attack time. However, effective defenses such as unique salts per password render rainbow tables impractical because the salt changes the hash output, making precomputed tables that don’t include the salt ineffective.
C) Intercepting plaintext passwords in transit using a Man-in-the-Middle attack targets the communication channel rather than stored hash values. If traffic is not encrypted or uses weak authentication, an attacker can capture passwords as users type them. This technique is unrelated to rainbow tables, which operate on stored hashes rather than network interception.
D) Social engineering obtains passwords by deceiving users or exploiting human behavior. Techniques include phishing or phone scams. This method bypasses cryptographic protections by persuading users to reveal secrets and is distinct from cryptanalysis methods like rainbow tables.
The reasoning supporting the correct description focuses on the difference between precomputation and live cracking. Rainbow tables exploit time–space trade-offs: invest considerable computational resources upfront to compute tables, then reap fast lookup benefits later. Salting passwords with unique random values per user and using slow, adaptive hashing algorithms (like bcrypt, scrypt, or Argon2) defeat rainbow tables because they require different precomputed values for each salt and greatly increase the computational cost to produce useful tables. Additional defenses include enforcing strong password complexity, multi-factor authentication, and limiting access to hashed password stores. Thus, rainbow tables are about precomputed reverse lookup to speed cracking rather than live brute force, interception, or social engineering.
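The defensive side can be shown with a short standard-library sketch using PBKDF2 with a unique random salt per password; production systems often prefer bcrypt, scrypt, or Argon2 as noted above, but the effect on rainbow tables is the same: each salt would require its own precomputed table.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Return (salt, derived_key); the unique salt makes precomputed tables useless."""
    salt = os.urandom(16)                      # random, per-user salt
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk

def verify_password(password: str, salt: bytes, expected: bytes, iterations: int = 600_000) -> bool:
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(dk, expected)   # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong-guess", salt, stored))                   # False
```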
Question 10:
A developer needs to securely store API keys for an application deployed across multiple cloud instances. Which approach balances security and manageability best?
A) Hard-code the API keys in the application source code and deploy across instances
B) Store keys in a centralized secrets manager and use instance identities to fetch them at runtime
C) Keep API keys in a shared plaintext file on a network share accessible by all instances
D) Require manual distribution of keys to each instance administrator via email
Answer: B)
Explanation:
Secure secret management must ensure confidentiality, minimize manual handling, enable rotation, and integrate with deployment workflows.
A) Hard-coding API keys into source code is insecure because source repositories may be cloned, leaked, or accessed by developers who don’t need runtime secrets. Even with private repositories, embedded keys are difficult to rotate and can be accidentally published. This practice increases exposure risk and complicates revocation and auditing.
B) Centralized secrets management provides secure storage, access control, auditing, and rotation features. Cloud-native and third-party secrets managers (managed vaults) allow applications to authenticate using instance identities (managed service identities, instance metadata service tokens, or short-lived certificates). Instances request secrets at runtime, and the secrets manager enforces policies, records access, and can provide short-lived credentials. This approach improves security (no long-lived secrets embedded on disk), manageability (centralized rotation and audit), and scalability across multiple instances.
C) Keeping keys in a shared plaintext file on a network share is insecure because file shares can be misconfigured, intercepted, or accessed by unauthorized users. Plaintext storage lacks encryption, strong access controls, rotation automation, and audit trails. It also increases the risk of accidental exposure via backups or mismanaged permissions.
D) Manual distribution via email to each instance administrator is insecure and unscalable. Email can be intercepted, and manual processes are error-prone; rotation is cumbersome, and accountability is weak. This approach is operationally expensive and increases the attack surface due to human handling.
The rationale for the centralized secrets manager centers on least privilege, secure retrieval, automation, and auditability. Managed identity approaches avoid embedding long-lived credentials in instances; instead, instances present a cryptographic identity to the secrets manager and receive short-lived secrets or tokens. Secrets managers often integrate with key management systems for encryption at rest and can rotate credentials programmatically to limit exposure. These systems also provide logging for compliance and forensic purposes. Therefore, for secure, manageable distribution of API keys across cloud instances, using a centralized secrets manager with instance identity-based retrieval is the best practice.
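A hedged, AWS-flavored sketch of runtime retrieval is shown below (assumptions: boto3 is available, the instance or task role supplies credentials automatically, and the secret name and JSON layout are hypothetical); the equivalent pattern exists for other clouds' managed identities and third-party vaults.

```python
import json
import boto3  # pip install boto3; on EC2/ECS the instance or task role provides credentials automatically

def get_api_key(secret_id: str = "prod/payments/api-key") -> str:
    """Fetch a secret at runtime using the instance's identity; nothing is hard-coded or stored on disk."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])   # assumes the secret was stored as a JSON document
    return secret["api_key"]

# api_key = get_api_key()
# Use the key in memory only; re-fetch after rotation rather than caching it long-term.
```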
Question 11:
Which measure most effectively mitigates the risk of credential stuffing attacks against a public login portal?
A) Enforce long password length only, without rate limiting
B) Implement multi-factor authentication and rate limiting for authentication attempts
C) Disable account lockouts to avoid denial-of-service issues
D) Allow unlimited login attempts and rely on user education
Answer: B)
Explanation:
Credential stuffing uses large lists of breached username/password pairs against multiple sites. Defenses must reduce the risk of unauthorized access and slow automated attacks.
A) Enforcing long password length helps resist brute-force attacks and encourages stronger passwords, but credential stuffing uses valid credentials from other breaches, so password length alone does not stop an attacker from trying a correct, long password. Without additional controls, attackers can still try known pairs.
B) Implementing multi-factor authentication (MFA) significantly reduces the success rate of credential stuffing because possession of the second factor is typically required in addition to the valid password. Combining MFA with rate limiting on authentication attempts (per IP, per account, or using progressive delays) reduces the effectiveness of automated credential stuffing scripts by slowing or blocking high-volume attempts. Additionally, monitoring for anomalous login geographies, device fingerprinting, and blocking known bad IPs further mitigates risk.
C) Disabling account lockouts to avoid potential denial-of-service concerns removes a defensive mechanism that prevents attackers from repeatedly trying credentials against a particular account. While poorly configured lockouts can cause denial-of-service, properly implemented protections (like progressive throttling and anomaly detection) provide security without excessive denial-of-service risk.
D) Allowing unlimited login attempts and relying on user education is ineffective. Credential stuffing is automated and fast; educational efforts alone do not stop attackers. User education helps encourage strong, unique passwords and MFA adoption but cannot replace technical protections.
The reasoning for the combined controls focuses on layering defenses. MFA addresses the core risk — even if an attacker has a valid password, they typically cannot provide the second factor. Rate limiting and behavioral analytics reduce automation effectiveness and help identify and block malicious campaigns. Other complementary measures include monitoring for reused credentials using breach notification services, encouraging or enforcing unique passwords and password managers, and implementing device reputation and anomaly detection. Therefore, MFA combined with rate limiting and monitoring offers the most effective mitigation for credential stuffing attacks.
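A toy, in-memory sliding-window limiter illustrates the rate-limiting half of the answer (illustrative only; real deployments typically keep this state in a shared store such as Redis and pair it with MFA and anomaly detection).

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # 5-minute sliding window
MAX_ATTEMPTS = 5       # attempts allowed per account within the window

_attempts: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(account: str) -> bool:
    """Return False (block or throttle) once an account exceeds its attempt budget."""
    now = time.time()
    q = _attempts[account]
    while q and now - q[0] > WINDOW_SECONDS:   # drop attempts that fell out of the window
        q.popleft()
    if len(q) >= MAX_ATTEMPTS:
        return False                            # or: require CAPTCHA / apply a progressive delay
    q.append(now)
    return True

for i in range(7):
    print(i + 1, allow_login_attempt("alice@example.com"))
```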
Question 12:
Which development practice helps reduce vulnerabilities introduced during coding by catching defects early in the lifecycle?
A) Deploying to production quickly and fixing defects post-deployment
B) Conducting code reviews and integrating static application security testing (SAST) in CI/CD pipelines
C) Relying solely on end-of-life penetration tests before release
D) Allowing unsecured third-party libraries without vetting to speed up development
Answer: B)
Explanation:
Secure software development requires integrating security controls throughout the lifecycle rather than relying solely on late-stage testing or reactive fixes.
A) Rapid deployment with fixes after production increases risk exposure and can lead to repeated incidents, customer impact, and higher remediation costs. While agile release cycles are beneficial, they must be coupled with early and continuous testing and security practices.
B) Conducting code reviews provides human insight into design and implementation issues, catching logic errors, insecure patterns, and potential vulnerabilities early. Integrating Static Application Security Testing tools into continuous integration/continuous deployment pipelines automates scanning for known vulnerability patterns, insecure API usage, injection risks, and misconfigurations before code reaches production. This combination facilitates early detection, reduces remediation costs, and improves code quality while maintaining development velocity.
C) Relying only on penetration tests late in development misses the opportunity to prevent many vulnerabilities from ever being introduced. Penetration testing is valuable for finding complex, runtime, and integration issues, but is most effective when complemented by earlier measures like code review and automated scanning.
D) Using third-party libraries without vetting introduces supply-chain risk and can import vulnerabilities. While open-source components accelerate development, they must be validated, updated, and monitored. Unvetted dependencies can create significant security liabilities.
The reasoning behind integrating code reviews and SAST early emphasizes prevention and cost-effectiveness. Early detection reduces the effort to correct defects, prevents security debt accumulation, and supports developer learning. Automated scans in CI/CD ensure consistent enforcement and scale with frequent changes, while peer review provides context-sensitive assessment and design checks. Complementary practices include dependency scanning (software composition analysis), dynamic testing in early environments, threat modeling, and developer security training. Consequently, embedding code review and SAST into the development pipeline is the best practice to reduce vulnerabilities introduced during coding.
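One possible wiring of a SAST gate in CI is sketched below using Bandit, a real Python SAST tool; the flags and JSON field names reflect Bandit's common output but should be confirmed against its current documentation, and the severity policy shown is an assumption, not a prescribed standard.

```python
import json
import subprocess
import sys

# Run Bandit over the repository and capture machine-readable results.
proc = subprocess.run(
    ["bandit", "-r", ".", "-f", "json"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout or "{}")

# Fail the pipeline if any high-severity finding is reported.
high = [r for r in report.get("results", []) if r.get("issue_severity") == "HIGH"]
for finding in high:
    print(f"{finding.get('filename')}:{finding.get('line_number')} {finding.get('issue_text')}")

sys.exit(1 if high else 0)
```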
Question 13:
In cloud deployments, which shared responsibility ensures the cloud provider, not the customer, secures the underlying physical data center infrastructure?
A) Protecting the guest OS and application patching
B) Physical security of servers, networking, and facility access control
C) Configuring identity and access management for cloud resources
D) Managing data encryption keys within customer-controlled HSMs
Answer: B)
Explanation:
Cloud security is divided between the provider and the customer responsibilities. Understanding which side handles which controls is essential to avoid gaps.
A) Protecting guest operating systems and applications is typically the customer’s responsibility in Infrastructure-as-a-Service (IaaS) models. Customers must patch and harden their virtual machines and containers. In higher-level services (PaaS, SaaS), some of these duties may shift toward the provider.
B) Physical security of servers, network infrastructure, and facility access control is generally the cloud provider’s responsibility across IaaS, PaaS, and SaaS offerings. Providers secure their data centers with controlled access, video surveillance, environmental controls, and hardware lifecycle management. Customers do not manage the physical premises and must trust that the provider meets contractual and regulatory standards for physical security.
C) Configuring identity and access management for cloud resources is a customer responsibility. The customer sets up IAM roles, users, permissions, and identity federation to control access to their cloud resources. The provider supplies IAM features, but proper configuration and policy enforcement are the customer’s obligations.
D) Managing encryption keys can be shared. Customers can choose to use provider-managed keys or manage their own keys in customer-controlled hardware security modules (HSMs). If customers opt to manage their keys, that management is their responsibility. Providers, however, are responsible for the security of provider-managed key services.
The reasoning for selecting physical security as the provider’s responsibility rests on the shared responsibility model commonly advertised by major cloud providers. The provider secures the physical infrastructure and the hypervisor; customers secure their data, operating systems, applications, and configurations. Cloud customers must therefore focus on proper configuration, encryption, IAM, and supply-chain security, while verifying provider compliance for physical controls and higher-level operational practices. Hence, physical data center and facility security are primarily the provider’s responsibility.
Question 14:
Which control is most appropriate for preventing unauthorized USB devices from exfiltrating data on corporate endpoints?
A) Disable USB ports in BIOS for all machines and never document the change
B) Implement device control software with allowlisting for approved USB devices and DLP policies
C) Rely on user training to avoid plugging unknown devices into machines
D) Encrypt data at rest only and ignore endpoint device controls
Answer: B)
Explanation:
Preventing data exfiltration via removable media requires technical controls, policy, and some user education. The solution should be manageable and allow necessary business use.
A) Disabling USB ports in BIOS can be effective, but it is inflexible and operationally painful. Managing BIOS settings across a large fleet of machines is complex; it can interfere with legitimate device needs (keyboards, mice, maintenance drives), and undocumented changes create support challenges. Additionally, attackers can re-enable ports if they have physical access or administrative privileges.
B) Device control solutions allow centralized management of removable media. With allowlisting, administrators can permit specific trusted device IDs, block unapproved devices, require encryption of removable media, and enforce read-only or write-restricted modes. Combining device control with data loss prevention policies enables content-aware blocking — preventing sensitive files from copying to portable drives unless policies are met. This approach balances security and usability and provides audit trails and enforcement.
C) User training is important for awareness, but is insufficient alone. Users may make mistakes or be coerced; training reduces the likelihood, but cannot be relied upon as the sole control to prevent technical exfiltration.
D) Encrypting data at rest protects confidentiality if devices are stolen, but does not prevent copying data to an external device. Endpoint device controls are required to stop unauthorized copying, and encryption alone does not address exfiltration vectors.
The rationale for device control with allowlisting focuses on enforceability and auditability. Centralized management enables policy enforcement across the enterprise, supporting exceptions for approved devices and providing logs for investigations. Integrating DLP adds content inspection to prevent sensitive data transfers while enabling business workflows. Therefore, implement device control and DLP for effective prevention of USB-based exfiltration.
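The allowlisting decision itself can be illustrated with a toy sketch (hypothetical device IDs; a real device-control agent would hook operating-system device-arrival events and enforce the decision rather than just compute it).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsbDevice:
    vendor_id: str
    product_id: str
    serial: str

# Hypothetical corporate allowlist of approved, encrypted drives.
ALLOWLIST = {
    UsbDevice("0781", "5583", "SN-APPROVED-001"),
    UsbDevice("0951", "1666", "SN-APPROVED-002"),
}

def policy_decision(device: UsbDevice) -> str:
    """Block anything not explicitly approved; approved media can still be forced read-only by DLP."""
    return "allow" if device in ALLOWLIST else "block-and-alert"

print(policy_decision(UsbDevice("0781", "5583", "SN-APPROVED-001")))  # allow
print(policy_decision(UsbDevice("abcd", "1234", "UNKNOWN")))          # block-and-alert
```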
Question 15:
Which logging practice most improves the usefulness of audit logs for forensic investigations after an incident?
A) Log only error messages to save storage and ignore informational events
B) Ensure timestamps are synchronized across systems, include contextual metadata, and protect logs from tampering
C) Store logs only on the local host with no forwarding to a central repository
D) Disable logging for authentication events to protect user privacy
Answer: B)
Explanation:
Forensic investigations depend on accurate, tamper-proof, and context-rich logs. Proper logging practices enable reconstruction of events and attribution.
A) Logging only error messages reduces storage use but discards valuable context. Informational and debug events — such as successful logins, file accesses, configuration changes, and process starts — often provide the timeline and linkages investigators need. Omitting these events can hamper incident timelines and root-cause analysis.
B) Synchronizing timestamps ensures logs from multiple systems can be correlated accurately; without synchronized time (e.g., via NTP), investigators face difficulties ordering events. Including contextual metadata — user IDs, process IDs, source IPs, event IDs, and correlated identifiers — enriches logs and aids tracing. Protecting logs from tampering (using write-once storage, digital signatures, secure forwarding, or access controls) preserves integrity and evidentiary value. Forwarding logs to a centralized, hardened repository enables retention policies, rapid searching, and redundancy. These practices collectively maximize forensic usefulness.
C) Storing logs only locally risks loss if a host is compromised and an attacker deletes or manipulates local logs. Central forwarding to a secure log collector or SIEM preserves logs even if endpoints are compromised and enables enterprise-wide correlation and alerting.
D) Disabling logging for authentication events under the guise of privacy removes crucial evidence about access attempts, failures, and successful authentications — all central to incident timelines. Privacy concerns should be addressed via access controls and data handling policies rather than suppressing critical security logs.
The reasoning emphasizes preservation, correlation, and integrity. Time synchronization resolves cross-system sequencing; contextual metadata connects events to users and hosts; centralized, protected log storage prevents tampering and supports analysis. Proper retention, indexing, and access controls ensure logs remain available for legal, compliance, and investigative needs. Therefore, synchronized timestamps, rich metadata, and tamper protections are fundamental to forensic-ready logging.
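The ingredients of a forensic-ready log record can be sketched briefly: a UTC timestamp (hosts assumed to be NTP-synchronized), contextual metadata, and an HMAC chain that makes tampering with or deleting earlier records detectable. The signing key below is a placeholder; in practice it would come from a secrets manager, and records would be forwarded to a central, hardened collector.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-key-from-a-secrets-manager"   # hypothetical; never hard-code in production
_prev_digest = b""                                          # chains each record to the one before it

def emit(event_id: str, user: str, src_ip: str, action: str, outcome: str) -> str:
    """Produce one tamper-evident, context-rich log line ready for central forwarding."""
    global _prev_digest
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC; hosts kept in sync via NTP
        "event_id": event_id,
        "user": user,
        "src_ip": src_ip,
        "action": action,
        "outcome": outcome,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hmac.new(SIGNING_KEY, _prev_digest + payload, hashlib.sha256).hexdigest()
    _prev_digest = bytes.fromhex(digest)
    record["hmac_chain"] = digest           # altering or deleting any earlier record breaks the chain
    return json.dumps(record)

print(emit("4624", "alice", "198.51.100.23", "logon", "success"))
```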