ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 9 Q121-135
Question 121:
Which authentication factor is based on a physical characteristic of the user?
A) Something you know
B) Something you have
C) Something you are
D) Something you do
Answer: C
Explanation:
Something you know relies on passwords, PINs, or passphrases. These are knowledge-based factors where authentication is achieved by proving awareness of secret information. While widely used, they are susceptible to guessing, social engineering, phishing, or credential theft. Organizations often implement complexity requirements, password rotation, and multi-factor authentication to strengthen this factor.
Something you have involves physical devices like smart cards, tokens, or hardware keys. These possession-based factors require users to physically carry an item that generates or holds authentication data. Examples include one-time password (OTP) tokens, USB security keys supporting FIDO protocols, or mobile device authenticator apps. Their main strength is that even if a password is compromised, an attacker still cannot gain access without also possessing the physical token.
Something you do refers to behavioral biometrics such as keystroke patterns, gait recognition, or mouse movement patterns. These behavioral factors measure how a user interacts with a system and can complement other authentication types. They are generally continuous authentication methods, used to detect anomalies during a session, such as detecting a different typing rhythm or unusual navigation patterns, which may indicate account compromise.
Something you are uses unique physical traits such as fingerprints, retina scans, facial recognition, hand geometry, or voice patterns. Biometric authentication is considered strong because physical characteristics are inherently tied to the individual and are difficult to replicate or share. Fingerprint scanning is the most common method, widely deployed in smartphones and secure facilities. Retina and iris scans are highly accurate but require specialized equipment. Facial recognition is increasingly used for convenience in mobile devices and access control, but may be vulnerable to spoofing if not implemented with liveness detection. Voice recognition can also authenticate users, but it may be affected by background noise or health conditions that alter speech.
Biometric systems typically store template data rather than raw images to protect privacy. Security measures such as encryption, hashing, and secure storage are critical to prevent misuse or theft. Multi-factor authentication (MFA) often combines biometrics with passwords or tokens to create layered security. While convenient and user-friendly, biometric authentication requires careful management to mitigate risks such as false positives, false negatives, and potential replay attacks. Regulatory compliance, such as GDPR or HIPAA, also dictates secure handling of biometric information, ensuring consent and protection of sensitive data.
The authentication factor based on physical characteristics is Something you are, making it the correct answer. It is a cornerstone of modern secure access frameworks and is widely integrated in both physical and digital security systems to enhance identity verification reliability and reduce the risk of unauthorized access.
Question 122:
Which type of malware spreads autonomously across networks without user interaction?
A) Virus
B) Worm
C) Trojan
D) Adware
Answer: B
Explanation:
Viruses require user action to execute and propagate. They attach themselves to files or programs, and the infection spreads only when the infected file is opened or run. Viruses are often spread through email attachments, removable drives, or file downloads, making social engineering a significant component of their propagation.
Trojans are malicious programs disguised as legitimate software. They rely on tricking users into downloading and installing them. Once executed, Trojans can perform a variety of harmful actions such as stealing data, creating backdoors, or installing additional malware, but they do not self-replicate like worms or viruses.
Adware primarily focuses on displaying advertisements. While annoying, it does not usually replicate across networks and is not designed to execute destructive actions autonomously. Some adware may collect data or modify user experiences, but it is not inherently self-spreading.
Worms, by contrast, are self-replicating malware capable of spreading across networks without any user interaction. They exploit vulnerabilities in operating systems, applications, or network protocols to propagate automatically. Worms can saturate network bandwidth, cause widespread disruption, and sometimes deliver secondary payloads like ransomware, keyloggers, or spyware. Notable examples include the Code Red worm, which targeted Windows IIS web servers in 2001, and the WannaCry ransomware worm in 2017, which spread through SMB protocol vulnerabilities and caused global impact.
Mitigation strategies against worms include timely patch management to fix vulnerabilities, network segmentation to limit lateral movement, intrusion detection and prevention systems to identify anomalous traffic, firewalls to block malicious connections, and endpoint security solutions to prevent infection. User education is also essential because worms can sometimes leverage social engineering, email phishing, or malicious downloads to initially gain entry. Monitoring network traffic and implementing anomaly-based alerts are critical in detecting early-stage worm activity before it causes significant damage.
The malware type that spreads autonomously is Worm, making it the correct answer. Its ability to propagate without user intervention makes it a high-risk threat in enterprise and personal computing environments, necessitating proactive and multi-layered defense strategies to prevent outbreaks.
Question 123:
Which backup strategy requires the last full backup and all subsequent backups for restoration?
A) Full Backup
B) Differential Backup
C) Incremental Backup
D) Mirror Backup
Answer: C
Explanation:
Full Backup involves copying all selected data in a single operation, providing a complete snapshot that allows restoration without reliance on any other backups. While reliable, full backups are resource-intensive and consume significant storage space.
Differential Backup captures data changed since the last full backup. During recovery, only the last full backup and the most recent differential backup are required. This method reduces restoration complexity compared to incremental backups but consumes more storage over time, because each differential grows until the next full backup is taken.
Mirror Backup duplicates files in real-time or near real-time, creating an exact copy of selected data. While useful for redundancy, it typically does not retain historical versions or incremental changes, so it does not replace conventional backup strategies for long-term data retention.
Incremental Backup, however, copies only the data that has changed since the last backup of any type, whether full or incremental. This approach conserves storage space and reduces backup time because only modified files are processed. However, recovery requires the last full backup followed by each incremental backup in the sequence up to the desired restore point. Failure to maintain any incremental backup in the sequence can prevent complete restoration.
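To make that dependency chain concrete, here is a minimal Python sketch (the catalog entries, file names, and build_restore_chain helper are hypothetical, not part of any specific backup product). It assembles the restore sequence from the most recent full backup plus every incremental taken afterward, which is exactly why losing any one incremental breaks the chain.

```python
from datetime import datetime

# Hypothetical backup catalog: each entry records its type and timestamp.
catalog = [
    {"name": "sun_full.bak", "type": "full",        "taken": datetime(2024, 6, 2)},
    {"name": "mon_incr.bak", "type": "incremental", "taken": datetime(2024, 6, 3)},
    {"name": "tue_incr.bak", "type": "incremental", "taken": datetime(2024, 6, 4)},
    {"name": "wed_incr.bak", "type": "incremental", "taken": datetime(2024, 6, 5)},
]

def build_restore_chain(catalog, restore_point):
    """Return the ordered list of backup sets needed to reach restore_point."""
    # Start from the most recent full backup at or before the restore point.
    fulls = [b for b in catalog if b["type"] == "full" and b["taken"] <= restore_point]
    if not fulls:
        raise RuntimeError("No full backup available - restore impossible")
    base = max(fulls, key=lambda b: b["taken"])
    # Every incremental taken after that full backup is also required, in order.
    increments = sorted(
        (b for b in catalog
         if b["type"] == "incremental" and base["taken"] < b["taken"] <= restore_point),
        key=lambda b: b["taken"],
    )
    return [base] + increments

chain = build_restore_chain(catalog, datetime(2024, 6, 5))
print([b["name"] for b in chain])
# ['sun_full.bak', 'mon_incr.bak', 'tue_incr.bak', 'wed_incr.bak']
```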
Incremental backups are commonly used in enterprise environments to manage large data volumes efficiently. They are often scheduled daily, with weekly full backups to simplify recovery. Strategies for managing incremental backups include verification routines to ensure integrity, rotation schemes to prevent data loss, and secure storage to guard against ransomware or hardware failure. Additionally, combining incremental backups with off-site or cloud storage enhances disaster recovery capabilities, ensuring data availability in case of local infrastructure compromise.
The backup strategy requiring the last full backup and all incremental backups is Incremental Backup, making it the correct answer. Its balance of storage efficiency and restoration capability makes it a key component of modern backup architectures.
Question 124:
Which access control model uses security labels and classifications to regulate access?
A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control
Answer: B
Explanation:
Discretionary Access Control (DAC) allows resource owners to assign permissions to users at their discretion. While flexible, it relies on individual users to enforce security and is prone to misconfiguration or privilege abuse.
Role-Based Access Control (RBAC) grants access based on defined job roles. Permissions are associated with roles, and users inherit access according to their assigned role. This model simplifies administration but may not provide the strictest security for highly sensitive environments.
Rule-Based Access Control (sometimes abbreviated RuBAC) enforces access policies based on specific conditions or triggers, such as time-of-day, IP address, or device type. It is often used in combination with other models to provide context-sensitive access decisions.
Mandatory Access Control (MAC), on the other hand, enforces access policies using centralized rules, labels, and classifications rather than relying on individual discretion. In MAC, users and data objects are assigned sensitivity labels, and access decisions are made based on these labels in conjunction with a policy that defines who can access what under which conditions. MAC is especially prevalent in military, government, and high-security commercial environments where confidentiality and integrity are critical.
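As a simplified illustration of label-based decisions (a sketch of the "no read up" idea only, not a complete Bell-LaPadula or MAC implementation; the level names, compartments, and may_read helper are hypothetical), a subject may read an object only when its clearance dominates the object's classification and it holds every compartment attached to the object:

```python
# Hypothetical ordering of sensitivity levels, lowest to highest.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def may_read(subject_clearance, subject_compartments, object_label, object_compartments):
    """Simple dominance check: clearance level must be at least the object level,
    and the subject must hold every compartment attached to the object."""
    level_ok = LEVELS[subject_clearance] >= LEVELS[object_label]
    compartments_ok = set(object_compartments) <= set(subject_compartments)
    return level_ok and compartments_ok

# A SECRET-cleared analyst holding the "CRYPTO" compartment:
print(may_read("SECRET", {"CRYPTO"}, "CONFIDENTIAL", set()))      # True
print(may_read("SECRET", {"CRYPTO"}, "TOP SECRET", {"CRYPTO"}))   # False (no read up)
```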
MAC reduces the risk of insider threats because users cannot override permissions. It supports rigorous auditing and compliance with regulatory standards that demand strict data handling, such as classified information management, financial regulations, or healthcare data protection. Implementation typically involves defining classification levels, security clearances, and compartmentalized access, ensuring users can only interact with information they are authorized to handle.
The access control model that uses security labels and classifications is Mandatory Access Control, making it the correct answer. Its strength lies in providing consistent, enforceable security across the organization and preventing unauthorized access due to human error or malfeasance.
Question 125:
Which risk assessment methodology evaluates both probability and impact numerically?
A) Qualitative
B) Quantitative
C) Hybrid
D) Checklist
Answer: B
Explanation:
Qualitative risk assessment evaluates risk using descriptive terms like high, medium, or low, without numerical quantification. It is often used for quick evaluations or in environments lacking sufficient data, but it may be subjective and inconsistent.
Hybrid approaches combine qualitative and quantitative elements. They may use scoring scales to approximate numerical values or apply quantitative analysis selectively to critical assets while relying on qualitative assessments elsewhere.
Checklists involve a predefined set of questions or criteria to identify risks. While useful for ensuring completeness, checklists do not inherently provide numeric probability or impact assessments and are often considered preliminary or supportive tools.
Quantitative risk assessment assigns numeric values to both the likelihood of a threat occurring and the potential impact, typically expressed in financial, operational, or performance terms. This allows calculation of expected loss, cost-benefit analysis for mitigation strategies, and prioritization of resources to address the highest risk scenarios. For example, the expected monetary loss can be calculated by multiplying the probability of an event by its potential financial impact.
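Using the common single loss expectancy (SLE) and annualized rate of occurrence (ARO) formulation, annualized loss expectancy (ALE) is SLE × ARO. The short sketch below walks through that arithmetic with purely illustrative numbers:

```python
# Illustrative quantitative risk calculation (all figures are hypothetical).
asset_value = 500_000        # value of the asset in dollars
exposure_factor = 0.40       # fraction of value lost per incident
aro = 0.25                   # expected incidents per year (one every four years)

sle = asset_value * exposure_factor   # Single Loss Expectancy = $200,000
ale = sle * aro                       # Annualized Loss Expectancy = $50,000

print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}")
# A control costing less than $50,000 per year that eliminates this risk
# would be justifiable on a pure cost-benefit basis.
```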
Quantitative methods often leverage historical data, statistical models, simulations, and expert judgment to estimate probability distributions and potential losses. Techniques include Monte Carlo simulations, sensitivity analysis, and value-at-risk (VaR) assessments. Quantitative assessments provide actionable data for decision-makers, enabling objective justification for investments in security controls, insurance, or contingency planning. Organizations using quantitative risk analysis benefit from precise prioritization, enhanced risk communication to stakeholders, and alignment of risk management with business objectives.
The methodology evaluating probability and impact numerically is Quantitative, making it the correct answer. Its precision and analytical rigor make it invaluable for organizations seeking data-driven, defensible approaches to risk management.
Question 126:
Which principle ensures systems expose only necessary functions and services?
A) Least Privilege
B) Least Functionality
C) Separation of Duties
D) Defense in Depth
Answer: B
Explanation:
Least Privilege limits user permissions to only what is necessary to perform their tasks, reducing risk from accidental or malicious misuse. Separation of Duties divides responsibilities among multiple individuals to prevent fraud or conflict of interest. Defense in Depth layers security measures to provide redundancy and multiple protection mechanisms.
Least Functionality focuses on reducing a system’s exposed attack surface by enabling only essential services and features required for its operation. By disabling unnecessary software, protocols, ports, and interfaces, organizations minimize opportunities for attackers to exploit vulnerabilities. For example, a web server might only have HTTP and HTTPS enabled while disabling FTP, Telnet, and unused modules. Similarly, operating systems can be hardened by removing unnecessary services such as printers, remote administration tools, or default accounts that could be leveraged by attackers.
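A minimal sketch of how least functionality can be checked in practice, assuming a hypothetical list of running services collected from the host (in a real environment this list would come from the service manager or a configuration scanner): anything not on the approved baseline is flagged for removal or disablement.

```python
# Approved baseline for a hardened web server (hypothetical).
APPROVED_SERVICES = {"sshd", "nginx"}

# In practice this set would be gathered from the host; hard-coded here for illustration.
running_services = {"sshd", "nginx", "telnetd", "vsftpd", "cupsd"}

unexpected = running_services - APPROVED_SERVICES
for svc in sorted(unexpected):
    print(f"Least-functionality violation: '{svc}' is running but not approved")
```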
Implementing least functionality enhances security by reducing both the quantity and complexity of potential vulnerabilities, making it easier to monitor and secure the system. This principle is applied during system configuration, hardening, and change management processes. Security tools like vulnerability scanners and configuration checklists often help enforce least functionality. It is closely aligned with security and compliance frameworks such as NIST guidance, CIS Benchmarks, and ISO standards, which emphasize minimizing unnecessary functionality to reduce attack surfaces and maintain a secure baseline.
The principle that ensures systems expose only necessary functions is Least Functionality, making it the correct answer. Its adoption strengthens overall cybersecurity posture, simplifies administration, and directly reduces the risk of exploitation by attackers seeking unneeded services to compromise a system.
Question 127:
Which disaster recovery metric defines the maximum tolerable downtime for a system?
A) Recovery Point Objective
B) Recovery Time Objective
C) Mean Time to Repair
D) Maximum Tolerable Loss
Answer: B
Explanation:
Recovery Point Objective (RPO) defines the acceptable data loss measured in time, focusing on how much data an organization can afford to lose between backups. For example, an RPO of four hours means the organization can tolerate losing up to four hours of data if a disaster occurs.
Mean Time to Repair (MTTR) measures the average duration required to repair a failed component or system, emphasizing the speed of technical restoration rather than downtime tolerance from a business perspective. Maximum Tolerable Loss (MTL) or Maximum Tolerable Downtime (MTD) defines the maximum operational impact an organization can withstand before significant financial or reputational damage occurs, but does not prescribe a precise recovery timeline.
Recovery Time Objective (RTO), in contrast, defines the target timeframe in which systems, applications, or processes must be restored following a disruption to prevent unacceptable consequences. Determining RTO is a fundamental step in disaster recovery planning, business continuity management, and risk assessment. RTO guides backup frequency, redundancy planning, resource allocation, and the prioritization of critical business functions. Organizations typically categorize systems by criticality, assigning shorter RTOs to mission-critical applications and longer RTOs to less essential systems.
The calculation of RTO involves evaluating business process dependencies, inter-system communication, regulatory or contractual requirements, and the financial and operational impact of downtime. Achieving RTO may require high-availability architectures, automated failover systems, real-time replication, or cloud-based disaster recovery solutions. Meeting RTO ensures continuity of operations, maintains customer trust, and reduces financial losses during disruptions.
RTO is central to organizational resilience. By defining maximum tolerable downtime, companies can develop recovery strategies tailored to system importance, operational priorities, and compliance needs. For example, banking systems may require near-zero RTO, while internal reporting tools may tolerate longer downtime. Integrating RTO into disaster recovery exercises, testing, and training ensures practical preparedness and validates recovery plans under real-world conditions.
The metric defining maximum tolerable downtime is Recovery Time Objective, making it the correct answer. Its strategic role in disaster recovery ensures that organizations can restore operations within acceptable limits, protecting business continuity, revenue, and stakeholder confidence.
Question 128:
Which attack captures network traffic to manipulate or steal information?
A) Phishing
B) Man-in-the-Middle
C) Denial-of-Service
D) SQL Injection
Answer: B
Explanation:
Phishing attacks exploit human trust, tricking individuals into revealing credentials, personal information, or financial details. They are social engineering attacks and do not directly capture network traffic. Denial-of-Service attacks overwhelm systems, making them unavailable to users, and SQL Injection targets databases to extract or modify data via malicious queries.
Man-in-the-Middle (MITM) attacks occur when an attacker intercepts communication between two parties, allowing eavesdropping, message modification, or injection of false data without the users’ knowledge. MITM attacks compromise the confidentiality, integrity, and sometimes authenticity of transmitted information. Attackers exploit unsecured communication channels, weak encryption, improperly validated certificates, or vulnerabilities in network protocols. Common MITM examples include intercepting unencrypted HTTP traffic, hijacking session cookies, or exploiting vulnerabilities in Wi-Fi networks or VPNs.
Mitigating MITM attacks involves robust encryption methods such as TLS/SSL, end-to-end encryption for messaging, strong certificate validation, and VPNs for secure remote access. Multi-factor authentication further reduces the impact of intercepted credentials, and network monitoring can detect unusual communication patterns indicative of MITM activity. MITM attacks are significant in financial transactions, online services, enterprise communications, and IoT deployments, where interception can lead to data theft, fraud, or system compromise.
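As one concrete example of these client-side controls, the Python standard-library sketch below opens a TLS connection with certificate validation and hostname checking enabled, which is what ssl.create_default_context() provides by default; weakening or disabling those checks is precisely what makes interception feasible. The endpoint name is illustrative.

```python
import socket
import ssl

host = "example.com"  # illustrative endpoint

# create_default_context() enables certificate validation and hostname checking,
# the client-side defenses against a man-in-the-middle.
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Peer certificate subject:", tls_sock.getpeercert()["subject"])
```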
Successful defense requires a combination of technical controls, user awareness, and security policies, emphasizing the importance of secure design principles and continuous monitoring. MITM attacks highlight the need for cryptographic best practices, certificate pinning, and regular network vulnerability assessments.
The attack that captures and manipulates network traffic is a Man-in-the-Middle attack, making it the correct answer. It remains one of the most dangerous attack types because it exploits trust relationships and network vulnerabilities while remaining stealthy and difficult to detect without proper safeguards.
Question 129:
Which type of security control reduces the probability of a security incident before it occurs?
A) Detective
B) Corrective
C) Preventive
D) Compensating
Answer: C
Explanation:
Detective controls identify and alert organizations to security incidents after they occur. Examples include intrusion detection systems, audit logs, and monitoring software. Corrective controls restore systems to a secure state following an incident, such as patching vulnerabilities or restoring backups. Compensating controls provide alternative methods for risk mitigation when primary controls cannot be implemented, such as using multi-factor authentication if hardware tokens are unavailable.
Preventive controls are proactive measures aimed at stopping security incidents before they happen. They enforce policies, reduce exposure, and limit opportunities for malicious activity. Examples include firewalls that block unauthorized traffic, access control systems enforcing least privilege, antivirus or anti-malware software scanning and blocking threats, security awareness training for employees, and strong password policies. Preventive controls also encompass network segmentation, secure configuration, encryption, and patch management.
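As a small illustration of a preventive control expressed in code, the sketch below enforces a hypothetical password policy before an account is created; the meets_policy helper and its thresholds are illustrative assumptions, not a specific standard's requirements.

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Hypothetical preventive check applied before a password is accepted."""
    if len(password) < min_length:
        return False
    # Require at least one lowercase letter, uppercase letter, digit, and symbol.
    checks = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in checks)

print(meets_policy("Tr0ub4dor&3x!"))   # True
print(meets_policy("password"))        # False
```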
The strategic implementation of preventive controls minimizes vulnerabilities and reduces risk by addressing potential threats at the earliest stage. For example, employee training helps prevent phishing attacks, while system hardening reduces the chance of exploitation through known vulnerabilities. Preventive controls are critical for compliance with standards such as ISO 27001, NIST, and GDPR, which emphasize proactive risk management.
The control type that reduces incident probability before it occurs is Preventive, making it the correct answer. These controls form the first line of defense, creating a resilient security posture and limiting the likelihood of disruptions, breaches, or data loss.
Question 130:
Which access control model is based on user roles and job functions?
A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control
Answer: C
Explanation:
Discretionary Access Control relies on resource owners to assign permissions to users. Mandatory Access Control enforces access using labels and centralized policies. Rule-Based Access Control enforces access based on specific system conditions such as time, location, or device type.
Role-Based Access Control (RBAC) assigns permissions to roles corresponding to job functions, and users inherit access rights according to their assigned roles. RBAC simplifies management, ensures consistent enforcement, supports the principle of least privilege, and facilitates auditability. Organizations map roles to job responsibilities, aligning access with operational requirements while minimizing the risk of unauthorized access.
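A minimal sketch of the role-to-permission mapping described above (the role names, permission strings, and is_authorized helper are hypothetical): permissions attach only to roles, and users gain them solely through role assignment.

```python
# Hypothetical role definitions: permissions attach to roles, never to users.
ROLE_PERMISSIONS = {
    "helpdesk":   {"ticket:read", "ticket:update"},
    "hr_analyst": {"employee:read"},
    "hr_manager": {"employee:read", "employee:update", "payroll:approve"},
}

USER_ROLES = {
    "alice": {"hr_manager"},
    "bob":   {"helpdesk"},
}

def is_authorized(user: str, permission: str) -> bool:
    """A user is authorized if any of their assigned roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "payroll:approve"))  # True
print(is_authorized("bob", "payroll:approve"))    # False
```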
RBAC is widely used in enterprise environments to manage large numbers of users and systems. It is scalable, reduces administrative overhead, supports regulatory compliance, and enhances security governance. Organizations can implement RBAC with hierarchical roles, constraints, and segregation of duties to further strengthen security. RBAC frameworks are commonly integrated into identity and access management (IAM) systems, providing centralized control over permissions across multiple applications and infrastructure components.
The access control model based on roles and job functions is Role-Based Access Control, making it the correct answer. It provides a balance between security, usability, and operational efficiency while maintaining accountability and reducing the likelihood of privilege abuse.
Question 131:
Which type of testing examines a system without executing code to find vulnerabilities?
A) Black-box Testing
B) White-box Testing
C) Static Code Analysis
D) Penetration Testing
Answer: C
Explanation:
Black-box Testing evaluates a system externally, without knowledge of internal code, focusing on functionality, inputs, and outputs. White-box Testing involves analyzing internal code structures, logic, and implementation, often including unit testing and code coverage analysis. Penetration Testing simulates real-world attacks to identify exploitable weaknesses through active system interaction.
Static Code Analysis examines source code, bytecode, or binaries without executing them to detect potential vulnerabilities, insecure functions, coding errors, and compliance issues. Tools for static analysis can highlight buffer overflows, SQL injection points, cross-site scripting (XSS) vulnerabilities, and other security flaws. Static analysis is particularly valuable early in development, improving code quality, enforcing secure coding standards, and reducing remediation costs.
Integrating static code analysis into the development lifecycle, often within a DevSecOps framework, provides continuous security monitoring and early detection of vulnerabilities before code reaches production. By examining source code, configuration files, or compiled binaries without executing them, static code analysis identifies potential flaws such as buffer overflows, input validation errors, insecure cryptography, hard-coded secrets, or improper error handling. Early identification of such issues significantly reduces the likelihood of exploitation, enhancing system security, reliability, and compliance.
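The toy scanner below illustrates the concept at its simplest: it inspects source text without executing it and flags patterns that resemble hard-coded credentials. Real static analysis tools build far richer models of the code, so treat this purely as a sketch; the regex, sample snippet, and scan helper are hypothetical.

```python
import re

# Very naive pattern that suggests a hard-coded secret (illustrative only).
SECRET_PATTERNS = [
    re.compile(r"""(password|passwd|secret|api_key)\s*=\s*["'][^"']+["']""", re.IGNORECASE),
]

sample_source = '''
db_host = "db.internal"
db_password = "Sup3rS3cret!"
'''

def scan(source: str):
    """Return (line number, line) pairs that match a suspicious pattern."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((line_no, line.strip()))
    return findings

for line_no, line in scan(sample_source):
    print(f"Possible hard-coded secret on line {line_no}: {line}")
```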
One of the most important benefits of static code analysis is its ability to minimize remediation costs. Fixing vulnerabilities during the development phase is far less expensive than addressing them after deployment or in production environments. Studies have shown that the cost of fixing a defect after deployment can be up to 30 times higher than remediating it during initial development. By proactively identifying security weaknesses, static analysis allows developers to correct issues while the codebase is still in a controlled environment, reducing potential disruption, downtime, and associated financial impact.
Automated static code analysis tools provide continuous feedback, helping developers adhere to organizational security policies, coding standards, and regulatory requirements such as PCI DSS, HIPAA, or GDPR. These tools can generate detailed reports with prioritized findings, enabling teams to focus remediation efforts on high-risk vulnerabilities. When combined with manual code reviews, security architecture assessments, and threat modeling, organizations can ensure that complex logic, business-critical functionality, and architectural decisions are robust against attack.
Beyond cost reduction, static code analysis also supports long-term software quality and maintainability. By embedding security into the development lifecycle, organizations reduce the likelihood of recurring vulnerabilities, promote secure coding practices, and build a culture of accountability. It allows security teams to shift from reactive measures to proactive risk management, aligning with the broader objectives of DevSecOps and continuous integration/continuous deployment (CI/CD) pipelines.
The testing method that examines code without execution is Static Code Analysis, making it the correct answer. Its proactive approach not only strengthens security and compliance but also substantially lowers remediation costs, prevents costly post-production fixes, and ensures the long-term reliability and integrity of software systems. By detecting and addressing vulnerabilities early, static code analysis becomes a critical component of any mature, secure software development program.
Question 132:
Which type of access control enforces rules based on system-enforced conditions like time or location?
A) Discretionary Access Control
B) Mandatory Access Control
C) Rule-Based Access Control
D) Role-Based Access Control
Answer: C
Explanation:
Discretionary Access Control (DAC) relies on the discretion of resource owners to grant or restrict access to files, systems, or applications. This model allows owners to determine who can read, write, or execute resources they control. While flexible and easy to implement, DAC carries the risk of accidental or malicious privilege escalation because users can grant access to others, potentially bypassing organizational security policies. DAC is commonly used in smaller environments or systems where flexibility outweighs strict security requirements, such as personal file systems or collaborative workspaces.
Mandatory Access Control (MAC), by contrast, enforces access policies centrally through security labels and classifications. Users and data objects are assigned sensitivity labels, and access decisions are based on predefined organizational policies. MAC is highly secure because it prevents users from modifying access permissions, ensuring consistent enforcement across the environment. It is typically used in high-security environments like government, military, or financial institutions where confidentiality and integrity of information are paramount.
Role-Based Access Control (RBAC) assigns permissions based on predefined roles corresponding to job functions. Users inherit the access rights associated with their roles, simplifying permission management and supporting the principle of least privilege. RBAC is widely used in enterprises to align access rights with operational responsibilities while reducing administrative overhead and minimizing the risk of excessive privileges. Hierarchical RBAC and constraint-based RBAC further enhance flexibility, allowing separation of duties, temporary access grants, and conditional role assignments.
Rule-Based Access Control (RuBAC) adds a dynamic layer of enforcement to access management by applying system-defined rules or conditions. These rules can be based on time (e.g., access only during business hours), location (e.g., geofenced access), network parameters (e.g., IP ranges or VPN requirements), device compliance (e.g., only trusted devices allowed), or contextual authentication factors (e.g., requiring multi-factor authentication under specific circumstances). This model allows organizations to enforce security policies that are responsive to operational contexts, external threats, and regulatory requirements. For instance, sensitive applications can automatically deny access outside standard working hours or block connections from untrusted locations, reducing exposure to potential attacks.
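A minimal sketch of how such contextual rules might be evaluated at request time (the rule values, network range, and access_allowed helper are hypothetical): the decision depends on the request's context rather than on the identity or role of the user.

```python
from datetime import datetime
from ipaddress import ip_address, ip_network

# Hypothetical context-based rules for a sensitive application.
BUSINESS_HOURS = range(8, 18)                 # 08:00-17:59 local time
TRUSTED_NETWORK = ip_network("10.20.0.0/16")  # corporate VPN range

def access_allowed(request_time: datetime, source_ip: str, device_compliant: bool) -> bool:
    """All rules must pass; any failed condition denies the request."""
    rules = [
        request_time.hour in BUSINESS_HOURS,
        ip_address(source_ip) in TRUSTED_NETWORK,
        device_compliant,
    ]
    return all(rules)

print(access_allowed(datetime(2024, 6, 3, 10, 30), "10.20.5.7", True))   # True
print(access_allowed(datetime(2024, 6, 3, 23, 15), "10.20.5.7", True))   # False: outside hours
```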
Organizations often implement rule-based controls across multiple layers, including firewalls, identity and access management systems, VPN gateways, cloud platforms, and critical applications. By combining RB-RBAC with other access control models, such as MAC or RBAC, organizations can create a comprehensive, multi-layered security strategy. Rule-based controls enhance monitoring, auditability, and compliance by providing granular visibility into access patterns and ensuring that security policies adapt to real-time conditions.
The access control model that enforces system-based conditions is Rule-Based Access Control, making it the correct answer. By implementing rules that respond to time, location, device, or authentication context, this model strengthens security posture, reduces risk exposure, enforces compliance with internal and external regulations, and protects sensitive assets under varying operational scenarios. RuBAC ensures that access is not only role-appropriate but also contextually secure, giving organizations the flexibility to preserve operational efficiency while maintaining robust security controls. This dynamic enforcement capability makes rule-based controls particularly effective in modern IT environments, where cloud, mobile, and remote access increase complexity and potential vulnerabilities.
Question 133:
Which type of attack overwhelms systems or networks to make resources unavailable?
A) Phishing
B) Man-in-the-Middle
C) Denial-of-Service
D) SQL Injection
Answer: C
Explanation:
Phishing attacks exploit human trust and social engineering tactics to steal sensitive credentials, financial information, or personally identifiable information (PII). Attackers often use deceptive emails, fake websites, or malicious links to trick users into divulging confidential data. Despite technological safeguards, phishing remains highly effective because it targets human behavior, which is often the weakest link in security. Security awareness training, simulated phishing exercises, and strict email filtering policies are key preventive measures.
Man-in-the-Middle (MITM) attacks intercept communications between two parties, allowing attackers to eavesdrop, alter, or inject data without detection. MITM compromises confidentiality and integrity by exploiting unencrypted channels, weak cryptographic protocols, or improperly validated certificates. Common examples include intercepting HTTP traffic, hijacking session tokens, or manipulating financial transactions in transit. Preventive measures include implementing strong encryption, such as TLS, enforcing certificate validation, using virtual private networks (VPNs), and enabling multi-factor authentication to reduce the risk of credential compromise.
SQL Injection attacks target databases by inserting malicious queries into user input fields, allowing attackers to read, modify, or delete data. They exploit insecure coding practices where input validation is insufficient or absent. SQL Injection can lead to severe data breaches, financial loss, and regulatory penalties. Mitigation involves proper input validation, parameterized queries, stored procedures, and regular vulnerability scanning.
Denial-of-Service (DoS) attacks, in contrast, focus on disrupting availability, one of the three core principles of information security alongside confidentiality and integrity. By overwhelming servers, networks, or applications with excessive requests, DoS attacks prevent legitimate users from accessing critical resources. The impact ranges from minor service delays to complete system outages, potentially causing operational, financial, and reputational damage. Distributed Denial-of-Service (DDoS) attacks amplify this effect by coordinating traffic from multiple sources, often leveraging botnets composed of compromised devices, which makes detection and mitigation more challenging.
Effective mitigation strategies for DoS and DDoS attacks involve a combination of network-level and application-level defenses. Traffic filtering and rate limiting can help reduce malicious traffic, while redundancy and load balancing ensure that resources remain available even under attack. Intrusion detection and prevention systems (IDPS) monitor for anomalous traffic patterns, enabling early detection and automated response. Cloud-based DDoS protection services provide scalable solutions to absorb and neutralize large attack volumes, often leveraging global traffic scrubbing and threat intelligence.
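As one small illustration of the rate-limiting idea, here is a token-bucket sketch with hypothetical parameters (a teaching aid, not a production DoS defense): requests beyond the sustained rate plus a burst allowance are shed instead of reaching the service.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts of up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request is shed rather than forwarded

limiter = TokenBucket(rate=5, capacity=10)   # 5 req/s sustained, bursts of 10
results = [limiter.allow() for _ in range(20)]
print(f"Allowed {sum(results)} of {len(results)} back-to-back requests")
```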
Organizations must adopt a layered approach, combining preventive, detective, and corrective measures to maintain service continuity. Proactive monitoring, incident response planning, and business continuity strategies are critical for minimizing downtime and ensuring operational resilience. DoS attacks underscore the importance of system hardening, real-time alerting, and continuous improvement in security posture, emphasizing that availability is as crucial as confidentiality and integrity.
The attack that overwhelms systems or networks is Denial-of-Service, making it the correct answer. DoS attacks highlight the necessity of comprehensive planning, robust infrastructure, and vigilant monitoring to sustain uninterrupted operations, maintain user trust, and protect critical business functions even under adversarial conditions. Effective defenses against DoS and DDoS attacks are integral to organizational cybersecurity strategies and operational resilience, ensuring that systems remain functional and reliable despite increasing threat sophistication.
Question 134:
Which document specifies security requirements for a system before development?
A) Security Requirements Traceability Matrix
B) System Security Requirements Specification
C) Baseline Configuration
D) Service Level Agreement
Answer: B
Explanation:
Security Requirements Traceability Matrix (SRTM) is a tool that maps security requirements to system components, implementation tasks, and verification steps. Its primary purpose is to ensure that all defined requirements are accounted for during system development and testing. By establishing traceability, organizations can verify that each requirement has been implemented, tested, and validated. While the SRTM is an essential tool for quality assurance and compliance auditing, it does not itself define what the security requirements are; it only tracks and validates them.
Baseline Configuration, on the other hand, specifies pre-approved system settings that must be applied during deployment to ensure that devices, applications, and systems comply with security policies and industry standards. Baseline configurations serve as a reference for secure deployment, change management, and ongoing system auditing.
Service Level Agreements (SLAs) define the expected performance, availability, or operational standards for a service but do not outline specific security controls or implementation requirements. SLAs are primarily concerned with measurable outcomes like uptime, response times, and support levels rather than the security architecture or functional safeguards of the system.
System Security Requirements Specification (SSRS) is a comprehensive document that details the specific security requirements for a system before its development. It acts as a blueprint for secure system design, ensuring that critical security principles such as confidentiality, integrity, availability, authentication, authorization, auditing, and compliance are addressed from the outset. By providing detailed requirements, SSRS guides developers, system architects, and security engineers in building a system that inherently incorporates robust security controls. For example, the SSRS may specify encryption standards for data at rest and in transit, multi-factor authentication protocols, user access control mechanisms, logging and monitoring requirements, and regulatory compliance mandates such as HIPAA, PCI DSS, or GDPR. It ensures that developers understand the security expectations and have clear guidance for implementation, reducing the likelihood of gaps or oversights.
Early integration of security requirements through the SSRS is crucial in mitigating risk and avoiding costly remediation later in the development lifecycle. Systems developed without a clear set of security requirements often require significant rework, patches, or redesigns once vulnerabilities or compliance issues are discovered. By defining security requirements before development begins, organizations can align system functionality with organizational policies, risk management strategies, and regulatory obligations. The SSRS also facilitates communication between stakeholders, including business leaders, security teams, and developers, by providing a clear, agreed-upon set of security expectations.
Beyond guiding development, the SSRS supports validation and verification activities throughout the system lifecycle. Security testing, code review, vulnerability assessments, and penetration testing can all reference the SSRS to ensure that controls are implemented as intended. Additionally, the SSRS can serve as a reference during audits, demonstrating due diligence and adherence to internal and external regulatory requirements. It ensures that security is not an afterthought but an integral component of system design, reducing the likelihood of breaches, unauthorized access, or data loss. By establishing a structured approach to security requirements, the SSRS strengthens an organization’s overall security posture, enhances accountability, and promotes proactive risk management.
The document defining security requirements before development is the System Security Requirements Specification, making it the correct answer. It ensures that systems are designed with security in mind, supporting organizational risk management, compliance objectives, and operational resilience. SSRS acts as a foundational artifact that influences design decisions, implementation strategies, testing procedures, and ongoing maintenance, ensuring that security is embedded throughout the system’s lifecycle. Its importance extends beyond development into deployment, monitoring, and auditing, creating a secure environment that meets both functional and regulatory expectations. Ultimately, SSRS provides a roadmap for building systems that are resilient, compliant, and aligned with organizational objectives, ensuring that security considerations are integrated proactively rather than reactively.
Question 135:
Which cloud deployment model is operated solely for a single organization?
A) Public Cloud
B) Private Cloud
C) Hybrid Cloud
D) Community Cloud
Answer: B
Explanation:
Public Cloud resources are shared among multiple organizations, typically offered by providers like AWS, Azure, or Google Cloud. These services provide highly scalable, on-demand computing resources, storage, and networking, often billed on a pay-as-you-go basis. Public clouds allow organizations to avoid upfront infrastructure costs, benefit from rapid provisioning, and leverage the provider’s expertise in security, maintenance, and compliance management. They are ideal for workloads with fluctuating demand or organizations seeking to reduce capital expenditure. However, public clouds offer less control over the underlying infrastructure, which can be a concern for organizations handling sensitive data or requiring strict regulatory compliance.
Hybrid Cloud combines private and public cloud resources to optimize flexibility, cost, and control. Organizations can keep sensitive workloads or critical systems in a private cloud while using the public cloud for less sensitive workloads, seasonal spikes, or development and testing environments. Hybrid cloud strategies enable workload portability, disaster recovery, and scalable computing power while maintaining compliance and data sovereignty requirements. Effective hybrid cloud deployment requires robust networking, consistent security policies, and orchestration tools to seamlessly manage workloads across both environments.
Community Cloud is shared by several organizations with similar security, compliance, or operational requirements. This model is often adopted by organizations in the same industry or sector, such as healthcare, finance, or government, which must adhere to similar regulatory standards. A community cloud allows cost-sharing among participants while providing more control and customization than a public cloud. It supports collaboration, data sharing, and joint initiatives while ensuring that sensitive information is isolated from unrelated entities.
Private Cloud is dedicated exclusively to one organization, offering enhanced control over infrastructure, security, compliance, and governance. It can be hosted on-premises, within the organization’s own data centers, or by a third-party provider in a dedicated environment. Private clouds provide organizations with complete visibility and control over resources, enabling advanced security configurations, compliance adherence, and tailored performance optimization. They support high levels of customization, allowing organizations to implement specific network segmentation, role-based access control, encryption protocols, and monitoring tools that align with internal policies and regulatory mandates.
Private clouds maintain scalability and flexibility similar to public clouds, often incorporating virtualization, containerization, and automation to efficiently manage resources. Organizations can deploy new applications rapidly, respond to changing business requirements, and scale workloads while maintaining the control and governance essential for sensitive or mission-critical operations. Unlike public clouds, private clouds allow organizations to integrate legacy systems, internal compliance frameworks, and proprietary applications, ensuring consistency with existing IT strategies and infrastructure investments.
Private Cloud deployments are ideal for organizations with strict regulatory, privacy, or operational requirements, such as financial institutions, healthcare providers, government agencies, and large enterprises handling sensitive data. They balance the benefits of cloud computing—elasticity, self-service, and automated management—with the control, oversight, and security necessary for compliance and governance. Advanced security measures in private clouds often include continuous monitoring, identity and access management, granular logging, intrusion detection, and secure API management. These measures protect against internal and external threats, safeguard sensitive information, and ensure accountability and traceability of all actions within the environment.
Private cloud infrastructure can also support business continuity and disaster recovery initiatives. Organizations can implement geographically redundant private data centers or hybrid private-public solutions to achieve high availability and resilience. Automated orchestration and backup systems ensure that critical applications continue to operate even in the event of hardware failures, network outages, or cyberattacks. Additionally, private clouds enable organizations to enforce data residency policies, adhere to industry-specific compliance standards such as HIPAA, PCI DSS, or GDPR, and maintain full control over how data is stored, processed, and accessed.
The cloud deployment model operated for a single organization is Private Cloud, making it the correct answer. It provides dedicated resources, enhanced security oversight, strong compliance alignment, and operational flexibility. Private clouds support modern IT service delivery and digital transformation initiatives by enabling secure, scalable, and highly customizable environments that align closely with organizational goals. By combining the advantages of cloud computing with full control over infrastructure and policies, private clouds allow organizations to innovate, scale, and adapt while maintaining the security, governance, and performance standards required for sensitive and critical workloads.