ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 13 Q181-195
Question 181:
Which protocol provides secure communication over an insecure network by encrypting all traffic?
A) HTTP
B) HTTPS
C) FTP
D) Telnet
Answer: B
Explanation:
HTTP transmits data in plaintext, making it vulnerable to eavesdropping, interception, and man-in-the-middle attacks. Any sensitive information, including credentials, can be exposed during transmission. FTP transfers files but also does so in plaintext, posing similar security risks. Telnet allows remote command-line access but does not encrypt communication, exposing credentials and commands to potential attackers.
HTTPS (Hypertext Transfer Protocol Secure) addresses these weaknesses by encrypting traffic using TLS (Transport Layer Security). Encryption ensures that data transmitted between clients and servers cannot be read or modified by unauthorized parties. It also authenticates the server’s identity, helping prevent impersonation attacks, and can optionally provide client authentication. HTTPS protects sensitive data such as login credentials, financial information, personal data, and session cookies.
Implementing HTTPS involves obtaining a valid digital certificate from a trusted Certificate Authority (CA), configuring web servers to enforce TLS, and regularly updating encryption protocols to avoid vulnerabilities. Modern TLS versions provide stronger algorithms and forward secrecy, further securing communications. Beyond confidentiality, HTTPS enhances integrity, ensuring data is not altered during transit.
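As a brief illustration of the client side of such a connection, the following Python sketch uses only the standard library's ssl module, with example.com as a placeholder host: it opens a TLS-protected connection, refuses legacy protocol versions, and relies on the default context to verify the server's certificate and hostname.

import socket
import ssl

host = "example.com"  # placeholder host used for illustration

context = ssl.create_default_context()            # secure defaults, certificate verification on
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Cipher suite:", tls_sock.cipher())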
HTTPS is the secure version of HTTP. It combines the standard HTTP protocol with Transport Layer Security (TLS) or its deprecated predecessor, Secure Sockets Layer (SSL), to provide encryption, authentication, and data integrity for communications over insecure networks such as the Internet. When a user connects to a website using HTTPS, all data transmitted between the client (usually a web browser) and the server is encrypted, preventing eavesdroppers from intercepting or tampering with it. This encryption is particularly important for sensitive information such as login credentials, financial transactions, personal data, and confidential communications.
HTTPS also ensures authentication of the server through digital certificates issued by trusted Certificate Authorities (CAs). This process confirms that the user is communicating with the legitimate website and not an imposter attempting a man-in-the-middle (MITM) attack. Additionally, the protocol provides data integrity, ensuring that any transmitted data is not modified during transit. Modern web browsers visually indicate secure HTTPS connections with padlock icons or other trust markers, reinforcing user confidence.
Organizations are strongly encouraged to implement HTTPS universally across all web pages, not just those handling sensitive data. Universal adoption protects against attacks that can compromise cookies, session tokens, and other browser-based authentication methods. HTTPS also helps meet regulatory and compliance requirements, including GDPR, HIPAA, and PCI DSS, which mandate secure handling of personal and financial information. By enforcing encrypted communication, HTTPS not only enhances privacy and security but also strengthens trust and credibility for businesses and online services.
The protocol that encrypts all communication over insecure networks, ensuring authentication, integrity, and confidentiality, is HTTPS, making it the correct answer. Its adoption is essential for secure, modern web communications.
Question 182:
Which type of firewall inspects traffic at the application layer and can block specific content?
A) Packet Filtering Firewall
B) Circuit-Level Gateway
C) Application Firewall
D) Stateful Firewall
Answer: C
Explanation:
Packet filtering firewalls operate at the network layer, allowing or denying traffic based on IP addresses, ports, and protocols. They are fast but cannot inspect the content of application data, making them ineffective against application-specific attacks. Circuit-level gateways monitor TCP/UDP handshakes and session establishment without examining the application data itself. They provide security by validating session legitimacy, but cannot filter based on content or detect attacks embedded in payloads. Stateful firewalls maintain session state and track connections, but primarily focus on ports, IP addresses, and session validation rather than understanding application payloads.
Application firewalls, also known as proxy firewalls, operate at the application layer (Layer 7) of the OSI model. They can inspect and filter content specific to applications, such as HTTP requests, FTP commands, or DNS queries. By understanding the context of the traffic, application firewalls can detect malicious patterns, block specific URLs, prevent SQL injection, and enforce granular policies for users and applications. They often act as intermediaries, terminating the client's connection and establishing a new one to the server so they can inspect and control the data flow.
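The following Python sketch illustrates the idea of application-layer content filtering in a highly simplified form; the blocked paths and the SQL injection pattern are assumptions for demonstration only and are far cruder than what a production application firewall applies.

import re

# Illustrative content rules; a real application firewall uses far richer policies.
BLOCKED_PATHS = ("/admin", "/phpmyadmin")
SQLI_PATTERN = re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE)

def inspect_http_request(path: str, body: str) -> bool:
    """Return True if the request may pass, False if it should be blocked."""
    if any(path.startswith(p) for p in BLOCKED_PATHS):
        return False                      # policy: block specific URLs
    if SQLI_PATTERN.search(body):
        return False                      # payload matches a SQL injection pattern
    return True

print(inspect_http_request("/search", "q=' OR 1=1"))           # False: blocked
print(inspect_http_request("/search", "q=network+security"))   # True: allowed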
Deployment of application firewalls can protect web applications from attacks targeting the application logic, such as cross-site scripting, buffer overflows, and malformed requests. They complement traditional firewalls and are critical for environments that expose applications to public networks.
The type of firewall capable of inspecting traffic at the application layer and blocking specific content is the Application Firewall, making it the correct answer. It combines content awareness, policy enforcement, and detailed inspection to secure applications beyond simple packet-level filtering.
Question 183:
Which cryptographic algorithm is symmetric and uses the same key for encryption and decryption?
A) RSA
B) AES
C) ECC
D) DSA
Answer: B
Explanation:
RSA is an asymmetric algorithm that uses a key pair—public and private—for encryption and decryption, supporting digital signatures and key exchange but not symmetric encryption. ECC (Elliptic Curve Cryptography) is also asymmetric, providing high security with smaller keys, and is primarily used for key exchange and digital signatures. DSA (Digital Signature Algorithm) is asymmetric and designed for creating digital signatures to verify integrity and authenticity, but does not perform encryption of data.
AES (Advanced Encryption Standard) is a symmetric key algorithm, meaning the same secret key is used for both encryption and decryption. Symmetric encryption is faster than asymmetric encryption and is commonly used for bulk data encryption, secure storage, and communication confidentiality. AES supports key sizes of 128, 192, and 256 bits, providing robust security suitable for modern applications. It operates on blocks of data using substitution, permutation, and multiple rounds of processing, ensuring strong diffusion and resistance against attacks such as differential and linear cryptanalysis.
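As a minimal sketch of symmetric encryption, the following Python example assumes the third-party cryptography package is available (installable via pip) and uses AES-256 in GCM mode; note that the very same key performs both encryption and decryption.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the shared secret key
nonce = os.urandom(12)                      # unique per message, never reused with the same key
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"confidential payload", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"confidential payload"  # same key recovers the original data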
AES is widely deployed in protocols like TLS, IPsec, disk encryption, wireless security, and virtual private networks (VPNs). Proper key management is essential because symmetric encryption relies entirely on the secrecy of the shared key; compromise of the key compromises the security of all encrypted data. AES is preferred for high-speed encryption and large data volumes due to its efficiency and security properties.
The symmetric cryptographic algorithm that uses the same key for both encryption and decryption is AES, making it the correct answer. Its balance of speed, security, and widespread adoption makes it the cornerstone of modern symmetric cryptography.
Question 184:
Which type of attack intercepts and potentially alters communications between two parties without their knowledge?
A) Phishing
B) Man-in-the-Middle
C) SQL Injection
D) Denial-of-Service
Answer: B
Explanation:
Phishing uses deception to trick users into revealing credentials or sensitive data. SQL Injection exploits improper input validation to manipulate databases. Denial-of-Service attacks overwhelm systems to disrupt availability but do not intercept or alter communications.
Man-in-the-Middle (MitM) attacks occur when an attacker secretly intercepts and potentially modifies the communication between two parties who believe they are directly communicating with each other. The attacker can eavesdrop, capture sensitive information, or inject malicious content into messages. Common MitM scenarios include public Wi-Fi networks, DNS spoofing, ARP poisoning, and HTTPS stripping attacks. The attacker may capture login credentials, session cookies, or confidential messages, allowing identity theft, account compromise, or unauthorized access to systems.
Mitigation strategies involve encryption, such as TLS/SSL for web traffic, VPNs for secure remote connections, mutual authentication, certificate pinning, and strong network monitoring. Ensuring end-to-end encryption prevents attackers from reading or altering transmitted data, even if they intercept it. Additionally, verifying digital certificates and employing public key infrastructure (PKI) reduces the risk of impersonation attacks.
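The following Python sketch, built on the standard library's ssl module, illustrates certificate pinning in its simplest form; the host name and the pinned fingerprint value are placeholders, and in practice the expected fingerprint would be recorded out of band.

import hashlib
import socket
import ssl

HOST = "example.com"                                   # placeholder host
EXPECTED_FINGERPRINT = "replace-with-known-good-sha256-hex"  # recorded out of band

context = ssl.create_default_context()                 # normal CA and hostname validation still apply
with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        der_cert = tls_sock.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        print("presented certificate fingerprint:", fingerprint)
        if fingerprint != EXPECTED_FINGERPRINT:
            # a substituted (MitM) certificate would fail this comparison
            raise ssl.SSLError("certificate fingerprint mismatch; possible MitM")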
The defining characteristic of MitM attacks is interception coupled with potential alteration of communications. They target confidentiality, integrity, and sometimes authentication, making them especially dangerous in financial, enterprise, and personal communications. Unlike attacks that rely on user deception or brute force, MitM operates transparently to the victims, making detection difficult without monitoring systems or anomaly detection tools.
The attack that intercepts and can manipulate communications between two unaware parties is the Man-in-the-Middle attack, making it the correct answer. Understanding this threat is critical for securing network communications, implementing encryption, and designing authentication systems resistant to interception and impersonation.
Question 185:
Which method protects stored passwords by transforming them into fixed-length values that are difficult to reverse?
A) Encryption
B) Hashing
C) Tokenization
D) Salting
Answer: B
Explanation:
Encryption transforms data into ciphertext using a key and can be reversed with the key. Tokenization replaces sensitive data with meaningless tokens, maintaining referential integrity but requiring secure mapping. Salting adds random data to passwords before hashing to prevent precomputed attacks, but does not replace hashing itself.
Hashing is a one-way function that converts input data into a fixed-length value known as a hash or digest. Hashes are designed to be computationally irreversible, meaning the original password cannot be recovered from the hash alone. This ensures that even if a database is compromised, plaintext passwords remain protected. Common hashing algorithms include SHA-256, SHA-3, and older variants like MD5 (now considered insecure). Hashing is widely used for password storage, digital signatures, data integrity verification, and authentication systems.
To enhance security, hashes are often combined with salts—random data added to each password before hashing. This prevents attacks using precomputed hash tables, such as rainbow tables, and ensures that identical passwords produce unique hashes. Systems may also employ iterative hashing or key stretching to slow brute-force attacks.
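A minimal sketch of salted, iterated password hashing is shown below using the standard library's hashlib.pbkdf2_hmac; the iteration count and salt length are illustrative choices, not prescriptive values.

import hashlib
import hmac
import os

def hash_password(password: str, salt=None, iterations: int = 600_000):
    salt = salt or os.urandom(16)                         # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)         # constant-time comparison

salt, rounds, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, rounds, stored))  # True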
The critical advantage of hashing for password protection is its one-way nature, which ensures that passwords cannot be directly retrieved even if hashes are exposed. Combined with proper salting and secure algorithms, hashing provides robust protection against compromise, offline attacks, and password reuse vulnerabilities.
The method that converts passwords into fixed-length values that are computationally hard to reverse is Hashing, making it the correct answer. It forms the foundation of secure password storage and verification mechanisms across modern systems.
Question 186:
Which process ensures that security is built into a system from the design stage rather than added afterward?
A) Penetration Testing
B) Security by Design
C) Incident Response
D) Patch Management
Answer: B
Explanation:
Penetration Testing identifies vulnerabilities after a system is built. Incident Response reacts to security incidents post-occurrence. Patch Management updates software to fix vulnerabilities discovered later.
Security by Design is a proactive approach where security considerations are integrated into the entire system development lifecycle (SDLC). This includes threat modeling, secure coding practices, rigorous access controls, proper authentication, data protection, and logging. By embedding security from the start, organizations reduce vulnerabilities, simplify compliance, and avoid costly retrofitting of security controls. Security by Design emphasizes principles like least privilege, defense in depth, input validation, secure defaults, and encryption.
Implementing Security by Design involves collaboration between development, security, operations, and compliance teams. Threat modeling during requirements gathering identifies potential risks early, allowing mitigation strategies to be incorporated into the architecture. Secure coding standards ensure developers avoid common vulnerabilities like SQL injection, cross-site scripting, or buffer overflows. Security testing, including code review, static and dynamic analysis, is conducted throughout development, not only at the end.
This approach minimizes the likelihood of exploitable weaknesses being deployed into production systems. It also improves resilience against attacks, reduces incident response costs, and ensures regulatory compliance. By prioritizing security from the initial design, organizations can achieve more robust and predictable protection than if security is applied after deployment.
The process that ensures security is integrated from the system design stage is Security by Design, making it the correct answer. It represents a foundational principle of modern secure software engineering and risk management.
Question 187:
Which type of access control uses labels and classifications to enforce strict policies on data access?
A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control
Answer: B
Explanation:
Discretionary Access Control (DAC) allows resource owners to determine who can access their resources. It provides flexibility but can lead to inconsistent enforcement and potential privilege escalation because individual users can assign access rights as they see fit. Role-Based Access Control (RBAC) grants access based on organizational roles, simplifying management but not enforcing strict classification-based policies. Rule-Based Access Control applies conditional rules such as time-of-day or location, allowing dynamic control but not inherently tied to data sensitivity or classification.
Mandatory Access Control (MAC) enforces strict access rules based on security labels assigned to both subjects (users, processes) and objects (files, data). Each label corresponds to a classification level (e.g., Confidential, Secret, Top Secret), and the system enforces access according to predefined security policies. Users cannot alter these permissions, which prevents unauthorized access and ensures consistent enforcement across the organization. MAC is commonly used in government, military, and high-security environments where data sensitivity and classification are critical.
MAC policies can be implemented using multi-level security models, such as Bell-LaPadula for confidentiality or Biba for integrity. The Bell-LaPadula model enforces “no read up, no write down” rules, ensuring that users cannot access information above their clearance level and cannot write data to a lower classification, preserving confidentiality. Conversely, the Biba model focuses on integrity, enforcing “no write up, no read down” rules to prevent unauthorized modification of higher-integrity data. Together, these models provide structured frameworks for enforcing strict access policies in sensitive environments such as government, military, and certain corporate sectors.
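The following Python sketch shows how label-based decisions under the Bell-LaPadula rules might be expressed; the classification levels and their numeric ordering are assumptions for illustration.

# Hypothetical classification lattice for demonstration.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple Security Property: no read up
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-Property: no write down
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("Secret", "Top Secret"))     # False: read up denied
print(can_write("Secret", "Confidential"))  # False: write down denied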
Mandatory Access Control relies on system-enforced rules, meaning that individual users cannot modify access permissions on the objects they own. Each subject (user or process) and object (file, database entry, resource) is assigned a security label, and the operating system or security kernel evaluates access requests against these labels. The consistent enforcement of these policies ensures that sensitive information is only accessible to authorized personnel and cannot be inadvertently or maliciously leaked. MAC is particularly effective in high-security environments where discretionary control by users could introduce risks of exposure, non-compliance, or insider threats.
Additionally, MAC integrates seamlessly with auditing and monitoring mechanisms. By combining label-based access control with detailed logging, organizations can track who accessed what data, when, and under which policy rules. This enables proactive detection of policy violations, forensic investigation in the event of incidents, and compliance with regulatory frameworks such as FISMA, HIPAA, or classified information handling requirements.
MAC also supports complex security hierarchies, allowing organizations to define multiple clearance levels and compartments to protect sensitive information even within a single classification. By preventing users from overriding these policies, MAC reduces the likelihood of accidental or intentional exposure of confidential or classified information. The access control type that uses labels and classifications to enforce strict data access policies is Mandatory Access Control, making it the correct answer.
Question 188:
Which type of attack manipulates input to a web application to execute malicious scripts in a user’s browser?
A) SQL Injection
B) Cross-Site Scripting
C) Man-in-the-Middle
D) Denial-of-Service
Answer: B
Explanation:
SQL Injection targets databases by inserting malicious commands through unsanitized input fields, aiming to manipulate or retrieve data. Man-in-the-Middle attacks intercept and possibly alter communications between two parties. Denial-of-Service attacks overwhelm systems with traffic to prevent legitimate use.
Cross-Site Scripting (XSS) attacks occur when an attacker injects malicious scripts into a web application that are executed in the browser of users viewing the affected page. XSS exploits poor input validation or output encoding, allowing attackers to steal cookies, session tokens, or sensitive information, manipulate content, or perform actions on behalf of users. XSS can be stored (persisted in the server), reflected (in URL or request), or DOM-based (executed in the browser’s Document Object Model).
Mitigation strategies for Cross-Site Scripting (XSS) attacks involve a combination of secure coding practices, system-level protections, and continuous security assessments. Input validation is a primary defense, ensuring that any data submitted by users adheres to expected formats and types, rejecting or sanitizing any suspicious content. Output encoding or escaping transforms potentially dangerous characters, such as <, >, ", and ', into harmless representations before rendering them in a user's browser. This prevents malicious scripts from executing while preserving legitimate content.
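As a minimal illustration of output encoding, the following Python snippet uses the standard library's html.escape to neutralize a script payload before it is rendered.

import html

user_input = '<script>alert("xss")</script>'          # untrusted input
safe_output = html.escape(user_input, quote=True)     # escape markup and quote characters
print(safe_output)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;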
Secure development practices are essential. Developers should adopt a security-first mindset throughout the Software Development Life Cycle (SDLC). This includes conducting threat modeling to identify potential XSS vectors, applying the principle of least privilege to web applications, and performing rigorous code reviews to catch insecure coding patterns. Frameworks such as React, Angular, or Django provide built-in mechanisms to automatically escape output, reducing the likelihood of XSS if used correctly. However, developers must remain vigilant because misusing or bypassing these protections can reintroduce vulnerabilities.
Content Security Policies (CSP) offer an additional layer of protection. By defining which scripts, styles, and resources are permitted to run on a webpage, CSP can mitigate the impact of injected malicious scripts. For example, restricting script execution to trusted domains prevents attackers from running unauthorized code even if an XSS vulnerability exists. Other browser-based security mechanisms, such as HTTP-only and secure cookies, help protect session data from theft in the event of an XSS attack.
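The following sketch shows, in framework-neutral Python, what attaching a Content-Security-Policy response header might look like; the policy string and the trusted domain are illustrative assumptions, not recommended values.

# Append an illustrative CSP header to a list of (name, value) response headers.
def add_csp_header(headers: list) -> list:
    headers.append((
        "Content-Security-Policy",
        "default-src 'self'; script-src 'self' https://trusted.example.com; object-src 'none'",
    ))
    return headers

print(add_csp_header([("Content-Type", "text/html")]))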
Regular vulnerability testing, including automated scanning and manual penetration testing, is crucial. Black-box testing simulates attacks from an external perspective, while gray-box testing leverages partial knowledge of application logic to identify deeper flaws. Testing helps ensure that new features, updates, or code changes do not introduce XSS vulnerabilities. Organizations should also monitor real-time application activity to detect anomalies, unusual input patterns, or potential exploitation attempts.
XSS attacks exploit the trust between a user and a web application. They can lead to the theft of sensitive data such as session cookies, credentials, personal information, and financial data. They can also be used to deface websites, inject phishing forms, or deliver malware. The impact of a successful XSS attack extends beyond technical compromise; it can damage user trust, violate privacy regulations, and lead to reputational or financial harm.
Education and awareness are equally important. Developers, testers, and security teams must stay informed about evolving XSS techniques, emerging threats, and best practices for mitigation. Combining secure coding, defensive frameworks, CSP, continuous testing, monitoring, and education forms a holistic approach to XSS defense.
The attack that executes malicious scripts in users’ browsers by manipulating web application input is Cross-Site Scripting, making it the correct answer. Understanding XSS is critical for protecting sensitive data and maintaining the integrity of web applications.
Question 189:
Which type of backup captures all changes since the last backup and resets the archive bit?
A) Full Backup
B) Incremental Backup
C) Differential Backup
D) Synthetic Full Backup
Answer: B
Explanation:
Full Backup copies all files regardless of changes, ensuring a complete snapshot, but consuming significant storage and time. Differential Backup copies all files changed since the last full backup but does not reset the archive bit, causing backups to grow cumulatively over time. Synthetic Full Backup combines previous full and incremental backups to create a new full backup without copying data from the source system.
Incremental Backup captures only the files that have changed since the last backup, whether full or incremental, and resets the archive bit. This approach is highly efficient in terms of storage and time, as it only saves new or modified data. Incremental backups allow for faster backup windows, but restoring data requires the last full backup and all subsequent incremental backups in sequence. If one incremental backup is missing or corrupted, it can complicate recovery.
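Conceptually, incremental selection can be sketched as follows in Python; because POSIX filesystems have no archive bit, a recorded timestamp from the previous backup stands in for it here, and the path and timestamp used are illustrative.

import time
from pathlib import Path

def files_changed_since(root: str, last_backup_time: float):
    # Yield only files modified after the previous backup (full or incremental).
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_backup_time:
            yield path

last_backup_time = time.time() - 24 * 3600     # e.g. the previous nightly run
for changed in files_changed_since("/data", last_backup_time):
    print("would back up:", changed)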
Organizations commonly implement incremental backups as part of a broader, multi-tiered backup strategy to ensure both efficiency and reliability. Incremental backups copy only the data that has changed since the last backup—whether full or incremental—thereby reducing storage requirements, network bandwidth usage, and backup windows. This efficiency is particularly important in enterprise environments with large volumes of data, where performing frequent full backups would be time-consuming, resource-intensive, and potentially disruptive to operations.
To optimize backup strategies, organizations often combine incremental backups with periodic full backups. A full backup provides a complete snapshot of all data at a specific point in time, serving as a reliable baseline. Incremental backups performed between full backups capture only changes, enabling faster daily or more frequent backups while minimizing the amount of data to process. This approach balances the need for quick recovery with the operational advantages of reduced storage and network load.
Recovery objectives are critical in designing effective backup strategies. Recovery Point Objectives (RPOs) define the maximum acceptable amount of data loss in the event of a disruption, guiding the frequency of incremental backups. Recovery Time Objectives (RTOs) specify how quickly systems must be restored to operational status, influencing both backup type and storage location. Organizations must ensure that incremental backup chains are carefully managed because recovery requires the last full backup and all subsequent incremental backups in sequence. Any missing or corrupted incremental backup can compromise the ability to restore data fully, making verification and testing of backups essential.
Additional best practices for incremental backup strategies include off-site storage, either physically or via cloud-based solutions, to protect against local disasters such as fire, flooding, or hardware failure. Encrypting backups is essential to maintain confidentiality, particularly for sensitive or regulated data. Regular testing of restore procedures is also critical to ensure that backups are usable and that recovery processes are efficient, reliable, and documented. Automated monitoring and alerting systems can help detect failures, incomplete backups, or inconsistencies in incremental backup chains, further reducing operational risk.
Incremental backups are widely favored in enterprise environments because of their efficiency and lower resource consumption compared to performing frequent full backups. They support business continuity and disaster recovery objectives by enabling organizations to maintain up-to-date copies of data while minimizing storage costs, network strain, and operational downtime.
The backup type that captures only changes since the last backup and resets the archive bit is Incremental Backup, making it the correct answer. It is widely used in enterprise environments for its efficiency and reduced resource consumption.
Question 190:
Which security model focuses on maintaining data integrity rather than confidentiality?
A) Bell-LaPadula
B) Biba
C) Clark-Wilson
D) Brewer-Nash
Answer: B
Explanation:
Bell-LaPadula emphasizes confidentiality, ensuring users cannot read data above their security clearance (no read-up) or write data below their level (no write-down). Clark-Wilson also addresses integrity, but it does so through well-formed transactions, separation of duties, and certified procedures rather than through integrity levels and read/write rules. Brewer-Nash (the Chinese Wall model) focuses on dynamically restricting access to prevent conflicts of interest in commercial environments.
The Biba model is designed to preserve data integrity. It prevents users from writing to higher integrity levels (no write-up) and reading from lower integrity levels (no read-down), ensuring that data remains accurate and uncorrupted by unauthorized or less trustworthy sources. Biba is particularly useful in financial systems, industrial control systems, and environments where maintaining accurate, unaltered data is crucial.
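The Biba rules can be sketched in a few lines of Python; the integrity levels and their ordering below are assumptions chosen for illustration.

# Hypothetical integrity levels for demonstration.
INTEGRITY = {"Untrusted": 0, "Medium": 1, "High": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple Integrity Property: no read down
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-Integrity Property: no write up
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(can_read("High", "Untrusted"))   # False: reading lower-integrity data denied
print(can_write("Medium", "High"))     # False: writing up denied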
Implementing Biba involves assigning integrity levels, enforcing access controls, and monitoring modifications to sensitive data. Unlike confidentiality-focused models, Biba prioritizes preventing accidental or malicious data corruption over information secrecy. It complements other security controls, providing a balanced approach to overall system security.
The security model prioritizing integrity over confidentiality is Biba, making it the correct answer. Understanding Biba is essential for protecting critical systems where accurate and reliable data is paramount.
Question 191:
Which type of malware encrypts files on a victim’s system and demands payment for decryption?
A) Virus
B) Ransomware
C) Trojan
D) Worm
Answer: B
Explanation:
Ransomware is a particularly dangerous type of malware because it combines both destructive and coercive elements. Unlike viruses, which attach to files to replicate, or worms, which propagate autonomously, ransomware’s primary goal is financial extortion. Once executed, ransomware encrypts files, folders, or even entire systems, rendering data inaccessible to users. Attackers then demand payment, typically in cryptocurrency such as Bitcoin, to deliver the decryption key. The use of cryptocurrency makes ransomware attacks difficult to trace and facilitates cross-border cybercrime.
Ransomware infection vectors are diverse. Phishing emails remain one of the most common methods, leveraging social engineering to trick users into opening malicious attachments or clicking harmful links. Drive-by downloads and malicious websites can also deliver ransomware without direct user interaction. Exploit kits target known vulnerabilities in software and operating systems, automatically executing ransomware if patches are not applied. Additionally, network-based attacks can allow ransomware to propagate laterally within an organization, compromising multiple systems once an initial foothold is established. High-profile attacks have demonstrated how quickly ransomware can spread through enterprise networks, crippling operations in hours.
Mitigation strategies for ransomware focus on prevention, detection, and recovery. Regular backups are critical: offline or immutable backups ensure that encrypted data can be restored without paying the ransom. Patch management is essential to close security vulnerabilities that ransomware exploits. Endpoint protection software, including anti-malware tools and behavior-based detection, can detect unusual file modifications or encryption activity in real time. Network segmentation limits the spread of ransomware, isolating critical systems from compromised endpoints. User awareness training is also vital, teaching employees to recognize phishing attempts, suspicious links, and unsafe attachments.
Detection of ransomware often relies on a combination of signature-based and heuristic methods. Signature-based approaches identify known malware samples, while behavioral analysis observes unusual activity patterns, such as mass file renaming or rapid encryption. Advanced monitoring tools leverage real-time threat intelligence feeds, machine learning, and anomaly detection to identify emerging ransomware variants before significant damage occurs.
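As a rough illustration of behavior-based detection, the following Python sketch counts file modification or rename events in a sliding window and flags rates consistent with bulk encryption; the thresholds are arbitrary assumptions, and real endpoint products rely on far richer signals.

import time
from collections import deque

class FileEventMonitor:
    def __init__(self, max_events: int = 200, window_seconds: int = 10):
        self.max_events = max_events       # illustrative threshold
        self.window = window_seconds
        self.events = deque()

    def record_event(self, timestamp=None) -> bool:
        """Record one modification/rename event; return True if the rate looks suspicious."""
        now = timestamp if timestamp is not None else time.time()
        self.events.append(now)
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()          # drop events outside the sliding window
        return len(self.events) > self.max_events

monitor = FileEventMonitor()
burst_alerts = [monitor.record_event(timestamp=100.0) for _ in range(300)]
print(any(burst_alerts))  # True: 300 events inside one 10-second window trips the threshold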
Incident response planning is equally important. Organizations should have a well-documented response procedure, including isolation of infected systems, communication protocols, and coordination with cybersecurity experts. Timely action can prevent ransomware from spreading further and minimize operational disruption. Legal and regulatory considerations also play a role, as some jurisdictions require reporting of ransomware incidents, especially if sensitive data is affected.
The malware type that encrypts files and demands payment is Ransomware, making it the correct answer. It poses a significant risk to organizations and individuals, emphasizing the importance of proactive security, layered defenses, and robust data recovery planning.
Question 192:
Which authentication factor is based on what a user has?
A) Knowledge
B) Possession
C) Inherence
D) Location
Answer: B
Explanation:
Possession-based authentication relies on physical objects or devices that a user must have in their control to gain access. Unlike knowledge-based methods, such as passwords or PINs, or inherence-based methods, such as biometrics, possession-based authentication requires a tangible item that cannot be easily guessed, memorized, or duplicated digitally. Common examples include hardware tokens, smart cards, key fobs, and mobile devices used for one-time passwords (OTPs). These items provide an additional layer of security, as access is contingent not only on knowing the correct credentials but also on physically possessing the authorized device.
Tokens, one of the most widely used forms of possession-based authentication, often generate time-based or event-based codes. Time-based One-Time Passwords (TOTP) change at regular intervals, typically every 30 to 60 seconds, while HMAC-based One-Time Passwords (HOTP) are event-based, changing in response to a specific action, such as pressing a button on the token. These dynamic codes are used in combination with a username and password, making unauthorized access significantly more difficult. Even if an attacker steals or guesses a password, they cannot authenticate without the token, reducing the risk of account compromise.
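A minimal TOTP sketch is shown below, implemented directly with the Python standard library following the RFC 6238 approach; the Base32 secret is a well-known example value used only for illustration.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # time step since the Unix epoch
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code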
Smart cards are another example of possession-based authentication, often used in enterprise and government environments. They contain embedded microchips that store cryptographic keys, certificates, or other secure credentials. When used in conjunction with a PIN or password, smart cards provide two-factor authentication (2FA). They can also be used for secure login to computer systems, encrypted email, digital signatures, and physical access to restricted areas. The use of smart cards helps enforce strong authentication policies, particularly in organizations that handle sensitive data or require compliance with regulatory standards.
Proper management of possession-based authentication devices is essential for security. Organizations must implement secure issuance procedures, ensuring that devices are distributed only to authorized personnel. Lifecycle management includes tracking the device’s usage, performing updates to cryptographic material, and revoking or deactivating devices when users leave the organization, lose the device, or if the device is suspected to be compromised. Revocation and replacement mechanisms must be rapid to prevent unauthorized access.
Combining possession-based authentication with other factors, such as knowledge (passwords) or inherence (biometrics), achieves multi-factor authentication (MFA), which is widely regarded as a best practice for securing sensitive systems. MFA significantly reduces the likelihood of compromise because attackers must bypass multiple independent layers of verification. For high-security environments, such as banking, healthcare, or government networks, possession-based authentication is a foundational component that ensures robust, layered access control.
The authentication factor based on what a user possesses is Possession, making it the correct answer. It is a critical component of multi-factor security systems used in enterprise and high-security environments, providing a tangible barrier against unauthorized access.
Question 193:
Which attack attempts to exhaust system resources to make a service unavailable?
A) SQL Injection
B) Denial-of-Service
C) Man-in-the-Middle
D) Phishing
Answer: B
Explanation:
Denial-of-Service (DoS) attacks are a critical threat to the availability component of cybersecurity, targeting networks, servers, applications, or entire systems. The primary goal of a DoS attack is to overwhelm the target’s resources to the point where legitimate users cannot access the services or applications. While a standard DoS attack usually originates from a single source, a Distributed Denial-of-Service (DDoS) attack leverages multiple compromised systems, often part of a botnet, to launch coordinated attacks at much higher volumes, making mitigation and defense significantly more challenging.
Various techniques are used to execute DoS attacks. TCP and UDP floods are common network-layer attacks that saturate bandwidth or exhaust connection-handling capabilities. HTTP request floods target web servers by sending large volumes of requests, consuming server memory or processing power. Amplification attacks, such as DNS reflection, NTP amplification, or Memcached amplification, exploit legitimate servers to generate massive traffic toward a victim, magnifying the impact. Attackers can also combine multiple attack vectors to create multi-vector DoS attacks, which are more difficult to detect and defend against.
The consequences of successful DoS attacks can be severe. Organizations may experience significant downtime, loss of revenue, degradation of customer trust, and damage to brand reputation. In critical sectors, such as finance, healthcare, and public services, DoS attacks can disrupt essential operations, potentially endangering lives or compromising regulatory compliance. Extended downtime can also lead to cascading effects on dependent systems and services, amplifying the overall impact.
Mitigation strategies for DoS attacks involve a combination of technical, architectural, and procedural controls. Traffic filtering and rate limiting at the network edge help reduce the volume of malicious traffic. Intrusion detection and prevention systems monitor for abnormal traffic patterns and can automatically block suspected attack sources. Redundancy through load balancing, geographically distributed data centers, and content delivery networks ensures that services remain available even during high-volume attacks. Cloud-based DDoS protection services offer elastic capacity to absorb large-scale attacks and provide automated mitigation for rapid response.
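Rate limiting at the edge is often described with the token-bucket model; the following Python sketch shows the idea with illustrative rate and burst parameters.

import time

class TokenBucket:
    def __init__(self, rate_per_second: float, burst: int):
        self.rate = rate_per_second        # sustained allowance
        self.capacity = burst              # short-term burst allowance
        self.tokens = float(burst)
        self.last_refill = time.monotonic()

    def allow_request(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                       # request dropped: bucket is empty

bucket = TokenBucket(rate_per_second=5, burst=10)
print(sum(bucket.allow_request() for _ in range(50)))  # roughly 10 allowed in a tight loop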
Preparation and response are equally important. Organizations should have detailed incident response plans for DoS attacks, including communication protocols, system isolation procedures, and coordination with upstream providers or security vendors. Regular stress testing and simulations help teams understand system resilience, identify bottlenecks, and fine-tune mitigation strategies. Understanding attack vectors, monitoring traffic patterns, and maintaining updated defenses are key to minimizing disruption.
The attack designed to exhaust resources and disrupt availability is Denial-of-Service, making it the correct answer. DoS attacks pose a persistent risk, emphasizing the importance of robust network design, proactive defense mechanisms, and comprehensive incident response planning.
Question 194:
Which security testing method evaluates a system without executing its code?
A) Dynamic Testing
B) Fuzz Testing
C) Static Code Analysis
D) Black-Box Testing
Answer: C
Explanation:
Static Code Analysis (SCA) is a fundamental practice in secure software development, focusing on examining source code, bytecode, or compiled binaries without executing the program. Unlike dynamic testing methods, which analyze the program in a runtime environment to detect vulnerabilities under specific conditions, static analysis identifies issues at the code level, allowing developers to address problems early in the development lifecycle. By detecting security flaws, logic errors, or noncompliance with coding standards before deployment, SCA reduces the risk of vulnerabilities reaching production, saving both time and cost associated with post-release remediation.
Techniques used in static code analysis vary widely. Automated tools scan code for known patterns of insecure coding, such as buffer overflows, SQL injection points, hardcoded credentials, or improper input validation. These tools can parse multiple programming languages, analyze control and data flow, and identify potential vulnerabilities across entire codebases. Advanced static analysis also involves symbolic execution, where the tool explores possible execution paths based on input variables, helping uncover edge-case vulnerabilities that might be missed during manual review. Manual code inspection complements automated scans, allowing security experts to assess complex logic, design decisions, or architectural concerns that may not conform to standard patterns but still present security risks.
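To make the pattern-matching idea concrete, the following Python sketch scans source files for a few insecure patterns without executing them; the rules are simplistic assumptions and no substitute for a real static analysis tool.

import re
import sys
from pathlib import Path

# Illustrative rules only; real analyzers use language parsing and data-flow analysis.
RULES = {
    "hardcoded password": re.compile(r"password\s*=\s*['\"].+['\"]", re.IGNORECASE),
    "use of eval": re.compile(r"\beval\s*\("),
    "subprocess with shell=True": re.compile(r"shell\s*=\s*True"),
}

def scan_file(path: Path):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for rule_name, pattern in RULES.items():
            if pattern.search(line):
                print(f"{path}:{lineno}: possible {rule_name}")

if __name__ == "__main__":
    for target in sys.argv[1:]:            # usage: python scan.py file1.py file2.py
        scan_file(Path(target))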
Integrating static code analysis into the software development lifecycle—particularly in agile and DevSecOps environments—enables continuous security monitoring. Automated SCA can be embedded into version control pipelines, build processes, and pull request workflows. This ensures that every code commit is checked against security and quality standards before being merged, preventing vulnerable code from progressing further. SCA can also provide metrics and dashboards to track security debt, compliance status, and code quality trends over time, helping organizations prioritize remediation efforts and maintain secure coding practices across teams.
Static code analysis has additional benefits beyond security. It promotes adherence to organizational coding standards, enhances maintainability, and improves overall code quality. For organizations subject to regulatory frameworks such as PCI DSS, HIPAA, or ISO 27001, SCA provides auditable evidence that code has been reviewed for security and compliance, supporting certification and audit requirements. Furthermore, integrating static analysis with other testing methodologies, such as dynamic testing, fuzzing, and penetration testing, creates a comprehensive vulnerability management strategy that addresses both design-time and runtime threats.
The testing method that evaluates code without execution is Static Code Analysis, making it the correct answer. By proactively identifying vulnerabilities, enforcing coding standards, and supporting compliance, SCA serves as a cornerstone of secure software development and risk mitigation strategies.
Question 195:
Which control type corrects problems after they occur?
A) Preventive
B) Detective
C) Corrective
D) Deterrent
Answer: C
Explanation:
Corrective controls are a critical component of a comprehensive cybersecurity and risk management strategy because they focus on restoring systems, processes, and data to a secure and functional state after a security incident, failure, or disruption. Unlike preventive controls, which aim to stop incidents before they happen, or detective controls, which identify and alert on incidents as they occur, corrective controls are reactive by nature, designed to minimize damage, reduce downtime, and ensure continuity of operations. Deterrent controls, in contrast, may discourage unwanted actions but do not actively restore or correct systems. Corrective controls form an essential part of the incident response and business continuity framework in any organization, ensuring resilience against both intentional attacks and accidental failures.
Examples of corrective controls are diverse and span multiple layers of technology and process. At the system level, patch management is a critical corrective measure. Applying patches or updates to software after vulnerabilities have been identified closes security gaps that could be exploited by attackers. Similarly, restoring systems from clean backups allows organizations to recover from ransomware attacks, data corruption, or accidental deletion. Backup strategies often include full, incremental, or differential backups stored in secure, off-site locations or cloud environments to ensure recoverability even in large-scale incidents. Corrective measures may also involve repairing corrupted files, reconfiguring security settings, resetting compromised credentials, or restoring network configurations to a secure state.
Corrective controls are not only technical but also procedural. Incident response plans, disaster recovery procedures, and post-incident reviews are all corrective measures. These plans guide the organization through the steps necessary to recover from an incident, prioritize critical assets, allocate resources effectively, and minimize operational disruption. Training staff to implement corrective measures quickly and efficiently is essential, as delays in restoration can amplify the impact of a breach or system failure. Documentation of corrective actions is also critical, providing an audit trail and facilitating continuous improvement of security practices.
Integration with preventive and detective controls enhances the effectiveness of corrective measures. Preventive controls reduce the likelihood of incidents occurring, detective controls provide timely alerts and evidence of breaches, and corrective controls ensure that any incidents that do occur are managed and mitigated effectively. For instance, if a network intrusion bypasses preventive measures, intrusion detection systems (a detective control) can alert administrators, who then implement corrective actions such as isolating affected systems, removing malicious code, and restoring compromised services. This layered approach ensures that even if one control fails, others provide compensation, maintaining the overall security posture.
Corrective controls are particularly important for compliance and regulatory frameworks. Organizations subject to standards such as ISO 27001, NIST, HIPAA, or PCI DSS must demonstrate the ability to recover from security incidents and maintain operational continuity. Corrective actions, along with documentation and reporting, provide evidence of resilience and compliance. In large-scale cyber incidents, such as ransomware attacks or critical system failures, the effectiveness of corrective controls directly affects the organization’s ability to continue operations, minimize financial loss, and protect customer trust.
The control type that addresses problems after occurrence is Corrective, making it the correct answer. Effective implementation of corrective controls ensures that organizations can restore systems, recover data, and maintain operational continuity, even in the face of significant disruptions or attacks.