ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 3 Q31-45

Question 31:

 Which network security device inspects incoming and outgoing packets, monitors active connections, and allows or blocks traffic based on context?

A) Packet-Filtering Firewall
B) Stateful Firewall
C) Proxy Firewall
D) Network Intrusion Detection System

Answer: B

Explanation:

Packet-Filtering Firewalls operate at the network layer and inspect packets individually, checking IP addresses, ports, and protocol headers. They are fast and efficient, making them suitable for high-throughput environments, but their simplicity is also a limitation. Because they evaluate packets in isolation, they cannot track the state of a connection or detect contextual threats that emerge over time, such as multi-step attacks or suspicious patterns spanning multiple packets. While packet-filtering firewalls are effective for basic traffic control, they often need to be combined with other security measures for comprehensive protection.

Proxy Firewalls, on the other hand, operate at the application layer, acting as intermediaries between clients and servers. By terminating connections on behalf of clients and generating new connections to the server, proxy firewalls can inspect the full content of traffic, enforce application-specific rules, and filter based on protocols such as HTTP, FTP, or SMTP. This capability allows them to block malicious content or unauthorized commands at the application level. However, this deeper inspection introduces higher latency, and unlike stateful firewalls, traditional proxy firewalls do not inherently maintain connection state across sessions. Despite this, they are valuable for organizations that require granular control over application usage or need to enforce strict content filtering policies.

Network Intrusion Detection Systems (NIDS) monitor network traffic for signs of malicious activity, such as scanning, exploits, or anomalous behavior. While NIDS are highly effective at detecting threats and generating alerts for security teams, they do not typically enforce access controls or block traffic in real time. Instead, they complement firewall solutions by providing visibility into ongoing attacks and helping with incident response and forensic analysis.

Stateful Firewalls maintain information about active connections, including TCP session states, sequence numbers, and protocol flags, enabling them to make context-aware access decisions. By tracking connection state, they can differentiate between legitimate traffic as part of an established session and unsolicited or potentially harmful packets. This capability allows stateful firewalls to prevent attacks such as session hijacking, SYN floods, and unauthorized access attempts more effectively than stateless packet filters. Operating at multiple layers, including the transport and network layers, stateful firewalls can also integrate with logging, monitoring, and intrusion prevention systems for enhanced visibility and proactive security enforcement.
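
To make the state-tracking idea concrete, here is a minimal, illustrative Python sketch of a connection table: flows opened from inside the network are recorded, matching replies are allowed, and unsolicited inbound packets are dropped. The class and flow-key format are hypothetical simplifications; a real stateful firewall also tracks TCP flags, sequence numbers, and timeouts.

```python
# Toy illustration of the state-table idea behind stateful filtering.
# A real firewall tracks TCP flags, sequence numbers, and timeouts;
# here a flow is identified only by a (src, dst, port) tuple.

class ToyStatefulFirewall:
    def __init__(self):
        self.established = set()  # flows initiated from inside

    def outbound(self, src, dst, dport):
        """Record an internally initiated flow and permit it."""
        self.established.add((src, dst, dport))
        return "ALLOW"

    def inbound(self, src, dst, sport):
        """Permit inbound traffic only if it matches a tracked flow."""
        if (dst, src, sport) in self.established:
            return "ALLOW"   # reply to an established session
        return "DROP"        # unsolicited packet, no matching state

fw = ToyStatefulFirewall()
fw.outbound("10.0.0.5", "93.184.216.34", 443)       # client opens HTTPS
print(fw.inbound("93.184.216.34", "10.0.0.5", 443)) # ALLOW (reply)
print(fw.inbound("203.0.113.9", "10.0.0.5", 443))   # DROP (unsolicited)
```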

The device that inspects packets while considering session context and connection state is a stateful firewall, making it the correct answer. Its ability to enforce dynamic, context-aware policies strengthens network security while maintaining traffic flow efficiency. By combining inspection depth, session awareness, and integration capabilities, stateful firewalls serve as a cornerstone of enterprise network security architectures.

Question 32:

 Which type of access control enforces policies based on system-enforced rules and conditions rather than owner discretion?

A) Discretionary Access Control (DAC)
B) Mandatory Access Control (MAC)
C) Role-Based Access Control (RBAC)
D) Attribute-Based Access Control (ABAC)

Answer: B

Explanation:

 Discretionary Access Control (DAC) allows resource owners to grant or revoke permissions at their discretion. While flexible, DAC can result in inconsistent access enforcement and increased risk if owners misconfigure permissions. Role-Based Access Control (RBAC) assigns access based on organizational roles, simplifying management but requiring clear role definitions. Attribute-Based Access Control (ABAC) enforces access policies based on attributes, such as user, resource, or environmental properties, providing dynamic and granular control.

Mandatory Access Control (MAC) enforces access based on system-assigned labels and policies. Security labels are applied to both subjects (users or processes) and objects (files, resources, or data), and access decisions are strictly controlled by the system. Users cannot override these policies, which ensures consistent enforcement across the organization and reduces the risk of human error or intentional policy circumvention. This centralized control makes MAC particularly effective for environments where data sensitivity is paramount, and mistakes can have severe consequences.

MAC relies on predefined rules that are typically established by a security administrator or system policy. For example, a document labeled as “Top Secret” can only be accessed by subjects with a corresponding security clearance, and any attempt to bypass or override these rules is automatically denied by the system. This ensures that sensitive information remains protected, regardless of the intentions or knowledge of individual users. By removing discretionary decision-making from the equation, MAC prevents privilege escalation and unauthorized access, helping organizations maintain confidentiality, integrity, and regulatory compliance.
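
A minimal sketch of this kind of system-enforced rule might look like the following; the clearance levels and dominance check are illustrative simplifications of label-based MAC, not a real implementation such as SELinux.

```python
# Minimal sketch of label-based access decisions (hypothetical levels).
# Real MAC systems enforce far richer policies.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """System-enforced rule: read is allowed only if the subject's
    clearance dominates (is at least) the object's classification."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(can_read("Secret", "Confidential"))  # True
print(can_read("Secret", "Top Secret"))    # False - denied by policy,
                                           # and no owner can override it
```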

MAC is commonly implemented in government and military environments, where hierarchical classifications of information, such as Confidential, Secret, and Top Secret, are essential. However, its principles are increasingly applied in corporate and enterprise environments handling highly sensitive data, such as financial records, healthcare information, and intellectual property. Implementations can use a variety of mechanisms, including security-enhanced operating systems, labeled file systems, and access control enforcement within applications.

Compared to Discretionary Access Control (DAC), where resource owners can set permissions and grant access at their discretion, MAC provides a much stronger and more predictable security posture. Users cannot grant themselves or others access beyond what is allowed by the system policy. This predictability also simplifies auditing and compliance reporting, as administrators can verify that access policies are enforced uniformly across all users and resources.

MAC is the model that enforces system-based rules and policies rather than relying on owner discretion, making it the correct answer. Its strength lies in centralized control, predictable access enforcement, and alignment with regulatory or classified data requirements. By systematically enforcing access based on security labels and classifications, MAC ensures robust protection for sensitive resources and reduces the likelihood of accidental or malicious data exposure.

Question 33:

 Which type of attack involves an attacker capturing valid data transmissions and retransmitting them to gain unauthorized access?

A) Replay Attack
B) Man-in-the-Middle Attack
C) Phishing Attack
D) Brute-Force Attack

Answer: A

Explanation:

 Man-in-the-Middle (MitM) attacks intercept communication between two parties, potentially altering or eavesdropping on data. While MitM attacks may involve capturing data, the goal is typically manipulation rather than mere replay. Phishing attacks deceive users into divulging credentials or performing actions via fake websites or emails, focusing on social engineering rather than capturing and retransmitting messages. Brute-Force Attacks attempt every possible combination of passwords or keys to gain unauthorized access, relying on computational effort rather than interception.

Replay Attacks involve intercepting valid data transmissions—such as authentication tokens, session cookies, or messages—and retransmitting them to the system to gain unauthorized access without possessing the original credentials. These attacks exploit weaknesses in systems that do not implement proper session validation, timestamping, or the use of nonce values. Since the attacker is using previously captured legitimate information, traditional password-based defenses may not detect the intrusion, making replay attacks particularly insidious.

A classic example of a replay attack occurs in authentication systems. An attacker might capture a login token transmitted over a network and then retransmit it at a later time to impersonate a legitimate user. Similarly, in financial transactions, an attacker could intercept a payment authorization message and resend it, potentially causing duplicate or fraudulent transactions. Replay attacks are also a concern in wireless communications, IoT devices, and protocols like Kerberos or older versions of SSL/TLS that lack robust anti-replay mechanisms.

The vulnerability arises because many systems assume that incoming authentication data is fresh and valid, without verifying whether it has already been used. Without mechanisms like timestamps, sequence numbers, or one-time-use nonces, the system cannot distinguish between a genuine request and a replayed one. Consequently, attackers can bypass authentication, gain unauthorized access, or manipulate transactions without ever needing to compromise passwords or encryption keys directly.

Mitigating replay attacks requires implementing robust session management and validation mechanisms. Nonce-based challenges, where each authentication request contains a unique, single-use value, ensure that captured data cannot be reused. Timestamps provide a temporal validity window, so old messages become invalid once they expire. Secure session management, including token expiration and cryptographic signing of messages, further reduces the likelihood of replay-based exploits. In addition, combining replay protection with encryption and mutual authentication strengthens overall security by ensuring that intercepted messages cannot be decrypted or reused effectively.
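
As a rough illustration of how nonces, timestamps, and message signing combine to defeat replay, consider the following Python sketch; the function names, validity window, and HMAC scheme are assumptions chosen for brevity, not a production protocol.

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(32)       # shared HMAC key (illustrative)
seen_nonces = set()           # nonces that have already been accepted
MAX_AGE = 30                  # validity window in seconds

def sign(message: bytes, nonce: bytes, timestamp: float) -> bytes:
    data = message + nonce + str(int(timestamp)).encode()
    return hmac.new(SECRET, data, hashlib.sha256).digest()

def verify(message, nonce, timestamp, tag) -> bool:
    if abs(time.time() - timestamp) > MAX_AGE:
        return False                       # stale: outside time window
    if nonce in seen_nonces:
        return False                       # replayed: nonce already used
    if not hmac.compare_digest(tag, sign(message, nonce, timestamp)):
        return False                       # forged or tampered message
    seen_nonces.add(nonce)                 # consume the single-use nonce
    return True

nonce, ts = os.urandom(16), time.time()
tag = sign(b"transfer $100", nonce, ts)
print(verify(b"transfer $100", nonce, ts, tag))  # True  - first use
print(verify(b"transfer $100", nonce, ts, tag))  # False - replay rejected
```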

The attack that specifically relies on capturing legitimate transmissions and replaying them to bypass authentication is the replay attack, making it the correct answer. By employing nonce-based challenges, timestamps, sequence numbers, and secure session handling, organizations can defend against this threat and ensure that authentication mechanisms remain resilient against interception and reuse of valid credentials.

Question 34:

 Which type of malware secretly records user activities, such as keystrokes, screenshots, or browsing behavior?

A) Virus
B) Trojan
C) Spyware
D) Worm

Answer: C

Explanation:

 Viruses attach themselves to files and require execution to propagate. They primarily aim to replicate and potentially disrupt systems but are not specifically designed to monitor user activity. Trojans disguise themselves as legitimate programs to trick users into execution and may deliver various payloads, but monitoring is only one possible function. Worms self-replicate across networks and can spread rapidly without user interaction, typically focusing on disruption or propagation rather than covert observation.

Spyware is malicious software specifically designed to monitor and collect user activity without consent. It may record keystrokes, capture screenshots, track browsing history, or collect sensitive data such as login credentials, banking information, and personal identifiers. Spyware can also monitor applications in use, capture system information, and even track location data on mobile devices. This type of malware is often installed via Trojans, malicious downloads, phishing emails, or drive-by attacks on compromised websites. Because spyware operates covertly, it may remain undetected for long periods, silently exfiltrating valuable information.

The purpose of spyware is primarily surveillance, information exfiltration, and profiling of user behavior. Attackers use this data to commit identity theft, financial fraud, or unauthorized system access. In some cases, spyware can be bundled with adware to generate revenue through targeted advertisements based on collected user behavior. Its stealthy nature allows it to bypass traditional antivirus solutions, making detection and removal challenging. Some advanced spyware variants can disable security tools, manipulate system settings, and evade monitoring mechanisms.

Effective mitigation against spyware involves a combination of technical and behavioral measures. Endpoint protection software with anti-spyware capabilities can detect and remove threats, while keeping operating systems and applications up to date reduces the vulnerabilities that spyware exploits. User awareness is also crucial, as avoiding suspicious downloads, links, and email attachments helps prevent infection. Regular system monitoring, scanning, and auditing can identify anomalies indicative of spyware activity.

The malware that explicitly monitors and records user actions covertly is spyware, making it the correct answer. By implementing robust endpoint protection, maintaining system hygiene, and fostering user vigilance, organizations and individuals can reduce the risk posed by spyware and safeguard sensitive data effectively.

Question 35:

 Which cryptographic method ensures data integrity and verifies that the content has not been altered?

A) Encryption
B) Hashing
C) Digital Signature
D) Steganography

Answer: B

Explanation:

Encryption transforms plaintext into ciphertext to protect confidentiality, preventing unauthorized access by ensuring that only authorized parties with the correct decryption key can read the data. While encryption is effective at protecting content from disclosure, it does not inherently verify that the data has remained unaltered. For instance, an attacker could intercept an encrypted message and modify it without detection if no integrity verification is in place. Digital Signatures address this limitation by combining hashing with asymmetric encryption to provide authentication, integrity, and non-repudiation. However, the underlying mechanism that directly ensures that the data has not been altered is the hash function itself. Steganography, in contrast, hides information within other files such as images or audio, providing concealment rather than integrity verification.

Hashing generates a fixed-length digest from input data, regardless of the size of the original content. Even a minor change in the input—such as flipping a single bit—produces a drastically different hash value, making tampering immediately detectable. This property, known as the avalanche effect, is fundamental to cryptographic security. Widely used hash functions, such as SHA-256, SHA-3, and SHA-1 (though the latter is now considered insecure), are employed in a variety of security applications, including file integrity verification, password storage, digital signature verification, and blockchain technology. In file integrity checks, for example, an organization can compute a hash of a file before and after transmission; if the hash values match, the file is confirmed to be unaltered.
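
Both properties are easy to demonstrate with Python’s standard hashlib; the avalanche effect shows up immediately, and the streaming helper below is one common way to hash large files for integrity checks.

```python
import hashlib

# One-character change in the input yields an unrelated digest
# (the avalanche effect).
print(hashlib.sha256(b"transfer $100").hexdigest())
print(hashlib.sha256(b"transfer $101").hexdigest())

# File integrity check: compute the digest before and after transfer
# and compare. Streaming in chunks handles files of any size.
def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```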

Hashing is a one-way operation, meaning the original data cannot be reconstructed from the hash alone. This property ensures that sensitive data, such as passwords or authentication tokens, can be stored or transmitted securely without exposing the actual content. Hashing also enables the creation of checksums and message authentication codes (MACs), which allow systems to verify data integrity during storage or network communication.

The method that ensures data integrity and detects alterations without necessarily providing confidentiality is hashing, making it the correct answer. By providing a reliable, irreversible way to detect modifications, hashing underpins numerous security applications and protocols, forming the foundation for secure authentication, integrity verification, and trusted digital communications across modern computing environments.

Question 36:

 Which principle limits users and processes to the minimum permissions necessary to perform their functions?

A) Defense in Depth
B) Separation of Duties
C) Least Privilege
D) Need-to-Know

Answer: C

Explanation:

Defense in Depth focuses on layering security controls, such as firewalls, intrusion detection, and access control, to provide redundancy and resilience, but it does not specifically restrict permissions. Separation of Duties divides responsibilities among multiple individuals to prevent fraud or misuse, but it does not constrain the technical permissions granted for a given task. Need-to-Know restricts access to information based on whether it is required for a user’s role, which primarily applies to sensitive data access rather than system operations.

Least Privilege is a security principle that limits users, processes, and systems to only the permissions required to perform their assigned tasks. Enforcing minimal access reduces the attack surface and limits potential damage from errors or compromises. For example, if an employee only needs read access to a database, granting write permissions would violate least privilege and increase risk. Implementing least privilege also aids compliance, enhances security audits, and supports robust access management policies. Techniques like role-based access control, privileged access management, and mandatory access control help enforce least privilege in practice.
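
A minimal sketch of least-privilege enforcement might look like the following, where each principal holds an explicit set of grants and everything else is denied; the role names and permission strings are purely illustrative.

```python
# Illustrative deny-by-default permission check. Each principal is
# granted only what its task requires; anything else raises an error.

GRANTS = {
    "report_user": {"db:read"},              # read-only reporting role
    "etl_service": {"db:read", "db:write"},  # pipeline needs both
}

def authorize(principal: str, permission: str) -> None:
    if permission not in GRANTS.get(principal, set()):
        raise PermissionError(f"{principal} lacks {permission}")

authorize("report_user", "db:read")        # allowed: needed for the job
try:
    authorize("report_user", "db:write")   # not needed for the role
except PermissionError as e:
    print("denied:", e)
```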

The principle that ensures users and processes operate with only the permissions essential for their duties is Least Privilege, making it the correct answer. Properly applied, it reduces the likelihood of unauthorized actions, privilege escalation, and accidental or intentional misuse.

Question 37:

 Which type of cryptography uses a pair of mathematically related keys, one public and one private, to secure communications?

A) Symmetric Encryption
B) Asymmetric Encryption
C) Hashing
D) Steganography

Answer: B

Explanation:

 Symmetric Encryption uses a single shared key for both encryption and decryption. While efficient for large data sets, it requires secure key distribution and does not support digital signatures or non-repudiation. Hashing creates fixed-length digests from data to ensure integrity, but does not encrypt or allow decryption. Steganography hides information within another medium but provides no cryptographic protection.

Asymmetric Encryption, also called public-key cryptography, uses a pair of keys: a public key for encryption and a private key for decryption. In this system, messages encrypted with the public key can only be decrypted using the corresponding private key, and messages signed with the private key can be verified with the public key. This unique property enables secure communication between parties who have never previously exchanged a secret key, solving one of the primary challenges inherent in symmetric encryption: secure key distribution. By eliminating the need to share private keys, asymmetric encryption reduces the risk of key compromise during transmission.

One of the most important applications of asymmetric cryptography is the creation of digital signatures. When a sender signs a message with their private key, the recipient can verify the authenticity and integrity of the message using the sender’s public key. This ensures that the message was indeed sent by the claimed sender (authentication) and that it has not been altered in transit (integrity). Digital signatures also provide non-repudiation, preventing the sender from denying the origin of the message at a later time. This functionality is critical for secure communications, electronic transactions, and legal or regulatory compliance.
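
The sign-and-verify round trip can be sketched with the third-party pyca/cryptography package; the message, key size, and padding choices here are illustrative, and verification raises an exception if the message or signature has been altered.

```python
# Digital signature round trip with RSA (assumes the third-party
# pyca/cryptography package is installed: pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"wire $5,000 to account 12345"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification succeeds only if the message is intact and the signature
# was produced by the matching private key; otherwise it raises
# InvalidSignature.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```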

Widely used algorithms in asymmetric cryptography include RSA, ECC (Elliptic Curve Cryptography), and DSA (Digital Signature Algorithm). RSA is one of the earliest and most commonly deployed algorithms, known for its robustness but requiring relatively large key sizes for strong security. ECC provides equivalent security with much smaller key sizes, making it ideal for modern devices with constrained resources, such as mobile and IoT devices. DSA is primarily used for digital signatures in various protocols.

Asymmetric cryptography also forms the foundation of secure protocols like TLS/SSL, which protect web traffic, and is essential for certificate authorities (CAs) in the public key infrastructure (PKI), which verify identities and distribute public keys safely. By enabling secure key exchange, authentication, and encrypted communication, asymmetric encryption addresses multiple security requirements simultaneously.

The cryptographic method that relies on mathematically related key pairs for encryption and decryption is asymmetric encryption, making it the correct answer. Its combination of confidentiality, authentication, and digital signature capabilities makes it indispensable in cybersecurity, underpinning secure email, secure web browsing, VPNs, and numerous other modern security protocols.

Question 38:

 Which type of attack floods a system, network, or service with excessive traffic, causing disruption or denial of service?

A) Man-in-the-Middle Attack
B) Denial-of-Service Attack
C) Phishing Attack
D) SQL Injection

Answer: B

Explanation:

 Man-in-the-Middle Attacks intercept communications to eavesdrop or modify traffic, but do not inherently flood systems. Phishing Attacks use social engineering to deceive users into divulging credentials or installing malware. SQL Injection targets databases by inserting malicious queries to access or modify data, not by overwhelming systems.

Denial-of-Service (DoS) attacks aim to disrupt normal operation by overwhelming systems, networks, or services with excessive traffic or resource requests. This prevents legitimate users from accessing resources and can result in downtime, degraded performance, or cascading failures. Distributed Denial-of-Service (DDoS) attacks amplify this effect by using multiple compromised systems, often organized in botnets, to target a single victim simultaneously. Attackers leverage various vectors, including TCP/UDP floods, HTTP floods, SYN floods, ping of death, and amplification attacks, to exhaust server resources, bandwidth, or application capabilities. The impact can range from minor slowdowns to complete service unavailability, affecting business continuity, user trust, and financial performance.

DoS attacks exploit system vulnerabilities, weak network configurations, or inadequate capacity planning. For example, TCP SYN floods exploit the three-way handshake process by initiating connections without completing them, leaving resources tied up and unavailable to legitimate users. Amplification attacks, such as DNS or NTP amplification, take advantage of misconfigured servers to generate significantly larger traffic volumes, further stressing the target network. HTTP floods target web applications specifically, overwhelming servers by mimicking legitimate user behavior, making detection more challenging.

Effective mitigation strategies include a combination of proactive and reactive measures. Traffic filtering and rate limiting can block excessive or suspicious traffic patterns. Intrusion Prevention Systems (IPS) can detect and automatically respond to DoS signatures. Network redundancy, load balancing, and cloud-based DDoS protection services distribute traffic to maintain availability. Content Delivery Networks (CDNs) cache and deliver content closer to users, reducing the impact of volumetric attacks on origin servers. Additionally, robust incident response planning ensures organizations can quickly identify attacks, communicate with stakeholders, and restore services.
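
Rate limiting is often implemented with a token-bucket algorithm; the sketch below is a simplified, single-threaded illustration with arbitrary rate and burst parameters, not a production-grade defense.

```python
import time

class TokenBucket:
    """Toy per-client rate limiter; parameters are illustrative."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst cap.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # request admitted
        return False             # over the limit: drop or queue

bucket = TokenBucket(rate=10, burst=20)   # 10 req/s, bursts up to 20
admitted = sum(bucket.allow() for _ in range(100))
print(f"{admitted} of 100 back-to-back requests admitted")  # roughly 20
```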

The attack that floods systems with excessive traffic to cause disruption is a Denial-of-Service Attack, making it the correct answer. Understanding DoS attacks is essential for network resilience, operational continuity, and designing layered defenses. By combining technical controls, monitoring, and contingency planning, organizations can minimize downtime, protect user access, and maintain trust in their services even during large-scale attacks.

Question 39:

 Which principle focuses on having multiple, overlapping layers of security to protect assets?

A) Least Privilege
B) Separation of Duties
C) Defense in Depth
D) Open Design

Answer: C

Explanation:

Least Privilege limits access rights to the minimum necessary for users or processes to perform their tasks, reducing the risk of accidental or intentional misuse. Separation of Duties divides responsibilities among multiple individuals to prevent errors, fraud, or conflicts of interest, ensuring that no single person has unchecked control over critical operations. Open Design emphasizes that security should rely on robust, well-designed architecture rather than the secrecy of system implementation, promoting transparency and resilience.

Defense in Depth, in contrast, is a comprehensive security strategy that employs multiple, complementary layers of protection. These layers can include physical security measures, network firewalls, intrusion detection and prevention systems, endpoint protection, access controls, authentication mechanisms, encryption, and monitoring. The core idea is that the failure or bypass of one layer does not compromise the overall security posture, as additional layers continue to defend critical assets. This layered approach increases the difficulty and cost for attackers while providing redundancy in detection and mitigation, making it much more effective than relying on a single security control.

For example, an attacker who bypasses a perimeter firewall may still be detected by intrusion detection systems, blocked by endpoint protections, or prevented from accessing sensitive data due to strict access controls. Defense in Depth also supports regulatory compliance by demonstrating multiple protective measures and aligns with incident response planning, ensuring organizations can respond effectively to breaches.

The principle that emphasizes layered security to protect assets from multiple angles is Defense in Depth, making it the correct answer. By combining overlapping safeguards, organizations reduce vulnerabilities, strengthen resilience, and maintain a robust security posture even under complex threat conditions.

Question 40:

 Which type of risk assessment uses qualitative methods, such as expert judgment and ranking, rather than numeric calculations?

A) Quantitative Risk Assessment
B) Qualitative Risk Assessment
C) Semi-Quantitative Risk Assessment
D) Asset-Based Risk Assessment

Answer: B

Explanation:

Quantitative Risk Assessment assigns numeric values to both the probability of occurrence and the potential impact of risks, allowing organizations to calculate expected monetary losses. By providing precise figures, it facilitates cost-benefit analyses, budgeting for mitigation strategies, and comparing risks across different business units. However, quantitative assessments require accurate historical data and detailed information about vulnerabilities and threats, which may not always be available.
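
The standard quantitative formulas are single loss expectancy (SLE = asset value × exposure factor) and annualized loss expectancy (ALE = SLE × ARO); a small worked example with illustrative figures:

```python
# Standard quantitative risk formulas (all values are illustrative):
#   SLE (single loss expectancy)     = asset value x exposure factor
#   ALE (annualized loss expectancy) = SLE x ARO

asset_value = 500_000      # value of the asset in dollars
exposure_factor = 0.30     # fraction of value lost per incident
aro = 0.5                  # annual rate of occurrence (one every 2 years)

sle = asset_value * exposure_factor     # $150,000 per incident
ale = sle * aro                         # $75,000 expected loss per year
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```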

Semi-Quantitative Risk Assessment combines numeric scoring with qualitative judgments, providing weighted rankings of risks without requiring full monetary quantification. This approach allows organizations to prioritize risks more effectively than purely qualitative methods, offering a balance between precision and practicality. Asset-Based Risk Assessment, meanwhile, focuses on identifying and valuing critical assets, determining which resources warrant the highest level of protection. While asset valuation informs risk prioritization, it does not define the complete methodology for assessing risk, probability, or impact.

Qualitative Risk Assessment evaluates risk using descriptive, non-numeric methods. It relies heavily on expert judgment, scenario analysis, workshops, and threat-vulnerability discussions to categorize risks as high, medium, or low. Likelihood and impact are assessed using narrative descriptions rather than strict calculations, making qualitative assessments particularly useful when historical data is limited, uncertain, or difficult to quantify. This approach is also more cost-effective and faster to implement than quantitative methods, which require detailed data collection and modeling.

Qualitative risk assessments guide organizational policy, security controls, and mitigation strategies by highlighting areas of greatest concern. For example, an organization may identify a high-risk scenario such as insider data theft or ransomware attacks and prioritize implementing monitoring, access controls, and user awareness training, even without precise numeric probabilities. This method also supports early-stage security planning, compliance audits, and management reporting by providing clear, understandable risk classifications that stakeholders can act upon.

The risk assessment method that uses descriptive, non-numeric techniques is Qualitative Risk Assessment, making it the correct answer. Its adaptability, efficiency, and reliance on expert knowledge make it a cornerstone of risk management frameworks, particularly in environments where speed, flexibility, and practical guidance are more valuable than exact numerical calculations.

Question 41:

 Which type of social engineering attack involves sending emails that appear to come from a trusted source to trick users into revealing sensitive information?

A) Phishing
B) Tailgating
C) Whaling
D) Vishing

Answer: A

Explanation:

 Tailgating is a physical security breach in which an unauthorized individual follows someone with legitimate access into a secure area, relying on human courtesy rather than deception via digital means. Whaling is a targeted form of phishing that specifically targets high-profile individuals, such as executives, with crafted messages to obtain sensitive information, often for financial gain. Vishing involves using voice communication, such as phone calls, to trick victims into revealing confidential information, relying on social engineering over the phone.

Phishing is the most common social engineering attack in which attackers send emails, text messages, or other electronic communications that appear to come from trusted sources. These sources may include banks, popular online services, government agencies, or internal departments within an organization. Phishing messages are crafted to create a sense of urgency, fear, or curiosity, often prompting recipients to click on malicious links, download infected attachments, or disclose sensitive information such as usernames, passwords, credit card numbers, or personal identification data. By exploiting human psychology rather than technical vulnerabilities, phishing can bypass traditional security controls like firewalls or antivirus software, making it a highly effective attack vector.

There are several variations of phishing. Spear phishing targets specific individuals or organizations, using personalized information to increase credibility and the likelihood of success. Whaling attacks focus on high-profile targets, such as executives or decision-makers, aiming to gain access to sensitive corporate information or authorize financial transactions. Clone phishing involves replicating legitimate emails with malicious modifications, while vishing (voice phishing) and smishing (SMS phishing) leverage phone calls or text messages to achieve similar goals. The common thread across all these methods is deception—tricking users into voluntarily revealing information or performing actions that compromise security.

Effective defenses against phishing combine technology, policy, and user awareness. Anti-phishing email filters and advanced threat protection can detect and block known malicious messages before they reach users’ inboxes. Multi-factor authentication (MFA) reduces the impact of compromised credentials, ensuring that even if a user is tricked, attackers cannot easily gain access to accounts. Security policies that enforce verification of unusual requests, such as wire transfers or password resets, add another layer of protection. Equally important is continuous user education, including simulated phishing exercises, awareness campaigns, and clear reporting mechanisms for suspicious communications.

Organizations that proactively address phishing through a layered approach significantly reduce their vulnerability to these attacks. Regular training and awareness initiatives help employees recognize common phishing indicators, such as unexpected attachments, poor grammar, unusual sender addresses, and mismatched URLs. By fostering a security-conscious culture, companies can mitigate one of the most pervasive threats targeting the human element.
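
One such indicator, a visible link that names a different host than its actual target, can be checked with a simple heuristic like the sketch below; the matching rule is deliberately crude and illustrative, not a substitute for real anti-phishing tooling.

```python
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different host than the
    actual target - a common phishing indicator. Heuristic only."""
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname or ""
    actual = urlparse(href).hostname or ""
    return not actual.endswith(shown.removeprefix("www."))

print(link_mismatch("www.example-bank.com",
                    "http://login.example-bank.com/reset"))  # False: same site
print(link_mismatch("www.example-bank.com",
                    "http://203.0.113.7/phish"))             # True: mismatch
```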

The type of attack that relies on deceptive emails to trick users into divulging confidential information is phishing, making it the correct answer. Understanding phishing is critical for implementing organizational awareness programs and layered security measures that reduce human factor vulnerabilities and strengthen overall cybersecurity resilience.

Question 42:

 Which type of security control is implemented to stop incidents from occurring in the first place?

A) Detective
B) Corrective
C) Preventive
D) Compensating

Answer: C

Explanation:

 Detective controls identify and alert organizations to incidents after they occur, such as intrusion detection systems or audit logs. Corrective controls are designed to restore systems to normal operation after an incident, such as patching vulnerabilities or restoring backups. Compensating controls are alternative safeguards implemented when primary controls cannot be applied due to technical, operational, or financial constraints.

Preventive controls are designed to stop security incidents before they occur by reducing the likelihood of breaches, data loss, or unauthorized access. They operate proactively, addressing vulnerabilities and enforcing security policies to prevent attackers or internal threats from exploiting weaknesses. By focusing on prevention, these controls help organizations maintain the confidentiality, integrity, and availability of critical information and systems. Examples of preventive controls include firewalls, which filter incoming and outgoing network traffic to block malicious activity; access control lists (ACLs) that restrict user access based on roles or permissions; and strong authentication mechanisms, such as multi-factor authentication (MFA), which verify user identities before granting access. Encryption also serves as a preventive control by protecting sensitive data in transit and at rest, making it unreadable to unauthorized individuals.

Security awareness training is another critical preventive measure, as human error remains one of the leading causes of breaches. Educating employees about phishing, social engineering, password hygiene, and safe computing practices reduces the likelihood of successful attacks. Similarly, secure configuration of systems and software hardening, which removes unnecessary services and patches vulnerabilities, helps prevent exploitation by attackers. Network segmentation and least-privilege policies further limit potential attack surfaces by restricting access to critical assets only to those who require it for legitimate tasks.

Preventive controls are most effective when integrated into a layered security strategy alongside detective and corrective controls. Detective controls, such as intrusion detection systems (IDS) and security information and event management (SIEM) systems, identify suspicious activity, while corrective controls, including patch management and incident response procedures, remediate damage and restore systems to normal operation. The combination of these three types of controls—preventive, detective, and corrective—creates a defense-in-depth architecture that strengthens overall security posture, ensuring that even if one layer fails, other measures continue to protect assets.

Preventive controls also play a key role in regulatory compliance, as many standards and frameworks, including ISO 27001, NIST, and HIPAA, require organizations to implement proactive measures to safeguard sensitive data. By addressing risks before they materialize, preventive measures reduce the reliance on reactive responses, lower the potential cost of breaches, and maintain business continuity.

The control type that focuses on stopping incidents before they occur is preventive, making it the correct answer. By enforcing proactive measures, organizations not only protect their systems and information but also instill a culture of security awareness and vigilance, ensuring long-term resilience against evolving threats.

Question 43:

 Which backup strategy involves copying all data since the last full backup and resetting the archive bit?

A) Full Backup
B) Incremental Backup
C) Differential Backup
D) Mirror Backup

Answer: B

Explanation:

Full Backup copies all data regardless of changes, providing a complete snapshot and simplifying recovery, but consuming significant storage and time. Differential Backup copies all changes since the last full backup but does not reset the archive bit, leading to progressively larger backups over time. Mirror Backup creates an exact copy of the source data but does not retain historical versions.

Incremental Backup copies only the data that has changed since the last backup of any type, whether it was a full or previous incremental backup, and then resets the archive bit. This method is designed to optimize both backup speed and storage efficiency. Because only modified or new files are copied, incremental backups are much faster than performing a full backup every time and require significantly less storage space. This efficiency makes incremental backups especially suitable for large datasets and enterprise environments where frequent backups are necessary to ensure data integrity without overloading storage systems or network bandwidth.
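
The archive-bit mechanics can be simulated in a few lines; the dictionary-based model below is an illustrative simplification showing why incremental backups shrink after each run while differential backups keep growing.

```python
# Simulation of the archive bit across backup types. The bit is set when
# a file changes; full and incremental backups clear it, differential
# backups read it but leave it set.

files = {"a.txt": True, "b.txt": True, "c.txt": True}  # True = modified

def full_backup():
    copied = list(files)                    # everything, regardless of bit
    for f in files:
        files[f] = False                    # reset archive bit
    return copied

def incremental_backup():
    copied = [f for f, changed in files.items() if changed]
    for f in copied:
        files[f] = False                    # reset archive bit
    return copied

def differential_backup():
    return [f for f, changed in files.items() if changed]  # bit untouched

print(full_backup())          # ['a.txt', 'b.txt', 'c.txt']
files["a.txt"] = True         # a.txt edited after the full backup
print(incremental_backup())   # ['a.txt'] - and its bit is cleared
print(incremental_backup())   # [] - nothing changed since last incremental
```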

One of the key advantages of incremental backups is their ability to minimize the backup window—the time during which backup operations consume system resources and potentially impact performance. By reducing the amount of data that needs to be processed, organizations can schedule incremental backups more frequently, even multiple times per day, ensuring that critical data is protected with minimal disruption to daily operations. This approach supports business continuity by reducing the risk of data loss between scheduled backups.

However, incremental backups also have certain considerations. Restoration of data requires the last full backup and all subsequent incremental backups to rebuild the complete dataset. This dependency means that if any incremental backup in the chain is missing or corrupted, it can compromise the restoration process. Consequently, organizations must implement careful backup management, verification procedures, and monitoring to ensure the integrity of each incremental backup. Many enterprises use backup software that automates this process, tracks backup chains, and provides alerts for failures or inconsistencies.

Incremental backups are commonly implemented within structured backup schedules that combine full and incremental methods. For example, a weekly full backup may be paired with daily incremental backups, allowing organizations to maintain a balance between recoverability and storage efficiency. This hybrid approach ensures that while storage usage is minimized, recovery remains straightforward and predictable. Incremental backups are also frequently used in conjunction with off-site or cloud-based storage solutions, providing additional resilience against data loss due to hardware failures, ransomware attacks, or natural disasters.

The backup strategy that captures only changes since the last backup and resets the archive bit is an incremental backup, making it the correct answer. By focusing on changed data, incremental backups balance efficiency, storage optimization, and recoverability, making them a critical component of enterprise backup strategies and disaster recovery planning.

Question 44:

 Which type of firewall acts as an intermediary between clients and servers and can inspect application-layer traffic?

A) Packet-Filtering Firewall
B) Stateful Firewall
C) Proxy Firewall
D) Circuit-Level Gateway

Answer: C

Explanation:

 Packet-Filtering Firewalls examine individual packets at the network layer and allow or deny traffic based on IP addresses, ports, or protocols, but do not inspect content. Stateful Firewalls maintain session state to enforce context-aware access, but typically operate at the transport layer. Circuit-Level Gateways operate at the session layer, monitoring TCP or UDP connections without inspecting content.

Proxy Firewalls act as intermediaries between clients and servers, intercepting requests and responses to inspect traffic at the application layer. Unlike traditional packet-filtering or stateful firewalls that operate primarily at the network or transport layers, proxy firewalls analyze the actual content of the traffic, including HTTP, HTTPS, FTP, SMTP, and other application protocols. This deep inspection enables them to enforce application-specific policies, filter content, cache frequently requested data, and prevent direct connections between clients and servers, effectively isolating internal networks from external threats.
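
A heavily simplified sketch of this intermediary behavior for plain HTTP follows; the blocked-host policy, port, and error handling are illustrative assumptions, and a real proxy firewall additionally handles TLS, protocol validation, caching, and far deeper content inspection.

```python
# Minimal sketch of a filtering forward proxy for plain HTTP.
# Illustrative only: no TLS, no streaming, no upstream error handling.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BLOCKED_HOSTS = {"malware.example"}          # hypothetical policy

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "")
        if host in BLOCKED_HOSTS:            # policy check before forwarding
            self.send_error(403, "Blocked by proxy policy")
            return
        # Terminate the client's connection here and open a new one to
        # the server on its behalf; clients configured to use a proxy
        # send the absolute URL in the request line (self.path).
        with urlopen(self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```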

By operating at the application layer, proxy firewalls provide a high level of security, as they can detect malicious requests and block harmful content that might exploit vulnerabilities in web applications, email servers, or other services. For example, a proxy firewall can examine incoming web traffic to identify SQL injection attempts, cross-site scripting (XSS), or malware embedded in file downloads. Similarly, in email systems, proxy firewalls can scan attachments for viruses, block phishing attempts, and enforce content policies to prevent sensitive information from leaving the organization.

In addition to security, proxy firewalls can provide performance benefits such as caching frequently requested web content. This reduces bandwidth usage and accelerates access for users, while still enforcing security policies. They also offer enhanced visibility into traffic patterns, enabling administrators to monitor usage, detect anomalies, and generate detailed reports for auditing or compliance purposes.

While proxy firewalls provide granular control and enhanced security, they typically introduce higher latency compared to simpler firewall types, such as packet-filtering or stateful firewalls, due to the extensive processing required to inspect application-layer traffic. Organizations must balance the trade-off between security depth and performance when deploying proxy firewalls, often combining them with other types of firewalls in a layered security strategy.

The firewall type that mediates client-server communication while inspecting application-layer traffic is a proxy firewall, making it the correct answer. Proxy firewalls are particularly useful for securing web applications, email systems, and other specialized services, providing both detailed traffic inspection and enforcement of organizational policies. By controlling communication at the application level, they add a critical layer to defense-in-depth strategies, protecting sensitive assets against sophisticated attacks that target software and user behavior.

Question 45:

 Which principle ensures that sensitive information is disclosed only to those authorized to receive it?

A) Integrity
B) Confidentiality
C) Availability
D) Accountability

Answer: B

Explanation:

 Integrity ensures that data is accurate, complete, and unaltered, but does not address who can access it. Availability ensures that authorized users can access information when needed, but not that it is restricted. Accountability ensures actions are traceable to responsible parties, supporting audit and compliance, but does not prevent unauthorized disclosure.

Confidentiality ensures that sensitive information is protected from unauthorized access, disclosure, or exposure, making it a cornerstone of information security. As one of the three core principles of the CIA triad—Confidentiality, Integrity, and Availability—confidentiality emphasizes that only authorized individuals or systems can access specific data. This principle underpins a wide range of security controls and organizational policies designed to prevent data breaches, identity theft, or misuse of sensitive information.

Mechanisms that support confidentiality include access control lists (ACLs), which define permissions for users and groups; role-based access control (RBAC), which ensures individuals can access only the data necessary for their job functions; and multi-factor authentication (MFA), which adds layers of verification to prevent unauthorized login attempts. Encryption is another critical mechanism, protecting data both in transit and at rest, rendering information unreadable to those without the appropriate decryption key. Information classification and labeling systems further enhance confidentiality by categorizing data according to sensitivity levels and enforcing handling procedures consistent with regulatory or organizational standards.
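
As a small illustration of encryption protecting data at rest, the following sketch uses Fernet from the third-party pyca/cryptography package; key management, shown here as a single generated key, is the hard part in practice.

```python
# Symmetric encryption at rest with Fernet (assumes the third-party
# pyca/cryptography package: pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # the key itself must be stored securely
f = Fernet(key)

token = f.encrypt(b"SSN: 123-45-6789")   # ciphertext is safe to store
print(token)                             # unreadable without the key
print(f.decrypt(token))                  # readable only with the key
```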

Maintaining confidentiality also involves procedural and administrative controls, such as security awareness training, secure document handling, and policies governing data sharing and storage. For example, limiting the distribution of sensitive reports, monitoring privileged account activity, and securing communications channels all contribute to preserving confidentiality. Protecting this principle is essential not only to avoid regulatory penalties, such as violations of GDPR, HIPAA, or PCI DSS, but also to maintain trust with clients, partners, and employees.

The principle that ensures information is disclosed only to authorized individuals is confidentiality, making it the correct answer. Effective enforcement of confidentiality is vital for compliance, risk management, and sustaining organizational reputation, ensuring that sensitive data remains secure against both internal and external threats.