ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 7 Q91-105
Question 91:
Which type of firewall filters traffic based on packet headers, such as source and destination IP addresses or ports?
A) Stateful Inspection Firewall
B) Packet-Filtering Firewall
C) Proxy Firewall
D) Application Firewall
Answer: B
Explanation:
Stateful Inspection Firewalls maintain session information and analyze traffic context, looking beyond simple header data. They track the state of active connections, enabling them to make dynamic decisions about which packets are allowed or blocked based on the connection’s state. This provides more nuanced control compared to basic packet inspection. Proxy Firewalls act as intermediaries between clients and servers, inspecting traffic at the application layer, which allows them to filter requests based on content or protocol-specific commands. Application Firewalls, on the other hand, focus on inspecting application-specific traffic and often provide deep packet inspection, protocol validation, and protection against application-layer attacks.
Packet-filtering firewalls operate at the network layer of the OSI model and examine the header information of individual packets, such as source IP address, destination IP address, protocol type, and port numbers. They then compare these against a set of predefined rules to determine whether to allow or block the traffic. Because they do not inspect the payload of the packet, they are highly efficient and have minimal impact on network performance. However, their simplicity also means they cannot detect sophisticated attacks hidden in the payload, such as application-level exploits, malware content, or protocol misuse.
Packet-Filtering Firewalls are often deployed at the perimeter of networks to enforce basic security policies, such as blocking traffic from untrusted IP addresses or restricting access to sensitive services. They provide a foundational layer of defense in depth and are usually complemented by more advanced firewalls or intrusion detection/prevention systems to ensure comprehensive protection. In addition to filtering inbound traffic, packet filters can also control outbound traffic, helping prevent compromised systems from communicating with malicious actors. This capability is critical in preventing data exfiltration, limiting the impact of malware, and enforcing organizational policies about which services and destinations are accessible from within the network.
In operation, each packet's header attributes (source and destination addresses, port numbers, and protocol type) are compared against the rule set in order: a packet matching an allow rule is permitted to pass, and anything else is blocked. This occurs without inspecting the actual content of the packet, which distinguishes packet-filtering firewalls from more sophisticated options like stateful inspection or application-layer firewalls that analyze the full context of network traffic. Because of their simplicity, packet-filtering firewalls introduce minimal latency, making them suitable for high-speed network environments.
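The first-match rule evaluation described above can be sketched in a few lines. This is a toy illustration, not a real firewall: the rule set, networks, and ports are invented, and real packet filters operate on raw headers in the kernel or on dedicated hardware.

```python
import ipaddress

# Hypothetical rule set: each rule matches on header fields only
# (source network, protocol, destination port); the payload is never inspected.
RULES = [
    # (action, source network, protocol, destination port)
    ("allow", ipaddress.ip_network("10.0.0.0/8"), "tcp", 443),
    ("allow", ipaddress.ip_network("10.0.0.0/8"), "tcp", 25),
    ("deny",  ipaddress.ip_network("0.0.0.0/0"), "tcp", 23),  # block Telnet
]
DEFAULT_ACTION = "deny"  # implicit deny: anything unmatched is blocked

def filter_packet(src_ip: str, protocol: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' based on the first matching rule."""
    addr = ipaddress.ip_address(src_ip)
    for action, src_net, proto, port in RULES:
        if addr in src_net and proto == protocol and port == dst_port:
            return action
    return DEFAULT_ACTION

print(filter_packet("10.1.2.3", "tcp", 443))    # allow: internal HTTPS
print(filter_packet("203.0.113.7", "tcp", 23))  # deny: Telnet blocked
print(filter_packet("203.0.113.7", "tcp", 80))  # deny: no matching rule
```

Note the implicit-deny default: traffic that matches no rule is dropped, which is the conventional posture for perimeter packet filters.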
While modern security strategies tend to rely on multi-layered solutions, understanding packet-filtering firewalls is crucial for network administrators because they represent the first line of defense in controlling traffic flow. They are particularly valuable in preventing unauthorized access to network segments, stopping potentially harmful traffic at the entry point, and establishing a baseline security posture. Correct configuration and regular updating of rules are essential, as misconfigured rules can either leave systems exposed or block legitimate traffic, causing operational issues. Administrators must carefully design firewall policies to balance security and usability, ensuring that critical business functions remain uninterrupted while minimizing attack surfaces.
Moreover, packet-filtering firewalls serve as a learning tool for understanding network protocols, traffic patterns, and the principles of access control. They allow administrators to gain insights into how malicious actors may attempt to exploit open ports or unfiltered protocols. By analyzing logs and monitoring rule matches, administrators can identify anomalies and suspicious activity, which can inform the deployment of more advanced security measures, such as intrusion detection systems, VPNs, or application-layer firewalls.
The firewall type that filters traffic solely based on packet headers is a Packet-Filtering Firewall, making it the correct answer. Although packet filtering is one of the oldest firewall technologies, its continued relevance in layered security architectures demonstrates the enduring importance of controlling traffic at the most fundamental level. Properly implemented, packet-filtering firewalls are an indispensable part of a comprehensive cybersecurity strategy, forming the foundation upon which more sophisticated security mechanisms are built.
Question 92:
Which type of attack allows an attacker to intercept and alter communication between two parties?
A) Denial-of-Service
B) Man-in-the-Middle
C) SQL Injection
D) Phishing
Answer: B
Explanation:
Denial-of-Service attacks are designed to overwhelm systems, networks, or services to make them unavailable, but they do not allow attackers to manipulate the content of communications. SQL Injection exploits vulnerabilities in web applications to inject malicious queries into a database, potentially exfiltrating data or modifying stored information. Phishing relies on social engineering to trick users into revealing credentials or sensitive information, typically without directly attacking the communication channel itself.
Man-in-the-Middle (MitM) attacks, in contrast, are sophisticated threats where attackers insert themselves between two communicating parties. This allows them to intercept, monitor, and potentially modify messages in real time, all while remaining undetected. MitM attacks exploit trust in communication channels, making them particularly dangerous in environments like online banking, corporate networks, or healthcare systems where sensitive information is exchanged. Common techniques include ARP spoofing, where attackers send falsified ARP messages to associate their MAC address with the IP of a legitimate device, and DNS poisoning, which redirects users to malicious websites. SSL stripping, another MitM technique, downgrades secure HTTPS connections to unencrypted HTTP, enabling attackers to capture credentials or confidential data.
Preventing MitM attacks requires both technical and procedural measures. Encryption protocols such as TLS secure communication by making intercepted messages unreadable. Mutual authentication ensures that both parties verify each other’s identity before data is exchanged. Proper certificate management and validation reduce the risk of attackers using fraudulent certificates to intercept traffic. Network monitoring tools can also detect unusual patterns, such as ARP anomalies, DNS spoofing, or rogue access points, which may indicate an active MitM attack. In addition, implementing strong key management practices, such as frequent rotation of encryption keys and using robust cryptographic algorithms, further strengthens defenses against interception.
MitM attacks compromise both confidentiality and integrity, as attackers can eavesdrop on conversations or alter transactions without detection. This can have serious implications in environments that rely on secure data transmission, such as online banking, healthcare communications, or enterprise data exchanges. Attackers may use MitM techniques to inject malicious payloads into legitimate traffic, capture sensitive credentials, or manipulate transactions for financial gain. Because these attacks exploit trust relationships rather than system vulnerabilities directly, they can bypass traditional security measures unless encryption and authentication are properly implemented.
Beyond technical controls, organizations must incorporate procedural and operational measures. Security awareness training teaches users to recognize suspicious network setups, phishing attempts that could enable MitM, and the importance of validating secure connections before transmitting sensitive information. Segmentation of networks and strict access controls can reduce the exposure of high-value systems to potential MitM vectors. Regular audits, penetration testing, and vulnerability assessments help identify weaknesses in network configurations or certificate deployments that could be exploited by attackers.
Securing wireless networks is another critical aspect of MitM mitigation. Attackers often exploit unsecured Wi-Fi or public networks to position themselves between users and trusted servers. Using VPNs ensures that all communications are encrypted, even over potentially compromised networks, and adds a layer of authentication to protect data in transit. Similarly, enforcing HTTPS-only connections and implementing HTTP Strict Transport Security (HSTS) can prevent users from unknowingly connecting over insecure channels, further reducing the risk of interception.
Organizations may also deploy intrusion detection and prevention systems capable of analyzing traffic patterns for signs of MitM activity, such as repeated ARP changes, duplicate IP addresses, or anomalies in SSL/TLS handshakes. Integrating these systems with centralized logging and alerting mechanisms ensures that suspicious events are promptly investigated and remediated. Additionally, advanced tools such as certificate pinning, DNSSEC, and endpoint security solutions can complement these measures by adding layers of verification to prevent unauthorized interception.
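One of the detection signals mentioned above, repeated ARP remapping, can be illustrated with a tiny monitoring sketch. The observation format and addresses are invented; a real detector would read ARP traffic from the wire and apply far more context before alerting.

```python
# Minimal sketch of ARP-anomaly monitoring: flag an alert when an IP address
# suddenly maps to a different MAC, a common symptom of ARP spoofing in a
# man-in-the-middle attack.
def monitor_arp(observations):
    """observations: iterable of (ip, mac) pairs seen on the network."""
    table = {}   # last known IP -> MAC binding
    alerts = []
    for ip, mac in observations:
        if ip in table and table[ip] != mac:
            alerts.append(f"ARP change for {ip}: {table[ip]} -> {mac}")
        table[ip] = mac
    return alerts

seen = [
    ("192.168.1.1", "aa:aa:aa:aa:aa:aa"),  # legitimate gateway binding
    ("192.168.1.1", "aa:aa:aa:aa:aa:aa"),
    ("192.168.1.1", "bb:bb:bb:bb:bb:bb"),  # suspicious remap of the gateway IP
]
for alert in monitor_arp(seen):
    print(alert)
```

In practice such alerts feed a centralized logging and alerting pipeline, since a single remap can also be a legitimate hardware replacement and needs investigation rather than automatic blocking.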
The attack that intercepts and alters communication is a Man-in-the-Middle attack. By compromising trust between parties and exploiting both network and user vulnerabilities, MitM represents a significant threat to confidentiality and integrity. Proper use of encryption, authentication, network monitoring, and user education is essential to mitigate these attacks effectively, making the combination of technical and procedural controls indispensable in modern cybersecurity frameworks.
Question 93:
Which protocol is commonly used to secure email communications by encrypting messages between servers?
A) POP3
B) IMAP
C) SMTP with TLS
D) FTP
Answer: C
Explanation:
POP3 and IMAP are protocols used for retrieving email from servers, but they do not inherently encrypt data in transit unless paired with secure variants like POP3S or IMAPS. FTP is primarily a file transfer protocol and also lacks native encryption, making it unsuitable for securing email communications.
SMTP with TLS is the standard protocol for sending email securely between mail servers. Transport Layer Security (TLS) encrypts the message while in transit, preventing interception and ensuring that the email content remains confidential. TLS also provides integrity checks, helping verify that messages have not been altered during transmission. Secure SMTP ensures compliance with modern email security standards and regulatory requirements, such as GDPR, HIPAA, and PCI DSS, which mandate encryption of sensitive data in transit.
Email security involves more than encryption alone. Proper authentication mechanisms, such as SPF, DKIM, and DMARC, help prevent email spoofing, phishing, and impersonation attacks. When SMTP is used in combination with TLS, these authentication measures provide a robust framework to protect communications from eavesdropping, message tampering, and impersonation.
Organizations rely heavily on SMTP with TLS to maintain trust and confidentiality in digital communications. By encrypting emails, organizations prevent attackers from intercepting sensitive information, including financial instructions, personal data, proprietary corporate information, and legal communications. Without encryption, emails traverse multiple servers and networks in plaintext, leaving them vulnerable to eavesdropping, interception, and tampering by malicious actors. Implementing TLS ensures that emails are encrypted during transit between mail servers, protecting the integrity and confidentiality of messages against man-in-the-middle attacks and network sniffing.
TLS (Transport Layer Security) works by establishing a secure, encrypted channel between email servers before message exchange begins. When an email server supports TLS, it can negotiate an encrypted session with another TLS-capable server, ensuring that the communication cannot be easily read or modified by third parties. This process involves certificate validation, encryption key exchange, and symmetric encryption for the session, combining asymmetric and symmetric cryptography to optimize both security and performance. Organizations often rely on trusted Certificate Authorities (CAs) to validate server identities, ensuring that clients and servers can trust the endpoints they communicate with.
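The certificate validation and protocol negotiation described above are handled by the TLS library itself. As a rough sketch, Python's standard `ssl` module can build the kind of verifying client context an SMTP client would hand to `starttls()`; the settings and server name shown are illustrative, not a configuration recommendation.

```python
import ssl

# Sketch: build a TLS client context of the kind an SMTP client would use
# after issuing STARTTLS.
context = ssl.create_default_context()            # loads trusted CA certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Certificate validation and hostname checking are on by default, which is
# what defeats a MitM presenting a fraudulent certificate.
print(context.check_hostname)                     # True
print(context.verify_mode == ssl.CERT_REQUIRED)   # True

# In practice this context would be passed to smtplib, e.g. (hypothetical server):
#   with smtplib.SMTP("mail.example.com", 587) as s:
#       s.starttls(context=context)
#       s.send_message(msg)
```

Disabling `check_hostname` or downgrading `verify_mode` re-opens exactly the interception risk TLS is meant to close, which is why the verifying defaults should be left in place.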
Secure email transmission is critical for compliance with data protection regulations such as GDPR, HIPAA, and PCI DSS. Many industries handle highly sensitive or regulated data, and failure to secure email communications can result in legal liabilities, regulatory fines, and reputational damage. Organizations must not only enable TLS but also implement policies to enforce its use for both internal and external communications. Tools like MTA-STS (Mail Transfer Agent Strict Transport Security) and DANE (DNS-based Authentication of Named Entities) can further enhance email security by enforcing encryption and verifying server authenticity.
SMTP with TLS also protects organizations from attacks like email interception and tampering in transit. While TLS does not inherently prevent phishing or spoofing, it ensures that emails are encrypted between servers, mitigating the risk of attackers intercepting or modifying messages on the wire. Encrypted email communications reduce the likelihood of sensitive data leaks and, combined with sender authentication mechanisms, provide confidence that messages were not altered between servers.
The protocol that secures email communication via encryption between servers is SMTP with TLS, making it the correct answer. Its implementation represents a foundational aspect of modern cybersecurity practices, safeguarding both operational integrity and regulatory compliance. By combining encryption, authentication, and secure transmission, SMTP with TLS ensures that organizational communications remain private, trustworthy, and resilient against interception or unauthorized modification.
Question 94:
Which type of malware self-replicates to spread across systems without user interaction?
A) Virus
B) Trojan
C) Worm
D) Spyware
Answer: C
Explanation:
Viruses attach themselves to executable files or documents and require user interaction to propagate. Trojans are malicious programs disguised as legitimate applications and rely on social engineering to trick users into executing them. Spyware monitors and collects information from a system covertly, but generally does not replicate or spread on its own.
Worms are unique because they are fully self-replicating. They exploit vulnerabilities in operating systems, applications, or network configurations to spread automatically from one system to another without any user intervention. Worms can propagate rapidly across networks, often consuming bandwidth, slowing network performance, and causing significant operational disruption. Some worms carry payloads that delete files, install backdoors, or deploy ransomware, magnifying their destructive potential.
Mitigating worm infections involves a combination of patch management, network segmentation, intrusion detection/prevention systems, and firewall controls. System administrators must apply security patches promptly, as worms often exploit known vulnerabilities to propagate automatically across networks. Unpatched systems provide an easy entry point for worm infections, which can quickly escalate from a single compromised device to widespread network disruption. Disabling unnecessary services and closing unused ports reduces potential attack surfaces, preventing worms from leveraging network or application weaknesses. Network monitoring tools and intrusion detection/prevention systems can detect abnormal traffic patterns, such as sudden spikes in connection attempts or unusual scanning activity, which may indicate a worm actively spreading across systems.
Security education also plays a critical role in worm prevention. Although worms can propagate without direct user action, many still use social engineering vectors, such as phishing emails, malicious downloads, or compromised websites, to gain initial access. Teaching users to recognize suspicious attachments, links, or behaviors helps prevent initial infections that could lead to broader network compromise. Endpoint protection solutions, including antivirus and antimalware software with real-time scanning, can detect and quarantine worm activity, limiting its ability to spread.
The self-replicating nature of worms makes them particularly dangerous in enterprise environments. Unlike viruses, which require user action to propagate, worms can autonomously scan networks, exploit vulnerabilities, and replicate across connected devices. This rapid propagation can overwhelm network bandwidth, disrupt critical services, and create cascading failures across systems. High-profile worm outbreaks such as Code Red, Slammer, Conficker, and WannaCry have illustrated the potential severity of worm attacks, causing billions of dollars in damages, service outages, and data loss worldwide. These examples highlight the importance of proactive defenses, timely patching, and network segmentation to contain potential outbreaks.
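The autonomous propagation described above, and why patching and segmentation contain it, can be shown with a toy simulation. The hosts, links, and patched set are entirely made up; the point is only that a worm reaches every reachable unpatched host with no user action.

```python
from collections import deque

# Toy simulation: a worm spreads from one infected host to every reachable
# *unpatched* neighbor. Topology and host names are invented for illustration.
network = {                 # adjacency list of reachable hosts
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
patched = {"D"}             # patched hosts cannot be infected

def spread(start):
    """Breadth-first worm propagation from an initially compromised host."""
    infected = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbor in network[host]:
            if neighbor not in infected and neighbor not in patched:
                infected.add(neighbor)
                queue.append(neighbor)
    return infected

print(sorted(spread("A")))  # patching D also shields E, its only neighbor
```

The same structure explains network segmentation: removing links between segments, like patching host D here, cuts the propagation paths a worm depends on.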
Understanding worm behavior is essential for cybersecurity professionals responsible for maintaining network integrity and system availability. Analyzing attack vectors, propagation methods, and payloads enables the design of targeted defense strategies. Firewalls can block worm-specific ports or traffic signatures, intrusion detection systems can alert administrators to suspicious scanning, and network segmentation can prevent worms from moving laterally between critical systems. Backup and disaster recovery plans also help ensure business continuity in case of a successful worm infection, allowing compromised systems to be restored to a clean state with minimal operational impact.
The malware type that self-replicates across systems without user action is Worm, making it the correct answer. By combining technical controls, user education, monitoring, and proactive system maintenance, organizations can significantly reduce the risk and impact of worm infections, ensuring resilient and secure network operations. Properly addressing worm threats is a cornerstone of a comprehensive cybersecurity strategy, emphasizing the need for continuous vigilance, patch management, and layered defenses.
Question 95:
Which type of access control model uses centrally managed security labels to determine access rights?
A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control
Answer: B
Explanation:
Discretionary Access Control (DAC) allows owners of resources to assign permissions to other users at their discretion, which can lead to inconsistencies or unauthorized access. Role-Based Access Control (RBAC) assigns access based on predefined roles and associated privileges, making it flexible but dependent on proper role definitions. Rule-Based Access Control enforces access based on system-defined rules, such as time-of-day restrictions or network location, rather than labels.
Mandatory Access Control (MAC) relies on security labels that are centrally managed and assigned to both subjects and objects. These labels represent classification levels, such as Confidential, Secret, or Top Secret, and access decisions are based strictly on these labels. Users cannot override or change the labels, ensuring consistent enforcement of security policies. MAC is widely used in government, military, and highly regulated industries, where information classification and strict control are essential to prevent unauthorized disclosure.
The MAC model ensures that access decisions are consistent across an organization and helps reduce the risk of insider threats by removing discretionary control from individual users. It is particularly effective in environments where sensitive or classified information must be rigorously protected, such as government agencies, military systems, or organizations handling highly confidential corporate data. By assigning security labels to both subjects (users or processes) and objects (files, databases, or resources), MAC enforces policies that strictly regulate who can access what, based on the classification of the information and the clearance of the user. This prevents unauthorized access even if a user attempts to bypass normal permissions.
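The label-dominance comparison at the heart of MAC can be sketched in a few lines. The numeric levels mirror the classification hierarchy named above; the rule shown is the Bell-LaPadula "no read up" simple-security property, used here as an illustrative policy.

```python
# Sketch of a MAC read decision using centrally managed labels.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """Read is allowed only when the subject's clearance dominates
    (is at least as high as) the object's classification label."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

print(can_read("Secret", "Confidential"))  # True: clearance dominates label
print(can_read("Confidential", "Secret"))  # False: no read up
```

Crucially, neither the subject nor the object owner can alter the `LEVELS` assignments: labels and clearances are set centrally, which is what distinguishes MAC from discretionary models.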
By combining MAC with auditing and monitoring, organizations can achieve both security and accountability. Audit logs provide a record of access attempts, successful or denied, enabling administrators to detect anomalies, investigate incidents, and demonstrate compliance with regulatory frameworks such as HIPAA, FISMA, or ISO 27001. Regular reviews of labels, access policies, and user clearances ensure that the system continues to protect sensitive information as roles and requirements evolve.
Mandatory Access Control reduces human error because users cannot override permissions or arbitrarily grant access, unlike discretionary models. It is often paired with other access control models, like role-based or rule-based controls, to create a layered and flexible security strategy. The access control model that relies on centrally managed labels for all access decisions is Mandatory Access Control, making it the correct answer.
Question 96:
Which technique reduces the effectiveness of password attacks by adding random data to passwords before hashing?
A) Salting
B) Hashing
C) Encryption
D) Tokenization
Answer: A
Explanation:
Hashing converts passwords into fixed-length digests, which are irreversible but deterministic, meaning the same input always produces the same hash. Without additional randomness, attackers can use precomputed hash tables, or rainbow tables, to quickly reverse known hashes into plaintext passwords. Encryption protects data by making it reversible with a key, but does not specifically defend against hash-based attacks. Tokenization replaces sensitive data with non-sensitive equivalents but does not modify password hashes.
Salting enhances password security by adding a unique random value to each password before hashing. This ensures that even if two users have the same password, their stored hashes will differ. Salting also protects against the use of rainbow tables, as attackers would need to recompute hashes for each salt individually, significantly increasing the computational effort required. Salts are stored alongside the hashed password in the database, allowing systems to validate passwords without exposing the original password.
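The salt-then-hash flow described above can be sketched with the standard library's PBKDF2 (a salted, iterated hash). The iteration count and salt length are illustrative, not a parameter recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Return (salt, digest); the salt is stored alongside the hash."""
    salt = salt or os.urandom(16)      # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

s1, h1 = hash_password("hunter2")
s2, h2 = hash_password("hunter2")
print(h1 != h2)                              # True: same password, different salts
print(verify_password("hunter2", s1, h1))    # True
print(verify_password("wrong", s1, h1))      # False
```

Because each user's salt is different, a rainbow table precomputed for unsalted hashes is useless, and the attacker must run the full iterated hash per guess per account.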
The use of salting, combined with deliberately slow, adaptive hashing algorithms such as bcrypt, scrypt, or Argon2 (fast general-purpose hashes like SHA-256 should not be used on their own for password storage), is considered a best practice in modern authentication systems. It mitigates risks from brute-force and dictionary attacks, ensures password hash uniqueness, and enhances overall security posture. Organizations that fail to implement salting leave user passwords more vulnerable to compromise, especially in large-scale breaches.
The technique that adds randomness to passwords to prevent hash attacks is salting, making it the correct answer.
Question 97:
Which principle ensures that no single individual can perform all critical steps in a process?
A) Need-to-Know
B) Separation of Duties
C) Least Privilege
D) Role-Based Access Control
Answer: B
Explanation:
Need-to-Know limits access to information based on necessity, ensuring users only have visibility into data required for their role. Least Privilege restricts permissions to the minimum necessary for task completion, reducing the potential impact of compromised accounts or human error. Role-Based Access Control assigns permissions based on job functions and role assignments, streamlining access management and reducing administrative overhead.
Separation of Duties (SoD) divides critical tasks among multiple individuals to prevent fraud, errors, or unauthorized actions. By distributing responsibilities, organizations ensure that no single individual has complete control over a critical process, such as approving financial transactions, deploying code to production, or accessing sensitive administrative systems. SoD enhances accountability and internal controls, making it more difficult for malicious insiders to perform unauthorized actions undetected.
Separation of Duties is widely adopted in finance, IT, and operational processes, aligning with regulatory and compliance frameworks such as SOX, PCI DSS, and ISO 27001. Implementing SoD involves identifying critical tasks, determining which activities can be separated, and enforcing technical and procedural controls to ensure compliance. Automation, workflow management systems, and auditing tools help maintain proper separation and monitor for violations.
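A technical SoD control of the kind enforced by workflow systems can be sketched as a simple guard: the same identity may never both request and approve a transaction. The payment scenario and names are hypothetical.

```python
# Sketch: a workflow guard enforcing Separation of Duties on a hypothetical
# payment-approval step.
def approve_payment(requester: str, approver: str, amount: float) -> str:
    if requester == approver:
        raise PermissionError("SoD violation: requester cannot self-approve")
    return f"{approver} approved {amount:.2f} requested by {requester}"

print(approve_payment("alice", "bob", 1200.0))
try:
    approve_payment("alice", "alice", 1200.0)
except PermissionError as err:
    print(err)   # self-approval is rejected
```

Real implementations enforce the same invariant at the identity layer (distinct roles that cannot be held simultaneously) and log every attempt, so violations are both blocked and auditable.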
By enforcing SoD, organizations reduce risk, enhance operational integrity, and maintain trust in their internal processes. It is a foundational principle in secure and compliant system design.
The principle that prevents a single individual from controlling all critical steps is Separation of Duties, making it the correct answer.
Question 98:
Which type of cryptography uses a pair of mathematically related keys, one for encryption and one for decryption?
A) Symmetric Encryption
B) Asymmetric Encryption
C) Hashing
D) Tokenization
Answer: B
Explanation:
Asymmetric encryption, also known as public-key cryptography, relies on a key pair: a public key for encryption and a private key for decryption. This is fundamentally different from symmetric encryption, where the same key is used for both processes. The mathematical relationship between the public and private keys ensures that only the private key can decrypt data encrypted with its corresponding public key. This property is essential for secure communications over untrusted networks because it allows parties to exchange information without having to share a secret key in advance.
Asymmetric encryption is widely used in secure communications protocols such as SSL/TLS, which underpin HTTPS websites. When a user connects to a secure website, the server shares its public key, which the client uses to encrypt session information. Only the server’s private key can decrypt this information, ensuring confidentiality. Asymmetric encryption also enables digital signatures, which authenticate the origin of data and ensure integrity. A sender can sign a message using their private key, and recipients can verify it with the sender’s public key. This is critical for email security, software distribution, and blockchain systems, as it guarantees that messages or transactions have not been tampered with.
In addition to encryption and digital signatures, asymmetric cryptography is also used in key exchange protocols. Protocols such as Diffie-Hellman allow two parties to establish a shared secret over an insecure channel without ever transmitting the secret itself. This enables secure symmetric encryption to be applied afterward. While asymmetric encryption provides strong security benefits, it is computationally more intensive than symmetric encryption. Therefore, many systems use a hybrid approach: asymmetric encryption for securely exchanging a symmetric session key, and symmetric encryption for fast bulk data transmission.
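The Diffie-Hellman exchange mentioned above can be shown with deliberately tiny toy numbers (insecure, for illustration only; real deployments use large primes or elliptic curves). Each party combines its own secret with the other's public value and arrives at the same shared key, which is never transmitted.

```python
# Toy Diffie-Hellman key exchange with tiny, insecure example values.
p, g = 23, 5               # public modulus and generator (toy parameters)

a_private = 6              # Alice's secret, never sent
b_private = 15             # Bob's secret, never sent

A = pow(g, a_private, p)   # Alice sends A over the insecure channel
B = pow(g, b_private, p)   # Bob sends B over the insecure channel

alice_secret = pow(B, a_private, p)   # Alice computes the shared secret
bob_secret = pow(A, b_private, p)     # Bob computes the same value

print(alice_secret == bob_secret)  # True: identical secret, never transmitted
print(alice_secret)
```

The derived secret would then seed a symmetric cipher for bulk traffic, which is the hybrid pattern described above.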
Key management is a critical aspect of asymmetric cryptography. Private keys must remain confidential, as their compromise undermines the security of the entire system. Public keys, on the other hand, can be widely distributed without risk. Public Key Infrastructure (PKI) provides a framework for issuing, managing, and revoking keys and digital certificates to ensure trust in the system. PKI supports applications like secure email, VPN authentication, code signing, and identity verification.
The cryptography type that relies on key pairs is asymmetric encryption. Its use of mathematically linked keys, combined with support for encryption, digital signatures, and key exchange, makes it the cornerstone of modern secure communication systems and digital trust frameworks. Without asymmetric cryptography, the internet and many secure services we rely on daily would be far less safe.
Question 99:
Which disaster recovery strategy involves relocating operations to a fully equipped alternate site immediately?
A) Cold Site
B) Hot Site
C) Warm Site
D) Backup Site
Answer: B
Explanation:
A hot site is a disaster recovery facility that is fully configured and ready to resume operations immediately. It contains all the necessary hardware, software, networking equipment, and up-to-date data to allow business continuity with minimal downtime. Hot sites are ideal for organizations with high availability requirements, such as financial institutions, healthcare providers, or large e-commerce businesses.
Unlike cold sites, which provide only physical space and utilities, or warm sites, which require some setup and configuration before going live, hot sites ensure that critical business functions can continue without significant interruption. Data replication is often continuous or near-real-time, meaning that any changes made in the primary site are mirrored at the hot site. This reduces the risk of data loss during disasters such as natural events, cyberattacks, or hardware failures.
Hot sites are typically part of a broader disaster recovery plan, which outlines how to respond to various types of incidents. They may be geographically distant from the primary site to mitigate risks from regional disasters. Organizations using hot sites must regularly test failover procedures to ensure seamless operation during an actual emergency. Hot sites can be costly due to the need for duplicate infrastructure and ongoing maintenance, but the cost is justified for businesses that cannot afford extended downtime.
In contrast, cold sites are less expensive but require substantial time to set up and restore operations, potentially leading to extended service interruptions. Warm sites represent a compromise between cost and recovery time, with partial infrastructure and data that needs updating before full operation. Backup sites primarily serve as data storage locations and do not provide operational capabilities on their own.
The disaster recovery strategy that allows immediate relocation to a fully equipped site is therefore a hot site. Its combination of ready-to-use infrastructure, real-time data synchronization, and operational continuity ensures businesses can maintain critical functions with minimal downtime, making it an essential component for high-availability organizations.
Question 100:
Which attack manipulates users into revealing sensitive information through deception?
A) Phishing
B) Man-in-the-Middle
C) Brute-Force
D) SQL Injection
Answer: A
Explanation:
Phishing is a social engineering attack that deceives users into voluntarily revealing sensitive information such as passwords, credit card numbers, or personal details. Unlike technical attacks, phishing exploits human psychology rather than system vulnerabilities. Common phishing methods include fraudulent emails, fake websites, text messages, and phone calls that impersonate trusted entities like banks, government agencies, or internal IT departments.
Phishing attacks often create a sense of urgency or fear to compel users to act quickly without verifying legitimacy. For example, a phishing email may warn of unauthorized access to an account and urge the user to log in immediately through a malicious link. Spear-phishing, a targeted form of phishing, uses personal information about the victim to increase credibility, often targeting executives or employees with access to sensitive systems.
Organizations combat phishing through user education, multi-factor authentication, email filtering, and monitoring for suspicious activity. Security awareness training teaches employees to recognize signs of phishing, such as mismatched URLs, unexpected attachments, or generic greetings. Multi-factor authentication adds an extra layer of defense, as attackers cannot gain access using only stolen credentials. Advanced phishing detection tools can analyze email content, URLs, and sender patterns to flag potential attacks before they reach end users.
The human factor remains the weakest link in cybersecurity, making phishing one of the most successful and prevalent attack methods. Phishing can lead to identity theft, financial loss, ransomware deployment, and unauthorized system access. Its effectiveness has prompted organizations to adopt comprehensive strategies combining technical defenses, employee awareness, and incident response planning.
The attack that relies on deception to obtain sensitive information is phishing. By manipulating human trust, phishing bypasses technical security measures and underscores the importance of user education and multi-layered defenses in modern cybersecurity frameworks.
Question 101:
Which security principle minimizes exposure by disabling unnecessary services and functions?
A) Least Privilege
B) Defense in Depth
C) Least Functionality
D) Open Design
Answer: C
Explanation:
The principle of least functionality focuses on reducing the attack surface by disabling unnecessary services, applications, and functions. By ensuring that systems run only essential services, organizations minimize potential entry points for attackers. Every enabled service, port, or feature introduces a potential vulnerability that could be exploited.
Implementing least functionality involves identifying the operations required for business processes and disabling or uninstalling everything else. For example, in operating systems, administrators might disable unused network protocols, services, or scheduled tasks. In network devices, unused ports and services should be closed. In applications, features not required for operations should be turned off or restricted. This principle also applies to mobile devices, cloud platforms, and IoT devices, which often come with default features that may not be necessary.
Least functionality complements other security principles. While least privilege restricts user access rights, least functionality restricts the system itself. It enhances overall security posture, simplifies monitoring, and reduces the number of vulnerabilities to manage. Regular audits, patch management, and configuration baselines support the enforcement of this principle.
By minimizing unnecessary features, organizations can prevent attackers from leveraging unused services, reduce potential misconfigurations, and strengthen defenses against malware, remote exploits, and insider threats. This approach is particularly important for critical infrastructure, financial systems, and environments requiring regulatory compliance.
The principle that reduces exposure by disabling unnecessary features is least functionality. Its focus on minimizing the attack surface ensures a hardened, manageable, and secure system environment, contributing significantly to an organization’s overall risk mitigation strategy.
Question 102:
Which authentication factor relies on something the user possesses?
A) Password
B) Security Token
C) Biometric
D) Knowledge Question
Answer: B
Explanation:
Security tokens are physical or digital devices that authenticate a user based on something they possess, rather than what they know or who they are. These tokens generate one-time passwords (OTPs), store cryptographic credentials, or provide challenge-response capabilities. Examples include smart cards, hardware tokens, USB security keys, and mobile app–based tokens. Possession-based authentication adds a crucial layer to security because it requires the user to have a physical object in addition to knowledge-based or biometric authentication factors.
Unlike passwords, which can be guessed, stolen, or reused, possession-based factors provide an extra barrier against unauthorized access. Even if an attacker knows a user’s password, they cannot log in without the token. Similarly, tokens protect against phishing attacks, replay attacks, and brute-force attacks when combined with multi-factor authentication. Hardware tokens may display a new code every 30–60 seconds, synchronized with the server, while smart cards may store digital certificates or keys that facilitate secure logins to networks, computers, or VPNs.
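The rotating codes described above are typically generated with the standard HOTP/TOTP algorithms (RFC 4226 / RFC 6238): an HMAC over a shared secret and a counter, where TOTP derives the counter from the current 30-second time window. A minimal standard-library sketch:

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the big-endian counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low 4 bits pick the offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter taken from the current time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

Because the server computes the same code from the same shared secret and clock, a token proves possession without ever transmitting the secret itself.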
Implementing possession-based authentication enhances security in corporate environments, financial institutions, and critical infrastructure. It is particularly important for remote access, where users connect over potentially insecure networks. Mobile device–based tokens, like authenticator apps, offer convenience while maintaining security. Some tokens also support biometrics, creating a hybrid authentication method that strengthens protection.
Managing security tokens requires careful policies. Lost or stolen tokens must be quickly revoked to prevent misuse. Organizations may maintain token inventories, enforce periodic replacement, and train users on secure handling. Token-based authentication complements other factors in multi-factor authentication (MFA) systems, providing a layered defense approach that significantly reduces the risk of unauthorized access.
The authentication factor that relies on something the user possesses is the security token. By combining physical possession with other forms of authentication, tokens create a robust security mechanism that mitigates risks from compromised credentials and strengthens overall access control.
Question 103:
Which type of attack attempts to overload a network or server to disrupt services?
A) Denial-of-Service
B) Cross-Site Scripting
C) SQL Injection
D) Malware
Answer: A
Explanation:
Denial-of-Service (DoS) attacks are designed to make systems, networks, or services unavailable to legitimate users by overwhelming resources with excessive traffic. The attack can target bandwidth, server processing power, or application capacity. When distributed across multiple compromised systems, the attack is referred to as a Distributed Denial-of-Service (DDoS) attack. This amplifies the impact and makes mitigation more difficult.
DoS attacks exploit weaknesses in network protocols, application logic, or server configurations. Techniques include sending massive volumes of requests, exploiting protocol vulnerabilities, or flooding network devices with malformed packets. DDoS attacks can be financially and operationally damaging, especially for e-commerce websites, online services, and critical infrastructure systems that rely on continuous availability. Downtime leads to revenue loss, reputational damage, and potentially regulatory penalties if services are disrupted for extended periods.
Organizations defend against DoS and DDoS attacks through various strategies. Traffic filtering, rate limiting, network segmentation, and firewalls help identify and block malicious traffic. Content Delivery Networks (CDNs) and cloud-based mitigation services distribute traffic and absorb attack loads. Intrusion prevention systems (IPS) can detect patterns indicative of flooding or resource exhaustion. Redundancy and failover planning ensure that critical services remain operational even if one component is targeted.
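One of the defenses listed above, rate limiting, is commonly implemented with a token-bucket algorithm: each client earns tokens at a steady rate up to a cap, and a request is served only if a token is available. A minimal sketch (illustrative only; production rate limiters are usually distributed and per-client):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts up to `capacity`
    while enforcing a long-run average of `rate` requests per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        # Refill based on elapsed time, never exceeding capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A DDoS-scale flood would exhaust an attacker's bucket almost immediately while legitimate, low-rate clients are unaffected, which is why per-source rate limiting is a standard first line of flood mitigation.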
DoS attacks are often combined with other malicious activities, such as exploiting the temporary unavailability to inject malware, steal data, or execute ransomware attacks. Attackers may use botnets composed of thousands or millions of compromised devices to generate attack traffic. This makes DoS attacks a persistent and evolving threat in cybersecurity, requiring ongoing monitoring and adaptive mitigation techniques.
The type of attack that overwhelms a network or server to deny service is Denial-of-Service. Its direct effect on system availability highlights the importance of network resilience, monitoring, and incident response planning to maintain operational continuity.
Question 104:
Which access control model bases permissions on job functions and responsibilities?
A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control
Answer: C
Explanation:
Role-Based Access Control (RBAC) is an access control model that assigns permissions based on the roles users occupy within an organization. Unlike discretionary access control, which allows users to determine permissions for resources they own, or mandatory access control, which enforces centralized policies based on security labels, RBAC aligns access with job functions and organizational responsibilities.
RBAC simplifies management in complex environments. Instead of assigning permissions to each user, administrators define roles that represent typical job functions, such as HR manager, network administrator, or accountant. Users inherit permissions from their assigned roles, ensuring consistent and efficient enforcement of security policies. This approach supports the principle of least privilege because users only receive access necessary for their role.
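The role-to-permission indirection described above can be shown in a few lines. The roles, users, and permission names below are hypothetical examples; real systems store these mappings in a directory or IAM service.

```python
# Roles map to sets of permissions; users map to sets of roles.
ROLE_PERMISSIONS = {
    "hr_manager":  {"read_employee_records", "edit_employee_records"},
    "accountant":  {"read_ledger", "post_journal_entry"},
    "auditor":     {"read_ledger", "read_employee_records"},
}
USER_ROLES = {
    "alice": {"hr_manager"},
    "bob":   {"accountant", "auditor"},
}

def is_authorized(user: str, permission: str) -> bool:
    """A user holds a permission if ANY of their assigned roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Note that access decisions never reference individual users directly: changing what accountants may do means editing one role entry, not every accountant's account, which is the administrative win RBAC provides.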
RBAC is widely used in enterprise IT environments, cloud services, and compliance-focused systems. It helps meet regulatory requirements by controlling access to sensitive data based on job responsibilities. Role hierarchies can model organizational structures, allowing senior roles to inherit permissions from subordinate roles. RBAC also supports separation of duties, reducing the risk of fraud or unauthorized actions by ensuring critical tasks require multiple individuals with different roles.
Implementation of RBAC requires role definition, mapping of permissions to roles, assignment of users to roles, and ongoing review to accommodate organizational changes. Auditing and monitoring are essential to detect role misconfigurations or privilege creep. RBAC can be combined with rule-based or attribute-based controls to increase flexibility and adapt to dynamic access requirements in large organizations.
The access control model that assigns permissions according to job responsibilities is Role-Based Access Control. Its role-centric approach enables efficient, secure, and compliant access management in modern IT infrastructures.
Question 105:
Which type of malware disguises itself as legitimate software to trick users into executing it?
A) Worm
B) Trojan
C) Virus
D) Rootkit
Answer: B
Explanation:
A Trojan, or Trojan horse, is a type of malware that masquerades as a legitimate program or file to deceive users into executing it. Unlike worms, which self-replicate, or viruses, which attach to files, Trojans rely on user action for infection. They exploit trust, curiosity, or urgency, often delivered through phishing campaigns, malicious downloads, pirated software, or email attachments.
Once executed, a Trojan can perform a wide variety of malicious actions. These may include stealing credentials, logging keystrokes, installing backdoors, deploying ransomware, or creating botnet nodes. Because the malware is hidden within seemingly benign software, traditional antivirus detection can be less effective unless heuristics or behavior analysis are applied. Trojans frequently serve as a gateway for additional attacks, making them versatile tools for cybercriminals.
Preventing Trojans requires a combination of user awareness and technical safeguards. Users must be educated to download software only from trusted sources, scrutinize email attachments, and avoid executing unknown programs. Endpoint protection platforms and advanced malware detection systems can identify suspicious behaviors, quarantine infected files, and prevent system compromise. Regular patching of operating systems and applications also reduces the risk of exploiting vulnerabilities that Trojans might use.
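One concrete safeguard implied above, downloading only from trusted sources, is often enforced by verifying a published checksum before executing a file, so a trojanized download with the right name but wrong contents is caught. A minimal sketch using only the standard library:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so large downloads don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Compare the file's digest against the vendor-published value.

    hmac.compare_digest gives a constant-time comparison; the expected
    hash would normally come from the vendor's site over HTTPS or a
    signed release manifest."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())
```

Checksums only detect tampering in transit or on mirrors; if the vendor's own site is compromised, code signing with a trusted key is the stronger control.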
Trojans remain one of the most prevalent malware types because of their reliance on social engineering. They demonstrate how human trust and interaction with digital systems can be exploited to bypass traditional security controls. Organizations must integrate technical defenses with security training and monitoring to defend against Trojan-based attacks effectively.
The malware type that masquerades as legitimate software is a Trojan. Its use of deception to gain execution makes it a significant threat, highlighting the importance of user vigilance, endpoint protection, and layered security strategies.