CompTIA SY0-701 CompTIA Security+ Exam Dumps and Practice Test Questions Set 6 Q76-90

Question 76

Which type of attack involves forging the sender information of email messages to manipulate communications?

A) Email spoofing
B) Phishing
C) SQL injection
D) Brute-force

Answer:  A) Email spoofing

Explanation:

Email spoofing is an attack in which the attacker sends emails with a forged sender address, making it appear that the message comes from a trusted source. The primary goal is to deceive recipients into taking certain actions, such as clicking on malicious links, sharing sensitive information, or authorizing transactions. Security+ candidates must understand email spoofing because it exploits trust and can be a precursor to other attacks, including phishing, business email compromise (BEC), or malware distribution. Spoofing attacks manipulate the email header to create the illusion of authenticity, often leveraging social engineering tactics to increase success rates. Detection can be challenging because it exploits the inherent weaknesses in email protocols like SMTP, which do not authenticate sender addresses by default. Mitigation strategies include implementing email authentication protocols such as SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting & Conformance). These protocols verify that emails are legitimately sent from the claimed domain, reducing the likelihood of successful spoofing.

Phishing, the second choice, also uses email to deceive users but focuses on tricking the recipient into providing credentials, clicking links, or opening attachments. While phishing can leverage spoofed emails, phishing itself is the act of social engineering rather than manipulating email headers.

SQL injection, the third choice, targets database queries by injecting malicious SQL code. SQL injection does not involve email or the alteration of communications, making it unrelated to email spoofing.

Brute-force, the fourth choice, systematically guesses passwords or keys to gain unauthorized access. Brute-force attacks target authentication mechanisms rather than manipulating communications between users or systems.

The correct answer is email spoofing because it specifically involves the manipulation of email headers to deceive recipients and misrepresent the origin of a communication. Security+ candidates should understand the technical and social aspects of email spoofing, the methods attackers use, the risks it introduces, and the authentication protocols designed to mitigate it. Email spoofing demonstrates how attackers exploit trust and highlights the importance of layered defenses, user awareness, and monitoring for unusual email patterns. Implementing SPF, DKIM, and DMARC, combined with email filtering and user training, forms a comprehensive defense against spoofing attacks. Email spoofing also emphasizes that threats are not only technical but often social, requiring vigilance, verification, and proactive policies to maintain integrity in organizational communications.
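As a hedged illustration, the Python sketch below (all addresses fabricated) flags one naive spoofing signal: a mismatch between the domain in the visible From header and the domain in the Return-Path envelope sender. Real verification relies on SPF, DKIM, and DMARC records published in DNS, not on this simple heuristic.

```python
from email import message_from_string
from email.utils import parseaddr

def spoof_heuristic(raw_message: str) -> bool:
    """Flag a message when the visible From domain differs from the
    Return-Path (envelope sender) domain -- one naive spoofing signal.
    Production systems verify SPF, DKIM, and DMARC instead."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    return_domain = return_addr.rpartition("@")[2].lower()
    return bool(from_domain and return_domain and from_domain != return_domain)

raw = (
    "From: CEO <ceo@trusted-corp.example>\r\n"
    "Return-Path: <bulk@mailer.attacker.example>\r\n"
    "Subject: Urgent wire transfer\r\n"
    "\r\nPlease process immediately."
)
print(spoof_heuristic(raw))  # mismatched domains -> True
```

Note that a mismatch alone is not proof of spoofing (mailing lists and bulk senders legitimately diverge here), which is exactly why standardized domain authentication exists.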

Question 77

Which type of attack attempts to manipulate URLs to gain unauthorized access to a website or web application?

A) URL manipulation
B) Phishing
C) Rootkit
D) Denial of Service

Answer:  A) URL manipulation

Explanation:

URL manipulation is an attack technique in which attackers modify parameters within a URL to gain unauthorized access, escalate privileges, or retrieve sensitive information from a web application. Attackers exploit insufficient input validation, predictable URL parameters, or insecure session handling to bypass security controls. Security+ candidates must understand URL manipulation because it demonstrates vulnerabilities in web application design and highlights the importance of secure coding, input validation, and session management. URL manipulation can allow attackers to access unauthorized resources, modify data, or perform actions they are not authorized to perform. Common examples include altering user IDs, toggling query string parameters, or modifying GET requests to access another user’s account or private information. Prevention techniques include implementing server-side input validation, session management, role-based access controls, and avoiding predictable URL parameters or protecting them with encryption or indirect references. Web application firewalls (WAFs) and secure application testing are also critical in detecting and preventing URL manipulation attacks.

Phishing, the second choice, targets human behavior to obtain credentials or sensitive information, but it does not involve altering URLs directly to exploit web applications. Phishing may be used in conjunction with malicious URLs, but the act of modifying URLs is separate.

Rootkits, the third choice, provide hidden persistent access on compromised systems. They target endpoints rather than web application functionality and do not manipulate URLs to gain unauthorized access.

Denial of Service attacks, the fourth choice, aim to overwhelm systems with traffic to disrupt availability. DoS does not involve URL modification or the exploitation of web application parameters to access unauthorized content.

The correct answer is URL manipulation because it specifically targets web application vulnerabilities through modified URL parameters. Security+ candidates should understand techniques, attack vectors, detection, and mitigation strategies to ensure robust web application security. URL manipulation highlights the need for server-side validation, session security, and consistent access controls to prevent unauthorized data exposure or manipulation. This type of attack emphasizes that security must be implemented at both the application and network layers and demonstrates the potential consequences of overlooked web development vulnerabilities. By understanding URL manipulation, candidates learn how attackers exploit predictable or unvalidated inputs to bypass security, underscoring the importance of secure development practices, testing, and layered application defenses.
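The server-side authorization check described above can be sketched in Python. This is a minimal illustration with fabricated records and a hypothetical `fetch_order` helper: it models a request such as `GET /orders?order_id=1002`, where an attacker edits the `order_id` value in the URL hoping to read another user's data (an insecure direct object reference).

```python
# Fabricated data store: order ID -> record with an owner.
ORDERS = {
    1001: {"owner": "alice", "item": "laptop"},
    1002: {"owner": "bob", "item": "badge printer"},
}

def fetch_order(session_user: str, order_id: int) -> dict:
    """Look the record up, then verify ownership on the server.
    Trusting the URL parameter alone would let any authenticated
    user walk through other users' order IDs."""
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != session_user:
        raise PermissionError("access denied")
    return order

print(fetch_order("alice", 1001)["item"])   # alice's own order: laptop
try:
    fetch_order("alice", 1002)              # tampered URL parameter
except PermissionError as exc:
    print(exc)                              # access denied
```

The key design choice is that authorization is derived from the server-side session, never from anything the client can edit in the URL.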

Question 78

Which attack method captures user credentials by recording keyboard input?

A) Keylogger
B) Phishing
C) Ransomware
D) Worm

Answer:  A) Keylogger

Explanation:

A keylogger is a type of malware or monitoring tool that records keystrokes on a device to capture sensitive information such as usernames, passwords, credit card numbers, and personal data. Security+ candidates must understand keyloggers because they directly compromise confidentiality and can serve as a stepping stone for larger attacks, including identity theft, unauthorized access, and financial fraud. Keyloggers can be hardware-based, inserted between keyboards and systems, or software-based, installed on operating systems to run stealthily in the background. They are often delivered through phishing campaigns, malicious downloads, or exploit kits. Mitigation strategies include deploying antivirus and anti-malware solutions, using endpoint protection and monitoring tools, employing multi-factor authentication, ensuring proper access controls, and educating users about safe computing practices. Keyloggers highlight the importance of endpoint security, user awareness, and layered defenses because they operate at the interface where human interaction meets technology, making them difficult to detect without specialized monitoring.

Phishing, the second choice, is a social engineering attack aimed at tricking users into providing credentials. While phishing can indirectly deliver keyloggers, it does not inherently record keystrokes or monitor device activity.

Ransomware, the third choice, encrypts files or locks systems for financial extortion. Ransomware affects availability and data access rather than capturing user input directly.

Worms, the fourth choice, self-replicate to propagate malware across networks. Worms can distribute keyloggers, but do not inherently record keystrokes.

The correct answer is a keylogger because it specifically captures user input to obtain sensitive information. Security+ candidates should understand delivery methods, detection techniques, preventive measures, and monitoring strategies. Keyloggers demonstrate the intersection of human behavior, endpoint vulnerabilities, and malware delivery, emphasizing the need for comprehensive security strategies that combine technical controls, user education, and vigilance. Awareness of keyloggers helps candidates understand how attackers gain credentials and sensitive data stealthily, highlighting the importance of multi-layered protection and proactive monitoring. Keyloggers also illustrate the broader concept that protecting data requires securing both technology and human interactions, reinforcing the principles of confidentiality, integrity, and layered cybersecurity defense.
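One mitigation mentioned above, multi-factor authentication, works against keyloggers because a captured one-time code expires almost immediately. The sketch below is a minimal TOTP (RFC 6238) implementation using only the standard library, shown with the RFC's published test secret; a production deployment would use a vetted authenticator library rather than hand-rolled code.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 sketch: the code rotates every `step` seconds,
    so a keystroke recorded by a keylogger is useless moments later."""
    counter = for_time // step                      # time-based counter
    msg = struct.pack(">Q", counter)                # 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"   # RFC 6238 test secret
print(totp(secret, 59))            # -> 287082 (matches the RFC test vector)
```

Even if a keylogger captures both the password and this code, the stolen code is invalid outside its 30-second window, which is why MFA is listed among the core mitigations.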

Question 79

Which type of attack manipulates network address resolution to intercept or redirect network traffic?

A) ARP poisoning
B) Phishing
C) DDoS
D) Adware

Answer:  A) ARP poisoning

Explanation:

ARP poisoning, also called ARP spoofing, is a network attack in which an attacker sends falsified Address Resolution Protocol (ARP) messages to a local network. The attacker associates their MAC address with the IP address of a legitimate device, redirecting or intercepting network traffic intended for that device. Security+ candidates must understand ARP poisoning because it demonstrates vulnerabilities in network communication protocols and can be used for man-in-the-middle attacks, data theft, session hijacking, and traffic manipulation. ARP poisoning can occur on both wired and wireless networks and exploits the lack of authentication in ARP protocol messages. Mitigation strategies include implementing static ARP entries for critical devices, using dynamic ARP inspection (DAI), network segmentation, VLAN isolation, monitoring for unusual ARP traffic, and employing encrypted communication protocols such as HTTPS and VPNs. Attackers often use ARP poisoning to capture credentials, inject malicious content, or eavesdrop on communications without the knowledge of the victim. Understanding ARP poisoning highlights the importance of securing internal network infrastructure, monitoring for anomalies, and reinforcing endpoint defenses to prevent network-level attacks.

Phishing, the second choice, is a social engineering attack aimed at tricking users into revealing sensitive information. Phishing does not manipulate network protocols or redirect traffic directly.

DDoS attacks, the third choice, overwhelm system resources with traffic. DDoS impacts availability but does not manipulate address resolution to intercept communications.

Adware, the fourth choice, delivers advertisements and collects user information. Adware does not target network traffic or manipulate network communication protocols.

The correct answer is ARP poisoning because it specifically alters ARP mappings to intercept or redirect network traffic. Security+ candidates should understand attack vectors, detection methods, prevention techniques, and mitigation strategies. ARP poisoning emphasizes securing internal network communications, monitoring ARP activity, and implementing layered defenses to prevent attackers from exploiting protocol weaknesses. Awareness of ARP poisoning also reinforces the importance of encrypted communication, proactive network monitoring, and vigilant security practices to maintain confidentiality, integrity, and availability of data on enterprise networks.
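The monitoring strategy described above can be sketched as a simple detective control: track each IP-to-MAC binding and alert when a known IP suddenly claims a new MAC address, the classic signature of ARP poisoning. The "announcements" below are fabricated for illustration; real deployments use switch features such as dynamic ARP inspection.

```python
def monitor_arp(announcements):
    """Flag IP addresses whose advertised MAC address changes,
    a common sign of ARP spoofing on the local segment."""
    table, alerts = {}, []
    for ip, mac in announcements:
        if ip in table and table[ip] != mac:
            alerts.append(f"ALERT: {ip} changed {table[ip]} -> {mac}")
        table[ip] = mac
    return alerts

traffic = [
    ("192.168.1.1", "aa:bb:cc:00:00:01"),  # legitimate gateway
    ("192.168.1.7", "aa:bb:cc:00:00:07"),
    ("192.168.1.1", "de:ad:be:ef:00:99"),  # attacker claims the gateway IP
]
for alert in monitor_arp(traffic):
    print(alert)
```

A binding change is not always malicious (NIC replacement, DHCP churn), so such alerts feed investigation rather than automatic blocking.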

Question 80

Which type of attack exploits vulnerabilities in software to install hidden malware on a target system?

A) Exploit kit
B) Phishing
C) Keylogger
D) Brute-force

Answer:  A) Exploit kit

Explanation:

An exploit kit is a toolkit used by attackers to identify and exploit vulnerabilities in software on a target system, delivering malware payloads such as ransomware, spyware, or keyloggers. Security+ candidates must understand exploit kits because they automate the process of vulnerability exploitation, making attacks more scalable and effective. Exploit kits are typically delivered through malicious websites, compromised advertisements, or phishing campaigns. They scan for unpatched software, outdated plugins, or misconfigured applications and then exploit these weaknesses to execute hidden malware. Exploit kits highlight the importance of patch management, endpoint protection, vulnerability scanning, secure browsing practices, and user awareness. Successful exploitation can compromise confidentiality, integrity, and availability simultaneously, allowing attackers to gain persistent access, steal data, or disrupt systems. Mitigation strategies include updating software regularly, restricting access to untrusted websites, employing endpoint protection, monitoring for abnormal activity, and implementing network segmentation. Understanding exploit kits demonstrates how attackers combine automated tools with technical vulnerabilities to target multiple systems efficiently, underscoring the importance of layered security and proactive defense.

Phishing, the second choice, is a social engineering attack used to obtain credentials or install malware, but it does not inherently automate the exploitation of vulnerabilities.

Keyloggers, the third choice, capture keystrokes to steal information. While exploit kits may deliver keyloggers, keyloggers themselves are the payload, not the automated vulnerability exploitation process.

Brute-force attacks, the fourth choice, attempt to guess credentials by systematically testing combinations. Brute-force attacks target authentication rather than exploiting software vulnerabilities to install malware.

The correct answer is exploit kit because it specifically targets vulnerabilities to install hidden malware on systems. Security+ candidates should understand delivery methods, exploitation techniques, mitigation strategies, and the consequences of successful exploitation. Exploit kits highlight the importance of proactive vulnerability management, patching, endpoint security, and user education in preventing widespread compromise. They demonstrate how automation and technical vulnerabilities combine to create scalable attack campaigns, emphasizing the need for a comprehensive, layered approach to cybersecurity that integrates technical controls, monitoring, and awareness training to maintain the security and integrity of systems.
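Since exploit kits hunt for unpatched software, the patch-management side can be sketched as a version audit. The advisory data and component names below are fabricated for illustration; real scanners compare inventories against feeds such as CVE advisories.

```python
# Fabricated advisory list: component -> first fixed version.
ADVISORIES = {
    "pdf-plugin": (3, 2, 1),
    "media-player": (10, 0, 0),
}

def parse_version(text: str) -> tuple:
    """Turn '3.1.9' into (3, 1, 9) so versions compare numerically."""
    return tuple(int(part) for part in text.split("."))

def vulnerable_components(installed: dict) -> list:
    """Return components running a version older than the known fix --
    exactly the targets an exploit kit probes for."""
    flagged = []
    for name, version in installed.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse_version(version) < fixed:
            flagged.append(name)
    return flagged

inventory = {"pdf-plugin": "3.1.9", "media-player": "10.2.0"}
print(vulnerable_components(inventory))  # ['pdf-plugin']
```

Tuple comparison handles multi-part versions correctly (so 3.1.9 sorts below 3.2.1), which string comparison would get wrong.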

Question 81

Which security concept ensures that users only have the minimum level of access required to perform their job functions?

A) Least privilege
B) Redundancy
C) Hashing
D) Load balancing

Answer:  A) Least privilege

Explanation:

Least privilege is a foundational security concept that restricts users, systems, and processes to the smallest set of permissions necessary to accomplish assigned tasks. This principle prevents unnecessary access to sensitive systems or data and reduces the risk of misuse, whether intentional or accidental. Applying least privilege limits the potential damage caused by compromised accounts, insider threats, or misconfigurations because an attacker or unauthorized user cannot access areas beyond their essential responsibilities. Organizations implement least privilege through access control mechanisms, permission management, privilege auditing, and continual monitoring to ensure users maintain only what their roles require. This approach enhances overall security posture by reducing attack surfaces and containing the potential impact of breaches. In environments where sensitive data is handled, enforcing least privilege becomes even more critical because it ensures that only those with legitimate business justification can interact with protected resources.

Redundancy, the second choice, plays an important role in maintaining availability by creating duplicate systems, components, or processes. While redundancy is essential for ensuring that operations continue during system failures, it does not relate to restricting access rights. Redundancy is concerned with system resilience rather than user permissions or limiting access to improve security.

Hashing, the third choice, is associated with data integrity, ensuring information has not been altered. Hashing uses mathematical functions to convert input into a fixed-size value that cannot be reversed. While hashing is crucial for password storage and message integrity, it has no relationship to controlling user permissions or defining user access levels.

Load balancing, the fourth choice, distributes network or application traffic across multiple servers to improve performance and availability. Load balancing ensures efficient resource utilization but is unrelated to restricting user access or managing permissions.

Therefore, the correct answer is least privilege because it directly addresses the idea that users should only have the permissions necessary for their duties and nothing more. Understanding least privilege is essential for Security+ candidates because it is central to designing secure systems, preventing lateral movement during attacks, and reducing the potential harm caused by compromised credentials. Least privilege reinforces the broader principles of access control, identity management, and risk mitigation by ensuring that permissions remain limited and purposeful. It highlights the importance of continuous evaluation of user roles and permissions, particularly as responsibilities change or employees move between departments. Least privilege also works in tandem with other concepts, such as separation of duties, to create a comprehensive and layered security model. Proper implementation requires an understanding of how permissions are assigned, monitored, adjusted, and revoked when no longer needed. This principle also applies to service accounts, automation scripts, network devices, and applications, ensuring all entities operate with restricted capabilities to minimize risk. Organizations that fail to enforce least privilege expose themselves to unnecessary vulnerabilities, making them more susceptible to data breaches, privilege escalation attacks, and unauthorized data access. Adopting least privilege reduces these dangers significantly and creates a security-conscious environment where access is granted with caution and oversight.
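Least privilege can be sketched as a deny-by-default role check. The roles and permissions below are fabricated for illustration; the point is that each role carries only what its duties require, and anything not explicitly granted is refused.

```python
# Fabricated role definitions: each role gets the minimum permission set.
ROLE_PERMISSIONS = {
    "clerk":   {"read_invoice"},
    "manager": {"read_invoice", "approve_invoice"},
    "auditor": {"read_invoice", "read_audit_log"},
}

def is_authorized(role: str, action: str) -> bool:
    """Deny by default: unknown roles and ungranted actions both fail."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("clerk", "read_invoice"))     # True: within the role
print(is_authorized("clerk", "approve_invoice"))  # False: beyond the role
print(is_authorized("intern", "read_invoice"))    # False: unknown role
```

The default-deny shape matters: a compromised clerk account cannot approve invoices, containing the blast radius exactly as the principle intends.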

Question 82

Which type of control primarily aims to detect malicious or unauthorized activity after it has occurred?

A) Detective
B) Preventive
C) Deterrent
D) Corrective

Answer:  A) Detective

Explanation:

Detective controls are designed to identify and uncover unauthorized activity, security breaches, anomalies, and policy violations after they have occurred. These controls play a critical role in organizational security by providing visibility into events that may not have been stopped by preventive measures. Detective controls enhance situational awareness and allow security teams to identify suspicious behavior before it escalates further. For example, monitoring logs, intrusion detection systems, security cameras, file integrity monitoring, and SIEM platforms all function as detective mechanisms. They do not prevent incidents directly; instead, they highlight irregular behavior, trigger alerts, and guide analysts toward potential threats requiring investigation. Detective controls serve as an essential part of layered defense because no system can prevent every attack. By uncovering incidents early, organizations can respond before attackers cause significant damage. Effective detective controls provide timely notifications, detailed logs, and insights into user activity, enabling analysts to reconstruct actions and identify vulnerabilities exploited during the event.

Preventive controls, the second choice, aim to stop malicious activity before it occurs. These include firewalls, access control lists, encryption, and authentication mechanisms, which help reduce the chances of successful attacks. Preventive controls differ from detective controls because they focus on blocking unauthorized actions rather than uncovering them after they happen.

Deterrent controls, the third choice, discourage malicious behavior by signaling that security measures are in place. Examples include warning banners, visible surveillance equipment, and documented security policies. While deterrent controls attempt to reduce motivation for malicious activity, they do not detect or respond to attacks once they occur.

Corrective controls, the fourth choice, aim to repair damage or restore systems after a security incident. Backup restoration, patching, and system recovery fall under this category. Corrective controls address consequences rather than detecting or preventing the initial issue.

The correct answer is detective because it specifically focuses on uncovering and identifying malicious or unauthorized actions after they occur. Understanding detective controls is essential in cybersecurity frameworks, incident response strategies, and post-incident analysis. They produce evidence that helps identify attack patterns, compromised systems, and vulnerabilities requiring remediation. In many cases, detective controls serve as the first indication that an attack has bypassed preventive measures, making them vital for timely intervention. Logs, monitoring tools, and automated alerts provide context and visibility across systems, enabling analysts to detect lateral movement, privilege misuse, data exfiltration attempts, or unauthorized configuration changes. Detective controls contribute to continuous monitoring efforts, helping maintain compliance with regulatory requirements and organizational policies. They also support forensic investigations by providing necessary data to reconstruct events and determine root causes. Without detective controls, organizations would operate blindly, unaware of ongoing threats within their environment. This lack of visibility could allow attackers to persist undetected for extended periods, increasing damage. Thus, detective controls form a crucial part of a defense-in-depth strategy, enabling organizations to identify and respond to threats quickly, strengthen security posture, and mitigate future risks.
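A minimal detective control can be sketched as an after-the-fact log scan: count failed authentication events per account and flag those over a threshold. The sample log is fabricated; real environments would feed this from a SIEM or authentication logs.

```python
from collections import Counter

def flag_brute_force(events, threshold: int = 3):
    """Detective control: scan (user, success) events already recorded
    and flag accounts with `threshold` or more failures."""
    failures = Counter(user for user, ok in events if not ok)
    return sorted(user for user, count in failures.items() if count >= threshold)

log = [
    ("alice", True),
    ("svc-backup", False), ("svc-backup", False),
    ("svc-backup", False), ("svc-backup", False),
    ("bob", False),
]
print(flag_brute_force(log))  # ['svc-backup']
```

Note the control does nothing to stop the failures as they happen; it surfaces the pattern afterward for analysts to investigate, which is precisely what distinguishes detective from preventive controls.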

Question 83

Which of the following best describes an air-gapped network?

A) A network physically isolated from unsecured networks
B) A network separated with VLANs
C) A network connected through a VPN
D) A network using encrypted wireless communication

Answer:  A) A network physically isolated from unsecured networks

Explanation:

An air-gapped network is a security design in which the network or system is physically isolated from other networks, particularly from public or unsecured networks such as the internet. This is done to ensure that no electronic communications, wired or wireless, can occur between the isolated system and any external system. The main goal is to provide the highest achievable level of protection by removing all direct digital pathways that an attacker could exploit. Air-gapped networks are commonly used in environments requiring extreme confidentiality, such as military systems, critical infrastructure control networks, classified government sectors, industrial control systems, and financial networks managing highly sensitive operations. By ensuring complete physical separation, organizations reduce the risk of remote intrusion, malware infections delivered through internet-based vectors, and unauthorized external access. Although extremely secure, air-gapped environments still require strict procedural controls, because attackers may find indirect methods of breaching isolation, such as through removable media, insider threats, or compromised supply-chain hardware.

The second choice describes a network separated with VLANs, which are logical network partitions implemented within the same physical infrastructure. VLANs improve segmentation, manage traffic flow, and enhance security by creating virtual boundaries, but they are not physically isolated. If the underlying infrastructure is compromised, VLANs can be bypassed. VLAN separation reduces attack surfaces but does not meet the strict requirement of absolute isolation. Because VLANs rely on digital configuration rather than physical separation, they cannot prevent all types of intrusions, especially those arising from misconfigurations, switching vulnerabilities, or unauthorized access to the core network.

The third choice refers to a network connected through a VPN. A VPN provides secure encrypted communication over shared or public networks, ensuring confidentiality and authentication. While VPNs enhance secure remote communication, they inherently involve connecting to external networks, which is incompatible with the concept of air-gapping. A VPN creates a protected communication tunnel, but it does not isolate the network from external systems. The existence of a communication pathway itself—encrypted or not—invalidates the concept of an air-gap.

The fourth choice involves encrypted wireless communication. Wireless networks using encryption methods such as WPA3 enhance confidentiality and reduce unauthorized access risks. However, wireless networks, regardless of encryption strength, still involve external communication channels. They cannot be considered air-gapped because wireless technologies inherently transmit signals beyond the physical boundaries of devices, allowing possible interception or exploitation.

The correct answer is that an air-gapped network is physically isolated from unsecured networks. Understanding air-gapped environments is important for Security+ candidates because it demonstrates how physical separation can serve as a powerful defense mechanism. Air-gapped systems require strict operational procedures, including careful control of removable media, mandatory scanning of all introduced hardware, and oversight of authorized personnel. Even though they are highly secure, history has shown that attackers can still breach air-gapped systems by exploiting unconventional methods such as electromagnetic emissions, USB devices, supply-chain tampering, and social engineering. This underscores that no security method is perfect, and even isolated systems require disciplined operational security, monitoring, and auditing. Air-gapping represents an extreme form of network segmentation, demonstrating that physical controls remain essential in cybersecurity. It also shows how organizations balance usability and security, as air-gapped systems may experience operational limitations due to their lack of external connectivity. This reinforces the principle that high-value assets require defense-in-depth, combining physical security, user behavior controls, procedural safeguards, and strict configuration management.

Question 84

What is the primary purpose of sandboxing in cybersecurity?

A) Isolating applications to prevent system compromise
B) Encrypting files to prevent unauthorized access
C) Distributing traffic across multiple servers
D) Detecting phishing attempts through email filters

Answer:  A) Isolating applications to prevent system compromise

Explanation:

Sandboxing is a cybersecurity technique used to isolate applications, processes, or code execution environments so that potentially harmful actions cannot affect the broader system. The idea behind sandboxing is to run untrusted or unknown code inside a restricted environment where access to system resources, files, or network connections is limited or monitored. This isolation ensures that even if malicious behavior occurs within the sandbox, it cannot escape into the main environment and cause damage. Sandboxing is commonly used in malware analysis, browser security, application virtualization, and execution of downloaded files. By observing the behavior of programs within the sandbox, analysts can determine whether they contain harmful code without risking the integrity of the primary system. Sandboxing reduces attack surfaces by quarantining risky activity, making it a widely adopted defensive measure in modern systems. It plays an important role in defending against zero-day attacks, as unknown threats can be contained before they exploit vulnerabilities.

The second choice describes encrypting files. Encryption protects data confidentiality by converting readable information into unreadable ciphertext. While encryption prevents unauthorized access, it does not isolate processes or applications. Encryption is essential for data security, but it serves a different purpose from sandboxing. Sandboxing focuses on isolating execution environments, whereas encryption focuses on protecting stored or transmitted data.

The third choice refers to distributing traffic across multiple servers, known as load balancing. Load balancing improves system availability and performance, ensuring no single server becomes overloaded. While this enhances system resilience, it has nothing to do with isolating untrusted code or analyzing suspicious applications.

The fourth choice refers to detecting phishing attempts through email filters. Email filters use heuristics, signatures, machine learning, and reputation analysis to detect phishing messages, malicious attachments, or suspicious links. Although email filtering is an important security control, it does not isolate code or programs from affecting a system.

The correct answer is isolating applications to prevent system compromise. Sandboxing plays a significant role in modern cybersecurity because it provides strong containment capabilities. For example, web browsers use sandboxing to limit the damage that malicious scripts or websites can cause. Malware analysts use sandboxes to observe harmful software in a controlled environment without risking live systems. Cloud and virtualization platforms use sandboxing to run applications in lightweight isolated containers. By restricting what code can access, sandboxing prevents unauthorized changes, system corruption, and data loss. It also contributes to defense-in-depth, working alongside firewalls, intrusion detection systems, and anti-malware solutions. Sandboxing illustrates the principle that isolation can be a powerful method to mitigate risk, especially when dealing with unknown or potentially harmful software. It underscores the importance of limiting interactions between untrusted applications and critical system components. Through sandboxing, organizations can test updates, analyze malware, and handle risky content safely, thereby enhancing overall security posture.

Question 85

Which technology provides tamper-evident logs that prevent unauthorized alteration of recorded events?

A) Blockchain logging
B) DHCP
C) NAT
D) SNMP

Answer:  A) Blockchain logging

Explanation:

Blockchain logging uses blockchain technology to create tamper-evident and immutable logs, ensuring that recorded events cannot be altered without detection. Blockchain works by storing data in blocks linked together through cryptographic hashes. Each block contains a hash of the previous block, making the entire chain resistant to modification. If someone attempts to alter a log entry, the hash changes, breaking the chain and revealing the tampering. This property makes blockchain logging ideal for environments requiring high assurance of integrity, such as financial auditing systems, legal evidence preservation, compliance reporting, and secure monitoring. Blockchain logging provides transparency, auditability, and verifiable evidence of system activity. It helps prevent insider threats, unauthorized modifications, and falsified logs, which are major concerns in cybersecurity. Because logs are essential for incident investigations, forensics, compliance, and threat detection, ensuring their integrity is crucial. Blockchain ensures that logs remain trustworthy and resistant to tampering, even in the presence of malicious actors.

The second choice refers to DHCP, which automatically assigns IP addresses to devices within a network. DHCP simplifies network configuration, but it does not provide tamper-evident logging or protect log integrity. While DHCP servers generate logs of assigned addresses, these logs can be modified through conventional means, offering no cryptographic guarantees.

The third choice, NAT, translates private IP addresses to public ones. NAT provides network flexibility and modest security by obscuring internal addresses but has no relevance to log integrity or tamper-resistant recording.

The fourth choice, SNMP, manages and monitors network devices. SNMP collects performance data and system information but does not protect logs from modification. Traditional SNMP logs can be altered unless additional security mechanisms are implemented.

The correct answer is blockchain logging because it uniquely provides cryptographic tamper resistance. Blockchain logging ensures logs cannot be changed without detection, preserving trust in digital evidence. In cybersecurity, logs serve as one of the most valuable forensic and monitoring tools. Attackers often attempt to erase or modify logs to conceal their actions, making log integrity essential for proper incident investigation. Blockchain’s decentralized and cryptographically linked structure prevents unauthorized edits, ensuring logs remain intact even under internal or external threats. Blockchain logging supports compliance requirements, legal obligations, and organizational policies by ensuring reliable chain-of-custody for digital records. Understanding blockchain logging helps security professionals appreciate how emerging technologies solve longstanding challenges in integrity and accountability. It demonstrates that cybersecurity requires not only protection mechanisms but also trustworthy monitoring systems. Blockchain logging reflects the principle that trustworthy data is essential for accurate diagnosis, response, and prevention of security incidents.

Question 86

Which security concept ensures that data is available to authorized users whenever it is needed?

A) Integrity
B) Availability
C) Confidentiality
D) Non-repudiation

Answer: B) Availability

Explanation:

Availability is one of the central pillars of cybersecurity, and it refers to ensuring that data, services, systems, and resources are accessible to authorized users at the time they are needed. To understand why this is the correct selection, it is important to examine each of the listed concepts in depth and the role each plays in security programs. Availability focuses on maintaining operational uptime, preventing disruptions, and ensuring that business processes continue functioning even when under stress. Systems designed with a strong focus on availability implement redundancy, failover capabilities, backup power, load balancing, and robust monitoring to detect and resolve outages quickly. Cyberattacks like distributed denial-of-service events directly target availability by overwhelming systems with traffic until they become inaccessible. Other threats include hardware failures, software bugs, misconfigurations, and natural disasters. Availability involves proactive planning, disaster recovery strategies, and regular testing to ensure resilience.

The first concept involves maintaining accuracy, consistency, and reliability of information. It ensures that data is not modified in an unauthorized or unexpected way. Systems and processes that protect this principle use mechanisms such as hashing, checksums, digital signatures, and access controls that prevent unauthorized modification. While closely related to many other security principles, this one does not speak directly to keeping systems accessible. Even if data is accurate and unaltered, system downtime or network outages could still prevent legitimate users from accessing it. Therefore, it does not align with the requirement of ensuring constant access.

The second concept concerns keeping data accessible, and this is the correct answer. It ensures that systems, networks, and information remain operational and usable. Methods to preserve this include implementing hardware redundancy, maintaining reliable network configurations, protecting against denial-of-service attacks, and building recovery plans. It also involves service-level management, sufficient bandwidth provisioning, and maintaining patches and system updates to prevent outages due to system failures. This concept ensures that authorized users never experience unexpected disruptions that prevent them from carrying out critical work.

The third concept focuses on protecting data from unauthorized disclosure. This is often addressed through encryption, access controls, authentication mechanisms, and secure transmission methods. While confidentiality is crucial for preventing leaks of private or sensitive information, it does not concern whether users can access the data at all times. A system may have strong encryption and robust privacy safeguards but still experience downtime, meaning it does not satisfy the requirement for constant access.

The fourth concept deals with ensuring that a person cannot deny having performed an action. This is accomplished through logging, digital signatures, auditing mechanisms, and chronological documentation of events. While vital for accountability and legal purposes, it has nothing to do with whether systems are accessible.

In conclusion, availability directly addresses the need for data and services to remain accessible at all times, making it the correct answer.

Question 87

Which of the following best describes a mantrap in physical security?

A) A device that encrypts entry logs
B) A small space with two interlocking doors designed to control access
C) A perimeter fence around a building
D) A biometric scanner used for authentication

Answer: B) A small space with two interlocking doors designed to control access

Explanation:

A mantrap is a physical security control used to restrict and manage access into sensitive areas. To understand why this option is correct, each listed item must be examined in detail. A mantrap consists of two locked doors placed in sequence, forming a secure vestibule in which one door can open only when the other is closed. This design ensures that unauthorized individuals cannot tailgate or closely follow authorized personnel into a secured area. Such spaces are commonly used in data centers, research facilities, secure offices, or places handling classified information. The system forces individuals to authenticate twice or undergo inspection before fully entering. In highly secure environments, guards or automated systems verify identity within the enclosed space. The key purpose of this structure is to prevent piggybacking and to enforce strict control of movement.

The first listed item describes a device that encrypts logs. While such a device may enhance security by protecting audit data from tampering, it does not physically restrict access to sensitive spaces. Log encryption is part of cybersecurity and data protection, not physical access control. It prevents unauthorized disclosure or manipulation of log records but lacks the capability to control or prevent real-world tailgating.

The second listed item is the correct answer. It describes a small secure room or vestibule equipped with two interlocking doors. This architectural feature ensures that one door cannot open until the other is fully closed and secured. The purpose is to isolate a person entering and verify their identity before granting access. These spaces increase security by providing an additional authentication point, sometimes using badges, biometrics, or visual inspection. This design prevents unauthorized individuals from forcefully entering or following an authorized user into a facility.

The third listed item refers to a perimeter fence around a building. Fences are important physical security measures intended to establish boundaries, discourage trespassing, and provide early detection of intrusions. However, they do not regulate entry at the level of precision required for sensitive interior spaces and do not force individual verification.

The fourth listed item is a biometric scanner used for authentication. While biometrics are powerful tools for verifying identity, they are not physical structures that prevent tailgating. A biometric scanner alone cannot stop an unauthorized person from slipping in behind an authorized individual.

In summary, the mantrap is accurately described as a secure enclosure with two interlocking doors used to regulate and monitor access to restricted areas, making the second choice correct.

Question 88

Which type of attack involves an adversary inserting false or manipulated data into a machine-learning model during training?

A) Poisoning attack
B) Evasion attack
C) Replay attack
D) Brute-force attack

Answer: A) Poisoning attack

Explanation:

Machine-learning systems depend on large quantities of training data to learn how to make predictions and decisions. When an adversary tampers with the training dataset by inserting misleading or incorrect information, it can cause the model to behave unpredictably or inaccurately. To fully understand the correct answer, all listed options must be examined carefully. Poisoning refers to the deliberate manipulation of training inputs so that the final model learns wrong patterns. This may cause a facial recognition system to misidentify individuals, a spam filter to misclassify messages, or an intrusion detection system to ignore malicious traffic. Poisoning attacks are especially dangerous because machine-learning systems can operate at large scale, and widespread deployment means subtle manipulation can have far-reaching consequences. Detecting poisoning is difficult because it exploits the internal learning process of the algorithm rather than attacking the system directly.
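The effect of a poisoned training set can be shown with a deliberately tiny toy model. The sketch below uses a 1-nearest-neighbour "spam filter" over one-dimensional feature values; the data, labels, and threshold geometry are all invented for illustration, not drawn from any real system.

```python
def nearest_label(value: float, samples: list) -> str:
    """1-nearest-neighbour classifier over (feature_value, label) pairs:
    return the label of the training sample closest to the input."""
    return min(samples, key=lambda s: abs(s[0] - value))[1]

# Clean training data: low feature values are ham, high values are spam.
clean = [(0.5, "ham"), (1.0, "ham"), (1.5, "ham"),
         (9.0, "spam"), (9.5, "spam"), (10.0, "spam")]
print(nearest_label(9.2, clean))      # spam: the model learned correctly

# Poisoning: the adversary plants one mislabeled record near the region
# they want misclassified, and the retrained model now lets spam through.
poisoned = clean + [(9.2, "ham")]
print(nearest_label(9.2, poisoned))   # ham: the poison flipped the decision
```

Real poisoning attacks are subtler, spreading small perturbations across many samples so the manipulation survives data-cleaning checks, but the mechanism is the same: corrupt what the model learns from, and you corrupt what it decides.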

The second listed item, evasion, refers to attacks that occur after the model is trained. Instead of corrupting the training data, an adversary crafts specialized inputs designed to slip past the model undetected. For example, a malware author may slightly alter malicious code to cause a detection system to misclassify it as harmless. Evasion alters inputs at inference time, not during model training. Therefore, it is not the correct answer.

The third listed item refers to intercepting and retransmitting data. Replay involves capturing legitimate communication, such as authentication tokens or network packets, and replaying them at a later time to gain unauthorized access. While dangerous, replay has nothing to do with machine-learning model training.

The fourth listed item, the brute-force attack, relies on repeated attempts to guess passwords or encryption keys using computational power. Brute-force attacks focus on authentication or cryptography, not model training.

Therefore, poisoning is the correct answer because it directly targets the training process of the machine-learning model.

Question 89

Which of the following best represents the function of a network jump box in a secure environment?

A) It allows users to connect directly to production servers from the internet
B) It serves as an isolated, controlled access point for connecting to internal systems
C) It replaces firewalls by filtering incoming traffic
D) It stores encryption keys for privileged access management

Answer: B) It serves as an isolated, controlled access point for connecting to internal systems

Explanation:

A jump box, sometimes known as a jump server or bastion host, is a highly controlled and hardened system that acts as a gateway for administrators and technicians who need to access internal resources. This concept is widely used in secure architectures to prevent direct access to sensitive networks from less trusted environments. To understand the correct answer, each listed item should first be explored individually, and then compared to what a jump box is intended to accomplish in real-world security scenarios.

The first selection suggests that such a system allows direct connections from the internet to production servers. Allowing direct internet access to internal systems would be considered a severe security risk, especially in professional environments where segmentation and access control must be enforced. A jump box is actually designed to prevent this type of behavior, not facilitate it. Exposing production systems directly to the internet introduces threats such as brute-force attacks, scanning, exploitation attempts, and unauthorized remote access. Best practices recommend placing internal systems behind layers of security controls, ensuring that access occurs only through monitored and hardened points. Therefore, allowing unrestricted direct access is the opposite of what a jump box is supposed to do.

The second selection correctly describes the purpose of a jump box. This type of system is placed in a controlled network segment, often a demilitarized zone or special administrative zone. Authorized administrators first authenticate into the jump box using strong authentication such as multifactor methods, certificates, VPN connections, or privileged access management. Once authenticated, administrators can then proceed from the jump box to internal resources such as servers, databases, or network devices. This setup ensures that administrators are not connecting directly from their everyday systems, which may not be as hardened or monitored. It also concentrates access through a small number of well-secured devices, making auditing and logging more effective. Because these devices are hardened and monitored continuously, they reduce the risk of unauthorized activity and contain the blast radius in case an account is compromised.

The third selection describes a system that replaces firewalls by filtering traffic. A jump box does not perform firewall functions. Firewalls implement packet filtering, network address translation, deep inspection, and segmentation enforcement. A jump box does not engage in traffic filtering at that level. Instead, it acts as an administrative access point. It may rely on firewalls for protection, but it does not replace them or serve the same purpose. Firewalls provide network-layer security, whereas a jump box provides controlled remote administration access. They often complement each other but are not interchangeable.

The fourth selection refers to storing encryption keys used in privileged access management. A jump box is not a secure vault for storing cryptographic materials. Although it may be integrated into a privileged access management platform, it does not function as a key repository. Key management requires specialized software and hardware, such as hardware security modules, secure vault systems, and encryption servers that enforce strict lifecycle controls, access permissions, and auditing. A jump box simply restricts administrative entry points and provides an isolated workspace for administrative tasks.

The reason the second selection is correct is that a jump box serves as a controlled and monitored intermediary through which administrators must pass before accessing sensitive internal resources. Because administrators often have elevated privileges, controlling the pathway they use reduces the attack surface and makes it easier to implement strong security controls. Logging, monitoring, behavior analytics, multifactor authentication, and audit trails can all be applied at the jump box. If any unusual activity occurs, it is easier to detect because all administrator access flows through a predictable point. In addition, by isolating administrative access to a single hardened device, organizations can ensure that personal laptops, user workstations, or external devices are not directly used to manage sensitive systems. This isolation reduces the risk posed by malware, configuration errors, and unsecured endpoints. In essence, the jump box serves as a choke point for administrative activity, making the network significantly more secure, auditable, and manageable.
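The access flow described above is commonly implemented with OpenSSH's `ProxyJump` directive, which forces every connection to an internal host to pass through the hardened intermediary. The fragment below is a hypothetical `~/.ssh/config`; all hostnames, addresses, and usernames are invented for illustration.

```
# ~/.ssh/config -- hypothetical names for illustration only

# The hardened, monitored jump box in the administrative zone
Host jumpbox
    HostName jump.example.internal
    User admin
    IdentityFile ~/.ssh/admin_key

# Internal server reachable only through the jump box; "ssh db01"
# transparently tunnels via jumpbox rather than connecting directly
Host db01
    HostName 10.0.20.11
    User admin
    ProxyJump jumpbox
```

Combined with firewall rules that permit SSH to internal hosts only from the jump box's address, this makes the jump box the single auditable choke point the explanation describes.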

Question 90

Which security testing method involves a tester having full knowledge of the system architecture, source code, and infrastructure details?

A) Black-box testing
B) Gray-box testing
C) White-box testing
D) Blind testing

Answer: C) White-box testing

Explanation:

White-box testing is a security evaluation method where the tester is provided with complete information about the system, its architecture, source code, design documents, data flows, network diagrams, authentication methods, and infrastructure components. It is often used in development and pre-production environments to thoroughly examine how an application or system behaves internally. To understand why this is the correct answer, it is important to examine all the choices individually and understand how they differ.

The first selection describes an approach where the tester receives no internal information. Black-box testing simulates the perspective of an external attacker who has no privileged knowledge of the system’s structure. Testers rely entirely on publicly exposed interfaces, application behavior, error messages, scanning tools, and probing techniques. They do not have source code access, network diagrams, or architectural details. This form of testing focuses on how the system behaves from an outside point of view and assesses externally visible vulnerabilities. Since black-box testing does not include internal visibility, it cannot be the method described in the question.

The second selection describes an approach that offers partial internal knowledge. Gray-box testing represents a hybrid approach where the tester has some limited system information, such as API documentation, database schema fragments, user credentials, or simplified architectural overviews. It is more informed than black-box testing but less detailed than white-box testing. It allows testers to focus on specific areas without having full internal access. While it provides a middle ground, it does not involve full knowledge of source code or system architecture, so it does not match the requirement outlined in the question.

The third selection is the correct one. White-box testing involves complete visibility into the internal workings of the system. Developers or security testers analyze code structure, logic, security controls, cryptographic implementations, and internal components in detail. This kind of testing can uncover deep logic vulnerabilities, insecure coding practices, flawed access control mechanisms, race conditions, improper error handling, and issues that may not appear from an external perspective. Because the tester has access to everything, they can evaluate the system more comprehensively.

The fourth selection describes an approach where a tester has minimal or no information, but the organization being tested knows about the test. Blind testing simulates the actions of an external attacker who is not given system details, but unlike black-box testing, the organization is aware of the test. It is used primarily for assessing detection and response capabilities rather than evaluating code or internal architecture. Therefore, blind testing does not match the comprehensive internal access described in the question.

White-box testing remains the correct answer because it specifically refers to scenarios where testers have complete internal visibility. This level of access allows for highly thorough evaluations that can reveal vulnerabilities missed by other testing approaches. It is widely used for code reviews, static analysis, secure software development lifecycle activities, and detailed architectural assessments.