ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 11 Q151-165

Question 151:

Which access control model allows resource owners to determine who can access their objects?

A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control

Answer: A

Explanation:

Discretionary Access Control (DAC) is an access control model that fundamentally differs from other models like Mandatory Access Control (MAC) or Role-Based Access Control (RBAC) because it places the authority to manage permissions in the hands of the resource owner. In DAC, the owner of a file, folder, database entry, or system resource has the ability to determine who can access their objects and what type of access they are allowed. This is typically implemented using Access Control Lists (ACLs) or capability tables, which explicitly enumerate which users or groups can read, write, execute, or modify a resource.

DAC is often contrasted with Mandatory Access Control, where access is strictly determined by system-enforced policies and labels, leaving no discretion to individual owners. For instance, in government or military environments, MAC is preferred to enforce strict confidentiality classifications (e.g., Top Secret, Secret, Confidential) that users cannot override. DAC, in contrast, provides flexibility, which is particularly beneficial in business, academic, or collaborative environments where users need the autonomy to share documents, delegate tasks, or grant temporary access.

One of the primary advantages of DAC is its adaptability and ease of implementation. Because individual users control access, it allows dynamic collaboration, such as granting a colleague temporary read or write access to a document without requiring administrative intervention. This model aligns well with common operating systems like Windows and Unix/Linux, where users can change file permissions for files they own using simple commands (e.g., chmod in Linux or the security properties panel in Windows).
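
To make the owner-controlled model concrete, here is a minimal Python sketch of a DAC-style access control list. The resource name, user names, and helper methods are hypothetical, invented purely for illustration.

```python
# Minimal DAC sketch: the resource owner edits the ACL directly,
# with no central authority involved. All names are hypothetical.

class Resource:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.acl = {owner: {"read", "write", "grant"}}  # owner starts with full rights

    def grant(self, requester, user, permissions):
        # Only a holder of the "grant" right (initially just the owner) may change the ACL.
        if "grant" not in self.acl.get(requester, set()):
            raise PermissionError(f"{requester} may not modify the ACL of {self.name}")
        self.acl.setdefault(user, set()).update(permissions)

    def check(self, user, permission):
        return permission in self.acl.get(user, set())

doc = Resource("quarterly_report.docx", owner="alice")
doc.grant("alice", "bob", {"read"})  # the owner delegates read access at her discretion
print(doc.check("bob", "read"))      # True
print(doc.check("bob", "write"))     # False
```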

However, this flexibility comes with significant security trade-offs. Since permissions are determined by owners rather than a central authority, it can lead to inconsistent enforcement of access policies across the organization. Users may unintentionally grant excessive privileges or fail to revoke access when it is no longer needed. In large or highly sensitive environments, this lack of control can create vulnerabilities and increase the risk of insider threats. Attackers often exploit DAC misconfigurations to escalate privileges or gain unauthorized access to critical assets.

Modern implementations of DAC often integrate additional security features to mitigate these risks. For example, auditing mechanisms can track who accessed what resources and when, alerting administrators to potential misuse. Some systems combine DAC with RBAC to enforce a hybrid model where users can delegate access only within the bounds of predefined roles, balancing flexibility with organizational policy adherence. DAC is also foundational in cloud computing environments, where resource owners often control access to their virtual machines, storage buckets, or database objects using DAC-like permission models.

Discretionary Access Control (DAC) is the model where resource owners retain the authority to decide access rights, providing flexibility and ease of collaboration while requiring careful oversight to prevent privilege misuse. This emphasis on owner-controlled permissions makes DAC particularly suitable for collaborative and dynamic environments where centralized enforcement is less practical.

Question 152:

Which cryptographic function ensures message integrity and detects tampering?

A) Encryption
B) Hashing
C) Digital Signature
D) Key Exchange

Answer: B

Explanation:

Hashing is a core cryptographic function designed to ensure the integrity of a message or data set. Unlike encryption, which primarily ensures confidentiality, or digital signatures, which provide authenticity and non-repudiation, hashing focuses on verifying that data has not been altered during transmission or storage. A hash function takes an input of arbitrary length and produces a fixed-length output, known as a hash value or digest. This digest acts like a unique fingerprint for the original data; even the slightest modification to the input will produce a completely different hash output due to the avalanche effect.

Hashing is one-way, meaning it is computationally infeasible to reconstruct the original data from the hash value. This property makes it ideal for integrity verification. Common hash algorithms include SHA-2 (e.g., SHA-256, SHA-512), SHA-3, and older algorithms like MD5 (largely deprecated due to vulnerabilities). In practical applications, hashing ensures that messages or files have not been tampered with. For instance, when downloading software, developers often provide hash values alongside installation files. Users can compute the hash of the downloaded file and compare it to the provided value; a mismatch indicates possible corruption or malicious modification.
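
As a concrete illustration of that verification workflow, the Python sketch below uses the standard hashlib module to compute a SHA-256 digest and compare it with a published value. The file name is a placeholder, and to keep the example self-contained the "published" digest is taken from the demo file itself rather than from a vendor's site.

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)  # hash incrementally so large files fit in memory
    return h.hexdigest()

# Create a demo file so the sketch runs end to end.
with open("installer.bin", "wb") as f:
    f.write(b"demo payload")

published = sha256_of("installer.bin")  # in practice, copied from the vendor's site

# Verification step performed by the user after downloading.
print("OK" if sha256_of("installer.bin") == published else "MISMATCH - corrupted or tampered with")
```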

Hashing is frequently combined with other cryptographic techniques for enhanced security. In digital signatures, the sender generates a hash of the message and then encrypts that hash using their private key. The recipient decrypts the signature using the sender’s public key and compares it to their own hash of the received message. If the hashes match, it confirms both integrity and authenticity. Message Authentication Codes (MACs) also leverage hashing, often in conjunction with a secret key, to ensure that data has not been altered while verifying that it originated from a trusted source.
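
The MAC construction described here can be sketched with Python's standard hmac module; the key and message are placeholders, and in practice the shared secret would be distributed over a secure channel.

```python
import hashlib
import hmac

secret = b"shared-secret-key"  # placeholder; exchanged out of band in practice
message = b"transfer $100 to account 42"

# Sender computes an HMAC tag over the message using the shared key.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time.
def verify(message, tag):
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(message, tag))                         # True: intact and from the key holder
print(verify(b"transfer $900 to account 42", tag))  # False: message was altered
```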

In networking and secure communications, hashing plays a pivotal role in protocols such as TLS, SSL, and IPsec, where it helps protect against tampering and man-in-the-middle attacks. In file storage and backup systems, hashes are used to detect unintended modifications, data corruption, or unauthorized changes. Similarly, blockchain technology relies heavily on hashing to ensure the integrity of transactions and maintain immutable ledgers. Each block contains a hash of the previous block, making it extremely difficult to alter any block without detection.

Overall, hashing provides a lightweight yet highly effective method for verifying data integrity. While it does not encrypt data or provide confidentiality, its ability to detect any modification, intentional or accidental, is crucial in secure communication, software distribution, and data storage systems. The cryptographic function responsible for detecting tampering and ensuring message integrity is therefore hashing.

Question 153:

Which type of testing combines knowledge of internal structures with external testing methods?

A) White-box Testing
B) Gray-box Testing
C) Black-box Testing
D) Fuzz Testing

Answer: B

Explanation:

Gray-box testing is a hybrid testing methodology that combines elements of both white-box and black-box testing, offering unique advantages in terms of effectiveness and efficiency. In white-box testing, testers have complete knowledge of the internal structure, code, and architecture of the system under test. This allows for highly granular and targeted testing of specific components, logic paths, and potential vulnerabilities, but it can be time-consuming and requires deep technical expertise. Black-box testing, in contrast, treats the system as an opaque entity, focusing on inputs, outputs, and expected behavior without insight into internal structures. This simulates real-world attack or usage scenarios but may miss complex internal flaws.

Gray-box testing strikes a balance by providing testers with partial knowledge of the system’s architecture, data flows, or internal configurations while still evaluating the system externally. For example, a gray-box tester might have access to database schemas, API documentation, or certain source code modules, enabling more precise test design and risk assessment. At the same time, they evaluate the system from a user or attacker perspective, ensuring realistic coverage. This makes gray-box testing particularly valuable for integration testing, penetration testing, and security assessments.
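
A hypothetical sketch of how partial internal knowledge sharpens an otherwise external test: knowing from the schema that the username column is 32 characters wide, the tester probes exactly that boundary. Here create_user() merely stands in for the application's public interface; in a real assessment it would be an HTTP request to the system under test.

```python
# Gray-box sketch: the tester has seen the database schema (VARCHAR(32) for
# usernames) and uses that insider knowledge to pick boundary values, while
# still exercising only the external interface.

def create_user(username: str) -> bool:
    # Hypothetical system under test: rejects empty or over-long names.
    return bool(username) and len(username) <= 32

def test_username_length_boundary():
    assert create_user("a" * 32) is True   # exactly at the documented limit
    assert create_user("a" * 33) is False  # one past the limit must be rejected

test_username_length_boundary()
print("boundary tests passed")
```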

In security contexts, gray-box testing allows testers to identify vulnerabilities that might be missed in black-box testing, such as misconfigurations, insufficient access controls, or input validation weaknesses, without requiring full access to the source code. It also helps uncover vulnerabilities in third-party components or services where full source code is unavailable. From a development lifecycle perspective, gray-box testing supports DevSecOps initiatives by bridging the gap between static code analysis and real-world exploitation scenarios, allowing organizations to proactively remediate risks before deployment.

Additionally, gray-box testing supports efficiency. Since testers have partial knowledge, they can focus on high-risk areas, reducing the time and resources spent testing low-risk or irrelevant components. Organizations often use it in combination with automated tools and scripts to simulate attacks on web applications, databases, APIs, and networked systems. The methodology is highly versatile, applicable to functional testing, regression testing, and security testing, making it a robust option for modern software environments where both security and functionality are priorities.

Gray-box testing combines partial internal knowledge with external evaluation, enhancing coverage and targeting while simulating realistic usage or attack scenarios. It leverages insider knowledge for precision while maintaining the practical perspective of external testing, making it an essential approach for comprehensive software quality and security assurance.

Question 154:

Which type of malware can self-replicate without user intervention?

A) Trojan
B) Virus
C) Worm
D) Rootkit

Answer: C

Explanation:

Worms are a type of self-replicating malware that can spread autonomously across networks without requiring any user action. Unlike viruses, which attach themselves to files and rely on execution or transfer by a host, worms exploit vulnerabilities in network services, operating systems, or applications to propagate automatically. This autonomous propagation allows worms to infect multiple systems rapidly, often creating widespread disruptions within minutes or hours.

Worms can carry various malicious payloads, including ransomware, spyware, or backdoors, and are capable of consuming network bandwidth, overloading servers, and disrupting critical services. Historical examples include the Morris Worm of 1988, which infected thousands of systems, and the 2017 WannaCry ransomware worm, which exploited a Windows SMB vulnerability (EternalBlue) to encrypt files globally. Unlike trojans, which disguise themselves as legitimate software, worms are primarily designed for rapid self-propagation and may or may not perform additional malicious actions.

Detection and mitigation of worms require proactive measures. Network segmentation can limit the spread of worms by isolating critical systems from less secure network segments. Regular patching and updating of operating systems, software, and network devices closes vulnerabilities that worms exploit. Intrusion detection systems (IDS) and intrusion prevention systems (IPS) can identify anomalous traffic patterns associated with worm propagation, enabling automated containment. Additionally, endpoint security software can scan for unusual behaviors characteristic of worms, such as mass file creation, excessive network scanning, or unauthorized replication.
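
One of the detective techniques above, flagging anomalous traffic patterns, can be sketched in a few lines of Python: alert on any source that contacts an unusually large number of distinct hosts, a hallmark of worm scanning. The flow records and threshold are entirely hypothetical.

```python
from collections import defaultdict

# Simplified flow records as (source_ip, destination_ip) pairs. A worm
# scanning for victims shows up as one source touching many destinations.
flows = [("10.0.0.5", f"10.0.1.{i}") for i in range(1, 120)]     # scanning host
flows += [("10.0.0.7", "10.0.0.20"), ("10.0.0.7", "10.0.0.21")]  # normal host

SCAN_THRESHOLD = 50  # hypothetical tuning value

peers = defaultdict(set)
for src, dst in flows:
    peers[src].add(dst)

for src, dsts in peers.items():
    if len(dsts) > SCAN_THRESHOLD:
        print(f"ALERT: {src} contacted {len(dsts)} distinct hosts - possible worm scan")
```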

From a risk management perspective, worms highlight the importance of defense-in-depth strategies. Firewalls, access control policies, security monitoring, and user education collectively reduce the likelihood and impact of worm outbreaks. Organizations must also have incident response plans ready to quickly isolate infected systems, analyze worm behavior, and remediate affected resources. The autonomous nature of worms makes them highly potent threats, capable of causing operational, financial, and reputational damage if left unchecked.

In essence, worms are distinguished from other malware types by their ability to self-propagate without user intervention, making them particularly dangerous in interconnected environments. Their rapid spread, potential payload delivery, and network disruption emphasize the need for comprehensive preventative, detective, and corrective security controls. The correct answer is therefore Worm.

Question 155:

Which security principle minimizes unnecessary permissions and access?

A) Least Functionality
B) Defense in Depth
C) Least Privilege
D) Separation of Duties

Answer: C

Explanation:

The principle of Least Privilege is a foundational security concept aimed at minimizing exposure by granting users, processes, and systems only the permissions necessary to perform their required functions—no more, no less. It reduces the potential attack surface, limits the impact of security breaches, and enhances compliance with regulatory frameworks like PCI DSS, HIPAA, and GDPR. By enforcing minimal access rights, organizations can mitigate the risk of accidental misuse, insider threats, and external exploitation of compromised accounts.

Least Privilege is implemented through careful access management strategies. Role-Based Access Control (RBAC) and Attribute-Based Access Control (ABAC) are commonly used to enforce least privilege by mapping specific duties to precise access rights. For example, a database administrator might have full access to development databases but read-only access to production databases, while a business analyst may only have read access to specific reports. Just-in-time access provisioning can further enhance least privilege by granting elevated privileges temporarily when required and automatically revoking them afterward.
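
A small, hypothetical Python sketch of these ideas combines a role-to-permission mapping with just-in-time grants that expire automatically; the role names, permission strings, and TTL are invented for illustration.

```python
import time

# Standing rights per role, kept deliberately minimal (least privilege).
ROLE_PERMISSIONS = {
    "dba":     {"dev_db:read", "dev_db:write", "prod_db:read"},
    "analyst": {"reports:read"},
}

jit_grants = {}  # (user, permission) -> expiry timestamp

def grant_temporary(user, permission, ttl_seconds):
    # Just-in-time elevation: the right disappears on its own after the TTL.
    jit_grants[(user, permission)] = time.time() + ttl_seconds

def is_allowed(user, role, permission):
    if permission in ROLE_PERMISSIONS.get(role, set()):
        return True
    expiry = jit_grants.get((user, permission))
    return expiry is not None and time.time() < expiry  # expired grants are ignored

print(is_allowed("dana", "dba", "prod_db:write"))  # False: not a standing right
grant_temporary("dana", "prod_db:write", ttl_seconds=900)
print(is_allowed("dana", "dba", "prod_db:write"))  # True, but only for 15 minutes
```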

The principle also applies to software, services, and network components. Limiting installed applications, disabling unnecessary services, and configuring minimal execution privileges prevent attackers from exploiting superfluous capabilities. This complements the principle of Least Functionality, which focuses on reducing the number of services exposed to an attacker, whereas Least Privilege specifically targets permissions and access rights.

In practice, implementing least privilege requires continuous monitoring and review. Organizations must regularly audit user accounts, groups, and privileges to ensure compliance and adjust access as roles or responsibilities change. Automated tools can provide alerts when users exceed their required privileges, and security policies can mandate periodic reviews. Education and awareness are also critical; users must understand the importance of requesting only necessary access and avoiding workarounds that bypass controls.

Adherence to the Least Privilege principle significantly strengthens the overall security posture. By constraining access rights to the absolute minimum, organizations minimize the potential impact of compromised accounts, reduce the likelihood of accidental data exposure, and reinforce accountability. It is widely regarded as one of the most effective proactive security measures, forming a key component of broader security strategies like Defense in Depth.

The security principle that explicitly minimizes unnecessary permissions and access is Least Privilege. It emphasizes granting only the access needed for specific tasks, thereby reducing risk, supporting compliance, and enhancing overall cybersecurity resilience.

Question 156:

Which cloud service model provides fully managed applications accessed via web interfaces?

A) Infrastructure as a Service
B) Platform as a Service
C) Software as a Service
D) Function as a Service

Answer: C

Explanation:

Software as a Service (SaaS) represents the highest level of cloud service abstraction, providing fully managed software applications that users can access directly through web interfaces or APIs without worrying about the underlying infrastructure, operating systems, or middleware. SaaS applications are designed to be user-ready, meaning organizations and individual users can consume them immediately without installation, configuration, or maintenance efforts. Examples include email platforms like Microsoft 365, customer relationship management tools like Salesforce, collaboration suites like Google Workspace, and enterprise resource planning solutions like SAP S/4HANA Cloud.

The SaaS model offers significant advantages over other cloud models. Unlike Infrastructure as a Service (IaaS), where users must provision and manage virtual machines, storage, and networks, or Platform as a Service (PaaS), which provides development environments but requires code deployment and application management, SaaS removes almost all administrative responsibilities. The service provider handles software updates, patch management, security monitoring, scaling, and performance optimization. This reduces the operational burden on internal IT teams, allowing organizations to focus on their core business objectives rather than system administration.

SaaS also enables rapid scalability and accessibility. Because applications are hosted in the cloud, users can access them from virtually any device with an internet connection. Organizations can scale licenses and user access up or down based on demand, making SaaS ideal for dynamic business environments. Additionally, SaaS solutions often integrate APIs, allowing other systems or platforms to connect seamlessly, supporting workflows that span multiple applications.

However, SaaS has certain trade-offs. Customization options may be limited compared to on-premises software or PaaS deployments. Organizations must also trust the provider for data security, privacy, and compliance. This is especially relevant in regulated industries such as healthcare, finance, or government, where SaaS providers must comply with standards like HIPAA, SOC 2, or ISO 27001. Despite these limitations, the benefits of reduced IT overhead, faster deployment, and predictable subscription-based costs make SaaS the dominant choice for many modern organizations.

SaaS delivers fully managed applications that are accessible through web interfaces, requiring minimal user administration while providing robust functionality and scalability. It is the correct answer for the cloud service model focused on ease of use, management outsourcing, and end-user accessibility.

Question 157:

Which type of attack intercepts communication between two parties?

A) Denial-of-Service
B) SQL Injection
C) Man-in-the-Middle
D) Phishing

Answer: C

Explanation:

A Man-in-the-Middle (MITM) attack occurs when an adversary secretly intercepts, relays, or manipulates communication between two parties who believe they are communicating directly with each other. MITM attacks compromise both the confidentiality and integrity of information, enabling attackers to eavesdrop on sensitive data, modify transmitted messages, inject malicious content, or impersonate legitimate users. This type of attack can target numerous protocols, including HTTPS, email communications, VoIP calls, and wireless networks, making it a versatile and persistent threat.

MITM attacks exploit several vulnerabilities. On unsecured networks, such as public Wi-Fi hotspots, attackers can intercept unencrypted traffic with minimal effort. Protocol weaknesses, such as insufficient SSL/TLS validation or outdated cryptographic algorithms, can also expose users to interception. Additionally, attackers may use ARP spoofing, DNS spoofing, session hijacking, or malicious proxies to position themselves between the communicating parties. In more sophisticated scenarios, MITM attacks can involve advanced persistent threats (APTs), in which attackers infiltrate a network and maintain covert access to monitor communications over long periods.

Mitigation strategies for MITM attacks are multi-layered. Strong encryption protocols like TLS or IPsec protect data in transit, making intercepted messages unreadable to attackers. Mutual authentication ensures that both parties validate each other’s identities before exchanging information, reducing the risk of impersonation. Virtual Private Networks (VPNs) encrypt network traffic between endpoints, providing a secure tunnel even over untrusted networks. Additionally, certificate validation, awareness of phishing and social engineering attacks, and frequent updates to software and cryptographic libraries further reduce the risk. Intrusion detection systems can identify unusual traffic patterns indicative of MITM activity, enabling proactive response and mitigation.
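
As one concrete mitigation, the sketch below uses Python's standard ssl module to open a TLS connection that validates the server's certificate against the system trust store and checks the hostname, the step that makes an interceptor's forged certificate fail the handshake. It assumes outbound network access, and the hostname is arbitrary.

```python
import socket
import ssl

hostname = "www.python.org"
# create_default_context() enables certificate verification and hostname checking.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated:", tls.version())
        # Subject of the validated certificate; a MITM with a forged or
        # self-signed certificate would have failed before reaching here.
        subject = dict(field[0] for field in tls.getpeercert()["subject"])
        print("peer subject:", subject)
```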

MITM attacks have real-world consequences. They can result in the theft of login credentials, financial data, intellectual property, or confidential communications. In corporate environments, MITM attacks can compromise internal systems, customer databases, or trade secrets. Governments and critical infrastructure operators are particularly concerned because MITM attacks can be used for espionage or sabotage. Cybersecurity frameworks, therefore, emphasize encryption, network monitoring, and endpoint security to minimize the likelihood and impact of MITM attacks.

A Man-in-the-Middle attack intercepts communication between two parties to eavesdrop, alter, or inject data without detection. Its sophistication and potential for damage make it one of the most serious forms of cyber attack, and it requires layered defense mechanisms for effective mitigation.

Question 158:

Which control type restores systems and data after an incident?

A) Detective
B) Corrective
C) Preventive
D) Deterrent

Answer: B

Explanation:

Corrective controls are designed to respond to security incidents, system failures, or operational disruptions by restoring systems, data, or processes to a normal or secure state. Unlike preventive controls, which aim to stop incidents before they occur, or detective controls, which identify incidents as they happen, corrective controls focus on recovery and mitigation after an adverse event. Deterrent controls, on the other hand, discourage undesirable behavior but do not actively restore systems.

Examples of corrective controls include patching vulnerabilities after detection of exploits, restoring corrupted databases from backups, reconfiguring compromised systems, and repairing or replacing damaged hardware. Disaster recovery plans and business continuity strategies incorporate corrective measures to ensure rapid recovery from natural disasters, cyberattacks, or system failures. By implementing robust corrective controls, organizations can minimize downtime, reduce operational disruption, and maintain regulatory compliance.

Effective corrective controls require careful planning and validation. Organizations must maintain regular, tested backups of critical data and system configurations. Recovery procedures must be documented and rehearsed through simulations or tabletop exercises to ensure staff know the steps to restore functionality. Integration with incident response workflows ensures that corrective actions are coordinated, timely, and efficient, reducing the likelihood of cascading failures. For complex IT environments, automation tools may assist in restoring services, deploying patches, or recovering data to meet organizational recovery objectives.
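
A minimal corrective-control sketch in Python, with hypothetical file names and a simulated incident: the backup's digest is recorded at backup time and verified again before restoration, so a corrupted backup is never blindly restored.

```python
import hashlib
import shutil

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Back up a config file and record its digest in a manifest.
with open("config.live", "w") as f:
    f.write("service_port=8443\n")
shutil.copy("config.live", "config.bak")
manifest = {"config.bak": file_sha256("config.bak")}

# An incident corrupts the live file...
with open("config.live", "w") as f:
    f.write("GARBAGE")

# ...so the corrective control verifies the backup, then restores it.
if file_sha256("config.bak") == manifest["config.bak"]:
    shutil.copy("config.bak", "config.live")
    print("restored from verified backup")
else:
    print("backup failed integrity check - escalate to incident response")
```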

Corrective controls also complement preventive and detective measures. While preventive controls aim to block incidents and detective controls alert administrators when incidents occur, corrective controls provide the final layer of assurance by enabling recovery. They are critical for minimizing financial losses, reputational damage, and legal exposure. Industries with high uptime requirements, such as healthcare, finance, and telecommunications, rely heavily on corrective controls as part of a comprehensive risk management strategy.

Corrective controls restore systems, processes, and data after an incident, ensuring operational continuity and minimizing disruption. They are an essential component of a balanced security and resilience strategy, complementing preventive and detective measures.

Question 159:

Which disaster recovery metric specifies the maximum acceptable downtime?

A) Recovery Time Objective
B) Recovery Point Objective
C) Maximum Tolerable Loss
D) Mean Time to Repair

Answer: A

Explanation:

Recovery Time Objective (RTO) is a critical disaster recovery and business continuity metric that defines the maximum allowable duration of system downtime after a disruption. It represents the targeted time within which business processes or IT systems must be restored to an acceptable level of operation. RTO directly influences disaster recovery planning, backup strategies, staffing, and resource allocation to ensure continuity of critical services.

RTO is distinct from Recovery Point Objective (RPO), which defines the maximum tolerable data loss in terms of time (i.e., how far back data can be recovered). For example, an RTO of four hours indicates that a system must be operational within four hours of a disruption, while an RPO of 15 minutes indicates that no more than 15 minutes of data should be lost. Maximum Tolerable Loss (MTL) refers to the overall impact an organization can sustain, including operational, financial, and reputational effects. Mean Time to Repair (MTTR) measures the average time to repair a failed system, but is not tied to business requirements in the same way as RTO.
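
Using the figures from this paragraph, a short worked example shows how an incident timeline is measured against both objectives; the timestamps are invented for illustration.

```python
from datetime import datetime, timedelta

rto = timedelta(hours=4)     # system must be operational within 4 hours
rpo = timedelta(minutes=15)  # at most 15 minutes of data may be lost

outage_start   = datetime(2024, 5, 1, 9, 0)
last_backup    = datetime(2024, 5, 1, 8, 50)   # most recent recoverable point
service_online = datetime(2024, 5, 1, 12, 30)  # when service was restored

downtime  = service_online - outage_start  # 3:30:00
data_loss = outage_start - last_backup     # 0:10:00

print(f"downtime {downtime}: RTO {'met' if downtime <= rto else 'MISSED'}")
print(f"data loss {data_loss}: RPO {'met' if data_loss <= rpo else 'MISSED'}")
```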

RTO informs various aspects of disaster recovery planning. Organizations must design infrastructure, backup procedures, and failover mechanisms to meet the RTO targets. This may include high-availability clustering, redundant systems, cloud failover solutions, and automated recovery scripts. In addition, staffing plans and incident response teams are often structured to ensure rapid action within the RTO window. A well-defined RTO helps organizations prioritize recovery efforts based on the criticality of systems, balancing cost against operational risk.

In practice, RTO also serves as a benchmark for testing disaster recovery plans. Through simulation exercises, organizations verify that recovery processes can achieve the defined RTO under realistic conditions. If testing reveals delays or bottlenecks, processes are refined, redundant resources are added, or automation is introduced. Regulatory and contractual obligations often require documented RTOs for mission-critical systems, making it both a technical and compliance-driven metric.

Recovery Time Objective (RTO) specifies the maximum acceptable downtime following an incident, guiding disaster recovery planning, system design, and operational priorities. It is essential for maintaining business continuity and minimizing the impact of disruptions.

Question 160:

Which type of access control enforces restrictions using security labels?

A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control

Answer: B

Explanation:

Mandatory Access Control (MAC) is an access control model in which access decisions are governed by centrally defined security policies, rather than by the discretion of individual resource owners. In MAC, all subjects (users, processes) and objects (files, databases, devices) are assigned security labels or classifications, and access is granted only when the subject’s clearance matches or exceeds the object’s classification. This model is widely used in highly sensitive environments where confidentiality, integrity, and compliance are critical, such as government, military, and defense systems.

MAC is fundamentally different from Discretionary Access Control (DAC), where resource owners determine access, and Role-Based Access Control (RBAC), where access is tied to job roles. Rule-Based Access Control, a distinct model despite the similar abbreviation, enforces dynamic policies such as time-of-day restrictions but still relies on system-enforced rules rather than labels. In MAC, users cannot override system-defined access restrictions, reducing the risk of accidental or intentional privilege escalation and ensuring consistent enforcement of security policies.

The application of MAC involves labeling both data and users. For example, a document might be classified as “Confidential,” “Secret,” or “Top Secret,” and only users with the appropriate security clearance can access it. These labels are enforced at the operating system level, application layer, or network access layer. Implementations like SELinux or TrustedBSD in modern operating systems provide MAC enforcement mechanisms to prevent unauthorized access, even if a user or process has high-level system privileges.
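
A minimal sketch of label-based enforcement, assuming a simple linear hierarchy of classifications: a read is permitted only when the subject's clearance dominates (is at least as high as) the object's label. The level names and numeric values are illustrative.

```python
# Linear classification lattice; higher number = more sensitive.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance, object_classification):
    # "No read up": a subject may read only at or below its own clearance.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

print(can_read("Secret", "Confidential"))  # True
print(can_read("Secret", "Top Secret"))    # False - and no owner can override it
```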

MAC offers high assurance in protecting sensitive information, particularly in environments subject to strict regulatory or classification requirements. However, it is less flexible than DAC or RBAC and requires careful planning to assign labels and clearances appropriately. Misclassification or overly restrictive labeling can hinder legitimate business operations, while insufficient labeling may expose sensitive data. MAC systems often integrate auditing and monitoring to maintain accountability, track access attempts, and support compliance reporting.

Mandatory Access Control enforces access restrictions using security labels, ensuring centralized, policy-driven control over data access. Its strict enforcement and focus on confidentiality make it indispensable in high-security environments, making it the correct answer.

Question 161:

Which type of testing evaluates system functionality without knowledge of internal code?

A) White-box Testing
B) Black-box Testing
C) Gray-box Testing
D) Static Code Analysis

Answer: B

Explanation:

Black-box testing is a software testing methodology that evaluates a system’s functionality strictly from an external perspective, without requiring knowledge of the internal structure, code, or architecture. The term “black-box” metaphorically refers to the software being tested as a sealed container where testers can observe inputs and outputs but cannot see or manipulate the internal mechanisms. This approach simulates real-world usage scenarios, making it particularly useful for assessing how software behaves under normal operation, stress conditions, or malicious attacks.

Black-box testing encompasses a variety of techniques, including functional testing, acceptance testing, system testing, and security testing. Functional testing verifies that the system meets its specified requirements by providing inputs and comparing the outputs with expected results. For example, a black-box tester evaluating a banking application might check whether entering valid credentials allows access to an account or if an invalid transfer request is appropriately rejected. Acceptance testing, often performed by end-users or stakeholders, ensures the software fulfills business needs and usability expectations.
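
A hypothetical sketch of those banking checks as black-box tests: the tester exercises only the public interface and asserts on observable outcomes. Here login() and transfer() stand in for the deployed application's endpoints, whose internals the tester never sees.

```python
# System under test, opaque to the tester (shown here only so the sketch runs).
ACCOUNTS = {"alice": {"password": "s3cret", "balance": 100}}

def login(user, password):
    acct = ACCOUNTS.get(user)
    return acct is not None and acct["password"] == password

def transfer(user, amount):
    acct = ACCOUNTS[user]
    if amount <= 0 or amount > acct["balance"]:
        return "rejected"
    acct["balance"] -= amount
    return "ok"

# Black-box checks: input -> expected output, derived purely from requirements.
assert login("alice", "s3cret") is True      # valid credentials grant access
assert login("alice", "wrong") is False      # invalid credentials are refused
assert transfer("alice", 500) == "rejected"  # overdraft must be rejected
print("black-box checks passed")
```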

Security-focused black-box testing mimics an external attacker’s approach, identifying vulnerabilities without insider knowledge of the system. Testers may probe for input validation flaws, authentication weaknesses, improper session handling, or unprotected APIs. This is particularly relevant for web applications, network services, and client-server architectures where attackers typically do not have access to source code. Black-box penetration testing involves simulating attacks to discover potential entry points or exploitation paths, helping organizations strengthen defenses before real-world adversaries can take advantage.

The advantages of black-box testing include its unbiased perspective, as testers are not influenced by knowledge of internal implementation, and its ability to evaluate software as end-users experience it. It is also scalable across multiple platforms, systems, or interfaces without requiring in-depth programming expertise. However, black-box testing has limitations. It may not uncover certain internal logic flaws, memory leaks, or complex interdependencies that are more visible through white-box or gray-box testing. Therefore, organizations often employ black-box testing in combination with other methods to ensure comprehensive coverage.

In practice, black-box testing is widely applied across industries, including finance, healthcare, e-commerce, and critical infrastructure. It is integral to regulatory compliance efforts, as it validates that software behaves correctly under expected operational conditions. Automated tools, such as Selenium for web applications or Postman for APIs, are often used to execute black-box test scripts efficiently, while manual testing remains essential for exploratory scenarios where intuition and creativity help identify subtle issues.

Black-box testing evaluates system functionality without access to internal code, focusing on inputs, outputs, and expected behavior. Its role in uncovering functional errors, vulnerabilities, and security weaknesses from an external perspective makes it indispensable for acceptance testing, penetration testing, and real-world simulation. It is the correct answer.

Question 162:

Which malware type hides its presence and maintains privileged access to a system?

A) Trojan
B) Worm
C) Rootkit
D) Virus

Answer: C

Explanation:

Rootkits are a sophisticated type of malware designed to conceal their presence on a system while maintaining persistent and privileged access, often at the kernel or system level. Unlike Trojans, which rely on social engineering or user execution, or worms, which self-propagate across networks, rootkits focus primarily on stealth and longevity. They can evade detection by traditional antivirus software, firewalls, and monitoring tools, making them exceptionally dangerous for organizations and individual users alike.

Rootkits function by integrating deeply into the operating system or hypervisor, intercepting system calls, and modifying kernel-level processes to hide files, processes, network connections, and even other malware. This allows attackers to manipulate system behavior, exfiltrate sensitive data, install additional malware, or maintain remote control over compromised machines. Advanced persistent threats (APTs) often use rootkits to establish long-term footholds in target networks, enabling espionage or sabotage without alerting security teams.

Detection of rootkits is extremely challenging due to their ability to operate below the visibility of standard monitoring tools. Specialized detection techniques include behavior-based monitoring, integrity checking of system files, memory analysis, and boot-time scanning. Some rootkits require offline inspection or even complete system reinstallation for removal, highlighting the importance of preventive security measures. Maintaining updated operating systems, applying security patches promptly, and using intrusion detection systems (IDS) are key strategies to reduce the likelihood of rootkit infections.
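
The integrity-checking technique mentioned above can be sketched as a simple baseline-and-rescan loop over watched files. As the paragraph warns, a kernel-level rootkit can lie to this very process, which is why offline or boot-time verification is preferred; the file names here are hypothetical.

```python
import hashlib

def snapshot(paths):
    # Record a SHA-256 digest for each watched file.
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

watched = ["demo_binary"]
with open("demo_binary", "wb") as f:  # stand-in for a protected system binary
    f.write(b"original code")

baseline = snapshot(watched)

with open("demo_binary", "wb") as f:  # simulate tampering by an intruder
    f.write(b"original code + implant")

for path, digest in snapshot(watched).items():
    if digest != baseline[path]:
        print(f"ALERT: {path} changed since baseline")
```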

Rootkits also present unique security considerations for cloud environments, virtualized systems, and Internet-of-Things (IoT) devices. For instance, hypervisor-level rootkits can compromise multiple virtual machines simultaneously, while firmware rootkits embedded in IoT devices can bypass conventional endpoint protections. Cybersecurity frameworks emphasize multi-layered defenses, continuous monitoring, and incident response readiness to counteract rootkit threats effectively.

Rootkits are malware designed to hide their presence and maintain privileged access at the system or kernel level. Their stealth, persistence, and potential for enabling other malicious activities make them a critical concern for both organizational and personal security. Rootkits are difficult to detect and remove, requiring specialized tools and preventive measures. The correct answer is Rootkit.

Question 163:

Which access control model applies rules based on conditions like time, location, or network?

A) Discretionary Access Control
B) Mandatory Access Control
C) Role-Based Access Control
D) Rule-Based Access Control

Answer: D

Explanation:

Rule-Based Access Control is an access control model in which access decisions are governed by predefined rules or conditions, often based on environmental or contextual factors such as time of day, user location, device type, or network attributes; despite the abbreviation it sometimes shares with role-based access control, it is a distinct model. Unlike Discretionary Access Control (DAC), where resource owners manage permissions, or Mandatory Access Control (MAC), which enforces strict label-based policies, rule-based access provides dynamic enforcement tailored to specific operational requirements.

This model is commonly layered on top of other access control systems to provide flexibility and enhanced security. For example, an organization may use Role-Based Access Control to assign permissions according to job functions, and then apply rule-based policies to enforce additional constraints such as restricting access to sensitive databases outside of business hours or from untrusted networks. Rules can also enforce multifactor authentication when conditions are deemed high risk, such as logins from unfamiliar locations or devices.
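
A hypothetical sketch of such layered, condition-based rules: access to a sensitive resource is allowed only during business hours and only from the corporate network. The hours, network range, and addresses are all invented.

```python
from datetime import datetime
from ipaddress import ip_address, ip_network

BUSINESS_HOURS = range(8, 18)                # 08:00-17:59
CORPORATE_NET  = ip_network("10.20.0.0/16")  # hypothetical internal range

def access_allowed(request_time: datetime, source_ip: str) -> bool:
    in_hours   = request_time.hour in BUSINESS_HOURS
    on_network = ip_address(source_ip) in CORPORATE_NET
    return in_hours and on_network  # both environmental conditions must hold

print(access_allowed(datetime(2024, 5, 1, 10, 30), "10.20.3.7"))    # True
print(access_allowed(datetime(2024, 5, 1, 23, 5),  "10.20.3.7"))    # False: after hours
print(access_allowed(datetime(2024, 5, 1, 10, 30), "203.0.113.9"))  # False: external IP
```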

Rule-Based Access Control is highly relevant in cloud computing, hybrid networks, and mobile device management environments. It allows organizations to implement context-aware security policies, automatically adjusting access permissions to reduce risk without manual intervention. This helps protect sensitive data, prevent unauthorized access, and comply with regulatory frameworks that mandate access logging and conditional control, such as GDPR, HIPAA, and PCI DSS.

Implementation of rule-based policies often involves centralized policy engines or identity and access management (IAM) systems that evaluate conditions in real time. Examples include Microsoft Azure Conditional Access policies, AWS IAM policy conditions, and various enterprise identity federation solutions. By automating access control based on environmental context, organizations can achieve a balance between usability and security, minimizing the need for manual oversight while ensuring adaptive protections.

Rule-Based Access Control enforces access according to predefined rules or environmental conditions, providing dynamic and context-aware security. Its ability to apply restrictions based on time, location, network attributes, or device type distinguishes it from other access control models, making it the correct answer.

Question 164:

Which testing technique involves feeding random or unexpected inputs to a system?

A) Static Code Analysis
B) Fuzz Testing
C) Black-box Testing
D) White-box Testing

Answer: B

Explanation:

Fuzz testing, commonly called fuzzing, is a software testing technique that evaluates system resilience and security by providing random, malformed, or unexpected inputs to applications, APIs, or protocols. The primary goal is to uncover defects, vulnerabilities, crashes, unhandled exceptions, buffer overflows, and other input-handling weaknesses that may be exploited by attackers. Fuzzing complements other testing approaches by targeting scenarios that are difficult to anticipate through traditional functional or white-box testing.

Fuzz testing can be performed in multiple forms: black-box fuzzing, where the tester does not know the system internals; white-box fuzzing, which leverages knowledge of source code to guide input generation; and gray-box fuzzing, which uses partial internal knowledge to improve coverage. Modern fuzzing tools often integrate automated input generation, mutation, and feedback-based techniques to systematically explore potential vulnerabilities across large codebases. Examples include AFL (American Fuzzy Lop), OSS-Fuzz, and proprietary tools used by enterprises for security testing.

Fuzzing is particularly valuable for security testing because many vulnerabilities arise from improper input validation, unexpected data formats, or edge cases. For instance, malformed HTTP requests, malformed file headers, or unusual characters in user input may trigger memory corruption or logic errors. Fuzz testing allows organizations to proactively identify such weaknesses before attackers exploit them, improving the robustness and reliability of software systems.
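
A toy mutation fuzzer illustrates the idea: flip one random byte of a valid input, feed it to a parser, and treat anything other than a clean rejection as a finding. parse_header() is a deliberately buggy, hypothetical target rather than a real file-format parser.

```python
import random

def parse_header(data: bytes) -> int:
    if data[:4] != b"IMG1":
        raise ValueError("bad magic")  # clean rejection of malformed input
    width = data[4]
    # Bug: trusts the declared width and indexes past the buffer when it lies.
    return data[5 + width]

valid = b"IMG1" + bytes([7]) + bytes(range(8))  # well-formed sample input
random.seed(1)  # reproducible run

for i in range(1000):
    sample = bytearray(valid)
    pos = random.randrange(len(sample))
    sample[pos] = random.randrange(256)  # mutate one random byte
    try:
        parse_header(bytes(sample))
    except ValueError:
        pass  # expected rejection, not a defect
    except Exception as e:  # any other exception is a crash worth reporting
        print(f"crash at iteration {i}: {type(e).__name__} on {bytes(sample)!r}")
        break
```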

Additionally, fuzz testing is widely applied in compliance-driven industries, such as finance, healthcare, and critical infrastructure, where software must be resilient against malicious inputs. Automated fuzzing integrated into the DevSecOps pipeline helps identify defects early in development, reducing remediation costs and enhancing secure coding practices.

Fuzz testing involves feeding random or unexpected inputs to a system to identify vulnerabilities and robustness issues. Its ability to expose input validation flaws, crashes, and security weaknesses makes it an essential tool for modern software testing and security assurance. The correct answer is Fuzz Testing.

Question 165:

Which cloud deployment model is a combination of private and public clouds?

A) Public Cloud
B) Private Cloud
C) Hybrid Cloud
D) Community Cloud

Answer: C

Explanation:

Hybrid cloud is a cloud deployment model that combines elements of private and public cloud environments to provide organizations with flexibility, scalability, and control. In a hybrid model, certain workloads, applications, or sensitive data may reside on a private cloud infrastructure—either on-premises or hosted by a dedicated provider—while other workloads leverage the public cloud for elasticity, high availability, or cost efficiency.

The hybrid approach allows organizations to balance competing priorities, such as performance, security, compliance, and cost. Sensitive customer data, intellectual property, or mission-critical applications can remain within a controlled private environment, while less sensitive workloads, such as testing environments or web applications, can dynamically scale using public cloud resources. This enables effective load balancing, disaster recovery, and business continuity planning.

Hybrid cloud adoption requires robust integration between private and public environments. Consistent networking, security policies, identity management, and monitoring are critical to ensure seamless operation and protect data across platforms. Technologies such as VPNs, cloud management platforms, and container orchestration tools like Kubernetes facilitate hybrid cloud management, enabling organizations to migrate workloads, enforce unified access controls, and maintain visibility across both environments.

Industries such as finance, healthcare, and government benefit from hybrid clouds because they can comply with strict regulations while still leveraging cloud innovation. In finance, hybrid cloud enables institutions to store sensitive customer data and transaction records within private, highly secured environments while using the public cloud for analytics, reporting, or temporary high-demand processing. Healthcare organizations can protect patient health information in private cloud environments to meet HIPAA requirements, while simultaneously using public cloud resources for telemedicine platforms, research data analysis, and collaborative projects that require scalability. Government agencies leverage hybrid cloud to maintain control over classified or sensitive information while using public cloud services to efficiently manage citizen-facing applications, data sharing, and inter-agency collaboration.

Hybrid deployments also support burst computing, temporary project workloads, and geographic redundancy for disaster recovery, providing both flexibility and resilience. Organizations can handle sudden spikes in demand without over-provisioning private infrastructure, reducing costs while maintaining performance. Disaster recovery strategies are enhanced as hybrid clouds allow replication of critical systems across geographically dispersed public and private resources, minimizing downtime and mitigating the impact of regional outages or natural disasters.

In addition, hybrid cloud fosters innovation by allowing teams to experiment with public cloud tools and services—such as AI, machine learning, and big data analytics—without compromising core private workloads. Security policies and compliance controls can be consistently enforced across environments, ensuring that innovation does not come at the expense of regulatory adherence.

A hybrid cloud combines private and public cloud resources to achieve a balance of control, security, scalability, and cost efficiency. By enabling organizations to selectively place workloads based on sensitivity, operational requirements, and performance needs, hybrid cloud offers a versatile, strategic, and resilient deployment model. The correct answer is Hybrid Cloud.