CompTIA CS0-003 CySA+ Exam Dumps and Practice Test Questions Set 1 Q1-15

Question 1

A security analyst observes repeated failed authentication attempts against several service accounts from an external IP address over a short period. The logs show a low but increasing rate of attempts, and the source addresses rotate through a small set of addresses. Which of the following is the most likely explanation, and what should be the next prioritized response?

A) The activity is a distributed denial-of-service attack targeting authentication infrastructure; throttle or rate-limit incoming traffic.
B) The activity is a credential-stuffing attack using a list of leaked credentials; block the source IPs and require password resets for affected accounts.
C) The activity is normal activity from an external scanning service; whitelist the IP addresses after verification.
D) The activity indicates a misconfigured client repeatedly attempting authentication due to incorrect saved credentials; contact the users to update saved credentials.

Answer: B)

Explanation:

A) This option interprets the activity as a distributed denial-of-service attack. In a denial-of-service attack, numerous requests are directed at a service to exhaust resources and disrupt availability. These attacks often flood the target at high volume and can affect multiple components. Throttling or rate-limiting incoming traffic can mitigate volumetric and connection-exhaustion attacks by limiting the number of requests accepted from the same source or to the same endpoint. However, the observed characteristics here—repeated failed authentication attempts against service accounts, a low but increasing rate, and rotated source addresses—are not classic high-volume denial-of-service patterns meant to overwhelm infrastructure; they more closely resemble an authentication abuse pattern. Throttling and rate-limiting might blunt the attempts but do not address the root risk of credential compromise, and they risk disrupting legitimate authentication traffic if not carefully tuned. Thus, treating this first as a denial-of-service is less likely to be appropriate.

B) The pattern described aligns with credential-stuffing behavior: attackers use lists of username/password pairs obtained from breaches and attempt to reuse them across services. Credential-stuffing attacks commonly involve automated tools that try many login attempts at a modest per-second rate to avoid rate-limiting or detection. Attackers rotate source IP addresses or proxies to bypass IP-based blocks, and they often target service or privileged accounts because these accounts yield more value. The appropriate prioritized response is to block known malicious source IPs or IP ranges temporarily to slow the attack, enforce immediate password resets or MFA enrollment for the affected accounts, and increase monitoring for successful logins from unusual locations. Additional steps include checking for reuse of credentials, analyzing logs for any successful authentication, and implementing or strengthening rate-limiting, geofencing, and multifactor authentication. The focus is on preventing unauthorized access and remediating possible compromised credentials rather than only mitigating traffic volume.

C) The suggestion that this is normal scanning from an external service is plausible only if the source is a trusted scanning provider and the pattern matches their scheduled activity. Trusted scanners often identify themselves by consistent IP ranges or user-agent tags and are coordinated with the target environment. The observed activity’s characteristics—low but increasing rate, attempts across service accounts, rotated addresses—are atypical for benign scanning. Before whitelisting any IPs, verification with the presumed scanning provider is required. Whitelisting without verification could enable attackers to continue probing. Therefore, treating the activity as legitimate scanning without confirmation is risky.

D) A misconfigured client repeatedly attempting authentication due to incorrectly saved credentials could produce repeated failed logins, but such failures are usually localized to a user endpoint or a small set of internal addresses related to the affected users. The incident description cites external IP addresses and rotation through a small set of addresses, which points away from local misconfiguration. Moreover, service accounts are typically not associated with individual user devices holding saved credentials. Contacting users to update saved credentials is appropriate for internal misconfigurations, but would not address an attack originating externally.

Reasoning about why the selected answer is correct: The described behavior best matches credential-stuffing—automated attempts using known credentials across services, modest per-source rates to avoid detection, and rotating source IPs to evade static blocking. The top priority is preventing unauthorized access by blocking malicious sources where possible, forcing resets for affected accounts, enabling or enforcing multifactor authentication, and performing log and forensic analysis to detect any successful compromise. Additional longer-term controls include implementing credential hygiene policies, user education, monitoring for credential reuse patterns, and deployment of anomaly detection tuned for authentication abuse. These measures both mitigate the immediate threat and reduce the surface for future credential-based attacks.
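To make the detection logic concrete, here is a minimal sketch of how a SOC might flag this pattern in parsed authentication logs: many distinct source IPs, each making only a few failed attempts against the same account. The event format, field names, and thresholds (`min_sources`, `max_per_source`) are illustrative assumptions, not part of the exam scenario:

```python
from collections import defaultdict

def flag_credential_stuffing(events, min_sources=3, max_per_source=10):
    """Flag accounts whose failed logins are spread across several source
    IPs, each at a low rate -- a pattern consistent with credential stuffing.
    Thresholds are illustrative, not canonical."""
    failures = defaultdict(lambda: defaultdict(int))  # account -> src_ip -> count
    for ev in events:
        if ev["result"] == "failure":
            failures[ev["account"]][ev["src_ip"]] += 1
    flagged = []
    for account, per_ip in failures.items():
        # Many sources, each keeping volume low enough to dodge lockouts.
        if len(per_ip) >= min_sources and all(c <= max_per_source for c in per_ip.values()):
            flagged.append(account)
    return sorted(flagged)

events = [
    {"account": "svc-backup", "src_ip": "203.0.113.5", "result": "failure"},
    {"account": "svc-backup", "src_ip": "203.0.113.9", "result": "failure"},
    {"account": "svc-backup", "src_ip": "198.51.100.2", "result": "failure"},
    {"account": "alice", "src_ip": "10.0.0.4", "result": "failure"},
]
print(flag_credential_stuffing(events))  # ['svc-backup']
```

In practice the same grouping would run over SIEM query results rather than an in-memory list, but the shape of the heuristic is the same.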

Question 2

A SOC has noticed a spike in DNS requests from internal workstations to a rarely used external domain. Inspection shows the DNS responses include unusual TXT records and occasional small A record changes. Which investigation step should the analyst take first, and why?

A) Perform a blocklist update to prevent all DNS queries to the domain and notify users to stop using the domain.
B) Capture network traffic to and from the suspected domain for protocol analysis and content extraction.
C) Reimage the affected workstations immediately to remove any potential persistent malware.
D) Disable DNS on the affected subnets and switch to a public DNS resolver to avoid poisoning.

Answer: B)

Explanation:

A) Updating the blocklist to prevent DNS queries to the domain is a containment measure that can be useful if the domain is confirmed malicious. Blocking at the DNS resolver or firewall stops further communication and can prevent data exfiltration or command-and-control exchanges. However, doing this as the first step without investigation may eliminate the ability to monitor and collect evidence. Immediate blocking might hinder forensic analysis, such as capturing C2 traffic or payloads delivered via DNS, which could provide indicators of compromise and allow identification of other affected hosts. Blocking can be appropriate after securing evidence or when an active threat requires immediate containment, but it’s not optimal as the initial step when more information is needed.

B) Capturing network traffic for the suspected domain is the correct first step because DNS anomalies involving rare domains and unusual TXT records often indicate use of DNS for covert channels, command-and-control, or data exfiltration. Performing packet capture allows the analyst to inspect query and response payloads, timestamps, fluxing A records, and patterns that might indicate DGA-generated domains or data encoded within TXT records. Protocol analysis helps determine if the DNS records are being used for legitimate services (e.g., SaaS verification) or malicious use. Extracted content from the traffic can be decoded or reconstructed to reveal commands or data being exfiltrated. Building a record of the malicious traffic provides the forensic evidence necessary to justify subsequent containment and remediation actions like blocking or reimaging.

C) Reimaging affected workstations removes persistent malware, but is an extreme measure to take immediately without confirming the scope and nature of the compromise. Reimaging before capturing evidence loses volatile artifacts and network-based indicators. Additionally, if multiple hosts are involved or if the infection vector is unknown, reimaging individual workstations may not prevent recurrence. Reimaging is more appropriate after investigation, collection of evidence, and identification of the root cause and scope.

D) Disabling DNS on affected subnets and switching to a public DNS resolver is not a practical immediate response. Disabling DNS would severely disrupt normal network operations, and switching to an external resolver could increase risk by sending internal queries to an uncontrolled resolver. Proper containment should rely on controlled blocking and logging at the organization’s DNS servers or firewalls. This course of action would also remove visibility and impede the investigation.

Reasoning about why the selected answer is correct: DNS-based covert channels are commonly used because DNS is allowed through many networks, has low detection rates, and can carry data within record types such as TXT. The best first action when spotting unusual DNS behavior is to capture and analyze traffic to determine intent, payloads, and affected hosts. Doing so preserves evidence, enables decoding of any embedded commands or data, and supports informed decisions about containment like targeted blocking, sinkholing, or host remediation. After analysis, the team can implement precise containment to minimize disruption while ensuring eradication and recovery steps are effective.
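As a small illustration of the payload-extraction step, the sketch below attempts to base64-decode TXT record strings pulled from a capture; ordinary verification records (such as SPF) fail strict decoding, while encoded tunnel data round-trips cleanly. The sample values, and the assumption that the channel uses plain base64, are illustrative:

```python
import base64
import binascii

def try_decode_txt(txt_values):
    """Attempt strict base64 decoding of captured TXT record strings.
    Returns decoded bytes for values that decode, None for those that
    do not (e.g. SPF or domain-verification records)."""
    decoded = {}
    for value in txt_values:
        try:
            decoded[value] = base64.b64decode(value, validate=True)
        except (binascii.Error, ValueError):
            decoded[value] = None
    return decoded

# Fabricated capture: one encoded value, one ordinary SPF record.
captured = ["dXBsb2FkIC9ldGMvcGFzc3dk", "v=spf1 include:example.com ~all"]
for value, raw in try_decode_txt(captured).items():
    print(value, "->", raw)
```

Real tunneling frameworks often layer custom encodings or encryption on top, so a failed base64 decode does not clear a record; it only separates the easy cases for a first pass.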

Question 3

During a routine vulnerability scan, several endpoints report a critical privilege escalation vulnerability that requires a local exploit to trigger. The asset owner says these endpoints are user desktops with up-to-date vendor patches. What is the most appropriate triage for this finding?

A) Mark the finding as a false positive and close the scan ticket because vendor patches are up-to-date.
B) Prioritize the vulnerability as high for immediate patching and re-scanning, since local escalation is always critical.
C) Validate the scanner’s findings by cross-checking with the vendor advisory, local configuration, and exploitability in your environment before determining risk.
D) Isolate the endpoints from the network immediately to prevent exploitation.

Answer: C)

Explanation:

A) Labeling the finding as a false positive solely because vendor patches are reported as current is premature. Vulnerability scanners can misinterpret configurations, may use outdated signatures, or may not fully account for local configuration differences that affect exploitability. Closing the ticket without validation risks leaving a real vulnerability unaddressed. Instead, verification steps are necessary to confirm whether the scanner’s assertion aligns with true exposure.

B) While privilege escalation vulnerabilities can be severe, especially if they can be exploited remotely, those requiring local access have lower immediate risk unless there is evidence of remote code execution or lateral movement enabling local access. Jumping straight to immediate patching and the highest priority could be wasteful or disruptive without confirming whether the vulnerability is present and exploitable, given the patched status. Prioritization should be based on exploitability, presence of mitigating controls, and exposure likelihood.

C) Validating the scanner finding is the correct course of action. This includes consulting the vendor advisory to understand the vulnerability’s nature, CVE details, and patch applicability. Review the local configuration, patch levels, and any mitigations in place (such as application whitelisting or endpoint protection features). If feasible in a controlled environment, attempt to reproduce exploitability using safe proof-of-concept code under lab conditions or rely on reputable exploitability scoring resources. Confirm whether the scanner used authenticated checks (which are more reliable) or only unauthenticated methods. This validation determines actual risk and informs remediation priorities. It also prevents unnecessary downtime from rollback or emergency patching when not required.

D) Isolating endpoints immediately is drastic and should be reserved for confirmed active exploitation or when evidence indicates imminent compromise. Isolation disrupts productivity and may be disproportionate for a finding that requires local access and is not yet validated. Containment should be proportionate to risk and based on evidence.

Reasoning about why the selected answer is correct: Effective vulnerability triage balances urgency with accuracy. Critically evaluating scanner output against authoritative sources, local configurations, and evidence of exploitability ensures appropriate prioritization and avoids unnecessary operational impact. This approach allows security teams to focus on genuinely exploitable and high-impact vulnerabilities while documenting findings and remediation steps for auditability.

Question 4

A packet capture shows an internal host making multiple HTTPS POST requests to an external IP at regular intervals. The User-Agent is a legitimate browser string, and POST payloads are small and appear encoded. What is the most likely use of this traffic, and which immediate control should be applied?

A) This is normal web browsing activity; no action required.
B) This suggests command-and-control using an HTTP-based covert channel; block the external IP and extract payloads for decoding.
C) This is likely a cloud backup service; whitelist the IP and continue monitoring.
D) This indicates a web application vulnerability being exploited in the internal host; patch the local browser.

Answer: B)

Explanation:

A) Normal web browsing typically involves varied domains, larger payloads for POSTs like form submissions, and user-driven timing. Regular, periodic POST requests of small encoded payloads to a single external IP are not characteristic of benign browsing. Small, regularly scheduled transmissions suggest automation and potentially covert channels. Treating it as normal browsing without investigation risks missing data exfiltration or persistent communication.

B) The pattern described matches common command-and-control techniques where malware or an agent uses HTTP or HTTPS POST requests to blend in with legitimate traffic. Using legitimate User-Agent strings and HTTPS helps evade detection. Small, encoded payloads often represent beaconing (periodic check-ins) or exfiltrated data fragments. The appropriate immediate control is to block the external IP to halt further communication, while simultaneously capturing and extracting payloads for decoding to understand commands or data sent. Blocking should be balanced with forensic needs: ensure captures are saved before blocking to preserve evidence and that containment does not destroy volatile data required for investigation.

C) A cloud backup service could use periodic HTTPS connections, but such services typically use well-known domains or cloud provider IP ranges and often use larger payloads or distinct API endpoints. Before whitelisting, verify the destination is a legitimate vendor domain and corroborate with asset owners. Assuming it’s a backup service without verification is risky.

D) The scenario does not describe browser exploitation via a web application vulnerability; instead it indicates outbound automated communications from a host. Patching the browser is unlikely to stop a malware agent that has already been installed and is performing outbound POSTs. A more comprehensive remediation including forensic analysis and potential malware removal would be required.

Reasoning about why the selected answer is correct: Periodic, encoded HTTPS POSTs to an external address resemble beaconing used by command-and-control frameworks and data exfiltration tools. The immediate priority is to stop ongoing malicious communications while preserving evidence for analysis. Blocking and extracting payloads enables containment and helps reveal the nature and scope of the compromise. Follow-up actions include scanning the internal host for persistence mechanisms, reviewing endpoint telemetry, and applying network-based detection signatures to prevent similar behavior in other hosts.
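The beaconing regularity described above can be tested numerically: automated check-ins show near-constant inter-arrival times, while human-driven browsing does not. The jitter threshold below is an illustrative assumption:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter_ratio=0.1):
    """Return True when connection timestamps (seconds) have nearly
    constant gaps -- the low-jitter signature of automated beaconing.
    The jitter ratio threshold is illustrative."""
    if len(timestamps) < 3:
        return False  # too few events to judge periodicity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # Coefficient of variation: std deviation relative to the mean gap.
    return avg > 0 and pstdev(gaps) / avg < max_jitter_ratio

beacon = [0, 45, 90, 135, 181, 225]    # ~45 s apart, tiny jitter
browsing = [0, 5, 12, 80, 95, 300]     # irregular, user-driven timing
print(looks_like_beaconing(beacon))    # True
print(looks_like_beaconing(browsing))  # False
```

Mature detections add randomized-jitter handling and per-destination baselining, but even this simple coefficient-of-variation check separates the two patterns in the scenario.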

Question 5

A behavior-based IDS alerts on a process that is executing base64-decoded payloads from memory and creating outbound connections on nonstandard ports. Endpoint detection shows the process is signed by a known vendor but was recently updated from an untrusted source. Which of the following best describes the likely situation and the next forensic step?

A) A benign signed application performing a normal auto-update; approve the update and close the alert.
B) A supply-chain or binary tampering incident where a legitimate-signed binary was replaced with a malicious variant; perform a full memory dump and preserve artifacts for analysis.
C) A false positive due to heuristic detection; tune the IDS to reduce similar alerts.
D) A malware infection unrelated to the signed binary; focus on network blocking and ignore the binary signature.

Answer: B)

Explanation:

A) While signed applications can perform legitimate update activities, the behavior observed—decoding payloads in memory and establishing outbound connections on nonstandard ports—is suspicious. Legitimate vendor updater processes generally use documented update mechanisms, connect to vendor domains or trusted CDNs, and do not decode arbitrary payloads in memory. Accepting the update without investigation risks ignoring a potentially serious compromise.

B) The described situation aligns with a supply-chain compromise or binary tampering where a signed executable has been replaced or re-signed with malicious content. Attackers sometimes distribute malicious binaries via compromised update channels or counterfeit installers, maintaining digital signatures or using stolen signing keys to appear legitimate. When an otherwise expected signed process exhibits anomalous runtime behavior, that is an indicator of compromise. The appropriate next forensic step is to perform a full memory dump of the affected host to capture running process memory, loaded modules, and in-memory payloads that may not exist on disk. Preserving volatile artifacts is crucial for reverse engineering, attribution, and determining the scope of impact. Additional steps include collecting the binary from disk for static analysis, comparing signatures and hashes with vendor-supplied versions, and isolating the host to prevent lateral movement while preserving forensic integrity.

C) Tuning the IDS because of a presumed false positive is risky without verification. Heuristics can generate false positives, but when behavioral anomalies include memory decoding and network communications to unusual ports, escalation for investigation is warranted. Tuning should only follow confirmation that the activity is benign.

D) Claiming the infection is unrelated to the signed binary ignores the clear correlation between the process exhibiting malicious behavior and that binary’s recent update from an untrusted source. Network blocking is part of containment, but ignoring the binary’s provenance and execution context forfeits essential investigative leads. The signature may be forged or the delivery path compromised; investigating the binary and update source is necessary.

Reasoning about why the selected answer is correct: Supply-chain and binary tampering incidents are high-impact because they can affect many systems and be difficult to detect when malware appears as a trusted signed binary. Memory forensics captures runtime evidence of injected payloads, ephemeral unpacked code, and network indicators that disk-based scans may miss. A deliberate forensic approach—memory dump, disk image, hash comparison, and vendor coordination—enables accurate determination of compromise, informs remediation (e.g., revocation of signing certificates, rollback of updates), and supports detection of other affected systems.
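One of the validation steps mentioned, comparing the collected binary’s hash against the vendor-supplied value, can be sketched as follows. The throwaway file and computed hash stand in for the real binary and the hash a vendor advisory would publish:

```python
import hashlib
import os
import tempfile

def verify_binary(path, vendor_sha256):
    """Compare the SHA-256 of a collected binary against the hash the
    vendor publishes; a mismatch on a supposedly signed binary is a
    strong tampering indicator."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large binaries don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() == vendor_sha256.lower()

# Demo against a throwaway file standing in for the collected binary.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"fake binary contents")
tmp.close()
good = hashlib.sha256(b"fake binary contents").hexdigest()
print(verify_binary(tmp.name, good))       # True
print(verify_binary(tmp.name, "00" * 32))  # False
os.unlink(tmp.name)
```

Hash comparison complements, rather than replaces, signature checks: a binary can carry a valid signature (stolen or abused key) and still differ from the vendor’s published build.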

Question 6

A security analyst notices that several Linux servers are establishing outbound SSH connections to a foreign IP address at random intervals, even though none of these servers should initiate SSH sessions externally. Log review shows the outbound sessions are initiated by a process named “sysd-helper,” which is not part of the approved baseline. What is the most appropriate assessment and response?

A) The traffic is part of a legitimate remote backup synchronization process; verify with system administrators before taking action.
B) The behavior indicates a worm propagating through SSH; immediately isolate all affected servers from the network.
C) The activity suggests a compromised host establishing command-and-control channels; collect volatile memory and preserve forensic images.
D) The SSH initiation is due to misconfigured cron jobs; disable the jobs and report configuration drift.

Answer: C)

Explanation:

The first listed response suggests a scenario involving legitimate backup synchronization. Some environments indeed utilize outbound SSH for remote backups or data replication. These processes often rely on designated accounts, predictable schedules, and known remote destinations. Here, however, the random intervals, foreign IP destination, and an unrecognized process name provide a strong mismatch with typical authorized backup procedures. Backup tools also usually have recognizable process names and are documented in system baselines. Relying on a benign interpretation would overlook multiple red flags.

The second listed response presents an interpretation involving a worm relying on SSH for propagation. SSH worms do exist, especially those targeting weak credentials or known SSH vulnerabilities. Such worms often attempt to log into other internal hosts rather than establish outbound connections to external unmanaged IP addresses. Propagation-focused malware typically performs rapid, high-volume scanning or brute force attempts. The behavior described—random outbound sessions toward one external destination initiated by an unknown process—does not match high-frequency propagation activity. Although isolation might still be needed later, premature isolation without forensic collection risks losing essential evidence.

The third listed response identifies the situation as a compromise involving command-and-control communication. This analysis aligns well with the observed indicators. An unrecognized process on Linux systems initiating outbound SSH sessions is highly suspicious, especially when the process name resembles a legitimate system service but is slightly altered, which is a common tactic used to evade casual inspection. The foreign IP destination, irregular timing of connections, and lack of legitimate need for outbound SSH strengthen the case for malicious remote control. In such cases, immediate forensic actions—especially collecting volatile memory—are essential to preserve evidence of any in-memory payloads, active network connections, encryption keys, malware configurations, and system state. Creating full forensic disk images ensures investigators can later analyze persistence mechanisms, unauthorized binaries, and lateral movement traces. Preserving evidence before containment is crucial because many modern threats erase logs or terminate malicious processes once network connectivity is lost.

The fourth listed response attributes activity to misconfigured scheduled tasks. While cron misconfigurations can trigger unexpected processes, legitimate cron tasks rarely spawn outbound SSH connections to unknown foreign servers unless specifically configured to do so. Cron jobs also typically call known scripts or binaries residing in standard locations. In this case, the suspicious process name and unexpected network behavior make a configuration issue unlikely. Addressing configuration drift may be a step in general system hygiene, but it does not explain unauthorized outbound SSH activity tied to a non-baseline process.

Reasoning for why the selected response is correct centers on the combination of indicators pointing to remote control. Malware that uses SSH for command-and-control is less common than HTTPS-based channels, but it exists and offers attackers encrypted communication with minimal anomaly detection. A process masquerading as a legitimate service elevates the likelihood of compromise, and the random timing suggests beacon-like behavior. Proper incident response emphasizes capturing volatile data early, as many key artifacts exist only in memory. Disk imaging ensures investigators can analyze the full impact, including persistence mechanisms like modified systemd services, SSH backdoors, altered authorized_keys files, or trojanized binaries. Only after evidence is preserved should containment steps such as isolating servers, blocking suspicious IP ranges, and examining peer systems occur. When handled correctly, this approach enables accurate scoping, supports remediation, and prevents reinfection while maintaining an audit trail for reporting and post-incident improvements.
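A simple baseline comparison like the one below is often the first hunting step for lookalike process names such as “sysd-helper”; the baseline set shown is an illustrative stand-in for an organization’s approved process inventory:

```python
# Illustrative approved-process baseline; a real one comes from
# configuration management, not a hard-coded set.
APPROVED_BASELINE = {"systemd", "sshd", "cron", "rsyslogd", "nginx"}

def non_baseline_processes(running):
    """Return process names not present in the approved baseline.
    Lookalike names ('sysd-helper' vs 'systemd') fail this check
    because they are compared exactly, not fuzzily."""
    return sorted(p for p in running if p not in APPROVED_BASELINE)

running = {"systemd", "sshd", "cron", "sysd-helper"}
print(non_baseline_processes(running))  # ['sysd-helper']
```

Exact-match baselining is deliberately strict: a name chosen to resemble a service still differs byte-for-byte, so it surfaces for review even when it fools casual visual inspection.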

Question 7

A company’s threat intelligence platform reports that a newly discovered malware strain uses encrypted DNS tunneling for exfiltration. Shortly afterward, the SOC detects multiple internal devices generating unusually long DNS queries with high entropy strings. What should the analyst do first to validate whether these devices are compromised?

A) Immediately block all DNS requests from the affected devices and force reimaging.
B) Perform packet captures of the DNS traffic and analyze payload structure and timing.
C) Disable DNSSEC validation on internal resolvers to reduce entropy.
D) Contact the threat intelligence vendor and request confirmation before investigation.

Answer: B)

Explanation:

The first listed response proposes immediate blocking and reimaging. While such actions may eventually become necessary, jumping directly to them without investigation risks destroying volatile forensic data that could reveal infection vectors, malware behavior, and exfiltrated content. Reimaging without understanding the cause also prevents determining whether infections originated from user behavior, vulnerable services, or lateral movement. Premature containment prevents analysis of the tunneling method and may leave undetected similar infections elsewhere.

The second listed response recommends capturing and analyzing DNS traffic. This aligns perfectly with the described situation. DNS tunneling relies on encoding data into query names or TXT records, often producing long, high-entropy labels. Packet captures allow analysts to inspect patterns, such as consistent query lengths, repetition rates, domain structures, and encoded data formats. Examining timing intervals helps determine whether devices are beaconing or transferring data in bursts. Captured traffic can reveal whether the queries match known tunneling frameworks, such as Iodine or DNScat2, or whether the malware uses custom encoding. This evidence provides confirmation of compromise, informs containment decisions, and identifies which systems require remediation. It also allows correlation with threat intelligence indicators such as known malicious domains or authoritative name servers used by tunneling malware.

The third listed response involves disabling DNSSEC validation. DNSSEC ensures authenticity of DNS responses and does not influence entropy within queries themselves. The high-entropy strings detected are within the queries generated by the devices, not in the DNSSEC signatures. Disabling DNSSEC would reduce security and create DNS spoofing risks without addressing the problem.

The fourth listed response suggests waiting for vendor confirmation. Threat intelligence provides context, but SOC teams cannot delay investigation when suspicious activity corresponds to known malicious behaviors. Timeliness is crucial in preventing data loss, reducing attacker dwell time, and minimizing damage.

Reasoning supporting the correct response emphasizes that DNS tunneling requires detailed inspection to confirm malicious use. Capturing and analyzing traffic verifies whether the entropy, size, and pattern of queries match tunneling techniques. This investigative step precedes containment because it collects essential evidence and prevents premature disruption. After validation, analysts can block domains or IPs associated with tunneling, isolate infected hosts, and conduct endpoint forensics to determine initial compromise vectors. This approach yields a comprehensive understanding of the incident, aids in preventing recurrence, and supports incident reporting requirements.
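The high-entropy labels described above can be scored directly with Shannon entropy; tunneled or encoded data typically scores well above ordinary hostnames. The length and entropy thresholds are illustrative assumptions:

```python
import math
from collections import Counter

def shannon_entropy(label):
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def flag_tunneling(qname, threshold=3.5, min_len=30):
    """Flag a query name whose first label is both long and high-entropy,
    the shape produced by data encoded into DNS queries. Thresholds are
    illustrative and would be tuned against local traffic."""
    first = qname.split(".")[0]
    return len(first) >= min_len and shannon_entropy(first) > threshold

print(flag_tunneling("a9f3kq7zw1x8d0pltn5vbc2m6r4hjy.badexample.net"))  # True
print(flag_tunneling("www.example.com"))                                # False
```

Entropy scoring is a triage filter, not a verdict: some legitimate services (CDNs, cloud telemetry) also generate random-looking labels, which is why packet capture and payload analysis remain the confirming step.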

Question 8

An enterprise discovers that several Windows endpoints are beaconing to a suspicious domain over HTTPS every 45 seconds. The user-agent string mimics Microsoft Update, but the domain is newly registered and not associated with Microsoft. What is the most effective next action?

A) Block the domain at the firewall and continue monitoring for recurring attempts.
B) Initiate endpoint isolation and capture full memory snapshots for malware analysis.
C) Whitelist the domain temporarily until more intelligence is gathered.
D) Redirect all traffic to the domain using DNS sinkholing to observe payloads.

Answer: B)

Explanation:

The first listed response involves blocking the domain. Blocking malicious destinations is an important containment step, but performing it first prevents collecting in-transit payloads exchanged through the beaconing channel. Without payload captures, analysts might miss commands issued by attackers, encryption keys, or lateral movement instructions. Moreover, some malware terminates itself or alters behavior when communications fail, eliminating evidence.

The second listed response focuses on isolating endpoints and collecting full memory snapshots. Beaconing to a newly registered domain with spoofed user-agent strings strongly indicates an active command-and-control channel. Memory captures preserve decrypted payloads, configuration files stored in memory, injected modules, and encryption keys used by malware. This allows analysts to determine malware families, persistence mechanisms, and techniques used. Isolation prevents attackers from receiving warnings or issuing additional commands, reducing further risk.

The third listed response proposes whitelisting the suspicious domain. Newly registered domains are high-risk, and domain ownership patterns are easily checked. Allowing continued communication to a likely command-and-control server could expose the organization to data theft or ransomware deployment. Trust should never be granted without verification.

The fourth listed response suggests DNS sinkholing. Sinkholing can be valuable for large-scale infections, but it changes communication patterns and prevents the malware from receiving legitimate attacker instructions. It can also interfere with evidence collection and is better deployed after understanding malware behavior.

Reasoning supporting the correct response rests on the principle that suspected active compromise requires immediate isolation and evidence preservation. Memory forensics is essential because many modern malware families operate primarily in memory and never write fully formed payloads to disk. Capturing memory provides analysts with executable code, network artifacts, and volatile indicators necessary for complete incident remediation.
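One quick triage check implied by this scenario is comparing the claimed User-Agent against the actual destination: a client claiming to be Microsoft Update should be talking to Microsoft-operated domains. The agent strings and domain suffix list below are illustrative assumptions, not a vetted allow-list:

```python
def spoofed_update_agent(user_agent, sni_host,
                         legit_suffixes=(".microsoft.com", ".windowsupdate.com")):
    """Flag connections whose User-Agent claims to be Microsoft Update
    but whose TLS SNI hostname is not on the (illustrative) list of
    Microsoft-operated domain suffixes."""
    claims_update = "Windows-Update-Agent" in user_agent or "Microsoft" in user_agent
    legit_dest = any(sni_host.endswith(s) for s in legit_suffixes)
    return claims_update and not legit_dest

print(spoofed_update_agent("Windows-Update-Agent/10.0", "cdn.update.microsoft.com"))  # False
print(spoofed_update_agent("Windows-Update-Agent/10.0", "upd4te-svc.example.net"))    # True
```

Combined with domain-age lookups for newly registered destinations, this identity-versus-destination mismatch is cheap to compute and strongly corroborates the command-and-control hypothesis before isolation begins.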

Question 9

A threat hunter identifies PowerShell scripts running obfuscated commands across multiple hosts. The scripts originate from scheduled tasks created within the last 24 hours. No authorized administrators made these changes. What is the most likely priority action?

A) Remove the scheduled tasks and alert system owners.
B) Conduct lateral movement analysis and disable affected accounts.
C) Reboot the affected systems to clear in-memory artifacts.
D) Update endpoint antivirus signatures and run full scans.

Answer: B)

Explanation:

The first listed response advises removing scheduled tasks. While removal disrupts ongoing malicious activity, doing this first risks tipping off attackers and destroying useful forensic traces. Without understanding the attack chain, simply deleting tasks does not prevent the re-establishment of persistence elsewhere.

The second listed response emphasizes investigating lateral movement and disabling compromised accounts. Obfuscated PowerShell executed across multiple hosts, especially through unauthorized scheduled tasks, indicates that attackers gained a foothold and are expanding control using privileged credentials or stolen tokens. Lateral movement analysis identifies how attackers traversed the network, which credentials were compromised, and which hosts require deeper examination. Disabling affected accounts stalls attacker access and prevents further script execution. This approach prioritizes containment and scoping.

The third listed response proposes rebooting systems, but rebooting clears critical volatile evidence, such as in-memory scripts, credential artifacts, and injected payloads. It may also disrupt logs and hinder investigators from learning how tasks were created.

The fourth listed response involves updating the antivirus. Signature-based protection is ineffective against obfuscated PowerShell scripts crafted dynamically. Antivirus updates may detect some components, but relying on this misses the broader compromise involving credential theft and lateral movement.

Reasoning supporting the correct response highlights that unauthorized scheduled tasks spreading obfuscated PowerShell indicate attacker movement. Identifying how movement occurred and cutting off access by disabling accounts is the most important containment step. Without stopping credential misuse, attackers can recreate tasks or escalate further.
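During triage of the obfuscated scripts described above, the first step is often decoding them: PowerShell's `-EncodedCommand` argument carries Base64 over UTF-16LE text. A small decoding sketch (the sample command string is illustrative):

```python
import base64

def decode_ps_encoded(b64: str) -> str:
    """Decode a PowerShell -EncodedCommand value (Base64 of UTF-16LE)."""
    return base64.b64decode(b64).decode("utf-16-le")

# Build a benign sample the same way an attacker's loader would encode it
sample = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
```

Decoding recovers only the first layer; real obfuscated payloads frequently nest further encoding, compression, or string-manipulation tricks underneath.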

Question 10

A cloud-hosted web application suddenly begins returning 500-level errors. Logs show repeated attempts to access administrative API endpoints followed by CPU spikes on backend containers. No authorized API activity was scheduled. What is the most probable cause and best immediate response?

A) Routine cloud maintenance is causing temporary instability; wait until maintenance completes.
B) Automated vulnerability scanning; block the scanner’s IP and resume operations.
C) Active exploitation of an API endpoint; scale down affected containers and capture runtime logs before redeployment.
D) Misconfigured load balancer health checks; adjust thresholds.

Answer: C)

Explanation:

The first listed response attributes failures to cloud maintenance. Cloud maintenance seldom targets application-specific API endpoints, nor does it produce targeted administrative API probing. CPU spikes in backend containers following unauthorized API access attempts suggest server-side processing triggered by malicious requests rather than platform-level maintenance.

The second listed response suggests vulnerability scanning. Scanners may access administrative endpoints, but scanning does not typically cause backend CPU saturation unless it triggers expensive operations. The timing correlation between endpoint probing and application failures indicates something more severe. Blocking a scanner’s IP is premature without confirmation, and commercial scanners typically identify themselves.

The third listed response aligns with active exploitation. Attackers probing administrative APIs may trigger resource-intensive operations or exploit input validation flaws, leading to container overload, crashes, or remote code execution attempts. Scaling down the affected containers takes them out of service while their state can still be captured, enabling forensic collection of logs, memory dumps, and request payloads. Preserving logs before redeploying prevents loss of attack indicators. Redeployment from trusted images restores service availability while the investigation proceeds.

The fourth listed response involves a load balancer misconfiguration. Misconfigured health checks rarely produce targeted API access attempts. They also do not cause CPU spikes in the manner described.

Reasoning supporting the correct response centers on the observation that unauthorized administrative API access attempts immediately precede service degradation. This is a classic sign of attackers discovering or exploiting administrative endpoints. Forensic preservation is vital before resetting containers to avoid losing evidence that reveals the attacker’s intent and method.
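The timing correlation described above can be checked programmatically by counting administrative-endpoint hits per source. A hedged sketch, assuming a simplified `(src_ip, path, status)` log schema and a hypothetical `/admin/api/` path prefix:

```python
from collections import Counter

def suspicious_sources(log_entries, admin_prefix="/admin/api/", threshold=5):
    """Count admin-endpoint requests per source IP and flag heavy hitters.

    log_entries: iterable of (src_ip, path, status) tuples (illustrative
    schema; real access logs would be parsed into this shape first).
    """
    hits = Counter(ip for ip, path, status in log_entries
                   if path.startswith(admin_prefix))
    return {ip for ip, n in hits.items() if n >= threshold}
```

In practice the flagged sources would then be correlated against the timestamps of the 500-level errors and container CPU spikes before any blocking decision.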

Question 11

An analyst reviewing Linux audit logs notices that a privileged binary located in /usr/bin/ was modified without any corresponding package-management activity. Shortly afterward, several users reported unusual behavior, including outbound connections from their sessions to unfamiliar IP addresses. What should the analyst conclude, and what is the most appropriate next step?

A) The binary was corrupted due to a disk failure; schedule a filesystem repair.
B) The system experienced unauthorized tampering with privilege-escalation intent; isolate the host and acquire a forensic disk image.
C) The behavior is normal for systems configured to use extended debugging features; disable debugging.
D) The binary was updated as part of an automated feature rollout; ignore the alert and close the ticket.

Answer: B)

Explanation:

A) A disk failure can result in corrupted files, including binaries, but this type of corruption usually appears with accompanying system log errors, I/O warnings, or kernel messages pointing to hardware faults. Disk-related corruption also tends to manifest unpredictably and affect multiple files, not one specific privileged binary. The observed outbound connections and user-session anomalies do not correlate with typical hardware-induced corruption. Treating the issue as a filesystem-repair problem would postpone necessary security response actions and allow a potential compromise to continue operating.

B) Unauthorized modification of a privileged binary in /usr/bin/ without package-management activity is a strong indicator of tampering. Attackers frequently replace or wrap privileged executables to insert backdoors, enable persistence, or escalate privileges. When the modification is followed by abnormal user-session behavior and unexpected external communication, the signs point toward a compromise involving execution hijacking or installation of an implant. The correct next step is immediate isolation of the affected host to prevent continued malicious communication or lateral movement. Acquiring a full forensic disk image preserves on-disk artifacts such as modified binaries, timestamps, malicious scripts, persistence mechanisms, and attacker tooling. This step allows later analysis while ensuring the original state is preserved for investigation or legal needs. Live-response steps should be kept minimal to avoid altering evidence.

C) Systems using debugging features can generate unusual execution patterns, but they do not normally replace privileged binaries or produce outbound connections tied to tampered executables. Debugging tools also leave recognizable logs or configuration markers, none of which match the described activity. Blaming debugging features would divert attention from what is clearly suspicious tampering.

D) Automated feature rollouts or updates would normally leave package-manager logs, digital signatures, version metadata, or documented vendor notes. They also rarely modify privileged system binaries directly without package tracking. Ignoring the alert risks leaving root-level compromise undetected, allowing an attacker to maintain control.

Reasoning about the correct choice: The combination of unauthorized privileged binary modification and subsequent suspicious outbound behavior strongly suggests attacker manipulation of system utilities to maintain persistence or elevate privilege. Isolating the host and preserving evidence through full disk imaging helps contain damage and supports later analysis, ensuring the response is thorough and defensible.
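Verifying tampering of a binary like the one above usually starts with comparing file hashes against known-good values (package managers provide this directly via `dpkg --verify` or `rpm -V`). A minimal streaming-hash sketch; the throwaway demo file stands in for the suspect binary, which on a real case would be read from a mounted forensic image rather than the live system:

```python
import hashlib
import os
import tempfile

def sha256_file(path, chunk=65536):
    """Stream a file through SHA-256 without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

# Demonstration on a temporary file; in practice the path would be the
# suspect binary, e.g. a copy of /usr/bin/sudo from the disk image.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"demo")
demo_digest = sha256_file(tmp.name)
os.unlink(tmp.name)
```

Comparing the digest against vendor package metadata distinguishes a legitimately updated binary from a trojanized replacement.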

Question 12

A threat hunter finds that several systems are querying a domain that uses rapidly changing subdomains generated algorithmically. The requests occur at regular intervals and continue even after the domain is temporarily blocked. What is the best conclusion and immediate action?

A) The systems are performing legitimate cloud-service discovery; allow the traffic.
B) The systems likely contain malware relying on a domain-generation algorithm; perform host-based investigation and quarantine infected systems.
C) The behavior indicates misconfigured DNS caching; flush caches and restart DNS.
D) The requests reflect outdated proxy configuration; update proxy settings.

Answer: B)

Explanation:

A) Some cloud services use numerous subdomains, but they are not generated algorithmically at high frequency, nor do they exhibit patterns typical of domain-generation mechanisms. Cloud platforms also rely on recognizable domains owned by known providers. Continuing to allow such traffic without investigation would risk enabling active command-and-control communications.

B) Regular queries to domains whose subdomains change in an algorithmic pattern are classic signs of malware employing a domain-generation algorithm. This technique helps malware evade blocking by generating many potential command-and-control endpoints each day. The persistence of queries even after blocking indicates that the malware continues attempting to reach its controller regardless of network restrictions. The correct immediate action is to investigate the affected hosts for malware presence, persistence mechanisms, and lateral-movement evidence. Quarantining infected systems prevents continued beaconing and possible remote command execution while the malware is still active. Host-level data, such as process trees, registry artifacts, scheduled tasks, and file system changes, should be collected in a controlled manner.

C) DNS caching issues could produce repeated queries to a stale domain, but they do not generate new algorithmically formed subdomains. They also would not produce regularly timed query intervals across multiple systems. Restarting DNS services would not address an underlying malware infection, and assuming DNS misconfiguration could delay remediation.

D) Outdated proxy configurations typically result in routing failures or connection errors, not automatically generated subdomains. Proxy errors also do not cause algorithmic domain generation or consistent intervals of communication. Updating proxy settings would not address a deeper compromise.

Reasoning about correctness: The observed domain-querying behavior perfectly matches known DGA activity, and the persistence of requests after blocking indicates an active malware implant. Host-level containment and forensic investigation are required to determine the scope and prevent further compromise.
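One common way hunters triage the DGA pattern described above is character entropy: algorithmically generated labels look far more random than human-chosen names. A heuristic sketch (the length cutoff and entropy threshold are illustrative, not tuned values):

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a DNS label; random-looking labels score high."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_dga(hostname: str, threshold=3.5) -> bool:
    """Flag hostnames whose left-most label is long and high-entropy.

    Heuristic only: tune the threshold on real traffic, and expect false
    positives from CDNs and legitimate randomized subdomains.
    """
    first = hostname.split(".")[0]
    return len(first) >= 12 and shannon_entropy(first) > threshold
```

Production detections combine this with query-frequency patterns, NXDOMAIN ratios, and domain-registration age rather than relying on entropy alone.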

Question 13

An enterprise SIEM alerts on multiple failed API authentication attempts from an internal development server to a sensitive financial system. The development team states that no new integrations were deployed. Network logs show the requests originate from a container hosted on that server. What should the analyst determine, and what is the best response?

A) A developer misconfigured a script; disable the script and notify the team.
B) A containerized workload is likely compromised and probing for credentials; freeze the container state and investigate.
C) Normal API discovery traffic from the development environment; ignore the alert.
D) A software update bug triggered the authentication issue; re-deploy the container.

Answer: B)

Explanation:

A) Misconfigurations can result in failed authentication attempts, but they typically align with known development activity, newly deployed scripts, or recent code pushes. In this case, the development team has confirmed no new integrations. Misconfigurations also do not typically trigger repeated, unexplained authentication attempts targeting sensitive systems. Simply disabling a script without understanding the root cause may overlook an active compromise.

B) A container generating unauthorized authentication attempts toward a financial system suggests compromise within that containerized environment. Containers are often lightly monitored and can run workloads that attackers exploit once they gain initial access. Probing for credentials via brute-force or enumeration attempts is consistent with compromised containers being used as footholds. Freezing the container state is critical because it preserves volatile evidence such as running processes, memory contents, temporary files, and any in-memory payloads. Further investigation can determine whether the attacker escaped the container, pivoted within the environment, or implanted additional tools. This response balances containment and forensic preservation.

C) API discovery traffic does not typically involve repeated failed authentications to sensitive financial systems, nor would it originate unexpectedly from a container with no new integrations deployed. Treating such indicators as normal could allow a threat actor to escalate or pivot.

D) Deployment bugs may generate authentication anomalies, but they usually align with known update timelines, not unexpected traffic from containers. Redeploying the container might destroy evidence if an attacker is present, undermining the ability to investigate.

Reasoning about correctness: Unexplained authentication attempts from a containerized workload strongly signal compromise. Freezing the container preserves critical evidence for review while containing further harm. This is the safest and most effective approach when dealing with potentially malicious container activity.
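The failed-authentication burst that triggered the SIEM alert above can be surfaced with simple windowed counting. A sketch assuming a hypothetical `(timestamp, source, outcome)` event schema; the window and threshold are illustrative:

```python
from collections import defaultdict

def auth_failure_alerts(events, window=300, threshold=10):
    """Flag sources with >= threshold failed auths inside any window (seconds).

    events: iterable of (timestamp, source, outcome) tuples, where outcome
    is "fail" or "ok" (illustrative schema).
    """
    by_src = defaultdict(list)
    for ts, src, outcome in events:
        if outcome == "fail":
            by_src[src].append(ts)

    alerts = set()
    for src, times in by_src.items():
        times.sort()
        for i in range(len(times)):
            j = i
            while j < len(times) and times[j] - times[i] <= window:
                j += 1
            if j - i >= threshold:
                alerts.add(src)
                break
    return alerts
```

A SIEM performs this correlation continuously; the sketch just makes the detection logic behind the alert explicit.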

Question 14

A SOC receives alerts showing that a workstation is sending large volumes of encrypted outbound traffic at irregular intervals to a cloud storage provider not approved by the organization. The user reports the workstation is behaving normally. What should be concluded, and what is the most appropriate response?

A) The user is manually uploading large work files; approve temporary access.
B) The workstation may be infected with data-exfiltration malware; isolate the host and analyze recent file-access patterns.
C) The organization’s cloud compliance rules are outdated; update them and allow the traffic.
D) The traffic is caused by automatic OS backups; disable automatic backup.

Answer: B)

Explanation:

A) Users manually uploading files typically produce visible system behavior, such as application windows or file-transfer prompts. The traffic described is irregular, encrypted, and large in volume, signs not normally consistent with interactive user behavior. Allowing the traffic without verification risks enabling data theft.

B) Large encrypted outbound transfers to an unapproved cloud storage provider are classic indicators of data-exfiltration activity. Malware or insider-threat tooling often uses cloud services to blend in with normal traffic while steadily exfiltrating data. Irregular intervals also match automated jobs triggered by certain conditions, such as file changes or scheduled tasks. Isolating the host prevents ongoing exfiltration. Analyzing recent file-access patterns helps determine which files may have been stolen, whether the payload originated from user activity, and whether malware is monitoring directories. This also helps assess breach severity and reporting obligations.

C) Compliance rules may indeed be outdated, but updating them is not the correct response when suspicious encrypted traffic is occurring. Policy issues do not explain unusual upload patterns or irregular scheduling. Investigating the behavior must precede any policy changes.

D) OS backups typically communicate with known cloud providers and follow regular patterns. They also interact with approved backup services configured by IT. The traffic described does not match known backup intervals and uses an unapproved cloud provider, making this explanation unlikely.

Reasoning about correctness: Encrypted, irregular, high-volume outbound transfers to unapproved storage strongly indicate data-exfiltration malware or insider misuse. Immediate containment and forensic examination are required to prevent additional loss and determine the extent of compromise.
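Flagging the exfiltration pattern described above typically means summing outbound bytes per host/destination pair and comparing against an approved-destination list. A sketch with illustrative field names and an illustrative threshold:

```python
from collections import Counter

def exfil_candidates(flows, approved, mb_threshold=500):
    """Flag (host, destination) pairs sending large volumes to unapproved sites.

    flows: iterable of (src_host, dest_domain, bytes_out) tuples
    (illustrative schema derived from netflow or proxy logs).
    approved: set of organization-sanctioned destination domains.
    """
    totals = Counter()
    for host, dest, nbytes in flows:
        totals[(host, dest)] += nbytes
    limit = mb_threshold * 1024 * 1024
    return {(h, d) for (h, d), b in totals.items()
            if d not in approved and b > limit}
```

Real deployments baseline per-host volumes instead of using a fixed threshold, since "large" varies widely between a developer workstation and a backup server.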

Question 15

An analyst reviewing process telemetry finds that a legitimate PDF reader is spawning PowerShell with encoded commands whenever a user opens certain PDF files received via email. Network logs show the PowerShell process attempting to download additional payloads. What is the most accurate conclusion and best next step?

A) The PDF reader uses PowerShell for extended document features; whitelist the behavior.
B) The PDFs likely contain embedded malicious scripts exploiting the reader; block the sender domain and perform endpoint forensic collection.
C) The user misconfigured file-handling preferences; reset the default programs.
D) The behavior is caused by a PowerShell logging bug; ignore the alert.

Answer: B)

Explanation:

A) Legitimate PDF readers do not normally invoke PowerShell with encoded commands when opening documents. Standard features rely on built-in scripting engines or JavaScript interpreters, not external shell invocation. Whitelisting would enable an active exploit path.

B) The observed behavior—PDF documents triggering PowerShell with encoded commands that attempt remote downloads—is strong evidence of a malicious PDF exploiting a vulnerability or leveraging embedded scripts to deliver additional malware. Blocking the sender domain prevents further delivery of malicious documents. Endpoint forensic collection is necessary to capture in-memory payloads, registry changes, persistence mechanisms, trojanized files, and other artifacts. This also enables analysis to determine whether exploitation succeeded, what payloads were downloaded, and whether the attacker achieved persistence. A thorough forensic response helps define the full scope and prevents reinfection.

C) Misconfigured file-handling might cause unexpected applications to launch, but it would not trigger encoded PowerShell commands or network payload retrieval. Resetting default programs would not address the malicious content embedded in the documents.

D) PowerShell logging bugs do not spontaneously cause encoded commands tied to PDF activity, nor do they justify ignoring remote payload download attempts. Treating this as harmless would allow active exploitation to continue undetected.

Reasoning about correctness: Malicious PDFs are a common initial infection vector. When PDF execution triggers encoded PowerShell behaviors and attempted payload retrievals, the situation clearly indicates exploitation. Blocking further emails and conducting a full forensic investigation ensures containment and proper remediation.
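Parent-child anomalies like a PDF reader spawning PowerShell are commonly caught with a watchlist of rarely legitimate process pairs, the approach most EDR detection rules take. A toy sketch; the process names listed are illustrative examples, not an exhaustive rule set:

```python
# (parent image, child image) combinations that rarely occur legitimately;
# illustrative entries only, real rule sets are far larger
SUSPICIOUS_PAIRS = {
    ("acrord32.exe", "powershell.exe"),
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
}

def flag_process_event(parent: str, child: str) -> bool:
    """Return True when a process-creation event matches the watchlist."""
    return (parent.lower(), child.lower()) in SUSPICIOUS_PAIRS
```

Pairing this with command-line inspection (for example, the `-EncodedCommand` flag) sharply reduces false positives from the rare legitimate cases.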