Cisco 350-701 Implementing and Operating Cisco Security Core Technologies Exam Dumps and Practice Test Questions Set 8 Q 106-120
Question 106:
An administrator needs to configure Cisco Firepower to block traffic from specific countries while allowing traffic from trusted regions. Which feature should be implemented?
A) Geolocation-based access control policies
B) Security Intelligence filtering
C) URL filtering categories
D) Application visibility and control
Answer: A
Explanation:
This question tests understanding of geographic-based traffic filtering in Cisco Firepower Threat Defense (FTD). Geolocation is a powerful security feature that allows administrators to make access control decisions based on the geographic origin of traffic. This capability is particularly useful for organizations that operate in specific regions and want to reduce attack surface by blocking traffic from countries where they have no business operations or that are known sources of malicious activity.
Geolocation-based access control policies in Cisco Firepower use IP address intelligence to determine the geographic location (country, continent) of source and destination IP addresses. When traffic arrives, Firepower performs geolocation lookup against its database that maps IP address ranges to geographic locations. Access control rules can then permit or deny traffic based on these geographic attributes, providing a straightforward mechanism to implement location-based security policies.
The implementation process involves creating access control rules within the Firepower Management Center (FMC) that specify geographic locations in the source or destination criteria fields. For example, an administrator might create a rule that blocks all traffic originating from specific high-risk countries while allowing traffic from countries where the organization operates or has business partners. Multiple countries or continents can be grouped into geolocation objects and referenced within a single rule, and rules are processed in order like traditional firewall policies. Geolocation filtering operates at the network layer and can be applied before deeper inspection occurs, providing efficient first-line defense by dropping unwanted geographic traffic early in the processing pipeline. This reduces resource consumption from processing and inspecting traffic that would ultimately be blocked anyway.
A) This is the correct answer. Geolocation-based access control policies provide the precise capability required to block traffic from specific countries while allowing trusted regions. Firepower’s geolocation database enables administrators to create rules specifying allowed or blocked countries in access control policies. These policies evaluate the geographic origin of traffic and enforce permit or deny actions accordingly. Configuration occurs through FMC access control policies where geographic locations are selected as match criteria alongside traditional criteria like source/destination networks, ports, and applications.
B) Security Intelligence filtering in Firepower blocks traffic from IP addresses and domains known to be associated with malicious activity based on Cisco Talos threat intelligence feeds. While Security Intelligence can include geographic context in its threat data, it’s focused on blocking known malicious sources rather than implementing country-level geographic policies. Security Intelligence is reputation-based blacklisting that operates on threat intelligence rather than pure geographic filtering. While both features can block unwanted traffic, they serve different purposes and use different criteria.
C) URL filtering categories classify and control web traffic based on the content category of websites (gambling, social media, malware, etc.). URL filtering operates at the application layer for HTTP/HTTPS traffic and makes decisions based on website content classification rather than geographic origin. While some URL categories might correlate with certain regions, URL filtering doesn’t provide direct country-based traffic blocking. URL filtering addresses content control rather than geographic access control, serving different security objectives.
D) Application visibility and control (AVC) identifies applications regardless of port or protocol and enforces policies based on application identity. AVC provides deep packet inspection to recognize applications and enables administrators to allow, block, or rate-limit specific applications. While AVC provides granular application control, it doesn’t inherently filter based on geographic location. Application control and geographic filtering address different security dimensions and are typically used together for comprehensive security policies.
Question 107:
A security engineer needs to configure Cisco ISE to assign different VLANs to users based on their Active Directory group membership. Which ISE component enables this functionality?
A) Authorization policies with VLAN assignment
B) Authentication policies
C) Posture assessment policies
D) Profiling policies
Answer: A
Explanation:
This question addresses Cisco Identity Services Engine (ISE) authorization capabilities and dynamic VLAN assignment. ISE provides comprehensive network access control (NAC) through authentication, authorization, and accounting (AAA) services. The authorization phase determines what network resources authenticated users can access and which network segments they’re placed into. Dynamic VLAN assignment is a common authorization decision that places users into different network segments based on their identity, group membership, device type, posture status, or other criteria.
Authorization policies in ISE define the permissions and network access granted to authenticated users. These policies consist of rules that match specific conditions (user identity, group membership, device type, location, time of day) and return authorization results including VLAN assignments, downloadable ACLs, security group tags, and other attributes. When a user successfully authenticates, ISE evaluates authorization policies in order until a matching rule is found, then returns the authorization result to the network access device (switch, wireless controller, VPN concentrator) which enforces the decision.
For VLAN assignment based on Active Directory group membership, the authorization policy rule would include a condition matching the user’s AD group (retrieved during authentication via integration with Active Directory) and would specify the VLAN assignment in the authorization result. For example, a rule might state «If user is member of AD group ‘Engineering’, assign VLAN 100» while another rule specifies «If user is member of ‘Finance’, assign VLAN 200.» The network access device receives the VLAN assignment from ISE and places the user’s port or wireless session into the specified VLAN. This dynamic VLAN assignment enables network segmentation based on identity rather than physical port assignment, supporting flexible, secure network architectures where users can connect from any location and be automatically placed into appropriate network segments based on their role and group membership. The process requires integration between ISE and Active Directory, proper authorization policy configuration with VLAN assignments, and network infrastructure configured to accept dynamic VLAN assignments from ISE RADIUS responses.
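As a rough sketch of the network-device side of this exchange (separate from ISE itself), the lines below show a Catalyst switch port configured for 802.1X against ISE, followed by the standard IETF RADIUS attributes ISE returns in its Access-Accept to push the VLAN. The ISE address, shared secret, interface, and VLAN values are placeholders for illustration, not values taken from this scenario.

    ! Minimal 802.1X sketch on an IOS switch (addresses, secret, and interface are assumed)
    aaa new-model
    aaa authentication dot1x default group radius
    aaa authorization network default group radius   ! required so the switch accepts VLAN/ACL attributes from ISE
    dot1x system-auth-control
    radius server ISE-1
     address ipv4 10.1.1.10 auth-port 1812 acct-port 1813
     key <shared-secret>
    interface GigabitEthernet1/0/10
     switchport mode access
     authentication port-control auto
     dot1x pae authenticator
    !
    ! RADIUS attributes ISE returns for dynamic VLAN assignment:
    !   Tunnel-Type             = VLAN (13)
    !   Tunnel-Medium-Type      = 802  (6)
    !   Tunnel-Private-Group-ID = 100  (VLAN ID or VLAN name, e.g. Engineering)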
A) This is the correct answer. Authorization policies with VLAN assignment provide the exact functionality required to assign different VLANs based on Active Directory group membership. Authorization policies evaluate after successful authentication and determine what access the user receives. By configuring authorization rules that match AD group membership and specify VLAN assignments in the authorization results, ISE can dynamically assign users to appropriate network segments. Configuration involves creating authorization policy rules with conditions matching AD group attributes and results specifying VLAN IDs or VLAN names that network devices will enforce.
B) Authentication policies in ISE determine how users authenticate (which authentication methods and identity sources to use) but don’t determine authorization decisions like VLAN assignments. Authentication policies select authentication protocols (PAP, CHAP, EAP-TLS, PEAP, etc.) and identity sources (Active Directory, LDAP, internal users) but the actual network access permissions including VLAN assignments are handled by authorization policies. Authentication verifies who the user is, while authorization determines what they can access. These are separate policy types in ISE with distinct purposes.
C) Posture assessment policies evaluate endpoint compliance with security requirements by checking for required software (antivirus, patches, firewall) and configurations. Posture assessment determines whether endpoints meet organizational security standards before granting network access. While posture results can influence authorization decisions (compliant devices might receive full access while non-compliant devices are quarantined), posture policies themselves don’t assign VLANs based on AD groups. Posture assessment focuses on device security compliance rather than user identity or group membership.
D) Profiling policies in ISE automatically classify and categorize network endpoints based on their characteristics including MAC address vendor, DHCP parameters, HTTP user agents, and network behavior. Profiling identifies device types (iPhone, Windows 10, printer, IP camera) enabling different policies for different device categories. While profiling results can be used in authorization decisions, profiling doesn’t directly handle AD group membership or VLAN assignment. Profiling addresses device identification while the question requires user group-based VLAN assignment which is an authorization function.
Question 108:
An administrator needs to implement URL filtering on Cisco WSA (Web Security Appliance) to block access to social media sites during business hours while allowing access during lunch and after hours. Which feature should be configured?
A) Time-based access policies with URL category filtering
B) Authentication realms only
C) Anti-malware scanning
D) Data loss prevention policies
Answer: A
Explanation:
This question examines time-based policy enforcement in Cisco Web Security Appliance (WSA). WSA provides comprehensive web security including URL filtering, malware protection, application control, and data loss prevention. Time-based policies enable administrators to enforce different security rules based on time of day or day of week, allowing flexible policies that adapt to business requirements. This is particularly useful for organizations wanting to restrict certain activities during productive hours while allowing them during breaks or after hours.
Time-based access policies in WSA allow administrators to define when specific security rules apply. These policies reference time range objects that define the days and hours during which a rule is active. Combined with URL filtering capabilities, time-based policies enable sophisticated control like blocking specific website categories during certain hours while permitting access during others. WSA’s URL filtering categorizes websites into predefined categories (social networking, entertainment, shopping, news, etc.) allowing administrators to create policies based on content type rather than maintaining lists of individual sites.
Implementation involves creating identification profiles (defining user groups), time ranges (specifying schedules like «business hours» or «lunch time»), and access policies that combine these elements with URL filtering categories. For the scenario described, administrators would create time ranges for business hours, lunch time, and after hours, then create access policies that block social media categories during business hours while allowing them during lunch and after hours. The policies are evaluated based on current time, user identity, and requested URL category, returning the appropriate allow or block decision. This provides granular control over web access based on temporal context, balancing productivity requirements with employee flexibility. WSA processes requests by identifying the user, determining the current time, matching the appropriate policy, evaluating the URL against filtering categories, and enforcing the policy decision. Advanced configurations might include different policies for different user groups, warnings instead of blocks, quotas limiting time spent on certain categories, or exemptions for specific users or sites.
A) This is the correct answer. Time-based access policies combined with URL category filtering provide the exact functionality required. Time ranges define when different policies apply (business hours, lunch, after hours), while URL filtering categorizes social media sites into recognizable categories that policies can block or allow. By creating multiple access policies with different time ranges and URL filtering actions, administrators can block social media during business hours and permit it during specified times. Configuration involves defining time ranges for different periods, creating access policies that reference these time ranges and specify URL category actions.
B) Authentication realms in WSA define how users authenticate to the proxy (Active Directory, LDAP, local database) but don’t provide time-based policy enforcement or URL filtering. Authentication identifies users and enables user-based policies, but authentication realms alone don’t implement the time-based access control or URL category blocking required. Authentication is a prerequisite for user-based policies but doesn’t itself provide scheduling or URL filtering capabilities. Time-based access policies work with authentication to apply different rules to identified users at different times.
C) Anti-malware scanning in WSA protects against malicious content by scanning downloaded files and web content for threats. Anti-malware is an important security function but doesn’t address URL filtering or time-based access control. Anti-malware scanning operates continuously regardless of time and focuses on threat prevention rather than productivity or acceptable use policy enforcement. While anti-malware and URL filtering are both important WSA features, they serve different security objectives with anti-malware addressing threats and URL filtering addressing acceptable use and productivity.
D) Data Loss Prevention (DLP) policies in WSA prevent sensitive data from leaving the organization through web channels by scanning uploads for confidential information (credit card numbers, social security numbers, proprietary documents). DLP focuses on protecting organizational data rather than controlling which websites users can access or when they can access them. DLP and URL filtering address different security concerns and are independent features that can be used together for comprehensive web security but serve distinct purposes.
Question 109:
A network administrator needs to configure Cisco Umbrella to provide DNS-layer security for remote users. Which deployment method is most appropriate?
A) Umbrella roaming client on endpoint devices
B) On-premises Umbrella virtual appliance only
C) Network device integration only
D) Manual DNS configuration on each device
Answer: A
Explanation:
This question tests understanding of Cisco Umbrella deployment options, particularly for protecting remote users. Umbrella provides cloud-delivered security using DNS as an enforcement point to block malicious domains, phishing sites, command-and-control callbacks, and other threats before connections are established. Different deployment methods suit different use cases, with endpoint-based deployment being most appropriate for protecting remote workers who connect from various locations outside the corporate network.
The Umbrella roaming client is agent software installed on endpoint devices (laptops, mobile devices) that redirects DNS queries to Umbrella’s cloud security platform regardless of network location. The roaming client ensures that devices receive Umbrella protection whether connected to corporate networks, home networks, public WiFi, or cellular connections. This is critical for protecting remote workers whose devices leave the protection of corporate network security controls. The client operates transparently, intercepting DNS queries and sending them to Umbrella for security inspection before resolution, providing consistent security policy enforcement regardless of location.
For network-based deployments (office locations, data centers), organizations can configure network devices (routers, firewalls, DHCP servers) to use Umbrella DNS servers or deploy Umbrella virtual appliances that forward queries to the cloud. However, these methods only protect devices while connected to those specific networks and don’t follow users as they move between locations. Remote users connecting from home or public networks would bypass network-based Umbrella protection unless the roaming client is deployed. The roaming client specifically addresses the remote user protection gap by moving the enforcement point from network infrastructure to individual endpoints.
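As a simple point of contrast, network-level integration can be as small as pointing the site’s DNS resolution at Umbrella’s public anycast resolvers, for example on an IOS router acting as the local DNS forwarder. This is only a sketch of that approach and, as noted above, it protects only devices resolving through that network, which is exactly the gap the roaming client closes.

    ! Network-level sketch only: devices off this network receive no protection
    ip name-server 208.67.222.222 208.67.220.220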
Deployment involves installing the roaming client software on endpoints, configuring client settings through the Umbrella dashboard including which security policies apply to different user groups, and optionally integrating with Active Directory for user identity. The client maintains a persistent connection to Umbrella cloud services and includes features like internal domain bypass (allowing internal DNS queries to reach corporate DNS servers for internal resources), multiple VPN support, and reporting integration. For comprehensive protection of hybrid workforces including office and remote workers, organizations typically deploy both network-based integration for office locations and roaming clients for mobile devices, ensuring all users receive protection regardless of location. The roaming client provides credential-based identification linking devices to specific users, enabling user-level policies and reporting even when users connect from dynamic IP addresses on various networks.
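A quick way to sanity-check that an endpoint’s DNS is actually being answered by Umbrella after the roaming client is installed is to query a test domain from that endpoint. The domain below is the one commonly referenced in Umbrella documentation for testing security blocking, but treat it as an assumption and confirm the current test procedure against Cisco’s documentation.

    # From the protected endpoint: a security-category test domain should resolve
    # to an Umbrella block page address rather than its real address
    nslookup internetbadguys.com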
A) This is the correct answer. The Umbrella roaming client provides DNS-layer security for remote users by installing agent software on endpoints that redirects DNS queries to Umbrella regardless of network location. This ensures remote workers receive consistent security protection whether working from home, traveling, or connecting from public networks. The roaming client is specifically designed for protecting mobile users and devices that operate outside corporate network controls, making it the most appropriate deployment method for remote user protection.
B) On-premises Umbrella virtual appliances deploy within corporate data centers or branch offices to forward DNS traffic from local networks to Umbrella cloud services. Virtual appliances are excellent for protecting office locations and provide features like intelligent proxy and Active Directory integration, but they only protect devices while those devices connect through networks where virtual appliances are deployed. Remote users connecting from home or public networks bypass virtual appliance protection. Virtual appliances complement roaming clients in comprehensive deployments but alone don’t protect remote users connecting from arbitrary locations.
C) Network device integration configures existing network infrastructure (routers, firewalls, switches) to use Umbrella DNS servers for name resolution, providing network-level protection for all devices on those networks. Like virtual appliances, network device integration works well for office locations but doesn’t follow users when they connect from outside the corporate network. Remote users on home or public networks wouldn’t route through corporate network devices and therefore wouldn’t receive Umbrella protection. Network integration is appropriate for fixed locations but insufficient for mobile remote user protection.
D) Manual DNS configuration involves statically configuring Umbrella DNS server addresses on individual devices. While this technically provides Umbrella protection, it has significant limitations: configuration is labor-intensive, users can easily change DNS settings bypassing protection, no user or device identification occurs (only IP-based identification which is problematic for dynamic IPs), and troubleshooting is difficult. Manual DNS configuration lacks the policy enforcement, user identification, internal domain bypass, and management capabilities of the roaming client. Manual configuration is generally not recommended except for testing or situations where agent deployment is impossible.
Question 110:
An administrator needs to configure Cisco FTD to perform SSL/TLS decryption for traffic inspection. Which feature must be configured?
A) SSL decryption policy
B) Access control policy only
C) Security Intelligence
D) Network discovery policy
Answer: A
Explanation:
This question addresses SSL/TLS decryption in Cisco Firepower Threat Defense, a critical capability for modern security architectures. As the majority of Internet traffic now uses HTTPS encryption, security devices must decrypt traffic to inspect payload content for threats. Without decryption, encrypted traffic passes uninspected, creating blind spots where malware, data exfiltration, and other threats can hide. SSL decryption policies in FTD enable administrators to configure which traffic should be decrypted, inspected, and re-encrypted transparently.
SSL decryption policies define rules determining how FTD handles encrypted traffic. These policies can decrypt and inspect traffic (requiring FTD to act as man-in-the-middle with certificate replacement), block encrypted connections, or allow encrypted traffic to pass without inspection. The policy consists of rules evaluated in order, with conditions matching traffic characteristics (source/destination networks, URLs, applications, certificates) and actions specifying how to handle matched traffic. Decryption occurs before deep packet inspection, allowing security features like intrusion prevention, malware detection, and URL filtering to examine decrypted content.
Implementation requires configuring SSL decryption policies through Firepower Management Center including creating a trusted CA certificate that FTD uses to re-sign server certificates, defining rules specifying which traffic to decrypt versus bypass, configuring certificate pinning bypass lists for applications that don’t tolerate certificate replacement, and applying policies to specific FTD devices or interfaces. The trusted CA certificate must be deployed to endpoint devices so they trust certificates re-signed by FTD, preventing certificate warnings for users. Different deployment scenarios exist: outbound decryption (Decrypt - Resign) inspects user traffic going to Internet sites, inbound decryption (Decrypt - Known Key) inspects connections to internal servers using the servers’ own certificates and private keys, and Do Not Decrypt rules pass specific protocols or applications through without decryption.
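For the outbound (Decrypt - Resign) case, the re-signing CA is typically a subordinate CA issued by the enterprise PKI, but for a lab or proof of concept a self-signed CA can be generated with OpenSSL and imported into FMC as an internal CA object. The command below is only an illustrative sketch with placeholder subject values; the resulting certificate must carry CA:TRUE basic constraints, and how that is set depends on the OpenSSL version and configuration in use.

    # Illustrative only: self-signed CA key pair for decrypt-resign in a lab
    openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 1825 \
      -keyout decrypt-ca.key -out decrypt-ca.crt \
      -subj "/C=US/O=Example Corp/CN=FTD Decryption CA"
    # Import decrypt-ca.crt and decrypt-ca.key into FMC as an internal CA, and
    # distribute decrypt-ca.crt to endpoints as a trusted root to avoid warnings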
Considerations when implementing SSL decryption include performance impact (cryptographic operations consume significant CPU resources), certificate management (maintaining CA certificates and handling certificate errors), privacy and compliance concerns (decrypting traffic may violate privacy policies or regulations in certain jurisdictions), and application compatibility (some applications using certificate pinning or mutual authentication don’t work with decryption). Administrators must balance security benefits against these considerations, often implementing selective decryption that focuses on high-risk traffic while bypassing trusted categories or applications. Performance planning is critical as enabling decryption for all traffic can significantly reduce throughput, potentially requiring hardware upgrades or selective policy scoping.
A) This is the correct answer. SSL decryption policies configure how FTD handles encrypted traffic including which traffic to decrypt for inspection. These policies define rules matching traffic and specifying decryption actions (decrypt and inspect, block, or passthrough without decryption). SSL decryption must be configured before deep inspection features can examine encrypted traffic content. Configuration involves creating SSL policies with rules, generating or importing CA certificates for re-signing, and deploying CA certificates to endpoints for trust.
B) Access control policies enforce allow/deny decisions and apply security inspection features but don’t handle SSL/TLS decryption. While access control policies determine whether traffic is permitted and which inspection features apply, encrypted traffic must first be decrypted through SSL decryption policies before access control inspection features can examine payload content. Access control operates on decrypted traffic but doesn’t perform the decryption itself. Both policy types are required: SSL decryption policies decrypt traffic, access control policies inspect and enforce security policies.
C) Security Intelligence in FTD blocks traffic to and from IP addresses and domains known to be malicious based on threat intelligence feeds. Security Intelligence can operate on encrypted traffic by inspecting DNS queries, SNI information in TLS handshakes, and IP addresses without requiring full decryption. However, Security Intelligence doesn’t perform the SSL/TLS decryption required for deep content inspection. While Security Intelligence provides important threat blocking capabilities, it’s a different security feature from SSL decryption and doesn’t enable the payload inspection that decryption enables.
D) Network discovery policies in FTD configure how Firepower collects information about network hosts, operating systems, applications, and users for network visibility and context. Discovery provides situational awareness about network assets but doesn’t decrypt or inspect traffic. Discovery can observe encrypted connections and record metadata (source, destination, application) but doesn’t decrypt payload content. Discovery and decryption serve different purposes with discovery focused on network mapping and decryption focused on enabling content inspection of encrypted traffic.
Question 111:
A security administrator needs to configure Cisco ASA to allow IPsec VPN connections while blocking all other traffic from the Internet. Which configuration approach is correct?
A) Create access-list permitting UDP 500/4500 and ESP, apply to outside interface inbound
B) Enable IPsec passthrough only
C) Configure default route only
D) Disable all access-lists
Answer: A
Explanation:
This question tests understanding of Cisco ASA firewall policy configuration for VPN traffic. IPsec VPNs require specific protocols and ports to establish and maintain connections: IKE (Internet Key Exchange) uses UDP port 500 for initial key negotiation, NAT Traversal uses UDP port 4500 when IPsec operates through NAT devices, and ESP (Encapsulating Security Payload) protocol 50 carries the encrypted VPN traffic. For remote VPN users to connect successfully, the ASA must permit these specific protocol flows through the firewall while maintaining security by blocking all other unwanted traffic.
Access control lists (ACLs) on ASA define which traffic is permitted or denied through the firewall. For interfaces facing untrusted networks (outside interfaces connected to Internet), ACLs typically deny most inbound traffic except explicitly permitted services. When deploying remote access VPN, administrators must ensure ACLs permit the necessary VPN protocols while maintaining restrictive policies for other traffic. The ACL must permit UDP port 500 for IKE phase 1 negotiation, UDP port 4500 for NAT-T (NAT Traversal) used when VPN clients are behind NAT devices, and ESP (IP protocol 50) for the actual encrypted VPN data transfer.
Configuration involves creating an access-list with entries permitting these protocols to the ASA’s outside interface IP address, then applying the access-list to the outside interface in the inbound direction. The access-list might include entries like «access-list OUTSIDE_IN permit udp any host [ASA-outside-IP] eq 500» for IKE, «access-list OUTSIDE_IN permit udp any host [ASA-outside-IP] eq 4500» for NAT-T, and «access-list OUTSIDE_IN permit esp any host [ASA-outside-IP]» for ESP traffic. After defining these permit statements, the access-list is applied with the «access-group OUTSIDE_IN in interface outside» command. Any traffic not explicitly permitted by these entries is implicitly denied by the ACL’s default deny behavior, satisfying the requirement to block all other traffic. This approach provides the necessary VPN connectivity while maintaining security by explicitly allowing only required protocols and blocking everything else. Additional hardening might include rate-limiting VPN connection attempts to mitigate brute force attacks, implementing threat detection for VPN anomalies, or requiring multi-factor authentication for VPN access.
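Pulling the commands quoted above into one place, a minimal configuration sketch looks like the following; the outside interface address 203.0.113.1 is a documentation placeholder standing in for [ASA-outside-IP].

    ! Permit only IKE, NAT-T, and ESP to the ASA's outside address; everything
    ! else inbound on the outside interface is dropped by the implicit deny
    access-list OUTSIDE_IN extended permit udp any host 203.0.113.1 eq 500
    access-list OUTSIDE_IN extended permit udp any host 203.0.113.1 eq 4500
    access-list OUTSIDE_IN extended permit esp any host 203.0.113.1
    access-group OUTSIDE_IN in interface outside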
A) This is the correct answer. Creating an access-list that permits UDP ports 500 and 4500 plus ESP protocol and applying it to the outside interface inbound allows IPsec VPN connections while blocking all other traffic. UDP 500 enables IKE negotiation, UDP 4500 supports NAT Traversal, and ESP carries encrypted VPN data. Applying the ACL inbound on the outside interface enforces these specific permits while implicitly denying all other traffic due to the ACL’s default deny rule, meeting both requirements: enabling VPN and blocking unwanted traffic.
B) IPsec passthrough is a feature on some NAT devices that allows IPsec traffic to traverse NAT, but it’s not a firewall configuration feature on ASA. ASA acts as the VPN concentrator terminating IPsec connections rather than passing them through, so passthrough configuration doesn’t apply. Additionally, simply enabling passthrough (if it were applicable) wouldn’t provide the firewall security policy blocking other traffic. The requirement needs explicit access control configuration permitting only VPN protocols while denying other traffic, which access-lists provide.
C) Configuring a default route establishes how the ASA routes traffic but doesn’t control which traffic is permitted through the firewall. Routing and access control are separate functions: routing determines where permitted traffic is forwarded, while access-lists determine which traffic is permitted or denied. A default route alone provides no security filtering and wouldn’t permit or block any specific traffic. Access-lists are required to implement the security policy permitting VPN protocols while blocking other traffic. Default routes are necessary for routing but don’t replace access control security policies.
D) Disabling all access-lists removes all access control security, allowing all traffic through the firewall without restriction. This completely violates the requirement to block all non-VPN traffic and creates massive security vulnerabilities exposing internal networks to Internet threats. ASA security policies depend on properly configured access-lists to enforce security boundaries. Disabling ACLs effectively turns the ASA into a router without firewall protection, which is inappropriate and dangerous. The requirement explicitly needs VPN traffic allowed while blocking everything else, which requires carefully configured access-lists, not their absence.
Question 112:
An administrator needs to configure Cisco Stealthwatch to detect anomalous network behavior indicating potential security threats. Which primary data source does Stealthwatch analyze?
A) NetFlow data from network devices
B) Syslog messages only
C) SNMP traps
D) Email headers
Answer: A
Explanation:
This question examines Cisco Stealthwatch (now called Cisco Secure Network Analytics) and its primary detection methodology. Stealthwatch provides network visibility and security analytics by analyzing network traffic patterns to detect anomalous behavior indicating security threats, compromised hosts, or policy violations. Unlike signature-based detection systems that match known attack patterns, Stealthwatch uses behavioral analysis to identify deviations from normal network behavior, enabling detection of unknown threats, zero-day attacks, and insider threats that traditional security tools might miss.
NetFlow is a network protocol that collects IP traffic information, creating flow records that describe network conversations including source/destination IP addresses, ports, protocols, bytes transferred, packet counts, and timestamps. Network devices (routers, switches, firewalls) generate NetFlow data and export it to collectors like Stealthwatch. NetFlow provides efficient network visibility because it captures metadata about communications without capturing actual packet payloads, enabling scalable monitoring of high-speed networks without the bandwidth and storage requirements of full packet capture.
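On the device side, exporting flows to a Stealthwatch Flow Collector is an ordinary Flexible NetFlow configuration. The sketch below assumes an IOS/IOS-XE device, a collector at 10.10.10.20, and the commonly used UDP port 2055, all of which are placeholders to adapt to the actual deployment.

    ! Flexible NetFlow export sketch (collector IP, port, and interface are assumed)
    flow exporter SW-EXPORT
     destination 10.10.10.20
     transport udp 2055
     export-protocol netflow-v9
    !
    flow monitor SW-MONITOR
     record netflow ipv4 original-input
     exporter SW-EXPORT
     cache timeout active 60
    !
    interface GigabitEthernet0/0/1
     ip flow monitor SW-MONITOR input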
Stealthwatch analyzes NetFlow data to establish baseline behavior patterns for network entities (hosts, segments, applications) and detect anomalies indicating security threats. Machine learning algorithms identify normal communication patterns including which devices communicate with each other, typical traffic volumes, common protocols and ports, and timing patterns. When behavior deviates significantly from established baselines—such as internal hosts suddenly communicating with suspicious external IPs, unusual data transfer volumes, lateral movement patterns, or command-and-control beaconing—Stealthwatch generates security alerts. This behavioral approach detects threats that bypass traditional signature-based security because the detection is based on abnormal behavior rather than matching known attack signatures.
Stealthwatch architecture includes Flow Collectors that receive and store NetFlow data, the Stealthwatch Management Console for configuration and investigation, and FlowSensors that can generate enhanced NetFlow data. The system builds contextual understanding by enriching NetFlow data with identity information from ISE, vulnerability data from security scanners, and threat intelligence from external feeds. Analytics identify specific threat types including malware callbacks, data exfiltration, network reconnaissance, lateral movement, DDoS attacks, and policy violations. Investigations provide drill-down capabilities showing complete communication patterns, timelines, and relationships to understand security incidents. Stealthwatch integrates with other security tools enabling automated response workflows where detected threats trigger containment actions through firewalls, switches, or endpoint security platforms.
A) This is the correct answer. NetFlow data from network devices is the primary data source that Stealthwatch analyzes for anomaly detection. NetFlow provides comprehensive network traffic metadata enabling behavioral analysis to detect security threats. Stealthwatch correlates NetFlow records from across the network to build traffic baselines and identify anomalous patterns indicating compromise or policy violations. The NetFlow-based approach provides scalable network-wide visibility without requiring deep packet inspection or full packet capture, making Stealthwatch effective for large enterprise networks.
B) Syslog messages provide event logs from network devices, servers, and applications but are not Stealthwatch’s primary detection data source. While Stealthwatch can ingest syslog data to enrich context and correlate with NetFlow analytics, the core behavioral detection engine operates on NetFlow traffic data. Syslog provides device events and alerts but doesn’t provide the comprehensive traffic flow information that enables behavioral analysis of network communications. Syslog is supplementary rather than primary for Stealthwatch detection capabilities.
C) SNMP traps provide alerts about specific device conditions (interface down, high CPU, threshold violations) but don’t provide the traffic flow information needed for behavioral analysis. SNMP is valuable for device health monitoring and fault management but doesn’t reveal communication patterns between hosts, data transfer volumes, or the traffic behavioral indicators that Stealthwatch analyzes. SNMP and NetFlow serve different monitoring purposes with SNMP focused on device status and NetFlow providing traffic visibility for security analytics.
D) Email headers are analyzed by email security solutions for phishing detection and email threat analysis but are not relevant to Stealthwatch network behavior analytics. Stealthwatch operates at the network flow level analyzing IP communications rather than application-layer email content. Email security is addressed by dedicated email gateway security solutions that inspect message content, attachments, and headers for threats. Stealthwatch and email security serve different security domains with Stealthwatch providing network-level visibility and email gateways providing message-level protection.
Question 113:
A security engineer needs to configure Cisco ESA (Email Security Appliance) to quarantine emails containing suspicious attachments for administrator review before delivery. Which feature should be configured?
A) File reputation filtering with quarantine action
B) SPF validation only
C) TLS encryption
D) DLP policy scanning only
Answer: A
Explanation:
This question addresses email security and attachment threat prevention in Cisco Email Security Appliance. Email remains a primary attack vector with threats delivered through malicious attachments including malware, ransomware, and weaponized documents. ESA provides multiple security layers for email protection including anti-spam, anti-malware, attachment filtering, URL reputation, and content filtering. File reputation filtering specifically analyzes email attachments against threat intelligence to identify known malicious files and suspicious files that warrant additional review.
File reputation filtering in ESA uses Cisco Talos threat intelligence to evaluate email attachments based on their reputation scores. Each file is analyzed and assigned a reputation score indicating the likelihood of being malicious based on factors including file hash comparison against known malware databases, file type and characteristics, prevalence (how widely the file has been seen), source reputation, and behavioral analysis. Reputation scores range from verified malicious to verified clean, with suspicious scores indicating files that aren’t definitively malicious but exhibit concerning characteristics warranting caution.
Configuration involves enabling Advanced Malware Protection (AMP) for Email which provides file reputation analysis and sandboxing capabilities. Within AMP settings, administrators configure file reputation filtering policies specifying actions for different reputation thresholds. For suspicious attachments requiring review, the policy action should be «quarantine» which holds messages in ESA’s quarantine area preventing delivery to recipients until administrators review and either release or delete them. The quarantine provides safe containment where suspicious attachments can’t harm users while administrators investigate. Review workflows enable administrators to examine quarantined messages, view attachment analysis results, perform additional investigation if needed, and make informed decisions about releasing messages to intended recipients or permanently blocking them.
Additional ESA attachment security features complement file reputation including file analysis (sandboxing suspicious files in isolated environments to observe behavior), file type filtering (blocking high-risk file types like executables), outbreak filters (detecting emerging threats through behavioral analysis), and URL reputation (analyzing links in messages). Comprehensive email security combines these capabilities configured through mail policies that define which security features apply to inbound, outbound, and internal email. Policies can specify different treatment based on sender/recipient domains, message characteristics, and detected threat types. Quarantine management includes notification workflows alerting administrators of suspicious quarantined messages, scheduled quarantine reports summarizing held messages, and end-user quarantine access allowing users to review their own quarantined spam with administrator-controlled release capabilities.
A) This is the correct answer. File reputation filtering with quarantine action provides the capability to hold emails with suspicious attachments for administrator review before delivery. File reputation analyzes attachments against threat intelligence databases and assigns reputation scores. Configuring quarantine as the action for suspicious reputation scores prevents delivery while holding messages for administrator investigation and approval. This enables safe handling of potentially threatening attachments where uncertainty exists about maliciousness, providing security without automatically blocking potentially legitimate files.
B) SPF (Sender Policy Framework) validation verifies that sending mail servers are authorized to send email for claimed sender domains, helping prevent email spoofing and phishing. SPF operates on email envelope sender addresses and validates sending server authorization against DNS-published policies. While SPF is valuable for email authentication and spam prevention, it doesn’t analyze or make decisions about email attachments. SPF addresses email authentication rather than attachment threat detection, serving a different security purpose from attachment analysis and quarantine.
C) TLS (Transport Layer Security) encryption protects email in transit between mail servers, preventing eavesdropping and man-in-the-middle attacks on email communications. TLS encryption is important for email confidentiality and integrity during transmission but doesn’t analyze attachments for threats or quarantine suspicious messages. TLS addresses transport security while the question requires attachment threat detection and quarantine. Both are important email security features but serve different purposes with TLS protecting communications and reputation filtering protecting against attachment threats.
D) DLP (Data Loss Prevention) policy scanning in ESA prevents sensitive information from leaving the organization by scanning outbound email for confidential data patterns like credit card numbers, social security numbers, or classified information. DLP addresses data exfiltration prevention rather than inbound attachment threat detection. While DLP might quarantine outbound emails violating data policies, it doesn’t address the requirement of analyzing and quarantining inbound emails with suspicious attachments. DLP and file reputation filtering address opposite directions and different threats: DLP prevents data leakage outbound while file reputation protects against attachment threats inbound.
Question 114:
An administrator needs to configure Cisco Duo for multi-factor authentication protecting VPN access. Which authentication factors does Duo provide beyond username and password?
A) Push notifications, SMS codes, phone callbacks, and hardware tokens
B) Password only
C) Biometric iris scanning exclusively
D) Smart card certificates only
Answer: A
Explanation:
This question examines Cisco Duo’s multi-factor authentication (MFA) capabilities. MFA significantly strengthens security beyond password-only authentication by requiring users to provide multiple authentication factors: something they know (password), something they have (phone, token), or something they are (biometric). Duo is a cloud-based authentication service that integrates with existing applications, VPNs, and systems to add MFA without requiring complex infrastructure deployment. Duo’s strength lies in offering multiple authentication methods accommodating different user situations and preferences while maintaining security.
Duo provides several second-factor authentication methods beyond passwords. Push notifications sent to Duo Mobile app on smartphones are the most common and user-friendly method—users approve authentication requests with a single tap. SMS passcodes deliver one-time codes via text message to registered phones, useful when data connectivity isn’t available. Phone callbacks provide automated voice calls where users press a key to authenticate, suitable when smartphones aren’t available or policies restrict mobile apps. Hardware tokens generate time-based one-time passwords (TOTP) for users requiring physical token devices, addressing scenarios where smartphones aren’t permitted or available. Duo also supports U2F/WebAuthn security keys for phishing-resistant authentication.
This variety of authentication methods provides flexibility for different user populations and use cases. Road warriors might use push notifications on smartphones, office workers might use hardware tokens, and users in locations without cellular coverage might use phone callbacks. Organizations can configure which methods are available and can enforce specific method requirements based on risk profiles. Duo’s adaptive authentication adds intelligence by evaluating authentication context including device security posture, location, network, and user behavior patterns to require additional verification for high-risk scenarios while streamlining authentication for trusted situations.
Integration with VPN solutions is straightforward, with Duo supporting most VPN platforms including Cisco ASA, AnyConnect, Palo Alto, Fortinet, and others through a RADIUS proxy. The Duo Authentication Proxy acts as a RADIUS server between the VPN and the Duo cloud service, forwarding authentication requests to Duo for second-factor verification. The user authenticates to the VPN with a username and password, the VPN forwards the credentials to the Duo proxy, Duo prompts the user for second-factor verification via the chosen method, and once the user completes that step Duo returns success or failure to the VPN. The process is transparent to the VPN infrastructure and adds MFA security without modifying VPN configurations beyond pointing authentication to the Duo RADIUS proxy. Duo provides comprehensive reporting on authentication events, device security status, and user behaviors supporting security auditing and compliance requirements.
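As an illustrative sketch of that RADIUS-proxy integration, a Duo Authentication Proxy configuration file (authproxy.cfg) broadly follows the shape below. Every value shown, including the directory host, service account, integration key, secret key, API hostname, ASA address, and shared secret, is a placeholder, and the exact section and key names should be verified against Duo’s current documentation for the chosen VPN platform.

    ; Primary authentication against Active Directory (placeholder values)
    [ad_client]
    host=10.1.1.5
    service_account_username=duo_svc
    service_account_password=*****
    search_dn=DC=example,DC=com

    ; RADIUS listener the VPN points at; Duo adds the second factor
    [radius_server_auto]
    ikey=DIXXXXXXXXXXXXXXXXXX
    skey=*****
    api_host=api-xxxxxxxx.duosecurity.com
    radius_ip_1=10.1.1.1
    radius_secret_1=*****
    client=ad_client
    port=1812
    failmode=safe

On the VPN side, the AAA server configuration would then point at the proxy’s address and port 1812 instead of directly at the identity store, leaving the rest of the VPN configuration unchanged.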
A) This is the correct answer. Duo provides multiple second-factor authentication methods including push notifications to Duo Mobile app, SMS passcodes, phone callbacks, and hardware tokens. This variety ensures users can authenticate in different situations while organizations maintain strong MFA security. The multiple methods accommodate different user preferences, device availability, network conditions, and security policies. Duo’s flexible authentication options combined with easy integration make it widely deployed for protecting VPN access, cloud applications, and other resources requiring enhanced authentication security.
B) Password-only authentication is specifically what MFA is designed to improve beyond. Duo adds second-factor authentication on top of existing password authentication, providing «something you have» (phone, token) in addition to «something you know» (password). Password-only authentication is vulnerable to phishing, credential theft, and brute force attacks. Duo’s purpose is overcoming password-only limitations by requiring additional authentication factors that attackers can’t easily compromise even if passwords are stolen. Modern security standards and compliance frameworks increasingly require MFA rather than accepting password-only authentication.
C) Biometric iris scanning is a biometric authentication factor («something you are») but Duo doesn’t exclusively provide iris scanning. While Duo Mobile supports biometric authentication on smartphones (fingerprint, face recognition) for app access and can integrate with biometric capabilities, Duo’s primary second factors are device-based (push, SMS, call, token) rather than biometric-exclusive. Biometric authentication often supplements Duo Mobile app security but isn’t the primary second factor Duo provides. Additionally, iris scanning specifically isn’t a standard Duo capability and would be highly limiting as the sole authentication method.
D) Smart card certificates provide strong authentication through PKI-based cryptographic validation but aren’t Duo’s primary authentication method. Duo focuses on cloud-delivered MFA using phones and tokens rather than certificate-based authentication requiring PKI infrastructure. While Duo integrates with environments using certificates and can work alongside certificate authentication, it doesn’t exclusively rely on smart card certificates. Smart card implementations require significant infrastructure (certificate authorities, card readers, management systems) whereas Duo’s cloud-based approach using phones and tokens provides simpler deployment with comparable security.
Question 115:
A security administrator needs to configure Cisco Firepower to automatically block traffic from IP addresses exhibiting malicious behavior based on threat intelligence. Which feature should be enabled?
A) Security Intelligence filtering
B) Access control policies only
C) NAT translation
D) Interface statistics
Answer: A
Explanation:
This question addresses threat intelligence-based blocking in Cisco Firepower Threat Defense. Modern security architectures increasingly leverage threat intelligence—information about known malicious IP addresses, domains, and URLs compiled from global threat research, honeypots, incident analysis, and collaborative sharing. Using threat intelligence enables proactive blocking of communications with known-bad entities before deeper inspection occurs, improving security and reducing resource consumption by dropping malicious traffic early in processing.
Security Intelligence filtering in Firepower provides reputation-based blacklisting using Cisco Talos threat intelligence feeds. Talos continuously researches threats globally and maintains databases of IP addresses and domains associated with malware distribution, command-and-control servers, phishing sites, spam sources, and other malicious activities. Security Intelligence feeds are automatically updated providing real-time protection against emerging threats. When Security Intelligence is enabled, Firepower checks source and destination IP addresses and DNS queries against these threat intelligence lists before performing deeper packet inspection or access control policy evaluation.
Configuration involves enabling Security Intelligence through Firepower Management Center and selecting which threat intelligence feeds to use. Cisco provides multiple intelligence categories including malware sources, high-risk IP addresses, phishing URLs, known botnet controllers, and anonymization networks. Administrators can select relevant categories based on their threat landscape and risk tolerance. Additionally, custom Security Intelligence lists can be created containing organization-specific IP addresses or domains to block, supporting customized threat response. Security Intelligence also supports monitoring mode where matches are logged without blocking, useful for tuning before enforcement.
The benefits of Security Intelligence include early threat blocking (malicious traffic is blocked before consuming resources for deep inspection), reduced attack surface (preventing communications with known-bad entities reduces compromise risk), improved performance (blocking malicious traffic early reduces load on deeper inspection engines), and automatic updates (intelligence feeds update automatically without administrator intervention providing current protection). Security Intelligence works in conjunction with other Firepower features: it blocks known threats early while IPS, malware detection, and URL filtering provide additional protection layers for traffic that passes Security Intelligence checks. This layered approach provides defense-in-depth with reputation-based blocking as the first security gate complemented by signature-based and behavioral detection for comprehensive threat prevention.
A) This is the correct answer. Security Intelligence filtering automatically blocks traffic to and from IP addresses and domains known to be malicious based on Cisco Talos threat intelligence feeds. Security Intelligence provides reputation-based blocking operating early in traffic processing before access control evaluation, efficiently protecting against known threats. The feature updates automatically with latest intelligence ensuring current protection without manual administrator intervention. Configuration involves enabling Security Intelligence and selecting appropriate threat intelligence categories through Firepower Management Center.
B) Access control policies enforce permit/deny decisions and apply security inspection features but don’t inherently provide automatic threat intelligence-based blocking. While access control rules can manually block specific IPs or networks, they require manual configuration and don’t leverage dynamic threat intelligence feeds that update automatically. Access control policies are essential for traffic enforcement but Security Intelligence provides the automatic, intelligence-driven blocking based on threat reputation that the question requires. Both features work together with Security Intelligence providing intelligence-based blocking and access control providing policy enforcement.
C) NAT (Network Address Translation) translates IP addresses for routing purposes, typically translating private internal addresses to public addresses for Internet communication or translating inbound traffic to internal servers. NAT is a routing and address management function, not a security feature that blocks malicious traffic. NAT doesn’t analyze threat intelligence or make blocking decisions based on reputation. While NAT is often configured on security devices, it serves address translation purposes completely unrelated to threat intelligence-based blocking.
D) Interface statistics provide monitoring information about traffic volumes, packet counts, errors, and utilization on network interfaces. Statistics support capacity planning and troubleshooting but don’t provide security functions or blocking capabilities. Interface statistics are passive monitoring metrics that don’t analyze traffic for threats or make blocking decisions. Security Intelligence provides active threat prevention while interface statistics provide passive performance monitoring serving entirely different purposes.
Question 116:
An administrator needs to implement application visibility and control on Cisco ASA with FirePOWER Services to block social media applications. Which feature enables application-layer control?
A) Application filtering policies
B) Standard ACLs with port numbers only
C) Static routing only
D) DHCP snooping
Answer: A
Explanation:
This question examines application-layer visibility and control capabilities in Cisco ASA with FirePOWER Services. Traditional firewall policies operate on network parameters (IP addresses, ports, protocols) which are insufficient for controlling modern applications that use dynamic ports, encryption, or tunnel through standard ports like 80 and 443. Application visibility and control (AVC) technologies identify applications regardless of port or protocol through deep packet inspection analyzing application signatures, behavioral patterns, and heuristics.
Application filtering policies in ASA with FirePOWER Services provide Layer 7 application identification and control. FirePOWER inspects traffic to identify specific applications from thousands of recognized applications including social media (Facebook, Twitter, LinkedIn), collaboration tools, file sharing, gaming, instant messaging, and many others. Application detection operates independently of port numbers—for example, identifying Skype or BitTorrent traffic even when they use non-standard ports or HTTP tunneling. Once applications are identified, policies can allow, block, rate-limit, or apply security inspection based on application identity rather than relying on port-based controls that are easily circumvented.
Configuration involves enabling the FirePOWER Services module on the ASA and configuring application filtering through access control policies in Firepower Management Center (FMC) or, for locally managed modules, through ASDM. Application filter objects are created specifying the applications or application categories to control; to block social media, administrators would select individual social networking applications or the social networking category encompassing multiple services. Access control rules then reference these application filters to enforce blocking, allowing, or monitoring. Policies are granular, enabling different treatment for different applications: perhaps allowing LinkedIn for business purposes while blocking Facebook and Twitter.
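The next sketch models, in plain Python, the kind of policy objects just described: an application filter naming a category and specific applications, referenced by an access control rule with an exception for business use. The field names and evaluation logic are illustrative assumptions, not the actual FMC or ASDM object schema.

```python
# Conceptual sketch of an application filter referenced by an access rule.
# Field names are illustrative, not the real FMC/ASDM object model.

application_filter = {
    "name": "Block-Social-Media",
    "categories": ["social networking"],       # match a whole category...
    "applications": ["Facebook", "Twitter"],   # ...or specific applications
    "exceptions": ["LinkedIn"],                # allowed for business purposes
}

access_rule = {
    "name": "Deny-Social-Media",
    "action": "block",
    "application_filter": application_filter,
}

def evaluate(app_name: str, app_category: str, rule: dict) -> str:
    flt = rule["application_filter"]
    if app_name in flt["exceptions"]:
        return "allow"
    if app_name in flt["applications"] or app_category in flt["categories"]:
        return rule["action"]
    return "allow"  # in a real policy, evaluation would fall through to later rules

print(evaluate("Facebook", "social networking", access_rule))  # block
print(evaluate("LinkedIn", "social networking", access_rule))  # allow
```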
Application visibility provides insights into application usage across the network supporting capacity planning, security policy decisions, and acceptable use enforcement. Reports show which applications consume bandwidth, which users access specific applications, and trends over time. This visibility helps organizations understand their actual application landscape which often includes shadow IT and unapproved applications. Application control enables enforcement of acceptable use policies, blocking of high-risk applications that might introduce malware or data leakage, and bandwidth optimization by rate-limiting non-business applications. Modern security requires application-layer control because port-based policies are insufficient given how applications use encryption, port obfuscation, and tunneling to bypass traditional controls.
A) This is the correct answer. Application filtering policies provide application-layer visibility and control, enabling blocking of specific applications like social media regardless of the ports or protocols used. FirePOWER Services identifies applications through deep packet inspection and signature analysis, then enforces policies based on application identity. Configuring application filters to block the social networking category or specific social media applications provides the required control. This Layer 7 approach overcomes the limitations of port-based filtering, which can’t effectively control modern applications.
B) Standard ACLs with port numbers only operate at Layers 3/4, controlling traffic based on IP addresses and ports, but cannot provide application-layer visibility. Standard ACLs are the most basic firewall control and don’t even inspect port numbers (extended ACLs inspect ports; standard ACLs only inspect source IP addresses). Even extended ACLs with port numbers can’t reliably control applications because modern applications use dynamic ports, encryption, and tunneling. Social media applications often use HTTPS on port 443, making them indistinguishable from legitimate business HTTPS traffic when only port-based filtering is used. Application-layer inspection is required to identify and control specific applications.
C) Static routing defines forwarding paths for network traffic but provides no security filtering or application control. Routing determines where traffic is sent, while access control determines whether traffic is permitted. Routing and security are separate functions with routing directing permitted traffic and security policies determining what’s permitted. Static routes alone provide no visibility into applications and no blocking capabilities. Security policies including application filtering are required for controlling which applications are allowed, not routing configuration.
D) DHCP snooping is a Layer 2 security feature protecting against rogue DHCP servers and man-in-the-middle attacks. DHCP snooping builds a binding table of MAC addresses, IP addresses, VLAN IDs, and switch ports to validate DHCP transactions and prevent DHCP spoofing attacks. While DHCP snooping provides important Layer 2 security, it operates completely independently from application-layer control and doesn’t provide visibility into or control over applications like social media. DHCP snooping and application filtering address entirely different security concerns at different network layers.
Question 117:
A security engineer needs to configure Cisco Umbrella to provide granular control over YouTube access allowing educational content while blocking entertainment content. Which Umbrella feature enables this level of control?
A) Application-level controls with YouTube Restricted Mode
B) Complete domain blocking only
C) DNS forwarding only
D) DHCP relay only
Answer: A
Explanation:
This question addresses granular application control within Cisco Umbrella’s web security capabilities. While basic content filtering operates at the domain level (block youtube.com entirely), granular application controls provide finer control within specific applications or websites. This granularity is particularly valuable for educational institutions and businesses that want to allow productive use of platforms like YouTube while blocking non-business content. Application-level controls recognize that platforms serve multiple purposes and blanket blocking would eliminate legitimate business or educational use.
Umbrella’s application-level controls for YouTube integrate with YouTube’s own content filtering capabilities. YouTube Restricted Mode is YouTube’s built-in feature that filters out potentially inappropriate content including mature themes, violence, and age-restricted videos, while allowing educational and appropriate content. Umbrella can enforce YouTube Restricted Mode at the DNS/proxy level ensuring users cannot disable it in their browser settings. This provides centralized policy enforcement where organizational requirements override individual user preferences.
Implementation involves configuring Umbrella application settings within destination lists or application controls. When YouTube is encountered, rather than completely blocking or allowing the domain, Umbrella enforces YouTube Restricted Mode by manipulating HTTPS requests or DNS responses to trigger YouTube’s own filtering. This happens transparently to users, who still reach YouTube but see only filtered content. Additional controls might include SafeSearch enforcement for Google searches, Bing SafeSearch for Microsoft searches, and similar enforced filtering for other platforms. Organizations can combine these controls with time-based policies (perhaps allowing broader access during lunch breaks while restricting access during work hours) and user- or group-specific policies (different controls for different departments).
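The sketch below illustrates one common DNS-level approach: answering queries for YouTube hostnames with Google’s documented Restricted Mode endpoint (restrict.youtube.com) so the platform applies its own content filtering. Treat the hostname list, the target, and the resolver logic as a simplified assumption about how such enforcement can work, not as Umbrella’s internal implementation.

```python
# Conceptual sketch: a filtering resolver that rewrites answers for YouTube
# hostnames to Google's Restricted Mode endpoint so YouTube applies its own
# content filtering. Hostnames/target follow Google's documented DNS-based
# enforcement; the surrounding logic is illustrative only.

YOUTUBE_HOSTS = {"www.youtube.com", "m.youtube.com", "youtubei.googleapis.com"}
RESTRICTED_MODE_TARGET = "restrict.youtube.com"  # "restrictmoderate.youtube.com" is the lighter tier

def resolve(qname: str, policy: dict) -> str:
    qname = qname.lower().rstrip(".")
    if policy.get("enforce_youtube_restricted_mode") and qname in YOUTUBE_HOSTS:
        # Return a CNAME-style redirection instead of the normal answer, so the
        # user still reaches YouTube but only sees Restricted Mode content.
        return f"CNAME {RESTRICTED_MODE_TARGET}"
    return "A <normal upstream answer>"

policy = {"enforce_youtube_restricted_mode": True}
print(resolve("www.youtube.com", policy))   # CNAME restrict.youtube.com
print(resolve("example.com", policy))       # unchanged resolution
```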
The benefits of application-level controls include balancing security with productivity (users can access needed resources while inappropriate content is blocked), educational value (students can access educational videos while being protected from inappropriate material), bandwidth optimization (blocking entertainment content reduces non-business bandwidth consumption), and compliance (organizations can demonstrate due diligence in protecting users from inappropriate content). This approach is more sophisticated than all-or-nothing blocking, recognizing that modern platforms serve legitimate business purposes alongside entertainment or risky content. Effective security policies enable business productivity while maintaining appropriate controls and protection.
A) This is the correct answer. Application-level controls with YouTube Restricted Mode provide granular control over YouTube content allowing educational material while blocking entertainment and inappropriate content. Umbrella enforces YouTube Restricted Mode centrally preventing users from disabling filtering and ensuring consistent content policy enforcement. This approach enables productive YouTube use for business or education while blocking non-business content. Configuration involves enabling application controls for YouTube within Umbrella policy settings.
B) Complete domain blocking would block all access to youtube.com preventing both educational and entertainment content access. While complete blocking is simple, it eliminates legitimate educational or business use of the platform. Many organizations have valid reasons for YouTube access including training videos, how-to guides, product demonstrations, and educational content. Complete blocking fails to provide the granularity required to allow educational content while blocking entertainment, taking an overly restrictive approach that hampers productivity and learning.
C) DNS forwarding is a basic DNS service functionality that forwards queries from clients to upstream DNS servers but doesn’t provide content filtering or application-level controls. DNS forwarding handles query resolution without inspecting or controlling content within websites or applications. While Umbrella operates as a secure DNS service, the DNS forwarding function alone doesn’t provide the granular YouTube content filtering required. Application controls are needed in addition to DNS service functionality to enable content-specific filtering within platforms like YouTube.
D) DHCP relay is a network function that forwards DHCP requests from clients to DHCP servers across network boundaries, enabling centralized DHCP management in multi-subnet environments. DHCP relay is completely unrelated to web content filtering or YouTube access control. DHCP handles IP address assignment while Umbrella provides security filtering. These are independent functions serving different purposes with DHCP addressing network configuration and Umbrella addressing security and content filtering.
Question 118:
An administrator needs to configure Cisco AMP (Advanced Malware Protection) for Endpoints to perform retrospective security analysis after files have been initially classified as clean. Which AMP feature enables this capability?
A) Continuous analysis and retrospective alerting
B) Static signature matching only
C) SNMP monitoring
D) Port mirroring
Answer: A
Explanation:
This question examines one of Cisco AMP for Endpoints’ most powerful capabilities: retrospective security analysis. Traditional anti-malware solutions perform point-in-time analysis when files are first encountered—if a file is deemed clean at initial scan, no further action occurs even if the file is later discovered to be malicious. This creates a security gap because sophisticated malware often evades initial detection through zero-day exploits, polymorphism, or simply because threat intelligence hasn’t yet identified the threat. Retrospective analysis addresses this gap by continuously monitoring files and updating verdicts as new intelligence becomes available.
AMP for Endpoints uses cloud-based continuous analysis where file dispositions are dynamic rather than static. When endpoints encounter files, AMP calculates cryptographic hashes (SHA-256) and queries the AMP cloud for file reputation. Even after files are initially classified as clean and allowed to execute, AMP continues tracking those files and their behavior. If threat intelligence later identifies a previously-clean file as malicious (perhaps the file was used in attacks discovered elsewhere, or behavioral analysis revealed malicious intent), AMP automatically updates the file’s reputation and generates retrospective security alerts for all endpoints that encountered the file.
Retrospective alerting notifies security teams when files previously allowed on endpoints are later identified as threats. These alerts include complete file trajectory information showing where files came from, which endpoints received them, what actions they performed, and what other files or processes they interacted with. This provides complete forensic context enabling effective incident response. Security teams can use trajectory data to understand infection scope, identify patient zero, determine what data might have been accessed, and remediate all affected systems. Retrospective analysis transforms endpoint security from point-in-time protection to continuous security assessment.
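The following minimal sketch models the retrospective idea described in the last two paragraphs: a SHA-256 disposition recorded when a file is first seen, a trajectory of which endpoints encountered it, and an alert raised for all of them when the cloud verdict later flips to malicious. The class-free structure, field names, and functions are illustrative assumptions, not the AMP connector or its API.

```python
# Conceptual sketch: point-in-time verdict plus retrospective re-evaluation.
# Names and structures are illustrative, not the AMP connector or cloud API.
import hashlib
from collections import defaultdict

cloud_dispositions = {}               # sha256 -> "clean" | "malicious"
file_trajectory = defaultdict(list)   # sha256 -> [(endpoint, action), ...]

def on_file_seen(endpoint: str, content: bytes) -> str:
    sha256 = hashlib.sha256(content).hexdigest()
    verdict = cloud_dispositions.get(sha256, "clean")   # initial cloud lookup
    file_trajectory[sha256].append((endpoint, "executed"))
    return verdict

def on_cloud_verdict_change(sha256: str, new_verdict: str) -> None:
    # New intelligence arrives later; the verdict is updated and a
    # retrospective alert is raised for every endpoint in the trajectory.
    cloud_dispositions[sha256] = new_verdict
    if new_verdict == "malicious":
        for endpoint, action in file_trajectory[sha256]:
            print(f"RETROSPECTIVE ALERT: {sha256[:12]}... {action} on {endpoint}")

sample = b"dropper payload"
digest = hashlib.sha256(sample).hexdigest()
on_file_seen("host-A", sample)                 # allowed: looked clean at the time
on_file_seen("host-B", sample)
on_cloud_verdict_change(digest, "malicious")   # alerts for host-A and host-B
```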
The implementation requires AMP connector software installed on endpoints (Windows, Mac, Linux, mobile devices) that monitors file activity and communicates with the AMP cloud. The connector uses local analysis engines for immediate threat prevention while continuously synchronizing with cloud intelligence. The cloud architecture enables massive-scale analysis, processing millions of samples globally and sharing intelligence across all customers (while respecting privacy: only hashes and metadata are shared, not actual files). When new threats are identified anywhere in the AMP ecosystem, all customers benefit from the updated intelligence. Retrospective analysis considers not just individual file reputation but also behavioral indicators such as unusual network connections, registry modifications, or process relationships that suggest malicious intent even when individual components appear benign.
A) This is the correct answer. Continuous analysis and retrospective alerting enable AMP to update file verdicts after initial classification and alert security teams when previously-clean files are later identified as malicious. This capability closes the security gap where sophisticated threats evade initial detection but are later discovered through behavioral analysis or global intelligence sharing. Retrospective alerts include complete file trajectory showing the file’s history enabling effective incident response and remediation across all affected endpoints.
B) Static signature matching performs one-time analysis comparing files against known malware signatures but doesn’t provide ongoing monitoring or retrospective analysis. Once a file passes static signature checks, it is considered clean and receives no further evaluation, even if later intelligence reveals it to be malicious. Static signature matching represents the traditional anti-malware technology that AMP specifically evolved beyond through cloud-based continuous analysis. While signature matching remains one detection technique AMP uses, it is complemented by behavioral analysis, sandboxing, machine learning, and, critically, continuous retrospective analysis that signature-only solutions lack.
C) SNMP (Simple Network Management Protocol) monitoring collects device performance and status information from network infrastructure including utilization, errors, and health metrics. SNMP supports network device management but is completely unrelated to endpoint malware detection or retrospective file analysis. SNMP operates at network device level while AMP operates at endpoint file level. These are different security and management domains with SNMP addressing network management and AMP addressing endpoint threat prevention and forensics.
D) Port mirroring (SPAN) copies network traffic from one or more switch ports to a monitoring port for analysis by external tools like IDS/IPS or packet analyzers. Port mirroring provides network visibility for security monitoring but doesn’t perform endpoint file analysis or retrospective security assessment. Port mirroring operates at network traffic level while AMP operates at endpoint file and process level. These are complementary security techniques operating at different layers with port mirroring providing network visibility and AMP providing endpoint visibility and protection.
Question 119:
A security administrator needs to configure Cisco ISE to enforce compliance by checking that endpoints have required antivirus software and current patches before granting network access. Which ISE service provides this capability?
A) Posture assessment
B) Guest services only
C) Device administration
D) pxGrid services
Answer: A
Explanation:
This question addresses Cisco ISE’s posture assessment capabilities for enforcing endpoint compliance with organizational security policies. Network access control (NAC) evolved from simple authentication (who are you?) to include authorization (what can you access?) and posture assessment (is your device compliant with security requirements?). Posture assessment ensures endpoints meet minimum security standards before accessing network resources, reducing the risk of compromised or insecure devices introducing threats to the network.
Posture assessment in ISE evaluates endpoint compliance with security policies by checking for required software, configurations, and security states. Common posture checks include antivirus installed and up-to-date with current definitions, operating system patches current, personal firewall enabled, disk encryption enabled, specific applications installed or removed, registry settings configured correctly, and services running or stopped as required. ISE uses agent-based or agentless methods to perform these checks depending on endpoint capabilities and organizational policies.
Agent-based posture uses the AnyConnect ISE Posture module (or the legacy NAC Agent) installed on endpoints. These agents perform detailed posture checks, reading local system information about installed software versions, patch levels, antivirus definitions, security settings, and more. Agents communicate posture status to ISE, which evaluates compliance against defined posture policies. Agentless posture uses web-based checks or a temporary (dissolvable) agent delivered through the client provisioning portal for devices that cannot run a permanent agent (personal devices, guests, IoT). Agentless checks are less comprehensive than agent-based checks but provide basic compliance verification.
Posture policies define compliance requirements and remediation actions. Administrators create posture conditions specifying requirements (antivirus must be installed, definition file less than 7 days old, Windows 10 must have specific KB patch installed). Posture policies combine conditions with remediation actions: compliant endpoints receive full network access, non-compliant endpoints are placed in quarantine VLANs with restricted access to remediation resources, and optionally automatic remediation can install missing software or updates. Endpoints remain in quarantine until they achieve compliance, then ISE triggers Change of Authorization (CoA) to dynamically move them to appropriate access VLANs. This creates a strong security boundary ensuring only compliant devices access sensitive network resources.
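The sketch below reduces that flow to a few lines of Python: a set of posture conditions, a compliance decision, and the resulting access level (full access versus quarantine pending remediation and a CoA). The condition names, thresholds, patch identifier, and VLAN labels are illustrative assumptions, not ISE’s data model.

```python
# Conceptual sketch: posture conditions -> compliance decision -> access level.
# Thresholds, VLAN names, and the patch ID are illustrative placeholders.

POSTURE_CONDITIONS = [
    ("antivirus_installed",   lambda ep: ep["av_installed"]),
    ("av_definitions_fresh",  lambda ep: ep["av_definition_age_days"] <= 7),
    ("required_patch_present", lambda ep: "KB-EXAMPLE-PATCH" in ep["installed_patches"]),
]

def assess_posture(endpoint: dict) -> dict:
    failed = [name for name, check in POSTURE_CONDITIONS if not check(endpoint)]
    if failed:
        # Non-compliant: restricted access to remediation resources only;
        # after remediation, a Change of Authorization (CoA) re-evaluates access.
        return {"compliant": False, "vlan": "QUARANTINE", "failed": failed}
    return {"compliant": True, "vlan": "CORP-ACCESS", "failed": []}

laptop = {"av_installed": True, "av_definition_age_days": 12, "installed_patches": []}
print(assess_posture(laptop))  # quarantined until definitions and patch are remediated
```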
A) This is the correct answer. Posture assessment provides endpoint compliance checking evaluating whether devices meet security requirements including antivirus presence and updates, patch levels, and configurations before granting network access. ISE’s posture service uses agents or agentless methods to assess endpoints against defined compliance policies and enforces appropriate access based on compliance status. Non-compliant endpoints are quarantined with restricted access until remediated, while compliant endpoints receive full access, creating effective endpoint security enforcement.
B) Guest services in ISE provide network access workflows for visitors and temporary users including self-registration portals, sponsor approval processes, account creation with time limits, and access tracking. Guest services focus on providing appropriate temporary access to non-employees rather than enforcing endpoint compliance. While guest access might include basic compliance checks, guest services are distinct from posture assessment which specifically enforces detailed endpoint security compliance. Guests typically receive limited access to segmented guest networks rather than access to internal resources requiring full posture compliance.
C) Device administration in ISE provides centralized management of network device access controlling which administrators can access network infrastructure (routers, switches, firewalls) and what commands they can execute. TACACS+ support enables command authorization, accounting, and comprehensive auditing of administrative actions. Device administration secures network infrastructure access but doesn’t assess endpoint compliance. Device administration addresses administrative access control while posture assessment addresses endpoint security compliance—different services serving different security requirements.
D) pxGrid (Platform Exchange Grid) is ISE’s API framework enabling security ecosystem integration where ISE shares context and security information with other security platforms. pxGrid enables ISE to publish session information, threat intelligence, and user context to integrated security tools, and consume threat detection from those tools to trigger response actions. While pxGrid enhances overall security through integration, it’s not the service that performs endpoint posture assessment. pxGrid facilitates information sharing while posture assessment performs actual endpoint compliance checking.
Question 120:
An administrator needs to configure Cisco FTD to prevent command and control (C2) communications from compromised hosts to external botnet controllers. Which security feature should be enabled?
A) Intrusion Prevention System with botnet detection rules
B) Static routing only
C) NTP synchronization
D) DHCP services
Answer: A
Explanation:
This question addresses detecting and preventing command and control communications which are critical for botnet operations and advanced persistent threats. After malware infects endpoints, it typically attempts to communicate with external command and control servers to receive instructions, download additional payloads, exfiltrate stolen data, or coordinate with other infected hosts in distributed attacks. Preventing C2 communications can neutralize infections by isolating compromised hosts from attacker infrastructure even when the initial infection wasn’t prevented.
Intrusion Prevention System in Firepower Threat Defense provides signature-based and behavioral detection of malicious network activity including command and control communications. IPS uses thousands of signatures (Snort rules) maintained by Cisco Talos that identify attack patterns, exploit attempts, malware communications, and other threats. Specific signature categories target botnet detection including rules that recognize communication patterns, protocols, encoding methods, beaconing behaviors, and specific C2 infrastructure associated with known malware families and threat actors.
Botnet detection operates through multiple techniques: signature matching identifies known C2 protocols and communication patterns used by specific malware families; behavioral analysis detects the anomalous periodic beaconing typical of C2 keep-alive traffic; reputation-based blocking leverages threat intelligence about known C2 server IP addresses and domains; protocol analysis detects malware using legitimate protocols (HTTP, DNS, IRC) for C2 tunneling; and heuristics identify suspicious characteristics like unusual user agents, uncommon ports, encrypted payloads in unexpected protocols, or communication patterns inconsistent with normal traffic.
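One of those techniques, beacon detection, can be sketched very simply: C2 implants often call home at nearly fixed intervals, so many connections to one destination with very low variance in their spacing look suspicious. The threshold values and function below are illustrative assumptions, not the heuristics actually used by Snort or FTD.

```python
# Conceptual sketch: flag periodic "beaconing" to one destination by looking
# for many connections with very regular spacing. Thresholds are illustrative.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         min_connections: int = 10,
                         max_jitter_ratio: float = 0.1) -> bool:
    if len(timestamps) < min_connections:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    # Nearly constant intervals (low standard deviation relative to the mean)
    # resemble C2 keep-alive traffic rather than human-driven browsing.
    return avg > 0 and pstdev(intervals) / avg < max_jitter_ratio

# 12 connections to the same host, almost exactly every 60 seconds.
beacon_times = [i * 60.0 + (0.5 if i % 2 else 0.0) for i in range(12)]
print(looks_like_beaconing(beacon_times))   # True: candidate C2 beacon
```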
Configuration involves enabling IPS through access control policies, selecting appropriate intrusion policies that include botnet detection rules (Talos provides preconfigured policies optimized for different security postures), and configuring IPS to prevent (block) rather than only detect malicious traffic. Within intrusion policies, specific rule categories can be enabled, including malware-cnc (command and control) rules, blacklist rules blocking known malicious IPs, and protocol analysis rules detecting protocol abuse for C2 tunneling. Administrators can tune IPS sensitivity, balancing security (more aggressive blocking) against false positives (blocking legitimate traffic); tuning often involves monitoring initially in detect-only mode, analyzing alerts, creating exception rules for false positives, and then enabling prevention mode. IPS operates inline, analyzing traffic passing through FTD, and can drop malicious sessions, reset connections, or log events while allowing traffic, providing flexible response options based on confidence levels.
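To make that tuning workflow concrete, this last sketch models rule categories with per-category states (alert only while tuning versus drop once validated) and the resulting inline verdict. The category names echo the ones mentioned above; the structure itself is an illustrative assumption, not the Snort or FTD rule engine.

```python
# Conceptual sketch: an intrusion policy as rule categories with states.
# "alert" logs only (detect mode); "drop" blocks inline (prevent mode).
from typing import Optional

intrusion_policy = {
    "malware-cnc": "drop",     # command-and-control signatures
    "blacklist": "drop",       # known-bad IP/domain rules
    "policy-other": "alert",   # still being tuned: log, don't block yet
}

def inline_verdict(matched_category: Optional[str], policy: dict) -> str:
    if matched_category is None:
        return "pass"
    state = policy.get(matched_category, "disabled")
    if state == "drop":
        return "drop session / reset connection"
    if state == "alert":
        return "allow, but log intrusion event"
    return "pass"

print(inline_verdict("malware-cnc", intrusion_policy))   # blocked inline
print(inline_verdict("policy-other", intrusion_policy))  # logged while tuning
print(inline_verdict(None, intrusion_policy))            # clean traffic passes
```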
A) This is the correct answer. Intrusion Prevention System with botnet detection rules identifies and blocks command and control communications through signature matching, behavioral analysis, and reputation-based blocking. IPS includes specific rule categories targeting C2 traffic patterns and known botnet infrastructure enabling detection and prevention of attacker communications with compromised hosts. Enabling IPS with appropriate policies containing botnet detection capabilities provides effective protection against C2 communications, neutralizing infections by preventing external control even when malware executes on endpoints.
B) Static routing defines forwarding paths for network traffic determining where packets are sent but provides no security inspection or threat detection. Routing directs permitted traffic to destinations but doesn’t analyze traffic for malicious content or block command and control communications. Static routes are necessary for network connectivity but are completely unrelated to security functions like botnet detection. Security services including IPS are required to inspect traffic and identify threats, while routing simply directs traffic without inspecting it for threats.
C) NTP (Network Time Protocol) synchronization ensures devices maintain accurate time which is important for logging, certificate validation, and event correlation but doesn’t provide security inspection or botnet detection. While accurate time is a prerequisite for many security functions (certificate validation requires correct time, log correlation requires synchronized timestamps), NTP itself doesn’t detect or prevent threats. NTP supports security operations but doesn’t perform threat detection or prevention, serving an important but different purpose from botnet detection.
D) DHCP (Dynamic Host Configuration Protocol) services automatically assign IP addresses and network configuration to endpoints but don’t provide security inspection or threat detection. DHCP operates at network configuration layer while threat detection operates at traffic inspection layer. DHCP ensures endpoints obtain appropriate network addressing but has no role in inspecting traffic for command and control communications or other threats. DHCP and IPS serve completely different functions with DHCP providing network configuration and IPS providing security inspection and threat prevention.