Cisco 350-401: Implementing Cisco Enterprise Network Core Technologies (ENCOR) Exam Dumps and Practice Test Questions, Set 7 (Q91-105)

Question 91

Which protocol is used by Layer 2 and Layer 3 devices to share device information in a Cisco network, including device ID, capabilities, and IP address?

A) CDP
B) LLDP
C) OSPF
D) EIGRP

Answer: A) CDP

Explanation:

LLDP is a vendor-neutral discovery protocol similar to CDP but designed to work across multiple manufacturers’ devices, making it useful in multi-vendor environments. OSPF is a link-state routing protocol that allows routers to exchange network topology information to calculate shortest paths, but it does not provide device-level information such as device type or IP address. EIGRP is a hybrid routing protocol used for efficient path selection within a single autonomous system but is not intended for neighbor device discovery. CDP, or Cisco Discovery Protocol, is a proprietary protocol used by Cisco devices to share information about directly connected neighbors at Layer 2. CDP packets are sent periodically on all supported interfaces, allowing devices to discover each other, including model, device ID, IP address, and capabilities (router, switch, or phone). Administrators can leverage CDP for network topology mapping, verifying connectivity, troubleshooting, and planning expansions. CDP enables commands such as show cdp neighbors to display connected devices and interface details. Because CDP operates at Layer 2, it can discover devices even if IP addresses are not configured, making it valuable during initial deployment. It also supports management tools to maintain accurate network documentation and helps identify misconfigurations or unauthorized devices. CDP is essential in Cisco environments for network visibility and efficient monitoring. Therefore, the correct answer is CDP because it allows Layer 2 and Layer 3 devices to share detailed neighbor information, facilitating network management, troubleshooting, and planning.
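As a sketch of what the `show cdp neighbors` command described above returns (device names, platforms, and interface numbers here are illustrative; exact output varies by platform and IOS version):

```
Switch# show cdp neighbors
Capability Codes: R - Router, S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone

Device ID        Local Intrfce     Holdtme    Capability  Platform   Port ID
CORE-R1          Gig 0/1           162          R S I     ISR4331    Gig 0/0/1
ACCESS-SW2      Gig 0/2           135          S I       WS-C2960   Gig 1/0/24
```

Adding the `detail` keyword (`show cdp neighbors detail`) also displays each neighbor's IP address and software version, which is the device-level information the question refers to.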

Question 92

Which protocol is used for monitoring network devices by collecting performance metrics such as CPU, memory, and interface statistics?

A) SNMP
B) TACACS+
C) RADIUS
D) ICMP

Answer: A) SNMP

Explanation:

TACACS+ is used for AAA (Authentication, Authorization, and Accounting) and controls user access to devices but does not provide performance metrics. RADIUS is also an AAA protocol for authentication and accounting but is not used for network monitoring. ICMP is used for network diagnostics, such as ping and traceroute, but does not collect detailed device metrics. SNMP, or Simple Network Management Protocol, is the standard for monitoring and managing network devices such as switches, routers, firewalls, and servers. SNMP uses a manager-agent model where the agent on each device collects and provides statistics to an SNMP manager. Metrics include CPU utilization, memory usage, interface throughput, error rates, uptime, and device health. SNMP supports polling for real-time monitoring, traps to alert managers of significant events, and secure communication via SNMPv3. Enterprises rely on SNMP for proactive monitoring, performance analysis, capacity planning, and rapid troubleshooting. Tools such as network monitoring software use SNMP data to visualize network performance, detect anomalies, and maintain uptime. Therefore, the correct answer is SNMP because it enables detailed monitoring and management of network devices, ensuring optimal performance, reliability, and proactive issue detection.
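A minimal IOS sketch of the SNMPv3 setup described above, assuming a hypothetical manager at 192.0.2.50 (group name, username, and credentials are examples, not defaults):

```
! Define an SNMPv3 group requiring authentication and encryption
snmp-server group NMS-GROUP v3 priv
! Create a user in that group with SHA authentication and AES-128 privacy
snmp-server user nms-user NMS-GROUP v3 auth sha AuthPass123 priv aes 128 PrivPass123
! Send traps securely to the SNMP manager
snmp-server host 192.0.2.50 version 3 priv nms-user
snmp-server enable traps
```

The `priv` security level enforces both authentication and encryption, which is what makes SNMPv3 suitable for the secure communication mentioned above.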

Question 93

Which IPv6 address type is designed for one-to-many communication, allowing packets to reach multiple devices in a group?

A) Unicast
B) Multicast
C) Anycast
D) Link-local

Answer: B) Multicast

Explanation:

Unicast addresses are for one-to-one communication, delivering packets to a single device. Anycast addresses allow packets to reach the nearest of multiple devices sharing the same address, not all members of a group. Link-local addresses are confined to a subnet for local communication and are not used for group delivery. Multicast addresses, however, enable efficient one-to-many communication, where packets are delivered to all devices that have joined a specific multicast group. In IPv6, multicast is essential because traditional broadcast addresses are removed. IPv6 multicast addresses are identified by the prefix ff00::/8 and are used in protocols such as neighbor discovery, routing updates, and streaming media. Multicast reduces network traffic compared to sending multiple unicast messages by ensuring only interested devices receive the traffic. It improves bandwidth efficiency, network scalability, and performance. Enterprise networks use multicast extensively for routing protocol updates, media distribution, and targeted network communication. Therefore, the correct answer is Multicast because it enables packets to reach all devices in a designated group efficiently, improving network performance and scalability.
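A few well-known groups illustrate the ff00::/8 multicast range described above (these assignments are standard, not specific to any one network):

```
ff02::1              all nodes on the local link
ff02::2              all routers on the local link
ff02::5              all OSPFv3 routers
ff02::a              all EIGRP routers
ff02::1:ff00:0/104   solicited-node prefix (IPv6 neighbor discovery)
```

The ff02:: scope is link-local, which is why routing-protocol hellos and neighbor discovery use these groups without the traffic leaving the local segment.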

Question 94

Which protocol allows multiple routers to share a single virtual IP and provide seamless default gateway redundancy?

A) HSRP
B) VRRP
C) GLBP
D) STP

Answer: A) HSRP

Explanation:

VRRP is a standards-based alternative to HSRP that provides similar gateway redundancy but is not Cisco-proprietary. GLBP allows load balancing along with redundancy but is less widely implemented than HSRP. STP prevents Layer 2 loops but does not provide default gateway redundancy. HSRP, or Hot Standby Router Protocol, is a Cisco proprietary protocol designed to provide redundancy for default gateways. Multiple routers form an HSRP group, sharing a virtual IP and MAC address. One router is elected as active, forwarding all traffic, while others remain in standby. If the active router fails, a standby router seamlessly takes over without requiring host reconfiguration, ensuring continuous network access. HSRP version 2 (HSRPv2) adds support for millisecond timers and an expanded group range, enabling faster failover and improving reliability for enterprise networks. HSRP ensures uninterrupted communication in VLANs by eliminating a single point of failure at the gateway. Therefore, the correct answer is HSRP because it enables multiple routers to share a virtual IP for gateway redundancy, ensuring high availability and seamless failover.
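A minimal sketch of the HSRP group described above, on one of the participating routers (VLAN, addresses, and group number are examples):

```
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby version 2
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
```

Hosts on VLAN 10 would use the virtual IP 10.1.10.1 as their default gateway; the router with the highest priority becomes active, and `preempt` lets it reclaim the active role after recovering from a failure.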

Question 95

Which Cisco feature allows multiple physical switches to operate as a single logical switch, simplifying management and providing redundancy?

A) EtherChannel
B) StackWise
C) VTP
D) HSRP

Answer: B) StackWise

Explanation:

EtherChannel aggregates multiple physical links into a single logical link, providing bandwidth and redundancy but not unifying multiple switches. VTP propagates VLAN configuration across switches but does not create a single logical switch. HSRP provides gateway redundancy but does not consolidate switch management. StackWise interconnects multiple physical switches to operate as a single logical switch. One switch acts as the master, controlling configuration, management, and the control plane, while member switches operate under the master’s control. This allows centralized management, simplified configuration, and a single IP address for administration. Ports from different switches can be used interchangeably, improving flexibility and fault tolerance. If one switch fails, the remaining stack members continue forwarding traffic, ensuring high availability. StackWise simplifies operations, reduces configuration errors, and enhances network scalability and reliability. Therefore, the correct answer is StackWise because it allows multiple switches to operate as a single logical switch, providing simplified management, redundancy, and high availability.
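A sketch of how a healthy stack appears with `show switch` (MAC addresses and priorities are illustrative; column names and role labels vary by platform, e.g. Master/Member on Catalyst 3750 versus Active/Standby on Catalyst 9300):

```
Switch# show switch
Switch/Stack Mac Address : 70b3.1234.5600

Switch#  Role     Mac Address      Priority  State
*1       Active   70b3.1234.5600   15        Ready
 2       Standby  70b3.1234.5700   14        Ready
 3       Member   70b3.1234.5800   1         Ready
```

The asterisk marks the switch you are consoled into; all three switches are configured and monitored through the one management IP of the stack.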

Question 96

Which protocol allows routers to dynamically exchange routing information using a distance-vector method with hop count as the primary metric?

A) OSPF
B) RIP
C) EIGRP
D) BGP

Answer: B) RIP

Explanation:

In the world of networking, selecting the appropriate routing protocol is critical for ensuring efficient and reliable data delivery across a network. Routing protocols can be broadly classified into categories such as distance-vector, link-state, hybrid, and path-vector protocols, each with its own operational characteristics and use cases. Among these, the Routing Information Protocol, commonly referred to as RIP, is one of the oldest and simplest distance-vector protocols still in use today. Understanding why RIP fits into the distance-vector category requires examining how it operates in comparison to other protocols like OSPF, EIGRP, and BGP.

Open Shortest Path First (OSPF) is a link-state routing protocol, which fundamentally differs from distance-vector protocols. OSPF routers build a complete map of the network topology by exchanging Link-State Advertisements (LSAs) with all other routers in the area. Each router independently uses Dijkstra’s shortest path algorithm to calculate optimal paths to every network destination. Because OSPF maintains a full network topology and performs independent path computation, it is not classified as a distance-vector protocol. It is highly suitable for medium to large enterprise networks due to its scalability, fast convergence, and support for hierarchical network design through the use of areas.

Enhanced Interior Gateway Routing Protocol (EIGRP) is often described as a hybrid protocol because it combines characteristics of distance-vector and link-state protocols. EIGRP uses a composite metric that includes bandwidth, delay, reliability, and load to determine the best path to a destination. It also maintains a topology table containing information about all known routes but does not maintain a full network map like OSPF. EIGRP ensures rapid convergence and loop-free routing using the Diffusing Update Algorithm (DUAL). Despite its advantages, EIGRP is not purely distance-vector because it considers multiple metrics and maintains additional routing information beyond simple hop counts.

Border Gateway Protocol (BGP) operates as a path-vector protocol and is primarily used for routing between autonomous systems on the Internet. BGP does not use hop count as a metric; instead, it selects routes based on path attributes, policies, and the Autonomous System path. While BGP is highly scalable and essential for inter-domain routing, it functions differently from distance-vector protocols and is not typically used for internal network routing within a single organization.

In contrast, RIP exemplifies the core principles of a distance-vector protocol. In RIP, routers maintain a simple routing table containing the next hop and hop count to each known destination network. Routers periodically exchange their full routing tables with their directly connected neighbors. The primary metric used by RIP to determine the best path to a destination is the hop count, which is a straightforward measure of the number of routers a packet must traverse to reach its destination. The simplicity of this metric makes RIP easy to configure and understand, particularly for small networks where complexity is minimal.

RIP has a maximum hop count of 15, which effectively limits its use to small networks, as any network farther than 15 hops is considered unreachable. RIP version 1 is classful, meaning it does not support variable-length subnet masks, whereas RIP version 2 introduced classless routing, allowing the use of variable-length subnet masks and supporting authentication for added security. Despite its limitations, RIP remains relevant in certain scenarios, such as legacy systems or small-scale networks where ease of configuration and minimal overhead are prioritized over advanced features and scalability.

Although modern enterprise networks often rely on OSPF or EIGRP for faster convergence, enhanced scalability, and multiple metrics, RIP continues to serve as a simple and reliable option for specific use cases. Its distance-vector approach, reliance on hop count, and predictable behavior make it a foundational protocol for understanding the principles of routing.

RIP is classified as a distance-vector protocol because it relies on hop count as the primary metric, exchanges full routing tables with neighbors periodically, and makes routing decisions based solely on the information received from adjacent routers. While other protocols like OSPF, EIGRP, and BGP provide more advanced routing capabilities and metrics, RIP’s simplicity and predictable behavior make it suitable for small or legacy network environments. Therefore, RIP is the correct choice when a distance-vector protocol is required, providing straightforward, easy-to-manage routing in networks where scalability and complex metrics are less critical.
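A minimal RIPv2 configuration on Cisco IOS, assuming a router with interfaces in the 10.0.0.0/8 and 192.168.1.0/24 ranges (network statements are illustrative):

```
router rip
 version 2
 network 10.0.0.0
 network 192.168.1.0
 no auto-summary
```

`version 2` enables the classless behavior discussed above (VLSM support and multicast updates to 224.0.0.9), and `no auto-summary` stops RIP from summarizing routes at classful boundaries.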

Question 97

Which IPv6 address type is used to send traffic to the closest device among multiple devices sharing the same address?

A) Unicast
B) Multicast
C) Anycast
D) Link-local

Answer: C) Anycast

Explanation:

In IPv6 networking, understanding the different types of addresses and their purposes is critical for designing efficient and resilient networks. Address types such as unicast, multicast, link-local, and anycast each serve distinct functions, influencing how packets are delivered and how services are architected. Among these, anycast addressing stands out due to its ability to deliver packets to the nearest instance of a group of devices, providing both efficiency and redundancy.

Unicast addresses are the simplest form of IPv6 addressing. Each unicast address identifies a single interface on a device, and packets sent to that address are delivered only to that specific interface. This one-to-one communication is ideal for typical point-to-point communications but does not scale when multiple devices need to provide the same service or respond to the same request. Unicast is the default addressing scheme for most standard network communications, but it does not provide mechanisms for redundancy or load distribution.

Multicast addresses, on the other hand, are used for one-to-many communication. A single packet sent to a multicast address is delivered to all devices that are part of the multicast group. Multicast is widely used for scenarios like video streaming, real-time updates, and routing protocol advertisements, where information must reach multiple devices simultaneously. However, multicast does not provide a mechanism to deliver traffic specifically to the nearest device, and all group members receive the same packet regardless of network topology, which can sometimes lead to inefficient use of resources if only the nearest device is needed.

Link-local addresses are automatically assigned to interfaces and are used exclusively for communication within the same subnet or link. These addresses are essential for IPv6 protocol operations, such as neighbor discovery, router advertisements, and certain routing protocol functions. While link-local addresses allow immediate local communication without requiring global address assignment, they are confined to the local link and cannot provide routing to the nearest device across a broader network.

Anycast addresses provide a unique solution by combining elements of unicast delivery with intelligent routing to the nearest device in a group. Multiple devices can share the same anycast address, and when a packet is sent to this address, the network’s routing protocols determine the “nearest” device based on metrics such as shortest path, routing cost, or other topology considerations. This capability is particularly valuable for services that must provide rapid, reliable responses, such as Domain Name System (DNS) servers or content delivery network (CDN) nodes. By directing traffic to the closest device, anycast reduces latency, improves performance, and optimizes the utilization of network resources.

Additionally, anycast inherently supports redundancy. If one of the devices sharing the anycast address fails or becomes unreachable, routing protocols automatically direct traffic to the next closest device. This failover mechanism reduces single points of failure, enhancing service availability and reliability without requiring complex manual reconfiguration or load-balancing systems. Anycast is therefore a preferred addressing scheme for globally distributed services where responsiveness and continuity are critical.

The deployment of anycast also simplifies network architecture by allowing multiple physical devices to present themselves as a single logical endpoint. Clients or users do not need to be aware of the multiple servers providing the service; they simply send traffic to the anycast address and rely on the network to determine the optimal path. This allows service providers to scale infrastructure efficiently, distribute load intelligently, and maintain high availability even during maintenance or outages on individual nodes.

In summary, while unicast addresses deliver packets to a single interface, multicast addresses deliver to all group members, and link-local addresses operate only within a local subnet, anycast addresses uniquely combine these concepts by delivering packets to the nearest device among multiple devices sharing the same IPv6 address. This ensures low-latency delivery, improves performance, provides redundancy, and supports scalable, high-availability services. Therefore, anycast is the correct choice in scenarios requiring efficient, nearest-device delivery while maintaining reliability and optimal resource usage in IPv6 networks.
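On Cisco IOS, an address can be explicitly flagged as anycast on an interface; a sketch of one DNS server in the scenario described above (the address is illustrative):

```
! Hypothetical anycast address shared by several DNS servers
interface Loopback0
 ipv6 address 2001:db8:100::53/128 anycast
```

Each server in the group configures the same /128 and advertises it into the routing protocol; the IGP's metric then determines which instance is "nearest" for any given client, providing the failover behavior described above without extra configuration.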

Question 98

Which Cisco protocol provides default gateway redundancy for VLANs by using a shared virtual IP and MAC address among multiple routers?

A) HSRP
B) GLBP
C) VRRP
D) STP

Answer: A) HSRP

Explanation:

GLBP provides both redundancy and load balancing, but it is less widely implemented than HSRP. VRRP is a standards-based alternative to HSRP that also provides gateway redundancy but is not Cisco proprietary. STP is used for preventing loops in Layer 2 networks and does not address gateway redundancy. HSRP, or Hot Standby Router Protocol, allows multiple routers to share a virtual IP and MAC address that serves as the default gateway for hosts on a VLAN. One router is active and forwards traffic, while others remain in standby. If the active router fails, a standby router automatically assumes the active role without requiring host reconfiguration. HSRP ensures uninterrupted network access, high availability, and elimination of single points of failure at the gateway. HSRP version 2 (HSRPv2) supports millisecond timers, improving failover speed and enhancing reliability in enterprise networks. HSRP is widely used in VLANs where seamless default gateway failover is critical for network resilience. Therefore, the correct answer is HSRP because it provides seamless default gateway redundancy using a shared virtual IP and MAC address among multiple routers.
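A common multi-VLAN design implied by the explanation above is to alternate the active role per VLAN so both routers carry traffic; a sketch from one router's side (VLANs, groups, and priorities are examples, and the peer router would mirror the priorities):

```
! Router A: intended active for VLAN 10, standby for VLAN 20
interface Vlan10
 standby version 2
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
interface Vlan20
 standby version 2
 standby 20 ip 10.1.20.1
 standby 20 priority 90
 standby 20 preempt
```

This gives per-VLAN load sharing while preserving HSRP's seamless failover: if either router fails, its peer becomes active for both VLANs.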

Question 99

Which Layer 2 technology aggregates multiple physical links into a single logical link to increase bandwidth and provide redundancy?

A) STP
B) EtherChannel
C) VTP
D) HSRP

Answer: B) EtherChannel

Explanation:

In enterprise networking, ensuring both high availability and optimal utilization of resources is critical for maintaining reliable and efficient operations. One common challenge in network design is the need to combine multiple physical links between switches, or between a switch and a router, to improve overall bandwidth and provide redundancy. While there are several technologies that enhance network stability, not all of them address bandwidth aggregation. For example, Spanning Tree Protocol (STP) is designed to prevent loops in redundant Layer 2 topologies, maintaining a stable network by selectively blocking redundant paths, but it does not combine links to increase throughput. Similarly, VLAN Trunking Protocol (VTP) simplifies the distribution and synchronization of VLAN configuration across multiple switches, but it does not merge physical links to enhance capacity. Hot Standby Router Protocol (HSRP) provides default gateway redundancy by allowing two or more routers to share a virtual IP address, but it does not aggregate the throughput of multiple physical connections.

EtherChannel, in contrast, is specifically designed to address the need for both bandwidth aggregation and redundancy. It allows multiple physical Ethernet links to be bundled into a single logical interface. This logical grouping means that traffic can be distributed across all member links, effectively increasing the available bandwidth beyond that of a single physical connection. For example, if four 1 Gbps links are combined using EtherChannel, the logical interface can handle up to 4 Gbps of traffic, depending on the load-balancing method in use. This capability is particularly beneficial in environments with high-volume traffic between switches, core-to-distribution links, or connections to critical servers and routers, where performance and throughput are paramount.

In addition to increasing bandwidth, EtherChannel provides fault tolerance. If one of the physical links in the aggregation fails, traffic is automatically redistributed across the remaining active links, ensuring uninterrupted network service. This resilience is crucial in enterprise networks where downtime can result in significant operational disruption or financial loss. Because the bundle of links is treated as a single logical interface, network devices continue to operate normally without manual intervention or reconfiguration, allowing for seamless failure recovery.

EtherChannel supports both Layer 2 and Layer 3 configurations. In Layer 2, it connects switches to other switches, while in Layer 3, it can connect switches to routers or routers to routers. This flexibility allows network architects to implement EtherChannel across various parts of the network, including core, distribution, and access layers, optimizing the use of existing physical infrastructure. EtherChannel also works with negotiation protocols such as PAgP (Port Aggregation Protocol), which is Cisco proprietary, and LACP (Link Aggregation Control Protocol), which is standards-based and widely supported across multiple vendors. These protocols help automate the aggregation process, verify link compatibility, and maintain consistent configuration between devices.

Beyond performance and redundancy, EtherChannel simplifies network management. Instead of configuring multiple individual links separately, network administrators can manage the aggregated group as a single interface. This reduces administrative overhead, lowers the risk of misconfiguration, and streamlines monitoring, troubleshooting, and policy enforcement. Load-balancing algorithms within EtherChannel determine how traffic is distributed across member links, using criteria such as source and destination IP addresses, MAC addresses, or port numbers, which helps optimize utilization and maintain equitable distribution of network load.

While STP ensures loop prevention, VTP synchronizes VLANs, and HSRP provides gateway redundancy, none of these technologies increase the effective bandwidth of multiple physical links. EtherChannel uniquely combines multiple physical connections into a single logical link, offering both enhanced bandwidth and fault tolerance. It supports Layer 2 and Layer 3 topologies, integrates with negotiation protocols like PAgP and LACP, and simplifies network management by presenting the aggregated links as a single interface. By enabling load balancing, providing redundancy, and maximizing the utilization of available physical infrastructure, EtherChannel is a key technology for improving network efficiency, reliability, and scalability in enterprise environments. Its ability to aggregate links while maintaining seamless operation makes it the ideal choice for high-performance networks requiring both resilience and increased throughput.
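A minimal LACP EtherChannel sketch on Cisco IOS, bundling two uplinks as described above (interface and channel-group numbers are examples):

```
! Bundle two physical uplinks into one logical link using LACP
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
! Configure the bundle once, as a single logical interface
interface Port-channel1
 switchport mode trunk
```

`mode active` uses the standards-based LACP negotiation mentioned above (use `desirable` for Cisco-proprietary PAgP); the global `port-channel load-balance` command selects the hashing criteria, such as source/destination IP, that distribute traffic across member links.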

Question 100

Which Cisco technology allows multiple physical switches to operate as a single logical switch, simplifying management and providing high availability?

A) HSRP
B) StackWise
C) VTP
D) EtherChannel

Answer: B) StackWise

Explanation:

In modern enterprise networks, managing multiple physical switches can quickly become complex, particularly when it comes to configuration consistency, monitoring, and fault tolerance. While various technologies address different aspects of network reliability and efficiency, not all provide a comprehensive solution for unifying switch management. Protocols and technologies like HSRP, VTP, and EtherChannel serve important roles but do not create a single logical switch. HSRP, or Hot Standby Router Protocol, ensures default gateway redundancy by allowing multiple routers to share a virtual IP and MAC address. While this provides high availability at the gateway level, it does not simplify switch management or consolidate multiple physical switches into one logical entity. Similarly, VTP, or VLAN Trunking Protocol, synchronizes VLAN information across switches within the same VTP domain, ensuring consistent VLAN configurations and reducing administrative errors. However, VTP does not combine multiple switches into a unified management plane; each switch still functions independently for control and configuration purposes. EtherChannel, another widely used technology, aggregates multiple physical links between switches into a single logical link to increase bandwidth and provide link redundancy. While EtherChannel improves throughput and resiliency at the connection level, it does not centralize switch management or create a single configuration interface for multiple switches.

StackWise, a Cisco proprietary technology, addresses these limitations by interconnecting multiple physical switches to function as a single logical switch. In a StackWise configuration, one switch is elected as the master switch, taking responsibility for managing the control plane and maintaining the stack’s configuration. The remaining member switches operate under the master’s supervision, effectively consolidating multiple devices into one logical entity for administration purposes. This design allows network administrators to configure the entire stack through a single IP address, significantly simplifying management tasks and reducing the potential for configuration errors across multiple devices.

One of the key benefits of StackWise is fault tolerance. Because the switches operate as a single logical unit, if one member switch fails, the remaining switches in the stack continue to forward traffic without disruption. This high-availability feature is crucial for enterprise environments where network downtime can have serious operational and financial consequences. Additionally, StackWise enhances flexibility and scalability. Ports on any member switch can be used interchangeably, allowing administrators to connect devices to different switches without worrying about configuration inconsistencies or additional management overhead. This flexibility facilitates network growth and ensures that resources can be allocated dynamically based on demand.

StackWise is particularly valuable in enterprise campus networks, data centers, and large office environments where multiple access switches must be deployed to serve a significant number of endpoints. By consolidating these switches into a single logical entity, network engineers can implement consistent policies, monitor performance centrally, and maintain uniform security configurations across the stack. This centralized management also simplifies troubleshooting and reduces the complexity associated with maintaining multiple independent switches.

In addition to operational simplicity, StackWise supports redundancy and resiliency at the hardware level. The master switch manages the stack’s control plane, including spanning-tree calculations and routing decisions, while member switches continue to handle data forwarding. This architecture ensures that even if a member switch or link fails, the stack remains operational, minimizing service interruptions and maintaining network reliability.

In summary, while technologies like HSRP, VTP, and EtherChannel each address specific network requirements—gateway redundancy, VLAN synchronization, and link aggregation—they do not provide centralized switch management or unified configuration. StackWise uniquely enables multiple physical switches to operate as a single logical switch, offering simplified administration, redundancy, high availability, and scalability. By implementing StackWise, enterprises can streamline network operations, reduce configuration errors, improve fault tolerance, and maintain consistent performance across all switches in the stack. Therefore, the correct answer is StackWise, as it effectively consolidates multiple physical switches into a single logical unit, enhancing manageability and ensuring resilient network operation.
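The master (or active) election described above can be influenced by assigning stack priorities; a sketch, noting that exact syntax and configuration mode vary by platform and IOS version:

```
! Prefer switch 1 as the master/active switch (priority 1-15, higher wins)
Switch(config)# switch 1 priority 15
Switch(config)# end
! Verify stack membership and roles
Switch# show switch
```

Setting an explicit priority makes failover behavior predictable, since the highest-priority member is preferred when the stack re-elects its master.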

Question 101

Which protocol is used to detect and prevent loops in Layer 2 networks with redundant links?

A) STP
B) CDP
C) VTP
D) EtherChannel

Answer: A) STP

Explanation:

In modern enterprise networks, redundancy is essential for maintaining high availability and ensuring uninterrupted communication between devices. To achieve redundancy, network engineers often connect multiple switches using redundant physical links. While this approach increases fault tolerance, it introduces a critical challenge at Layer 2: the potential for switching loops. When redundant paths exist in a Layer 2 network, broadcast frames and unknown unicast frames can circulate endlessly, creating broadcast storms that overwhelm the network. These loops can lead to severe performance degradation, network instability, and even complete network outages. Preventing such loops while maintaining redundancy is a fundamental requirement in the design of robust enterprise networks.

Several protocols and technologies address different aspects of network functionality, but only specific solutions are designed to prevent Layer 2 loops. Cisco Discovery Protocol (CDP) is one such technology commonly deployed in networks. CDP is a proprietary Layer 2 protocol that allows Cisco devices to discover information about directly connected neighbors. It can provide device type, model, software version, IP addresses, and interface details, which is invaluable for network documentation and troubleshooting. However, CDP does not interact with data forwarding or topology management and cannot prevent loops or broadcast storms in the network.

VLAN Trunking Protocol (VTP) is another widely used technology. VTP enables Layer 2 switches within the same VTP domain to synchronize VLAN configuration information. When a VLAN is created, deleted, or modified on a VTP server, the changes are propagated to all client switches, ensuring consistent VLAN information across the network. While VTP greatly simplifies administrative tasks and reduces configuration errors, it does not provide mechanisms to block redundant paths or prevent switching loops. Its primary purpose is VLAN management, not network stability.

EtherChannel, or link aggregation, combines multiple physical links between switches into a single logical interface. This allows for load balancing, redundancy, and increased bandwidth between devices. While EtherChannel improves network performance and fault tolerance, it does not inherently prevent loops. If redundant paths exist outside of the EtherChannel, broadcast traffic can still circulate, potentially causing network congestion and instability.

The protocol specifically designed to address the problem of loops in Layer 2 networks is Spanning Tree Protocol (STP). STP ensures a loop-free topology in networks with redundant paths. It operates by electing a root bridge for the network and then assigning roles to all switch ports: root, designated, or blocked. The root port on each switch provides the shortest path to the root bridge, designated ports forward traffic for each network segment, and blocked ports prevent redundant traffic from creating loops. This mechanism guarantees that even with multiple physical connections, only one path is active at a time for any given segment, preventing broadcast storms and ensuring stability.

STP continuously monitors the network. If a link fails or a topology change occurs, STP recalculates the network topology and reassigns port roles to maintain a loop-free state. Rapid STP (RSTP) improves on this by providing faster convergence, reducing downtime during changes in the network. Additional features such as BPDU guard and root guard further protect the network from misconfigurations, rogue devices, and malicious attempts to disrupt topology.
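RSTP and the protection features mentioned above are enabled per switch and per interface; the interface choices here are illustrative (PortFast and BPDU guard on an access port, root guard on an uplink):

```
! Switch from legacy PVST+ to Rapid PVST+ for faster convergence
Switch(config)# spanning-tree mode rapid-pvst
! Access port facing an end host: skip listening/learning,
! and err-disable the port if a BPDU ever arrives
Switch(config)# interface GigabitEthernet0/5
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
! Uplink where a superior BPDU should never appear:
! root guard blocks any device trying to become root
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# spanning-tree guard root
```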

By combining redundancy with loop prevention, STP allows network administrators to design resilient, high-availability networks. It ensures that broadcast traffic is contained, unknown unicast frames are handled correctly, and multicast traffic does not spiral uncontrollably. Networks using STP can safely deploy multiple paths for fault tolerance without risking instability.

While CDP provides neighbor discovery, VTP synchronizes VLANs, and EtherChannel aggregates links for bandwidth and redundancy, none of these protocols prevent loops in a Layer 2 network. STP, including its rapid variants, is specifically engineered to maintain a loop-free topology, manage port roles dynamically, and protect the network from broadcast storms. Its ability to combine redundancy with stability makes it an essential protocol for enterprise networks. Therefore, STP is the correct solution for ensuring reliable, stable, and resilient Layer 2 operation while allowing redundant links for high availability.

Question 102

Which IPv6 address type is automatically assigned to every interface and used for local subnet communication?

A) Global unicast
B) Anycast
C) Link-local
D) Multicast

Answer: C) Link-local

Explanation:

In IPv6 networking, understanding the roles and distinctions of different address types is essential for designing efficient, reliable, and functional networks. IPv6 introduces several address categories, each tailored for specific purposes, including global unicast, anycast, multicast, and link-local addresses. Among these, link-local addresses hold a particularly critical role because they enable automatic, essential communication between devices within the same local subnet, supporting the foundational operations of IPv6 networks.

Global unicast addresses are perhaps the most familiar type, as they are analogous to public IPv4 addresses. These addresses are routable across the Internet and are used to communicate with external devices and networks. While global unicast addresses allow devices to send and receive traffic across different networks, they are not designed for immediate, automatic communication within the local subnet before global configuration is completed.

Anycast addresses are unique in that they are assigned to multiple devices, allowing packets sent to the address to be delivered to the nearest device according to routing metrics such as cost, hop count, or delay. Anycast is often used in scenarios requiring proximity-based delivery, such as DNS services or content distribution. However, anycast addresses are not limited to a local link, and they do not inherently support universal communication between all devices on the same subnet.

Multicast addresses enable one-to-many communication, targeting a specific group of devices that have joined the multicast group. This approach is efficient for applications such as streaming media, routing protocol updates, and service advertisements because it avoids unnecessary duplication of data. Multicast reduces network traffic by delivering a single packet to all interested devices rather than sending multiple unicast packets. While multicast is vital for group communication, it is not restricted to the local link and is not always automatically assigned to every interface.

Link-local addresses, in contrast, are automatically assigned to every IPv6-enabled interface on a device and are strictly used for communication within the same local subnet or link. These addresses are essential for the proper functioning of IPv6 networks. Without link-local addresses, key network functions such as neighbor discovery, router advertisement, and routing protocol operations would fail. Protocols like OSPFv3 and EIGRP for IPv6 rely on link-local addresses to form neighbor relationships, exchange routing information, and maintain network topology. The automatic presence of link-local addresses ensures that devices can communicate immediately upon interface activation, even before global unicast addresses are configured.

The generation of link-local addresses is often based on the interface identifier, typically using a modified EUI-64 format derived from the device’s MAC address. This approach guarantees a unique local address for each interface, enabling predictable and consistent communication. Administrators may also manually configure link-local addresses if specific addressing or network segmentation requirements exist. Because these addresses are non-routable, any communication using link-local addresses remains confined to the local subnet, reducing the risk of routing conflicts and ensuring secure, predictable local interactions.
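The automatic link-local assignment and the modified EUI-64 derivation can be observed on a router interface; the interface name and MAC address below are illustrative:

```
! Simply enabling IPv6 auto-generates a link-local (FE80::/10) address
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 enable
! Verify the generated address
Router# show ipv6 interface GigabitEthernet0/0
! Example derivation for MAC 0012.3456.789A:
!   split the MAC and insert FFFE:  0012:34FF:FE56:789A
!   flip the universal/local bit:   0212:34FF:FE56:789A
!   resulting link-local address:   FE80::212:34FF:FE56:789A
! A link-local address can also be set manually if desired:
Router(config-if)# ipv6 address FE80::1 link-local
```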

Link-local addresses are indispensable in IPv6 networking because they provide a reliable mechanism for immediate device communication, protocol operation, and subnet-level connectivity. They serve as a foundation for establishing neighbor relationships, exchanging routing updates, and performing essential network functions without relying on globally assigned addresses. Their automatic configuration simplifies network deployment and ensures consistent functionality across all IPv6 devices.

While global unicast addresses enable external communication, anycast addresses facilitate proximity-based delivery to the nearest device, and multicast addresses target specific groups, link-local addresses are unique in their critical role within a local subnet. They ensure that all devices on the same link can communicate automatically and reliably, supporting neighbor discovery, routing protocol exchanges, and other essential IPv6 operations. Therefore, link-local addresses are fundamental to IPv6, providing automatic, necessary communication within a single subnet, enabling local network functionality, and ensuring seamless operation of critical IPv6 protocols.

Question 103

Which NAT method allows multiple private IP addresses to share a single public IP address using different port numbers?

A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64

Answer: C) PAT

Explanation:

Static NAT maps a single private IP to a single public IP, suitable for servers but not for multiple hosts sharing a public IP. Dynamic NAT assigns private IPs to a pool of public IPs on a one-to-one basis, limiting the number of simultaneous connections. NAT64 translates IPv6 traffic to IPv4, enabling protocol interoperability but does not allow multiple hosts to share one public IP. PAT, or Port Address Translation, also called NAT overload, allows multiple internal hosts to access external networks using a single public IP by assigning unique port numbers to each session. The router maintains a translation table mapping private IPs and ports to the public IP and corresponding ports. PAT efficiently conserves public IPv4 addresses, supports many simultaneous connections, and ensures return traffic reaches the correct host. It is widely deployed in enterprise and home networks. Therefore, the correct answer is PAT because it enables multiple private IP addresses to share a single public IP using unique port numbers, providing efficient, scalable, and reliable Internet access.
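A typical PAT (NAT overload) configuration looks like the following; the addresses, ACL number, and interface roles are illustrative:

```
! Define which inside addresses are eligible for translation
Router(config)# access-list 1 permit 192.168.1.0 0.0.0.255
! Translate them all to the outside interface's single public IP,
! using unique source ports to distinguish sessions (overload = PAT)
Router(config)# ip nat inside source list 1 interface GigabitEthernet0/0 overload
! Mark the LAN-facing interface as inside
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip nat inside
! Mark the Internet-facing interface as outside
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip nat outside
! Inspect the translation table (private IP:port -> public IP:port)
Router# show ip nat translations
```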

Question 104

Which protocol is used to dynamically assign IP addresses and other network configuration parameters to hosts?

A) DNS
B) DHCP
C) ICMP
D) ARP

Answer: B) DHCP

Explanation:

DNS resolves domain names to IP addresses but does not assign IP addresses or configuration settings. ICMP is used for diagnostics and network troubleshooting, such as ping and traceroute, but it does not provide address assignment. ARP maps IP addresses to MAC addresses within a local subnet but does not assign IP addresses dynamically. DHCP, or Dynamic Host Configuration Protocol, is the standard protocol for dynamically assigning IP addresses and network configuration information such as subnet mask, default gateway, and DNS servers to hosts. When a device joins a network, it broadcasts a DHCP Discover message. The DHCP server responds with an Offer, and the client requests the offered address. The server confirms the assignment with an ACK, creating a lease. DHCP automates IP management, prevents address conflicts, and reduces administrative overhead in large networks. Enterprise networks rely on DHCP for scalability, centralized management, and efficient deployment of hosts. Therefore, the correct answer is DHCP because it dynamically assigns IP addresses and essential configuration parameters, ensuring efficient and reliable network connectivity for hosts.
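An IOS router can itself act as a DHCP server for the Discover/Offer/Request/ACK exchange described above; the pool name and addresses are illustrative:

```
! Reserve addresses that should never be leased (gateway, servers)
Router(config)# ip dhcp excluded-address 192.168.1.1 192.168.1.10
! Define the scope and the options handed to clients
Router(config)# ip dhcp pool LAN
Router(dhcp-config)# network 192.168.1.0 255.255.255.0
Router(dhcp-config)# default-router 192.168.1.1
Router(dhcp-config)# dns-server 8.8.8.8
Router(dhcp-config)# lease 7
! View current address leases
Router# show ip dhcp binding
```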

Question 105

Which protocol provides default gateway redundancy by allowing multiple routers to share a virtual IP address?

A) HSRP
B) GLBP
C) VRRP
D) STP

Answer: A) HSRP

Explanation:

GLBP provides redundancy along with load balancing among multiple routers but is less commonly deployed than HSRP. VRRP is a standards-based protocol similar to HSRP but is not Cisco proprietary. STP prevents loops in Layer 2 topologies but does not provide gateway redundancy. HSRP, or Hot Standby Router Protocol, is a Cisco proprietary protocol that allows multiple routers to share a virtual IP and MAC address for default gateway redundancy. One router is active, forwarding traffic, while others remain in standby. If the active router fails, a standby router automatically takes over without host reconfiguration, ensuring uninterrupted network access. HSRP version 2 (HSRPv2) adds support for millisecond timers, an expanded group-number range, and IPv6, improving failover speed and scalability for enterprise networks requiring high availability. HSRP eliminates single points of failure at the gateway, ensuring VLAN connectivity remains intact. Therefore, the correct answer is HSRP because it provides seamless default gateway redundancy using a shared virtual IP and MAC address among multiple routers, ensuring high availability and network reliability.
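A basic HSRP pair could be configured as follows; the addresses, group number, and priority are illustrative (hosts use the virtual IP 192.168.1.254 as their gateway):

```
! R1: preferred active router (higher priority, preempt enabled)
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip address 192.168.1.2 255.255.255.0
R1(config-if)# standby version 2
R1(config-if)# standby 1 ip 192.168.1.254
R1(config-if)# standby 1 priority 110
R1(config-if)# standby 1 preempt

! R2: standby router with the default priority of 100
R2(config)# interface GigabitEthernet0/0
R2(config-if)# ip address 192.168.1.3 255.255.255.0
R2(config-if)# standby version 2
R2(config-if)# standby 1 ip 192.168.1.254
! Verify active/standby roles and the virtual IP
R1# show standby brief
```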