Cisco 350-401 Implementing Cisco Enterprise Network Core Technologies (ENCOR) Exam Dumps and Practice Test Questions Set 9 Q121-135
Question 121
Which protocol allows switches to automatically negotiate trunk links and determine whether a link should carry multiple VLANs?
A) VTP
B) DTP
C) STP
D) CDP
Answer: B) DTP
Explanation:
VTP is used to distribute VLAN configuration information across switches, maintaining VLAN consistency but not automatically negotiating trunk links. STP prevents loops in Layer 2 networks but does not configure trunk links. CDP is a Cisco proprietary protocol that discovers neighbouring devices and provides device information, but it does not handle trunking. DTP, or Dynamic Trunking Protocol, is a Cisco proprietary protocol designed to automatically negotiate trunk links between switches. It allows ports to dynamically form trunk connections using modes such as dynamic auto, dynamic desirable, trunk, or access.
Trunking enables multiple VLANs to share a single physical link between switches, ensuring efficient VLAN traffic management. DTP reduces administrative overhead by eliminating the need for manual trunk configuration, ensuring consistent and accurate trunk setup. It supports the formation of 802.1Q trunks, which carry traffic from multiple VLANs. This dynamic negotiation improves network scalability, reduces configuration errors, and supports seamless VLAN communication between switches. DTP is widely used in enterprise networks to simplify trunking deployment and maintain efficient VLAN traffic management. Therefore, the correct answer is DTP because it automatically negotiates trunk links between switches, streamlining configuration and ensuring reliable multi-VLAN connectivity.
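As an illustration, a minimal IOS-style sketch of DTP negotiation is shown below; the interface names are hypothetical and only commonly used commands are included.

! One side actively initiates negotiation, the other responds
interface GigabitEthernet0/1
 switchport mode dynamic desirable
!
interface GigabitEthernet0/2
 switchport mode dynamic auto
!
! Verify the negotiated trunking result
show interfaces trunk
show dtp interface GigabitEthernet0/1

In practice, at least one end must be set to dynamic desirable (or statically to trunk) for a trunk to form, because two dynamic auto ports will not initiate negotiation with each other.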
Question 122
Which protocol is used to provide high availability for default gateways in a Cisco network by sharing a virtual IP address among multiple routers?
A) HSRP
B) VRRP
C) GLBP
D) STP
Answer: A) HSRP
Explanation:
VRRP is a standards-based protocol similar to HSRP, but it is not Cisco proprietary. GLBP also provides gateway redundancy and load balancing, but is less widely deployed than HSRP. STP prevents Layer 2 loops but does not provide default gateway redundancy. HSRP, or Hot Standby Router Protocol, is a Cisco proprietary protocol that allows multiple routers to share a virtual IP and MAC address for default gateway redundancy. One router is elected as active to forward traffic, while others remain in standby. If the active router fails, a standby router seamlessly assumes the active role without requiring host reconfiguration, ensuring uninterrupted connectivity. HSRP version 2 (HSRPv2) supports faster hello timers, reducing failover time and improving network availability. HSRP eliminates single points of failure at the default gateway, providing continuous network access in enterprise VLANs. It is crucial for maintaining high availability for hosts and critical applications. Therefore, the correct answer is HSRP because it provides seamless default gateway redundancy, enabling multiple routers to share a virtual IP and maintain uninterrupted network connectivity.
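A minimal HSRP sketch on the active router might look like the following; the addresses, interface, and group number are hypothetical.

interface GigabitEthernet0/0
 ip address 192.168.10.2 255.255.255.0
 ! 192.168.10.1 is the virtual gateway address configured on hosts
 standby 1 ip 192.168.10.1
 standby 1 priority 110
 standby 1 preempt

The peer router would use the same standby 1 ip command with a lower (default) priority, making it the standby gateway until the active router fails.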
Question 123
Which protocol allows multiple private IP addresses to access the Internet using a single public IP address by assigning unique ports to each session?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Static NAT maps a single private IP to a single public IP, suitable for server access but not multiple hosts sharing one public IP. Dynamic NAT maps private IPs to a pool of public IPs on a one-to-one basis, limiting scalability. NAT64 translates IPv6 traffic to IPv4 for protocol interoperability but does not allow multiple private IPs to share a single public IP. PAT, or Port Address Translation (NAT overload), enables multiple private IP addresses to share a single public IP by assigning unique port numbers for each session. The NAT device maintains a translation table mapping internal IPs and ports to the public IP and corresponding ports. PAT optimises IPv4 address usage, supports many simultaneous sessions, and ensures return traffic reaches the correct host. It is widely used in enterprise networks for scalable and reliable Internet access. Therefore, the correct answer is PAT because it allows multiple private IP addresses to share a single public IP using unique ports, efficiently conserving addresses and maintaining connectivity.
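A hedged IOS-style sketch of PAT (NAT overload) is shown below; the access list, addresses, and interface names are hypothetical.

access-list 1 permit 192.168.10.0 0.0.0.255
!
interface GigabitEthernet0/0
 ip nat inside
!
interface GigabitEthernet0/1
 ip nat outside
!
! Translate matching inside hosts to the outside interface address,
! distinguishing sessions by unique source ports (overload)
ip nat inside source list 1 interface GigabitEthernet0/1 overload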
Question 124
Which protocol provides device discovery on Cisco networks by sending periodic messages that include device ID, model, and capabilities?
A) CDP
B) LLDP
C) OSPF
D) EIGRP
Answer: A) CDP
Explanation:
LLDP is a vendor-neutral protocol for neighbour discovery in multi-vendor environments, but it is not Cisco proprietary. OSPF is a link-state routing protocol that exchanges routing information but does not provide device discovery. EIGRP is a hybrid routing protocol used for optimal path selection, not neighbour discovery. CDP, or Cisco Discovery Protocol, is a Layer 2 protocol that enables Cisco devices to automatically discover directly connected neighbours. CDP packets are sent periodically to share information such as device ID, model, IP address, and capabilities (router, switch, phone, etc.). Administrators use CDP to map network topology, verify connectivity, troubleshoot issues, and maintain accurate network documentation. Commands such as show cdp neighbors provide detailed information about connected devices. Since CDP operates at Layer 2, it can discover devices even without IP configuration, making it essential during initial deployments. CDP also integrates with management tools for topology visualisation and monitoring. Therefore, the correct answer is CDP because it provides automatic neighbour discovery and shares device information, enhancing network visibility, troubleshooting, and management.
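For reference, a brief sketch of enabling and verifying CDP on IOS is shown below; the interface name is hypothetical.

! CDP is enabled globally by default on most Cisco devices
cdp run
interface GigabitEthernet0/3
 cdp enable
!
show cdp neighbors
show cdp neighbors detail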
Question 125
Which IPv6 address type is used to deliver packets to the nearest device among multiple devices sharing the same address?
A) Unicast
B) Multicast
C) Anycast
D) Link-local
Answer: C) Anycast
Explanation:
Unicast addresses deliver packets to a single device, not multiple devices. Multicast addresses deliver packets to all devices in a group, not just the nearest one. Link-local addresses are automatically assigned for communication within a single subnet and are not used for nearest-device delivery. Anycast addresses are assigned to multiple interfaces on different devices. When a packet is sent to an anycast address, routing protocols deliver it to the closest device based on routing metrics such as distance or cost. Anycast is widely used for services like DNS and content delivery networks, improving response time and reducing latency. It also provides redundancy: if the nearest device becomes unavailable, the packet is automatically routed to the next closest device, ensuring high availability. Anycast is essential in IPv6 networks for efficient traffic delivery, service optimisation, and fault tolerance. Therefore, the correct answer is Anycast because it allows a packet to reach the nearest device among multiple devices sharing the same IPv6 address, optimising performance and reliability.
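As a rough illustration, the same anycast address can be configured on several routers with an IOS-style command such as the one below; the prefix 2001:db8::53 is a hypothetical service address and the exact syntax may vary by platform.

interface GigabitEthernet0/0
 ipv6 address 2001:db8:1::1/64
 ipv6 address 2001:db8::53/128 anycast

Routing then delivers traffic sent to 2001:db8::53 to whichever participating router is closest according to the routing table.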
Question 126
Which protocol allows a single logical IP to be shared among multiple routers, providing seamless default gateway failover in Cisco networks?
A) HSRP
B) GLBP
C) VRRP
D) STP
Answer: A) HSRP
Explanation:
GLBP also provides redundancy and load balancing, but is less widely deployed compared to HSRP. VRRP is a standards-based alternative for default gateway redundancy, but it is not Cisco proprietary technology. STP prevents loops in Layer 2 networks but does not provide gateway redundancy. HSRP, or Hot Standby Router Protocol, allows multiple routers to share a virtual IP and MAC address, acting as a single default gateway for hosts. One router is active and forwards traffic, while standby routers monitor its status. If the active router fails, a standby router automatically assumes the active role without requiring hosts to change their configuration, ensuring uninterrupted network connectivity. HSRP version 2 (HSRPv2) improves convergence time, reducing downtime during failover. HSRP eliminates single points of failure at the gateway, providing high availability for VLANs in enterprise networks. The protocol ensures that critical applications and services remain accessible, maintaining reliability and network stability. HSRP’s simplicity and Cisco integration make it the most widely used default gateway redundancy protocol in enterprise environments. Therefore, the correct answer is HSRP because it provides seamless failover by allowing multiple routers to share a virtual IP, ensuring high availability and uninterrupted network services.
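Building on the basic configuration, a hedged sketch of HSRP version 2 with interface tracking is shown below; the SVI, group number, and decrement value are hypothetical, and the legacy track syntax is used for brevity.

interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby version 2
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
 ! Lower the priority if the uplink fails so the peer can take over
 standby 10 track GigabitEthernet0/1 20
!
show standby brief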
Question 127
Which protocol enables Layer 2 switches to prevent loops while supporting redundant links and electing a root bridge?
A) STP
B) CDP
C) VTP
D) EtherChannel
Answer: A) STP
Explanation:
CDP is used for neighbour discovery and sharing device information, not for loop prevention. VTP distributes VLAN information but does not manage loops or port roles. EtherChannel combines multiple physical links into a single logical link for increased bandwidth and redundancy, but does not prevent loops by itself. STP, or Spanning Tree Protocol, is specifically designed to prevent loops in Layer 2 networks with redundant paths. When multiple switches are interconnected, broadcast frames can circulate endlessly in redundant topologies, causing network storms and outages. STP elects a root bridge and assigns port roles (root, designated, or blocked) to maintain a loop-free topology. If a link fails, STP recalculates the topology dynamically, maintaining connectivity without loops. Rapid STP (RSTP) improves convergence time for faster recovery from failures. STP is crucial in enterprise networks for ensuring stability, enabling redundancy, and supporting high availability while avoiding broadcast storms. Therefore, the correct answer is STP because it provides loop prevention in Layer 2 networks and ensures a stable, resilient topology.
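As an example, root bridge election can be influenced with an IOS-style sketch like the following; the VLAN number and priority value are hypothetical.

! Either set an explicit low priority for VLAN 10...
spanning-tree vlan 10 priority 4096
! ...or let the switch calculate a priority lower than the current root
spanning-tree vlan 10 root primary
!
show spanning-tree vlan 10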
Question 128
Which protocol allows multiple private IP addresses to share a single public IP address using unique port numbers for outgoing sessions?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Static NAT maps a single private IP to a single public IP, which does not support multiple hosts sharing one public IP. Dynamic NAT maps private IP addresses to a pool of public IPs on a one-to-one basis, limiting scalability. NAT64 translates IPv6 traffic to IPv4 to enable interoperability, not for sharing a single public IP. PAT, or Port Address Translation, allows multiple private IP addresses to share a single public IP by assigning unique port numbers to each session. The NAT device maintains a translation table linking internal IPs and ports to the public IP and corresponding ports. PAT optimises the use of scarce IPv4 addresses, supports many simultaneous connections, and ensures return traffic is correctly routed. It is widely deployed in enterprise networks and home routers for scalable Internet access. PAT efficiently balances address conservation with connectivity and reliability. Therefore, the correct answer is PAT because it enables multiple private IP addresses to share a single public IP, using unique ports to maintain distinct sessions and conserve address space.
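On a router already configured for PAT, the translations can be inspected with standard IOS show commands such as the following.

! Active translation table and NAT counters
show ip nat translations
show ip nat statistics
!
! Clear dynamic translations during troubleshooting if required
clear ip nat translation *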
Question 129
Which protocol is used to monitor network devices by collecting CPU usage, memory utilisation, and interface statistics in enterprise networks?
A) SNMP
B) TACACS+
C) RADIUS
D) ICMP
Answer: A) SNMP
Explanation:
In modern network environments, monitoring and managing device performance is critical to ensure reliability, availability, and optimal operation. While several protocols exist for network administration, not all are designed for collecting detailed performance metrics. For instance, TACACS+ is primarily an AAA protocol, providing centralised authentication, authorisation, and accounting for network devices. It allows administrators to control who can access a device, what actions they can perform, and logs their activity for auditing purposes. However, TACACS+ does not provide capabilities for monitoring CPU utilisation, memory usage, interface statistics, or other performance indicators. Similarly, RADIUS also focuses on centralised authentication and accounting, offering secure access control across enterprise networks, but it is not intended for real-time monitoring of device health or performance metrics.
ICMP, the Internet Control Message Protocol, serves as a diagnostic tool used by network engineers to test connectivity and detect path issues through tools such as ping and traceroute. While ICMP is invaluable for troubleshooting reachability problems, it cannot gather comprehensive performance data from network devices. It does not report on CPU load, interface errors, bandwidth utilisation, or other critical metrics needed for proactive network management.
The Simple Network Management Protocol, or SNMP, is specifically designed to fill this gap and provides a standardised framework for network monitoring and management. SNMP enables network administrators to monitor a wide range of performance metrics across diverse devices, including routers, switches, firewalls, servers, and other networked components. Devices that support SNMP run software agents, which collect and maintain data such as CPU usage, memory consumption, uptime, interface statistics, error counts, and traffic throughput. These agents communicate with SNMP managers, which aggregate, analyse, and present the collected data for administrators.
SNMP provides two primary methods for data collection. Administrators can query agents periodically using SNMP GET requests to retrieve specific metrics or allow agents to send asynchronous notifications, known as traps, to report critical events or threshold violations. This dual capability enables both proactive and reactive monitoring, allowing administrators to detect potential issues before they impact network performance and to respond quickly to unexpected problems. SNMPv3, the most secure version of the protocol, introduces encryption and authentication mechanisms, ensuring that performance data and network management communications are protected from unauthorised access or tampering.
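A minimal IOS-style sketch of SNMP configuration is shown below; the community string, group, user, and management-station address are hypothetical.

! SNMPv2c read-only polling plus traps sent to a management station
snmp-server community MONITOR-RO ro
snmp-server host 10.1.1.50 version 2c MONITOR-RO
snmp-server enable traps
!
! SNMPv3 with authentication and encryption (authPriv)
snmp-server group NMS-GROUP v3 priv
snmp-server user nmsuser NMS-GROUP v3 auth sha StrongAuthPass priv aes 128 StrongPrivPass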
Network administrators rely on SNMP for multiple purposes, including performance monitoring, capacity planning, troubleshooting, and proactive network management. By continuously observing device metrics, SNMP helps detect bottlenecks, misconfigurations, hardware failures, and unusual activity patterns. It allows organisations to plan for network growth by analysing trends in resource usage, ensuring that sufficient capacity is available to meet increasing demand. Furthermore, SNMP supports integration with network management systems, dashboards, and alerting tools, enabling automated monitoring and rapid incident response.
In large-scale enterprise networks, SNMP plays an essential role in maintaining operational visibility and ensuring that all devices perform reliably. Its ability to centralise data collection, provide detailed performance metrics, and offer real-time notifications makes it the cornerstone of network monitoring strategies. Unlike protocols designed for access control or diagnostic purposes, SNMP is uniquely equipped to provide a comprehensive view of network health and performance.
Therefore, SNMP is the correct choice for centralised monitoring of network devices. It offers an extensive range of capabilities for performance measurement, fault detection, and proactive management, ensuring that networks remain reliable, efficient, and secure while providing administrators with the visibility necessary to maintain optimal operations.
Question 130
Which IPv6 address type delivers a packet to all devices that are part of a specific group, providing one-to-many communication?
A) Unicast
B) Multicast
C) Anycast
D) Link-local
Answer: B) Multicast
Explanation:
In modern IPv6 networks, addressing schemes are crucial for ensuring that data packets are delivered efficiently and reliably to the intended recipients. While several types of IPv6 addresses exist, each serving distinct purposes, only some are suitable for delivering information to multiple devices simultaneously. Unicast addresses, for example, are designed to identify a single network interface. When a packet is sent to a unicast address, it is delivered exclusively to the specified device.
This type of addressing is ideal for one-to-one communication but does not support scenarios where data must reach multiple devices at once. Anycast addresses, on the other hand, are assigned to multiple interfaces across different devices, but the delivery model is unique: packets are routed to the nearest device based on routing metrics, such as cost or distance. Anycast is commonly used for services like DNS and content delivery networks, where directing traffic to the closest node improves response time and redundancy, yet it does not provide true one-to-many delivery to all devices in a group. Link-local addresses are automatically assigned to interfaces for communication on the same local subnet, allowing devices to interact with neighbours without requiring global configuration. These addresses are essential for functions like neighbour discovery and local routing protocol operations, but are not intended for delivering data to multiple hosts within a multicast group.
Multicast addresses in IPv6 are specifically designed to solve the problem of one-to-many communication. Unlike unicast or anycast, multicast allows a single packet to reach all devices that have explicitly joined a particular multicast group. This addressing model eliminates the inefficiencies of sending multiple unicast messages to individual recipients or relying on broadcast mechanisms, which are no longer used in IPv6. Instead of inundating every device on a subnet, multicast targets only those devices that have expressed interest in receiving the traffic. Multicast addresses in IPv6 use the ff00::/8 prefix, which designates them as group addresses and enables routers and switches to manage multicast traffic effectively. Various critical network operations, including routing protocol updates, neighbour discovery processes, and media streaming services, rely on multicast communication to function optimally.
One key benefit of multicast is the reduction of network congestion. By sending a single packet that is efficiently replicated by network devices only where necessary, multicast significantly conserves bandwidth and improves overall network performance. This capability is particularly important in enterprise environments where numerous devices may need access to the same information simultaneously, such as video conferencing, IP television, or software updates. Multicast ensures that all devices in the designated group receive the same information without overwhelming the network infrastructure.
Furthermore, multicast supports scalability and enhances operational efficiency. Network administrators can configure multicast groups to manage who receives specific types of traffic, allowing for more precise control and better utilisation of network resources. Protocols like Multicast Listener Discovery (MLD) help routers maintain knowledge of group membership and ensure that multicast traffic is delivered only to interested recipients. By optimising the delivery process, multicast reduces unnecessary duplication of data, lowers latency, and improves reliability for group-based communications.
In addition to efficiency, multicast addresses facilitate the seamless operation of IPv6 network protocols. Routing updates, such as those used by OSPFv3, rely heavily on multicast to communicate topology changes to all relevant routers efficiently. Similarly, neighbour discovery protocols use multicast to announce the presence of devices, ensuring that all interested nodes can respond and maintain accurate network state information. The use of multicast for these functions ensures the timely and efficient dissemination of critical network data, which is essential for maintaining high network performance and stability.
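As an illustration, enabling OSPFv3 on an interface causes the router to join the relevant multicast groups automatically; the sketch below uses hypothetical addresses and a hypothetical process ID.

ipv6 unicast-routing
!
ipv6 router ospf 1
 router-id 1.1.1.1
!
interface GigabitEthernet0/0
 ipv6 address 2001:db8:10::1/64
 ipv6 ospf 1 area 0
!
! The output lists joined groups such as ff02::1, ff02::2, ff02::5 and ff02::6
show ipv6 interface GigabitEthernet0/0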
While unicast, anycast, and link-local addresses serve important roles in IPv6 networking, they do not provide true one-to-many delivery. Multicast addresses, in contrast, are explicitly designed to reach multiple devices in a group efficiently. By targeting only interested recipients, multicast reduces network congestion, optimises bandwidth usage, supports scalable communication, and enables essential network protocols to function effectively. This makes multicast the preferred addressing mechanism for delivering packets to multiple devices within IPv6 networks, ensuring both efficiency and reliability in enterprise and large-scale network environments.
Therefore, the correct answer is Multicast because it enables the delivery of packets to all devices within a specific group, enhancing efficiency, scalability, and overall network performance.
Question 131
Which protocol automatically assigns IP addresses and network configuration parameters, such as default gateway and DNS server, to hosts?
A) DHCP
B) DNS
C) ICMP
D) ARP
Answer: A) DHCP
Explanation:
In modern network environments, automating IP address assignment and host configuration is crucial for efficient operation, especially in medium to large-sized enterprise networks. While several protocols play key roles in network communication and management, not all are capable of dynamically assigning addresses or providing configuration parameters to hosts. For example, the Domain Name System (DNS) is widely used to resolve human-readable domain names into numerical IP addresses, allowing devices to locate services across the network. However, DNS is not designed to assign IP addresses to hosts, nor does it configure essential network parameters such as subnet masks or default gateways. Similarly, the Internet Control Message Protocol (ICMP) is a diagnostic protocol employed for troubleshooting and testing network connectivity using tools such as ping and traceroute. Despite its importance in assessing network health, ICMP does not provide mechanisms for IP address assignment or host configuration. The Address Resolution Protocol (ARP), on the other hand, translates IP addresses into physical MAC addresses within a local subnet, enabling proper packet delivery at Layer 2. While ARP is essential for local communication, it does not handle dynamic IP address allocation or deliver configuration settings to devices.
Dynamic Host Configuration Protocol, commonly known as DHCP, is the protocol specifically designed to address the challenges of automated IP address allocation and host configuration. When a device connects to a network, it initiates the process by broadcasting a DHCP Discover message to locate available DHCP servers. Upon receiving the Discover message, a DHCP server responds with a DHCP Offer, presenting an available IP address along with associated network parameters. The host then sends a DHCP Request to confirm its acceptance of the offered configuration, and the server completes the process by sending a DHCP Acknowledgment (ACK), thereby creating a temporary lease for the assigned IP address. This lease ensures that the IP address remains allocated to the host for a specified period, after which it can be renewed or released.
Beyond simply assigning IP addresses, DHCP can provide additional configuration details, including the subnet mask, default gateway, and DNS server addresses. These parameters are essential for proper network operation, as they allow hosts to communicate efficiently both within the local subnet and with external networks. By centralising address management, DHCP significantly reduces the risk of configuration errors, such as duplicate IP addresses, and minimises administrative overhead, especially in large networks with hundreds or thousands of devices. Enterprise environments rely heavily on DHCP to maintain scalability, ensure consistent configuration, and simplify network management.
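A minimal IOS-style DHCP server sketch is shown below; the pool name, addresses, and lease length are hypothetical.

! Keep the gateway and a few static addresses out of the dynamic range
ip dhcp excluded-address 192.168.20.1 192.168.20.10
!
ip dhcp pool OFFICE-LAN
 network 192.168.20.0 255.255.255.0
 default-router 192.168.20.1
 dns-server 192.168.20.5
 lease 7
!
show ip dhcp binding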
DHCP also supports lease renewal, which allows hosts to maintain continuous network connectivity without manual intervention, and can provide additional options for advanced network operations, such as network boot using PXE (Preboot Execution Environment) and remote device management. This flexibility ensures that DHCP can meet the diverse needs of modern networks, from small office setups to complex enterprise infrastructures.
While protocols such as DNS, ICMP, and ARP serve important functions in network operation, they do not provide automated IP address allocation or full configuration capabilities for hosts. DHCP is uniquely suited for this role, enabling devices to receive IP addresses and essential network settings dynamically, reducing administrative effort, preventing address conflicts, and ensuring reliable communication across the network. Its ability to handle leases, renewals, and optional configuration parameters makes it an indispensable tool for scalable, efficient, and manageable network environments. Therefore, DHCP is the correct solution for dynamically assigning IP addresses and configuration parameters, supporting robust and seamless network operation.
Question 132
Which protocol allows multiple private IP addresses to share a single public IP address by assigning unique port numbers to each session?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Network Address Translation (NAT) plays a crucial role in modern network design by allowing private IP addresses to communicate with public networks while conserving scarce IPv4 address space. Among the various NAT techniques, static NAT, dynamic NAT, NAT64, and Port Address Translation (PAT) each serve specific purposes, but they differ significantly in functionality, scalability, and flexibility. Understanding these differences is essential for deploying efficient and reliable network connectivity solutions.
Static NAT is the simplest form of NAT and establishes a permanent one-to-one mapping between a private IP address and a public IP address. This approach is commonly used for servers or devices that require a consistent public IP for services such as web hosting, email, or remote access. While static NAT ensures predictable accessibility from external networks, it does not scale well for environments with multiple hosts needing simultaneous Internet access through a limited number of public IPs. Each private address requires its own public IP, which can quickly deplete available addresses in large networks.
Dynamic NAT addresses some of the limitations of static NAT by allowing private IP addresses to be mapped to a pool of public IP addresses on a first-come, first-served basis. When an internal host initiates a connection, a public IP from the pool is temporarily assigned for the duration of the session. While this method improves resource utilisation compared to static NAT, it still enforces a one-to-one mapping at any given time. Consequently, the number of simultaneous Internet connections is limited to the size of the public IP pool, which can be a significant constraint in enterprise networks or Internet service provider environments.
NAT64 provides a mechanism for translating IPv6 addresses to IPv4 addresses, facilitating communication between IPv6-only hosts and IPv4 networks. This is particularly important as organisations gradually transition from IPv4 to IPv6. NAT64 ensures interoperability between the two protocol families, but it does not inherently solve the challenge of allowing multiple private hosts to share a single public IPv4 address. Each translation session still requires a unique mapping, limiting scalability in scenarios with numerous internal clients.
Port Address Translation, commonly referred to as PAT or NAT overload, is the most versatile NAT solution for enabling multiple private hosts to access the Internet using a single public IP address. PAT achieves this by assigning a unique port number to each session initiated by an internal device. The NAT device maintains a translation table that maps each private IP and source port combination to the public IP and an associated unique port. This allows thousands of internal hosts to share a single public IP simultaneously, making efficient use of the limited IPv4 address space.
In addition to conserving public addresses, PAT ensures that return traffic reaches the correct internal host by referencing the translation table, providing seamless connectivity for all active sessions. It is widely deployed in enterprise, service provider, and home networks, supporting scalable Internet access without requiring extensive public IP resources. PAT also simplifies network design by reducing the need for large address allocations and provides flexibility for dynamically changing internal networks, such as environments with DHCP-assigned private IPs.
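Where a dedicated public address (or small pool) is preferred over the outside interface address, a hedged pool-based overload sketch looks like the following; the addresses and access list are hypothetical, and the inside and outside interfaces are assumed to be already marked with ip nat inside and ip nat outside.

access-list 10 permit 192.168.0.0 0.0.255.255
ip nat pool PUBLIC-POOL 203.0.113.10 203.0.113.10 netmask 255.255.255.248
ip nat inside source list 10 pool PUBLIC-POOL overload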
Furthermore, PAT enhances network security indirectly by hiding internal IP addresses from external networks. External clients communicate only with the shared public IP, making the internal network topology less exposed. Combined with firewall policies, PAT allows organisations to maintain both connectivity and control over internal resources.
While static NAT is suitable for individual servers requiring fixed public addresses, and dynamic NAT and NAT64 offer partial solutions for address translation or IPv6 interoperability, Port Address Translation stands out as the most practical approach for supporting multiple hosts behind a single public IP. PAT assigns unique port numbers to each session, efficiently managing scarce IPv4 resources, enabling simultaneous Internet access for numerous internal devices, and ensuring that return traffic is accurately routed. Its flexibility, scalability, and widespread deployment make it the preferred method for modern networks. Therefore, the correct solution for allowing multiple private IP addresses to share a single public IP while maintaining connectivity and conserving address space is PAT.
Question 133
Which protocol provides redundancy for default gateways in Cisco networks by sharing a virtual IP and MAC address among multiple routers?
A) HSRP
B) GLBP
C) VRRP
D) STP
Answer: A) HSRP
Explanation:
High availability and uninterrupted network connectivity are critical requirements in modern enterprise networks, particularly for default gateway services that allow hosts within a subnet to communicate beyond their local network. Several protocols exist to provide redundancy and failover capabilities for default gateways, each with distinct features, benefits, and limitations. Among these, GLBP, VRRP, STP, and HSRP are commonly discussed, but their purposes and implementations vary significantly.
Gateway Load Balancing Protocol (GLBP) is a Cisco-proprietary protocol that offers both redundancy and load balancing for default gateway functions. GLBP allows multiple routers to participate actively in forwarding traffic by distributing client requests across a group of routers, thereby utilising bandwidth more efficiently and preventing a single gateway from becoming a bottleneck. While GLBP provides advanced functionality compared to simple redundancy protocols, it is less frequently implemented than HSRP due to its complexity, configuration overhead, and Cisco-specific nature, which may limit its interoperability in multi-vendor environments.
Virtual Router Redundancy Protocol (VRRP) is a standards-based protocol designed to provide high availability for default gateways. VRRP enables multiple routers to share a virtual IP address, with one router acting as the master and others as backups. If the master router fails, a backup router assumes the role of the master automatically, ensuring continuity of service. Although VRRP offers similar redundancy features to HSRP, it is not Cisco proprietary, which makes it suitable for mixed-vendor networks but less integrated with Cisco-specific features and enhancements.
Spanning Tree Protocol (STP), on the other hand, is a Layer 2 protocol focused on preventing loops in networks with redundant paths. STP dynamically blocks redundant links to maintain a loop-free topology and ensures network stability at the data link layer. While STP is essential for maintaining the integrity of Layer 2 topologies and preventing broadcast storms, it does not provide redundancy or failover for default gateway services. It is therefore not a solution for ensuring continuous IP routing in the event of a gateway failure.
Hot Standby Router Protocol (HSRP) is a Cisco-proprietary protocol specifically designed to provide default gateway redundancy within a subnet. HSRP enables multiple routers to share a virtual IP address that acts as the default gateway for hosts. One router is designated as the active router, responsible for forwarding traffic sent to the virtual IP, while the remaining routers function as standby devices monitoring the status of the active router. If the active router becomes unavailable due to failure or maintenance, a standby router automatically assumes the role of the active router without requiring reconfiguration of host default gateway settings.
HSRP is particularly valuable in enterprise VLANs where uninterrupted connectivity is critical. The protocol ensures that hosts experience minimal disruption during router failures, maintaining high availability for critical services and applications. HSRP also supports enhancements such as HSRP version 2 (HSRPv2), which reduces failover time and improves convergence, ensuring that traffic is quickly rerouted to a standby router. By eliminating single points of failure at the gateway level, HSRP helps maintain network reliability, prevents downtime, and supports enterprise business continuity objectives.
In addition to redundancy, HSRP provides a predictable and centralised method for managing gateway services in complex networks. Network administrators can configure multiple HSRP groups to provide redundancy across different VLANs, offering scalable and flexible failover solutions. Combined with monitoring tools, HSRP allows proactive detection of router failures and ensures that backup routers are ready to take over immediately, enhancing the overall resilience of the network infrastructure.
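For example, two HSRP groups can be split across a pair of routers so that each is active for one VLAN; the sketch below shows one router with hypothetical addresses, and the peer would mirror it with the priorities reversed.

interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
!
interface Vlan20
 ip address 10.10.20.2 255.255.255.0
 standby 20 ip 10.10.20.1
 standby 20 priority 90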
While GLBP provides both redundancy and load balancing, and VRRP offers standards-based failover, HSRP remains the most widely implemented protocol for Cisco networks that require seamless default gateway redundancy. Unlike STP, which addresses loop prevention, HSRP specifically focuses on maintaining continuous IP routing for hosts in the event of router failure. By allowing multiple routers to share a virtual IP and MAC address, with automatic failover to standby routers, HSRP ensures uninterrupted network connectivity, rapid convergence, and high availability, making it essential in enterprise network design. Therefore, the correct answer is HSRP, as it provides reliable, seamless default gateway redundancy, maintaining network access and operational continuity for hosts.
Question 134
Which protocol is used to prevent loops in Layer 2 networks by electing a root bridge and assigning port roles?
A) STP
B) CDP
C) VTP
D) EtherChannel
Answer: A) STP
Explanation:
In modern enterprise networks, maintaining stability and reliability in Layer 2 topologies is critical, especially in environments with redundant paths. Redundant connections are often implemented to increase fault tolerance and provide alternate paths for traffic in case of link failures. However, while redundancy improves network availability, it also introduces the potential for switching loops, which can lead to broadcast storms, network congestion, and even complete network outages. Several network protocols and technologies—such as CDP, VTP, EtherChannel, and STP—play different roles in managing Layer 2 networks, but only one specifically addresses the prevention of loops.
Cisco Discovery Protocol (CDP) is a proprietary protocol used primarily for neighbour discovery. It enables devices to share information about their identity, capabilities, software versions, and interface configurations with directly connected neighbours. CDP is a valuable tool for mapping network topology, troubleshooting connectivity issues, and documenting device interconnections. Despite its usefulness in network visibility, CDP does not provide mechanisms to prevent loops in redundant Layer 2 networks. Its function is limited to discovery and monitoring, not active topology management.
VLAN Trunking Protocol (VTP) is another Cisco protocol designed to simplify VLAN management. It propagates VLAN configuration changes across all switches within the same VTP domain, ensuring consistency and reducing administrative effort. VTP allows central management of VLANs, supporting creation, deletion, and renaming of VLANs on a single switch, with changes automatically synchronised to all other switches in the domain. While VTP streamlines configuration and prevents VLAN misconfigurations, it does not prevent loops or manage how data flows across redundant paths in Layer 2 networks.
EtherChannel is a technology that allows multiple physical links between switches—or between a switch and a router—to operate as a single logical interface. This provides higher aggregate bandwidth, load balancing, and redundancy. Traffic can flow across multiple physical links simultaneously, improving efficiency and resilience against link failures. However, EtherChannel itself does not prevent switching loops. While it reduces the number of logical interfaces to simplify configuration, the protocol relies on other mechanisms, such as Spanning Tree Protocol, to ensure that loops do not form within the network.
Spanning Tree Protocol (STP) is the key technology for preventing loops in Layer 2 networks. Designed to maintain a loop-free topology, STP detects redundant paths and selectively blocks ports to prevent frames from circulating endlessly. In STP, one switch is elected as the root bridge, which serves as the reference point for all path calculations. Each port is assigned a role—root, designated, or blocked—based on its location relative to the root bridge and link costs. By doing so, STP ensures that only a single active path exists between any two devices, eliminating the risk of loops while maintaining redundancy.
If a link or switch fails, STP dynamically recalculates the topology, unblocking previously blocked ports as needed to restore connectivity without introducing loops. This capability ensures continuous network availability and resilience in enterprise environments. Rapid Spanning Tree Protocol (RSTP), an enhancement of STP, further improves convergence times, minimising downtime and maintaining performance during topology changes. STP also integrates optional features such as BPDU guard, root guard, and loop guard, which protect the network from misconfigurations, rogue devices, and unexpected topology changes.
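A short IOS-style sketch of these protection features is shown below; the interface roles and names are hypothetical.

! Access port facing an end host: fast forwarding plus BPDU Guard
interface GigabitEthernet0/10
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! Port that should never accept a superior BPDU claiming a new root
interface GigabitEthernet0/24
 spanning-tree guard root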
By providing a loop-free topology while allowing redundant links to remain in place for fault tolerance, STP ensures network stability, prevents broadcast storms, and supports high availability. It is indispensable in enterprise Layer 2 design, enabling networks to scale and operate reliably under complex configurations.
Therefore, the correct solution for preventing loops in Layer 2 networks is the Spanning Tree Protocol. Unlike CDP, VTP, or EtherChannel, STP actively manages the network topology, maintains a loop-free environment, and allows redundant paths to coexist safely. Its dynamic recalculation, port role assignments, and rapid convergence capabilities make it essential for resilient, high-performance enterprise networks, ensuring both stability and operational continuity.
Question 135
Which IPv6 address type delivers packets to all devices that are members of a specific group, replacing traditional broadcast?
A) Unicast
B) Multicast
C) Anycast
D) Link-local
Answer: B) Multicast
Explanation:
In IPv6 networking, understanding the different address types and their specific purposes is crucial for designing efficient, scalable, and reliable networks. IPv6 introduces multiple address categories, each serving distinct communication needs, including unicast, anycast, link-local, and multicast. Among these, multicast addresses play a vital role in enabling one-to-many communication, making them essential for modern enterprise networks.
Unicast addresses are the simplest form of IPv6 addressing. Each unicast address uniquely identifies a single network interface, and packets sent to that address are delivered solely to that device. This one-to-one communication is ideal for standard device-to-device communication, such as connecting a client to a server. However, unicast addresses are not suitable when a packet needs to reach multiple devices simultaneously, as this would require sending multiple copies of the same packet to each unicast address, increasing bandwidth usage and network overhead.
Anycast addresses provide a different mechanism. They are assigned to multiple devices, and when a packet is sent to an anycast address, the network’s routing protocols deliver it to the nearest device according to routing metrics such as shortest path or lowest cost. While anycast provides efficiency in reaching the closest available server, it does not allow delivery to all devices in a group. Anycast is commonly used for distributed services such as DNS, content delivery networks, or load-balanced web servers, where traffic must be directed to the closest or most optimal server. It is highly effective for redundancy and latency reduction, but does not facilitate one-to-many communication.
Link-local addresses are automatically configured on all IPv6-enabled interfaces for communication within a single local subnet or link. These addresses are essential for IPv6 operation because they are used for neighbour discovery, router advertisements, and certain routing protocol operations such as OSPFv3 and EIGRP for IPv6. Link-local addresses are non-routable beyond the local link and cannot be used for reaching multiple devices outside the immediate subnet. Their purpose is primarily to enable local connectivity without requiring global address assignment.
Multicast addresses, in contrast, are specifically designed to enable one-to-many communication in IPv6 networks. A single packet sent to a multicast address is delivered to all devices that have joined the corresponding multicast group. This mechanism replaces the traditional broadcast used in IPv4, which sent packets to all devices in a subnet regardless of interest, leading to unnecessary network traffic and inefficient bandwidth usage. By targeting only the devices that have explicitly joined the multicast group, IPv6 reduces bandwidth consumption, improves network efficiency, and enhances overall scalability.
Multicast addresses use the designated prefix ff00::/8 and are employed in a wide range of applications, including routing protocol updates, neighbour discovery processes, media streaming, and real-time conferencing. For example, routing protocols like OSPFv3 and EIGRP for IPv6 rely on multicast to distribute updates efficiently to multiple devices without flooding the network with unnecessary traffic. Similarly, video streaming or live broadcast applications use multicast to send a single stream to all subscribers simultaneously, rather than duplicating packets for each recipient.
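As a further illustration, a hedged sketch of EIGRP for IPv6 is shown below; the autonomous system number, router ID, and addresses are hypothetical. Once enabled, the router exchanges hellos and updates via the EIGRP multicast group ff02::a.

ipv6 unicast-routing
!
ipv6 router eigrp 10
 eigrp router-id 2.2.2.2
 no shutdown
!
interface GigabitEthernet0/1
 ipv6 address 2001:db8:20::1/64
 ipv6 eigrp 10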
By enabling efficient group communication, multicast addresses help organisations conserve bandwidth, optimise network performance, and scale their services effectively. They allow network administrators to implement structured, targeted communication, reducing congestion and improving reliability for applications that need to reach multiple devices simultaneously.
While unicast addresses deliver packets to a single device, anycast addresses direct packets to the nearest device among several, and link-local addresses provide local link connectivity. Multicast addresses uniquely support one-to-many delivery in IPv6 networks. This capability makes multicast ideal for scenarios such as media streaming, routing updates, and other applications where a single packet must reach multiple devices efficiently. Therefore, multicast is the correct address type when delivering packets to all devices in a group, optimising bandwidth usage, and enabling scalable and efficient communication in enterprise IPv6 networks.