Cisco 350-401 Implementing Cisco Enterprise Network Core Technologies (ENCOR) Exam Dumps and Practice Test Questions Set 2 Q16-30
Question 16
Which protocol enables routers to dynamically learn routes and automatically adjust to network topology changes?
A) OSPF
B) ARP
C) ICMP
D) DNS
Answer: A) OSPF
Explanation:
ARP, or Address Resolution Protocol, is used to map IP addresses to MAC addresses within a local network. It does not provide routing capabilities or knowledge of network topology. ICMP, Internet Control Message Protocol, is used primarily for diagnostics and error reporting, such as with ping and traceroute, but it does not manage routing information. DNS, or Domain Name System, translates human-readable domain names into IP addresses and does not deal with network routes. OSPF, or Open Shortest Path First, is a link-state routing protocol that allows routers to exchange information about the state of their links, enabling them to dynamically learn routes to all reachable networks within an autonomous system. OSPF routers build a link-state database, apply the Dijkstra algorithm, and calculate the shortest path tree to determine optimal paths. If a network topology changes, OSPF automatically recalculates routes and updates forwarding tables without manual intervention. This adaptability ensures high availability and minimal downtime in enterprise networks. OSPF also supports hierarchical network design using areas, which reduces routing overhead and improves scalability. Compared to static routing, which requires manual configuration and changes whenever the network topology changes, OSPF’s dynamic behaviour is far more efficient and resilient. Therefore, the correct answer is OSPF because it provides dynamic route learning, automatic adjustment to topology changes, and fast convergence within an autonomous system.
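To make the SPF computation concrete, the short Python sketch below runs Dijkstra's algorithm over a small link-state database; the router names and link costs are illustrative, and the code is a simplified model of the calculation OSPF performs rather than an OSPF implementation.
```python
import heapq

def spf(lsdb, root):
    """Dijkstra shortest-path-first over a link-state database.

    lsdb maps each router to {neighbour: link cost}; the result maps each
    reachable router to (total cost, first hop used from the root).
    """
    best = {root: (0, None)}
    pq = [(0, root, None)]                      # (cost so far, router, first hop)
    while pq:
        cost, node, first_hop = heapq.heappop(pq)
        if cost > best[node][0]:
            continue                            # stale queue entry
        for neigh, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            hop = neigh if node == root else first_hop
            if neigh not in best or new_cost < best[neigh][0]:
                best[neigh] = (new_cost, hop)
                heapq.heappush(pq, (new_cost, neigh, hop))
    return best

# Illustrative four-router topology; lower cost means a faster link.
lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 1},
    "R4": {"R2": 1, "R3": 1},
}
print(spf(lsdb, "R1"))   # R4 is reached via R3 at cost 2 rather than via R2 at cost 11
```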
Question 17
Which feature of Ethernet allows multiple VLANs to be carried across a single physical link?
A) STP
B) Trunking
C) EtherChannel
D) Port Security
Answer: B) Trunking
Explanation:
In modern enterprise networks, the ability to efficiently manage traffic from multiple VLANs is critical for scalability, performance, and security. Virtual LANs, or VLANs, segment network traffic logically, allowing different groups of devices to operate in isolated broadcast domains even when sharing the same physical infrastructure. However, when traffic from multiple VLANs needs to traverse a single physical connection between switches, a specialised mechanism is required. This is where trunking plays a fundamental role in network design.
Spanning Tree Protocol (STP) is a key Layer 2 protocol that enhances network stability by preventing switching loops. STP achieves this by selectively blocking redundant paths in a network, ensuring that only a single active path exists between any two switches. While STP is essential for loop prevention, it does not facilitate the transmission of multiple VLANs over a single link. Its focus is solely on network redundancy and loop avoidance, leaving VLAN traffic management to other mechanisms.
EtherChannel is another Layer 2 technology designed to improve bandwidth and redundancy. By combining multiple physical links into a single logical link, EtherChannel allows traffic to be distributed across multiple connections, increasing throughput and providing failover in case a link fails. However, EtherChannel does not inherently manage multiple VLANs on a single link; it simply treats the aggregated connections as a single logical path. VLAN tagging and traffic separation must still be handled independently.
Port security is also important in enterprise networks, as it restricts access to switch ports based on MAC addresses. This feature helps prevent unauthorised devices from connecting and reduces the risk of network attacks. While port security is valuable for controlling access at the edge, it does not address the issue of carrying traffic from multiple VLANs over a single link.
Trunking, on the other hand, is specifically designed to allow a single physical connection between switches to carry traffic from multiple VLANs. Trunk links achieve this by tagging Ethernet frames with VLAN identifiers, typically using the IEEE 802.1Q standard. Each frame carries a VLAN tag that tells the receiving switch which VLAN the frame belongs to, preserving logical separation across the shared link. Trunking enables efficient use of network infrastructure, as switches no longer require a separate physical connection for each VLAN. Without trunking, a network with numerous VLANs would quickly become inefficient and unmanageable, with each VLAN demanding its own dedicated cable.
Trunking ensures that broadcast, multicast, and unicast traffic for all VLANs can traverse a single link while maintaining the isolation and integrity of each VLAN domain. This capability simplifies network design, reduces cabling requirements, and allows for scalable deployment of VLANs across multiple switches. Trunking also facilitates consistent VLAN propagation across the network, ensuring that all switches maintain awareness of VLAN membership and can forward traffic appropriately.
While STP, EtherChannel, and port security each provide critical functionality for network stability, redundancy, and security, they do not solve the challenge of carrying multiple VLANs over a single link. Trunking is the mechanism that enables efficient, scalable, and logically separated communication across VLANs using a single physical connection. By tagging frames and preserving VLAN separation, trunking optimises network design, reduces hardware requirements, and supports the deployment of large-scale enterprise networks. Therefore, trunking is the correct solution for managing multiple VLANs over a single switch-to-switch link.
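The tagging itself is simple to visualise: the sketch below packs the 4-byte 802.1Q tag that a trunk port inserts after the source MAC address, consisting of the TPID value 0x8100 followed by a 16-bit TCI holding a 3-bit priority, a 1-bit drop-eligible indicator, and the 12-bit VLAN ID. The VLAN and priority values are illustrative.
```python
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte IEEE 802.1Q tag inserted into a tagged Ethernet frame.

    TPID 0x8100 identifies the frame as tagged; the TCI packs a 3-bit
    priority code point, a 1-bit drop-eligible indicator, and the
    12-bit VLAN identifier.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(vlan_id=20, priority=5)
print(tag.hex())   # 8100a014 -> TPID 8100, priority 5, VLAN 20
```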
Question 18
Which mechanism allows a router to identify and forward traffic based on Layer 4 information, such as TCP or UDP ports?
A) NAT
B) ACL
C) QoS
D) Port Forwarding
Answer: B) ACL
Explanation:
In modern networking, managing traffic effectively and securely requires a combination of tools that operate at different layers of the OSI model. One critical requirement in enterprise environments is the ability to filter or forward traffic not just based on IP addresses, but also based on Layer 4 information, such as TCP or UDP ports. Various technologies provide different capabilities in traffic management, but only one offers precise control over Layer 4 port-level traffic: Access Control Lists, or ACLs.
Network Address Translation (NAT) is commonly used to allow devices on a private network to communicate with external networks, such as the Internet. NAT works by modifying the source or destination IP addresses and sometimes the ports of packets as they traverse a router or firewall. This translation enables multiple devices to share a single public IP address and provides a basic level of network security by obscuring internal addresses. However, NAT itself does not provide selective control or filtering based on Layer 4 ports. While NAT can include port forwarding features to direct incoming traffic on specific ports to a particular host, this is limited to redirection and does not allow administrators to enforce comprehensive policies or filter traffic at a granular level.
Quality of Service, or QoS, is another network mechanism often mentioned in traffic management. QoS classifies and prioritises traffic based on criteria such as application type, protocol, or IP address. Its purpose is to ensure that critical applications, such as voice over IP, video conferencing, or high-priority data streams, receive preferential treatment in terms of bandwidth and latency. While QoS is essential for maintaining performance and reducing congestion, it does not inherently allow or block traffic based on specific Layer 4 ports, making it insufficient for policy-based filtering.
Port forwarding, as part of NAT configurations, allows inbound traffic on specific ports to reach designated internal hosts. This technique is useful for making services available externally, such as web servers or game servers, but it does not provide the broader ability to control access, deny unwanted traffic, or implement complex security policies across an entire network. It is primarily a mechanism for traffic redirection rather than a comprehensive filtering solution.
Access Control Lists, however, provide the precise Layer 4 control required for policy enforcement. ACLs allow network administrators to evaluate packets based on multiple attributes, including source and destination IP addresses, protocols, and TCP or UDP port numbers. Using ACLs, administrators can permit, deny, or redirect traffic according to business or security requirements. ACLs can be applied inbound or outbound on network interfaces, ensuring that only authorised traffic flows through the network while blocking malicious or unauthorised access. For example, an ACL could allow HTTP traffic on TCP port 80 from trusted sources while denying SSH access on port 22 from external networks. This granular level of control is critical for protecting sensitive resources, enforcing network policies, and maintaining secure enterprise environments.
While NAT, QoS, and port forwarding provide valuable network functions, they do not offer comprehensive control over traffic at the Layer 4 level. ACLs are the tool specifically designed for this purpose, allowing administrators to filter, permit, or redirect traffic based on TCP and UDP port information. This Layer 4 awareness ensures precise, policy-driven traffic management and is essential for securing modern enterprise networks. Therefore, ACLs represent the correct solution for controlling traffic at the port level.
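The sketch below models how an extended ACL is evaluated: entries are checked top-down against protocol, source and destination addresses, and destination port, and an implicit deny applies if nothing matches. The networks, ports, and rule set are illustrative, not a production policy.
```python
import ipaddress

# Each entry: (action, protocol, source network, destination network, dest port or None)
acl = [
    ("permit", "tcp", "203.0.113.0/24", "192.0.2.10/32", 80),   # allow HTTP from a trusted range
    ("deny",   "tcp", "0.0.0.0/0",      "0.0.0.0/0",     22),   # block SSH from anywhere
    ("permit", "ip",  "0.0.0.0/0",      "0.0.0.0/0",     None), # allow everything else
]

def acl_lookup(acl, proto, src, dst, dport):
    """Return the action of the first matching entry (implicit deny if none match)."""
    for action, a_proto, a_src, a_dst, a_port in acl:
        if a_proto not in ("ip", proto):
            continue
        if ipaddress.ip_address(src) not in ipaddress.ip_network(a_src):
            continue
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(a_dst):
            continue
        if a_port is not None and a_port != dport:
            continue
        return action
    return "deny"   # implicit deny at the end of every ACL

print(acl_lookup(acl, "tcp", "203.0.113.5", "192.0.2.10", 80))   # permit
print(acl_lookup(acl, "tcp", "198.51.100.7", "192.0.2.10", 22))  # deny
```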
Question 19
Which type of IPv6 address is used for one-to-one communication between two devices?
A) Link-local
B) Multicast
C) Unicast
D) Anycast
Answer: C) Unicast
Explanation:
In IPv6 networking, different types of addresses are used to manage how data is delivered across devices, each serving distinct purposes depending on the communication requirements. One important category is link-local addresses. These addresses are automatically assigned to every IPv6-enabled interface and are primarily used for communication within a single network segment or link. They are crucial for fundamental network functions such as neighbour discovery, address autoconfiguration, and routing protocol operations. Link-local addresses, however, are not intended for general one-to-one communication beyond the local link. They allow devices to interact with directly connected neighbours, but cannot be used to reach devices across broader network segments or the Internet.
Multicast addresses serve a different purpose. These addresses allow a single source to send traffic to a specific group of receivers simultaneously. Multicast is highly efficient for one-to-many communication scenarios, such as distributing streaming media, updates, or announcements to multiple devices. While multicast ensures that a message reaches all members of the designated group, it does not provide one-to-one communication. The network delivers the packets to all subscribers of the multicast group rather than targeting a single interface.
Anycast addresses are another specialised type of IPv6 address. They are assigned to multiple devices, often in geographically distributed locations. When a packet is sent to an anycast address, routing protocols determine the nearest device based on metrics such as hop count, path cost, or latency. While anycast enhances performance by directing traffic to the closest available device and provides redundancy, it does not guarantee delivery to a specific interface. Anycast is typically used for services that benefit from distributed availability, such as DNS servers, content delivery networks, and load-balancing applications.
For direct one-to-one communication, unicast addresses are the correct choice. A unicast address uniquely identifies a single interface on a device. When traffic is sent to a unicast address, it is delivered directly and exclusively to the interface assigned that address. IPv6 unicast addresses include global unicast addresses, which are routable across the Internet, and unique local addresses, which operate within private networks. This addressing model ensures reliable and predictable packet delivery between two specific devices, similar to IPv4 unicast addressing. Unicast addresses are essential for standard client-to-server interactions, point-to-point communications, and any scenario where traffic must reach a specific device reliably.
The distinction between unicast, link-local, multicast, and anycast addresses is significant for network design and operation. Link-local and multicast addresses are useful for specialised functions within a network segment or group, and anycast addresses provide redundancy and performance optimisation. However, none of these address types guarantees the direct delivery of packets to a single, specific interface. Unicast addresses, on the other hand, are explicitly designed for this purpose, facilitating precise one-to-one communication in both local and global contexts.
Unicast addresses in IPv6 provide a reliable mechanism for direct communication between individual devices. They support both global and private network scenarios, ensuring that packets reach their intended destination interface without ambiguity. Unlike link-local, multicast, or anycast addresses, unicast uniquely enables targeted, one-to-one delivery, making it the correct choice when direct device-to-device communication is required in an IPv6 network.
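Python's standard ipaddress module can classify the IPv6 address types discussed above, which is a convenient way to check an address before assigning it; the addresses shown use documentation and example ranges. Note that an anycast address is syntactically indistinguishable from unicast, so it cannot be detected this way.
```python
import ipaddress

addresses = [
    "2001:db8::1",    # global unicast format, documentation prefix
    "fe80::1",        # link-local unicast, valid only on the local link
    "fd12:3456::1",   # unique local unicast, private internal addressing
    "ff02::1",        # multicast: all nodes on the local link
]

for text in addresses:
    addr = ipaddress.IPv6Address(text)
    print(f"{text:15} link-local={addr.is_link_local} "
          f"multicast={addr.is_multicast} private={addr.is_private}")
```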
Question 20
Which protocol is commonly used to monitor and manage network devices and gather statistics?
A) SNMP
B) ICMP
C) NTP
D) FTP
Answer: A) SNMP
Explanation:
ICMP, or Internet Control Message Protocol, is primarily used for diagnostic purposes, such as ping and traceroute, to test connectivity or report errors. While it can indicate whether a device is reachable, it does not provide detailed monitoring or management capabilities for devices or network performance. NTP, or Network Time Protocol, is used exclusively to synchronise clocks across network devices. Accurate time is critical for logs, troubleshooting, and security, but NTP does not gather statistics or monitor device health. FTP, or File Transfer Protocol, is used to transfer files between devices over a network, but has no monitoring or management capabilities for network devices.
SNMP, or Simple Network Management Protocol, is specifically designed to monitor and manage network devices such as routers, switches, firewalls, and servers. SNMP allows administrators to collect real-time data about device performance, bandwidth usage, interface statistics, CPU and memory utilisation, and error rates. SNMP operates using a manager-agent model, where network devices run an SNMP agent that communicates with a central SNMP manager to report status, send alerts (traps), and respond to queries. By gathering this information, network administrators can proactively identify bottlenecks, troubleshoot issues, and optimise network performance. SNMP supports standard Management Information Bases (MIBs), which define the structure of the data that can be retrieved, ensuring interoperability between devices from different vendors. It can also be used to automate network tasks, trigger alerts when thresholds are exceeded, and generate performance reports for capacity planning. Because of its extensive monitoring and management capabilities, SNMP is widely implemented in enterprise networks for maintaining visibility into network health and ensuring reliability. Therefore, the correct answer is SNMP because it is the protocol explicitly designed to monitor network devices, collect statistics, and facilitate centralised network management, unlike ICMP, NTP, or FTP.
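The manager-agent model can be illustrated with a toy sketch: the agent side exposes values keyed by OID (the OIDs shown, sysDescr and the ifInOctets/ifOutOctets counters, are standard MIB-II identifiers), and the manager polls them. This is only a conceptual model, not an SNMP implementation, and the counter values are made up.
```python
# Toy model of the SNMP manager/agent exchange -- not a real SNMP stack.
# The agent exposes values keyed by OID; the manager polls the ones it needs.

AGENT_MIB = {
    "1.3.6.1.2.1.1.1.0":      "Example switch system description",  # sysDescr
    "1.3.6.1.2.1.2.2.1.10.1": 987_654_321,                          # ifInOctets, interface 1
    "1.3.6.1.2.1.2.2.1.16.1": 123_456_789,                          # ifOutOctets, interface 1
}

def snmp_get(mib, oid):
    """Return the value bound to an OID, mimicking an SNMP GET response."""
    return mib.get(oid, "noSuchObject")

def poll_interface_octets(mib, if_index):
    """Manager-side poll: gather in/out octet counters for one interface."""
    return {
        "in":  snmp_get(mib, f"1.3.6.1.2.1.2.2.1.10.{if_index}"),
        "out": snmp_get(mib, f"1.3.6.1.2.1.2.2.1.16.{if_index}"),
    }

print(snmp_get(AGENT_MIB, "1.3.6.1.2.1.1.1.0"))
print(poll_interface_octets(AGENT_MIB, 1))
```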
Question 21
Which type of wireless network topology allows devices to communicate directly without an access point?
A) Infrastructure mode
B) Mesh network
C) Ad hoc mode
D) Repeater mode
Answer: C) Ad hoc mode
Explanation:
Infrastructure mode is the most common wireless topology, where all wireless devices communicate through a central access point (AP). The AP manages traffic, provides security features, and enables communication with wired networks, and it is required for any connectivity: clients in infrastructure mode do not exchange traffic with each other directly. Mesh networks involve multiple APs or nodes that communicate with each other to provide seamless coverage and redundancy. While mesh networks can extend range and improve reliability, they still rely on access points or mesh nodes to manage traffic. Repeater mode is used to extend the coverage area of an existing wireless network by repeating signals from an access point; devices still communicate primarily through the AP. Ad hoc mode, also called peer-to-peer mode, allows wireless devices to communicate directly with each other without requiring an access point. Each device participates in the communication equally, making this topology useful for temporary networks, small file sharing, or emergency connectivity.
Devices in ad hoc mode manage routing and discovery themselves without centralised control. However, this mode typically lacks advanced features like centralised authentication, QoS, or seamless roaming. Ad hoc networks are simpler to deploy for small groups but may not scale efficiently for large enterprise networks. Therefore, the correct answer is Ad hoc mode because it enables direct device-to-device communication without relying on an access point, which distinguishes it from infrastructure, mesh, or repeater setups.
Question 22
Which routing protocol supports classless inter-domain routing and is widely used for Internet backbone routing?
A) RIP
B) OSPF
C) BGP
D) EIGRP
Answer: C) BGP
Explanation:
RIP, or Routing Information Protocol, is a distance-vector protocol that uses hop count as its metric and does not support large-scale backbone routing efficiently. It is not designed for inter-domain routing and has limitations in scalability and convergence speed. OSPF, or Open Shortest Path First, is a link-state protocol suitable for internal routing within an autonomous system but is not intended for inter-domain or global Internet routing.
EIGRP is an advanced distance-vector protocol designed for internal routing within an autonomous system; it provides fast convergence and supports multiple metrics, but is not widely used for Internet backbone routing. BGP, or Border Gateway Protocol, is explicitly designed for exchanging routing information between autonomous systems (inter-domain). It supports classless inter-domain routing (CIDR) to optimise route aggregation and reduce routing table size on the global Internet. BGP uses path attributes, policies, and AS-path information to determine the best path for traffic between ASes. Its scalability, policy-based routing, and ability to manage complex networks make it the de facto protocol for the Internet backbone. BGP also provides resilience through multiple paths, loop prevention, and policy enforcement, making it critical for global connectivity. Therefore, the correct answer is BGP because it is the protocol that enables inter-domain routing, supports classless addressing, and forms the backbone of Internet routing.
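The route aggregation that CIDR makes possible can be demonstrated with the standard ipaddress module: four contiguous /24 prefixes collapse into a single /22 summary, which is the kind of aggregation a BGP speaker advertises to keep Internet routing tables manageable. The prefixes are illustrative.
```python
import ipaddress

# Four contiguous /24 prefixes that a site might otherwise advertise separately.
prefixes = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(4)]

# collapse_addresses() performs the CIDR aggregation a router would apply
# before advertising a single summary prefix to its neighbours.
summary = list(ipaddress.collapse_addresses(prefixes))
print(summary)   # [IPv4Network('10.10.0.0/22')]
```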
Question 23
Which Layer 2 protocol is used to automatically negotiate trunking between two switches?
A) STP
B) CDP
C) DTP
D) VTP
Answer: C) DTP
Explanation:
In Ethernet networks, particularly those operating at Layer 2, the efficient transfer of traffic between switches is critical to maintaining performance, VLAN segmentation, and overall network stability. One common task in managing a switched network is configuring trunk links, which carry traffic for multiple VLANs between switches. While several network protocols support different aspects of Layer 2 operations, only one is specifically designed to automate the negotiation of trunk links: Dynamic Trunking Protocol, or DTP.
The Spanning Tree Protocol (STP) is widely used in Layer 2 networks to prevent switching loops. By selectively blocking redundant paths, STP ensures that only a loop-free topology is active, avoiding broadcast storms and other issues caused by multiple active paths between switches. Although STP is essential for network stability, it does not handle trunk negotiation or VLAN configurations. Its focus is solely on loop prevention, leaving trunk establishment to other mechanisms.
Cisco Discovery Protocol (CDP) is another Layer 2 protocol that allows network devices to share information about themselves and their directly connected neighbours. CDP provides useful insights, such as device type, IP address, and interface details, which aid in network management and troubleshooting. However, CDP does not automatically configure trunk links or determine VLAN assignments, and its role is limited to discovery and information sharing rather than configuration.
VLAN Trunking Protocol (VTP) is used to maintain VLAN consistency across multiple switches in an enterprise network. By distributing VLAN information, VTP helps ensure that all switches are aware of which VLANs exist and can manage changes centrally. Despite its role in VLAN management, VTP does not negotiate trunk links on a per-port basis. It requires preexisting trunks or manually configured links to propagate VLAN data and cannot dynamically create trunk connections between switches.
Dynamic Trunking Protocol, by contrast, is specifically designed to automate the process of trunk negotiation between switches. DTP is a Cisco proprietary protocol that allows two directly connected interfaces to communicate and determine whether a trunk should be established. Depending on the configuration, DTP can operate in dynamic desirable or dynamic auto mode. Dynamic desirable mode actively attempts to form a trunk, while dynamic auto mode waits for the other side to initiate the negotiation. Once the negotiation succeeds, a trunk link is established, allowing multiple VLANs to traverse the link efficiently.
The automation provided by DTP significantly reduces the potential for configuration errors that can occur when manually configuring trunk links. In large enterprise networks with numerous switches and VLANs, manually configuring trunks on every interface would be both time-consuming and error-prone. DTP simplifies network deployment, ensures consistent VLAN propagation across the network, and enhances operational efficiency.
While protocols like STP, CDP, and VTP each serve important functions in a Layer 2 environment, DTP is uniquely responsible for dynamically negotiating trunk links between switches. By automating trunk creation and supporting flexible modes of operation, DTP enables VLANs to be correctly carried between switches, reduces administrative overhead, and minimises configuration mistakes. Its use is particularly valuable in enterprise networks, where maintaining consistent and reliable trunking is critical for network performance and scalability.
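The negotiation outcome can be summarised with a simplified model of the commonly documented mode combinations: dynamic desirable forms a trunk with desirable, auto, or trunk, while two dynamic auto ports never initiate and therefore remain access ports. Real DTP also negotiates the encapsulation type, which is omitted in this sketch.
```python
def dtp_outcome(local_mode, remote_mode):
    """Return 'trunk' or 'access' for a pair of port modes.

    Simplified model: 'trunk' and 'dynamic desirable' actively form trunks;
    'dynamic auto' only responds to the other side; 'access' never trunks.
    """
    active = {"trunk", "dynamic desirable"}
    willing = active | {"dynamic auto"}
    if "access" in (local_mode, remote_mode):
        return "access"
    if (local_mode in active and remote_mode in willing) or \
       (remote_mode in active and local_mode in willing):
        return "trunk"
    return "access"    # two dynamic auto ports never initiate, so no trunk forms

print(dtp_outcome("dynamic desirable", "dynamic auto"))  # trunk
print(dtp_outcome("dynamic auto", "dynamic auto"))       # access
```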
Question 24
Which feature ensures that voice and video traffic receive priority over other types of network traffic?
A) VLAN
B) QoS
C) NAT
D) ACL
Answer: B) QoS
Explanation:
VLANs segment traffic into separate broadcast domains but do not prioritise certain traffic types. NAT translates IP addresses and facilitates connectivity between private and public networks, but does not handle traffic prioritisation. ACLs control access by permitting or denying traffic based on IP addresses, protocols, or ports, but do not provide prioritisation. QoS, or Quality of Service, allows network administrators to classify, mark, and schedule traffic to ensure that high-priority applications like voice and video receive better treatment than lower-priority data. QoS can allocate bandwidth, reduce latency and jitter, and manage congestion to maintain optimal performance for critical applications.
Mechanisms such as traffic shaping, policing, and queuing ensure that time-sensitive packets are delivered reliably, which is crucial for VoIP and streaming video services. QoS operates across multiple layers and can integrate with Layer 2 and Layer 3 devices to maintain consistent traffic prioritisation throughout the network. Proper QoS configuration is essential in enterprise networks where high-bandwidth and low-latency applications must coexist with standard data traffic. Therefore, the correct answer is QoS because it provides traffic prioritisation, ensuring that voice and video applications maintain performance even under network congestion.
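Strict-priority queuing, one of the scheduling mechanisms mentioned above, can be sketched in a few lines: packets classified as voice are always dequeued before best-effort data. The class names and packets are illustrative, and real QoS designs normally cap the priority queue so it cannot starve other traffic.
```python
from collections import deque

class PriorityScheduler:
    """Strict-priority scheduler: drain the voice queue before the data queue."""

    def __init__(self):
        self.queues = {"voice": deque(), "data": deque()}

    def enqueue(self, packet, traffic_class):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        for traffic_class in ("voice", "data"):   # voice always served first
            if self.queues[traffic_class]:
                return self.queues[traffic_class].popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("data-1", "data")
sched.enqueue("voip-1", "voice")
sched.enqueue("data-2", "data")
print(sched.dequeue(), sched.dequeue(), sched.dequeue())  # voip-1 data-1 data-2
```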
Question 25
Which IP routing protocol supports VLSM (Variable Length Subnet Masking) and uses hop count as its metric?
A) RIP
B) OSPF
C) EIGRP
D) BGP
Answer: A) RIP
Explanation:
OSPF is a link-state protocol that fully supports VLSM and calculates the best path using cost based on bandwidth, not hop count. EIGRP supports VLSM and uses a composite metric including bandwidth, delay, load, and reliability, rather than just hop count. BGP is an inter-domain routing protocol that does not rely on hop count; instead, it uses path attributes and policies to determine routes. RIP, or Routing Information Protocol, is a distance-vector protocol that originally supported classful addressing, but later versions (RIP version 2) added support for VLSM, enabling more efficient use of IP address space by allowing subnets of varying sizes. RIP uses hop count as its metric to determine the shortest path to a destination network, with a maximum of 15 hops to prevent routing loops. Although simple and easy to configure, RIP has limitations in large networks due to slow convergence and limited scalability. Its support for VLSM in RIP version 2 allows network administrators to implement hierarchical subnetting while maintaining compatibility with older RIP configurations. Therefore, the correct answer is RIP because it supports variable-length subnet masks and uses hop count as its routing metric.
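VLSM itself is easy to illustrate with the ipaddress module: one /24 is carved into subnets of different sizes, an allocation that classful RIPv1 could not advertise but RIPv2 can. The address block and subnet sizes are illustrative.
```python
import ipaddress

block = ipaddress.ip_network("192.168.10.0/24")

# Variable-length subnetting: a /25 for a large LAN, a /26 for a smaller one,
# and /30s for point-to-point links -- all carved from the same /24.
large_lan, rest = block.subnets(new_prefix=25)
small_lan, leftover = rest.subnets(new_prefix=26)
p2p_links = list(leftover.subnets(new_prefix=30))[:2]

print(large_lan, small_lan, p2p_links)
# 192.168.10.0/25 192.168.10.128/26 [192.168.10.192/30, 192.168.10.196/30]
```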
Question 26
Which technology allows multiple IP subnets to exist on the same VLAN?
A) NAT
B) VLAN routing
C) Secondary IP addresses
D) HSRP
Answer: C) Secondary IP addresses
Explanation:
NAT, or Network Address Translation, is used to translate private IP addresses to public addresses for Internet connectivity and does not directly allow multiple subnets on a single VLAN. VLAN routing refers to Layer 3 routing between VLANs, which requires separate IP subnets for each VLAN; it does not allow multiple subnets within the same VLAN. HSRP, or Hot Standby Router Protocol, provides redundancy for default gateways but is not related to hosting multiple subnets on the same VLAN. Secondary IP addresses are a feature that allows a router or Layer 3 interface to be configured with multiple IP addresses within different subnets on the same physical or logical interface. By assigning secondary addresses, devices within the same VLAN can communicate with different subnets without requiring additional physical interfaces. This is particularly useful in scenarios such as gradual network migrations, overlapping networks, or integrating multiple address spaces without creating additional VLANs. When using secondary IP addresses, the primary IP address is still the main identifier for the interface, while secondary addresses are used to route traffic for additional subnets. The routing table treats each secondary subnet as a directly connected network, so communication between the subnets occurs seamlessly within the VLAN. This method allows network engineers to maximise address space and maintain logical network segmentation without adding complexity to the physical network. However, it is essential to manage ARP and routing behaviour carefully to avoid conflicts between subnets. Therefore, the correct answer is Secondary IP addresses because they allow a single VLAN interface to participate in multiple IP subnets, providing flexibility and efficient utilisation of network resources.
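The addressing situation can be sketched as follows: one gateway interface holds a primary and a secondary address in different subnets, and hosts in either subnet are served by the same interface. The addresses are illustrative.
```python
import ipaddress

# One routed interface (e.g., the VLAN's gateway) configured with a primary
# and a secondary address, each in its own subnet.
interface_subnets = {
    "primary":   ipaddress.ip_interface("10.1.1.1/24"),
    "secondary": ipaddress.ip_interface("10.1.2.1/24"),
}

def gateway_for(host):
    """Return which configured address serves the host's subnet."""
    addr = ipaddress.ip_address(host)
    for role, iface in interface_subnets.items():
        if addr in iface.network:
            return role, iface.ip
    return None, None

print(gateway_for("10.1.1.50"))   # ('primary', IPv4Address('10.1.1.1'))
print(gateway_for("10.1.2.99"))   # ('secondary', IPv4Address('10.1.2.1'))
```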
Question 27
Which protocol provides loop-free Layer 2 connectivity in a redundant network topology?
A) DTP
B) STP
C) VTP
D) RSTP
Answer: D) RSTP
Explanation:
DTP, or Dynamic Trunking Protocol, automatically negotiates trunk links between switches but does not provide loop prevention. STP, Spanning Tree Protocol, prevents loops by blocking redundant paths; however, the original STP (IEEE 802.1D) can converge slowly, taking 30 to 50 seconds to recover from a topology change. VTP, or VLAN Trunking Protocol, manages VLAN information across multiple switches to ensure consistency, but it does not address loop prevention. RSTP, or Rapid Spanning Tree Protocol (IEEE 802.1w), is an enhancement of STP that provides faster convergence while maintaining loop-free Layer 2 topologies. RSTP improves response time to topology changes by using edge ports and rapid transition mechanisms, allowing networks to recover from link failures much faster than traditional STP. In a redundant topology with multiple paths between switches, loops can cause broadcast storms, MAC table instability, and network outages. RSTP resolves these issues by quickly determining the active topology, blocking unnecessary ports, and forwarding traffic efficiently. The protocol classifies ports into roles such as root, designated, alternate, and backup, dynamically adjusting to failures and changes. Therefore, the correct answer is RSTP because it provides loop-free Layer 2 connectivity with rapid convergence, which is essential for maintaining high availability in modern enterprise networks.
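Both STP and RSTP begin with the same root bridge election: the switch with the lowest bridge ID wins, comparing priority first and using the MAC address as the tie-breaker. The sketch below models that comparison with illustrative priorities and MAC addresses.
```python
def elect_root(bridges):
    """Pick the root bridge: lowest priority wins, MAC address breaks ties."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:1a:2b:3c:4d:01"},
    {"name": "SW2", "priority": 32768, "mac": "00:1a:2b:3c:4d:02"},
    {"name": "SW3", "priority": 4096,  "mac": "00:1a:2b:3c:4d:03"},  # manually lowered priority
]

print(elect_root(bridges)["name"])   # SW3 -- its lower priority wins outright
```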
Question 28
Which mechanism allows routers to forward packets based on destination MAC addresses within a VLAN?
A) Routing table
B) ARP table
C) MAC address table
D) NAT table
Answer: C) MAC address table
Explanation:
In networking, efficient traffic forwarding within a local area network (LAN) relies heavily on the ability of devices to correctly identify the destination for each data frame. Layer 3 devices, such as routers, use routing tables to forward packets based on their IP addresses. These tables determine the next hop for a packet on its journey toward the final destination across networks. However, routing tables operate at the network layer and do not handle the delivery of frames based on physical addresses within a local segment. This distinction is critical, as traffic within a VLAN is primarily managed at Layer 2 using MAC addresses.
ARP tables, or Address Resolution Protocol tables, map IP addresses to MAC addresses, enabling devices to discover the physical address associated with a given network address. While ARP is essential for the initial resolution of addresses so that communication can occur, it does not actually forward traffic within a VLAN. Its role is limited to translating addresses rather than directing frames to the correct switch port.
Similarly, NAT tables are used to perform network address translation, converting private IP addresses to public IP addresses and vice versa. This function is crucial for enabling devices on private networks to communicate with external networks, such as the Internet. While NAT operates at Layer 3 and impacts the IP addressing of packets, it does not facilitate Layer 2 switching or determine which port a frame should be sent to within a VLAN.
The device responsible for efficiently forwarding traffic within a VLAN is the Layer 2 switch. Switches rely on a MAC address table, also referred to as a forwarding table or CAM (Content Addressable Memory) table. This table maintains a dynamic mapping of MAC addresses to the specific ports on which the corresponding devices are connected. When a switch receives a frame, it examines the destination MAC address and consults the MAC address table to determine the appropriate port to forward the frame. This targeted forwarding prevents unnecessary flooding of frames to all ports, improving network efficiency and reducing congestion.
MAC address tables are built and updated dynamically. When a new device connects to the switch, the switch learns the MAC address of the device as it receives frames from it. If the destination MAC address of an incoming frame is not present in the table, the switch temporarily floods the frame to all ports within the VLAN. Once the destination device responds, the switch records the MAC address and updates its table, ensuring future frames are forwarded directly to the correct port. This learning process is continuous and allows the switch to adapt as devices are added, removed, or moved to different ports.
The use of MAC address tables is essential for optimising performance, minimising broadcast traffic, and maintaining proper network segmentation within VLANs. By forwarding frames only to the intended destination port, switches reduce unnecessary network traffic, improve bandwidth utilisation, and maintain the isolation and efficiency that VLANs provide.
While routing tables, ARP tables, and NAT tables perform critical functions at higher layers of the network, the MAC address table is specifically responsible for enabling Layer 2 switches to forward frames accurately within a VLAN. Its dynamic mapping of MAC addresses to switch ports ensures efficient traffic delivery, minimises unnecessary flooding, and supports the overall performance and stability of local area networks.
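The learn-and-forward behaviour described above can be modelled in a few lines: the switch records each source MAC against its ingress port, forwards known destinations out a single port, and floods unknown destinations to every other port in the VLAN. MAC addresses and port numbers are illustrative.
```python
class LearningSwitch:
    """Simplified per-VLAN MAC address table: learn on ingress, forward or flood."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}            # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port           # learn the source address
        out_port = self.mac_table.get(dst_mac)
        if out_port is not None and out_port != in_port:
            return {out_port}                       # known destination: forward directly
        return self.ports - {in_port}               # unknown destination: flood the VLAN

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive("aa:aa", "bb:bb", in_port=1))   # unknown dst -> flood {2, 3, 4}
print(sw.receive("bb:bb", "aa:aa", in_port=2))   # aa:aa already learned -> {1}
print(sw.receive("aa:aa", "bb:bb", in_port=1))   # bb:bb now learned -> {2}
```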
Question 29
Which feature allows a network to provide high availability for default gateways?
A) HSRP
B) QoS
C) ACL
D) DTP
Answer: A) HSRP
Explanation:
In modern network design, ensuring high availability and uninterrupted connectivity for hosts is critical, especially in enterprise environments where downtime can have significant operational and financial impacts. One common requirement is maintaining reliable default gateway functionality so that if a primary router fails, hosts can still send traffic to other networks without interruption. Several network technologies provide different types of traffic management and security functions, but not all of them address gateway redundancy directly.
Quality of Service, or QoS, is often considered when discussing network reliability. QoS focuses on prioritising network traffic based on predefined policies, ensuring that critical applications such as voice, video, or mission-critical data receive priority over less important traffic. While QoS improves performance under heavy load, it does not provide redundancy for the default gateway. If the router serving as a gateway fails, QoS policies alone cannot maintain network connectivity, making it unsuitable for ensuring gateway availability.
Access Control Lists, or ACLs, are another important tool in network administration. ACLs allow administrators to control and filter traffic based on source and destination IP addresses, protocols, or port numbers. They are essential for security, preventing unauthorised access, and enforcing network policies. However, ACLs do not inherently provide failover capabilities for routers or ensure that a backup device can take over the role of a default gateway in the event of failure.
Dynamic Trunking Protocol, or DTP, is primarily used to negotiate trunk links between switches in a Layer 2 network. DTP automates the creation of trunk ports to carry multiple VLANs, simplifying switch-to-switch connectivity. While DTP is valuable for maintaining consistent VLAN configurations and avoiding misconfigurations, it is unrelated to providing redundancy or high availability for default gateways.
The technology specifically designed to address default gateway redundancy is HSRP, or Hot Standby Router Protocol. HSRP is a Cisco proprietary protocol that allows multiple routers to appear as a single virtual router to hosts on the network. One router is elected as the active router and is responsible for forwarding packets sent to the virtual IP address, which is configured as the default gateway for hosts. Another router is designated as the standby router. If the active router fails, the standby router automatically assumes control, taking over the virtual IP and continuing to forward traffic. This failover process occurs seamlessly, without requiring manual intervention or host reconfiguration, ensuring continuous connectivity.
HSRP relies on the exchange of hello messages between routers to monitor the status of the active router. If a hello message is missed within a predefined interval, the standby router assumes that the active router has failed and immediately begins forwarding traffic. This rapid failover mechanism minimises disruption and helps maintain business continuity. By providing a virtual IP address as the default gateway, HSRP allows hosts to use a single, stable IP address while benefiting from redundancy behind the scenes.
HSRP is widely deployed in enterprise networks because it enhances resilience, reduces the risk of network outages, and provides a reliable method for maintaining uninterrupted access to upstream networks. Unlike QoS, ACLs, or DTP, HSRP directly addresses the need for default gateway availability, making it the appropriate choice when high availability and seamless failover are required.
HSRP ensures that default gateways remain accessible even if a primary router fails. By electing an active router, maintaining standby routers, and using a virtual IP address for hosts, HSRP provides continuous connectivity, minimal downtime, and robust network resilience. For organisations seeking reliable gateway redundancy, HSRP is the definitive solution.
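A simplified model of the election and failover logic described above: the router with the highest priority becomes active for the virtual IP, and when its hellos stop arriving within the hold time, the standby takes over. The priorities and timestamps are illustrative, and real HSRP additionally involves preemption settings and a virtual MAC address.
```python
from dataclasses import dataclass

HOLD_TIME = 10   # seconds without hellos before the active router is declared down

@dataclass
class HsrpRouter:
    name: str
    priority: int
    last_hello: float   # timestamp of the last hello heard from this router

def elect_active(routers, now):
    """Return the active router for the group: highest priority among alive peers."""
    alive = [r for r in routers if now - r.last_hello < HOLD_TIME]
    return max(alive, key=lambda r: r.priority) if alive else None

group = [
    HsrpRouter("R1", priority=110, last_hello=0.0),   # intended active router
    HsrpRouter("R2", priority=100, last_hello=0.0),   # standby router
]

print(elect_active(group, now=5.0).name)    # R1 -- its hellos are still fresh

# R1 stops sending hellos; R2 keeps refreshing its timer.
group[1].last_hello = 12.0
print(elect_active(group, now=15.0).name)   # R2 -- takes over the virtual IP
```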
Question 30
Which IPv6 feature allows multiple devices to share the same IP address, with traffic delivered to the nearest device?
A) Unicast
B) Multicast
C) Anycast
D) Link-local
Answer: C) Anycast
Explanation:
In networking, different types of IP addresses are used to manage how data is delivered across devices, and understanding these distinctions is essential for designing efficient and resilient systems. Unicast addresses, for example, are the most straightforward type of IP address. They are assigned to a single interface and ensure that network traffic is delivered directly to that one specific device. When a packet is sent to a unicast address, it travels through the network along a path determined by routing tables until it reaches the intended recipient. This method is highly efficient for one-to-one communication but does not inherently support distribution to multiple devices.
Multicast addresses, in contrast, are designed for one-to-many communication. Traffic sent to a multicast address is delivered to all devices that have joined a specific multicast group. While multicast is efficient for distributing the same data to multiple recipients, such as streaming media or updates, it does not prioritise delivery to the nearest device. Instead, the network delivers packets to every member of the multicast group regardless of their location relative to the sender, which can lead to unnecessary traffic if the network is not carefully managed.
Link-local addresses are another type of IP address, typically used for communication within a single network segment or link. These addresses are automatically configured on all IPv6-enabled interfaces and are only valid within the local link. While they are useful for functions such as neighbour discovery and automatic address configuration, link-local addresses are not suitable for routing packets to the nearest device across a broader network. They are strictly local and cannot be used for scenarios that require intelligent routing or load distribution beyond the immediate link.
Anycast addresses, however, offer a unique approach that combines aspects of unicast and redundancy. Anycast is an IPv6 feature, although it can also be implemented in IPv4 under certain circumstances, where the same IP address is assigned to multiple devices, typically distributed across different locations in a network. When a packet is sent to an anycast address, the network routing infrastructure determines which of the devices sharing that address is "closest" to the sender, based on routing metrics such as path cost, number of hops, or other distance measures. The packet is then delivered to the nearest device, effectively directing traffic to the optimal location.
Anycast is particularly valuable for services that require both efficiency and high availability. It is commonly used in content delivery networks, Domain Name System (DNS) services, and global load balancing scenarios. By routing traffic to the nearest instance of a service, anycast reduces latency for end users and optimises the use of network resources. Additionally, it inherently provides redundancy: if one instance becomes unavailable, the routing protocol automatically directs traffic to the next nearest instance without requiring complex load-balancing mechanisms.
Overall, anycast addresses allow multiple devices to share a single IP while ensuring that network traffic is delivered to the most appropriate device based on proximity and routing efficiency. This approach enhances performance, reduces latency, and improves service availability, making it an essential tool for scalable, resilient, and high-performing network services. Unlike unicast, multicast, or link-local addressing, anycast uniquely combines the benefits of shared addressing with intelligent routing, making it the preferred choice for scenarios where directing traffic to the nearest device is critical.
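The routing decision behind anycast can be sketched simply: several instances share one service address, and traffic is delivered to the instance with the lowest routing metric from the sender's point of view. The sites and metrics are illustrative.
```python
# Instances of the same anycast service, keyed by site, with the routing
# metric (e.g., accumulated path cost) as seen from one particular sender.
anycast_instances = {
    "frankfurt": 12,
    "london":    5,
    "singapore": 48,
}

def nearest_instance(instances):
    """Pick the instance the routing system would choose: lowest metric wins."""
    return min(instances, key=instances.get)

print(nearest_instance(anycast_instances))   # london -- lowest path cost from this sender
```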