Cisco 350-401 Implementing Cisco Enterprise Network Core Technologies (ENCOR) Exam Dumps and Practice Test Questions Set 4 Q46-60
Question 46
Which technology enables Layer 3 switches to forward traffic between VLANs without a dedicated router?
A) Routed port
B) SVI
C) EtherChannel
D) ACL
Answer: B) SVI
Explanation:
Routed ports are Layer 3 interfaces configured on a switch to behave like a router interface, but each requires a separate physical port and does not inherently provide VLAN interconnection. EtherChannel aggregates multiple physical links into a single logical link to increase bandwidth and redundancy, but does not perform routing between VLANs. ACLs, or Access Control Lists, control the flow of traffic based on rules but do not enable routing. SVI, or Switch Virtual Interface, is a virtual Layer 3 interface configured on a VLAN within a Layer 3 switch. SVIs allow the switch to route traffic between multiple VLANs internally without requiring an external router, providing inter-VLAN communication. Each SVI is assigned an IP address corresponding to the VLAN, acting as the default gateway for devices in that VLAN. The switch can maintain multiple SVIs simultaneously, allowing communication across VLANs while leveraging the high-speed switching fabric. SVIs also support routing protocols, QoS, and security features directly on the switch. By eliminating the need for separate physical routers for inter-VLAN traffic, SVIs reduce complexity, cost, and latency in the network. SVIs are critical in modern enterprise environments where Layer 3 switches consolidate access and distribution layers, enabling efficient routing and simplified management. Therefore, the correct answer is SVI because it provides Layer 3 routing between VLANs internally on a switch, enabling inter-VLAN communication without a dedicated router.
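As a minimal sketch, inter-VLAN routing with SVIs on a Cisco IOS Layer 3 switch might look like the following; the VLAN numbers and IP addresses are illustrative:

```
! Enable the Layer 3 routing engine on the switch
ip routing
!
! One SVI per VLAN; each acts as the default gateway for hosts in that VLAN
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
 no shutdown
!
interface Vlan20
 ip address 10.1.20.1 255.255.255.0
 no shutdown
```

A host in VLAN 10 would use 10.1.10.1 as its default gateway, and the switch routes its traffic to VLAN 20 internally across the switching fabric.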
Question 47
Which protocol is used to automatically assign IPv4 addresses and configuration to hosts on a network?
A) DNS
B) DHCP
C) NTP
D) ARP
Answer: B) DHCP
Explanation:
DNS, or Domain Name System, resolves domain names to IP addresses but does not assign IP addresses to hosts. NTP, or Network Time Protocol, synchronises clocks across network devices and does not assign IP configurations. ARP, Address Resolution Protocol, maps IP addresses to MAC addresses within a subnet but does not assign addresses dynamically. DHCP, or Dynamic Host Configuration Protocol, provides automatic IPv4 addressing and configuration to hosts on a network. DHCP servers maintain a pool of available IP addresses and lease them to clients for a specified period. In addition to IP addresses, DHCP can provide subnet masks, default gateways, DNS servers, and other network configuration parameters. When a host joins a network, it sends a DHCP Discover message to locate a server. The server responds with an Offer, and after the host requests the offered IP, the server confirms the lease. This process allows centralised management of IP address allocation, prevents address conflicts, and simplifies device configuration. DHCP also supports dynamic reallocation of IP addresses, ensuring efficient utilisation of available address space. In enterprise networks, DHCP reduces manual errors, eases the deployment of new devices, and ensures consistency across network configurations. Therefore, the correct answer is DHCP because it automatically assigns IPv4 addresses and other configuration parameters to hosts, streamlining network management and reducing configuration errors.
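An illustrative DHCP server pool on a Cisco IOS device might be configured as follows; the addresses, pool name, and lease length are assumptions for the example:

```
! Reserve addresses for static devices so the pool never leases them
ip dhcp excluded-address 192.168.1.1 192.168.1.10
!
ip dhcp pool LAN-POOL
 network 192.168.1.0 255.255.255.0
 default-router 192.168.1.1
 dns-server 192.168.1.5
 lease 7
```

Clients on the 192.168.1.0/24 segment then receive an address, subnet mask, gateway, and DNS server automatically through the Discover/Offer/Request/Acknowledgment exchange.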
Question 48
Which Layer 2 protocol prevents switching loops in redundant network topologies?
A) CDP
B) STP
C) VTP
D) DTP
Answer: B) STP
Explanation:
CDP, Cisco Discovery Protocol, is used to identify directly connected Cisco devices and gather device information, but does not prevent loops. VTP, VLAN Trunking Protocol, synchronises VLAN configurations across switches but does not control Layer 2 loops. DTP, Dynamic Trunking Protocol, negotiates trunk links but does not manage switching loops. STP, or Spanning Tree Protocol, prevents Layer 2 loops in redundant networks by creating a loop-free logical topology. In redundant environments with multiple paths, loops can lead to broadcast storms, MAC table instability, and network outages. STP elects a root bridge and assigns port roles (root, designated, blocked) to control traffic flow and eliminate loops. If a link fails, STP recalculates the topology to maintain loop-free connectivity. Enhancements like Rapid STP (RSTP) improve convergence time, allowing networks to recover quickly from failures. STP operates transparently, ensuring that redundant paths provide fault tolerance without causing loops. Proper STP configuration is essential in enterprise networks with redundant links to maintain stability and availability. Therefore, the correct answer is STP because it prevents Layer 2 switching loops in redundant network topologies, maintaining a stable and reliable network.
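A hedged example of enabling Rapid PVST+ and pinning the root bridge on a Cisco IOS switch; the VLAN number is an assumption:

```
! Use Rapid STP (per-VLAN) for faster convergence
spanning-tree mode rapid-pvst
!
! Make this switch the root bridge for VLAN 10 deterministically
spanning-tree vlan 10 root primary
!
! Verify port roles (root, designated, blocked) and states
show spanning-tree vlan 10
```

Explicitly setting the root bridge prevents an arbitrary (possibly access-layer) switch from winning the root election by default.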
Question 49
Which type of NAT translates private IP addresses to a single public IP address for outbound traffic?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Static NAT maps one private IP to one public IP permanently, suitable for servers, but does not allow multiple hosts to share a single public IP. Dynamic NAT maps private IPs to a pool of public IPs, providing a one-to-one translation without sharing a single address. NAT64 translates IPv6 addresses to IPv4 addresses for IPv6/IPv4 interoperability. PAT, or Port Address Translation, also known as NAT overload, allows multiple private IP addresses to share a single public IP address by differentiating sessions using unique port numbers. This enables many hosts to access the Internet simultaneously using one public IP, conserving IP address space. PAT translates the private source IP and port into the public IP and a unique port for each session, maintaining connection tracking. This is widely used in enterprise networks and home routers where public IP addresses are limited. Therefore, the correct answer is PAT because it translates multiple private IP addresses to a single public IP for outbound traffic, conserving address space while enabling Internet connectivity for multiple hosts.
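A minimal PAT (NAT overload) sketch on Cisco IOS; interface names and addressing are illustrative:

```
! Mark the inside and outside interfaces
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet0/1
 ip address 203.0.113.2 255.255.255.0
 ip nat outside
!
! Match the internal subnet and overload it onto the outside interface IP
access-list 1 permit 192.168.1.0 0.0.0.255
ip nat inside source list 1 interface GigabitEthernet0/1 overload
```

The `overload` keyword is what differentiates PAT from dynamic NAT: many inside hosts share 203.0.113.2, distinguished by translated source ports.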
Question 50
Which feature provides secure management access to network devices using encryption?
A) Telnet
B) SSH
C) HTTP
D) FTP
Answer: B) SSH
Explanation:
Telnet provides remote device management but transmits credentials in plaintext, making it insecure. HTTP allows web-based management but is not encrypted and is susceptible to interception. FTP transfers files over the network without encryption, exposing credentials and data. SSH, or Secure Shell, provides encrypted management access to network devices, ensuring the confidentiality and integrity of login credentials and administrative commands. SSH uses public-key cryptography for secure authentication and encrypts all traffic between the client and device. This prevents eavesdropping, man-in-the-middle attacks, and unauthorised access. Enterprise networks use SSH instead of Telnet to securely configure and manage routers, switches, and firewalls. SSH supports secure remote access, file transfer, and tunnelling, making it essential for modern network administration. Therefore, the correct answer is SSH because it provides encrypted management access to network devices, ensuring secure communication and preventing credential compromise.
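An illustrative SSH hardening sequence on Cisco IOS; the hostname, domain name, and credentials are placeholders:

```
! RSA key generation requires a hostname and domain name
hostname R1
ip domain-name example.com
crypto key generate rsa modulus 2048
ip ssh version 2
!
username admin privilege 15 secret StrongPass123
!
! Restrict the VTY lines to SSH only, authenticated against local users
line vty 0 4
 transport input ssh
 login local
```

Setting `transport input ssh` disables Telnet on the VTY lines entirely, removing the plaintext management path.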
Question 51
Which protocol allows routers to discover neighbours and exchange information about directly connected devices?
A) CDP
B) LLDP
C) OSPF
D) EIGRP
Answer: A) CDP
Explanation:
LLDP, or Link Layer Discovery Protocol, is a standards-based protocol similar to CDP but not specific to Cisco devices. OSPF is a link-state routing protocol used for learning routes within an autonomous system, not for neighbour discovery at Layer 2. EIGRP is a hybrid routing protocol designed for route learning and optimal path selection, but does not provide physical neighbour information. CDP, or Cisco Discovery Protocol, is a Layer 2 proprietary protocol that allows Cisco devices to discover directly connected neighbours and share information such as device type, model, IP address, interface identifiers, and software version. CDP packets are sent periodically to multicast addresses to provide continuous updates.
Network administrators use CDP to map network topology, troubleshoot connectivity issues, verify cable connections, and detect misconfigurations. Because CDP operates at Layer 2, it functions even if Layer 3 configurations are incomplete, making it useful during deployment or troubleshooting. CDP supports querying neighbour information using commands like show cdp neighbors and show cdp entry to obtain detailed device information. The information gathered helps administrators identify devices, plan configurations, and detect unexpected network devices. CDP does not forward data traffic or influence routing; its purpose is solely discovery and monitoring. In enterprise networks, CDP is critical for maintaining visibility into the physical and logical network infrastructure, particularly in environments with multiple switches, routers, and other network devices. Therefore, the correct answer is CDP because it allows routers and switches to discover directly connected neighbours and exchange detailed device information, aiding in management, troubleshooting, and network documentation.
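The discovery commands mentioned above can be sketched as follows; the neighbour name SW2 is a hypothetical example:

```
show cdp neighbors            ! one-line summary per directly connected device
show cdp neighbors detail     ! adds IP address, platform, and software version
show cdp entry SW2            ! full details for one named neighbour (name assumed)
```

Because these outputs include the local and remote interface identifiers, they are a quick way to confirm that cabling matches the network documentation.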
Question 52
Which protocol is primarily used to provide secure authentication, authorisation, and accounting in network devices?
A) SNMP
B) TACACS+
C) FTP
D) ICMP
Answer: B) TACACS+
Explanation:
In modern enterprise networks, controlling and monitoring access to critical network devices such as routers, switches, and firewalls is essential for maintaining security, operational efficiency, and regulatory compliance. Various protocols exist for managing network devices, but not all provide the comprehensive security and centralised control required for enterprise environments. Among these, SNMP, FTP, and ICMP have specific roles but are limited in terms of authentication, authorisation, and accounting. TACACS+, or Terminal Access Controller Access Control System Plus, is a Cisco proprietary protocol that addresses these limitations by offering centralised, secure management of AAA functions—authentication, authorisation, and accounting. Understanding the distinctions between these protocols highlights why TACACS+ is the preferred solution for enterprise network security.
SNMP, or Simple Network Management Protocol, is primarily designed for monitoring and managing network devices. It allows administrators to query device status, monitor performance metrics, and receive alerts about network conditions. While SNMP is valuable for observing device operation and network health, it does not provide robust authentication or authorisation controls. Users can potentially access device data without verification of identity, and SNMP lacks comprehensive mechanisms to enforce user-specific command privileges or maintain detailed audit trails, making it unsuitable for enforcing secure administrative policies.
FTP, or File Transfer Protocol, is widely used for transferring files between devices. Administrators may use FTP to upload configuration files or software updates to network devices. However, FTP lacks secure authentication and encryption, exposing usernames, passwords, and file contents to interception. Additionally, FTP does not manage user permissions or track administrative activity, meaning it cannot enforce centralised AAA policies or record accounting information for auditing purposes. Consequently, FTP is limited to file transfer operations rather than secure device management.
ICMP, the Internet Control Message Protocol, is used for basic network diagnostics such as ping and traceroute. It provides valuable information about network reachability, latency, and path selection but offers no mechanism for controlling access, defining command privileges, or logging administrative activity. ICMP is purely a diagnostic tool and cannot enforce security policies or provide accountability for users interacting with network devices.
TACACS+ fills these gaps by providing centralised AAA services tailored for enterprise network administration. It separates the functions of authentication, authorisation, and accounting, allowing administrators to apply granular control over device access and actions. Authentication ensures that users are verified before accessing network devices, typically using credentials stored on a central TACACS+ server. Authorisation determines which commands or operations each authenticated user can execute, enabling role-based access control and minimising the risk of accidental or malicious configuration changes. Accounting tracks all user activity, recording commands executed and sessions initiated, which supports auditing, compliance reporting, and forensic analysis in case of incidents.
One of the key advantages of TACACS+ is security. Unlike some other protocols, it encrypts the entire payload, including usernames, passwords, and commands, preventing sensitive information from being exposed over the network. By integrating with central servers, TACACS+ enables consistent policy enforcement across multiple devices, ensuring that administrators adhere to organisational access controls regardless of the device being managed. It scales effectively for large networks, providing centralised administration for numerous routers, switches, firewalls, and other critical devices.
TACACS+ is widely deployed in enterprise environments to secure remote administrative access, enforce standardised security policies, and maintain accountability. By centralising AAA management, organisations can control who can access devices, define what actions they can perform, and maintain comprehensive logs of all activities. This reduces security risks, ensures compliance with internal and external regulations, and provides network administrators with detailed insights into operational activity.
While SNMP monitors network devices, FTP facilitates file transfers, and ICMP diagnoses network connectivity, only TACACS+ provides secure, centralised management of authentication, authorisation, and accounting. Its ability to encrypt communications, enforce role-based access, and maintain comprehensive activity logs makes it an essential protocol for protecting critical network infrastructure and ensuring compliance. Therefore, TACACS+ is the correct choice for enterprise AAA management, supporting security, accountability, and operational efficiency across the network.
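A hedged sketch of pointing a Cisco IOS device at a TACACS+ server for AAA; the server name, address, and shared key are assumptions:

```
aaa new-model
!
! Define the TACACS+ server (modern named-server syntax)
tacacs server ISE-1
 address ipv4 10.0.0.50
 key SharedSecret
!
! Authenticate and authorise against TACACS+, falling back to local users
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local
!
! Record every privilege-15 command for auditing
aaa accounting commands 15 default start-stop group tacacs+
```

The `local` fallback keyword is a common safeguard so administrators are not locked out if the TACACS+ server is unreachable.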
Question 53
Which feature allows multiple physical links to be combined into a single logical link to provide redundancy and higher bandwidth?
A) HSRP
B) EtherChannel
C) STP
D) VTP
Answer: B) EtherChannel
Explanation:
In enterprise networking, ensuring both high availability and optimal bandwidth between network devices is essential. Physical links between switches or routers have limited capacity, and relying on a single connection can create bottlenecks and single points of failure. Cisco offers several technologies to improve network performance and resilience, but EtherChannel is specifically designed to aggregate multiple physical links into a single logical interface, providing both increased bandwidth and redundancy.
Hot Standby Router Protocol, or HSRP, is widely used to provide gateway redundancy by allowing multiple routers to share a virtual IP address. This ensures that if the primary router fails, a secondary router takes over seamlessly. While HSRP enhances Layer 3 reliability, it does not combine multiple physical links for higher throughput, and its purpose is distinct from link aggregation.
Spanning Tree Protocol, or STP, prevents switching loops in Layer 2 networks by selectively blocking redundant paths. STP is crucial for network stability, but it does not provide mechanisms for bandwidth aggregation or link consolidation. Redundant links are kept blocked to prevent loops, so STP alone cannot increase the effective throughput between devices.
VLAN Trunking Protocol, or VTP, simplifies VLAN management by propagating VLAN configuration information across multiple switches. VTP ensures VLAN consistency and reduces administrative overhead, but it does not merge multiple physical connections into a single logical interface, nor does it provide load balancing or redundancy at the link level.
EtherChannel addresses these limitations by allowing multiple physical Ethernet links to be bundled into one logical interface. This logical link functions as a single connection for configuration and management purposes, reducing administrative complexity while significantly increasing effective bandwidth. Traffic is distributed across member links according to load-balancing algorithms, which may consider factors such as source and destination IP addresses, MAC addresses, or Layer 4 ports. If a physical link within the bundle fails, the remaining links continue to carry traffic, ensuring uninterrupted communication and improving overall network reliability.
EtherChannel can operate at both Layer 2 and Layer 3. At Layer 2, it can carry VLAN traffic between switches, effectively combining bandwidth while maintaining logical VLAN separation. At Layer 3, it can be used for routed interfaces, providing redundancy and load sharing between routers or Layer 3 switches. This flexibility makes EtherChannel ideal for connecting access and distribution switches, interconnecting core switches, or linking data centre devices, all while ensuring high availability and improved throughput.
By aggregating multiple physical links into a single logical interface, EtherChannel simplifies network configuration and management, enhances bandwidth utilisation, and provides fault tolerance. It is widely adopted in enterprise networks where reliability, performance, and efficient management are critical.
While HSRP, STP, and VTP each provide essential functions such as gateway redundancy, loop prevention, and VLAN consistency, EtherChannel uniquely combines multiple physical connections into a single logical interface. It delivers increased bandwidth, redundancy, simplified management, and improved network reliability, making it the preferred solution for link aggregation in modern enterprise environments.
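As a minimal sketch, a two-link LACP EtherChannel between switches might be configured as follows; the port range and channel number are illustrative:

```
! Bundle two physical ports into one logical link using LACP
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active
!
! Configure the bundle once, on the logical interface
interface Port-channel1
 switchport mode trunk
!
! Verify that both members are bundled (flag "P")
show etherchannel summary
```

Member-link settings such as speed, duplex, and trunking must match on both ends, or the bundle will not form.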
Question 54
Which IPv6 address type is automatically configured for communication only within a local link and is mandatory on all IPv6 interfaces?
A) Global unicast
B) Link-local
C) Multicast
D) Anycast
Answer: B) Link-local
Explanation:
In IPv6 networking, various address types serve different purposes, and understanding their functions is essential for effective network design and operation. IPv6 addresses include global unicast, multicast, anycast, and link-local addresses, each with distinct characteristics and use cases. Among these, link-local addresses are unique because they are automatically generated for every IPv6-enabled interface and are critical for local communication and core network operations.
Global unicast addresses are the IPv6 equivalent of public IP addresses in IPv4. They are globally routable and can be used for communication over the Internet or between remote networks. While global unicast addresses are essential for external connectivity, they are not automatically configured and require manual or dynamic assignment through mechanisms such as DHCPv6 or SLAAC (Stateless Address Autoconfiguration).
Multicast addresses, on the other hand, are used for one-to-many communication. A single packet sent to a multicast address is delivered to all devices that are members of the corresponding multicast group. Multicast is commonly used for applications such as streaming media, network announcements, and routing protocol updates. However, multicast addresses are not intended for mandatory local link communication and are not automatically assigned to interfaces.
Anycast addresses are a special type of IPv6 address that can be assigned to multiple devices, typically in different locations. When a packet is sent to an anycast address, the network routes it to the nearest device based on routing metrics such as path cost or distance. Anycast is useful for distributed services such as DNS, content delivery, and load balancing. However, anycast addresses must be manually configured, and they are not automatically generated on interfaces.
Link-local addresses, in contrast, are automatically created on every IPv6-enabled interface and are specifically intended for communication within a single network segment or local link. They are non-routable, meaning packets sent to a link-local address cannot traverse routers to reach other subnets, which helps prevent accidental exposure of local traffic. Link-local addresses are essential for foundational IPv6 operations, including neighbour discovery, router discovery, and routing protocol communications such as OSPFv3 or EIGRP for IPv6. These addresses ensure that devices can communicate with directly connected neighbours even in the absence of globally routable addresses.
Typically, link-local addresses are generated using a modified EUI-64 format derived from the interface’s MAC address, though they can also be configured manually. Routers and hosts rely on link-local addresses to exchange routing information, perform automatic neighbour discovery, and maintain the functionality of the IPv6 protocol stack. The automatic assignment of link-local addresses ensures that every IPv6 interface is immediately operational for local communications, providing a reliable foundation for both basic and advanced network functions.
While global unicast, multicast, and anycast addresses serve important purposes in IPv6 networks, link-local addresses are unique in their automatic assignment and critical role in local link communication. They are essential for IPv6 operations, including routing protocol exchanges and neighbour discovery, and provide a consistent, non-routable address for each interface. Therefore, link-local addresses are the correct choice for mandatory, automatically assigned local communication in IPv6 networks, forming a vital component of IPv6 functionality.
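On Cisco IOS, simply enabling IPv6 on an interface generates the link-local address described above; the manual address below is an illustrative override:

```
interface GigabitEthernet0/0
 ! Auto-generates an EUI-64 link-local address in FE80::/10
 ipv6 enable
 ! Optionally assign a predictable link-local address instead
 ipv6 address FE80::1 link-local
!
! Confirm the link-local address and joined multicast groups
show ipv6 interface GigabitEthernet0/0
```

Administrators often set memorable link-local addresses on routers because IPv6 hosts and routing protocols use them as next-hop addresses.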
Question 55
Which protocol allows a Layer 3 device to determine the optimal path to remote networks based on cost and shortest path calculation?
A) RIP
B) OSPF
C) EIGRP
D) BGP
Answer: B) OSPF
Explanation:
In IP networking, choosing the optimal path for data packets is a fundamental responsibility of Layer 3 devices. Various routing protocols are available to manage this task, each with its own approach to determining the best path. Understanding the strengths and limitations of these protocols is crucial for designing efficient and reliable networks. Among the commonly used routing protocols are RIP, EIGRP, BGP, and OSPF, but only one of these protocols combines link-state awareness, shortest-path calculation, and scalable hierarchical design: OSPF.
Routing Information Protocol, or RIP, is one of the oldest distance-vector routing protocols. It determines the best path to a destination network based solely on hop count, with a maximum of 15 hops allowed. While simple to configure and understand, RIP’s reliance on hop count ignores other critical performance factors such as link bandwidth, delay, or reliability. This limitation can result in suboptimal routing decisions, slower convergence during network changes, and potential routing loops in complex topologies.
Enhanced Interior Gateway Routing Protocol, or EIGRP, is a Cisco proprietary hybrid routing protocol that uses a composite metric incorporating bandwidth, delay, load, and reliability. EIGRP is more sophisticated than RIP and can provide more efficient path selection. However, it is not a purely link-state protocol; it combines aspects of distance-vector and link-state behaviour. EIGRP maintains neighbour tables and uses the Diffusing Update Algorithm (DUAL) to calculate loop-free paths, but it does not create a complete, network-wide map of topology as a true link-state protocol does.
Border Gateway Protocol, or BGP, operates at a different scope. It is a path-vector protocol used primarily for routing between autonomous systems on the Internet. BGP selects routes based on policy decisions, attributes, and AS path information rather than strict link costs or shortest-path calculations. While essential for inter-domain routing, BGP is not designed for optimal path selection within a single autonomous system and does not provide the fast convergence and network-wide topology awareness that link-state protocols offer.
Open Shortest Path First, or OSPF, is a true link-state routing protocol designed to provide efficient and reliable Layer 3 routing within an autonomous system. Each OSPF router discovers the state of its directly connected links and advertises this information using Link-State Advertisements (LSAs) to all routers within its area. By collecting these LSAs, every router builds a complete and identical map of the network topology. Using the Dijkstra shortest-path algorithm, each router calculates the optimal path to every reachable network based on link costs, which can be configured according to bandwidth, delay, or administrative preferences.
OSPF also supports hierarchical network design through the use of areas, which reduces routing overhead, enhances scalability, and limits the impact of topology changes. When network changes occur, OSPF rapidly converges, recalculating paths without creating routing loops. This ensures that traffic always follows the most efficient routes, maintaining performance and reliability across the network.
While RIP, EIGRP, and BGP each have specific use cases, OSPF is uniquely suited for dynamically determining the optimal path within an autonomous system. Its link-state approach, cost-based shortest-path calculations, hierarchical design, and fast convergence make it the ideal choice for efficient, loop-free routing in modern enterprise networks. OSPF enables Layer 3 devices to maintain an accurate view of the network, ensuring reliable and performance-optimised routing to all destinations.
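A minimal single-area OSPF sketch on Cisco IOS; the router ID, network statement, and cost value are assumptions for illustration:

```
router ospf 1
 router-id 1.1.1.1
 ! Enable OSPF on interfaces in 10.0.0.0/24, placing them in the backbone area
 network 10.0.0.0 0.0.0.255 area 0
!
! Cost is derived from interface bandwidth by default, but can be set explicitly
interface GigabitEthernet0/0
 ip ospf cost 10
!
! Verify adjacencies reached the FULL state
show ip ospf neighbor
```

Tuning `ip ospf cost` directly influences the Dijkstra shortest-path calculation, steering traffic toward preferred links.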
Question 56
Which feature allows multiple switches to operate as a single logical switch for simplified management and unified configuration?
A) EtherChannel
B) StackWise
C) HSRP
D) VTP
Answer: B) StackWise
Explanation:
In enterprise networking, managing multiple switches efficiently while maintaining redundancy and high availability is a common challenge. Networks often include numerous access switches to connect end devices, and without centralised management, configuration, monitoring, and troubleshooting can become complex and error-prone. Several technologies offer partial solutions to network scalability and redundancy, but only one Cisco technology allows multiple physical switches to operate as a single logical unit: StackWise.
EtherChannel is a widely used method for increasing bandwidth and providing redundancy by aggregating multiple physical links into a single logical connection. While EtherChannel enhances throughput and provides failover between the links, it does not consolidate multiple switches under a unified management plane. Each switch remains individually managed, and administrative tasks must still be performed on each device separately.
Hot Standby Router Protocol, or HSRP, addresses gateway redundancy at Layer 3. HSRP allows multiple routers to provide a single virtual IP as the default gateway for hosts, ensuring that if the active router fails, another router takes over seamlessly. However, HSRP does not combine switches into a single management unit, nor does it simplify overall switch administration.
VLAN Trunking Protocol, or VTP, facilitates the distribution of VLAN configuration information across multiple switches. VTP helps maintain VLAN consistency and reduces the need to configure VLANs individually on each switch. While VTP improves VLAN management, it does not merge switches into one logical device, and each switch still requires independent management for most administrative tasks.
StackWise, in contrast, is a Cisco technology designed to interconnect multiple physical switches into a single logical switch, providing a unified management interface and operational control. In a StackWise configuration, one switch acts as the master and controls the stack’s control plane operations, while the remaining switches serve as member units. This arrangement enables centralised monitoring, configuration, and policy enforcement across all stack members. The stack behaves as a single Layer 2 switch, with consolidated spanning-tree calculations and seamless VLAN propagation.
One of the key advantages of StackWise is its flexibility. Ports from any member switch in the stack can be used interchangeably, allowing network designers to deploy devices without worrying about physical switch boundaries. Additionally, StackWise provides redundancy: if a switch in the stack fails, the remaining switches continue to forward traffic, maintaining high availability. This feature ensures network resilience in environments where downtime is unacceptable.
In enterprise environments with multiple access switches, StackWise significantly reduces operational complexity. Administrators can manage an entire stack as one device, simplifying configuration, firmware updates, and troubleshooting. Policies applied to the stack are automatically propagated to all member switches, ensuring consistent settings across the network and reducing the risk of misconfiguration.
While EtherChannel, HSRP, and VTP provide bandwidth, gateway redundancy, and VLAN consistency, respectively, they do not consolidate multiple switches into a single manageable entity. StackWise uniquely enables multiple physical switches to function as a single logical switch, improving manageability, redundancy, and operational efficiency. By simplifying network administration while maintaining high availability, StackWise is the ideal solution for enterprise networks requiring scalable and resilient access-layer deployments.
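The stack behaviour described above can be verified and influenced with a few commands; output and member numbers are illustrative:

```
show switch                   ! lists members, roles (master/member), and state
switch 1 priority 15          ! highest priority (1-15) wins the master election
show switch stack-ports       ! confirms the stack cabling is up on both rings
```

Setting an explicit priority on the intended master makes stack elections deterministic after a reload.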
Question 57
Which protocol assigns IP addresses and network configuration parameters dynamically to hosts?
A) DNS
B) DHCP
C) ARP
D) NTP
Answer: B) DHCP
Explanation:
In modern networks, efficient management of IP addresses and related configuration parameters is essential for smooth operation and scalability. Several protocols operate at different layers to support networking tasks, but only one protocol is specifically designed to automatically assign IP addresses and provide configuration information to hosts: Dynamic Host Configuration Protocol, or DHCP.
The Domain Name System, or DNS, is often associated with network communication, but it serves a different purpose. DNS resolves human-readable domain names, such as www.example.com, into IP addresses that computers use to communicate. While DNS is essential for name resolution and enabling users to access websites and services without remembering numerical IP addresses, it does not assign IP addresses to hosts or configure their network settings.
Address Resolution Protocol, or ARP, is another important network protocol, but it operates at a lower layer. ARP maps IP addresses to MAC addresses, allowing devices on a local network segment to communicate directly with one another at Layer 2. While ARP is critical for local traffic delivery, it does not provide IP addresses, default gateways, or DNS server information to hosts. Its role is limited to resolving addresses for direct frame delivery.
Network Time Protocol, or NTP, is also unrelated to IP address assignment. NTP synchronises the clocks of networked devices to ensure consistent time across servers, routers, and clients. Accurate timekeeping is essential for logging, security, and application coordination, but NTP does not provide any configuration related to IP addressing or connectivity.
DHCP, in contrast, is specifically designed to automate IP address assignment and related configuration. When a device joins a network, it typically does not have a preconfigured IP address. The host sends a DHCP Discover message as a broadcast to locate available DHCP servers. The server responds with a DHCP Offer, proposing an IP address along with additional information such as subnet mask, default gateway, and DNS servers. The host then sends a DHCP Request to confirm the offered address, and the server completes the process with a DHCP Acknowledgment. This lease-based assignment ensures that IP addresses are allocated efficiently and can be reused when devices leave the network or their leases expire.
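The Discover/Offer/Request/Acknowledgment exchange and lease reuse described above can be sketched as a toy state machine. This is a minimal illustration only, not a real DHCP implementation; the pool addresses, option values, and MAC address are invented:

```python
# Toy sketch of the DHCP DORA exchange and lease reuse. Not a real DHCP
# implementation; addresses and option values are illustrative.

class DhcpServer:
    def __init__(self, pool, gateway, dns):
        self.free = list(pool)          # addresses available for lease
        self.leases = {}                # client MAC -> leased IP
        self.gateway, self.dns = gateway, dns

    def offer(self, mac):
        """DISCOVER -> OFFER: propose an address plus config options."""
        ip = self.leases.get(mac) or self.free[0]
        return {"ip": ip, "mask": "255.255.255.0",
                "gateway": self.gateway, "dns": self.dns}

    def ack(self, mac, requested_ip):
        """REQUEST -> ACK: commit the lease and remove it from the pool."""
        if requested_ip in self.free:
            self.free.remove(requested_ip)
        self.leases[mac] = requested_ip
        return True

    def release(self, mac):
        """Lease release or expiry returns the address for reuse."""
        self.free.append(self.leases.pop(mac))

server = DhcpServer(["192.0.2.10", "192.0.2.11"], "192.0.2.1", "192.0.2.53")
offer = server.offer("aa:bb:cc:00:00:01")      # DISCOVER / OFFER
server.ack("aa:bb:cc:00:00:01", offer["ip"])   # REQUEST / ACK
print(server.leases)  # {'aa:bb:cc:00:00:01': '192.0.2.10'}
server.release("aa:bb:cc:00:00:01")            # address is reusable again
```

The key point the sketch captures is that the offer carries more than just an address (mask, gateway, DNS), and that a released or expired lease goes back into the pool for other hosts.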
DHCP provides significant operational advantages. It reduces administrative overhead by centralising IP address management, preventing duplicate assignments, and ensuring consistent configuration across all hosts. In enterprise environments, DHCP enables rapid deployment of new devices without manual configuration, minimising human error and supporting network expansion. It also allows dynamic reassignment of addresses, which is particularly valuable in networks with high device turnover or mobile clients.
While DNS, ARP, and NTP each serve essential functions in network operation, DHCP is uniquely responsible for automating IP address assignment and delivering critical network configuration parameters. By using DHCP, administrators can ensure efficient, consistent, and scalable management of IP addresses, simplify network operations, and support the dynamic needs of modern enterprise networks. Therefore, DHCP is the correct protocol for automated address allocation and host configuration.
Question 58
Which Layer 2 protocol prevents switching loops in networks with redundant links?
A) CDP
B) STP
C) VTP
D) DTP
Answer: B) STP
Explanation:
CDP, Cisco Discovery Protocol, is used for discovering neighbouring devices and sharing device information, but does not prevent loops. VTP, VLAN Trunking Protocol, distributes VLAN configuration information but does not control Layer 2 loops. DTP, Dynamic Trunking Protocol, negotiates trunk links but does not manage redundancy. STP, or Spanning Tree Protocol, prevents Layer 2 loops in networks with redundant links. When multiple paths exist between switches, loops can cause broadcast storms, MAC table instability, and network outages. STP elects a root bridge, the switch with the lowest bridge ID, then assigns port roles (root and designated) and places redundant ports in a blocking state to maintain a loop-free topology. If a link fails, STP recalculates the topology to restore connectivity. Rapid STP (RSTP) provides faster convergence and reduces downtime during network changes. STP ensures redundancy while preventing loops, making it essential in enterprise networks with redundant Layer 2 paths. Therefore, the correct answer is STP because it maintains a loop-free Layer 2 topology while allowing redundant paths for fault tolerance.
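The root bridge election mentioned above compares bridge IDs, which consist of a configurable priority followed by the switch MAC address. A rough sketch of that comparison, with invented priorities and MAC addresses:

```python
# Sketch of STP root-bridge election: the lowest bridge ID wins, where
# the bridge ID is compared as (priority, MAC address). All values here
# are invented for illustration.

switches = [
    {"name": "SW1", "priority": 32768, "mac": "00:1a:00:00:00:03"},
    {"name": "SW2", "priority": 4096,  "mac": "00:1a:00:00:00:09"},
    {"name": "SW3", "priority": 32768, "mac": "00:1a:00:00:00:01"},
]

def root_bridge(candidates):
    # Lower priority wins outright; the MAC address breaks ties.
    return min(candidates, key=lambda s: (s["priority"], s["mac"]))

print(root_bridge(switches)["name"])  # SW2: lowest priority wins
```

This is why administrators deliberately lower the priority on the intended root switch: without that, the election falls back to the tie-breaker and the oldest (lowest) MAC address wins, which may be an under-powered access switch.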
Question 59
Which IPv6 address type allows delivery of a packet to the nearest device among multiple devices sharing the same address?
A) Unicast
B) Multicast
C) Anycast
D) Link-local
Answer: C) Anycast
Explanation:
In networking, understanding the differences between various IP address types is crucial for designing efficient, scalable, and resilient systems. IPv6, in particular, introduces several address types, including unicast, multicast, link-local, and anycast, each serving distinct purposes. Among these, anycast addresses are uniquely designed to deliver traffic to the nearest device in a group, optimising performance, redundancy, and resource utilisation.
Unicast addresses in IPv6 are the most common type and are used for one-to-one communication. When a packet is sent to a unicast address, it is delivered directly to the single interface that owns that address. This addressing method ensures a predictable and direct path for communication but does not provide proximity-based routing or redundancy beyond what is available in the underlying network infrastructure. Unicast addresses are critical for standard client-server interactions, point-to-point communications, and scenarios where each host requires a unique address.
Multicast addresses, in contrast, are designed for one-to-many communication. When a packet is sent to a multicast address, all devices that are members of the corresponding multicast group receive the packet. This approach is useful for applications such as streaming media, group messaging, and routing protocol updates. However, multicast addresses do not provide delivery to the closest device among multiple recipients; they simply deliver packets to every member of the group, regardless of physical or network proximity.
Link-local addresses are automatically assigned to every IPv6-enabled interface and are used for communication within a single subnet or link. These addresses are essential for local network operations, such as neighbour discovery, router discovery, and routing protocol exchanges like OSPFv3 or EIGRP for IPv6. While link-local addresses provide foundational connectivity on the local link, they are non-routable and cannot be used for communication beyond the immediate subnet.
Anycast addresses offer a unique functionality that differentiates them from unicast, multicast, and link-local addresses. In an anycast configuration, the same IPv6 address is assigned to multiple devices, often located in different geographical or network locations. When a packet is sent to an anycast address, the network uses routing metrics such as path cost, distance, or administrative preference to deliver the packet to the “nearest” device in terms of network topology. This ensures that clients receive faster responses, reduces latency, and balances network load across multiple servers.
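The "nearest device" selection described above is simply ordinary routing: each router forwards toward whichever instance of the shared address it can reach at the best metric. A minimal sketch, with hypothetical instance names and invented path costs:

```python
# Sketch of anycast delivery: several devices share one address, and a
# router forwards to the instance with the best routing metric. Instance
# names and path costs below are invented.

anycast_instances = {
    "dns-london":  12,   # path cost from this router to each instance
    "dns-newyork": 4,
    "dns-tokyo":   30,
}

def nearest(instances):
    # The "nearest" device is the one with the lowest metric.
    return min(instances, key=instances.get)

print(nearest(anycast_instances))  # dns-newyork

# Failover: if the nearest instance withdraws its route, traffic
# automatically shifts to the next-best instance.
del anycast_instances["dns-newyork"]
print(nearest(anycast_instances))  # dns-london
```

The second call illustrates the redundancy benefit discussed below: no client reconfiguration is needed when a node fails, because the address itself never changes.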
Anycast is widely deployed in high-availability services such as DNS resolution, content delivery networks, and global load balancing. By directing traffic to the closest or best-performing node, anycast enhances user experience while providing redundancy; if one device becomes unavailable, traffic is automatically routed to the next nearest node without requiring manual intervention. This feature makes anycast an efficient mechanism for distributing workloads across multiple servers while maintaining a single logical address for clients to reach.
While unicast, multicast, and link-local addresses serve specific purposes, anycast is unique in its ability to provide proximity-based routing to multiple devices sharing the same address. It improves performance, supports high availability, optimises resource utilisation, and simplifies network architecture. Therefore, anycast is the correct answer because it delivers packets to the nearest of multiple devices sharing the same IPv6 address, offering efficient, resilient, and scalable communication in modern networks.
Question 60
Which NAT mechanism allows multiple private IP addresses to share a single public IP for Internet access?
A) Static NAT
B) Dynamic NAT
C) PAT
D) NAT64
Answer: C) PAT
Explanation:
Network Address Translation (NAT) is a crucial technology in IP networking that allows private IP addresses to communicate with external networks, typically the Internet. NAT provides several methods for translating internal addresses to external addresses, each with specific use cases and limitations. Understanding these different types of NAT is essential for designing efficient and scalable networks.
Static NAT is one of the simplest forms of address translation. It creates a permanent, one-to-one mapping between a private IP address and a public IP address. This method is commonly used for servers that need to be accessible from the Internet, such as web servers, mail servers, or VPN gateways. Each internal server is assigned a unique public IP address, ensuring predictable reachability. However, static NAT is not suitable for scenarios where multiple devices need to share a single public IP, as each internal host requires a dedicated public address.
Dynamic NAT offers a slightly more flexible approach. It maps private IP addresses to a pool of available public IPs on a first-come, first-served basis. While dynamic NAT allows temporary address assignment and supports multiple hosts, it still requires a one-to-one relationship between private and public IPs at any given time. Once the pool of public addresses is exhausted, additional hosts cannot access external networks until an address becomes available.
NAT64 is a specialised form of NAT designed for IPv6 and IPv4 interoperability. It translates between IPv6 and IPv4 addresses, enabling IPv6-only devices to communicate with IPv4-only systems. While essential in IPv6 transition environments, NAT64 does not address the problem of allowing multiple IPv4 devices to share a single public IP in IPv4-only networks.
Port Address Translation (PAT), also known as NAT overload, is the most widely used NAT variant for conserving public IP addresses. PAT allows multiple private IP addresses to share a single public IP by using unique port numbers to differentiate individual sessions. When an internal device initiates a connection to an external network, PAT translates the source IP address and port into the public IP address and a unique port number. Responses from external hosts are routed back correctly using the combination of IP address and port, allowing multiple simultaneous connections from different devices without requiring multiple public IPs.
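The per-session port mapping described above can be sketched as a translation table. This is a simplified model only, ignoring protocols, timeouts, and port exhaustion; the addresses and ports are illustrative:

```python
# Sketch of a PAT (NAT overload) translation table: many private
# (ip, port) pairs share one public IP, distinguished on the public
# side by unique port numbers. Addresses and ports are illustrative.

import itertools

PUBLIC_IP = "203.0.113.1"

class PatTable:
    def __init__(self):
        self.next_port = itertools.count(1024)  # next free public port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Outbound packet: map the private socket to a public port."""
        key = (private_ip, private_port)
        if key not in self.out:
            port = next(self.next_port)
            self.out[key] = port
            self.back[port] = key
        return (PUBLIC_IP, self.out[key])

    def translate_in(self, public_port):
        """Return traffic: the public port identifies the inside host."""
        return self.back[public_port]

pat = PatTable()
a = pat.translate_out("10.0.0.5", 51000)  # ('203.0.113.1', 1024)
b = pat.translate_out("10.0.0.6", 51000)  # same public IP, new port
print(a, b, pat.translate_in(b[1]))
```

Note that both hosts happen to use the same private source port; it is the unique public-side port, not the private port, that lets the device deliver return traffic to the correct inside host.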
PAT is especially valuable in enterprise networks and home environments where public IP addresses are limited. By mapping numerous private IPs to a single public address, PAT conserves scarce IPv4 address space while ensuring reliable Internet connectivity for all internal hosts. This mechanism also supports session tracking, which allows the NAT device to maintain state information and correctly forward incoming traffic to the appropriate internal host.
While static NAT, dynamic NAT, and NAT64 each serve specific purposes, they are limited in scenarios where many devices need to access external networks simultaneously using a single public IP. PAT addresses this limitation by enabling multiple private IP addresses to share a single public IP through port differentiation. Its efficiency, address conservation, and reliable session management make PAT the preferred solution for enabling large numbers of internal hosts to access the Internet while minimising public IP usage. Therefore, the correct answer is PAT because it allows many private hosts to share one public IP address through unique port mappings.