CompTIA N10-009 Network+ Exam Dumps and Practice Test Questions Set 3 Q31-45
Question 31
Which protocol is used to encrypt and secure VPN traffic at the IP layer?
A) SSL
B) IPSec
C) PPTP
D) L2TP
Answer: B) IPSec
Explanation:
IPSec, or Internet Protocol Security, is a framework of protocols used to encrypt and secure IP traffic across networks, most commonly in VPNs. It operates at Layer 3 of the OSI model, providing end-to-end security for IP packets. IPSec supports multiple security mechanisms, including Authentication Header (AH) for integrity and data origin authentication, and Encapsulating Security Payload (ESP) for confidentiality through encryption. It can be deployed in two modes: transport mode, which encrypts only the payload of a packet, leaving the header intact for routing, and tunnel mode, which encrypts the entire IP packet and encapsulates it within a new packet with a different header, allowing secure communication between networks over an untrusted network like the Internet.
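The difference between the two modes comes down to encapsulation order, which can be sketched in a few lines. This is purely illustrative: the `encrypt` placeholder and literal marker strings stand in for real ESP processing under a negotiated security association, and the function names are invented for this example.

```python
def encrypt(data: bytes) -> bytes:
    # Placeholder for AES under the negotiated SA; here the data is only marked.
    return b"<enc:" + data + b">"

def esp_transport(ip_header: bytes, payload: bytes) -> bytes:
    """Transport mode: the original IP header stays outside for routing;
    only the payload is wrapped in ESP."""
    return ip_header + b"[ESP]" + encrypt(payload) + b"[trailer+ICV]"

def esp_tunnel(ip_header: bytes, payload: bytes, outer_header: bytes) -> bytes:
    """Tunnel mode: the entire original packet is encrypted and hidden
    behind a new outer IP header."""
    return outer_header + b"[ESP]" + encrypt(ip_header + payload) + b"[trailer+ICV]"
```

In transport mode an eavesdropper still sees the real endpoints in the outer header; in tunnel mode only the gateway addresses in the new header are visible.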
SSL, or Secure Sockets Layer (along with its successor TLS), operates above the transport layer and is used for securing web traffic and some VPN implementations, but it does not provide IP-layer encryption across all types of network traffic. SSL/TLS is commonly used in HTTPS and secure web applications but cannot encrypt non-TCP traffic or entire IP packets in the way IPSec does.
PPTP, or Point-to-Point Tunneling Protocol, is an older VPN protocol that uses GRE tunneling and relies on weak authentication mechanisms. While it provides some encryption, it is considered insecure due to vulnerabilities in its authentication and encryption methods. PPTP is rarely used today in modern enterprise environments.
L2TP, or Layer 2 Tunneling Protocol, is often paired with IPSec (as L2TP/IPSec) to provide encryption. L2TP alone does not provide encryption and relies on other protocols to secure the payload. L2TP encapsulates Layer 2 frames over IP networks, enabling VPN functionality but not securing the traffic without IPSec.
IPSec is the correct answer because it provides robust encryption and authentication for IP traffic, enabling secure communication over untrusted networks. It is widely used for site-to-site and remote access VPNs and ensures confidentiality, integrity, and authenticity of data while operating transparently at the network layer. IPSec can handle all IP-based protocols, provides multiple modes for flexibility, and integrates with authentication mechanisms like digital certificates, pre-shared keys, or AAA servers. Modern network architectures rely on IPSec for secure connectivity between branch offices, cloud environments, and remote users, making it a foundational technology for network security.
Question 32
Which Layer 2 technology allows multiple VLANs to traverse a single trunk link?
A) VLAN Tagging
B) EtherChannel
C) STP
D) NAT
Answer: A) VLAN Tagging
Explanation:
VLAN Tagging is the process of inserting a unique identifier into Ethernet frames so that multiple VLANs can share a single physical trunk link between switches or other network devices. The IEEE 802.1Q standard specifies how the tag is added to each frame, enabling the receiving device to identify the VLAN to which the frame belongs. This allows for logical segmentation of a network across multiple switches without the need for additional physical links for each VLAN. VLAN tagging is essential in large networks where multiple departments or services must share physical infrastructure but remain logically separated for security, traffic management, and policy enforcement.
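The 802.1Q tag is a 4-byte field inserted after the destination and source MAC addresses: a 16-bit TPID of 0x8100 followed by a 16-bit TCI (3-bit priority, 1-bit DEI, 12-bit VLAN ID). A minimal sketch, assuming the frame is available as a raw byte string (the function name is illustrative):

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def add_dot1q_tag(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12-byte destination/source MAC pair.

    TCI layout: 3-bit PCP | 1-bit DEI | 12-bit VLAN ID.
    """
    if not 0 < vlan_id < 4095:
        raise ValueError("usable VLAN IDs are 1-4094")
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]
```

The 12-bit VLAN ID field is why a trunk can carry up to 4094 usable VLANs; the receiving switch reads the tag, delivers the frame to the matching VLAN, and strips the tag before sending it out an access port.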
EtherChannel, also known as link aggregation, combines multiple physical links into a single logical link to increase bandwidth and provide redundancy. While EtherChannel enhances throughput and fault tolerance, it does not allow multiple VLANs to traverse a single physical link without VLAN tagging. It is used in combination with trunking to improve performance.
STP, or Spanning Tree Protocol, prevents switching loops in Layer 2 networks by selectively blocking redundant paths. STP does not allow multiple VLANs to share a single trunk link but instead ensures a loop-free topology, maintaining network stability.
NAT, or Network Address Translation, operates at Layer 3 and translates private IP addresses to public addresses. NAT is unrelated to Layer 2 VLAN segmentation and does not facilitate multiple VLANs over a trunk link.
VLAN tagging is the correct answer because it enables the coexistence of multiple VLANs across a single physical link, reducing cabling requirements, supporting logical segmentation, and maintaining traffic separation across shared infrastructure. It is essential for implementing scalable, efficient, and secure enterprise networks, especially in environments with complex departmental or multi-tenant setups.
Question 33
Which wireless security protocol introduced individual encryption keys for each client?
A) WEP
B) WPA
C) WPA2
D) WPA3
Answer: C) WPA2
Explanation:
WPA2, or Wi-Fi Protected Access 2, introduced the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) for encryption, which generates a unique session key for each client connected to a wireless network: the pairwise transient key, derived during the four-way handshake from the master key together with nonces and the MAC addresses of the client and access point. This per-client encryption improves security over previous protocols, making it much more difficult for attackers to decrypt traffic even if they capture packets from the network. WPA2 also mandates the use of AES encryption, which provides stronger cryptographic protection than the TKIP mechanism used in WPA.
WEP, or Wired Equivalent Privacy, was the original wireless encryption protocol. It used a static key shared among all clients, making it highly vulnerable to attacks such as key reuse and packet capture. WEP can be cracked within minutes using widely available tools, which is why it is obsolete.
WPA, or Wi-Fi Protected Access, improved upon WEP by introducing TKIP, which added per-packet key mixing and dynamic key generation. However, TKIP still relies on the older RC4 cipher and is susceptible to certain attacks, making WPA less secure than WPA2.
WPA3 is the latest wireless security protocol and further strengthens security by introducing features such as Simultaneous Authentication of Equals (SAE), which replaces the pre-shared key exchange method with a more secure handshake. WPA3 also provides individualized encryption and forward secrecy, enhancing protection against offline attacks and ensuring future-proof security.
The correct answer is WPA2 because it introduced per-client encryption keys, providing strong data protection and mitigating vulnerabilities found in earlier wireless security protocols. WPA2 is widely deployed in enterprise and consumer networks and remains a fundamental standard for securing Wi-Fi connections. Its combination of AES encryption and individual keys makes wireless networks resistant to eavesdropping and unauthorized access.
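The shared starting point for WPA2-PSK key material can be reproduced with the standard library: the pairwise master key (PMK) is defined as PBKDF2-HMAC-SHA1 over the passphrase and SSID with 4096 iterations, producing 256 bits. The per-client pairwise transient keys are then derived from this PMK plus nonces and MAC addresses during the four-way handshake (not shown here).

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA2-PSK pairwise master key: PBKDF2-HMAC-SHA1, 4096 rounds, 256 bits.

    The SSID acts as the salt, so the same passphrase yields different
    key material on different networks.
    """
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```

Because the SSID salts the derivation, precomputed dictionaries only work against one network name at a time, which is one reason unique SSIDs plus long passphrases matter for WPA2-Personal deployments.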
Question 34
Which routing protocol uses link-state information and the shortest path first algorithm?
A) RIP
B) OSPF
C) EIGRP
D) BGP
Answer: B) OSPF
Explanation:
OSPF, or Open Shortest Path First, is a link-state routing protocol that uses the shortest path first (SPF) algorithm, also known as Dijkstra’s algorithm, to determine the most efficient path between routers in an IP network. Each OSPF router maintains a complete map of the network topology by exchanging link-state advertisements (LSAs) with neighbors. The SPF algorithm calculates the shortest path tree to all destinations, ensuring optimal routing. OSPF supports hierarchical design through areas, reducing routing overhead and improving scalability.
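The SPF computation itself is textbook Dijkstra over the link-state map. The sketch below uses invented router names and costs for illustration; real OSPF costs are derived from interface bandwidth.

```python
import heapq

def spf(graph: dict, source: str) -> dict:
    """Dijkstra's shortest path first over a link-state map {node: {neighbor: cost}}."""
    dist = {source: 0}
    pq = [(0, source)]          # priority queue of (cost-so-far, node)
    visited = set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost in graph.get(node, {}).items():
            new = cost + link_cost
            if new < dist.get(nbr, float("inf")):
                dist[nbr] = new
                heapq.heappush(pq, (new, nbr))
    return dist

# Hypothetical four-router topology built from exchanged LSAs:
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}
```

Here `spf(topology, "R1")` reaches R4 at cost 11 via R2 rather than cost 25 via R3, which is exactly the kind of optimal-path decision OSPF makes after each topology change.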
RIP, or Routing Information Protocol, is a distance-vector protocol that uses hop count as the metric to determine the best path. RIP periodically sends its entire routing table to neighbors, which is less efficient than OSPF’s link-state approach. RIP is limited in scalability because it cannot support more than 15 hops.
EIGRP, or Enhanced Interior Gateway Routing Protocol, is a hybrid protocol combining distance-vector and link-state features. It uses metrics such as bandwidth, delay, and reliability for path selection and supports rapid convergence. While efficient, it does not strictly use the shortest path first algorithm like OSPF.
BGP, or Border Gateway Protocol, is a path-vector protocol used primarily for inter-domain routing on the Internet. BGP relies on policies, AS-path information, and attributes to select routes, not SPF calculations. It is designed for large-scale, inter-network routing rather than internal LAN or enterprise routing.
OSPF is the correct answer because it uses link-state information and the shortest path first algorithm to compute efficient routes. Its fast convergence, scalability through areas, and optimal path selection make it ideal for large enterprise networks where rapid adaptation to topology changes is essential for maintaining high network availability and performance.
Question 35
Which type of attack overloads a server with numerous simultaneous requests to cause a crash?
A) DoS
B) DDoS
C) Phishing
D) ARP Poisoning
Answer: B) DDoS
Explanation:
A Distributed Denial of Service (DDoS) attack is a type of network attack in which multiple compromised devices, often forming a botnet, simultaneously send an overwhelming volume of traffic to a target server or network. The goal is to exhaust resources such as bandwidth, memory, or CPU, preventing legitimate users from accessing services. DDoS attacks are more powerful than traditional DoS attacks because traffic comes from multiple sources, making them harder to mitigate and trace. Attackers may use various techniques, including SYN floods, UDP floods, or amplification attacks, exploiting vulnerabilities in protocols or services to increase traffic volume.
DoS (Denial of Service) attacks target a single server or network resource from one source, sending excessive requests to disrupt availability. While effective, single-source DoS attacks are easier to detect and block compared to distributed attacks.
Phishing attacks involve tricking users into revealing sensitive information, such as credentials or financial data. Phishing does not directly impact server availability, as it focuses on social engineering rather than overwhelming system resources.
ARP Poisoning is a local network attack that manipulates the Address Resolution Protocol to intercept or redirect traffic within a subnet. ARP poisoning affects confidentiality and integrity rather than availability.
DDoS is the correct answer because it exploits the distributed nature of compromised devices to generate massive traffic loads, overwhelming servers and networks. Organizations use mitigation strategies such as traffic filtering, rate limiting, and cloud-based DDoS protection services to minimize the impact of these attacks and maintain service availability.
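Rate limiting, one of the mitigations mentioned above, is commonly implemented as a token bucket. This is a minimal single-source sketch; the class name and parameters are illustrative, and production mitigations track many sources and typically act at the network edge rather than on the server itself.

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop, delay, or challenge the request
```

A flood exhausts its bucket almost immediately and gets dropped, while a legitimate client sending at or below the configured rate is never affected.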
Question 36
Which wireless technology allows devices to communicate directly without an access point?
A) Infrastructure Mode
B) Ad Hoc Mode
C) Mesh Mode
D) Repeater Mode
Answer: B) Ad Hoc Mode
Explanation:
Ad Hoc Mode is a wireless communication mode that allows devices to connect directly to each other without requiring an intermediary access point. In this mode, each device participates in peer-to-peer communication, sharing data, resources, or internet connectivity if one device is connected to an external network. Ad Hoc networks are useful for temporary setups, small office environments, or emergency scenarios where infrastructure is unavailable. They can operate in the 2.4 GHz or 5 GHz bands and require each device to manage its own network discovery and routing to peers.
Infrastructure Mode is the standard wireless mode where all client devices connect through a centralized access point, which manages traffic, provides security policies, and connects devices to external networks. While infrastructure mode is more scalable and supports centralized management, it requires a deployed access point, unlike ad hoc mode.
Mesh Mode involves multiple access points or nodes that work together to extend coverage and provide redundancy. Mesh networks are designed for larger environments, often with dynamic routing between nodes. While mesh allows communication between devices indirectly through nodes, it is not peer-to-peer in the pure ad hoc sense.
Repeater Mode extends wireless coverage by rebroadcasting signals from an access point. Devices still communicate through the repeater and the access point rather than directly to each other. Repeater mode improves coverage but does not enable direct device-to-device communication.
The correct answer is Ad Hoc Mode because it enables direct peer-to-peer communication without relying on infrastructure. This mode is flexible for temporary networking, collaborative file sharing, or connecting devices in environments lacking access points. It supports small-scale, low-cost deployments where infrastructure deployment is impractical. Ad hoc networks also require careful security configuration since there is no centralized control, which can make them vulnerable if proper encryption or authentication mechanisms are not implemented.
Question 37
Which IPv6 address type is automatically assigned to all interfaces for communication on the local network?
A) Link-Local
B) Global Unicast
C) Multicast
D) Anycast
Answer: A) Link-Local
Explanation:
Link-local addresses in IPv6 are automatically assigned to every interface and are used to communicate with other devices on the same link or subnet. These addresses typically start with FE80::/10 and are required for essential network functions such as neighbor discovery and routing protocol operations. Link-local addresses are not routable beyond the local segment, ensuring that communication remains contained within the immediate network. They enable devices to communicate even if no global or unique local addresses have been assigned, providing a baseline for IPv6 connectivity.
Global Unicast addresses are publicly routable IPv6 addresses intended for communication over the Internet. Unlike link-local addresses, global unicast addresses are assigned through SLAAC, DHCPv6, or manual configuration and are used for end-to-end external communication.
Multicast addresses are used to send packets to multiple designated devices simultaneously. IPv6 multicast addresses begin with FF00::/8 and allow efficient group communication. While multicast enables sending to multiple recipients, it is not automatically assigned for basic network connectivity.
Anycast addresses are assigned to multiple devices, and packets sent to an anycast address are routed to the nearest instance based on routing metrics. Anycast is useful for load balancing or redundancy, but it is not automatically assigned for all interfaces.
The correct answer is Link-Local because every IPv6-enabled interface automatically receives this address type, enabling essential local communication, network discovery, and routing functions. Link-local addresses provide the foundation for IPv6 operation on any network segment, even before additional addressing schemes are configured. They are crucial for maintaining network functionality and enabling protocols such as OSPFv3 and EIGRP for IPv6, which rely on link-local addresses for neighbor adjacency and routing updates.
Question 38
Which cable type uses twisted pairs with additional shielding to reduce electromagnetic interference?
A) UTP
B) STP
C) Coaxial
D) Fiber Optic
Answer: B) STP
Explanation:
Shielded Twisted Pair (STP) cable consists of twisted pairs of copper wires, similar to UTP, but includes shielding around the pairs or the entire cable to reduce electromagnetic interference (EMI) and crosstalk. The shielding can consist of foil or braided metal, providing protection against external electrical noise. STP is commonly used in environments with high EMI, such as industrial sites, data centers, or near heavy machinery, where unshielded twisted pair (UTP) might suffer from signal degradation. STP cables maintain the benefits of twisted pairs, such as reduced crosstalk, while enhancing reliability in noisy conditions.
Unshielded Twisted Pair (UTP) cables also use twisted pairs to reduce crosstalk but do not have additional shielding. UTP is widely used for Ethernet LANs because it is inexpensive, flexible, and easy to install. However, in environments with significant electrical interference, UTP may not provide reliable transmission.
Coaxial cables use a central conductor, insulating dielectric, metallic shield, and outer jacket to transmit signals. While coaxial provides good resistance to EMI due to the shielding, it is not twisted pair and is typically used for cable television, legacy Ethernet, or broadband connections rather than modern Ethernet LANs.
Fiber Optic cables use light to transmit data and are immune to EMI because they do not carry electrical signals. Fiber is ideal for long-distance, high-speed backbones but is more expensive and less flexible than STP for shorter runs within a building.
The correct answer is STP because it combines twisted-pair design with shielding to reduce interference, ensuring reliable transmission in high-EMI environments. STP supports modern Ethernet speeds and is suitable for applications where electromagnetic interference could otherwise compromise data integrity. It provides a balance between cost, flexibility, and performance for specialized LAN environments.
Question 39
Which network device primarily segments broadcast domains?
A) Switch
B) Router
C) Hub
D) Access Point
Answer: B) Router
Explanation:
Routers operate at Layer 3 of the OSI model and are responsible for forwarding packets between different networks. One of their key functions is to segment broadcast domains. Each interface on a router represents a separate broadcast domain, which prevents broadcast traffic from propagating between networks. This segmentation improves performance, reduces unnecessary traffic, and enhances security by limiting exposure of broadcasts to devices on other networks. Routers use IP addressing to route traffic and can apply policies, access control lists (ACLs), and NAT to manage communication between networks.
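Each routed subnet has its own broadcast address, which marks the boundary of its broadcast domain; a router interface between subnets does not forward those broadcasts. A quick illustration with Python's `ipaddress` module, using hypothetical subnets:

```python
import ipaddress

# Two LANs on opposite sides of a router interface:
lan_a = ipaddress.ip_network("192.168.10.0/24")
lan_b = ipaddress.ip_network("192.168.20.0/24")

# A broadcast in lan_a is addressed to its own broadcast address and is
# never forwarded into lan_b by the router.
print(lan_a.broadcast_address)   # 192.168.10.255
print(lan_b.broadcast_address)   # 192.168.20.255
```

This containment is what keeps ARP requests, DHCP discovers, and other broadcast chatter from flooding the entire organization.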
Switches primarily operate at Layer 2 and segment collision domains rather than broadcast domains. Each switch port is a separate collision domain, but broadcast traffic is still forwarded to all ports within the same VLAN, so switches alone do not provide broadcast domain segmentation.
Hubs operate at Layer 1 and simply forward electrical signals to all connected devices. They neither segment collision domains nor broadcast domains, resulting in high traffic and collisions. Hubs are considered obsolete in modern networks.
Access points provide wireless connectivity to clients but do not inherently segment broadcast domains. Wireless clients connected to the same SSID are part of the same broadcast domain unless VLANs or separate subnets are implemented.
The correct answer is Router because it separates broadcast domains by routing traffic between networks. This segmentation reduces broadcast storms, improves network efficiency, and allows for controlled inter-network communication. Routers are essential in enterprise and campus networks to maintain performance and scalability.
Question 40
Which attack type involves a malicious actor sending fraudulent ARP messages to intercept traffic?
A) DNS Spoofing
B) ARP Poisoning
C) DoS
D) Phishing
Answer: B) ARP Poisoning
Explanation:
ARP Poisoning, also called ARP spoofing, is a network attack in which a malicious actor sends forged Address Resolution Protocol (ARP) messages on a local network. The attacker associates their MAC address with the IP address of another device, such as a gateway, causing network traffic intended for the legitimate device to be redirected through the attacker’s system. This enables eavesdropping, data modification, or session hijacking. ARP poisoning exploits the lack of authentication in ARP, which trusts all ARP responses on a LAN. Attackers can target multiple hosts, enabling the interception of sensitive information such as credentials, financial transactions, or internal communications.
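A simple detection heuristic follows directly from the attack's signature: the same IP address suddenly answering from a different MAC. The sketch below assumes ARP replies have already been captured elsewhere as (IP, MAC) pairs; the function name is illustrative.

```python
def detect_arp_conflicts(arp_replies):
    """Flag IPs whose claimed MAC changes across observed ARP replies.

    `arp_replies` is an iterable of (ip, mac) pairs, e.g. from a packet
    capture. A gateway IP suddenly answering from a new MAC is a classic
    sign of poisoning.
    """
    seen = {}
    alerts = []
    for ip, mac in arp_replies:
        if ip in seen and seen[ip] != mac:
            alerts.append((ip, seen[ip], mac))
        seen[ip] = mac
    return alerts
```

Tools such as dynamic ARP inspection on switches apply the same idea with an authoritative binding table instead of first-seen mappings, which avoids trusting whichever reply arrives first.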
DNS Spoofing involves manipulating DNS responses to redirect users to malicious websites. While both DNS spoofing and ARP poisoning can redirect traffic, ARP poisoning operates at Layer 2 within a local subnet, whereas DNS spoofing manipulates name resolution at the application layer (Layer 7).
DoS attacks overwhelm network or server resources to prevent service availability. DoS attacks impact availability rather than intercepting or redirecting traffic, which is the focus of ARP poisoning.
Phishing attacks target users through social engineering, attempting to steal credentials or sensitive data via deceptive emails or messages. Phishing does not intercept or manipulate network traffic.
The correct answer is ARP Poisoning because it enables the attacker to intercept or manipulate network traffic within a LAN by sending false ARP messages. ARP poisoning can be used in combination with other attacks, such as MitM, to compromise confidentiality and integrity, making it a significant threat in poorly secured local networks.
Question 41
Which routing protocol uses areas to improve scalability and minimize routing overhead in large networks?
A) RIP
B) OSPF
C) EIGRP
D) BGP
Answer: B) OSPF
Explanation:
OSPF is a link-state routing protocol designed for large enterprise networks that require efficient routing, fast convergence, and strong scalability features. One of its most important characteristics is the use of areas, which segment a network into smaller, more manageable partitions. The primary purpose of dividing an OSPF deployment into areas is to reduce routing overhead, limit the size of routing tables, and improve the overall efficiency of route calculation. In a large environment, processing all routing updates for every available route on every router would be highly inefficient. OSPF solves this problem by organizing routers into a hierarchy that includes a backbone area and additional areas attached to it. This design ensures that routing updates remain localized, and only summarized information is passed between areas, helping maintain network performance even as the network grows significantly.
RIP differs greatly in how it handles routing information in an enterprise environment. It is a distance-vector routing protocol that uses hop count as its metric and sends periodic updates every 30 seconds. RIP networks do not use areas, nor do they support hierarchical routing structures. Every router running RIP must process the entire routing table and advertise it to neighbors regularly. This approach creates scaling limitations, because as the network grows, the amount of routing information increases and convergence slows down dramatically. RIP networks typically struggle to scale beyond small deployments, making the concept of areas unnecessary and unsupported in this routing protocol.
EIGRP is considered an advanced distance-vector routing protocol and supports features such as rapid convergence, VLSM, and unequal-cost load balancing. Although highly efficient and suitable for medium to large networks, EIGRP does not implement areas the way OSPF does. Instead, EIGRP uses autonomous systems to organize routers, but these are not comparable to the hierarchical area design of OSPF. EIGRP routers share routing information within the same autonomous system, and while the protocol is capable of summarization and reducing overhead, it does not rely on area-based segmentation as a fundamental part of its structure. Summarization in EIGRP can help manage scalability, but it does not replicate the strict, hierarchical multi-area design of OSPF.
BGP is a path-vector protocol primarily designed for routing between autonomous systems on the internet. It is built to manage the massive routing environment of global networks, and it functions differently from internal routing protocols such as OSPF and EIGRP. BGP does not use areas in the same manner as OSPF, nor is it meant to. Instead, it uses policies, attributes, and extensive route filtering techniques to control routing behavior across networks. BGP can manage extremely large routing tables, but it is not designed to segment an internal network into areas for scalability. Its architecture focuses on controlling routes between organizations rather than subdividing a single organization’s internal topology.
The correct answer is OSPF because the protocol is specifically engineered around a multi-area structure that provides hierarchical routing, reducing overhead by limiting the size of link-state databases and controlling the spread of routing updates. The backbone area, known as Area 0, connects all other areas, ensuring efficient flow of summarized routing information. Routers inside an area maintain detailed knowledge of their own area only, while information exchanged between areas is condensed through route summarization. This design helps networks scale without overwhelming routers with unnecessary data or lengthy route recalculations. Another important benefit of OSPF’s area-based design is faster convergence. When a topology change occurs in one area, the impact is contained within that area. Only summarized information leaves the area, ensuring stability in the rest of the network. This makes OSPF more predictable and efficient in large enterprise settings where outages, link changes, and device upgrades are common events that require rapid adaptation.
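The inter-area summarization described above can be illustrated with Python's `ipaddress` module. The prefixes below are hypothetical examples of what an area border router might condense into a single advertisement toward Area 0:

```python
import ipaddress

# Four contiguous intra-area prefixes inside a hypothetical Area 1:
area1_prefixes = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# An ABR can advertise all four as one summary route into the backbone:
summary = list(ipaddress.collapse_addresses(area1_prefixes))
print(summary)   # a single 10.1.0.0/22
```

Routers in other areas then carry one /22 instead of four /24s, and a flap on any single /24 inside Area 1 does not force an SPF recalculation elsewhere.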
OSPF’s hierarchical design also enhances security and manageability. Administrators can deploy authentication per area, apply area-specific cost metrics, and configure stub, totally stubby, or not-so-stubby areas to further reduce routing overhead and control route propagation. These features give network designers the ability to tailor each segment of the network to meet specific operational needs, something that distance-vector protocols cannot accomplish. OSPF’s structure helps prevent excessive flooding of link-state advertisements by confining them within an area. Only the backbone area handles inter-area communication, keeping routing efficient even when the overall topology becomes complex.
In contrast, protocols like RIP and EIGRP rely on simpler architectures that do not include dedicated area segmentation. RIP’s periodic updates would overwhelm a large network, making it unsuitable for hierarchical growth. EIGRP offers excellent performance but focuses on administrative boundaries rather than multi-level hierarchy. BGP is designed for external routing rather than internal scalability. Thus, OSPF remains the primary protocol used in large enterprises where the hierarchy of areas plays a central role in managing complexity and maintaining network efficiency.
Question 42
Which WAN technology uses virtual circuits that can be permanently established or dynamically switched to transmit data across provider networks?
A) MPLS
B) DSL
C) Cable Broadband
D) Satellite
Answer: A) MPLS
Explanation:
MPLS is a WAN technology that relies on virtual circuits to transport data efficiently between sites across a provider’s backbone. These virtual circuits can be either permanently established or dynamically created to handle changing network conditions, traffic demands, or routing requirements. Instead of relying solely on traditional IP routing, MPLS adds labels to packets. These labels determine the specific path the packet takes across the provider network. Because the forwarding decision is based on labels rather than full IP lookups, the process becomes faster and more predictable. MPLS achieves high performance by pre-defining label-switched paths, which act as virtual circuits that support different classes of traffic. Companies can use these paths to create connections between multiple sites, providing flexibility, quality of service, and consistent performance across geographically dispersed networks. MPLS also supports traffic engineering, allowing providers to optimize network resources by routing specific types of traffic along designated paths to avoid congestion.
DSL is a broadband technology that uses existing telephone lines to deliver internet connectivity. While DSL can provide reliable access for households and small businesses, it does not use virtual circuits in the way that MPLS does. DSL connections do not rely on a provider’s internal label-switching backbone or virtual circuit configuration. Instead, DSL transmits data over copper loops between the customer premises and the provider’s central office. The primary determining factors in DSL performance are line quality and distance from the central office. While certain network sessions might appear logically isolated, DSL does not create dedicated or dynamic circuits across a provider network and therefore cannot match the flexibility or traffic engineering capabilities of MPLS.
Cable broadband uses coaxial infrastructure that originally supported television services. While it offers high bandwidth and wide availability, cable broadband does not employ the virtual circuit model characteristic of MPLS networks. Instead, cable systems use a shared medium. All subscribers in a local area share bandwidth on the same coaxial segment, and traffic is managed by the provider using DOCSIS standards. The data path is not constructed as a dedicated circuit or dynamically established route. Instead, traffic flows through shared channels and is routed based on standard IP forwarding. Since cable broadband operates with a shared bandwidth pool, performance may decrease during peak usage periods, and it lacks the dedicated traffic control mechanisms available in MPLS networks.
Satellite communication provides internet connectivity using satellite links, generally for remote or rural areas lacking terrestrial infrastructure. While satellite links enable wide-area coverage, they do not involve the creation of virtual circuits through a service provider’s terrestrial backbone. Instead, satellite communication relies on uplink and downlink channels directed to the orbiting satellite, which then relays the signal to a ground station. This communication model does not offer the traffic engineering capabilities or dynamic label switching of MPLS. Satellite systems also suffer from high latency because the signal must travel vast distances, typically to geostationary orbit and back, reducing their suitability for applications requiring quick response times.
The correct answer is MPLS because it uses virtual circuits that can be established permanently or created dynamically depending on the network’s requirements. The technology operates by assigning labels to data packets. These labels correspond to pre-configured paths known as label-switched paths. These paths serve as virtual circuits that guide traffic through the provider’s infrastructure. Unlike traditional IP routing, where routers must analyze the destination IP address of each packet, MPLS routers forward packets based on the label. This process reduces overhead and speeds up packet forwarding decisions. The ability to create and manage virtual circuits in MPLS allows service providers to tailor services for specific customer needs.
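Label-based forwarding can be sketched as a lookup in a label forwarding information base (LFIB). All labels, interface names, and actions below are invented for illustration; the point is that the forwarding decision uses only the top label, never a full IP lookup.

```python
# Hypothetical LFIB for one label-switching router:
# incoming label -> (outgoing interface, action, outgoing label or None)
LFIB = {
    100: ("ge-0/0/1", "swap", 200),   # mid-path hop: swap the top label
    200: ("ge-0/0/2", "pop", None),   # penultimate hop: pop, then forward
}

def forward(label_stack, lfib):
    """Forward a packet based solely on its top label."""
    top = label_stack[0]
    iface, action, out_label = lfib[top]
    if action == "swap":
        return iface, [out_label] + label_stack[1:]
    if action == "pop":
        return iface, label_stack[1:]
    raise ValueError(f"unknown action {action!r}")
```

Because labels can be stacked, the same mechanism supports services like VPNs, where an inner label identifies the customer route while the outer label steers the packet across the provider core.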
One major advantage of MPLS is support for multipoint-to-multipoint and any-to-any connectivity. This flexibility helps businesses create secure, high-performance WAN environments without needing dedicated leased lines, which are more expensive and less adaptable. MPLS can simulate the behavior of various network topologies, including full mesh, partial mesh, and hub-and-spoke, depending on the design requirements. Another important feature is quality of service. MPLS can prioritize different classes of traffic by assigning appropriate labels. Voice, video, and mission-critical data can receive higher priority than routine or bulk traffic, ensuring low latency and consistent performance. This makes MPLS well suited for enterprise applications such as VoIP, video conferencing, real-time monitoring, and ERP systems.
The ability to create dynamic or permanent virtual circuits also allows MPLS to adapt to changes in link utilization or network conditions. If congestion occurs on one path, a different label-switched path can be created to reroute traffic. This is the essence of MPLS traffic engineering, which enables service providers to optimize resource usage and maintain reliability. In contrast, traditional IP routing relies on routing protocols such as OSPF or BGP to adapt to network changes, which can take longer to converge, and lacks the built-in mechanisms to pre-define or engineer traffic flow patterns with the same precision.
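The label-swapping behavior described above can be sketched in a few lines. This is a toy model, not any real router implementation; the router names, label values, and the idea of an ingress router choosing a backup label are all illustrative assumptions:

```python
# Toy model of MPLS label switching: a core router forwards packets by
# looking up the incoming label, not the destination IP address.

class MplsRouter:
    def __init__(self, name):
        self.name = name
        # Label-forwarding table: incoming label -> (outgoing label, next hop)
        self.lfib = {}

    def add_lsp(self, in_label, out_label, next_hop):
        self.lfib[in_label] = (out_label, next_hop)

    def forward(self, label):
        """Swap the label and return (new_label, next_hop) -- no IP lookup."""
        out_label, next_hop = self.lfib[label]
        return out_label, next_hop

# Two pre-established label-switched paths through a provider core router.
core = MplsRouter("P1")
core.add_lsp(in_label=100, out_label=200, next_hop="PE-East")   # primary LSP
core.add_lsp(in_label=101, out_label=201, next_hop="PE-North")  # backup LSP

# Normal traffic follows the primary path.
print(core.forward(100))   # (200, 'PE-East')

# Traffic engineering in miniature: if the primary path congests, the
# ingress router imposes the backup label instead, steering traffic onto
# the other LSP with no change to the core router's IP routing.
print(core.forward(101))   # (201, 'PE-North')
```

The key point the sketch illustrates is that the core router's forwarding decision is a single table lookup on the label, which is why MPLS forwarding is faster and more controllable than per-packet IP routing.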
In addition to flexibility and performance, MPLS offers security advantages. Although MPLS does not encrypt traffic by default, customer traffic remains logically isolated within the provider’s network. Each customer’s virtual routing environment is separated from others, reducing the likelihood of cross-customer interference compared to public internet transport. This separation provides a level of security similar to private circuits without requiring dedicated infrastructure.
When considering alternatives, DSL, cable broadband, and satellite lack the architectural design to provide virtual circuits or traffic engineering features. They deliver internet connectivity but cannot match MPLS for enterprise-grade routing control, performance guarantees, or virtual circuit management. Therefore, MPLS remains the most suitable technology for organizations that require reliable, customizable, and scalable WAN communication supported by virtual circuits that enhance both efficiency and control.
Question 43
Which wireless security protocol introduced the use of AES encryption to significantly strengthen WLAN protection?
A) WEP
B) WPA
C) WPA2
D) TKIP
Answer: C) WPA2
Explanation:
WPA2 is the wireless security protocol that introduced mandatory support for AES encryption, significantly improving the security of wireless networks compared to previous standards. It was created as an enhancement to the earlier WPA standard, which served as a temporary solution following the discovery of vulnerabilities in the original WEP system. By incorporating AES, WPA2 established a stronger and more reliable framework for protecting wireless communications. AES is a modern block cipher that resists the cryptanalytic attacks which exploited weaknesses in earlier RC4-based encryption methods. WPA2 also introduced a more robust handshake process, improved key management, and mutual authentication capabilities when used with enterprise features such as 802.1X and RADIUS. These elements combine to protect wireless networks from common threats, including eavesdropping and unauthorized access. Over time, WPA2 became the widely accepted standard, recommended for both home and enterprise environments until the release of WPA3.
WEP is an earlier wireless security mechanism that uses RC4 as its primary encryption method. Although it was widely deployed in the early years of Wi-Fi technology, it suffers from significant security weaknesses. The limited key size, static key usage, and predictable structure of WEP packets allow attackers to easily analyze network traffic and derive the encryption key using publicly available tools. The vulnerabilities in WEP arise from flaws in the initialization vector system and the way RC4 is implemented. Despite attempts to patch some of its shortcomings, WEP cannot provide reliable protection for modern networks. It does not offer the strong encryption or improved authentication provided by AES in WPA2. WEP also lacks the handshake improvements and advanced key management provided by later protocols.
WPA was introduced as a transitional solution designed to replace WEP quickly without requiring major hardware upgrades. It uses TKIP as its primary encryption enhancement, improving key management and adding mechanisms to detect tampering. Although WPA was a substantial improvement over WEP, TKIP relies on RC4, which eventually proved to have its own vulnerabilities. WPA also lacks the complete structural protections offered by WPA2, especially with regard to encryption strength and resilience to brute-force attacks. TKIP was intended to work with older hardware but was not designed as a long-term solution. WPA improves packet integrity and makes attacks more difficult than with WEP, but it does not deliver the full cryptographic strength offered by AES.
TKIP is the encryption mechanism used by the WPA standard. It improves upon the weaknesses of WEP by providing per-packet key mixing, message integrity checks, and a more dynamic key distribution system. However, TKIP does not provide the same level of protection as AES. It was created as an interim solution so that devices without AES-capable hardware could still gain meaningful protection. TKIP offers partial mitigation, but it still relies on RC4 and has known vulnerabilities that make it unsuitable for modern wireless security requirements. Even though TKIP enhanced overall protection when compared to the static key system of WEP, it was never intended as a final solution. Over time, the continued development of attacks targeting RC4-based systems exposed the limitations of TKIP more clearly.
The correct answer is WPA2 because it introduced the mandatory use of AES, fundamentally strengthening wireless security. AES encryption is considered highly secure due to its robust cryptographic structure, which is resistant to attacks that exploit mathematical weaknesses found in older algorithms. AES supports multiple key sizes and offers enhanced protection for data integrity and confidentiality. When implemented in WPA2, AES provides a reliable method for securing wireless traffic, ensuring that attackers cannot easily decrypt transmitted data even if they capture large volumes of network traffic. WPA2 also implemented the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), a mode of operation that contributes to both data confidentiality and integrity.
Another distinguishing feature of WPA2 is its support for enterprise-level authentication through 802.1X. When deployed in enterprise environments, WPA2 integrates with RADIUS servers to authenticate users individually rather than relying on a shared password. This improves accountability and reduces risks associated with compromised credentials. The four-way handshake procedure in WPA2 ensures that encryption keys remain dynamic, unique, and protected throughout the communication session. These enhancements provide safeguards against replay attacks, impersonation attempts, and man-in-the-middle attacks that could compromise older encryption systems.
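The four-way handshake's key derivation can be sketched with standard-library primitives. This is a simplified illustration of the 802.11i pseudo-random function (iterated HMAC-SHA1), not an interoperable implementation; all input values below (the PMK, MAC addresses, and nonces) are invented for demonstration:

```python
import hashlib
import hmac

def prf(pmk: bytes, label: bytes, data: bytes, nbytes: int) -> bytes:
    """Simplified sketch of the 802.11i PRF: iterated HMAC-SHA1 over
    label || 0x00 || data || counter, truncated to nbytes."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        msg = label + b"\x00" + data + bytes([counter])
        out += hmac.new(pmk, msg, hashlib.sha1).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk, aa, sa, anonce, snonce, nbytes=48):
    """Mix both MAC addresses and both nonces, sorted so that either side
    of the handshake derives the same pairwise transient key (PTK)."""
    data = min(aa, sa) + max(aa, sa) + min(anonce, snonce) + max(anonce, snonce)
    return prf(pmk, b"Pairwise key expansion", data, nbytes)

# Illustrative inputs only: a real PMK comes from the passphrase and SSID
# (via PBKDF2) or from 802.1X/RADIUS authentication.
pmk = b"\x01" * 32
aa, sa = b"\xaa" * 6, b"\xbb" * 6            # AP and station MAC addresses
anonce, snonce = b"\x02" * 32, b"\x03" * 32  # nonces exchanged in the handshake

ptk = derive_ptk(pmk, aa, sa, anonce, snonce)
print(len(ptk))  # 48
```

Because both nonces change every session, the derived PTK is unique per association even though the PMK stays the same, which is what makes captured traffic from one session useless for decrypting another.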
WPA2 also brings significant improvements in terms of network efficiency and compatibility. While older hardware may not support AES, modern wireless devices are designed to work seamlessly with WPA2. It became the recommended security mechanism for many years, remaining the most commonly used protocol across homes, offices, and enterprise networks. Its widespread adoption helped standardize best practices for Wi-Fi security and minimized the variations in encryption strength that were previously common across different devices and manufacturers.
Comparing WPA2 to alternative options reveals why it became the industry standard. WEP is extremely outdated and insecure, making it unsuitable for protecting any contemporary network. Its predictable key usage and flawed design make it easy for attackers to break, even with basic tools. WPA improved upon WEP but still relies on TKIP, which cannot match the security provided by AES. TKIP’s reliance on RC4 means it inherits vulnerabilities that make it weaker than modern standards. While WPA can still be found in older networks, it is not recommended for new deployments. WPA2, on the other hand, offers advanced encryption and reliable authentication mechanisms that support a wide range of devices and network layouts.
Although WPA3 has begun replacing WPA2 and provides even more advanced protections, including forward secrecy, WPA2 remains a major milestone in wireless security development. Its implementation of AES solved fundamental security issues and established a much more secure foundation for wireless communication. This strength makes WPA2 the protocol that introduced the comprehensive security enhancements missing from earlier versions. Its mandatory AES requirement clearly distinguishes it as the correct answer among the available choices.
Question 44
Which network service is responsible for automatically assigning IP addresses, subnet masks, gateways, and other configuration parameters to devices on a network?
A) DNS
B) DHCP
C) SNMP
D) NTP
Answer: B) DHCP
Explanation:
DHCP is the network service responsible for automatically assigning IP addresses and other configuration parameters to devices on a network. This service significantly simplifies network administration by eliminating the need to manually configure each device. When a device joins a network, it sends a request asking for configuration information. A DHCP server responds with an available IP address, subnet mask, default gateway, DNS server information, lease duration, and any additional parameters defined by the network administrator. This automatic distribution mechanism reduces configuration errors, ensures proper IP address allocation, and helps keep the network organized as devices come and go. DHCP also manages IP address leases, ensuring that addresses are reclaimed and reassigned as needed. By maintaining a dynamic pool of addresses, DHCP prevents conflicts and supports efficient network scaling. DHCP can also reserve specific IP addresses for important devices, guaranteeing consistent configuration for servers, printers, and other essential nodes.
DNS serves a completely different purpose in the network environment. DNS translates domain names into IP addresses, allowing users to access websites and network resources through easily remembered names instead of numerical addresses. When a user enters a domain name, DNS resolves it to the correct IP address through a hierarchical lookup process. Although DNS supports network usability and navigation, it does not distribute IP configuration or assign network parameters to devices. DNS servers rely on accurate IP addresses to function correctly, but they do not participate in allocating or managing those addresses. Their role lies strictly in name resolution rather than managing configuration information for end devices.
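The hierarchical lookup process mentioned above can be modeled as a chain of referrals. The zone data below is a toy data set invented for illustration (a real resolver starts at the root servers and follows NS referrals over the network):

```python
# Toy model of iterative DNS resolution: each "server" either answers
# authoritatively or refers the resolver to a server closer to the answer.

ZONES = {
    ".":          {"com.": "ns.com-tld"},                     # root refers to the TLD
    "ns.com-tld": {"example.com.": "ns.example"},             # TLD refers to the zone
    "ns.example": {"www.example.com.": "93.184.216.34"},      # authoritative A record
}

def resolve(name: str) -> str:
    """Walk the referral chain from the root until an address is returned."""
    server = "."
    while True:
        answers = ZONES[server]
        if name in answers:
            return answers[name]  # authoritative answer
        # Otherwise follow the referral for the closest enclosing zone.
        for suffix, next_server in answers.items():
            if name.endswith(suffix):
                server = next_server
                break
        else:
            raise LookupError(f"no referral for {name}")

print(resolve("www.example.com."))  # 93.184.216.34
```

Note the division of labor the sketch highlights: DNS maps names to addresses that must already exist and be correct; it plays no part in handing those addresses out, which is DHCP's job.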
SNMP is designed for monitoring and managing network devices. Using SNMP, administrators can gather information about device performance, track operational statistics, and modify configuration parameters on supported equipment. SNMP agents run on network devices such as routers, switches, firewalls, and servers, collecting data that SNMP managers can review. SNMP helps administrators understand device status, detect issues, and maintain network health. However, SNMP does not assign IP addresses or provide configuration details such as gateways or DNS settings. Its role focuses on network management, logging, and monitoring rather than provisioning network configurations to client devices.
NTP is responsible for synchronizing time across devices in a network. Accurate timekeeping is essential for logging, authentication, encryption, and numerous system functions. NTP communicates with time servers, ensuring that clocks on all network devices remain aligned to a reliable time source. While NTP is essential for maintaining accurate timestamps and ensuring consistent system behavior, it does not participate in distributing IP addresses or assigning network settings. Its function is limited to time synchronization and has no ability to manage or allocate network configuration parameters.
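The core of NTP's synchronization is a small calculation over four timestamps from one client-server exchange (this is the standard offset/delay formula from RFC 5905; the timestamp values below are invented to make the arithmetic visible):

```python
# t1 = client transmit, t2 = server receive, t3 = server transmit,
# t4 = client receive. t1/t4 are on the client clock, t2/t3 on the server's.

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Example: the client clock runs 0.5 s behind the server, and each network
# leg takes 0.1 s, with an instant server turnaround.
t1 = 100.0   # client transmit (client clock)
t2 = 100.6   # server receive  (client time 100.1 + 0.5 offset)
t3 = 100.6   # server transmit
t4 = 100.2   # client receive  (client clock)

offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(offset, delay)  # 0.5 0.2
```

Averaging the two one-way differences cancels the network delay when it is symmetric, which is why the client can recover a 0.5 s offset even though no single timestamp pair shows it directly.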
The correct answer is DHCP because it automates the assignment of vital network settings that enable devices to communicate properly within a network. DHCP dramatically reduces the administrative burden associated with manual IP configuration. Without DHCP, every workstation, printer, mobile device, and server would require manual configuration, increasing the likelihood of human error, IP conflicts, and inconsistencies. Using DHCP, administrators can ensure that devices quickly receive accurate configuration information each time they join the network. DHCP uses a structured process called DORA: Discover, Offer, Request, and Acknowledge. This four-step interaction allows the device to identify available DHCP servers, receive an offered address, request that address, and receive final confirmation before using the configuration for network communication.
DHCP supports both dynamic and static assignment. Dynamic assignment allows devices to receive any available address from the pool, which is ideal for mobile devices, temporary machines, and general workstations. Static assignment, often referred to as DHCP reservation, allows administrators to map a device’s MAC address to a specific IP address, ensuring that the device always receives the same IP. This combination of dynamic and static allocation makes DHCP flexible enough to support both everyday users and critical infrastructure devices.
DHCP also supports advanced options that distribute more than just addresses. Administrators can configure DHCP to provide information such as time server addresses, WINS server addresses, boot server information for PXE-based installations, VoIP configuration parameters, and specific vendor-related settings. These capabilities allow DHCP to centralize device configuration across the network, reducing the complexity associated with distributed configuration tasks. DHCP scales easily, supporting networks of all sizes—from home networks with a few devices to enterprise environments with thousands of connected nodes.
The service also plays an important role in controlling IP address lifecycles. IP address leases ensure that devices do not retain addresses indefinitely. When a lease expires, the address can be reassigned to another device. This approach prevents wasted address space and helps networks use their available IP address pools efficiently. Because leases expire automatically, addresses held by devices that have left the network return to the pool without manual intervention. This helps preserve available addresses, especially in environments with high device turnover, such as classrooms, conference areas, or public Wi-Fi networks.
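The mechanics described over the last few paragraphs — the DORA exchange, reservations, and lease expiry — can be condensed into a small in-memory model. This is a sketch only; the addresses, MAC addresses, and lease times are invented, and a real DHCP server works with broadcast packets rather than method calls:

```python
import time

class DhcpPool:
    """Toy model of a DHCP server's address pool and lease table."""

    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)
        self.leases = {}          # mac -> (ip, lease expiry time)
        self.reservations = {}    # mac -> ip (static assignment)
        self.lease_seconds = lease_seconds

    def reserve(self, mac, ip):
        """DHCP reservation: this MAC always receives the same IP."""
        self.reservations[mac] = ip
        if ip in self.free:
            self.free.remove(ip)

    def offer(self, mac):
        """Discover -> Offer: pick a reserved, renewed, or free address."""
        if mac in self.reservations:
            return self.reservations[mac]
        if mac in self.leases:               # renewing client keeps its IP
            return self.leases[mac][0]
        return self.free[0] if self.free else None

    def acknowledge(self, mac, ip, now=None):
        """Request -> Ack: commit the lease with an expiry time."""
        now = time.time() if now is None else now
        if ip in self.free:
            self.free.remove(ip)
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

    def expire(self, now=None):
        """Reclaim addresses whose leases have lapsed."""
        now = time.time() if now is None else now
        for mac, (ip, expiry) in list(self.leases.items()):
            if expiry <= now and ip not in self.reservations.values():
                del self.leases[mac]
                self.free.append(ip)

pool = DhcpPool(["192.168.1.10", "192.168.1.11"], lease_seconds=60)
pool.reserve("aa:bb:cc:00:00:01", "192.168.1.11")   # printer always gets .11

ip = pool.offer("de:ad:be:ef:00:01")                # Discover -> Offer
pool.acknowledge("de:ad:be:ef:00:01", ip, now=0)    # Request -> Ack
print(ip)                                           # 192.168.1.10

pool.expire(now=120)                                # lease lapsed: reclaimed
print(pool.free)                                    # ['192.168.1.10']
```

The reserved address never returns to the free pool, while the dynamically leased one is reclaimed as soon as its lease lapses — the two allocation styles the explanation contrasts.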
Comparing DHCP to the other services highlights the distinct nature of each. DNS focuses on resolving domain names, not delivering configuration information. SNMP oversees monitoring and management tasks, not network provisioning. NTP ensures clock accuracy across devices but has no capability to assign IP addresses. DHCP alone automates IP configuration, seamlessly providing the parameters necessary for devices to operate within the network. Its role in simplifying network administration makes it indispensable in modern environments. In both small and large networks, the automation, consistency, and reliability offered by DHCP provide valuable benefits that improve security, efficiency, and overall network organization. Its ability to assign, manage, reclaim, and track IP address usage helps ensure stable communication between devices and reduces the risk of conflicts that could disrupt operations.
Question 45
Which cable type is typically used for high-speed, long-distance backbone connections in enterprise networks due to its immunity to electromagnetic interference?
A) Cat 6 UTP
B) Coaxial
C) Fiber Optic
D) Shielded Twisted Pair
Answer: C) Fiber Optic
Explanation:
Fiber optic cabling is the cable type commonly used for high-speed and long-distance backbone connections in enterprise environments, largely due to its immunity to electromagnetic interference. Fiber optic cables transmit data using pulses of light rather than electrical signals, allowing them to avoid many limitations associated with copper-based media. Because data travels as light signals through glass or plastic fibers, external electrical noise does not affect transmission. This makes fiber highly reliable in noisy environments such as manufacturing facilities, data centers, or areas with heavy electrical equipment. Fiber supports extremely high bandwidth and can carry signals over significantly greater distances than copper cables without signal loss. These features make it ideal for core network segments, inter-building connections, and data center uplinks. Fiber is also secure because tapping into a fiber cable without detection is technically challenging, offering advantages in environments requiring confidential data transmission. These benefits collectively explain why fiber optic cabling is the standard option for enterprise backbones.
Cat 6 UTP is a popular twisted pair copper cable used in many organizations for local area network connections. It supports Gigabit Ethernet and even 10-Gigabit Ethernet over short distances. Despite its speed capabilities, Cat 6 UTP uses electrical signals running through copper wires, making it vulnerable to electromagnetic interference. Although twisted pairs help reduce crosstalk and improve signal quality, UTP lacks the shielding required to fully block noise from powerful electrical sources. Cat 6 is also limited in terms of distance. Ethernet standards restrict the maximum run length for twisted pair cables to 100 meters. This limitation makes UTP unsuitable for backbone links that may stretch across multiple floors, buildings, or campus facilities. While Cat 6 UTP is excellent for workstation connections, access layer cabling, and short interconnects, it does not meet backbone performance expectations compared to fiber.
Coaxial cable has historically been used for many different applications, including early Ethernet networks, broadband internet service, and cable television distribution. Coaxial construction includes a center conductor surrounded by an insulating layer, a metallic shield, and a protective cover, making it more resistant to interference than an unshielded twisted pair. While coaxial can support moderate bandwidth over reasonable distances, it does not provide the exceptional performance required for modern enterprise backbones. Coaxial cables cannot support multi-gigabit speeds across long distances in a manner comparable to fiber optic cabling. They are also bulkier, less flexible, and increasingly outdated for networking purposes. Coaxial wiring has been largely replaced by fiber in backbone environments because modern enterprise networks demand far higher throughput, smoother scalability, and more flexible installation than coaxial can offer.
Shielded twisted pair incorporates shielding to reduce interference, offering better protection than an unshielded twisted pair. This cable type includes either foil or braided shields around each pair or around the cable as a whole. STP can be beneficial in electrically noisy environments and provides reliable performance for copper Ethernet standards. However, despite shielding, STP still transmits electrical signals and is therefore not immune to interference. It provides improved noise resistance, but not complete protection. STP suffers from similar distance limitations as UTP, typically restricted to 100 meters for standard Ethernet links. Additionally, STP requires proper grounding to function effectively. Improper grounding can create new issues, such as ground loops or increased signal noise. STP’s electrical nature and range restrictions prevent it from competing with fiber optic cabling for high-speed, long-distance backbone deployments.
The correct answer is fiber optic because it meets the essential requirements for enterprise backbone connectivity: extremely high bandwidth, long-distance support, and immunity to electromagnetic interference. Fiber optic systems can support multi-gigabit or even terabit-level speeds, depending on the fiber type, transceivers, and network design. Single-mode fiber is capable of transmitting signals over many kilometers without significant loss, making it suitable for campus environments or metropolitan network deployments. Multi-mode fiber works well within buildings or shorter distances while offering high bandwidth and reliability. The use of lasers or LEDs as light sources allows fiber cables to operate with extraordinary efficiency, maintaining signal clarity over long runs without the attenuation that affects copper cabling.
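The distance advantage over copper can be made concrete with a back-of-the-envelope optical power budget. All figures below are typical illustrative assumptions, not values from any specific transceiver datasheet: roughly 0.35 dB/km attenuation for single-mode fiber at 1310 nm, 0.5 dB per connector, and 0.1 dB per splice:

```python
def link_loss_db(length_km, connectors, splices,
                 fiber_db_per_km=0.35, connector_db=0.5, splice_db=0.1):
    """Total expected loss on a fiber run: fiber attenuation plus the
    fixed losses of each connector and splice along the path."""
    return (length_km * fiber_db_per_km
            + connectors * connector_db
            + splices * splice_db)

tx_power_dbm = -3.0         # transceiver launch power (assumed)
rx_sensitivity_dbm = -18.0  # receiver sensitivity (assumed)
budget_db = tx_power_dbm - rx_sensitivity_dbm   # 15 dB available

# A 20 km campus backbone with 2 connectors and 4 splices:
loss = link_loss_db(length_km=20, connectors=2, splices=4)
print(round(loss, 2), round(budget_db - loss, 2))  # 8.4 6.6
```

Under these assumptions, a 20 km single-mode run still leaves over 6 dB of margin — two hundred times the 100-meter ceiling that Ethernet standards impose on twisted-pair copper.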
Fiber also excels at handling emerging networking standards. As demands for higher speed grow due to cloud workloads, virtualization, large-scale data replication, and high-density computing environments, fiber can scale to meet those expectations. Upgrading fiber networks often involves replacing transceivers while leaving the existing fiber cable plant intact, making it cost-effective in the long term. Copper media, by contrast, often requires complete replacement to support new speed standards. Another benefit of fiber is its low latency and resistance to environmental variables. Unlike copper, fiber is not affected by temperature fluctuations, moisture, or corrosion in the same way. This long-term stability helps maintain consistent performance, especially in environments where uptime is critical.
Security is another area where fiber excels. Because fiber carries light signals rather than electrical pulses, intercepting a transmission requires physically tapping into the fiber core. This process is extremely difficult to accomplish without physically damaging the cable, and even slight disturbances often cause noticeable signal degradation, which can alert administrators to tampering. In contrast, copper cables radiate electrical signals, making them susceptible to passive interception with specialized tools.
From a design perspective, enterprise backbones must support a large volume of traffic flowing between distribution layers, data centers, and external connections. Backbone links often carry aggregated traffic from thousands of users, requiring stable, high-bandwidth connections to prevent bottlenecks. Fiber provides this capacity while maintaining reliability and performance consistency across the entire network. Even with modern advances in copper cabling, no copper solution matches the overall performance and noise immunity of fiber for backbone architecture.
While Cat 6 UTP, coaxial, and shielded twisted pair all serve important roles within networks, none meet the combined demands of speed, distance, and resilience required for backbone deployment. Fiber optic cabling remains the standard choice for enterprise backbones because it provides unmatched performance and future-ready scalability. Its immunity to electromagnetic interference, high bandwidth capacity, and exceptional transmission distance reinforce its role as the most appropriate solution for critical, high-performance network infrastructure.