CompTIA 220-1101 A+ Certification Exam: Core 1 Exam Dumps and Practice Test Questions Set 6 Q 76-90

Question 76: 

A technician is troubleshooting a computer that is experiencing random shutdowns. The technician checks the system and finds that the CPU temperature is reading 95°C under normal load. What should the technician do FIRST?

A) Replace the CPU with a new one

B) Check and reseat the CPU cooler

C) Replace the motherboard

D) Update the BIOS firmware

Answer: B

Explanation:

When a computer experiences random shutdowns and the CPU temperature reads 95°C under normal load, this indicates a severe overheating problem. Normal CPU temperatures under load typically fall between 60°C and 80°C, depending on the processor model, so 95°C is critically high and can trigger thermal protection mechanisms that cause the system to shut down to prevent permanent damage.

The first and most logical troubleshooting step is to check and reseat the CPU cooler. This addresses the most common causes of CPU overheating without requiring expensive component replacements. Several issues related to the CPU cooler can cause excessive temperatures. The cooler may not be properly seated on the CPU, creating an air gap that prevents efficient heat transfer. The thermal paste between the CPU and cooler may have dried out, degraded, or been improperly applied initially, reducing its ability to conduct heat. The cooler’s mounting mechanism may have loosened over time due to thermal cycling or physical stress. Additionally, the cooler fins might be clogged with dust, or the fan may not be spinning at the correct speed or may have failed entirely.

By checking and reseating the CPU cooler, the technician can inspect all these potential issues. This involves removing the cooler, cleaning off the old thermal paste from both the CPU and cooler surfaces using isopropyl alcohol, applying fresh thermal paste in the correct amount (typically a small pea-sized dot in the center), and properly reseating the cooler with appropriate mounting pressure. The technician should also verify that the fan is connected to the correct header on the motherboard and spinning at adequate RPMs.
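
Where software readings are available, a quick scripted check can confirm what the firmware sensor reports before any disassembly. The sketch below is a minimal example using the third-party psutil library; sensors_temperatures() is only implemented on some platforms (primarily Linux), and the 90°C threshold is an illustrative assumption, not a vendor specification.

```python
import psutil

CRITICAL_C = 90  # hypothetical threshold; check the CPU vendor's rated maximum

# Returns a dict of sensor chip -> list of readings; may be empty or
# unavailable on platforms without sensor support (mainly non-Linux).
temps = psutil.sensors_temperatures()
for chip, readings in temps.items():
    for reading in readings:
        note = "  <-- investigate cooling" if reading.current >= CRITICAL_C else ""
        print(f"{chip}/{reading.label or 'sensor'}: {reading.current:.1f} C{note}")
```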

A) is incorrect because replacing the CPU is an expensive and unnecessary step at this point. CPUs rarely fail due to overheating as they have built-in thermal protection. The high temperature reading indicates a cooling problem, not a CPU defect. Only after confirming the cooling system is functioning properly should CPU replacement be considered if problems persist.

C) is incorrect because replacing the motherboard doesn’t address the obvious cooling issue. The motherboard is simply reporting the temperature data from the CPU sensor. There’s no indication that the motherboard itself is faulty. This would be an expensive and time-consuming solution that wouldn’t solve the overheating problem.

D) is incorrect because updating the BIOS firmware is unlikely to resolve a physical overheating issue. While BIOS updates can sometimes improve fan control algorithms or temperature reporting accuracy, they won’t fix an improperly seated cooler, dried thermal paste, or a clogged heatsink. BIOS updates should be considered only after eliminating hardware-related cooling problems.

Question 77: 

A user reports that their laptop screen is very dim and difficult to read. The user can barely see the desktop icons. What is the MOST likely cause of this issue?

A) The display driver needs to be updated

B) The LCD backlight has failed

C) The screen resolution is set incorrectly

D) The graphics card has failed

Answer: B

Explanation:

When a laptop screen is very dim but still displays visible content such as desktop icons, this is a classic symptom of LCD backlight failure. The backlight is a critical component in LCD displays that provides the illumination necessary to make the screen content visible. Without proper backlighting, the LCD panel itself still functions and displays images, but they appear extremely dark and difficult to see.

LCD panels do not emit their own light; they work by blocking or allowing light from the backlight to pass through. The backlight is typically a series of LED strips or a cold cathode fluorescent lamp (CCFL) positioned behind or around the edges of the LCD panel. When the backlight fails or dims significantly, users can often still see faint images on the screen, especially in bright environments or when shining a flashlight directly at the screen. This is a key diagnostic indicator that distinguishes backlight failure from complete display failure.

Several factors can cause backlight failure. The backlight inverter, which regulates power to the backlight, may have failed. In older laptops using CCFL technology, the fluorescent tube itself may have reached the end of its lifespan. In newer LED-backlit displays, individual LED strips can fail or the power circuit supplying them may malfunction. Physical damage from drops or pressure on the screen can also damage backlight components.

The troubleshooting approach involves first confirming the backlight issue by shining a bright light at the screen from different angles. If you can see the desktop and icons clearly when illuminated externally, this confirms the LCD panel is working but the backlight is not. You can also try adjusting the brightness using function keys; if there’s no change in brightness levels, this further supports backlight failure. Connecting an external monitor can help verify that the graphics system is functioning properly and the issue is isolated to the laptop display.

A) is incorrect because display driver issues typically cause problems like incorrect colors, resolution issues, screen flickering, or complete failure to display anything. Drivers don’t control the physical backlight hardware. If the driver were corrupted, you would more likely see error messages, distorted images, or the system defaulting to basic VGA mode rather than a dim but otherwise functional display.

C) is incorrect because screen resolution settings affect how sharp and properly scaled the image appears, not the brightness or illumination of the display. An incorrect resolution might make icons appear stretched, compressed, or blurry, but the screen would still be properly lit and easily visible.

D) is incorrect because a failed graphics card would typically result in no display output at all, visual artifacts, distorted images, or system failure to boot. The graphics card processes and sends video signals but doesn’t control the physical backlight component. The fact that icons are visible, even if dim, indicates the graphics card is functioning and successfully sending display signals.

Question 78: 

A technician needs to configure a SOHO router to allow incoming traffic on port 3389 to reach a specific computer on the internal network. Which of the following should the technician configure?

A) Quality of Service (QoS)

B) Port forwarding

C) MAC filtering

D) DHCP reservation

Answer: B

Explanation:

Port forwarding is the correct configuration needed to allow incoming traffic on a specific port from the internet to reach a particular computer on the internal network. In this scenario, port 3389 is the default port used by Remote Desktop Protocol (RDP), which allows remote access to Windows computers. Port forwarding creates a mapping that tells the router to direct incoming traffic destined for a specific port to a designated internal IP address.

When a SOHO router performs Network Address Translation (NAT), it allows multiple devices on the internal network to share a single public IP address. However, this creates a challenge for incoming connections because the router doesn’t automatically know which internal device should receive unsolicited incoming traffic. Port forwarding solves this problem by creating explicit rules that map external port numbers to internal IP addresses and ports.

To configure port forwarding properly, the technician needs to specify several parameters in the router’s configuration interface. First, identify the external port number that will receive incoming traffic (3389 in this case). Second, specify the internal IP address of the destination computer on the local network. Third, specify the internal port number on that computer (also 3389 for RDP). Fourth, select the protocol type (TCP for RDP). Some routers also allow you to name the forwarding rule for easier management.

Best practices for port forwarding include assigning a static IP address or DHCP reservation to the target computer so the internal IP doesn’t change, which would break the forwarding rule. Security considerations are also important since opening ports exposes services to the internet. For RDP specifically, consider changing the default port 3389 to a non-standard port to reduce automated attack attempts, implementing strong passwords, enabling Network Level Authentication, and using VPN access instead of direct port forwarding when possible for enhanced security.
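
Once the rule is saved, it can be sanity-checked from a machine outside the network. The sketch below is a minimal illustration in Python; the public IP address is a placeholder, and a successful TCP connection only shows that the forwarded port is reachable, not that RDP itself is configured correctly.

```python
import socket

PUBLIC_IP = "203.0.113.10"  # placeholder: the router's public IP address
PORT = 3389                 # external port mapped to the internal RDP host

# Attempt a plain TCP connection; the router should forward it inward.
try:
    with socket.create_connection((PUBLIC_IP, PORT), timeout=5):
        print("Port 3389 is reachable; the forwarding rule appears to work.")
except OSError as exc:
    print(f"Connection failed ({exc}); recheck the rule and the host firewall.")
```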

A) is incorrect because Quality of Service (QoS) is used to prioritize certain types of network traffic over others to ensure consistent performance for critical applications. QoS manages bandwidth allocation and traffic priority but doesn’t direct incoming traffic from the internet to specific internal devices. It’s useful for prioritizing activities like video conferencing or VoIP over less time-sensitive traffic like file downloads.

C) is incorrect because MAC filtering is a security feature that controls which devices can connect to the network based on their hardware MAC addresses. It creates an allow list or deny list of devices but doesn’t handle routing incoming traffic to specific ports or devices. MAC filtering operates at Layer 2 and doesn’t involve port numbers or application-level traffic routing.

D) is incorrect because DHCP reservation ensures a specific device always receives the same IP address from the DHCP server based on its MAC address. While DHCP reservation is often used in conjunction with port forwarding to maintain a consistent internal IP address, it doesn’t actually forward incoming traffic. It only manages internal IP address assignment.

Question 79: 

A user’s smartphone is experiencing very short battery life after a recent OS update. The phone gets hot during normal use and the battery drains within a few hours. Which of the following should a technician do FIRST?

A) Replace the battery

B) Factory reset the device

C) Check for runaway applications

D) Downgrade the operating system

Answer: C

Explanation:

When a smartphone experiences sudden battery drain and excessive heat after an OS update, the first troubleshooting step should be to check for runaway applications. A runaway application is a program that consumes excessive system resources, typically due to software bugs, conflicts with the new OS version, or background processes that fail to terminate properly. This is the most common and easily reversible cause of post-update battery issues.

After an operating system update, apps that were functioning normally may develop compatibility issues. The OS update might change how background processes are managed, modify API behaviors, or introduce bugs that cause certain apps to malfunction. These malfunctioning apps can get stuck in loops, continuously attempt to sync data, fail to enter sleep states, or constantly use CPU and network resources. This excessive activity drains the battery rapidly and generates heat as the processor works continuously.

To check for runaway applications, the technician should access the phone’s battery usage statistics, typically found in Settings under Battery or Device Care. This screen shows which apps have consumed the most battery power over recent hours or days. Look for apps showing unusually high percentages of battery usage, especially those running in the background. The technician should also check the phone’s task manager or recent apps list to see which applications are currently active and consuming resources.

Common culprits include social media apps with aggressive background refresh settings, email clients continuously failing to sync, GPS-dependent apps constantly requesting location updates, or newly installed apps with bugs. Once identified, the technician can force stop the problematic app, clear its cache and data, check for app updates that might fix compatibility issues, or temporarily uninstall the app to confirm it’s the source of the problem.
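
On Android devices with USB debugging enabled, a similar battery report can be pulled from a workstation. The following is a hedged sketch, assuming the adb tool is installed and the phone is authorized; the report format varies by Android version, so it simply prints the raw output for manual review.

```python
import subprocess

# Dump battery statistics gathered since the phone was last fully charged.
result = subprocess.run(
    ["adb", "shell", "dumpsys", "batterystats", "--charged"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout[:2000])  # skim the top of the report for heavy drainers
```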

This approach is the least invasive and quickest diagnostic step. It doesn’t require hardware replacement, doesn’t erase user data, and can often immediately identify and resolve the issue. If a specific app is identified as the problem, updating or reinstalling just that app is much simpler than more drastic measures.

A) is incorrect because replacing the battery is an expensive and invasive solution that should only be considered after software issues are ruled out. The sudden onset of battery problems immediately after an OS update strongly suggests a software cause rather than hardware failure. Batteries typically degrade gradually over time, not suddenly after a software update. Premature battery replacement wastes time and money if the real issue is a software problem.

B) is incorrect because a factory reset erases all user data, installed apps, and personalized settings. This is a drastic measure that should be reserved as a last resort after less invasive troubleshooting steps have failed. While a factory reset might resolve the issue by removing problematic apps and configurations, it causes significant inconvenience to the user who must then restore backups and reconfigure their device.

D) is incorrect because downgrading the operating system is typically difficult or impossible on most modern smartphones due to manufacturer restrictions and security policies. Most manufacturers don’t officially support OS downgrades, and attempting unofficial methods can void warranties, cause security vulnerabilities, or brick the device. Additionally, the issue is more likely caused by app incompatibility than fundamental OS problems, so downgrading addresses the wrong cause.

Question 80:

A technician is installing a new wireless access point in a large office building. The technician needs to ensure the access point provides coverage throughout the floor while minimizing interference from neighboring access points. Which of the following channels should the technician use for a 2.4 GHz network?

A) Channels 2, 5, and 8

B) Channels 1, 6, and 11

C) Channels 3, 7, and 10

D) Channels 4, 8, and 12

Answer: B

Explanation:

For 2.4 GHz wireless networks, channels 1, 6, and 11 are the only non-overlapping channels available in most regions and represent the best practice for minimizing interference in multi-access point environments. Understanding why these specific channels are optimal requires knowledge of how 2.4 GHz WiFi channels are structured and how they overlap.

The 2.4 GHz WiFi band contains 11 channels in North America (14 in some other regions), numbered 1 through 11. Each channel is centered on a specific frequency (channel n is centered at 2407 + 5n MHz, so channel 1 sits at 2412 MHz), with adjacent channels separated by only 5 MHz. However, 802.11b/g/n networks using 2.4 GHz actually require 22 MHz of bandwidth to operate. This means each WiFi transmission spans roughly four channel numbers’ worth of spectrum. For example, a network operating on channel 6 actually uses frequencies that span from channel 4 through channel 8.

This 22 MHz requirement creates significant overlap between adjacent channels. If one access point uses channel 3 and a nearby access point uses channel 4, their signals will overlap substantially, causing adjacent-channel interference. This interference degrades performance for both networks as devices must wait for clear channels and retransmit corrupted data packets. The interference is particularly problematic because overlapping signals on adjacent channels create noise that’s difficult for receivers to filter out.

Channels 1, 6, and 11 are spaced far enough apart (5 channels or 25 MHz) that their 22 MHz transmissions don’t overlap. Channel 1 uses frequencies roughly from channels -1 to 3, channel 6 uses roughly 4 to 8, and channel 11 uses roughly 9 to 13. This spacing provides complete separation and eliminates adjacent-channel interference between properly configured access points. In a multi-access point deployment, the technician should alternate between these three channels strategically, ensuring that adjacent access points use different channels from this non-overlapping set.

When performing a site survey, the technician should use WiFi analysis tools to identify which channels neighboring buildings or other floors are using and select the least congested option from channels 1, 6, or 11. This approach maximizes throughput and reliability while supporting multiple access points in close proximity.
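
The overlap rule is easy to verify numerically. The short sketch below encodes the 2.4 GHz channel plan described above (centers at 2407 + 5n MHz, transmissions roughly 22 MHz wide) and tests each answer option’s channel trio for overlap.

```python
def center_mhz(channel: int) -> int:
    # 2.4 GHz Wi-Fi channel centers: channel 1 = 2412 MHz, spaced 5 MHz apart.
    return 2407 + 5 * channel

def overlaps(a: int, b: int, width_mhz: int = 22) -> bool:
    # Two ~22 MHz-wide transmissions overlap when their centers
    # are closer together than one transmission width.
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

for trio in [(1, 6, 11), (2, 5, 8), (3, 7, 10), (4, 8, 12)]:
    clash = any(
        overlaps(a, b)
        for i, a in enumerate(trio)
        for b in trio[i + 1:]
    )
    print(trio, "overlapping" if clash else "non-overlapping")
    # Only (1, 6, 11) prints as non-overlapping.
```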

A) is incorrect because channels 2, 5, and 8 all overlap significantly with each other and with the standard non-overlapping channels. Channel 2 overlaps heavily with both channel 1 and channel 6, channel 5 overlaps with channel 6, and channel 8 overlaps with both channel 6 and channel 11. Using these channels would create substantial interference and poor network performance throughout the office.

C) is incorrect because channels 3, 7, and 10 also create overlapping interference patterns. Channel 3 overlaps with both channels 1 and 6, channel 7 overlaps with both channels 6 and 11, and channel 10 overlaps with channel 11. This configuration would result in constant interference as transmissions on one access point would disrupt communications on others.

D) is incorrect because channels 4, 8, and 12 suffer from the same overlapping problems. Channel 4 overlaps with channel 6, channel 8 overlaps with both channels 6 and 11, and channel 12 (while legal in some regions) overlaps with channel 11 and may not be available in all countries, particularly North America where only channels 1-11 are authorized.

Question 81: 

A customer brings a laptop to a repair shop reporting that the laptop will not power on. The technician verifies that the laptop will not turn on even when connected to a known-good power adapter. Which of the following should the technician do NEXT?

A) Replace the motherboard

B) Remove the battery and attempt to power on with AC only

C) Replace the power adapter

D) Reseat the RAM modules

Answer: B

Explanation:

When a laptop fails to power on even with a known-good power adapter connected, the next logical troubleshooting step is to remove the battery and attempt to power on using AC power only. This diagnostic step helps isolate whether a faulty battery is preventing the system from powering up and is a simple, non-invasive test that doesn’t require replacing components or extensive disassembly.

A failed or shorted battery can prevent a laptop from powering on even when AC power is available. Laptop power management systems are designed with safety circuits that monitor battery health and status. If the battery is severely degraded, internally shorted, or experiencing a critical fault, the laptop’s power management circuitry may refuse to power on the system as a protective measure. This prevents potential damage from unstable power delivery, overheating, or electrical hazards that a faulty battery might cause.

By removing the battery completely, the technician eliminates it as a variable in the power delivery chain. If the laptop successfully powers on and operates normally with just AC power and no battery installed, this confirms the battery is faulty and needs replacement. The laptop’s power system can then deliver AC power directly to the motherboard without routing through or monitoring the battery. This is a definitive diagnostic test that clearly identifies the battery as the problem component.

The process is straightforward on most laptops. The technician should shut down the system completely, disconnect the AC adapter, and remove the battery according to the manufacturer’s instructions. Some laptops have easily removable batteries with release latches, while others require removing bottom panel screws to access internal batteries. After battery removal, reconnect only the AC adapter and attempt to power on the laptop. If it boots successfully, the battery is confirmed as faulty. If it still doesn’t power on, the technician can proceed to other troubleshooting steps knowing the battery isn’t the cause.

This approach follows proper troubleshooting methodology by testing one variable at a time and progressing from simple, reversible tests to more complex interventions. It avoids unnecessary component replacement and provides clear diagnostic information to guide subsequent troubleshooting if needed.

A) is incorrect because replacing the motherboard is an expensive, time-consuming, and premature solution at this stage of troubleshooting. The motherboard is one of the most expensive laptop components and should only be replaced after systematically eliminating all other possible causes. Many simpler issues can prevent power-on, including battery problems, power button failures, or loose connections. Jumping directly to motherboard replacement without proper diagnosis wastes time and money.

C) is incorrect because the problem statement explicitly says the technician has already tested with a known-good power adapter. This means the original adapter has already been ruled out as the cause. Replacing or testing the power adapter again would be redundant and wouldn’t provide new diagnostic information. The technician has already confirmed that AC power is available to the laptop.

D) is incorrect because reseating RAM modules addresses a different symptom profile. RAM issues typically cause the laptop to power on but fail during POST (power-on self-test), often indicated by beep codes, LED patterns, or the system powering on but displaying nothing on screen. When a laptop won’t power on at all with no signs of electrical activity, RAM is unlikely to be the cause since RAM problems don’t typically prevent initial power delivery.

Question 82: 

A user reports that their computer is running extremely slowly and displaying pop-up advertisements even when no browser is open. Which of the following is the MOST likely cause?

A) Outdated device drivers

B) Insufficient RAM

C) Malware infection

D) Failing hard drive

Answer: C

Explanation:

The combination of extremely slow computer performance and pop-up advertisements appearing even when no browser is open is a classic indicator of malware infection, specifically adware or potentially unwanted programs (PUPs). This symptom profile strongly suggests that malicious software has been installed on the system and is consuming resources while generating unwanted advertising.

Malware encompasses various types of malicious software including viruses, trojans, adware, spyware, and ransomware. Adware specifically generates revenue for attackers by displaying advertisements, redirecting web searches, tracking browsing habits, and sometimes installing additional unwanted software. The key diagnostic indicator in this scenario is that pop-ups appear even when the browser is closed, which means the adware is running as a system process or background service independent of browser activity.

Modern adware often installs itself through deceptive means such as software bundling, where legitimate free software includes optional bundled programs that users inadvertently accept during installation. Drive-by downloads from compromised websites, phishing emails with malicious attachments, and fake software updates are other common infection vectors. Once installed, adware typically modifies system startup items to launch automatically when the computer boots, ensuring persistent execution.

The performance degradation occurs because malware consumes system resources including CPU cycles, memory, and network bandwidth. Adware processes run continuously in the background, contact advertising servers, download ad content, monitor user activity, and sometimes mine cryptocurrency or participate in botnets. This resource consumption slows down legitimate applications and normal system operations. Some malware also modifies system files, registry entries, and browser settings, which can further degrade performance.

To address this issue, the technician should boot the computer into Safe Mode to prevent malware from loading automatically, then run comprehensive scans with updated anti-malware tools such as Windows Defender, Malwarebytes, or other reputable security software. The technician should check startup programs, scheduled tasks, browser extensions, and recently installed applications. Removing malware often requires multiple tools as different security programs detect different threat types. After cleaning, the technician should update the operating system and all software, verify browser settings have been restored to defaults, and educate the user about safe computing practices.

A) is incorrect because outdated device drivers typically cause specific hardware-related problems such as printer errors, graphics glitches, audio issues, or peripheral device malfunctions. Drivers don’t generate pop-up advertisements and wouldn’t cause the specific symptom of ads appearing outside the browser. While outdated drivers might contribute to minor performance issues, they wouldn’t cause the severe slowdown and advertising behavior described.

B) is incorrect because insufficient RAM causes performance problems like slow application launching, system sluggishness, and excessive hard drive activity from paging, but it doesn’t generate pop-up advertisements. RAM limitations would affect all applications equally based on memory demand and wouldn’t specifically cause advertising to appear when browsers are closed. While adding RAM might improve performance, it wouldn’t address the underlying cause of the pop-up ads.

D) is incorrect because a failing hard drive typically presents with specific symptoms such as clicking or grinding noises, frequent disk errors, extremely slow file access, system freezes during disk operations, or complete boot failures. While a failing drive causes performance degradation, it doesn’t generate pop-up advertisements. Hard drive diagnostic tools would show SMART errors or bad sectors if the drive were failing, but these hardware issues are unrelated to software-based advertising behavior.

Question 83: 

A technician needs to configure a printer to be accessible by all users on a network. The printer has an Ethernet port but no wireless capability. Which of the following should the technician configure?

A) Printer sharing on a workstation

B) A static IP address on the printer

C) Bluetooth pairing with each computer

D) USB connection to a print server

Answer: B

Explanation:

Configuring a static IP address on a network printer is the most appropriate and professional solution for making an Ethernet-capable printer accessible to all users on a network. A static IP address ensures the printer maintains a consistent, predictable network address that users and computers can reliably connect to without disruption from DHCP changes.

Network printers with Ethernet ports are designed to function as independent network devices rather than peripherals connected to individual computers. When properly configured with a static IP address, the printer connects directly to the network switch or router and becomes accessible to any authorized device on the same network. This approach provides several significant advantages over alternative connection methods.

Static IP addressing prevents connection problems that can occur with DHCP. When a printer receives its IP address dynamically from a DHCP server, that address can potentially change when the DHCP lease expires or the printer is powered off and back on. If the IP address changes, computers that have installed the printer using the old IP address will lose connectivity and generate error messages. Users would need to reinstall the printer with the new address, causing workflow disruptions and help desk calls. A static IP address eliminates this problem by ensuring the printer always uses the same address.

To configure a static IP address, the technician should access the printer’s built-in web interface or control panel. Most network printers have embedded web servers accessible by entering the printer’s current IP address into a browser. Through this interface, the technician can configure network settings including IP address, subnet mask, default gateway, and DNS servers. The static IP should be chosen from the same subnet as other network devices but outside the DHCP scope to prevent address conflicts. For example, if the network uses 192.168.1.0/24 and the DHCP server assigns addresses from 192.168.1.100-200, the printer might use 192.168.1.50.
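
That address arithmetic can be checked with Python’s standard ipaddress module. The sketch below mirrors the example figures above; the subnet, DHCP scope, and candidate address are assumptions taken from the text.

```python
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")
dhcp_start = ipaddress.ip_address("192.168.1.100")
dhcp_end = ipaddress.ip_address("192.168.1.200")
candidate = ipaddress.ip_address("192.168.1.50")  # proposed printer address

in_subnet = candidate in subnet
in_dhcp_scope = dhcp_start <= candidate <= dhcp_end
print(f"{candidate}: in subnet={in_subnet}, inside DHCP scope={in_dhcp_scope}")
# A safe static printer address prints: in subnet=True, inside DHCP scope=False
```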

After assigning the static IP, the technician should document the configuration and add the printer to computers using either the printer’s IP address directly or by creating a DNS record that maps a friendly hostname to the printer’s IP. Modern operating systems make this straightforward through Add Printer wizards that detect network printers or allow manual IP address entry.

A) is incorrect because printer sharing on a workstation requires keeping that specific computer powered on at all times for other users to access the printer. This creates a single point of failure and dependency. If the host workstation is shut down, crashes, or experiences problems, all network users lose printer access. Additionally, the host computer’s performance may degrade as it handles print jobs for multiple users. This approach is suitable only for small, temporary situations, not professional network deployments.

C) is incorrect because Bluetooth pairing is irrelevant when the printer has an Ethernet port and the goal is network-wide access. Bluetooth is a short-range wireless technology typically limited to about 30 feet and designed for point-to-point connections between specific devices. The question explicitly states the printer has no wireless capability and has an Ethernet port, making Bluetooth technically unavailable. Even if Bluetooth were available, it wouldn’t provide the network-wide access required.

D) is incorrect because using a USB connection to a print server is unnecessarily complex when the printer already has native Ethernet networking capability. This approach would require additional hardware (a dedicated print server device), extra configuration, and creates an additional potential failure point. While USB print servers are useful for adding network capability to USB-only printers, they’re redundant when the printer already includes Ethernet. Using the built-in Ethernet port is simpler, more reliable, and more cost-effective.

Question 84: 

A user is unable to access any websites but can successfully ping external IP addresses. Which of the following is the MOST likely cause?

A) The default gateway is misconfigured

B) The DNS server is unavailable

C) The subnet mask is incorrect

D) The network cable is damaged

Answer: B

Explanation:

When a user can successfully ping external IP addresses but cannot access websites, the most likely cause is that the DNS (Domain Name System) server is unavailable or misconfigured. This scenario represents a classic DNS resolution failure where network connectivity is functional but name resolution services are not working properly.

DNS is the internet’s phone book system that translates human-readable domain names like www.example.com into IP addresses like 192.0.2.1 that computers use to communicate. When you type a website address into a browser, the computer first contacts a DNS server to resolve that domain name into its corresponding IP address, then uses that IP address to establish the connection. Without working DNS, domain names cannot be resolved and websites cannot be accessed by their names.

The fact that the user can successfully ping external IP addresses proves that several network components are functioning correctly. The network interface card is working, the physical connection is intact, the computer has a valid IP address and subnet mask, and the default gateway is properly configured and routing traffic to the internet. IP-level connectivity is confirmed operational. The problem manifests only when domain name resolution is required, which directly implicates DNS as the failure point.

Several scenarios can cause DNS failure while maintaining IP connectivity. The configured DNS server addresses may be incorrect or point to servers that are offline. Network problems between the user’s location and the DNS servers may exist even though general internet connectivity works. The DNS service on the configured servers may have crashed or been misconfigured. Firewall rules might be blocking DNS traffic on port 53. Local DNS cache corruption could cause resolution failures. ISP DNS servers sometimes experience outages affecting all customers.

To troubleshoot and confirm DNS issues, the technician can use several diagnostic commands. The nslookup or dig commands can test DNS resolution directly by querying DNS servers for specific domain names. If these commands fail or time out, DNS problems are confirmed. Checking the computer’s DNS configuration using ipconfig /all on Windows or cat /etc/resolv.conf on Linux shows which DNS servers are configured. The technician can test by temporarily changing DNS servers to public alternatives like Google DNS (8.8.8.8 and 8.8.4.4) or Cloudflare DNS (1.1.1.1). If websites become accessible after changing DNS servers, this confirms the original DNS servers were the problem.

Resolution typically involves correcting the DNS server addresses in the network adapter configuration, ensuring the DNS servers are operational, flushing the local DNS cache using ipconfig /flushdns, or configuring alternative DNS servers if the original servers remain unavailable.
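
The same split diagnosis can be scripted. The sketch below is a minimal illustration: it attempts name resolution separately from raw IP reachability, using example.com and Google’s public DNS address 8.8.8.8 as illustrative targets.

```python
import socket

def dns_works(name: str = "example.com") -> bool:
    # Name resolution exercises the configured DNS servers.
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

def ip_reachable(ip: str = "8.8.8.8", port: int = 53) -> bool:
    # A direct TCP connection to a known IP tests raw connectivity, no DNS.
    try:
        with socket.create_connection((ip, port), timeout=3):
            return True
    except OSError:
        return False

if ip_reachable() and not dns_works():
    print("IP connectivity is fine but name resolution fails: suspect DNS.")
else:
    print("Symptoms do not match a pure DNS failure.")
```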

A) is incorrect because a misconfigured default gateway would prevent the computer from reaching any addresses outside the local network, including external IP addresses. The default gateway is the router that forwards traffic destined for other networks. If it were misconfigured, pinging external IP addresses would fail completely. The successful pings to external IPs prove the default gateway is correctly configured and routing traffic properly.

C) is incorrect because an incorrect subnet mask affects whether the computer correctly identifies which addresses are local versus remote, potentially causing connectivity issues within or outside the local network. However, an incorrect subnet mask wouldn’t create the specific symptom profile of working IP connectivity but failing name resolution. If the subnet mask were wrong enough to cause problems, pinging external IPs would likely also fail or behave inconsistently.

D) is incorrect because a damaged network cable would cause complete connectivity failure at the physical layer. There would be no network connectivity at all, and the user couldn’t ping any addresses, whether by name or IP. The successful pings to external IP addresses prove the physical connection, network cable, switch port, and all lower-level networking components are functioning properly.

Question 85: 

A technician is configuring a new SOHO wireless router. Which of the following should the technician change from the default settings to improve security? (Select TWO)

A) SSID

B) Channel

C) Administrator password

D) Firmware version

E) DHCP range

F) Wireless mode

Answer: A, C

Explanation:

When configuring a new SOHO wireless router, changing the default SSID and administrator password are two critical security measures that should always be implemented. These changes protect against common attack vectors that specifically target default router configurations and represent fundamental security best practices.

Changing the default SSID (Service Set Identifier) is important for several security reasons. The SSID is the network name that appears when users search for available WiFi networks. Router manufacturers typically set default SSIDs that include the brand name or model number, such as “NETGEAR” or “Linksys-5GHz.” Broadcasting these default names reveals information about the router’s manufacturer and potentially its model, which attackers can use to identify known vulnerabilities specific to that device. Security databases catalog vulnerabilities by manufacturer and model, so revealing this information makes targeted attacks easier.

Additionally, many routers with default SSIDs also retain default administrator credentials. Attackers often scan for networks broadcasting default SSIDs because these networks have a higher probability of having unchanged default passwords as well. Changing the SSID to something unique and non-identifying removes this easy reconnaissance information. The new SSID should not include personal information, addresses, or anything that identifies the location or owner. Something generic like “HomeNetwork” or a random alphanumeric string works well.

Changing the default administrator password is absolutely critical for router security. Most routers ship with well-known default credentials such as admin/admin or admin/password. These default credentials are publicly documented in user manuals and online databases. Attackers routinely attempt these known default passwords when they discover routers that haven’t been properly secured. If successful, they gain complete control over the router, allowing them to modify settings, redirect traffic, install malicious firmware, monitor network activity, or use the network for illegal activities.

A compromised router gives attackers access to all traffic passing through the network. They can perform man-in-the-middle attacks, intercept sensitive data, modify DNS settings to redirect users to phishing sites, or disable security features. The router is the gateway for all network traffic, making it a high-value target. Securing it with a strong, unique administrator password prevents unauthorized access to the router’s configuration interface.

The new administrator password should be long (at least 12 characters), complex (including uppercase and lowercase letters, numbers, and special characters), and unique (not used for any other accounts). Using a password manager to generate and store this password is recommended since administrator access is needed infrequently.
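
As a small illustration, Python’s standard secrets module can generate a password meeting those criteria; the character set and the 16-character length here are arbitrary choices.

```python
import secrets
import string

# Letters, digits, and symbols per the guidance above; 16 exceeds the 12 minimum.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)  # store it in a password manager; regenerate if a policy
                 # strictly requires every character class to appear
```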

B) is useful for optimizing performance and reducing interference but is not primarily a security setting. Changing the wireless channel helps avoid conflicts with neighboring networks but doesn’t protect against unauthorized access or attacks.

D) is important for maintaining security through patches but isn’t technically changing a default setting. Updating firmware is maintenance rather than configuration. However, checking for and installing firmware updates should be part of the initial setup process.

E) doesn’t significantly impact security. The DHCP range determines which IP addresses are automatically assigned to devices but doesn’t prevent unauthorized access or attacks. Changing it might be done for network organization but isn’t a security priority.

F) refers to 802.11 standards (b/g/n/ac/ax) and affects performance and compatibility but isn’t primarily a security configuration. While disabling very old modes like 802.11b might marginally improve security by forcing stronger encryption, this isn’t a critical default setting to change for security purposes.

Question 86: 

A user reports that their computer randomly restarts without warning and sometimes displays a blue screen with error messages. Which of the following is the MOST likely cause?

A) Corrupted operating system files

B) Insufficient power supply

C) Faulty RAM

D) Malware infection

Answer: C

Explanation:

Random restarts and blue screen errors (commonly called Blue Screen of Death or BSOD) are classic symptoms of faulty RAM (Random Access Memory). Memory failures create unpredictable system behavior because RAM is used constantly by the operating system and all running applications to temporarily store data and program instructions during execution. When memory chips contain defective cells or experience electrical failures, they can provide incorrect data to the CPU, causing system crashes and instability.

RAM operates at very high speeds with extremely tight timing requirements. Modern systems access memory billions of times per second. Even a single defective memory cell or timing error can cause catastrophic failures because corrupted data or instructions propagate through the system. When the CPU receives incorrect data from faulty RAM, it may attempt to execute invalid instructions, access protected memory regions, or make decisions based on corrupted information. These errors trigger protection mechanisms built into the operating system that force a system halt to prevent data corruption or further damage.

Blue screen errors specifically indicate that Windows has detected a critical error from which it cannot safely recover. The blue screen displays information about what caused the crash, including error codes and memory addresses. Common BSOD error messages associated with memory problems include MEMORY_MANAGEMENT, IRQL_NOT_LESS_OR_EQUAL, SYSTEM_SERVICE_EXCEPTION, and PAGE_FAULT_IN_NONPAGED_AREA. These errors often occur randomly because memory failures may be intermittent, affecting only certain memory addresses or appearing under specific conditions like temperature changes or particular usage patterns.

The random nature of the restarts is particularly indicative of hardware issues rather than software problems. Software issues tend to produce more consistent, reproducible failures that occur under specific circumstances or when launching particular applications. Hardware failures, especially memory issues, manifest more randomly because they depend on which physical memory locations are being accessed at any given moment.

To diagnose RAM problems, technicians should run comprehensive memory testing using tools like Windows Memory Diagnostic, MemTest86, or MemTest86+. These utilities perform extensive read/write tests across all memory addresses, checking for errors over multiple passes. Testing should run for several hours or overnight to catch intermittent failures. If errors are detected, the technician should test each RAM module individually if multiple modules are installed to identify which specific module is defective. Physical inspection should check for proper seating in memory slots, bent pins, or visible damage.

A) is incorrect because while corrupted operating system files can cause blue screens and system instability, they typically produce more consistent error patterns. Corruption issues often manifest when accessing specific system files or functions, and errors would likely recur in predictable ways. System file corruption usually results from improper shutdowns, malware, or disk errors rather than occurring randomly. Running System File Checker (sfc /scannow) would identify and repair corrupted system files if this were the cause.

B) is incorrect because an insufficient or failing power supply typically causes the system to lose power abruptly and restart without producing error messages. When the machine simply loses power, Windows never gets the chance to detect a fault and display a blue screen. The presence of blue screen errors with diagnostic codes indicates the operating system itself halted in response to a detected error, which points toward faulty memory rather than a sudden power cut.

D) is incorrect because while malware can cause system instability, it more commonly produces symptoms like slow performance, unwanted pop-ups, unauthorized network activity, disabled security software, or specific functionality being compromised. Malware sophisticated enough to cause random blue screens is relatively uncommon compared to memory hardware failures. Most malware aims to remain hidden and operational rather than crashing the system. Blue screens from malware would more likely be associated with rootkits or drivers, and would typically show up in security scans.

Question 87: 

A technician is setting up a new workstation and needs to enable hardware-based virtualization. Which of the following BIOS settings should be enabled?

A) TPM

B) Secure Boot

C) VT-x or AMD-V

D) UEFI mode

Answer: C

Explanation:

To enable hardware-based virtualization on a new workstation, the technician must enable VT-x (Intel Virtualization Technology) on Intel processors or AMD-V (AMD Virtualization) on AMD processors in the BIOS/UEFI settings. These hardware virtualization extensions are CPU features specifically designed to improve the performance and capabilities of virtual machines by allowing direct access to certain processor features.

Hardware virtualization technology represents a significant advancement over software-only virtualization. Before hardware virtualization extensions existed, hypervisors had to use software techniques like binary translation to emulate privileged CPU instructions, which created substantial performance overhead. Modern CPUs include dedicated virtualization instructions that allow guest operating systems to execute directly on the processor with minimal intervention from the hypervisor, dramatically improving virtual machine performance.

VT-x and AMD-V provide several critical capabilities for virtualization. They enable the CPU to create isolated execution environments where guest operating systems can run with near-native performance. These technologies provide additional privilege levels beyond the standard ring 0 through ring 3, allowing the hypervisor to maintain control while guest operating systems execute privileged instructions directly. Hardware virtualization also enables Extended Page Tables (EPT) or Rapid Virtualization Indexing (RVI), which accelerate memory management for virtual machines by allowing the CPU to translate guest virtual addresses directly to physical addresses without hypervisor involvement.

Most modern CPUs support hardware virtualization, but manufacturers often ship systems with these features disabled in the BIOS by default. This conservative approach ensures maximum compatibility and prevents potential issues if users aren’t planning to run virtualization software. To enable these features, the technician must access the BIOS/UEFI setup during system boot (typically by pressing Delete, F2, or F10 during POST), navigate to the CPU configuration or advanced settings section, and locate options labeled Intel Virtualization Technology, Intel VT-x, Vanderpool, AMD-V, or SVM Mode. The exact naming varies by manufacturer and BIOS version.

After enabling hardware virtualization in BIOS, virtualization software like VMware Workstation, Oracle VirtualBox, Microsoft Hyper-V, or KVM can utilize these features to create and run virtual machines efficiently. Without hardware virtualization enabled, these applications either refuse to run 64-bit guest operating systems, display error messages about virtualization support, or operate with severely degraded performance. Modern hypervisors typically require hardware virtualization for proper operation and will detect its absence during installation or when attempting to create virtual machines.
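
On Linux, whether the CPU advertises these extensions can be checked from the flags line of /proc/cpuinfo, as in the minimal sketch below; note that the flag shows CPU support only, and the feature can still be disabled in firmware.

```python
from pathlib import Path

# Collect the CPU feature flags reported by the kernel (Linux only).
flags = set()
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("flags"):
        flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x supported by this CPU")
elif "svm" in flags:
    print("AMD-V supported by this CPU")
else:
    print("No hardware virtualization flags found")
```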

A) is incorrect because TPM (Trusted Platform Module) is a security feature that provides hardware-based cryptographic functions, secure key storage, and platform integrity verification. TPM supports features like BitLocker drive encryption, Windows Hello, and secure boot measurements, but it is not related to virtualization capabilities. While TPM can be useful in virtualized environments for security purposes, it doesn’t enable virtualization functionality itself.

B) is incorrect because Secure Boot is a UEFI security feature that ensures only trusted, digitally signed operating system bootloaders can execute during system startup. It protects against bootkits and rootkits that attempt to load before the operating system. While Secure Boot is important for system security, it has no direct relationship to enabling virtualization capabilities. Some virtualization software may have compatibility considerations with Secure Boot, but it’s not what enables hardware virtualization.

D) is incorrect because UEFI mode is the modern firmware interface that replaces traditional BIOS, providing enhanced features like faster boot times, support for drives larger than 2TB, improved security, and a more sophisticated pre-boot environment. While UEFI is generally recommended for modern systems and may be preferred when running virtualization software, simply enabling UEFI mode doesn’t activate hardware virtualization capabilities. Systems can run in UEFI mode without virtualization enabled, and older BIOS mode systems can still use virtualization if the CPU extensions are activated.

Question 88: 

A user reports that when they print documents, the text appears faded and has horizontal white lines across the page. Which of the following is the MOST likely cause?

A) Low toner level

B) Worn fuser assembly

C) Dirty transfer roller

D) Incorrect paper type

Answer: A

Explanation:

Faded text with horizontal white lines across printed pages is a classic symptom of low toner level in a laser printer. As toner cartridges approach depletion, they cannot supply sufficient toner particles to create consistent, dark images across the entire page, resulting in lighter overall print quality with characteristic streaks or lines where toner coverage is insufficient.

Laser printers work through a complex electrophotographic process where a laser or LED creates a latent image on a photosensitive drum by selectively discharging areas corresponding to the image. Toner particles, which are fine plastic powder mixed with pigment and control agents, are attracted to the charged areas on the drum. These particles are then transferred to paper and permanently fused through heat and pressure. This process requires adequate toner supply throughout the entire imaging path to produce consistent, dark output.

When toner levels become low, several problems manifest. The toner cartridge may not distribute toner evenly across the developer roller, creating areas with insufficient toner coverage. The low toner condition often appears first as overall fading where the entire page appears lighter than normal. As the cartridge depletes further, horizontal white lines or streaks appear because certain sections of the developer roller aren’t receiving adequate toner supply. These lines are horizontal because they correspond to specific positions along the circumference of the rotating developer roller or drum that aren’t being properly coated with toner.

The pattern is typically consistent and repeating at intervals that match the circumference of the toner roller or drum. If you measure the distance between the white lines, it often corresponds to the circumference of one of the printer’s rotating components, since the defect repeats once per full revolution. This distinguishes low toner from other printing problems that might create irregular patterns or inconsistent results.
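
A small worked example of that measurement: the repeat interval equals one full revolution of a roller, i.e. its circumference (π × diameter). The roller diameters and the measured interval below are hypothetical values for illustration.

```python
import math

# Hypothetical roller diameters in millimetres for one printer model.
rollers_mm = {
    "developer roller": 20.0,
    "photosensitive drum": 30.0,
    "transfer roller": 14.0,
}
measured_repeat_mm = 62.8  # distance measured between repeating marks

for name, diameter in rollers_mm.items():
    circumference = math.pi * diameter  # distance covered per revolution
    if abs(circumference - measured_repeat_mm) < 2.0:  # 2 mm tolerance
        print(f"Repeat interval matches the {name} (~{circumference:.1f} mm)")
```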

Modern printers include toner level monitoring systems that estimate remaining toner through various methods including mechanical sensing, optical sensors, or algorithmic tracking based on page coverage. However, these systems aren’t always perfectly accurate. Printers often display “low toner” warnings well before the cartridge is truly empty to encourage timely replacement and prevent print quality issues. Despite warnings, many users continue printing until quality degradation becomes obvious.

The troubleshooting approach is straightforward. First, check the printer’s status display or software interface for toner level indicators. Remove the toner cartridge and gently rock it side to side to redistribute remaining toner, which may temporarily improve print quality. If the problem persists, install a new genuine toner cartridge. Print a test page after replacement to verify that print quality has returned to normal. If print quality doesn’t improve with a new cartridge, other components may be at fault, but low toner is by far the most common cause of this symptom pattern.

B) is incorrect because a worn fuser assembly typically causes different problems. The fuser applies heat and pressure to permanently bond toner to paper. Fuser problems usually manifest as toner that smears or rubs off easily, wrinkled pages, roller marks, or completely blank pages if the fuser fails entirely. A worn fuser might also cause hot offset (toner sticking to the fuser and transferring to subsequent pages) or cold offset. Faded printing with horizontal lines is not characteristic of fuser issues.

C) is incorrect because a dirty transfer roller typically causes different symptoms. The transfer roller applies an electrical charge to paper to attract toner from the drum. Contamination on the transfer roller usually results in repeating marks, spots, or lines at specific intervals matching the roller circumference, but these appear as dark marks or smudges rather than white lines. Extremely dirty transfer rollers might cause uneven toner transfer creating light areas, but this wouldn’t produce the specific pattern of faded text with horizontal white lines typical of low toner.

D) is incorrect because using incorrect paper type generally causes issues like poor toner adhesion, smearing, jams, or curled pages rather than faded printing with horizontal lines. Very smooth or coated papers might prevent proper toner adhesion creating light or splotchy printing, while rough papers might appear lighter because toner sits on surface peaks rather than filling valleys. However, paper type issues wouldn’t create the characteristic repeating horizontal white line pattern that indicates insufficient toner distribution from a low cartridge.

Question 89: 

A technician needs to replace a failed hard drive in a laptop. Which of the following should the technician do FIRST before beginning the repair?

A) Disconnect the power adapter

B) Remove the battery

C) Review the laptop’s service manual

D) Backup important data

Answer: C

Explanation:

Before beginning any laptop hardware repair, the technician should first review the laptop’s service manual or manufacturer documentation. This critical preparatory step ensures the technician understands the proper disassembly procedures, identifies the correct replacement parts, recognizes potential hazards, and follows manufacturer-recommended practices to avoid damaging the device or voiding warranties.

Laptop design varies dramatically between manufacturers and even between models from the same manufacturer. Unlike desktop computers which follow relatively standardized designs with accessible components, laptops are highly integrated devices with proprietary designs optimized for compactness and portability. Components are often layered, requiring removal of multiple parts to access the target component. What seems like a straightforward procedure can become complex without proper guidance. Some laptops require removing the keyboard and palm rest to access hard drives, while others provide easy access panels on the bottom. Attempting repairs without understanding the specific design risks breaking plastic clips, damaging ribbon cables, or creating situations where reassembly becomes impossible.

Service manuals provide essential information that isn’t intuitively obvious. They document the correct disassembly sequence, specify which screws secure which components (many laptops use different screw lengths and types in various locations), identify hidden screws under rubber feet or warranty stickers, warn about fragile components or cables that need special care, and provide torque specifications for proper reassembly. Manuals also indicate whether special tools are required, such as plastic pry tools to release clips without damage, or specific screwdriver types for security screws.

Reviewing documentation also helps identify the correct replacement part. Hard drives in laptops may use different form factors (2.5-inch HDD, M.2 SATA, M.2 NVMe, mSATA), interfaces (SATA, NVMe, PCIe), and physical dimensions. Installing an incompatible drive wastes time and can damage components. The service manual specifies exactly which storage types and capacities are supported, preventing costly mistakes.

Professional technicians always consult service documentation before beginning repairs, even for models they’ve worked on previously, because manufacturers sometimes make running changes to designs. This professional practice minimizes repair time, prevents damage, and ensures quality results. Many manufacturers provide official service manuals through their support websites, or technicians can access third-party repair databases and video guides for additional guidance.

After reviewing the service manual and understanding the procedure, the technician can then proceed with proper safety measures like disconnecting power and removing batteries as documented in the manual, which may specify a particular order of operations or warnings about capacitors that retain charge.

A) and B) are both important safety steps that prevent electrical shock and protect components from electrical damage, but they should be performed after reviewing the service manual. The manual will specify the correct sequence and may include additional safety requirements like holding the power button for several seconds after disconnecting power to discharge capacitors. Performing these steps without understanding the overall procedure might lead to doing them incorrectly or in the wrong order.

D) is incorrect in this context because the question states the hard drive has already failed. Data backup should ideally happen before failure occurs as part of regular maintenance. With a failed drive, data recovery would require specialized tools and techniques beyond simple backup procedures, and attempting to power on the laptop might cause further damage to a failing drive. If the drive were operational but failing, backing up data would be a priority, but once completely failed, the immediate task is replacement, after which data would need to be recovered from existing backups or through professional recovery services.

Question 90: 

A user’s smartphone shows a message indicating storage is almost full. Which of the following should the technician recommend FIRST to free up space?

A) Factory reset the device

B) Remove unused applications

C) Upgrade to a larger capacity phone

D) Disable background app refresh

Answer: B

Explanation:

When a smartphone displays a storage warning indicating that available space is almost full, the first and least invasive recommendation should be to remove unused applications. This approach directly addresses the storage problem by eliminating data that provides no value to the user, is immediately actionable, and carries no risk of data loss for content the user wants to keep.

Smartphone storage fills up from multiple sources including installed applications, application data and cache, photos and videos, downloaded files, music, messages with attachments, and system files. Among these categories, applications and their associated data often consume the largest amount of space. Modern apps have grown considerably in size, with many consuming hundreds of megabytes to several gigabytes each. Games are particularly notorious for large installations, some exceeding 5GB. Additionally, applications accumulate cache data, settings, and user-generated content over time that further increases their storage footprint.

Most smartphone users install numerous applications over time but continue actively using only a fraction of them. These unused apps provide no benefit while consuming valuable storage space. Identifying and removing them represents the quickest path to freeing significant storage with minimal effort. Both iOS and Android provide built-in tools to identify which applications are consuming the most space and when they were last used, making it easy to identify candidates for removal.

The process is straightforward and safe. On iOS, users can navigate to Settings > General > iPhone Storage to see a comprehensive list of apps sorted by size with recommendations for apps that haven’t been used recently. On Android, Settings > Storage shows storage usage breakdown with the ability to drill down into applications and their data consumption. Users can confidently uninstall applications knowing that if they need them again in the future, they can simply reinstall from the app store. Most apps store user data in the cloud or sync with online accounts, so reinstalling typically restores personalization and content automatically.

Beyond removing entire applications, users can also clear app caches through the storage settings. Cache files are temporary data stored to improve performance but can be safely deleted and will regenerate as needed. Photo and video management represents another significant opportunity, such as enabling cloud photo storage with automatic deletion of local copies, removing screenshots and duplicate photos, or transferring media to a computer. Downloaded files, old messages with large attachments, and offline content from streaming services should also be reviewed.

This recommendation empowers users to maintain control over their device while solving the immediate problem. It’s educational, helping users understand what consumes storage and develop better management habits. Unlike more drastic solutions, it preserves all data the user values while eliminating waste.

A) is incorrect because a factory reset is an extreme measure that erases all data, applications, settings, and personalizations from the device. This nuclear option should be reserved as an absolute last resort when troubleshooting serious software problems or preparing to sell/recycle a device. It creates massive inconvenience requiring complete device reconfiguration, reinstallation of all applications, and restoration from backups. For a simple storage problem, factory reset is vastly disproportionate and would frustrate users who lose customizations and potentially data if backups aren’t current.

C) is incorrect because upgrading to a larger capacity phone is expensive and unnecessary when storage issues can typically be resolved through simple management of existing content. This recommendation essentially suggests solving a software and data management problem through hardware expenditure. While storage upgrades might eventually be necessary if users legitimately need more space for essential content, it should never be the first recommendation. Users would be justifiably frustrated if told to buy a new phone when simple app removal could resolve the issue.

D) is incorrect because disabling background app refresh doesn’t directly free up storage space. Background app refresh controls whether apps can update content when not actively in use, which primarily affects battery life and cellular data usage rather than storage consumption. While restricting background activity might marginally slow the accumulation of cached data, it doesn’t remove existing content occupying storage. This recommendation addresses the wrong problem and wouldn’t provide meaningful storage relief to resolve the immediate warning message.