VMware 2V0-21.23 vSphere 8.x Professional Exam Dumps and Practice Test Questions Set 3 Q31-45
Question 31:
An administrator needs to configure vSphere High Availability (HA) for a cluster. Which component is responsible for monitoring host and virtual machine status?
A) vCenter Server only
B) Master host and slave hosts
C) ESXi host kernel only
D) Virtual machine tools
Answer: B
Explanation:
Master host and slave hosts are responsible for monitoring host and virtual machine status in vSphere High Availability clusters. The HA architecture uses a master-slave relationship where one host is elected as master and coordinates cluster activities while other hosts operate as slaves reporting their status to the master.
The master host performs critical monitoring functions including tracking the health of all hosts in the cluster through network heartbeats, monitoring datastores for datastore heartbeats when network communication fails, maintaining the cluster state and protected virtual machine inventory, making decisions about virtual machine restart placement when hosts fail, and coordinating with vCenter Server to report cluster status. The master host uses multiple communication channels to distinguish between actual failures and network partitions.
Slave hosts also perform important functions by sending heartbeats to the master host confirming they are operational, reporting virtual machine status and resource availability, executing restart operations for virtual machines assigned by the master, and monitoring local virtual machines for failures. Each slave maintains awareness of the master host and participates in master election if the current master fails.
The election process ensures a master always exists by holding elections when clusters form, when current masters fail, or when network partitions heal. Election considers factors like number of mounted datastores, host uptime, and management network connectivity. Having multiple hosts in the election pool provides resilience. The distributed architecture ensures HA continues functioning even if individual hosts fail.
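The election factors above can be sketched as a toy model. This is not the actual FDM election protocol, which also weighs management network state; the host names, numbers, and the lexical tie-break used here are illustrative assumptions only.

```python
# Illustrative model of HA master election (NOT the real FDM protocol).
# Hosts with access to the most datastores are preferred; ties are
# broken here by the lexically highest host identifier (an assumption).

def elect_master(hosts):
    """Pick the master: most mounted datastores wins, ties broken
    by the highest host identifier."""
    return max(hosts, key=lambda h: (h["datastores"], h["id"]))

cluster = [
    {"id": "host-101", "datastores": 4},
    {"id": "host-102", "datastores": 6},
    {"id": "host-103", "datastores": 6},  # ties with host-102, higher id
]
master = elect_master(cluster)
```

Running the election over this hypothetical cluster selects `host-103`: it ties with `host-102` on datastore count, and the tie-break favors it.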
Option A is incorrect because vCenter Server configures HA but the hosts themselves perform runtime monitoring. Option C is wrong because ESXi alone does not provide the distributed monitoring that HA requires. Option D is incorrect because VM tools provide guest information but do not monitor host health or coordinate HA responses.
Question 32:
A vSphere administrator needs to configure resource allocation for a virtual machine. Which advanced setting guarantees a minimum amount of CPU resources?
A) CPU limit
B) CPU reservation
C) CPU shares
D) CPU affinity
Answer: B
Explanation:
CPU reservation guarantees a minimum amount of CPU resources for a virtual machine by reserving specified CPU capacity that the hypervisor commits to provide whenever the virtual machine needs it. Reservations ensure critical workloads receive adequate resources even during contention when multiple virtual machines compete for the same physical resources.
Reservations are expressed in MHz for CPU and MB for memory. When an administrator sets a 2000 MHz CPU reservation, vSphere guarantees the virtual machine can always access 2000 MHz of CPU capacity. The hypervisor admits virtual machines only if sufficient unreserved capacity exists to satisfy their reservations. This admission control prevents overcommitment of guaranteed resources.
Reserved resources remain allocated to the virtual machine whether or not it actively uses them. A virtual machine with a 2000 MHz reservation but using only 500 MHz still reserves the full 2000 MHz, making it unavailable to other virtual machines. This ensures the reserved capacity is available when needed but may result in underutilized physical resources if reserved capacity significantly exceeds actual usage.
Use cases for reservations include guaranteeing resources for mission-critical applications, ensuring consistent performance for latency-sensitive workloads, meeting service level agreements requiring minimum resource availability, and protecting important virtual machines from resource starvation during contention. Administrators should set reservations carefully, reserving only what is truly needed, because excessive reservations reduce overall cluster capacity and may prevent virtual machine placement.
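The admission-control arithmetic described above is simple to illustrate. The sketch below is a simplified model, not the hypervisor's actual algorithm, and the capacity figures are made up for the example.

```python
# Simplified model of reservation-based admission control: a VM may
# power on only if the host's unreserved capacity covers its reservation.

def admit(vm_reservation_mhz, host_capacity_mhz, existing_reservations_mhz):
    """Return True if enough unreserved CPU capacity remains."""
    unreserved = host_capacity_mhz - sum(existing_reservations_mhz)
    return vm_reservation_mhz <= unreserved

# Hypothetical host with 48000 MHz of CPU capacity:
admit(2000, 48000, [20000, 20000])   # True: 8000 MHz still unreserved
admit(2000, 48000, [24000, 23000])   # False: only 1000 MHz unreserved
```

Note that the check uses reservations, not actual usage: a VM reserving 2000 MHz counts against capacity even while it idles, which is exactly why oversized reservations shrink the cluster.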
Option A is incorrect because CPU limits cap maximum usage rather than guaranteeing minimums. Option C is wrong because CPU shares determine relative priority during contention but provide no guarantees. Option D is incorrect because CPU affinity binds virtual machines to specific processors but does not guarantee resource amounts.
Question 33:
An administrator needs to perform maintenance on an ESXi host without causing virtual machine downtime. Which feature should be used?
A) Power off all virtual machines
B) vSphere vMotion
C) Restart the host with virtual machines running
D) Disconnect the host from vCenter
Answer: B
Explanation:
vSphere vMotion should be used to perform maintenance on an ESXi host without causing virtual machine downtime by live-migrating running virtual machines to other hosts in the cluster. This feature enables zero-downtime maintenance by moving virtual machines’ active memory, execution state, and network connections to destination hosts while applications continue running uninterrupted.
The vMotion migration process occurs in several phases. During pre-migration, vMotion verifies that the destination host has adequate resources and compatible configurations. Memory pre-copy transfers memory pages while the virtual machine continues running on the source host. Quiescing briefly stuns the virtual machine while the final memory state and device states transfer. Resumption starts the virtual machine on the destination host while network connections redirect to the new location. The entire process typically completes within seconds with no perceptible interruption to users.
Maintenance mode simplifies the process by automatically migrating or evacuating all virtual machines from a host when maintenance mode is entered. Administrators can configure whether DRS automatically migrates virtual machines or whether manual intervention is required. Once all virtual machines are evacuated, the host enters maintenance mode and is ready for hardware maintenance, firmware updates, or configuration changes without affecting running workloads.
Requirements for successful vMotion include shared storage accessible to both source and destination hosts, compatible CPU features between hosts, sufficient network bandwidth for memory transfer, and identical virtual machine network and device configurations on both hosts. Distributed Resource Scheduler can automate vMotion for load balancing and maintenance workflows.
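The iterative memory pre-copy phase can be sketched as a convergence loop. This is a toy model of the idea, not VMware's implementation; the page counts and dirty fraction are invented, and real vMotion uses adaptive techniques beyond this.

```python
# Toy model of iterative pre-copy: each pass re-sends the pages the
# running VM dirtied during the previous pass, until the remaining
# dirty set is small enough for a brief stun-and-switch.
# Converges only if dirty_fraction < 1 (network outpaces the dirty rate).

def precopy_passes(total_pages, dirty_fraction, switchover_threshold):
    """Return how many copy passes run before the final switchover."""
    to_send = total_pages
    passes = 0
    while to_send > switchover_threshold:
        passes += 1
        to_send = int(to_send * dirty_fraction)  # pages re-dirtied this pass
    return passes

# 1M pages, 10% re-dirtied per pass, switch over under 1000 dirty pages:
# pass 1 -> 100000 dirty, pass 2 -> 10000, pass 3 -> 1000 -> switch over.
precopy_passes(1_000_000, 0.1, 1_000)
```

The model shows why vMotion needs sufficient network bandwidth: if the workload dirties memory faster than the network can copy it (`dirty_fraction >= 1`), the loop never converges.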
Option A is incorrect because powering off virtual machines causes downtime, which the question explicitly seeks to avoid. Option C is wrong because restarting with virtual machines running would crash them. Option D is incorrect because disconnecting the host does not evacuate virtual machines or enable maintenance.
Question 34:
A vSphere administrator needs to ensure that specific virtual machines always run on separate physical hosts. Which DRS rule should be configured?
A) Affinity rule
B) Anti-affinity rule
C) Virtual machine to host affinity rule
D) No DRS rule needed
Answer: B
Explanation:
An anti-affinity rule should be configured to ensure specific virtual machines always run on separate physical hosts. This DRS rule type prevents specified virtual machines from running on the same ESXi host, providing separation for redundancy, fault tolerance, or licensing compliance purposes.
Anti-affinity rules address several use cases. Application tier redundancy ensures that multiple instances of the same application component run on different hosts so that single host failure does not take down the entire application tier. For example, anti-affinity rules prevent all web server virtual machines from running on the same host. Licensing compliance satisfies requirements where license terms prohibit running certain software combinations on the same physical hardware. Fault domain separation ensures backup and primary virtual machines remain on separate hosts.
DRS evaluates anti-affinity rules continuously during initial placement when virtual machines power on, during load balancing when DRS considers migrations to optimize resource utilization, and during host maintenance when virtual machines must be evacuated. If rule constraints cannot be satisfied, DRS will not perform migrations that would violate rules. Administrators can configure rules as mandatory (must rules) that DRS never violates or preferential (should rules) that DRS tries to honor but may violate if necessary.
Implementation requires identifying virtual machines that require separation and creating VM-VM anti-affinity rules listing those virtual machines. Multiple rules can be created for different separation requirements. Rules are cluster-specific, so virtual machines in multiple clusters require rules in each cluster. Monitoring ensures rules remain satisfied as the environment changes.
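The violation check DRS performs can be illustrated with a small placement validator. This is a simplified sketch, assuming a placement map and rule tuples invented for the example, not DRS's actual data structures.

```python
# Simplified VM-VM anti-affinity check: a rule is violated when two or
# more of its member VMs land on the same host.

def violated_rules(placement, antiaffinity_rules):
    """placement maps VM name -> host; rules are tuples of VM names."""
    broken = []
    for rule in antiaffinity_rules:
        hosts = [placement[vm] for vm in rule if vm in placement]
        if len(hosts) != len(set(hosts)):   # a host repeats -> violation
            broken.append(rule)
    return broken

placement = {"web-01": "esx-a", "web-02": "esx-b", "web-03": "esx-a"}
rules = [("web-01", "web-02", "web-03")]
violated_rules(placement, rules)   # rule broken: web-01/web-03 share esx-a
```

A placement engine would reject any candidate migration or power-on whose resulting placement makes this list non-empty (for mandatory "must" rules).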
Option A is incorrect because affinity rules keep virtual machines together rather than separating them. Option C is wrong because VM-to-host rules specify which hosts virtual machines can run on but do not ensure separation. Option D is incorrect because without rules, DRS may place virtual machines on the same host.
Question 35:
An administrator needs to configure vSphere DRS to automatically balance workloads across cluster hosts. Which automation level enables fully automated load balancing?
A) Manual mode
B) Partially automated mode
C) Fully automated mode
D) Disabled mode
Answer: C
Explanation:
Fully automated mode enables vSphere DRS to automatically balance workloads across cluster hosts by making both initial placement decisions when virtual machines power on and ongoing migration decisions to optimize resource utilization without requiring administrator approval. This automation level maximizes DRS benefits by continuously optimizing cluster resource distribution.
Fully automated DRS operation includes multiple automated activities. Initial placement automatically selects which host should run a virtual machine when it powers on, considering current resource utilization, reservations, limits, shares, affinity rules, and host compatibility. Load balancing continuously monitors cluster resource utilization and automatically migrates virtual machines when imbalances exceed configured thresholds. Power management can automatically consolidate workloads and power down underutilized hosts during low-demand periods.
DRS decision-making balances multiple factors including CPU and memory utilization across hosts, reservation and limit constraints, affinity and anti-affinity rules, VM-host affinity specifications, and migration costs versus benefits. The migration threshold setting controls how aggressively DRS balances loads, with conservative settings requiring significant imbalances before migrating and aggressive settings optimizing frequently. Priority levels apply migrations in order of benefit.
Fully automated mode is appropriate for production environments with homogeneous hosts, stable workloads, and confidence in DRS decisions. Organizations may use partially automated mode initially to review recommendations before enabling full automation. Monitoring DRS recommendations and migrations ensures automation behaves as expected and that migration frequency is appropriate for the environment.
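The migration-threshold idea can be sketched as a spread-of-load check. DRS's real imbalance metric is more sophisticated than a raw standard deviation, so treat this as a simplified stand-in with invented threshold values.

```python
# Simplified DRS-style imbalance check: rebalance when per-host
# utilization spreads wider than the configured threshold.
from statistics import pstdev

def needs_rebalance(host_utilizations, threshold):
    """host_utilizations: fraction of capacity in use per host."""
    return pstdev(host_utilizations) > threshold

needs_rebalance([0.90, 0.20, 0.30], 0.2)   # True: one host is hot
needs_rebalance([0.45, 0.50, 0.48], 0.2)   # False: cluster is balanced
```

A conservative migration threshold corresponds to a large `threshold` value (only gross imbalances trigger moves); an aggressive setting corresponds to a small one.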
Option A is incorrect because manual mode only provides recommendations without automating any actions. Option B is wrong because partially automated mode automates initial placement but requires approval for migrations. Option D is incorrect because disabled mode turns off DRS entirely.
Question 36:
A vSphere administrator needs to secure virtual machine traffic within a datacenter. Which vSphere feature encrypts virtual machine network traffic?
A) Standard port groups
B) VM Encryption
C) Distributed Port Groups
D) Network I/O Control
Answer: B
Explanation:
VM Encryption secures virtual machine traffic within a datacenter by encrypting virtual machine files, including memory snapshots and vMotion traffic, protecting data confidentiality even if physical media is compromised or network traffic is intercepted. This feature provides comprehensive encryption for virtual machine data at rest and in motion.
VM Encryption protects multiple components of virtual machine data. Disk encryption encrypts virtual machine disk files (VMDKs) preventing unauthorized access to data on storage arrays or backup media. Memory encryption protects the contents of virtual machine memory including active data and encryption keys. vMotion encryption secures memory transfer during live migrations preventing interception of sensitive data during host-to-host transfers. Snapshot encryption protects memory state and disk contents in snapshots.
Key management infrastructure supports encryption through integration with external Key Management Servers (KMS) that securely store and manage encryption keys. vSphere requests keys from KMS when encrypting or decrypting virtual machines. Keys never persist in clear text on hosts or in vCenter. The separation between encrypted data and keys ensures that compromising storage does not compromise data confidentiality. Multiple KMS servers provide redundancy.
Encryption policies define protection requirements at virtual machine or virtual disk granularity. Administrators can encrypt entire virtual machines, individual disks, or specific snapshots based on data sensitivity and compliance requirements. Performance impact is minimal on modern processors with hardware acceleration for encryption operations. Encrypted virtual machines function normally with vSphere features like DRS, HA, and snapshots.
Option A is incorrect because standard port groups provide network connectivity but not encryption. Option C is wrong because distributed port groups are network configuration constructs without encryption capabilities. Option D is incorrect because Network I/O Control manages bandwidth allocation rather than providing encryption.
Question 37:
An administrator needs to configure storage for a vSphere cluster. Which storage type provides the lowest latency for I/O-intensive applications?
A) NFS datastores
B) iSCSI datastores
C) vSAN with all-flash configuration
D) Fibre Channel datastores with spinning disks
Answer: C
Explanation:
vSAN with all-flash configuration provides the lowest latency for I/O-intensive applications by using flash devices for both capacity and caching tiers, eliminating the mechanical latency of spinning disks and optimizing data placement for maximum performance. All-flash vSAN is specifically designed for demanding workloads requiring consistent low latency and high throughput.
All-flash vSAN architecture uses flash devices throughout the storage stack. Capacity tier devices store the persistent data using high-capacity SSDs optimized for read and write endurance. Cache tier devices use high-performance SSDs or NVMe drives optimized for extremely low latency and high IOPS. The cache absorbs write operations and accelerates reads for frequently accessed data. Intelligent caching algorithms maximize cache hit rates, keeping hot data on the fastest devices.
Performance characteristics of all-flash vSAN significantly exceed traditional storage. Latency typically measures in microseconds rather than milliseconds. IOPS capacity scales linearly as more nodes are added to the cluster. Throughput can saturate network bandwidth before reaching storage limits. Consistent performance eliminates the variability caused by mechanical disk seek times. These characteristics make all-flash vSAN ideal for databases, virtual desktop infrastructure, and latency-sensitive applications.
Additional vSAN features enhance all-flash performance including deduplication and compression reducing the physical capacity required, RAID configurations providing redundancy without sacrificing performance, and erasure coding offering space efficiency for larger clusters. Software-defined architecture enables policy-based management where administrators define storage requirements and vSAN automatically provisions appropriate resources.
Option A is incorrect because NFS traverses network layers adding latency compared to local flash storage. Option B is wrong because iSCSI also includes network latency and typically uses spinning disks. Option D is incorrect because spinning disks have mechanical latency far exceeding flash devices.
Question 38:
A vSphere administrator needs to configure network redundancy for virtual machine traffic. Which feature provides automatic failover between physical network adapters?
A) Single uplink configuration
B) NIC teaming
C) Static route configuration
D) Manual adapter selection
Answer: B
Explanation:
NIC teaming provides automatic failover between physical network adapters by grouping multiple physical adapters into logical teams that present as single network connections to virtual machines. Teaming provides both redundancy through automatic failover when adapters fail and increased bandwidth through load balancing across multiple adapters.
NIC teaming configuration includes multiple policy settings. Failover order specifies active adapters handling traffic normally and standby adapters taking over when active adapters fail. Load balancing policies determine how traffic distributes across active adapters, with options including routing based on originating virtual port, source MAC hash, IP hash, or physical NIC load. Network failure detection uses either link status only or enhanced beacon probing to detect failures.
Failover behavior ensures network connectivity despite adapter or switch failures. When an active adapter fails, teaming automatically shifts traffic to remaining active adapters or standby adapters if no active adapters remain. Failback settings control whether traffic returns to the original adapter when it recovers. Notify switches settings send RARP packets informing switches about new MAC address locations after failover.
Load balancing optimizes bandwidth utilization by distributing traffic across multiple active adapters. Route based on originating virtual port assigns each virtual machine to a specific physical adapter. IP hash distributes different sessions across adapters but requires static link aggregation (EtherChannel) configured on the physical switch. Physical NIC load dynamically balances based on current adapter utilization. The appropriate policy depends on workload characteristics and switch capabilities.
Option A is incorrect because single uplink provides no redundancy, creating single points of failure. Option C is wrong because static routes define network paths rather than providing adapter redundancy. Option D is incorrect because manual selection requires administrator intervention rather than providing automatic failover.
Question 39:
An administrator needs to monitor vSphere cluster performance. Which metric indicates that CPU resources are overcommitted?
A) Low CPU usage percentage
B) High CPU ready time
C) Low network throughput
D) High disk latency
Answer: B
Explanation:
High CPU ready time indicates that CPU resources are overcommitted by measuring how long virtual machines wait for CPU resources to become available. Ready time represents the percentage of time virtual machines are ready to execute but cannot access CPU cycles because the hypervisor is scheduling other virtual machines on physical processors.
CPU ready time accumulates when virtual machines compete for physical CPU resources. A single virtual CPU accumulates ready time when it is ready to execute but all physical CPU cores are busy executing other virtual machine workloads or hypervisor operations. Multiple virtual CPUs in the same virtual machine may accumulate ready time independently. High ready time directly impacts application performance because virtual machines cannot execute during ready periods.
Acceptable ready time levels depend on application sensitivity. General guidelines suggest ready time below 5 percent indicates acceptable performance, while ready time consistently above 10 percent indicates resource contention requiring attention. Latency-sensitive applications may require even lower ready time. Monitoring should evaluate both average and peak ready time because brief spikes may impact interactive workloads even if averages appear reasonable.
Causes of high ready time include overprovisioning virtual CPUs on virtual machines, running too many virtual machines on insufficient physical cores, inadequate CPU reservations for critical workloads, and CPU limits constraining virtual machines below their needs. Remediation includes reducing virtual CPU counts to match actual application requirements, adding physical hosts to increase cluster capacity, adjusting resource pools and reservations, and using DRS to balance workloads more effectively.
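Ready time is reported by vCenter real-time charts as a millisecond summation per 20-second sample, which converts to the percentages discussed above as follows:

```python
# Convert a CPU ready summation (milliseconds per sample interval) to a
# percentage. vCenter real-time charts sample every 20 seconds.

def cpu_ready_percent(ready_summation_ms, interval_s=20):
    return ready_summation_ms / (interval_s * 1000) * 100

cpu_ready_percent(2000)   # 2000 ms ready in a 20 s sample = 10%
cpu_ready_percent(800)    # 800 ms = 4%, under the 5% guideline
```

By these guidelines, a vCPU showing a 2000 ms summation is at 10 percent ready time and warrants investigation, while 800 ms (4 percent) is generally acceptable.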
Option A is incorrect because low CPU usage suggests underutilization rather than overcommitment. Option C is wrong because network throughput measures network capacity, not CPU availability. Option D is incorrect because disk latency measures storage performance rather than CPU overcommitment.
Question 40:
A vSphere administrator needs to configure virtual machine snapshots. Which statement accurately describes snapshot behavior?
A) Snapshots are full backups of virtual machines
B) Snapshots capture virtual machine state and allow point-in-time recovery
C) Snapshots replace the need for backups
D) Snapshots have no performance impact
Answer: B
Explanation:
Snapshots capture virtual machine state and allow point-in-time recovery by preserving the virtual machine disk state, memory contents, and power state at the moment the snapshot is created. This capability enables administrators to return virtual machines to previous states if changes cause problems or testing needs to be reversed.
Snapshot architecture uses delta disks to track changes after snapshot creation. The base disk file becomes read-only when a snapshot is taken, and all subsequent writes go to a new delta disk file. Multiple snapshots create chains of delta files, each containing changes since the previous snapshot. Consolidation merges delta files back into base disks when snapshots are deleted. Memory snapshots preserve RAM contents enabling recovery to running states.
Use cases for snapshots include providing safety nets before making configuration changes or applying updates, enabling quick rollback if changes cause problems, capturing known-good states for testing, and providing short-term protection during maintenance windows. Snapshots are not substitutes for backups because they reside on the same storage as virtual machines and do not protect against storage failures or provide long-term retention.
Performance considerations include write performance overhead because all writes must update delta files, which is slower than writing directly to base disks. Storage consumption grows as delta files expand with changes. Snapshot chains degrade performance more than single snapshots. Management best practices include deleting snapshots promptly after they are no longer needed, avoiding long-running snapshots that accumulate large delta files, and not creating snapshots of virtual machines with high write rates.
Option A is incorrect because snapshots are incremental point-in-time captures, not full backups. Option C is wrong because snapshots do not replace backups due to their limitations and risks. Option D is incorrect because snapshots do impact performance, particularly with multiple snapshots or long snapshot lifespans.
Question 41:
An administrator needs to configure vSphere Update Manager to patch ESXi hosts. Which mode should be used to minimize downtime?
A) Manual update with full cluster downtime
B) Rolling update with maintenance mode
C) Update all hosts simultaneously
D) Skip host reboots
Answer: B
Explanation:
Rolling update with maintenance mode minimizes downtime when patching ESXi hosts by updating hosts sequentially rather than simultaneously, ensuring that workloads remain running on non-updating hosts throughout the patching process. This approach provides near-zero downtime for virtual machines in clusters with DRS and vMotion capabilities.
The rolling update process evacuates one host at a time using maintenance mode to automatically migrate virtual machines to other cluster hosts. Once virtual machines are evacuated, Update Manager installs patches and reboots the host if required. After the host returns from maintenance mode and passes health checks, the process continues with the next host. This sequential approach ensures that only one host is unavailable at any time, maintaining cluster capacity and virtual machine availability.
Update Manager orchestration handles complex update workflows including downloading patches from VMware repositories, creating host baselines defining which patches to apply, scanning hosts to identify missing patches, staging patches to hosts before maintenance windows, and remediating hosts by installing patches and rebooting. Administrators configure remediation settings including cluster remediation order, parallel remediation limits, and failure handling policies.
Prerequisites for successful rolling updates include sufficient cluster capacity to run all virtual machines on N-1 hosts while one host is down, properly configured DRS for automatic virtual machine migration, shared storage accessible to all hosts for vMotion, and compatible networking on all hosts for virtual machine connectivity. Validation testing in non-production environments before production updates reduces risks.
Option A is incorrect because manual updates with full cluster downtime cause extended virtual machine unavailability. Option C is wrong because updating all hosts simultaneously takes down the entire cluster. Option D is incorrect because skipping reboots may leave patches ineffective and hosts in inconsistent states.
Question 42:
A vSphere administrator needs to delegate permissions for specific virtual machines without granting full datacenter access. Which vSphere feature enables granular permission delegation?
A) Assign Administrator role at root level
B) Assign specific roles at virtual machine folder level
C) Use the same permissions for all objects
D) Avoid permission delegation
Answer: B
Explanation:
Assigning specific roles at virtual machine folder level enables granular permission delegation by applying permissions to organizational containers that group related virtual machines. This approach limits user access to only the virtual machines they need to manage without exposing the entire datacenter or unnecessary resources.
Permission delegation in vSphere uses a hierarchical model where permissions assigned at higher levels propagate to child objects unless explicitly overridden. Folder-level permissions provide the appropriate balance between manageability and granularity. Administrators create folders organizing virtual machines by function, department, application, or other logical groupings, then assign roles granting appropriate privileges to users or groups who should manage those specific virtual machines.
Roles define sets of privileges determining what actions users can perform. Built-in roles like Virtual Machine Power User enable operations like powering virtual machines on and off, modifying hardware, and creating snapshots without granting administrative capabilities. Custom roles combine specific privileges matching unique organizational requirements. Roles are reusable across multiple permission assignments, simplifying management.
Permission management best practices include using Active Directory groups rather than individual user accounts for easier management, applying least privilege principles granting only necessary permissions, using folders to organize objects by ownership or function, documenting permission structures for transparency, and regularly reviewing permissions to remove unnecessary access. Separation of duties ensures appropriate checks on administrative actions.
Option A is incorrect because Administrator role at root level grants excessive privileges across the entire environment. Option C is wrong because uniform permissions do not support principle of least privilege or role separation. Option D is incorrect because avoiding delegation requires all management through central administrators, creating bottlenecks and scalability issues.
Question 43:
An administrator needs to configure vSphere HA admission control. Which policy ensures sufficient capacity for virtual machine failover?
A) Disable admission control
B) Host failures cluster tolerates policy
C) No failover capacity reservation
D) Allow overcommitment of all resources
Answer: B
Explanation:
Host failures cluster tolerates policy ensures sufficient capacity for virtual machine failover by reserving enough cluster resources to restart all protected virtual machines if a specified number of hosts fail. This admission control mechanism prevents overcommitment that would make failover impossible when failures occur.
Host failures policy calculates required reserve capacity based on how many host failures the cluster should tolerate. If the policy specifies tolerating two host failures, HA reserves enough resources to restart all virtual machines assuming two hosts fail simultaneously. Available resources on remaining hosts must exceed the reserved capacity. HA prevents powering on additional virtual machines or resource modifications that would violate the policy.
Capacity calculation considers virtual machine resource requirements including configured memory, CPU reservations, and virtual machine overhead. HA uses worst-case assumptions, calculating whether virtual machines could restart if the largest hosts fail, since larger host failures create larger capacity deficits. The calculation includes slot sizes, which represent the resource requirements for the most demanding virtual machine, determining how many protected virtual machines can run on remaining hosts.
Alternative admission control policies include percentage of cluster resources reserved, where HA reserves a percentage of CPU and memory across the cluster, and specify failover hosts, where dedicated hosts are reserved for failover capacity. The host failures policy is typically preferred because it directly maps to business continuity requirements expressed as how many host failures should be tolerated.
Option A is incorrect because disabling admission control allows overcommitment making failover unreliable. Option C is wrong because no reservation prevents guaranteed failover capacity. Option D is incorrect because allowing overcommitment defeats the purpose of admission control.
Question 44:
A vSphere administrator needs to optimize storage performance for virtual machine disks. Which vSphere feature aligns I/O to underlying storage block boundaries?
A) Disable all I/O optimization
B) Partition alignment
C) Random disk placement
D) Ignore storage geometry
Answer: B
Explanation:
Partition alignment optimizes storage performance by ensuring that file system partitions start at addresses matching underlying storage block boundaries. Misalignment causes individual I/O operations to span multiple physical storage blocks, requiring multiple reads or writes to satisfy single logical operations, significantly degrading performance.
Misalignment occurs when partition starting positions do not align with physical storage block boundaries. Legacy partitioning tools created partitions starting at sector 63, which does not align with modern 4KB or larger physical blocks. Each misaligned I/O operation requires accessing two physical blocks instead of one, effectively doubling I/O operations and halving throughput. Performance degradation is particularly severe for write-intensive workloads on storage arrays using 4KB or larger blocks.
Modern operating systems and virtualization platforms address alignment automatically. Windows Server 2008 and later create aligned partitions by default. VMware VMFS file systems align virtual disks properly. Linux tools like fdisk support aligned partition creation. However, virtual machines created from physical-to-virtual conversions or older templates may have misaligned partitions requiring correction.
Alignment is verified by inspecting a partition's starting sector: partitions should start at a multiple of the physical block size, typically sector 2048 or higher. Remediation requires recreating partitions with proper alignment or using tools that can adjust alignment without data loss. Performance improvements from alignment can be dramatic, particularly on arrays with large block sizes. Alignment should be verified whenever storage performance issues are investigated.
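The alignment check described above reduces to simple modular arithmetic, sketched here in Python. The 512-byte logical sector and 4 KB physical block are common values but are assumptions for this example.

```python
# Check whether a partition's starting sector is aligned to the physical
# block size (illustrative; assumes 512-byte logical sectors).
SECTOR_BYTES = 512

def is_aligned(start_sector, physical_block_bytes=4096):
    """A partition is aligned when its byte offset is a multiple of the block size."""
    return (start_sector * SECTOR_BYTES) % physical_block_bytes == 0

print(is_aligned(63))    # legacy MBR start sector -> False (misaligned)
print(is_aligned(2048))  # modern default, 1 MiB offset -> True (aligned)
```

Sector 63 places the partition at byte 32,256, which is not a multiple of 4,096, so every 4 KB logical write straddles two physical blocks; sector 2048 starts at exactly 1 MiB, which aligns with any common block size.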
Option A is incorrect because disabling optimization reduces performance rather than improving it. Option C is wrong because random placement ignores performance optimization opportunities. Option D is incorrect because ignoring storage geometry prevents optimization and typically results in misalignment.
Question 45:
An administrator needs to configure vSphere Distributed Resource Scheduler affinity rules. Which scenario requires a VM-Host affinity rule?
A) Keep two virtual machines on the same host
B) Ensure virtual machines only run on specific hosts with required hardware
C) Separate virtual machines to different hosts
D) No specific host requirements
Answer: B
Explanation:
Ensuring virtual machines only run on specific hosts with required hardware requires VM-Host affinity rules because this scenario involves constraints between virtual machines and particular physical hosts rather than relationships between virtual machines. VM-Host rules specify which hosts can run specific virtual machines, supporting licensing, hardware, or regulatory requirements.
VM-Host affinity rules address multiple use cases. Hardware-specific workloads require hosts with particular capabilities like GPU cards, specialized network adapters, or storage controllers. Licensing constraints may restrict certain software to specific physical hosts to comply with per-CPU or per-socket licenses. Regulatory requirements might mandate that sensitive workloads run only on hosts meeting security or compliance certifications. Performance considerations may dedicate high-performance hosts to priority workloads.
A VM-Host affinity rule is implemented by defining two groups: a virtual machine group containing the virtual machines with special requirements and a host group containing the hosts meeting those requirements. The rule specifies whether virtual machines must run on the specified hosts (required rule) or should preferentially run there (preferential rule). Required rules prevent virtual machines from running on non-compliant hosts, while preferential rules guide placement without absolute restrictions.
DRS evaluates VM-Host rules alongside VM-VM rules, resource requirements, and load balancing objectives. Conflicts between rules may prevent DRS from making optimal placement decisions. Rule design should avoid over-constraining the system, which reduces DRS flexibility and may prevent virtual machines from powering on. Documentation explaining rule purposes helps future administrators understand and maintain rule configurations.
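The group-based evaluation described above can be sketched as a small placement check. The group names, VM and host identifiers, and data layout below are hypothetical and do not reflect the vSphere API; the sketch only illustrates required versus preferential behavior.

```python
# Sketch of validating a placement against a VM-Host affinity rule
# (hypothetical names and data layout, not the vSphere API).

gpu_vm_group = {"vm-render-01", "vm-render-02"}   # VMs needing GPU hardware
gpu_host_group = {"esxi-07", "esxi-08"}           # hosts with GPU cards

def placement_allowed(vm, host, vm_group, host_group, required=True):
    """Required rule: a grouped VM may run only on hosts in the host group."""
    if vm not in vm_group:
        return True              # rule does not apply to this VM
    if host in host_group:
        return True              # compliant placement
    return not required          # preferential rules advise but do not block

print(placement_allowed("vm-render-01", "esxi-07", gpu_vm_group, gpu_host_group))  # True
print(placement_allowed("vm-render-01", "esxi-01", gpu_vm_group, gpu_host_group))  # False
```

Switching `required=False` models a preferential ("should run on") rule, where the second placement would be allowed but discouraged, matching the distinction drawn above.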
Option A is incorrect because keeping virtual machines together requires VM-VM affinity rules, not VM-Host rules. Option C is wrong because separating virtual machines uses VM-VM anti-affinity rules. Option D is incorrect because scenarios without host requirements do not need VM-Host rules.