Microsoft AZ-104 Microsoft Azure Administrator Exam Dumps and Practice Test Questions Set 10 Q136-150

Question 136

You need to allow an Azure VM to access a database in Azure SQL Database using managed identity authentication rather than stored credentials. What should you configure first?

A) SQL Transparent Data Encryption
B) Azure AD Authentication for SQL
C) SQL Auditing
D) SQL Server Firewall Allow All

Answer: B

Explanation:

Transparent Data Encryption (TDE) and Azure Active Directory (Azure AD) authentication for SQL serve distinct purposes in securing and managing access to Azure SQL databases. Transparent Data Encryption is designed primarily to protect data at rest by encrypting the underlying database files, transaction logs, and backups. By encrypting the storage layer, TDE ensures that if someone gains unauthorized access to the physical storage or backups, the data remains unreadable without the encryption keys. While TDE is a critical security feature to meet compliance requirements and protect sensitive data from breaches, it does not influence how users or applications authenticate to the database. TDE operates entirely at the storage level and does not interact with login mechanisms, credentials, or token-based authentication. Therefore, enabling TDE alone will not allow a virtual machine to authenticate to the database using managed identities, nor will it eliminate the need for stored credentials if the authentication method requires them.

In contrast, Azure AD Authentication for SQL provides a mechanism for SQL databases to accept authentication tokens issued by Azure Active Directory. This enables Azure services, applications, and virtual machines that have managed identities to access SQL databases securely without requiring traditional username and password credentials. Managed identities are automatically registered with Azure AD and can request access tokens to authenticate to supported services, including Azure SQL Database. By enabling Azure AD authentication for SQL, administrators allow these tokens to be validated by the database, granting the virtual machine the ability to connect and perform permitted operations without embedding credentials in code, configuration files, or scripts. This approach enhances security by reducing exposure to stolen credentials, simplifies identity management, and supports seamless integration with Azure’s identity platform.
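The configuration described above can be sketched with the Azure CLI. This is a minimal, hedged example: the resource group `rg1`, server `sqlsrv1`, VM `vm1`, and admin group `SqlAdmins` are hypothetical names, and the Azure AD object ID placeholder must be replaced with a real value.

```shell
# 1. Set an Azure AD administrator on the logical SQL server so it can
#    validate tokens issued by Azure AD (names below are hypothetical).
az sql server ad-admin create \
  --resource-group rg1 \
  --server-name sqlsrv1 \
  --display-name SqlAdmins \
  --object-id <aad-object-id>

# 2. Enable a system-assigned managed identity on the VM.
az vm identity assign --resource-group rg1 --name vm1

# 3. Inside the database, a contained user is then created for the
#    VM's identity (run as the Azure AD admin):
#    CREATE USER [vm1] FROM EXTERNAL PROVIDER;
#    ALTER ROLE db_datareader ADD MEMBER [vm1];
```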

Other features, while important for security and operational management, do not provide the required authentication capabilities in this scenario. SQL Auditing, for instance, is a mechanism to track database activity for compliance and monitoring purposes. It records events such as successful and failed login attempts, schema changes, and data modifications, providing valuable insights for governance and forensic investigation. However, auditing is purely observational; it does not control or facilitate authentication. Enabling auditing alone will not allow a virtual machine or any service to authenticate to the database using managed identities.

Similarly, SQL firewall rules manage network access by controlling which IP addresses can connect to the database. While firewall configuration is essential to prevent unauthorized network-level access, it does not influence identity validation. Allowing all traffic through the firewall may remove network restrictions, but it does not enable managed identity authentication. The virtual machine would still require a method for validating its identity against the database, which is precisely what Azure AD authentication provides. Firewall rules and managed identity authentication work in complementary layers—firewall rules restrict network traffic, whereas Azure AD authentication verifies identity.

In summary, while Transparent Data Encryption secures data at rest and SQL Auditing provides visibility into database activity, the core requirement for enabling passwordless, secure access from a virtual machine using managed identities is fulfilled only by enabling Azure AD Authentication for SQL. This configuration allows the database to accept and validate tokens issued by Azure AD, providing seamless, secure authentication without relying on stored credentials, and is therefore the correct initial step for this scenario.

Question 137

You want to ensure that snapshots of managed disks can be stored at the lowest possible cost while retaining the ability to restore full disks when needed. What should you choose?

A) Standard SSD
B) Premium SSD
C) Snapshot with Incremental Storage
D) Ultra Disk

Answer: C

Explanation:

When managing storage for Azure virtual machines, it is important to consider both the type of managed disk and the strategy for snapshots, particularly when cost optimization is a key requirement. Azure provides multiple disk types, including Standard SSD, Premium SSD, and Ultra Disk, each designed for different performance and workload scenarios. Standard SSDs offer a cost-effective solution for workloads that require reliable performance but do not demand extremely low latency or high IOPS. They are well-suited for general-purpose workloads and can reduce storage costs compared to premium disks. However, while Standard SSDs help minimize the base cost of disk storage, they do not directly affect the cost of snapshots: snapshot storage is billed on the amount of data stored, independent of the source disk's performance tier.

Premium SSDs, on the other hand, are designed for production workloads that require high performance, low latency, and consistent IOPS. They are ideal for mission-critical applications, databases, and other workloads where performance is paramount. While Premium SSDs provide enhanced speed and reliability, they come at a higher storage cost. Importantly, using Premium SSDs does not inherently reduce the cost associated with snapshots. While these disks may improve application performance, snapshot storage charges are still determined by the size and frequency of the snapshots rather than the underlying disk tier. Therefore, opting for Premium SSDs alone is not an effective strategy for minimizing snapshot-related expenses.

Ultra Disks are designed for workloads requiring extreme IOPS and high throughput, such as high-performance databases or large-scale analytics workloads. These disks deliver unmatched speed and performance but are also the most expensive option available. Similar to Premium SSDs, the use of Ultra Disks does not provide any cost savings for snapshot storage. In fact, the higher base disk cost makes this option less favorable when the goal is to reduce overall expenses while maintaining backup and recovery capabilities.

Incremental snapshots, however, provide a highly efficient mechanism for minimizing snapshot costs while preserving full recovery capability. Unlike traditional full snapshots, which duplicate the entire disk content for each backup, incremental snapshots store only the changes that have occurred since the previous snapshot. This approach dramatically reduces storage consumption because only new or modified data blocks are saved. Despite this reduction in storage usage, incremental snapshots still allow administrators to restore the full disk at any point in time. The combination of reduced storage requirements and full restore capability makes incremental snapshots particularly well-suited for scenarios where cost efficiency and reliable backup are both priorities. Additionally, incremental snapshots integrate seamlessly with Azure-managed disks and support automation, enabling frequent backups without incurring the high costs associated with full snapshots.
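Creating an incremental snapshot is a single Azure CLI call. This is a minimal sketch; the resource group `rg1`, disk `disk1`, and snapshot name `snap1` are hypothetical.

```shell
# Create an incremental snapshot of a managed disk; only blocks changed
# since the previous snapshot of the same disk are billed.
az snapshot create \
  --resource-group rg1 \
  --name snap1 \
  --source disk1 \
  --incremental true
```

Subsequent snapshots of the same disk created with `--incremental true` store only the delta, while each snapshot can still restore the full disk.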

In summary, while disk types such as Standard SSD, Premium SSD, and Ultra Disk determine performance characteristics, they do not directly impact snapshot storage costs. For organizations seeking to minimize expenses while retaining the ability to restore entire disks, incremental snapshots provide the most effective solution. By capturing only the changes since the last snapshot, they optimize storage usage, lower costs, and maintain complete recovery options, making incremental snapshots the preferred strategy for efficient disk backup management in Azure.

Question 138

Administrators must prevent accidental deletion of a mission-critical Azure Storage account. What should they configure?

A) Soft Delete for Blobs
B) Resource Lock – Delete
C) Versioning
D) SAS Token Restrictions

Answer: B

Explanation:

When managing Azure Storage accounts, ensuring that critical resources are not accidentally deleted is an essential aspect of operational security and data protection. Azure provides several mechanisms to protect data and resources, but it is important to understand the distinction between protecting individual data objects, such as blobs, and protecting the storage account as a resource itself. One common misconception is that enabling features like soft delete or versioning inherently protects the storage account from being removed. While these features are valuable for data protection, they operate at the data level rather than the resource level.

Soft delete is a feature designed to protect individual blobs or container-level objects from accidental deletion. When soft delete is enabled, deleted blobs are retained for a configurable retention period, allowing them to be restored if necessary. This feature ensures that inadvertent deletions of individual data objects do not result in permanent data loss. However, soft delete applies strictly to data-level operations within the storage account. It does not prevent administrators or automated processes from deleting the storage account itself. If the storage account is deleted, all contained data, including soft-deleted blobs, will be permanently lost because soft delete does not provide protection at the resource level.

Similarly, blob versioning is designed to maintain historical versions of blob data, enabling recovery from accidental overwrites or unintended modifications. Versioning ensures that older versions of blobs are retained and can be restored if needed. While this provides an additional layer of data protection and supports operational recovery scenarios, it does not affect the security or integrity of the storage account as a resource. Versioning cannot prevent the deletion of the storage account or enforce policies that safeguard resource-level operations.

Restricting Shared Access Signature (SAS) tokens can improve security by limiting access to specific data within a storage account. SAS tokens can be scoped by permissions, time, and IP address, providing fine-grained control over who can access data. While this is a critical aspect of securing storage data against unauthorized access, it does not prevent administrative actions that could delete the storage account itself. SAS restrictions operate at the data access level, not at the level of the storage resource, and therefore cannot address accidental or intentional account deletion by users with administrative privileges.

To directly protect the storage account from accidental deletion, Azure provides delete locks. A delete lock is a type of management lock that can be applied to the storage account at the resource level. Once a delete lock is enabled, any attempt to delete the storage account is blocked until the lock is intentionally removed by an administrator. This ensures that accidental deletions are prevented, as even users with high-level privileges cannot remove the resource without first acknowledging and removing the lock. Delete locks provide a simple but effective safeguard against inadvertent removal of critical infrastructure, ensuring that storage accounts remain intact until proper administrative procedures are followed.
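Applying a delete lock to a storage account is a one-line operation with the Azure CLI. The resource group and account names below are hypothetical.

```shell
# Block deletion of the storage account until the lock is removed.
az lock create \
  --name DoNotDelete \
  --lock-type CanNotDelete \
  --resource-group rg1 \
  --resource-name mystorageacct \
  --resource-type Microsoft.Storage/storageAccounts
```

The `CanNotDelete` lock type still allows reads and modifications; the stricter `ReadOnly` lock type would also block updates, which is usually more than this scenario requires.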

In summary, while soft delete, versioning, and SAS token restrictions offer strong protections at the data level, they do not prevent accidental deletion of the storage account itself. Delete locks are the mechanism specifically designed to enforce resource-level protection, making them the most appropriate solution for safeguarding storage accounts from unintended deletion. By implementing a delete lock, organizations can ensure the continuity and security of their storage infrastructure while still benefiting from data-level protections like soft delete and versioning.

Question 139

You need to create a private, DNS-resolvable name for an internal web application hosted in an Azure VNet. What should you configure?

A) Azure DNS Public Zone
B) Hosts File
C) Azure Private DNS Zone
D) Azure Front Door

Answer: C

Explanation:

In Azure, managing name resolution for resources within a virtual network (VNet) is essential for ensuring seamless internal communication while maintaining security and privacy. Organizations often need an internal DNS solution that resolves hostnames only within the private network, without exposing them to the public internet. Choosing the appropriate DNS mechanism is critical, as different approaches have varying scopes, security implications, and scalability characteristics.

A public DNS zone is designed primarily to publish domain name records to the broader internet. It allows external clients, such as customers or partners, to resolve domain names associated with public-facing services, including websites, APIs, or other cloud-hosted applications. While public DNS zones provide excellent global visibility, they are unsuitable for internal-only name resolution. Using a public DNS zone to resolve internal hostnames would expose those names externally, creating potential security risks and violating the requirement for internal privacy. Public DNS does not integrate directly with VNets to provide private name resolution, meaning any internal hostname would be visible outside the organization, defeating the purpose of an internal-only naming strategy.

Another potential approach is editing the hosts file on individual virtual machines or endpoints. This method allows administrators to create local mappings between hostnames and IP addresses, effectively enabling name resolution on a per-machine basis. While this can work for very small environments or temporary testing scenarios, it does not scale effectively across an entire VNet or enterprise environment. Each machine requires manual configuration, and any changes in IP addresses or new hosts require corresponding updates on every device. This process is time-consuming, error-prone, and difficult to maintain, making it impractical for production environments where consistency and centralized management are critical.

The most effective solution for internal name resolution in Azure is a private DNS zone. Private DNS zones are specifically designed to provide automatic hostname resolution within VNets that are linked to them. When a private DNS zone is associated with one or more VNets, all virtual machines and resources within those VNets can resolve the internal hostnames automatically, without requiring manual configuration or exposing records to the internet. This approach ensures that internal communications remain private, consistent, and centrally managed. Private DNS zones also support dynamic updates, allowing changes in resource IP addresses to be reflected automatically, which greatly reduces administrative overhead and errors compared to manual host file editing. Moreover, they can be integrated with Azure-provided DNS resolution for hybrid scenarios, ensuring a seamless experience for both cloud and on-premises resources.
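The setup described above can be sketched with the Azure CLI. All names here are hypothetical: resource group `rg1`, VNet `vnet1`, zone `internal.contoso.com`, and an application at private IP `10.0.1.4`.

```shell
# Create the private DNS zone.
az network private-dns zone create -g rg1 -n internal.contoso.com

# Link the zone to the VNet; setting -e true would also enable
# automatic registration of VM records.
az network private-dns link vnet create -g rg1 -z internal.contoso.com \
  -n vnet1-link -v vnet1 -e false

# Add an A record for the internal web application.
az network private-dns record-set a add-record -g rg1 -z internal.contoso.com \
  -n webapp -a 10.0.1.4
```

After this, resources in `vnet1` can resolve `webapp.internal.contoso.com` without any record being visible on the public internet.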

Azure Front Door, while a powerful service, serves a different purpose. It provides global load balancing, traffic acceleration, and routing for internet-facing applications. It does not offer internal DNS name resolution or private hostname management within VNets. Its primary focus is on improving performance and availability for external clients rather than managing private internal naming.

In summary, for scenarios where internal-only name resolution is required within an Azure virtual network, private DNS zones are the most appropriate solution. They provide centralized, scalable, and automatic resolution of hostnames, ensuring consistency across resources while keeping names private. Public DNS zones, hosts file edits, and services like Azure Front Door do not meet these requirements, either due to exposure to the internet, lack of scalability, or focus on external routing. Private DNS zones deliver both security and operational efficiency, making them the correct choice for managing internal name resolution in Azure.

Question 140

You need to ensure that only approved VM images can be deployed across the subscription. What should you configure?

A) Custom Script Extension
B) Azure Policy Image Whitelisting
C) VM Extensions
D) Azure Monitor

Answer: B

Explanation:

In Azure, managing and enforcing compliance standards for virtual machine (VM) deployments is a critical aspect of maintaining security, operational consistency, and organizational governance. One common requirement for organizations is to ensure that only approved images are used when creating new VMs. This ensures that virtual machines are built from known, secure, and validated sources, helping prevent the introduction of vulnerabilities or unsupported configurations into the environment. Azure provides multiple tools and mechanisms that can manage or configure VMs, but not all of them are suitable for enforcing deployment policies at the image level. Understanding the distinction between these tools is essential to implementing the correct governance approach.

The Custom Script Extension is a feature designed to run scripts inside a virtual machine after it has been deployed. This extension is highly useful for post-deployment configuration tasks, such as installing software, updating system settings, or performing custom initialization. While the Custom Script Extension can effectively automate configuration and management within the VM, it operates at the resource level after deployment. It cannot control or restrict which images are used to create the VM in the first place. Therefore, although Custom Script Extension helps ensure consistency in VM configuration post-deployment, it does not satisfy the requirement of preventing the creation of VMs from unapproved or non-compliant images.

Azure VM Extensions in general are similar in that they provide mechanisms to extend and customize VM behavior. Extensions can install agents, configure operating systems, or manage applications within a VM. Like the Custom Script Extension, these tools are focused on modifying the VM after deployment and do not provide controls over the image selection process itself. They cannot enforce organizational policies to allow or deny the creation of virtual machines based on specific image criteria.

Azure Monitor is another important tool in Azure’s management ecosystem, providing monitoring, logging, and alerting capabilities. Azure Monitor collects performance metrics, diagnostic logs, and telemetry data, which can be used to observe VM health, detect anomalies, and troubleshoot issues. However, Azure Monitor is an observational and analytical tool rather than a policy enforcement mechanism. It does not provide any control over which images can be used during VM creation, and therefore cannot prevent non-compliant deployments.

The tool that directly addresses the requirement of controlling which images are used is Azure Policy. Azure Policy is a governance service that enables organizations to define and enforce rules across their Azure environment. One of its key capabilities is the ability to create policies that whitelist approved images for VM deployment. By defining an “allowed images” policy, administrators can prevent users from creating VMs from unapproved, outdated, or insecure images. When a user attempts to deploy a VM that does not comply with the policy, the operation is denied automatically, ensuring that only compliant resources are provisioned. Azure Policy can also be used to audit existing resources, providing visibility into non-compliant VMs and helping maintain governance over time.
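A policy of this kind can be sketched as a custom definition that denies any VM whose image is not on the approved list. This is a hedged, minimal example: the rule uses the `Microsoft.Compute/imageId` policy alias, and the subscription ID and image resource ID are hypothetical placeholders.

```shell
# Define a custom policy that denies VMs built from unapproved images.
az policy definition create --name allowed-vm-images \
  --display-name "Allowed VM images" \
  --mode All \
  --rules '{
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
        { "not": { "field": "Microsoft.Compute/imageId",
                   "in": [ "/subscriptions/<sub-id>/resourceGroups/rg1/providers/Microsoft.Compute/images/approved-image" ] } }
      ]
    },
    "then": { "effect": "deny" }
  }'

# Assign the policy at subscription scope so it applies to all deployments.
az policy assignment create --name allowed-vm-images \
  --policy allowed-vm-images \
  --scope /subscriptions/<sub-id>
```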

In summary, while Custom Script Extensions and VM extensions are valuable for post-deployment configuration, and Azure Monitor is effective for performance observation and logging, none of these tools can enforce which images are used during VM creation. Azure Policy, through image whitelisting, provides the required governance mechanism to prevent the deployment of unapproved or insecure images. This ensures that all virtual machines are created from validated sources, maintaining both security and compliance in the Azure environment.

Question 141

You are managing Azure Virtual Machines that host a line-of-business application. You need to ensure that the VMs automatically restart when the underlying host undergoes planned maintenance, without manual intervention. Which feature should you configure?

A) Azure VM Auto-Shutdown
B) Azure VM Availability Set
C) Azure VM Automatic Guest Patching
D) Azure VM Automatic Restart

Answer: D) Azure VM Automatic Restart

Explanation:

Azure VM Auto-Shutdown schedules a shutdown at a specified time to save costs, typically in non-production environments. While it manages VM power states, it does not restart machines after host-level maintenance or failures. This means relying solely on Auto-Shutdown cannot satisfy the requirement of automatic recovery after planned maintenance.

Azure VM Availability Set ensures that virtual machines are spread across multiple fault domains and update domains. This protects workloads from hardware failures and host updates affecting a single rack. However, it does not guarantee that an individual VM will restart automatically after Azure-initiated maintenance. Availability Sets focus on redundancy rather than automatic restart behavior.

Azure VM Automatic Guest Patching installs OS updates inside the VM automatically. This enhances compliance and security but does not control the VM’s power state or restart behavior during platform maintenance events. It ensures updated OS software but does not maintain uptime during host maintenance.

Azure VM Automatic Restart enables the virtual machine to restart automatically after planned or unplanned host maintenance. It restores the VM to a running state without administrator intervention. This fulfills the requirement by ensuring continuity for critical applications even during Azure host operations. Therefore, Azure VM Automatic Restart directly addresses the need and is the correct choice.

Question 142

You need to secure administrative access to Azure Virtual Machines by requiring authentication through a managed Azure service rather than exposing RDP or SSH directly. What should you enable?

A) Azure Firewall
B) Azure Bastion
C) Azure Application Gateway
D) Azure Private Link

Answer: B) Azure Bastion

Explanation:

Azure Firewall filters inbound and outbound traffic at the network level. While it can restrict which IP addresses reach a VM, it cannot provide a managed, browser-based RDP or SSH session. It controls traffic, not authentication pathways.

Azure Bastion provides secure RDP and SSH access directly through the Azure portal without exposing public IP addresses. It allows users to connect to VMs securely over TLS without opening inbound ports, fulfilling the requirement for managed administrative access without public exposure.
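Deploying Bastion can be sketched as follows. Names are hypothetical, but two constraints are real: the subnet must be named exactly `AzureBastionSubnet`, and the public IP must use the Standard SKU.

```shell
# Bastion requires a dedicated subnet with the exact name AzureBastionSubnet.
az network vnet subnet create -g rg1 --vnet-name vnet1 \
  -n AzureBastionSubnet --address-prefixes 10.0.255.0/26

# Bastion requires a Standard SKU public IP.
az network public-ip create -g rg1 -n bastion-pip --sku Standard

# Deploy the Bastion host; VMs in vnet1 are then reachable via
# RDP/SSH through the Azure portal over TLS, with no public IPs on the VMs.
az network bastion create -g rg1 -n bastion1 \
  --vnet-name vnet1 --public-ip-address bastion-pip
```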

Azure Application Gateway is a Layer 7 load balancer for web applications. It does not provide VM management access, and cannot replace RDP/SSH connectivity.

Azure Private Link secures access to Azure PaaS services over private endpoints but does not provide administrative access to VMs. It controls service connectivity, not VM sessions.

Azure Bastion is correct because it offers secure, managed administrative access without requiring public network exposure.

Question 143

Your organization wants to reduce storage costs for large volumes of infrequently accessed blob data. The data must still remain online but will only be accessed a few times per year. Which storage tier should you choose?

A) Hot tier
B) Cool tier
C) Archive tier
D) Premium tier

Answer: B) Cool tier

Explanation:

Hot tier is designed for frequently accessed data. While it provides low latency and immediate availability, its storage costs are high for infrequently accessed data.

Cool tier is optimized for data that is rarely accessed but must remain online. It has lower storage costs than hot tier while maintaining immediate availability. Retrieval costs are slightly higher but infrequent usage makes this cost negligible.

Archive tier offers the lowest storage cost but requires rehydration, which may take hours, making it unsuitable if data must remain online.

Premium tier uses SSD storage for high-performance workloads. It is costly and unnecessary for infrequently accessed blob data.

The Cool tier strikes the best balance between cost efficiency and online availability for infrequently accessed data.
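The tier can be set either as the account default for new blobs or per blob. A minimal sketch, using hypothetical account, container, and blob names:

```shell
# Make Cool the default access tier for new blobs on the account.
az storage account update -g rg1 -n mystorageacct --access-tier Cool

# Or move an individual blob to the Cool tier.
az storage blob set-tier --account-name mystorageacct \
  --container-name data --name archive.csv --tier Cool --auth-mode login
```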

Question 144

A company needs to allow its on-premises environment to connect securely to Azure VNets using IPSec over the internet. What should you deploy?

A) Azure Application Gateway
B) Azure ExpressRoute
C) Azure VPN Gateway
D) Azure NAT Gateway

Answer: C) Azure VPN Gateway

Explanation:

In Azure networking, organizations often require secure connectivity between on-premises infrastructure and Azure virtual networks (VNets). A common scenario involves establishing site-to-site VPN connections over the public internet that are encrypted to ensure data confidentiality and integrity during transmission. Selecting the appropriate Azure service for this scenario is critical because different networking services provide distinct capabilities, and not all of them are designed to establish secure site-to-site tunnels.

Azure Application Gateway is primarily a Layer 7 (application layer) load balancer and web traffic manager. It offers features such as URL-based routing, SSL termination, web application firewall integration, and session affinity for HTTP/HTTPS traffic. While Application Gateway is highly effective for distributing and securing web application traffic, it does not function as a VPN endpoint and cannot establish encrypted site-to-site connections between on-premises networks and Azure VNets. Its role is focused on managing and securing inbound web traffic rather than providing network-layer VPN connectivity. Therefore, relying on Application Gateway for IPSec-based site-to-site VPNs would not meet the requirement for encrypted connectivity over the public internet.

Azure ExpressRoute provides dedicated private connectivity between an organization’s on-premises network and Azure. It bypasses the public internet entirely by using a dedicated circuit, which offers high reliability, low latency, and increased security. However, ExpressRoute does not rely on IPSec encryption because traffic does not traverse the public internet. While it is excellent for scenarios requiring private, high-throughput connections, it does not satisfy the specific requirement of establishing encrypted VPN tunnels over the internet. ExpressRoute therefore does not fit this scenario: its traffic never crosses the public internet at all, and the circuit itself does not provide IPSec encryption by default.

Azure NAT Gateway is another service that serves a different purpose. It provides outbound internet connectivity for virtual machines within a VNet by translating private IP addresses to public IP addresses. NAT Gateway is useful for managing and scaling outbound traffic, ensuring that all VMs share a predictable public IP, and improving security by reducing exposure of internal addresses. However, NAT Gateway does not provide site-to-site connectivity, does not encrypt traffic, and cannot establish VPN tunnels. Its role is limited to outbound traffic management rather than secure inter-network connectivity.

Azure VPN Gateway, in contrast, is the service specifically designed to establish secure site-to-site connections between on-premises networks and Azure VNets. VPN Gateway supports industry-standard protocols such as IPSec and IKE, which provide robust encryption for data traversing the public internet. By leveraging VPN Gateway, organizations can create site-to-site VPN tunnels that securely extend their on-premises network into Azure, ensuring that all communication remains confidential and tamper-proof. VPN Gateway also supports hybrid networking scenarios, dynamic routing, and high availability configurations, making it a comprehensive solution for encrypted connectivity over the internet.
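A site-to-site setup of this kind can be sketched with the Azure CLI. All names, the on-premises public IP, and the address prefixes below are hypothetical, and the VNet is assumed to already contain a subnet named `GatewaySubnet`, which the gateway requires.

```shell
# Public IP for the VPN gateway.
az network public-ip create -g rg1 -n vpngw-pip --sku Standard

# Create the VPN gateway in the VNet (requires a GatewaySubnet;
# provisioning can take a long time).
az network vnet-gateway create -g rg1 -n vpngw1 \
  --vnet vnet1 --public-ip-address vpngw-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Represent the on-premises VPN device and its address space.
az network local-gateway create -g rg1 -n onprem-gw \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.0.0/16

# Create the IPSec site-to-site connection with a pre-shared key.
az network vpn-connection create -g rg1 -n site-to-site \
  --vnet-gateway1 vpngw1 --local-gateway2 onprem-gw \
  --shared-key '<pre-shared-key>'
```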

In summary, while Application Gateway, ExpressRoute, and NAT Gateway provide critical networking and traffic management capabilities, they do not meet the requirement for IPSec-encrypted site-to-site VPN connectivity over the public internet. Azure VPN Gateway is the correct solution because it specifically enables encrypted VPN tunnels between on-premises networks and Azure VNets, combining security, reliability, and scalability to meet hybrid connectivity needs.

Question 145

You need to assign a user the ability to manage all virtual machines in a subscription but prevent them from modifying virtual network settings. Which built-in role should you assign?

A) Owner
B) Contributor
C) Virtual Machine Administrator Login
D) Virtual Machine Contributor

Answer: D) Virtual Machine Contributor

Explanation:

When managing access to Azure resources, it is important to assign the least privilege necessary to accomplish a task, ensuring both operational efficiency and security. Role-Based Access Control (RBAC) in Azure provides fine-grained control over permissions by assigning predefined or custom roles to users, groups, or service principals. Selecting the correct role is critical to ensure users can perform their required tasks without gaining unnecessary privileges, which could lead to accidental or malicious modifications of other resources.

The Owner role is the most permissive built-in role in Azure. It grants full access to all resources within a subscription, resource group, or specific resource scope, including the ability to manage networking, storage, compute, and security settings. While this level of access ensures that the user can perform any action, it often exceeds operational requirements and violates the principle of least privilege. Granting Owner access for routine VM management exposes the subscription to unnecessary risk, such as unintended changes to networking, security policies, or other critical infrastructure. Consequently, although Owner technically allows full control over virtual machines, it is not appropriate when the goal is to restrict access to VM lifecycle operations without granting network management capabilities.

The Contributor role provides broad management permissions for most Azure resources. Users with this role can create, update, and delete resources across the assigned scope. While it offers extensive flexibility, it also includes privileges to manage networking resources, storage accounts, and other infrastructure components. This exceeds the requirement for a user who needs to manage virtual machines only. Assigning Contributor access to a user whose responsibility is limited to VM lifecycle management could inadvertently allow changes to unrelated resources, introducing potential operational risks and policy violations. Therefore, Contributor is not ideal for scenarios requiring controlled VM management with network restrictions.

The Virtual Machine Administrator Login role is more restrictive and is primarily designed to allow login access to virtual machines. It enables users to connect to a VM using remote desktop (RDP) or SSH and perform administrative tasks inside the VM operating system. However, this role does not provide permissions to manage the VM’s lifecycle. Users cannot start, stop, restart, resize, or delete the virtual machine itself. As a result, Virtual Machine Administrator Login is insufficient for scenarios where users are expected to manage the VM as an Azure resource rather than only administering the operating system inside it.

The Virtual Machine Contributor role is specifically designed to provide full management of virtual machines while restricting control over other resources, particularly networking. Users assigned this role can create, update, start, stop, restart, resize, and delete VMs, as well as manage VM extensions, disks, and other compute-related settings. Importantly, this role excludes permissions to modify virtual networks, subnets, or other infrastructure components unrelated to the virtual machine’s lifecycle. This makes Virtual Machine Contributor ideal for delegating VM lifecycle management without granting excessive privileges, aligning with the principle of least privilege and ensuring operational safety.

In summary, when the requirement is to allow users to fully manage virtual machines without giving them control over networking or other unrelated resources, Virtual Machine Contributor is the correct choice. It provides the necessary permissions for VM lifecycle management, including creation, modification, and deletion, while preventing access to network configurations and other sensitive resources, ensuring both operational efficiency and security within Azure.
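As an illustrative sketch, a Virtual Machine Contributor assignment scoped to a single resource group could be created with the Azure CLI as shown below. The user principal, subscription ID, and resource group name are placeholders; scoping the assignment to the resource group (rather than the subscription) further narrows the blast radius.

```shell
# Sketch: grant VM lifecycle management only, at resource-group scope.
# "user@contoso.com", "<subscription-id>", and "rg-vms" are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-vms"
```

The same command with `--role "Contributor"` or `--role "Owner"` would grant the broader permissions discussed above, which is why the role name is the key decision point.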

Question 146

You want to protect Azure SQL Database against accidental deletion and provide long-term data retention. Which feature should you configure?

A) Short-term backup retention
B) Geo-restore
C) Long-term backup retention
D) Azure SQL Auditing

Answer: C) Long-term backup retention

Explanation:

Short-term backup retention (the point-in-time restore window) can be configured for only 1 to 35 days, which is insufficient for long-term requirements.

Geo-restore protects against regional outages by restoring from geo-replicated backups, but those backups are bound by the same short-term retention window, so it does not provide long-term retention or guard against accidental deletion.

Long-term backup retention allows storing weekly, monthly, and yearly backups for up to 10 years, ensuring recoverability and protection against accidental deletion or corruption.

Auditing tracks database activities but does not protect against deletion.

Long-term backup retention is correct because it satisfies both protection and retention needs.
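As a sketch of how this is configured, the Azure CLI exposes a long-term retention policy per database using ISO 8601 duration values. The resource group, server, and database names below are placeholders; the retention values are examples, not recommendations.

```shell
# Sketch: keep weekly backups 12 weeks, monthly 12 months, yearly 5 years.
# "rg-sql", "sqlserver01", and "salesdb" are placeholder names.
az sql db ltr-policy set \
  --resource-group rg-sql \
  --server sqlserver01 \
  --database salesdb \
  --weekly-retention P12W \
  --monthly-retention P12M \
  --yearly-retention P5Y \
  --week-of-year 1
```

The `--week-of-year` parameter selects which weekly backup is kept as the yearly backup.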

Question 147

You need to ensure that an Azure VM can access a private storage account without exposing it publicly. What should you use?

A) Shared Access Signature
B) Private Endpoint
C) Storage Firewall Allow All
D) Service Tags

Answer: B) Private Endpoint

Explanation:

A Shared Access Signature delegates time-limited access to storage resources, but the account's public endpoint remains reachable, so it does not prevent public exposure; a leaked SAS token can be used from anywhere on the internet.

Private Endpoint maps the storage account to a private IP in the VNet, ensuring traffic never traverses the public internet, fulfilling the requirement.

Allow All in the storage firewall would expose the account publicly.

Service Tags simplify network security group and firewall rules by representing groups of Azure public IP ranges, but they do not inherently provide private connectivity.

Private Endpoint is correct as it ensures private, secure access.
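A private endpoint for a storage account's blob service could be created as sketched below. The subscription ID, resource group, VNet, subnet, and storage account names are placeholders; note that in practice a private DNS zone (for blob, `privatelink.blob.core.windows.net`) is also needed so the account name resolves to the private IP.

```shell
# Sketch: map the storage account's blob endpoint to a private IP in the VNet.
# All resource names and the subscription ID are placeholders.
az network private-endpoint create \
  --name pe-storage \
  --resource-group rg-net \
  --vnet-name vnet01 \
  --subnet subnet-private \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/rg-storage/providers/Microsoft.Storage/storageAccounts/mystorageacct" \
  --group-id blob \
  --connection-name pe-storage-conn
```

After this, the storage firewall can deny all public traffic while the VM continues to reach the account over the VNet.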

Question 148

You want to monitor performance metrics and logs from multiple Azure resources in a single centralized workspace. What should you configure?

A) Azure Activity Log
B) Azure Service Health
C) Log Analytics Workspace
D) Azure Monitor Alerts

Answer: C) Log Analytics Workspace

Explanation:

In Azure, effective monitoring and analysis of resources require understanding the different services available and their specific roles. While Azure offers several tools for tracking resource activity and performance, not all services are designed to provide centralized collection and detailed analytics of metrics and logs. Selecting the correct service depends on the specific monitoring requirement, particularly when the goal is to aggregate logs and metrics from multiple resources for comprehensive analysis.

The Azure Activity Log is a critical resource for tracking control-plane events in Azure. It records operations that modify resources, such as creating or deleting virtual machines, updating configurations, or assigning roles. Activity Log provides insights into “who did what and when,” which is invaluable for auditing, compliance, and troubleshooting administrative actions. However, it has limitations when it comes to performance monitoring. Activity Log captures only control-plane events and does not provide detailed runtime metrics, such as CPU utilization, memory usage, or disk I/O. Therefore, while it is essential for auditing and change tracking, Activity Log alone cannot satisfy requirements for analyzing operational performance or correlating metrics across multiple resources.

Azure Service Health offers a different perspective on monitoring, focusing on platform-level events. It provides alerts and information about Azure service outages, planned maintenance, and health advisories that may affect resources within a subscription. Service Health is valuable for staying informed about external factors impacting Azure services and for proactively responding to platform incidents. Despite this, it does not provide performance metrics for individual resources. It informs users about the health status of Azure services but does not allow analysis of VM performance, database throughput, or network latency. Thus, Service Health alone cannot serve as a comprehensive monitoring solution for resource-level metrics.

Azure Monitor Alerts are designed to provide proactive notifications when specific thresholds are crossed or conditions are met. Alerts can trigger actions such as sending emails, executing automation runbooks, or invoking Logic Apps. While alerts are essential for operational responsiveness, they do not provide a centralized repository for logs or historical metrics. Alerts are action-oriented and ephemeral—they notify users of issues but do not store the detailed telemetry required for in-depth analysis or long-term trend monitoring.

In contrast, a Log Analytics Workspace provides a centralized environment for collecting, storing, and analyzing logs and metrics from multiple Azure resources. It enables the aggregation of telemetry from virtual machines, storage accounts, databases, network components, and other services. Once data is ingested into a Log Analytics Workspace, administrators can query it using the Kusto Query Language (KQL), create dashboards, visualize trends, and generate reports. This allows for proactive monitoring, deep analysis, and correlation of data across different resources, providing a holistic view of performance, availability, and security. By consolidating all logs and metrics into a single workspace, organizations can gain operational insights, detect anomalies, and perform root cause analysis efficiently.

In summary, while Activity Log captures control-plane events, Service Health reports platform incidents, and Azure Monitor Alerts notify on threshold violations, none of these services provide a unified repository for detailed metrics and logs. Log Analytics Workspace is the correct choice when the requirement is to centralize logs and metrics from multiple Azure resources for comprehensive monitoring, analysis, and visualization. It enables organizations to maintain visibility across their cloud environment, proactively detect issues, and make informed operational decisions.
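To make the KQL point concrete, the sketch below queries average CPU across all VMs reporting to a workspace. The workspace GUID is a placeholder, and the query assumes performance counters are being collected into the standard `Perf` table.

```shell
# Sketch: average "% Processor Time" per VM over the last hour, in 5-minute bins.
# "<workspace-guid>" is a placeholder for the workspace customer ID.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Perf | where ObjectName == 'Processor' and CounterName == '% Processor Time' | summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)" \
  --timespan "PT1H"
```

The same KQL can be run interactively in the portal's Logs blade, pinned to a dashboard, or used as the basis for an alert rule.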

Question 149

You need to ensure that Azure Virtual Machines receive automatic OS image and security updates without manual patching. Which feature should you enable?

A) Azure Update Manager
B) Availability Zones
C) VM Scale Sets Autoscaling
D) Azure Backup

Answer: A) Azure Update Manager

Explanation:

Azure Update Manager is specifically designed to handle OS and security updates for virtual machines. It allows centralized scheduling, automated deployment of updates, and ensures compliance with organizational patching policies. By using Update Manager, administrators can automate update management for multiple VMs without manual intervention, ensuring that all critical security patches are applied promptly.

Availability Zones provide redundancy by distributing VMs across physically separate locations within a region to protect against datacenter failures. While Zones enhance availability and fault tolerance, they do not install OS updates or apply security patches. Their purpose is high availability, not maintenance automation.

VM Scale Sets Autoscaling adjusts the number of VM instances dynamically based on workload demand. Although useful for performance and scalability, it does not automate OS updates or security patching. Its function is resource scaling rather than system maintenance.

Azure Backup provides data protection through backups for recovery purposes. It can restore VM disks and files but does not manage operating system updates. Backup ensures recoverability, not compliance or security updates.

Because the requirement is to automatically maintain OS images and security patches, Azure Update Manager is the only choice that satisfies this need.
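As a rough sketch under stated assumptions, one prerequisite for platform-orchestrated patching is setting the VM's patch mode, which can be done with the Azure CLI as below. The resource group and VM names are placeholders, the property path shown is for Linux (Windows VMs use `osProfile.windowsConfiguration.patchSettings.patchMode`), and a full Update Manager setup would additionally attach a maintenance configuration for scheduled patching.

```shell
# Sketch: opt a Linux VM into platform-orchestrated automatic patching.
# "rg-vms" and "vm-web01" are placeholder names.
az vm update \
  --resource-group rg-vms \
  --name vm-web01 \
  --set osProfile.linuxConfiguration.patchSettings.patchMode=AutomaticByPlatform
```

With this mode set, Azure can assess and apply security updates automatically rather than waiting for manual patch runs.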

Question 150

A company wants all newly created Azure resources to follow a set of naming conventions and location restrictions. What should you use?

A) Azure Policy
B) Azure Blueprints
C) Resource Locks
D) Management Groups

Answer: A) Azure Policy

Explanation:

Azure Policy enforces organizational rules on resources. It can restrict allowed regions, enforce naming conventions, and prevent creation of non-compliant resources. Policies are evaluated at creation and modification time, ensuring resources always comply with governance requirements.

Azure Blueprints are packages of ARM templates, role assignments, and policies that enable repeatable deployments of environments. While they can include policies, blueprints are primarily deployment frameworks and do not automatically enforce rules for every resource created outside the blueprint.

Resource Locks prevent accidental deletion or modification of resources. They do not control naming or location restrictions. Locks are protective, not governance-enforcing.

Management Groups allow hierarchical organization of subscriptions for governance and policy assignment. They do not directly enforce rules on resource creation by themselves; they only provide structure for policy application.

Azure Policy is correct because it directly enforces naming and location requirements across the subscription.
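To illustrate location restriction, the sketch below assigns the built-in "Allowed locations" policy at subscription scope. The subscription ID is a placeholder, and the definition GUID shown is believed to be the built-in Allowed locations definition but should be verified in your tenant (for example with `az policy definition list`); naming conventions would require a custom definition using a `like` or `match` condition on the resource `name` field.

```shell
# Sketch: allow resource creation only in eastus and westus2.
# "<subscription-id>" is a placeholder; verify the definition GUID in your tenant.
az policy assignment create \
  --name restrict-locations \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{ "listOfAllowedLocations": { "value": ["eastus", "westus2"] } }' \
  --scope "/subscriptions/<subscription-id>"
```

Once assigned, any deployment to a disallowed region is denied at creation time, which is exactly the enforcement behavior the question requires.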