Microsoft AZ-104 Microsoft Azure Administrator Exam Dumps and Practice Test Questions Set 8 Q106-120
Question 106:
You need to design an Azure deployment to ensure that virtual machines are protected from hardware failures within a single datacenter and that updates do not cause all VMs to restart at once. Which Azure feature should you use?
A) Availability Set
B) Availability Zone
C) Virtual Machine Scale Set
D) Load Balancer
Answer: A) Availability Set
Explanation:
Availability Sets play a crucial role in ensuring virtual machine resiliency within a single Azure datacenter by leveraging fault domains and update domains. Fault domains help distribute virtual machines across separate physical racks, power sources, and network switches. This segmentation means that if a hardware issue, such as a rack power failure or switch malfunction, occurs in one fault domain, the other fault domains remain unaffected, keeping at least part of your application available. Update domains complement this by orchestrating planned maintenance in a rolling manner. Azure periodically performs updates, patches, and platform improvements, and without update domains, all VMs might reboot simultaneously. Update domains ensure that VMs are rebooted in stages, one update domain at a time, so your application maintains service continuity during maintenance events. By combining fault domains and update domains, an Availability Set becomes a powerful mechanism for avoiding both unplanned downtime caused by hardware issues and planned downtime caused by Azure updates. This design is most suitable when reliability is required within the same datacenter footprint and when your workload architecture is not yet modernized for multi-zone deployments.
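To make this concrete, here is a minimal sketch of creating an Availability Set with the azure-mgmt-compute Python SDK; the subscription, resource group, names, and domain counts are illustrative placeholders rather than values from the question:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# Fault domains isolate racks/power/network; update domains stage reboots.
avset = compute.availability_sets.create_or_update(
    resource_group_name="rg-web",           # assumed resource group
    availability_set_name="avset-web",
    parameters={
        "location": "eastus",
        "platform_fault_domain_count": 2,   # region-dependent, typically up to 3
        "platform_update_domain_count": 5,  # up to 20
        "sku": {"name": "Aligned"},         # required when VMs use managed disks
    },
)
print(avset.platform_fault_domain_count, avset.platform_update_domain_count)
```

VMs created with a reference to this Availability Set are then spread by the platform across the configured fault and update domains.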
Availability Zones, on the other hand, provide an even greater degree of resiliency by placing resources in physically isolated datacenters within the same Azure region. Each zone has its own independent power, cooling, and networking. This means that even if one zone experiences a complete outage, resources deployed in another zone remain operational. While Availability Zones offer stronger protection than Availability Sets, not all Azure services or VM SKUs support zone-based deployment. Additionally, applications must be explicitly architected to distribute components across zones and handle the increased network latency between physically separated locations. Using zones is ideal when designing for regional business continuity and near-zero downtime expectations. However, Availability Zones function at a broader, cross-datacenter level and are not intended as a replacement for intra-datacenter redundancy mechanisms like update domains within a single facility.
Virtual Machine Scale Sets serve a different purpose. They automate the deployment, scaling, and management of large numbers of identical VM instances. While scale sets can be configured to take advantage of fault domains, availability zones, and load balancers, their primary function is to provide elastic scaling based on demand. Scale sets adjust capacity by adding or removing VM instances as needed, helping optimize performance and costs. Although they support high availability concepts, they do not inherently guarantee VM separation across fault and update domains in the same targeted way Availability Sets do. Their focus is workload elasticity, not the fine-grained redundancy needed to avoid coordinated maintenance reboots in traditional VM deployment patterns.
A Load Balancer contributes to resilience by distributing incoming traffic across multiple VMs. It ensures that if one VM becomes unhealthy, traffic is routed to others. However, it does not influence VM placement across racks or update domains. It cannot prevent simultaneous reboots or hardware-related downtime across all backend instances. It works best as a complementary component rather than a standalone availability solution.
When the primary objective is to mitigate hardware failures within a single datacenter and avoid simultaneous VM restarts during planned platform maintenance, only an Availability Set directly achieves both goals.
Question 107:
A company requires a centralized identity provider for enterprise SSO to Azure resources and integration with on-premises Active Directory. Which Azure service should be used?
A) Azure Active Directory (Azure AD)
B) Azure DNS
C) Azure DevOps
D) Azure Policy
Answer: A) Azure Active Directory (Azure AD)
Explanation:
Azure Active Directory (Azure AD), since rebranded as Microsoft Entra ID, is Microsoft’s cloud-based identity and access management service, designed to provide organizations with a centralized platform for managing user identities and controlling access to resources. At its core, Azure AD enables single sign-on (SSO), allowing users to access the Azure portal, Microsoft 365 applications, and a wide range of third-party SaaS applications with a single set of credentials. This reduces the complexity of managing multiple passwords and improves the overall user experience, while also enhancing security by centralizing authentication and access control. Azure AD supports widely adopted enterprise authentication protocols, including SAML, OAuth 2.0, and OpenID Connect, which ensures compatibility with most modern applications and identity solutions.
It also incorporates advanced security and access management features such as conditional access policies, which allow administrators to define rules that control how and when users can access resources based on conditions like device compliance, location, and risk level. Multi-factor authentication adds another layer of security, protecting against compromised credentials, while role-based access control ensures that users have only the permissions necessary for their job functions. Additionally, Azure AD Identity Protection leverages machine learning to detect suspicious activities and potential threats to accounts, providing administrators with actionable insights to prevent unauthorized access.
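As a concrete illustration of the protocol support mentioned above, the sketch below acquires an OAuth 2.0 access token from Azure AD using the MSAL Python library and the client-credentials flow; the tenant ID, client ID, and secret are placeholders for an app registration you would create yourself:

```python
import msal

# Placeholders: substitute your tenant and app registration values.
authority = "https://login.microsoftonline.com/<tenant-id>"
app = msal.ConfidentialClientApplication(
    client_id="<app-client-id>",
    client_credential="<client-secret>",
    authority=authority,
)

# Client-credentials flow: the application authenticates as itself (no user).
result = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)
if "access_token" in result:
    print("token acquired, expires in", result["expires_in"], "seconds")
else:
    print("failure:", result.get("error_description"))
```

The same library also supports interactive and device-code flows for user sign-in, which is the basis of the SSO experience described above.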
A key strength of Azure AD is its ability to integrate with on-premises Active Directory environments through Azure AD Connect. This hybrid integration allows organizations to synchronize user identities between on-premises directories and Azure AD, enabling consistent authentication across both cloud and local resources. This means that employees can use the same credentials to access internal applications, cloud-hosted apps, and the Azure portal, facilitating a seamless experience and minimizing administrative overhead. Hybrid scenarios also allow organizations to implement federated authentication and maintain centralized identity management without fully migrating all workloads to the cloud. As a foundational service for Azure and Microsoft 365, Azure AD serves as the enterprise identity backbone, providing authentication, access control, and user management capabilities that meet enterprise-level security and compliance requirements.
Other Azure services, while essential for different purposes, do not provide the same identity management capabilities. Azure DNS, for instance, is designed to host DNS zones and manage domain name resolution within Azure. While it ensures reliable name resolution and supports application availability, it does not provide authentication, SSO, or identity management features. Similarly, Azure DevOps is a comprehensive suite of development tools for source control, CI/CD pipelines, work tracking, and testing. Although it can integrate with identity providers for authentication, it is not a directory service and cannot centralize identity or provide hybrid synchronization with on-premises AD. Azure Policy focuses on governance by enforcing compliance rules, tagging conventions, and resource configurations, which helps organizations maintain standardized deployments but does not address user authentication or access management.
For organizations seeking centralized identity management, enterprise single sign-on, and hybrid integration with on-premises Active Directory, Azure AD is the appropriate solution. It provides the necessary protocols, security features, access controls, and hybrid synchronization capabilities, ensuring seamless and secure access to both cloud and on-premises resources while serving as the cornerstone for identity services in Microsoft’s ecosystem.
Question 108:
You must grant a junior administrator permission to manage virtual networks but not modify subscription-level billing or RBAC. Which built-in role should be assigned?
A) Network Contributor
B) Owner
C) Billing Reader
D) User Access Administrator
Answer: A) Network Contributor
Explanation:
Network Contributor grants the ability to create, modify, and delete network resources such as virtual networks, subnets, network interfaces, and network security groups. It allows management of networking resources without granting access to manage role assignments or billing information. This role is scoped to resource-level management and is intended for users who need to perform networking tasks but should not change subscription-level configurations or access control settings. By limiting permissions to networking operations, it satisfies the requirement for managing virtual networks while restricting sensitive administrative capabilities.
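For illustration, such a role assignment can be created programmatically with the azure-mgmt-authorization Python SDK; the scope, principal object ID, and the built-in role GUID are placeholders to fill in (the Network Contributor definition ID can be looked up via the role_definitions list operation or in the documentation):

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"
auth = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to the subscription (a resource group scope works too).
scope = f"/subscriptions/{subscription_id}"
role_definition_id = (
    f"{scope}/providers/Microsoft.Authorization/roleDefinitions/"
    "<network-contributor-role-definition-id>"  # look up the built-in GUID
)

auth.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # assignment name must be a GUID
    parameters={
        "role_definition_id": role_definition_id,
        "principal_id": "<junior-admin-object-id>",  # Azure AD object ID
    },
)
```

Scoping the assignment to a resource group instead of the subscription would narrow the junior administrator's reach even further, in line with least privilege.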
Owner has full permissions to manage all resources within the assigned scope, including the ability to assign roles to other users and change subscription-level settings. This level of access includes billing and RBAC management and therefore exceeds the required permissions for a junior administrator tasked only with network management. Assigning Owner would grant unnecessary privileges and violate the principle of least privilege.
Billing Reader provides read-only access to billing information and cost data for a subscription. It allows viewing invoices and billing statements but does not permit modification of resources or role assignments. This role is focused on financial oversight and does not allow management of virtual networks or other infrastructure components, so it does not meet the requirement.
User Access Administrator allows management of role assignments and access control for resources. It can delegate permissions and change who has access to resources but does not directly grant rights to create or modify networking resources. While powerful for RBAC administration, this role is inappropriate when the goal is to enable network operations but restrict the user from modifying access controls or billing.
Question 109:
A company needs to ensure that VMs automatically receive OS security updates without manual intervention. Which Azure service or feature should be configured?
A) Update Management in Azure Automation
B) Azure Monitor Metrics
C) Azure Load Balancer health probes
D) Azure Traffic Manager
Answer: A) Update Management in Azure Automation
Explanation:
Update Management in Azure Automation provides a centralized service to schedule and orchestrate operating system updates for both Windows and Linux virtual machines across Azure and on-premises environments. It can scan machines to assess update compliance, create deployment schedules, and apply patches during non-business hours to minimize disruption. It supports grouping of machines, pre- and post-scripts, and reporting on update compliance. This service ensures updates are applied automatically or according to a defined schedule, addressing the requirement for automatic OS security patching. (Microsoft has since folded this capability into its successor, Azure Update Manager, which carries the same scheduling and compliance model forward.)
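Once machines are onboarded, update compliance data lands in the linked Log Analytics workspace, where it can be queried. A minimal sketch with the azure-monitor-query Python library, assuming the Update Management solution is feeding the workspace shown:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Machines still missing required security updates (assumes the Update
# Management solution populates the Update table in this workspace).
query = """
Update
| where Classification == "Security Updates" and UpdateState == "Needed"
| summarize MissingUpdates = count() by Computer
"""
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```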
Azure Monitor Metrics collects and stores numerical measurements from resources for monitoring performance and utilization trends. While metrics are valuable for observability and alerting, they do not perform configuration changes or apply OS patches. Metrics can inform when resources may need attention, but they cannot orchestrate or install security updates on VMs.
Azure Load Balancer health probes check the health or availability of backend instances by probing a specific port or HTTP endpoint and then distributing traffic to healthy instances. These probes are used for traffic routing and high availability, not for managing OS updates or applying patches to virtual machines. While health probes help maintain service availability, they do not provide update management capabilities.
Azure Traffic Manager is a DNS-based traffic routing service that directs client requests to appropriate endpoints based on policies such as performance, priority, or geographic location. It influences how traffic reaches services across regions but does not manage VM operating systems or perform patching tasks. Traffic Manager’s scope is network-level traffic routing and resiliency, not configuration management.
For orchestrating automatic security updates and patch deployments across a fleet of virtual machines with scheduling, compliance assessment, and reporting, Update Management is the appropriate choice. It directly addresses the requirement to automate OS security updates without manual intervention.
Question 110:
You need to restrict access to an Azure Storage account so that only traffic from a specific virtual network subnet can reach it. What should you configure?
A) Service endpoints or private endpoint with virtual network integration
B) Shared Access Signature (SAS) tokens with public access
C) Azure CDN with caching rules
D) Azure Reserved IP for the storage account
Answer: A) Service endpoints or private endpoint with virtual network integration
Explanation:
Service endpoints extend virtual network identity to Azure services, allowing traffic from a specific subnet to securely reach Azure Storage over the Azure backbone. This can be combined with storage account firewall rules to allow access only from selected virtual network subnets. Private endpoint provides an even stronger isolation model by assigning a network interface in a subnet with a private IP address that maps directly to the storage account. Private endpoints place traffic on the private network and eliminate public internet access. Both approaches enable restricting storage access to a particular subnet, meeting the requirement for subnet-limited access.
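A sketch of the service-endpoint variant using the azure-mgmt-network and azure-mgmt-storage Python SDKs; all resource names are placeholders, and the subnet is read first because the update uses PUT semantics and would otherwise overwrite its definition:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ServiceEndpointPropertiesFormat
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"
cred = DefaultAzureCredential()
network = NetworkManagementClient(cred, subscription_id)
storage = StorageManagementClient(cred, subscription_id)

# 1) Read the existing subnet, then add the Microsoft.Storage endpoint.
subnet = network.subnets.get("rg-net", "vnet-app", "snet-app")
subnet.service_endpoints = [
    ServiceEndpointPropertiesFormat(service="Microsoft.Storage")
]
subnet = network.subnets.begin_create_or_update(
    "rg-net", "vnet-app", "snet-app", subnet
).result()

# 2) Deny all traffic by default, then allow only that subnet.
storage.storage_accounts.update(
    "rg-data", "stcontoso",
    {
        "network_rule_set": {
            "default_action": "Deny",
            "virtual_network_rules": [
                {"virtual_network_resource_id": subnet.id}
            ],
        }
    },
)
```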
Shared Access Signature tokens provide granular, time-limited access to storage resources and are useful for delegation scenarios. However, SAS tokens do not restrict network-level access by subnet; they grant access via credentials that can be used from any network unless combined with other network restrictions. SAS tokens with public access would not meet the requirement to confine access to a single virtual network subnet.
Azure CDN is a content delivery network used to cache and deliver static content close to end users for improved performance. CDN focuses on distribution and caching, not on enforcing network restrictions between an Azure virtual network and a storage account. CDN can cache storage blobs but does not provide subnet-level access controls to the origin storage account. A reserved IP, finally, is a construct for compute and load-balancer frontends rather than storage accounts; even if the storage endpoint had a fixed public address, a static IP does nothing to restrict which networks are permitted to reach it.
Question 111:
An administrator needs to encrypt data at rest in Azure SQL Database using keys that they manage in Azure Key Vault. Which feature should be enabled?
A) TDE with customer-managed keys (CMK)
B) Transparent Data Encryption with platform-managed keys only
C) Always Encrypted with client-side keys only
D) SQL Authentication with a strong password
Answer: A) TDE with customer-managed keys (CMK)
Explanation:
Transparent Data Encryption (TDE) with customer-managed keys allows database encryption keys to be stored and managed in Azure Key Vault under customer control. This gives the organization control over key rotation and lifecycle policies, and it provides the ability to revoke access to keys if necessary. TDE encrypts data at rest, protecting database files and backups. When configured with customer-managed keys, the encryption is still transparent to applications while offering key management that resides in Key Vault, meeting the requirement for using keys managed by the customer.
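A sketch of the two configuration steps with the azure-mgmt-sql Python SDK, hedged: exact model fields can vary by SDK version, and the vault key shown (name, version, URI) is a placeholder following the <vault>_<key>_<version> server-key naming convention:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

sql = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder key: name follows the <vault>_<key>_<key-version> convention.
key_name = "kv-prod_tde-key_0123456789abcdef0123456789abcdef"
key_uri = (
    "https://kv-prod.vault.azure.net/keys/tde-key/"
    "0123456789abcdef0123456789abcdef"
)

# 1) Register the Key Vault key as a server key on the logical server.
sql.server_keys.begin_create_or_update(
    "rg-sql", "sql-contoso", key_name,
    {"server_key_type": "AzureKeyVault", "uri": key_uri},
).result()

# 2) Point the TDE protector at that key ("current" is a fixed name).
sql.encryption_protectors.begin_create_or_update(
    "rg-sql", "sql-contoso", "current",
    {"server_key_type": "AzureKeyVault", "server_key_name": key_name},
).result()
```

The server also needs permission to use the key (wrapKey/unwrapKey via an access policy or RBAC on the vault), which is omitted here for brevity.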
Transparent Data Encryption with platform-managed keys encrypts data at rest automatically using keys managed by the Azure platform. While it provides encryption and simplifies management, it does not satisfy the requirement for using keys that the organization manages in its own Key Vault. Platform-managed keys lack the customer control aspects such as key rotation ownership and revocation.
Always Encrypted is a feature designed to protect sensitive data by encrypting specific columns on the client side so that plaintext never reaches the database engine. Keys for Always Encrypted can be managed locally or in Key Vault, and it protects data in transit and at rest for those columns. However, Always Encrypted is not a full-database at-rest encryption mechanism and requires client-side involvement and application changes. The requirement specifies encrypting data at rest using Key Vault-managed keys for the database as a whole, which is best met by TDE with customer-managed keys.
SQL Authentication with a strong password secures login credentials for database access but does not encrypt the data at rest. Strong passwords are important for access control but are unrelated to encryption of stored data files and backups. Authentication mechanisms do not substitute for encryption at rest.
Question 112:
You must ensure that a set of Azure resources are deployed together repeatedly and consistently. Which Azure service should you use to define and deploy these resources declaratively?
A) Azure Resource Manager (ARM) templates or Bicep
B) Azure Portal manual creation each time
C) Azure Monitor alerts
D) Azure Blueprints is deprecated and not available
Answer: A) Azure Resource Manager (ARM) templates or Bicep
Explanation:
Azure Resource Manager templates and Bicep provide infrastructure-as-code capabilities, allowing you to declaratively define resources and their dependencies in a template. These templates can be stored in source control, parameterized for different environments, and repeatedly deployed to ensure consistent resource configuration. ARM templates are JSON-based, while Bicep is a more concise domain-specific language that transpiles to ARM templates. Both enable idempotent deployments and are intended for automating and standardizing resource provisioning across environments.
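As an illustration of the idempotent, declarative flow, the sketch below embeds a deliberately tiny ARM template (one storage account) and deploys it with the azure-mgmt-resource Python SDK; all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

resource = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A deliberately tiny template: one storage account, parameterized name.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"storageName": {"type": "string"}},
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2023-01-01",
        "name": "[parameters('storageName')]",
        "location": "[resourceGroup().location]",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

# Incremental mode makes reruns idempotent: resources already matching the
# template are left alone, and missing ones are created.
resource.deployments.begin_create_or_update(
    "rg-app", "deploy-storage",
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": {"storageName": {"value": "stcontosodev001"}},
        }
    },
).result()
```

In practice the template would live in source control as a .json or .bicep file rather than inline, with one parameter file per environment.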
Manually creating resources through the Azure Portal achieves deployment but is error-prone and not repeatable at scale. Manual procedures lack version control, parameterization, and automation benefits required for consistent, repeated deployments. For repeatable environment provisioning, infrastructure-as-code approaches are recommended instead of repeated manual portal actions.
Azure Monitor alerts are designed to detect and notify on metrics, logs, and resource health conditions. Alerts help with operational monitoring and incident response but do not define or deploy resource configurations. They are not a provisioning mechanism and therefore do not meet the need for declarative, repeatable resource deployments. The remaining option is a distractor: Azure Blueprints is in fact being deprecated in favor of template specs and deployment stacks, which only reinforces ARM templates and Bicep as the recommended declarative approach.
Question 113:
A requirement states that a storage account must block all public network access but still allow a single on-premises IP range to access it. Which combination should be used?
A) Disable all public access, use a service endpoint or private endpoint, and configure firewall rule for on-prem IP range via NAT or firewall
B) Keep public access enabled and rely on SAS tokens only
C) Use Azure CDN to front the storage account and allow only CDN IPs
D) Create a dedicated VM to proxy requests and leave storage publicly accessible
Answer: A) Disable all public access, use a service endpoint or private endpoint, and configure firewall rule for on-prem IP range via NAT or firewall
Explanation:
Disabling public network access on the storage account prevents direct access from the internet. A private endpoint places the storage account on the virtual network with a private IP, while service endpoints route virtual network traffic to the storage account over Azure’s backbone. To permit on-premises access, configure the on-premises firewall or NAT gateway to present a known public IP, or use a site-to-site VPN/ExpressRoute that extends the on-premises network into Azure so the storage account can be reached through the private network. Alternatively, configure storage account network rules to allow the specific on-premises public IP range if a secure NAT or firewall architecture is in place. This combination ensures public access is blocked while permitting trusted on-premises sources.
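A minimal sketch of the IP-rule variant with the azure-mgmt-storage Python SDK; 203.0.113.0/24 is a documentation range standing in for the on-premises NAT egress range, and the account names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

storage = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deny by default, then allow only the on-premises egress range.
storage.storage_accounts.update(
    "rg-data", "stcontoso",
    {
        "network_rule_set": {
            "default_action": "Deny",
            "ip_rules": [{"ip_address_or_range": "203.0.113.0/24"}],
            "bypass": "AzureServices",  # optional: keep trusted Azure services working
        }
    },
)
```

Note that IP network rules only accept public addresses; traffic arriving over a private endpoint or service endpoint is evaluated against the virtual network rules instead.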
Keeping public access enabled and relying only on SAS tokens grants credential-based access but does not block network-level public exposure. SAS tokens can be misused if compromised and do not satisfy a strict requirement to block public network access. Additionally, SAS tokens do not restrict access by source IP unless combined with IP restrictions, which would still leave the endpoint publicly routable.
Using Azure CDN to front the storage account is primarily for content delivery and caching. While CDN can reduce direct traffic to the storage account, it is not designed to serve as an access control mechanism for blocking public access to the origin. CDN edge nodes are public and relying on CDN to protect the origin would not strictly meet the requirement to block public network access while allowing a specific on-premises range. Likewise, standing up a dedicated proxy VM while leaving the storage account publicly accessible fails the requirement outright: the public endpoint remains reachable from the internet regardless of whether clients are told to go through the proxy, and the proxy itself becomes an extra component to secure and maintain.
Question 114:
You need to audit administrator actions across an Azure subscription and retain logs for one year to meet compliance. Which service should you enable and where are these logs stored?
A) Azure Activity Log with export to Log Analytics or storage account for retention
B) Azure Advisor recommendations saved in Azure Repos
C) Azure Monitor Metrics retention set to one year for activity events
D) Azure Cost Management billing export
Answer: A) Azure Activity Log with export to Log Analytics or storage account for retention
Explanation:
Azure Activity Log records control-plane operations performed on resources, including administrative actions such as create, update, and delete operations on resources, role assignments, and service health events. By default, Activity Log data is retained for 90 days in the platform, but you can export the logs to a Log Analytics workspace, an Azure Storage account, or Event Hubs for longer retention, analysis, or streaming to SIEM solutions. Exporting to a storage account or Log Analytics allows you to retain logs for a year or longer to meet compliance requirements, run queries and alerts, and maintain an audit trail of administrative activities across the subscription.
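For illustration, administrative events can be pulled from the Activity Log with the azure-mgmt-monitor Python SDK; the filter string follows the Activity Log OData conventions, and the subscription ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

monitor = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Administrative events from the last 7 days; the platform itself only
# retains 90 days, which is why export is needed for the one-year goal.
start = (datetime.now(timezone.utc) - timedelta(days=7)).isoformat()
flt = f"eventTimestamp ge '{start}' and category eq 'Administrative'"

for event in monitor.activity_logs.list(filter=flt):
    print(event.event_timestamp, event.caller, event.operation_name.value)
```

For the long-term retention itself, a diagnostic setting on the subscription routes these events continuously to a Log Analytics workspace, storage account, or Event Hub.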
Azure Advisor provides best-practice recommendations for cost, performance, security, and reliability; however, it is not an auditing service and does not record administrative actions. Storing Advisor recommendations in a code repo is not a mechanism for auditing administrator activity and does not meet the requirement for retaining an audit trail.
Azure Monitor Metrics collects numerical measurements such as CPU, memory percent, and other performance metrics for resources; these are not the control-plane audit events that show who changed what within the subscription. Metrics retention can be configured, but metrics do not capture the administrative action details required for compliance auditing of admin operations. Azure Cost Management billing exports, similarly, capture cost and usage data for financial analysis; they contain no record of administrative operations and cannot serve as an audit trail.
Question 115:
A web application in Azure requires an SSL certificate and automatic renewal. Which Azure service can manage certificates and integrate with App Service for automatic renewal?
A) App Service Managed Certificates or Azure Key Vault with App Service certificate binding
B) Azure Storage static website hosting
C) Azure Functions with HTTP trigger only
D) Network Security Group
Answer: A) App Service Managed Certificates or Azure Key Vault with App Service certificate binding
Explanation:
App Service offers managed certificates for custom domains that simplify provisioning and automatic renewal for TLS/SSL. These managed certificates can be bound to App Service apps and are renewed automatically by Azure, reducing operational overhead. For scenarios requiring wildcard certificates, a specific certificate authority, or tighter control over the certificate lifecycle, certificates can instead be stored in Azure Key Vault and integrated with App Service via certificate binding. Key Vault centralizes certificate lifecycle management, including automated renewal through the Certificate Authorities it integrates with, and App Service can import and bind certificates from Key Vault. These approaches provide the required SSL certificate management and automated renewal capabilities for web applications.
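A minimal sketch of certificate creation in Key Vault with the azure-keyvault-certificates Python library; the vault URL and names are placeholders, and a self-signed policy stands in for a CA-integrated one purely for brevity:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient, CertificatePolicy

client = CertificateClient(
    vault_url="https://kv-prod.vault.azure.net",  # placeholder vault
    credential=DefaultAzureCredential(),
)

# Self-signed policy for illustration; a CA-issued policy would set
# issuer_name to a Key Vault-integrated CA so renewal happens automatically.
poller = client.begin_create_certificate(
    certificate_name="www-contoso-com",
    policy=CertificatePolicy(
        issuer_name="Self",
        subject="CN=www.contoso.com",
        validity_in_months=12,
    ),
)
certificate = poller.result()
print(certificate.name, certificate.properties.expires_on)
```

The resulting certificate can then be imported into App Service and bound to the custom domain, with Key Vault handling rotation.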
Azure Storage static website hosting can serve static content and supports HTTPS via a storage endpoint or CDN, but it does not directly provide certificate lifecycle management integrated with App Service. Storage static website is not relevant for certificate management and automatic renewal for App Service-hosted applications.
Azure Functions can host web endpoints and use TLS through the App Service infrastructure when running on the App Service plan. However, Azure Functions by itself does not manage SSL certificate provisioning and renewal unless integrated into App Service certificate mechanisms or Key Vault. The function runtime is not a certificate management service.
Network Security Group is a network traffic filtering mechanism used to allow or deny inbound and outbound traffic to resources; it has nothing to do with SSL certificate issuance or renewal. NSGs enforce network-level rules but do not manage TLS certificates.
Question 116:
You need to create a backup strategy for VMs that meets the following: daily backups, retention of 30 days, and ability to restore individual files. Which Azure service should you use?
A) Azure Backup (VM backup) with file-level restore enabled
B) Azure Site Recovery only for replication
C) Manual snapshot and copy to storage account scripts
D) Azure Disk Encryption
Answer: A) Azure Backup (VM backup) with file-level restore enabled
Explanation:
Azure Backup is a fully managed backup solution provided by Microsoft Azure that is designed to protect critical workloads running in the cloud, particularly Azure virtual machines (VMs). It offers a comprehensive set of features for backup scheduling, retention, and recovery, allowing organizations to safeguard their data with minimal operational overhead. One of the key strengths of Azure Backup is its ability to configure backup policies according to organizational requirements. Administrators can define backup schedules to run daily or at other frequencies and can specify retention policies that determine how long recovery points are kept. For example, recovery points can be retained for a specified number of days, such as 30, ensuring that historical versions of data are available for restoration when needed. This granular control over backup scheduling and retention helps organizations meet compliance, regulatory, and business continuity requirements without needing to manage complex scripts or custom processes.
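To picture what such a policy contains, the sketch below spells out the shape of a Microsoft.RecoveryServices/vaults/backupPolicies payload as a Python dict; property names follow the REST API documentation, and the run time is a placeholder:

```python
import json

# Daily backup at 02:00 UTC, keep recovery points for 30 days.
backup_policy = {
    "properties": {
        "backupManagementType": "AzureIaasVM",
        "schedulePolicy": {
            "schedulePolicyType": "SimpleSchedulePolicy",
            "scheduleRunFrequency": "Daily",
            "scheduleRunTimes": ["2024-01-01T02:00:00Z"],
        },
        "retentionPolicy": {
            "retentionPolicyType": "LongTermRetentionPolicy",
            "dailySchedule": {
                "retentionTimes": ["2024-01-01T02:00:00Z"],
                "retentionDuration": {"count": 30, "durationType": "Days"},
            },
        },
    },
}
print(json.dumps(backup_policy, indent=2))
```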
In addition to standard VM backups, Azure Backup supports file-level recovery for supported operating systems, which significantly enhances operational flexibility. File-level recovery allows administrators or users to restore individual files from VM backups without performing a full VM restore. This is particularly valuable in scenarios where a single file or folder is accidentally deleted, corrupted, or modified, as it avoids the need to recover the entire virtual machine, saving both time and system resources. Azure Backup achieves this through integration with Recovery Services vaults, which act as a centralized repository for managing backup data. Recovery Services vaults provide a unified interface for monitoring backup jobs, configuring policies, and performing restores, offering a streamlined management experience. This centralized approach simplifies reporting and auditing, allowing administrators to track backup health, compliance with policies, and recovery point availability.
In contrast, Azure Site Recovery (ASR) serves a different purpose. ASR focuses on disaster recovery by replicating virtual machines and workloads to a secondary region or on-premises environment, enabling failover in the event of a regional outage or catastrophic failure. While ASR provides replication, failover orchestration, and testing capabilities, it is not designed as a primary backup solution. It does not inherently provide configurable retention policies or file-level restore functionality in the way that Azure Backup does. ASR is intended for maintaining business continuity and operational availability during disasters rather than providing historical recovery points for long-term data retention.
Another alternative approach is to use manual snapshots of VM disks combined with custom scripts to copy the virtual hard disks (VHDs) to Azure Storage accounts. While this method can create a form of backup, it introduces significant operational complexity. Administrators are responsible for scheduling snapshots, copying VHDs, managing storage, and implementing retention policies manually. Moreover, restoring individual files from raw snapshots can be complex, error-prone, and time-consuming. Compared to Azure Backup, manual snapshot management lacks built-in file-level recovery workflows, automated retention management, and centralized monitoring. This increases the risk of human error and potential data loss, making it less reliable for enterprise backup scenarios. Azure Disk Encryption, the final option, encrypts OS and data disks at rest using BitLocker on Windows or dm-crypt on Linux; it protects data confidentiality but is not a backup mechanism, creating no recovery points, enforcing no retention, and offering no file-level restore.
Overall, Azure Backup provides a comprehensive, fully managed, and reliable solution for protecting Azure virtual machines. With its configurable backup schedules, retention policies, file-level recovery, and integration with Recovery Services vaults, it directly addresses the need for automated, secure, and easily manageable backup and restore capabilities. It ensures that organizations can recover critical data efficiently, meeting both operational and compliance requirements, while minimizing administrative burden and risk.
Question 117:
You must monitor CPU and memory usage of VMs and send alerts when thresholds are breached. Which components should you use?
A) Azure Monitor metrics with alert rules and Log Analytics agent for guest OS metrics
B) Azure Policy to enforce VM sizes
C) Azure Firewall logs only
D) Azure Bastion for monitoring
Answer: A) Azure Monitor metrics with alert rules and Log Analytics agent for guest OS metrics
Explanation:
Azure Monitor is Microsoft’s comprehensive monitoring solution for Azure resources, providing deep insights into the performance, health, and availability of virtual machines, applications, and other services. One of the key capabilities of Azure Monitor is its metrics collection, which provides platform-level telemetry for resources such as virtual machines. For example, standard metrics like CPU utilization, disk I/O, and network throughput are available natively without requiring additional configuration. Administrators can define alert rules based on these metrics so that when a specified threshold is exceeded, notifications or automated actions can be triggered. This allows IT teams to respond proactively to potential performance issues, minimizing downtime and ensuring that applications continue running smoothly.
While platform metrics like CPU percentage are available by default, more detailed guest operating system metrics, such as memory usage, page file consumption, and disk space, require installing agents within the virtual machine. Azure provides two main agents for this purpose: the Log Analytics agent and the Azure Monitor agent. These agents collect performance counters, events, and other telemetry directly from the VM’s operating system and send this data to a Log Analytics workspace. Once the data is in the workspace, administrators can run custom queries, visualize metrics through dashboards, and create alert rules that monitor thresholds or anomalies. By combining agent-collected metrics with alert rules, IT teams can establish proactive monitoring for both CPU and memory utilization, enabling early detection of performance degradation and facilitating timely remediation before user experience or system stability is affected. This integrated approach ensures comprehensive monitoring coverage for both platform-level and guest-level metrics.
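A sketch of a CPU alert rule with the azure-mgmt-monitor Python SDK; the VM resource ID and action group ID are placeholders, and the dict mirrors the REST payload for a static-threshold metric alert:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

monitor = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/rg-web"
    "/providers/Microsoft.Compute/virtualMachines/vm-web-01"
)

monitor.metric_alerts.create_or_update(
    "rg-web", "alert-high-cpu",
    {
        "location": "global",          # metric alert rules are global resources
        "description": "CPU above 80% averaged over 5 minutes",
        "severity": 2,
        "enabled": True,
        "scopes": [vm_id],
        "evaluation_frequency": "PT5M",
        "window_size": "PT5M",
        "criteria": {
            "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
            "all_of": [{
                "criterion_type": "StaticThresholdCriterion",
                "name": "HighCpu",
                "metric_name": "Percentage CPU",  # platform metric, no agent needed
                "operator": "GreaterThan",
                "threshold": 80,
                "time_aggregation": "Average",
            }],
        },
        "actions": [{"action_group_id": "<action-group-resource-id>"}],
    },
)
```

A memory alert would follow the same pattern but target the agent-collected guest metric (or a log query alert over the workspace), since memory is not a platform metric for VMs.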
In contrast, Azure Policy serves a completely different purpose. It is a governance and compliance tool designed to enforce rules and conventions on Azure resources. For example, Azure Policy can restrict the sizes of virtual machines that can be deployed, enforce required tagging standards, or prevent deployment of resources in non-compliant regions. While Azure Policy is extremely valuable for ensuring that resources are deployed and maintained according to organizational standards, it does not collect runtime performance data. It cannot monitor CPU, memory, or other operational metrics, nor can it generate alerts based on real-time resource utilization. Its focus is on configuration compliance rather than operational monitoring.
Similarly, Azure Firewall is a network security service that inspects and filters inbound and outbound traffic and logs security-related events and threats. Firewall logs are useful for auditing, detecting malicious activity, and investigating security incidents. However, firewall logs do not contain performance metrics for virtual machines, such as CPU usage or memory consumption. Using firewall logs alone would not satisfy the requirement for monitoring and alerting on resource utilization, as they provide security visibility rather than operational performance insights.
Azure Bastion, on the other hand, is a secure connectivity service that allows administrators to access virtual machines through RDP or SSH directly in the Azure portal without exposing public IP addresses. While Bastion enhances security and simplifies remote access, it does not provide monitoring or alerting capabilities. It cannot track CPU, memory, or other performance metrics, nor can it trigger alerts based on system utilization.
In summary, Azure Monitor, combined with the Log Analytics agent or Azure Monitor agent, provides the required capabilities for proactive performance monitoring. It allows collection of both platform-level and guest-level metrics, supports visualization and analysis through Log Analytics workspaces, and enables alerting on CPU, memory, and other critical resource thresholds, ensuring comprehensive operational monitoring for Azure virtual machines.
Question 118:
A developer needs to deploy an ARM template that creates resources in multiple subscriptions. What is the recommended approach?
A) Use Azure DevOps or GitHub Actions to run deployments with proper service connections targeting each subscription sequentially
B) Use a single ARM template deployment from the portal limited to current subscription only
C) Use Azure Policy to automatically create resources in other subscriptions
D) Use a VPN to connect subscriptions and then deploy locally
Answer: A) Use Azure DevOps or GitHub Actions to run deployments with proper service connections targeting each subscription sequentially
Explanation:
Deployment orchestration tools like Azure DevOps Pipelines or GitHub Actions are designed to run automation workflows that can authenticate to multiple subscriptions using service principals or managed identities. By configuring separate service connections for each target subscription, pipelines can deploy the same ARM template or Bicep files to multiple subscriptions in sequence or parallel as required. This approach supports parameterization, environment separation, repeatability, and auditable deployment runs, making it the recommended method for multi-subscription deployments.
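A stripped-down sketch of the sequential pattern in Python with azure-mgmt-resource; in a real pipeline each subscription would map to its own service connection or service principal, and main.json stands in for the template held in source control:

```python
import json
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
target_subscriptions = ["<sub-id-dev>", "<sub-id-test>", "<sub-id-prod>"]

# Shared template kept in source control alongside the pipeline definition.
template = json.loads(Path("main.json").read_text())

for sub_id in target_subscriptions:
    # One client per target subscription; assumes rg-shared exists in each.
    client = ResourceManagementClient(credential, sub_id)
    client.deployments.begin_create_or_update(
        "rg-shared", "rollout",
        {"properties": {"mode": "Incremental", "template": template}},
    ).result()  # wait for completion before moving to the next subscription
```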
Deploying a single ARM template from the Azure Portal is typically scoped to the current subscription and cannot natively target multiple subscriptions in a single portal deployment session. The portal is intended for interactive management and is not ideal for orchestrating multi-subscription automated deployments at scale.
Azure Policy enforces rules and can deploy resources through initiatives or policy-driven deployments in some contexts; however, it is intended for governance and compliance enforcement rather than orchestrating general-purpose resource provisioning workflows across multiple subscriptions. Policy is not the primary tool for controlled template deployments to multiple subscriptions.
Using a VPN to connect subscriptions is not meaningful because subscriptions are logical constructs under an Azure tenant; networking connectivity between virtual networks does not change the deployment scope of ARM templates. A VPN does not enable cross-subscription deployment from a single template execution without appropriate orchestration and authentication mechanisms.
Question 119:
You need to enforce tagging and a set of required tags for all newly created resources in a subscription. Which Azure feature will help you ensure resources are tagged at creation?
A) Azure Policy with a policy definition that requires tags and can append or deny non-compliant resources
B) Resource Locks set to ReadOnly on the subscription
C) Role Assignments to grant users tagging permissions
D) Azure Monitor Autoscale rules
Answer: A) Azure Policy with a policy definition that requires tags and can append or deny non-compliant resources
Explanation:
Azure Policy enables governance by evaluating resources against defined rules and can enforce organizational standards. A policy can be created to require specific tags and values on resource creation. Policy effects like append can automatically add tags with default values when resources are created, while the deny effect can block creation of untagged resources. Policies can be assigned at subscription or management group scopes, ensuring consistent enforcement across environments. Using Azure Policy for tag enforcement provides centralized, automated governance that ensures compliance at the moment of resource provisioning.
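For illustration, a tag-requiring policy can be defined and assigned with the azure-mgmt-resource PolicyClient; the costCenter tag name and all identifiers are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient

policy = PolicyClient(DefaultAzureCredential(), "<subscription-id>")

# Deny any resource created without a costCenter tag.
definition = policy.policy_definitions.create_or_update(
    "require-costcenter-tag",
    {
        "policy_type": "Custom",
        "mode": "Indexed",  # skip resource types that do not support tags
        "display_name": "Require costCenter tag",
        "policy_rule": {
            "if": {"field": "tags['costCenter']", "exists": "false"},
            "then": {"effect": "deny"},
        },
    },
)

# Assign it at subscription scope so it applies to all new resources.
policy.policy_assignments.create(
    "/subscriptions/<subscription-id>",
    "require-costcenter-tag-assignment",
    {"policy_definition_id": definition.id},
)
```

Swapping the effect from deny to append (with a details block supplying a default value) would silently stamp the tag onto new resources instead of blocking them.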
Resource Locks set to ReadOnly prevent modifications or deletions of resources but do not enforce tagging on creation. Locks are useful for protecting critical resources from accidental changes but are not a mechanism for requiring tags or enforcing resource creation policies.
Role Assignments control who has permissions to perform actions like create or update resources. While roles determine the ability to assign tags, permissions alone do not enforce that tags are applied at creation time. Relying on role assignments for tag enforcement is ineffective because it depends on user behavior and does not provide automated compliance.
Azure Monitor Autoscale rules adjust resource scale based on performance metrics and do not provide any governance features related to tagging. Autoscale is for dynamic scaling of compute resources and is unrelated to resource metadata enforcement.
To ensure required tags are present on newly created resources and to automate enforcement or remediation, defining and assigning an Azure Policy tailored to tags is the proper solution, offering immediate compliance checks and the ability to append or deny non-compliant resources.
Question 120:
You need to allow secure, delegated access to a specific blob for a partner for 4 hours without sharing the storage account keys. What should you use?
A) Shared Access Signature (SAS) with limited permissions and expiry
B) Storage account access key shared via email
C) Enable anonymous public access on the container
D) Create a new storage account and give full access
Answer: A) Shared Access Signature (SAS) with limited permissions and expiry
Explanation:
Shared Access Signatures (SAS) in Azure Storage are a secure and flexible mechanism to grant controlled access to storage resources without exposing the storage account keys. SAS tokens allow administrators to delegate access to blobs, containers, queues, or tables with fine-grained permissions, such as read, write, delete, or list operations. They can be scoped to a specific resource and configured with a limited validity period, making them ideal for scenarios where temporary, time-bound access is needed. In the scenario described, a SAS token set to expire after four hours would allow a partner or external user to perform specific operations on the designated storage resource within a strict time window. This limited-duration access ensures that resources remain secure and that permissions are automatically revoked when the SAS token expires, mitigating the risk of unauthorized access over time.
SAS tokens also support additional security constraints, such as restricting access to specific IP addresses or ranges and enforcing HTTPS-only access. This further strengthens security by ensuring that only approved clients from trusted locations can use the token and that data in transit remains encrypted. The combination of scoped permissions, time-limited access, and optional network restrictions makes SAS tokens an ideal solution for scenarios requiring controlled, temporary access, such as sharing data with external partners, vendors, or contractors, without compromising the security of the storage account.
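A minimal sketch of generating such a token with the azure-storage-blob Python library; account, container, and blob names are placeholders, and the account key used to sign the token never leaves your side:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_name = "stcontoso"        # placeholder account
account_key = "<account-key>"     # used only to sign; never shared

# Read-only token for one blob, valid for 4 hours, HTTPS only.
sas = generate_blob_sas(
    account_name=account_name,
    container_name="partner-drop",
    blob_name="report.csv",
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=4),
    protocol="https",
)

url = f"https://{account_name}.blob.core.windows.net/partner-drop/report.csv?{sas}"
print(url)  # hand this URL to the partner; it stops working after 4 hours
```

A user delegation SAS (signed with an Azure AD key rather than the account key) follows the same pattern and avoids touching account keys entirely.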
In contrast, sharing the storage account access key is a highly insecure approach. Account keys provide full administrative access to the entire storage account, including all containers, blobs, and other resources. If an account key is exposed or misused, the security of the storage account is entirely compromised. Unlike SAS tokens, account keys cannot be restricted in scope or time, meaning that anyone with the key has permanent, unrestricted access. This approach also violates the principle of least privilege, which recommends granting only the minimum necessary permissions for a specific task. Sharing account keys for temporary or scoped access is therefore considered a significant security risk and should be avoided.
Similarly, enabling anonymous public access on a container is not suitable for controlled partner access. Public access allows anyone with the URL to read blobs or containers depending on the settings, providing no authentication, auditing, or expiration. While convenient for public-facing content, it lacks the security controls required for sensitive data sharing. Public access cannot be restricted to specific users, scoped to individual resources, or limited in duration, making it inappropriate when precise, auditable access is needed.
Creating a separate storage account and granting full access to the partner is another possible solution but introduces unnecessary operational complexity. Managing multiple storage accounts for different partners increases administrative overhead and does not inherently solve the problem of temporary, scoped access. It also carries similar security risks if full access is granted. SAS tokens provide the same outcome—controlled access to specific resources—without the management burden of additional accounts.
Overall, SAS tokens are the recommended method for securely sharing Azure Storage resources. They offer precise control over permissions, time-limited access, and optional network restrictions, ensuring secure, auditable, and temporary access for external partners while preserving the integrity and security of the storage account. This makes SAS the ideal solution for delegated access scenarios.