Microsoft DP-300 Administering Azure SQL Solutions Exam Dumps and Practice Test Questions Set 14 Q 196-210
Question 196:
You have an Azure SQL Database that experiences high CPU usage during business hours. You need to identify the queries that are consuming the most CPU resources. Which tool should you use?
A) Query Performance Insight
B) Database Advisor
C) Azure Monitor
D) SQL Server Profiler
Correct Answer: A
Explanation:
Identifying performance issues in Azure SQL Database requires understanding the various monitoring and diagnostic tools available in the Azure ecosystem. Query performance problems are among the most common causes of database performance degradation, and Azure provides specialized tools designed to help database administrators identify problematic queries quickly and efficiently. Understanding which tool provides the most direct and actionable information for query-level performance analysis is essential for effective database administration.
Query Performance Insight is a built-in Azure SQL Database feature specifically designed to help identify and analyze query performance issues. This tool provides a visual interface that shows which queries are consuming the most resources, including CPU, data IO, and log IO. It displays top resource-consuming queries over configurable time periods, shows query execution statistics, and provides the query text, drawing its data from Query Store. Query Performance Insight is specifically optimized for Azure SQL Database and provides immediate visibility into query-level performance without requiring additional configuration.
When investigating high CPU usage, Query Performance Insight allows you to quickly identify which specific queries are responsible for the elevated CPU consumption. The tool displays queries ranked by their CPU consumption, shows how many times each query executed, displays average and total CPU time, and provides trend information showing how query performance changes over time. This makes it the ideal first tool for diagnosing query-related CPU issues in Azure SQL Database.
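Because Query Performance Insight surfaces data that Query Store collects, the same ranking can also be reproduced with T-SQL if you prefer to work outside the portal. A minimal sketch, assuming Query Store is enabled (it is on by default in Azure SQL Database), that lists the top CPU consumers over the last 24 hours using the standard Query Store catalog views:

```sql
-- Top 10 CPU-consuming queries over the last 24 hours, aggregated from Query Store
SELECT TOP (10)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions) AS total_executions,
    SUM(rs.avg_cpu_time * rs.count_executions) / 1000 AS total_cpu_ms   -- avg_cpu_time is in microseconds
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
    ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE i.start_time >= DATEADD(hour, -24, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_ms DESC;
```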
A) is correct because Query Performance Insight is specifically designed for identifying resource-consuming queries in Azure SQL Database. It provides immediate visibility into which queries are consuming the most CPU and IO resources through an intuitive visual interface. For the scenario of high CPU usage during business hours, Query Performance Insight allows you to view the time period when CPU was high and immediately see which queries were executing and consuming resources during that timeframe. The tool requires no additional setup and is available directly in the Azure portal for any Azure SQL Database.
B) is incorrect because Database Advisor (also called SQL Database Advisor or Azure SQL Database Automatic Tuning recommendations) focuses on providing recommendations for performance improvements such as creating indexes, dropping unused indexes, or parameterizing queries. While Database Advisor can identify queries that would benefit from optimization, it’s recommendation-focused rather than diagnostic-focused. It doesn’t provide the real-time or historical view of which queries are currently consuming the most CPU that Query Performance Insight offers, making it less suitable for immediate performance investigation.
C) is incorrect because while Azure Monitor provides comprehensive monitoring capabilities including metrics for CPU usage at the database level, it doesn’t provide query-level granularity out of the box. Azure Monitor can show you that CPU is high and display trends over time, but it doesn’t automatically identify which specific queries are causing the high CPU consumption. You would need to configure additional diagnostic settings and potentially use Log Analytics with query store data to get query-level insights, making it more complex than using Query Performance Insight for this specific scenario.
D) is incorrect because SQL Server Profiler is a tool designed for on-premises SQL Server instances and is not the appropriate tool for Azure SQL Database. While SQL Server Profiler can trace and capture query execution details, it cannot connect directly to Azure SQL Database for tracing purposes due to security and architecture limitations. Azure SQL Database uses different tools and approaches for performance monitoring, and attempting to use Profiler would not work. The cloud-native equivalent functionality is provided through Query Performance Insight and Query Store.
Question 197:
You are implementing a backup strategy for an Azure SQL Database. The business requires the ability to restore the database to any point in time within the last 14 days. Which backup feature provides this capability?
A) Point-in-time restore (PITR)
B) Long-term retention (LTR)
C) Geo-restore
D) Database copy
Correct Answer: A
Explanation:
Understanding backup and restore capabilities in Azure SQL Database is fundamental for database administrators who need to implement appropriate data protection strategies. Azure SQL Database provides automated backup features that protect data without requiring manual intervention, but understanding the different backup types and their retention capabilities is essential for meeting business recovery requirements. Different backup features serve different purposes and have different retention characteristics.
Point-in-time restore (PITR) is the built-in backup feature that enables restoring an Azure SQL Database to any specific point in time within the retention period. Azure SQL Database automatically performs full backups weekly, differential backups every 12-24 hours, and transaction log backups every 5-10 minutes. These automated backups enable point-in-time restore capability, allowing you to restore a database to any second within the retention period. The default retention period is 7 days, and it can be configured from 1 to 35 days (the Basic tier supports a maximum of 7 days).
For the scenario requiring restore capability within the last 14 days, PITR provides exactly this functionality. You can configure the backup retention period to 14 days or use a tier that provides this retention by default. When you need to restore, you can specify the exact date and time you want to restore to, and Azure will restore the database to that precise point using the combination of full, differential, and transaction log backups. This granular restore capability is essential for recovering from data corruption, accidental deletions, or application errors.
A) is correct because point-in-time restore (PITR) is specifically designed to provide the ability to restore a database to any point in time within a configurable retention period. For a 14-day recovery requirement, you would configure the backup retention to at least 14 days, and PITR would then allow restoring to any second within that 14-day window. This is the standard Azure SQL Database backup feature that meets short-term recovery requirements and provides the granular recovery capability described in the scenario.
B) is incorrect because long-term retention (LTR) is designed for extended backup retention beyond the PITR window, typically for compliance or archival purposes. LTR allows you to retain full database backups for up to 10 years with weekly, monthly, or yearly retention schedules. However, LTR only retains full backups at specified intervals, not continuous point-in-time restore capability. While LTR is valuable for long-term compliance, it doesn’t provide the granular point-in-time restore within a 14-day window that the scenario requires.
C) is incorrect because geo-restore is a disaster recovery feature that allows restoring a database from geo-redundant backups to any Azure region. While geo-restore uses the same automated backups as PITR, it’s specifically designed for regional outage scenarios where you need to restore a database in a different region. Geo-restore provides recovery capability but is focused on geographic redundancy and disaster recovery rather than being the primary point-in-time restore mechanism. The appropriate feature for the stated requirement is PITR, not geo-restore.
D) is incorrect because database copy creates a transactionally consistent copy of a database at a specific point in time, but it’s not a backup feature. Database copy is used for creating copies for development, testing, or migrating databases, not for ongoing backup and restore operations. It doesn’t provide automated continuous backup capability or point-in-time restore functionality. Database copy is a one-time operation that creates a snapshot copy, whereas backup features provide continuous protection and flexible restore options.
Question 198:
You need to configure an Azure SQL Database to automatically scale compute resources based on workload demands. The database should scale up during peak hours and scale down during off-peak hours to optimize costs. Which feature should you implement?
A) Serverless compute tier
B) Hyperscale service tier
C) Elastic pool
D) Read scale-out
Correct Answer: A
Explanation:
Understanding the various compute options available in Azure SQL Database is essential for optimizing both performance and cost. Different compute models provide different scaling behaviors and cost structures, and selecting the appropriate model depends on workload characteristics and business requirements. Azure SQL Database offers several compute tiers and configurations, each designed for specific usage patterns and scaling needs.
The serverless compute tier is specifically designed for databases with intermittent and unpredictable usage patterns. Unlike the provisioned compute tier where you pay for a fixed amount of compute capacity regardless of actual usage, the serverless tier automatically scales compute resources up and down based on workload demand. It can automatically pause the database during periods of inactivity to eliminate compute charges, and automatically resume when activity resumes. The serverless tier charges based on actual compute usage measured in vCore-seconds.
For scenarios requiring automatic scaling based on workload patterns, the serverless tier provides exactly this capability. You configure a minimum and maximum vCore range, and the database automatically scales within this range based on workload demands. During peak hours when query load increases, the database automatically allocates more compute resources up to the configured maximum. During off-peak hours when load decreases, compute resources are automatically reduced to the minimum or the database can pause if there’s no activity, significantly reducing costs while maintaining performance during active periods.
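Moving an existing database to the serverless tier is typically done in the Azure portal, PowerShell, or the Azure CLI, but the service objective itself can also be changed with T-SQL. A minimal sketch (the database name and service objective are illustrative; the minimum vCore setting and auto-pause delay are configured through the portal, PowerShell, or CLI rather than T-SQL):

```sql
-- Switch an existing database to a serverless General Purpose service objective
-- (GP_S_Gen5_2 = serverless, Gen5 hardware, maximum of 2 vCores)
ALTER DATABASE [SalesDb]
MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');
```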
A) is correct because the serverless compute tier is specifically designed for automatic scaling based on workload demand with a pay-per-use pricing model. It automatically scales compute resources up during high demand and down during low demand, and can even pause during inactivity to eliminate compute costs entirely. For the scenario requiring automatic scaling during peak and off-peak hours to optimize costs, serverless provides exactly this functionality without requiring manual intervention or scheduled scaling scripts. The automatic pause and resume capability further optimizes costs during extended periods of inactivity.
B) is incorrect because while Hyperscale is a service tier that provides excellent scalability for very large databases (up to 100 TB), it doesn’t automatically scale compute resources based on workload patterns. Hyperscale primarily provides rapid storage scaling and fast backup/restore capabilities through its multi-tier storage architecture. Compute scaling in Hyperscale still requires manual intervention or scripting to change service objectives. Hyperscale is designed for large-scale database scenarios rather than automatic cost optimization through workload-based scaling.
C) is incorrect because elastic pools are designed for managing and scaling multiple databases that share a pool of resources, not for automatic scaling of individual databases based on workload patterns. Elastic pools help optimize costs when you have multiple databases with varying usage patterns by allowing them to share resources, but they don’t automatically scale the pool’s resources based on aggregate workload. You would still need to manually adjust pool capacity or use automation scripts to scale the pool, and it doesn’t provide the automatic per-database scaling that the scenario requires.
D) is incorrect because read scale-out is a feature that provides additional read-only replicas for offloading read workloads, not for automatically scaling compute resources based on overall workload demand. Read scale-out allows you to route read-only queries to secondary replicas, improving read performance and isolating read workloads from primary write workloads. However, it doesn’t automatically scale compute capacity up and down or provide cost optimization based on usage patterns. It’s a high availability and read performance feature, not an automatic scaling solution.
Question 199:
You have an Azure SQL Managed Instance that hosts multiple databases. You need to restore one of the databases to a point in time 10 days ago. Which restore method should you use?
A) Point-in-time restore using Azure portal or PowerShell
B) Restore from long-term retention backup
C) Restore using SQL Server Management Studio backup restore
D) Database copy from another managed instance
Correct Answer: A
Explanation:
Understanding backup and restore operations in Azure SQL Managed Instance is crucial for database administrators responsible for data protection and recovery. While Azure SQL Managed Instance provides SQL Server-like capabilities, its backup and restore mechanisms are different from on-premises SQL Server and even from Azure SQL Database. Managed Instance provides automated backups with point-in-time restore capabilities, but the methods for performing restores have specific requirements and limitations.
Azure SQL Managed Instance automatically performs full, differential, and transaction log backups to enable point-in-time restore. The default backup retention period is 7 days, but can be configured up to 35 days for point-in-time restore purposes. These automated backups are managed by Azure and stored in geo-redundant storage by default (the backup storage redundancy is configurable). To restore a database to a previous point in time, you use Azure management tools such as the Azure portal, Azure PowerShell, Azure CLI, or REST APIs rather than traditional SQL Server restore commands.
For the scenario requiring restore to a point 10 days ago, you would need to ensure the backup retention period is configured to at least 10 days. Then you would use the Azure portal or PowerShell to initiate a point-in-time restore operation. In the Azure portal, you navigate to the database, select the restore option, specify the target point in time (10 days ago), and provide a name for the restored database. The restore operation creates a new database with the specified name containing data as it existed at the specified point in time.
A) is correct because point-in-time restore using Azure portal or PowerShell is the standard method for restoring Azure SQL Managed Instance databases to previous points in time within the retention period. This method uses the automated backups that Azure SQL Managed Instance continuously creates and allows you to specify the exact date and time you want to restore to. For restoring to a point 10 days ago, assuming backup retention is configured appropriately, this is the correct and only method for performing the restore operation in Managed Instance.
B) is incorrect because long-term retention backups are designed for retaining backups beyond the standard PITR retention period and are typically used for compliance purposes with weekly, monthly, or yearly retention schedules. While LTR could theoretically contain a backup from 10 days ago if configured appropriately, the standard approach for restoring within the typical retention window (up to 35 days) is point-in-time restore, not LTR. LTR is intended for longer retention periods and doesn’t provide the granular second-level restore capability that PITR offers for recent timeframes.
C) is incorrect because SQL Server Management Studio’s traditional backup and restore commands (BACKUP DATABASE and RESTORE DATABASE T-SQL statements) are not supported for Azure SQL Managed Instance for system-managed backups. While you can create copy-only backups to Azure Blob Storage using BACKUP TO URL for certain scenarios, you cannot use RESTORE DATABASE to restore from Azure’s automated backups. The automated backup and restore functionality in Managed Instance must be managed through Azure management interfaces, not through traditional SQL Server restore commands.
D) is incorrect because database copy is not a restore method and doesn’t provide the capability to restore to a previous point in time. Database copy in the context of Managed Instance typically refers to copying a database from one instance to another at the current point in time, not restoring historical data. This operation would create a current copy of the database, not restore it to its state from 10 days ago. Database copy is useful for creating duplicates for testing or migration purposes but is not a backup restore mechanism.
Question 200:
You are configuring high availability for an Azure SQL Database. The solution must provide automatic failover with minimal data loss in case of a regional outage. Which feature should you implement?
A) Active geo-replication
B) Auto-failover groups
C) Zone-redundant configuration
D) Read scale-out
Correct Answer: B
Explanation:
High availability and disaster recovery planning for Azure SQL Database requires understanding the different features available and their specific capabilities. Azure provides several options for ensuring database availability during failures, ranging from local redundancy to cross-region replication. Each option provides different levels of protection, recovery objectives, and automatic failover capabilities. Selecting the appropriate high availability solution depends on requirements for recovery time, recovery point, geographic distribution, and automation.
Auto-failover groups provide managed instance-level or server-level replication and automatic failover capabilities across Azure regions. This feature creates a readable secondary database in a different region and configures automatic failover policies. In the event of a regional outage or other failure affecting the primary region, auto-failover groups automatically initiate failover to the secondary region based on configured policies. The failover includes updating DNS records to redirect applications to the new primary, making the failover process transparent to applications using the listener endpoint.
Auto-failover groups provide several advantages for disaster recovery scenarios. They support multiple databases in a single failover group, ensuring all related databases fail over together maintaining application consistency. They provide a read-write listener endpoint that always points to the current primary and a read-only listener endpoint that points to the secondary, allowing applications to use fixed connection strings. They enable configurable automatic failover policies, including a grace period before automatic failover is triggered, and because the cross-region replication is asynchronous, the grace period lets you control how much potential data loss is acceptable before failover proceeds.
B) is correct because auto-failover groups provide both automatic failover capability and cross-region protection for regional outages. Unlike active geo-replication which requires manual failover initiation, auto-failover groups automatically detect failures and initiate failover to the secondary region based on configured policies. The feature provides listener endpoints that automatically redirect connections to the current primary database, minimizing application changes and downtime. For requirements including automatic failover and regional outage protection with minimal data loss, auto-failover groups are the appropriate comprehensive solution.
A) is incorrect because while active geo-replication provides cross-region database replication and the ability to failover to a secondary region, it does not provide automatic failover capabilities. Active geo-replication requires manual initiation of failover through the Azure portal, PowerShell, or API. After failover, applications must be reconfigured to connect to the new primary database as connection strings need to be updated. While active geo-replication is valuable for disaster recovery, the requirement for automatic failover makes auto-failover groups the better choice as they include automatic failover orchestration and listener endpoints.
C) is incorrect because zone-redundant configuration provides high availability within a single region by distributing database replicas across availability zones, but it does not protect against regional outages. Zone-redundancy protects against datacenter-level failures within a region by ensuring replicas exist in different physical locations, but if an entire region becomes unavailable, zone-redundancy would not provide protection. For the requirement of protecting against regional outages, cross-region solutions like auto-failover groups are necessary rather than within-region redundancy features.
D) is incorrect because read scale-out is a performance feature that provides read-only replicas for offloading read workloads, not a high availability or disaster recovery feature. Read scale-out allows directing read-only queries to secondary replicas to improve performance and resource utilization, but these replicas are typically in the same region and don’t provide protection against regional outages. Read scale-out doesn’t include automatic failover capabilities or cross-region protection, making it unsuitable for the stated disaster recovery requirements.
Question 201:
You need to monitor deadlocks occurring in an Azure SQL Database. You want to capture detailed information about the queries and resources involved in deadlocks. Which feature should you enable?
A) Extended events
B) Query Store
C) Database Advisor
D) Automatic tuning
Correct Answer: A
Explanation:
Monitoring and troubleshooting concurrency issues such as deadlocks is an important aspect of database administration. Deadlocks occur when two or more transactions hold locks on resources that the other transactions need, creating a circular dependency that prevents any transaction from proceeding. Capturing detailed information about deadlocks is essential for identifying the queries involved, understanding the lock chain, and implementing solutions to prevent future occurrences.
Extended events is a lightweight performance monitoring system built into Azure SQL Database that allows capturing detailed diagnostic information about database operations. For deadlock monitoring, you can create a database-scoped event session that captures deadlock report events (in Azure SQL Managed Instance, the default system_health session already records deadlock graphs). Extended events can capture complete deadlock graphs showing all transactions involved, the resources they were trying to access, and the lock types causing the deadlock.
When troubleshooting deadlocks, extended events provides the most comprehensive information. The deadlock graph captured by extended events shows the circular lock chain, includes the query text for all statements involved in the deadlock, displays lock modes and resources, and identifies which transaction was chosen as the deadlock victim. This detailed information is essential for understanding why deadlocks occur and implementing solutions such as adjusting transaction isolation levels, modifying query access patterns, or adding appropriate indexes.
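As a concrete illustration, a database-scoped event session that collects deadlock reports into a ring buffer can be created with T-SQL along the lines of the sketch below (the session name is arbitrary; the captured XML can then be read through sys.dm_xe_database_sessions and sys.dm_xe_database_session_targets):

```sql
-- Database-scoped Extended Events session that captures deadlock graphs
CREATE EVENT SESSION [capture_deadlocks] ON DATABASE
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.ring_buffer (SET max_memory = 4096)  -- keep recent deadlock XML in memory (KB)
WITH (STARTUP_STATE = ON);                                -- start automatically with the database

ALTER EVENT SESSION [capture_deadlocks] ON DATABASE STATE = START;
```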
A) is correct because extended events is the appropriate feature for capturing detailed deadlock information in Azure SQL Database. Extended events can capture deadlock graphs that provide complete information about all transactions and resources involved in deadlock scenarios. In Azure SQL Database you create a database-scoped event session for this purpose (deadlock occurrences are also summarized in the sys.event_log view in the logical server’s master database), whereas in Azure SQL Managed Instance the default system_health session already captures deadlock events. For more detailed monitoring, you can tailor the event session with specific deadlock-related events to capture exactly the information needed for troubleshooting.
B) is incorrect because Query Store is primarily designed for monitoring query performance, execution plans, and performance regressions over time, not for capturing concurrency issues like deadlocks. While Query Store captures valuable information about query execution statistics and plans, it doesn’t capture the lock-level details and transaction interactions necessary for deadlock analysis. Query Store focuses on query-level performance metrics such as execution count, duration, and resource usage, not on inter-transaction conflicts and locking behavior.
C) is incorrect because Database Advisor provides performance recommendations such as index suggestions and query parameterization opportunities, but it doesn’t monitor or capture detailed information about deadlocks. Database Advisor analyzes database usage patterns and workload characteristics to provide tuning recommendations, but it’s recommendation-focused rather than diagnostic-focused. It doesn’t provide the event-level capture and detailed lock chain information necessary for deadlock troubleshooting.
D) is incorrect because automatic tuning is a feature that automatically implements performance improvements such as creating indexes and forcing execution plans, but it doesn’t monitor or provide information about deadlocks. Automatic tuning focuses on query performance optimization through automated implementation of recommendations, not on concurrency issues. While automatic tuning can improve overall database performance, it doesn’t address deadlock detection, monitoring, or analysis.
Question 202:
You are configuring auditing for an Azure SQL Database to meet compliance requirements. Audit logs must be retained for 90 days and stored in a location accessible for analysis. Where should you configure the audit logs to be sent?
A) Azure Storage account
B) Azure Event Hub
C) Local disk on the database server
D) Azure SQL Database system tables
Correct Answer: A
Explanation:
Implementing proper auditing for Azure SQL Database is essential for meeting compliance requirements, security monitoring, and forensic analysis. Azure SQL Database auditing tracks database events and writes them to an audit log that can be stored in various destinations. Understanding the available audit log destinations and their appropriate uses helps database administrators implement compliant and effective auditing solutions.
Azure Storage accounts provide a reliable, cost-effective destination for audit logs with configurable retention periods. When you configure auditing to write to a Storage account, audit events are written to blob containers in the specified account. Storage accounts provide long-term retention capabilities, support lifecycle management policies for automatic cleanup after retention periods expire, and offer secure access controls. For compliance scenarios requiring specific retention periods such as 90 days, Storage accounts provide the necessary retention and access capabilities.
Configuring audit logs to write to a Storage account offers several advantages for compliance and analysis. The logs are stored in an immutable append-only format, ensuring audit trail integrity. The Storage account can be configured with appropriate retention policies to automatically delete logs after the required retention period. Access to audit logs can be controlled through Storage account access policies and role-based access control. The logs can be analyzed using various tools including Log Analytics, custom applications, or third-party security information and event management (SIEM) systems.
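Once the audit logs are in the storage account, they can be analyzed directly with T-SQL using the sys.fn_get_audit_file function; the storage account, container, and path below are illustrative:

```sql
-- Read audit records written to blob storage (the URL path is illustrative)
SELECT event_time, action_id, succeeded, database_name, server_principal_name, statement
FROM sys.fn_get_audit_file(
    'https://mystorageaccount.blob.core.windows.net/sqldbauditlogs/myserver/mydb/',
    DEFAULT, DEFAULT)
ORDER BY event_time DESC;
```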
A) is correct because Azure Storage accounts are the standard and recommended destination for Azure SQL Database audit logs when retention and compliance requirements exist. Storage accounts provide reliable long-term storage, support configurable retention periods including the required 90 days, offer secure access controls, and maintain audit log integrity. Audit logs stored in Storage accounts can be accessed for analysis using various tools and methods, and the cost-effective storage makes it suitable for the retention periods required by most compliance frameworks.
B) is incorrect because while Azure Event Hub is a valid destination for audit logs when real-time streaming and immediate processing are required, it’s not designed for long-term retention. Event Hub is designed for event streaming scenarios where audit events are immediately consumed by downstream processing systems such as SIEM tools or analytics platforms. Event Hub retains data for a limited period (typically 1-7 days) and is not suitable for meeting 90-day retention requirements. Event Hub is appropriate when audit logs need to be streamed to real-time monitoring systems, but Storage accounts should be used for retention requirements.
C) is incorrect because Azure SQL Database does not store audit logs on local server disks as there is no direct file system access to the underlying infrastructure in platform-as-a-service offerings. Unlike on-premises SQL Server where audit logs can be written to local file system paths, Azure SQL Database as a managed service does not provide access to local storage on the database servers. Audit logs must be written to Azure services such as Storage accounts, Event Hubs, or Log Analytics workspaces.
D) is incorrect because audit logs are not stored in Azure SQL Database system tables. While certain diagnostic information exists in dynamic management views and system tables, audit logs themselves are written to external destinations, not stored within the database. Storing audit logs within the database being audited would create security concerns as database administrators could potentially modify or delete audit records. Audit logs must be stored in separate, secure locations to maintain audit trail integrity and meet compliance requirements.
Question 203:
You have an Azure SQL Database experiencing performance issues. You discover that several queries are using suboptimal execution plans. You need to force the database to use better performing execution plans that were previously used. Which feature should you use?
A) Query Store with plan forcing
B) Parameterization forcing
C) Index tuning recommendations
D) Automatic plan correction
Correct Answer: A
Explanation:
Query execution plan management is a critical aspect of maintaining optimal database performance. Execution plans determine how SQL Server executes queries, including which indexes to use, join methods, and data access patterns. Sometimes the query optimizer may generate suboptimal plans due to statistics issues, parameter sniffing, or other factors, resulting in performance degradation. Azure SQL Database provides features to identify and address plan regression issues.
Query Store is a feature that automatically captures query execution history including queries, execution plans, and runtime statistics. Query Store maintains a history of all plans used for each query over time along with performance metrics for each plan. When a query experiences performance regression due to a plan change, Query Store allows you to identify the better performing previous plan and force the query to use that specific plan instead of allowing the optimizer to generate new plans.
Plan forcing through Query Store provides a powerful solution for plan regression scenarios. You can review the query performance history in Query Store, identify queries with performance problems, compare the current plan with historical plans, and force the query to use a previous better-performing plan. Once a plan is forced, SQL Server will use that specific plan for the query regardless of parameter values or statistics changes, ensuring consistent performance. Query Store plan forcing is the recommended approach for addressing plan regression issues in Azure SQL Database.
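For example, after identifying the regressed query and its better historical plan (in the portal’s Query Store reports or via the catalog views), the plan can be forced, and later unforced, with the built-in stored procedures. The query_id and plan_id values below are placeholders:

```sql
-- Inspect the plans recorded for a query (query_id 42 is a placeholder)
SELECT p.plan_id, p.is_forced_plan, rs.avg_duration, rs.avg_cpu_time, rs.count_executions
FROM sys.query_store_plan AS p
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
WHERE p.query_id = 42;

-- Force the previously well-performing plan for that query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;

-- Remove the forcing later if it is no longer needed
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 17;
```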
A) is correct because Query Store with plan forcing provides the capability to force specific execution plans for queries experiencing performance regression. Query Store maintains a history of plans and their performance metrics, allowing you to identify better performing previous plans and force the database to use them. For the scenario where queries are using suboptimal plans but better plans existed previously, Query Store plan forcing is exactly the appropriate solution. This feature is built into Azure SQL Database and provides a straightforward method for addressing plan regression issues.
B) is incorrect because parameterization forcing (also known as forced parameterization) is a database option that forces SQL Server to parameterize queries that would normally be processed as ad-hoc queries, but it doesn’t force specific execution plans. Forced parameterization helps with plan reuse by converting literal values in queries to parameters, reducing plan compilation overhead. However, it doesn’t address the scenario where specific plans need to be forced because current plans are suboptimal. It’s a different optimization technique focused on plan reuse rather than plan selection.
C) is incorrect because index tuning recommendations help identify missing indexes or unused indexes that could improve performance, but they don’t force specific execution plans. Index recommendations from Database Advisor suggest structural changes to improve query performance through better index coverage. While creating recommended indexes might help queries perform better, this approach doesn’t directly force the use of previously better-performing plans. Index tuning is a different optimization approach focused on physical database design rather than plan management.
D) is incorrect because, although automatic plan correction can detect plan regressions and apply the last known good plan on its own (it uses Query Store plan forcing under the covers via the FORCE_LAST_GOOD_PLAN option), the question asks which feature you should use to force better-performing plans that were previously used. That description matches the manual plan forcing capability of Query Store. Automatic plan correction might eventually address the issue without intervention, but the specific answer for deliberately forcing previously used plans is Query Store plan forcing.
Question 204:
You need to implement Transparent Data Encryption (TDE) for an Azure SQL Database using a customer-managed key stored in Azure Key Vault. Which Azure feature provides this capability?
A) Bring Your Own Key (BYOK)
B) Always Encrypted
C) Dynamic Data Masking
D) Row-Level Security
Correct Answer: A
Explanation:
Data encryption is a fundamental security requirement for protecting sensitive information in databases. Azure SQL Database provides several encryption features for different scenarios and requirements. Transparent Data Encryption (TDE) encrypts the entire database at rest, including data files, log files, and backups. Understanding the different options for managing TDE encryption keys is essential for meeting compliance and security requirements around key management and control.
By default, Azure SQL Database uses service-managed TDE where Microsoft manages the encryption keys. However, many organizations have compliance requirements that mandate customer control over encryption keys. Bring Your Own Key (BYOK) support for TDE allows organizations to use customer-managed keys stored in Azure Key Vault for TDE encryption. This gives organizations complete control over key lifecycle management including key rotation, access control, and audit logging for key usage.
Implementing TDE with BYOK involves several steps: creating an Azure Key Vault, generating or importing encryption keys into the Key Vault, granting the Azure SQL logical server access to the Key Vault, and configuring the database to use the customer-managed key as the TDE protector. Once configured, the database uses the customer-managed key for all TDE encryption operations. Key Vault audit logs capture all key access operations, providing complete visibility into encryption key usage for compliance purposes.
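The BYOK configuration itself is performed at the server level through the portal, PowerShell, or the CLI, but from inside the database you can verify that TDE is active and check which kind of protector is in use. A small sketch using the encryption DMV:

```sql
-- Confirm the database is encrypted and inspect the TDE protector type
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,     -- 3 = encrypted
       encryptor_type        -- ASYMMETRIC KEY generally indicates a customer-managed (Key Vault) key
FROM sys.dm_database_encryption_keys;
```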
A) is correct because Bring Your Own Key (BYOK) is the feature that enables using customer-managed keys from Azure Key Vault with Transparent Data Encryption in Azure SQL Database. BYOK for TDE gives organizations control over encryption keys while still benefiting from the transparent encryption and performance of TDE. This approach meets compliance requirements for customer-managed encryption keys while maintaining the ease of use and performance characteristics of TDE. BYOK is specifically the term used in Azure SQL Database documentation for customer-managed TDE keys.
B) is incorrect because Always Encrypted is a different encryption feature that provides column-level encryption for protecting specific sensitive columns, not database-level encryption at rest. Always Encrypted encrypts data on the client side before sending it to the database, and the database server never has access to the plaintext data or encryption keys. While Always Encrypted does support customer-managed keys, it’s a different feature from TDE and serves a different purpose. TDE encrypts the entire database at rest, while Always Encrypted protects specific columns from database administrators and other privileged users.
C) is incorrect because Dynamic Data Masking is a security feature that obfuscates sensitive data in query results for unauthorized users, not an encryption feature. Data masking shows masked values to users without appropriate permissions while showing real values to authorized users, but the data itself is not encrypted in the database. Dynamic Data Masking is useful for limiting exposure of sensitive data in applications but doesn’t provide encryption at rest and doesn’t involve encryption key management.
D) is incorrect because Row-Level Security is an access control feature that filters which rows users can see based on security policies, not an encryption feature. Row-Level Security allows implementing multi-tenant data isolation and restricting row visibility based on user attributes, but it doesn’t encrypt data or involve encryption key management. RLS is an authorization mechanism that controls data access at the row level, which is a completely different security control from encryption.
Question 205:
You are migrating an on-premises SQL Server database to Azure SQL Database. The database uses SQL Server Agent jobs for scheduled maintenance tasks. What is the recommended approach for running scheduled tasks in Azure SQL Database?
A) Azure Automation with runbooks
B) SQL Server Agent (not supported in Azure SQL Database)
C) Elastic Database Jobs
D) Azure Logic Apps
Correct Answer: C
Explanation:
Migrating from on-premises SQL Server to Azure SQL Database requires understanding the differences in available features and finding appropriate alternatives for functionality that works differently in the cloud platform. SQL Server Agent is a commonly used feature in on-premises SQL Server for scheduling and executing jobs including T-SQL scripts, maintenance tasks, and other automated operations. However, Azure SQL Database as a platform-as-a-service offering has a different architecture that requires alternative approaches for scheduled task execution.
Elastic Database Jobs (also known as Elastic Jobs) is the Azure SQL Database feature designed specifically for running scheduled T-SQL scripts across one or more databases. Elastic Jobs provides similar capabilities to SQL Server Agent but is designed for cloud-scale operations. You can create jobs that execute T-SQL scripts on a schedule, target single or multiple databases, handle retries and failures, and monitor execution history. Elastic Jobs is the direct replacement for SQL Server Agent when migrating to Azure SQL Database.
Implementing Elastic Jobs involves creating an Elastic Job agent, defining target groups (which databases the jobs should run against), creating job definitions with T-SQL steps, and configuring schedules. Elastic Jobs can target databases across different logical servers and supports both one-time and recurring schedules. The service handles job execution, tracks status, maintains execution history, and provides retry logic for failed executions. For organizations migrating SQL Server Agent jobs to Azure SQL Database, Elastic Jobs provides the most direct migration path with similar capabilities and T-SQL compatibility.
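As a rough sketch of what migrating a nightly maintenance job might look like, the jobs-schema stored procedures exposed in the Elastic Jobs job database can be called as shown below. All names, the target server, the maintenance procedure, and the credential are illustrative, and depending on how the job agent authenticates, the credential parameter may not be required:

```sql
-- Run in the Elastic Jobs job database
-- 1. Define which databases the job targets
EXEC jobs.sp_add_target_group @target_group_name = N'MaintenanceTargets';
EXEC jobs.sp_add_target_group_member
     @target_group_name = N'MaintenanceTargets',
     @target_type       = N'SqlDatabase',
     @server_name       = N'myserver.database.windows.net',
     @database_name     = N'SalesDb';

-- 2. Create the job and a T-SQL step
EXEC jobs.sp_add_job
     @job_name    = N'NightlyIndexMaintenance',
     @description = N'Rebuild or reorganize fragmented indexes';
EXEC jobs.sp_add_jobstep
     @job_name          = N'NightlyIndexMaintenance',
     @step_name         = N'MaintainIndexes',
     @command           = N'EXEC dbo.usp_MaintainIndexes;',   -- illustrative maintenance procedure
     @credential_name   = N'JobRunCredential',                -- omit if the agent uses Microsoft Entra authentication
     @target_group_name = N'MaintenanceTargets';

-- 3. Enable the job and schedule it to recur daily
EXEC jobs.sp_update_job
     @job_name                = N'NightlyIndexMaintenance',
     @enabled                 = 1,
     @schedule_interval_type  = N'Days',
     @schedule_interval_count = 1;
```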
C) is correct because Elastic Database Jobs is the Azure SQL Database feature specifically designed for running scheduled T-SQL scripts and is the recommended replacement for SQL Server Agent when migrating to Azure SQL Database. Elastic Jobs provides job scheduling, T-SQL script execution, targeting capabilities for single or multiple databases, and execution monitoring similar to SQL Server Agent. For organizations migrating maintenance tasks and scheduled operations from on-premises SQL Server Agent to Azure SQL Database, Elastic Jobs is the appropriate and recommended solution.
A) is incorrect because while Azure Automation with runbooks could technically be used to execute database scripts through PowerShell or other methods, it’s not the recommended or most appropriate solution for migrating SQL Server Agent jobs. Azure Automation is a general-purpose automation platform designed for managing Azure resources and running various types of scripts, but it requires additional development to execute T-SQL scripts and doesn’t provide the database-specific features that Elastic Jobs offers. Using Azure Automation would require more complex implementation compared to using the purpose-built Elastic Jobs feature.
B) is incorrect as a solution even though its parenthetical note is accurate: SQL Server Agent is indeed not supported in Azure SQL Database. The option acknowledges the limitation but doesn’t provide an alternative for running scheduled tasks. SQL Server Agent is available in Azure SQL Managed Instance, which provides greater compatibility with on-premises SQL Server, but for Azure SQL Database, alternative solutions like Elastic Jobs must be used.
D) is incorrect because while Azure Logic Apps could be used to trigger database operations on a schedule, it’s not the recommended solution for migrating SQL Server Agent jobs. Logic Apps is a workflow orchestration service designed for integrating various systems and services, but it doesn’t provide native T-SQL script execution capabilities or the database-focused job management features that Elastic Jobs provides. Using Logic Apps would require additional connectors and complexity compared to using the purpose-built Elastic Jobs service.
Question 206:
You have multiple Azure SQL Databases that share similar usage patterns with periods of high and low activity. You want to optimize costs by sharing resources across these databases. Which Azure SQL Database feature should you implement?
A) Elastic pool
B) Serverless compute tier
C) Hyperscale service tier
D) Active geo-replication
Correct Answer: A
Explanation:
Cost optimization is an important consideration when managing multiple Azure SQL Databases, especially when databases have varying and complementary usage patterns. Azure provides several features for managing database resources and costs, and understanding which feature best addresses scenarios with multiple databases sharing resources is essential for efficient database management and cost control.
Elastic pools are designed specifically for managing multiple databases that share a pool of resources. An elastic pool allows you to allocate a set amount of compute and storage resources that are shared among all databases in the pool. This is particularly cost-effective when you have multiple databases with varying usage patterns where peak usage times don’t overlap significantly. Instead of provisioning each database for its individual peak load, you can provision a pool for the aggregate load, taking advantage of the statistical multiplexing effect where different databases peak at different times.
When databases are placed in an elastic pool, they share the pool’s DTUs or vCores. A database can consume more resources when it needs them (up to the per-database maximum) and use fewer resources during quiet periods, while other databases in the pool consume resources during their active periods. This sharing model typically results in significant cost savings compared to provisioning each database individually for its peak load. Elastic pools are ideal for SaaS applications with multiple tenant databases, development/test environments with multiple databases, or any scenario with multiple databases having complementary usage patterns.
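The pool itself is created through the portal, PowerShell, or the Azure CLI, but an existing database can be moved into (or out of) a pool with T-SQL; the database, pool, and service objective names below are illustrative:

```sql
-- Move an existing database into an elastic pool named SharedPool
ALTER DATABASE [TenantDb01]
MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [SharedPool]));

-- Later, move it back out to a standalone service objective if needed
-- ALTER DATABASE [TenantDb01] MODIFY (SERVICE_OBJECTIVE = 'GP_Gen5_2');
```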
A) is correct because elastic pools are specifically designed for sharing resources across multiple Azure SQL Databases to optimize costs. When databases have periods of high and low activity that don’t all overlap, placing them in an elastic pool allows them to share compute resources efficiently. Each database can burst to higher resource usage when needed while other databases use less, resulting in better resource utilization and lower overall costs compared to individually provisioned databases. Elastic pools are the primary Azure SQL Database feature for multi-database resource sharing and cost optimization.
B) is incorrect because while the serverless compute tier provides cost optimization through automatic scaling and pausing for individual databases, it doesn’t provide resource sharing across multiple databases. Serverless is beneficial when individual databases have intermittent usage patterns with periods of inactivity, but it optimizes each database independently rather than sharing resources across databases. If you have multiple databases with complementary usage patterns, elastic pools provide better cost optimization than independent serverless databases.
C) is incorrect because the Hyperscale service tier is designed for very large databases (up to 100 TB) requiring rapid scaling capabilities and fast backup/restore operations, not for cost optimization through resource sharing across multiple databases. Hyperscale provides a different storage architecture with fast scaling capabilities but doesn’t offer the multi-database resource pooling that would optimize costs for multiple databases with varying usage patterns. Hyperscale is about scale and performance for large individual databases, not resource sharing across multiple databases.
D) is incorrect because active geo-replication is a high availability and disaster recovery feature that creates readable secondary replicas of databases in different regions, not a cost optimization or resource sharing feature. Active geo-replication actually increases costs by creating additional database replicas, and it doesn’t provide any mechanism for sharing resources across multiple databases. It’s designed for availability and geographic distribution, not for cost optimization scenarios with multiple databases.
Question 207:
You need to implement a solution that prevents users from viewing sensitive credit card data in an Azure SQL Database, while still allowing the application to process the data. Database administrators should not be able to see the sensitive data. Which security feature should you implement?
A) Always Encrypted
B) Transparent Data Encryption
C) Dynamic Data Masking
D) Row-Level Security
Correct Answer: A
Explanation:
Protecting sensitive data in databases requires understanding the different security features available and their specific capabilities. Different encryption and security features serve different purposes and provide different levels of protection. Understanding which feature provides protection from which threats is essential for implementing appropriate security controls that meet specific requirements such as protecting data from privileged users including database administrators.
Always Encrypted is a client-side encryption feature that encrypts sensitive data on the client side before sending it to the database. The encryption keys never leave the client application, and the database server never has access to the plaintext data or the encryption keys. This means that even database administrators with full database access cannot view the plaintext sensitive data. The application encrypts data before inserting or updating records and decrypts data after retrieving it from the database, making the encryption transparent to application logic while providing strong protection.
Always Encrypted is specifically designed for scenarios where sensitive data must be protected from high-privilege users including database administrators, cloud operators, and other unauthorized users who might have access to the database server. Column encryption keys are protected by column master keys, which are stored outside the database in trusted key stores such as Azure Key Vault, Windows Certificate Store, or hardware security modules. Only applications and users with access to the column master keys can decrypt the sensitive data, providing end-to-end protection for highly sensitive information.
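Once a column master key and a column encryption key have been provisioned (for example through SSMS or PowerShell, with the master key held in Azure Key Vault), marking a column as encrypted is part of the table definition. A minimal sketch in which the table, column, and key names are illustrative:

```sql
-- Credit card number column protected with Always Encrypted
CREATE TABLE dbo.Payments
(
    PaymentId     int IDENTITY(1,1) PRIMARY KEY,
    CardNumber    varchar(19) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Payments,            -- illustrative key name
            ENCRYPTION_TYPE       = DETERMINISTIC,           -- deterministic allows equality lookups
            ALGORITHM             = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ),
    AmountCharged decimal(10,2) NOT NULL
);
```

The application then connects with Column Encryption Setting=Enabled in its connection string so the client driver encrypts parameters and decrypts results transparently, while the database server only ever sees ciphertext.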
A) is correct because Always Encrypted is specifically designed to protect sensitive data from privileged users including database administrators while still allowing applications to process the data. The client-side encryption ensures that sensitive data is never stored in plaintext in the database, and the separation of encryption keys from the database ensures that database administrators cannot access the keys needed to decrypt the data. Applications with appropriate key access can still insert, update, and retrieve encrypted data, making it the ideal solution for the stated requirements of protecting credit card data from DBAs while allowing application processing.
B) is incorrect because Transparent Data Encryption (TDE) encrypts data at rest (in data files, log files, and backups) but the data is automatically decrypted when accessed by any authenticated user including database administrators. TDE protects against threats such as physical media theft but doesn’t protect sensitive data from privileged users who have legitimate database access. Database administrators with appropriate permissions can query and view all data in TDE-protected databases because TDE decryption happens automatically and transparently for authorized database connections.
C) is incorrect because Dynamic Data Masking obfuscates data in query results but the actual data remains in plaintext in the database. Database administrators can easily disable masking rules or grant themselves permissions to see unmasked data since they have administrative access to the database. Dynamic Data Masking is useful for limiting data exposure in applications to non-privileged users, but it doesn’t provide true protection from database administrators who can access the underlying plaintext data directly or modify masking configurations.
D) is incorrect because Row-Level Security controls which rows users can access based on security policies, but it doesn’t encrypt data or prevent database administrators from viewing sensitive data. Row-Level Security is an access control mechanism that filters query results based on user context, but database administrators typically have permissions to bypass RLS policies or can view the data through administrative queries. RLS is useful for multi-tenant isolation and limiting data access, but it doesn’t provide encryption or protection from privileged users.
Question 208:
You are configuring an Azure SQL Database and need to ensure that all connections to the database are encrypted in transit. Which setting should you verify is enabled?
A) Enforce SSL/TLS connection
B) Transparent Data Encryption
C) Always Encrypted
D) Certificate-based authentication
Correct Answer: A
Explanation:
Protecting data in transit is a fundamental security requirement for database connections. While encryption at rest protects data stored on disk, encryption in transit protects data as it travels over the network between clients and the database server. Understanding how to configure and enforce encrypted connections helps database administrators ensure that sensitive data cannot be intercepted or modified during transmission.
Azure SQL Database supports encrypted connections using SSL/TLS protocols. By default, Azure SQL Database requires encrypted connections, and the server-level "Minimum TLS version" setting controls which TLS protocol versions are accepted. When SSL/TLS enforcement is in place, the database server rejects any connection attempts that don’t use encryption, ensuring all data transmitted between clients and the database is protected. Clients should also set Encrypt=True (and avoid TrustServerCertificate=True) in their connection strings so that the server’s certificate is properly validated.
Enforcing SSL/TLS connections protects against several security threats including eavesdropping where attackers intercept network traffic to view sensitive data, man-in-the-middle attacks where attackers intercept and potentially modify data in transit, and other network-based attacks. Modern TLS versions provide strong encryption, authentication of the server to the client through certificates, and integrity protection ensuring data isn’t tampered with during transmission. Verifying that SSL/TLS is enforced and that minimum TLS versions are set appropriately (at least TLS 1.2) ensures secure communication.
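From the database side, you can confirm that a given connection actually negotiated TLS by querying the connections DMV:

```sql
-- Verify that the current connection is encrypted in transit
SELECT session_id, encrypt_option, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;   -- encrypt_option returns TRUE when the connection is protected by TLS
```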
A) is correct because enforcing SSL/TLS connections is the setting that ensures all connections to Azure SQL Database are encrypted in transit. This setting makes encrypted connections mandatory and rejects any client connection attempts that don’t use SSL/TLS encryption. By default, Azure SQL Database requires encrypted connections, but administrators should verify this setting is enabled and configure the minimum acceptable TLS version to ensure strong encryption protocols are used. This setting directly addresses the requirement of ensuring all connections are encrypted in transit.
B) is incorrect because Transparent Data Encryption (TDE) encrypts data at rest (stored data in files and backups), not data in transit over network connections. TDE and SSL/TLS address different security concerns—TDE protects against unauthorized access to physical media while SSL/TLS protects data as it travels over the network. Both are important security controls but serve different purposes. For ensuring connection encryption, SSL/TLS enforcement is required, not TDE.
C) is incorrect because Always Encrypted is a client-side column-level encryption feature that protects specific sensitive columns from privileged users, not a connection encryption feature. While Always Encrypted does involve encrypting data before transmission, it’s designed to protect data from database administrators and operates at the application/column level. Always Encrypted doesn’t control whether the connection itself is encrypted using SSL/TLS. Connection-level encryption is handled separately through SSL/TLS enforcement settings.
D) is incorrect because certificate-based authentication is an authentication method where users present digital certificates to prove their identity, not a connection encryption setting. Certificate-based authentication is about verifying user identity, not encrypting the connection. While certificate-based authentication can be used alongside encrypted connections, it doesn’t itself ensure that connections are encrypted. SSL/TLS enforcement is the setting that ensures connection encryption, though SSL/TLS does use certificates for server authentication as part of the encryption handshake process.
Question 209:
You have an Azure SQL Database that contains temporal tables for tracking historical data changes. Users report that queries are running slower than expected. You discover that queries are accessing historical data unnecessarily. How should you optimize query performance?
A) Modify queries to use FOR SYSTEM_TIME clause appropriately
B) Disable temporal tables
C) Create additional indexes on history tables
D) Increase database compute resources
Correct Answer: A
Explanation:
Temporal tables in Azure SQL Database provide built-in support for tracking complete history of data changes, making them valuable for auditing, historical analysis, and point-in-time reconstructions. However, temporal tables require understanding of specific query syntax to access data efficiently. When queries don’t specify appropriate temporal clauses, they may scan both current and historical data unnecessarily, leading to performance problems.
Temporal tables consist of two tables: the current table containing current data and a history table containing all previous versions of rows. When querying temporal tables, the FOR SYSTEM_TIME clause controls which time period’s data is returned. Without this clause, queries only return current data from the current table. However, improper use of the FOR SYSTEM_TIME clause or queries that unnecessarily access historical data can cause performance issues by scanning large history tables when only current data is needed.
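For context, a minimal system-versioned table definition looks like the sketch below; the table, column, and history-table names are illustrative placeholders, not part of the question scenario:

-- Current table plus an automatically maintained history table.
CREATE TABLE dbo.Product
(
    ProductId  int           NOT NULL PRIMARY KEY,
    Price      decimal(10,2) NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));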
The FOR SYSTEM_TIME clause has several options: AS OF returns data as it appeared at a specific point in time, FROM…TO returns data that was active during a time period, BETWEEN…AND returns similar results with slightly different boundary semantics, CONTAINED IN returns rows active entirely within a time period, and ALL returns all rows from both current and history tables. Understanding when to use each option and ensuring queries only access historical data when genuinely needed is essential for maintaining good temporal table performance.
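The difference in scan scope is easiest to see side by side. In the hedged examples below, dbo.Product and the timestamp literal are placeholders carried over from the sketch above:

-- Current rows only: no FOR SYSTEM_TIME clause, so the history table is never touched.
SELECT ProductId, Price
FROM dbo.Product
WHERE ProductId = 42;

-- Point-in-time view: reads only the rows that were valid at that instant.
SELECT ProductId, Price
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2024-01-01T00:00:00'
WHERE ProductId = 42;

-- FOR SYSTEM_TIME ALL scans both current and history rows; use it only when the
-- full change history is genuinely required.
SELECT ProductId, Price, ValidFrom, ValidTo
FROM dbo.Product
    FOR SYSTEM_TIME ALL
WHERE ProductId = 42
ORDER BY ValidFrom;

Queries that default to ALL (or to overly broad ranges) when only current data is needed are exactly the pattern described in this question.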
A) is correct because modifying queries to use the FOR SYSTEM_TIME clause appropriately ensures queries only access the data they actually need. If queries are accessing historical data unnecessarily, it suggests they may be using FOR SYSTEM_TIME ALL when they should be querying only current data (no FOR SYSTEM_TIME clause) or using a more restrictive time range. By ensuring queries specify appropriate temporal clauses that match their actual data requirements, you can significantly improve performance by avoiding unnecessary scans of historical data. This is the proper optimization approach for temporal table query performance issues.
B) is incorrect because disabling temporal tables would eliminate the historical tracking functionality that was presumably implemented for specific business or compliance requirements. While disabling temporal tables would improve query performance by eliminating the history table, it would also eliminate the entire audit trail and historical data tracking capability. This is not an optimization—it’s removing functionality. The proper approach is to optimize how queries access temporal data, not eliminate the temporal feature.
C) is incorrect because while creating additional indexes on history tables might improve some historical data queries, it doesn’t address the root cause of queries accessing historical data unnecessarily. If queries don’t need historical data but are retrieving it anyway due to improper use of temporal syntax, adding indexes would add overhead for data modifications while not solving the fundamental problem. Indexes should be considered after ensuring queries are properly written to access only necessary data, but fixing query logic is the primary solution.
D) is incorrect because increasing database compute resources addresses performance through brute force rather than optimization. While more compute resources might make inefficient queries run faster, they don’t address the underlying inefficiency of accessing unnecessary historical data. Increasing resources incurs additional costs and doesn’t solve the fundamental problem. The appropriate approach is to first optimize query logic to ensure only necessary data is accessed, and only consider resource increases if performance problems remain after proper optimization.
Question 210:
You need to configure diagnostic logging for an Azure SQL Database to troubleshoot performance issues. The logs should be sent to Azure Log Analytics for querying and analysis. Which diagnostic setting category provides query execution and performance data?
A) Query Store runtime statistics
B) SQL Insights
C) Errors
D) Database Wait Statistics
Correct Answer: A
Explanation:
Diagnostic logging in Azure SQL Database provides detailed telemetry about database operations, performance, and health. Azure SQL Database supports sending diagnostic data to various destinations including Log Analytics workspaces, Storage accounts, and Event Hubs. Understanding the different diagnostic log categories and what data each category provides is essential for configuring appropriate monitoring and troubleshooting solutions.
Azure SQL Database offers several diagnostic log categories. Query Store runtime statistics includes detailed information about query execution, including query text, execution statistics, wait statistics, and runtime execution data. This category provides comprehensive query performance data similar to what’s available in Query Store but makes it available in Log Analytics for advanced querying and correlation with other telemetry. Other categories include SQLInsights for intelligent performance insights, AutomaticTuning for auto-tuning recommendations and actions, QueryStoreWaitStatistics for wait statistics, Errors for database errors, DatabaseWaitStatistics for aggregate wait statistics, Timeouts for timeout events, Blocks for blocking events, and Deadlocks for deadlock information.
For troubleshooting query performance issues, the Query Store runtime statistics category provides the most comprehensive data including individual query performance metrics, execution counts, duration statistics, CPU and IO consumption, and wait types. This data can be queried in Log Analytics using Kusto Query Language (KQL) to identify performance patterns, compare query performance over time, correlate performance issues with other events, and create custom dashboards and alerts. The integration with Log Analytics provides powerful analysis capabilities beyond what’s available in the Azure portal’s Query Performance Insight interface.
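The kind of runtime data this category exports can also be inspected directly in the database through the Query Store catalog views. The sketch below is a plain T-SQL approximation for illustration, not the exact schema of the records sent to Log Analytics:

-- Top CPU consumers recorded by Query Store (one row per plan per
-- Query Store collection interval; times are in microseconds).
SELECT TOP 10
       qt.query_sql_text,
       p.plan_id,
       rs.count_executions,
       rs.avg_cpu_time,
       rs.avg_duration,
       rs.last_execution_time
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan       AS p  ON p.plan_id = rs.plan_id
JOIN sys.query_store_query      AS q  ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
ORDER BY rs.avg_cpu_time DESC;

Sending the equivalent data to Log Analytics lets you run the same kind of ranking in KQL and correlate it with other diagnostic categories over longer retention periods.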
A) is correct because Query Store runtime statistics is the diagnostic log category that provides detailed query execution and performance data. This category captures comprehensive information about query execution including query text, execution statistics, resource consumption, and wait statistics. When sent to Log Analytics, this data can be queried and analyzed using KQL to investigate performance issues, identify problematic queries, and understand query performance trends over time. For troubleshooting query performance issues, Query Store runtime statistics provides the necessary detailed execution data.
B) is incorrect because while SQL Insights is a diagnostic category that provides intelligent performance insights and recommendations, it focuses more on performance analysis and recommendations rather than detailed raw query execution data. SQL Insights provides higher-level performance analysis and automated insights about performance issues, but Query Store runtime statistics provides more detailed granular query execution data that’s typically more useful for detailed performance troubleshooting. SQL Insights complements but doesn’t replace Query Store data for performance investigation.
C) is incorrect because the Errors diagnostic category captures database error events and exceptions, not query execution and performance data. While errors can indicate some types of problems, this category doesn’t provide the query execution statistics, duration metrics, or resource consumption data needed for performance troubleshooting. Errors category is useful for identifying failures and error patterns but isn’t the appropriate category for analyzing query performance characteristics.
D) is incorrect because Database Wait Statistics provides aggregate wait statistics at the database level rather than query-level execution data. While wait statistics are valuable for understanding what resources queries are waiting for, this category provides aggregated database-wide statistics rather than individual query execution information. For detailed query performance analysis, Query Store runtime statistics provides more comprehensive and query-specific data. Database Wait Statistics is useful for understanding overall database resource contention but doesn’t provide the query-level detail needed for troubleshooting specific query performance issues.
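For comparison, the database-level aggregation this category reflects resembles what sys.dm_db_wait_stats exposes in T-SQL; note that there is no per-query breakdown in this view:

-- Database-scoped aggregate wait statistics (no query-level detail).
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;

This is useful for spotting overall resource contention, but identifying which queries drive that contention still requires the query-level data from Query Store runtime statistics.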