Microsoft DP-300 Administering Azure SQL Solutions Exam Dumps and Practice Test Questions Set 3 Q 31-45
Question 31:
You are administering an Azure SQL Database that experiences high CPU usage during business hours. You need to identify the queries consuming the most CPU resources. Which of the following tools should you use?
A) Azure Monitor
B) Query Performance Insight
C) Azure Advisor
D) SQL Server Profiler
Answer: B
Explanation:
Query Performance Insight is the most appropriate tool for identifying queries consuming the most CPU resources in Azure SQL Database. This built-in feature provides a comprehensive view of query performance metrics including CPU usage, duration, execution count, and resource consumption patterns. Query Performance Insight is specifically designed for Azure SQL Database and offers an intuitive graphical interface that displays top resource-consuming queries, making it ideal for troubleshooting performance issues related to CPU utilization during specific time periods.
The tool aggregates query performance data from the Query Store, which automatically captures query execution statistics in Azure SQL Database. Query Performance Insight categorizes queries by their resource consumption, allowing administrators to quickly identify problematic queries that require optimization. The interface displays detailed metrics including total CPU time, average CPU time per execution, and execution frequency, enabling administrators to prioritize optimization efforts based on actual resource impact. Additionally, Query Performance Insight shows query text and execution plans, providing the necessary information to understand why queries consume excessive resources and how to optimize them through index creation, query rewriting, or parameter optimization.
The historical view capability allows administrators to compare performance across different time ranges, identifying whether CPU issues are chronic or related to specific business activities. This temporal analysis is particularly valuable for correlating CPU spikes with business hours or specific application activities. Query Performance Insight also integrates with automatic tuning recommendations, suggesting index improvements and plan optimizations that can reduce CPU consumption without requiring manual query analysis.
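Because Query Performance Insight is built on Query Store, the same data can be pulled directly with T-SQL when deeper analysis or automation is needed. The following is a minimal sketch (the TOP count and the grouping are illustrative) that lists the top CPU-consuming queries from the Query Store catalog views:

```sql
-- Top CPU consumers from Query Store; avg_cpu_time is reported in microseconds.
SELECT TOP (10)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions)                   AS total_executions,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query         AS q  ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan          AS p  ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time_us DESC;
```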
A is incorrect because while Azure Monitor provides comprehensive monitoring capabilities for Azure resources including CPU metrics, memory usage, and storage consumption, it focuses on infrastructure-level monitoring rather than query-level performance analysis. Azure Monitor can alert you that CPU usage is high and show historical CPU trends, but it does not provide query-specific details needed to identify which individual queries are causing the high CPU consumption. Azure Monitor is excellent for overall resource monitoring and alerting but requires additional tools like Query Performance Insight for query-level diagnostics.
C is incorrect because Azure Advisor is a recommendation service that provides best practice guidance for optimizing Azure deployments across cost, security, reliability, operational excellence, and performance. While Advisor may suggest general performance improvements for Azure SQL Database such as service tier adjustments or configuration changes, it does not provide real-time query performance analysis or identify specific queries consuming CPU resources. Advisor recommendations are periodic and strategic rather than operational and query-specific, making it unsuitable for immediate troubleshooting of CPU issues.
D is incorrect because SQL Server Profiler is a deprecated tracing tool designed for on-premises SQL Server instances and cannot be used directly with Azure SQL Database. Azure SQL Database does not expose the trace APIs that SQL Server Profiler requires, preventing its use in Azure environments. Even for on-premises scenarios, Microsoft recommends Extended Events as a replacement for SQL Server Profiler. For Azure SQL Database query performance analysis, Query Performance Insight and Query Store are the appropriate tools that provide similar and enhanced capabilities specifically designed for cloud environments.
Question 32:
Your organization needs to implement a disaster recovery solution for an Azure SQL Database with a Recovery Time Objective of 5 minutes and a Recovery Point Objective of 10 seconds. Which of the following solutions would BEST meet these requirements?
A) Geo-replication with read-scale replicas
B) Active geo-replication with auto-failover groups
C) Long-term backup retention
D) Point-in-time restore
Answer: B
Explanation:
Active geo-replication with auto-failover groups is the best solution for meeting aggressive Recovery Time Objective and Recovery Point Objective requirements of 5 minutes and 10 seconds respectively. This solution provides continuous data replication to secondary database replicas in different Azure regions with near-zero data loss, typically achieving RPO in the range of 5-10 seconds. Auto-failover groups enable automatic failover orchestration when primary database failures are detected, ensuring RTO requirements are met without manual intervention.
Active geo-replication maintains up to four readable secondary replicas that are continuously synchronized with the primary database through asynchronous replication. The replication process captures transaction log records from the primary and applies them to secondaries, minimizing replication lag to seconds under normal conditions. Auto-failover groups enhance this capability by providing automatic failover policies that monitor database health and initiate failover when primary unavailability exceeds configured thresholds. The automatic failover process typically completes within minutes, easily meeting the 5-minute RTO requirement. Additionally, auto-failover groups provide a single read-write listener endpoint and a read-only listener endpoint that automatically redirect connections to the current primary, ensuring application connectivity without connection string modifications during failover events.
The combination of continuous asynchronous replication and automatic failover orchestration makes this solution ideal for mission-critical applications requiring high availability and minimal data loss. The readable secondary replicas can also offload read workloads from the primary database, providing additional performance benefits. Geo-replication places replicas in different Azure regions, protecting against regional outages and providing geographic distribution for disaster recovery. Although the replication is asynchronous, its lag is typically only a few seconds, so committed transactions are quickly propagated to secondaries and potential data loss during an unplanned failover stays within the 10-second RPO.
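Auto-failover groups themselves are created through the Azure portal, PowerShell, CLI, or ARM templates, but the underlying geo-replication link can be illustrated in T-SQL. The sketch below, run in the master database of the primary logical server with hypothetical database and server names, adds a readable geo-secondary and then checks the link state:

```sql
-- Run in master on the primary logical server; [SalesDb] and [contoso-dr] are hypothetical names.
ALTER DATABASE [SalesDb]
ADD SECONDARY ON SERVER [contoso-dr]
WITH (ALLOW_CONNECTIONS = ALL);   -- readable secondary, usable for read offloading

-- Check replication status of the geo-replication link.
SELECT partner_server, partner_database, replication_state_desc, role_desc
FROM sys.geo_replication_links;
```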
A is incorrect because while geo-replication with read-scale replicas provides readable secondary databases for load distribution, it does not include automatic failover capabilities. Without auto-failover groups, recovery requires manual intervention to promote a secondary replica to primary status, potentially exceeding the 5-minute RTO requirement. Manual failover processes involve detecting the failure, deciding to fail over, executing the promotion, and updating application connection strings, which collectively take longer than automated approaches. Read-scale replicas alone focus on performance optimization rather than comprehensive disaster recovery orchestration.
C is incorrect because long-term backup retention is designed for compliance and archival purposes, maintaining backups for extended periods up to 10 years, but it does not support the aggressive RTO and RPO requirements specified. Restoring from long-term backups is a time-consuming process that involves creating a new database from backup files, which can take hours depending on database size. The RPO for backup-based recovery is limited by backup frequency, typically measured in hours rather than seconds. Long-term retention is appropriate for regulatory compliance and recovery from logical corruption, not for high-availability disaster recovery scenarios.
D is incorrect because point-in-time restore is a backup-based recovery mechanism that allows restoring databases to any point within the retention period, typically 7 to 35 days. While valuable for recovering from logical errors, data corruption, or accidental deletions, point-in-time restore cannot meet the 10-second RPO requirement because it relies on automated backups taken at intervals. The RPO is limited by the transaction log backup frequency, and the restore process involves creating a new database from backups, which takes considerable time and cannot achieve a 5-minute RTO for production-sized databases.
Question 33:
You need to migrate an on-premises SQL Server database to Azure SQL Database with minimal downtime. The database is actively being used in production. Which of the following migration methods should you use?
A) Backup and restore
B) Azure Database Migration Service with online migration
C) Export/Import using BACPAC
D) Transactional replication
Answer: B
Explanation:
Azure Database Migration Service with online migration is the optimal method for migrating an on-premises SQL Server database to Azure SQL Database with minimal downtime while the source database remains actively used in production. The online migration feature provides continuous data synchronization between the source and target databases, allowing the source to remain operational and accepting changes throughout the migration process. This approach minimizes business disruption by reducing the cutover window to minutes rather than hours or days required by offline migration methods.
The online migration process works by performing an initial full data load to Azure SQL Database, then continuously replicating ongoing changes from the source SQL Server to the target Azure SQL Database using change data capture mechanisms. This continuous synchronization ensures that both databases remain in sync until the planned cutover moment when applications are redirected to Azure SQL Database. The minimal downtime window occurs only during the final cutover when applications briefly pause to complete the transition, typically lasting only a few minutes. Azure Database Migration Service handles schema migration, data migration, and ongoing synchronization automatically, providing monitoring dashboards and validation tools to ensure migration success.
The service supports assessment and compatibility checking before migration begins, identifying potential issues with schema objects, data types, or features that may not be supported in Azure SQL Database. Automatic remediation guidance helps resolve compatibility issues before migration starts. Azure Database Migration Service is a fully managed service that requires no infrastructure deployment, simplifying the migration process and reducing operational overhead. The service supports migration from SQL Server 2005 and later versions to Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure Virtual Machines, providing flexibility for various migration scenarios.
A is incorrect because backup and restore is an offline migration method that requires taking the source database offline or accepting data loss for transactions occurring after backup creation. The process involves backing up the on-premises database, transferring the backup file to Azure storage, and restoring it to Azure SQL Database. For actively used production databases, this approach results in significant downtime proportional to the database size and network transfer speed. While simple and straightforward, backup and restore does not meet the minimal downtime requirement for production databases that must remain operational during migration.
C is incorrect because exporting to BACPAC and importing to Azure SQL Database is another offline migration method that creates a point-in-time copy of the database schema and data. The BACPAC export process can impact source database performance and requires the database to be in a consistent state during export. Like backup and restore, this method does not provide continuous synchronization, resulting in downtime from when the export begins until applications are reconfigured to use the new Azure SQL Database. The import process can be time-consuming for large databases, and any changes made to the source database after export begins are lost, making this unsuitable for minimal downtime migrations of active production databases.
D is incorrect because while transactional replication can provide continuous data synchronization similar to online migration, it has significant limitations when targeting Azure SQL Database. Transactional replication requires Azure SQL Database to be configured as a push subscriber with the on-premises SQL Server as the publisher, introducing complex configuration requirements and potential compatibility issues. Azure SQL Database cannot serve as a publisher or distributor, limiting replication topology options. Additionally, transactional replication requires careful management of schema changes, has specific limitations on supported data types and features, and requires more manual configuration and monitoring compared to the fully managed Azure Database Migration Service.
Question 34:
You are configuring transparent data encryption for an Azure SQL Database. You need to use customer-managed keys stored in Azure Key Vault. Which of the following components is required?
A) Service principal
B) Managed identity
C) Shared access signature
D) Storage account key
Answer: B
Explanation:
A managed identity is required when configuring transparent data encryption with customer-managed keys stored in Azure Key Vault for Azure SQL Database. The managed identity provides the authentication mechanism that allows the Azure SQL logical server to access the encryption keys stored in Azure Key Vault without requiring credential management or storing secrets. This approach follows security best practices by eliminating the need to embed credentials in application code or configuration while providing secure, auditable access to cryptographic keys.
Azure SQL Database supports both system-assigned and user-assigned managed identities for accessing Azure Key Vault. When configured, the managed identity grants the SQL logical server permissions to retrieve the transparent data encryption protector key from Key Vault. The Key Vault access policy must be configured to grant the managed identity permissions including get, wrapKey, and unwrapKey operations on the encryption keys. These permissions enable the SQL server to retrieve the key encryption key used to protect the database encryption key that actually encrypts the data at rest. The managed identity approach provides several advantages including automatic credential rotation handled by Azure Active Directory, audit logging of key access through Azure Key Vault logs, and simplified security management without manual credential handling.
Using customer-managed keys with Azure Key Vault provides organizations with full control over encryption key lifecycle including key rotation, access policies, and key deletion capabilities. This satisfies regulatory compliance requirements demanding customer control over encryption keys. The integration with managed identities ensures that even if the SQL logical server is compromised, the encryption keys remain protected in Key Vault with access controlled through Azure Active Directory authentication and authorization. The architecture separates key management from data storage, implementing a critical security boundary that prevents unauthorized access to encryption keys even by Microsoft personnel.
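After the server's managed identity has been granted get, wrapKey, and unwrapKey permissions and the Key Vault key has been set as the TDE protector (through the portal, PowerShell, or CLI), the encryption state can be verified from inside the database with a short T-SQL check; the column interpretations in the comments follow common documentation:

```sql
-- Confirms TDE is active and shows which encryptor protects the database encryption key.
SELECT
    DB_NAME(database_id) AS database_name,
    encryption_state,        -- 3 = encrypted
    encryptor_type,          -- ASYMMETRIC KEY typically indicates a customer-managed Key Vault key
    key_algorithm,
    key_length
FROM sys.dm_database_encryption_keys;
```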
A is incorrect because while service principals can authenticate to Azure Key Vault, they require explicit credential management including client secrets or certificates that must be securely stored and regularly rotated. Service principals introduce credential management overhead and security risks associated with secret storage and exposure. Managed identities are the recommended approach for Azure service-to-service authentication because they eliminate credential management entirely, with Azure Active Directory automatically handling authentication tokens behind the scenes. Using service principals for transparent data encryption key access is unnecessarily complex and less secure than managed identities.
C is incorrect because shared access signatures are authorization mechanisms used primarily with Azure Storage services to grant limited access to storage resources without exposing account keys. Shared access signatures are not used for authenticating Azure SQL Database to Azure Key Vault. They provide time-limited, permission-scoped access to storage objects like blobs, files, queues, and tables, but do not apply to Key Vault authentication. The authentication model for accessing Azure Key Vault relies on Azure Active Directory identities including managed identities, service principals, and user accounts, not on shared access signatures.
D is incorrect because storage account keys are credentials used to access Azure Storage services and are completely unrelated to Azure Key Vault authentication or transparent data encryption configuration. Storage account keys provide full access to all data and operations within a storage account and are not used for cryptographic key management or Key Vault access control. Transparent data encryption with customer-managed keys requires Azure Key Vault for secure key storage and managed identities for authentication, neither of which involve storage account keys.
Question 35:
You need to implement row-level security on an Azure SQL Database table to ensure users can only view records where they are listed as the assigned employee. Which of the following T-SQL objects should you create?
A) Stored procedure
B) Security policy with a filter predicate
C) View with WHERE clause
D) Trigger
Answer: B
Explanation:
A security policy with a filter predicate is the correct approach for implementing row-level security in Azure SQL Database to restrict user access to specific rows based on their identity. Row-level security is a built-in security feature that uses security policies to enforce access control at the row level transparently, without requiring application code changes or query modifications. The security policy defines which rows users can access based on execution context characteristics such as database user identity, application role membership, or session context variables.
The implementation involves creating an inline table-valued function that serves as the security predicate, defining the logic for determining which rows a user can access. This predicate function typically examines the current user’s identity using functions like USER_NAME(), DATABASE_PRINCIPAL_ID(), or SESSION_CONTEXT() and compares it against columns in the table containing ownership or permission information. Once the predicate function is created, a security policy is created and bound to the target table, associating the predicate with table operations. Filter predicates silently filter rows from SELECT, UPDATE, and DELETE operations, while block predicates explicitly block write operations (AFTER INSERT, AFTER UPDATE, BEFORE UPDATE, and BEFORE DELETE) that would violate the predicate. The security enforcement happens automatically at the database engine level, making it impossible for users to bypass through query manipulation or application vulnerabilities.
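A minimal sketch of this pattern follows, assuming a hypothetical dbo.Tasks table whose AssignedEmployee column stores database user names:

```sql
-- Schema to hold security objects (a common convention, not a requirement).
CREATE SCHEMA Security;
GO

-- Inline table-valued function used as the security predicate.
CREATE FUNCTION Security.fn_TaskAccessPredicate (@AssignedEmployee AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE @AssignedEmployee = USER_NAME();   -- row visible only to the assigned employee
GO

-- Bind the predicate to the table; the filter applies to SELECT, UPDATE, and DELETE.
CREATE SECURITY POLICY Security.TaskFilterPolicy
    ADD FILTER PREDICATE Security.fn_TaskAccessPredicate(AssignedEmployee)
        ON dbo.Tasks
WITH (STATE = ON);
```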
Row-level security provides several advantages over application-level or view-based security approaches. The security logic is centralized in the database, ensuring consistent enforcement across all applications and access methods including direct database connections, reporting tools, and APIs. Performance is optimized because the predicate integrates directly into query execution plans, potentially leveraging indexes and avoiding full table scans. Changes to security requirements only require modifying the predicate function without application code updates. The approach supports complex security scenarios including hierarchical access, multi-tenancy, and time-based access restrictions through sophisticated predicate logic.
A is incorrect because while stored procedures can implement access control logic by validating user permissions before returning data, this approach requires all data access to go through the stored procedure rather than providing transparent row-level filtering. Users executing direct SELECT statements against the table would bypass stored procedure security checks entirely, creating security vulnerabilities. Stored procedures also require significant application changes to route all database access through specific procedures, and they don’t prevent unauthorized access through reporting tools, ORMs, or ad-hoc queries. Row-level security provides transparent protection regardless of how data is accessed.
C is incorrect because views with WHERE clauses can filter data based on the current user, providing a form of row-level security, but this approach has significant limitations compared to security policies. Views must be explicitly used instead of the base table, requiring application changes and careful permission management to prevent users from accessing the base table directly. If users have permissions on the base table, they can bypass view security entirely. Views also complicate updates, requiring INSTEAD OF triggers for DML operations, and multiple views may be needed for different security contexts. Row-level security eliminates these issues by enforcing security on the base table itself.
D is incorrect because triggers are procedural objects that execute in response to data modification events including INSERT, UPDATE, and DELETE operations, but they do not provide row-level filtering for SELECT queries. While triggers can validate whether users should modify specific rows and roll back unauthorized changes, they cannot prevent users from viewing rows through SELECT statements. Triggers also introduce performance overhead, execute after the triggering operation has begun processing, and require complex error handling logic. Row-level security provides comprehensive protection for both read and write operations through a unified security policy mechanism.
Question 36:
Your Azure SQL Database is experiencing blocking issues affecting application performance. You need to identify the session causing the blocks. Which dynamic management view should you query?
A) sys.dm_exec_sessions
B) sys.dm_exec_requests
C) sys.dm_tran_locks
D) sys.dm_exec_connections
Answer: C
Explanation:
The sys.dm_tran_locks dynamic management view is the most appropriate choice for identifying blocking issues in Azure SQL Database because it provides comprehensive information about currently active lock resources, lock modes, and lock status including which sessions are waiting for locks held by other sessions. This DMV displays all locks requested and granted by the database engine, including the session ID holding each lock, the resource being locked, and whether the lock request is granted or waiting, enabling administrators to trace blocking chains from blocked sessions back to the blocking session at the head of the chain.
When investigating blocking scenarios, sys.dm_tran_locks reveals the complete lock hierarchy showing which sessions hold locks on which resources and which sessions are waiting to acquire conflicting locks on those same resources. The request_status column indicates whether each lock is GRANT (currently held), WAIT (waiting to be acquired), or CONVERT (upgrading lock mode), with WAIT status indicating blocked requests. By joining sys.dm_tran_locks with other DMVs like sys.dm_exec_sessions and sys.dm_exec_requests, administrators can correlate lock information with session details, currently executing queries, and wait statistics to build a complete picture of blocking scenarios.
The resource-level detail provided by sys.dm_tran_locks enables precise identification of contention points including specific tables, pages, rows, or indexes where blocking occurs. This granular information guides optimization efforts such as index tuning, query optimization, or transaction isolation level adjustments to reduce blocking. The DMV also shows lock modes including shared, exclusive, update, intent, and schema locks, helping diagnose whether blocking stems from read-write conflicts, write-write conflicts, or schema modification operations. Understanding lock modes and resources is essential for implementing appropriate solutions whether through query optimization, indexing strategies, or application design changes.
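A commonly used diagnostic pattern joins sys.dm_tran_locks with sys.dm_os_waiting_tasks and the execution DMVs to list waiting lock requests alongside the session blocking them; the following sketch shows the idea:

```sql
-- Waiting lock requests, the sessions blocking them, and the text of the waiting query.
SELECT
    tl.request_session_id   AS waiting_session_id,
    wt.blocking_session_id,
    tl.resource_type,
    tl.resource_description,
    tl.request_mode,
    tl.request_status,
    r.wait_type,
    st.text                 AS waiting_query_text
FROM sys.dm_tran_locks        AS tl
JOIN sys.dm_os_waiting_tasks  AS wt ON tl.lock_owner_address = wt.resource_address
JOIN sys.dm_exec_requests     AS r  ON tl.request_session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE tl.request_status = 'WAIT';
```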
A is incorrect because sys.dm_exec_sessions provides information about authenticated sessions connected to SQL Server including session ID, login name, host name, and program name, but it does not provide detailed information about locks or blocking relationships. While session information is useful contextual data when investigating blocking, sys.dm_exec_sessions alone cannot identify which sessions are blocking others or what resources are involved in blocking scenarios. This DMV is typically joined with sys.dm_tran_locks to add session context to lock information but does not itself reveal blocking.
B is incorrect because sys.dm_exec_requests shows currently executing requests including their wait type and blocking session ID through the blocking_session_id column, which can indicate that a request is blocked, but it does not provide the detailed lock resource information needed to fully diagnose blocking scenarios. While sys.dm_exec_requests can identify that blocking exists and which session is causing the block, it doesn’t reveal what resources are locked or the lock modes involved, limiting diagnostic capabilities. This DMV is useful for identifying blocked requests but must be combined with sys.dm_tran_locks for comprehensive blocking analysis.
D is incorrect because sys.dm_exec_connections provides network connection information for each connection established to the database engine including connection protocol, client network address, and authentication details, but it contains no information about locking, blocking, or transaction states. This DMV is useful for network troubleshooting and connection auditing but is not relevant for diagnosing blocking issues. Connection information does not correlate with lock activity, making sys.dm_exec_connections unsuitable for blocking investigations.
Question 37:
You need to configure an Azure SQL Database to automatically tune performance by creating and dropping indexes based on workload patterns. Which feature should you enable?
A) Query Performance Insight
B) Automatic tuning
C) Intelligent Insights
D) Azure Advisor
Answer: B
Explanation:
Automatic tuning is the correct feature for enabling Azure SQL Database to automatically create and drop indexes based on workload patterns without manual intervention. This feature uses artificial intelligence and machine learning to continuously analyze database workloads, identify performance optimization opportunities, and automatically implement proven tuning recommendations including index creation, index removal, and query plan correction. Automatic tuning operates continuously in the background, adapting to changing workload patterns and reversing changes that don’t improve performance.
The automatic tuning feature focuses on three main optimization categories. Create index recommendations identify missing indexes that would benefit frequently executed queries by reducing scan operations and improving query performance. Drop index recommendations identify unused, duplicate, or rarely used indexes that consume storage and slow down data modification operations without providing query performance benefits. Force last good plan recommendations detect query plan regressions where the query optimizer chooses suboptimal execution plans and automatically forces the database to use the last known good plan. Each recommendation goes through validation before implementation, with performance metrics compared before and after changes to ensure improvements are genuine.
Automatic tuning provides safety mechanisms including automatic rollback of changes that degrade performance, validation periods to ensure changes provide sustained benefits, and monitoring dashboards showing all automatic actions taken. Administrators maintain control through configuration options allowing selective enablement of tuning categories, approval requirements for specific actions, and the ability to override or disable automatic tuning entirely. The feature integrates with Azure SQL Database intelligence systems including Query Store for performance baseline tracking and regression detection. Automatic tuning reduces database administrator workload by handling routine performance optimization tasks while allowing administrators to focus on complex optimization scenarios and strategic planning.
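Automatic tuning can be enabled in the Azure portal at the server or database level, or with T-SQL at the database level; a brief sketch:

```sql
-- Turn on all three automatic tuning options for the current database.
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON, FORCE_LAST_GOOD_PLAN = ON);

-- Verify the desired and actual state of each option.
SELECT name, desired_state_desc, actual_state_desc, reason_desc
FROM sys.database_automatic_tuning_options;
```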
A is incorrect because Query Performance Insight is a monitoring and diagnostic tool that visualizes query performance metrics and identifies resource-consuming queries, but it does not automatically implement performance optimizations. Query Performance Insight provides visibility into database performance including top CPU-consuming queries, longest-running queries, and query execution patterns, helping administrators identify optimization opportunities. However, administrators must manually analyze recommendations and implement changes such as creating indexes or rewriting queries. Query Performance Insight informs optimization decisions but does not execute them automatically.
C is incorrect because Intelligent Insights is a performance diagnostics feature that uses artificial intelligence to detect and diagnose database performance issues, providing root cause analysis and remediation guidance through automatically generated diagnostics logs. Intelligent Insights can detect issues like excessive locking, resource exhaustion, query plan regression, and increased query execution time, providing natural language explanations of problems and suggested fixes. However, Intelligent Insights is purely diagnostic and advisory; it does not automatically implement fixes like creating or dropping indexes. Administrators must review diagnostics and manually apply recommended solutions.
D is incorrect because Azure Advisor is a cloud recommendation service that provides best practice guidance across Azure services including cost optimization, security, reliability, operational excellence, and performance. While Azure Advisor may suggest performance improvements for Azure SQL Database such as scaling service tiers, implementing geo-replication, or adjusting configuration settings, it does not provide query-level index recommendations or automatically create and drop indexes. Azure Advisor recommendations are periodic, strategic, and require manual implementation. It complements but does not replace database-specific automatic tuning capabilities.
Question 38:
You are configuring auditing for an Azure SQL Database to track all database activities. You need to store audit logs in a Log Analytics workspace for centralized analysis. Which of the following should you configure?
A) Diagnostic settings with SQLSecurityAuditEvents category
B) Extended Events session
C) SQL Server Audit
D) Change tracking
Answer: A
Explanation:
Configuring diagnostic settings with the SQLSecurityAuditEvents category is the correct approach for sending Azure SQL Database audit logs to a Log Analytics workspace. Diagnostic settings provide the integration mechanism that routes various diagnostic data categories from Azure SQL Database to destinations including Log Analytics workspaces, Storage Accounts, and Event Hubs. The SQLSecurityAuditEvents category specifically contains security audit events generated by Azure SQL Database auditing, capturing database activities such as data access, schema changes, permission modifications, and authentication events.
The diagnostic settings configuration process involves enabling auditing at either the server level or database level to generate audit events, then configuring diagnostic settings to specify SQLSecurityAuditEvents as the log category to export and selecting Log Analytics workspace as the destination. This integration enables centralized log collection from multiple Azure SQL databases into a single Log Analytics workspace where advanced query capabilities using Kusto Query Language enable sophisticated security analysis, compliance reporting, and anomaly detection across the database estate. Log Analytics retention policies, alerting rules, and visualization dashboards can be applied to audit data for comprehensive security monitoring.
Using Log Analytics workspace as the audit destination provides several advantages over alternative destinations like Storage Accounts or Event Hubs. Log Analytics offers powerful query capabilities for searching, filtering, and correlating audit events with other Azure resource logs collected in the same workspace. Built-in and custom workbooks provide visualization of audit data for security operations teams. Integration with Azure Sentinel enables security information and event management capabilities including threat detection, investigation, and automated response. Alert rules can trigger notifications when suspicious activities or policy violations are detected in audit logs. The approach supports compliance requirements for centralized security monitoring and audit trail retention.
A is correct as explained above. Diagnostic settings with SQLSecurityAuditEvents send audit logs to Log Analytics.
B is incorrect because Extended Events sessions are lightweight performance monitoring mechanisms used primarily for troubleshooting and performance analysis rather than comprehensive security auditing. While Extended Events can capture various database activities, they lack the pre-built audit event categories and compliance-oriented logging provided by Azure SQL Database auditing. Extended Events require manual session configuration including event selection, filtering, and target specification, and they don’t integrate natively with Log Analytics workspaces. Extended Events are valuable for detailed performance troubleshooting but are not designed for security audit trail management or compliance logging.
C is incorrect because SQL Server Audit is an on-premises SQL Server auditing feature that is not directly applicable to Azure SQL Database in the same way it is used on-premises. While Azure SQL Database auditing is conceptually similar to SQL Server Audit, the implementation differs significantly in cloud environments. Azure SQL Database uses its own auditing framework configured through the Azure portal, PowerShell, or ARM templates, with audit destinations configured through diagnostic settings. The SQL Server Audit syntax and stored procedures used for on-premises audit configuration are not available in Azure SQL Database.
D is incorrect because change tracking is a data change capture feature that records insert, update, and delete operations on table rows to support data synchronization scenarios, not security auditing. Change tracking identifies which rows changed and provides minimal change metadata including change version numbers but does not capture who made changes, when changes occurred, what values changed, or the context of modifications. Change tracking is designed for application-level data synchronization rather than audit trail generation for security and compliance purposes. It captures data layer changes without the security context needed for auditing.
Question 39:
You need to restore an Azure SQL Database to a specific point in time three days ago due to accidental data deletion. The database uses the General Purpose service tier. What is the maximum point-in-time restore retention period available?
A) 7 days
B) 14 days
C) 35 days
D) 365 days
Answer: C
Explanation:
The maximum point-in-time restore retention period available for Azure SQL Database in the General Purpose service tier is 35 days. This configurable retention period can be adjusted from a minimum of 1 day up to a maximum of 35 days depending on business requirements for operational recovery. Point-in-time restore leverages automated backups including full backups, differential backups, and transaction log backups that Azure SQL Database automatically creates and maintains without requiring manual intervention or backup scheduling.
The automated backup system for Azure SQL Database takes full database backups weekly, differential backups every 12 to 24 hours, and transaction log backups approximately every 5 to 10 minutes depending on compute size and database activity level. These frequent transaction log backups enable recovery to any specific point within the retention period with granularity down to the second, allowing precise restoration to moments before data corruption or accidental deletions occurred. The backup retention period determines how far back point-in-time restore can reach, with the default retention typically set to 7 days but configurable up to the service tier maximum of 35 days for General Purpose and Business Critical tiers.
Configuring backup retention involves balancing recovery requirements against storage costs, as longer retention periods consume more backup storage. Backup storage equal to the configured maximum data storage size is included at no additional cost, with charges applying to backup storage consumption that exceeds this allowance. Organizations requiring retention beyond 35 days for compliance or archival purposes can implement long-term backup retention policies extending up to 10 years, though these backups are intended for compliance rather than operational recovery. Point-in-time restore creates a new database from the backup chain, allowing administrators to examine restored data, extract specific records, or redirect applications to the restored database after validation.
C is correct as explained above. General Purpose tier supports up to 35 days retention for point-in-time restore.
A is incorrect because while 7 days is the default retention period for point-in-time restore in Azure SQL Database, it is not the maximum retention period available. The 7-day default provides a reasonable balance between recovery capability and storage costs for most workloads, but organizations can configure longer retention up to the service tier maximum when business requirements justify extended operational recovery windows. The default 7-day retention can be increased through Azure portal, PowerShell, CLI, or ARM templates to any value between 1 and 35 days.
B is incorrect because 14 days is not a specific threshold for Azure SQL Database point-in-time restore retention. While administrators can configure 14 days as a custom retention period if desired, this value has no special significance in Azure SQL Database backup architecture. The actual maximum retention for point-in-time restore depends on the service tier, with General Purpose and Business Critical tiers supporting up to 35 days. The 14-day value does not represent a tier limit or default configuration.
D is incorrect because 365 days (one year) exceeds the maximum point-in-time restore retention period of 35 days for operational backups. Long-term backup retention can maintain backups for up to 10 years including yearly retention policies, but these are separate from point-in-time restore capabilities. Long-term retention backups are intended for compliance, archival, and regulatory requirements rather than operational recovery scenarios. These long-term backups provide restore points at specific intervals (weekly, monthly, yearly) rather than the continuous point-in-time restore capability provided by operational backup retention within the 35-day window.
Question 40:
You are implementing elastic pools for Azure SQL Database to optimize costs for multiple databases with varying and unpredictable usage patterns. Which of the following metrics should you primarily monitor to ensure optimal pool sizing?
A) Individual database storage consumption
B) Aggregate eDTU or vCore usage across all databases
C) Number of databases in the pool
D) Individual database connection counts
Answer: B
Explanation:
Aggregate eDTU or vCore usage across all databases in the elastic pool is the primary metric for monitoring and ensuring optimal pool sizing. Elastic pools achieve cost efficiency through resource sharing, allowing multiple databases to share a pool of resources with the assumption that not all databases will simultaneously require maximum resources. Monitoring aggregate resource consumption reveals whether the pool has sufficient capacity to handle the combined workload of all databases or whether sizing adjustments are needed to prevent performance degradation or optimize costs.
The elastic pool model works on the principle of resource pooling where individual databases can burst to higher resource levels when needed, borrowing from the pool’s total capacity, while databases with lower activity contribute unused resources back to the pool. Effective pool sizing requires understanding the aggregate resource consumption patterns rather than simply summing the maximum resources that each database might individually require. By monitoring aggregate eDTU or vCore usage, administrators can identify whether the pool consistently approaches or exceeds its resource limits, indicating the need for scaling up, or whether resources are significantly underutilized, indicating opportunity for cost optimization by scaling down.
Azure provides metrics and alerts for monitoring elastic pool resource consumption including CPU percentage, data IO percentage, log IO percentage, eDTU percentage (for DTU-based pools), and allocated data storage percentage. Sustained high utilization approaching 80-90% suggests the pool is undersized and may cause performance issues including query queuing, increased latency, and timeout errors. Conversely, consistently low utilization below 40-50% indicates overprovisioning with opportunity to reduce pool size and decrease costs. The monitoring strategy should evaluate both average utilization patterns and peak usage scenarios to ensure the pool can handle workload spikes without performance degradation while avoiding excessive overprovisioning.
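Besides the portal metrics and alerts, aggregate pool utilization history can also be queried with T-SQL from the logical server's master database; a brief sketch with a hypothetical pool name:

```sql
-- Run in the master database of the logical server; 'sales-pool' is a hypothetical pool name.
SELECT TOP (48)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.elastic_pool_resource_stats
WHERE elastic_pool_name = 'sales-pool'
ORDER BY end_time DESC;
```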
B is correct as explained above. Aggregate resource usage determines if pool sizing is appropriate for the workload.
A is incorrect because individual database storage consumption, while important for overall capacity planning and cost management, does not directly indicate whether the elastic pool’s compute resources (eDTUs or vCores) are appropriately sized for the workload. Storage is billed separately from compute resources in elastic pools, and storage consumption grows relatively predictably compared to the variable compute resource demands that elastic pools are designed to accommodate. Monitoring storage helps ensure the pool has adequate storage capacity and identifies databases consuming disproportionate storage, but it doesn’t reveal whether the pool’s performance resources match workload requirements.
C is incorrect because the number of databases in the pool is a configuration parameter rather than a performance metric for optimization. While elastic pools have maximum database count limits depending on the service tier and size, simply counting databases doesn’t indicate whether the pool is appropriately sized. A pool might perform well with many small, inactive databases or struggle with fewer large, highly active databases. The resource consumption pattern matters more than the database count. The number of databases provides context for analysis but doesn’t directly measure resource utilization or guide sizing decisions.
D is incorrect because individual database connection counts, while relevant for understanding application connectivity patterns and troubleshooting connection pooling issues, do not directly indicate whether the elastic pool’s compute resources are appropriately sized. Azure SQL Database connection limits are generous and rarely represent bottlenecks in properly designed applications using connection pooling. Connection counts may indicate application architecture issues like connection leaks or missing connection pooling but don’t reveal whether eDTU or vCore resources are sufficient. Query performance and resource utilization metrics are more relevant for pool sizing decisions than connection counts.
Question 41:
You need to implement column-level encryption for sensitive data in an Azure SQL Database to ensure that data remains encrypted even when accessed by database administrators. Which of the following features should you use?
A) Transparent data encryption
B) Always Encrypted
C) Dynamic data masking
D) Row-level security
Answer: B
Explanation:
Always Encrypted is the correct feature for implementing column-level encryption that protects sensitive data from database administrators and other privileged users with access to the database server. Always Encrypted is a client-side encryption technology where encryption and decryption operations occur within client applications using column encryption keys that never leave the client environment. The database server stores only encrypted data and never has access to encryption keys or plaintext data, providing end-to-end protection for sensitive columns even against administrators with full database access.
The Always Encrypted architecture uses a two-tier key hierarchy consisting of column encryption keys that encrypt individual columns and column master keys that protect the column encryption keys. Column master keys are stored securely outside the database in trusted key stores such as Azure Key Vault, Windows Certificate Store, or hardware security modules, ensuring that even with full database access, administrators cannot decrypt data without also compromising the client-side key store. Client applications equipped with Always Encrypted-enabled drivers automatically encrypt data before sending it to the database and decrypt query results transparently, making encryption operations invisible to application logic in most scenarios.
Always Encrypted supports two encryption types, deterministic and randomized, which provide different security and functionality trade-offs. Deterministic encryption produces the same ciphertext for identical plaintext values, enabling equality comparisons, GROUP BY operations, and joins on encrypted columns, though with reduced security against pattern analysis. Randomized encryption generates different ciphertext for each encryption operation, providing stronger security but preventing operations beyond retrieval and decryption. The feature integrates with SQL Server Management Studio and Visual Studio for encryption key provisioning, column encryption configuration, and migration of existing data to encrypted columns with minimal application changes.
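In practice the keys and encrypted columns are usually provisioned with the SSMS Always Encrypted wizard or PowerShell, but the resulting metadata corresponds to T-SQL along the lines of the sketch below. The Key Vault path, key names, and table are hypothetical, and the column encryption key's ENCRYPTED_VALUE is a placeholder that tooling normally generates:

```sql
-- Column master key stored in Azure Key Vault (key path is hypothetical).
CREATE COLUMN MASTER KEY CMK_HR
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = 'https://contoso-kv.vault.azure.net/keys/cmk-hr/0123456789abcdef'
);

-- Column encryption key wrapped by the column master key.
CREATE COLUMN ENCRYPTION KEY CEK_HR
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_HR,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x01   -- placeholder; SSMS or PowerShell generates the real wrapped key bytes
);

-- Table with encrypted columns; deterministic allows equality lookups, randomized is stronger.
CREATE TABLE dbo.Employees (
    EmployeeId int IDENTITY PRIMARY KEY,
    SSN        char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_HR,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    Salary     money
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_HR,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```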
B is correct as explained above. Always Encrypted provides client-side column encryption protecting data from database administrators.
A is incorrect because transparent data encryption encrypts the entire database at rest including data files, log files, and backup files to protect against unauthorized access to physical media, but it does not protect against authorized database users including administrators who authenticate normally and access decrypted data. TDE operates transparently at the storage level, encrypting pages as they’re written to disk and decrypting them when read into memory. Once data is in memory or transmitted to clients, it is unencrypted and accessible to any user with appropriate database permissions. TDE protects the database from offline attacks but not from privileged user access.
C is incorrect because dynamic data masking is an authorization feature that obscures sensitive data in query results for non-privileged users by applying masking rules that replace actual values with masked versions, but it does not encrypt data or prevent privileged users from viewing original values. Dynamic data masking works by modifying query results at runtime based on user permissions, showing masked data to unauthorized users while displaying actual data to authorized users. Database administrators and users with UNMASK permission see unmasked data, making this feature unsuitable for protecting data from privileged users. Dynamic data masking provides quick security layering without schema changes but is not encryption.
D is incorrect because row-level security controls which rows users can access based on their identity or role membership, implementing access control at the row level rather than encrypting data. Row-level security filters query results to show only rows matching security predicates defined in security policies but does not encrypt column data or prevent privileged users from viewing accessible rows. Users authorized to access specific rows see all column values in plaintext unless additional column-level protections are implemented. Row-level security addresses authorization and data isolation rather than encryption and confidentiality from privileged access.
Question 42:
Your application requires reading data from an Azure SQL Database with minimum latency. You have configured active geo-replication with multiple secondary replicas. Which connection string parameter should you use to route read queries to secondary replicas?
A) MultipleActiveResultSets=True
B) ApplicationIntent=ReadOnly
C) Encrypt=True
D) Pooling=True
Answer: B
Explanation:
The ApplicationIntent=ReadOnly connection string parameter enables read-scale-out by routing read queries to secondary replicas in active geo-replication configurations, reducing load on the primary database and improving overall application performance. When this parameter is set to ReadOnly, Azure SQL Database automatically redirects connections to one of the available readable secondary replicas instead of the primary database. This read-only routing distributes read workloads across secondary replicas, allowing the primary to focus on write operations and read queries that require strong consistency with recent writes.
The read-scale-out architecture leverages the readable secondary replicas maintained by active geo-replication, which continuously replicate data from the primary database with minimal lag, typically measured in seconds. Applications can separate read and write operations using different connection strings: one with ApplicationIntent=ReadWrite (the default) for write operations and transactionally consistent reads, pointing to the primary, and another with ApplicationIntent=ReadOnly for read queries that can tolerate slightly stale data, pointing to secondaries. This separation enables horizontal scaling of read capacity without increasing primary database resources, providing cost-effective performance scaling for read-heavy workloads.
The ApplicationIntent parameter works with the read-only listener endpoint provided by auto-failover groups, which automatically redirects read-intent connections to a readable secondary; with standalone active geo-replication, applications connect directly to the secondary server’s endpoint. The feature requires readable secondaries configured in active geo-replication or auto-failover groups, and applications must be designed to handle eventual consistency where reads from secondaries may reflect slightly outdated data compared to the primary. For workloads with high read-to-write ratios, read-scale-out can significantly improve performance and cost efficiency by offloading read queries from the primary database to dedicated read replicas in the same region or different regions for improved latency.
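A small illustration with hypothetical server and database names: the connection string carries the read-intent flag, and a quick T-SQL check confirms whether the session landed on a readable secondary.

```sql
-- Connection string (shown as a comment; values are hypothetical):
--   Server=tcp:contoso-fog.secondary.database.windows.net,1433;
--   Database=SalesDb;ApplicationIntent=ReadOnly;Encrypt=True;
--
-- After connecting, verify which replica is serving the session:
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;
-- READ_ONLY  => readable secondary
-- READ_WRITE => primary
```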
B is correct as explained above. ApplicationIntent=ReadOnly routes queries to readable secondary replicas.
A is incorrect because MultipleActiveResultSets=True (MARS) is a connection string parameter that enables multiple batches of data to be processed concurrently on a single connection, allowing applications to have multiple active result sets on one connection rather than requiring separate connections for each query. MARS addresses connection management and result set handling within client applications but has no relationship to routing queries to secondary replicas. MARS affects how the client driver manages multiple commands on a connection, not which database replica processes queries. This parameter is useful for specific application patterns but does not enable read-scale-out.
C is incorrect because Encrypt=True is a security parameter that enforces TLS/SSL encryption for data transmitted between the client application and the database server, protecting data in transit from eavesdropping and tampering. While encryption is a security best practice recommended for all Azure SQL Database connections, this parameter controls connection security rather than query routing to replicas. Encrypt=True affects the communication channel but not whether queries execute on primary or secondary replicas. Connection encryption and read-scale-out are independent concerns that serve different purposes in database architecture.
D is incorrect because Pooling=True enables connection pooling in the client application, allowing connections to be reused from a pool rather than creating new connections for each database operation, improving performance by reducing connection establishment overhead. Connection pooling is a client-side performance optimization technique that reduces latency and resource consumption but does not determine which database replica processes queries. Pooling affects connection lifecycle management within the application but not routing decisions made by Azure SQL Database. While connection pooling is recommended for application performance, it does not enable read-scale-out functionality.
Question 43:
You need to monitor and identify queries in Azure SQL Database that are experiencing parameter-sniffing issues causing performance variability. Which of the following Query Store features should you use?
A) Top Resource Consuming Queries
B) Tracked Queries
C) Regressed Queries
D) Overall Resource Consumption
Answer: C
Explanation:
The Regressed Queries feature in Query Store is specifically designed to identify queries experiencing performance degradation over time, including issues caused by parameter sniffing where query execution plans become suboptimal for certain parameter values. Regressed Queries compares current query performance against historical baseline performance, automatically detecting queries that have slowed down or consume more resources than previous executions. This feature is particularly effective for identifying parameter sniffing problems where query plans optimized for one set of parameter values perform poorly when different parameter values are provided.
Parameter sniffing occurs when SQL Server’s query optimizer examines the first parameter values used when compiling a query plan and creates an execution plan optimized for those specific values. If subsequent executions use significantly different parameter values that would benefit from alternative execution strategies like different index selections or join algorithms, the cached plan may perform suboptimally. The Regressed Queries view surfaces these performance variations by showing queries with increased duration, CPU usage, logical reads, or execution counts compared to historical performance baselines, enabling administrators to investigate whether parameter sniffing or other factors caused regression.
The Regressed Queries interface presents performance metrics across multiple dimensions including execution duration, CPU time, logical reads, physical reads, and execution count, comparing recent performance against a configurable baseline period. Each regressed query shows multiple execution plans that were used over time, allowing administrators to compare plans and identify when plan changes occurred. For parameter sniffing issues, administrators typically observe different execution plans associated with significantly different performance characteristics. Query Store provides the ability to force specific query plans, offering immediate mitigation for parameter sniffing by ensuring consistent plan usage regardless of parameter values while permanent fixes are developed through query rewriting, index optimization, or query hints.
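Once a regressed query and its historically good plan have been identified in the report (or in the Query Store catalog views), the last good plan can be pinned with T-SQL; the IDs below are placeholders:

```sql
-- Force the known-good plan for the regressed query (query_id and plan_id are placeholders).
EXEC sp_query_store_force_plan @query_id = 48, @plan_id = 142;

-- Later, once a permanent fix (query rewrite, index, or hint) is in place:
-- EXEC sp_query_store_unforce_plan @query_id = 48, @plan_id = 142;
```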
C is correct as explained above. Regressed Queries identifies performance degradation including parameter sniffing issues.
A is incorrect because Top Resource Consuming Queries identifies queries consuming the most database resources including CPU, memory, IO, and duration based on current or historical usage, but it focuses on absolute resource consumption rather than performance changes over time. While a query affected by parameter sniffing might appear in this list if it consumes significant resources, Top Resource Consuming Queries doesn’t specifically highlight performance regression or variability. This view helps identify the most expensive queries for optimization efforts but doesn’t distinguish between consistently expensive queries and queries experiencing new performance problems.
B is incorrect because Tracked Queries is a Query Store feature that allows administrators to manually select specific queries for detailed monitoring and tracking, creating custom monitoring sets for queries of special interest. While Tracked Queries can monitor queries suspected of parameter sniffing problems, this feature requires proactively identifying and adding queries to the tracked set rather than automatically detecting performance regression. Tracked Queries provides focused monitoring for known problem queries but doesn’t automatically discover queries experiencing new performance issues or regressions caused by parameter sniffing.
D is incorrect because Overall Resource Consumption provides aggregated resource usage metrics for the entire database showing trends in CPU, IO, memory, and query execution over time, but it presents database-wide statistics rather than query-specific performance analysis. This view helps understand overall database workload characteristics and capacity planning but cannot identify individual queries experiencing parameter sniffing or performance regression. Overall Resource Consumption shows macro-level trends like increasing resource usage or query volumes but lacks the query-specific detail needed to diagnose parameter sniffing issues.
Question 44:
You are configuring a new Azure SQL Database and need to ensure that it can handle unpredictable workload spikes without manual intervention. Which compute tier should you select?
A) Provisioned
B) Serverless
C) Hyperscale
D) Managed Instance
Answer: B
Explanation:
The Serverless compute tier is specifically designed to handle unpredictable workload spikes with automatic scaling without manual intervention, making it ideal for databases with intermittent or variable usage patterns. Serverless automatically scales compute resources up and down based on workload demand within configured minimum and maximum vCore limits, charging only for the compute resources actually consumed rather than pre-provisioned capacity. This automatic scaling eliminates the need for manual intervention to handle workload variations while optimizing costs by reducing resource allocation during periods of low activity.
The Serverless tier implements auto-scaling through continuous monitoring of workload metrics including CPU utilization, active sessions, and query complexity, automatically adjusting compute resources in near real time to meet demand. When workload increases, the database scales up to higher vCore allocations within seconds, providing additional CPU, memory, and IO capacity to maintain performance. During quiet periods, the database scales down to the configured minimum vCore setting or can automatically pause after a configurable period of inactivity, incurring no compute charges while paused. The auto-resume capability detects new connection attempts and brings the database back online, typically in under a minute, making the pause-resume cycle largely transparent to applications apart from the latency of (and possible retry on) the first connection that triggers the resume.
Serverless is particularly cost-effective for development, testing, and production databases with unpredictable usage patterns including databases used during business hours only, proof-of-concept environments, seasonal applications, and workloads with highly variable demand. The billing model charges per vCore-second of actual compute usage plus storage, rather than charging for continuously provisioned capacity regardless of utilization. Configuration parameters include minimum and maximum vCore limits defining the scaling range, and auto-pause delay specifying inactivity duration before automatic pausing. The tier supports the same features as provisioned compute including geo-replication, backup and restore, and high availability, making it a functionally complete option for variable workloads.
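A minimal provisioning sketch is shown below using the azure-mgmt-sql and azure-identity Python packages; the subscription, resource group, server, database, and region values are placeholders, and the specific SKU name (GP_S_Gen5) and model fields (min_capacity, auto_pause_delay) are stated here as assumptions about the current SDK surface rather than guarantees. It creates a serverless General Purpose database that scales between 0.5 and 4 vCores and auto-pauses after one hour of inactivity.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database, Sku

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")  # placeholder

poller = client.databases.begin_create_or_update(
    resource_group_name="rg-demo",        # placeholder resource group
    server_name="sql-demo-server",        # placeholder logical server
    database_name="db-serverless-demo",   # placeholder database
    parameters=Database(
        location="eastus",
        # GP_S_* SKUs select the serverless compute model; capacity is the max vCores.
        sku=Sku(name="GP_S_Gen5", tier="GeneralPurpose", family="Gen5", capacity=4),
        min_capacity=0.5,      # lower bound of the auto-scaling range, in vCores
        auto_pause_delay=60,   # minutes of inactivity before auto-pause (-1 disables it)
    ),
)
database = poller.result()
print(database.current_service_objective_name)  # e.g. GP_S_Gen5_4
```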
B is correct as explained above. Serverless automatically scales compute resources based on workload demand.
A is incorrect because the Provisioned compute tier allocates fixed compute resources (vCores or DTUs) that remain constant until manually changed by administrators, making it less suitable for unpredictable workload spikes without manual intervention. While provisioned compute offers predictable performance and simplified capacity planning for stable workloads, it cannot automatically scale to handle unexpected load increases. Provisioned databases either experience performance degradation when workload exceeds provisioned capacity or incur unnecessary costs from overprovisioning to handle occasional spikes. Scaling provisioned compute requires manual adjustments with brief unavailability during resize operations.
C is incorrect because Hyperscale is a service tier designed for very large databases up to 100 TB with rapidly scaling storage and read-scale capabilities, not for automatic compute scaling in response to workload variations. Hyperscale architecture provides storage that automatically grows as needed, multiple read replicas for read scale-out, and fast backup and restore operations, but compute resources are provisioned similar to other service tiers and require manual scaling adjustments. Hyperscale addresses database size and read scaling requirements rather than automatic compute adjustment for variable workloads.
D is incorrect because Azure SQL Managed Instance is a deployment option providing near 100% compatibility with on-premises SQL Server for lift-and-shift migrations, not a compute tier that automatically scales. Managed Instance uses provisioned compute resources that remain constant until manually changed, similar to provisioned compute tiers in Azure SQL Database. While Managed Instance supports various service tiers including General Purpose and Business Critical with different performance characteristics, none provide the automatic compute scaling and pause-resume capabilities offered by the Serverless compute tier in Azure SQL Database.
Question 45:
You need to implement a solution to automatically replicate schema changes from a production Azure SQL Database to multiple test environments. Which of the following approaches would be MOST appropriate?
A) Database snapshots
B) SQL Data Sync
C) Transactional replication
D) Azure DevOps with database projects and CI/CD pipelines
Answer: D
Explanation:
Azure DevOps with database projects and CI/CD (Continuous Integration/Continuous Deployment) pipelines is the most appropriate approach for automatically replicating schema changes from production to multiple test environments in a controlled, repeatable, and auditable manner. This approach treats database schema as code using SQL Server Data Tools database projects or similar solutions that maintain schema definitions in source control systems like Git. Changes to database schema are versioned, reviewed through pull requests, tested through automated builds, and deployed consistently across environments using automated deployment pipelines.
The CI/CD pipeline approach for database schema management provides several critical advantages for enterprise database development. Schema changes are tracked in version control providing complete change history, rollback capabilities, and audit trails showing who made what changes and when. Pull request workflows enable peer review of schema changes before they reach production, improving quality and reducing errors. Automated builds validate that schema changes compile correctly and pass unit tests before deployment. Deployment pipelines ensure identical schema changes deploy to all environments in the proper sequence from development through test to production, eliminating environment drift and configuration inconsistencies.
Database projects built with SQL Server Data Tools and deployed through Azure DevOps enable declarative, state-based schema management where the desired end state is defined rather than individual migration scripts being written; migration-based tools such as Redgate SQL Change Automation or Liquibase reach the same goal through ordered, versioned change scripts. In the state-based model, the tooling compares the project schema against each target database and automatically generates a deployment script that safely applies the necessary changes. Pre-deployment and post-deployment scripts handle data migrations, reference data updates, and other deployment activities. Integration with Azure SQL Database ensures generated deployment scripts are compatible with Azure-specific features and limitations. The automated pipeline approach scales efficiently to multiple environments, supporting scenarios where schema changes must be replicated to dozens or hundreds of test databases without manual script execution.
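As one hedged example of the deployment stage, the Python sketch below (the dacpac path, server, database names, and credentials are placeholders, and SqlPackage is assumed to be installed on the build agent) publishes the compiled database project to several test databases in a loop, which is essentially what a script or SqlPackage deployment task in an Azure DevOps pipeline would do for each environment.

```python
import subprocess

DACPAC = "bin/Release/SalesDb.dacpac"  # placeholder: output of the database project build
TEST_DATABASES = ["SalesDb_Test1", "SalesDb_Test2", "SalesDb_Test3"]  # placeholder targets

for db_name in TEST_DATABASES:
    # SqlPackage compares the dacpac against the target and applies only the differences.
    subprocess.run(
        [
            "SqlPackage",
            "/Action:Publish",
            f"/SourceFile:{DACPAC}",
            "/TargetServerName:test-sqlserver.database.windows.net",  # placeholder
            f"/TargetDatabaseName:{db_name}",
            "/TargetUser:deploy_user",            # placeholder
            "/TargetPassword:<pipeline secret>",  # placeholder
        ],
        check=True,  # stop and fail the pipeline if any environment fails to deploy
    )
```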
D is correct as explained above. CI/CD pipelines provide controlled, automated schema replication across environments.
A is incorrect because database snapshots create point-in-time, read-only copies of databases primarily used for reporting, testing against static data, or recovering from user errors, but they do not provide mechanisms for replicating schema changes to other environments. Snapshots capture the complete database state at creation time and remain static, not reflecting subsequent changes to the source database. While snapshots can be used to create test environments from production data at a specific point in time, they don’t provide ongoing schema synchronization or support selective schema-only replication without data. Database snapshots are also not directly supported in Azure SQL Database as they are in SQL Server on virtual machines.
B is incorrect because SQL Data Sync is a data synchronization service designed to replicate data bidirectionally or unidirectionally between Azure SQL databases, on-premises databases, and SQL Server instances, focusing on data movement rather than schema management. SQL Data Sync works with predefined schemas that must be created independently on all participating databases, and it does not automatically replicate schema changes when tables, columns, indexes, or other objects are added or modified. SQL Data Sync is appropriate for multi-master data replication scenarios or consolidating data from multiple sources but is not designed for schema change management across environments.
C is incorrect because transactional replication is a technology for replicating data from publisher databases to subscriber databases with low latency, primarily designed for operational reporting, data distribution, and load balancing scenarios rather than development and test environment management. While transactional replication can replicate both data and schema changes, it establishes ongoing replication relationships that keep subscribers synchronized with publisher changes continuously, which is not appropriate for test environments that should be isolated from production data changes. Additionally, Azure SQL Database as a subscriber has limitations on schema changes that can be replicated, and managing replication for multiple independent test environments creates unnecessary complexity.