Microsoft DP-300 Administering Azure SQL Solutions Exam Dumps and Practice Test Questions Set 10 Q 136-150
Question 136:
An organization is experiencing high CPU utilization on their Azure SQL Database during specific times of the day. As the database administrator, you need to identify the queries causing the issue. Which of the following tools would be MOST effective for identifying resource-intensive queries?
A) Azure Monitor metrics
B) Query Performance Insight
C) Dynamic Management Views (DMVs)
D) SQL Server Profiler
Answer: B
Explanation:
Performance troubleshooting in Azure SQL Database requires effective tools to identify problematic queries that consume excessive resources like CPU, memory, or I/O. When experiencing performance issues during specific time periods, administrators need visibility into query execution patterns, resource consumption trends, and the ability to drill down into specific query details. Azure provides multiple monitoring and diagnostic tools, each with different capabilities and use cases. Understanding which tool provides the most comprehensive and actionable information for query performance analysis is essential for effective database administration.
Azure SQL Database includes built-in intelligence and monitoring capabilities specifically designed to help administrators identify and resolve performance issues. These tools collect telemetry data continuously, analyze query patterns, and provide recommendations for optimization. The challenge is selecting the tool that provides the most direct path to identifying resource-intensive queries with sufficient detail to take corrective action. Some tools provide high-level metrics while others offer granular query-level insights with execution statistics and historical trends.
B is correct because Query Performance Insight is specifically designed to identify resource-intensive queries in Azure SQL Database and provides the most comprehensive view for troubleshooting CPU utilization issues. Query Performance Insight automatically collects query execution data from the Query Store and presents it through an intuitive Azure portal interface. It shows top resource-consuming queries by CPU, duration, execution count, and I/O, displays query execution trends over configurable time periods (allowing you to focus on specific times when CPU is high), provides query text and execution plans for analysis, shows aggregated statistics across multiple executions, highlights queries with regression in performance, and offers drill-down capabilities to see individual query executions. For the scenario described where CPU is high at specific times, Query Performance Insight allows administrators to set the time range to those problematic periods and immediately see which queries consumed the most CPU, making it the most effective tool for this specific troubleshooting task.
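For reference, Query Performance Insight visualizes data that lives in the Query Store catalog views, which can also be queried directly when more control over the time window is needed. The following T-SQL is a minimal sketch using the standard Query Store views; the 24-hour window is an assumed example and should be adjusted to the period when CPU was high:

    SELECT TOP (10)
           q.query_id,
           SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us,
           SUM(rs.count_executions)                   AS executions,
           MIN(qt.query_sql_text)                     AS query_text
    FROM   sys.query_store_query AS q
    JOIN   sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    JOIN   sys.query_store_plan AS p ON p.query_id = q.query_id
    JOIN   sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
    JOIN   sys.query_store_runtime_stats_interval AS i
           ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
    WHERE  i.start_time >= DATEADD(HOUR, -24, SYSUTCDATETIME())  -- assumed window; adjust to the CPU spike
    GROUP  BY q.query_id
    ORDER  BY total_cpu_time_us DESC;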
A is incorrect because while Azure Monitor metrics provide valuable information about database resource utilization including CPU percentage, DTU consumption, storage usage, and connection counts, they don’t identify specific queries causing the issues. Azure Monitor shows you that CPU is high but doesn’t tell you which queries are responsible. It’s excellent for alerting and identifying when problems occur, but you need query-level analysis tools like Query Performance Insight to determine the root cause. Azure Monitor metrics are the starting point that tells you there’s a problem, but Query Performance Insight is what you use to diagnose which queries are causing that problem.
C is incorrect because while Dynamic Management Views provide extremely detailed query execution information and are powerful tools for performance analysis, they have significant limitations in Azure SQL Database compared to Query Performance Insight. DMVs like sys.dm_exec_query_stats provide current and recent query statistics but don’t retain historical data across database restarts or over extended periods, require writing and executing T-SQL queries to extract information, don’t provide graphical interfaces for trend analysis, show only cached execution plans (which may not include historical problematic queries), and require more expertise to interpret results effectively. While DMVs are valuable for real-time analysis and advanced troubleshooting, Query Performance Insight provides easier access to historical query performance data with better visualization, making it more effective for identifying queries that caused CPU spikes at specific times in the past.
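As a comparison, the classic plan-cache approach looks like the sketch below. It reflects only statements whose plans are currently cached, which is the historical-data limitation noted above:

    -- Top CPU consumers from the plan cache (volatile; cleared on failover or restart).
    SELECT TOP (10)
           qs.total_worker_time / 1000 AS total_cpu_ms,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM   sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER  BY qs.total_worker_time DESC;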
D is incorrect because SQL Server Profiler is not supported or available for Azure SQL Database. Profiler is a trace tool for on-premises SQL Server that captures detailed events but has been deprecated in favor of Extended Events even in on-premises environments. In Azure SQL Database, you cannot use SQL Server Profiler because you don’t have access to the underlying server infrastructure. Azure provides alternative monitoring solutions including Query Performance Insight, Extended Events (with limitations), and various DMVs. For cloud-based Azure SQL Database, Query Performance Insight is the primary tool for identifying resource-intensive queries, making it the correct answer for this Azure-specific scenario.
Question 137:
A company needs to implement automated backup retention for their Azure SQL Database to meet compliance requirements of retaining backups for 10 years. Which of the following features should be configured?
A) Point-in-time restore (PITR)
B) Long-term retention (LTR)
C) Geo-redundant backup
D) Azure Backup service
Answer: B
Explanation:
Backup retention policies in Azure SQL Database are critical for meeting business continuity, disaster recovery, and regulatory compliance requirements. Different industries and regulations mandate specific backup retention periods, with some requiring retention of 7 years, 10 years, or even longer for audit and compliance purposes. Azure SQL Database provides multiple backup features, each designed for different recovery scenarios and retention timeframes. Understanding the distinction between these features and their appropriate use cases is essential for database administrators responsible for ensuring compliance and data protection.
Azure SQL Database automatically performs full, differential, and transaction log backups to enable point-in-time restore capabilities. However, the default automatic backup retention is limited to a specific period depending on the service tier and is primarily designed for operational recovery rather than long-term compliance. For scenarios requiring retention beyond the standard backup window, Azure provides specialized features that extend retention capabilities while managing storage costs effectively. These extended retention features allow organizations to meet stringent compliance requirements without maintaining complex custom backup solutions.
B is correct because Long-term retention (LTR) is specifically designed for compliance scenarios requiring backup retention beyond the standard retention period, including the 10-year requirement mentioned in the question. LTR allows administrators to configure policies that automatically retain full database backups for extended periods up to 10 years. LTR policies can be configured with flexible retention schedules including weekly backups retained for weeks, monthly backups retained for months, and yearly backups retained for years (up to 10 years). The backups are stored in Azure RA-GRS (Read-Access Geo-Redundant Storage) automatically, ensuring geographic redundancy, and LTR backups are separate from the standard PITR backup retention, allowing organizations to meet both operational recovery and compliance requirements. Configuration is straightforward through Azure portal, PowerShell, or CLI with commands like Set-AzSqlDatabaseBackupLongTermRetentionPolicy. LTR is the purpose-built solution for exactly the scenario described in the question.
A is incorrect because Point-in-time restore (PITR) is designed for operational recovery, not long-term compliance retention. PITR allows restoring databases to any point within the retention period, which ranges from 1 to 35 days depending on the service tier (Basic tier: 7 days, Standard/Premium: 35 days, and configurable for some tiers). While PITR is essential for recovering from accidental data modifications, deletions, or corruption, the retention period is far too short for the 10-year compliance requirement mentioned in the question. PITR and LTR work together—PITR handles short-term operational recovery while LTR handles long-term compliance needs. For a 10-year retention requirement, LTR must be configured.
C is incorrect because geo-redundant backup is a storage redundancy feature, not a retention policy feature. Geo-redundant storage ensures that backups are replicated to a secondary Azure region for disaster recovery purposes, protecting against regional outages. While geo-redundancy is important for ensuring backup availability during regional disasters, it doesn’t extend the retention period of backups. By default, Azure SQL Database backups use geo-redundant storage, but this doesn’t affect how long backups are retained. Whether backups are locally redundant, zone redundant, or geo-redundant, you still need LTR to extend retention to 10 years for compliance. Geo-redundancy and LTR are complementary features—LTR backups are automatically stored in RA-GRS, combining long retention with geographic redundancy.
D is incorrect because Azure Backup service is designed for backing up Azure VMs, on-premises servers, file shares, and SQL Server running in Azure VMs, but it is not used for Azure SQL Database (PaaS). Azure SQL Database has its own built-in backup system with automatic backups, PITR, and LTR capabilities integrated into the platform. You don’t need to configure Azure Backup service for Azure SQL Database—in fact, it’s not supported. For SQL Server running on Azure VMs (IaaS), you would use Azure Backup, but for Azure SQL Database, the native backup features including LTR are the correct solution. This distinction between IaaS and PaaS backup solutions is important for the exam.
Question 138:
An Azure SQL Database is experiencing intermittent connection timeouts from an application. The database is configured in the General Purpose service tier. Which of the following actions would MOST likely resolve the connection timeout issues?
A) Increase the database DTU allocation
B) Implement retry logic in the application
C) Configure an elastic pool
D) Enable zone redundancy
Answer: B
Explanation:
Connection reliability between applications and Azure SQL Database is a critical aspect of cloud database administration. Unlike on-premises databases where network connectivity is generally stable and predictable, cloud databases can experience transient connectivity issues due to various factors including network latency, service maintenance, automatic failovers, load balancing operations, and temporary resource constraints. These transient faults are a normal characteristic of cloud services and must be handled appropriately in application design. Understanding the nature of these connection issues and implementing appropriate solutions is essential for maintaining application reliability.
Azure SQL Database is a highly available platform service that performs various background operations to maintain service quality, including automatic backups, software updates, hardware maintenance, and health monitoring. Some of these operations may cause brief connection interruptions or increased latency. Additionally, the service may throttle connections or terminate long-running idle connections as part of resource management. Applications connecting to Azure SQL Database must be designed to handle these transient conditions gracefully rather than assuming perfect connectivity as might be expected in traditional on-premises environments.
B is correct because implementing retry logic in the application is the most appropriate and recommended solution for intermittent connection timeouts in Azure SQL Database. Retry logic is a resilience pattern specifically designed to handle transient faults by automatically retrying failed operations after a brief delay. Microsoft’s best practices for Azure SQL Database explicitly recommend implementing retry logic for all applications because transient connection failures are expected in cloud environments, automatic retry can resolve most transient issues without manual intervention, exponential backoff strategies prevent overwhelming the database during recovery, and modern application frameworks and database libraries often include built-in retry capabilities (such as Entity Framework’s connection resiliency, JDBC retry policies, or manual implementation using try-catch with delays). For the scenario described where timeouts are intermittent (not constant), this strongly suggests transient faults rather than fundamental resource or configuration problems. Implementing retry logic with exponential backoff (starting with 1-2 second delays and increasing with each retry) typically resolves these issues without requiring infrastructure changes.
A is incorrect because increasing DTU allocation addresses performance issues related to insufficient compute, memory, or I/O resources, not intermittent connection timeouts. If the database were consistently under-resourced, you would see sustained high resource utilization (CPU, Data IO, or Log IO near 100%) and performance degradation, not intermittent connection timeouts. Connection timeouts typically indicate transient network or connectivity issues, not resource exhaustion. While insufficient resources can contribute to timeout issues in extreme cases (when the database is too overloaded to accept new connections), the question describes intermittent timeouts, which is characteristic of transient faults. Adding DTUs would increase costs without addressing the root cause. Before scaling resources, administrators should examine resource utilization metrics to confirm resource constraints exist.
C is incorrect because configuring an elastic pool addresses scenarios where multiple databases need to share resources for cost optimization and handling variable workloads, not connection timeout issues. Elastic pools allow multiple databases to share a set of resources (eDTUs or vCores), providing cost savings when databases have different peak usage times. However, moving a single database into an elastic pool doesn’t resolve connection timeout problems—it’s primarily an economic optimization strategy. If anything, sharing resources in an elastic pool without proper sizing could potentially make performance worse if other databases in the pool consume shared resources. Elastic pools don’t provide any inherent connection reliability improvements over standalone databases.
D is incorrect because enabling zone redundancy improves high availability by deploying redundant replicas across Azure availability zones within a region, protecting against datacenter-level failures. While zone redundancy enhances overall availability and reduces planned downtime, it doesn’t specifically address intermittent connection timeouts caused by transient faults. Zone redundancy is valuable for critical workloads requiring maximum uptime but comes at additional cost (typically 25-30% premium). For intermittent connection timeouts, the proper solution is application-level retry logic, which handles transient faults regardless of whether zone redundancy is enabled. Zone redundancy and retry logic address different aspects of reliability—infrastructure resilience versus application resilience.
Question 139:
A database administrator needs to migrate an on-premises SQL Server database to Azure SQL Database. The database uses SQL Server Agent jobs for maintenance tasks. What should the administrator implement to replace SQL Server Agent functionality in Azure SQL Database?
A) Azure Automation runbooks
B) Elastic database jobs
C) Azure Functions with timer triggers
D) Azure Logic Apps
Answer: B
Explanation:
Migrating databases from on-premises SQL Server to Azure SQL Database involves more than just transferring data and schema—administrators must also migrate or replace operational components like maintenance jobs, scheduled tasks, and automation workflows. SQL Server Agent is a core component of on-premises SQL Server that schedules and executes jobs including database maintenance, ETL processes, report generation, and custom administrative tasks. However, Azure SQL Database (the PaaS offering) does not include SQL Server Agent because it’s a managed service where Microsoft handles platform-level maintenance. Understanding the alternative solutions available in Azure for job scheduling and execution is crucial for successful database migrations.
Azure provides several services that can replace SQL Server Agent functionality, each with different capabilities, pricing models, and ideal use cases. Some solutions are database-centric and designed specifically for SQL operations, while others are general-purpose automation platforms that can orchestrate various Azure services. The choice depends on factors including the complexity of jobs, whether jobs need to execute against multiple databases, integration requirements with other Azure services, and the administrator’s preference for T-SQL versus code-based automation. Selecting the most appropriate replacement ensures that operational requirements continue to be met after migration.
B is correct because Elastic database jobs is the purpose-built replacement for SQL Server Agent functionality in Azure SQL Database environments. Elastic jobs is an Azure service specifically designed to execute T-SQL scripts on a schedule across one or more Azure SQL databases. It provides capabilities very similar to SQL Server Agent including executing T-SQL scripts and stored procedures on recurring schedules, targeting single databases, multiple databases, or all databases in a server or pool, retry logic and failure handling, job execution history and monitoring, credential management for authentication, and support for database maintenance tasks like index rebuilding, statistics updates, and custom business logic. Elastic jobs is the natural migration path for SQL Server Agent jobs because it uses familiar T-SQL syntax and scheduling concepts, making it easy for DBAs to transition. For the scenario described where SQL Server Agent jobs are being used for maintenance tasks, elastic jobs provide the most direct functional replacement with minimal redesign.
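A minimal configuration sketch, run inside the elastic job database, is shown below. It assumes the job agent and job database already exist, that a database-scoped credential (JobRunCredential) is available on the target databases, and that dbo.IndexMaintenance is a maintenance procedure deployed to the targets; all of these names are illustrative:

    -- Define which databases the job targets.
    EXEC jobs.sp_add_target_group @target_group_name = 'ProductionDatabases';
    EXEC jobs.sp_add_target_group_member
         @target_group_name = 'ProductionDatabases',
         @target_type       = 'SqlDatabase',
         @server_name       = 'myserver.database.windows.net',
         @database_name     = 'SalesDb';

    -- Create the job and a T-SQL step to run against the target group.
    EXEC jobs.sp_add_job
         @job_name    = 'NightlyIndexMaintenance',
         @description = 'Rebuild fragmented indexes and update statistics';
    EXEC jobs.sp_add_jobstep
         @job_name          = 'NightlyIndexMaintenance',
         @command           = N'EXEC dbo.IndexMaintenance;',
         @credential_name   = 'JobRunCredential',
         @target_group_name = 'ProductionDatabases';

    -- Enable the job and give it a daily recurring schedule.
    EXEC jobs.sp_update_job
         @job_name                = 'NightlyIndexMaintenance',
         @enabled                 = 1,
         @schedule_interval_type  = 'Days',
         @schedule_interval_count = 1;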
A is incorrect because while Azure Automation runbooks can execute scripts and automate tasks, they’re not specifically designed for database operations and require more complex setup for SQL tasks. Azure Automation is a general-purpose automation platform that runs PowerShell or Python scripts to manage Azure resources. While you could certainly use Azure Automation to connect to SQL databases and execute maintenance scripts, this approach requires writing PowerShell scripts with SQL connection logic, managing authentication credentials separately, implementing error handling and retry logic manually, and learning a different automation paradigm than SQL Server Agent. For database administrators comfortable with T-SQL and looking for a direct SQL Server Agent replacement, elastic jobs provide a more natural transition. Azure Automation is better suited for cross-service automation and infrastructure management rather than database-specific job scheduling.
C is incorrect because Azure Functions with timer triggers, while capable of running scheduled code including database operations, require application development expertise and are less suitable for traditional database maintenance tasks. Azure Functions is a serverless compute service for running event-driven code. Using Functions for database jobs requires writing code in languages like C#, Python, or JavaScript, implementing database connection logic and SQL execution, managing dependencies and deployment packages, handling authentication and secure credential storage, and maintaining code rather than simple T-SQL scripts. This development-centric approach is more complex than necessary for migrating straightforward SQL Server Agent maintenance jobs. Azure Functions is excellent for event-driven processing and custom application logic but represents significant overhead compared to elastic jobs for database maintenance tasks.
D is incorrect because Azure Logic Apps is a visual workflow automation service designed for integrating applications, data, and services across clouds and on-premises, not for executing database maintenance tasks. Logic Apps excels at orchestrating business processes, integrating SaaS applications, processing messages, and triggering actions based on events. While Logic Apps can connect to SQL Database through connectors to execute queries, it’s not designed for the types of maintenance operations typically performed by SQL Server Agent (index maintenance, statistics updates, database consistency checks). Logic Apps is a no-code/low-code integration platform rather than a database job scheduling system. For replacing SQL Server Agent functionality, elastic jobs is the purpose-built solution that database administrators will find most familiar and appropriate.
Question 140:
An organization needs to implement row-level security in Azure SQL Database to ensure that sales representatives can only view records for their assigned region. Which of the following approaches should be used?
A) Create separate databases for each region
B) Implement security policies using CREATE SECURITY POLICY
C) Use Azure Active Directory conditional access
D) Configure column-level encryption
Answer: B
Explanation:
Data security in multi-tenant or role-based applications requires controlling access at granular levels to ensure users only see data they’re authorized to view. Row-level security (RLS) is a database security feature that restricts data access based on user characteristics without requiring application-level filtering logic in every query. This is particularly important in scenarios where the same database table contains data for multiple regions, departments, customers, or tenants, and users should only access their relevant subset. Implementing security at the database level rather than relying solely on application logic provides defense in depth and reduces the risk of security bugs in application code.
SQL Server and Azure SQL Database provide built-in row-level security capabilities that allow administrators to define security policies based on user attributes. These policies are enforced by the database engine automatically and transparently—applications issue normal SELECT, UPDATE, or DELETE statements, and the database engine automatically filters results based on the security policy. This approach ensures consistent security enforcement regardless of how data is accessed (through applications, reporting tools, direct database connections, or administrative tools) and reduces the complexity of application code by centralizing security logic in the database.
B is correct because implementing security policies using CREATE SECURITY POLICY is the proper and native way to implement row-level security in Azure SQL Database. Row-level security in SQL Database works through two main components: security predicates (inline table-valued functions that define filtering logic) and security policies that bind these predicates to tables. The implementation process involves creating a function that returns a table defining which rows a user can access (the predicate logic might check USER_NAME(), SESSION_CONTEXT, or application role), then using CREATE SECURITY POLICY to bind this predicate to the target table as either a filter predicate (restricts reads) or block predicate (restricts writes). For the sales representative scenario described, you would create a predicate function that checks the user’s assigned region against the region column in the table, then apply this as a security policy. Once configured, when a sales representative queries the table, they automatically see only rows for their region without any changes to application queries. This approach provides transparent, centralized, and enforceable security that meets the requirement described in the question.
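A minimal sketch of this pattern is shown below, assuming a dbo.Sales table with a Region column and an application that sets an AssignedRegion value in SESSION_CONTEXT after connecting; the schema, function, and policy names are illustrative:

    CREATE SCHEMA Security;
    GO
    -- Predicate: a row is visible when its Region matches the caller's assigned region.
    CREATE FUNCTION Security.fn_RegionPredicate (@Region AS NVARCHAR(50))
        RETURNS TABLE
        WITH SCHEMABINDING
    AS
        RETURN SELECT 1 AS fn_result
               WHERE @Region = CAST(SESSION_CONTEXT(N'AssignedRegion') AS NVARCHAR(50));
    GO
    -- Bind the predicate to the table: filter reads, and block inserts into other regions.
    CREATE SECURITY POLICY Security.RegionFilter
        ADD FILTER PREDICATE Security.fn_RegionPredicate(Region) ON dbo.Sales,
        ADD BLOCK PREDICATE Security.fn_RegionPredicate(Region) ON dbo.Sales AFTER INSERT
        WITH (STATE = ON);

In this sketch the application would call sys.sp_set_session_context after opening a connection to store the signed-in representative's region, which the predicate then evaluates on every query.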
A is incorrect because creating separate databases for each region is an architectural anti-pattern that introduces significant complexity and operational overhead. While database separation does provide isolation, it creates numerous problems including difficulty querying across regions for management reporting, complex application logic to connect to different databases, multiplied maintenance effort (backups, updates, schema changes must be applied to all databases), increased costs from multiple database instances, and poor scalability as regions are added or changed. Row-level security provides the isolation benefits without these drawbacks by maintaining all data in a single database with policy-based access control. Database separation might be appropriate for truly independent systems or compliance requirements mandating physical separation, but not for typical multi-tenant access control scenarios.
C is incorrect because Azure Active Directory conditional access controls authentication and access to Azure resources based on conditions like user location, device compliance, sign-in risk, and application being accessed, but it doesn’t provide row-level data filtering within databases. Conditional access is an identity and access management (IAM) feature that decides whether to allow or block authentication attempts or require additional verification like MFA. It operates at the authentication layer, not the data access layer. While conditional access is valuable for restricting who can connect to databases based on contextual factors, once a user is authenticated and connected, row-level security is needed to control which specific rows they can access. These are complementary security layers—conditional access for authentication and connection control, RLS for data-level access control.
D is incorrect because column-level encryption (such as Always Encrypted or Transparent Data Encryption) protects the confidentiality of sensitive column data by encrypting it at rest and/or in transit, but it doesn’t restrict which rows users can access. Column encryption ensures that even if someone gains unauthorized access to database files, backups, or intercepts network traffic, they cannot read encrypted data without proper keys. However, encryption doesn’t implement row filtering—an authorized user with decryption permissions can still see all rows in the table, just with encrypted columns decrypted. The question requires restricting access so sales representatives only see rows for their region, which is row-level filtering, not column-level encryption. These are different security controls addressing different threats—encryption for confidentiality, RLS for access control.
Question 141:
A company’s Azure SQL Database is experiencing blocking issues that are causing application timeouts. As the DBA, you need to identify which sessions are causing blocks. Which Dynamic Management View (DMV) would provide the MOST useful information?
A) sys.dm_exec_sessions
B) sys.dm_exec_requests
C) sys.dm_tran_locks
D) sys.dm_exec_connections
Answer: C
Explanation:
Blocking is a common performance issue in database systems that occurs when one transaction holds locks on resources that another transaction needs to access. While some level of blocking is normal in concurrent systems, excessive blocking leads to query delays, application timeouts, and poor user experience. Identifying and resolving blocking issues requires understanding which sessions hold locks, which sessions are waiting, what resources are locked, and the lock types involved. Azure SQL Database provides Dynamic Management Views that expose internal system state, allowing administrators to diagnose blocking chains and take appropriate corrective actions.
When investigating blocking issues, administrators need several pieces of information: which sessions are blocked (waiting), which sessions are blocking others (holding locks), what specific resources are locked (tables, pages, rows), the types of locks involved (exclusive, shared, update), and potentially the queries being executed by blocking sessions. Different DMVs provide different aspects of this information—some focus on session state, others on current requests, and still others on the locking system specifically. Understanding which DMV provides the most direct and comprehensive view of locking and blocking relationships is essential for efficient troubleshooting.
C is correct because sys.dm_tran_locks provides the most comprehensive information about locks in the system and is the primary DMV for investigating blocking issues. This DMV returns information about currently active lock manager resources including the resource being locked (database, table, page, row, key), the type of lock (shared, exclusive, update, intent, schema, bulk update), the status of the lock request (granted or waiting), the session holding or requesting the lock (request_session_id), and the lock mode and resource specifics. By querying sys.dm_tran_locks, you can identify blocking chains by finding sessions with granted locks and other sessions waiting for locks on the same resources. A common query joins sys.dm_tran_locks to itself to show blocking relationships, revealing which session_id is blocking which other session_ids. You can also join to sys.dm_exec_sessions and sys.dm_exec_sql_text to see what queries the blocking sessions are executing. For the scenario described where blocking is causing timeouts, sys.dm_tran_locks provides the essential lock-level information needed to understand the blocking situation.
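The self-join described above can be sketched as follows; it lists waiting sessions alongside the sessions holding conflicting locks on the same resource:

    SELECT  blocked.request_session_id  AS blocked_session,
            blocking.request_session_id AS blocking_session,
            blocked.resource_type,
            blocked.resource_description,
            blocked.request_mode        AS requested_lock,
            blocking.request_mode       AS held_lock
    FROM    sys.dm_tran_locks AS blocked
    JOIN    sys.dm_tran_locks AS blocking
            ON  blocked.resource_associated_entity_id = blocking.resource_associated_entity_id
            AND blocked.resource_database_id          = blocking.resource_database_id
            AND blocked.request_session_id           <> blocking.request_session_id
    WHERE   blocked.request_status  = 'WAIT'
    AND     blocking.request_status = 'GRANT';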
A is incorrect because while sys.dm_exec_sessions provides valuable information about active user sessions including session_id, login_name, host_name, program_name, and connection properties, it doesn’t provide detailed information about locks or blocking relationships. You can see a session’s last_request_start_time and status, but you won’t see what resources it’s locking or which other sessions it’s blocking. sys.dm_exec_sessions is useful for understanding session characteristics and identifying active users, but for blocking analysis, you need the lock-specific information from sys.dm_tran_locks. In practice, you would often join sys.dm_exec_sessions to sys.dm_tran_locks to combine session information with lock information for comprehensive blocking analysis.
B is incorrect because sys.dm_exec_requests shows currently executing requests with information including session_id, status, command, wait_type, wait_time, blocking_session_id, and the SQL text being executed. While this DMV does include blocking_session_id which directly tells you which session is blocking a given request, it only shows information about currently executing requests that are waiting or running. It doesn’t show the complete lock picture including what specific resources are locked, the types of locks involved, or granted locks held by sessions not currently executing requests. sys.dm_exec_requests is useful for quickly identifying that blocking exists and seeing which session is the blocker, but sys.dm_tran_locks provides deeper insight into the locking situation, making it more useful for thorough blocking analysis.
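For a quick first look, the blocking_session_id column mentioned above can be used directly, as in this short sketch:

    SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text AS running_sql
    FROM   sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE  r.blocking_session_id <> 0;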
D is incorrect because sys.dm_exec_connections provides information about physical connections to Azure SQL Database including connection_id, client_net_address, authentication method, and connection properties, but it contains no information about locks, blocking, or transaction state. This DMV is useful for understanding network connectivity and identifying where connections originate from, but it’s not relevant for diagnosing blocking issues. Connection information and locking information are separate concerns—a session might have a perfect connection but still be involved in blocking. For blocking analysis, sys.dm_tran_locks is the appropriate DMV to query.
Question 142:
An organization wants to ensure that their Azure SQL Database can automatically recover from database corruption or accidental data deletion within the last 24 hours. Which feature provides this capability?
A) Geo-replication
B) Point-in-time restore (PITR)
C) Long-term retention backup
D) Active geo-replication
Answer: B
Explanation:
Data protection and recovery capabilities are fundamental requirements for production database systems. Despite best efforts to prevent issues, databases can experience problems including logical data corruption from application bugs, accidental data deletion by users or administrators, schema changes that need to be reversed, and various other scenarios requiring restoration to a previous state. Azure SQL Database provides multiple features for data protection and availability, each designed for different scenarios and recovery objectives. Understanding which feature addresses which recovery scenario is essential for implementing appropriate data protection strategies.
Recovery scenarios can be categorized into different types requiring different solutions: operational recovery from recent mistakes (hours or days ago), disaster recovery from regional outages, high availability for minimizing downtime during failures, and long-term compliance retention for regulatory requirements. The 24-hour recovery requirement mentioned in the question represents operational recovery—the ability to undo recent changes or recover from corruption while minimizing data loss. This scenario requires a feature that maintains frequent backups or continuous data protection with the ability to restore to specific points in time.
B is correct because Point-in-time restore (PITR) is specifically designed to enable recovery from logical corruption, accidental deletion, or erroneous changes within the backup retention window. PITR works by leveraging Azure SQL Database’s automatic backup system which performs full backups weekly, differential backups every 12-24 hours, and transaction log backups every 5-10 minutes. These backups enable restoration to any specific point in time within the retention period, which is 7 days for Basic tier, 35 days for Standard and Premium tiers, and configurable for vCore tiers. For the 24-hour recovery requirement described in the question, PITR perfectly satisfies the need—if data is accidentally deleted or corrupted at any point within the last 24 hours, the database can be restored to a point immediately before the incident occurred. The restore operation creates a new database on the same server, allowing administrators to verify the recovered data before replacing the original database. PITR provides granular recovery with minimal data loss (only changes made after the restore point are lost).
A is incorrect because geo-replication refers to the general concept of replicating data across geographic regions for disaster recovery, but it doesn’t provide point-in-time recovery from logical errors or data deletion. If data is accidentally deleted from the primary database, that deletion replicates to geo-replicated secondaries almost immediately (typically within seconds). Geo-replication protects against regional failures or disasters, but both primary and secondary databases would contain the same corrupted or deleted data. For recovering from logical errors or data deletion within a 24-hour window, point-in-time restore is the correct feature. Geo-replication and PITR serve different purposes—geo-replication for disaster recovery, PITR for operational recovery from logical errors.
C is incorrect because long-term retention (LTR) backup is designed for compliance and regulatory requirements requiring backup retention beyond the standard retention window (up to 10 years), not for operational recovery from recent incidents. While LTR backups could theoretically be used to recover from events within the last 24 hours, this is not their intended purpose, and they don’t provide the granularity that PITR offers. LTR typically retains weekly, monthly, or yearly backups, not the continuous point-in-time recovery capability that PITR provides through transaction log backups. For recovering from an issue that occurred 24 hours ago, you would use PITR which allows restoring to the exact moment before the problem occurred, not LTR which provides periodic snapshots for long-term compliance.
D is incorrect because Active geo-replication is a high availability and disaster recovery feature that continuously replicates database changes to secondary databases in different regions, but like geo-replication in general, it doesn’t protect against logical corruption or accidental deletion. Active geo-replication creates readable secondary replicas that stay synchronized with the primary database through continuous replication of transactions. If someone accidentally deletes data or corrupts the database, that corruption replicates to all secondary databases. Active geo-replication is designed to enable fast failover during regional outages or planned relocations, not for recovering from logical errors. The combination of Active geo-replication for disaster recovery and PITR for operational recovery provides comprehensive data protection, but for the specific 24-hour recovery scenario described, PITR is the appropriate feature.
Question 143:
A database contains sensitive personal information that must be encrypted at rest and in transit. The organization wants to ensure that database administrators cannot view the unencrypted data. Which encryption feature should be implemented?
A) Transparent Data Encryption (TDE)
B) Always Encrypted
C) Transport Layer Security (TLS)
D) Dynamic data masking
Answer: B
Explanation:
Data encryption is a critical security control for protecting sensitive information from unauthorized access. Different encryption solutions provide protection at different layers and against different threat models. Some encryption methods protect data at rest (on disk), others protect data in transit (over networks), and advanced solutions provide end-to-end encryption protecting data throughout its lifecycle including while being processed. Understanding the distinction between these encryption approaches and the specific threat scenarios they address is essential for implementing appropriate security controls based on regulatory requirements and organizational security policies.
The key requirement in the question is that database administrators should not be able to view unencrypted data. This represents a specific threat model where the database system itself (and its administrators) are considered potentially untrusted, requiring encryption that protects data even from privileged database users. Traditional database encryption methods like Transparent Data Encryption protect data at rest from file system attacks but allow database administrators with proper permissions to query and view decrypted data through normal database interfaces. Meeting the requirement of preventing DBAs from viewing sensitive data requires a fundamentally different encryption approach where decryption happens outside the database engine.
B is correct because Always Encrypted is specifically designed to protect sensitive data from database administrators and other high-privilege users including system administrators, cloud operators, and malware running on the database server. Always Encrypted works by encrypting data on the client side before sending it to the database, and decryption also occurs on the client side after retrieval. The database server only stores and processes encrypted data—it never has access to encryption keys or plaintext data. This means database administrators can manage the database, perform backups, and maintain the system, but they cannot view the contents of encrypted columns even with full administrative privileges. Always Encrypted supports both deterministic encryption (enabling equality searches) and randomized encryption (providing stronger security). The encryption keys are managed outside the database, typically in Azure Key Vault or Windows Certificate Store, and only client applications with access to these keys can decrypt data. For the scenario described where DBAs must be prevented from viewing sensitive personal information, Always Encrypted is the only encryption feature that provides this protection.
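As an illustration, an Always Encrypted column is declared with an ENCRYPTED WITH clause. The sketch below assumes a column encryption key named CEK_Auto1 has already been provisioned (with its column master key in Azure Key Vault or a certificate store); table and key names are illustrative:

    CREATE TABLE dbo.Customers
    (
        CustomerId  INT IDENTITY(1,1) PRIMARY KEY,
        FullName    NVARCHAR(100),
        NationalId  NVARCHAR(20)
            COLLATE Latin1_General_BIN2          -- deterministic encryption requires a BIN2 collation
            ENCRYPTED WITH (
                COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                ENCRYPTION_TYPE       = DETERMINISTIC,   -- deterministic allows equality lookups
                ALGORITHM             = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            )
    );

Client applications that hold access to the keys connect with an ADO.NET-style Column Encryption Setting=Enabled option to read plaintext; the database engine and its administrators only ever see ciphertext.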
A is incorrect because Transparent Data Encryption (TDE) protects data at rest by encrypting database files, log files, and backups on disk, but it does not protect data from database administrators. TDE operates at the storage layer—when data is written to disk, it’s automatically encrypted; when read from disk, it’s automatically decrypted by the database engine. This protection is «transparent» because applications and users don’t need to be aware of it, but it also means that anyone with proper database permissions (including DBAs) can query and view data normally. TDE protects against threats like stolen backup tapes, unauthorized physical access to storage, and file-level attacks, but it doesn’t protect against authorized database users or administrators. While TDE is important as a baseline encryption control (and is enabled by default in Azure SQL Database), it doesn’t meet the requirement of preventing DBAs from viewing sensitive data.
C is incorrect because Transport Layer Security (TLS) encrypts data in transit between clients and the database server, protecting against network eavesdropping and man-in-the-middle attacks. TLS ensures that data traveling over networks cannot be intercepted and read, but once data reaches the database server, it exists in plaintext (unencrypted) form in memory and can be viewed by database administrators. TLS is essential for protecting data during transmission and is enforced by default in Azure SQL Database, but it doesn’t provide the end-to-end encryption needed to prevent DBAs from accessing sensitive data. A comprehensive encryption strategy includes TLS for data in transit, TDE for data at rest, and Always Encrypted for data that must remain encrypted even from database administrators.
D is incorrect because dynamic data masking is not an encryption feature—it’s a policy-based obfuscation technique that hides sensitive data from non-privileged users by masking it in query results. Dynamic data masking applies masking rules that show partial data (like showing only the last four digits of a credit card) or completely masked data (like showing XXXX instead of actual values) to users without unmasking permission. However, the actual data remains unencrypted in the database, and users with sufficient permissions (including DBAs) can view the unmasked data. Additionally, dynamic data masking doesn’t protect against inference attacks that work around the masking logic, such as filtering on a masked column to deduce its underlying value. Dynamic data masking is useful for limiting exposure to developers and support staff, but it doesn’t provide the cryptographic protection required to prevent DBAs from viewing sensitive data—only Always Encrypted provides that capability.
Question 144:
An Azure SQL Database in the Business Critical service tier is experiencing high read latency on reporting queries. The application requires real-time data for operational queries but can tolerate slight delays for reporting. What configuration would BEST improve reporting query performance without impacting operational workload?
A) Scale up to a higher service tier
B) Enable read scale-out and direct reporting queries to secondary replicas
C) Implement table partitioning
D) Configure an elastic pool
Answer: B
Explanation:
Workload isolation and performance optimization in Azure SQL Database require understanding the different service tiers and their capabilities. The Business Critical service tier (and Premium tier in the DTU model) includes built-in high availability through Always On availability groups with multiple synchronous replicas. While these replicas primarily serve high availability purposes, they also provide an opportunity for workload isolation by offloading read-only queries to secondary replicas. This approach allows organizations to separate analytical or reporting workloads from operational transaction processing, improving performance for both workload types without requiring additional database copies or complex replication setup.
Read scale-out is a feature that leverages the secondary replicas already present in Business Critical and Premium tier databases for high availability purposes. Rather than letting these replicas sit idle while waiting for potential failovers, read scale-out allows applications to direct read-only queries to these replicas. The secondary replicas are automatically synchronized with the primary through synchronous replication, ensuring data consistency with only minimal lag (typically milliseconds). This architecture enables cost-effective workload separation without the overhead of managing separate databases, implementing custom replication, or incurring significant additional costs since the replicas already exist for high availability.
B is correct because enabling read scale-out and directing reporting queries to secondary replicas directly addresses the scenario’s requirements: it offloads reporting workload from the primary replica, freeing resources for operational queries, eliminates resource contention between operational and reporting workloads, provides near real-time data (slight replication lag of milliseconds is acceptable for reporting as stated), doesn’t require application changes beyond modifying connection strings (adding ApplicationIntent=ReadOnly to connection strings), and utilizes existing infrastructure without additional cost since Business Critical tier already includes replicas. Implementation is straightforward—reporting connections specify ApplicationIntent=ReadOnly in the connection string, and the routing mechanism automatically directs these connections to a secondary replica. This approach is specifically designed for the exact scenario described and is a best practice for Business Critical databases with mixed workloads.
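A minimal sketch, assuming an ADO.NET-style connection string with illustrative server and database names:

    -- Reporting connection string (client side), differing from the operational one only by intent:
    --   Server=tcp:myserver.database.windows.net,1433;Database=SalesDb;ApplicationIntent=ReadOnly;...
    -- After connecting, verify the session was routed to a read-only replica:
    SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');  -- READ_ONLY on a secondary, READ_WRITE on the primary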
A is incorrect because scaling up to a higher service tier increases resources (CPU, memory, I/O) but doesn’t fundamentally solve the problem of resource contention between operational and reporting workloads. Both workload types would still compete for the same resources on the primary replica. While more resources might improve overall performance, it’s an expensive solution that doesn’t provide the workload isolation that read scale-out offers. Additionally, the question specifies that the database is already in the Business Critical tier, one of the highest tiers available. Scaling up further would significantly increase costs without leveraging the built-in read scale-out capability that Business Critical already provides. Read scale-out is a more cost-effective solution that achieves better workload separation.
C is incorrect because table partitioning is a technique for dividing large tables into smaller physical segments to improve query performance and manageability, but it doesn’t address the core issue of resource contention between operational and reporting workloads. Partitioning helps with queries that can benefit from partition elimination (accessing only relevant partitions) and can improve maintenance operations like index rebuilds, but both operational and reporting queries would still execute on the same primary replica competing for the same resources. Partitioning is valuable for very large tables but doesn’t provide the workload isolation needed in this scenario. Read scale-out physically separates the workloads onto different replicas, which partitioning cannot achieve.
D is incorrect because elastic pools are designed for scenarios with multiple databases that have varying and complementary resource usage patterns, allowing resource sharing for cost optimization. Elastic pools don’t help with a single database experiencing resource contention between different query types. While you could theoretically create a separate database in an elastic pool for reporting (using database copy or replication), this adds complexity, incurs additional costs, requires managing data synchronization, and is unnecessary when the Business Critical tier already provides read-only replicas through read scale-out. Elastic pools address a different problem (multiple database resource management) than what the question describes (workload isolation within a single database).
Question 145:
A company needs to audit all access attempts and data modifications in their Azure SQL Database for compliance purposes. Which feature should be configured to meet this requirement?
A) Azure Monitor logs
B) SQL Database Auditing
C) Dynamic Management Views (DMVs)
D) Query Performance Insight
Answer: B
Explanation:
Compliance and security auditing requirements mandate that organizations track and log database activities including authentication attempts, data access, schema changes, and privilege modifications. Auditing provides accountability, enables security incident investigation, supports compliance with regulations like GDPR, HIPAA, PCI-DSS, and SOX, and helps detect anomalous behavior that might indicate security breaches. Azure SQL Database provides built-in auditing capabilities specifically designed to meet these requirements without requiring custom solutions or third-party tools. Understanding how to properly configure and utilize these auditing features is essential for database administrators responsible for maintaining compliant and secure database environments.
Effective database auditing must capture comprehensive events including successful and failed login attempts, data query and modification operations (SELECT, INSERT, UPDATE, DELETE), schema changes (CREATE, ALTER, DROP), privilege and role changes (GRANT, REVOKE), and stored procedure executions. The audit system must store these logs securely, retain them for appropriate periods, and provide mechanisms for analysis and reporting. Azure SQL Database offers native auditing functionality integrated with Azure services for log storage and analysis, providing a complete solution for meeting audit and compliance requirements.
B is correct because SQL Database Auditing is the purpose-built feature specifically designed for tracking database events and maintaining audit logs for compliance purposes. SQL Database Auditing captures database events and writes them to audit logs stored in Azure storage accounts, Log Analytics workspaces, or Event Hubs. The auditing feature tracks all accesses and modifications including SELECT statements on sensitive tables, all INSERT, UPDATE, DELETE operations, DDL statements (CREATE, ALTER, DROP), DCL statements (GRANT, REVOKE, DENY), successful and failed authentication attempts, stored procedure execution, and transaction context. Auditing can be configured at the server level (applies to all databases) or database level (specific to one database), with customizable audit policies that define which event categories to capture. The audit logs can be analyzed using Azure portal, queried with T-SQL, or integrated with Security Information and Event Management (SIEM) systems. For the compliance requirement described in the question requiring auditing of all access attempts and data modifications, SQL Database Auditing is the correct and complete solution.
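Once auditing is enabled with a storage account destination, the resulting .xel audit logs can be queried with T-SQL, as in the sketch below; the storage path shown is an illustrative assumption:

    SELECT event_time, action_id, succeeded, server_principal_name, database_name, statement
    FROM   sys.fn_get_audit_file(
               'https://mystorageacct.blob.core.windows.net/sqldbauditlogs/myserver/SalesDb/',  -- assumed path
               DEFAULT, DEFAULT)
    ORDER  BY event_time DESC;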
A is incorrect because while Azure Monitor logs collect telemetry and diagnostic information about Azure resources, it’s not specifically designed for compliance auditing of database access and data modifications. Azure Monitor captures metrics like CPU usage, connections, and performance characteristics, and can collect diagnostic logs including SQL Insights, but it doesn’t provide the comprehensive event tracking that SQL Database Auditing offers. You can configure diagnostic settings to send some database telemetry to Log Analytics, but this is supplementary monitoring data, not complete audit trails of data access and modifications. Azure Monitor and SQL Database Auditing serve complementary purposes—Azure Monitor for operational monitoring and performance, SQL Database Auditing for security and compliance. For meeting audit requirements, SQL Database Auditing must be explicitly configured.
C is incorrect because Dynamic Management Views provide real-time or recent historical information about database state and operations but are not an auditing solution. DMVs like sys.dm_exec_sessions or sys.dm_exec_query_stats show current activity and cached query information, but this data is volatile (cleared on restarts), not retained for compliance periods, lacks comprehensive event capture, doesn’t capture failed access attempts reliably, and doesn’t provide secure long-term storage of audit trails. DMVs are valuable troubleshooting and monitoring tools but cannot replace proper auditing for compliance purposes. They show what’s happening now or recently, not a complete historical audit trail of all activities.
D is incorrect because Query Performance Insight is a performance monitoring tool that identifies resource-intensive queries to help with performance tuning, not an auditing tool for compliance purposes. Query Performance Insight shows which queries consume the most CPU, I/O, or duration, and provides query execution statistics and trends, but it doesn’t capture comprehensive audit events like authentication attempts, data modifications, privilege changes, or schema alterations. Query Performance Insight is designed for performance optimization, not security auditing. For the compliance requirement of auditing all access attempts and data modifications, SQL Database Auditing is the appropriate feature, while Query Performance Insight serves an entirely different purpose.
Question 146:
A database administrator needs to implement automated tuning recommendations for indexes in Azure SQL Database. Which feature should be enabled to automatically create and drop indexes based on workload patterns?
A) Query Store
B) Automatic tuning
C) Database Advisor
D) Intelligent Insights
Answer: B
Explanation:
Database performance optimization traditionally requires significant DBA time and expertise to monitor workloads, identify performance issues, analyze query execution plans, and implement appropriate indexes or other optimizations. Azure SQL Database includes artificial intelligence and machine learning capabilities that can analyze workload patterns, identify optimization opportunities, and even automatically implement certain optimizations without manual intervention. These intelligent capabilities help maintain optimal performance as workloads change over time, reduce DBA workload, and ensure that performance optimizations are applied promptly. Understanding the different intelligent performance features and their specific capabilities is important for leveraging Azure SQL Database’s automation capabilities effectively.
Azure SQL Database provides several related but distinct intelligent features: some analyze and provide recommendations, while others can automatically implement those recommendations. Query Store collects query execution data, Database Advisor analyzes performance and provides recommendations, Intelligent Insights detects performance anomalies, and Automatic Tuning can automatically implement certain optimizations. The distinction between recommendation and automated implementation is crucial—some features only advise administrators to take action, while others can be configured to automatically apply optimizations, which is what the question specifically asks about.
B is correct because Automatic tuning is the feature that actually implements performance optimizations automatically, including creating and dropping indexes based on workload analysis. Automatic tuning in Azure SQL Database includes several capabilities, primarily focused on index management including CREATE INDEX (automatically creates missing indexes that would benefit the workload), DROP INDEX (removes duplicate and unused indexes that waste space and slow down write operations), and FORCE LAST GOOD PLAN (reverts to previous good execution plans when plan regression is detected). When automatic tuning is enabled, the service continuously monitors query performance using Query Store data, identifies optimization opportunities using machine learning models trained on vast datasets, validates that optimizations actually improve performance, and automatically applies or reverts changes based on measured impact. For the scenario described where automated index creation and removal is required, automatic tuning must be enabled with the CREATE INDEX and DROP INDEX options set to auto-apply. This can be configured at the database level through the Azure portal or with T-SQL, for example ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON), as sketched below.
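A minimal sketch of enabling the options and inspecting their state and recent recommendations with T-SQL:

    -- Turn on automatic index management and plan-regression correction for the current database.
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON, FORCE_LAST_GOOD_PLAN = ON);

    -- Confirm desired vs. actual state of each option.
    SELECT name, desired_state_desc, actual_state_desc
    FROM   sys.database_automatic_tuning_options;

    -- Review recommendations the engine has generated or applied.
    SELECT name, reason, score
    FROM   sys.dm_db_tuning_recommendations;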
A is incorrect because Query Store is a data collection system that captures query execution statistics, execution plans, and runtime information, but it doesn’t implement any optimizations automatically. Query Store is the foundational data source that Automatic Tuning uses to analyze workload patterns and make decisions, but Query Store itself is purely a monitoring and recording system. It enables performance troubleshooting, plan forcing, and analysis, but doesn’t create or drop indexes. Query Store must be enabled for Automatic Tuning to work (as it provides the necessary telemetry), but enabling Query Store alone doesn’t provide automated index management. Think of Query Store as the sensors providing data, and Automatic Tuning as the automation system that acts on that data.
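Because Automatic Tuning depends on Query Store telemetry, it can be worth confirming that Query Store is active before relying on the automation; a minimal check, again with a placeholder pyodbc connection string, might look like this:

import pyodbc

# Placeholder connection string (same shape as in the previous example).
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=myadmin;Pwd=<password>;Encrypt=yes;"
)

row = pyodbc.connect(CONN_STR).cursor().execute(
    "SELECT actual_state_desc FROM sys.database_query_store_options"
).fetchone()
# In Azure SQL Database this is normally READ_WRITE, since Query Store is on by default.
print("Query Store state:", row.actual_state_desc)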
C is incorrect because Database Advisor (also known as Performance Recommendations) analyzes database performance and provides recommendations including index creation and removal suggestions, but it doesn’t automatically implement these recommendations unless Automatic Tuning is also enabled. Database Advisor identifies opportunities and presents them in the Azure portal with estimated performance impact, but administrators must manually review and apply these recommendations, or enable Automatic Tuning to apply them automatically. Database Advisor is essentially the recommendation engine, while Automatic Tuning is the automation engine. The question specifically asks for automated implementation without manual intervention, which requires Automatic Tuning to be enabled.
D is incorrect because Intelligent Insights is an intelligent diagnostics feature that detects performance degradations and anomalies using AI, then provides diagnostic information about the root causes. Intelligent Insights identifies issues like sudden increases in query duration, resource exhaustion, or inefficient query plans, and provides diagnostic logs explaining what happened and why. However, it doesn’t provide specific optimization recommendations and doesn’t implement any changes automatically. Intelligent Insights is excellent for proactive alerting and understanding performance problems, but for automated index management specifically, Automatic Tuning is the required feature. These features work together—Intelligent Insights helps detect problems, while Automatic Tuning helps solve certain types of problems automatically.
Question 147:
An organization is migrating a mission-critical application database to Azure SQL Database and requires an SLA of 99.99% availability. Which service tier and configuration should be selected?
A) General Purpose tier with zone redundancy
B) Business Critical tier
C) Basic tier with geo-replication
D) Standard tier with elastic pool
Answer: B
Explanation:
Service level agreements (SLAs) define the guaranteed availability commitments that cloud providers make to customers, typically expressed as a percentage of uptime over a specific period. Different Azure SQL Database service tiers provide different SLA guarantees based on their underlying architecture, redundancy mechanisms, and high availability implementations. Understanding the relationship between service tiers, availability features, and SLA guarantees is crucial for selecting appropriate configurations for mission-critical applications with specific uptime requirements. Organizations must balance availability requirements against cost, as higher availability tiers command premium pricing.
Azure SQL Database offers several service tiers with different availability characteristics: Basic and Standard tiers provide a 99.99% SLA using a storage-based high availability model with longer failover times, General Purpose (vCore) provides a 99.99% SLA and can be enhanced with zone redundancy, and Business Critical (vCore) provides a 99.99% SLA (99.995% with zone redundancy) with built-in Always On availability groups and minimal failover times. The architecture underlying each tier determines not just the SLA percentage but also characteristics like failover speed, replica availability for read scale-out, and resilience to different types of failures. Mission-critical applications must consider both the SLA percentage and the recovery characteristics.
B is correct because the Business Critical service tier provides the highest availability SLA of 99.99% (or 99.995% with zone redundancy) and is specifically designed for mission-critical workloads. Business Critical tier architecture includes Always On availability groups with multiple synchronous replicas (one primary and at least three secondary replicas in the same region), fast automatic failover typically completing in seconds, read scale-out capability on secondary replicas, lowest failover RTO (Recovery Time Objective) and RPO (Recovery Point Objective) with near-zero data loss, and premium local SSD storage providing high I/O performance. The 99.99% SLA translates to approximately 52.56 minutes of potential downtime per year, and with zone redundancy enabled, Business Critical provides 99.995% (approximately 26.28 minutes per year). For mission-critical applications requiring maximum availability, Business Critical is the recommended tier. While it’s more expensive than General Purpose, the architecture specifically addresses mission-critical requirements.
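The downtime figures quoted above follow directly from the SLA percentage; a quick arithmetic check in Python:

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(sla_percent):
    # Maximum downtime per year permitted by a given availability SLA.
    return MINUTES_PER_YEAR * (1 - sla_percent / 100)

print(allowed_downtime_minutes(99.99))   # ~52.56 minutes per year
print(allowed_downtime_minutes(99.995))  # ~26.28 minutes per year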
A is incorrect because while General Purpose tier with zone redundancy does provide 99.99% SLA, it has higher failover times compared to Business Critical tier due to architectural differences. General Purpose uses a storage-level redundancy model where compute and storage are separated, with Azure Premium Storage or remote SSD providing storage redundancy. Failovers in General Purpose must re-establish connections and recover transaction logs, typically taking 30 seconds or more (occasionally longer for large databases with many uncommitted transactions). For truly mission-critical applications where even brief unavailability causes significant business impact, Business Critical’s faster failover and zero data loss characteristics make it the better choice. General Purpose is excellent for most production workloads and provides good availability at lower cost, but Business Critical is specifically designed for mission-critical scenarios.
C is incorrect because Basic tier is designed for development, testing, and small-scale production workloads, not mission-critical applications. Basic tier has significant limitations including limited CPU, memory, and IOPS (5 DTUs maximum), a maximum database size of 2 GB, potentially longer failover times, and a high availability architecture that is less robust than the higher tiers even though the 99.99% SLA still applies. While you could add geo-replication for disaster recovery, this is primarily for regional failure protection, not for improving the SLA or failover characteristics within the primary region. Basic tier geo-replication also requires manual failover. For mission-critical applications, Basic tier's resource constraints and architecture are insufficient regardless of geo-replication configuration.
D is incorrect because while Standard tier provides 99.99% SLA and elastic pools offer resource management benefits, neither the tier nor elastic pool configuration is specifically designed for mission-critical workloads. Standard tier (DTU model) uses a similar architecture to General Purpose (vCore model) with acceptable but not optimal failover characteristics. Elastic pools allow multiple databases to share resources for cost optimization but don’t enhance availability beyond what the tier already provides. For mission-critical applications, the combination of Business Critical tier’s Always On architecture, fast failover, and minimal data loss characteristics make it the superior choice. Standard tier and elastic pools are appropriate for many production scenarios but Business Critical is specifically positioned for mission-critical requirements.
Question 148:
A database administrator needs to restore a deleted Azure SQL Database that was removed 5 days ago. The database had a 7-day point-in-time restore retention period configured. Which approach should be used to recover the database?
A) Restore from automatic backup using point-in-time restore
B) Restore from long-term retention backup
C) Contact Azure support to recover the database
D) The database cannot be recovered after deletion
Answer: A
Explanation:
Database deletion represents a critical scenario where recovery capabilities become essential for business continuity. While Azure SQL Database provides robust backup and restore capabilities for operational recovery, the behavior of these capabilities after database deletion requires specific understanding. Administrators must know not only how to recover active databases to previous points in time but also understand retention policies for deleted databases and the procedures for recovering them. The interaction between point-in-time restore retention, database deletion, and backup retention for deleted databases is nuanced and important for disaster recovery planning.
Azure SQL Database automatically backs up all databases with full backups, differential backups, and transaction log backups that enable point-in-time restore capabilities. When a database is deleted, Azure retains these backups for the configured retention period, allowing administrators to restore deleted databases within that retention window. This capability protects against accidental deletion scenarios where databases are dropped by mistake or through security incidents. However, the retention period for deleted database backups is specific and may differ from expectations, requiring administrators to understand the exact policies and time limits.
A is correct because Azure SQL Database retains automatic backups for deleted databases for the configured point-in-time restore retention period (7 days in this scenario), allowing restoration using the point-in-time restore feature. When a database is deleted, the backups are not immediately deleted—they are retained for the PITR retention period that was configured while the database was active (configurable from 1 to 35 days, with 7 days as the default). Since the database was deleted 5 days ago and had a 7-day retention period, the backups still exist and can be used to restore the database. The restoration process for deleted databases is slightly different from restoring active databases: administrators navigate to the Azure SQL server (not the deleted database), select "Deleted databases" to view recoverable deleted databases, choose the deleted database, and specify the restore point within the retention window. The restore creates a new database (you specify the name) on the same or a different server. This capability is specifically designed for recovery from accidental database deletion and is the correct approach for the scenario described.
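As a rough outline, the deleted databases that are still recoverable on a server can also be listed programmatically before performing the restore through the portal or PowerShell. The sketch below assumes the azure-identity and azure-mgmt-sql Python packages; the subscription, resource group, and server names are placeholders, and property names can vary between SDK versions, so verify against the documentation for the release you use.

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
RESOURCE_GROUP = "rg-prod"                                 # placeholder
SERVER_NAME = "myserver"                                   # placeholder

client = SqlManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Dropped databases whose automatic backups are still within the PITR retention
# window; these are the entries shown under "Deleted databases" in the portal.
for dropped in client.restorable_dropped_databases.list_by_server(
    resource_group_name=RESOURCE_GROUP, server_name=SERVER_NAME
):
    print(dropped.database_name, "deleted on", dropped.deletion_date)

# The restore itself can then be run from the portal (SQL server > Deleted databases)
# or with Restore-AzSqlDatabase -FromDeletedDatabaseBackup in PowerShell.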
B is incorrect because long-term retention (LTR) backups are designed for compliance and regulatory requirements requiring retention beyond the standard PITR window (up to 10 years), not for operational recovery from recent deletions. While LTR could technically be used to recover a deleted database if LTR was configured and backups exist, it’s not the primary mechanism for recovering recently deleted databases within the PITR retention window. LTR backups are taken less frequently (weekly, monthly, or yearly) and are designed for different purposes. The scenario describes a database deleted 5 days ago with 7-day PITR retention—standard PITR restore of automatic backups is the appropriate and simpler approach. LTR would only be necessary if the database was deleted longer ago than the PITR retention period.
C is incorrect because Azure support is not required to recover deleted databases within the automatic backup retention period—this is a self-service capability available to database administrators through the Azure portal, PowerShell, CLI, or REST API. Administrators have direct access to restore deleted databases as long as backups exist within the retention window. Azure support should only be contacted if there are unexpected issues preventing restoration, if backups should exist but aren’t appearing, or for questions about retention policies. For the standard scenario of recovering a deleted database within its retention period, administrators can perform the restoration themselves without support involvement, making this answer incorrect.
D is incorrect because deleted databases can definitely be recovered within their configured backup retention period—this is an important capability specifically provided by Azure SQL Database. The misconception that deleted databases are immediately gone and unrecoverable would be dangerous for production environments. As long as the deletion occurred within the PITR retention period (7 days in this scenario) and only 5 days have passed, the backups are still available for restoration. After the retention period expires, then the backups are deleted and recovery is no longer possible (unless LTR backups were configured). But within the retention window, deleted database recovery is a supported and straightforward operation.
Question 149:
An application connecting to Azure SQL Database is experiencing error 40613 ("Database unavailable") intermittently. What is the MOST likely cause and appropriate solution?
A) The database has been deleted; recreate the database
B) The database is undergoing a transient failure; implement retry logic
C) The firewall rules are blocking the connection; update IP whitelist
D) The database is out of storage; increase storage allocation
Answer: B
Explanation:
Error handling and troubleshooting connection issues in Azure SQL Database requires understanding the different error codes, their meanings, and appropriate responses. Cloud databases experience various types of errors including transient faults (temporary connectivity issues that resolve themselves), persistent configuration errors (like firewall blocks), resource exhaustion, and critical issues like database deletion. Error 40613 is a specific error code indicating that the database is temporarily unavailable, and understanding its characteristics helps administrators implement appropriate solutions. The key distinction is between transient and persistent errors, as they require fundamentally different remediation approaches.
Transient errors in Azure SQL Database occur due to the cloud infrastructure's normal operations including automated failovers, load balancing operations, software updates and patches, hardware maintenance, and temporary resource constraints. These errors are expected characteristics of cloud services and typically resolve within seconds. Applications must be designed to handle these transient conditions gracefully rather than treating them as critical failures. Microsoft provides specific guidance on which error codes represent transient faults and recommends specific retry strategies. Error codes such as 40613, 40197, and 40501 are documented transient errors that should trigger retry logic.
B is correct because error 40613 ("Database on server is not currently available") is a documented transient error that occurs when the database is temporarily unavailable due to infrastructure operations like failovers, and the appropriate solution is implementing retry logic in the application. Transient errors are normal in cloud environments and applications must handle them through automatic retry mechanisms with exponential backoff. Best practices for implementing retry logic include detecting transient errors by error code, implementing exponential backoff (start with 1-2 second delay, increasing with each retry), limiting retry attempts (typically 3-5 retries before reporting persistent failure), and logging retry attempts for monitoring purposes. Modern database client libraries and frameworks often include built-in retry capabilities (Entity Framework Core has EnableRetryOnFailure, ADO.NET can use SqlRetryLogicOption, and various ORM frameworks have retry configuration). Since the question states the error occurs intermittently (not persistently), this strongly indicates transient faults rather than a persistent configuration or resource problem. Implementing proper retry logic will handle these transient failures transparently.
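A minimal retry sketch in Python with pyodbc is shown below; the connection string and table name are placeholders, transient errors are detected by matching documented error numbers against the driver's error text, and production code should prefer the retry support built into the client library where it exists.

import random
import time

import pyodbc

# Placeholder connection string; real values depend on your environment.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;Uid=appuser;Pwd=<password>;Encrypt=yes;"
)

# Documented transient error numbers, matched against the driver's error text.
TRANSIENT_CODES = ("40613", "40197", "40501")

def query_with_retry(sql, max_attempts=5):
    delay = 1.0  # seconds; doubled after each failed attempt (exponential backoff)
    for attempt in range(1, max_attempts + 1):
        conn = None
        try:
            conn = pyodbc.connect(CONN_STR, timeout=30)
            return conn.cursor().execute(sql).fetchall()
        except pyodbc.Error as exc:
            transient = any(code in str(exc) for code in TRANSIENT_CODES)
            if not transient or attempt == max_attempts:
                raise  # persistent error, or retries exhausted
            print(f"Transient error on attempt {attempt}; retrying in {delay:.1f}s")
            time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids synchronized retry storms
            delay *= 2
        finally:
            if conn is not None:
                conn.close()

rows = query_with_retry("SELECT COUNT(*) FROM dbo.Orders")  # hypothetical table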
A is incorrect because if the database had been deleted, the error would be persistent (occurring on every connection attempt), not intermittent, and would produce a different error message indicating that the database cannot be opened (such as error 4060, "Cannot open database requested by the login"). Additionally, a deleted database is a catastrophic event that would not resolve itself intermittently—it would be a permanent condition requiring administrative action. The intermittent nature of the error described in the question indicates temporary unavailability (transient fault), not deletion. Database deletion would require explicit administrative action or script execution and would not occur spontaneously as part of normal operations.
C is incorrect because firewall configuration errors produce different error codes (typically error 40615, "Cannot open server") and would cause persistent connection failures, not intermittent issues. If firewall rules were blocking connections, every connection attempt from that IP address would fail consistently until the firewall rule is updated. Firewall configurations don't change spontaneously—they remain stable until administrators modify them. The intermittent nature of error 40613 specifically indicates transient availability issues, not network connectivity or firewall problems. While firewall configuration should be verified during initial setup and connectivity troubleshooting, it's not the cause of intermittent 40613 errors.
D is incorrect because storage exhaustion produces different error messages related to log file full or insufficient space (error 1105 or similar) and would cause persistent problems with write operations, not intermittent connection availability errors. When a database runs out of storage, it doesn’t become temporarily unavailable and then available again intermittently—it remains in a state where it cannot accept write operations until space is freed or allocation is increased. Error 40613 specifically indicates temporary unavailability due to infrastructure operations, not resource exhaustion. Storage monitoring is important for capacity planning, but it’s unrelated to the intermittent 40613 errors described in the question.
Question 150:
A company needs to copy an Azure SQL Database to a different Azure region for development and testing purposes. The copy should be a one-time operation and does not need to stay synchronized with the source database. Which approach should be used?
A) Configure Active geo-replication
B) Use database copy functionality
C) Implement transactional replication
D) Create a read-scale replica
Answer: B
Explanation:
Azure SQL Database provides several mechanisms for creating database replicas or copies, each designed for different scenarios and requirements. Some features create continuously synchronized replicas for high availability or disaster recovery, while others create point-in-time copies for development, testing, or data distribution purposes. Understanding the distinctions between these features—including whether they maintain ongoing synchronization, their intended use cases, regional capabilities, and cost implications—is essential for selecting the appropriate approach for specific business requirements. The key factors to consider include whether ongoing synchronization is needed, whether the copy is for production (requiring high availability) or non-production purposes, and regional distribution requirements.
Database replication and copying scenarios span various use cases: production disaster recovery requiring continuous synchronization and fast failover, development/test environments needing point-in-time copies without ongoing synchronization, reporting databases requiring readable replicas, and data distribution to multiple regions. Each scenario has different characteristics in terms of synchronization requirements, data freshness needs, write capabilities on the secondary, and cost considerations. Azure SQL Database offers features specifically optimized for each scenario, and selecting the wrong feature can result in unnecessary costs or inappropriate capabilities for the intended use case.
B is correct because database copy functionality is specifically designed for creating one-time, independent copies of Azure SQL databases without ongoing synchronization, which exactly matches the scenario described for development and testing purposes. Database copy creates a transactionally consistent snapshot of the source database at a specific point in time and restores it as a new independent database that can be in the same region or a different region. The copy operation uses the same backup technology as point-in-time restore, ensuring data consistency. Once the copy is complete, the new database is completely independent—changes to the source don’t affect the copy and vice versa. Database copy can be initiated through Azure portal, PowerShell (New-AzSqlDatabaseCopy), CLI (az sql db copy), or T-SQL. The copied database can have a different service tier or compute size than the source, allowing cost optimization for non-production environments. For the described scenario of creating a development/test copy in a different region without ongoing synchronization, database copy is the appropriate and cost-effective solution.
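For illustration, the T-SQL form of a one-time cross-server copy, driven here from Python with pyodbc, is sketched below; the server names, credentials, target database name, and service objective are placeholders, the statement must be executed against the master database of the target server, and the login needs sufficient permissions on both servers (for example, dbmanager on the target and access to the source database).

import pyodbc

# Placeholder connection to the master database of the TARGET server.
TARGET_MASTER = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:targetserver.database.windows.net,1433;"
    "Database=master;Uid=sqladmin;Pwd=<password>;Encrypt=yes;"
)

# Transactionally consistent, one-time copy of the source database; the copy can
# land on a cheaper service objective for dev/test use.
copy_sql = (
    "CREATE DATABASE SalesDb_DevCopy "
    "AS COPY OF sourceserver.SalesDb "
    "(SERVICE_OBJECTIVE = 'S2')"
)

conn = pyodbc.connect(TARGET_MASTER, autocommit=True)  # CREATE DATABASE cannot run in a transaction
conn.cursor().execute(copy_sql)
# The copy runs asynchronously; sys.dm_database_copies on the target server shows
# progress until the new database comes online as a fully independent database.
conn.close()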
A is incorrect because Active geo-replication creates a continuously synchronized readable secondary database in a different region, which is more than needed for the scenario and incurs ongoing costs for maintaining the replication relationship. Active geo-replication is designed for disaster recovery and high availability scenarios where the secondary must stay current with the primary for potential failover. It provides continuous asynchronous replication, readable secondaries for offloading read workloads, and fast failover capabilities during regional outages. However, the question specifically states the copy is for development/testing and doesn’t need to stay synchronized—this is a one-time copy requirement, not continuous replication. Active geo-replication would be unnecessarily complex and expensive for this use case. After creating the copy with database copy functionality, the development database can be modified independently without affecting production.
C is incorrect because transactional replication is a SQL Server feature for on-premises environments that continuously replicates specific tables or subsets of data from publishers to subscribers, and it’s not the standard approach for creating full database copies in Azure SQL Database. While transactional replication can technically be configured with Azure SQL Database in certain scenarios, it’s complex to set up, primarily designed for on-premises to cloud or hybrid scenarios, intended for replicating subsets of data rather than entire databases, requires ongoing management and monitoring, and is unnecessary for the simple requirement of copying a database once for development purposes. Database copy provides a much simpler and more appropriate solution for creating full database copies in Azure SQL Database.
D is incorrect because read-scale replicas (available in Business Critical and Premium tiers) are secondary replicas within the same region used for offloading read-only queries from the primary replica, not for creating independent database copies in different regions. Read-scale replicas are part of the high availability architecture within a single region, stay continuously synchronized with the primary, are read-only (cannot be written to independently), and are accessed through connection string parameters (ApplicationIntent=ReadOnly) rather than being independent databases. They serve a completely different purpose (workload isolation and performance optimization) than creating independent database copies for development/testing in different regions. For the scenario described, database copy is the correct feature.