Microsoft DP-300 Administering Azure SQL Solutions Exam Dumps and Practice Test Questions Set 9 Q 121-135

Question 121: 

You are administering an Azure SQL Database that experiences unpredictable workload patterns throughout the day. The database sometimes requires high compute resources for analytical queries and minimal resources during off-peak hours. Which purchasing model and service tier would be MOST cost-effective for this scenario?

A) DTU-based purchasing model with Standard tier

B) vCore-based purchasing model with General Purpose tier and serverless compute

C) DTU-based purchasing model with Premium tier

D) vCore-based purchasing model with Business Critical tier

Answer: B

Explanation:

The vCore-based purchasing model with General Purpose tier and serverless compute represents the most cost-effective solution for Azure SQL Databases with unpredictable workload patterns and variable resource demands. Serverless compute is specifically designed for databases with intermittent, unpredictable usage patterns where the workload experiences periods of inactivity followed by bursts of activity. This configuration automatically scales compute resources based on workload demand and pauses the database during inactive periods, charging only for storage during pause periods, which dramatically reduces costs compared to continuously provisioned compute resources.

Serverless compute operates by automatically scaling vCores within a configurable minimum and maximum range based on workload requirements. When the database is actively processing queries, it scales up to meet demand, and when activity decreases, it scales down to the minimum configured vCores or pauses completely after a configurable auto-pause delay period. During paused periods, compute charges stop entirely and customers pay only for storage, with the database automatically resuming when the next connection attempt occurs. This architecture directly addresses unpredictable workloads by eliminating the cost of idle compute capacity while retaining the ability to scale up for demanding analytical queries.

The configuration process involves selecting the vCore-based purchasing model during database creation, choosing the General Purpose service tier which provides balanced compute and storage for most workloads, enabling the serverless compute option, configuring the minimum vCores (as low as 0.5 vCores) and maximum vCores based on peak requirements, setting the auto-pause delay (default 1 hour, configurable from 1 hour to 7 days), and monitoring through Azure portal metrics to optimize settings. The serverless model charges based on vCore-seconds of compute usage plus storage consumed, with billing granularity of one second for compute usage, making it extremely cost-effective for intermittent workloads.
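
As a rough sketch, an existing database (hypothetical name TenantDb01) can be moved to the serverless compute tier with T-SQL by changing its service objective; the minimum vCores and auto-pause delay are configured through the portal, PowerShell, or Azure CLI rather than T-SQL:

```sql
-- Sketch: switch a database to General Purpose serverless with a 2-vCore maximum.
-- 'GP_S_Gen5_2' follows the serverless service-objective naming pattern (GP_S_<hardware>_<max vCores>).
ALTER DATABASE [TenantDb01] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_2');

-- Verify the edition and service objective once the scaling operation completes.
SELECT DATABASEPROPERTYEX(N'TenantDb01', 'Edition')          AS edition,
       DATABASEPROPERTYEX(N'TenantDb01', 'ServiceObjective') AS service_objective;
```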

A) is incorrect because the DTU-based purchasing model with Standard tier provides fixed compute capacity that runs continuously regardless of actual usage. Even during off-peak hours when minimal resources are needed, the database continues consuming and charging for the provisioned DTUs. This model lacks the automatic scaling and auto-pause capabilities that make serverless cost-effective for unpredictable workloads.

C) is incorrect because the DTU-based purchasing model with Premium tier, while offering higher performance and more features, provides even more expensive fixed compute capacity that runs continuously. Premium tier is designed for mission-critical workloads requiring consistently high performance, not for cost optimization with variable workloads. This option would result in the highest costs among the DTU-based options.

D) is incorrect because the vCore-based Business Critical tier, while offering superior performance with local SSD storage and read scale-out replicas, is the most expensive service tier and is designed for workloads requiring maximum performance, high availability, and low latency. Business Critical tier does not offer serverless compute option and would be significantly more expensive than General Purpose serverless for workloads that don’t require Business Critical features.

Organizations should monitor database usage patterns through Azure SQL Analytics, configure appropriate auto-pause delays to balance resume latency against cost savings, set minimum and maximum vCores based on actual workload requirements, implement connection retry logic to handle resume delays, and regularly review cost analysis reports to validate serverless effectiveness for their workload patterns.

Question 122: 

You need to implement a backup strategy for an Azure SQL Database that meets a Recovery Point Objective (RPO) of 10 minutes and allows point-in-time restore for up to 35 days. Which backup configuration should you implement?

A) Configure manual full backups daily with differential backups every 12 hours

B) Enable long-term retention policy with weekly backups only

C) Use default automated backup with extended retention period configured to 35 days

D) Implement transactional replication to a secondary database

Answer: C

Explanation:

Using the default automated backup with an extended retention period configured to 35 days represents the correct approach for meeting the specified requirements of 10-minute RPO and 35-day point-in-time restore capability. Azure SQL Database provides automated backups as a built-in platform feature without requiring manual configuration or maintenance. These automated backups include full backups, differential backups, and transaction log backups that work together to enable point-in-time restore (PITR) capabilities with RPOs as low as 5-10 minutes depending on transaction log backup frequency.

The automated backup system operates with the following schedule: full database backups occur weekly, differential backups occur every 12-24 hours, and transaction log backups occur approximately every 5-10 minutes or after 1GB of transaction log data accumulates, whichever comes first. This transaction log backup frequency directly determines the RPO, as point-in-time restore relies on replaying transaction logs to recover to any specific point in time. The backups are automatically stored in geo-redundant storage (RA-GRS) by default, providing protection against regional disasters, with options to change to zone-redundant storage (ZRS) or locally redundant storage (LRS) based on requirements and cost considerations.

By default, Azure SQL Database retains automated backups for 7 days, but this retention period is configurable from 1 to 35 days for all service tiers except Basic (which supports 1-7 days). Configuring the retention period to 35 days through the Azure portal, PowerShell, or Azure CLI meets the requirement; the only potential additional cost is backup storage consumed beyond the free allocation included with the service. The configuration is performed through the database's backup retention policy settings, specifying the desired retention period in days. Point-in-time restore can then be initiated through the Azure portal, PowerShell, Azure CLI, or REST API to restore the database to any point within the retention window.

A) is incorrect because manual backup configuration is unnecessary and cannot achieve the 10-minute RPO requirement. Manual full and differential backups would need to be supplemented with frequent transaction log backups to meet RPO requirements, but Azure SQL Database does not provide manual control over transaction log backups. The automated backup system already provides superior functionality without manual intervention.

B) is incorrect because long-term retention (LTR) policy alone with weekly backups cannot meet the 10-minute RPO requirement. LTR is designed for compliance and archival purposes, allowing retention of full backups for up to 10 years, but it supplements rather than replaces the automated backup system. LTR backups are point-in-time snapshots taken weekly, monthly, or yearly, and cannot provide granular point-in-time restore capabilities.

D) is incorrect because transactional replication creates a real-time copy of data to a secondary database but is not a backup solution. While replication provides high availability and read scale-out capabilities, it does not protect against data corruption, accidental deletions, or application errors because these issues replicate to the secondary database. Replication should complement, not replace, proper backup strategies.

Organizations should regularly test restore procedures to validate backup integrity and restore processes, monitor backup storage consumption through Azure metrics, implement geo-redundant storage for production databases requiring disaster recovery capabilities, combine short-term retention with long-term retention policies for compliance requirements, and document recovery procedures including restore time objectives (RTO) based on database size.

Question 123: 

You are designing a high availability solution for a mission-critical Azure SQL Database that requires automatic failover, read scale-out capabilities, and zone redundancy. Which deployment option should you choose?

A) Single database with General Purpose tier and geo-replication

B) Single database with Business Critical tier and zone redundancy enabled

C) Elastic pool with Standard tier

D) SQL Managed Instance with General Purpose tier

Answer: B

Explanation:

A single database with Business Critical tier and zone redundancy enabled provides the comprehensive high availability features required for this mission-critical scenario. The Business Critical tier is specifically architected to deliver premium performance, built-in high availability with local replicas, read scale-out capabilities through read-only replicas, and zone redundancy options for maximum resilience. This tier uses a deployment architecture based on Always On availability groups technology, creating multiple replicas (typically three secondary replicas plus one primary) that provide both high availability and disaster recovery capabilities within the same region.

The Business Critical tier architecture provides several key advantages. The service tier uses premium locally attached SSD storage for superior performance with single-digit millisecond latency, implements automatic failover between replicas with zero data loss during planned operations, offers 99.995% availability SLA when zone redundancy is enabled (compared to 99.99% for standard deployment), provides read scale-out by exposing one secondary replica as a read-only endpoint through the ApplicationIntent=ReadOnly connection string parameter, and enables workload isolation where reporting or analytical queries execute on secondary replicas without impacting primary workload performance.
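
A quick way to confirm that read scale-out is actually being used is to check the updateability of the database from a session opened with ApplicationIntent=ReadOnly; this is a verification sketch rather than a configuration step:

```sql
-- Run over a connection whose connection string includes ApplicationIntent=ReadOnly.
-- READ_ONLY means the session was routed to a secondary replica; READ_WRITE means the primary.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS replica_updateability;
```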

Zone redundancy distributes database replicas across different Azure Availability Zones within the same region, protecting against datacenter-level failures. When zone redundancy is enabled, the primary replica and secondary replicas are placed in different availability zones, ensuring that failure of an entire zone does not impact database availability. Failover between replicas occurs automatically and transparently during zone failures, typically completing within seconds to minutes depending on transaction activity. The configuration is enabled during database creation or can be added to existing databases through Azure portal, PowerShell, or Azure CLI, though enabling zone redundancy on existing databases requires a brief service interruption.

A) is incorrect because while General Purpose tier with geo-replication provides geographic disaster recovery capabilities, it does not offer the built-in read scale-out and local high availability features of Business Critical tier. General Purpose tier uses remote storage architecture without local replicas, resulting in higher latency, no built-in read-only replica access, and the geo-replication is asynchronous with potential data loss during failover. This configuration does not meet the requirements for read scale-out and zone redundancy.

C) is incorrect because elastic pools with Standard tier are designed for consolidating multiple databases with shared resources for cost efficiency, not for mission-critical high availability scenarios. Standard tier does not provide zone redundancy, read scale-out capabilities, or the high-performance local SSD storage of Business Critical tier. Standard tier is suitable for workloads with moderate performance requirements and cost optimization priorities.

D) is incorrect because SQL Managed Instance with General Purpose tier, while offering broader SQL Server compatibility and managed instance features, does not provide the read scale-out and zone redundancy capabilities required for this scenario. General Purpose tier in Managed Instance uses remote storage similar to single database General Purpose tier and lacks the local replica architecture necessary for read-only endpoint access and zone-level resilience.

Organizations should configure connection strings with ApplicationIntent=ReadOnly for reporting workloads to leverage read scale-out, implement retry logic to handle brief disruptions during automatic failovers, monitor replica synchronization health through dynamic management views, test failover procedures periodically, and consider implementing geo-replication in addition to zone redundancy for complete disaster recovery strategies covering regional failures.

Question 124: 

You need to migrate an on-premises SQL Server database with 500GB of data to Azure SQL Database with minimal downtime. The database has active transactions during business hours. Which migration method should you use?

A) Export database to BACPAC file and import to Azure SQL Database

B) Azure Database Migration Service in online migration mode

C) Manual backup and restore using native SQL Server backup files

D) Use the Azure portal import/export service with a DAC package

Answer: B

Explanation:

Azure Database Migration Service (DMS) in online migration mode provides the optimal solution for migrating large databases with minimal downtime while maintaining active transactions. Online migration mode enables continuous data synchronization between the source on-premises SQL Server and target Azure SQL Database, allowing the source database to remain fully operational during the bulk data transfer phase. This approach minimizes downtime to just the brief cutover window when you switch application connections from on-premises to Azure, typically measured in minutes rather than hours or days required for offline migration methods.

The online migration process operates through several distinct phases. The initial setup involves creating an Azure Database Migration Service instance in the appropriate region, configuring network connectivity between on-premises infrastructure and Azure through VPN or ExpressRoute, preparing the source database by ensuring it meets compatibility requirements using Data Migration Assistant (DMA), creating the target Azure SQL Database with appropriate service tier sizing, and configuring the migration project with source and target connection strings. The migration then executes through a full load phase in which the initial data is copied to Azure, a continuous synchronization phase in which ongoing transactions are replicated using change data capture mechanisms, and a cutover phase in which applications are redirected to Azure after data consistency has been verified.

Azure DMS supports migrations from a wide range of SQL Server versions, including SQL Server 2005 and later, handles schema migration including tables, indexes, constraints, stored procedures, and other database objects, provides monitoring dashboards showing migration progress and performance metrics, validates data consistency between source and target, and offers assessment tools to identify potential compatibility issues before migration. The service is particularly effective for large databases because it optimizes data transfer, provides retry mechanisms for transient failures, and supports parallel migration of multiple databases.

A) is incorrect because exporting to BACPAC file and importing to Azure SQL Database is an offline migration method requiring significant downtime. For a 500GB database, the export process could take many hours, during which the database should be quiesced to ensure consistency, followed by additional hours for import to Azure. This approach is suitable for smaller databases or scenarios where extended downtime is acceptable but does not meet the minimal downtime requirement.

C) is incorrect because Azure SQL Database does not support direct restore from native SQL Server backup files (.bak files). While on-premises SQL Server uses native backup and restore functionality, Azure SQL Database uses a different architecture and does not expose the underlying filesystem. Native backup files can be restored to Azure SQL Managed Instance but not to Azure SQL Database, making this approach technically incompatible with the target platform.

D) is incorrect because the Azure portal import/export service using DAC packages (BACPAC files) is another offline migration method similar to option A. This service provides a convenient interface for smaller database migrations but suffers from the same downtime limitations. For a 500GB database with active transactions, the import/export process would require unacceptable downtime and does not provide continuous synchronization capabilities.

Organizations should thoroughly assess database compatibility using Data Migration Assistant before initiating migration, test migration procedures in non-production environments, plan cutover windows during low-activity periods, implement comprehensive monitoring during migration, prepare rollback procedures in case of issues, update connection strings in applications before cutover, and conduct thorough post-migration validation including performance testing and functional verification.

Question 125: 

You are implementing transparent data encryption (TDE) for an Azure SQL Database and need to use customer-managed keys stored in Azure Key Vault. What is the PRIMARY benefit of using customer-managed keys compared to service-managed keys?

A) Better encryption performance and lower latency

B) Customer control over key lifecycle including rotation and access revocation

C) Reduced storage costs for encrypted data

D) Automatic compliance with all regulatory requirements

Answer: B

Explanation:

Customer control over key lifecycle including rotation and access revocation represents the primary benefit of using customer-managed keys (CMK) with Transparent Data Encryption compared to service-managed keys. When using customer-managed keys stored in Azure Key Vault, organizations maintain complete control over encryption key management including key creation, rotation schedules, access policies, key versioning, and the ability to immediately revoke database access by removing permissions or deleting keys. This level of control is essential for organizations with strict security requirements, regulatory compliance mandates requiring key management sovereignty, or security frameworks demanding separation of duties between data management and key management.

The customer-managed key implementation provides several critical capabilities. Organizations can implement automated key rotation policies through Azure Key Vault, ensuring encryption keys are regularly rotated without service disruption while maintaining access to historical keys for data recovery. Access control is managed through Azure Key Vault access policies or Azure RBAC, allowing precise control over who can use encryption keys, with audit logging capturing all key access operations. In security breach scenarios or during offboarding of administrators, organizations can immediately revoke database access by removing the SQL Database’s managed identity permissions in Key Vault, effectively rendering the encrypted data inaccessible without physically accessing or modifying the database.

The configuration process involves creating an Azure Key Vault in the same region as the SQL Database, generating or importing an RSA encryption key (minimum 2048-bit) in the Key Vault, configuring the Key Vault with appropriate access policies including soft delete and purge protection, creating a managed identity for the SQL Database server, granting the managed identity permissions to wrap and unwrap keys in the Key Vault, and configuring the SQL Database to use the customer-managed key as the TDE protector. This setup meets compliance requirements for regulations like HIPAA, PCI-DSS, and GDPR that often mandate customer control over encryption keys.
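
Once the Key Vault, managed identity, and TDE protector are in place, encryption status can be spot-checked from inside the database; the query below is a verification sketch and requires the VIEW DATABASE STATE permission:

```sql
-- encryption_state = 3 indicates the database is encrypted; with a customer-managed key
-- as the TDE protector, encryptor_type reports ASYMMETRIC KEY rather than CERTIFICATE.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       encryptor_type,
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```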

A) is incorrect because customer-managed keys do not provide better encryption performance or lower latency compared to service-managed keys. In fact, customer-managed keys may introduce slightly higher latency due to the additional network calls required to access keys in Azure Key Vault during database operations. The encryption algorithms and data encryption keys remain the same; only the key encryption key (KEK) storage location differs. Performance considerations should not be a factor in choosing between service-managed and customer-managed keys.

C) is incorrect because the encryption method does not affect storage costs for encrypted data. Both service-managed and customer-managed keys use identical transparent data encryption mechanisms that encrypt data at rest without changing data size or compression characteristics. Storage costs are determined by database size and storage tier selection, not by the key management approach. Customer-managed keys may actually introduce additional costs for Azure Key Vault usage.

D) is incorrect because while customer-managed keys help meet certain compliance requirements, they do not automatically ensure compliance with all regulatory requirements. Compliance is multifaceted, requiring proper implementation of access controls, audit logging, data residency, retention policies, incident response procedures, and numerous other controls beyond encryption key management. Customer-managed keys are one component of a comprehensive compliance strategy but not a complete solution.

Organizations implementing customer-managed keys should enable soft delete and purge protection on Key Vaults to prevent accidental key deletion, implement Azure Policy to enforce customer-managed key usage across subscriptions, monitor Key Vault access logs through Azure Monitor, maintain documented procedures for key rotation and recovery, implement geo-redundant Key Vault replication for disaster recovery, test key revocation and restore procedures regularly, and maintain secure backup copies of encryption keys in escrow arrangements when required by compliance frameworks.

Question 126: 

You are troubleshooting performance issues in an Azure SQL Database. Queries that previously executed in seconds are now taking minutes to complete. Which tool or feature should you use FIRST to identify the root cause?

A) Query Performance Insight

B) Azure Monitor logs

C) SQL Server Profiler

D) Database Console Commands (DBCC)

Answer: A

Explanation:

Query Performance Insight is the most appropriate first tool for identifying the root cause of query performance degradation in Azure SQL Database because it provides immediate, accessible, and comprehensive analysis of query performance without requiring complex configuration or specialized expertise. This built-in Azure SQL Database feature automatically collects and analyzes query performance data, presenting it through an intuitive visual interface that identifies top resource-consuming queries, shows performance trends over time, highlights recent performance changes, and provides detailed execution statistics including duration, CPU time, logical reads, and execution counts.

The Query Performance Insight interface organizes information into multiple views that facilitate rapid troubleshooting. The top resource-consuming queries view displays queries ranked by various metrics including total CPU time, duration, execution count, or logical reads over selectable time periods (last 24 hours, last week, or custom ranges). The query detail view provides specific query text, execution plans, performance metrics over time showing when degradation began, and parameter values for parameterized queries. The recommendations section suggests potential optimizations including missing indexes that could improve performance, and the long-running queries view specifically identifies queries exceeding duration thresholds.

The tool leverages Query Store data, which Azure SQL Database automatically enables and maintains, collecting comprehensive query execution statistics with minimal performance overhead. Query Performance Insight does not require any special permissions beyond read access to the database and is immediately available through the Azure portal under the database's Intelligent Performance section. For the described scenario where previously fast queries now execute slowly, Query Performance Insight will quickly reveal whether specific queries have degraded, whether overall workload has increased, whether execution plans have changed due to statistics or parameter sniffing issues, or whether resource constraints are impacting all queries.
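
Because Query Performance Insight is built on Query Store, the same underlying data can also be queried directly when more detail is needed; the sketch below (time window and ranking metric chosen arbitrarily) approximates the top resource-consuming queries view:

```sql
-- Top 10 queries by total CPU over the last 24 hours, taken from Query Store catalog views.
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions)                   AS executions,
       SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time_us,
       SUM(rs.avg_duration * rs.count_executions) AS total_duration_us
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_runtime_stats_interval AS i
    ON rs.runtime_stats_interval_id = i.runtime_stats_interval_id
WHERE i.start_time >= DATEADD(HOUR, -24, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time_us DESC;
```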

B) is incorrect because while Azure Monitor logs provide valuable diagnostic information, they require prior configuration of diagnostic settings, log analytics workspace creation, and custom query writing using Kusto Query Language (KQL). This approach has more complexity and setup time compared to Query Performance Insight, making it less suitable as the first troubleshooting tool. Azure Monitor logs are better suited for advanced analysis after initial investigation with Query Performance Insight.

C) is incorrect because SQL Server Profiler is not supported for Azure SQL Database. Profiler is a client-side trace utility designed for on-premises SQL Server that cannot connect to Azure SQL Database due to architectural differences and security restrictions. Azure SQL Database provides alternative diagnostic tools including Extended Events, Query Store, and Query Performance Insight that replace Profiler functionality in cloud environments.

D) is incorrect because Database Console Commands (DBCC), while useful for specific diagnostic tasks like checking consistency or viewing execution plans, are limited in Azure SQL Database, with many commands disabled or restricted for security and isolation purposes. DBCC commands do not provide the comprehensive performance analysis capabilities needed to quickly identify root causes of query degradation. They are better suited for specific targeted investigations after identifying problem areas through Query Performance Insight.

After identifying problematic queries through Query Performance Insight, administrators should analyze execution plans for suboptimal operations like scans instead of seeks, review query statistics and parameter sniffing issues, implement recommended indexes from the Azure SQL Database advisor, update statistics if they are outdated, consider query rewrites for inefficient patterns, evaluate database service tier sizing for resource constraints, and implement Query Store hints to force known good execution plans.

Question 127: 

You need to implement auditing for an Azure SQL Database to track all database activities and store audit logs for compliance purposes. The solution must retain logs for 90 days and support analysis through Azure Monitor. How should you configure auditing?

A) Enable database-level auditing with audit logs written to a storage account only

B) Enable server-level auditing with audit logs sent to Log Analytics workspace

C) Configure SQL Server Audit at the database level with file targets

D) Use Azure Security Center without additional auditing configuration

Answer: B

Explanation:

Enabling server-level auditing with audit logs sent to a Log Analytics workspace represents the optimal configuration for meeting all specified requirements including comprehensive activity tracking, 90-day retention, and analysis through Azure Monitor. Server-level auditing captures all database activities across all databases on the logical server through a single configuration point, providing centralized management and ensuring consistent audit policies. Directing audit logs to a Log Analytics workspace enables powerful analysis capabilities through Kusto Query Language (KQL), integration with Azure Monitor dashboards and alerts, correlation with other Azure resource logs, and built-in retention policy management supporting the 90-day requirement.

Azure SQL auditing tracks database events and writes them to configured destinations including Azure Storage accounts, Log Analytics workspaces, or Event Hubs. The auditing feature captures comprehensive information including data access and modifications, schema changes, permission and security changes, authentication events showing successful and failed login attempts, database operations like backup and restore, and security events including role membership changes. When configured at the server level, the audit policy automatically applies to all existing and future databases on that server, simplifying management and ensuring no databases are accidentally excluded from audit coverage.

Configuring server-level auditing with Log Analytics workspace destination involves accessing the Azure SQL server resource in the Azure portal, navigating to the Security section and selecting Auditing, enabling Azure SQL Auditing, selecting Log Analytics as the destination, choosing or creating a Log Analytics workspace in the same region for optimal performance and reduced egress costs, configuring the retention period for the workspace (supporting up to 730 days with appropriate workspace pricing tier), optionally enabling audit logging to storage account for long-term archival, and saving the configuration. The Log Analytics workspace retention can be set independently per table, allowing flexible retention policies for different log types.

A) is incorrect because enabling database-level auditing with logs written only to a storage account does not meet the Azure Monitor analysis requirement. While storage accounts provide cost-effective long-term retention, they lack the real-time analysis, querying, and alerting capabilities of Log Analytics workspaces. Storage account logs require downloading and importing into analysis tools, making them unsuitable for regular monitoring and compliance reporting through Azure Monitor dashboards.

C) is incorrect because Azure SQL Database does not support SQL Server Audit with file targets. This is an on-premises SQL Server feature that writes audit records to the local filesystem, which is not accessible in Azure SQL Database’s managed platform. Azure SQL Database provides equivalent functionality through Azure SQL auditing with cloud-native storage destinations including Storage accounts, Log Analytics, and Event Hubs.

D) is incorrect because while Azure Security Center (now Microsoft Defender for Cloud) provides security recommendations and threat detection for Azure SQL Database, it does not replace comprehensive auditing functionality. Security Center focuses on security posture assessment and threat detection rather than detailed activity logging for compliance purposes. Organizations must explicitly configure Azure SQL auditing to meet compliance requirements for detailed activity tracking and log retention.

Organizations should implement audit log analysis queries in Log Analytics to detect anomalous activities, configure alerts for suspicious events like permission changes or unusual data access patterns, ensure Log Analytics workspace has appropriate access controls to prevent unauthorized audit log modification, periodically export audit logs for external compliance or archival systems, correlate SQL audit events with other Azure resource logs for comprehensive security monitoring, document audit log retention policies in compliance documentation, and regularly review audit configurations to ensure coverage of all relevant databases.

Question 128: 

You are implementing a disaster recovery strategy for an Azure SQL Database in the East US region. The solution must provide automatic failover with minimal data loss and a Recovery Point Objective (RPO) of less than 5 seconds. Which feature should you implement?

A) Geo-replication with manual failover

B) Auto-failover groups with read-write listener endpoint

C) Zone-redundant configuration within the same region

D) Long-term retention backups with geo-redundant storage

Answer: B

Explanation:

Auto-failover groups with read-write listener endpoints provide the comprehensive disaster recovery solution that meets all specified requirements including automatic failover capability, minimal data loss with RPO under 5 seconds, and simplified application connectivity through listener endpoints. Auto-failover groups build upon active geo-replication technology while adding automatic failover orchestration, transparent connection redirection through DNS-based listener endpoints, and coordinated failover of multiple databases as a single unit. This feature is specifically designed for mission-critical applications requiring cross-region disaster recovery with automated recovery capabilities.

Auto-failover groups create a continuously synchronized secondary database replica in a different Azure region, with asynchronous replication typically maintaining RPO between 0-5 seconds under normal conditions. The feature provides two listener endpoints: a read-write listener that always points to the current primary database (automatically redirecting after failover), and a read-only listener that points to the secondary database for read scale-out scenarios. Applications connect using these listener endpoints rather than direct server names, enabling automatic connection redirection during failover without application configuration changes or DNS propagation delays.

The configuration process involves creating a geo-replication partner database in the target disaster recovery region, creating an auto-failover group on the primary server specifying the secondary server and failover policy, adding one or more databases to the failover group, configuring the failover policy including grace period before automatic failover (minimum 1 hour to prevent unnecessary failovers from transient issues), and updating application connection strings to use the read-write listener endpoint. During regional outage, the auto-failover group monitors connectivity to the primary region and automatically initiates failover after the grace period expires, promoting the secondary database to primary and updating listener endpoints to redirect connections.
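
After the failover group is in place, replication health and lag can be checked from the primary database with a dynamic management view; this is a monitoring sketch rather than part of the setup itself:

```sql
-- Run on the primary database. replication_lag_sec approximates current RPO exposure,
-- and replication_state_desc normally reports CATCH_UP for a healthy link.
SELECT partner_server,
       partner_database,
       replication_state_desc,
       replication_lag_sec,
       last_replication
FROM sys.dm_geo_replication_link_status;
```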

A) is incorrect because while geo-replication provides the necessary cross-region replication with low RPO, manual failover does not meet the automatic failover requirement. Manual failover requires human intervention to detect the outage, make the failover decision, execute the failover operation, and update application connection strings or DNS records to point to the new primary database. This process typically requires minutes to hours depending on organization response procedures, during which the application remains unavailable.

C) is incorrect because zone-redundant configuration provides high availability within a single region by distributing replicas across availability zones, but it does not provide disaster recovery protection against region-wide failures. Zone redundancy protects against datacenter-level failures within a region but cannot protect against regional disasters like natural disasters, widespread power outages, or regional Azure service outages. Zone redundancy and geo-replication serve complementary but different purposes.

D) is incorrect because long-term retention backups with geo-redundant storage provide data protection and compliance retention but are not suitable for disaster recovery with 5-second RPO requirements. LTR backups are point-in-time full database backups taken weekly, monthly, or yearly and stored for extended periods (up to 10 years). Restoring from LTR backups is a manual process taking hours depending on database size, resulting in significant RTO and RPO measured in hours or days rather than seconds.

Organizations should configure multiple databases that comprise an application into the same failover group for coordinated failover, test failover procedures regularly through planned failover operations, monitor replication health and lag through Azure portal metrics and DMVs, implement application retry logic to handle brief connectivity interruptions during failover, use read-only listener endpoints for reporting workloads to offload primary database, document failover procedures including manual failover steps for scenarios requiring operator judgment, and consider implementing multiple failover groups across different region pairs for additional resilience.

Question 129: 

You need to grant a user the ability to view query execution statistics and performance metrics for an Azure SQL Database without providing access to the actual data or the ability to modify database objects. Which database role should you assign?

A) db_owner

B) db_datareader

C) db_ddladmin

D) Grant VIEW DATABASE STATE permission

Answer: D

Explanation:

Granting the VIEW DATABASE STATE permission provides the precise level of access required for viewing query execution statistics and performance metrics without providing unnecessary access to data or modification capabilities. This permission allows users to execute dynamic management views (DMVs) and dynamic management functions (DMFs) that expose query statistics, execution plans, wait statistics, resource utilization, and other performance-related information essential for database performance tuning and troubleshooting. The VIEW DATABASE STATE permission follows the principle of least privilege by granting only the specific capabilities needed for performance monitoring without broader access to sensitive data or administrative functions.

Dynamic management views and functions are essential tools for database performance analysis, providing real-time and historical information about database operations. Key DMVs used for performance monitoring include sys.dm_exec_query_stats showing aggregate performance statistics for cached queries, sys.dm_exec_requests displaying currently executing requests, sys.dm_exec_query_plan exposing query execution plans, sys.dm_db_index_usage_stats tracking index usage patterns, sys.dm_os_wait_stats identifying wait types and bottlenecks, and sys.dm_exec_sessions showing session information. Without VIEW DATABASE STATE permission, users cannot query these DMVs even if they have access to the database, limiting their ability to perform performance analysis.

The permission can be granted through T-SQL commands executed by a user with sufficient privileges (db_owner or CONTROL DATABASE permission). The syntax is straightforward: GRANT VIEW DATABASE STATE TO [username]; where username is the Azure AD user, group, or SQL authentication login requiring access. This permission applies at the database level, allowing the user to view performance information only for the specific database where it was granted, not across all databases on the server. For server-level performance monitoring across multiple databases, the VIEW SERVER STATE permission can be granted at the server level, though this is typically reserved for dedicated database administrator accounts.
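
A minimal sketch of the grant, using a hypothetical Azure AD analyst account, followed by one of the DMV queries the permission unlocks:

```sql
-- Run as a principal with sufficient privileges (for example, db_owner) in the target database.
-- The user name is hypothetical.
CREATE USER [perf.analyst@contoso.com] FROM EXTERNAL PROVIDER;
GRANT VIEW DATABASE STATE TO [perf.analyst@contoso.com];

-- The analyst can now query performance DMVs, for example the top CPU consumers:
SELECT TOP (5) total_worker_time, execution_count, total_elapsed_time
FROM sys.dm_exec_query_stats
ORDER BY total_worker_time DESC;
```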

A) is incorrect because db_owner role provides complete administrative control over the database including full data access, object modification, permission management, and all other database operations. Assigning db_owner violates the principle of least privilege by granting far more permissions than necessary for viewing performance metrics. Users with db_owner can read, modify, or delete all data, alter schema, and perform any database operation, creating unnecessary security risk.

B) is incorrect because db_datareader role grants read access to all data in all user tables and views but does not provide access to dynamic management views containing performance statistics. While db_datareader allows querying business data, it does not enable performance monitoring through DMVs. Additionally, granting db_datareader provides access to sensitive business data that the question specifically indicates should not be accessible.

C) is incorrect because db_ddladmin role allows users to create, alter, and drop database objects (tables, views, stored procedures, etc.) but does not grant access to performance statistics or DMVs. This role is for database developers and schema managers who need to modify database structure, not for performance monitoring personnel. Like db_datareader, db_ddladmin does not include VIEW DATABASE STATE permission by default.

Organizations should create dedicated Azure AD security groups for database performance monitoring roles, grant VIEW DATABASE STATE to these groups rather than individual users for simplified management, combine VIEW DATABASE STATE with appropriate monitoring tool permissions, implement monitoring dashboards that leverage DMV data without requiring end users to query DMVs directly, document which users require performance monitoring access and the business justification, regularly audit users with performance monitoring permissions, and consider implementing row-level security or dynamic data masking if performance analysts require limited data access for troubleshooting specific issues.

Question 130: 

You are optimizing costs for multiple Azure SQL Databases that have similar resource requirements and usage patterns. The databases support different tenants of a multi-tenant SaaS application. Which deployment option would provide the MOST cost-effective solution?

A) Deploy each database as an independent single database with DTU purchasing model

B) Create an elastic pool and add all databases to the pool

C) Implement SQL Managed Instance with multiple databases

D) Use serverless compute tier for each database independently

Answer: B

Explanation:

Creating an elastic pool and adding all databases to the pool represents the most cost-effective solution for managing multiple Azure SQL Databases with similar resource requirements and complementary usage patterns, particularly in multi-tenant SaaS scenarios. Elastic pools enable resource sharing among multiple databases, allowing them to dynamically consume resources from a shared pool as needed while staying within the pool’s overall resource limits. This architecture dramatically reduces costs compared to provisioning individual databases because it leverages the natural variation in usage patterns where different tenant databases peak at different times, ensuring efficient resource utilization across the entire pool.

Elastic pools operate on the principle that not all databases reach peak resource utilization simultaneously. In a typical multi-tenant SaaS application, different tenants have varying usage patterns based on time zones, business hours, industry-specific cycles, and individual usage behaviors. When databases are combined in an elastic pool, resources are dynamically allocated to databases experiencing high load while idle or low-activity databases consume minimal resources. This resource sharing results in significantly lower costs compared to provisioning each database with sufficient capacity to handle its individual peak load, as that approach results in idle capacity and wasted spending during off-peak periods.

The configuration process involves creating an elastic pool with appropriate service tier (Basic, Standard, Premium, or vCore-based General Purpose or Business Critical), selecting pool size based on aggregate resource requirements across all databases rather than sum of individual peak requirements, adding existing databases to the pool or creating new databases directly in the pool, configuring per-database minimum and maximum resource limits to prevent any single database from monopolizing pool resources, and monitoring pool resource utilization through Azure portal metrics. Cost savings typically range from 30-50% or more depending on usage pattern complementarity and the number of databases in the pool.
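
The pool itself is created through the portal, PowerShell, or Azure CLI, but an existing database can be moved into an existing pool with T-SQL; the pool and database names below are hypothetical:

```sql
-- Run while connected to the master database of the logical server.
-- Moves a tenant database into an existing elastic pool named TenantPool.
ALTER DATABASE [TenantDb01] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [TenantPool]));
```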

A) is incorrect because deploying each database as an independent single database requires provisioning sufficient resources for each database to handle its own peak load independently. This results in resource over-provisioning during off-peak periods with idle capacity generating costs without providing value. For multiple databases supporting a multi-tenant application, this approach is significantly more expensive than pooling resources where usage peaks are distributed across time.

C) is incorrect because SQL Managed Instance, while supporting multiple databases on a single instance, is designed for scenarios requiring broad SQL Server feature compatibility, instance-level features, and lift-and-shift migrations from on-premises environments. Managed Instance pricing is based on instance size (vCores) rather than per-database pricing, and typically costs more than elastic pools for multi-tenant SaaS applications unless specific Managed Instance features are required. For cost optimization without requiring instance-scoped features, elastic pools are more economical.

D) is incorrect because while serverless compute tier provides excellent cost savings for individual databases with intermittent usage patterns through automatic pausing, it does not provide the resource sharing benefits of elastic pools. In a multi-tenant SaaS scenario with many databases, individual serverless databases would each auto-scale independently without leveraging complementary usage patterns across tenants. Elastic pools with standard or vCore provisioned compute typically provide better cost efficiency for workloads with predictable aggregate demand even if individual databases have variable usage.

Organizations should monitor elastic pool resource utilization to ensure the pool is appropriately sized without over-provisioning, configure per-database min eDTU/vCore settings carefully to prevent resource starvation, set per-database max eDTU/vCore limits to prevent any tenant from monopolizing pool resources, regularly review database activity patterns to optimize pool membership, consider splitting databases into multiple pools if usage patterns diverge significantly, implement connection pooling at the application layer to minimize connection overhead, and use database sharding strategies for scaling beyond single elastic pool limits as tenant numbers grow.

Question 131: 

You need to implement row-level security (RLS) in an Azure SQL Database to ensure users can only access rows that belong to their department. Which of the following components must you create as part of the implementation?

A) Security policy and predicate function only

B) Stored procedure for data access only

C) Predicate function, security policy, and appropriate permissions

D) Triggers on all tables containing sensitive data

Answer: C

Explanation:

Implementing row-level security in Azure SQL Database requires creating a predicate function, security policy, and configuring appropriate permissions to ensure users can only access rows that meet their security context criteria. Row-level security is a database feature that automatically filters query results based on user identity or execution context, providing transparent access control at the row level without requiring application changes. The implementation involves three essential components working together: a predicate function that defines the access logic and returns a result indicating whether a row should be accessible, a security policy that binds the predicate function to one or more tables, and proper permissions ensuring users can execute queries while the security infrastructure operates transparently.

The predicate function is a table-valued inline function that contains the filtering logic determining row accessibility. This function typically checks the current user context using functions like USER_NAME(), SESSION_CONTEXT(), or joins to user mapping tables comparing user attributes against row attributes. For department-based filtering, the predicate function might verify that the row’s department column matches the current user’s department retrieved from a user-department mapping table. The function must be created as an inline table-valued function (not multi-statement) for performance optimization, returns a table with a single column indicating accessibility, and should be optimized for efficient execution as it applies to every row in queries.

The security policy binds the predicate function to target tables, specifying whether the policy applies as a filter predicate (automatically applied to SELECT, UPDATE, and DELETE operations) or block predicate (prevents INSERT and UPDATE operations that would violate the security rule). Multiple predicates can be associated with a single table, and policies can be enabled or disabled without dropping them. The syntax involves CREATE SECURITY POLICY statement specifying the schema, policy name, and ADD FILTER PREDICATE or ADD BLOCK PREDICATE clauses referencing the predicate function and target tables. After creation, the policy automatically enforces row-level filtering transparently to users and applications.
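
A minimal sketch of the three pieces working together, assuming a hypothetical dbo.Orders table with a Department column and an application that stamps the caller's department into SESSION_CONTEXT when a session starts:

```sql
-- 1) Inline table-valued predicate function containing the access logic.
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_department_predicate (@Department AS nvarchar(50))
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS fn_result
           WHERE @Department = CAST(SESSION_CONTEXT(N'Department') AS nvarchar(50));
GO
-- 2) Security policy binding the predicate to the target table as filter and block predicates.
CREATE SECURITY POLICY Security.DepartmentFilter
    ADD FILTER PREDICATE Security.fn_department_predicate(Department) ON dbo.Orders,
    ADD BLOCK PREDICATE Security.fn_department_predicate(Department) ON dbo.Orders AFTER INSERT
    WITH (STATE = ON);
GO
-- 3) Context and permissions: the application sets the caller's department once per session,
--    and users keep only SELECT/DML permissions on the table, not ALTER on the policy or function.
EXEC sp_set_session_context @key = N'Department', @value = N'Finance';
```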

A) is incorrect because while predicate function and security policy are essential components, the implementation also requires careful attention to permissions. Users need SELECT permission on base tables to query data, but they should not have ALTER permission on security policies or predicate functions which would allow them to modify or disable security controls. Additionally, proper setup often requires configuring session context or user mapping tables that the predicate function references, making permissions management a critical third component beyond just creating the function and policy.

B) is incorrect because row-level security does not primarily rely on stored procedures for data access. While stored procedures can be part of a complete data access architecture, RLS operates at a lower level by automatically filtering SELECT, UPDATE, DELETE, and optionally INSERT operations regardless of how data is accessed (direct queries, stored procedures, or ORMs). Using only stored procedures for access control would require extensive application changes and would not prevent users with table permissions from directly querying tables.

D) is incorrect because row-level security does not use triggers for enforcement. Triggers are database objects that execute in response to data modification events and would be inefficient and complex for implementing row-level access control. RLS uses a more elegant approach through predicate functions evaluated during query optimization and execution. Triggers would significantly impact performance, require implementation on every table, and would not prevent SELECT operations from returning unauthorized data.

Organizations implementing row-level security should thoroughly test predicate functions to ensure correct logic and performance, use SESSION_CONTEXT() for storing user attributes when built-in functions like USER_NAME() are insufficient, implement comprehensive unit tests validating that users see only authorized rows, monitor query performance as RLS predicates add filtering logic to every query, document the security model clearly for developers and administrators, consider performance implications of complex predicate functions joining multiple tables, implement application-level caching carefully to avoid leaking cached data between user contexts, and maintain principle of least privilege by restricting who can alter security policies and predicate functions.

Question 132: 

You are monitoring an Azure SQL Database and notice increased CPU utilization and longer query execution times. Upon investigation, you discover that several queries are performing index scans instead of index seeks. What is the MOST likely cause of this performance degradation?

A) Insufficient memory allocated to the database

B) Outdated or missing statistics on indexed columns

C) Too many concurrent connections to the database

D) Network latency between application and database

Answer: B

Explanation:

Outdated or missing statistics on indexed columns represent the most likely cause of queries performing index scans instead of index seeks, leading to increased CPU utilization and longer execution times. Statistics are critical metadata objects that contain information about data distribution in tables and indexes, including histograms showing value distribution, density information indicating uniqueness, and row counts. The SQL Server query optimizer relies heavily on statistics to estimate row counts affected by predicates, choose appropriate join strategies, select optimal indexes, and decide between index seeks (efficient targeted lookups) versus index scans (less efficient examination of all or large portions of index entries).

When statistics become outdated due to significant data modifications (inserts, updates, deletes), or when statistics are missing entirely on columns used in WHERE clauses, JOIN conditions, or filtering predicates, the query optimizer makes execution plan decisions based on inaccurate or incomplete information. This can result in the optimizer choosing suboptimal plans including performing index or table scans when index seeks would be more efficient, selecting incorrect join orders, estimating vastly incorrect row counts leading to inappropriate memory grants, and choosing nested loop joins when hash joins would perform better. The impact manifests as suddenly degraded query performance for previously efficient queries, especially after substantial data modifications.

Azure SQL Database provides automatic statistics creation and update capabilities through AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS database options (enabled by default). However, automatic updates trigger based on thresholds that may not adequately respond to rapid data changes, use sampling that might miss important distribution changes, and update asynchronously which can leave queries using outdated statistics temporarily. Administrators should proactively monitor statistics freshness using DMVs like sys.dm_db_stats_properties, manually update statistics on critical tables after significant data modifications using UPDATE STATISTICS commands, consider using full scan statistics updates instead of sampled updates for critical tables, and implement statistics maintenance as part of regular database maintenance windows.
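
A short maintenance sketch against a hypothetical dbo.Sales table: first check how stale each statistics object is, then refresh with a full scan after heavy modifications:

```sql
-- Inspect statistics age and the number of modifications since the last update.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Sales');

-- Refresh all statistics on the table with a full scan (more expensive, but most accurate).
UPDATE STATISTICS dbo.Sales WITH FULLSCAN;
```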

A) is incorrect because while insufficient memory can impact database performance, it typically manifests differently than the described symptoms. Memory pressure usually causes increased physical I/O as pages are evicted from the buffer pool, longer compilation times as plan cache entries are removed, and potentially blocking or timeout issues. Index scans instead of seeks specifically indicate execution plan quality issues rather than resource constraints, though memory pressure can indirectly affect plan quality by removing cached plans and forcing recompilations with potentially outdated statistics.

C) is incorrect because too many concurrent connections impacts database performance through resource contention, blocking, and potential connection pool exhaustion, but it does not directly cause the optimizer to choose index scans over index seeks. Connection count affects resource availability (CPU, memory, locks, worker threads) but does not influence the optimizer’s plan selection logic for individual queries, which is primarily driven by statistics and index structures. High connection count would more likely manifest as blocking, timeout errors, or resource wait times.

D) is incorrect because network latency between application and database affects overall response time but does not impact the query optimizer’s choice between index scans and seeks. Network latency adds fixed overhead to each round-trip between client and server but does not change how the database engine executes queries internally. If network latency were the primary issue, execution time within the database (measured by server-side metrics) would remain stable while client-perceived response time increases, and execution plans would not change.

Organizations should implement regular statistics maintenance procedures including manual statistics updates on large or frequently modified tables, monitor statistics age and modification counters through DMVs, enable Query Store to track execution plan changes over time, use Database Engine Tuning Advisor or Azure SQL Database advisor recommendations for missing statistics, consider running statistics maintenance during off-peak hours for large tables, configure trace flags or database options to modify statistics update behavior when the default auto-update is insufficient, and establish performance baselines to quickly detect when query plans degrade due to stale statistics.
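For example, a minimal sketch of adjusting the automatic statistics behavior with standard ALTER DATABASE options (evaluate these against your workload before changing defaults; the asynchronous option is off by default):

-- Keep automatic statistics creation and update enabled (the Azure SQL Database defaults)
ALTER DATABASE CURRENT SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS ON;

-- Optionally let queries proceed while statistics are refreshed in the background
ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;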

Question 133: 

You need to enable Advanced Data Security for an Azure SQL Database to detect potential security threats and vulnerabilities. Which of the following features is included in Advanced Data Security?

A) Transparent Data Encryption only

B) Advanced Threat Protection and vulnerability assessment

C) Automatic failover groups

D) Query Performance Insight

Answer: B

Explanation:

Advanced Data Security (now part of Microsoft Defender for SQL) includes Advanced Threat Protection and vulnerability assessment as its core features, providing comprehensive security capabilities for detecting, investigating, and responding to potential threats and security weaknesses in Azure SQL Database. Advanced Threat Protection continuously monitors database activities to detect anomalous behaviors, suspicious activities, and potential security threats such as SQL injection attacks, unusual data access patterns, brute force authentication attempts, and access from unusual locations. Vulnerability Assessment provides automated security scanning that identifies misconfigurations, excessive permissions, exposed sensitive data, and other security vulnerabilities, delivering actionable remediation recommendations.

Advanced Threat Protection operates through intelligent security analytics and machine learning algorithms that establish behavioral baselines for database access patterns, analyze authentication events for suspicious login attempts, detect SQL injection attack patterns through query analysis, identify unusual data exfiltration activities such as large-scale data exports, recognize access from unfamiliar locations or applications, and detect privilege escalation attempts. When threats are detected, Advanced Threat Protection generates security alerts with detailed information including threat description, affected resources, attack tactics aligned with MITRE ATT&CK framework, recommended investigation steps, and mitigation guidance. Alerts integrate with Azure Security Center (Microsoft Defender for Cloud) and can trigger automated responses through Azure Logic Apps or Azure Functions.

Vulnerability Assessment performs periodic automated scans (weekly by default or on-demand) examining database configuration, permissions, encryption settings, and security best practices. The assessment generates detailed reports highlighting high, medium, and low severity vulnerabilities including excessive permissions for users, disabled security features like transparent data encryption or auditing, sensitive data discovered in tables without appropriate protection, unnecessary database principals with dangerous permissions, and missing security updates or patches. Each finding includes detailed descriptions, potential security impact, and step-by-step remediation instructions. The feature tracks remediation progress over time, allowing organizations to monitor security posture improvements and maintain compliance.

A) is incorrect because Transparent Data Encryption (TDE), while an important security feature, is not part of Advanced Data Security. TDE is a separate encryption capability that protects data at rest by encrypting database files and backups. TDE is enabled by default on new Azure SQL Databases and operates independently of Advanced Data Security. Advanced Data Security focuses on threat detection and vulnerability management rather than encryption.

C) is incorrect because automatic failover groups are a high availability and disaster recovery feature, not a security feature. Failover groups enable automatic failover of databases to secondary regions during outages and are completely separate from Advanced Data Security. While high availability contributes to overall system reliability, it is not considered part of the security threat detection and vulnerability assessment capabilities provided by Advanced Data Security.

D) is incorrect because Query Performance Insight is a performance monitoring and optimization tool that helps identify resource-consuming queries and performance bottlenecks. It is part of Azure SQL Database’s Intelligent Performance features and is completely separate from security capabilities. Query Performance Insight focuses on query tuning and performance optimization rather than security threat detection or vulnerability assessment.

Organizations should configure Advanced Data Security email notifications to alert security teams immediately when threats are detected, integrate alerts with Security Information and Event Management (SIEM) systems for centralized security monitoring, establish incident response procedures for different threat types, regularly review and remediate vulnerabilities identified by Vulnerability Assessment, configure baseline security policies in Vulnerability Assessment to track deviations from security standards, implement least privilege access controls to minimize exposure, combine Advanced Data Security with other security features like Always Encrypted for comprehensive defense-in-depth, and conduct regular security reviews incorporating vulnerability assessment findings into security improvement programs.

Question 134: 

You are implementing an Always Encrypted solution for an Azure SQL Database to protect sensitive column data from database administrators. Which component must be installed on the client application servers?

A) SQL Server Management Studio only

B) Always Encrypted-enabled ADO.NET provider or ODBC driver

C) Azure Key Vault client library only

D) SQL Server Agent

Answer: B

Explanation:

An Always Encrypted-enabled ADO.NET provider or ODBC driver must be installed on client application servers to implement Always Encrypted functionality correctly. Always Encrypted is a client-side encryption technology where sensitive data is encrypted within client applications before being sent to the database and remains encrypted at rest and in transit, ensuring that database administrators, cloud operators, and other high-privileged users cannot access plaintext sensitive data. The encryption and decryption operations occur transparently within the client-side database driver, requiring no application code changes beyond connection string modifications and ensuring the database server never has access to encryption keys or plaintext data.

The Always Encrypted-enabled drivers perform several critical functions that make the technology seamless for applications. These drivers transparently encrypt data in parameterized queries targeting encrypted columns before sending to the database, automatically decrypt data received from encrypted columns when reading query results, handle encryption metadata by querying column encryption settings from the database, retrieve column encryption keys from the configured key store (Azure Key Vault, Windows Certificate Store, or custom key store providers), cache encryption keys for performance optimization, and validate column encryption key signatures to ensure keys haven’t been tampered with. The drivers support both deterministic encryption (allowing equality comparisons and joins on encrypted columns) and randomized encryption (providing stronger protection against pattern analysis).

Supported client drivers include .NET Framework Data Provider for SQL Server (System.Data.SqlClient) version 4.6 or later, Microsoft .NET Data Provider for SQL Server (Microsoft.Data.SqlClient), ODBC Driver for SQL Server version 13.1 or later, JDBC Driver for SQL Server version 6.0 or later, and OLE DB Driver for SQL Server version 18.0 or later. Applications must enable Always Encrypted in connection strings by adding "Column Encryption Setting=Enabled" and configure appropriate key store providers for column master key retrieval. The configuration allows applications to work with encrypted data transparently while ensuring encryption keys never leave the client environment, maintaining the security boundary that protects against database-level threats.
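To show where the driver fits, the following T-SQL sketch defines an encrypted column that an Always Encrypted-enabled driver would transparently encrypt and decrypt. The key names, key vault URL, table name, and the ENCRYPTED_VALUE literal are placeholders; in practice the column master key and column encryption key are provisioned with SSMS or PowerShell, which generate the real encrypted value.

-- Column master key stored in Azure Key Vault (hypothetical vault and key path)
CREATE COLUMN MASTER KEY CMK_Auto1
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = 'https://contoso-vault.vault.azure.net/keys/AlwaysEncryptedCMK/1234abcd'
);

-- Column encryption key protected by the column master key
-- (the ENCRYPTED_VALUE below is a placeholder; tooling generates the real value)
CREATE COLUMN ENCRYPTION KEY CEK_Auto1
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_Auto1,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00
);

-- Table with an encrypted column; the client driver performs encryption and decryption
CREATE TABLE dbo.Patients (
    PatientID int IDENTITY PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Auto1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);

With this schema in place, the application connection string only needs Column Encryption Setting=Enabled; the driver reads the encryption metadata for dbo.Patients.SSN and retrieves keys from the configured key store.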

A) is incorrect because while SQL Server Management Studio includes Always Encrypted support for configuring encrypted columns and testing queries, SSMS is a management tool, not a component required on application servers. Application servers need the actual database connectivity drivers (ADO.NET, ODBC, JDBC) with Always Encrypted support to encrypt and decrypt data during normal application operations. SSMS might be installed on administrator workstations but is not deployed on production application servers.

C) is incorrect because while Azure Key Vault client libraries may be needed if using Azure Key Vault as the column master key store, the Azure Key Vault integration is handled through the Always Encrypted-enabled database driver. The driver internally uses appropriate key store providers (including Azure Key Vault provider) to retrieve encryption keys. Applications don’t directly interact with Azure Key Vault client libraries for Always Encrypted operations; instead, they configure the driver to use Azure Key Vault as the key store and the driver manages all key retrieval operations.

D) is incorrect because SQL Server Agent is a job scheduling service used in on-premises SQL Server and Azure SQL Managed Instance for automating administrative tasks. It is not available or applicable to Azure SQL Database, has no role in Always Encrypted functionality, and is certainly not a component installed on client application servers. SQL Server Agent is server-side automation infrastructure completely unrelated to client-side encryption operations.

Organizations implementing Always Encrypted should store column master keys in secure key stores like Azure Key Vault with appropriate access controls, implement key rotation procedures to periodically replace encryption keys, use randomized encryption for highly sensitive data that doesn’t require querying, implement application-level caching of encrypted data to minimize performance impact, test application functionality thoroughly as Always Encrypted affects query capabilities (range queries, pattern matching, and functions don’t work on randomized encrypted columns), document which columns are encrypted and their encryption types, implement secure key backup and recovery procedures, and train developers on Always Encrypted limitations and proper usage patterns.

Question 135: 

You need to implement cross-database queries between two Azure SQL Databases located in the same logical server. Which feature should you use?

A) Linked servers

B) Elastic database queries with external data sources

C) Cross-database queries using three-part naming convention

D) Transactional replication

Answer: B

Explanation:

Elastic database queries with external data sources provide the appropriate solution for implementing cross-database queries between Azure SQL Databases, even when they’re located on the same logical server. Azure SQL Database does not support traditional three-part naming (database.schema.table) for cross-database queries like on-premises SQL Server, nor does it support linked servers in the traditional sense. Instead, elastic database queries enable cross-database access through external data sources and external tables, which create virtualized access to tables in remote Azure SQL Databases, allowing queries to span multiple databases transparently through T-SQL queries that reference external tables as if they were local tables.

Elastic queries support two primary scenarios: vertical partitioning (accessing different tables in different databases) and horizontal partitioning (sharding data across multiple databases). For cross-database queries between databases on the same server, the vertical partitioning scenario applies, where different tables or schemas exist in separate databases and applications need to join or query across them. The implementation involves creating a database scoped credential containing authentication information for the remote database, creating an external data source referencing the remote Azure SQL Database using the credential, creating external tables in the local database that map to tables in the remote database, and querying external tables using standard T-SQL as if they were local tables, with the elastic query engine transparently routing operations to the remote database.

The configuration process requires executing T-SQL commands in the database that will issue cross-database queries (the "head" database):

First, create a master key if one doesn't exist:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'strong_password';

Then create a database scoped credential:
CREATE DATABASE SCOPED CREDENTIAL credential_name WITH IDENTITY = 'username', SECRET = 'password';

Next, create the external data source:
CREATE EXTERNAL DATA SOURCE remote_db WITH (TYPE = RDBMS, LOCATION = 'server_name.database.windows.net', DATABASE_NAME = 'remote_database_name', CREDENTIAL = credential_name);

Finally, create external tables matching the schema of the remote tables:
CREATE EXTERNAL TABLE schema_name.table_name (columns) WITH (DATA_SOURCE = remote_db, SCHEMA_NAME = 'remote_schema', OBJECT_NAME = 'remote_table');

Applications can then query external tables with SELECT, JOIN, and other standard T-SQL operations.
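As a hypothetical usage example (assuming a local dbo.Orders table and a dbo.Customers external table created against the remote database as described above), queries can then join local and remote data directly:

-- dbo.Customers is the external table mapped to the remote database;
-- dbo.Orders is an ordinary local table in the head database
SELECT o.OrderID,
       o.OrderDate,
       c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '2024-01-01';

The elastic query engine retrieves the required rows from the remote database and combines them with the local data, so the application sees a single result set.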

A) is incorrect because Azure SQL Database does not support linked servers, which are an on-premises SQL Server feature for connecting to heterogeneous data sources including other SQL Server instances, Oracle databases, and various OLE DB providers. The linked server infrastructure relies on system stored procedures and configuration options not available in Azure SQL Database’s managed platform. Elastic database queries provide equivalent functionality for Azure SQL Database environments.

C) is incorrect because Azure SQL Database explicitly does not support three-part naming convention (database.schema.table) for cross-database queries. Unlike on-premises SQL Server where USE statements and three-part names enable cross-database queries when appropriate permissions exist, Azure SQL Database requires external data sources and external tables for cross-database access. Attempting to use three-part naming in Azure SQL Database results in syntax errors or name resolution failures.

D) is incorrect because transactional replication is a data synchronization technology that copies and distributes data changes from publisher databases to subscriber databases, primarily used for high availability, reporting offload, and data distribution scenarios. While replication can make data available across databases, it creates copies of data rather than enabling cross-database queries against source data. Replication introduces complexity, latency, and synchronization overhead inappropriate for scenarios simply requiring query access across databases.

Organizations implementing elastic queries should carefully consider performance implications as cross-database queries involve network communication and may have higher latency than local queries, implement appropriate indexes on remote tables to optimize query performance, use query hints and optimization techniques to minimize data transfer, implement connection retry logic to handle transient network issues, monitor query performance through Query Store and dynamic management views, consider security implications as database scoped credentials must be managed securely, evaluate whether database consolidation might be preferable to cross-database queries for frequently accessed data, and document external dependencies for maintenance and troubleshooting purposes.