Microsoft PL-400 Power Platform Developer Exam Dumps and Practice Test Questions Set 4 Q46-60
Question 46.
You need to implement a solution where changes to specific Dataverse tables are replicated to an Azure SQL Database in near real-time. Which feature should you use?
A) Data Export Service
B) Azure Synapse Link for Dataverse
C) Custom plugins with Azure SQL connection
D) Power Automate with SQL connector
Answer: B
Explanation:
Azure Synapse Link for Dataverse is the Microsoft-recommended solution for continuous, near real-time data replication from Dataverse to Azure Data Lake Storage and Azure Synapse Analytics. While the question mentions Azure SQL Database specifically, Synapse Link provides enterprise-grade data export capabilities with continuous sync, and the exported data can then be surfaced to Azure SQL Database through external tables or moved into it with a downstream pipeline (for example, Azure Data Factory or Synapse pipelines).
Azure Synapse Link continuously replicates Dataverse table data and metadata to Azure Data Lake in Common Data Model format. The replication is incremental, near real-time, and includes both initial snapshots and ongoing change tracking. This approach is scalable, requires no custom code, provides automatic schema synchronization, and is optimized for analytics and reporting scenarios.
The service handles all the complexity of change detection, data transformation, error handling, and incremental synchronization. It’s designed for enterprise data integration scenarios where you need Dataverse data available in Azure for analytics, machine learning, reporting, or integration with other Azure services. This is far superior to custom solutions for production data replication.
A) The Data Export Service is the older Dataverse data export solution that has been deprecated and replaced by Azure Synapse Link. While it provided similar functionality for exporting data to Azure SQL Database, Microsoft now recommends Azure Synapse Link for new implementations as it provides better performance, more features, and ongoing support.
C) Creating custom plugins with Azure SQL connections is a custom development approach that requires significant code, connection management, error handling, performance optimization, and ongoing maintenance. It doesn’t scale well, can impact Dataverse performance, introduces security complexities, and should only be considered when standard solutions don’t meet requirements.
D) Power Automate with SQL connector could technically sync data but introduces significant limitations including flow run limits, throughput constraints, complexity in handling large data volumes, and costs that scale with data volume. It’s suitable for low-volume scenarios but not for enterprise-grade near real-time replication of entire tables.
Question 47.
You are developing a canvas app that needs to display hierarchical data with parent-child relationships. The hierarchy can be up to five levels deep. Which approach provides the best user experience?
A) Nested galleries for each level
B) Tree view PCF control
C) Single flat gallery with indentation
D) Multiple screens for each level
Answer: B
Explanation:
A tree view PCF control is specifically designed for displaying and navigating hierarchical data structures with multiple levels. Tree view controls provide expand/collapse functionality, visual hierarchy representation with indentation and connecting lines, efficient rendering of large hierarchies, and intuitive navigation that users expect from hierarchical data displays.
Tree view controls handle the complexity of recursive data structures, allow users to expand only the branches they’re interested in (reducing visual clutter and improving performance), support selection of nodes, and can display icons or additional information at each node. For a five-level hierarchy, a tree view provides the clearest and most usable interface.
Several tree view PCF controls are available through the PCF gallery, or you can build custom tree view controls tailored to specific requirements. These controls integrate seamlessly with canvas apps, can be data-bound to Dataverse or other data sources, and provide professional hierarchical navigation without complex custom development.
A) Nested galleries for five levels creates extreme complexity in canvas app development. Each level requires a separate gallery nested within its parent, making the formula logic complicated, performance poor, and maintenance difficult. Beyond two levels of nesting, this approach becomes impractical and provides poor user experience with excessive scrolling and confusing layout.
C) A single flat gallery with indentation requires loading and displaying all nodes in a flattened structure, which doesn’t provide expand/collapse functionality and becomes unwieldy with large hierarchies. Users would need to scroll through all nodes even if they’re only interested in specific branches. This approach doesn’t scale well and lacks the interactive features users expect from hierarchical displays.
D) Using multiple screens for each level forces users to navigate back and forth between screens to explore the hierarchy, losing context and making it difficult to understand the overall structure. This creates poor user experience with excessive navigation, no overview of the hierarchy, and difficulty in understanding parent-child relationships across levels.
Question 48.
You need to create a plugin that sends data to an external REST API. The API occasionally experiences timeouts. How should you handle this to ensure reliability?
A) Implement retry logic with exponential backoff in the plugin
B) Register the plugin as synchronous and increase timeout settings
C) Use an asynchronous plugin with automatic retry from the platform
D) Create a separate Azure Function to call the API
Answer: C
Explanation:
Using an asynchronous plugin with the platform’s automatic retry capabilities is the most reliable approach for calling external APIs that may experience intermittent failures. When you register a plugin as asynchronous, it executes through the asynchronous service which provides built-in retry logic. If the plugin fails due to exceptions (including timeout exceptions), the platform automatically retries the operation multiple times with delays between attempts.
The asynchronous service handles the complexity of retry scheduling, tracks the number of attempts, progressively increases delay between retries, and eventually moves failed jobs to a failed state if all retries are exhausted. This enterprise-grade retry mechanism is battle-tested and handles many edge cases including system restarts, ensuring reliable delivery without custom retry code.
Asynchronous execution also prevents slow or failing external API calls from blocking user operations or causing timeout errors in the user interface. The main Dataverse operation completes successfully, and the external API call happens in the background with automatic retry, providing the best user experience and reliability for integration scenarios.
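As a rough illustration of this pattern, the sketch below shows a plugin intended for asynchronous registration that posts the target record’s ID to an external endpoint. The endpoint URL and payload shape are placeholder assumptions; the key point is that throwing an exception on failure lets the asynchronous service mark the system job as failed so the platform’s retry handling can re-run it.

```csharp
using System;
using System.Net.Http;
using System.Text;
using Microsoft.Xrm.Sdk;

public class SendToExternalApiPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        using (var client = new HttpClient())
        {
            client.Timeout = TimeSpan.FromSeconds(30);

            // Hypothetical endpoint and payload; replace with the real API contract.
            var payload = new StringContent($"{{\"id\":\"{target.Id}\"}}", Encoding.UTF8, "application/json");
            var response = client.PostAsync("https://example.com/api/records", payload)
                                 .GetAwaiter().GetResult();

            // Failing the plugin here surfaces the error to the asynchronous service
            // instead of silently swallowing it, so the job can be retried.
            if (!response.IsSuccessStatusCode)
            {
                throw new InvalidPluginExecutionException(
                    $"External API call failed with status {(int)response.StatusCode}.");
            }
        }
    }
}
```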
A) While implementing retry logic with exponential backoff in the plugin is possible, it’s more complex than using the platform’s built-in retry capabilities. Custom retry logic must handle timing, exception types, maximum attempt counts, and other concerns that the asynchronous service already handles. Additionally, in synchronous plugins, extensive retry logic can cause user-facing timeout errors.
B) Registering the plugin as synchronous makes the user wait for the external API call, creating poor user experience when the API is slow or timing out. You cannot increase Dataverse plugin timeout settings beyond the two-minute limit for synchronous plugins. Even with retries, synchronous execution blocks the user and can cause operations to fail with timeout errors.
D) Creating a separate Azure Function adds infrastructure complexity, additional costs, requires managing another deployment and monitoring it separately, and introduces additional points of failure. While Azure Functions have their place in architecture, using Dataverse’s built-in asynchronous plugin capabilities is simpler and more appropriate for this scenario.
Question 49.
You are implementing a solution where a canvas app needs to display data from a SQL Server database that is not in Dataverse. Which connector should you use?
A) SQL Server connector
B) Dataverse connector with virtual tables
C) ODBC connector
D) Custom connector with SQL API
Answer: A
Explanation:
The SQL Server connector is the standard, Microsoft-provided connector for connecting canvas apps directly to SQL Server databases (both on-premises with data gateway and Azure SQL Database). This connector provides comprehensive functionality for querying tables, executing stored procedures, and performing CRUD operations on SQL Server data directly from canvas apps.
The SQL Server connector supports delegation for many operations, allowing efficient querying of large datasets where filtering and sorting happen on the SQL Server side. It provides functions for selecting data, inserting records, updating records, and deleting records. The connector handles authentication (SQL authentication or Windows authentication via gateway) and connection management automatically.
For canvas apps that need to display or manipulate data in existing SQL Server databases, the SQL Server connector is the straightforward, supported solution. It requires minimal setup (just connection configuration), doesn’t require data migration to Dataverse, and provides real-time access to SQL Server data. This is appropriate when you want direct access to SQL data in canvas apps.
B) Dataverse connector with virtual tables is a more complex solution that involves setting up virtual table infrastructure in Dataverse to surface external data. While this approach has benefits for model-driven apps and when you need Dataverse features like security roles and business rules, it’s unnecessarily complex for simply displaying SQL Server data in a canvas app. The direct SQL Server connector is simpler.
C) ODBC connector is a generic connector for databases that support ODBC drivers. While it can connect to SQL Server, the SQL Server connector is more optimized, provides better delegation support, and is the recommended connector specifically for SQL Server. Use ODBC connector only for databases that don’t have dedicated connectors.
D) There is no need to create a custom connector when a standard SQL Server connector exists and provides the necessary functionality. Custom connectors should only be created for APIs or data sources that don’t have standard connectors available. Using standard connectors ensures support, updates, and better performance.
Question 50.
You need to implement business logic that validates data across multiple related tables before allowing a record to be saved. The validation is complex and requires multiple queries. Which approach should you use?
A) Business rule with conditions
B) JavaScript web resource on form save
C) Synchronous plugin on PreValidation stage
D) Synchronous plugin on PreOperation stage
Answer: D
Explanation:
A synchronous plugin on the PreOperation stage is the appropriate solution for complex validation logic that requires multiple queries across related tables. PreOperation executes after the platform’s security checks, inside the database transaction, and before the core operation is written to the database, providing the ideal point to perform validation that might block the operation. The synchronous mode ensures validation happens inline and can prevent the save operation by throwing an exception.
PreOperation stage provides access to the full execution context including the target record’s data, allows querying related tables to gather validation information, and can throw InvalidPluginExecutionException to prevent the save with a user-friendly error message. All database operations are within the transaction boundary, ensuring data consistency if validation fails.
The PreOperation stage is specifically designed for business logic that might prevent operations, performs validation that requires querying other data, or needs to modify data before it’s written to the database. This is the standard pattern for implementing complex validation rules that go beyond what business rules or client-side JavaScript can handle.
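A minimal sketch of this pattern is shown below. The custom lookup column new_accountid is an illustrative assumption; the plugin retrieves the related account and blocks the save with InvalidPluginExecutionException when the validation fails.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class ValidateOrderAccountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        var target = (Entity)context.InputParameters["Target"];

        // Query a related table as part of the validation (new_accountid is a hypothetical lookup).
        var accountRef = target.GetAttributeValue<EntityReference>("new_accountid");
        if (accountRef != null)
        {
            var account = service.Retrieve("account", accountRef.Id, new ColumnSet("creditonhold"));

            // Throwing InvalidPluginExecutionException cancels the save and rolls back the transaction.
            if (account.GetAttributeValue<bool>("creditonhold"))
            {
                throw new InvalidPluginExecutionException(
                    "Orders cannot be saved for accounts that are on credit hold.");
            }
        }
    }
}
```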
A) Business rules have significant limitations for complex validation. They can only access data on the current record and its direct parent, cannot perform multiple queries across unrelated tables, have limited conditional logic capabilities, and cannot implement complex validation algorithms. Business rules are designed for simple validation scenarios, not complex multi-table validation.
B) JavaScript web resources on form save execute only when users save through the form UI, not when records are created or updated through APIs, integrations, imports, or workflows. Client-side validation can be bypassed and should never be the sole implementation of critical business rules. Complex validation logic requiring multiple queries should execute server-side for security and consistency.
C) The PreValidation stage executes before security checks and outside the main database transaction, which means validation logic might execute for records the user doesn’t have permission to modify. PreValidation is intended for basic data validation and default value setting, not for complex validation requiring queries to related tables. PreOperation is more appropriate for validation logic that needs access to related data.
Question 51.
You are developing a model-driven app form that needs to display different sets of fields based on the user’s security role. Which approach should you use?
A) Create multiple forms and assign them to security roles using form order
B) Use JavaScript to show/hide fields based on user roles
C) Create multiple forms and use form scripts to redirect users
D) Use business rules with role-based conditions
Answer: A
Explanation:
Creating multiple forms and assigning them to security roles using form order is the declarative, supported approach for displaying different field sets based on user roles. In form settings, you can specify the form order and assign forms to specific security roles. When users with those roles open a record, they automatically see the appropriate form without requiring custom code or complex logic.
This approach is maintainable, doesn’t require coding, is supported by Microsoft, performs well (no client-side role checks), and clearly separates form designs for different user audiences. You can create a simplified form for basic users and a detailed form for advanced users, each tailored to their specific needs and permissions.
Form assignment by security role is evaluated by the platform, ensuring users always see the appropriate form. If a user has multiple roles that map to different forms, form order determines precedence. This built-in functionality handles the complexity of role checking and form presentation without custom development.
B) Using JavaScript to show/hide fields based on user roles requires custom code, adds complexity, impacts form load performance (needs to check roles and adjust UI), requires maintenance when roles change, and provides a less clean experience than having properly designed separate forms. Fields are still loaded even if hidden, wasting resources.
C) Using form scripts to redirect users between forms creates poor user experience with form reloads, requires custom JavaScript code to check roles and navigate, doesn’t work well offline or in certain contexts, and is more complex than using built-in form assignment. This approach also makes the form behavior harder to understand and maintain.
D) Business rules cannot evaluate user roles or security roles in their conditions. Business rules are designed for record-data-based logic, not user-context-based logic. They cannot determine which role a user has and therefore cannot implement role-based field visibility. This capability doesn’t exist in business rules.
Question 52.
You need to create a solution where external systems can query Dataverse data using standard OData queries. Which API should you expose?
A) Organization Service with custom endpoint
B) Dataverse Web API
C) Custom Azure API Management gateway
D) Custom ASP.NET Web API
Answer: B
Explanation:
The Dataverse Web API is the standard, supported REST API that external systems should use to query Dataverse data. It fully implements the OData v4.0 protocol, providing a consistent, standards-based interface for querying, creating, updating, and deleting data. External systems can use standard OData query options like $select, $filter, $orderby, $top, and $expand to query data exactly as needed.
The Web API handles authentication through Azure Active Directory, enforces Dataverse security roles and permissions, supports both system users and application users for service-to-service scenarios, and provides comprehensive operations for all Dataverse functionality. It’s designed specifically for external integration scenarios and is the recommended API for new integrations.
Using the Web API ensures that external systems benefit from all Dataverse security features, audit logging, plugin execution, and business rules. The API is versioned, well-documented, and Microsoft provides SDKs and samples for various programming languages. This is the standard, supported path for external system integration with Dataverse.
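For illustration, a minimal external client sketch might query the Web API with standard OData options as shown below. The environment URL is a placeholder, and the Azure AD access token is assumed to have been acquired elsewhere (for example, via MSAL with an application user).

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class DataverseQuerySample
{
    public static async Task<string> GetTopAccountsAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            // Placeholder environment URL.
            client.BaseAddress = new Uri("https://yourorg.crm.dynamics.com/api/data/v9.2/");
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            client.DefaultRequestHeaders.Add("OData-Version", "4.0");
            client.DefaultRequestHeaders.Add("Accept", "application/json");

            // Standard OData query options: $select, $filter, $orderby, $top.
            var query = "accounts?$select=name,revenue&$filter=revenue gt 100000&$orderby=name&$top=10";

            var response = await client.GetAsync(query);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```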
A) The Organization Service is the .NET SDK-based API designed primarily for plugins, workflows, and .NET applications. While you can expose it through custom endpoints, this adds unnecessary complexity, requires custom development, doesn’t provide standard OData queries, and isn’t the recommended approach for external system integration. The Web API is designed for this purpose.
C) Azure API Management can be placed in front of the Dataverse Web API for additional capabilities like rate limiting, transformation, or caching, but it’s not necessary for basic OData query requirements. The Dataverse Web API already provides comprehensive OData support. Adding API Management introduces additional cost and complexity that may not be needed.
D) Creating a custom ASP.NET Web API as a proxy to Dataverse is unnecessary custom development that requires building, hosting, maintaining, and securing another application layer. It adds latency, introduces additional points of failure, and duplicates functionality that the Dataverse Web API already provides. Use the built-in Web API instead.
Question 53.
You are implementing a plugin that needs to create related records in a specific order due to dependencies. One related record requires a field value from another related record that must be created first. How should you implement this?
A) Use ExecuteMultipleRequest with ordered requests
B) Create records sequentially and use returned GUIDs
C) Use ExecuteTransactionRequest with ordered requests
D) Create all records first, then update with dependencies
Answer: B
Explanation:
Creating records sequentially and using the returned GUIDs from each Create operation is the straightforward, reliable approach for handling dependencies between related records. When you execute a CreateRequest, the response includes the GUID of the newly created record. You can immediately use this GUID to populate lookup fields or other fields in subsequently created records.
This approach is simple to understand, easy to debug, ensures proper sequencing of operations, and handles dependencies naturally. If any creation fails, you can handle the error appropriately, and if you’re in a synchronous plugin, the transaction will roll back automatically. The sequential nature makes the code clear and maintainable.
For example, if Record B requires a value from Record A, you first create Record A, capture its GUID from the CreateResponse, use that GUID to set the required field on Record B, and then create Record B. This pattern scales to any number of dependent records and clearly expresses the dependency relationships in code.
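A minimal sketch of that sequence, using hypothetical new_parent and new_child tables with a new_parentid lookup, might look like this:

```csharp
using Microsoft.Xrm.Sdk;

public static class CreateRelatedRecords
{
    public static void Run(IOrganizationService service)
    {
        // Create Record A first and capture its GUID from the Create response.
        var recordA = new Entity("new_parent");
        recordA["new_name"] = "Parent record";
        var recordAId = service.Create(recordA);

        // Use the returned GUID to populate the dependent lookup on Record B.
        var recordB = new Entity("new_child");
        recordB["new_name"] = "Child record";
        recordB["new_parentid"] = new EntityReference("new_parent", recordAId);
        service.Create(recordB);
    }
}
```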
A) ExecuteMultipleRequest is designed for executing many independent operations with better performance than individual requests. However, all of the requests in the batch are constructed before the batch is submitted, so you cannot take the GUID returned by one request and use it in another request within the same ExecuteMultipleRequest. Ordering the requests therefore does not solve the dependency problem.
C) ExecuteTransactionRequest ensures all operations succeed or fail together but doesn’t help with dependencies where you need a value from one operation to use in another. The requests in ExecuteTransactionRequest are prepared before execution, so you can’t capture a GUID from one operation and use it in another within the same transaction request. Sequential creation is needed.
D) Creating all records first and then updating them with dependencies requires twice as many database operations (creates plus updates), is less efficient, more complex to code and debug, and creates temporary states where data is incomplete. If the process fails partway through, you have orphaned records without proper relationships. Sequential creation is cleaner.
Question 54.
You need to implement a solution where users can scan QR codes in a model-driven app on mobile devices. Which approach should you use?
A) Embedded canvas app with barcode scanner control
B) JavaScript web resource with HTML5 camera API
C) Custom PCF control with camera access
D) Power Apps Mobile app with native camera
Answer: A
Explanation:
Embedding a canvas app with the barcode scanner control into a model-driven app form provides the most straightforward, supported solution for QR code scanning in model-driven apps. Canvas apps have the built-in barcode scanner control that works excellently on mobile devices, and embedded canvas apps integrate seamlessly with model-driven apps, allowing data to flow between them.
The embedded canvas app can access the barcode scanner control, scan QR codes or other barcode formats, process the scanned data, and then write it back to Dataverse or surface it to the hosting form (the ModelDrivenFormIntegration control provides the bridge between the host form and the embedded app). This approach leverages the strengths of both app types: model-driven apps for business data management and canvas apps for mobile-specific capabilities like barcode scanning.
Embedded canvas apps maintain the model-driven app context, can read and write data from the host form, provide a consistent user experience, and require no custom coding. The barcode scanner control handles all the complexity of camera access, barcode recognition, and format detection across iOS and Android devices.
B) JavaScript web resources with HTML5 camera API require significant custom development, handling camera permissions, implementing barcode recognition algorithms or integrating third-party libraries, testing across mobile browsers, and managing various device-specific issues. This is much more complex than using the built-in barcode scanner control.
C) Creating a custom PCF control with camera access is possible but requires significantly more development effort than embedding a canvas app with the barcode scanner control. PCF development involves TypeScript/JavaScript coding, managing camera APIs, implementing barcode recognition, and extensive testing. Use built-in capabilities before building custom controls.
D) Power Apps Mobile app provides the runtime environment for model-driven apps on mobile devices but doesn’t directly add QR scanning capabilities to model-driven apps. The model-driven app still needs a scanning interface, which is best provided through an embedded canvas app with barcode scanner control or a custom PCF control.
Question 55.
You are developing a plugin that needs to prevent users from deleting records that are marked as "locked" through a custom field. Which exception should you throw to display a user-friendly error message?
A) InvalidPluginExecutionException with custom message
B) ArgumentException with custom message
C) ApplicationException with custom message
D) Exception with custom message
Answer: A
Explanation:
InvalidPluginExecutionException is the specific exception type designed for plugins to communicate business rule violations and validation errors to users. When you throw an InvalidPluginExecutionException with a custom message, Dataverse displays that message to the user in the UI, provides appropriate error handling in the platform, and properly rolls back the transaction.
The exception message you provide should be clear, user-friendly, and explain why the operation cannot proceed. For example: «This record is locked and cannot be deleted. Please contact your administrator if you need to delete this record.» This message will appear in the user interface, helping users understand what went wrong and potentially how to resolve the issue.
Using InvalidPluginExecutionException signals to the platform that this is a business logic error, not a system error, and should be handled accordingly. The platform logs these exceptions appropriately, doesn’t trigger system alerts for expected business rule violations, and presents the error in a user-friendly manner across different client types (web, mobile, API).
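A minimal sketch of such a delete-blocking plugin is shown below; the Boolean column new_islocked is an illustrative assumption.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class PreventLockedDeletePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // On the Delete message, the Target is an EntityReference rather than an Entity.
        var targetRef = (EntityReference)context.InputParameters["Target"];
        var record = service.Retrieve(targetRef.LogicalName, targetRef.Id, new ColumnSet("new_islocked"));

        if (record.GetAttributeValue<bool>("new_islocked"))
        {
            // The message below is shown to the user, and the delete is rolled back.
            throw new InvalidPluginExecutionException(
                "This record is locked and cannot be deleted. Please contact your administrator if you need to delete this record.");
        }
    }
}
```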
B) ArgumentException is a .NET framework exception for invalid method arguments and is not appropriate for business rule violations in plugins. While you could technically throw it, it’s not designed for this purpose, won’t be handled as gracefully by the platform, and doesn’t clearly communicate that this is a business logic error rather than a coding error.
C) ApplicationException is a generic .NET exception that was historically used for custom application exceptions but has fallen out of favor. It’s not specific to Dataverse plugins, doesn’t provide the platform-specific handling that InvalidPluginExecutionException does, and is not the recommended exception type for plugin business logic errors.
D) Throwing a generic Exception is bad practice in any .NET application. It doesn’t communicate the specific nature of the error, prevents calling code from handling different exception types appropriately, and in the context of Dataverse plugins, doesn’t receive the special handling that InvalidPluginExecutionException gets for presenting user-friendly errors.
Question 56.
You need to create a canvas app that works offline and syncs data when connectivity is restored. The app should handle conflicts when the same record is modified both offline and online. Which strategy should you implement?
A) Use collections with last-write-wins conflict resolution
B) Implement timestamp-based conflict detection with user resolution UI
C) Use SQLite local database with automatic merge
D) Store changes in queue and prevent conflicts through locking
Answer: B
Explanation:
Implementing timestamp-based conflict detection with a user resolution UI provides the most robust solution for offline apps that need proper conflict handling. When the app goes offline, you store the timestamp (or RowVersion) of each record. When syncing after coming back online, you compare the stored timestamp with the current server timestamp to detect if the record was modified by someone else while you were offline.
When a conflict is detected (server record was modified after you went offline), you present a conflict resolution UI showing the user both versions: their offline changes and the current server version. The user can then make an informed decision about which version to keep, or manually merge the changes. This approach provides transparency and control over conflict resolution.
The implementation involves storing records in collections when offline, tracking their original timestamps, detecting conflicts during sync by comparing timestamps, and providing a conflict resolution interface when conflicts occur. This pattern is commonly used in distributed systems and provides the best user experience for collaborative scenarios where conflicts may occur.
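The comparison step at the heart of this pattern is small. The sketch below is a generic illustration of the decision logic (not canvas app code, and the surrounding record storage is left out as an assumption): it compares the server timestamp captured when the record went offline against the current server timestamp read during sync.

```csharp
using System;

public enum SyncDecision { ApplyOfflineChange, AskUserToResolve }

public static class ConflictDetector
{
    // modifiedOnAtCapture: server timestamp saved when the record was taken offline.
    // serverModifiedOn: current server timestamp read back during sync.
    public static SyncDecision Check(DateTime serverModifiedOn, DateTime modifiedOnAtCapture)
    {
        // If the server copy changed after we captured it, someone else edited the record
        // while we were offline, so show the user both versions instead of overwriting.
        return serverModifiedOn > modifiedOnAtCapture
            ? SyncDecision.AskUserToResolve
            : SyncDecision.ApplyOfflineChange;
    }
}
```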
A) Last-write-wins is a simple conflict resolution strategy that automatically overwrites server data with offline changes without detecting or handling conflicts. This can lead to data loss when multiple users work on the same records, because the latest save simply overwrites previous changes without considering that important data might be lost. It’s only appropriate when data loss is acceptable.
C) Canvas apps don’t have access to a SQLite local database. While SQLite is used in some native mobile app development, it isn’t available in Power Apps canvas apps, which use collections (optionally persisted on the device with SaveData and LoadData) for local storage. Additionally, automatically merging conflicts without user review can lead to data inconsistencies and loss of important changes.
D) Storing changes in a queue is part of the solution, but preventing conflicts through locking is not feasible in offline scenarios. You cannot lock records on the server when the app is offline because there’s no connection to the server. When the app goes offline, other users can still access and modify data, so conflicts are possible and must be handled.
Question 57.
You are implementing a solution where a Power Automate cloud flow needs to call a plugin’s logic. The plugin contains complex business rules that should be reused. What should you create?
A) Custom action with plugin implementation, call from flow
B) Convert plugin logic to flow actions
C) Use HTTP action to call plugin endpoint
D) Create Azure Function wrapper for plugin
Answer: A
Explanation:
Creating a custom action with the plugin implementation and calling it from Power Automate is the correct architectural pattern for reusing plugin business logic in flows. Custom actions appear as standard operations in Power Automate’s Dataverse connector, making them easy to discover and use. The plugin registered on the custom action message contains the actual business logic, ensuring consistency across all consumers.
This approach provides several benefits: business logic is centralized in the plugin and reused across different consumers (model-driven apps, canvas apps, flows, external systems), the custom action defines a clear contract with input and output parameters, changes to business logic only need to be made in one place, and the solution is maintainable and follows Microsoft’s recommended patterns.
Custom actions bridge the gap between Dataverse’s server-side plugin capabilities and Power Automate’s workflow automation, allowing flows to trigger complex business logic implemented in plugins without reimplementing that logic in flow steps. This architectural pattern promotes code reuse and maintains a single source of truth for business rules.
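As an illustration, a plugin registered on a hypothetical custom action named new_CalculateDiscount (with an OrderTotal input parameter and a Discount output parameter, both assumptions for this sketch) could look like this:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CalculateDiscountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Read the custom action's input parameter.
        var orderTotal = (decimal)context.InputParameters["OrderTotal"];

        // Centralized business rule: the same logic serves flows, apps, and external callers.
        var discount = orderTotal > 10000m ? orderTotal * 0.05m : 0m;

        // Return the result through the custom action's output parameter,
        // which surfaces as a dynamic value in the Power Automate step.
        context.OutputParameters["Discount"] = discount;
    }
}
```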
B) Converting plugin logic to flow actions means reimplementing complex business logic that already exists, creating duplicate code that must be maintained in two places, risking inconsistencies between implementations, and losing the performance and transaction benefits of server-side plugin execution. This violates the DRY (Don’t Repeat Yourself) principle.
C) Plugins don’t expose HTTP endpoints that can be called directly. Plugins execute within the Dataverse server process in response to specific messages. There’s no direct HTTP endpoint to call plugins. You would need to create a custom action that the plugin handles, which is essentially option A, or create a completely separate web service.
D) Creating an Azure Function wrapper for plugin logic introduces unnecessary complexity, additional infrastructure, duplicates business logic, creates another deployment and maintenance point, and doesn’t leverage Dataverse’s built-in custom action capabilities. This over-engineering provides no benefits compared to using custom actions which are designed for this scenario.
Question 58.
You need to implement a plugin that performs calculations on financial data using high precision decimal values to avoid rounding errors. Which data type should you use in your plugin code?
A) decimal
B) double
C) float
D) Money
Answer: A
Explanation:
The decimal data type in C# is specifically designed for financial and monetary calculations where precision is critical. Unlike floating-point types (float and double), decimal uses base-10 representation which accurately represents decimal fractions without the rounding errors that occur with binary floating-point. This makes decimal the correct choice for financial calculations, currency operations, and any scenario where precision matters.
When working with Dataverse money fields, whole number fields, or decimal fields in plugins, you should perform calculations using the decimal type to maintain precision throughout your calculations. The decimal type provides 28-29 significant digits of precision, ensuring that financial calculations remain accurate even through multiple operations.
Using decimal prevents the accumulation of rounding errors that can occur with binary floating-point types when representing values like 0.1 or 0.01. For financial calculations, even tiny rounding errors can accumulate to significant amounts when processing many transactions, making the choice of decimal critical for accuracy and compliance.
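A quick demonstration of the difference (an illustrative console program, not plugin code): summing 0.1 one thousand times is exact with decimal but drifts with double.

```csharp
using System;

public static class PrecisionDemo
{
    public static void Main()
    {
        double doubleSum = 0;
        decimal decimalSum = 0m;

        for (int i = 0; i < 1000; i++)
        {
            doubleSum += 0.1;      // binary floating-point: each addition carries a tiny error
            decimalSum += 0.1m;    // base-10 decimal: exact representation of 0.1
        }

        Console.WriteLine(doubleSum);   // approximately 99.99999999999986 — accumulated rounding error
        Console.WriteLine(decimalSum);  // 100.0 exactly
    }
}
```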
B) double is a binary floating-point type that cannot accurately represent many decimal fractions, leading to rounding errors in financial calculations. For example, 0.1 cannot be exactly represented in binary floating-point, causing accumulated errors in repeated calculations. Never use double for financial calculations despite its larger range, as precision is more important than range in financial contexts.
C) float is also a binary floating-point type with even less precision than double (approximately 7 significant digits). It suffers from the same decimal representation problems as double but with worse precision. Using float for financial calculations will lead to significant rounding errors and potential financial discrepancies. This should never be used for money calculations.
D) Money is a Dataverse data type, not a C# data type that you can use for calculations in plugin code. In plugins, Money fields are accessed through the Money class, but calculations should be performed on the extracted decimal value using the decimal type. The Money type is for representing monetary values with currency information, not for performing calculations.
Question 59.
You are developing a solution that requires executing business logic when records are imported through Data Import Wizard. The logic must execute for each imported record. What should you implement?
A) Plugin registered on Create message
B) Power Automate flow triggered on create
C) Custom workflow activity
D) Post-import script
Answer: A
Explanation:
A plugin registered on the Create message executes for every record creation regardless of how the record is created, including through the Data Import Wizard, API calls, user interface, or other integration methods. This ensures consistent business logic execution across all record creation paths, making it the appropriate solution when business rules must apply universally.
When records are imported through Data Import Wizard, each record creation triggers the Create message pipeline, causing your plugin to execute. The plugin can validate data, set calculated values, create related records, or perform any other business logic needed. This ensures imported data goes through the same business rule validation as manually created records.
Registering on the Create message provides consistent behavior, ensures data integrity regardless of entry method, reduces duplicate business logic (no separate import validation needed), and leverages Dataverse’s transaction management for reliability. The plugin executes within the import transaction, allowing validation failures to prevent bad data from being imported.
B) Power Automate flows triggered on record creation may not execute during data import operations depending on import settings and mode. Additionally, flows are asynchronous and would execute after import completes, meaning they can’t prevent invalid data from being imported. Flows are better for post-processing workflows rather than inline validation during import.
C) Custom workflow activities are reusable components that can be called from workflows but don’t automatically execute during import. Workflows must be explicitly triggered, either manually or by configuration, and may not run during Data Import Wizard operations. They don’t provide the automatic execution for all imports that a Create message plugin provides.
D) There is no "post-import script" feature in Dataverse. While you could potentially run scripts or processes after an import completes, this wouldn’t provide per-record validation during import, couldn’t prevent invalid data from being imported, and isn’t a built-in feature. Plugins provide the proper mechanism for per-record business logic during import.
Question 60.
You need to create a Power Apps portal page that displays data in a chart format. The chart should be interactive and update when users filter data. Which approach should you use?
A) Embed Power BI report in portal
B) Use Chart.js with portal web templates
C) Create portal list with chart view
D) Embed model-driven app chart
Answer: A
Explanation:
Embedding a Power BI report in a Power Apps portal provides the most powerful, interactive charting capabilities with professional visualizations, extensive chart types, cross-filtering between visuals, drill-down capabilities, and rich user interaction features. Power BI reports embedded in portals can be secured to show only data the current user should see, support filtering, and provide a premium data visualization experience.
Power BI integration with portals allows you to create sophisticated dashboards and reports in Power BI Desktop or the Power BI service, configure row-level security, embed the reports in portal pages using the powerbi Liquid tag, and provide portal users with interactive data exploration capabilities. Users can filter, drill down, and interact with visualizations naturally.
The combination of Power BI’s advanced analytics and visualization capabilities with portal’s public or authenticated access provides the best solution for interactive charting requirements. Power BI handles the complexity of rendering, interaction, and performance optimization, while the portal provides the access control and integration with Dataverse data.
B) Using Chart.js with portal web templates requires custom development including writing HTML/JavaScript code, manually querying data and formatting it for Chart.js, implementing interactivity and filtering logic, and handling all the complexity of chart rendering and updates. While possible, this is significantly more effort than using Power BI’s built-in capabilities.
C) Portal lists can display Dataverse data in various formats including basic charts, but these are relatively simple visualizations with limited interactivity and customization compared to Power BI. Portal list charts are suitable for simple scenarios but don’t provide the rich interactive charting experience that Power BI offers for data exploration.
D) Model-driven app charts cannot be directly embedded in Power Apps portals in a way that provides interactive filtering. While you might be able to embed entire model-driven apps, this doesn’t provide the targeted chart embedding and customization needed. Power BI is the proper solution for advanced interactive charting in portals.