Microsoft PL-400 Power Platform Developer Exam Dumps and Practice Test Questions Set 2 Q16-30
Question 16.
You need to implement row-level security in Dataverse so that users can only access account records owned by their business unit. Which security feature should you configure?
A) Field-level security
B) Security roles with business unit access level
C) Column security profiles
D) Sharing privileges
Answer: B
Explanation:
Security roles with business unit access level provide the mechanism for implementing row-level security based on business unit ownership in Dataverse. When you configure a security role, you specify privileges for each table and set the access level (depth) for each privilege. The business unit access level restricts users to accessing only records owned by users in their business unit.
Dataverse supports five access levels: None (no access), User (only records owned by the user), Business Unit (records owned by users in the user’s business unit), Parent: Child Business Units (records owned by users in the user’s business unit and all child business units), and Organization (all records regardless of owner). By setting the access level to Business Unit for the Read privilege on the Account table, users can only view accounts owned by members of their business unit.
The business unit hierarchy in Dataverse is established through the organization structure settings. Users are assigned to business units, and records are owned by users or teams. The security role configuration combined with ownership and business unit assignment creates the row-level security model that ensures users only access appropriate records.
A) Field-level security controls access to specific columns (fields) within records, not which rows users can access. It’s used for protecting sensitive data within records that users can already access, like salary information or social security numbers, but doesn’t provide row-level filtering.
C) Column security profiles (also called field security profiles) work with field-level security to control access to specific secured columns. Like field-level security, they control column access within accessible records, not which rows users can see. They don’t provide row-level filtering based on business unit.
D) Sharing allows specific records to be shared with individual users or teams beyond what their security role would normally allow. While sharing can grant access to specific records, it’s a mechanism for exceptions to the security model, not for implementing systematic row-level security based on business unit ownership.
Question 17.
You are creating a plugin that needs to create related records in a single transaction. If any record creation fails, all changes should be rolled back. How should you implement this?
A) Use ExecuteMultipleRequest with ContinueOnError set to false
B) Use ExecuteTransactionRequest with requests collection
C) Create each record individually and handle errors manually
D) Use batch operations in the Web API
Answer: B
Explanation:
ExecuteTransactionRequest is specifically designed for executing multiple organization service requests within a single database transaction. This request ensures that either all operations succeed together or all fail together with automatic rollback. This is exactly what’s needed when creating related records that must maintain data integrity as a unit.
When you use ExecuteTransactionRequest, you add multiple OrganizationRequest objects (such as CreateRequest, UpdateRequest, etc.) to its Requests collection. The platform executes these requests sequentially within a transaction boundary. If any request fails, the entire transaction is automatically rolled back, ensuring that no partial data is committed to the database.
This transactional behavior is critical for maintaining referential integrity and business logic consistency. For example, if you’re creating an order with order line items, you don’t want the order header to be created if one of the line items fails. ExecuteTransactionRequest guarantees this all-or-nothing behavior without requiring manual transaction management.
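The following is a minimal sketch of this pattern, assuming service is an IOrganizationService obtained from the plugin's service factory and that the table and column names are placeholders:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

// Pre-assign the order id so the line items can reference it within the same transaction.
var orderId = Guid.NewGuid();
var order = new Entity("salesorder") { Id = orderId, ["name"] = "Order 1001" };

var line1 = new Entity("salesorderdetail")
{
    ["salesorderid"] = new EntityReference("salesorder", orderId),
    ["quantity"] = 2m
};
var line2 = new Entity("salesorderdetail")
{
    ["salesorderid"] = new EntityReference("salesorder", orderId),
    ["quantity"] = 5m
};

// All three creates run in one transaction; if any fails, everything is rolled back.
var transaction = new ExecuteTransactionRequest
{
    Requests = new OrganizationRequestCollection
    {
        new CreateRequest { Target = order },
        new CreateRequest { Target = line1 },
        new CreateRequest { Target = line2 }
    },
    ReturnResponses = true // return the CreateResponse for each request
};

var response = (ExecuteTransactionResponse)service.Execute(transaction);
```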
A) ExecuteMultipleRequest is designed for improving performance when executing many operations, but it has a fundamentally different behavior. Even with ContinueOnError set to false, each request is committed individually as it succeeds. If a later request fails, earlier successful requests are not rolled back, resulting in partial data commits.
C) Creating each record individually and handling errors manually does not provide transactional integrity. Even if you detect an error and attempt to delete previously created records, race conditions can occur, other users might have already accessed the partial data, and you cannot guarantee complete rollback of all changes.
D) Batch operations in the Web API (using $batch) are primarily for reducing network round trips by bundling multiple requests. While you can specify that a batch should behave as a changeset, this is not the standard approach for plugin development, which uses the Organization Service SDK, not the Web API.
Question 18.
You need to debug a plugin that is running in the Dataverse sandbox. The plugin is not behaving as expected in production. What tool should you use to debug the plugin?
A) Visual Studio with remote debugging
B) Plugin Registration Tool with plugin profiler
C) Fiddler web debugging proxy
D) Browser developer tools
Answer: B
Explanation:
The Plugin Registration Tool with plugin profiler is the Microsoft-recommended tool for debugging plugins in Dataverse, especially in sandbox (isolated) mode. The plugin profiler allows you to capture the execution context and any exceptions that occur when a plugin runs in the actual Dataverse environment, including production, without requiring direct debugging access to the server.
When you use the plugin profiler, you install the profiler solution into the environment and configure it to capture plugin executions. The profiler stores detailed information about each plugin execution, including the execution context, input/output parameters, and any exceptions that occurred. You can then download this profile and replay it in Visual Studio on your local machine, allowing you to step through the code with the exact same context that caused the issue in production.
This approach is particularly valuable for debugging issues that only occur in production because you can capture the real execution context including actual data values, user context, and organizational settings. You can debug without impacting production users and without requiring administrative access to production servers.
A) Visual Studio with remote debugging requires attaching to the server process, which is not possible with sandbox plugins in Dataverse. Sandbox plugins run in an isolated process for security and reliability, and Microsoft doesn’t provide remote debugging access to production Dataverse servers for security reasons.
C) Fiddler is a web debugging proxy used for capturing and analyzing HTTP/HTTPS traffic. While it can be useful for debugging Web API calls or webhook requests, it cannot debug the internal execution of plugins that run server-side within Dataverse. Plugins don’t generate HTTP traffic that Fiddler could capture.
D) Browser developer tools are used for debugging client-side JavaScript code that runs in web browsers. They cannot debug server-side plugin code that executes within Dataverse. Browser tools have no visibility into server-side execution contexts or plugin code.
Question 19.
You are developing a canvas app that needs to work offline. The app must store data locally when there is no network connection and sync when connectivity is restored. Which approach should you use?
A) Use Dataverse connector with offline mode enabled
B) Use collection functions to store data locally and Power Automate to sync
C) Use SQL Server connector with local database
D) Enable offline mode in the mobile app settings
Answer: B
Explanation:
Canvas apps don’t have built-in offline data synchronization like model-driven apps do. To enable offline functionality in a canvas app, you must implement custom logic using collections to store data locally in the app’s memory. When the app starts or when network connectivity is restored, you use Power Automate flows or direct Dataverse connector calls to synchronize the local collection data with Dataverse.
The typical pattern involves loading data into collections when the app starts and has connectivity, allowing users to work with this cached data when offline, storing any changes (creates, updates, deletes) in separate collections while offline, and then synchronizing these changes back to Dataverse when connectivity is restored. You can use the Connection.Connected signal to detect network status, and the SaveData and LoadData functions to persist collections to the device so cached data survives app restarts.
While this approach requires manual implementation of sync logic and conflict resolution, it provides the flexibility to create offline-capable canvas apps. You need to handle scenarios like data conflicts (where server data changed while the user was offline), determine what data to cache locally (to manage memory usage), and provide user feedback about sync status.
A) The standard Dataverse connector in canvas apps does not have an offline mode setting. Unlike model-driven apps which have built-in offline capabilities, canvas apps require manual implementation of offline functionality using collections and custom sync logic.
C) Canvas apps cannot connect to local SQL Server databases on the user’s device. The SQL Server connector connects to network-accessible SQL servers, not local databases. Additionally, this approach would require the user’s device to run SQL Server, which is impractical for mobile devices.
D) While Power Apps Mobile has some settings related to offline capabilities, these primarily affect model-driven apps. Canvas apps don’t automatically gain offline capabilities just by enabling settings in the mobile app. Offline functionality must be specifically implemented in the canvas app logic.
Question 20.
You need to create a plugin that sends an email notification when a case is created with high priority. The email sending operation should not block the case creation. How should you implement this?
A) Create a synchronous plugin on PreCreate and send email using .NET SmtpClient
B) Create an asynchronous plugin on PostCreate and use SendEmailRequest
C) Create a synchronous plugin on PostOperation and use SendEmailRequest
D) Create a workflow to send the email
Answer: B
Explanation:
Creating an asynchronous plugin on PostCreate (PostOperation stage of the Create message) with SendEmailRequest is the optimal approach for this requirement. Asynchronous plugins execute in the background without blocking the main operation, which means the case record will be created immediately and the email will be sent afterward without making the user wait.
The PostCreate stage ensures that the case record has been successfully committed to the database before attempting to send the email. This is important because you don't want to send notification emails for records that might fail to be created. Using SendEmailRequest (the standard Dataverse API for sending emails) ensures the email is created and tracked as an email activity in Dataverse rather than being sent outside the platform.
Asynchronous plugins are queued and executed by the asynchronous service, which provides automatic retry capabilities if failures occur. If the email fails to send due to a temporary issue, the system will automatically retry the operation. This makes asynchronous plugins highly reliable for operations that don’t need to happen immediately within the user’s transaction.
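A hedged sketch of the plugin body, assuming the step is registered as asynchronous on the PostOperation stage of Create for the incident (case) table; the priority value, subject text, and recipient handling are placeholders:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Crm.Sdk.Messages;

public class HighPriorityCaseNotification : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        var target = (Entity)context.InputParameters["Target"];

        // Only notify for high-priority cases (the option value 1 is a placeholder).
        var priority = target.GetAttributeValue<OptionSetValue>("prioritycode");
        if (priority == null || priority.Value != 1)
            return;

        // Create the email activity first; in a real implementation the 'from' and 'to'
        // activity party lists must also be populated before sending.
        var email = new Entity("email")
        {
            ["subject"] = "High priority case created",
            ["description"] = "A high-priority case was just created and needs attention.",
            ["regardingobjectid"] = new EntityReference("incident", target.Id)
        };
        var emailId = service.Create(email);

        // Ask the platform to send the tracked email.
        service.Execute(new SendEmailRequest
        {
            EmailId = emailId,
            TrackingToken = string.Empty,
            IssueSend = true
        });
    }
}
```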
A) Using a synchronous plugin on PreCreate with .NET SmtpClient has multiple problems. First, synchronous execution blocks the case creation until the email is sent, creating poor user experience. Second, PreCreate occurs before the record is saved, so you don’t yet have a case number or final record data. Third, using SmtpClient bypasses Dataverse email tracking and requires managing email server credentials in code.
C) While using SendEmailRequest in PostOperation is better than using SmtpClient, making the plugin synchronous causes the user to wait for the email to be sent before the case creation completes. Email operations can be slow due to network latency or email server response times, unnecessarily delaying the user’s operation.
D) Workflows (both classic workflows and Power Automate flows) are viable alternatives for sending notification emails. However, the question specifically asks how to implement the requirement as a plugin, so a workflow is not the correct answer here; that said, a Power Automate flow would be a reasonable alternative approach in practice.
Question 21.
You are implementing a solution where multiple plugins need to share common business logic. What is the best practice for organizing this shared code?
A) Copy the common code into each plugin class
B) Create a separate class library project and reference it from plugins
C) Use inheritance to create a base plugin class with shared methods
D) Store the code in a JavaScript web resource
Answer: B
Explanation:
Creating a separate class library project for shared business logic is the recommended best practice when multiple plugins need common functionality. This approach promotes code reusability, maintainability, and follows the DRY (Don’t Repeat Yourself) principle. The class library can contain helper methods, business logic, data access code, and utility functions that multiple plugins can reference.
When you create a class library for shared code, you can version it independently, unit test it thoroughly, and update it without modifying individual plugin projects. Each plugin project references the class library as a dependency, and when you build the plugin assembly, the shared library is typically merged into it using a tool such as ILMerge or ILRepack, or deployed with it as a dependent assembly using the plug-in package (NuGet) deployment model.
This architectural approach also improves collaboration in team environments, as different developers can work on the shared library and plugin implementations independently. It establishes clear separation of concerns, where the class library contains business logic and the plugin classes contain only plugin-specific code like registration information and execution context handling.
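A minimal illustration of the separation, assuming a hypothetical Contoso.Plugins.Common class library that the plugin project references (the namespace, class, and business rule are invented for the example):

```csharp
using System;
using Microsoft.Xrm.Sdk;

// --- Contoso.Plugins.Common: separate class library project holding shared business logic ---
namespace Contoso.Plugins.Common
{
    public static class DiscountCalculator
    {
        // Shared rule that several plugins can reuse and that can be unit tested in isolation.
        public static decimal ApplyLoyaltyDiscount(decimal amount, int yearsAsCustomer)
        {
            var rate = yearsAsCustomer >= 5 ? 0.10m : 0.05m;
            return amount - amount * rate;
        }
    }
}

// --- Plugin project: references the shared library and keeps only plugin plumbing ---
namespace Contoso.Plugins.Sales
{
    using Contoso.Plugins.Common;

    public class OrderPricingPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            var target = (Entity)context.InputParameters["Target"];

            var amount = target.GetAttributeValue<Money>("totalamount")?.Value ?? 0m;
            target["totalamount"] = new Money(DiscountCalculator.ApplyLoyaltyDiscount(amount, 6));
        }
    }
}
```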
A) Copying common code into each plugin class creates significant maintenance problems. When the shared logic needs to be updated, you must modify multiple plugin classes, increasing the risk of inconsistencies, bugs, and errors. This approach violates fundamental software engineering principles and creates technical debt.
C) While inheritance can be used to share some functionality through a base plugin class, this approach is less flexible than using a separate class library. Inheritance creates tight coupling between the base class and derived plugins, and C# doesn’t support multiple inheritance, limiting flexibility when plugins need different combinations of shared functionality.
D) JavaScript web resources are client-side code that runs in browsers and cannot be referenced or used by server-side plugins. Plugins are compiled .NET assemblies that execute in the Dataverse server environment and have no access to JavaScript code. This option reflects a fundamental misunderstanding of the platform architecture.
Question 22.
You need to implement a solution where users can upload images in a canvas app and store them in Dataverse. The images should be retrievable and displayable in the app. Which data type should you use for the image field?
A) Single line of text
B) Image data type
C) Multiple lines of text
D) File data type
Answer: B
Explanation:
The Image data type in Dataverse is specifically designed for storing and displaying images within the platform. When you create a field with the Image data type, Dataverse handles the image storage, retrieval, thumbnail generation, and optimization automatically. This data type is optimized for images that need to be displayed in forms, views, and canvas apps.
In canvas apps, you can use the Camera control or Add Picture control to capture or select images, then save them directly to Image fields in Dataverse using the Patch function. When retrieving records, the Image field provides a URL that can be directly used in the Image control’s Image property. Dataverse automatically handles image formatting, size limits, and generates multiple sizes for different display contexts.
The Image data type has built-in size limits (full-size images can be stored up to a configurable maximum of 30 MB) and automatically generates a thumbnail for display contexts such as views and forms. It integrates seamlessly with model-driven app forms, where images can be displayed and edited directly in the form designer without custom code.
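Although the question targets a canvas app (where Patch writes the image from a Camera or Add Picture control), the same column can also be populated from server-side code. A hedged SDK sketch, assuming a hypothetical custom image column new_photo on account and an existing IOrganizationService named service:

```csharp
using System;
using System.IO;
using Microsoft.Xrm.Sdk;

Guid existingAccountId = Guid.Parse("00000000-0000-0000-0000-000000000001"); // placeholder record id

// Image columns accept a raw byte array; Dataverse stores the image and creates the thumbnail.
var account = new Entity("account", existingAccountId)
{
    ["new_photo"] = File.ReadAllBytes(@"C:\temp\contact-photo.png") // hypothetical column and path
};
service.Update(account);
```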
A) Single line of text fields can only store text strings up to 4,000 characters. While you could theoretically store a Base64-encoded image string if it’s small enough, this is not an appropriate solution. It doesn’t provide image display capabilities, would require custom encoding/decoding, and cannot handle images larger than a few kilobytes.
C) Multiple lines of text can store longer strings (up to 1,048,576 characters) and could theoretically store Base64-encoded images. However, this approach requires manual encoding/decoding, doesn’t provide automatic thumbnail generation, offers no image optimization, and is inefficient for image storage. It’s a workaround, not a proper solution.
D) File data type is designed for storing documents and files of any type, not specifically optimized for images. While it can store image files, it doesn’t provide the automatic thumbnail generation, image optimization, or direct display capabilities that the Image data type offers. File fields are better suited for PDFs, Word documents, and other file types.
Question 23.
You are creating a PCF control that needs to respond when the bound field value changes. Which method in the control lifecycle is called when the bound data updates?
A) init()
B) updateView()
C) getOutputs()
D) notifyOutputChanged()
Answer: B
Explanation:
The updateView method is called whenever the bound data or other context properties change, making it the correct method for responding to field value updates. This method receives the updated context as a parameter, allowing the control to access the new value and update its visual display accordingly.
The updateView method is the core rendering method in the PCF lifecycle and is invoked not only during initial load but also whenever there are changes to the context, including bound field value changes, control resize events, or updates to any context properties. Inside updateView, you should check what has changed in the context and update your control’s UI appropriately.
When implementing updateView, it’s important to make the method efficient because it can be called frequently. You should compare the new values with previous values to determine what actually changed before performing expensive rendering operations. The method provides access to all updated context information including bound field values, container dimensions, and utility functions.
A) The init method is called only once when the control first initializes. It’s used for one-time setup tasks like creating DOM elements, initializing variables, and setting up event handlers. It does not receive bound data updates after the initial load, so it cannot be used to respond to subsequent field value changes.
C) The getOutputs method is called when you need to return data from the control back to the platform, typically after the user interacts with the control. It returns output values but is not called when bound input values change. It’s for outputting data from the control, not responding to input changes.
D) The notifyOutputChanged method is called by your control code to inform the platform that the control has new output values that should be retrieved via getOutputs. It’s used to push changes from the control, not to receive changes to bound data. It’s the opposite direction of data flow from what the question asks.
Question 24.
You need to create a solution that automatically generates quote documents in PDF format when a quote is won. The PDF should include data from the quote and related products. Which approach should you use?
A) Create a Word template and use document generation
B) Create a JavaScript web resource to generate PDF using a library
C) Use Power Automate with a custom connector to a PDF service
D) Create a plugin that generates PDF using a .NET library
Answer: A
Explanation:
Using a Word template with Dataverse’s built-in document generation feature is the recommended and most efficient approach for creating PDF documents from Dataverse data. Dataverse provides native integration with Word templates that allows you to create professionally formatted documents with data from records and related records, and then automatically convert them to PDF format.
Word templates in Dataverse support the Word Developer tab features, including content controls for binding data fields, repeating sections for related records (like products on a quote), and conditional formatting. You create the template as a Word document, upload it to Dataverse, and then users can generate documents directly from records through the UI, or you can automate generation through Power Automate or server-side code that invokes the document template actions.
The document generation process automatically merges data from the quote record and related product records into the template, generates a Word document, and can automatically convert it to PDF format. The generated documents can be automatically attached to the record, emailed to customers, or stored in SharePoint. This approach requires no custom code, is fully supported by Microsoft, and handles all the complexity of document generation and PDF conversion.
B) Using JavaScript web resources to generate PDFs would be a complex custom solution requiring client-side PDF libraries, would only work when users access forms in the browser, and would not work for background automation or server-side generation. This approach adds unnecessary complexity and maintenance burden.
C) While Power Automate with a custom connector to an external PDF service is possible, it’s more complex and expensive than using built-in document generation. It requires developing or subscribing to an external PDF service, managing API calls and authentication, and handling potential service availability issues.
D) Creating a plugin that generates PDF using .NET libraries is a custom coding approach that requires significant development effort, ongoing maintenance, and testing. While technically possible, it’s unnecessarily complex when Dataverse provides built-in document generation capabilities that meet the requirement without custom code.
Question 25.
You are developing a model-driven app form that needs to show or hide a tab based on the value of an option set field. The tab visibility logic should execute immediately when the field value changes. How should you implement this?
A) Create a business rule with a condition
B) Create a JavaScript function on the field OnChange event
C) Create a plugin on the Update message
D) Create a Power Automate flow
Answer: B
Explanation:
Creating a JavaScript function on the field’s OnChange event is the correct approach for implementing dynamic form behavior that responds immediately to field changes. The OnChange event fires instantly when a user modifies a field value, allowing your JavaScript code to show or hide tabs in real-time without form submission or page refresh.
JavaScript provides access to the formContext object, which includes methods for controlling tab visibility through the setVisible method. Your OnChange function can read the current value of the option set field using formContext.getAttribute("fieldname").getValue(), apply your business logic to determine whether the tab should be visible, and then call formContext.ui.tabs.get("tabName").setVisible(true) or setVisible(false) to show or hide the tab accordingly.
This client-side approach provides immediate visual feedback to users and creates a responsive, intuitive user experience. The logic executes entirely in the user’s browser without server round trips, making it fast and efficient. JavaScript form scripting is specifically designed for this type of dynamic form behavior and is the standard approach for show/hide logic in model-driven apps.
A) Business rules can show or hide fields, but they cannot show or hide sections or entire tabs on a form. Business rules have limited UI manipulation capabilities compared to JavaScript and don't support tab visibility control. This is a documented limitation of business rules.
C) Plugins on the Update message execute on the server side after a record is saved, not immediately when a field value changes on the form. Plugins cannot manipulate form UI elements like tabs. They execute too late and in the wrong context for implementing real-time form behavior.
D) Power Automate flows are designed for workflow automation and execute asynchronously after records are saved. They cannot control form UI elements in real-time and would not provide the immediate visual response needed when a user changes a field value. Flows execute on the server and have no access to the form UI.
Question 26.
You are developing a canvas app that needs to display data from multiple Dataverse tables with complex filtering and sorting requirements. Performance is critical. Which delegation strategy should you use?
A) Load all records into collections and filter in memory
B) Use delegable functions and keep dataset under 2000 records
C) Use non-delegable functions and increase data row limit to 2000
D) Create views in Dataverse with pre-filtered data and use delegation
Answer: D
Explanation:
Creating views in Dataverse with pre-filtered data and leveraging delegation is the most effective strategy for handling complex filtering and sorting while maintaining performance in canvas apps. Dataverse views allow you to define filtering, sorting, and column selection at the server level, and when you reference these views from canvas apps using delegable operations, the processing happens on the server rather than downloading large datasets to the client.
Delegation is a fundamental concept in canvas apps where data operations are pushed to the data source for processing rather than bringing data to the client. When you use properly configured views with delegable functions, Dataverse performs the heavy lifting on the server side, returning only the necessary results. This approach works efficiently with large datasets far exceeding the data row limit and provides optimal performance.
By creating targeted views that pre-filter data based on common query patterns, you reduce the amount of data that needs to be processed and ensure that your app can work with datasets of any size. Views also improve maintainability because filtering logic is centralized in the view definition rather than scattered throughout app formulas. This approach scales well as data grows and provides consistent performance regardless of dataset size.
A) Loading all records into collections and filtering in memory is the worst approach for performance. It downloads all data to the client device, consumes significant memory, requires lengthy initial load times, and will fail or timeout with large datasets. This approach does not scale and violates canvas app best practices.
B) While using delegable functions is important, artificially limiting datasets to under 2000 records doesn’t address the root performance issue and may not be practical for business requirements. This approach works around the problem rather than solving it and may exclude important data from the app.
C) Using non-delegable functions with increased data row limits still downloads all records to the client before filtering, creating performance problems. The maximum data row limit is 2000, and even at this limit, non-delegable operations perform poorly. This approach doesn’t scale and creates poor user experience.
Question 27.
You need to implement a solution where changes to account records in Dataverse trigger real-time notifications to an external system via webhook. Which feature should you configure?
A) Create a Power Automate flow with Dataverse trigger
B) Configure a service endpoint with webhook registration
C) Create an Azure Logic App integration
D) Use Change Tracking API with polling
Answer: B
Explanation:
Configuring a service endpoint with webhook registration is the native Dataverse mechanism for sending real-time event notifications to external systems. Service endpoints allow you to register external HTTP endpoints that Dataverse will call synchronously or asynchronously when specific events occur, such as account record changes. This is implemented through the Service Bus/Webhook integration in the Plugin Registration Tool.
When you register a webhook service endpoint, you specify the external URL that should receive notifications and then register steps (similar to plugin steps) that define which events should trigger the webhook. When the specified event occurs, Dataverse automatically posts the execution context data to your webhook endpoint in JSON format, providing details about the changed record and the operation performed.
Webhooks provide low-latency, event-driven integration without requiring polling or intermediate services. They support both synchronous and asynchronous execution modes, allow you to include or exclude specific fields from the payload, and provide retry logic for failed deliveries. This is the most direct and efficient method for real-time Dataverse-to-external-system integration and is specifically designed for this use case.
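The registration itself is done in the Plugin Registration Tool rather than in code, but the external endpoint only needs to accept an HTTP POST whose body is the serialized RemoteExecutionContext. A minimal, hypothetical ASP.NET Core receiver to illustrate the shape of the integration (the endpoint path and property handling are assumptions):

```csharp
using System.IO;
using System.Text.Json;

var app = WebApplication.CreateBuilder(args).Build();

// Dataverse posts the execution context as JSON to the URL registered on the webhook step.
app.MapPost("/dataverse/account-changed", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    var body = await reader.ReadToEndAsync();

    using var payload = JsonDocument.Parse(body);
    var message = payload.RootElement.GetProperty("MessageName").GetString();      // e.g. "Update"
    var table = payload.RootElement.GetProperty("PrimaryEntityName").GetString();  // e.g. "account"
    Console.WriteLine($"Received {message} on {table}");

    return Results.Ok();
});

app.Run();
```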
A) Power Automate flows can trigger on Dataverse changes and call external systems, but they introduce additional latency and are asynchronous by nature. While flows are easier to configure for non-developers, they add an extra layer between Dataverse and the external system and have different performance characteristics compared to native webhooks.
C) Azure Logic Apps can integrate with Dataverse and call external systems, but like Power Automate flows, they introduce additional components and latency. Logic Apps are essentially the Azure equivalent of Power Automate and have similar characteristics including asynchronous execution and additional overhead compared to native webhook integration.
D) The Change Tracking API requires external systems to poll Dataverse periodically for changes, which is not real-time and is inefficient. Polling creates unnecessary load on both systems, introduces latency between when changes occur and when they’re detected, and doesn’t meet the requirement for real-time notifications.
Question 28.
You are developing a plugin that needs to perform a calculation using data from related records. The related records are in a 1:N relationship with the primary record. Which approach provides the best performance?
A) Use RetrieveMultiple with LinkEntity to get all data in one query
B) Use Retrieve to get the primary record, then loop through related records retrieving each one
C) Use FetchXML with multiple separate queries
D) Use late-bound entities with individual Retrieve calls
Answer: A
Explanation:
Using RetrieveMultiple with LinkEntity to retrieve the primary record and all related records in a single query is the most efficient approach for plugin performance. LinkEntity allows you to join related tables in a single query, similar to SQL JOIN operations, minimizing database round trips and reducing overall execution time. This is particularly important in plugins where execution time directly impacts user experience.
A single query with LinkEntity retrieves all necessary data in one database operation, reducing network latency and connection overhead. You can specify which columns to retrieve from both the primary and related tables using ColumnSet, further optimizing performance by retrieving only needed data. The query returns a flattened result set in which columns from linked tables are exposed as AliasedValue attributes prefixed with the link-entity alias.
When working with relationships in Dataverse plugins, minimizing the number of database queries is critical for performance. Each separate query incurs overhead for connection management, query parsing, and result serialization. By consolidating data retrieval into a single query with LinkEntity, you achieve significantly better performance, especially when dealing with multiple related records or complex relationship structures.
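A hedged sketch of the single-query pattern, assuming service is an IOrganizationService, context is the plugin execution context, and the child table new_invoice with its lookup new_accountid and amount column new_amount are invented for the example:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

Guid accountId = context.PrimaryEntityId; // the primary record the plugin fired on

var query = new QueryExpression("account") { ColumnSet = new ColumnSet("name") };
query.Criteria.AddCondition("accountid", ConditionOperator.Equal, accountId);

// Join the related child rows in the same query instead of retrieving them one by one.
var link = query.AddLink("new_invoice", "accountid", "new_accountid", JoinOperator.LeftOuter);
link.Columns = new ColumnSet("new_amount");
link.EntityAlias = "inv";

var results = service.RetrieveMultiple(query);

// Linked columns come back flattened as AliasedValue attributes ("inv.new_amount").
decimal total = 0m;
foreach (var row in results.Entities)
{
    if (row.TryGetAttributeValue("inv.new_amount", out AliasedValue amount) && amount.Value is Money money)
        total += money.Value;
}
```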
B) Retrieving the primary record first and then looping through related records with individual Retrieve calls creates multiple database round trips, resulting in poor performance. Each Retrieve operation incurs separate overhead, and with many related records, this approach becomes exponentially slower. This is an anti-pattern in plugin development.
C) Using FetchXML with multiple separate queries suffers from the same performance problems as option B. While FetchXML is a valid query language in Dataverse, executing multiple separate queries when you could use a single query with joins is inefficient and creates unnecessary database load and execution time.
D) Late-bound entities with individual Retrieve calls still suffer from the core problem: multiple separate database round trips. The choice between late-bound and early-bound types has a negligible performance impact compared with the cost of those extra queries, so this option offers no advantage and should be avoided when dealing with related records in performance-sensitive plugins.
Question 29.
You need to create a custom API in Dataverse that performs a complex business operation involving multiple tables. The API should be callable from Power Automate and canvas apps. What should you create?
A) Custom action with input and output parameters
B) JavaScript web resource
C) Plugin registered on a standard message
D) Azure Function with HTTP trigger
Answer: A
Explanation:
Creating a custom action with input and output parameters is the recommended approach for implementing custom business logic that needs to be callable from multiple platforms including Power Automate, canvas apps, model-driven apps, and external systems. Custom actions are first-class Dataverse components that can be included in solutions, support versioning, and provide a clean API contract with defined inputs and outputs.
Custom actions can be either bound (associated with a specific table) or unbound (global actions). For complex operations involving multiple tables, an unbound custom action is typically appropriate. You define the action’s parameters in the Dataverse metadata, and then implement the business logic either through a workflow (for simple operations) or by creating a plugin registered on the custom action message for complex logic.
Once created, custom actions appear in Power Automate as standard connectors, can be called from canvas apps using the Dataverse connector, are available in model-driven app command bars and JavaScript, and can be invoked via the Web API or Organization Service. This provides a consistent, reusable interface for your business logic across all Power Platform components and external integrations.
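A hedged sketch of the plugin that implements the action, assuming a hypothetical unbound custom action new_CalculateRisk with an EntityReference input parameter CustomerId and an integer output parameter RiskScore:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CalculateRiskPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Input parameters defined on the custom action arrive in the execution context.
        var customerId = (EntityReference)context.InputParameters["CustomerId"];

        // ...use 'service' to query the tables the calculation needs...
        var riskScore = customerId != null ? 42 : 0; // placeholder result of the multi-table calculation

        // Output parameters defined on the action are returned to the caller
        // (Power Automate, canvas apps, the Web API, and so on).
        context.OutputParameters["RiskScore"] = riskScore;
    }
}
```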
B) JavaScript web resources execute client-side in browsers and cannot contain complex business logic involving multiple tables that needs server-side transaction support. They’re not callable from Power Automate or canvas apps in a structured way and don’t provide the security, transaction management, or API characteristics needed for this requirement.
C) While you could register a plugin on a standard message like Create or Update, this doesn’t provide a clean API interface specifically for your business operation. Standard messages are tied to CRUD operations on specific tables and don’t provide the semantic clarity or dedicated input/output contract that a custom action provides.
D) Azure Functions with HTTP triggers are external to Dataverse and require additional infrastructure, authentication management, and network connectivity. While they can be called from Power Platform components through custom connectors, they don’t integrate as seamlessly as custom actions and add complexity, cost, and maintenance overhead.
Question 30.
You are implementing a solution that needs to track all changes made to specific fields on the account table for audit purposes. Which feature should you enable?
A) Change Tracking
B) Audit logging
C) Field-level security
D) Duplicate detection
Answer: B
Explanation:
Audit logging in Dataverse is specifically designed for tracking changes to data for compliance and audit purposes. When you enable auditing on a table and specific fields, Dataverse automatically records all changes including who made the change, when it was made, the old value, and the new value. This creates a comprehensive audit trail that meets regulatory and business requirements for data change tracking.
Auditing can be configured at three levels: organization level (enabling auditing globally), table level (enabling auditing for specific tables), and field level (enabling auditing for specific fields). For the requirement to track changes to specific fields on the account table, you would enable auditing at the organization level, enable it for the account table, and then enable it for each specific field you want to track.
The audit logs are stored in the audit table and can be viewed through the Dataverse interface, exported for analysis, or queried programmatically. Audit records are retained based on configured retention policies and include detailed information about create, update, delete, and access operations. This provides the comprehensive audit trail needed for compliance with regulations like GDPR, HIPAA, and SOX.
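A hedged sketch of reading the audit trail programmatically through the Organization Service, assuming auditing is already enabled and service is an IOrganizationService (the record id is a placeholder):

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Crm.Sdk.Messages;

var accountId = Guid.Parse("00000000-0000-0000-0000-000000000001"); // placeholder record id

// Retrieve the change history for one account record.
var response = (RetrieveRecordChangeHistoryResponse)service.Execute(
    new RetrieveRecordChangeHistoryRequest
    {
        Target = new EntityReference("account", accountId)
    });

foreach (var detail in response.AuditDetailCollection.AuditDetails)
{
    // Attribute-level changes expose the old and new values for each audited column.
    if (detail is AttributeAuditDetail attributeDetail)
    {
        Console.WriteLine($"Change recorded on {detail.AuditRecord.GetAttributeValue<DateTime>("createdon")}");

        foreach (var attr in attributeDetail.NewValue.Attributes)
            Console.WriteLine($"  {attr.Key}: new value '{attr.Value}'");
    }
}
```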
A) Change Tracking is designed for data synchronization scenarios where external systems need to identify which records have changed since their last query. It provides a lightweight mechanism to detect changes but doesn’t store historical change details, old values, or who made changes. It’s not suitable for audit purposes.
C) Field-level security controls who can view or edit specific fields, providing access control rather than change tracking. While it can restrict access to sensitive fields, it doesn’t record or track changes to those fields. It serves a different purpose related to security rather than auditing.
D) Duplicate detection is a feature that identifies and prevents duplicate records from being created in Dataverse. It has nothing to do with tracking changes to existing records or maintaining audit trails. It’s used for data quality purposes, not auditing.