Microsoft PL-400 Power Platform Developer Exam Dumps and Practice Test Questions Set 6 Q76-90
Question 76.
You need to create a plugin that prevents deletion of account records that have related opportunities in "Open" status. The plugin should provide a user-friendly error message. How should you implement this?
A) Synchronous plugin on PreOperation Delete with opportunity query and InvalidPluginExecutionException
B) Asynchronous plugin on PostOperation Delete with rollback logic
C) JavaScript web resource on form with delete prevention
D) Business rule that checks opportunities before delete
Answer: A
Explanation:
A synchronous plugin registered on the PreOperation stage of the Delete message is the correct implementation for preventing deletions based on business rules. In PreOperation, you query for related opportunities with "Open" status, and if any exist, throw an InvalidPluginExecutionException with a clear message such as "Cannot delete this account because it has open opportunities." This prevents the delete operation and displays the message to users.
PreOperation stage is ideal for validation logic that prevents operations because it executes before the database operation commits, allowing clean cancellation without side effects. Synchronous execution ensures immediate feedback to users, and InvalidPluginExecutionException is designed specifically for communicating business rule violations with user-friendly messages.
The implementation queries related opportunities filtering by status, counts or checks for existence of open opportunities, and throws the exception with an informative message if validation fails. The transaction automatically rolls back, and users see the error message explaining why they cannot delete the record.
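As an illustration, here is a minimal sketch of such a plugin. It assumes the opportunity's parentaccountid lookup and statecode 0 for "Open"; confirm the relationship and status values against your own schema.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class PreventAccountDeletePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // On the Delete message, Target is an EntityReference to the record being deleted.
        if (!(context.InputParameters["Target"] is EntityReference accountRef))
            return;

        // Look for at least one opportunity on this account that is still open (statecode 0).
        var query = new QueryExpression("opportunity")
        {
            ColumnSet = new ColumnSet(false),
            TopCount = 1
        };
        query.Criteria.AddCondition("parentaccountid", ConditionOperator.Equal, accountRef.Id);
        query.Criteria.AddCondition("statecode", ConditionOperator.Equal, 0);

        if (service.RetrieveMultiple(query).Entities.Count > 0)
        {
            // Cancels the delete and surfaces this message to the user.
            throw new InvalidPluginExecutionException(
                "Cannot delete this account because it has open opportunities.");
        }
    }
}
```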
B) Asynchronous plugins execute after the operation completes, meaning the account would already be deleted before your plugin runs. While you could theoretically restore the record, this is complex, error-prone, doesn’t cleanly prevent the operation, and doesn’t provide immediate user feedback. Async plugins aren’t suitable for validation that prevents operations.
C) JavaScript web resources only execute when users delete through the form UI and can be bypassed through API calls, bulk deletes, cascading deletes, or other non-UI paths. Client-side validation should never be the sole implementation of business rules. Server-side plugin validation is required for reliable enforcement.
D) Business rules cannot check related records or perform queries against other tables. They can only evaluate conditions on the current record and immediate parent lookups. Business rules also cannot prevent delete operations — they only work on Create and Update. Business rules are insufficient for this requirement.
Question 77.
You are developing a canvas app that needs to work with large datasets (100,000+ records). Users need to filter and search across multiple fields. Which approach provides the best performance?
A) Load all records into collections and use Filter function
B) Use delegable queries directly on the data source with Filter and Search
C) Use Power Automate to filter data and return results
D) Create multiple smaller collections for different subsets
Answer: B
Explanation:
Using delegable queries directly on the data source with Filter and Search functions allows the data source (Dataverse, SQL Server, SharePoint) to perform the filtering and searching server-side, returning only matching records to the app. Delegation is specifically designed for working with large datasets efficiently by pushing query processing to the data source rather than pulling all data to the client.
For datasets exceeding 100,000 records, delegation is not just best practice — it’s essential. Non-delegable approaches hit data row limits (maximum 2000) and perform poorly even at that limit. Delegable queries can work with millions of records because the data source handles the heavy lifting, returning only the filtered results.
To ensure delegation, use delegable functions (Filter, Search, LookUp, Sort on most data sources), avoid non-delegable functions in filter expressions, test with datasets larger than 2000 records to verify delegation works, and check for delegation warnings in the formula bar. The formula bar shows blue underlines when delegation warnings exist.
A) Loading all records into collections is impossible with large datasets due to data row limits (maximum 2000 records), would consume excessive device memory and cause app crashes if limits were higher, creates terrible user experience with lengthy initial load times, and completely defeats the purpose of delegation. Never load large datasets entirely into collections.
C) Using Power Automate to filter data introduces latency (flows take seconds to execute), complicates the solution with additional components, consumes flow runs (which have limits and costs), and doesn’t provide the real-time responsive experience that direct delegable queries offer. Power Automate is for automation, not for real-time data querying in apps.
D) Creating multiple smaller collections still requires loading subsets of data into memory, doesn’t solve the fundamental problem of working with large datasets, requires complex logic to manage which collection to use, and still hits data row limits if subsets exceed 2000 records. This approach doesn’t scale and misses the point of delegation.
Question 78.
You need to implement a solution where a plugin calls multiple external APIs in parallel to improve performance. The plugin should wait for all API calls to complete before proceeding. Which approach should you use?
A) Use Task.WhenAll with async API calls
B) Create multiple threads with Thread.Join
C) Execute API calls sequentially in a loop
D) Register multiple plugins that execute simultaneously
Answer: A
Explanation:
Using Task.WhenAll with async API calls is the modern, efficient approach for parallel external API calls in plugins. This pattern allows multiple HTTP requests to execute concurrently, significantly reducing total execution time compared to sequential calls. Task.WhenAll waits for all tasks to complete before returning, ensuring you have all results before proceeding with plugin logic.
The implementation involves creating tasks for each API call using HttpClient with async methods (GetAsync, PostAsync), collecting the tasks in an array or list, using await Task.WhenAll(tasks) to execute them in parallel and wait for completion, and then processing the results. This approach is non-blocking and efficient, making good use of I/O wait time.
For example, if you need to call three APIs that each take 2 seconds, sequential execution takes 6 seconds total, while parallel execution with Task.WhenAll takes only 2 seconds (the time of the longest call). This dramatic performance improvement is crucial in plugins where execution time affects user experience.
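A minimal sketch of this pattern follows, assuming outbound HTTP access is available to the plugin and using placeholder endpoint URLs:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ParallelApiCalls
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string[]> CallApisAsync()
    {
        // Start all three requests without awaiting each one,
        // so they run concurrently instead of one after another.
        Task<string> pricing = Client.GetStringAsync("https://example.com/api/pricing");
        Task<string> inventory = Client.GetStringAsync("https://example.com/api/inventory");
        Task<string> shipping = Client.GetStringAsync("https://example.com/api/shipping");

        // Task.WhenAll completes when every task has finished, so the total
        // wait is roughly the duration of the slowest call.
        return await Task.WhenAll(pricing, inventory, shipping);
    }
}
```

Because IPlugin.Execute is not asynchronous, the plugin body would typically block on the combined task, for example with CallApisAsync().GetAwaiter().GetResult().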
B) Creating multiple threads with Thread.Join is the older, more complex approach to parallelism. While it works, it’s less efficient than async/await patterns, doesn’t handle I/O-bound operations (like HTTP requests) as elegantly, consumes more system resources (threads are heavier than tasks), and uses outdated patterns that modern C# has improved upon. Task-based asynchrony is the current best practice.
C) Executing API calls sequentially in a loop is the simplest code but provides the worst performance. Each API call must complete before the next begins, meaning total execution time is the sum of all individual call times. For plugins calling multiple external services, this creates unnecessarily long execution times and poor user experience. Parallelism should be used when calls are independent.
D) Registering multiple plugins that execute simultaneously doesn’t help with parallel API calls within a single business operation. Each plugin instance handles one operation, and you can’t easily coordinate multiple plugin registrations to call different APIs for the same operation. This approach shows misunderstanding of plugin architecture and doesn’t solve the performance problem.
Question 79.
You are implementing a model-driven app form that needs to display read-only data from an external system alongside Dataverse data. The external data should not be stored in Dataverse. Which approach should you use?
A) Virtual table with custom data provider
B) JavaScript web resource fetching external data
C) Power Automate flow syncing data periodically
D) iFrame displaying external system UI
Answer: A
Explanation:
Virtual tables with a custom data provider allow you to surface external data in Dataverse without actually storing it, making the external data appear as native Dataverse tables in model-driven apps. The custom data provider (implemented as a plugin) handles Retrieve and RetrieveMultiple operations by fetching data from the external system in real-time and returning it in Dataverse format.
Virtual tables appear in forms, views, and subgrids just like regular Dataverse tables, providing seamless integration that users don’t recognize as external. The data provider translates Dataverse queries into external system queries, fetches data, and transforms results into Dataverse entity format. This provides read-only access (if you only implement Retrieve operations) to external data without duplication.
This approach is ideal for displaying reference data from external systems, integrating with systems of record that you don’t control, showing real-time data without synchronization delays, and avoiding data duplication and storage costs. Virtual tables are the recommended pattern for read-only external data integration in model-driven apps.
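A rough sketch of the RetrieveMultiple side of a data provider plugin is shown below. The table and column names are placeholders, in-memory stub rows stand in for the real external call, and a real provider would also translate the incoming query passed in InputParameters["Query"].

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class ExternalOrdersRetrieveMultiplePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Stub rows standing in for data fetched from the external system;
        // a real provider would call the external API and honor the incoming query.
        var externalRows = new[]
        {
            (Id: Guid.NewGuid(), Name: "Order 1001", Amount: 250m),
            (Id: Guid.NewGuid(), Name: "Order 1002", Amount: 975m)
        };

        var results = new EntityCollection();
        foreach (var row in externalRows)
        {
            // Each virtual row still needs a GUID primary key (in practice mapped
            // from or derived from the external record's key).
            var entity = new Entity("new_externalorder") { Id = row.Id };
            entity["new_name"] = row.Name;
            entity["new_amount"] = new Money(row.Amount);
            results.Entities.Add(entity);
        }

        // A data provider plugin returns its results through this output parameter.
        context.OutputParameters["BusinessEntityCollection"] = results;
    }
}
```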
B) JavaScript web resources fetching external data can display information on forms but require custom UI implementation, only run while the form is open in the app, don't integrate with views or subgrids, require managing external API authentication client-side (a security concern), and don't provide the seamless integration that virtual tables offer. JavaScript is more complex and less integrated.
C) Power Automate flows syncing data periodically stores copies of external data in Dataverse, which violates the requirement of not storing the data. Periodic sync also means data may be stale, introduces data duplication and storage costs, and creates synchronization complexity. This approach doesn’t meet the requirement for read-only access without storage.
D) iFrames displaying external system UI can show external data but provide terrible user experience with separate authentication, different UI paradigms, security restrictions on cross-origin iFrames, lack of integration with Dataverse features, and don’t allow the external data to appear as native Dataverse records. iFrames are generally a poor integration pattern.
Question 80.
You need to implement a solution where users can create records in a canvas app even when offline, and those records are automatically created in Dataverse when connectivity is restored. Which pattern should you implement?
A) Store creates in collections, detect connectivity with Connection.Connected, sync when online
B) Use Dataverse connector which automatically handles offline
C) Store data in local SQL database with automatic sync
D) Use browser localStorage with background sync service worker
Answer: A
Explanation:
Storing create operations in collections while offline, detecting connectivity status using the Connection.Connected property, and syncing to Dataverse when connectivity is restored is the standard pattern for offline-capable canvas apps. Collections provide in-memory storage for data while the app runs, and you can persist pending creates across app sessions by storing collection data in app variables or using SaveData/LoadData for longer-term local storage.
The implementation involves detecting when the app goes offline (Connection.Connected changes to false) and storing new records in a "pending creates" collection, using a flag or a separate collection to distinguish them from synced records. The app then periodically checks Connection.Connected, and once connectivity returns it iterates through the pending creates, patches them to Dataverse, handles any errors, and removes successfully synced items from the pending collection.
This pattern requires manual implementation but provides full control over the sync process, error handling, conflict resolution, and user feedback. You can show users which records are pending sync, allow them to view and edit pending records, and handle scenarios where sync fails for some records.
B) The standard Dataverse connector in canvas apps does not automatically handle offline scenarios or queue operations for later sync. Unlike model-driven apps which have built-in offline capabilities, canvas apps require manual implementation of offline patterns using collections and sync logic. The connector will simply fail when offline without automatic queuing.
C) Canvas apps cannot access local SQL databases on the user's device. There is no SQLite or other local database capability in canvas apps. Data must be stored in app collections, variables, or with SaveData for simple key-value persistence. SQL databases are not part of the canvas app runtime environment.
D) Canvas apps don’t have access to browser localStorage (for security and isolation) or service workers. These are web platform APIs that aren’t exposed to the Power Apps runtime. Canvas apps use their own data storage mechanisms (collections, variables, SaveData/LoadData) and don’t have access to low-level browser storage APIs.
Question 81
You are developing a plugin that needs to execute different logic based on whether the operation is being performed by a user or by the system (automated process). Which execution context property should you check?
A) UserId vs. InitiatingUserId comparison
B) IsExecutingOffline property
C) Depth property
D) Mode property
Answer: A
Explanation:
Comparing the UserId and InitiatingUserId properties of the execution context can help determine if the operation is being performed directly by a user or through an automated process. When a user performs an operation directly, these values are typically the same. However, when workflows, plugins, or other automated processes perform operations on behalf of users or the system, these values may differ, providing a clue about the nature of the execution.
Additionally, you can check if the UserId matches known system user GUIDs or check the caller’s context to determine if the operation is automated. Some organizations use dedicated integration or service accounts for automated processes, making it easier to identify system-initiated operations versus user-initiated ones through user ID comparison.
However, the most reliable approach often involves checking specific patterns in your organization’s architecture, such as whether certain fields are set that only automated processes would set, or examining the call stack depth. The execution context provides various properties that together help identify the operation source, though no single property definitively distinguishes all user operations from all system operations.
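A small illustration of the comparison inside Execute follows; the dedicated integration-account GUID is a hypothetical placeholder for whatever account your organization uses for automated processes.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class DetectCallerPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // UserId is the account the operation runs under (it may be impersonated);
        // InitiatingUserId is the user who originally triggered the operation.
        bool runsAsInitiator = context.UserId == context.InitiatingUserId;

        // Hypothetical GUID of a dedicated integration account used by automated jobs.
        var integrationAccountId = new Guid("00000000-0000-0000-0000-000000000000");

        bool looksAutomated = !runsAsInitiator || context.InitiatingUserId == integrationAccountId;
        if (looksAutomated)
        {
            // Skip logic intended only for interactive users.
            return;
        }

        // ...user-initiated logic continues here...
    }
}
```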
B) The IsExecutingOffline property indicates whether the plugin is executing in offline mode (in Dynamics 365 for Outlook), not whether the operation is user-initiated versus system-initiated. Both users and automated processes can trigger operations in online mode, so this property doesn’t distinguish between them.
C) The Depth property indicates how many levels deep in the plugin execution chain you are, which helps prevent infinite loops but doesn’t indicate whether the original operation was user-initiated or system-initiated. Both types of operations can have various depths depending on how many plugins trigger other operations.
D) The Mode property indicates whether the plugin is executing synchronously or asynchronously, not whether it was initiated by a user or system. Both users and automated processes can trigger both synchronous and asynchronous plugins, so Mode doesn’t distinguish the operation source.
Question 82
You need to create a canvas app that displays a map showing the route between two addresses. Which approach provides the best functionality?
A) Interactive map PCF control with routing capabilities
B) Static image from mapping service API
C) Address input controls with text directions
D) HTML text control with embedded map iframe
Answer: A
Explanation:
An interactive map PCF control with routing capabilities provides the richest functionality for displaying routes between addresses. Several PCF map controls available through the PCF gallery or third-party providers support routing features that calculate and display the path between two locations, show turn-by-turn directions, display distance and estimated travel time, and allow users to interact with the map by zooming and panning.
These controls integrate seamlessly with canvas apps, can accept address data from text inputs or Dataverse records, call routing APIs to calculate optimal paths, and display the route visually on an interactive map. Users can see the entire journey, identify waypoints, and understand the geographic relationship between locations, providing much better context than text directions alone.
Interactive map controls with routing typically use services like Azure Maps, Google Maps, or Bing Maps APIs to calculate routes. They handle the complexity of geocoding addresses to coordinates, calling routing services, rendering the route on the map, and providing interactive features. This delivers a professional, user-friendly experience for route visualization in canvas apps.
B) Using static images from mapping service APIs can display routes but lacks interactivity and requires repeatedly calling APIs to generate new images when addresses change. Users cannot zoom, pan, or interact with static images, and this approach consumes more API calls and provides inferior user experience compared to interactive map controls.
C) Address input controls with text directions don’t provide any visual map representation of the route. While text directions are useful supplementary information, they don’t give users the spatial understanding that map visualization provides. Users cannot see the geographic context, route overview, or surrounding areas with text-only directions.
D) HTML text controls in canvas apps have limited support for interactive content and attempting to embed map iframes faces cross-origin restrictions, doesn’t integrate well with Power Apps data binding, and provides poor user experience. This approach is technically problematic and doesn’t leverage proper canvas app architecture.
Question 83
You are implementing a solution where a plugin needs to prevent updates to records that are in "Approved" status. Users should see a clear error message explaining why the update is blocked. How should you implement this?
A) Synchronous plugin on PreOperation Update checking status field and throwing InvalidPluginExecutionException
B) Asynchronous plugin on PostOperation Update with rollback
C) Business rule preventing updates on approved records
D) JavaScript web resource blocking form saves
Answer: A
Explanation:
A synchronous plugin registered on the PreOperation stage of the Update message is the correct server-side implementation for preventing updates based on record status. The plugin checks if the record has "Approved" status, and if so, throws an InvalidPluginExecutionException with a user-friendly message such as "Cannot update approved records. Please contact your administrator to reopen the record before making changes."
PreOperation stage is ideal because it executes before the database update commits, allowing you to prevent the operation cleanly without any data being modified. The synchronous execution mode ensures users receive immediate feedback about why their update was blocked, and the InvalidPluginExecutionException is specifically designed to communicate business rule violations to users with clear error messages.
The plugin retrieves the current record from the database to check its status (since the Target entity only contains changed fields), compares the status value to determine if the record is approved, and throws the exception with a descriptive message if updates should be prevented. The platform automatically rolls back the transaction and displays the error message to users across all interfaces including UI, API, and imports.
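A minimal sketch of this validation is shown below, assuming a hypothetical choice column new_status whose "Approved" option value is 100000002; a registered pre-image could replace the explicit Retrieve used here.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class BlockApprovedUpdatesPlugin : IPlugin
{
    // Assumed option value for the "Approved" choice; confirm against your schema.
    private const int ApprovedStatus = 100000002;

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // Target carries only the changed columns, so fetch the stored status value.
        var current = service.Retrieve(target.LogicalName, target.Id, new ColumnSet("new_status"));

        if (current.GetAttributeValue<OptionSetValue>("new_status")?.Value == ApprovedStatus)
        {
            throw new InvalidPluginExecutionException(
                "Cannot update approved records. Please contact your administrator to reopen the record before making changes.");
        }
    }
}
```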
B) Asynchronous plugins execute after the operation completes, meaning the record would already be updated before your validation runs. You cannot effectively prevent an operation that has already occurred, and attempting to reverse it through another update creates complexity and potential data inconsistency. Async plugins are inappropriate for validation that prevents operations.
C) Business rules have limitations including inability to execute for all update paths (they don’t run for API updates in some scenarios), limited conditional logic capabilities, and inability to provide custom error messages. While business rules can make fields read-only, they don’t provide the robust server-side validation with clear error messaging that plugins offer.
D) JavaScript web resources only work when users update through the form UI and can be bypassed through API calls, bulk updates, workflows, or other non-UI paths. Client-side validation should never be the sole implementation of critical business rules, as it’s not enforceable. Server-side plugin validation is required for reliable enforcement.
Question 84
You need to implement a solution where external systems can subscribe to receive notifications when specific Dataverse records change. Which feature should you configure?
A) Webhooks registered through service endpoints
B) Power Automate flows with HTTP actions
C) Azure Service Bus integration
D) Custom notification plugin posting to external endpoints
Answer: A
Explanation:
Webhooks registered through service endpoints provide the native Dataverse mechanism for pushing event notifications to external systems via HTTP. Service endpoints allow external systems to subscribe to Dataverse events by providing webhook URLs that Dataverse will call when specified events occur, such as record creates, updates, or deletes. This implements the publish-subscribe pattern for event-driven integration.
When you register a webhook service endpoint using the Plugin Registration Tool, you specify the external HTTP endpoint URL and then register steps defining which events should trigger webhook notifications. When the specified events occur, Dataverse automatically posts the execution context data in JSON format to the webhook URL, including details about the changed record, the operation performed, and other relevant context.
Webhooks provide low-latency event notifications without requiring external systems to poll for changes, support both synchronous and asynchronous execution, include retry logic for failed deliveries, and allow filtering to send only relevant events to subscribers. This is the recommended approach for event-driven integration where external systems need real-time notification of Dataverse changes.
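On the subscriber side, any HTTPS endpoint that accepts a POST will do. A bare-bones receiver sketch (an ASP.NET Core minimal API with an assumed route, shown only to illustrate the shape of the integration) might look like this:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.CreateBuilder(args).Build();

// Dataverse posts the serialized execution context (JSON) to the registered URL.
app.MapPost("/dataverse/webhook", async (HttpRequest request) =>
{
    using var reader = new StreamReader(request.Body);
    string payload = await reader.ReadToEndAsync();

    // A real subscriber would validate the authentication value configured on the
    // service endpoint (header, query string, or webhook key) before trusting the
    // call, then deserialize and process the execution context.
    Console.WriteLine(payload);

    return Results.Ok();
});

app.Run();
```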
B) Power Automate flows with HTTP actions can send notifications to external systems but introduce additional latency compared to direct webhooks, add another component to manage and monitor, and have different performance and scaling characteristics. While flows work for many scenarios, native webhooks provide more direct, lower-latency integration for event notifications.
C) Azure Service Bus integration is designed for reliable queued messaging in enterprise integration scenarios and is more complex than webhooks for simple HTTP notification requirements. Service Bus is excellent for guaranteed delivery and complex messaging patterns but introduces additional infrastructure and complexity when simple HTTP webhooks suffice.
D) Creating custom notification plugins that post to external endpoints is essentially reimplementing what webhooks provide out of the box. While this approach works, it requires custom code, testing, and maintenance for functionality that Dataverse’s webhook feature already provides in a supported, tested, and optimized implementation. Use built-in webhooks rather than custom solutions.
Question 85
You are developing a model-driven app where users should only see records they own plus records owned by users they manage in the organization hierarchy. Which security feature should you configure?
A) Hierarchical security model with Manager hierarchy
B) Security roles with Parent: Child Business Units access level
C) Share records with manager’s team
D) Custom plugin filtering records by hierarchy
Answer: A
Explanation:
Hierarchical security with Manager hierarchy is specifically designed for this scenario where users should access records they own plus records owned by their direct and indirect reports. When you enable hierarchical security for a table and configure it to use the Manager hierarchy, Dataverse automatically grants users access to all records owned by users below them in the management chain defined by the parentsystemuserid field on user records.
Hierarchical security works in conjunction with security roles, where the security role defines base privileges (like Read access at User level for records they own), and the hierarchy extends that access downward through the organization structure. This provides management visibility into their team’s data without requiring complex sharing rules or custom code to maintain access.
The configuration involves enabling hierarchy security at the organization level, selecting which hierarchy type to use (Manager or Position), enabling hierarchical security for specific tables where you want this behavior, and ensuring security roles grant appropriate base privileges. Once configured, the hierarchy automatically determines additional record access based on organizational relationships.
B) Security roles with Parent: Child Business Units access level grant access based on business unit hierarchy, not management hierarchy. Business units are organizational divisions (like departments or regions), while management hierarchy reflects reporting relationships between individual users. These are different structures, and business unit access doesn’t implement manager-subordinate record access.
C) Sharing records with manager’s teams requires manually creating teams, assigning team members, and configuring sharing rules. This creates administrative overhead, doesn’t automatically update when organizational structure changes, and doesn’t naturally represent the management hierarchy. Hierarchical security is the proper feature designed specifically for management visibility.
D) Creating custom plugins to filter records by hierarchy would require significant development effort, complex queries to traverse the management hierarchy, performance concerns with large organizations, and ongoing maintenance. This reimplements functionality that hierarchical security provides out of the box and is not the recommended approach when platform features exist.
Question 86
You are developing a plugin that performs complex calculations requiring decimal precision. The calculation results must match exactly between different executions. Which approach ensures consistent decimal calculations?
A) Use decimal type with explicit rounding at each step
B) Use double with Math.Round for final result
C) Use float for better performance
D) Convert to integer for calculations then divide
Answer: A
Explanation:
Using the decimal data type with explicit rounding at each calculation step ensures consistent, predictable results in financial and precision-critical calculations. The decimal type uses base-10 representation that accurately represents decimal fractions without the rounding errors inherent in binary floating-point types. Explicit rounding at each step using Math.Round with specified decimal places ensures deterministic results across different executions and environments.
When performing multi-step calculations, rounding intermediate results to appropriate precision prevents accumulation of tiny differences that could lead to inconsistent final results. For example, when calculating tax amounts, discounts, or financial totals, rounding each intermediate calculation to cents ensures that the same inputs always produce identical outputs regardless of execution environment or timing.
The decimal type provides 28-29 significant digits of precision, which is more than sufficient for business calculations while maintaining exact decimal representation. Combined with explicit rounding strategy, this approach guarantees consistency required for financial calculations, audit trails, and scenarios where exact reproducibility is essential. This pattern is standard practice in financial software development.
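A small worked example of the pattern, with illustrative rates and rounding to cents at each step:

```csharp
using System;

public static class PriceCalculator
{
    // Round to cents at every intermediate step so the same inputs
    // always produce the same output on every execution.
    public static decimal CalculateTotal(decimal unitPrice, int quantity,
        decimal discountRate, decimal taxRate)
    {
        decimal subtotal = Math.Round(unitPrice * quantity, 2, MidpointRounding.AwayFromZero);
        decimal discount = Math.Round(subtotal * discountRate, 2, MidpointRounding.AwayFromZero);
        decimal taxable  = subtotal - discount;
        decimal tax      = Math.Round(taxable * taxRate, 2, MidpointRounding.AwayFromZero);
        return taxable + tax;
    }
}

// Example: 3 units at 19.99 with a 10% discount and 8.25% tax
// gives subtotal 59.97, discount 6.00, tax 4.45, total 58.42.
```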
B) Using double introduces binary floating-point representation that cannot exactly represent many decimal values like 0.1, 0.01, or 0.3. Even with Math.Round for the final result, intermediate calculation differences can accumulate and potentially lead to inconsistent results. Double is inappropriate for financial calculations requiring exact decimal representation and consistency.
C) Float has even worse precision than double (approximately 7 significant digits versus 15-16 for double) and suffers from the same binary floating-point representation issues. Using float for calculations requiring consistency and precision is never appropriate, as the limited precision causes significant rounding errors even in simple calculations. Float should never be used for financial or precision-critical work.
D) Converting to integer for calculations by multiplying by a power of 10, performing integer arithmetic, then dividing back can work for some scenarios but becomes complex with multiple decimal places, different scales, and risks integer overflow with large values. This approach is error-prone and unnecessarily complex when decimal type provides proper decimal arithmetic natively.
Question 87
You need to create a canvas app that allows users to record audio notes and store them in Dataverse. Which control and data type combination should you use?
A) Microphone control with File column in Dataverse
B) Microphone control with Image column in Dataverse
C) Add media control with Multiple lines of text column
D) Microphone control with Single line of text for base64 encoding
Answer: A
Explanation:
The Microphone control in canvas apps captures audio recordings, and the File column data type in Dataverse is designed for storing file data including audio files. This combination provides the proper architecture for audio note functionality. The Microphone control records audio from the device’s microphone and provides the recorded audio as data that can be saved to Dataverse File columns using the Patch function.
File columns in Dataverse can store various file types including audio formats, support files up to 128 MB, maintain file metadata like filename and size, and provide efficient storage and retrieval. When you patch audio from the Microphone control to a File column, Dataverse handles the storage, and you can later retrieve and play the audio using the Audio control in canvas apps.
This approach leverages purpose-built components and data types designed specifically for media storage, provides efficient binary storage rather than inefficient text encoding, maintains file metadata for better management, and follows Microsoft’s recommended patterns for storing media in Power Apps solutions. The File data type was specifically added to Dataverse to support these scenarios.
B) Image columns are designed specifically for image data and have size limits optimized for images (maximum 30 MB). While you might technically be able to store audio data in an Image column, this misuses the column type, doesn’t provide appropriate metadata, and may cause issues with image processing features. File columns are the correct type for audio data.
C) Multiple lines of text columns store text data and are not designed for binary audio data. While you could encode audio as base64 text, this increases file size by approximately 33%, consumes text storage inefficiently, has performance implications for encoding/decoding, and misuses a text field for binary data. File columns provide proper binary storage.
D) Single line of text has a maximum length of 4,000 characters, which is far too small for audio recordings even when base64 encoded. A one-minute audio recording could be several megabytes, requiring millions of characters when encoded. This approach is completely impractical and shows fundamental misunderstanding of data types and audio storage requirements.
Question 88
You are implementing a solution where a plugin needs to query records using dynamic filter conditions passed from client code. Which approach prevents SQL injection-style attacks?
A) Use QueryExpression or FetchXML with parameterized conditions
B) Build FetchXML strings by concatenating user input
C) Use string concatenation to build QueryExpression filters
D) Execute raw SQL queries with input validation
Answer: A
Explanation:
Using QueryExpression or FetchXML with properly parameterized conditions prevents injection attacks by separating query structure from data values. When you use QueryExpression, you construct filter conditions using strongly-typed objects like ConditionExpression where values are passed as parameters, not concatenated into query strings. Similarly, when using FetchXML, you build the XML structure properly and insert values as attribute values, not as concatenated strings.
These Dataverse query APIs are designed to safely handle user input by treating all condition values as data rather than executable code. The platform properly escapes and validates values, preventing malicious input from altering query structure or accessing unauthorized data. This is analogous to using parameterized SQL queries or prepared statements in traditional database programming.
For example, when filtering by account name, you create a ConditionExpression with the attribute name, operator, and value as separate parameters. The Dataverse platform ensures the value is treated as literal data regardless of its content, preventing any injection-style attacks. This architectural approach eliminates entire classes of security vulnerabilities.
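A short sketch of that account-name filter, with the column names assumed:

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class AccountSearch
{
    public static EntityCollection FindByName(IOrganizationService service, string userSuppliedName)
    {
        var query = new QueryExpression("account")
        {
            ColumnSet = new ColumnSet("name", "accountnumber")
        };

        // The value is passed as a parameter of the ConditionExpression, so the
        // platform treats it strictly as data, never as part of the query structure.
        query.Criteria.AddCondition("name", ConditionOperator.Equal, userSuppliedName);

        return service.RetrieveMultiple(query);
    }
}
```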
B) Building FetchXML strings by concatenating user input is extremely dangerous and the exact pattern that creates injection vulnerabilities. Malicious input containing XML special characters or carefully crafted FetchXML fragments could alter query structure, bypass filters, or access unauthorized data. Never concatenate user input directly into FetchXML strings without proper XML encoding at minimum.
C) String concatenation to build QueryExpression filters doesn't fit the API's design, because QueryExpression is built from objects rather than strings. If you need to build filter logic dynamically, you must still construct proper ConditionExpression objects with parameterized values rather than assembling any part of the query from raw strings. This option describes an anti-pattern that should not be used.
D) Dataverse plugins cannot execute raw SQL queries. The platform provides IOrganizationService with QueryExpression and FetchXML as the query mechanisms, specifically designed to prevent SQL injection. You don’t have direct SQL access from plugins, which is a security feature. All queries must go through the organization service APIs that provide built-in protection.
Question 89
You need to create a model-driven app form that displays a timeline of activities and posts similar to the out-of-box timeline control but with custom filtering. Which approach should you use?
A) Use the timeline control with custom views
B) Create a custom PCF control replicating timeline functionality
C) Use a subgrid with activity entity
D) Embed a canvas app displaying activities
Answer: A
Explanation:
The timeline control in model-driven apps is a powerful, feature-rich component designed specifically for displaying activities, posts, notes, and custom activity types in chronological order. The control supports custom filtering through configuration of which activity types to display, custom views for filtering records, and settings for sorting and display options. This provides the timeline experience with customization through configuration rather than code.
When you add a timeline control to a form, you can configure which tables appear in the timeline (activities, posts, notes), create custom views to filter which records display, set default sorting, configure which actions users can perform, and customize the display format. This configuration-based customization handles most requirements for custom filtering without requiring custom development.
The timeline control provides excellent user experience with features like quick create forms, filtering by activity type, search within timeline, refresh functionality, and support for custom activities. It’s a mature component that has been refined over many versions and provides consistent behavior that users expect in Dynamics 365 applications. Use this built-in control before considering custom development.
B) Creating a custom PCF control to replicate timeline functionality would require months of development to recreate the rich feature set including activity types, posts, notes, attachments, quick create, filtering, searching, sorting, and all the user interactions. This massive development effort is unjustified when the timeline control provides customization through configuration.
C) Using a subgrid with the activity entity displays activities in a grid format but lacks the rich timeline visualization, doesn’t show posts or notes inline, doesn’t provide the chronological visual representation with dates, and misses many timeline-specific features like walls, posts, and integrated communication. Subgrids are for tabular data display, not timeline experiences.
D) Embedding a canvas app to display activities requires building custom UI for activities, handling all the activity types and their specific fields, implementing timeline visualization, and managing refresh and interaction logic. This custom development is unnecessary when the timeline control provides the needed functionality with configuration-based customization.
Question 90
You are developing a plugin that needs to access the pre-image of a record to compare old values with new values during an update operation. How should you access the pre-image data?
A) Register pre-image in plugin step and access via PreEntityImages collection
B) Query the record using Retrieve before processing Target
C) Access through InputParameters with "PreImage" key
D) Use AuditDetail to retrieve previous values
Answer: A
Explanation:
Registering a pre-image when you register the plugin step and accessing it through the PreEntityImages collection in the plugin execution context is the correct and most efficient approach for accessing pre-update values. A pre-image is a snapshot of the record’s state before the operation, automatically populated by the platform when you configure it during plugin registration in the Plugin Registration Tool.
When you register a plugin step on the Update message, you can add a pre-image by specifying an alias name and selecting which attributes to include in the image. During plugin execution, you access this pre-image through context.PreEntityImages["YourAliasName"], which provides an Entity object containing the field values as they existed before the update. This allows efficient comparison between old and new values without additional database queries.
Pre-images are efficient because the platform retrieves the data as part of the update operation anyway, so including it in the execution context adds minimal overhead. You should include only the specific attributes you need to check rather than all attributes, optimizing performance. This pattern is standard for update validation plugins that need to detect which fields changed or what the previous values were.
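A short sketch follows, assuming the step is registered on the account table's Update message with a pre-image aliased "PreImage" that includes the creditlimit column:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CompareCreditLimitPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!(context.InputParameters["Target"] is Entity target))
            return;

        // "PreImage" must match the alias configured on the step in the Plugin Registration Tool.
        if (!context.PreEntityImages.Contains("PreImage"))
            return;
        Entity preImage = context.PreEntityImages["PreImage"];

        Money oldLimit = preImage.GetAttributeValue<Money>("creditlimit");
        Money newLimit = target.GetAttributeValue<Money>("creditlimit");

        // Only react when the column is part of this update and its value actually changed.
        if (newLimit != null && oldLimit?.Value != newLimit.Value)
        {
            // ...comparison-based logic here, e.g. validation or an audit note...
        }
    }
}
```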
B) Querying the record using Retrieve before processing Target works but is less efficient than using pre-images. It requires an additional database query that the platform has already performed internally. Pre-images provide the same data with better performance. While this approach is sometimes necessary in specific scenarios, pre-images are the recommended pattern for accessing previous values.
C) InputParameters contains the Target entity with the new/changed values and other message-specific parameters, but it doesn't contain a "PreImage" key. Pre-images and post-images are in separate collections (PreEntityImages and PostEntityImages) in the execution context, not in InputParameters. This answer reflects misunderstanding of the execution context structure.
D) AuditDetail and audit records contain historical change information but accessing audit data requires separate queries to the audit table, introduces significant performance overhead, and only works if auditing is enabled for the table and fields. Audit is for historical tracking, not for real-time value comparison in plugins. Pre-images provide immediate access to previous values efficiently.