Microsoft PL-400 Power Platform Developer Exam Dumps and Practice Test Questions Set 7 Q91-105

Visit here for our full Microsoft PL-400 exam dumps and practice test questions.

Question 91

You need to implement a canvas app that works across multiple languages with translations for labels, messages, and field names. Which approach should you use?

A) Create language-specific variables populated based on User().Language

B) Use Dataverse choice column labels with translations

C) Create separate app versions for each language

D) Use Language() function to switch text controls

Answer: A

Explanation:

Creating language-specific variables that are populated based on the User().Language property is the standard pattern for implementing multi-language support in canvas apps. You create collections or variables containing translations for all text strings used in the app, detect the user’s language preference using User().Language, and load the appropriate translations. All labels, buttons, messages, and other text reference these variables rather than hard-coded strings.

The implementation typically involves creating a translation table in Dataverse or Excel containing language codes and translated strings, loading translations into a collection when the app starts based on the user’s language, and using these collection values throughout the app for all displayed text. When you need to display a label, you reference the collection with a formula such as LookUp(translationCollection, Key = "Welcome", Value) instead of hard-coding "Welcome".

This approach provides flexibility to add new languages without modifying app logic, centralizes translations for easier management, supports dynamic language switching if needed, and scales well to large applications with many translated strings. It’s the recommended pattern for building truly multi-language canvas apps.

B) Dataverse choice column labels do support translations and will automatically display in the user’s language in model-driven apps. However, this only helps with choice values, not with all the other text in canvas apps like button labels, instructions, messages, and field labels. You still need a comprehensive translation approach for all app text, which choice translations alone don’t provide.

C) Creating separate app versions for each language creates massive maintenance overhead because every change must be replicated across all versions, makes it difficult to add new languages, creates version management complexity, and violates the principle of having a single source of truth. This approach is impractical and should never be used when proper multi-language patterns exist.

D) There is no Language() function in Power Apps that switches text controls automatically. While User().Language detects the user’s language preference, you must implement the logic to load and display appropriate translations. This answer suggests a non-existent automatic translation feature. Multi-language support requires deliberate implementation of translation patterns.

Question 92

You are developing a plugin that needs to execute only when specific columns on a record are updated, and should ignore updates to other columns. The plugin should not execute at all if non-relevant columns are updated. How should you implement this?

A) Register plugin step with filtering attributes for relevant columns

B) Check Target entity in plugin code for relevant attributes

C) Use shared variables to communicate which columns changed

D) Register separate plugin steps for each column

Answer: A

Explanation:

Registering the plugin step with filtering attributes for the relevant columns ensures the plugin only executes when those specific columns are included in the update operation. This filtering happens at the platform level before your plugin code runs, providing optimal performance by preventing unnecessary plugin executions when users update non-relevant fields. The Plugin Registration Tool allows you to select specific attributes as filters during step registration.

When you specify filtering attributes, Dataverse checks if any of the specified attributes are present in the update request. If none of the filtered attributes are being updated, the plugin step doesn’t execute at all, saving processing resources and execution time. If at least one filtered attribute is present, the plugin executes. This platform-level filtering is more efficient than running the plugin and checking in code.
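A minimal sketch of how this looks in code, assuming a hypothetical custom column new_status is registered as the filtering attribute in the Plugin Registration Tool; the plugin simply reads the filtered column from the Target, confirming it is present before acting on it:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class StatusChangePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // The step is registered on Update with "new_status" as a filtering attribute,
        // so the platform only invokes this code when that column is part of the request.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target &&
            target.Contains("new_status"))
        {
            var newStatus = target.GetAttributeValue<OptionSetValue>("new_status");
            // ...business logic that reacts to the status change...
        }
    }
}
```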

This approach is essential for performance optimization in environments with many plugins and frequent record updates. By ensuring plugins only run when relevant data changes, you reduce overall system load, improve response times, and prevent unnecessary processing. Filtering attributes should be configured for any plugin that doesn’t need to run on every update.

B) Checking the Target entity in plugin code means your plugin executes for every update to the table, and then your code determines whether to proceed based on which attributes are present. While this works, it’s inefficient because the plugin is instantiated, context is prepared, and your code runs even when no relevant columns changed. Filtering attributes prevents execution entirely.

C) Shared variables are for passing data between plugins in the execution pipeline, not for determining which columns changed or filtering plugin execution. While you could use shared variables to communicate which fields a previous plugin processed, this doesn’t prevent your plugin from executing unnecessarily. Filtering attributes is the proper mechanism for execution control.

D) Registering separate plugin steps for each column would work but creates administrative overhead with many plugin steps to manage, makes the solution harder to understand and maintain, and doesn’t provide benefits over a single plugin step with multiple filtering attributes. One step with multiple attribute filters is cleaner and more maintainable.

Question 93

You need to create a Power Apps portal page where authenticated users can update their own contact information but cannot see or update other users’ information. Which security configuration should you use?

A) Table permissions with Contact scope for the contact table

B) Web page permissions restricting access by role

C) Entity form permissions with user-specific filters

D) Table permissions with Global scope and custom filtering

Answer: A

Explanation:

Table permissions configured with Contact scope automatically filter data so that authenticated portal users can only access records directly related to their contact record. For the contact table, Contact scope means users can only see and edit their own contact record, which is exactly what the requirement describes. This provides secure, automatic filtering without custom code or complex configuration.

When you create table permissions for the contact table with Contact scope and associate them with the web role(s) assigned to authenticated users, the portal automatically enforces that users can only access their own contact record. You configure the specific privileges (Read, Write, Create, Delete) to control what operations users can perform on their own record.

This built-in scoping mechanism is fundamental to portal security and handles the common requirement of users managing their own profile information. Combined with entity forms configured for the contact table, this provides a secure self-service profile management experience where users cannot accidentally or maliciously access other users’ data.

B) Web page permissions control access to portal pages themselves, determining who can view specific pages. They don’t provide data-level filtering to ensure users only see their own contact records. Web page permissions and table permissions work together but serve different purposes — pages versus data access.

C) Entity form permissions don’t exist as a separate concept in portal security. Entity forms are configured with settings but rely on table permissions for actual data access control. The filtering happens through table permissions with appropriate scope settings (Contact, Account, Parent, Self, Global), not through entity form configuration.

D) Table permissions with Global scope allow users to access all records of the table (subject to privilege settings), which is the opposite of what’s needed. Global scope doesn’t restrict access to only the user’s own record. Custom filtering through code would be a complex workaround for functionality that Contact scope provides out of the box.

Question 94

You are implementing a plugin that creates related child records when a parent record is created. The child records require values from the parent record including the parent’s GUID. When should the plugin retrieve the parent’s GUID?

A) Access the Id property from Target entity in InputParameters

B) Query the parent record after creation using RetrieveMultiple

C) Access the Id property from OutputParameters after PostOperation

D) Use the PrimaryEntityId from execution context

Answer: C

Explanation:

In a Create operation, the record’s GUID is generated by Dataverse and is not available in the Target entity during PreOperation or PreValidation stages. The GUID becomes available in the OutputParameters collection after the record is created in the database. In PostOperation stage, you can access OutputParameters["id"] which contains the newly created record’s GUID. This is the proper way to get the ID for creating related child records in PostOperation.

The pattern for creating related records in PostOperation involves registering the plugin on PostOperation stage of the Create message, accessing the newly created parent record’s GUID from context.OutputParameters["id"], using this GUID to set lookup fields on child records you’re creating, and then executing Create requests for the child records. This ensures you have the valid parent GUID to establish relationships.
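A minimal sketch of this PostOperation pattern, assuming a hypothetical child table new_childrecord with a lookup column new_parentid pointing back to the parent:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CreateChildrenPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Registered on PostOperation of Create: the parent now exists and its GUID
        // is available in OutputParameters["id"].
        var parentId = (Guid)context.OutputParameters["id"];
        var parentLogicalName = context.PrimaryEntityName;

        // Hypothetical child table and lookup column names.
        var child = new Entity("new_childrecord");
        child["new_parentid"] = new EntityReference(parentLogicalName, parentId);
        service.Create(child);
    }
}
```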

PostOperation stage is appropriate because the parent record has been successfully created and committed to the database, the GUID is available, and you can now create properly related child records. If you registered in PreOperation, the parent GUID wouldn’t exist yet because the record hasn’t been created. PostOperation is the correct stage for creating related records that need to reference the parent.

A) The Target entity in InputParameters during Create operations does not have the Id property set because the GUID hasn’t been generated yet (unless explicitly provided by the caller, which is uncommon). Attempting to access Target.Id in PreOperation will return an empty GUID. The ID is assigned during the database insert operation, so it’s only available after that occurs.

B) Querying the parent record after creation using RetrieveMultiple would require knowing some unique identifier to query by. While possible if you have alternate keys or unique fields, this adds unnecessary complexity and a database query when the GUID is readily available in OutputParameters. This approach is inefficient compared to accessing OutputParameters.

D) The execution context does have a PrimaryEntityId property, but this is typically the ID of the record the plugin is registered on. For Create operations, this might not be populated until after the record is created. The reliable location to get the newly created record’s ID in PostOperation is OutputParameters["id"], which is guaranteed to contain the created record’s GUID.

Question 95

You need to implement a canvas app that displays data from multiple unrelated Dataverse tables with complex filtering that exceeds delegation limits. Which approach provides the best performance?

A) Create a Power Automate flow that consolidates data into a single collection

B) Use multiple data sources with delegable queries and combine results client-side

C) Create a view or virtual table that joins the data server-side

D) Load all data into collections and filter using non-delegable functions

Answer: C

Explanation:

Creating a view or virtual table that performs the data consolidation and filtering server-side is the most performant approach for complex scenarios involving multiple tables and complex filtering. By joining or aggregating data at the database level, you reduce the amount of data transferred to the client, leverage database optimization for joins and filtering, and can then query the view using delegable operations from the canvas app.

For related tables, you can create Dataverse views that join tables using relationships, apply filtering at the database level, and expose the consolidated data as a queryable view. For more complex scenarios or when joining unrelated tables, you might create a custom data entity, use stored procedures exposed through custom APIs, or implement virtual tables with custom data providers that consolidate data from multiple sources.

This architecture moves the complexity to the server where it can be handled efficiently with proper database query optimization, returns only the consolidated result set to the canvas app reducing network traffic and client-side processing, and allows the app to use delegable queries against the consolidated view working with large datasets efficiently. This is the enterprise architecture pattern for complex data scenarios.

A) Using Power Automate flow to consolidate data introduces significant latency (flows take seconds to execute), doesn’t provide real-time data as flows must be triggered and complete before data is available, introduces complexity with flow management and error handling, and isn’t suitable for interactive app scenarios where users expect immediate data display. Flows are for automation, not real-time data consolidation.

B) Using multiple data sources with delegable queries and combining results client-side still requires transferring all the data from each source to the client and performing the complex filtering and joining in the app. This approach faces data row limit constraints on each source (2000 records maximum), performs poorly with large datasets, and consumes significant client device resources. Server-side consolidation is more efficient.

D) Loading all data into collections is limited by the 2000 record data row limit per source and would fail entirely if you need more data than that limit allows. Even if data fits within limits, loading large amounts of data and performing complex client-side filtering creates poor performance, long load times, and high memory usage. This approach doesn’t scale and should be avoided.

Question 96

You are developing a plugin that needs to send sensitive data to an external API over HTTPS. The API requires client certificate authentication. How should you configure the plugin for certificate authentication?

A) Load certificate from Azure Key Vault or certificate store and attach to HttpClient

B) Include certificate file in plugin assembly and load at runtime

C) Pass certificate as base64 string in plugin configuration

D) Store certificate in Dataverse attachment and retrieve in plugin

Answer: A

Explanation:

Loading the client certificate from Azure Key Vault or the server certificate store and attaching it to the HttpClient is the secure, enterprise-grade approach for certificate-based authentication in plugins. Azure Key Vault provides secure certificate storage with access control and audit trails. For on-premises deployments or specific scenarios, certificates can be installed in the server’s certificate store and accessed by the plugin using certificate thumbprint.

The implementation involves authenticating to Azure Key Vault using managed identity or certificate-based authentication, retrieving the client certificate, creating an HttpClientHandler with the certificate attached to its ClientCertificates collection, and using this handler to create the HttpClient for making API calls. The certificate is securely stored and accessed only by authorized code.
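A minimal sketch of attaching the certificate to HttpClient, assuming the certificate bytes and password have already been retrieved securely (for example from Azure Key Vault); the endpoint URL is a placeholder:

```csharp
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;

public static class SecureApiClient
{
    // certificateBytes is assumed to have been retrieved from Azure Key Vault
    // (or loaded from the server certificate store by thumbprint).
    public static async Task<string> CallApiAsync(byte[] certificateBytes, string certificatePassword)
    {
        var clientCertificate = new X509Certificate2(certificateBytes, certificatePassword);

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(clientCertificate);

        using (var client = new HttpClient(handler))
        {
            // Hypothetical endpoint; replace with the actual API URL.
            var response = await client.GetAsync("https://api.example.com/secure-data");
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```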

This approach ensures certificates are managed through proper enterprise certificate management processes, can be rotated without redeploying code, have appropriate access controls and monitoring, and never exist as plaintext files in source control or deployments. Certificate-based authentication requires proper certificate lifecycle management that Azure Key Vault or certificate stores provide.

B) Including certificate files in the plugin assembly creates security risks because certificates would be embedded in the DLL, visible to anyone with access to the assembly file, difficult to rotate without recompiling and redeploying, exposed in source control if not properly handled, and violates security best practices. Certificates should never be embedded in application binaries.

C) Passing certificates as base64 strings in plugin configuration stores them in the Dataverse database where they might be visible in solution exports, doesn’t provide the same security controls as Key Vault, makes certificate rotation more difficult, and exposes certificates to anyone with access to plugin registration. This is more secure than embedding in assembly but far less secure than Key Vault.

D) Storing certificates in Dataverse attachments exposes them through the database, makes them accessible to users with appropriate privileges, complicates access control, and doesn’t follow certificate management best practices. Certificates should be managed through dedicated secret/certificate management systems like Azure Key Vault, not stored as regular data in business applications.

Question 97

You need to create a model-driven app where certain fields should be visible only to users with specific security roles. The visibility should be enforced at the server level. Which approach should you use?

A) Enable field-level security on fields and create field security profiles

B) Use JavaScript to show/hide fields based on user roles

C) Create multiple forms assigned to different security roles

D) Use business rules with role-based conditions

Answer: A

Explanation:

Field-level security (column-level security) enforced at the server level ensures that specific fields are only visible to users who have been granted access through field security profiles. This security is enforced by the platform across all access methods including UI, API, reports, and exports, making it true security rather than just UI-level hiding. When field security is enabled on a field, users without appropriate field security profile permissions see the field as empty or unavailable.

The configuration process involves enabling security on specific fields in the table definition, creating field security profiles that define which users or teams can create, read, or update those secured fields, and assigning users to the appropriate profiles. Only users assigned to profiles that grant Read access can see the field values, ensuring server-side enforcement.

This approach is the only one that provides true security enforcement, as it operates at the data access layer regardless of how users attempt to access the data. JavaScript hiding can be bypassed, multiple forms provide different views but don’t enforce security, and business rules cannot evaluate user roles. Field-level security is the proper mechanism for role-based field visibility requirements.

B) JavaScript show/hide based on user roles only affects the form UI and can be easily bypassed through API access, advanced find, reports, exports, or by users who disable JavaScript. Client-side hiding is not security and should never be relied upon for sensitive data protection. Server-side field-level security is required for actual security enforcement.

C) Creating multiple forms assigned to different security roles shows different field sets to different roles but doesn’t prevent users from accessing the data through other means like API, views, advanced find, or reports. Forms control UI presentation but don’t enforce data-level security. Users could still see the field values through non-form interfaces without proper field-level security.

D) Business rules cannot evaluate security roles or user roles in their conditions. Business rules work with record data, not user context. They cannot implement role-based field visibility. This capability simply doesn’t exist in business rules — they’re designed for record-based business logic, not user-role-based access control.

Question 98

You are implementing a plugin that needs to call multiple external APIs sequentially where each API call depends on the result of the previous call. The total execution time should be minimized. Which approach should you use?

A) Sequential async/await calls for each API

B) Parallel execution using Task.WhenAll

C) Create separate threads for each API call

D) Use synchronous HttpClient calls in sequence

Answer: A

Explanation:

Sequential async/await calls for each API when there are dependencies between calls is the correct approach. Since each API call depends on results from the previous call, they cannot be executed in parallel. Using async/await allows the plugin to efficiently wait for each API response without blocking threads, minimizing resource usage while maintaining the required sequential execution order.

The async/await pattern is specifically designed for I/O-bound operations like HTTP API calls. When you await an API call, the calling thread is released back to the thread pool and can handle other work while waiting for the response. Once the response arrives, execution continues with processing the result and making the next dependent API call. This pattern provides optimal resource utilization for sequential dependent operations.

For example, if Call B requires data from Call A’s response, and Call C requires data from Call B’s response, you must execute them sequentially. Using await for each call ensures efficient thread utilization while maintaining the correct execution order. This is more efficient than synchronous blocking calls and correctly handles the dependency requirement that parallel execution cannot satisfy.
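A minimal sketch of that dependent chain, using hypothetical endpoint URLs; each step awaits the previous result before issuing the next call:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class DependentApiCalls
{
    public static async Task<string> RunAsync(HttpClient client)
    {
        // Call A: hypothetical endpoint returning data that Call B needs.
        var resultA = await client.GetStringAsync("https://api.example.com/step-a");

        // Call B depends on Call A's result.
        var responseB = await client.PostAsync(
            "https://api.example.com/step-b",
            new StringContent(resultA));
        var resultB = await responseB.Content.ReadAsStringAsync();

        // Call C depends on Call B's result.
        var responseC = await client.PostAsync(
            "https://api.example.com/step-c",
            new StringContent(resultB));
        return await responseC.Content.ReadAsStringAsync();
    }
}
```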

B) Task.WhenAll executes tasks in parallel and waits for all to complete. This approach is only suitable when calls are independent of each other. Since the requirement specifies that each call depends on the previous call’s result, parallel execution is not possible. You cannot start Call B until Call A completes and provides the needed data. Task.WhenAll doesn’t fit this sequential dependency pattern.

C) Creating separate threads for sequential dependent calls doesn’t help because each call must wait for the previous one anyway. Threading doesn’t reduce total execution time when operations are sequential and dependent. Additionally, manually managing threads is more complex and less efficient than async/await patterns for I/O-bound operations like HTTP calls.

D) Using synchronous HttpClient calls works but blocks the calling thread during each API call, which is inefficient resource usage. Synchronous blocking prevents the thread from doing other work while waiting for network I/O. Async/await provides better thread utilization without changing the sequential execution order, making it superior to synchronous calls for HTTP operations.

Question 99

You need to implement a canvas app that displays real-time stock price data that updates every few seconds. The data comes from an external API. Which approach provides the most efficient implementation?

A) Timer control triggering custom connector calls at intervals

B) Power Automate flow polling API and updating Dataverse

C) WebSocket connection through custom PCF control

D) Continuous loop in App OnStart refreshing data

Answer: A

Explanation:

A timer control configured to trigger at short intervals (minimum one second) that calls a custom connector to the external stock price API provides a straightforward, supported solution for near real-time data updates in canvas apps. The timer’s OnTimerEnd event executes the custom connector action, retrieves updated stock prices, and updates app variables or collections that the UI displays.

This approach balances functionality with simplicity, using built-in canvas app capabilities without requiring complex custom development. You can adjust the timer interval based on how current the data needs to be, implement error handling for API failures, and display loading indicators during refreshes. The custom connector wraps the external stock price API, handling authentication and request/response formatting.

While not true real-time push updates, timer-based polling every few seconds provides acceptable user experience for most stock price display scenarios. This pattern is commonly used in canvas apps for displaying frequently changing data and is well-supported with clear implementation patterns and good performance characteristics when properly implemented.

B) Using Power Automate to poll the API and write to Dataverse introduces unnecessary components and latency. The flow must poll the API, write to Dataverse, then the app must refresh from Dataverse — adding delays and complexity. Additionally, frequent polling by flows consumes flow runs and may hit limits. Direct API calls from the app provide faster updates with less complexity.

C) Canvas apps don’t have native WebSocket support, and implementing WebSocket connections through custom PCF controls requires significant development complexity including maintaining persistent connections, handling reconnection logic, dealing with cross-origin issues, and managing connection lifecycle. This level of complexity is rarely justified when timer-based polling meets requirements.

D) Creating a continuous loop in App OnStart would block the app’s startup and freeze the UI. Canvas apps are not designed for continuous background execution loops. The App OnStart should complete quickly to allow the app to become interactive. Timer controls are specifically designed for periodic execution, making them the appropriate mechanism for recurring updates.

Question 100

You are developing a plugin that updates records based on complex business rules. The plugin logic requires unit testing before deployment. Which approach enables effective unit testing of plugin code?

A) Extract business logic into separate testable classes, mock IOrganizationService

B) Deploy to development environment and test with actual data

C) Use Plugin Registration Tool profiler to test plugin execution

D) Write integration tests that call Web API endpoints

Answer: A

Explanation:

Extracting business logic into separate testable classes and mocking the IOrganizationService interface enables true unit testing of plugin code in isolation. This architectural pattern separates the plugin class (which handles execution context and platform integration) from business logic classes (which contain the actual rules and calculations). Business logic classes accept IOrganizationService as a dependency, allowing you to inject mock implementations during testing.

Using mocking frameworks like Moq or FakeXrmEasy, you can create mock IOrganizationService instances that return predefined test data without requiring actual Dataverse connections. This allows fast, repeatable unit tests that verify business logic correctness across various scenarios including edge cases and error conditions. Tests run in milliseconds and can be automated in build pipelines.

This pattern follows established software design principles: separation of concerns, dependency injection, and testable design. The business logic becomes independent of the Dataverse platform, making it easier to test, maintain, and potentially reuse. The plugin class becomes a thin integration layer that delegates to well-tested business logic classes, improving overall code quality and maintainability.
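A minimal sketch of this separation, using Moq and xUnit with hypothetical class, table, and column names:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Moq;
using Xunit;

// Business logic extracted from the plugin class so it can be tested in isolation.
public class DiscountCalculator
{
    private readonly IOrganizationService _service;
    public DiscountCalculator(IOrganizationService service) => _service = service;

    public decimal GetDiscount(Guid accountId)
    {
        var account = _service.Retrieve("account", accountId, new ColumnSet("creditlimit"));
        var creditLimit = account.GetAttributeValue<Money>("creditlimit")?.Value ?? 0m;
        return creditLimit > 100000m ? 0.10m : 0.05m;
    }
}

public class DiscountCalculatorTests
{
    [Fact]
    public void HighCreditLimit_GetsTenPercentDiscount()
    {
        var accountId = Guid.NewGuid();
        var account = new Entity("account", accountId);
        account["creditlimit"] = new Money(250000m);

        // Mock IOrganizationService so no Dataverse connection is needed.
        var serviceMock = new Mock<IOrganizationService>();
        serviceMock
            .Setup(s => s.Retrieve("account", accountId, It.IsAny<ColumnSet>()))
            .Returns(account);

        var calculator = new DiscountCalculator(serviceMock.Object);
        Assert.Equal(0.10m, calculator.GetDiscount(accountId));
    }
}
```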

B) Deploying to development environment and testing with actual data is integration testing, not unit testing. While important, this approach is slow (requires deployment and environment access), difficult to repeat consistently (data state changes), hard to test error conditions and edge cases, and provides slow feedback. Unit tests should run in seconds without external dependencies, which this approach doesn’t provide.

C) Plugin Registration Tool profiler is excellent for debugging plugins in real environments but doesn’t replace unit testing. Profiler testing requires deployed plugins, real environment access, and manual test execution. It’s valuable for troubleshooting and integration testing but doesn’t provide the fast, automated, repeatable testing that proper unit tests offer for development.

D) Integration tests calling Web API endpoints test the entire stack including Dataverse, plugins, and network, but they’re slow, require environment access, are difficult to set up for specific test scenarios, and provide slow feedback. They’re valuable for verifying end-to-end functionality but complement rather than replace unit tests. Unit tests should verify business logic independently before integration testing.

Question 101

You are implementing a solution where a model-driven app needs to display aggregated data from thousands of child records (sum, count, average) on the parent record form. The aggregation should update in real-time as child records change. Which approach provides the best performance?

A) Use rollup fields with appropriate recalculation frequency

B) Calculate aggregates using JavaScript on form load

C) Create a plugin to calculate and store aggregates on parent

D) Use Power Automate to calculate aggregates periodically

Answer: A

Explanation:

Rollup fields in Dataverse are specifically designed for calculating aggregate values (sum, count, min, max, average) from related child records efficiently. Rollup fields leverage platform-optimized calculation engines that handle large numbers of child records performantly, cache calculated values to avoid repeated calculations, and automatically recalculate when child records change based on configured schedules or triggers.

When you define a rollup field, you specify the related table, the relationship, filter conditions for which child records to include, and the aggregation operation. The platform handles all the complexity of querying child records, performing calculations, updating the parent record, and scheduling recalculations. For thousands of child records, this platform-level optimization significantly outperforms custom calculation approaches.

Rollup fields support both scheduled recalculation (hourly by default, configurable) and on-demand calculation when the parent record is opened or refreshed. For real-time requirements, you can configure more frequent recalculation or trigger manual recalculation through the CalculateRollupFieldRequest API. This provides the right balance between data currency and system performance for aggregate scenarios.
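For the on-demand case, a minimal sketch of triggering recalculation through CalculateRollupFieldRequest, with hypothetical table and column names:

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class RollupRefresh
{
    // Forces immediate recalculation of a rollup column on one parent record,
    // rather than waiting for the scheduled recalculation job.
    public static void Recalculate(IOrganizationService service, Guid parentId)
    {
        var request = new CalculateRollupFieldRequest
        {
            Target = new EntityReference("account", parentId),   // hypothetical parent table
            FieldName = "new_totalchildamount"                   // hypothetical rollup column
        };

        var response = (CalculateRollupFieldResponse)service.Execute(request);
        var updatedParent = response.Entity; // parent record containing the recalculated value
    }
}
```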

B) Calculating aggregates using JavaScript on form load would require querying thousands of child records to the client browser, performing calculations client-side, and doing this every time the form loads. This approach has terrible performance with large child record sets, consumes excessive network bandwidth, may hit query limits, and creates unacceptable form load times. Client-side aggregation doesn’t scale.

C) Creating a plugin to calculate and store aggregates can work but requires custom development to handle all scenarios including child record creates, updates, deletes, and ensuring calculations stay current. This custom solution must handle performance optimization, error handling, and concurrent updates. Rollup fields provide this functionality out-of-the-box with better platform optimization.

D) Power Automate calculating aggregates periodically introduces latency (aggregates are only as current as the last flow run), consumes flow runs, has performance limitations with large datasets, and adds unnecessary complexity when rollup fields provide built-in aggregate functionality. Flows are better for complex workflows rather than simple aggregations that rollup fields handle natively.

Question 102

You need to create a canvas app that allows users to draw diagrams with shapes, lines, and annotations. Which control provides the best functionality for this requirement?

A) Pen input control for freehand drawing

B) Custom PCF control with HTML5 canvas

C) Image control with upload functionality

D) HTML text control with SVG rendering

Answer: B

Explanation:

A custom PCF control using HTML5 canvas provides the necessary functionality for creating interactive diagram drawing applications with shapes, lines, text annotations, and manipulation capabilities. HTML5 canvas offers low-level drawing APIs that support shapes (rectangles, circles, paths), lines with various styles, text rendering, image manipulation, and event handling for interactive drawing experiences.

Building a diagramming PCF control involves implementing drawing tools for different shape types, handling mouse/touch events for drawing and manipulating shapes, providing selection and editing capabilities, implementing undo/redo functionality, and serializing the diagram state for saving to Dataverse. While this requires significant development, it provides the full-featured diagramming capability that the requirement describes.

Alternatively, you could use existing diagramming libraries like Fabric.js, Konva.js, or draw2d wrapped in a PCF control to accelerate development. These libraries provide rich drawing features, shape libraries, manipulation tools, and export capabilities. PCF controls allow integration of these powerful JavaScript libraries into canvas apps with proper data binding and Power Apps integration.

A) Pen input control captures freehand drawing and signatures but doesn’t provide structured shape creation, lines with connection points, text annotations, or shape manipulation capabilities. It’s designed for capturing handwritten input as images, not for creating structured diagrams with distinct shapes and objects that can be individually edited.

C) Image control with upload functionality only displays static images and doesn’t provide any drawing or creation capabilities. Users would need to create diagrams externally and upload them, which doesn’t meet the requirement for in-app diagram creation. This approach provides no interactive drawing functionality within the canvas app.

D) HTML text control displays static HTML content and doesn’t support interactive drawing or manipulation. While you could theoretically display SVG markup, HTML text controls don’t provide the interactivity needed for creating and editing diagrams. They’re for displaying content, not for interactive graphic creation.

Question 103

You are developing a plugin that needs to handle different business logic for different business units. The logic varies significantly between business units. Which design pattern should you use?

A) Strategy pattern with business unit-specific strategy classes

B) Single plugin with large if/else blocks checking business unit

C) Separate plugin assemblies for each business unit

D) Configuration table with business unit-specific rules

Answer: A

Explanation:

The Strategy pattern is ideal for scenarios where business logic varies based on context (like business unit). This design pattern defines a family of algorithms (business logic implementations), encapsulates each one in separate classes, and makes them interchangeable. The plugin determines which business unit the operation applies to and instantiates the appropriate strategy class to handle the logic.

Implementation involves creating an interface or abstract base class defining the business logic methods, implementing concrete strategy classes for each business unit’s specific logic, and having the plugin select and instantiate the appropriate strategy based on the business unit. This approach provides clean separation of concerns, makes each business unit’s logic easy to test independently, simplifies maintenance as logic is isolated in dedicated classes, and makes it easy to add new business units.

The Strategy pattern keeps the plugin class clean and focused on execution context handling while delegating business logic to specialized classes. This architectural approach scales well as the number of business units or complexity of logic grows, follows object-oriented design principles, and makes the codebase more maintainable than monolithic conditional logic.
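A minimal sketch of the Strategy pattern in this context, with hypothetical business unit names and interface:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;

public interface IBusinessUnitStrategy
{
    void Apply(Entity target, IOrganizationService service);
}

public class SalesUnitStrategy : IBusinessUnitStrategy
{
    public void Apply(Entity target, IOrganizationService service)
    {
        // Sales-specific business rules...
    }
}

public class ServiceUnitStrategy : IBusinessUnitStrategy
{
    public void Apply(Entity target, IOrganizationService service)
    {
        // Service-specific business rules...
    }
}

public static class StrategyFactory
{
    private static readonly Dictionary<string, Func<IBusinessUnitStrategy>> Strategies =
        new Dictionary<string, Func<IBusinessUnitStrategy>>(StringComparer.OrdinalIgnoreCase)
        {
            { "Sales", () => new SalesUnitStrategy() },
            { "Service", () => new ServiceUnitStrategy() }
        };

    // The plugin resolves the current business unit name, then delegates to the matching strategy.
    public static IBusinessUnitStrategy For(string businessUnitName) =>
        Strategies.TryGetValue(businessUnitName, out var create)
            ? create()
            : throw new InvalidPluginExecutionException($"No strategy configured for business unit '{businessUnitName}'.");
}
```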

B) A single plugin with large if/else blocks checking business unit creates difficult-to-maintain code that violates the Single Responsibility Principle, makes testing complex as all logic is intermingled, becomes harder to understand as logic grows, and increases risk of bugs when modifying logic for one business unit affects others. This monolithic approach should be avoided in favor of proper design patterns.

C) Creating separate plugin assemblies for each business unit creates deployment and maintenance overhead with multiple assemblies to manage, complicates solution packaging, makes shared code management difficult, and is architecturally excessive when different logic can be handled through design patterns within a single well-structured assembly. Separation at the class level is more appropriate than assembly level.

D) A configuration table with business unit-specific rules works for simple configurable logic but doesn’t handle "significantly varying" complex business logic well. Configuration-driven approaches are excellent for parameter-driven logic but can’t easily represent complex algorithms, conditional branches, integrations, and sophisticated business rules that require full programming language capabilities. Strategy pattern provides the needed flexibility.

Question 104

You need to implement a solution where users can upload large files (100MB+) to Dataverse through a canvas app. Which approach handles large files most effectively?

A) Use File column data type with direct upload from canvas app

B) Chunk files and upload in segments using custom connector

C) Upload to SharePoint first then link from Dataverse

D) Convert to base64 and store in Multiple lines of text

Answer: C

Explanation:

Uploading large files to SharePoint first and then linking from Dataverse is the recommended architecture for handling files over 128MB (the maximum File column size) and provides better performance even for smaller large files. SharePoint is specifically designed and optimized for large file storage with features like chunked upload, version control, check-in/check-out, and efficient binary storage.

This approach involves configuring SharePoint document management integration with Dataverse, uploading files through the SharePoint connector in canvas apps (which handles large files efficiently), and automatically creating document location records that link the Dataverse record to the SharePoint file. Users can then access files through Dataverse forms while files are actually stored in SharePoint’s optimized storage.

SharePoint provides superior large file handling with support for files up to terabytes in size, efficient chunked upload that resumes on failure, version history and co-authoring capabilities, and integration with Microsoft 365 features like search and compliance. For large file scenarios, leveraging SharePoint’s purpose-built file storage is more effective than trying to store everything in Dataverse File columns.

A) File column data type in Dataverse has a maximum size of 128MB per file. For files larger than this limit, File columns cannot be used at all. Even for files under the limit, uploading very large files through canvas apps can face timeout and performance issues. File columns are suitable for moderate-sized files but not optimized for 100MB+ files.

B) Chunking files and uploading in segments through custom connector is technically possible but requires significant custom development to implement chunking logic, handle upload resumption on failure, reassemble chunks server-side, and manage error scenarios. This complexity is unnecessary when SharePoint provides optimized large file handling out of the box.

D) Converting large files to base64 and storing in text fields is completely impractical for 100MB+ files. Base64 encoding increases size by ~33%, making a 100MB file ~133MB of text. Multiple lines of text fields have limits, and storing huge base64 strings creates performance problems, consumes excessive storage, and is a fundamental misuse of text fields. This should never be done for large files.

Question 105

You are developing a plugin that needs to validate data against business rules stored in an external system via API. The validation is required before saving the record. If validation fails, the save should be prevented. Which stage and exception handling should you use?

A) PreOperation stage, throw InvalidPluginExecutionException if validation fails

B) PostOperation stage, delete record if validation fails

C) PreValidation stage, throw Exception if validation fails

D) PreOperation stage, set status field to indicate validation failure

Answer: A

Explanation:

PreOperation stage with InvalidPluginExecutionException is the correct combination for validation that prevents save operations. PreOperation executes after security checks but before the database transaction commits, providing the opportunity to call external validation APIs and prevent the save if validation fails. InvalidPluginExecutionException is specifically designed to communicate validation failures to users with clear error messages.

When your plugin calls the external validation API in PreOperation and receives a validation failure response, throwing InvalidPluginExecutionException with a descriptive message prevents the record from being saved, rolls back the transaction automatically, and displays the validation error message to users. This provides immediate feedback about why the operation was blocked.
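A minimal sketch of such a PreOperation validation plugin; the external API call is shown as a placeholder method:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class ExternalValidationPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target") ||
            !(context.InputParameters["Target"] is Entity target))
        {
            return;
        }

        // Placeholder for the call to the external business rules API.
        var validationError = CallExternalValidationApi(target);

        if (!string.IsNullOrEmpty(validationError))
        {
            // Registered in PreOperation: throwing here blocks the save,
            // rolls back the transaction, and surfaces the message to the user.
            throw new InvalidPluginExecutionException($"Validation failed: {validationError}");
        }
    }

    private static string CallExternalValidationApi(Entity target)
    {
        // Hypothetical: call the external rules API over HTTPS and return an error
        // message when the record violates a rule, or null when the record is valid.
        return null;
    }
}
```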

PreOperation is preferred over PreValidation for external API calls because PreValidation executes before security checks and should contain only lightweight validation. External API calls have latency and should execute in PreOperation after security is confirmed. The InvalidPluginExecutionException ensures proper error handling and user notification across all client types (UI, API, imports).

B) PostOperation stage executes after the record is saved to the database. Deleting the record if validation fails is complex, creates temporary invalid data states, generates additional database operations and audit records, and doesn’t cleanly prevent the save operation. Validation should prevent saves in PreOperation, not clean up after saves in PostOperation.

C) PreValidation stage is intended for lightweight validation that doesn’t require external calls. Calling external APIs in PreValidation adds latency before security checks occur and could slow down operations for users who don’t have permission anyway. Additionally, throwing generic Exception instead of InvalidPluginExecutionException doesn’t provide the user-friendly error handling that InvalidPluginExecutionException offers.

D) Setting a status field to indicate validation failure doesn’t prevent the save — the record would be saved with a failure status. The requirement is to prevent invalid records from being saved at all, not to save them marked as invalid. Throwing an exception that prevents the save is the correct approach for validation that must block operations.