Microsoft PL-400 Power Platform Developer Exam Dumps and Practice Test Questions Set 5 Q61-75

Visit here for our full Microsoft PL-400 exam dumps and practice test questions.

Question 61.

You are developing a plugin that needs to retrieve configuration data stored in environment variables. How should you query environment variables in the plugin code?

A) Query the environmentvariabledefinition and environmentvariablevalue tables

B) Use AppSettings from web.config

C) Access through IOrganizationService configuration

D) Read from plugin secure/unsecure configuration

Answer: A

Explanation:

Environment variables in Dataverse are stored in two related tables: environmentvariabledefinition (which contains the definition and schema name) and environmentvariablevalue (which contains the actual values). To retrieve environment variable values in a plugin, you query these tables using the IOrganizationService, joining them to get both the definition and the current value for your environment.

The recommended approach is to query environmentvariabledefinition by schema name, then retrieve the related environmentvariablevalue record to get the actual value. You can use QueryExpression or FetchXML with a LinkEntity to join these tables efficiently. The environmentvariablevalue table contains the Value field which holds the actual configuration data you need.
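
A minimal sketch of that query, assuming the standard environmentvariabledefinition and environmentvariablevalue table schemas and falling back to the default value when no environment-specific value record exists, might look like this:

```csharp
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class EnvironmentVariableHelper
{
    // Returns the current value of an environment variable, or its default value
    // when no environment-specific value record exists.
    public static string GetValue(IOrganizationService service, string schemaName)
    {
        var query = new QueryExpression("environmentvariabledefinition")
        {
            ColumnSet = new ColumnSet("defaultvalue")
        };
        query.Criteria.AddCondition("schemaname", ConditionOperator.Equal, schemaName);

        // LeftOuter join so a definition with no value record still returns its default.
        LinkEntity link = query.AddLink(
            "environmentvariablevalue",
            "environmentvariabledefinitionid",
            "environmentvariabledefinitionid",
            JoinOperator.LeftOuter);
        link.Columns = new ColumnSet("value");
        link.EntityAlias = "v";

        Entity definition = service.RetrieveMultiple(query).Entities.FirstOrDefault();
        if (definition == null)
        {
            return null;
        }

        // Columns returned through a LinkEntity are wrapped in AliasedValue.
        var currentValue = definition.GetAttributeValue<AliasedValue>("v.value");
        return currentValue?.Value as string
            ?? definition.GetAttributeValue<string>("defaultvalue");
    }
}
```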

Environment variables provide a modern, solution-aware way to store configuration that can vary between environments. They support different values in development, test, and production, are included in solutions for easy deployment, and can be managed through the Power Platform admin center or solution import process. This is Microsoft’s recommended approach for plugin configuration in new developments.

B) Plugins running in Dataverse sandbox mode don’t have access to web.config files or traditional .NET configuration mechanisms. The plugin executes in an isolated process with restricted access to server resources for security. Configuration must be provided through Dataverse-specific mechanisms like environment variables, plugin configuration, or querying configuration tables.

C) IOrganizationService doesn’t have a specific configuration property for accessing environment variables. You must explicitly query the environment variable tables using standard query methods (Retrieve, RetrieveMultiple) just like any other Dataverse table. The service provides data access, but you must construct the queries to retrieve configuration.

D) Plugin secure/unsecure configuration is an older approach where configuration strings are passed during plugin step registration. While still supported, Microsoft now recommends environment variables for new developments because they’re more flexible, support different values per environment, integrate with solutions better, and can be changed without re-registering plugins.

Question 62.

You need to implement a solution where changes to account records trigger immediate notifications to a Teams channel. Which approach provides the lowest latency?

A) Synchronous plugin posting to Teams webhook

B) Power Automate instant flow with Teams connector

C) Asynchronous plugin with Teams API call

D) Scheduled Power Automate polling for changes

Answer: A

Explanation:

A synchronous plugin registered on the Create or Update message that posts directly to a Teams webhook provides the lowest latency for immediate notifications. Synchronous plugins execute inline with the operation, so notifications are sent within milliseconds of the record change. Teams incoming webhooks are designed for simple HTTP POST operations that complete quickly, making them suitable for synchronous execution.

The plugin can construct a message card with relevant account information and POST it to the Teams webhook URL immediately when the account is modified. Since the webhook call is typically fast (under one second), it doesn’t cause noticeable delay in the user’s operation. This provides true real-time notification with minimal latency between the record change and the Teams message.

However, it’s important to implement proper error handling so that Teams webhook failures don’t prevent account updates from completing. You might wrap the Teams call in a try-catch block or use timeout settings to ensure the plugin completes even if Teams is temporarily unavailable. For critical notifications where some latency is acceptable, asynchronous plugins might be more reliable.
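
A condensed sketch of that pattern is shown below; the webhook URL is a placeholder (in practice it would come from an environment variable or secure configuration), and the call is wrapped so a Teams failure cannot block the account operation:

```csharp
using System;
using System.Net.Http;
using System.Text;
using Microsoft.Xrm.Sdk;

public class AccountTeamsNotificationPlugin : IPlugin
{
    // Placeholder URL; in practice read it from an environment variable or secure configuration.
    private const string WebhookUrl = "https://contoso.webhook.office.com/webhookb2/placeholder";

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        var account = context.InputParameters.Contains("Target")
            ? context.InputParameters["Target"] as Entity
            : null;
        if (account == null)
        {
            return;
        }

        string name = account.GetAttributeValue<string>("name") ?? account.Id.ToString();
        string payload = "{\"text\":\"Account changed: " + name + "\"}";

        try
        {
            using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(5) })
            {
                var content = new StringContent(payload, Encoding.UTF8, "application/json");
                // Blocking on the async call is acceptable because this plugin runs synchronously.
                client.PostAsync(WebhookUrl, content).GetAwaiter().GetResult();
            }
        }
        catch (Exception ex)
        {
            // Trace and swallow so a Teams outage never blocks or rolls back the account operation.
            tracing.Trace("Teams webhook call failed: {0}", ex.Message);
        }
    }
}
```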

B) Power Automate instant flows with Teams connector introduce additional latency because the flow must be triggered, queued, execute through the flow engine, and then call the Teams connector. While typically completing within seconds, this is slower than a direct synchronous plugin call and involves more moving parts that can introduce delay.

C) Asynchronous plugins execute after the main operation completes and are queued for execution by the asynchronous service. This introduces latency ranging from seconds to minutes depending on system load. While async plugins are more reliable for external calls (with automatic retry), they don’t provide the immediate notification that synchronous plugins offer.

D) Scheduled Power Automate flows that poll for changes introduce the most latency, as they only check periodically (minimum every minute, but often longer intervals). There’s inherent delay between when a change occurs and when the next scheduled poll detects it. This approach is suitable for batch notifications but not for immediate alerts.

Question 63.

You are implementing a canvas app that needs to display real-time data that updates automatically when backend data changes. Which feature should you use?

A) Timer control with data refresh

B) Power Automate with push notifications

C) SignalR with custom connector

D) Observable collections

Answer: A

Explanation:

A timer control that periodically refreshes data is the standard approach for near real-time data updates in canvas apps. The timer control can be configured to trigger at regular intervals (minimum one second), and you use the OnTimerEnd property to refresh your data source or collections. This provides automatic updates without user interaction, keeping displayed data reasonably current.

While not true real-time (there’s always a delay equal to the timer interval), this approach works reliably, is simple to implement, doesn’t require external infrastructure, and is well-supported within canvas apps. For most business scenarios, refreshing data every few seconds provides an acceptable “real-time” experience without the complexity of true push notifications.

You can optimize performance by only refreshing specific data sources rather than the entire app, adjusting timer intervals based on how current the data needs to be, and potentially pausing the timer when the app is not in focus. This balances data currency with API call limits and performance.

B) Power Automate can send push notifications to Power Apps Mobile but cannot directly update data displayed in a running canvas app. Push notifications alert users to launch or check the app, but they don’t provide true real-time data updates to already-running app sessions. You would still need to implement data refresh logic when users respond to notifications.

C) SignalR is a real-time communication framework, but canvas apps don’t have native SignalR support. While you could theoretically create a custom connector that wraps SignalR functionality, this would be extremely complex, requires significant custom development, and isn’t a supported pattern for canvas apps. The timer-based refresh approach is far simpler and officially supported.

D) Observable collections is not a feature in Power Apps. While the term exists in other frameworks (like WPF), Power Apps collections don’t have observable/reactive capabilities that automatically update when backend data changes. You must explicitly refresh collections through functions like Refresh, ClearCollect, or Collect. Timer controls provide the mechanism to trigger these refreshes periodically.

Question 64.

You need to create a plugin that calculates and updates a rollup field value. The calculation is complex and cannot be implemented using standard rollup field functionality. When should the plugin execute?

A) PreOperation on Update of child records

B) PostOperation on Create, Update, Delete of child records

C) PreOperation on Retrieve of parent record

D) Asynchronous on Create, Update, Delete of child records

Answer: B

Explanation:

PostOperation stage on Create, Update, and Delete messages of child records is the correct approach for maintaining custom rollup calculations. PostOperation ensures that the child record operation has successfully completed before your plugin calculates the new rollup value, guarantees data consistency, and allows the plugin to query all child records including the newly created/updated one to calculate the rollup.

By registering on all three messages (Create, Update, Delete), you ensure the parent’s rollup value is updated whenever child records change. The plugin queries all related child records, performs the custom calculation, and updates the parent record with the new rollup value. This pattern maintains accuracy and reflects all changes to child records.

The PostOperation stage is appropriate because the child record changes are committed, your calculation can include those changes, and updating the parent record happens as a separate operation after the child operation succeeds. This maintains transaction boundaries while ensuring rollup accuracy.
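
A simplified sketch of this pattern follows; the child table and column names (new_orderline, new_parentaccountid, new_amount, new_customtotal) are illustrative, and a plain sum stands in for the complex custom calculation:

```csharp
using System;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Registered PostOperation on Create, Update, and Delete of the child table.
public class CustomRollupPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // On Delete the Target is an EntityReference, so the parent lookup comes from a registered pre-image.
        var target = context.InputParameters.Contains("Target")
            ? context.InputParameters["Target"] as Entity
            : null;
        EntityReference parent = target?.GetAttributeValue<EntityReference>("new_parentaccountid");
        if (parent == null && context.PreEntityImages.Contains("PreImage"))
        {
            parent = context.PreEntityImages["PreImage"].GetAttributeValue<EntityReference>("new_parentaccountid");
        }
        if (parent == null)
        {
            return;
        }

        // Re-query all child rows for this parent; at PostOperation the triggering change is already visible.
        var query = new QueryExpression("new_orderline") { ColumnSet = new ColumnSet("new_amount") };
        query.Criteria.AddCondition("new_parentaccountid", ConditionOperator.Equal, parent.Id);
        var children = service.RetrieveMultiple(query).Entities;

        // The complex custom calculation goes here; a plain sum stands in for it.
        decimal total = children.Sum(c => c.GetAttributeValue<Money>("new_amount")?.Value ?? 0m);

        var update = new Entity(parent.LogicalName, parent.Id);
        update["new_customtotal"] = new Money(total);
        service.Update(update);
    }
}
```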

A) PreOperation on Update would calculate the rollup before the child record update is saved, meaning your calculation would be based on old data. Additionally, PreOperation doesn’t trigger on Create or Delete, so you’d miss rollup updates when child records are added or removed. PostOperation ensures you’re working with current, committed data.

C) PreOperation on Retrieve would recalculate the rollup every time someone views the parent record, creating terrible performance. Rollup calculations should happen when child data changes, not on every retrieve. This approach would also require synchronous execution that delays record retrieval, creating poor user experience.

D) While asynchronous execution is generally preferred for performance, registering only as “Asynchronous” without specifying PostOperation stage is incomplete. You need PostOperation stage with asynchronous mode. Additionally, for rollup scenarios, you might want synchronous execution so the parent record immediately reflects changes, depending on business requirements. The stage (PostOperation) is more critical than the mode.

Question 65.

You are developing a model-driven app that needs to display related records from multiple tables in a single view. Which feature should you use?

A) Associated view with multiple relationships

B) Create a custom page with multiple subgrids

C) Use Power BI embedded report

D) Create a form with multiple quick view forms

Answer: B

Explanation:

Creating a custom form page with multiple subgrids is the standard approach for displaying related records from multiple tables in a single view within model-driven apps. Each subgrid can display records from a different related table, providing a comprehensive view of all related data. Subgrids support filtering, sorting, inline editing, and can be configured to show specific views of the related data.

On a form, you can add multiple subgrid controls, each bound to a different relationship. For example, on an account form, you could have subgrids showing related contacts, opportunities, cases, and orders all on the same page. Each subgrid operates independently with its own view definition, allowing users to see and interact with multiple types of related records without navigating away.

This approach provides a unified interface, allows users to perform actions on related records directly from the parent record form, supports responsive design across devices, and is the standard pattern in Dynamics 365 applications. It’s fully supported and provides excellent user experience for viewing and managing related data.

A) Associated views show records from a single related table, not multiple tables simultaneously. Each relationship has its own associated view. While you can navigate between different associated views, you cannot display multiple associated views (from different relationships) in a single page view. Associated views are for navigating the grid interface, not for composite forms.

C) Power BI embedded reports can display data from multiple tables and provide powerful visualizations, but they’re read-only and don’t provide the interactive CRUD operations that subgrids offer. Power BI is excellent for analytics and reporting but isn’t the primary tool for displaying and managing operational data from multiple related tables in forms.

D) Quick view forms display data from a single related record (the record referenced by a lookup field), not collections of related records. Quick view forms are read-only snapshots of a parent or related record’s fields. For displaying multiple related records (not just one related record), subgrids are the appropriate control, not quick view forms.

Question 66.

You need to implement a solution where users can digitally sign documents within a model-driven app. The signature must be captured, stored, and displayed on forms. What should you implement?

A) Embedded canvas app with pen input control

B) JavaScript web resource with HTML5 canvas

C) Image field with upload functionality

D) Third-party signature PCF control

Answer: A

Explanation:

Embedding a canvas app with the pen input control into a model-driven form provides the best supported solution for signature capture. Canvas apps have a native pen input control specifically designed for capturing handwritten input and signatures. The embedded canvas app can capture the signature, convert it to an image format, and save it to an image field in Dataverse where it can be displayed on the form.

The embedded canvas app can be configured to read and write data from the host model-driven form, allowing seamless integration. When a user needs to sign, they interact with the embedded canvas app which shows the pen input control. After signing, the canvas app saves the signature image to a Dataverse image field, and the model-driven form displays the stored signature.

This approach requires no custom coding in JavaScript, leverages built-in Power Apps capabilities, works consistently across devices (desktop with mouse, tablets with stylus, touch devices), and provides a professional signature capture experience. The solution is maintainable and follows Microsoft’s recommended patterns for extending model-driven apps.

B) Creating a JavaScript web resource with HTML5 canvas for signature capture requires significant custom development including implementing the drawing logic, handling touch/mouse/stylus events across different browsers and devices, converting canvas content to image format, uploading to Dataverse, and extensive cross-platform testing. This is much more complex than using the built-in pen input control.

C) An image field with upload functionality requires users to create a signature image externally and upload it, which is cumbersome and doesn’t provide the seamless signature capture experience expected in modern applications. This approach doesn’t provide real-time signature capture and creates a disjointed user experience.

D) While third-party signature PCF controls exist and could work, recommending them as the primary solution introduces external dependencies, potential licensing costs, and support concerns. The embedded canvas app with pen input control is a first-party Microsoft solution that should be considered first before introducing third-party dependencies.

Question 67.

You are implementing a plugin that calls an external API. The API requires OAuth 2.0 authentication with bearer tokens. How should you manage the authentication in the plugin?

A) Store client credentials in plugin configuration and obtain tokens in plugin code

B) Use Azure Key Vault to store credentials and retrieve in plugin

C) Hard-code credentials in plugin assembly

D) Pass credentials from client-side JavaScript

Answer: B

Explanation:

Using Azure Key Vault to store sensitive credentials like client secrets and retrieving them from the plugin is the security best practice for managing authentication credentials. Azure Key Vault provides secure storage for secrets, keys, and certificates with access control, auditing, and encryption. Plugins can authenticate to Key Vault using managed identity or certificate-based authentication and retrieve secrets at runtime.

This approach ensures credentials are never stored in plain text in Dataverse, are not visible in solution exports, can be rotated without redeploying plugins, have proper access controls and audit trails, and meet enterprise security compliance requirements. The plugin retrieves the client secret from Key Vault, uses it to obtain an OAuth token from the authorization server, and then calls the external API with the bearer token.

When implementing this pattern, you configure the plugin to authenticate to Key Vault (preferably using managed identity if hosted in Azure or certificate-based auth), retrieve the secret, cache the OAuth token with appropriate expiration handling, and securely dispose of secrets after use. This provides enterprise-grade security for external API integration.
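
As a sketch of the token acquisition step once the client secret is in hand, the standard OAuth 2.0 client-credentials request might look like the following (tenant ID, client ID, and scope are placeholders, and the token caching described above would be layered on top):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;

// Client-credentials token request made after the client secret has been retrieved (for example, from Key Vault).
internal static class TokenClient
{
    internal static string AcquireBearerToken(string tenantId, string clientId, string clientSecret, string scope)
    {
        using (var http = new HttpClient())
        {
            string tokenEndpoint = "https://login.microsoftonline.com/" + tenantId + "/oauth2/v2.0/token";
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["scope"] = scope   // for example "api://external-api/.default"
            });

            HttpResponseMessage response = http.PostAsync(tokenEndpoint, body).GetAwaiter().GetResult();
            response.EnsureSuccessStatusCode();

            string json = response.Content.ReadAsStringAsync().GetAwaiter().GetResult();

            // Minimal string parsing keeps the sketch dependency-free; a JSON serializer would normally be used.
            const string marker = "\"access_token\":\"";
            int start = json.IndexOf(marker, StringComparison.Ordinal) + marker.Length;
            return json.Substring(start, json.IndexOf('"', start) - start);
        }
    }
}
```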

A) Storing client credentials in plugin configuration (secure or unsecure configuration) is less secure than Key Vault because the credentials are stored in Dataverse database, are visible to anyone with appropriate privileges, appear in solution exports (for unsecure configuration), don’t provide the same level of access control and auditing, and are harder to rotate without re-registering plugin steps.

C) Hard-coding credentials in plugin assemblies is a severe security violation. Credentials would be embedded in the compiled DLL, visible to anyone with access to the assembly, impossible to rotate without recompiling and redeploying, exposed in source control if not properly handled, and represents a major security risk. This should never be done.

D) Passing credentials from client-side JavaScript exposes them to anyone who can view browser network traffic or JavaScript code. Client-side credentials are fundamentally insecure because client code and network traffic can be inspected by users. Authentication for server-side operations must be managed server-side with credentials never exposed to clients.

Question 68.

You need to create a solution where multiple plugins share common utility methods for logging, error handling, and data validation. How should you structure this code?

A) Create a shared class library project referenced by plugin projects

B) Copy utility methods into each plugin class

C) Use a base plugin class with protected utility methods

D) Create a web service with utility methods

Answer: A

Explanation:

Creating a shared class library project that contains common utility methods and referencing it from multiple plugin projects is the best practice for code reuse in plugin development. The shared library can contain logging frameworks, error handling utilities, validation logic, data access helpers, and other common functionality that multiple plugins need. This promotes DRY principles and maintainability.

When you build plugin projects that reference the shared library, you can either merge the libraries into a single assembly using ILMerge (creating one DLL to register) or register both the plugin assembly and the shared library assembly in Dataverse. Modern Dataverse development typically uses ILMerge or similar tools to create a single assembly for easier deployment and management.

This architectural approach provides clear separation of concerns where plugins contain plugin-specific logic and the shared library contains reusable utilities, allows independent testing of utility methods, makes updates to common functionality easy (update once, rebuild affected plugins), and follows standard software engineering practices for modular design.
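
A minimal illustration of this structure, using hypothetical namespace and class names:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Shared class library project (names are illustrative), merged into the plugin assembly
// with ILMerge/ILRepack or registered alongside it.
namespace Contoso.Plugins.Common
{
    public static class Guard
    {
        // Common validation used by every plugin instead of copy-pasted null checks.
        public static Entity RequireTarget(IPluginExecutionContext context)
        {
            if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity target)
            {
                return target;
            }
            throw new InvalidPluginExecutionException("The Target entity is missing from InputParameters.");
        }
    }
}

// Plugin project: references the shared library rather than duplicating the logic.
namespace Contoso.Plugins
{
    using Contoso.Plugins.Common;

    public class AccountNumberPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            Entity account = Guard.RequireTarget(context);
            // Plugin-specific logic continues here.
        }
    }
}
```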

B) Copying utility methods into each plugin class violates the DRY principle, creates maintenance nightmares (bugs must be fixed in multiple places), increases the risk of inconsistencies between implementations, and makes code reviews and updates more difficult. This anti-pattern should be avoided in favor of proper code sharing through libraries.

C) Using a base plugin class with protected utility methods is less flexible than a separate shared library. Base classes create inheritance hierarchies that can become complex, limit flexibility when plugins need different combinations of utilities, and don’t allow sharing code with non-plugin components (like console applications or custom workflow activities). Composition through libraries is generally better than inheritance.

D) Creating a web service with utility methods introduces unnecessary complexity, latency from network calls, additional infrastructure to host and maintain, and potential availability issues. Utility methods for logging, validation, and error handling should be in-process code libraries, not external services. Web services are for business logic, not utility functions.

Question 69.

You are developing a canvas app that needs to implement complex conditional logic that exceeds the formula bar character limit. Which approach should you use?

A) Break the logic into multiple controls with intermediate values

B) Move complex logic to a Power Automate flow

C) Create a custom action with server-side logic

D) Use Component properties with formula logic

Answer: C

Explanation:

Creating a custom action with server-side logic implemented in a plugin is the appropriate solution when business logic becomes too complex for Power Fx formulas in canvas apps. Custom actions provide a clean API that the canvas app can call through the Dataverse connector, with all the complex logic executing on the server side where there are no formula length limits, full programming language capabilities, and better performance for complex operations.

The custom action defines input and output parameters that form a clear contract between the canvas app and the business logic. The canvas app passes necessary data as input parameters, the server-side plugin executes the complex conditional logic with access to full .NET capabilities and Dataverse data, and returns results through output parameters. This architecture separates UI concerns from business logic.
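
A sketch of the server side of such a contract, assuming a hypothetical custom action named new_EvaluateDiscount with an OrderTotal input and a DiscountPercent output, might look like this:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Plugin registered on the hypothetical custom action "new_EvaluateDiscount" that defines
// an input parameter "OrderTotal" (decimal) and an output parameter "DiscountPercent" (decimal).
public class EvaluateDiscountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        decimal orderTotal = (decimal)context.InputParameters["OrderTotal"];

        // The complex conditional logic runs here with full C# and Dataverse access;
        // a trivial tiered rule stands in for it.
        decimal discount;
        if (orderTotal >= 10000m) discount = 12m;
        else if (orderTotal >= 5000m) discount = 8m;
        else if (orderTotal >= 1000m) discount = 3m;
        else discount = 0m;

        // Output parameters flow back to the canvas app through the Dataverse connector call.
        context.OutputParameters["DiscountPercent"] = discount;
    }
}
```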

This approach provides better performance (complex logic runs on the server), maintainability (business logic in C# is easier to test and debug than complex formulas), reusability (the same action can be called from multiple apps), and scalability (server-side execution handles complexity better than client-side formulas). It’s the recommended pattern for complex business rules.

A) Breaking logic into multiple controls with intermediate values can work for moderately complex logic but becomes unwieldy for very complex scenarios. This approach clutters the canvas with hidden controls used only for calculations, makes debugging difficult, impacts performance with many formula recalculations, and doesn’t truly solve the underlying problem of complex logic belonging on the server.

B) Moving complex logic to Power Automate flows is possible but introduces latency because flows are asynchronous. Users would trigger the flow and wait for results, creating a disjointed experience. Flows are better for background processes and workflows than for immediate business logic evaluation needed for interactive app experiences. Custom actions with synchronous plugins are more appropriate.

D) Component properties can help organize formula logic within components but don’t solve the fundamental problem of formula complexity and length limits. The logic still must be expressed in Power Fx formulas with the same constraints. Components improve organization but don’t provide the full programming capabilities needed for truly complex logic.

Question 70.

You need to implement a solution where a model-driven app form displays calculated values that depend on complex business rules involving multiple related tables. The calculation should happen in real-time as users modify fields. What should you implement?

A) Calculated field on the table

B) Rollup field with custom FetchXML

C) JavaScript web resource with OnChange events

D) Business rule with complex conditions

Answer: C

Explanation:

A JavaScript web resource with OnChange event handlers is the appropriate solution for complex real-time calculations in model-driven app forms. JavaScript executes immediately when users modify fields, can access data from multiple related records through Web API calls, perform complex calculations with full JavaScript programming capabilities, and update form fields instantly to display results.

The OnChange event handlers trigger when users modify specific fields, the JavaScript retrieves any additional data needed (from related tables or lookups), performs the calculation using your business logic, and updates the calculated field value on the form. This provides immediate feedback to users as they interact with the form, showing how their changes affect calculated values.

For complex business rules involving multiple tables, JavaScript provides the flexibility to query related data, implement sophisticated algorithms, handle edge cases, and provide user feedback through form controls. While calculated fields and rollup fields handle simpler scenarios, JavaScript is necessary when calculations are complex, involve multiple tables, or require real-time responsiveness.

A) Calculated fields in Dataverse have significant limitations: they only access fields on the current record and direct parent lookups, cannot query related child records or perform complex multi-table joins, recalculate only when the record is saved (not real-time during editing), and have limited formula capabilities. They’re insufficient for complex business rules involving multiple related tables.

B) Rollup fields calculate aggregate values from related child records but have limitations: they update on a schedule (not real-time), can only aggregate from directly related records, have limited calculation options (sum, count, min, max, average), and cannot implement complex custom business logic. They don’t provide the real-time calculation during form editing that the requirement specifies.

D) Business rules have even more limitations than calculated fields: they can only access current record and immediate parent fields, cannot query related tables, cannot perform complex calculations, and have very limited conditional logic capabilities. Business rules are designed for simple show/hide, required/optional, and basic field operations, not complex multi-table calculations.

Question 71.

You are implementing a plugin that needs to determine whether a record is being created through the UI, API, or data import. Which execution context property should you check?

A) InputParameters

B) MessageName

C) InitiatingUserId

D) CallerOrigin

Answer: A

Explanation:

Of the listed options, the InputParameters collection is the closest fit. It carries the message parameters for the operation (such as the Target entity and flags like SuppressDuplicateDetection), and examining these parameters together with other context properties such as Depth and SharedVariables can provide clues about how the operation was initiated. There is, however, no single property that explicitly states “UI” or “API.”

In reality, determining the exact origin (UI vs. API vs. import) is not straightforward in Dataverse plugins because the platform intentionally abstracts these details to ensure consistent business logic execution regardless of how data enters the system. This design ensures that business rules apply uniformly. If you truly need to distinguish origins, you may need workarounds such as checking for specific patterns in the execution context or having callers pass custom parameters (for example, through shared variables).

In practice, plugins should generally execute the same business logic regardless of how records are created, ensuring data consistency and integrity across all entry points. If you need different behavior based on origin, consider whether that’s the right design or if you should use plugin step filtering, shared variables, or rearchitecting the solution.

B) MessageName tells you what operation is being performed (Create, Update, Delete, etc.) but doesn’t indicate whether the operation came from UI, API, or import. All three sources trigger the same messages, so MessageName doesn’t distinguish between origins.

C) InitiatingUserId identifies which user started the operation chain but doesn’t indicate how they initiated it. The same user could create records through UI, API calls, or data import, and InitiatingUserId would be the same in all cases. This property identifies who, not how.

D) CallerOrigin is not a standard property of the IPluginExecutionContext. While the execution context contains various properties about the operation, there isn’t a built-in CallerOrigin property that explicitly identifies the source as UI, API, or import. This property doesn’t exist in the standard plugin execution context.

Question 72.

You need to create a canvas app that allows users to take photos and automatically extract text from the images using OCR. Which AI Builder model type should you use?

A) Text recognition model

B) Object detection model

C) Form processing model

D) Entity extraction model

Answer: A

Explanation:

The text recognition prebuilt model in AI Builder is specifically designed for extracting text from images through optical character recognition (OCR). This model can process photos taken with the camera control or uploaded images and extract all visible text, including handwritten text, printed text, and text in various languages and fonts.

In a canvas app, you use the camera control to capture images, then pass those images to the AI Builder text recognition model using the AI Builder connector. The model processes the image and returns extracted text that you can display, store in Dataverse, or use in further processing. This prebuilt model requires no training and works immediately.

The text recognition model is ideal for scenarios like scanning business cards, extracting text from documents, reading serial numbers from equipment, capturing information from signs or labels, and any use case where you need to convert image-based text to digital text. It’s one of the most commonly used AI Builder models.

B) Object detection models identify and locate objects within images (like detecting products, people, or items) but don’t extract text. They return bounding boxes and labels for detected objects. If you need to detect what objects are in a photo but don’t need to read text, object detection is appropriate, but it doesn’t provide OCR capabilities.

C) Form processing models are designed for structured documents like invoices, receipts, or forms where you want to extract specific fields in specific locations. While they do extract text, they’re optimized for structured document processing where you train the model on document layout. For general text extraction from photos, the simpler text recognition model is more appropriate.

D) Entity extraction models identify and classify entities (like names, dates, addresses) within text that you already have. They don’t perform OCR to extract text from images. Entity extraction is a natural language processing function that works on text input, not images. You would use text recognition first to get text, then entity extraction to identify entities within that text.

Question 73.

You are developing a plugin that needs to update records across multiple tables within a single transaction. If any update fails, all changes should be rolled back. Which execution mode should you use?

A) Synchronous plugin in PreOperation or PostOperation stage

B) Asynchronous plugin with manual transaction handling

C) Multiple synchronous plugins coordinated through shared variables

D) Power Automate flow with batch operations

Answer: A

Explanation:

Synchronous plugins executing in PreOperation or PostOperation stage automatically participate in the Dataverse transaction. All database operations performed within a synchronous plugin are part of the same transaction as the operation that triggered the plugin. If any operation fails (including exceptions thrown by your plugin), the entire transaction rolls back automatically, ensuring data consistency.

When you perform multiple Create, Update, or Delete operations within a synchronous plugin, they all execute within the transaction boundary. If your plugin throws an exception or if any database operation fails, Dataverse automatically rolls back all changes including the original operation that triggered the plugin and all operations performed by the plugin code. This provides atomic, all-or-nothing behavior.

This transactional behavior is built into the platform and requires no special coding for transaction management. You simply perform your operations, and the platform ensures transactional integrity. This is the standard pattern for maintaining data consistency across multiple related operations in Dataverse.
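
A condensed sketch of this behavior, using illustrative table and column names:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Synchronous PostOperation plugin: every operation below runs inside the platform transaction
// of the triggering request.
public class PropagateStatusPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        var order = context.InputParameters.Contains("Target")
            ? context.InputParameters["Target"] as Entity
            : null;
        EntityReference invoiceRef = order?.GetAttributeValue<EntityReference>("new_invoiceid");
        EntityReference shipmentRef = order?.GetAttributeValue<EntityReference>("new_shipmentid");
        if (invoiceRef == null || shipmentRef == null)
        {
            return;
        }

        // First related update.
        var invoice = new Entity(invoiceRef.LogicalName, invoiceRef.Id);
        invoice["new_status"] = new OptionSetValue(2);
        service.Update(invoice);

        // Second related update.
        var shipment = new Entity(shipmentRef.LogicalName, shipmentRef.Id);
        shipment["new_status"] = new OptionSetValue(2);
        service.Update(shipment);

        // If either update (or any later step) throws, the platform rolls back both updates
        // together with the triggering operation; no manual transaction code is required.
    }
}
```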

B) Asynchronous plugins execute outside the main transaction after the triggering operation completes. They cannot participate in the original transaction and cannot roll back the triggering operation. Additionally, plugins don’t have access to manual transaction handling APIs like TransactionScope because Dataverse manages transactions at the platform level. Async plugins aren’t suitable for atomic multi-table updates.

C) While you can coordinate multiple plugins through shared variables, each plugin participating in the same operation chain is already part of the same transaction if they’re synchronous. Shared variables help pass data between plugins but don’t provide transaction coordination — that’s automatic. Multiple separate synchronous plugins don’t provide benefits over one plugin doing multiple operations within the transaction.

D) Power Automate flows execute asynchronously after the triggering operation completes and don’t provide atomic transaction behavior across multiple steps. If a flow step fails, previous steps don’t automatically roll back. Flows have error handling and retry capabilities but don’t provide the atomic transactional guarantees that synchronous plugins offer.

Question 74.

You need to implement a canvas app that allows users to scan barcodes and look up product information from an external inventory system via API. Which components are required?

A) Barcode scanner control, custom connector for inventory API, and gallery for results

B) Camera control, Power Automate flow, and data table

C) Text input control, SharePoint list integration, and labels

D) Image control, Dataverse connector, and forms

Answer: A

Explanation:

The barcode scanner control provides native barcode scanning capabilities in canvas apps, the custom connector wraps the external inventory API to make it callable from Power Apps, and a gallery displays the product information returned from the API. This combination provides a complete solution for scanning barcodes and retrieving product data from external systems.

When users tap a button or activate the barcode scanner, the control opens the device camera with barcode recognition. Once a barcode is scanned, the app calls the custom connector passing the scanned barcode value, the custom connector queries the external inventory API, and results are returned and displayed in a gallery or form controls.

Custom connectors are necessary when integrating with external APIs that don’t have prebuilt connectors. You define the API operations, authentication, parameters, and responses in the custom connector definition, and it then appears alongside standard connectors in your canvas app. This architecture cleanly separates concerns: barcode scanning, API integration, and data display.

B) While the camera control can capture images, it doesn’t provide barcode recognition out of the box — you’d need additional processing. Using Power Automate introduces unnecessary latency and complexity for real-time lookup scenarios. The barcode scanner control with direct custom connector calls provides a better, more responsive user experience.

C) Text input for manual barcode entry defeats the purpose of mobile barcode scanning, SharePoint lists aren’t appropriate for external inventory systems, and this combination doesn’t address the requirement to scan barcodes or integrate with an external API. This solution misses the core requirements entirely.

D) Image control displays images but doesn’t scan barcodes, Dataverse connector accesses Dataverse data not external APIs, and this combination doesn’t provide barcode scanning or external system integration. This answer shows fundamental misunderstanding of the requirements and available components.

Question 75.

You are implementing a solution where users should only be able to update specific fields on records they don’t own. The other fields should be read-only. How should you implement this?

A) Configure field-level security for the specific fields

B) Use JavaScript to make fields read-only based on record ownership

C) Create security roles with field-level privileges

D) Configure column-level security with write privileges for specific fields

Answer: D

Explanation:

Column-level security (also called field-level security) allows you to grant specific field privileges (Create, Read, Update) independent of record-level access. You enable security on specific fields, create field security profiles that grant Update privilege for those fields, and assign users to those profiles. Users can then update those secured fields even on records they can only read at the record level.

The configuration involves enabling field-level security on the specific fields that should be editable, creating a field security profile that grants Update privilege for those fields while other field privileges remain controlled by record-level permissions, and assigning users to the profile. This provides granular control where users have read access to most fields but can update specific fields.

This server-side security enforcement works across all access methods (UI, API, imports) ensuring consistent behavior. It’s the proper way to implement scenarios like allowing users to update status or comments fields on records they don’t own, while keeping other fields protected by record-level security.

A) Configuring field-level security alone doesn’t automatically grant update privileges for specific fields when users lack record-level update privileges. You need to specifically grant Update privilege in the field security profile for users to update those fields on records they can’t otherwise update. The answer needs to mention both enabling field security and granting update privileges.

B) JavaScript makes fields read-only only in the UI and can be bypassed through API calls, workflows, imports, or by users who disable JavaScript. Client-side security is not real security and should never be the sole implementation of security requirements. Server-side security through field-level security is required.

C) Security roles control record-level privileges (Create, Read, Write, Delete, Append, etc.) and access levels (User, Business Unit, Organization), not field-level privileges. While security roles are crucial for overall access control, they don’t provide the granular field-level update privileges on records users don’t own. Field-level security is a separate system.