Amazon AWS Certified AI Practitioner AIF-C01 Exam Dumps and Practice Test Questions Set 10 Q136-150
Visit here for our full Amazon AWS Certified AI Practitioner AIF-C01 exam dumps and practice test questions.
Question 136
A company wants to detect fraudulent transactions using labeled historical data. What machine learning approach should they use?
A) Supervised learning
B) Unsupervised learning
C) Reinforcement learning
D) Transfer learning
Answer: A) Supervised learning
Explanation:
Unsupervised learning focuses on discovering patterns, structures, or groupings within data that does not come with predefined labels. Because it works without explicit examples of categories such as “fraud” or “not fraud,” its strength lies in revealing hidden relationships rather than making direct predictions about specific classes. While it can be useful for detecting anomalies or unusual behaviors that may indicate fraudulent activity, it cannot, on its own, assign definitive fraud classifications. Instead, it highlights irregularities that may warrant further investigation, leaving the final determination to either a human analyst or an additional supervised method. Therefore, although unsupervised learning can support exploratory analysis in fraud detection, it is not well suited as the primary method for classifying transactions.
Reinforcement learning, by contrast, is built around the idea of training an agent to make sequential decisions by interacting with an environment and receiving rewards or penalties. This paradigm excels in dynamic scenarios such as robotics, gameplay, or adaptive control systems, where the agent learns through trial and error to maximize long-term rewards. Fraud detection, however, is typically a static prediction problem: a system receives a transaction and must classify it immediately without engaging in a process of ongoing interaction or feedback loops. Because reinforcement learning requires an environment that provides continuous rewards tied to the agent’s actions, it does not align well with the one-step nature of fraud detection. Moreover, the ethical and practical concerns of letting a system “experiment” with real transactions make reinforcement learning unsuitable for this task.
Transfer learning offers a different advantage by enabling the use of pre-trained models that were originally developed for one task and adapting them to another. This technique is particularly powerful when the new task has limited labeled data but shares meaningful similarities with the original task. For example, a model trained on a large dataset of general images can be fine-tuned to classify medical images. However, transfer learning only works when an appropriate and relevant pre-trained model exists. In highly specialized domains such as fraud detection, where data patterns are specific, rapidly evolving, and often proprietary, suitable pre-trained models are rarely available. As a result, transfer learning has limited applicability unless a closely related model can be found.
Supervised learning, on the other hand, is fundamentally designed for tasks in which each training example comes with a known label. In fraud detection, historical records typically include clear indications of whether a transaction was fraudulent or legitimate. These labeled examples allow the model to learn the characteristics and signals that differentiate normal behavior from fraudulent activity. With enough data, supervised algorithms can generalize from past cases to new, unseen transactions with high accuracy. Because fraud detection relies heavily on recognizing patterns that have been verified and documented, supervised learning is the most appropriate and effective approach. It directly maps the input features of a transaction to the desired output classification, making it well suited for operational fraud detection systems where accuracy, reliability, and real-time decision-making are essential.
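To make the supervised setup concrete, here is a minimal sketch using scikit-learn (not an AWS service, just an illustration of the paradigm): a classifier learns from labeled historical transactions and then scores unseen ones. The file name and feature columns are hypothetical placeholders.

```python
# Minimal sketch of supervised fraud classification with scikit-learn.
# The CSV path and feature columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("transactions.csv")          # labeled historical data
X = df[["amount", "merchant_risk", "hour"]]   # example input features
y = df["is_fraud"]                            # known label: fraud / not fraud

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)                   # learn from labeled examples
print(classification_report(y_test, model.predict(X_test)))
```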
Question 137
Which AWS service helps automatically improve data quality before training ML models?
A) AWS Glue DataBrew
B) Amazon Rekognition
C) Amazon Lex
D) Amazon Textract
Answer: A) AWS Glue DataBrew
Explanation:
Amazon Rekognition is a service designed to process and interpret visual content, specifically images and videos. It uses advanced machine learning models to identify objects, people, text, scenes, and activities. In security settings, it can help recognize suspicious behavior or match faces against a database. In media applications, it can categorize images, detect inappropriate content, or assist with automated video analysis. Its strength lies in its ability to quickly process large volumes of visual data and return structured information that can be used in downstream applications, dashboards, or automated workflows. Because it is fully managed, organizations can integrate visual intelligence without needing to build their own computer vision models.
Amazon Lex, on the other hand, is focused on enabling conversational interactions through chatbots and voice assistants. It uses the same underlying technology as Amazon’s Alexa, giving developers access to natural language understanding and automatic speech recognition capabilities. Lex can interpret user input, manage dialog flows, and generate appropriate responses. It is commonly used for customer service bots, automated help desks, and voice-enabled applications. The tool reduces the complexity of building conversational systems by providing an easy way to define intents, slots, and conversation logic. By integrating with other AWS services, Lex can retrieve data, fulfill user requests, and complete transactions, making it a versatile tool for interactive applications.
Amazon Textract is another specialized service, but its focus is on document processing. Traditional optical character recognition tools can only extract simple text. Textract goes further by understanding the layout and structure of documents, enabling it to identify forms, tables, checkboxes, and key-value pairs. This makes it useful for automating tasks that involve invoices, financial statements, medical forms, or legal paperwork. Instead of manually entering information from documents, teams can use Textract to automatically extract relevant text and structured data, reducing errors and speeding up workflows. It can also integrate with downstream analytics or machine learning systems, allowing organizations to analyze large volumes of documents efficiently.
AWS Glue DataBrew serves a different but equally essential purpose in the machine learning pipeline. Preparing data is often one of the most time-consuming aspects of building ML models. Datasets may be incomplete, inconsistent, or contain errors, and they often come from multiple sources. DataBrew provides a visual interface that allows users to profile, clean, and transform their datasets without needing to write code. It can detect anomalies, identify missing values, normalize formats, and apply hundreds of built-in transformations through an intuitive point-and-click interface. This helps data engineers and analysts understand data quality issues early and make systematic improvements.
Because DataBrew emphasizes usability and automation, it allows teams to prepare high-quality datasets more efficiently. Clean, consistent data is critical to training accurate machine learning models, and DataBrew simplifies this process significantly. By improving dataset reliability before training begins, it helps ensure that downstream machine learning systems perform better, making it an ideal tool for the data preparation stage of any ML project.
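As a rough illustration of where DataBrew fits, the following hedged boto3 sketch launches a profile job that reports data quality statistics before training. The dataset name, role ARN, and bucket are placeholders, and the dataset itself would need to be registered in DataBrew beforehand.

```python
# Hedged sketch: run a DataBrew profile job to surface data quality issues
# before model training. All names and ARNs are placeholders.
import boto3

databrew = boto3.client("databrew")

databrew.create_profile_job(
    Name="training-data-profile",
    DatasetName="sales-training-data",        # a DataBrew dataset created earlier
    RoleArn="arn:aws:iam::123456789012:role/DataBrewRole",
    OutputLocation={"Bucket": "my-databrew-results"},
)
databrew.start_job_run(Name="training-data-profile")
```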
Question 138
A logistics company wants to predict delivery delays using historical route data. Which AWS service can help build this model quickly?
A) Amazon SageMaker Autopilot
B) AWS Lambda
C) Amazon Polly
D) Amazon Translate
Answer: A) Amazon SageMaker Autopilot
Explanation:
AWS Lambda runs serverless functions but cannot train predictive models. Amazon Polly converts text to speech. Amazon Translate converts text between languages. Amazon SageMaker Autopilot automatically builds, trains, and tunes ML models for tabular datasets such as route histories. It identifies the best algorithms and creates a model without requiring deep ML expertise. This makes Autopilot the most suitable choice.
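A hedged boto3 sketch of launching an Autopilot job on tabular route history follows; the bucket paths, target column name, and role ARN are illustrative placeholders, not a prescribed setup.

```python
# Hedged sketch: launch a SageMaker Autopilot job on tabular route data.
# Bucket names, the target column, and the role ARN are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="delivery-delay-autopilot",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/route-history/",
        }},
        "TargetAttributeName": "delayed",      # label column to predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
)
```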
Question 139
A company wants to detect inappropriate content in user-uploaded images. Which service should they use?
A) Amazon Rekognition
B) Amazon Textract
C) Amazon Comprehend
D) Amazon Macie
Answer: A) Amazon Rekognition
Explanation:
Amazon Textract is a specialized AWS service designed to automatically extract text and structured information from a wide range of documents. Traditional optical character recognition tools can capture plain text, but Textract goes further by understanding the layout of the document, including tables, forms, key-value pairs, and checkboxes. This makes it highly effective for processing complex documents such as invoices, contracts, medical forms, tax statements, and financial reports. By automating what would otherwise require manual data entry, Textract helps organizations reduce errors, accelerate document-heavy workflows, and efficiently convert paper-based information into digital formats that can be searched, analyzed, or fed into other applications. Its ability to handle large volumes of documents at scale makes it valuable for enterprises looking to streamline document management and data extraction processes.
Amazon Comprehend is another important service in the AWS ecosystem, but its focus is on analyzing the meaning and structure of textual content rather than visual or document formatting. It uses natural language processing techniques to identify entities, key phrases, sentiment, language, and topics within text. This allows organizations to understand customer feedback, analyze social media posts, extract insights from emails, or process internal documents for trends and themes. Comprehend can also classify documents into categories, detect personally identifiable information, and uncover relationships between different pieces of text. Its machine learning models enable companies to automate tasks that would otherwise require teams of analysts to read large amounts of text. By providing deeper insight into written content, Comprehend helps organizations make data-driven decisions and improve understanding of both internal and external communications.
Amazon Macie adds another layer of intelligence to data handling, but its purpose is centered on security rather than content analysis. Macie is designed to identify and protect sensitive data stored in Amazon S3. It uses machine learning to detect personally identifiable information, financial details, and other forms of confidential data. Macie continuously monitors S3 environments for security risks, improper access controls, or accidental exposure. However, Macie operates only on data stored in S3 buckets and does not analyze image content or multimedia files in the way that some other AWS services do. Its primary goal is to help organizations maintain compliance, secure customer information, and avoid data leaks by offering detailed visibility into where sensitive data resides.
Amazon Rekognition fills an entirely different role by providing advanced image and video analysis capabilities. One of its most valuable features is the ability to detect unsafe, inappropriate, or harmful content in visual media. It can identify explicit imagery, violence, weapons, and other types of content that may violate platform policies or pose safety concerns. This makes Rekognition particularly effective for content moderation in applications where users upload images or videos, such as social media platforms, online communities, or e-commerce sites. By automatically flagging or filtering problematic visuals, Rekognition enables organizations to maintain safe environments, reduce manual moderation workloads, and respond quickly to potential risks.
Together, these AWS services support a wide range of text, document, and image analysis needs, each addressing a specific role in content understanding, data protection, or safety management.
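For this scenario, a minimal boto3 sketch of Rekognition's content moderation API looks like the following; the bucket and object key are placeholders.

```python
# Minimal sketch: flag inappropriate content in an uploaded image with
# Rekognition's moderation API. Bucket and key are placeholders.
import boto3

rekognition = boto3.client("rekognition")

resp = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "user-uploads", "Name": "photo.jpg"}},
    MinConfidence=80,
)
for label in resp["ModerationLabels"]:
    print(label["Name"], label["Confidence"])  # e.g. queue for human review
```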
Question 140
Which AWS service is designed specifically for detecting anomalies in business metrics?
A) Amazon Lookout for Metrics
B) AWS CloudTrail
C) Amazon Aurora
D) Amazon VPC
Answer: A) Amazon Lookout for Metrics
Explanation:
AWS offers a wide range of services, each tailored to specific aspects of cloud infrastructure, data management, and analytics. Among these, AWS CloudTrail, Amazon Aurora, Amazon Virtual Private Cloud (VPC), and Amazon Lookout for Metrics each serve distinct purposes, helping organizations manage, monitor, and gain insights from their operations in different ways. Understanding the functionality and limitations of each is crucial to effectively leveraging them in a cloud environment.
AWS CloudTrail is a service that focuses primarily on logging and auditing. Its main function is to record API calls made across an organization’s AWS environment, providing visibility into user activity and system changes. This includes tracking actions performed by users, applications, or AWS services themselves. CloudTrail logs capture details such as the identity of the caller, the time of the call, the resources affected, and the request parameters. This is invaluable for security auditing, compliance monitoring, and troubleshooting operational issues. However, it is important to note that CloudTrail does not perform analysis of business data or generate insights related to business performance metrics. Its role is strictly centered on tracking system and user activity to ensure operational transparency and accountability.
Amazon Aurora, in contrast, is a relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source solutions. It supports MySQL and PostgreSQL, making it compatible with a wide range of applications. Aurora is designed to provide high performance, scalability, and reliability, which makes it suitable for transactional workloads, data warehousing, and backend application storage. While Aurora is excellent for storing and querying business data, it does not inherently provide anomaly detection or predictive analytics on the data it holds. Instead, it serves as the foundation where structured business information, such as sales records, user engagement, or inventory data, can be reliably stored and accessed.
Amazon Virtual Private Cloud (VPC) is another foundational service but focuses on networking. VPC allows organizations to provision isolated sections of the AWS cloud where they can define and control networking resources, such as subnets, IP addresses, route tables, and security groups. This ensures secure communication between services, applications, and users, both within the cloud environment and with on-premises networks. While VPC is critical for network architecture, security, and connectivity, it does not process or analyze business metrics or transactional data. Its role is infrastructure-focused rather than analytics-focused.
Amazon Lookout for Metrics, by contrast, is a machine learning service specifically designed to analyze business data. It examines time-series metrics such as sales, revenue, website traffic, or customer engagement to automatically detect anomalies or unexpected patterns. Unlike CloudTrail, Aurora, or VPC, Lookout for Metrics directly applies intelligence to the data to uncover deviations that may indicate opportunities, issues, or operational risks. It can flag sudden drops in sales, unexpected spikes in usage, or trends that could impact business performance. This makes it an ideal tool for monitoring key performance indicators (KPIs) and maintaining situational awareness of business operations, enabling teams to respond quickly to anomalies and optimize performance.
While CloudTrail, Aurora, and VPC provide critical support for auditing, storage, and networking, Amazon Lookout for Metrics delivers actionable insights from business data. Together, these services allow organizations to operate securely and efficiently in the cloud while gaining meaningful visibility into business performance.
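As a rough sketch, creating a Lookout for Metrics detector via boto3 might look like this; the detector name is a placeholder, exact parameter shapes should be checked against the current API, and a metric set describing the data source still has to be attached separately.

```python
# Hedged sketch: create a Lookout for Metrics detector that scans business
# metrics hourly. Names are placeholders; a metric set pointing at the data
# source must be attached in a separate call.
import boto3

lfm = boto3.client("lookoutmetrics")

detector = lfm.create_anomaly_detector(
    AnomalyDetectorName="revenue-anomaly-detector",
    AnomalyDetectorConfig={"AnomalyDetectorFrequency": "PT1H"},
)
print(detector["AnomalyDetectorArn"])
```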
Question 141
Which service allows storing massive datasets at low cost and is commonly used for ML training data?
A) Amazon S3
B) Amazon Redshift
C) Amazon RDS
D) AWS Glue
Answer: A) Amazon S3
Explanation:
Amazon Redshift is a data warehouse optimized for analytics queries. Amazon RDS stores relational data. AWS Glue prepares and transforms data but is not itself a bulk storage service. Amazon S3 is designed for scalable, durable, and low-cost object storage and is widely used to hold ML datasets, images, logs, and training data.
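A minimal sketch of the typical pattern follows: stage training data in S3, then hand the S3 URI to a training service. The bucket and file names are placeholders.

```python
# Minimal sketch: stage training data in S3, the usual source for ML jobs.
# Bucket and file names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file("train.csv", "my-ml-datasets", "fraud/train.csv")

# Training services such as SageMaker then read directly from the S3 URI:
training_uri = "s3://my-ml-datasets/fraud/train.csv"
```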
Question 142
A developer wants to add real-time personalization to a mobile shopping app. Which service should they use?
A) Amazon Personalize
B) Amazon SageMaker
C) Amazon Translate
D) AWS Lambda
Answer: A) Amazon Personalize
Explanation:
Amazon Web Services provides a variety of tools designed to address different aspects of application development, data analysis, and machine learning. Among these, Amazon SageMaker, Amazon Translate, AWS Lambda, and Amazon Personalize each serve unique purposes and target different challenges. Understanding the strengths and limitations of these services helps organizations select the right tool for specific use cases, particularly when building intelligent applications, creating customized experiences, or automating workflows.
Amazon SageMaker is a comprehensive machine learning platform that allows organizations to build, train, and deploy models at scale. It provides a range of tools for data preprocessing, feature engineering, model training, hyperparameter tuning, and deployment. SageMaker supports multiple machine learning frameworks such as TensorFlow, PyTorch, and XGBoost, making it highly flexible for a variety of predictive tasks, from fraud detection to customer churn prediction. While SageMaker offers an end-to-end solution, it requires a significant level of expertise in machine learning concepts, model selection, and data preparation. Users must understand the principles of supervised and unsupervised learning, feature selection, model evaluation metrics, and deployment strategies to fully leverage its capabilities. For organizations with in-house data science teams, SageMaker provides the flexibility and power needed to build sophisticated machine learning solutions, but for teams without specialized expertise, the learning curve can be steep.
Amazon Translate, in contrast, focuses on a specific application of artificial intelligence: language translation. It enables the translation of text between multiple languages in near real-time, helping businesses reach a global audience and deliver content in the preferred language of their users. Translate leverages advanced neural machine translation models to ensure high-quality, contextually accurate translations. It is straightforward to use and does not require deep technical knowledge or machine learning expertise. Applications of Translate include localizing websites and mobile applications, translating customer support content, and enabling multilingual communication in chatbots and interactive services. While Translate is highly effective for language conversion, it is not designed to generate predictive insights, build recommendation systems, or personalize experiences based on user behavior.
AWS Lambda serves yet another purpose, offering a serverless compute environment for running code in response to events. Lambda allows developers to execute functions without provisioning or managing servers, which simplifies application development and reduces operational overhead. It integrates seamlessly with other AWS services, making it ideal for tasks such as processing data streams, handling API requests, or triggering automated workflows. However, Lambda is not a machine learning service and does not generate predictive models or personalized recommendations on its own. Its role is primarily operational, running code efficiently and scaling automatically based on demand.
Amazon Personalize addresses the specific need for real-time personalization and recommendation. It enables organizations to provide tailored product or content suggestions based on individual user behavior and preferences. Using machine learning under the hood, Personalize analyzes historical interactions and generates recommendations that adapt to each user’s activity. This capability is especially valuable in mobile shopping applications, streaming platforms, and e-commerce websites, where personalized suggestions can significantly improve user engagement, conversion rates, and customer satisfaction. Unlike SageMaker, Personalize abstracts away the complexity of model building, training, and tuning, allowing teams to deploy personalized experiences without requiring deep machine learning expertise.
These AWS services collectively cover a wide spectrum of cloud capabilities. SageMaker offers advanced model building for skilled teams, Translate handles multilingual communication, Lambda runs scalable event-driven code, and Personalize provides real-time personalized recommendations. Each tool addresses a distinct set of challenges, enabling organizations to create smarter, more responsive, and user-centric applications while leveraging the power of cloud-based machine learning and automation.
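To illustrate the Personalize piece described above, a hedged sketch of requesting real-time recommendations from an already deployed campaign follows; the campaign ARN and user ID are placeholders.

```python
# Hedged sketch: request real-time recommendations from a deployed
# Personalize campaign. The campaign ARN and user ID are placeholders.
import boto3

runtime = boto3.client("personalize-runtime")

resp = runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/shop-recs",
    userId="user-42",
    numResults=10,
)
for item in resp["itemList"]:
    print(item["itemId"])   # render these in the app's "For you" shelf
```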
Question 143
A media company wants to quickly generate subtitles from video audio. Which AWS service can help?
A) Amazon Transcribe
B) Amazon Polly
C) Amazon Rekognition
D) Amazon S3
Answer: A) Amazon Transcribe
Explanation:
Amazon Web Services provides a range of tools designed to handle different aspects of data processing, multimedia analysis, and cloud storage. Among these tools, Amazon Polly, Amazon Rekognition, Amazon S3, and Amazon Transcribe serve distinct purposes and are suited for particular tasks, each addressing a unique need in application development, content management, and media processing. Understanding their functionality and limitations is essential for effectively integrating them into workflows and applications.
Amazon Polly is a service focused on generating natural-sounding speech from text. It allows developers to convert written content into spoken audio, which can be used for applications such as voice assistants, audiobook production, automated announcements, and accessibility features for visually impaired users. Polly offers a variety of lifelike voices and supports multiple languages, giving organizations flexibility in how they present spoken content to end users. While Polly excels at producing speech, it does not perform transcription, meaning it cannot convert audio or spoken words back into text. Its capabilities are unidirectional, from text to voice, and it is not designed to analyze or interpret audio recordings.
Amazon Rekognition, in contrast, specializes in the analysis of images and videos. It uses machine learning to detect objects, people, faces, text, activities, and even unsafe or inappropriate content within visual media. Rekognition can be applied in security monitoring, content moderation, identity verification, and media indexing. It can detect explicit imagery, track movement across video frames, and recognize patterns that would be time-consuming or impossible to identify manually. However, despite its advanced computer vision capabilities, Rekognition does not handle audio transcription or convert speech to text. Its functionality is limited to visual content, making it unsuitable for tasks such as generating subtitles from spoken dialogue.
Amazon S3, or Simple Storage Service, serves a different foundational role by providing scalable, durable, and secure storage for files in the cloud. S3 can hold virtually unlimited amounts of data, including documents, images, audio, and video files. It integrates seamlessly with other AWS services, acting as the repository from which applications can retrieve or upload data. While S3 is crucial for storing media or textual content, it does not process, analyze, or transform that content on its own. Its primary purpose is storage, not computation or analysis, making it a supporting service for workflows that involve other tools like Polly, Rekognition, or Transcribe.
Amazon Transcribe is specifically designed for converting spoken audio into written text, making it ideal for applications that require transcription, captioning, or subtitle generation. It uses advanced automatic speech recognition models to accurately transcribe audio from meetings, podcasts, videos, customer support calls, and other spoken content. Transcribe can handle multiple speakers, detect timestamps for easier alignment with video content, and support a variety of languages. Unlike Polly, which produces audio from text, Transcribe works in the opposite direction by taking audio input and creating a text output. This makes it particularly valuable for generating subtitles for videos, creating searchable text archives of audio recordings, and supporting accessibility initiatives.
Together, these AWS services cover a broad range of content processing needs. Polly converts text to speech, Rekognition analyzes images and video, S3 securely stores files, and Transcribe converts audio to text. By combining these services, organizations can build comprehensive multimedia applications that handle storage, visual analysis, audio transcription, and speech generation efficiently, each service playing a complementary role in the overall workflow.
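For the subtitle use case, a hedged boto3 sketch follows; the URIs, job name, and output bucket are placeholders.

```python
# Hedged sketch: transcribe video audio and ask Transcribe to emit subtitle
# files directly. URIs and the job name are placeholders.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="episode-01-subtitles",
    Media={"MediaFileUri": "s3://media-bucket/episode-01.mp4"},
    LanguageCode="en-US",
    Subtitles={"Formats": ["srt", "vtt"]},   # subtitle files written alongside transcript
    OutputBucketName="media-bucket-transcripts",
)
```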
Question 144
Which AWS service allows custom translation workflows using domain-specific vocabulary?
A) Amazon Translate Custom Terminology
B) Amazon Comprehend
C) Amazon Lex
D) Amazon Textract
Answer: A) Amazon Translate Custom Terminology
Explanation:
Amazon Comprehend performs NLP tasks. Amazon Lex builds chatbots. Amazon Textract extracts text. Amazon Translate Custom Terminology allows users to provide domain-specific vocabulary, ensuring accurate translation for specialized terms. This makes it ideal for industries with unique terminology.
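A hedged sketch of the custom terminology workflow follows: register a terminology CSV, then reference it in a translation request. The term list and names are illustrative.

```python
# Hedged sketch: register a custom terminology file, then use it in a
# translation request. The CSV content and names are placeholders.
import boto3

translate = boto3.client("translate")

csv_data = b"en,fr\nwidget,widget\n"          # keep the brand term untranslated
translate.import_terminology(
    Name="product-terms",
    MergeStrategy="OVERWRITE",
    TerminologyData={"File": csv_data, "Format": "CSV"},
)

resp = translate.translate_text(
    Text="Our widget ships worldwide.",
    SourceLanguageCode="en",
    TargetLanguageCode="fr",
    TerminologyNames=["product-terms"],
)
print(resp["TranslatedText"])
```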
Question 145
A company wants to build a simple virtual assistant that reads text responses aloud. Which two services work together to achieve this?
A) Amazon Lex and Amazon Polly
B) Amazon Rekognition and Amazon Textract
C) Amazon Comprehend and AWS Glue
D) Amazon Translate and Amazon RDS
Answer: A) Amazon Lex and Amazon Polly
Explanation:
Amazon Rekognition and Textract analyze images and documents. Amazon Comprehend analyzes text. Amazon Translate converts languages. Amazon RDS stores relational data. Amazon Lex manages conversational interfaces, and Amazon Polly converts the chatbot responses into speech. Combined, they create a virtual assistant capable of both understanding and speaking, making this the correct pairing.
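A hedged sketch of wiring the two services together: send user text to a Lex V2 bot, then voice the bot's reply with Polly. The bot ID, alias, session ID, and voice are placeholders.

```python
# Hedged sketch: pass user text to a Lex V2 bot, then synthesize the reply
# with Polly. Bot identifiers are placeholders.
import boto3

lex = boto3.client("lexv2-runtime")
polly = boto3.client("polly")

lex_resp = lex.recognize_text(
    botId="BOTID12345", botAliasId="TSTALIASID",
    localeId="en_US", sessionId="session-1",
    text="What are your opening hours?",
)
reply = lex_resp["messages"][0]["content"]

audio = polly.synthesize_speech(Text=reply, OutputFormat="mp3", VoiceId="Joanna")
with open("reply.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())
```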
Question 146
A company wants to use Amazon Bedrock to generate customer-friendly summaries of product reviews while ensuring the output stays aligned with corporate messaging guidelines. Which configuration helps achieve consistent tone across all generated summaries?
A) Increase temperature
B) Use model prompts with structured templates
C) Enable multi-turn conversation mode
D) Reduce context window size
Answer: B) Use model prompts with structured templates
Explanation:
Increasing temperature produces more creative and varied outputs. While useful for exploration, it does not support a stable and predictable writing tone. Generative models with higher variability tend to introduce stylistic differences between responses. For a company that wants consistency in customer-facing summaries, such unpredictable shifts make maintaining a uniform voice difficult.
Model prompts with structured templates provide predictable language patterns and tone guidance. This approach offers repeatable formats that the model can follow for every request. By embedding specific phrasing, structure, and tone instructions into the prompt, the company ensures that summaries remain aligned with corporate messaging. Templates help the model adhere to an established narrative style and prevent creative deviations. Such structured prompting is the most reliable method for controlling tone across repeated generations, which is crucial for brand consistency.
Enabling multi-turn conversation mode allows follow-up questions but does not guarantee consistent tone in standalone text outputs. Multi-turn capability enhances interaction but plays no role in maintaining a specific writing style across generated summaries. Since product review summarization is usually a single-turn process, this configuration does not help meet the requirement.
Reducing context window size limits how much information the model can consider. While this may constrain content, it does not help maintain consistent tone. In fact, reducing context may cause the model to lose important guidance that helps enforce specific writing styles. Limiting context size influences the volume of information available, but it does not promote messaging consistency.
Structured prompting with templates is the most effective method to maintain a consistent corporate tone in generated summaries.
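As a rough illustration, a fixed prompt template combined with a low temperature in a Bedrock Converse call might look like the following; the model ID, brand name, and template wording are assumptions, not a prescribed format.

```python
# Hedged sketch: a fixed prompt template plus a low temperature to keep
# Bedrock summaries on-brand. Model ID and wording are illustrative.
import boto3

bedrock = boto3.client("bedrock-runtime")

TEMPLATE = (
    "You are a {brand} support writer. Summarize the review below in two "
    "sentences, friendly and factual, ending with a thank-you.\n\nReview:\n{review}"
)

resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{
        "text": TEMPLATE.format(brand="Acme", review="Great blender, a bit loud."),
    }]}],
    inferenceConfig={"temperature": 0.2},   # low variability supports consistency
)
print(resp["output"]["message"]["content"][0]["text"])
```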
Question 147
A business wants to use Amazon Comprehend to identify personal data (PII) in user-submitted documents. They want high accuracy in multiple languages. What is the most suitable feature?
A) Syntax analysis
B) Key phrase extraction
C) PII detection
D) Entity-level sentiment
Answer: C) PII detection
Explanation:
Syntax analysis identifies grammatical components like parts of speech. While useful for linguistic understanding, it does not detect sensitive information such as emails, names, or phone numbers. Its purpose is structural parsing rather than privacy protection. Therefore, it does not meet the requirement for PII detection.
Key phrase extraction identifies central ideas within a text. It helps summarize content but does not analyze sensitive data categories. This feature cannot classify or detect personally identifiable information. Although valuable for highlighting important themes, it offers no protection for documents containing sensitive user information.
PII detection identifies sensitive elements such as addresses, names, IDs, and financial details. It supports multiple languages and is specifically designed to locate personal data for redaction or compliance purposes. This makes it ideal for businesses receiving user-submitted documents that may contain regulated personal information. Because the goal is to detect sensitive elements with accuracy across languages, PII detection is the most appropriate choice.
Entity-level sentiment focuses on how users feel about specific entities. While informative for customer analysis, it does not identify personally identifiable information. Sentiment analysis cannot detect structured sensitive details or help with compliance controls. Its focus is emotional interpretation, not privacy risk detection.
Thus, PII detection is the correct feature for identifying sensitive data in multilingual user documents.
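A minimal boto3 sketch of Comprehend's PII detection follows; the sample text is illustrative.

```python
# Minimal sketch: detect PII entities in submitted text with Comprehend.
# The sample text is illustrative.
import boto3

comprehend = boto3.client("comprehend")

resp = comprehend.detect_pii_entities(
    Text="Contact Jane Doe at jane@example.com or 555-0100.",
    LanguageCode="en",
)
for entity in resp["Entities"]:
    print(entity["Type"], entity["Score"])   # e.g. NAME, EMAIL, PHONE
```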
Question 148
A healthcare provider wants to analyze patient call recordings and identify medical terminology with high precision. Which Amazon Transcribe feature helps achieve this?
A) Language switching
B) Custom vocabulary
C) Partial result stabilization
D) Channel identification
Answer: B) Custom vocabulary
Explanation:
Language switching identifies when speakers change languages during speech. This is useful in multilingual scenarios but does not improve recognition of specialized terminology. It helps classify linguistic changes rather than enabling medical-specific accuracy enhancements. Therefore, it is not suitable for improving detection of medical words.
Custom vocabulary enhances transcription accuracy for domain-specific terms. Healthcare organizations often use unique terminology that generic speech recognition may not recognize well. By adding specialized terms to the vocabulary list, the transcription engine can properly identify and transcribe medical words. This feature is crucial for environments that require terminology precision, making it ideal for healthcare call analysis.
Partial result stabilization helps maintain smoother interim transcript updates. While beneficial for real-time applications, it does not influence the accuracy of specialized words. This feature ensures better temporary results but does not affect the recognition of medical terms. It improves user experience rather than transcription precision.
Channel identification separates speakers recorded on different audio channels. This helps differentiate voices but does not improve vocabulary recognition. While it aids transcription clarity for multi-channel audio, it does not enhance domain-specific accuracy.
Thus, custom vocabulary is the correct approach for improving recognition of medical terminology.
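A hedged sketch of the custom vocabulary workflow: register the domain terms, then reference the vocabulary when starting a transcription job. The term list, job name, and S3 URI are placeholders.

```python
# Hedged sketch: register medical terms as a custom vocabulary, then
# reference it in a transcription job. Terms and names are placeholders.
import boto3

transcribe = boto3.client("transcribe")

transcribe.create_vocabulary(
    VocabularyName="medical-terms",
    LanguageCode="en-US",
    Phrases=["metformin", "tachycardia", "angioplasty"],
)

transcribe.start_transcription_job(
    TranscriptionJobName="patient-call-001",
    Media={"MediaFileUri": "s3://call-recordings/call-001.wav"},
    LanguageCode="en-US",
    Settings={"VocabularyName": "medical-terms"},
)
```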
Question 149
A financial company wants to restrict Amazon Lex from generating responses that include sensitive regulatory details. Which configuration supports this requirement?
A) Response card configuration
B) Lex versioning
C) Guardrail controls
D) Dialog state management
Answer: C) Guardrail controls
Explanation:
Response card configuration helps present buttons or images to guide user interactions. While it enhances user experience, it does not prevent the bot from generating unwanted regulatory content. Cards offer UI support, not content protection. Therefore, this configuration is not suitable for controlling sensitive information output.
Lex versioning enables maintaining different versions of the bot. While important for release management, it does not provide content filtering capabilities. Versioning supports lifecycle processes, but it does not restrict specific response behaviors or enforce compliance-related limitations. It is organizational, not content-focused.
Guardrail controls restrict certain types of responses and help prevent inappropriate, risky, or undesired content. They enforce safety and compliance rules by preventing the bot from generating disallowed text. For highly regulated industries like finance, guardrails help ensure that conversational responses comply with regulatory standards. They enable control over what the bot is allowed to say, making them ideal for preventing unlawful or sensitive disclosures.
Dialog state management controls the flow of conversation but does not impose content restrictions. It guides turn-by-turn interactions but cannot block specific regulatory statements. While essential for conversation flow, it is unrelated to compliance filtering.
Thus, guardrail controls are the correct method for preventing sensitive regulatory responses.
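As a rough sketch, guardrail-style controls can be defined through Amazon Bedrock Guardrails, which Lex's generative features can attach to. The topic definition and messages below are illustrative, and the exact parameter shapes should be verified against the current API.

```python
# Hedged sketch: define a guardrail with a denied topic via Amazon Bedrock
# (Lex's generative features can attach such guardrails). Field values are
# illustrative and parameter shapes may vary.
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_guardrail(
    name="regulatory-guardrail",
    topicPolicyConfig={"topicsConfig": [{
        "name": "RegulatoryDetails",
        "definition": "Specific regulatory filings, rulings, or compliance advice.",
        "type": "DENY",
    }]},
    blockedInputMessaging="I can't discuss that topic.",
    blockedOutputsMessaging="I can't share regulatory details.",
)
```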
Question 150
A retail analytics firm uses Amazon Rekognition for product detection but wants to decrease detection of irrelevant items in crowded store images. What setting should be adjusted?
A) Image format
B) Confidence threshold
C) Label count limit
D) Bounding box visibility
Answer: B) Confidence threshold
Explanation:
Image format determines whether the input is JPEG, PNG, or another type. Changing the format does not significantly affect false positive rates. While certain formats may compress differently, they do not directly filter detections. Thus, image format cannot reduce irrelevant item detection.
Confidence threshold determines the minimum certainty required before the system returns a label. Increasing this threshold reduces false positives because it forces the model to produce detections only when it is more confident. In crowded store environments, objects overlap and cause noise, making this parameter crucial for controlling quality. It directly influences detection sensitivity, helping filter out low-confidence, irrelevant items.
Label count limit affects how many categories are returned but does not change the accuracy of detection. Limiting labels can shorten the output but still may include irrelevant items. It controls quantity rather than correctness.
Bounding box visibility controls how output bounding boxes are displayed. This has no effect on detection accuracy and is purely representational. Visibility settings do not influence the underlying detection engine.
Thus, the confidence threshold is the setting that reduces detection of irrelevant items.
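A minimal boto3 sketch of raising the confidence threshold in a Rekognition label detection call follows; the bucket and key are placeholders.

```python
# Minimal sketch: raise MinConfidence so Rekognition only returns labels it
# is highly sure about, filtering noise in crowded images.
import boto3

rekognition = boto3.client("rekognition")

resp = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "store-images", "Name": "aisle-04.jpg"}},
    MinConfidence=90,    # higher threshold -> fewer low-confidence detections
    MaxLabels=25,
)
for label in resp["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```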