[7-Nov-2019 Update] Exam AI-100 VCE Dumps and AI-100 PDF Dumps from PassLeader

Valid AI-100 dumps shared by PassLeader to help you pass the AI-100 exam! PassLeader now offers the newest AI-100 VCE dumps and AI-100 PDF dumps; the PassLeader AI-100 exam questions have been updated and the answers have been corrected. Get the newest PassLeader AI-100 dumps with VCE and PDF here: https://www.passleader.com/ai-100.html (129 Q&As Dumps –> 170 Q&As Dumps)

BTW, DOWNLOAD part of PassLeader AI-100 dumps from Cloud Storage: https://drive.google.com/open?id=1jf5rzyh0mJhEXhEdB4n0-Z5mgfaZdywu

NEW QUESTION 111
You have an app named App1 that uses the Face API. App1 contains several PersonGroup objects. You discover that a PersonGroup object for an individual named Ben Smith cannot accept additional entries. The PersonGroup object for Ben Smith contains 10,000 entries. You need to ensure that additional entries can be added to the PersonGroup object for Ben Smith. The solution must ensure that Ben Smith can be identified by all the entries.
Solution: You modify the custom time interval for the training phase of App1.
Does this meet the goal?

A.    Yes
B.    No

Answer: B
Explanation:
Instead, use a LargePersonGroup. LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-use-large-scale

NEW QUESTION 112
You have an app named App1 that uses the Face API. App1 contains several PersonGroup objects. You discover that a PersonGroup object for an individual named Ben Smith cannot accept additional entries. The PersonGroup object for Ben Smith contains 10,000 entries. You need to ensure that additional entries can be added to the PersonGroup object for Ben Smith. The solution must ensure that Ben Smith can be identified by all the entries.
Solution: You create a second PersonGroup object for Ben Smith.
Does this meet the goal?

A.    Yes
B.    No

Answer: B
Explanation:
Instead, use a LargePersonGroup. LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-use-large-scale

NEW QUESTION 113
You have an app named App1 that uses the Face API. App1 contains several PersonGroup objects. You discover that a PersonGroup object for an individual named Ben Smith cannot accept additional entries. The PersonGroup object for Ben Smith contains 10,000 entries. You need to ensure that additional entries can be added to the PersonGroup object for Ben Smith. The solution must ensure that Ben Smith can be identified by all the entries.
Solution: You migrate all the entries to the LargePersonGroup object for Ben Smith.
Does this meet the goal?

A.    Yes
B.    No

Answer: A
Explanation:
LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-use-large-scale
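The migration in this solution can be sketched against the Face REST API. A minimal illustration (the endpoint, group id, and person name are placeholders; the calls are built but not sent, and adding persisted faces per person is omitted for brevity):

```python
def large_person_group_requests(endpoint, group_id, person_name):
    """Return the (method, url, json_body) REST calls needed to move a
    person into a LargePersonGroup, which scales to 1 million persons."""
    base = f"{endpoint}/face/v1.0/largepersongroups/{group_id}"
    return [
        # Create the LargePersonGroup itself.
        ("PUT", base, {"name": group_id}),
        # Add the person; persisted faces are then added per person.
        ("POST", f"{base}/persons", {"name": person_name}),
        # Training must be (re)queued before Identify can use the group.
        ("POST", f"{base}/train", None),
    ]

# Each tuple can be sent with requests, e.g.:
#   requests.put(url, json=body,
#                headers={"Ocp-Apim-Subscription-Key": key})
```

Note that, unlike PersonGroup, a LargePersonGroup must be explicitly retrained after changes before identification reflects the new entries.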

NEW QUESTION 114
Your company plans to develop a mobile app to provide meeting transcripts by using speech-to-text. Audio from the meetings will be streamed to provide real-time transcription. You need to recommend which task each meeting participant must perform to ensure that the transcripts of the meetings can identify all participants. Which task should you recommend?

A.    Record the meeting as an MP4.
B.    Create a voice signature.
C.    Sign up for Azure Speech Services.
D.    Sign up as a guest in Azure Active Directory (Azure AD).

Answer: B
Explanation:
The first step is to create voice signatures for the conversation participants. Creating voice signatures is required for efficient speaker identification.
Note: In addition to the standard baseline model used by the Speech Services, you can customize models to your needs with available data, to overcome speech recognition barriers such as speaking style, vocabulary and background noise.
https://docs.microsoft.com/bs-latn-ba/azure/cognitive-services/speech-service/how-to-use-conversation-transcription-service

NEW QUESTION 115
You need to create a prototype of a bot to demonstrate a user performing a task. The demonstration will use the Bot Framework Emulator. Which botbuilder CLI tool should you use to create the prototype?

A.    Chatdown
B.    QnAMaker
C.    Dispatch
D.    LuDown

Answer: A
Explanation:
Use Chatdown to produce prototype mock conversations in markdown and convert the markdown to transcripts you can load and view in the new V4 Bot Framework Emulator.
Incorrect:
Not B: QnA Maker is a cloud-based API service that lets you create a conversational question-and-answer layer over your existing data. Use it to build a knowledge base by extracting questions and answers from your semi-structured content, including FAQs, manuals, and documents. Answer users’ questions with the best answers from the QnAs in your knowledge base – automatically. Your knowledge base gets smarter, too, as it continually learns from user behavior.
Not C: Dispatch lets you build language models that allow you to dispatch between disparate components (such as QnA, LUIS and custom code).
Not D: LuDown builds LUIS language understanding models from markdown files.
https://github.com/microsoft/botframework/blob/master/README.md
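As a sketch of what Chatdown consumes (the dialog and file names are illustrative), a `.chat` file declares the participants and then lists the mock conversation as markdown dialog lines:

```
user=Ben
bot=DemoBot

bot: Hi! How can I help you today?
user: I'd like to book a meeting room.
bot: Sure - which day works for you?
```

Converting it with `chatdown demo.chat > demo.transcript` produces a `.transcript` file you can open in the Bot Framework Emulator to play back the prototype conversation.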

NEW QUESTION 116
You are designing an AI solution that will provide feedback to teachers who train students over the Internet. The students will be in classrooms located in remote areas. The solution will capture video and audio data of the students in the classrooms. You need to recommend Azure Cognitive Services for the AI solution to meet the following requirements:
– Alert teachers if a student facial expression indicates the student is angry or scared.
– Identify each student in the classrooms for attendance purposes.
– Allow the teachers to log voice conversations as text.
Which Cognitive Services should you recommend?

A.    Face API and Text Analytics.
B.    Computer Vision and Text Analytics.
C.    QnA Maker and Computer Vision.
D.    Speech to Text and Face API.

Answer: D
Explanation:
Speech-to-text from Azure Speech Services enables real-time transcription of audio streams into text that your applications, tools, or devices can consume, display, and take action on as command input. Face detection: detect one or more human faces in an image and get back face rectangles for where in the image the faces are, along with face attributes that contain machine learning-based predictions of facial features. The face attributes available are: Age, Emotion, Gender, Pose, Smile, and Facial Hair, along with 27 landmarks for each face in the image.
https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-to-text
https://azure.microsoft.com/en-us/services/cognitive-services/face/
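The "angry or scared" alerting requirement maps to the Face API's emotion attributes (request detection with `returnFaceAttributes=emotion`). A minimal sketch of the alert check, assuming the documented response shape; the 0.5 threshold is illustrative, not prescribed:

```python
def is_distressed(face, threshold=0.5):
    """Given one face result from the Face API detect call (made with
    returnFaceAttributes=emotion), flag anger or fear above the threshold.
    Emotion scores are confidences between 0 and 1."""
    emotion = face["faceAttributes"]["emotion"]
    return (emotion.get("anger", 0.0) >= threshold
            or emotion.get("fear", 0.0) >= threshold)
```

A teacher alert would then fire whenever `is_distressed` returns `True` for any student face in a captured frame.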

NEW QUESTION 117
You need to evaluate trends in fuel prices during a period of 10 years. The solution must identify unusual fluctuations in prices and produce visual representations. Which Azure Cognitive Services API should you use?

A.    Anomaly Detector
B.    Computer Vision
C.    Text Analytics
D.    Bing Autosuggest

Answer: A
Explanation:
The Anomaly Detector API enables you to monitor and detect abnormalities in your time series data with machine learning. The Anomaly Detector API adapts by automatically identifying and applying the best-fitting models to your data, regardless of industry, scenario, or data volume. Using your time series data, the API determines boundaries for anomaly detection, expected values, and which data points are anomalies.
https://docs.microsoft.com/en-us/azure/cognitive-services/anomaly-detector/overview
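For the fuel-price scenario, the ten years of prices become a time series in the request body of the Anomaly Detector "entire series" endpoint. A minimal payload-building sketch (the sample timestamps and prices are illustrative):

```python
def build_detect_request(points, granularity="monthly"):
    """Shape (ISO-8601 timestamp, price) pairs into the request body for
    POST {endpoint}/anomalydetector/v1.0/timeseries/entire/detect."""
    # The response's isAnomaly array marks which points are unusual,
    # which drives the visual representation of fluctuations.
    return {
        "series": [{"timestamp": t, "value": v} for t, v in points],
        "granularity": granularity,
    }
```

The returned body is POSTed with the `Ocp-Apim-Subscription-Key` header; the response also includes expected values and boundaries for plotting.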

NEW QUESTION 118
You plan to perform analytics of the medical records of patients located around the world. You need to recommend a solution that avoids storing and processing data in the cloud. What should you include in the recommendation?

A.    Azure Machine Learning Studio
B.    the Text Analytics API that has container support
C.    Azure Machine Learning services
D.    an Apache Spark cluster that uses MMLSpark

Answer: D
Explanation:
The Microsoft Machine Learning Library for Apache Spark (MMLSpark) assists in provisioning scalable machine learning models for large datasets, especially for deep learning problems. MMLSpark works with SparkML pipelines, including Microsoft CNTK and the OpenCV library, which provide end-to-end support for the ingress and processing of image input data, categorization of images, and text analytics using pre-trained deep learning algorithms.
https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781789131956/10/ch10lvl1sec61/an-overview-of-the-microsoft-machine-learning-library-for-apache-spark-mmlspark

NEW QUESTION 119
Your company has an on-premises datacenter. You plan to publish an app that will recognize a set of individuals by using the Face API. The model is trained. You need to ensure that all images are processed in the on-premises datacenter. What should you deploy to host the Face API?

A.    A Docker container
B.    Azure File Sync
C.    Azure Application Gateway
D.    Azure Data Box Edge

Answer: A
Explanation:
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Incorrect:
Not D: Azure Data Box Edge is an AI-enabled edge computing device with network data transfer capabilities. Data Box Edge is a hardware-as-a-service solution: Microsoft ships you a cloud-managed device with a built-in Field Programmable Gate Array (FPGA) that enables accelerated AI inferencing and has all the capabilities of a storage gateway.
https://www.docker.com/resources/what-container
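A minimal deployment sketch, following the documented pattern for Cognitive Services containers (the billing endpoint and key placeholders come from your own Azure Face resource, which is still required for metering even though the images themselves stay on-premises):

```shell
# Pull the Face container image and run it in the local datacenter;
# images sent to localhost:5000 never leave the premises.
docker pull mcr.microsoft.com/azure-cognitive-services/face
docker run --rm -p 5000:5000 --memory 6g --cpus 2 \
  mcr.microsoft.com/azure-cognitive-services/face \
  Eula=accept \
  Billing=<your-face-resource-endpoint> \
  ApiKey=<your-face-resource-key>
```

The app then targets `http://localhost:5000` (or the host's address) instead of the cloud Face endpoint.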

NEW QUESTION 120
You have a Bing Search service that is used to query a product catalog. You need to identify the following information:
– The locale of the query
– The top 50 query strings
– The number of calls to the service
– The top geographical regions of the service
What should you implement?

A.    Bing Statistics
B.    Azure API Management (APIM)
C.    Azure Monitor
D.    Azure Application Insights

Answer: A
Explanation:
The Bing Statistics add-in provides metrics such as call volume, top queries, API response, code distribution, and market distribution. The rich slicing-and-dicing capability lets you gain a deeper understanding of your users and their usage to inform your business strategy.
https://www.bingapistatistics.com/

NEW QUESTION 121
You have a Face API solution that updates in real time. A pilot of the solution runs successfully on a small dataset. When you attempt to use the solution on a larger dataset that continually changes, the performance degrades, and it takes longer to recognize existing faces. You need to recommend changes to reduce the time it takes to recognize existing faces without increasing costs. What should you recommend?

A.    Change the solution to use the Computer Vision API instead of the Face API.
B.    Separate training into an independent pipeline and schedule the pipeline to run daily.
C.    Change the solution to use the Bing Image Search API instead of the Face API.
D.    Distribute the face recognition inference process across many Azure Cognitive Services instances.

Answer: B
Explanation:
Incorrect:
Not A: Computer Vision inspects each image associated with an incoming article to scrape out written words from the image and determine what types of objects are present in the image.
Not C: The Bing API provides an experience similar to Bing.com/search by returning search results that Bing determines are relevant to a user’s query. The results include Web pages and may also include images, videos, and more.
Not D: That would increase cost.
https://github.com/Azure/cognitive-services

NEW QUESTION 122
You plan to deploy a global healthcare app named App1 to Azure. App1 will use Azure Cognitive Services APIs. Users in Germany, Canada, and the United States will connect to App1. You need to recommend an app deployment solution to ensure that all the personal data of the users remains in their country of origin only. Which three Azure services should you recommend deploying to each Azure region? (Each correct answer presents part of the solution. Choose three.)

A.    Azure Key Vault
B.    Azure Traffic Manager
C.    Azure Kubernetes Service (AKS)
D.    App1
E.    the Cognitive Services resources
F.    an Azure Storage resource

Answer: ADF
Explanation:
https://github.com/microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md

NEW QUESTION 123
Hotspot
Your company plans to deploy several apps that will use Azure Cognitive Services APIs. You need to recommend which Cognitive Services APIs must be used to meet the following requirements:
– Must be able to identify inappropriate text and profanities in multiple languages.
– Must be able to interpret user requests sent by using text input.
– Must be able to identify named entities in text.
Which API should you recommend for each requirement? (To answer, select the appropriate options in the answer area.)
PassLeader-AI-100-dumps-1231

Answer:
PassLeader-AI-100-dumps-1232
Explanation:
Box 1: Content Moderator. The Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your app can then handle flagged content in order to comply with regulations or maintain the intended environment for users.
Box 2: Language Understanding (LUIS). Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills valuable information from sentences (entities), for a high quality, nuanced language model. LUIS integrates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot.
Box 3: Text Analytics. The Text Analytics API is a cloud-based service that provides advanced natural language processing over raw text, and includes four main functions: sentiment analysis, key phrase extraction, named entity recognition, and language detection.
https://docs.microsoft.com/bs-latn-ba/azure/cognitive-services/content-moderator/overview
https://www.luis.ai/home
https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/
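For Box 3, the named-entity requirement maps to the Text Analytics entities endpoint. A minimal payload-building sketch (the sample sentence is illustrative; the endpoint path follows the v2.1 API):

```python
def build_entities_request(texts, language="en"):
    """Shape raw strings into the Text Analytics documents payload for
    POST {endpoint}/text/analytics/v2.1/entities."""
    return {"documents": [
        # Each document needs a unique string id and a language tag.
        {"id": str(i + 1), "language": language, "text": text}
        for i, text in enumerate(texts)
    ]}
```

The response lists the recognized entities (people, places, organizations, and so on) per document id.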

NEW QUESTION 124
Drag and Drop
You are designing an AI solution that will use IoT devices to gather data from conference attendees and then analyze the data. The IoT device will connect to an Azure IoT hub. You need to ensure that data contains no personally identifiable information before it is sent to the IoT hub. Which three actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)
PassLeader-AI-100-dumps-1241

Answer:
PassLeader-AI-100-dumps-1242
Explanation:
ASA Edge jobs run in containers deployed to Azure IoT Edge devices. They are composed of two parts:
1. A cloud part that is responsible for the job definition: users define inputs, outputs, the query, and other settings (such as out-of-order event handling) in the cloud.
2. A module running on your IoT devices. It contains the ASA engine and receives the job definition from the cloud.
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge
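As an illustration of step 2, the query running in the edge module can project away identifying fields so nothing personal reaches the IoT hub; the stream and column names below are hypothetical:

```sql
-- Hypothetical ASA Edge query: forward only non-identifying telemetry,
-- dropping attendee names and email addresses at the edge.
SELECT
    deviceId,
    eventTime,
    sessionId,
    dwellSeconds
INTO
    iotHubOutput
FROM
    sensorInput
```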

NEW QUESTION 125
You need to meet the testing requirements for the data scientists. Which three actions should you perform? (Each correct answer presents part of the solution. Choose three.)

A.    Deploy an Azure Kubernetes Service (AKS) cluster to the East US 2 region.
B.    Get the docker image from mcr.microsoft.com/azure-cognitive-services/sentiment:latest.
C.    Deploy an Azure Container Service cluster to the West Europe region.
D.    Export the production version of the Language Understanding (LUIS) app.
E.    Deploy a Kubernetes cluster to Azure Stack.
F.    Get the docker image from mcr.microsoft.com/azure-cognitive-services/luis:latest.
G.    Export the staging version of the Language Understanding (LUIS) app.

Answer: EFG

NEW QUESTION 126
……

