Correct Answers
Question 1
What is the name of the architecture choice that allows for the most control over data?
Fully private
Which of the following is a challenge when deploying large language models for knowledge search in the enterprise?
All of the above
What is the benefit of using deterministic answers from large language models in knowledge search?
All of the above
What does a new hire to IBM Consulting want to use Sidekick AI for?
To get detailed information about how they're adhering to all the standards and equality
What is the main advantage of using prompt tuning over fine tuning when tailoring a large language model to a specific task?
Prompt tuning is more efficient as it avoids the need for extensive retraining and can be implemented more quickly
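As a rough illustration of why prompt tuning is lighter-weight than fine tuning, the sketch below freezes every weight of a Hugging Face-style causal LM (the "gpt2" checkpoint is only a stand-in) and trains nothing but a small block of virtual-token embeddings prepended to the input. The helper name forward_with_soft_prompt and the hyperparameter values are illustrative assumptions, not part of the course material.

```python
# Minimal prompt-tuning sketch: the base model stays frozen; only a small
# set of "virtual token" embeddings is trainable.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM that accepts inputs_embeds works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every base-model weight: fine tuning would update all of these.
for p in model.parameters():
    p.requires_grad = False

# Trainable "soft prompt": a handful of virtual-token embeddings.
num_virtual_tokens = 8
embed_dim = model.get_input_embeddings().embedding_dim
soft_prompt = torch.nn.Parameter(torch.randn(num_virtual_tokens, embed_dim) * 0.02)

def forward_with_soft_prompt(input_ids):
    token_embeds = model.get_input_embeddings()(input_ids)      # (batch, seq, dim)
    batch = input_ids.shape[0]
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)     # (batch, vtok, dim)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)
    return model(inputs_embeds=inputs_embeds)

# Only the soft prompt is optimized, so training touches a few thousand
# parameters instead of the full model.
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
```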
____________ is a data store built on open lakehouse architecture optimized for governed data and AI workloads.
watsonx.data
To ensure that solutions are targeted to real-world business scenarios, IBM will focus on the following AI use cases:
IT processes, Customer service, and HR and talent management
What are the three key characteristics of foundation models?
They are trained on a large amount of data, they often use unsupervised learning, and they are trained on a specific task
Which of the following options correctly explains why a common risk of using Large Language Models (LLMs) is that of "flagrantly false narratives"?
AI doesn’t know how to tell authentic stories
AI only guesses at the next best syntactically correct word and cannot infer meaning
Large Language Models are created by hackers
IBM is not governing the rules of AI in all situations
How can semantic search help reduce the number of documents needed for large language models in knowledge search?
By using cosine similarity
By using Azure Cognitive Search
By using Watson Discovery
By using embeddings
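The embedding-based options above can be made concrete with a short sketch: documents and the query are represented as vectors, cosine similarity ranks them, and only the closest few are handed to the LLM as context. The file names and the random vectors below are placeholders for real embeddings from whatever embedding model the search stack provides.

```python
# Sketch: shortlist documents with embeddings + cosine similarity before
# sending them to an LLM. Random vectors stand in for real embeddings.
import numpy as np

rng = np.random.default_rng(0)
doc_ids = ["hr-policy.pdf", "travel-guideline.pdf", "expense-faq.pdf"]   # hypothetical
doc_vecs = [rng.standard_normal(384) for _ in doc_ids]   # placeholder document embeddings
query_vec = rng.standard_normal(384)                      # placeholder query embedding

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank documents by similarity to the query and keep only the closest ones;
# only this short list is passed to the LLM as context.
scores = {doc: cosine_similarity(query_vec, v) for doc, v in zip(doc_ids, doc_vecs)}
top_docs = sorted(scores, key=scores.get, reverse=True)[:2]
print(top_docs)
```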
What is the Sidekick AI code analyzer assistant trained to do?
To create personas
To summarize calls
To generate code
To analyze code quality
Which method used by LLMs might favor generic words with higher probabilities over specific words with lower probabilities?
Sampling
Reinforcement learning
Maximum likelihood estimation
Beam search
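To see why a likelihood-maximizing decoder tends toward generic words, consider a toy next-token distribution; the words and probabilities below are made up for illustration. Greedy or beam-style selection always takes the single most probable, generic token, while sampling occasionally surfaces the rarer, more specific alternatives.

```python
# Toy next-token distribution: likelihood-maximizing decoding (greedy / beam
# search) always picks the highest-probability, generic token, while sampling
# can pick a more specific, lower-probability one.
import random

next_token_probs = {
    "thing": 0.40,        # generic, highest probability
    "issue": 0.25,
    "transformer": 0.20,  # specific, lower probability
    "lakehouse": 0.15,
}

greedy_choice = max(next_token_probs, key=next_token_probs.get)

random.seed(7)
sampled_choice = random.choices(
    list(next_token_probs), weights=list(next_token_probs.values()), k=1
)[0]

print("greedy/beam-style pick:", greedy_choice)   # always "thing"
print("sampled pick:", sampled_choice)            # may be a rarer, more specific word
```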