Building AI solutions by using ML APIs or foundational models

Architecting Low-Code AI Solutions

Question 1 of 10
Your company needs to build a solution to extract structured data from thousands of scanned invoices in various formats. The invoices contain standard fields like invoice number, date, line items, and totals, but the layouts differ across vendors. Which Google Cloud solution would be most appropriate for this use case?
Explanation
Document AI with a pre-trained invoice parser processor is the most appropriate solution. Document AI provides specialized processors for common document types like invoices that can handle various layouts and extract structured data without requiring training. Vision API (option A) would require significant custom logic to structure the extracted text. AutoML Vision (option C) is designed for image classification, not structured data extraction, and would be unnecessarily complex. A custom Vertex AI model (option D) would require substantial training data and development effort when a pre-trained solution already exists.
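Calling a pre-trained invoice parser typically looks like the sketch below. The project, location, and processor ID are placeholders, and the client call assumes the `google-cloud-documentai` library with valid credentials; this is a minimal sketch, not a production pipeline.

```python
def processor_path(project: str, location: str, processor_id: str) -> str:
    """Full resource name expected by Document AI's process endpoint."""
    return f"projects/{project}/locations/{location}/processors/{processor_id}"


def parse_invoice(pdf_bytes: bytes, name: str) -> dict:
    """Send one scanned invoice to a pre-trained invoice parser processor.

    Requires the google-cloud-documentai client library and credentials.
    """
    from google.cloud import documentai  # third-party; imported lazily

    client = documentai.DocumentProcessorServiceClient()
    request = documentai.ProcessRequest(
        name=name,
        raw_document=documentai.RawDocument(
            content=pdf_bytes, mime_type="application/pdf"),
    )
    result = client.process_document(request=request)
    # Extracted fields (invoice number, totals, line items, ...) come back as
    # typed entities; a flat dict is enough for a sketch, though repeated
    # entity types such as line items would overwrite each other here.
    return {e.type_: e.mention_text for e in result.document.entities}
```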
Question 2 of 10
You are building a customer service chatbot that needs to answer questions based on your company's internal documentation, product manuals, and FAQ databases. The solution should provide accurate, contextual answers with source citations. Which Vertex AI approach would be most efficient?
Explanation
Vertex AI Agent Builder with RAG (Retrieval Augmented Generation) is the most efficient solution. It allows you to connect your documentation as a data source and automatically retrieves relevant context to generate accurate, grounded responses with citations. Fine-tuning (option A) is more complex, expensive, and doesn't inherently provide source citations. A custom BERT implementation (option C) would require significant development effort. Dialogflow CX (option D) is better for intent-based conversational flows rather than open-ended question answering over large document sets.
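Agent Builder handles retrieval and grounding for you, but the underlying RAG pattern can be illustrated with a plain prompt builder. The passages stand in for whatever the retriever returns; the instruction to cite `[n]` is what yields source citations in the generated answer.

```python
def build_grounded_prompt(question: str, passages: list) -> str:
    """Assemble retrieved passages into a grounded prompt that asks the
    model to answer only from the supplied sources and cite them as [n]."""
    sources = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, start=1))
    return (
        "Answer the question using only the sources below. "
        "Cite each fact with its source number, e.g. [1]. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )
```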
Question 3 of 10
Your retail company wants to improve product recommendations on your e-commerce platform. You have transaction history, product catalog data, and user browsing behavior. You need a solution that can be implemented quickly without extensive ML expertise. What should you use?
Explanation
The Retail API with Recommendations AI is specifically designed for e-commerce use cases and provides pre-built recommendation capabilities optimized for retail scenarios. It handles product catalog integration, user events, and generates personalized recommendations without requiring ML expertise. BigQuery ML (option A) requires more manual feature engineering and model development. AutoML Tables (option C) would need significant data preparation and isn't optimized for recommendation scenarios. A custom TensorFlow Recommenders model (option D) requires advanced ML expertise and longer development time.
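A prediction request roughly takes the shape below. This is a sketch assuming the `google-cloud-retail` client library, an already-ingested catalog, and a configured serving config; the event type and IDs are example values.

```python
def placement_path(project: str, catalog: str, serving_config: str) -> str:
    """Serving-config resource name used by the Retail predict call."""
    return (f"projects/{project}/locations/global/catalogs/{catalog}"
            f"/servingConfigs/{serving_config}")


def recommend(visitor_id: str, product_id: str, placement: str) -> list:
    """Request recommendations for a user viewing one product.

    Assumes the google-cloud-retail client and an ingested product catalog.
    """
    from google.cloud import retail_v2  # third-party; imported lazily

    client = retail_v2.PredictionServiceClient()
    event = retail_v2.UserEvent(
        event_type="detail-page-view",
        visitor_id=visitor_id,
        product_details=[retail_v2.ProductDetail(
            product=retail_v2.Product(id=product_id))],
    )
    response = client.predict(retail_v2.PredictRequest(
        placement=placement, user_event=event))
    return [r.id for r in response.results]
```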
Question 4 of 10
You need to build a sentiment analysis solution to classify customer feedback emails into positive, negative, or neutral categories. You have 500 labeled examples and need a solution deployed within a week. Which approach would best meet these requirements?
Explanation
The Natural Language API's analyzeSentiment method is the best choice because it provides pre-trained sentiment analysis that works immediately without training, meeting the one-week timeline. While you have labeled data, 500 examples is relatively small. AutoML Natural Language (option B) typically requires more examples (1,000+) for good performance and takes time to train. Fine-tuning a foundational model (option C) requires more technical expertise and setup time. BigQuery ML (option D) would require feature engineering and may not achieve the same accuracy as the pre-trained API for sentiment analysis.
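The API returns a sentiment score between -1.0 and 1.0 rather than a class label, so mapping scores onto positive/negative/neutral needs a threshold of your own choosing. The 0.25 cut-off below is an assumption to tune against your data; the client call assumes the `google-cloud-language` library.

```python
def label_from_score(score: float, threshold: float = 0.25) -> str:
    """Map an analyzeSentiment score (-1.0 to 1.0) onto three classes.

    The 0.25 cut-off is an assumption; tune it against labeled examples.
    """
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"


def classify_email(text: str) -> str:
    """Run pre-trained sentiment analysis on one email body.

    Assumes the google-cloud-language client library and credentials.
    """
    from google.cloud import language_v1  # third-party; imported lazily

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    return label_from_score(sentiment.score)
```

Your 500 labeled examples remain useful here: run them through the API once and pick the threshold that best matches the labels.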
Question 5 of 10
Your organization processes medical research papers and needs to extract specific entities like drug names, dosages, medical conditions, and treatment protocols. The terminology is highly specialized. Which solution would be most effective?
Explanation
Healthcare Natural Language API is specifically designed for medical and life sciences content and includes pre-trained models for extracting medical entities like medications, conditions, and procedures with domain-specific understanding. The standard Natural Language API (option B) lacks medical domain specialization. AutoML Entity Extraction (option C) would require extensive labeled medical training data. Vision API with regex (option D) wouldn't understand medical context or handle variations in medical terminology effectively.
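The Healthcare Natural Language API is invoked via a REST method on a project's location; the sketch below builds that request. The endpoint shape and `documentContent` field follow the v1 API surface as I understand it, so verify them against the current documentation before relying on them.

```python
def analyze_entities_request(project: str, location: str, text: str):
    """Build the endpoint URL and JSON body for the Healthcare Natural
    Language API's nlp:analyzeEntities method (v1 surface; verify field
    names against current docs)."""
    url = (f"https://healthcare.googleapis.com/v1/projects/{project}"
           f"/locations/{location}/services/nlp:analyzeEntities")
    body = {"documentContent": text}
    return url, body


# Sending it (OAuth bearer token required; `requests` assumed available):
# import requests
# url, body = analyze_entities_request(
#     "my-project", "us-central1", "Patient given 500 mg amoxicillin")
# resp = requests.post(url, json=body,
#                      headers={"Authorization": f"Bearer {token}"})
# The response's entity mentions carry medical types (medications,
# conditions, procedures) with links to standard vocabularies.
```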
Question 6 of 10
You are building a multilingual content moderation system for a global social media platform. You need to detect inappropriate content in 50+ languages with minimal latency. Which Google Cloud solution should you choose?
Explanation
Perspective API is specifically designed for content moderation and toxicity detection, supporting multiple languages natively with low latency. It's optimized for this exact use case. Training separate AutoML models (option A) would be extremely expensive and complex to maintain. Translating to English first (option B) adds latency and potential translation errors. While PaLM 2 (option D) could work, it would have higher latency and cost compared to the specialized Perspective API.
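A Perspective API call is a single REST request scoring one comment. The body builder below is runnable as-is; the commented POST assumes an API key and the `requests` library. Omitting `languages` lets the API auto-detect the language, which suits a 50+ language platform.

```python
def toxicity_request(text: str, languages=None) -> dict:
    """JSON body for Perspective API's comments:analyze method."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    if languages:  # omit to let the API auto-detect the language
        body["languages"] = languages
    return body


# Posting it (API key required; `requests` assumed available):
# import requests
# resp = requests.post(
#     "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze",
#     params={"key": API_KEY},
#     json=toxicity_request("some user comment"),
# )
# score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```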
Question 7 of 10
Your company wants to generate product descriptions for 10,000 items in your catalog using AI. Each description should be creative, engaging, and maintain your brand voice. You have 50 example descriptions that exemplify your desired style. What's the most cost-effective approach?
Explanation
PaLM 2's text-bison model with few-shot prompting is the most cost-effective solution. You can include your 50 examples in the prompt context to guide the model's style without fine-tuning, and the API can efficiently generate all 10,000 descriptions. Fine-tuning (option B) requires more setup, and 50 examples are insufficient for quality fine-tuning. AutoML Natural Language (option C) doesn't support open-ended text generation for this use case. A custom seq2seq model (option D) would require significant development effort and more training data.
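Few-shot prompting boils down to prepending example pairs to each request. The prompt builder below is runnable; the model call assumes the Vertex AI SDK (`google-cloud-aiplatform`) and its `vertexai.language_models` module, and in practice you would pick a handful of the 50 examples per call to stay within the context window rather than all 50.

```python
def few_shot_prompt(examples: list, product: str) -> str:
    """Prepend (product, description) example pairs to steer style.

    `examples` is a list of (name, description) tuples; select a subset
    per call to stay within the model's context window.
    """
    shots = "\n\n".join(
        f"Product: {name}\nDescription: {desc}" for name, desc in examples)
    return f"{shots}\n\nProduct: {product}\nDescription:"


def generate_description(prompt: str) -> str:
    """Generate one description with the text-bison PaLM 2 model.

    Assumes the Vertex AI SDK is installed and initialized.
    """
    from vertexai.language_models import TextGenerationModel  # third-party

    model = TextGenerationModel.from_pretrained("text-bison")
    return model.predict(
        prompt, temperature=0.7, max_output_tokens=256).text
```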
Question 8 of 10
You need to implement a solution that converts customer support call recordings to text, identifies key topics discussed, and determines customer sentiment. The solution should process files uploaded to Cloud Storage. Which combination of services would be most appropriate?
Explanation
Contact Center AI Insights is purpose-built for exactly this use case, providing integrated call transcription, topic extraction, sentiment analysis, and conversation analytics in a single solution optimized for customer service scenarios. While option A would work, it requires more integration effort and doesn't provide call-specific insights. Option C would work but is more complex to orchestrate. Option D requires training custom models when pre-built solutions exist specifically for this use case.
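For comparison, the do-it-yourself pipeline the explanation mentions as option A (Speech-to-Text plus the Natural Language API) roughly amounts to the sketch below; Contact Center AI Insights bundles these steps plus topic extraction. The client calls assume the `google-cloud-speech` and `google-cloud-language` libraries, and the recognition config is a placeholder for your actual audio format.

```python
def join_transcript(chunks: list) -> str:
    """Concatenate per-segment transcripts into one document."""
    return " ".join(c.strip() for c in chunks if c.strip())


def transcribe_and_score(gcs_uri: str):
    """Transcribe one call recording from Cloud Storage, then score the
    transcript's sentiment. Assumes google-cloud-speech and
    google-cloud-language clients; config values are placeholders."""
    from google.cloud import language_v1, speech  # third-party

    stt = speech.SpeechClient()
    operation = stt.long_running_recognize(
        config=speech.RecognitionConfig(language_code="en-US"),
        audio=speech.RecognitionAudio(uri=gcs_uri),
    )
    transcript = join_transcript(
        [r.alternatives[0].transcript for r in operation.result().results])

    nl = language_v1.LanguageServiceClient()
    doc = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = nl.analyze_sentiment(
        request={"document": doc}).document_sentiment
    return transcript, sentiment.score
```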
Question 9 of 10
Your application needs to classify images of manufactured products into 200 defect categories. You have 50,000 labeled images. Training time and ongoing maintenance should be minimized. Which approach should you take?
Explanation
AutoML Vision is the best choice for this scenario. With 50,000 labeled images across 200 categories, you have sufficient data for AutoML, which handles the entire ML pipeline including feature engineering, architecture search, and hyperparameter tuning with minimal maintenance. Vision API (option A) provides generic labels, not custom defect categories. Fine-tuning ViT (option C) requires more ML expertise and manual tuning. A custom CNN (option D) requires significant ML engineering effort when AutoML can achieve comparable results with less maintenance.
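Launching such a training job through the Vertex AI SDK looks roughly like the sketch below (AutoML Vision now lives under Vertex AI). Dataset names, the import CSV, and the budget are example values; the calls assume the `google-cloud-aiplatform` library.

```python
def milli_node_hours(node_hours: float) -> int:
    """AutoML training budgets are specified in milli node hours."""
    return int(node_hours * 1000)


def train_defect_classifier(project: str, import_csv_uri: str):
    """Create an image dataset and launch AutoML classification training.

    Assumes the google-cloud-aiplatform SDK; display names, region, and
    the 8-node-hour budget are illustrative choices.
    """
    from google.cloud import aiplatform  # third-party; imported lazily

    aiplatform.init(project=project, location="us-central1")
    dataset = aiplatform.ImageDataset.create(
        display_name="product-defects",
        gcs_source=import_csv_uri,
        import_schema_uri=(aiplatform.schema.dataset.ioformat.image
                           .single_label_classification),
    )
    job = aiplatform.AutoMLImageTrainingJob(
        display_name="defect-classifier",
        prediction_type="classification",
    )
    return job.run(
        dataset=dataset,
        model_display_name="defect-model",
        budget_milli_node_hours=milli_node_hours(8),
    )
```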
Question 10 of 10
You are building a document search system for legal contracts where users ask natural language questions and need to find relevant contract clauses with exact source references. The contract database updates weekly. Which Vertex AI solution provides the best balance of accuracy and maintenance?
Explanation
Vertex AI Search (part of Agent Builder) with unstructured data is ideal for this use case. It handles document ingestion from Cloud Storage, provides semantic search capabilities, returns extractive answers with source citations, and can easily handle weekly updates through re-indexing. Fine-tuning PaLM 2 weekly (option B) is impractical, expensive, and doesn't provide source citations. Matching Engine (option C) requires manual embedding generation and doesn't provide extractive answers. BigQuery ML (option D) requires more custom development for document chunking, embedding, and answer extraction compared to the purpose-built Vertex AI Search.
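Querying an indexed data store goes through the Discovery Engine API; a sketch follows. The serving-config path format and client calls follow the `google-cloud-discoveryengine` library as I understand it, so confirm them against the current reference before relying on them.

```python
def serving_config_path(project: str, data_store: str) -> str:
    """Default serving config for a Vertex AI Search data store (path
    format per the Discovery Engine API; verify against current docs)."""
    return (f"projects/{project}/locations/global/collections"
            f"/default_collection/dataStores/{data_store}"
            f"/servingConfigs/default_search")


def search_contracts(query: str, serving_config: str) -> list:
    """Run a natural-language query against an indexed contract store.

    Assumes the google-cloud-discoveryengine client and credentials.
    """
    from google.cloud import discoveryengine_v1 as de  # third-party

    client = de.SearchServiceClient()
    response = client.search(de.SearchRequest(
        serving_config=serving_config, query=query, page_size=5))
    # Each result carries the matched document, from which extractive
    # answers and source references are surfaced.
    return [result.document for result in response]
```

Weekly contract updates then only require re-importing documents into the data store; no model retraining is involved.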