Fine-tuning an LLM on a single task adjusts model parameters to specialize it in a particular domain. What is the primary challenge of single-task fine-tuning compared to multi-task fine-tuning?
In the context of a machine-learning supply-chain attack, which of the following is a critical component that attackers may target?
An AI system is generating confident but incorrect outputs, commonly known as hallucinations. Which strategy would most likely reduce the occurrence of such hallucinations and improve the trustworthiness of the system?
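One mitigation strategy this question alludes to is grounding: checking a draft answer against trusted evidence and abstaining when support is weak. The sketch below is purely illustrative; the word-overlap support check and the 0.8 threshold are arbitrary stand-ins for a real entailment or citation-verification step.

```python
# Toy hallucination guard: answer only when the draft is supported
# by a trusted evidence passage; otherwise abstain.

def is_supported(answer, evidence, threshold=0.8):
    """Fraction of answer words that also appear in the evidence.
    A crude stand-in for a real factual-support check."""
    a = set(answer.lower().split())
    e = set(evidence.lower().split())
    return len(a & e) / max(len(a), 1) >= threshold

def guarded_answer(draft, evidence):
    """Return the draft only if supported; otherwise abstain."""
    return draft if is_supported(draft, evidence) else "Insufficient evidence to answer."

evidence = "The Eiffel Tower is located in Paris, France."
print(guarded_answer("The Eiffel Tower is located in Paris", evidence))
print(guarded_answer("The Eiffel Tower is in Berlin Germany", evidence))
```

The second call abstains because too few of its words are backed by the evidence, illustrating how abstention trades coverage for trustworthiness.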
What role does GenAI play in automating vulnerability scanning and remediation processes?
Which of the following is a characteristic of domain-specific Generative AI models?
In the Retrieval-Augmented Generation (RAG) framework, which of the following is the most critical factor for improving factual consistency in generated outputs?
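The RAG pattern this question refers to can be sketched as retrieve-then-generate: rank passages against the query, then build a prompt grounded in the top hits. The corpus, the word-overlap scoring, and the prompt wording below are illustrative placeholders, not a real retrieval pipeline.

```python
# Minimal sketch of the RAG pattern: retrieve supporting passages,
# then ground the generation prompt in them.

def retrieve(query, corpus, k=2):
    """Rank passages by naive word overlap with the query
    (a stand-in for an embedding-based retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query, corpus):
    """Prepend retrieved evidence so the model answers from the
    documents rather than from parametric memory alone."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "RAG grounds generation in retrieved documents.",
    "Fine-tuning adjusts model weights on task data.",
    "Retrieval quality drives factual consistency in RAG.",
]
prompt = build_grounded_prompt("What drives factual consistency in RAG?", corpus)
print(prompt)
```

The sketch highlights why retrieval quality is often the critical factor: the generator can only be as factually consistent as the passages placed in its context.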
What is a key concept behind developing a Generative AI (GenAI) large language model (LLM)?
What is a potential risk associated with hallucinations in LLMs, and how should it be addressed to ensure Responsible AI?