How does Genesys fine-tune large language models (LLMs)?
Hallucinations are mitigated by fine-tuning models on conversational datasets selected for specific use cases, such as Customer Care, and for industry verticals, such as healthcare, financial services, and retail. This process can significantly reduce hallucinations by reweighting the model toward the use case.
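As a minimal sketch of what such a curated conversational dataset can look like, the snippet below builds a JSONL training file scoped to a Customer Care scenario. The schema (a `messages` list with `role`/`content` pairs) is a common convention for chat fine-tuning, not the actual Genesys format, and the example dialogues are invented for illustration.

```python
import json

# Illustrative conversational fine-tuning examples for a Customer Care
# use case. The schema below is a common chat fine-tuning convention,
# not a documented Genesys format.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a Customer Care assistant for a retail bank."},
            {"role": "user", "content": "How do I reset my online banking password?"},
            {"role": "assistant", "content": "You can reset it from the login page via 'Forgot password'."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a Customer Care assistant for a retail bank."},
            {"role": "user", "content": "Can I open a savings account online?"},
            {"role": "assistant", "content": "Yes, you can open one from the Accounts page after signing in."},
        ]
    },
]

def to_jsonl(records):
    """Serialize each training example as one JSON object per line (JSONL)."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(examples)
```

Fine-tuning frameworks typically consume one example per line in this form, which makes it easy to filter the dataset down to a single vertical before training.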
Prompting best practices instruct the large language model (LLM) to avoid fabricating answers and to say “I don’t know” when a question isn’t relevant or answerable. This approach keeps the LLM’s responses high-confidence by constraining them with examples of correct outputs and by setting the temperature as low as possible so that output is close to deterministic.
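These practices can be sketched as a request builder: a system instruction that mandates refusal over fabrication, few-shot examples of correct outputs, and a temperature pinned to zero. The function and payload shape below are hypothetical (modeled on common chat-completion APIs), not a Genesys interface.

```python
def build_request(question, few_shot, temperature=0.0):
    """Assemble a chat-style request that instructs the model to refuse
    rather than fabricate, constrains it with example outputs, and keeps
    temperature low for near-deterministic answers. Hypothetical payload
    shape, modeled on common chat-completion APIs."""
    system = (
        "Answer only questions about our products and services. "
        "If the question is not relevant or answerable, reply exactly: I don't know."
    )
    messages = [{"role": "system", "content": system}]
    # Few-shot examples of correct outputs constrain the response style.
    for example_question, example_answer in few_shot:
        messages.append({"role": "user", "content": example_question})
        messages.append({"role": "assistant", "content": example_answer})
    messages.append({"role": "user", "content": question})
    return {"messages": messages, "temperature": temperature}
```

A temperature of 0 makes the model greedily pick its highest-probability token at each step, which trades creative variation for repeatable, conservative answers.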
Retrieval-augmented generation (RAG) constrains responses so that they are derived from a known-good set of data from the business.
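The RAG pattern can be illustrated with a deliberately simple retriever over a known-good document set, followed by a prompt that restricts the model to that retrieved context. The keyword-overlap scoring here is a stand-in for the embedding-based similarity search a production system would use; all function names are illustrative.

```python
def retrieve(query, knowledge_base, k=2):
    """Naive keyword-overlap retrieval over a known-good document set.
    Production RAG systems typically rank by embedding similarity instead."""
    terms = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, knowledge_base):
    """Constrain the model to answer only from retrieved business data,
    falling back to a refusal when the context lacks the answer."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, knowledge_base))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say: I don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

Because the context is drawn exclusively from the business’s own documents, responses are grounded in verifiable data rather than in whatever the base model happened to learn during pretraining.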