Automated scoring options in evaluation forms

Coming soon: AI scoring in quality evaluation forms

Prerequisites

  • Genesys Cloud CX 3, Genesys Cloud CX 2 WEM Add-on I, Genesys Cloud CX 1 WEM Add-on II, or Genesys Cloud EX license

The following permissions:

  • Quality > Evaluation Form > Edit AI scoring (required for AI scoring)
  • Quality > Evaluation > View Sensitive Data (optional; allows you to view the AI reasoning context)

Evaluation forms measure script adherence, compliance with business practices, customer satisfaction, or other business benchmarks. 

When you create an evaluation form, you can configure automated scoring: AI scoring or Evaluation Assistance.

Note:
  • You can configure a maximum of 15 questions with AI Scoring within one evaluation form.
  • Multiple-choice questions with AI scoring enabled cannot have more than three answer options.
  • You can add up to 10 Evaluation Assistance conditions to each evaluation form question.
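
If you build or audit forms programmatically, you can check these limits before you publish. The following Python sketch is illustrative only: it assumes a simplified in-memory representation of a form, and the class and field names are hypothetical, not the Genesys Cloud schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Question:
        text: str
        answer_options: List[str]
        ai_scoring: bool = False                      # hypothetical flag, not an API field
        assistance_conditions: List[str] = field(default_factory=list)

    def validate_form(questions: List[Question]) -> List[str]:
        """Return violations of the documented automated-scoring limits."""
        errors = []
        ai_questions = [q for q in questions if q.ai_scoring]
        if len(ai_questions) > 15:
            errors.append("A form may contain at most 15 questions with AI scoring.")
        for q in ai_questions:
            if len(q.answer_options) > 3:
                errors.append(f"AI-scored question '{q.text}' has more than three answer options.")
        for q in questions:
            if len(q.assistance_conditions) > 10:
                errors.append(f"Question '{q.text}' has more than 10 evaluation assistance conditions.")
        return errors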

Set up automated scoring

  1. Create a question group as described in Create and publish an evaluation form.
  2. When you add a new question, you can set Automated scoring for it.

    The options are:
    • No Automation
    • AI Scoring
    • Evaluation Assistance
  3. Save the question and finish the evaluation form setup as described in Create and publish an evaluation form.
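
Evaluation forms can also be created through the Genesys Cloud Platform API. The sketch below posts a minimal form to the /api/v2/quality/forms/evaluations endpoint; the payload is abbreviated, and the aiScoringEnabled key is a placeholder because the exact schema property for automated scoring may differ. Check the Platform API documentation for the current schema.

    import requests

    API_HOST = "https://api.mypurecloud.com"   # adjust to your region
    TOKEN = "YOUR_OAUTH_TOKEN"                 # OAuth token with the quality permissions

    form = {
        "name": "Support call quality",
        "questionGroups": [{
            "name": "Greeting",
            "questions": [{
                "text": "Did the agent greet the customer at the start of the call?",
                "helpText": "Check for a warm, polite greeting within the first 30 seconds.",
                "answerOptions": [
                    {"text": "Yes", "value": 5},
                    {"text": "No", "value": 0},
                ],
                # Placeholder key: consult the API schema for the real
                # automated-scoring property.
                "aiScoringEnabled": True,
            }],
        }],
    }

    resp = requests.post(
        f"{API_HOST}/api/v2/quality/forms/evaluations",
        json=form,
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    print("Created form", resp.json()["id"])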

AI scoring

Note: To have access to AI scoring, you must have the Quality > Evaluation Form > Edit AI scoring permission.

Optionally, you can have the Quality > Evaluation > View Sensitive Data permission, which allows you to see the reasoning context that the AI displays to explain why it chose the selected answer.

You can enable AI Scoring per question within the evaluation form editor to have Genesys Cloud use AI to prefill the answer on an evaluation.

The AI prompt updates, but the existing scoring settings are preserved, when you perform the following editorial actions:

  • Reorder questions or question groups
  • Delete questions or question groups
  • Add questions or question groups

Create question prompts

The prompt is constructed automatically from the text of the question group, the question, the answers, and the help text. If you rephrase unclear questions and add context to these fields, you can improve the effectiveness of your questions for AI scoring.
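
Genesys Cloud builds the prompt internally and its template is not published; the sketch below only approximates the idea that all four fields feed the prompt, so clarifying any one of them improves what the model sees.

    def build_prompt(group_name, question_text, answers, help_text):
        """Illustrative approximation only; the real template is internal to Genesys Cloud."""
        options = "\n".join(f"- {a}" for a in answers)
        return (
            f"Question group: {group_name}\n"
            f"Question: {question_text}\n"
            f"Guidance: {help_text}\n"
            f"Choose one of the following answers:\n{options}"
        )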

Examples


Example 1

  • Unclear Question: “Was the customer satisfied?”
    • Clear Question for AI Scoring: “Did the agent resolve the customer’s issue to their satisfaction by the end of the conversation?”
    • Help Text: “Focus on whether the agent addressed the customer’s main concerns and if the customer expressed positive feedback or indicated satisfaction at the end.”

Example 2

  • Unclear Question: “Did the agent greet the customer properly?”
    • Clear Question for AI Scoring: “Did the agent greet the customer with a friendly tone and mention their name at the start of the call?”
    • Help Text: “Check if the agent greeted in a warm, polite manner and used the customer’s name within the first 30 seconds of the call, setting a friendly tone.”

Prompt filters

Questions configured for AI scoring use a prompt filter, so Genesys Cloud provides answers only to questions where the AI model’s answer has a high confidence level.

  • Answer limit: You cannot save an AI scoring configuration for questions that have more than three answer options.
  • N/A or no evidence: The AI model does not have sufficient information to answer the question. Genesys Cloud displays the context when the AI model cannot provide an answer.

  • Rolling accuracy: Genesys Cloud examines saved data for AI-generated answers and evaluates how accurate they are. Accuracy is calculated from how many times a human edited an answer. If the accuracy measure is sufficient, Genesys Cloud displays the answer on future evaluations with the respective question.
    If this accuracy measure falls too low, Genesys Cloud reports a low confidence error on the question until the accuracy begins to improve (see the sketch after the notes below).

Notes:
  • If you change and republish a form, Genesys Cloud resets the tracked history of questions with configured AI scoring.
  • Genesys Cloud tracks the history for a particular question for up to 30 days. If there is no activity for the question after 30 days, the history is deleted.
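
Genesys Cloud does not publish the exact accuracy formula or threshold. As a rough mental model, rolling accuracy can be read as the fraction of recent AI answers that a human left unchanged, tracked over the 30-day window described above; the threshold value in this sketch is hypothetical.

    from datetime import timedelta

    LOW_CONFIDENCE_THRESHOLD = 0.8   # hypothetical; the real threshold is not published

    def rolling_accuracy(events, now, window_days=30):
        """events: (timestamp, edited_by_human) pairs for one question's AI answers.
        Returns the fraction of in-window answers left unchanged, or None when
        there is no activity in the window (the history is deleted after 30 days)."""
        cutoff = now - timedelta(days=window_days)
        recent = [edited for ts, edited in events if ts >= cutoff]
        if not recent:
            return None
        return 1 - sum(recent) / len(recent)

    def should_prefill(events, now):
        """Prefill the answer only while accuracy stays above the threshold."""
        accuracy = rolling_accuracy(events, now)
        return accuracy is not None and accuracy >= LOW_CONFIDENCE_THRESHOLD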

Guidelines for AI scoring

Genesys Cloud advises you to follow these guidelines when you use AI scoring.

Construct questions 

  • Focus on transcript-driven questions. Use AI scoring for questions that can be answered directly from the transcript. Avoid questions requiring information not available in the transcript or beyond the conversation context.
    Example (a toy illustration of this check follows this list):
    “Did the agent greet the customer at the beginning of the conversation using a standard greeting such as ‘hi,’ ‘hello,’ or ‘good morning’?”
  • Avoid queries that rely on subjective interpretation. Rephrase subjective questions with specific, measurable criteria, supported by detailed help text.
    If you provide clear and actionable criteria, subjective questions become measurable and easier for AI to assess.
    Example:
    • A question with subjective interpretation: “Did the agent display patience throughout the interaction?”
    • Improved question: “Did the agent allow the customer to finish speaking without interruptions?”
    • Help text example: The agent should let the customer complete their statements before responding. Interruptions are defined as speaking over the customer mid-sentence.
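
To make “directly answerable from the transcript” concrete, here is a toy check for the greeting example above. The real scoring uses an AI model rather than keyword matching, and the transcript format shown is hypothetical; the point is only that every input the question needs is present in the transcript.

    GREETINGS = {"hi", "hello", "good morning"}

    def greeted_at_start(transcript, window_seconds=30):
        """transcript: list of turns like {"speaker": "agent", "offset_s": 2.0, "text": "..."}.
        True if an agent utterance within the first window contains a standard greeting."""
        for turn in transcript:
            if turn["offset_s"] > window_seconds:
                break
            if turn["speaker"] == "agent" and any(
                g in turn["text"].lower() for g in GREETINGS
            ):
                return True
        return False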

Improve question clarity

  • Use complete sentences. Phrase questions clearly and avoid shorthand.
    Example:
    Replace “Greeting protocol” with: “Did the agent greet the customer at the beginning of the conversation using a standard greeting such as ‘hi,’ ‘hello,’ or ‘good morning’?”
  • Provide relevant context to questions.
    Example:
    For a question like “Did the agent confirm the customer’s identity?”, include help text such as:
    Agents must verify the customer’s phone number and order ID before resolving the issue.
  • To avoid ambiguity, clarify business terms and business processes.
    Example:
    For a question like “Did the agent explain the account escalation process?”, clarify with:
    Did the agent clearly explain the steps involved in escalating a concern, including who to contact, the required information, and expected response times?
  • Standardize terminology. Use consistent terms across all questions. For example, consistently use ‘agent’ and ‘customer’ instead of alternatives, like ‘staff’ or ‘client.’

Best practices

  • Focus on transcript-derived answers: Ensure all questions are directly answerable based on transcript data.
  • Keep instructions straightforward: Use simple, actionable language to guide AI scoring.
  • Refine continuously: Leverage AI feedback to iteratively improve the clarity and relevance of questions.

Evaluation assistance

Note: Evaluation assistance requires the Genesys Cloud CX 2 WEM Add-on I for Genesys Cloud CX 2 licenses.

An evaluation assistance condition is made up of topics. Topics are collections of phrases that indicate a business-level intent. For example, if you want to identify interactions where the customer wants to cancel a service, create a topic named Cancellation and include several phrases, such as “close out my account” or “I want to cancel.” In addition, topics help to boost the recognition of specific words and phrases in the interaction because they adapt the underlying language model to look for organization-specific language in conversations.

When you add an evaluation assistance condition to a form question, speech and text analytics locates interactions that contain the topics included in the condition. If the specific topic is found in an interaction, an answer to the question is generated automatically. For more information, see Understand programs, topics, and phrases and Work with a topic.

Notes:
  • A topic cannot be used in more than one evaluation assistance condition.
  • When you delete a topic, the condition that includes the deleted topic is no longer valid.

Add an evaluation assistance condition to a form question

  1. Create an evaluation form question.
  2. Click the Add Evaluation Assistance Condition option associated with the answer that you want to be entered automatically.
  3. From the select if conversation list, select whether the conversation must include or exclude one or more topics.
  4. From the these topics list, select one or more topics.
  5. (Optional) Click the Add Evaluation Assistance Condition option again to add another condition to the same answer.

Notes:
  • An evaluation assistance condition can only be configured for one answer per question.
  • When more than one topic is selected for a single Includes condition, the condition is true if at least one of the topics is found.
  • When more than one topic is selected for a single Excludes condition, the condition is false if any one of the topics is found; that is, all topics must be absent from the interaction for the condition to be true.
  • When more than one condition is created for a single answer, all the conditions must be true for the answer to be entered automatically (see the sketch after these notes).
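
The boolean semantics in these notes can be summarized in a short sketch, assuming the topics detected in an interaction are available as a set of topic names; the data structures are illustrative, not the Genesys Cloud schema.

    from dataclasses import dataclass
    from typing import FrozenSet, Iterable, Set

    @dataclass(frozen=True)
    class Condition:
        operator: str              # "includes" or "excludes"
        topics: FrozenSet[str]

    def condition_is_true(cond: Condition, detected: Set[str]) -> bool:
        if cond.operator == "includes":
            # True if at least one selected topic was found.
            return bool(cond.topics & detected)
        # "excludes": true only if every selected topic is absent.
        return not (cond.topics & detected)

    def answer_is_autofilled(conditions: Iterable[Condition], detected: Set[str]) -> bool:
        # All conditions on an answer must be true for the answer to be entered.
        # Example: an interaction where the "Cancellation" topic was detected
        # satisfies Condition("includes", frozenset({"Cancellation"})).
        return all(condition_is_true(c, detected) for c in conditions)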