Prerequisites

  • Genesys Cloud CX 1 WEM Add-on II, Genesys Cloud CX 2 WEM Add-on I, Genesys Cloud CX 3, or Genesys Cloud EX license.
  • View sensitive data: Recording > Recording > ViewSensitiveData permission
  • SpeechAndTextAnalytics > Data > View permission
  • Recording > Recording > View or Recording > RecordingSegment > View permission
  • Voice transcription enabled. For more information, see Configure voice transcription.
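
The permissions listed above can also be checked programmatically. The following is an illustrative sketch only; it assumes an existing OAuth access token, that GET /api/v2/users/me with expand=authorization returns the caller's effective permissions, and that permission strings use the domain:entity:action form shown below. Verify all of these against the current Platform API reference before relying on them.

```python
# Illustrative sketch: check whether the signed-in user holds the permissions
# listed above. Endpoint behavior and permission string format are assumptions.
import requests

REQUIRED = {
    "recording:recording:viewSensitiveData",
    "speechAndTextAnalytics:data:view",
    "recording:recording:view",  # Recording > RecordingSegment > View also satisfies this prerequisite
}

def missing_permissions(base_url: str, token: str) -> set:
    resp = requests.get(
        f"{base_url}/api/v2/users/me",
        params={"expand": "authorization"},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    granted = {p.lower() for p in resp.json().get("authorization", {}).get("permissions", [])}
    return {p for p in REQUIRED if p.lower() not in granted}

# Example: print(missing_permissions("https://api.mypurecloud.com", "<access token>"))
```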

The Transcript tab provides the speaker-separated transcription of the conversation between external (customer) and internal (IVR, ACD, agent, conference, or voicemail) participants. Transcripts provide insight into what took place in the interaction, allowing the user to uncover business problems as well as areas of opportunity.
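
For programmatic workflows, a conversation's transcript can also be retrieved through the Platform API. The sketch below assumes the speech and text analytics transcript URL endpoint shown in the comments and an existing OAuth token; the exact path, response fields, and layout of the downloaded transcript file may differ by region and release, so treat it as a starting point rather than a reference implementation.

```python
# Illustrative sketch: download a conversation transcript through the Platform API.
# Assumes GET /api/v2/speechandtextanalytics/conversations/{conversationId}/
# communications/{communicationId}/transcripturl returns a short-lived download URL.
import requests

def download_transcript(base_url: str, token: str,
                        conversation_id: str, communication_id: str) -> dict:
    headers = {"Authorization": f"Bearer {token}"}
    meta = requests.get(
        f"{base_url}/api/v2/speechandtextanalytics/conversations/"
        f"{conversation_id}/communications/{communication_id}/transcripturl",
        headers=headers,
        timeout=30,
    )
    meta.raise_for_status()
    # The response is expected to contain a pre-signed "url" pointing at the transcript JSON.
    return requests.get(meta.json()["url"], timeout=30).json()
```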

Notes:

  • Voice transcription is not available in all Genesys Cloud supported languages. For more information, see Genesys Cloud supported languages.  
  • The language model used within the Genesys voice transcription capability is trained based on contact center conversations. As a result, it is best suited to transcribe those types of conversations. Genesys voice transcription is not trained on general conversations and is not meant to be used as a generalized transcription engine.
  • To work with a voice transcript, you must first select an interaction whose transcript you want to view. You can select an interaction from the results of an interaction search or a content search. 
    • Interaction search – The interaction search results are based on metadata only (a programmatic sketch of this type of query follows these notes). For more information, see Interactions view.
    • Content search – The content search results are based on transcript content, speech analytics data, or both. For more information, see Content Search view.
  • When working with voice transcription, sentiment analysis is automatically enabled.
  • If the user does not have the Recording > Annotation > View permission, the time synchronization between the audio and the transcript may be inaccurate when audio suppression events (such as Secure Pause or Hold Suppression) take place.
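
As a programmatic counterpart to the metadata-based interaction search, conversation IDs for a time window can typically be retrieved with the analytics conversation details query. The endpoint and request shape below follow the publicly documented pattern but are assumptions to verify against the current API reference; filters for queue, media type, and so on can be added to the body as needed.

```python
# Illustrative sketch: metadata-based interaction search via the analytics
# conversation details query. Endpoint and body shape are assumptions.
import requests

def find_conversations(base_url: str, token: str, interval: str) -> list:
    body = {
        "interval": interval,  # e.g. "2024-05-01T00:00:00Z/2024-05-08T00:00:00Z"
        "order": "desc",
        "orderBy": "conversationStart",
        "paging": {"pageSize": 25, "pageNumber": 1},
    }
    resp = requests.post(
        f"{base_url}/api/v2/analytics/conversations/details/query",
        json=body,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [c["conversationId"] for c in resp.json().get("conversations", [])]
```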

When working with a voice transcript, you can perform the tasks described below.

To view a transcript:

  1. Click Performance > Workspace > Interactions.
  2. Click the row of the interaction whose transcript you want to view.
  3. Click the Transcript tab.

Note: As the interaction recording plays, the corresponding words (up to three) are highlighted with a blue background in the transcript. After the words are spoken, they are grayed out in the transcript. For more information, see Work with an interaction overview.
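
The highlighting described in the note relies on word-level timing. The following sketch is purely illustrative (the (start_ms, end_ms, word) tuples are hypothetical input, not the transcript export format) and shows how the words being spoken at a given playback position could be located:

```python
# Illustrative sketch: find the words being spoken at a playback position,
# given hypothetical word-level timings (start_ms, end_ms, word).
def words_at(position_ms, timed_words, max_words=3):
    current = [word for start, end, word in timed_words if start <= position_ms <= end]
    return current[:max_words]

timed_words = [(0, 400, "thanks"), (401, 900, "for"), (901, 1500, "calling")]
print(words_at(950, timed_words))  # ['calling']
```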

Image: Voice interaction

To search for one or more words in the transcript (an equivalent search over exported transcript text is sketched after the notes below):

  1. In the Transcript tab, in the Search field at the top of the transcript, enter the words that you want to find.
  2. Press Enter.

Notes:

  • The Search field under the Transcript tab contains the number of instances of the searched words found in the transcript.
  • To move from one found instance to another, use the next and previous arrows in the Search field.
  • Every instance of the searched word or words found in the transcript is highlighted with a yellow background.
  • Words highlighted in orange indicate the current instance of the searched words out of the total number of instances found in the transcript.
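
The same kind of search can be reproduced over exported transcript text. The following sketch is illustrative only; it simply counts and locates case-insensitive hits, mirroring the in-product counter and next/previous navigation:

```python
# Illustrative sketch: count and locate search hits in exported transcript text.
import re

def find_hits(transcript_text: str, query: str) -> list:
    """Return the character offset of each case-insensitive hit."""
    return [m.start() for m in re.finditer(re.escape(query), transcript_text, re.IGNORECASE)]

text = "Thanks for calling. How can I help? Thanks again for calling."
hits = find_hits(text, "calling")
print(len(hits), hits)  # 2 [11, 53]
```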

The words and sentences in the transcript are divided into speaker sections: customer, IVR, ACD, or agent.

On the left side of the screen, every section is distinguished by a time stamp that indicates when the speaker began to speak and by an icon that represents the speaker.
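
Conceptually, these speaker sections can be rebuilt from phrase-level data by grouping consecutive phrases from the same speaker under the first phrase's time stamp. The sketch below is illustrative only; the (start_ms, speaker, text) structure is hypothetical, not the transcript export format.

```python
# Illustrative sketch: rebuild speaker sections (time stamp + speaker) from
# hypothetical phrase-level data (start_ms, speaker, text).
from itertools import groupby

phrases = [
    (0,    "agent",    "Thank you for calling."),
    (2100, "agent",    "How can I help?"),
    (4000, "customer", "I'd like to check my order."),
]

for speaker, section in groupby(phrases, key=lambda p: p[1]):
    section = list(section)
    start_s = section[0][0] // 1000
    text = " ".join(p[2] for p in section)
    print(f"[{start_s // 60:02d}:{start_s % 60:02d}] {speaker}: {text}")
# [00:00] agent: Thank you for calling. How can I help?
# [00:04] customer: I'd like to check my order.
```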

Note: Internal and external participants in chat and message transcripts are separated by a background color. This background color does not appear in email transcripts. See the following image.

Image: Interaction speaker icons

Image: View who is talking during a voice interaction

Note: Within the transcript, there are visual indicators that provide information about the dialect and/or program associated with the transcript content. For example, in the image above, the dialect associated with the transcript is en-US and the program is Topic spotting UI testing. These visual indicators appear in the transcript at the point where the language and/or program changes.

To protect the privacy of your customers, automatic redaction is enabled by default to ensure that any sensitive PII or PCI data collected during an interaction is automatically redacted from voice transcripts. For example, instead of a credit card number, the transcript includes [card number]. When PII or PCI data is detected and redacted, the user hears silence in the audio playback where the data was found.

The following is a complete list of redacted data:

[card number]
[card expiry date]
[national id]
[postal code]
[personal info]
[phone]
[name]
[location]
[user info]
[email]

Only users who have the Recording > Recording > ViewSensitiveData permission can access transcripts that are not redacted. By default, no role includes this permission; an administrator must grant it manually.
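
When post-processing exported transcript text, the placeholder labels listed above can be treated as markers for redacted content. The sketch below is illustrative only; the exact placeholder formatting (spacing, casing) may vary, so the matching is deliberately lenient.

```python
# Illustrative sketch: detect which redaction placeholders appear in transcript text.
# The bracketed labels come from the list above; exact formatting may vary.
import re

REDACTION_LABELS = {
    "card number", "card expiry date", "national id", "postal code",
    "personal info", "phone", "name", "location", "user info", "email",
}

def redactions_in(text: str) -> set:
    found = {m.group(1).strip().lower() for m in re.finditer(r"\[([^\]]+)\]", text)}
    return found & REDACTION_LABELS

print(sorted(redactions_in("My card is [card number] and my email is [email].")))
# ['card number', 'email']
```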

Note:
  • In accordance with the Payment Card Industry Data Security Standard (PCI DSS) guidelines, Genesys recommends the use of Secure Pause or Secure Flows to ensure no PCI data is transcribed and made available to Genesys Cloud users.
  • Genesys recommends that you use Secure Pause or Secure Call Flows as the first line of defense. Only Secure Pause and Secure Call Flows have been validated by an external Qualified Security Assessor as Level 1 PCI DSS compliant. For more information about PCI DSS compliance, see PCI DSS compliance.

For more information, see Enable automatic redaction of sensitive information.

To copy a transcript:

  1. Click Performance > Workspace > Interactions.
  2. Click the row of the interaction whose transcript you want to view.
  3. Click the Transcript tab.
  4. Click Copy Transcript in the upper-right corner of the transcript.
Note: If a user does not have permission to view sensitive data, the data is masked.

The following metadata is copied along with the transcript (a formatting sketch follows the list):

  • Interaction Type
  • Interaction ID
  • Transcript Start Time (User’s timezone)
  • Transcript End Time (User’s timezone)
  • Transcript Duration
  • Direction
  • Internal Participant(s)
  • External Participant(s)
  • Transcript:
    • Date/Time 
    • Participant Type (Internal/External)
    • Participant
    • Text
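
To illustrate how the copied fields fit together, the sketch below assembles a plain-text header and transcript body from the metadata listed above. The dictionary keys and sample values are hypothetical placeholders, not an export schema.

```python
# Illustrative sketch: format copied transcript metadata as plain text.
# Keys and sample values are hypothetical placeholders.
copied = {
    "Interaction Type": "Voice",
    "Interaction ID": "00000000-0000-0000-0000-000000000000",
    "Transcript Start Time": "2024-05-01 09:00:00",
    "Transcript End Time": "2024-05-01 09:06:30",
    "Transcript Duration": "00:06:30",
    "Direction": "Inbound",
    "Internal Participant(s)": "Agent Name",
    "External Participant(s)": "Customer Name",
}
transcript_lines = [
    ("2024-05-01 09:00:02", "Internal", "Agent Name", "Thank you for calling."),
    ("2024-05-01 09:00:06", "External", "Customer Name", "Hi, I'd like to check my order."),
]

header = "\n".join(f"{key}: {value}" for key, value in copied.items())
body = "\n".join(f"{ts} | {kind} | {who}: {text}" for ts, kind, who, text in transcript_lines)
print(header + "\n\n" + body)
```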