Set Whisper Audio action
Edge and Media Tier version 126.96.36.19952 or later
The Set Whisper Audio action enables you to set up whisper audio on a per-call basis. Use the whisper audio previously configured for the queue, or configure an audio sequence in the Audio Sequence Editor. You can set up whisper audio for all agents, or only for agents configured for auto-answer. For more information, see the Phone tab on the Edit user configuration data page and Set behavior and thresholds for all interaction types in Create and configure queues.
- When the action runs, the system picks up the language of the call flow and uses that language for whisper audio playback.
- The whisper audio set in Architect overrides audio set for the queue.
- The system counts the duration for whisper audio as Handle Time.
- Whisper audio is not part of recordings.
- The audio that you set for whisper is the audio obtained at the time the Set Whisper Audio action runs.
- When you use whisper audio with outbound flows, best practice recommends that you use a persistent WebRTC connection and enable auto-answer for agents.
- Third-party text-to-speech (TTS) engines are not supported for whisper audio.
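The precedence described above (the flow's Set Whisper Audio action overrides the queue's configured whisper audio, and playback uses the call flow's language) can be sketched conceptually. This is illustrative Python only, not Genesys Cloud code; the names `resolve_whisper` and `WhisperConfig` are hypothetical.

```python
# Illustrative sketch only -- not Genesys Cloud code. Models the precedence
# described above: whisper audio set in Architect overrides the queue's
# whisper audio, and playback uses the call flow's language.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WhisperConfig:
    audio: str      # prompt name or audio sequence (hypothetical representation)
    language: str   # language of the call flow, e.g. "en-US"

def resolve_whisper(flow_override: Optional[str],
                    queue_default: Optional[str],
                    flow_language: str) -> Optional[WhisperConfig]:
    """Pick the whisper audio for a call: the flow override wins over the queue default."""
    audio = flow_override if flow_override is not None else queue_default
    if audio is None:
        return None  # no whisper audio configured anywhere
    return WhisperConfig(audio=audio, language=flow_language)

# The flow's Set Whisper Audio action overrides the queue setting:
print(resolve_whisper("Prompt.VIPWhisper", "Prompt.QueueWhisper", "en-US"))
# With no flow override, the queue's configured whisper audio is used:
print(resolve_whisper(None, "Prompt.QueueWhisper", "en-US"))
```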
When whisper audio explicitly references a prompt to play, the system does not “snapshot” or make a copy of the prompt audio upon publish. Audio played to agents uses the current audio set on the referenced prompt. If you recently updated audio on a prompt resource, it may take up to an hour for the updated audio to play at runtime.
Let’s assume you have a user prompt called Prompt.Orlando, with audio that plays “Orlando office” in en-US. You use Prompt.Orlando both in the initial greeting and as whisper audio. Callers enter the flow, hear “Orlando office,” and choose to transfer to a queue. Before connecting to a caller, the agent hears “Orlando office.”
However, perhaps you decide to change the audio for Prompt.Orlando to “Orange County.” Now, when callers enter the flow, they hear “Orlando office” as the initial greeting, and choose to transfer to a queue. Before connecting to a caller, the agent hears, “Orange County.”
The system snapshots audio for the initial greeting upon publish, so callers still hear “Orlando office.” The whisper audio reflects the current state of the prompt audio when played at runtime, so agents hear “Orange County.”
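The snapshot-versus-runtime distinction above can be modeled with a short sketch. This is conceptual Python only, not Genesys Cloud code; the `prompts` dictionary stands in for the prompt resource store.

```python
# Conceptual sketch (not Genesys Cloud code) of the behavior described above:
# greeting audio is copied ("snapshotted") at publish time, while whisper
# audio looks up the prompt's current audio at runtime.

prompts = {"Prompt.Orlando": "Orlando office"}  # current audio on the prompt resource

# Publish: the greeting's audio is snapshotted into the published flow.
greeting_snapshot = prompts["Prompt.Orlando"]

# Later, the prompt's audio is updated.
prompts["Prompt.Orlando"] = "Orange County"

# Runtime: callers hear the snapshot; agents hear the prompt's current audio.
caller_hears = greeting_snapshot          # "Orlando office"
agent_hears = prompts["Prompt.Orlando"]   # "Orange County"
print(caller_hears, "/", agent_hears)
```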
Best practice recommends that you do not create an audio sequence with multiple prompts but no TTS. In rare cases, the system can fail to fetch a prompt's .wav file at runtime and, as a result, play only some of the prompts. For example, assume that you have three prompts in your audio sequence:
- Prompt A: "This customer is"
- Prompt B: "not"
- Prompt C: "a high priority customer"
In this case, the middle prompt can fail to play. Instead, configure the prompts as follows:
- Prompt A: "This customer is high priority"
- Prompt B: "This customer is not high priority"
If no TTS support exists for a language, another approach is to record all of the audio in a single prompt rather than relying on concatenating audio prompts at runtime.
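The risk described above can be demonstrated with a short sketch. This is illustrative Python only, not Genesys Cloud code; `play_sequence` is a hypothetical stand-in for runtime prompt concatenation.

```python
# Illustrative sketch (not Genesys Cloud code) of why concatenating several
# prompts for whisper audio is risky: if one .wav fetch fails at runtime,
# only some prompts play, which can invert the message's meaning.
from typing import Dict, List

def play_sequence(prompt_names: List[str],
                  fetched_audio: Dict[str, str]) -> str:
    """Concatenate whatever audio was successfully fetched, skipping failures."""
    return " ".join(fetched_audio[p] for p in prompt_names if p in fetched_audio)

sequence = ["A", "B", "C"]
all_ok = {"A": "This customer is", "B": "not", "C": "a high priority customer"}
b_failed = {"A": "This customer is", "C": "a high priority customer"}  # fetch of B failed

print(play_sequence(sequence, all_ok))    # the intended message
print(play_sequence(sequence, b_failed))  # "not" was dropped: meaning inverted
```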
|Field|Description|
|---|---|
|Name|Type a distinctive name for the action. The label you enter here becomes the action’s name displayed in the task sequence.|
Select one of the following:
Configure success and failure paths
Success: This path indicates that Architect did not encounter any errors during the process. It is not a measure of whether the data received is the intended result or functionality.
Failure: This path indicates that Architect encountered an error while running the action, or that a problem occurred while processing the results. Specify the action to take; for example, play audio to indicate that the action was unsuccessful, or transfer the caller to an agent or representative for assistance.
Note: If the network experiences connectivity issues, the action automatically takes this failure path.