You can design custom LLM prompts tailored to your script to develop a fully generative AI bot. When prompts are crafted in alignment with your script, the LLM interacts with users effectively to achieve the desired outcomes.
To create an LLM prompt, navigate to the LLM prompt section, name the prompt, and start writing it.
Let us understand this with an example. Below is a prompt that extracts the policy number from a user query.
Given a user response, convert it into JSON format with the appropriate policy number or status value. Follow these rules:
Detect Policy Number: The policy number can be provided in different formats:
As a straightforward alphanumeric string, e.g., 'TWRD12345929'.
Using phonetic alphabet examples for each letter, e.g., 'T for Texas, W for Wind, R for Rat, D for Dog 12345929'.
Handle Denial or Requests to Wait: The user might:
Deny or refuse to provide the policy number, e.g., 'I can't give my policy number.'
Request to wait or hold on, e.g., 'Can you hold on for a moment?'
Response Format:
If the policy number is present, provide it in JSON format as: {"policy_number": "EXTRACTED_POLICY_NUMBER"}
If the user refuses to provide the policy number, respond with: {"policy_number":"No"}
If the user agrees to provide the policy number (without stating it yet), respond with: {"policy_number":"Yes"}
If the user asks to wait or hold on, respond with: {"policy_number":"hold"}
Return Only JSON: Your response should be in JSON format without any additional explanation.
Examples:
Input: "My policy number is TWRD12345929."
Response: {"policy_number": "TWRD12345929"}
Input: "T for Texas, W for Wind, R for Rat, D for Dog 12345929"
Response: {"policy_number": "TWRD12345929"}
Input: "yes I do"
Response: {"policy_number":"Yes"}
Input: "sure"
Response: {"policy_number":"Yes"}
Input: "can you hold it for few seconds"
Response: {"policy_number":"hold"}
Input: "Can you hold on for a moment?"
Response: {"error_code":"hold"}
Input: "I can’t give my policy number right now."
Response: {"error_code":"No"}
User Response: {{prompt_query}}
JSON Response:
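Because the prompt constrains the model to a small JSON contract, the bot side only needs to parse and normalize that output. Below is a minimal Python sketch of such a handler; the function name parse_policy_response and the fallback behavior are illustrative assumptions, not part of the platform.

```python
import json

def parse_policy_response(llm_output: str) -> dict:
    """Normalize the LLM's raw output into a predictable dict."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        # Assumption: malformed output is treated like a refusal
        # so the bot can re-prompt the user.
        return {"policy_number": "No"}

    value = str(data.get("policy_number", "No")).strip()

    # Sentinel values ("Yes", "No", "hold") pass through unchanged;
    # anything else is treated as an extracted policy number.
    if value in ("Yes", "No", "hold"):
        return {"policy_number": value}
    return {"policy_number": value.upper()}

# Example:
# parse_policy_response('{"policy_number": "TWRD12345929"}')
# -> {"policy_number": "TWRD12345929"}
```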
Below is one more example, where we extract an email address from ASR input.
Extract the email ID from the given input. Since the input comes from the voice channel, it may contain ASR detection issues, so watch out for those. Once the email ID is extracted, provide the output in JSON format {"email_address":"[GIVEN EMAIL ID]"}. If the email ID is not present or you are not able to find it, provide the output {"email_address":"NotGiven"}.
Here are a few examples:
User: TRICITI ESC e.o@gmail.com.
JSON: {"email_address":"tricitiesceo@gmail.com"}
User: my email is TRICIT i.e. s@gmail.com
JSON: {"email_address":"tricities@gmail.com"}
User: Edmond at globalmessaging.net.
JSON: {"email_address":"edmond@globalmessaging.net"}
User: KRIST ENBORNE at CCT as.com.
JSON: {"email_address":"kristenborne@cctas.com"}
User: Mls@tel-us.com.
JSON: {"email_address":"mls@tel-us.com"}
User: Alan dot Hartmann at star Tel dot COM.
JSON: {"email_address":"alan.hartmann@startel.com"}
User: Good morning
JSON: {"email_address":"NotGiven"}
There are 2 types of prompts available:
Once the prompt is written, a snippet called a post-processor needs to be added to the custom prompt to process the prompt's output. This snippet converts the output generated by the LLM into a compatible JSON object that can be used within the bot.
NOTE: A post-processor snippet should be set as "Use in Prompt".
Once the correct post-processor is set, you can add the prompts to a workflow and use them as required.
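The exact post-processor API depends on the platform, so the following is only a minimal Python sketch of what such a snippet might do, assuming it receives the raw LLM output as a string and must return a JSON-compatible dict; the error fallback shape is a hypothetical choice.

```python
import json

def post_processor(llm_output: str) -> dict:
    """Turn the raw LLM output into a JSON-compatible dict.

    Models occasionally wrap their JSON in Markdown code fences or
    stray whitespace; this strips that before parsing.
    """
    cleaned = llm_output.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`").strip()
        if cleaned.lower().startswith("json"):
            cleaned = cleaned[4:]
    try:
        return json.loads(cleaned.strip())
    except json.JSONDecodeError:
        # Hypothetical fallback shape; adjust to whatever the bot expects.
        return {"error": "invalid_llm_output"}
```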
Please refer to the video below to learn how to use an LLM prompt in a workflow.