Unfortunately, you cannot prevent the AI model from drawing on its pre-trained knowledge, as it forms the foundation of its capabilities. 😫
Hence, we recommend implementing "smart prompts" and using Embeddings to restrict your chatbot's topical scope. This method is often referred to as "prompt engineering". The key lies in framing the chatbot's context as precisely and clearly as possible, so that it concentrates on a specific subject matter. Do not hesitate to script prompts like: "If this query isn't related to our business, reply with: 'I cannot assist with this matter.'" Although this might not completely limit the topics, it gives you greater control over the chatbot's responses. Fine-tune and experiment until you find the most effective setup.
Since the release of the OpenAI Assistants, using instructions has proven to be the most effective way to keep the AI model focused on a specific topic. This approach has its limitations, such as the inability to use embeddings, but it also offers benefits, like the ability to upload files. You can refer to the documentation ("Use OpenAI Assistants") for more details.
Here is an example:
You are working for our business as a virtual assistant. You follow this set of rules without fail:
- Start interaction with: "Hello! How can I assist you with your electronics needs today?"
- If a customer asks about our product range, warranties, or technical issues, provide the appropriate information.
- If asked to compare products or give recommendations, use your knowledge of our product database.
- For questions unrelated to our electronics or services, reply: "I'm sorry, but I am equipped to assist with electronics questions only. Could you please clarify your query related to our products or services?"
- If a query requires human attention or falls beyond programmed knowledge limits, respond: "I'm sorry, your question is important to us, but it seems to need human expertise. Let me connect you with one of our customer service representatives."
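If you would rather create such an Assistant yourself, outside the plugin's settings, the sketch below shows how these instructions could be sent to OpenAI's Assistants API from WordPress. This is a minimal sketch, not AI Engine code: the MY_OPENAI_API_KEY constant, the helper name, and the chosen model are placeholders you would adapt.
// Minimal sketch: create an OpenAI Assistant that carries the rule set above.
// MY_OPENAI_API_KEY and the model name are placeholders, adjust them to your setup.
function my_create_electronics_assistant() {
  // Paste the full rule set shown above into this string.
  $instructions = 'You are working for our business as a virtual assistant. You follow this set of rules without fail: ...';
  $response = wp_remote_post( 'https://api.openai.com/v1/assistants', [
    'timeout' => 30,
    'headers' => [
      'Authorization' => 'Bearer ' . MY_OPENAI_API_KEY,
      'Content-Type'  => 'application/json',
      'OpenAI-Beta'   => 'assistants=v2',
    ],
    'body' => json_encode( [
      'model'        => 'gpt-4o',
      'name'         => 'Electronics Support Assistant',
      'instructions' => $instructions,
    ] ),
  ] );
  if ( is_wp_error( $response ) ) {
    return null;
  }
  $data = json_decode( wp_remote_retrieve_body( $response ), true );
  // The returned assistant id ("asst_...") is what you reference when running threads.
  return isset( $data['id'] ) ? $data['id'] : null;
}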
You can always intervene manually to modify the chatbot's answer if necessary. This isn't required, but it could be helpful. You can refer to this documentation for examples. 😊
Here is an example of how to manually ensure that your chatbot only uses the context you provide. This can involve utilizing embeddings, web searches, API calls, or any method other than relying on the model's own interpretation of the data.
// Hook into the "mwai_context_search" filter to handle context search.
add_filter( 'mwai_context_search', 'my_web_search', 10, 3 );

// Function to handle web search context retrieval.
function my_web_search( $context, $query, $options = [] ) {
  if ( ! empty( $context ) ) {
    // If context is available, let the conversation proceed naturally.
    return $context;
  }
  // If no context is found, return a placeholder to indicate an empty context.
  return [
    'content' => '{EMPTY}',
    'type'    => 'filter',
  ];
}
// Hook into the "mwai_ai_reply" filter to modify the AI reply.
add_filter( 'mwai_ai_reply', 'my_mwai_reply', 10, 2 );

// Function to modify the AI reply when an empty context is detected.
function my_mwai_reply( $reply, $query ) {
  if ( $query instanceof Meow_MWAI_Query_Text && count( $query->messages ) >= 2 ) {
    // Retrieve the context message (second to last) and check if it carries our {EMPTY} keyword.
    $context_message = $query->messages[ count( $query->messages ) - 2 ];
    if ( $context_message['role'] === 'system' && $context_message['content'] === '{EMPTY}' ) {
      // If an empty context is detected, change the reply to a default message.
      $reply->set_reply( "Sorry, I don't understand." );
    }
  }
  return $reply;
}
At this point, you could also send an additional request asking the AI whether the conversation is related to your business, and terminate it if it is not.
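As a rough sketch of that idea, assuming you call OpenAI's Chat Completions API directly (the my_is_on_topic helper, the MY_OPENAI_API_KEY constant, and the wording of the classification prompt are illustrative, not part of AI Engine), you could gate off-topic questions like this:
// Sketch: ask the model whether the visitor's question concerns our business at all.
function my_is_on_topic( $question ) {
  $response = wp_remote_post( 'https://api.openai.com/v1/chat/completions', [
    'timeout' => 30,
    'headers' => [
      'Authorization' => 'Bearer ' . MY_OPENAI_API_KEY,
      'Content-Type'  => 'application/json',
    ],
    'body' => json_encode( [
      'model'       => 'gpt-4o-mini',
      'temperature' => 0,
      'messages'    => [
        [ 'role' => 'system', 'content' => 'Answer strictly with YES or NO. Is the following question related to a consumer electronics store (products, warranties, technical support)?' ],
        [ 'role' => 'user', 'content' => $question ],
      ],
    ] ),
  ] );
  if ( is_wp_error( $response ) ) {
    // If the check itself fails, do not block the conversation.
    return true;
  }
  $data = json_decode( wp_remote_retrieve_body( $response ), true );
  $verdict = isset( $data['choices'][0]['message']['content'] ) ? $data['choices'][0]['message']['content'] : 'YES';
  return stripos( $verdict, 'YES' ) !== false;
}

// Reuse the "mwai_ai_reply" filter to terminate the conversation when the question is off-topic.
add_filter( 'mwai_ai_reply', 'my_topic_gate', 9, 2 );
function my_topic_gate( $reply, $query ) {
  if ( $query instanceof Meow_MWAI_Query_Text && ! empty( $query->messages ) ) {
    // Assumption: the last entry in $query->messages is the visitor's question.
    $last_message = $query->messages[ count( $query->messages ) - 1 ];
    if ( $last_message['role'] === 'user' && ! my_is_on_topic( $last_message['content'] ) ) {
      $reply->set_reply( 'Sorry, I can only help with questions about our products and services.' );
    }
  }
  return $reply;
}
Note that this runs after the main reply has already been generated, so it hides off-topic answers rather than saving the call, but it keeps the sketch within the filters already shown above.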