Skip to main content

I have been trying to get a Zapier Chatbot to use knowledge effectively.  I have two tables set up in Zapier.  The first table lists the titles of simulated patients in our database.  It lists the case title in one column and a description of the case in another column.  When I connect that table through knowledge, I am able to get the Chatbot to perform pretty darn well. I can ask it what kinds of cases I should use for different scenarios and it makes really good recommendations.  

 

I have a second table that includes a lists of questions and answers. The questions ask about how to use the cases or teach certain topics using the cases.  When I connect this second table to the chatbot and ask the same questions I asked with only one table connected, the chatbot suddenly gives quite poor recommendations.  Where it might have recommended 7-9 cases before, it now only recommends two and neither are really that good. 

 

I have tried to write my prompt to tell the chatbot to use one table when looking for case recommendations and the other when answer questions about how to teach, but I haven’t been able to over come the problem.  I also tried to put all the information in a single Table and created a “Category” column and labeled the row for whether it was a Case or FAQ. I also was not successful with that approach.  

 

Any suggestions?  

 

Does any know why this happens?  

 Hi and welcome to the Community, @joe miller🎉

That’s very strange indeed. I wonder if it’s the two sets of information being in a different format (cases and FAQs) that’s causing it some trouble. 🤔 If you add another knowledge source table of the same type (e.g. two tables of cases or two tables of FAQs) is the chatbot able to answer questions as expected?

And can you share some screenshots showing the full instructions provided to the chatbot? That’ll help us to see the current set up and whether there might be some tweaks that need to be made to get it working as expected.

Looking forward to hearing from you on this!


It is definitely that they are in different formats.  I tested your suggestion of uploading the same knowledge file twice in two different formats (once as a Zapier Table and once as a CSV).  That dd not appear to impact the results. There is some variability in how it responds.  Sometimes it makes five suggestions, and sometimes up to 10. And, the results are shuffled around a little each time it responds to the same prompt.  

 

However, when I add the new file with the different format the response (FAQ v. just a table with cases), I get a low quality response.  It always responds with the exact two cases.  I have tried to alter the prompt to describe when to use the each knowledge file (they serve different purposes) but I have not been able to make that work.  I suspect there is a trick to the prompting, but I haven’t figured it out yet.  I have included the prompt I have been trying to use when I have both piece of knowledge attached.  

 

Thanks for any suggestions.  


Thanks for getting back to me @joe miller. 🙂

I had a look at a couple templates we’ve got for building a Simple FAQ chatbot and a Sales Support chatbot to compare the instructions against what you’re using and they look a lot different. 

Simple FAQ chatbot instructions:

#Role: 
You are a subject matter expert in INSERT RELEVANT CONTENT HERE], focused on delivering precise answers based on specific content sources.

#Objective: 
Your primary objective is to answer user questions using the content from the provided links. If unable to answer, inform the user politely of this limitation.

#Audience: 
You will be engaging with a wide range of individuals, responding only to queries that relate to the content in the provided links.

#Style: 
Your writing style is clear, objective, and concise. Your responses should be factual and directly related to the information available in the links.

#Context: 
Your responses should directly use or paraphrase the content from the provided links, maintaining the original meaning. Prioritize understanding the user's question and ask clarifying questions if the user's request isn't clear.

#Other Rules: 
- Never invent information in your responses. Only engage with questions that can be answered with the content from the links. 
- In cases of conflicting information, present the latest information as correct. 
- If you lack sufficient information to provide an accurate answer, ask for more details. 
- Always remain polite, even when unable to answer a question.

 
Sales Support chatbot instructions:

#Role: 
You are an incredibly intelligent, engaging, and helpful bot specializing in providing information from a specific data source.

#Objective: 
Your objective is to efficiently answer frequently asked questions using key information from the connected data source.

#Audience: 
Users seeking information contained within the associated data source.

#Style: 
Your responses should be concise and focused, pulling directly from the data source. The tone of your responses should match that of the connected document.

#Context: 
Engage users by providing the most relevant information from the data source to answer their questions. Focus on delivering accurate and direct answers without elaborating excessively.

#Other Rules: 
- Ensure all answers are directly sourced from the connected document. 
- Avoid writing long responses; instead, find and provide the most pertinent information. 
- Maintain the tone used in the connected document in your responses. 
- Do not address topics that fall outside the scope of the associated data source.

 

So it could be that because it doesn’t have a clear idea of it’s role, objective, audience or the style, content and rules to use it’s having trouble interpreting your request in the correct way.

Can you try adjusting the instructions you’re using to be more like those examples above and let me know if that works better? 


Thanks, Sam.  I did some work on the second knowledge table including reducing the number of rows and rewriting the FAQs and Answers for clarity.  This improved the behavior across the board.  I did not need to change the prompt to improve the output.  I appreciate you taking time to look at my prompt and examples from your own library!


Thanks for the update @joe miller. I’m so pleased you were able to get it performing better without needing to adjust the prompt at all, great work! 🙌 🎉

If anything changes or there’s anything further we can help with do reach out in the Community again. In the meantime, happy Zapping! ⚡️


Reply