Question

Trouble using OpenAI Embedding action

  • 8 April 2023
  • 6 replies
  • 198 views

Userlevel 2

Hi everyone,
I am trying to use the embedding search function with OpenAI.
Here is what I did (in the video link, you can see the sub-zap and zap that I used): https://www.awesomescreenshot.com/video/16345010?key=92615d73eaa23be37c1c47288fda0cc9

  1. Sub-Zap > I used this as a way to call upon my “source of knowledge” (the embedding), essentially the document OpenAI should search for answers to the question.
  2. Zap: a new question triggers the Zap, which calls the sub-zap via a keyword and retrieves the text containing the information needed for the answer.

So far so good.

As you can see, despite having inserted the output of the sub-zap in the document field, OpenAI tells me that it cannot find any documents.


What am I doing wrong?
Does the document in question have to be a "real" document, such as a file, or can it also be an output like in my case?
I can't figure it out…


Thank you very much in advance for your help.


This post has been closed for comments. Please create a new post if you need help or have a question about this topic.

6 replies

Userlevel 2

Now OpenAI answers, but the output is quite bizarre: https://www.awesomescreenshot.com/video/16345659?key=f8e6d60340f050a47aa0b0c91aefebce


My document is a text describing some aspects of the company and its owners. The last question I tested was precisely about the owners.
Should the question be very close in grammar and sentence construction to what OpenAI could find in the text?
A bit like how the prompt/completion pairs are written for fine-tuning…

I’m sure I’m doing something wrong.

Thank you again!

Userlevel 7
Badge +12

Hi @LucaRe!

For the Documents field, you need a list of items that act as the categories OpenAI will choose from. You can either add them one at a time, or use a line-item field (as in Reid’s example video).

It looks like you’re using text from Evernote in the Documents field, which contains HTML tags. That won’t work for an Embedding action.

Could I ask what you’re looking to do with the action? It sounds like you want the step to pull specific information out of the text, is that right? E.g. find the owner of the company?

If you want to do that, I think that you would need to use the Send Prompt action. I’d advise having a play in the OpenAI Playground to refine the prompt that you need to find the correct information. I haven’t used this kind of prompt myself, but I wonder if it would work if you used a prompt like “Using the text below, tell me who is the founder of the company [add the Evernote text in quotes below the prompt]”
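Outside of Zapier, that kind of prompt maps roughly onto a single chat completion call. A minimal sketch, assuming the openai Python package (the 0.x-style interface current at the time) and gpt-3.5-turbo; the source_text variable is just a stand-in for your Evernote text:

```python
import openai

# The API key is read from the OPENAI_API_KEY environment variable,
# or you can set openai.api_key directly.

# Hypothetical stand-in for the company text exported from Evernote.
source_text = "…company description, founders, products…"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Using the text below, tell me who is the founder of the company.\n\n"
                f'"{source_text}"'
            ),
        }
    ],
)

# The model's answer is in the first choice.
print(response["choices"][0]["message"]["content"])
```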

The Search Embeddings action works best if you provide ChatGPT with a list of options (the documents) and ask it what category something fits into.
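In API terms, that matching step boils down to comparing embeddings and picking the closest document. A rough sketch of the idea, again assuming the openai Python package and the text-embedding-ada-002 model (the document list here is made up):

```python
import openai

# Hypothetical "documents": the categories the query is matched against.
documents = [
    "Billing and refunds",
    "Shipping and delivery",
    "Company history and founders",
]
query = "Who founded the company?"

# Embed the candidate documents and the query.
doc_resp = openai.Embedding.create(model="text-embedding-ada-002", input=documents)
query_resp = openai.Embedding.create(model="text-embedding-ada-002", input=[query])
query_vec = query_resp["data"][0]["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# The highest-scoring document is the category the query best fits into.
scores = [cosine(item["embedding"], query_vec) for item in doc_resp["data"]]
best = max(range(len(documents)), key=lambda i: scores[i])
print(documents[best], scores[best])
```

In other words, this kind of search picks the closest match from a fixed list rather than composing a new answer from the text.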

 

I hope that helps, let us know if you have any questions!

 

Userlevel 2

Hi @Danvers 

thank you for your response.

My goal is to provide GPT with a body of information (specifically, the company's mission/vision and descriptions of all its products - around 80 of them).

The purpose of this step is to have a "knowledge base" to use in various ways, such as customer support or creating custom content.

I don't think the issue lies in the text format, as I added a formatting step, but the results are still random, as you can see here: https://www.awesomescreenshot.com/video/16422198?key=02de75aba2afd0e6072ea6f0b2621642

The problem may lie in what the real purpose of the embedding is and how the text is set up. I may be asking too much of this function.

What I had in mind was to replicate the experience of providing starting information and asking ChatGPT to generate prompts accordingly.

I was hoping to do the same thing with this function and bypass the "real" embedding process used with, say, Pinecone.

In this case, it seems that the embedding is used to search for existing content.

To accommodate my intention, I could potentially use the content found and pass it to a subsequent step, integrate it with ChatGPT, add a temporary memory key, and then have it generate the prompt.

However, this seems like a very complex and unreliable step.
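To be a bit more concrete about what I mean by passing the content found to ChatGPT, roughly something like this (a sketch, assuming the openai Python package; the chunks are placeholders for my mission/vision and product texts):

```python
import openai

# Placeholder chunks standing in for the real knowledge base.
chunks = [
    "Mission and vision: …",
    "Founders and their roles: …",
    "Product 1 description: …",
]
question = "Who are the owners of the company?"

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return [item["embedding"] for item in resp["data"]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

# Step 1: find the chunk closest to the question.
chunk_vecs = embed(chunks)
question_vec = embed([question])[0]
best_index = max(range(len(chunks)), key=lambda i: cosine(chunk_vecs[i], question_vec))
best_chunk = chunks[best_index]

# Step 2: pass the chunk found to ChatGPT and let it answer from it.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Answer the question using only this text:\n\n{best_chunk}\n\nQuestion: {question}",
    }],
)
print(answer["choices"][0]["message"]["content"])
```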

What do you think?

Thanks a lot again for your support!

Userlevel 2

I was thinking… maybe I should cut the text into chunks of no more than 2K tokens and then add them all as documents for GPT to search among. Maybe the pieces could be divided by topic: mission, vision, roles of the founders, product 1, product 2…
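For example, something like this could do the chunking (a rough sketch, assuming the tiktoken package for counting tokens; splitting on blank-line topic blocks is just one possible choice):

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
MAX_TOKENS = 2000

def chunk_text(text, max_tokens=MAX_TOKENS):
    """Greedily pack blank-line-separated topic blocks into chunks of at most max_tokens."""
    chunks, current = [], ""
    for block in text.split("\n\n"):   # one block per topic: mission, vision, product 1, …
        candidate = (current + "\n\n" + block).strip()
        if len(encoding.encode(candidate)) <= max_tokens:
            current = candidate        # the block still fits in the current chunk
        else:
            if current:
                chunks.append(current) # close the chunk that is full
            current = block            # start a new chunk (an oversized block stays whole)
    if current:
        chunks.append(current)
    return chunks

# Each returned chunk would then go in as one "document" for the embedding search.
```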

 

What do you think? Could that be a solution?

Userlevel 4
Badge +6

Hi @LucaRe 

You can certainly try that to see if it solves your workflow, but if not, I'm afraid the functionality you're looking to set up is a bit beyond what we're able to help with within our channels here at Zapier Community.

Your best bet is probably going to be to hire an API developer or check in with a Zapier Certified Agency Partner to see if they can assist. You can reach one in one of two ways:

1. Search for a Zapier Agency Partner on your own: https://zapier.com/experts/.

2. Submit a project request via this form: zapier.typeform.com/to/o8n5HN.

They might still use Zapier to do so, but it will likely require some custom code app steps within your Zap.

Userlevel 2

Hello @Brem, thank you for the response. 

I will try the workflow and if it doesn't work, I will contact an expert. 

Thank you

Have a nice day.