
Keep information in memory on ChatGPT



Good morning.

I have a question about the ChatGPT integration and what it can do for me. My goal is to recreate the fine-tuning process on my own using ChatGPT, until I can take advantage of fine-tuning through the OpenAI integration.

I noticed the Memory Key field in the settings and thought it might be used to give ChatGPT constant information to take into account (a kind of on-the-go fine-tuning). However, this isn't the case.

Here's what I need: ChatGPT should remember a set of basic information during each step of my Zap, much as it does during live chats. This would help me get more precise answers. I could include this information in the prompt each time, as I already do with the OpenAI integration, but that would use up tokens and fragment the request into many different prompts.

Can you tell me how I can achieve this? And, more importantly, could you explain the real difference between the ChatGPT and OpenAI integrations?

Thank you very much for your help.


Best answer by Brem 4 April 2023, 19:01


This post has been closed for comments. Please create a new post if you need help or have a question about this topic.

12 replies

To use ChatGPT to generate text for fine-tuning, you can provide it with a prompt or seed text and have it generate additional text based on that input. You can then use this generated text as part of your training data, along with other text sources such as books, articles, and other online content.

Once you have your training data, you can use transfer learning techniques to fine-tune a language model on that data. Fine-tuning is the process of taking a pre-trained language model and updating its weights on a smaller, domain-specific dataset to improve its performance on that particular domain.
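For anyone who wants to try this outside of Zapier, here is a minimal sketch of that workflow using the OpenAI Python library as it existed when this thread was written (the legacy prompt/completion fine-tuning API). The training pairs and file name are made-up placeholders:

import json
import openai  # pre-1.0 OpenAI Python SDK, current at the time of this thread

openai.api_key = "sk-..."  # your API key

# 1. Write prompt/completion pairs to a JSONL training file.
#    These two pairs are placeholders; use your own domain data.
examples = [
    {"prompt": "Q: What are your shipping times? ->", "completion": " We ship within 2 business days. END"},
    {"prompt": "Q: What is your return policy? ->", "completion": " Returns are accepted within 30 days. END"},
]
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# 2. Upload the file and start a fine-tune of a base model.
uploaded = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = openai.FineTune.create(training_file=uploaded.id, model="davinci")
print(job.id)  # poll this job; when it finishes, the fine-tuned model name appears in its details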


Hi @LucaRe 

I see that you’re already working with one of our Support Specialists on this.

I'd recommend continuing to work with them, as they can look into this further.

Thanks! 😊


nice


I'm looking for similar capabilities. Is this possible, and if so, could you share how to achieve it here?


Hi @smittym 

Selecting fine-tuned engines isn't available in the integration just yet. Currently, the ChatGPT and OpenAI integrations are still in beta.

However, looking at our records, we currently have a feature request open to add support for fine-tuned engines. I'm happy to add you to this request, and should the team ever add support for this feature, we'll be sure to send you an email to let you know.



Hello @smittym 
Fine-tuning is available in the OpenAI integration: you can choose your fine-tuned model from the list of models instead of Davinci and the others.
Alternatively, if you only have a small amount of content to manage, you can use the ChatGPT integration and the Memory Key to make GPT remember what you sent and the responses you received.
Let me know if you'd like help with how to do it. It's a very interesting feature: I tested it in combination with Sub-Zaps that I used as a "knowledge base" for ChatGPT to draw on, so that it responds consistently.


Hi @LucaRe 

Thanks for sharing that information. It’s a huge help!
@smittym  — Let me know if that works.


@LucaRe Cool. I couldn't quite figure out how the Memory Key variable worked. Do you just plug the response you received from the previous action into the Memory Key?


Hi @smittym 

Here's a link to an article about the Zapier plugin for ChatGPT: https://help.zapier.com/hc/en-us/articles/14058263394573-Use-the-Zapier-plugin-in-ChatGPT-beta-#connect-chatgpt-to-your-zapier-account-0-0.

Hopefully it helps!


Hi @smittym 

The "Memory Key" is quite simple to use.
Essentially, it makes ChatGPT treat the prompts it receives as part of a single conversation; otherwise, each prompt would be treated as a separate dialogue.
Let me give you a concrete example that I have personally used.

I created a Sub-Zap (it doesn't have to be a Sub-Zap) whose sole purpose is to provide the "base knowledge" I want ChatGPT to have. [See the screenshot below; never mind that in this test I was using Claude instead of GPT, it works the same way.]

Note that in the Memory Key field I entered the word "TESTAMI".
In any other Zap (using ChatGPT or Claude, depending on which one you're on), it's enough to enter the same word in the Memory Key, in my case "TESTAMI", for GPT to know what you're talking about.

In the Zap screenshot I ran a small test: I enter a question in a spreadsheet, the question is passed to GPT / Claude, and thanks to the Memory Key the response is consistent with the base knowledge provided in the Sub-Zap.
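To make it concrete, here is a rough Python sketch of what a memory key presumably does behind the scenes. This is only my assumption of the mechanism, not Zapier's actual implementation; the in-memory store and model name are stand-ins, and it uses the pre-1.0 openai.ChatCompletion API of the time:

import openai  # pre-1.0 OpenAI Python SDK

openai.api_key = "sk-..."
MEMORY = {}  # memory key -> stored message history (a stand-in for Zapier's own store)

def chat_with_memory(memory_key, user_prompt):
    # Look up (or start) the conversation filed under this key.
    history = MEMORY.setdefault(memory_key, [])
    history.append({"role": "user", "content": user_prompt})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # remembered on the next call
    return reply

# The first call plants the base knowledge; later calls reuse it via the same key.
chat_with_memory("TESTAMI", "From now on, answer using this price list: product A costs 10 EUR.")
print(chat_with_memory("TESTAMI", "How much does product A cost?"))

The key point is that every prompt and response filed under "TESTAMI" is replayed to the model on each call, which is also why the context limit below matters.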

One caveat: the memory of GPT-3.5 and GPT-4 is limited to 4K and 8K tokens respectively (roughly 16,000 and 32,000 characters). Beyond that limit the dialogue continues, but the model progressively "forgets" what was said earlier.
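If you want to check how close you are to that limit, here is a small sketch using OpenAI's tiktoken tokenizer; the 3,000-token threshold is just an illustrative safety margin, not an official number:

import tiktoken  # OpenAI's tokenizer library: pip install tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

# Everything the model must "remember" counts against the context window.
conversation = "Base knowledge... plus everything said so far in the dialogue."
n_tokens = len(encoding.encode(conversation))
print(n_tokens)

# The window must hold the history AND the new reply, so leave headroom:
if n_tokens > 3000:
    print("Close to the 4K limit - older messages will start to be forgotten.")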

You can work around this by building a kind of "loop", or a step in your Zaps, that re-sends the base knowledge. It will cost a few cents' worth of tokens, but it stops GPT / Claude from answering at random!


There is an open thread about the Embedding function, which should allow GPT to retrieve the information it needs from one or more documents you provide.

I still can't get it to work well. Take a look; I think you'll find it useful and interesting.
And if you also find a way to solve my open questions, even better!

The Embedding function should allow GPT to access information contained in documents in order to give more coherent responses. I haven't been able to implement it effectively yet, but I believe learning how to use it could help get better results with ChatGPT and the GPT-3 models in general.
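For anyone experimenting outside of Zapier, the usual embedding pattern looks roughly like this sketch: embed your document chunks once, embed each question, and send only the most similar chunk with the prompt. The document chunks are made-up placeholders, and the model names are simply the OpenAI ones available at the time:

import numpy as np
import openai  # pre-1.0 OpenAI Python SDK

openai.api_key = "sk-..."

def embed(text):
    # text-embedding-ada-002 was the standard embedding model at the time
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(response["data"][0]["embedding"])

# 1. Embed each document chunk once (these two chunks are placeholders).
chunks = ["Shipping takes 2 business days.", "Returns are accepted within 30 days."]
chunk_vectors = [embed(chunk) for chunk in chunks]

# 2. Embed the question and pick the most similar chunk (cosine similarity).
question = "How long does delivery take?"
q = embed(question)
scores = [np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)) for v in chunk_vectors]
best_chunk = chunks[int(np.argmax(scores))]

# 3. Send only the relevant chunk with the question, instead of the whole document.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"Using this information: {best_chunk}\n\n{question}"}],
)
print(answer.choices[0].message.content)

This also saves tokens compared to pasting the whole knowledge base into every prompt, which was the original concern in this thread.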

Please take a look at the discussion thread I mentioned, and if you have any insights on how to use embeddings to retrieve relevant information from the provided documents, that would be very helpful. Solving my open questions about the embedding functionality would be even better!

Let me know if you have any other questions. I'm still learning how to best leverage GPT models' capabilities, so any tips or suggestions are welcome.


If you need more help, feel free to ask me.


Bye bye



Thank you.  That makes complete sense.


Ok @smittym, perfect.
Glad to have been of help.

If you need anything, just give me a shout!

Have a nice day. 😊