I created a Zap that uses ChatGPT to generate Facebook post content based on the data I provide.

To my surprise, after generating several pieces of content I see that ChatGPT bases each new post not only on the data I send but also on the history of previous 'conversations,' which in this case are my earlier post-generation requests.

Not only do I not need this, but it also causes uncontrolled, unnecessary consumption of a substantial number of tokens.

Is it possible to stop ChatGPT from using past conversation history when generating a reply?
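
For context, if the integration calls the standard OpenAI chat API under the hood (my assumption; I don't know Zapier's internals), resending history would look something like this, and each run would get more expensive than the last:

```python
# Hypothetical sketch of history being resent on every run.
# The model name and structure are my assumptions, not Zapier's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history: list[dict] = []  # messages carried over from previous runs

def generate_post(data: str) -> str:
    # Every message in `history` is billed again as input tokens,
    # so token usage grows with each generated post.
    messages = history + [
        {"role": "user", "content": f"Write a Facebook post based on: {data}"}
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    history.append({"role": "user", "content": data})
    history.append({"role": "assistant", "content": reply})
    return reply
```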


Hi @GrzegorzK 

Good question.

We would need to see detailed screenshots showing how your Zap steps are configured in order to have enough context.


Hi @GrzegorzK! 👋

It looks like the Conversation (ChatGPT) action has an option to set a memory key which would allow it to continue the conversation from past messages:
[Screenshot: the Memory Key field in the Conversation (ChatGPT) action settings]

If you entered a memory key then I suspect that may be why the conversation history was included.
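
To illustrate the idea (this is just a rough sketch of how a memory key generally behaves, not the actual implementation): each key maps to a stored message history, so runs that share a key continue one conversation, while runs without a key send only the new prompt.

```python
# Rough sketch of memory-key behaviour -- an assumption for illustration,
# not Zapier's actual code.
conversations: dict[str, list[dict]] = {}

def build_messages(prompt: str, memory_key: str | None = None) -> list[dict]:
    if memory_key is None:
        # No memory key: each run sends only the new prompt.
        return [{"role": "user", "content": prompt}]
    # With a memory key: messages stored under that key are included,
    # so the model sees (and tokens are spent on) the whole history.
    history = conversations.setdefault(memory_key, [])
    history.append({"role": "user", "content": prompt})
    return list(history)
```

With the field left blank, each run should be independent.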

But if you didn’t set a memory key, please can you share some screenshots of your current setup for the ChatGPT action? This will help us to better identify what might be causing this to happen.

Looking forward to hearing from you on this!


@SamB I am having the same issue. I haven’t even turned the Zap on. Just by using the “Retest Zap” button, the output window shows “history” and includes text from previous tests and versions of the prompt that I have moved away from. I have attached a screenshot of the output, and the “memory key” field in this step has never been used. As you can see, it is quite annoying: it keeps cancelling the output I need and uses tokens on the history of previous outputs I have no use for.



Hi there, @OdiinSparr!

Sorry to hear you’re running into this issue as well. It’s very strange that it’s not issuing a new test when you click the Retest step button. Perhaps it’s a glitch with the editor? Can you try opening the Zap in a different browser or a private browsing window to see if that allows a new test to be sent to ChatGPT? @GrzegorzK - it might be worth giving this a try as well to see if that does the trick. 🤞

As for the “was_cancelled” field being set to “true” and the response being cut off: that is the expected behaviour when testing in the Zap Editor, but when the Zap runs live the full response will be given. See this related topic in Community for details:

Can you try turning the Zap on and running a live test to double-check that the response isn’t cut off?
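
As a point of comparison (this is my own illustration using the OpenAI API directly, not the Zapier action itself), a cut-off response is normally signalled by the finish_reason on the API side:

```python
# Illustration only: how a truncated response shows up when calling
# the OpenAI API directly. Zapier's "was_cancelled" field is separate.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a Facebook post about our sale."}],
    max_tokens=50,  # a deliberately low cap to force truncation
)
choice = response.choices[0]
if choice.finish_reason == "length":
    print("Response was cut off by the token limit.")
print(choice.message.content)
```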

If the response is still being cut off, or you’re still running into issues with the Zap not submitting a new test, then it may be best to reach out to the Support team. They’ll be able to dig into the logs for your Zap to investigate further what might be causing this behaviour.

Please do keep us in the loop on how you get on!