Best answer

ChatGPT 4 Cutting off Responses – Not cancelled, was_trimmed false, only using around half of token limit

  • 29 September 2023
  • 8 replies
  • 385 views

I read the posts about the 150-second timeout, but this seems like a different issue. Please help :)

 


Best answer by SamB 2 October 2023, 12:53


8 replies

Could you provide more information about what you are trying to achieve with your end-to-end flow?

I teach a creative writing class, and I am asking ChatGPT to develop outlines for possible short stories based on information from a questionnaire I give my students to complete about their lives. 

The questionnaire comes from Typeform, and I am using Zapier to send the answers to ChatGPT. I put the questionnaire in the user message and then the instructions for what to do with the questionnaire in the assistant instructions. 

Does that context help?

I am trying to recreate the flow end to end to help debug this. Do you mind sharing the Typeform questions and assistant instructions you are using (or at least some subset, so I can try to recreate it)? Also, are you using GPT-3.5 or GPT-4?

I’m using GPT-4. I think I made a mistake, though — I think it is being cancelled. What is the best fix for this? I’ve seen a few different responses in the threads. 

 

I want to re-create this flow on my account to reproduce the issue and help you debug it. 

So if you can share the:

  1. Typeform link
  2. ChatGPT user message
  3. ChatGPT assistant instructions

I can re-create this flow.


Hi there @zach_noequal! 👋

Am I correct in thinking that the issue here is that the response is being cut off when testing and it appears to have been cancelled?

If so, I think this might be the expected behaviour here. When testing in the Zap Editor, if it takes ChatGPT longer than 50 seconds to finish generating a response, it will output whatever it has generated so far, cutting off the rest of the response, and it will set the was_cancelled field to “true”:
[Screenshot: the was_cancelled field set to true in the Zap Editor test output]
But when the Zap is running live, ChatGPT has up to 15 minutes to generate a response, so the Zap would receive the full response in most cases. 🙂
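If it helps, one way to guard against this is a small “Code by Zapier” (Python) step after the ChatGPT action that checks the was_cancelled flag before later steps use the text. This is only a sketch under assumptions — the field names mapped into input_data (“was_cancelled”, “response”) depend on how you map the ChatGPT step’s output in your own Zap:

```python
# Hedged sketch of a "Code by Zapier" (Python) step that flags truncated
# ChatGPT responses. The input field names below are assumptions -- map
# the ChatGPT step's was_cancelled and response fields into them.

def check_response(input_data):
    # Zapier passes mapped fields in as strings, so the boolean arrives
    # as the text "true" or "false".
    cancelled = str(input_data.get("was_cancelled", "false")).lower() == "true"
    text = input_data.get("response", "")
    return {
        "truncated": cancelled,
        "response": text,
        # A crude extra signal: cut-off replies often end mid-sentence.
        "ends_cleanly": text.rstrip().endswith((".", "!", "?")),
    }

# In an actual Code step, Zapier injects input_data and reads output:
# output = check_response(input_data)
output = check_response({"was_cancelled": "false", "response": "Outline: ..."})
```

You could then add a Filter step on the truncated field so downstream steps only run on complete responses.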

If that’s not the issue here, can you please provide some further details to give us more context? For example, are you seeing any error messages, or is the response incorrect in some way? If you can share any screenshots showing the selected fields and settings for the ChatGPT action, that’ll help us see if anything there might be causing issues. Please remove/hide any private information from the screenshots before sharing (names, email addresses, etc.).

Looking forward to hearing from you on this!

Thank you, SamB! That seems to be the issue. Is there any workaround for testing? It makes it very cumbersome to work with the flow if I have to trigger it live to test individual steps. 


You are most welcome @zach_noequal! 🤗

There isn’t a workaround for the 50-second limit when testing in the Zap Editor, unfortunately. The only way to avoid that limit is to run tests while the Zap is switched on.

Sorry for the less-than-ideal news on this. Please do get in touch if you have further questions or if we can assist with anything else at all! 
