
It looks like GPT takes a minute or two to respond to big queries (a 4,000-word completion task), but Zapier isn't waiting for the response and just stops the Zap. Zapier still counts the tasks completed up to that point, and the OpenAI API counts the query as soon as it receives it.

 

Any ideas on how to make Zapier wait until it gets the response from GPT?

I have the exact same problem waiting for the responses.


Hey there, @connortack - great question, and we appreciate you raising this in the Community. There has been some discussion around the timeframe the OpenAI API takes to respond, and I wanted to pop in and share some context.

Typically, Zaps will time out at 30s. This is the case for the vast majority of apps, with very few exceptions. If anyone is curious about a more in-depth explanation of timeout errors, a Zapier expert wrote this blog post.

Our team is monitoring the volume of these errors, and while we’re not able to wait an indefinite amount of time for a response, the team did recently double the timeout threshold to 60s for OpenAI. In the interest of full transparency, though, we’re unsure whether increasing this will resolve the errors, since there’s also the possibility of slowdowns on the OpenAI side that we have no control over, and/or of Zapier not receiving a response at all.

All that said, we understand the impact this has on y’all and will continue to monitor logs. We’ll also be sure to keep this thread updated with any further changes. Thanks again for sharing your feedback here in the Community!


This seems to still be a big problem. Any update on how Zapier is dealing with this?


Let’s say the OpenAI Zap times out. Is there a way to then trigger another Zap to offer an alternate solution?


I’m also having this issue.

It’s inconsistent: sometimes it fails due to timeouts, sometimes not. Is it possible to configure the timeout of the Zap through the Zap setup?
 

Failed to create a prompt in OpenAI (ChatGPT)

The app did not respond in-time. It may or may not have completed successfully.


Hey there, friends - thanks for keeping us in the loop! Full transparency: most of these timeouts are likely the result of slow response times from OpenAI 😔 which, ultimately, isn’t something Zapier has control over.

That said, while customizing timeouts within a Zap isn’t an option, there is an auto-replay feature for failed Zap runs on Professional plans or higher. I know that’s not a true fix, but I wanted to mention it for the situations where it may be applicable.

We appreciate y’all voicing your thoughts, and if anything changes on our end we’ll be sure to share here!


I understand that Zapier does not have control over OpenAI’s performance, but given that this is probably not going to improve, can Zapier not implement app/integration-specific timeouts, or allow the customer to set/adjust the timeout limit at a Zap level? The reason for this is that OpenAI’s API monitor shows responses completing in around 2-4 minutes.

Especially considering the variable length of prompts, it is taking much, much longer; in fact, it’s almost impossible to develop any solution using ChatGPT and Zapier.

Sure, we could write our own service to manage automations. However, isn’t this what we have been paying Zapier for?


@P4NDA 

 

You can drastically improve the quality of the service by managing your prompts better, especially for chat applications.

The response message has to fully finish streaming before Zapier’s timeout.

By splitting up your prompts and limiting the length of the response you’re requesting, you can build a pretty reliable service.
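To make that concrete, here is a minimal sketch (Python, openai package v1.x, not the Zapier integration itself) of the split-and-limit idea: ask for a short outline first, then generate one capped section at a time so no single call runs long enough to hit a timeout. The model name, section count, and max_tokens values are illustrative assumptions, not recommendations, and the script assumes OPENAI_API_KEY is set in the environment.

```python
# Sketch only: split one big completion into several small, capped calls.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def outline_sections(topic: str) -> list[str]:
    """Ask for a short outline first; a small response comes back quickly."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        max_tokens=200,         # keep the reply short so the call can't run long
        messages=[{"role": "user",
                   "content": f"List 5 short section titles for an article about {topic}."}],
    )
    return [line.strip("-• ").strip()
            for line in resp.choices[0].message.content.splitlines() if line.strip()]


def write_section(topic: str, section: str) -> str:
    """Generate one section at a time instead of one 4,000-word completion."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=600,  # roughly 400-450 words per call; tune as needed
        messages=[{"role": "user",
                   "content": f"Write the '{section}' section of an article about {topic}."}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    topic = "automating content with Zapier and OpenAI"
    article = "\n\n".join(write_section(topic, s) for s in outline_sections(topic))
    print(article)
```

In a Zap, the same idea maps to several smaller OpenAI/ChatGPT steps instead of one huge completion, with each step’s output feeding the next, so every individual step stays well inside the timeout window.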


Agree that the current ChatGPT and Zapier integrations are useless for building anything. I cannot have a user wait and refresh several times to get the GPT response, sometimes taking two or more minutes. This should be near-instant to replicate the live dialogue experience you get directly in ChatGPT; anything less is not suitable for a customer-facing service. If it were for internal work, such waiting times might have been tolerable, but surely not for a customer experience.


Just adding here I’m having these same issues. My request with GPT is taking 2-3 minutes, too long for the Zap.

Has anyone had any workaround success with the API?

I’m currently sending information through a GPT Assistant. I’m thinking I could use an OpenAI API Zap to send the request, add a Zap delay of 5 minutes, then use another Zap step to find the OpenAI API output. It’s not foolproof, but it should work. Any thoughts?
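For anyone exploring that route, here is a rough sketch of the same "send now, fetch later" pattern shown outside Zapier with the OpenAI Assistants API (openai Python package, v1.x). Step 1 would correspond to the first Zap step, the sleep stands in for the 5-minute Delay step, and step 2 would correspond to the later step that retrieves the output. ASSISTANT_ID is a placeholder, and the exact values you would pass between Zap steps are assumptions.

```python
# Sketch only: kick off a run now, collect the answer after a delay.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
ASSISTANT_ID = "asst_XXXXXXXX"  # hypothetical assistant ID


def start_request(prompt: str) -> tuple[str, str]:
    """Step 1: start the run and return immediately with the IDs to store."""
    thread = client.beta.threads.create(
        messages=[{"role": "user", "content": prompt}]
    )
    run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)
    return thread.id, run.id


def fetch_result(thread_id: str, run_id: str) -> str | None:
    """Step 2: after the delay, check the run and read the assistant's reply."""
    run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
    if run.status != "completed":
        return None  # not done yet (or failed); retry or handle as needed
    messages = client.beta.threads.messages.list(thread_id=thread_id)
    return messages.data[0].content[0].text.value  # newest message first


if __name__ == "__main__":
    t_id, r_id = start_request("Write a 1,500-word article about home composting.")
    time.sleep(300)  # stand-in for the 5-minute Delay step in the Zap
    print(fetch_result(t_id, r_id) or "Still running - check again later.")
```

The appeal of this pattern is that the step that kicks off the run returns immediately, so it never comes close to a timeout; only the later retrieval step needs the work to have finished.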

 


Hi folks! 👋

Just came across this thread and wanted to clarify that ChatGPT actions during live Zap runs are now allowed up to 15 minutes to complete before throwing a timeout error. See the related Community topic where this update was announced previously:


I do also want to note here that testing in the Zap editor is still limited to a much shorter timeout period (50 seconds), so you may still find that tests will time out and return only a partial response.

@jbtx - that sounds like a good idea for a potential workaround. But I’d also recommend reaching out to our Support team to investigate this further, as 2-3 minutes is well under 15 minutes, so you shouldn’t still be seeing timeout errors!

In fact, if anyone is still running into these timeout errors, please reach out to the Support team to investigate, as it could well be that a bug has since developed that causes requests to time out. You can get in touch with them here: https://zapier.com/app/get-help

Hope that helps! 🙂




Big news! Thanks @SamB. Important clarification that the Zap editor timeout is not the same as the live Zap timeout. I had only tested the timeout in the editor, not in live runs. So this alone is helpful.


15 minutes is more than enough time otherwise. THANK YOU!


You’re very welcome, @jbtx. Glad I could help! 🤗

Seems that you’re likely all set for now, but please do reach out in the Community again if we can assist with anything else.