Hey @Rei 👋 Hope you don’t mind but I moved your reply out into a new topic so we can address this different issue separately.
The o4-mini-deep-research model is faster than the o3-deep-research model, so it’s possible the o3 model is just taking too long to respond and timing out. But rather than surfacing a timeout error, the run is left stuck in that “Delayed” status - could be that there’s a bug that’s preventing the error from surfacing as it normally would. 🤔
I can see you reached out to our Support team and they’ve reached out to our engineers about this. Which is perfect, they’ll be able to help get to the bottom of what’s causing those Zap runs to get stuck.
In the meantime, I’d suggest using the o4-mini-deep-research model instead, since it’s a faster and more cost-effective model than the older o3 model.
Please do keep us updated on how it goes with our Support team - curious to know what they find!
@SamB
Thanks for the suggestion! 🙏
Just to summarize for others who might be running into the same issue:
Zapier uses a resumption process internally – it sends the request to OpenAI, generates a callback URL, and then waits for the response to resume the Zap run.
The problem here seems to be that the o3-deep-research response takes longer than the time Zapier is willing to wait, so Zapier times out first. That leaves the Zap stuck in the “Waiting/Delayed” state instead of surfacing a proper timeout error (it seems this is a fairly common problem).
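To make that flow a bit more concrete, here’s a rough sketch of the general send → register callback → wait-to-resume pattern. To be clear, this is just my illustration - the function names, the callback URL, and the wait window are assumptions, not Zapier’s actual internals:

```python
import time
import uuid

# Hypothetical sketch of the send -> register callback -> wait-to-resume
# pattern. All names, URLs, and limits here are made up for illustration.

MAX_WAIT_SECONDS = 30  # shortened for the sketch; the real window is presumably much longer


def start_deep_research(prompt: str) -> str:
    """Kick off the long-running request and register a callback URL for it."""
    request_id = str(uuid.uuid4())
    # In the real flow this is where the request goes to OpenAI, along with
    # a callback URL the finished result should be POSTed back to.
    print(f"Submitted {request_id}; callback: https://example.invalid/cb/{request_id}")
    return request_id


def callback_received(request_id: str) -> bool:
    """Placeholder check for whether the provider has called back yet."""
    return False  # simulate a response that never arrives within the wait window


def wait_for_resume(request_id: str) -> None:
    """Hold the run in a 'Delayed' state until the callback arrives or we give up."""
    deadline = time.monotonic() + MAX_WAIT_SECONDS
    while time.monotonic() < deadline:
        if callback_received(request_id):
            print("Callback arrived - resuming the Zap run.")
            return
        time.sleep(5)  # still 'Delayed', keep waiting
    # If this error never gets surfaced, the run just sits in "Waiting/Delayed"
    # forever instead of failing loudly - which matches the symptom in this thread.
    raise TimeoutError(f"No callback for {request_id} within {MAX_WAIT_SECONDS}s")


if __name__ == "__main__":
    req = start_deep_research("a research prompt that takes a long time")
    wait_for_resume(req)
```

The point of the sketch is the last step: if that timeout never gets surfaced back into the Zap history, the run just stays parked in “Delayed”, which looks like exactly what’s happening here.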
Will keep this thread updated once I hear back from Support/Engineering!
Sorry for the delay in my reply here @Rei. Thank you so much for sharing that summary here for folks - it’s super helpful. 🤗
Looking forward to hearing how it goes with the Support/Engineering teams!