Can I manage the ChatGPT rate limit for a long Zap loop using GPT for journalist requests?

  • 8 September 2023
  • 1 reply



I am running a long Zap loop using GPT for journalist requests (HARO). The idea is that we get emails with maybe 100 requests, and use Zapier to feed each request through GPT to filter its suitability. It’s a great use of Zapier and GPT, but the problem is we hit the rate limit for ChatGPT (10,000 tokens per minute) very quickly. We’re using GPT-3.5 Turbo for the first pass, then GPT-4 for the small subset of requests that pass the filter.


I’m looking for any advice on how to manage this rate limit. Since it’s a time-based limit, what is the best way to space out the request loop? I considered turning on Autoreplay, but that would also need to run on some delay. I’m going to work on the prompt to reduce the token input as well. Any other solutions are welcome!
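For anyone who wants to reason about the spacing outside of Zapier itself, one way to think about a tokens-per-minute limit is a rolling-window budget: track the tokens sent in the last 60 seconds and compute how long to wait before the next request fits. Here is a minimal sketch in Python (the class and method names are my own, and the per-request token counts would have to be estimated from your prompts, e.g. with a tokenizer):

```python
from collections import deque

class TokenBudget:
    """Rolling-window token budget: record each request's token count
    and report how long to wait before the next request fits under
    a tokens-per-minute limit."""

    def __init__(self, tokens_per_minute=10_000, window=60.0):
        self.limit = tokens_per_minute
        self.window = window
        self.events = deque()  # (timestamp, tokens) pairs, oldest first

    def record(self, t, tokens):
        """Register a request of `tokens` tokens sent at time `t`."""
        self.events.append((t, tokens))

    def delay_for(self, t, tokens):
        """Return seconds to wait at time `t` before sending `tokens`."""
        # Drop events that have already aged out of the window.
        while self.events and t - self.events[0][0] >= self.window:
            self.events.popleft()
        used = sum(n for _, n in self.events)
        wait = 0.0
        for ts, n in self.events:
            if used + tokens <= self.limit:
                break
            # Wait until this oldest event expires, freeing its tokens.
            wait = (ts + self.window) - t
            used -= n
        return max(0.0, wait)
```

For example, after sending 6,000 tokens at t=0 and 3,000 at t=1, a 4,000-token request at t=2 would have to wait 58 seconds for the first batch to age out of the window. This is the same idea a delay step implements, just made explicit.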

This post has been closed for comments. Please create a new post if you need help or have a question about this topic.



Hi @markolaz 

Good question.

Try adding a Delay After Queue step to the Zap. It holds each Zap run in a queue and releases them one at a time on a fixed interval, which spaces out the GPT calls so they stay under the per-minute limit.
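A Delay After Queue step spaces requests out up front; the complementary approach is to retry after the fact when a call does hit the limit. As a rough illustration of that pattern (not Zapier- or OpenAI-specific; the function names and parameters here are hypothetical), an exponential-backoff wrapper looks like this:

```python
import random
import time

def call_with_backoff(make_request, max_retries=5, base_delay=1.0,
                      sleep=time.sleep, is_rate_limited=lambda err: True):
    """Call make_request(); on a rate-limit error, sleep with
    exponentially growing delays plus jitter, then retry.
    Re-raises the error once max_retries attempts are exhausted."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except Exception as err:
            if not is_rate_limited(err) or attempt == max_retries - 1:
                raise
            # 1s, 2s, 4s, ... plus a little jitter to avoid thundering herds.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The jitter keeps many parallel Zap runs from all retrying at the same instant. Combined with a queue delay sized to your token budget, this usually keeps a long loop under a per-minute limit.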