
So I am new to using Zapier but seem to understand its limits so far. I am having ChatGPT generate an article, but barely into the second paragraph it hits a brick wall. This is within only 30 seconds or so, too.

I keep getting was_cancelled: false rather than true, which I have read many times would indicate a timeout. It barely runs more than 30 seconds most times, and whether the flag says true or false, the response is cut off and then it retries, sometimes 3 times. It seemed to work at first but is now giving multiple errors like this one, whether I use ChatGPT-3 Turbo or ChatGPT-4 with a large max-tokens setting (currently 3,800 or more with ChatGPT-3). It cuts off, does this weird retry, and gives me 3 outputs instead of one long output.

Could you help me understand why this is happening? Is this a Zapier limitation or an OpenAI limitation?

Hi @darylweston 

Check the OpenAI status page: https://status.openai.com/


Thanks for this and the fast reply; I did not know they had a status page. It mentions ChatGPT-4 latency issues, but I am currently using only ChatGPT-3 Turbo, so I am unsure why it would cut out 3 times and retry, giving me multiple outputs. I did notice I had more luck yesterday than today.


@darylweston 

I’d wait for the OpenAI incident to be resolved before testing the Zap again.

If the issue persists, then please post detailed screenshots showing how your Zap steps are configured.


I was hoping for ChatGPT to write me an article that would take about 5 minutes to read, and I worked out I would need 2,100 words for that. Generating an article that big in Zapier's testing area gives me at most 3 minutes. I noticed Zapier states that testing only runs for 50 seconds before cancelling with was_cancelled: true, which could be one reason it would not complete. But the other behaviour, only seen today, of ending barely a quarter of the way into the output and trying again, seems odd. If I wanted to achieve the full 2,100-word article, would I need to run the Zap live rather than in the testing phase?


@darylweston 

For some Zap app actions, testing has different limitations than a live Zap run, so it may be worth trying.


To start off with: there are no ChatGPT API issues now; the status page is all green.

So in the first step I choose the Conversation action.

I give it the message as follows, choosing the gpt-4 model with an API key.

 

 

Knowing it shouldn't be any more than the max tokens I set below, that should cover a 2,100-word article, correct?

 

To show just what happens, you can see here that it retries lots of times:

To prove the point that it's not using all the allocated tokens, it gives me this, showing that I have a tokens_remaining of 6,062 (the screenshot below).

As a simple test, if you set up a webhook and then the ChatGPT plugin and create something similar, I wonder if it would give you the same output?

 

I really am at a loss. I wanted to use Zapier to help with this task, but I have been going back and forth for the last 12 hours looking into this.

If you can help then thank you. 


@darylweston 

Check this OpenAI article about tokens: https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them

 

What are tokens?

Tokens can be thought of as pieces of words. Before the API processes the prompts, the input is broken down into tokens. These tokens are not cut up exactly where the words start or end - tokens can include trailing spaces and even sub-words. Here are some helpful rules of thumb for understanding tokens in terms of lengths:

  • 1 token ~= 4 chars in English

  • 1 token ~= ¾ words

  • 100 tokens ~= 75 words

Or

  • 1-2 sentence ~= 30 tokens

  • 1 paragraph ~= 100 tokens

  • 1,500 words ~= 2048 tokens
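Applying those rules of thumb to the 2,100-word article goal discussed above, a rough sketch (the 0.75-words-per-token ratio is only an approximation, so treat the result as an estimate, not an exact budget):

```python
def estimate_tokens_from_words(word_count: int) -> int:
    # Rule of thumb from the OpenAI article: 100 tokens ~= 75 words,
    # i.e. roughly 0.75 English words per token.
    return round(word_count / 0.75)

# A 2,100-word article would need roughly this many completion tokens:
print(estimate_tokens_from_words(2100))  # -> 2800
```

By this estimate, a max-tokens setting of 3,800 should be more than enough for 2,100 words, which supports the idea that the cut-off is a timeout rather than a token limit.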


@darylweston 

You may be hitting the timeout window for the ChatGPT action to finish processing in a Zap step.

 

For context, with the Code app the timeout is listed in seconds.
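One common workaround when the ChatGPT action times out is to call the OpenAI API directly, e.g. from a Code step or a webhook, where you control the request timeout yourself. A hypothetical sketch follows; the helper name `build_request`, the prompt text, and the 120-second timeout value are my assumptions, not confirmed Zapier behaviour, and you would supply your own API key:

```python
import json

def build_request(prompt: str, max_tokens: int = 3000) -> dict:
    # Payload for the OpenAI Chat Completions endpoint
    # (https://api.openai.com/v1/chat/completions).
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Write a 2,100-word article about ...")
print(json.dumps(payload, indent=2))

# To actually send it (requires the `requests` package and a real key):
# import requests
# resp = requests.post(
#     "https://api.openai.com/v1/chat/completions",
#     headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},  # placeholder key
#     json=payload,
#     timeout=120,  # seconds; well beyond the ~50s test-mode window
# )
# article = resp.json()["choices"][0]["message"]["content"]
```

This keeps the long-running generation outside the ChatGPT action's built-in limit, at the cost of handling the HTTP call and error cases yourself.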

 


Did you find any solution for this?

