Best answer

How do I continue the ChatGPT Zap element to rewrite blog posts from Google Sheets?

  • 17 October 2023
  • 9 replies
  • 222 views

Hi,

I am trying to create an automation to update and rewrite existing blog posts on our blog.

I am organizing my data in Google Sheets. 

When I get to the first ChatGPT Zap element, I input the complete prompt. I use a memory key of [row number of my sheet][time/date element from trigger], which should be unique every time.

The response times out, so I get a “true” in the was_cancelled field. However, the first 300-400 words of the response are captured. 

I added another ChatGPT Zap element with the intent of prompting the AI to “continue,” as I would in the live chat interface on the ChatGPT site.

So for the second ChatGPT Zap element, I input just “continue” in the user message and use the same memory key as in the first element. I get an output like this:

“Certainly! In our previous conversation, we were discussing how I can assist you. How may I help you today?”

It appears that all of the history is in the test output, but it will not output the continuation of the blog that was cut off. 
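
(For context, my mental model is that a “continue” prompt only works because the earlier exchange gets sent back along with it. Outside of Zapier that would look roughly like the sketch below, using the OpenAI Python library directly rather than the Zap step; the history contents are just placeholders.)

```python
# Rough sketch (OpenAI Python SDK, not the Zap step): "continue" only works
# if the original prompt and the partial reply are sent back along with it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    {"role": "user", "content": "Rewrite the following blog post: ..."},     # original prompt
    {"role": "assistant", "content": "<the truncated 300-400 word draft>"},  # partial reply
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history + [{"role": "user", "content": "continue"}],
)

print(response.choices[0].message.content)  # should pick up where the draft stopped
```

My assumption was that the memory key is what replays that history for me.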

Any help would be appreciated. Thanks. 

Jason


Best answer by JDKru 30 October 2023, 17:32



9 replies

Userlevel 7
Badge +14

Hi @JDKru 

Good question.

To give us full context, please post detailed screenshots showing how your Zap steps are configured.

Sure

 

Here is the map of what I have so far. I am building logic so that if the article needs to be “continued” a few times, it can be. I wrote into the prompt to have ChatGPT end the article with a very specific, searchable sentence so that I can filter on whether the result has reached its end. As you can see, I have only built this out to “continue” one additional time.
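
(Conceptually, the Filter step is just checking for that end sentence; the sentence in the sketch below is a made-up stand-in for the one in my prompt.)

```python
# Hypothetical sentinel check, mirroring what the Filter step tests for.
END_MARKER = "This concludes the rewritten article."  # stand-in for my real end sentence

def article_is_complete(chatgpt_output: str) -> bool:
    # If the marker never appears, the output was cut off and needs a "continue" pass.
    return END_MARKER in chatgpt_output
```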

 

Now here is the configuration of Step #3, which is the first ChatGPT element referenced in my original post:

 

Now here is a screenshot of the test results from this step:

You can see that the bottom of the article is cut off mid-thought, and “was_cancelled” is true.

 

Now here is the configuration of Step 8, which I originally referenced as the second ChatGPT Zap element.

My prompt is just the word “continue,” which is what I would use if I were chatting with ChatGPT directly.

Here is the rest of the configuration.

 

Now here is the output from the test results of that step.

You can see here that it is certainly not continuing the conversation. Rather, it is starting a generic new one based on the standalone prompt of “continue.”

 

So the question is: how do I get it to continue the output from the original prompt?

 

Thank you so much for your help. 

 

Jason

 

Userlevel 7
Badge +14

@JDKru 

Maybe look into using this Zap action: Formatter > Text > Split Text in Chunks for AI Prompt

It doesn’t seem to have an issue taking the prompt I am giving it in Step 3; it is responding appropriately. Where I am having the issue is that its response is being truncated because it is timing out and/or running out of tokens. Which is fine: I am asking it to write a 1,000-word blog post, so I would expect that it might not be able to fit it all in with the tokens and time allotted.
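
(For what it’s worth, when I call the API directly, a reply that hits the token limit is flagged explicitly. The sketch below is just that, not the Zap step, and the 500-token cap is only there to force the truncation.)

```python
# Sketch: calling the API directly, a reply cut off by the token limit
# reports finish_reason == "length", the rough analogue of was_cancelled here.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a 1,000-word blog post about ..."}],
    max_tokens=500,  # deliberately small so the reply gets cut off
)

choice = response.choices[0]
if choice.finish_reason == "length":
    print("Truncated: a follow-up 'continue' request is needed.")
```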

 

Am I misunderstanding how the Memory Key is supposed to be working? 

 

It would seem that, the way it currently works, the most conversation it can have is one prompt with one response. How are people configuring chatbots so that they get a back-and-forth interaction working?

 

It seems like I must be missing something here, something that is simple. 

 

Userlevel 7
Badge +14

@JDKru 

Perhaps try asking the AI to return results in smaller outputs (e.g. 250 words), then using additional AI steps to ask for the next output of X words.

 

 

That is more or less what I am asking how to do. It isn’t working. I am not able to have the system continue the conversation. 

Userlevel 7
Badge +14

@JDKru 

Try changing the prompt to use something besides “continue”.

Like “continue and return the next 250 words”.
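
Roughly this pattern, if you were scripting it directly instead of chaining Zap steps (a sketch only; the sentinel sentence and chunk size are illustrative):

```python
# Sketch of the "write in chunks, then continue" pattern the Zap steps approximate.
from openai import OpenAI

client = OpenAI()
END_MARKER = "This concludes the rewritten article."  # illustrative sentinel sentence

messages = [{
    "role": "user",
    "content": ("Rewrite the blog post below in chunks of roughly 250 words. "
                f"End the final chunk with exactly: {END_MARKER}\n\n<post text here>"),
}]
article = ""

for _ in range(8):  # safety cap on follow-up requests
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    chunk = reply.choices[0].message.content
    article += chunk + "\n"
    if END_MARKER in chunk:
        break
    # Keep the assistant's chunk in the history so "continue" has real context.
    messages += [
        {"role": "assistant", "content": chunk},
        {"role": "user", "content": "continue and return the next 250 words"},
    ]

print(article)
```

The key point is that each “continue” request carries the chunks already written, which is what the shared memory key is meant to do for you.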

Ok, let me see if that frees it up. I was just playing around with stacking ChatGPT elements, and it does seem to work. I am thinking that the was_cancelled return is what’s breaking it, so we need to make sure that the total session time does not exceed 50 seconds.

 

I’ll respond in a few minutes, as soon as I have tested this strategy.

Ok, I figured out the solution to this. The testing system works differently than the actual system: it will time out on a longer request when testing the Zap, but the request gets enough time in production. So in short, everything was/is working properly; you just need to not trust the test….