Intermediate dev here. I understand the difference between the ChatGPT and GPT-4 APIs, etc.

 

I’ve been using GPT-4-Turbo-0409 through Zaps, but I want to test the leading GPT-4 model, which costs more according to OpenAI’s pricing. However, when I select it (vs. Turbo 0409), it does not take the context into account. By context I mean formatted text from a Formatter step, or even a reply from a previous GPT output within the same automation.

Looking for answers.

Hi @suritech, welcome to the Community! 🎉 

Usually, if it’s not performing as expected, it’s most often down to issues with its interpretation of the prompt/instructions it’s been given. It can take a bit of trial and error to get the prompting for ChatGPT actions working as desired. We have a guide with tips on how to write more effective prompts that you may find helpful in getting it to interpret its instructions correctly: How to write an effective GPT-3 or GPT-4 prompt

You also mentioned that it’s not taking the previous replies into account, so my initial thought here is that perhaps a Conversation ID hasn’t been specified - assuming you’re using the Conversation with Assistant action? Without a Conversation ID, the ChatGPT action wouldn’t be able to reference past conversations. Do you think that could be the cause here?
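To illustrate why the context has to be passed along explicitly: the underlying chat API is stateless, so each call only "knows" what's in the message list it receives - which is essentially what a Conversation ID lets the Zap reconstruct for you. Here's a minimal Python sketch of that idea, assuming the message format used by OpenAI's chat completions API; `build_messages` is a hypothetical helper, not part of any SDK:

```python
# The chat API is stateless: prior turns must be re-sent on every call.
# build_messages is a hypothetical helper showing the shape of the
# `messages` list the API expects.

def build_messages(history, new_user_message,
                   system_prompt="You are a helpful assistant."):
    """Assemble the full message list for a stateless chat completion call."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": new_user_message})
    return messages

# Prior turns that a stored conversation would let the automation replay:
history = [
    {"role": "user", "content": "Summarize this invoice: ..."},
    {"role": "assistant", "content": "Total due: $120 by June 1."},
]

messages = build_messages(history, "Now draft a reminder email about it.")
# If `history` were omitted, the model would see only the final user
# message and have no context from earlier steps in the automation.
```

The resulting `messages` list would then be passed to the model on each request; drop the history and the model genuinely has nothing earlier to refer to, regardless of which GPT-4 variant is selected.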

Would you mind sharing a screenshot showing what fields and settings have been selected in the Action section of that ChatGPT action? We can’t access your Zaps so seeing a screenshot of that will help us to see if there’s anything on the setup that might be causing issues here.

Looking forward to hearing from you on this!
