I have a GPT-4 API key, and it’s still cutting off my outputs as if there’s a character limit.
Example (generating a game preview):
“Aliens, abandoned high-rises, and a fight for survival – welcome to The Highrise!”
Introducing The Highrise, a thrilling first-person survival game that will have you on the edge of your seat. Set in a once-familiar high-rise building, now abandoned and overtaken by terrifying alien monsters, your mission is to gather resources, craft weapons and tools, and ultimately send a rescue signal to escape this nightmare.
The Highrise offers an immersive experience with its eerily realistic setting. Inspired by actual skyscrapers, each floor and facility in the game feels authentic and true-to-life. As you explore the abandoned shopping malls, offices, and residences, you'll be haunted by the chilling emptiness and the lurking danger that lies around every corner.
But be warned: the alien monsters, known as sniffers, have an exceptional sense of hearing and smell. Any careless noise or movement could send them rushing towards you, so stealth and strategy are key in this high-stakes game of survival.
As you delve deeper into the lower floors of the building, you'll uncover more resources and better tools. But beware – the monsters grow stronger and more numerous the closer you get to the ground. It's a high-risk, high-reward situation that will test your survival instincts to the limit.
Key features of The Highrise include managing hunger and thirst, crafting tools and weapons, and navigating the building's intricate ventilation system – your only safe haven from the monsters. With six playable characters, each with unique stats, skills, and backstories, you'll have plenty of options to tailor your gameplay experience.
Choose between single”
As you can see, it cuts off the response at the end there, and it can definitely output a LOT more text than that. I don’t see anywhere I can adjust the maximum tokens unless I’m missing something. Is there a fix for this?
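For what it’s worth, when calling the OpenAI API directly (rather than through an integration), the length of the reply is capped by the max_tokens request parameter. A minimal sketch of how that request could be assembled, assuming the official Python client and a placeholder prompt:

```python
# Sketch: max_tokens bounds how long the model's reply can be.
# The model name and prompt here are placeholders, not from the thread.
def build_chat_request(prompt, max_tokens=2048):
    """Assemble Chat Completions parameters; raise max_tokens if replies truncate."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

params = build_chat_request("Write a game preview for The Highrise.")

# With the official client this would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**params)
#   print(response.choices[0].message.content)
```

Note that max_tokens only controls the API-side limit; as the accepted answer below explains, the Zapier-side cutoff is a separate timeout issue.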
Best answer by Danvers
I found this question because I have exactly the same problem and was looking for a solution. Hopefully someone can help.
No need for the GPT-4 API key.
I just wrote the answer in this article; let me know if that works for you.
I was getting the same issue, and for me it was due to using the memory key box, so it was cutting off every time I inputted something. I erased the words in the box and it works now. Although, since the memory key is removed, it no longer saves previously used information.
Hope this helps
If you’re using the ChatGPT integration (as opposed to the OpenAI one) and are seeing prompt responses cut off due to length, it could be due to the time limit Zapier has for returning a response. If it takes longer than 150 seconds for ChatGPT to provide a response, Zapier will post what there is at that point and cut off the rest. If the response is cut off, the field was_cancelled will say true.
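If it helps, you can branch on that field in your Zap. A minimal Code by Zapier sketch, assuming the step’s input_data maps the was_cancelled output and a hypothetical “response” field from the ChatGPT step (the inline sample dict stands in for what Zapier would pass):

```python
# Sketch of a Code by Zapier (Python) step that flags truncated responses
# so a later Filter or Paths step can branch on them.
# In a real Zap, input_data is provided by Zapier; this dict is a stand-in.
input_data = {
    "was_cancelled": "true",            # value mapped from the ChatGPT step
    "response": "Choose between single" # hypothetical field name for the reply text
}

# Zapier passes values as strings, so compare against the text "true".
was_cut_off = str(input_data.get("was_cancelled", "")).lower() == "true"

output = {
    "truncated": was_cut_off,
    "text": input_data.get("response", ""),
}
```

A Filter step after this could then only continue when “truncated” is false, or route cut-off responses down a retry path.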
We do have a feature request to remove the cutoff for the conversation action for the ChatGPT integration. I’ve added everyone on this post as an interested user for the request. That lets the team know how many users want to see this and also means that you’ll get an email when we have an update.
In the meantime, if you check out this post, @pieternmnmn and @REXZ Automations have replied with great suggestions on working around this issue by adding steps into your Zap.
If any other members find this thread and are running into the same issue, please add a reply to the post above to let us know that you’d like to be added to the feature request. To be super clear, this happens in the ChatGPT integration, and if you see was_cancelled: true in the output of the step, that’s how you know this issue is the cause of your prompt response being cut off.
To try and keep things tidy, I’m going to close this post.
@bstieve, @DeichHalo and @ChrisBraggion, if you’re seeing different behaviour in your Zap, or are using the OpenAI integration rather than the ChatGPT one, could I ask you to please post a new question? Include as much information as possible (e.g. the number of tokens used, where you have set token limits, etc.) and we’ll do our best to understand why you’re running into your issue.