Question

How to create a Zapier flow to update PostgreSQL records and add new rows using an API list


I have an API endpoint from which I can only get a list of end users. It has no create-row or update-row capability, but I can sort the list (e.g. by last change date) and limit how many records I pull.

My question is: how do I build a Zapier flow that 1. updates existing records in my PostgreSQL server and 2. creates new rows, while using as few paid tasks as possible with this kind of API endpoint?

 

This is my take on it, but I’m very unsure if it’s the best way, or if it even works properly - what do you think?

  1. I run an update each hour.
  2. I get the time + date the Zap last ran from Zapier Storage.
  3. I get a list of end users from my API endpoint and use (sort=-updatedAt&limit=100) to sort the list by last updated and limit it to 100 records.
  4. I use a loop to create up to 100 individual records from the JSON list just retrieved.
  5. I use a filter condition to only allow records with an ‘updatedAt’ date that is newer than the ‘Zapier Storage date’ (the last time the Zap ran).
  6. I take the ID from one record and search my PostgreSQL database to see if it exists or not.
  7. Split pathway:

      8/9. If it doesn’t exist, then I create a new row in PostgreSQL.

      12/13. If it already exists, then I update the row in PostgreSQL.

      10/14. If the last loop iteration is reached, break the loop.

      11/15. Set the time of the Zap run in Zapier Storage.
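The branch logic in steps 4-7 can be sketched as plain code. This is a hypothetical sketch only; the field names `id` and `updatedAt`, and the assumption that the timestamps are comparable ISO-8601 strings, are mine, not the API's documented shape:

```python
# Hypothetical sketch of steps 4-7: split the fetched records into
# "insert" and "update" piles, skipping anything older than the last run.

def plan_sync(users, last_run, existing_ids):
    """users: list of dicts from the API, e.g. {"id": 1, "updatedAt": "2025-02-19T10:00:00Z"}
    last_run: ISO-8601 timestamp read from Storage (step 2)
    existing_ids: set of IDs already present in PostgreSQL (step 6)"""
    to_insert, to_update = [], []
    for user in users:
        # Step 5: filter out records not changed since the last run.
        # Same-format ISO-8601 strings compare correctly as plain strings.
        if user["updatedAt"] <= last_run:
            continue
        # Step 7: split pathway on whether the row already exists.
        if user["id"] in existing_ids:
            to_update.append(user)   # steps 12/13: update the row
        else:
            to_insert.append(user)   # steps 8/9: create a new row
    return to_insert, to_update
```

Since the list arrives sorted newest-first, once one record fails the step-5 filter every later record will fail it too, so a loop could stop early at that point.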


3 replies

SamB
Community Manager
  • 7442 replies
  • February 19, 2025

Hey there, ​@BMulf. Thanks for joining our Community! 🎉 

My question is: how do I build a Zapier flow that 1. updates existing records in my PostgreSQL server and 2. creates new rows, while using as few paid tasks as possible with this kind of API endpoint?

Your current setup looks like it would do what you want: create a new row or update an existing one. In terms of saving tasks, it may help to know that only successfully run Webhook and PostgreSQL actions in the Zap count toward your task usage; the Schedule trigger, Storage, Looping, Filter, and Paths actions don’t use any tasks. You can learn more about this here: What does not count toward task usage?

That said, I wonder if it might be more efficient to use that GET request as the trigger for your Zap. It’s possible to use a Retrieve Poll webhooks trigger to make a GET request to an API endpoint, which should mean the Zap only triggers when it “sees” new data that it needs to act upon. So you could get rid of the existing Webhooks action and save yourself a task each time the Zap runs. And since the Zap would likely trigger for each new/updated user separately, you may not need the Filter, Looping, and Storage actions either.

You can learn more about how to set up a Retrieve Poll webhook trigger here: Trigger Zaps from polling webhooks.

Hope that helps to get you pointed in the right direction. If you run into any issues on that or have further questions just let us know!


BMulf
  • Author
  • Beginner
  • 1 reply
  • February 20, 2025

@SamB Hi Sam 

 

Thanks for your response. I went ahead and tested out the Retrieve Poll webhook - and this was exactly what I was looking for. 1. It saves me 100 tasks for each run I would otherwise have done with the other flow (as I polled the last 100 records). 2. I get updates as they happen, not only at a fixed interval. 3. It reduced the complexity of the flow significantly, as seen below.

However, in regards to the Retrieve Poll webhook and the documentation you shared (Trigger Zaps from polling webhooks), I have some questions that aren’t covered.

  1. How exactly does the polling work? At what interval does it check the API?
  2. How should the URL for the API be formatted for the polling to work as intended?

I ask because ChatGPT is advising this kind of solution:  [API link]?updatedAt[gt]=LAST_POLL_TIMESTAMP&sort=-updatedAt&limit=100.

 

From this I wonder

  1. Do we need to sort the data coming from the API on the ‘updatedAt’ field - or does the polling work this out automatically in some way?
  2. Do we need to insert some kind of ‘LAST_POLL_TIMESTAMP’ as proposed? If yes, how do we do that, and where do we get this field from? After all, we can’t use a Storage step before the trigger.
  3. Is sorting the output of the API on ‘updatedAt’ necessary, i.e. ‘sort=-updatedAt’?

Hope you have some good insights on this. 

As a note: having educational material on these kinds of use cases - talking through the different elements that Zapier provides, why some should be used and others not, together with the architecture and setup - would be an invaluable resource for us as users. So please take this case and forward it to your educational team so they can produce such content. I am sure they would love it - I would, as an educational professional, if I were handed such great material to start with ;)

 

Finally, I actually think this flow can be reduced to even two steps. The reason steps 2-7 exist is that Zapier’s PostgreSQL integration doesn’t support upsert commands - so please add this. If that were introduced, the tasks per run would be further reduced from 3 to only 2. I will be looking into doing an upsert directly through the Code module, however I believe I would then need to add PostgREST on top of my database for this to work. Do you have any thoughts or input on that too, ​@SamB?
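For reference, the single-statement upsert being asked for here is PostgreSQL’s INSERT ... ON CONFLICT. A small sketch of how such a statement could be generated - the table and column names are made up for illustration:

```python
# Build a parameterized PostgreSQL upsert (INSERT ... ON CONFLICT) statement.
# Run it with any driver, e.g. psycopg2's cursor.execute(sql, row_dict).

def build_upsert(table, key, columns):
    cols = ", ".join(columns)
    placeholders = ", ".join(f"%({c})s" for c in columns)
    # On a key collision, overwrite every non-key column with the new value.
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in columns if c != key)
    return (
        f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) "
        f"ON CONFLICT ({key}) DO UPDATE SET {updates}"
    )

sql = build_upsert("endusers", "id", ["id", "name", "updated_at"])
```

One such statement per record replaces the search, filter, and split-path steps entirely.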


SamB
Community Manager
  • 7442 replies
  • February 21, 2025

Yay! I’m so glad my suggestion of using that Retrieve poll trigger helped, ​@BMulf! 😊

How exactly does the polling work? At what interval does it check the API?

As it’s a polling trigger I’d have thought it would check for new data every 1 or 2 minutes—depending on your Zapier plan. 

How should the URL for the API be formatted for the polling to work as intended?
I ask because ChatGPT is advising this kind of solution:  [API link]?updatedAt[gt]=LAST_POLL_TIMESTAMP&sort=-updatedAt&limit=100.

Yes, you’d want to include the query parameters in the URL field. Here’s an example of how I set up a Retrieve Poll trigger to make a request to get status updates for a project:

[Screenshot: example Retrieve Poll trigger setup, with the query parameters included in the URL field]

Do we need to sort the data coming from the API on the ‘updatedAt’ field - or does the polling work this out automatically in some way?

It expects to receive an array of objects in reverse chronological order so you’d want to sort it that way using the sort query parameter that is relevant for the API you are making the request to. 

Do we need to insert some kind of ‘LAST_POLL_TIMESTAMP’ as proposed? If yes, how do we do that, and where do we get this field from? After all, we can’t use a Storage step before the trigger.

Is this in reference to ensuring the Zap only triggers for new data that it hasn’t “seen”? If so, I wouldn’t have thought that you’d need to reference a ‘LAST_POLL_TIMESTAMP’ field, as the trigger uses the value in the Deduplication Key field to help determine whether it’s already “seen” an item or not. You can learn more about the deduplication that Zaps with polling triggers use here: Data deduplication in Zaps
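To illustrate the deduplication idea: a polling trigger keeps a record of the keys it has already seen and drops incoming items whose key matches. A rough sketch - not Zapier’s actual implementation - where the composite id-plus-updatedAt key is an assumption, chosen because it makes *updates* to an already-seen user count as new:

```python
# Rough sketch of polling-trigger deduplication: only items with an
# unseen deduplication key pass through and fire the Zap.

def new_items(items, seen_keys):
    # Composite key: the same user with a newer updatedAt counts as new.
    key = lambda item: f"{item['id']}-{item['updatedAt']}"
    fresh = [item for item in items if key(item) not in seen_keys]
    seen_keys.update(key(item) for item in fresh)
    return fresh
```

With a key of just the id, an updated user would be silently skipped; the composite key is what lets update events through.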

Is sorting the output of the API on ‘updatedAt’ necessary, i.e. ‘sort=-updatedAt’?

If your API doesn’t send over the data in the order it expects (newest first, oldest last) then yes, you’ll need to sort it.

As a note: having educational material on these kinds of use cases - talking through the different elements that Zapier provides, why some should be used and others not, together with the architecture and setup - would be an invaluable resource for us as users. So please take this case and forward it to your educational team so they can produce such content. I am sure they would love it - I would, as an educational professional, if I were handed such great material to start with ;)

That’s a great idea! I’ll pass this over to the team so they can look into getting some use cases and more details added to the Webhooks documentation to make things clearer.

Finally, I actually think this flow can be reduced to even two steps. The reason steps 2-7 exist is that Zapier’s PostgreSQL integration doesn’t support upsert commands - so please add this. If that were introduced, the tasks per run would be further reduced from 3 to only 2.

Yes, with the Retrieve Poll trigger you likely wouldn’t need the Filter, Looping, or Storage actions. I can’t submit feature requests from my end; you’d need to reach out to our Support team about this here so that they can open a feature request on your behalf to have an Upsert type of action added to the PostgreSQL app.

I will be looking into doing an upsert directly through the Code module, however I believe I would then need to add PostgREST on top of my database for this to work. Do you have any thoughts or input on that too, ​@SamB?

Yes, if you install PostgREST on your database, it should allow it to handle upsert requests via an HTTP request. I’ve not done it myself, but I’d have thought you’d be better off using a Webhooks action, instead of a Code action, to make the request to the database.
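As a sketch of what that request might look like against a PostgREST endpoint - the base URL, table name, and token below are placeholders - PostgREST performs an upsert on the table’s primary key when the request carries a `Prefer: resolution=merge-duplicates` header:

```python
import json

# Assemble the pieces of a PostgREST bulk-upsert request; send them with
# a Zapier Webhooks POST action or any HTTP client.

def postgrest_upsert_request(base_url, table, rows, token):
    url = f"{base_url}/{table}"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
        # This header turns a plain insert into an upsert on the primary key.
        "Prefer": "resolution=merge-duplicates",
    }
    return url, headers, json.dumps(rows)
```

Because the body can carry a JSON array, a whole batch of records upserts in one HTTP request.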

Hope that helps to clarify things. Let me know if there’s anything further I can help with! 🙂