1. Use Octoparse to scrape job postings from LinkedIn. This runs once a day and should bring back roughly 700-1,000 jobs.
2. Send the structured data to Airtable.
3. Send it to ChatGPT to interpret each job posting and create tags (see the sketch after this list).
4. Update the records in Airtable with the tags.
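For context, the tagging in step 3 is roughly this kind of call. This is a minimal sketch assuming the OpenAI Python SDK; the model name, tag categories, and function name are placeholders, not my exact setup:

```python
# Sketch of the ChatGPT tagging step; the model and tag categories
# are illustrative assumptions, not the exact production setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tag_posting(description: str) -> str:
    """Ask the model for a short comma-separated tag list for one posting."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": "Return 3-5 comma-separated tags for the job posting "
                        "below (e.g. seniority, domain, remote/onsite)."},
            {"role": "user", "content": description},
        ],
    )
    return resp.choices[0].message.content.strip()
```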
My challenge is that this setup burned through my task allowance in one night. I'm guessing every job posting that ran through counted as its own task? What did I do wrong here, and how might I rework this so that I'm not creating 1,000 pages of Zap runs in one night? See the Zap Runs history below.
[Screenshot: the workflow]

[Screenshot: 1,000 pages of Zap runs; look at the dates]
Best answer by Troy Tessalone
Hi @khou
Zap Trigger: Octoparse - New Data Processed
It looks like the Zap triggers once for each new row from Octoparse, and all of those rows arrive at the same time, so every posting consumes its own Zap run and each action step in that run counts as a task. With roughly 1,000 rows a day and several steps per run, that is a few thousand tasks in one night.
Consider adding a Filter as Zap step 2 so that only rows of data you actually want to process continue through the Zap.
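If the built-in Filter conditions aren't flexible enough, a Code by Zapier (Python) step can do the same check before the Filter. This is just a sketch, and the field names are placeholders for whatever your Octoparse trigger actually returns:

```python
# Code by Zapier (Python) exposes the mapped trigger fields as the
# `input_data` dict and returns whatever is assigned to `output`.
# The field names and keywords below are placeholders.
title = input_data.get("job_title", "").lower()
location = input_data.get("location", "").lower()

# Only keep rows matching the roles you actually track (example keywords).
keywords = ("data analyst", "data engineer", "data scientist")
matches = any(k in title for k in keywords)

# A Filter step right after this can require should_process to equal "true"
# and stop the run early, so no Airtable or ChatGPT tasks are spent on it.
output = {
    "should_process": str(matches).lower(),
    "job_title": title,
    "location": location,
}
```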
You probably also want to add a Delay After Queue step before the first Airtable step, so the runs are spaced out and you don't hit API rate limits for Airtable or ChatGPT.
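The rate-limit concern is real: Airtable's REST API allows 5 requests per second per base, and a create request can carry at most 10 records. If you ever consolidate the writes into a single script or Code step instead of one Zap run per row, a sketch like this stays under both limits (the token, base ID, and table name are placeholders):

```python
import time
import requests

API_TOKEN = "patXXXXXXXX"        # placeholder Airtable personal access token
BASE_ID = "appXXXXXXXXXXXXXX"    # placeholder base ID
TABLE_NAME = "Jobs"              # placeholder table name
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def create_in_batches(rows, batch_size=10, pause=0.25):
    """Create records 10 at a time (Airtable's per-request maximum),
    pausing so we stay under the 5-requests-per-second base limit."""
    for i in range(0, len(rows), batch_size):
        payload = {"records": [{"fields": fields} for fields in rows[i:i + batch_size]]}
        resp = requests.post(URL, headers=HEADERS, json=payload, timeout=30)
        resp.raise_for_status()
        time.sleep(pause)  # ~4 requests/second, safely under the limit

# ~1,000 scraped jobs become ~100 API calls instead of 1,000 separate Zap runs.
jobs = [{"Title": "Data Analyst", "Source": "LinkedIn"}]  # shape of one row's fields
create_in_batches(jobs)
```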