Hi Zapier Community,

I'm working on a Zap designed to automatically process a CSV file received daily as an email attachment and use it to update or create records in a Zapier Table. I've gotten close, but I'm running into some confusing issues, particularly with testing steps inside a loop.

My Goal:
Receive daily email -> Parse CSV attachment -> For each row: Find record in Zapier Tables by entityInterface, if found Update, if not found Create.

My Current Workflow:

  1. Trigger: Email Parser by Zapier (New Email - gets attachment)

  2. Action: Code by Zapier (Run Python) - Fetches the attachment content via URL and returns the entire CSV content as a single text string (e.g., in an output field named csv_content). (Reason: Tried outputting a list/line items directly, but hit the 250 item limit).

  3. Action: Looping by Zapier - Create Loop From Text

    • Input Text: Mapped to csv_content from Step 2.

    • Text Delimiter: Set to \n (newline).

  4. Action (Inside Loop): Code by Zapier (Run Python)

    • Input full_text: Mapped to csv_content from Step 2.

    • Input line_number: Mapped to Loop Iteration from Step 3.

    • Code: Parses the specific line (lines[line_number - 1]) from full_text using csv.reader and outputs a dictionary with keys like entityName, entityInterface, Description, OperationalStatus.

  5. Action (Inside Loop): Zapier Tables (Find Record)

    • Table: My Target Table

    • Lookup Field: Field 2 (This column holds my unique entityInterface values)

    • Lookup Value: Mapped to 4. Run Python -> entityInterface.

  6. Action (Inside Loop): Paths

    • Path A (Found): Rule checks if 5. Find Record -> Zap Search Was Found Status is true.

    • Path B (Not Found): Rule checks if 5. Find Record -> Zap Search Was Found Status is false.

  7. Action (Inside Path A): Zapier Tables (Update Record)

    • Record ID: Mapped to 5. Find Record -> Record ID.

    • Field 1 (entityName): Mapped to 4. Run Python -> entityName.

    • Field 3 (Description): Mapped to 4. Run Python -> Description.

    • (Field 2 / entityInterface is NOT mapped here)

  8. Action (Inside Path B): Zapier Tables (Create Record)

    • Field 1 (entityName): Mapped to 4. Run Python -> entityName.

    • Field 2 (entityInterface): Mapped to 4. Run Python -> entityInterface.

    • Field 3 (Description): Mapped to 4. Run Python -> Description.
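For reference, the Step 4 Code step looks roughly like the sketch below. The `input_data` values here are made-up stand-ins so it can run outside Zapier (in a real Zap, `input_data` is populated automatically from the mapped fields `full_text` and `line_number`):

```python
import csv
import io

# Hypothetical sample of what Zapier would pass in via input_data.
input_data = {
    "full_text": (
        "entityName,entityInterface,Description,OperationalStatus\n"
        "Router1,eth0,Core uplink,Active\n"
    ),
    "line_number": "2",  # Loop Iteration from Step 3 (1-based)
}

lines = input_data["full_text"].splitlines()
# Loop Iteration is 1-based, so subtract 1 to index into the list.
line = lines[int(input_data["line_number"]) - 1]

# csv.reader copes with quoted fields and embedded commas,
# which a plain str.split(",") would mangle.
row = next(csv.reader(io.StringIO(line)))

output = {
    "entityName": row[0],
    "entityInterface": row[1],
    "Description": row[2],
    "OperationalStatus": row[3],
}
```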

 

The Problem / Confusion:

When I test the steps inside the loop (Steps 4, 5, 7, 8), Zapier seems to consistently use data from the first iteration (Loop Iteration = 1).

  • This causes the Step 4 (Run Python) test to show it's outputting the header names (entityName, entityInterface, etc.) as the values.

  • This then causes the Step 5 (Find Record) test to fail ("Nothing could be found") because it's searching Field 2 for the literal text entityInterface.

  • This then causes the Step 7 (Update Record) test to fail ("Cannot create ULID...") because it receives an invalid Record ID from the failed Step 5 test.
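To illustrate what I'm seeing, here's a minimal reproduction of the Step 4 logic with Loop Iteration = 1 (the CSV content below is made up, but the shape matches my data):

```python
import csv
import io

csv_content = (
    "entityName,entityInterface,Description,OperationalStatus\n"
    "Router1,eth0,Core uplink,Active\n"
)

lines = csv_content.splitlines()
# The test always uses Loop Iteration = 1, i.e. lines[0] — the header row.
row = next(csv.reader(io.StringIO(lines[0])))

# So Step 4's output contains the column names as if they were values:
output = {"entityInterface": row[1]}
```

Step 5 then searches Field 2 for the literal string "entityInterface", which of course finds nothing.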

My Questions:

  1. Is the current workflow approach (Code -> Loop From Text -> Code -> Find -> Paths -> Update/Create) a valid and robust way to handle this, given the Code step item limit?

  2. Is it normal for the tests of steps inside a loop to consistently use Iteration 1 data, even if prerequisite steps failed their tests based on that data?

  3. Are there any obvious flaws or better ways to configure this?

I feel like the logic is almost right, but the testing experience inside the loop is making it hard to be 100% confident. Any insights or suggestions would be greatly appreciated!

Thanks!

Without seeing the actual data structure in the email trigger, it’s hard to say if what you’re doing is “the best way”, but what you’ve shown seems like it is *a* way that should work. 
 

I think you’re close to answering your own question: yes, when testing steps inside a loop, Zapier will only use data from iteration 1.
 

If you wanted to test data that would eventually exist in other iterations, you can temporarily manually type that into previous action steps and re-test. That way it will pull corresponding data into the test results, and then you can use it in downstream actions and test those. 


If that’s not possible for some reason, you could always publish and turn the zap on, send yourself an email with a very small csv with all known test conditions or possible data configurations (good and bad), and see how it works. Are there any errors? Did it pass with flying colors? But with real test data, you can tackle any action step errors and evaluate if the current flow is “the best” flow.
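For example, a minimal test CSV along those lines might look like this (hypothetical rows: one interface that already exists in your table, one that doesn’t, and one with a quoted field containing a comma to exercise the parser):

```
entityName,entityInterface,Description,OperationalStatus
Router1,eth0,Interface that already exists in the table,Active
Router2,eth9,Brand-new interface,Active
"Switch 1",eth2,"Description with, an embedded comma",Inactive
```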