Goal

  • Pull ~50k characters from Google Sheets.

  • Split into 1–X text chunks (currently ~9).

  • For each chunk, create a short summary that preserves its unique [ID: …] marker.

  • At the end, create one consolidated summary (de-duplicated, all IDs preserved) and write it back to Sheets.

Current Zap (simplified)

  1. Schedule (Every Hour)

  2. Google Sheets — Get Many Spreadsheet Rows

  3. Formatter — Utilities (prep)

  4. Formatter — Text (split into chunks)

  5. Looping by Zapier — Create Loop From Line Items

  6. ChatGPT (OpenAI) — Conversation (per chunk) → produce a short summary and append it to ChatGPT Memory

  7. Filter by Zapier — only continue when Loop Iteration Is Last = true

  8. Formatter — Utilities (minor prep for the final call)

  9. ChatGPT (OpenAI) — Conversation (final) — read from the same ChatGPT Memory Key and produce the consolidated summary

  10. Google Sheets — Update Spreadsheet Row(s)
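For intuition, the intended data flow of the Zap above can be sketched as a plain-Python simulation. None of these names are Zapier APIs; `summarize()` is a stub standing in for the per-chunk ChatGPT call, and the numeric `chunk_id` is only a stand-in for the real [ID: …] markers embedded in the sheet text:

```python
# Illustrative simulation of the Zap's intended flow (not Zapier code).

def split_into_chunks(text, size):
    """Step 4 stand-in: split the sheet text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(chunk, chunk_id):
    """Step 6 stand-in: a real Zap would call ChatGPT here."""
    return f"[ID: {chunk_id}] {chunk[:20]}..."

def run_pipeline(sheet_text, chunk_size=6000):
    chunks = split_into_chunks(sheet_text, chunk_size)   # Step 4
    memory = []                                          # "ChatGPT Memory"
    for i, chunk in enumerate(chunks, start=1):          # Step 5 loop
        memory.append(summarize(chunk, i))               # Step 6 append
    return "\n".join(memory)                             # Step 9 consolidation

final = run_pipeline("x" * 50_000)   # ~50k characters -> 9 chunks
```

Run sequentially like this, every chunk's summary reaches the final step; the question below is why the real Zap doesn't behave the same way.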

ChatGPT configuration (key parts)

  • Model: gpt-4.1 for chunking (step 6), gpt-5 for final (step 9).

  • Enable Memory: ON in steps 6 and 9.

  • Memory Key: same static value for the whole run (derived from the trigger timestamp).

  • Reset Memory: OFF in steps 6 and 9.

  • Max Tokens: high limits (e.g., 2200 for step 6, 4000 for step 9).

  • Prompts:

    • Step 6 instructs ChatGPT to write a very short, structured summary and add it to memory.

    • Step 9 instructs ChatGPT to read all prior summaries from memory and create a single de-duplicated summary (in German), explicitly referencing all [ID: …] markers.

The problem

Everything runs without errors, but the final step only includes the first text chunk (≈10 of ~93 short summaries). Based on logs, step 6 clearly runs for all loop iterations and “acknowledges” saving to memory. However, step 9 appears to receive only the first chunk’s content from memory.

What we’ve already tried

  • Verified the same Memory Key is used in steps 6 and 9 for the entire run.

  • Ensured Reset Memory = OFF for all steps after the first.

  • Increased token limits.

  • Gated the final step with Loop Iteration Is Last = true so it runs after the loop.

  • Reviewed the loop output and confirmed each chunk is processed.

  • Worked extensively with Zapier Support over several weeks. The Zap is considered “healthy,” but the symptom persists.

Questions for the community

  1. Is ChatGPT Action Memory designed to persist and aggregate reliably across multiple loop iterations within the same run?

  2. Are there known constraints (token windows, truncation behavior, step isolation) that would explain why only the first iteration’s content remains visible to the final step?

  3. Are there best practices to make ChatGPT Memory behave deterministically across a Loop (e.g., explicit “read-modify-write” patterns or required response shaping)?

  4. Any recommended patterns to collect all per-iteration summaries without introducing additional storage steps (we know Storage/Digest based designs exist, but we’re trying to understand whether the Memory-only pattern can work reliably)?

  5. If Memory-only is not reliable for this, can the community confirm that and point to the canonical approach for loop aggregation with ChatGPT Actions?

Redacted prompt skeletons (illustrative)

Step 6 (per chunk):

“Summarize the following text in 2–3 bullet points (40–60 words). Keep the original [ID: …] visible. Append this mini-summary to memory under this run’s key. Return only ‘OK’.
TEXT: {{Loop Value}}”

Step 9 (final):

“Read all mini-summaries from memory for this run and produce one de-duplicated, thematically grouped German summary. Keep all [ID: …] references next to the relevant points and add a final complete ID list with brief notes on merges/omissions.”

Any insights, gotchas, or confirmations would be hugely appreciated. Again, we’ve already worked in depth with Zapier Support; we’re now seeking the community’s collective experience with Looping + ChatGPT Memory specifically.

Hey there @acDE 👋

Great questions! The Support Team would likely be best-placed to answer these types of questions but I’m happy to answer from my perspective:

1. Is ChatGPT Action Memory designed to persist and aggregate reliably across multiple loop iterations within the same run?

No. ChatGPT’s Memory was not specifically designed for structured aggregation across loops. Since Looping by Zapier runs iterations in parallel, the “append to memory” writes don’t happen in a guaranteed order and can overwrite rather than accumulate when the ChatGPT actions run at the same time.
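The failure mode described here is a classic "lost update" race, which can be demonstrated in a few lines of Python. This is only an analogy for what concurrent memory writes could do, not Zapier's actual implementation: each thread reads the shared value, then writes back its read plus its own summary, clobbering everyone else's append (the `Barrier` just forces all reads to happen before any write, making the loss deterministic):

```python
# "Lost update" race: parallel read-modify-write appends overwrite each other.
import threading

memory = {"key": ""}            # stand-in for a shared ChatGPT Memory key
barrier = threading.Barrier(5)  # force all 5 "iterations" to read first

def iteration(i):
    snapshot = memory["key"]                       # read current memory
    barrier.wait()                                 # everyone reads BEFORE anyone writes
    memory["key"] = snapshot + f"[summary {i}]"    # write back: clobbers the others

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(5)]
for t in threads: t.start()
for t in threads: t.join()

print(memory["key"].count("[summary"))  # 1, not 5 -> four appends were lost
```

Whichever write lands last wins, and the final state holds only one summary, which would match the symptom of the final step seeing roughly one chunk's worth of content.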

2. Are there known constraints (token windows, truncation behavior, step isolation) that would explain why only the first iteration’s content remains visible to the final step?

I suspect it’s more that the loop iterations run at the same time, so the other iterations may not have finished by the time the last one runs. In that case, the memory state the final step “sees” might reflect whichever memory write happened to land, not the whole set.

3. Are there best practices to make ChatGPT Memory behave deterministically across a Loop (e.g., explicit “read-modify-write” patterns or required response shaping)?

I don’t think it would be possible to make ChatGPT’s Memory act like that. From my understanding the memory is designed for storing user preferences and general context across chats, not structured, ordered data writes across multiple loops running at the same time.

4. Any recommended patterns to collect all per-iteration summaries without introducing additional storage steps (we know Storage/Digest based designs exist, but we’re trying to understand whether the Memory-only pattern can work reliably)?

If you don’t want to store the summaries in a Digest before releasing them to the final ChatGPT action (on the last loop iteration), then you might want to use something like Delay After Queue to space out the ChatGPT actions across the loop. That would ensure the ChatGPT actions don’t run at the same time; while it wouldn’t guarantee that the loop iterations run sequentially, it should help prevent them from overwriting each other.
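To illustrate why serializing the actions helps, here is the same read-modify-write simulation, but with the iterations drained one at a time from a queue by a single worker, loosely analogous to what a Delay After Queue step achieves. Again, this is a conceptual sketch, not Zapier code:

```python
# Serialized iterations: each read-modify-write finishes before the next starts,
# so the appends accumulate instead of clobbering each other.
import queue
import threading

memory = {"key": ""}        # stand-in for a shared ChatGPT Memory key
work = queue.Queue()
for i in range(5):
    work.put(i)

def worker():
    # A single worker drains the queue, so "iterations" run strictly one at a time.
    while True:
        try:
            i = work.get_nowait()
        except queue.Empty:
            return
        snapshot = memory["key"]                       # read
        memory["key"] = snapshot + f"[summary {i}]"    # modify + write, uncontended

t = threading.Thread(target=worker)
t.start()
t.join()

print(memory["key"].count("[summary"))  # 5 -> all appends survive
```

With no overlap between read and write windows, all five summaries survive; the trade-off in a real Zap is that spacing actions out adds wall-clock time per run.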

5. If Memory-only is not reliable for this, can the community confirm that and point to the canonical approach for loop aggregation with ChatGPT Actions?

I wouldn’t have thought that ChatGPT Memory alone would be reliable for this, as loop iterations run in parallel. In my opinion a more reliable approach would be to store the summaries in either a Digest or in something like Zapier Tables, then have a final ChatGPT step reference all the summaries stored in that digest/table. That said, given that all the iterations run in parallel, the last one could be running while others still haven’t completed. So if I were building this I’d add a Delay For step after that “Loop Iteration Is Last” Filter, before retrieving the summaries, just to make sure they’ve all had enough time to land in the digest/table.
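The key property of the Digest/Tables pattern is that each iteration does an append-only write (add a row) rather than a read-modify-write, so there is nothing to clobber. A rough sketch, with a plain Python list standing in for the table (in CPython, `list.append` from multiple threads won't lose entries, which mirrors how an append-only row insert behaves):

```python
# Append-only aggregation: each iteration adds a row; nothing is overwritten.
import threading

table = []   # stand-in for a Zapier Table / Digest

def iteration(i):
    # Append-only write: no read of existing state, so no lost-update race.
    table.append(f"[ID: {i}] mini-summary")

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(9)]
for t in threads: t.start()
for t in threads: t.join()

# Final step (after the last iteration, plus a short delay in the real Zap):
consolidated = "\n".join(sorted(table))
```

Even with all nine "iterations" running concurrently, every row is present when the final step reads the table back, which is the behavior the Memory-only pattern was failing to deliver.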

Hope that helps to answer your questions. Let me know if you have any further issues or questions - happy to help! 🙂