Goal
- Pull ~50k characters from Google Sheets.
- Split into 1–X text chunks (currently ~9); a sketch of the intended split follows this list.
- For each chunk, create a short summary that preserves a unique [ID: …] tag.
- At the end, create one consolidated summary (de-duplicated, all IDs preserved) and write it back to Sheets.
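For context, the split (step 4 of the Zap below) is meant to behave like this sketch — Python for illustration only; we actually use Zapier's Formatter, and the function name and chunk size are made up:

```python
# A minimal sketch (not our actual Formatter config) of the split we want:
# break ~50k characters into bounded chunks without cutting an [ID: …] tag
# in half. The 6000-character limit is illustrative.
def split_into_chunks(text: str, max_len: int = 6000) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_len, len(text))
        if end < len(text):
            # Prefer to cut just before the last "[ID:" marker in the window,
            # so a tag (and its record) never straddles two chunks.
            cut = text.rfind("[ID:", start + 1, end)
            if cut > start:
                end = cut
        chunks.append(text[start:end])
        start = end
    return chunks
```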
Current Zap (simplified)
1. Schedule (Every Hour)
2. Google Sheets — Get Many Spreadsheet Rows
3. Formatter — Utilities (prep)
4. Formatter — Text (split into chunks)
5. Looping by Zapier — Create Loop From Line Items
6. ChatGPT (OpenAI) — Conversation (per chunk) → produce a short summary and append it to ChatGPT Memory (intended semantics sketched in code right after this list)
7. Filter by Zapier — only continue when Loop Iteration Is Last = true
8. Formatter — Utilities (minor prep for the final call)
9. ChatGPT (OpenAI) — Conversation (final) → read from the same ChatGPT Memory Key and produce the consolidated summary
10. Google Sheets — Update Spreadsheet Row(s)
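To be explicit about what we expect the Memory-backed loop to do, here is the intent of steps 5–9 in plain Python. summarize_chunk and consolidate are stand-ins for the two ChatGPT steps, not real Zapier calls:

```python
# What we assume Memory is doing across the loop, written as plain Python.
# summarize_chunk and consolidate are placeholders for the two ChatGPT steps.
def summarize_chunk(chunk: str) -> str:
    return f"- mini-summary of: {chunk[:20]}…"

def consolidate(mini_summaries: list[str]) -> str:
    # De-duplicate while preserving order, then merge.
    return "\n".join(dict.fromkeys(mini_summaries))

memory: list[str] = []                        # ChatGPT Memory, one static key
for chunk in ["chunk one …", "chunk two …"]:  # step 5: Looping by Zapier
    memory.append(summarize_chunk(chunk))     # step 6: append to Memory
final = consolidate(memory)                   # step 9: read the whole key

# Observed behavior instead: `final` reflects only memory[0], as if the
# appends from iterations 2..n never became visible to step 9.
```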
ChatGPT configuration (key parts)
- Model: gpt-4.1 for the per-chunk summaries (step 6), gpt-5 for the final call (step 9).
- Enable Memory: ON in steps 6 and 9.
- Memory Key: the same static value for the whole run, derived from the trigger timestamp (a toy version of the derivation follows this list).
- Reset Memory: OFF in steps 6 and 9.
- Max Tokens: high limits (e.g., 2200 for step 6, 4000 for step 9).
- Prompts:
  - Step 6 instructs ChatGPT to write a very short, structured summary and add it to memory.
  - Step 9 instructs ChatGPT to read all prior summaries from memory and create a single de-duplicated summary (in German), explicitly referencing all [ID: …] tags.
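For illustration, the key derivation looks roughly like this — in the Zap it's a Formatter step, not code, and the timestamp value and "run-" prefix are invented for the example:

```python
# Illustrative only: derive one static Memory Key from the trigger timestamp
# so that steps 6 and 9 end up referencing the identical key.
from datetime import datetime

trigger_ts = "2025-01-15T14:00:00+00:00"  # hypothetical trigger timestamp
memory_key = "run-" + datetime.fromisoformat(trigger_ts).strftime("%Y%m%d%H%M")
print(memory_key)  # run-202501151400
```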
The problem
Everything runs without errors, but the final output covers only the first text chunk (≈10 of ~93 short summaries). Based on the logs, step 6 clearly runs for every loop iteration and “acknowledges” saving to memory. Step 9, however, appears to receive only the first chunk’s content from memory.
What we’ve already tried
- Verified that the same Memory Key is used in steps 6 and 9 for the entire run.
- Ensured Reset Memory = OFF for all steps after the first.
- Increased token limits.
- Gated the final step with Loop Iteration Is Last = true so it runs after the loop.
- Reviewed the loop output and confirmed each chunk is processed.
- Worked extensively with Zapier Support over several weeks. The Zap is considered “healthy,” but the symptom persists.
Questions for the community
- Is ChatGPT Action Memory designed to persist and aggregate reliably across multiple loop iterations within the same run?
- Are there known constraints (token windows, truncation behavior, step isolation) that would explain why only the first iteration’s content remains visible to the final step?
- Are there best practices for making ChatGPT Memory behave deterministically across a Loop, e.g., explicit “read-modify-write” patterns or required response shaping? (A sketch of the read-modify-write pattern we have in mind follows this list.)
- Any recommended patterns for collecting all per-iteration summaries without introducing additional storage steps? We know Storage/Digest-based designs exist; we’re trying to understand whether the Memory-only pattern can work reliably.
- If Memory-only is not reliable for this, can the community confirm that and point to the canonical approach for loop aggregation with ChatGPT Actions?
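To make the read-modify-write question concrete: if Memory can’t aggregate across iterations, our fallback would be a Code by Zapier (Python) step inside the loop that appends each mini-summary to a single Storage by Zapier record. The store.zapier.com endpoint is the one from Zapier’s Storage docs, but please treat the exact request/response shapes here as assumptions to verify; input_data/output follow the usual Code step conventions, and the field names are ours:

```python
import requests

# Read-modify-write against Storage by Zapier instead of ChatGPT Memory.
# input_data comes from the Code step's field mapping; names are ours.
STORE_URL = "https://store.zapier.com/api/records"
headers = {"X-Secret": input_data["storage_secret"]}  # Storage secret

run_key = input_data["run_key"]        # e.g. the trigger-timestamp key
summary = input_data["chunk_summary"]  # this iteration's mini-summary

# Read the aggregate so far (missing key -> empty string on first pass).
records = requests.get(STORE_URL, headers=headers).json()
existing = records.get(run_key, "")

# Append and write back; the final ChatGPT step then reads one value.
combined = (existing + "\n\n" + summary).strip()
requests.post(STORE_URL, headers=headers, json={run_key: combined})

output = {"aggregate_so_far": combined}
```

This only works if loop iterations run sequentially, which matches our understanding of Looping by Zapier; if iterations ever ran in parallel, this read-modify-write would race.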
Redacted prompt skeletons (illustrative)
Step 6 (per chunk):
“Summarize the following text in 2–3 bullet points (40–60 words). Keep the original [ID: …] tag visible. Append this mini-summary to memory under this run’s key. Return only ‘OK’.
TEXT: {{Loop Value}}”
Step 9 (final):
“Read all mini-summaries from memory for this run and produce one de-duplicated, thematically grouped German summary. Keep all [ID: …] references next to the relevant points and add a final complete ID list with brief notes on merges/omissions.”
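Finally, this is what we mean by “response shaping” above: instead of “Return only ‘OK’,” step 6 could return the mini-summary itself as strict JSON, so each iteration’s output lands in the Zap’s own step data (where Digest or a Code step can aggregate it) instead of living only in Memory. The schema is hypothetical:

```python
import json

# Hypothetical per-chunk response contract: step 6 returns this JSON and a
# later step (Digest / Code) aggregates the pieces deterministically.
raw = '{"ids": ["[ID: …]"], "summary": "2–3 bullet points, 40–60 words"}'

piece = json.loads(raw)
assert {"ids", "summary"} <= piece.keys()  # reject malformed responses early
```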
Any insights, gotchas, or confirmations would be hugely appreciated. Again, we’ve already worked in depth with Zapier Support; we’re now seeking the community’s collective experience with Looping + ChatGPT Memory specifically.