I am trying to capture the full transcript from Fireflies.ai and send it into a ChatGPT prompt. Zapier is not extracting the full transcript from Fireflies.ai; it only lets you use the transcript URL or file from Fireflies.ai. How can I ensure that the full transcript is captured within this workflow?
ChatGPT prompt:
Summarize this meeting transcript and include main topics, action items, next steps, and detailed notes.
Fireflies.ai inserted data elements:
{{180316144__notee]caption__speaker_name}}
{{180316144__notee]caption__sentence}}
{{180316144__notee]note}}
{{180316144__summary}}
Hi @E_glenn
Good question.
For clarity and context, please post detailed screenshots showing how your Zap steps are configured, along with the error you encountered. Thanks!
This post has been edited by a moderator to remove personal information. Please remember that this is a public forum and to remove any sensitive information prior to posting.
When I checked the history of the last run, I compared what the total transcript covered vs. what Zapier sent to ChatGPT, and realized that the full transcript was not being sent to ChatGPT. It may have something to do with the data inserted into the prompt. I want to send the full transcript from Fireflies.ai into ChatGPT.
@E_glenn
We’d need to see a full list of the data points returned from the Zap trigger step.
What do you mean by “full list of data points”? The details below are from the Fireflies.ai test trigger; I can see all the data points that Fireflies.ai sends into Zapier within the test:
note: I have a prompt that features some test data, but it's not the correct.
action: false
question: false
caption.sentence: I have a prompt that features some test data, but it's not the correct.
speaker_id: 2
time: 26.1
endTime: 33.04
speaker: 2
averageConfidence: 0.9591864285714288
speaker_name: Eric Glenn
index: 4
match: none
filterType: none
incoming: false
metrics:
1. word: prompt, category: Nouns
2. word: data, category: Nouns
3. word: correct, category: Nouns
sentiment: 0
sentimentType: neutral
questionFilter: false
importance: 0.5701988935470581
readability: 0.8505799770355225
task: 0.9648336172103882
algScore: 20.097169126455018
importanceScore: 10.455891370773315
taskMatch: false
id: 4
2
id: 1
note: I'm wondering why it is set up like that.
action: false
question: false
caption.sentence: I'm wondering why it is set up like that.
speaker_id: 1
time: 36.62
endTime: 40.01
speaker: 1
averageConfidence: 0.8229922222222223
speaker_name: Eric Glenn
index: 6
match: none
filterType: none
incoming: false
metrics: (none)
sentiment: 0.2222222222222222
sentimentType: positive
questionFilter: false
importance: 0.11034409701824188
readability: 0.6480569243431091
task: 0.933576226234436
algScore: 18.069248179320383
importanceScore: 9.443268376588822
taskMatch: false
id: 6
3
id: 2
note: We have an automation test with Barrick and Fred.
action: true
question: false
caption.sentence: We have an automation test with Barrick and Fred.
speaker_id: 1
time: 51.03
endTime: 54.39
speaker: 1
averageConfidence: 0.8265822222222222
speaker_name: Eric Glenn
index: 7
match: fred
filterType: noteFilter
incoming: false
metrics:
1. word: Fred, category: People
2. word: Barrick, category: Nouns
3. word: Fred, category: Nouns
sentiment: 0
sentimentType: neutral
questionFilter: false
importance: 0.6503729820251465
readability: 0.718589723110199
task: 0.9329748153686523
algScore: 21.604229755609214
importanceScore: 12.050266608413384
taskMatch: false
id: 7
4
id: 3
note: See if we can find a tutorial app here.
action: true
question: false
caption.sentence: See if we can find a tutorial app here.
speaker_id: 1
time: 104.08
endTime: 107.07
speaker: 1
averageConfidence: 0.8418077777777778
speaker_name: Eric Glenn
index: 11
match: we can><find
filterType: taskNoteFilter
incoming: false
metrics: (none)
sentiment: 0
sentimentType: neutral
questionFilter: false
importance: 0.6153966188430786
readability: 0.8204665780067444
task: 0.970687747001648
algScore: 25.51338827767414
importanceScore: 15.767366372621977
taskMatch: false
id: 11
5
id: 4
note: Choices text appears to be the common text.
action: false
question: false
caption.sentence: Choices text appears to be the common text.
speaker_id: 4
time: 204.36
endTime: 207.22
speaker: 4
averageConfidence: 0.82783875
speaker_name: Eric Glenn
index: 16
match: none
filterType: none
incoming: false
metrics: (none)
sentiment: 0
sentimentType: neutral
questionFilter: false
importance: 0.3249487280845642
readability: 0.8004507422447205
task: 0.7312167882919312
algScore: 18.759122942739285
importanceScore: 9.183560801106829
taskMatch: false
id: 16
6
id: 5
note: Okay, we've run it for four minutes.
action: false
question: false
caption.sentence: Okay, we've run it for four minutes.
speaker_id: 6
time: 209.8
endTime: 211.972
speaker: 6
averageConfidence: 0.9484085714285715
speaker_name: Eric Glenn
index: 18
match: none
filterType: none
incoming: false
metrics:
1. word: four minutes, category: Numbered Nouns 1
2. word: for four minutes, category: Dates
3. word: four, category: Numbers
4. word: minutes, category: Nouns
sentiment: 0
sentimentType: neutral
questionFilter: false
importance: 0.2647213935852051
readability: 0.7716603875160217
task: 0.9792461395263672
algScore: 19.60311531233475
importanceScore: 10.07487112681071
taskMatch: false
id: 18
customTopicSentencesWithContext: (none)
meetingTaskNotes:
- We have an automation test with Barrick and Fred.
- See if we can find a tutorial app here.
meetingMetricNotes:
- Okay, we've run it for `four minutes`.
meetingPricingNotes:
- I have a prompt that features some test data, but it's not the correct.
- Data that I was looking for.
- We have an automation test with Barrick and Fred.
- That is the same question.
- Okay, we've run it for four minutes.
meetingQuestions: (none)
meetingDateTimeNotes:
- I have a prompt that features some test data, but it's not the correct.
- Data that I was looking for.
- We have an automation test with Barrick and Fred.
- That is the same question.
- Okay, we've run it for four minutes.
summary:
• This test is designed to evaluate the automation system with Barrick and Fred. It involves running a tutorial app, which appears to be successful after four minutes of testing.
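Since each sentence arrives as a separate line item, the underlying fix is to join every sentence (with its speaker name) into a single transcript string before it reaches the ChatGPT step, rather than mapping a single line item into the prompt. A minimal sketch of that join, as it might run in a Code by Zapier (Python) step; the `speaker_name` and `sentence` field names mirror the trigger output above, but treat the exact mapping as an assumption:

```python
def build_transcript(items):
    """Join per-sentence line items into one transcript string,
    prefixing each line with its speaker name."""
    return "\n".join(f"{item['speaker_name']}: {item['sentence']}" for item in items)

# Sample items mirroring the trigger output shown above.
items = [
    {"speaker_name": "Eric Glenn",
     "sentence": "I have a prompt that features some test data, but it's not the correct."},
    {"speaker_name": "Eric Glenn",
     "sentence": "I'm wondering why it is set up like that."},
]

transcript = build_transcript(items)
print(transcript)
```

In a real Code by Zapier step the sentences and speaker names would arrive via `input_data` rather than a hard-coded list; the joined `transcript` output can then be mapped into the ChatGPT prompt as a single field.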
You may have to upload the transcript file to Dropbox to get the file contents:
Hey @E_glenn!
I did some checking and it definitely looks like we don’t get the content of the transcripts, only a URL to the transcript file in the transcriptUrl field. I can’t see any existing feature requests to get the transcript file contents. I’d recommend contacting our Support team to get a feature request opened on your behalf. You can reach them here: https://zapier.com/app/get-help
In the meantime, Troy’s suggested workarounds of uploading the file to an app like Dropbox to extract its content, or connecting to Fireflies.ai’s API via a Webhooks by Zapier action, might do the trick. If you have success with those workarounds or find a different solution here please do let us know!
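For the Webhooks by Zapier route, here is a sketch of the request that step would send to Fireflies.ai's GraphQL endpoint. The endpoint URL is Fireflies.ai's GraphQL API, but the exact query shape and field names (`transcript`, `sentences`, `speaker_name`, `text`) are assumptions; verify them against Fireflies.ai's API documentation before building the Zap:

```python
import json

# Hedged sketch: field names in the query are assumptions, not
# confirmed against Fireflies.ai's published schema.
FIREFLIES_URL = "https://api.fireflies.ai/graphql"

def build_request(api_key, transcript_id):
    """Build the URL, headers, and JSON body for a Webhooks by Zapier
    'Custom Request' (POST) step."""
    query = (
        "query Transcript($id: String!) { "
        "transcript(id: $id) { sentences { speaker_name text } } }"
    )
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "variables": {"id": transcript_id}})
    return FIREFLIES_URL, headers, body

url, headers, body = build_request("YOUR_FIREFLIES_API_KEY", "TRANSCRIPT_ID")
```

In the Zap, the URL, headers, and body map onto the Custom Request step's URL, Headers, and Data fields; the `sentences` array in the response could then be joined into one transcript string for the ChatGPT step.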
Hi @E_glenn! Not sure how tied you are to Fireflies.ai, but I recently came across this tool that returns the text from a transcript that you can then feed into OpenAI: https://whisperapi.com/
You will need to send your file or file URL via a webhook step; it returns the text directly in the response, which you can then use in your following action steps.
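For reference, a sketch of the payload such a webhook step might send. The field name for the file link (`url` here) and the bearer-token auth scheme are assumptions about whisperapi.com's API; check its documentation before wiring up the Zap:

```python
import json

def build_whisper_request(api_key, file_url):
    """Build headers and body for a POST webhook step that submits a
    file URL for transcription. Field names are assumptions, not
    confirmed against whisperapi.com's documentation."""
    headers = {"Authorization": f"Bearer {api_key}"}
    body = json.dumps({"url": file_url})
    return headers, body

headers, body = build_whisper_request("YOUR_API_KEY",
                                      "https://example.com/meeting-audio.mp3")
```

Whichever field the response returns the transcribed text in can then be mapped straight into the OpenAI action's prompt.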
Hope that helps! I was recently exploring the same challenge with OpenAI and Fireflies and came across your post.
Thanks so much for sharing that helpful alternative here, @courtneywaid. It’s much appreciated!