I have a Perplexity Pro subscription. I tested a prompt that collects specific data about a company and was satisfied with the results. There is only one input field for the prompt on the Perplexity website; I used the "Sonar" model.

Then I used Zapier and Google Sheets to process a list of companies with the same prompt, changing the company name on each iteration. To clarify: I used the same user prompt in the Zapier interface and tried a blank or trivial system prompt. I tried the "sonar", "sonar-reasoning" and "sonar-pro" models.

The resulting output was much worse through Zapier. The sources (citations) the AI used were not really relevant, while using Perplexity directly gave fewer but much more relevant sources. For example, I asked whether a company uses a specific technology: Perplexity searched for "<company name> <technology>" and found relevant web pages to cite, whereas inside Zapier the sources were just about the company in general. No surprise that it couldn't answer whether the technology is used.

So I wanted a simple thing: run Perplexity in a loop with the company name as a parameter, then parse and save the results. Zapier helped with the looping, parsing and storing, but the Perplexity step produced poor quality. On the Perplexity website I get good AI quality, but the looping, parsing and storing must be done manually. Is there a way to combine both processes?
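For what it's worth, the loop-parse-store part can also be done without Zapier by calling the Perplexity chat completions API directly. Here is a minimal sketch: the endpoint URL and the "sonar" model name come from Perplexity's docs, but the question text, the company list, and the CSV output are purely illustrative, and `PERPLEXITY_API_KEY` is assumed to be set in the environment.

```python
import csv
import json
import os
import urllib.request

# Perplexity's OpenAI-compatible chat completions endpoint.
API_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(company: str, model: str = "sonar") -> dict:
    """Fill the per-company user prompt; the question text is illustrative."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Does {company} offer motorized stages with stepper motors? "
                    "If yes, provide a link to the product page on the official site."
                ),
            }
        ],
    }

def ask(company: str) -> str:
    """POST one request and return the answer text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(company)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    companies = ["Standa Ltd"]  # in practice, read these from your sheet export
    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["company", "answer"])
        for name in companies:
            writer.writerow([name, ask(name)])
```

This doesn't by itself fix the quality gap described above, but it removes the Zapier layer from the comparison, so any remaining difference is between the API and the website alone.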

Hey @Nazume,

You may want to refine the system prompt to get your desired output. It's natural to get different results from Zapier than directly through Perplexity because of differences in prompt engineering. Are you using the Chat Completion action in Zapier to do this? Refine the prompts and play around with the parameters to get your desired output.


See how to prompt the Perplexity API more effectively here: https://docs.perplexity.ai/guides/prompt-guide. Hope it helps!
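One way to apply the guide's advice is to keep formatting and grounding rules in the system message and the searchable question in the user message. A small sketch of what that message structure could look like in a chat-completions payload; the exact wording of both prompts here is only an illustration, not a recommended prompt:

```python
# Split the instructions: grounding/format rules go in the system message,
# the searchable question goes in the user message.
def make_messages(company: str, technology: str) -> list:
    system = (
        "You are a research assistant. Answer only from web sources you can "
        "cite. If the sources do not confirm a fact, say so explicitly."
    )
    user = f"Does {company} offer products using {technology}? Cite product pages."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example payload for the Chat Completion action or a direct API call.
payload = {"model": "sonar", "messages": make_messages("Standa Ltd", "stepper motors")}
```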


Yes, I am using the Chat Completion action in Zapier.

You say that getting different results with the same prompt and model through Zapier versus directly is natural. But I am not talking about small differences. One approach gives accurate results across the several tests I made; the other usually gets half the answers wrong with the same prompt, no matter how much you try. And yes, I've read the prompting guide. It is mainly about the user prompt, which is identical across my comparison.

I suppose the Perplexity API and the Perplexity web interface are two different tools. One just works, and the other is either inferior or needs to be fine-tuned to reach equal quality.

Maybe the vast difference is not obvious without sharing the data. Here is my Perplexity web interface thread: https://www.perplexity.ai/search/there-is-a-standa-ltd-company-FSR5IaiDT3.869YzZXw94Q All 6 questions are answered correctly.

And this is the chat completion output with the same user prompt and an empty system prompt (I also tried guiding the AI with custom system prompts, to no avail):

The official website of Standa Ltd, a Lithuanian company producing motorized stages among other opto-mechanical products, is **https://www.standa.lt/**[1][4].

1) **Is there a motorized stage with stepper motor?** The available information confirms that Standa manufactures motorized translation and rotation stages but does not explicitly specify the use of stepper motors in standard catalog products. Their product descriptions focus more broadly on motorized and manual stages and modern motion control devices, including direct drive positioners and hexapods[1][7]. There is no direct mention of stepper motors in the searched pages.

2) **If yes, provide a link to that product on the official site.** Since the stepper motor presence is not affirmed by publicly available information, no direct product page link for a stepper motor stage can be provided.

3) **Is there a motorized stage with direct drive motor?** Yes. Standa explicitly mentions producing "direct drive positioners" among their modern motion control devices[1]. This confirms they have motorized stages with direct drive motors.

4) **If yes, provide a link to the single product page on the official site.** While the company confirms direct drive stages exist, their website publicly lists categories like translation and rotation stages but does not clearly separate or provide a distinct product page solely for direct drive motor stages in the readily available catalogue. The general translation and rotation stages section is here: https://www.standa.lt/products/catalog/translation_rotation[7] More precise product links for direct drive devices are not clearly identified on public pages without further navigation or contacting the company.

5) **Is there a motorized stage with a piezo motor?** There is no explicit mention of piezo motor-driven stages in the publicly available information about Standa's products. Their catalog descriptions highlight stepper and DC motors generally but do not reference piezo motors specifically[1][7].

6) **If yes, provide a link to the piezo motor stage product page.** As piezo motor stages are not confirmed, no direct product page link can be provided. **Summary:** - Standa Ltd produces motorized stages and direct drive positioners. - The existence of stages specifically driven by stepper or piezo motors is not explicitly confirmed in publicly accessible information. - The official website for product details is https://www.standa.lt/ and the relevant product catalog for motorized stages is at https://www.standa.lt/products/catalog/translation_rotation[1][4][7]. - For specifics on stepper or piezo motor stages, direct inquiry to Standa or deeper navigation/contact through their site may be necessary.

Only 1 of 6 answers is correct. The AI's research was shallow and had too little information to determine the answers. Useless.

So, do we agree that there is no way to get the same Perplexity quality I have in the web browser from within Zapier?

I really hope that solving this issue will benefit your wonderful automation tool. Zapier + working AI = profit :)) Meanwhile I'll continue sending prompts manually :(


Hey @Nazume.

Even if you use the same user prompt, you may still want to refine it to get more accurate answers from the Perplexity API. As you said yourself, the API needs more fine-tuning. As mentioned in the previous reply, you may want to play around with the parameters or try different models to get your desired output. Maybe also try changing the model to sonar-deep-research or sonar-reasoning-pro. See more about the Chat Completions API here: https://docs.perplexity.ai/api-reference/chat-completions-post. There are a lot of parameters, like reasoning effort, max tokens, temperature and more, that you can change to see if they improve your output. If you have any feature request or feedback, you can reach Zapier Support here: https://zapier.com/app/get-help

You can also try using Zapier Agents with the web search functionality to see if it gets you the desired results. Here is a helpful article about Zapier Agents: https://zapier.com/blog/zapier-agents-guide/. You may want to trial-and-error with parameters and LLM models to see which one gives you a more accurate result. Hope it helps!