Working with Prompts

Prompts are the questions you send to AI engines on behalf of your brand. Learn how to create, run, and interpret prompt results to understand your AI visibility.


What Is a Prompt?

A prompt is a question or query that you want to test against AI engines. Think of it as simulating what a potential customer might type into ChatGPT or Perplexity when researching your category.

For example, if you sell project management software, a prompt might be:

"What are the best project management tools for remote teams?"

AI Brand Report sends this question to each engine, captures the response, and analyses whether your brand was mentioned, how positively, and what sources were cited.
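Conceptually, that capture-and-analyse loop looks like the sketch below. The engine list matches the article, but `ask_engine` and the presence check are purely illustrative stand-ins, not the product's actual API.

```python
ENGINES = ["ChatGPT", "Perplexity", "Gemini", "Claude"]

def ask_engine(engine: str, prompt: str) -> str:
    # Stub standing in for a real engine call.
    return f"{engine} says: Acme PM and others are popular project management tools."

def analyse_prompt(prompt: str, brand_terms: list[str]) -> dict:
    results = {}
    for engine in ENGINES:
        response = ask_engine(engine, prompt)
        # Presence: was the brand name, domain, or a synonym mentioned?
        present = any(term.lower() in response.lower() for term in brand_terms)
        results[engine] = {"presence": present, "response": response}
    return results

results = analyse_prompt("What are the best project management tools?",
                         ["Acme PM", "acmepm.com"])
print(results["ChatGPT"]["presence"])  # True for this stub
```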


Creating Prompts

Manually

  1. Go to your project and click Prompts in the sidebar
  2. Click New Prompt
  3. Type your question
  4. Click Save

Auto-generated prompts

  1. From the Prompts page, click Generate Prompts
  2. The system analyses your domain and suggests relevant questions
  3. Review the suggestions and save the ones you want to keep

Running Prompts

Once prompts are saved, you can run them individually or in bulk:

  • Run individually — click Run next to a specific prompt to fire it against all engines simultaneously
  • Bulk run — use the bulk action to run all prompts at once (useful for a scheduled refresh)

Running a prompt takes 10–30 seconds per engine. Results appear as soon as each engine responds.
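Because each engine call takes 10–30 seconds, a bulk run is naturally parallel. A minimal sketch of that fan-out, assuming a hypothetical `run_prompt` trigger:

```python
from concurrent.futures import ThreadPoolExecutor

def run_prompt(prompt_id: int) -> str:
    # Stub for the real per-prompt engine fan-out.
    return f"prompt {prompt_id} done"

prompt_ids = [1, 2, 3]
# Run several prompts concurrently; results come back in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_prompt, prompt_ids))
print(results)
```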


Understanding Prompt Results

Each prompt result page shows the full AI response from each engine, along with structured analysis.

Engine Cards

Each engine (ChatGPT, Perplexity, Gemini, Claude) has its own collapsible card showing:

  • Presence — whether your brand name, domain, or synonyms were detected
  • Stance — the AI's overall tone: Positive, Neutral, or Negative
  • Confidence — how confident the analysis is in that stance classification
  • Rationale — a brief explanation of why that stance was assigned
  • Score — a 0–100 visibility score for this engine/prompt combination
  • Journey Stage — which stage of the buyer journey this prompt maps to
  • Topic Cluster — the topic category this prompt belongs to
  • Full Response — the raw AI response text, formatted with headings, lists, and tables
  • Citations — sources cited by the engine (available for Perplexity)
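As a mental model, an engine card can be pictured as a simple record. The field names and types below mirror the list above but are assumptions for illustration, not the product's schema:

```python
from dataclasses import dataclass

@dataclass
class EngineResult:
    engine: str
    presence: bool
    stance: str           # "Positive" | "Neutral" | "Negative"
    confidence: float     # 0.0-1.0 confidence in the stance
    rationale: str
    score: int            # 0-100 visibility score
    journey_stage: str
    topic_cluster: str
    full_response: str
    citations: list[str]  # populated for Perplexity

r = EngineResult("Perplexity", True, "Positive", 0.9, "Brand listed first", 82,
                 "Consideration", "Integrations", "…", ["https://example.com"])
print(r.score)
```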

Previous Runs

Below the latest results, you'll see a history of all previous runs for this prompt — useful for tracking how AI knowledge about your brand changes over time.


Writing Good Prompts

The quality of your prompts directly affects the usefulness of your data.

Effective prompt patterns

Category comparison queries — these reflect real user research behaviour and often trigger competitive mentions:

  • "What are the best [category] tools for [use case]?"
  • "Compare [your brand] vs [competitor] for [use case]"

Buyer intent queries — questions asked when someone is close to making a decision:

  • "Is [your brand] good for [enterprise / small business / specific use case]?"
  • "What are the pricing options for [category]?"

Problem-first queries — how users describe pain points before they know which product to buy:

  • "How do I [solve problem] without [common limitation]?"
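The bracketed placeholders above are just slots to fill in. A small, hypothetical helper for expanding such templates into concrete prompts:

```python
# Illustrative templates matching the patterns above; names are assumptions.
TEMPLATES = [
    "What are the best {category} tools for {use_case}?",
    "How do I {problem} without {limitation}?",
]

def expand(template: str, **slots: str) -> str:
    # Fill each {slot} with the supplied value.
    return template.format(**slots)

prompt = expand(TEMPLATES[0], category="project management", use_case="remote teams")
print(prompt)  # What are the best project management tools for remote teams?
```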

What to avoid

  • "Tell me about [Your Brand]" — too brand-specific; doesn't reflect real user search behaviour
  • Very generic queries — won't reliably trigger category-relevant answers
  • Questions with obvious answers — waste runs without generating useful visibility data
  • Queries unrelated to your category — will skew your journey stage and topic distributions

Prompt Organisation

Journey Stages

Each prompt result is automatically classified into a buyer journey stage:

  • Awareness — user is learning about a problem or category
  • Consideration — user is comparing options and evaluating features
  • Decision — user is ready to buy and comparing final choices
  • Retention — user is an existing customer looking for help or alternatives
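The article doesn't document how classification works, but a toy keyword heuristic illustrates the idea of mapping a prompt onto one of the four stages (purely a sketch, not the real classifier):

```python
# Hypothetical stage keywords; a prompt matching none defaults to Awareness.
STAGE_KEYWORDS = {
    "Decision": ["pricing", "buy", "is it good"],
    "Consideration": ["best", "compare", "alternatives", "features"],
    "Retention": ["cancel", "support", "migrate"],
}

def classify_stage(prompt: str) -> str:
    text = prompt.lower()
    for stage, words in STAGE_KEYWORDS.items():
        if any(w in text for w in words):
            return stage
    return "Awareness"  # learning about the problem or category

print(classify_stage("What are the pricing options for CRM tools?"))  # Decision
```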

Topic Clusters

Results are grouped into topic clusters (e.g. Pricing, Integrations, Security, Customer Support) to help you spot where your coverage is strong or weak.


Monitoring Frequency

  • Weekly — for key "money" prompts in competitive categories
  • Monthly — full prompt library re-run to catch gradual knowledge drift
  • After events — following a product launch, PR campaign, or major content push
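Weekly and monthly cadences boil down to scheduling the next re-run from the last one. A minimal sketch, with the cadence names and day counts as assumptions:

```python
from datetime import date, timedelta

# Illustrative cadence table: days between re-runs.
CADENCES = {"weekly": 7, "monthly": 30}

def next_run(last_run: date, cadence: str) -> date:
    return last_run + timedelta(days=CADENCES[cadence])

print(next_run(date(2024, 1, 1), "weekly"))  # 2024-01-08
```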

Re-running prompts is the only way to see whether your AI optimisation efforts are paying off.
