Using the Dashboard

The Dashboard is your central hub — it shows AI Visibility Score, Sentiment, Visibility, Engine Coverage, Competitor Share of Voice, Journey Stages, and Priority Actions all in one place.

Overview

The Dashboard (/projects/{id}/dashboard) is the main page for each project and the single source of truth for your brand's AI health. It brings together every key metric in one scannable view.

Use it to answer: "How is my brand performing across AI engines right now?"


Card Layout

The Dashboard is organised into six cards, displayed top to bottom:

1. AI Visibility Score (hero card)

The primary card — displayed with a high-contrast blue background to make it stand out as the most important metric. Shows your AI Visibility Score as a large number (0–100), averaged across all prompt runs and all engines.

  • High score (70–100): AI mentions your brand prominently and positively
  • Mid score (40–69): Mixed visibility, present in some contexts and absent in others
  • Low score (0–39): AI rarely or poorly mentions your brand
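As a rough sketch of the arithmetic described above (the exact aggregation is internal to the product; a simple mean and these band cut-offs are assumptions), the score and its band could be derived like this:

```python
def visibility_band(score: float) -> str:
    """Bucket a 0-100 AI Visibility Score into the bands listed above."""
    if score >= 70:
        return "High"
    if score >= 40:
        return "Mid"
    return "Low"

def overall_score(run_scores: list[float]) -> float:
    """Average across all prompt runs and engines (assumed simple mean)."""
    return sum(run_scores) / len(run_scores) if run_scores else 0.0

# Hypothetical per-run, per-engine scores
scores = [82, 64, 71, 90]
avg = overall_score(scores)
print(avg, visibility_band(avg))  # 76.75 High
```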

The card also shows a "Data available" / "No data yet" badge, and a prompt nudge if no prompts have been run or created yet.


2. Sentiment

Displayed side-by-side with the Score card (stacks on mobile). Shows the split of AI tone across your latest prompt results:

  • Positive %: Share of prompts where AI described your brand favourably
  • Neutral %: Share of prompts where AI mentioned you factually
  • Negative %: Share of prompts where AI expressed concerns

Each tile shows the percentage as the primary number with the raw count as a sub-label (e.g. 18 of 25 prompts).
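A minimal sketch of the percentage-plus-count arithmetic behind these tiles (the label names and data shape are assumptions, not the product's actual schema):

```python
from collections import Counter

def sentiment_split(labels: list[str]) -> dict[str, tuple[float, int]]:
    """Return {sentiment: (percentage, raw count)} across the latest
    prompt results. Labels assumed to be positive/neutral/negative."""
    counts = Counter(labels)
    total = len(labels)
    return {s: (round(100 * counts[s] / total, 1), counts[s])
            for s in ("positive", "neutral", "negative")}

# Hypothetical latest results for 25 prompts
labels = ["positive"] * 18 + ["neutral"] * 5 + ["negative"] * 2
print(sentiment_split(labels))
# "positive": (72.0, 18) would render as 72% with "18 of 25 prompts"
```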


3. Visibility

Two sections in one card:

Brand / Domain / Synonyms tiles:

  • Brand visible: % of analysed prompts where your brand name was detected
  • Domain visible: % of analysed prompts where your website domain was cited
  • Synonyms: % of prompts where a synonym was found, plus total mention count

Engine Coverage:

A progress-bar row for each AI engine (ChatGPT, Perplexity, Gemini, Claude) showing what percentage of your prompts returned a brand mention from that engine.

  • ✅ Active: At least one prompt has been run and returned a result from this engine
  • — No data: No prompt results from this engine yet
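The per-engine percentage behind each progress bar could be computed along these lines (a hedged sketch; the row shape and field names are assumptions for illustration):

```python
def engine_coverage(results: list[dict]) -> dict[str, float]:
    """Percentage of each engine's prompt results that mentioned the brand.
    Rows are assumed to look like
    {"engine": "ChatGPT", "brand_mentioned": True}."""
    per_engine: dict[str, list[bool]] = {}
    for row in results:
        per_engine.setdefault(row["engine"], []).append(row["brand_mentioned"])
    return {engine: round(100 * sum(hits) / len(hits), 1)
            for engine, hits in per_engine.items()}

rows = [
    {"engine": "ChatGPT", "brand_mentioned": True},
    {"engine": "ChatGPT", "brand_mentioned": False},
    {"engine": "Perplexity", "brand_mentioned": True},
]
print(engine_coverage(rows))  # {'ChatGPT': 50.0, 'Perplexity': 100.0}
```

An engine absent from the result set would simply have no row, matching the "— No data" state above.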

4. Competitors — Share of Voice

A table showing your brand vs. each tracked competitor, measured as a percentage of total AI mentions across all prompts.

  • Entity: Brand name. Your brand is tagged "You"
  • Share of Voice: Visual progress bar, blue for your brand and grey for competitors
  • %: Numeric share of total AI mentions
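Share of voice is each entity's slice of all AI mentions. A minimal sketch of that calculation (mention counts and brand names here are hypothetical):

```python
def share_of_voice(mentions: dict[str, int]) -> dict[str, float]:
    """Each entity's share of total AI mentions across all prompts, as a %."""
    total = sum(mentions.values())
    if total == 0:
        return {}
    return {name: round(100 * count / total, 1)
            for name, count in mentions.items()}

# Hypothetical mention counts across all prompts
counts = {"YourBrand": 30, "Competitor A": 15, "Competitor B": 5}
print(share_of_voice(counts))
# {'YourBrand': 60.0, 'Competitor A': 30.0, 'Competitor B': 10.0}
```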

Click Manage Competitors to add, edit, or remove competitors.


5. Journey Stages & Topics

Two horizontal bar charts showing the distribution of your prompt results:

  • Journey Stages — where in the buyer journey each prompt sits (Awareness, Consideration, Decision, Retention)
  • Topic Clusters — which topic categories dominate your prompt set (e.g. Pricing, Security, Integrations)

Use these to spot funnel gaps — if all your prompts sit in one stage, you're missing visibility elsewhere.


6. Priority Actions

Links to the full Issues & Recommendations report. Professional plan users can generate AI-powered issues — each scored by Impact × Effort and grouped by category (Visibility, Sentiment, Competitor Gap, Content Coverage, Citation & Authority). Starter plan users can access the static recommendation sections.
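Taking "Impact × Effort" literally, a sketch of how such a score might rank issues (the 1–5 scales, field names, and weighting are assumptions for illustration, not the product's documented formula):

```python
def priority_score(impact: int, effort: int) -> int:
    # Literal Impact x Effort product on assumed 1-5 scales; the product's
    # real weighting of the two factors may differ.
    return impact * effort

# Hypothetical AI-generated issues
issues = [
    {"title": "Add a pricing FAQ page", "impact": 5, "effort": 2,
     "category": "Content Coverage"},
    {"title": "Address negative review citations", "impact": 4, "effort": 4,
     "category": "Sentiment"},
]
ranked = sorted(issues,
                key=lambda i: priority_score(i["impact"], i["effort"]),
                reverse=True)
print([i["title"] for i in ranked])
```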


Editing Your Project

Click the pencil icon (✏️) in the top-right of the page header to edit your project name, domain, or synonyms.


Tips

  • Run prompts first — the Dashboard shows "No data yet" for all metrics until at least one prompt has been run.
  • Refresh regularly — the Dashboard reflects the latest run for each prompt. Re-running prompts updates all scores automatically.
  • Score + Sentiment together — the hero Score card and the Sentiment card sit side by side intentionally. A high score with mostly positive sentiment is the target; a high score with neutral/negative sentiment may mean AI mentions you but in a poor context.
  • Check Engine Coverage — if an engine shows "No data", your prompts haven't been run against it yet. Running prompts sends them to all active engines simultaneously.

Still need help? Contact Us