The prompt
Paste this into Claude, ChatGPT, Cursor, or any MCP-compatible client connected to your Bourd account. Replace each {{ placeholder }} with your own value before sending.
In the {{ workspace name }} workspace, find every prompt where {{ your domain }} is cited in the source list but {{ your brand }} isn't mentioned in the answer text on any model. List them with response counts, then ask me which one to focus on. Wait for my answer before continuing.
For the prompt I pick, compare {{ your domain }}'s page against the other cited pages and figure out why {{ your brand }} is being passed over for the mention. Then write a brief that fixes it.
Steps:
1. List the cited URLs across all models and the competitor brands mentioned in the answer. Note which cited URLs belong to brands that get mentioned.
2. Pull {{ your domain }}'s page and the top three or four other cited URLs through Bourd's path extraction.
3. Compare {{ your domain }}'s page against the cited pages whose brands get mentioned. Form a hypothesis for why {{ your brand }} is being passed over: coverage gaps, less direct claims, weaker structure, vocabulary mismatch with the prompt, or some combination. Cite specific differences from the page content.
4. Based on the hypothesis, recommend one of two paths:
- Rewrite {{ your domain }}'s page to address what's missing
- Write a new page if {{ your domain }}'s page is answering a different question than the prompt is asking
Default to the rewrite if it's plausible. State which path you're proposing and why.
Brief format:
- Proposed H1
- H2 outline
- Unique angle in one sentence
- Three to five questions the mentioned-brand pages answer well that {{ your domain }}'s page doesn't (or under-answers)
Stop at brief level. The writer drafts from there.
11 placeholders to fill in. The agent will call: list_prompts, list_prompt_responses, list_citations, and 1 more.
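If you want to sanity-check what the agent does in step 1, the filter is simple to state in code. A minimal Python sketch, assuming hypothetical record shapes; the real fields come from Bourd's list_prompts, list_prompt_responses, and list_citations tools and may differ:

```python
from urllib.parse import urlparse

# Assumed record shapes, for illustration only -- the real fields come from
# Bourd's list_prompts / list_prompt_responses / list_citations tools.
# prompt:   {"id": "...", "text": "..."}
# response: {"prompt_id": "...", "model": "...", "answer_text": "..."}
# citation: {"prompt_id": "...", "url": "..."}

def cited_but_not_mentioned(prompts, responses, citations, domain, brand):
    """Prompts where `domain` appears in the citations but `brand` never
    appears in any model's answer text, with response counts."""
    hits = []
    for prompt in prompts:
        pid = prompt["id"]
        prompt_responses = [r for r in responses if r["prompt_id"] == pid]
        prompt_citations = [c for c in citations if c["prompt_id"] == pid]

        domain_cited = any(
            urlparse(c["url"]).netloc.endswith(domain) for c in prompt_citations
        )
        brand_mentioned = any(
            brand.lower() in r["answer_text"].lower() for r in prompt_responses
        )

        if domain_cited and not brand_mentioned:
            hits.append({"prompt": prompt["text"], "responses": len(prompt_responses)})
    return hits
```

The agent does this reasoning from the raw tool output; the sketch just makes the pass/fail condition explicit.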
What this workflow does
Why mentions matter more than citations
Why it works
What a good brief looks like
One page, with:
- A proposed H1
- Three to six H2s
- A one-sentence angle
- Three to five questions where other cited pages answer better than yours, each naming the page that falls short (including yours when it applies)
- A recommendation to rewrite your cited page or commission a new one, with the reasoning
Anything longer is the writer's job. If the agent returns four pages of outline, push back and ask for the cuts. The writer needs room to write.
Common failure modes
How to extend it
- Ask for three brief variants at different intents (informational, comparison, decision-stage) so a single session covers a cluster instead of one query.
- Have the agent draft the meta title and description in the same turn.
- Chain into a Notion or Google Docs MCP to land the brief in the writer's workspace (see the sketch after this list).
- Tag the source prompt so you can re-check whether your brand earns a mention 30 days after the page ships.
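Chaining is mostly a client configuration job: add the second server next to Bourd and the agent routes the hand-off itself. If you'd rather script that last step, here is a rough sketch; the launch command, the create_page tool, and its arguments are hypothetical placeholders, not the real Notion or Google Docs MCP names:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder command -- swap in however you launch your docs MCP server.
docs_server = StdioServerParameters(command="docs-mcp", args=[])

async def land_brief(brief_markdown: str):
    """Push a finished brief into the writer's workspace via a docs MCP server."""
    async with stdio_client(docs_server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "create_page" and its arguments are hypothetical -- check the
            # server's own tool listing for the real names.
            await session.call_tool(
                "create_page",
                arguments={"title": "Content brief", "body": brief_markdown},
            )

asyncio.run(land_brief("Proposed H1, H2 outline, angle, questions..."))
```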
When to skip it
Tools the agent calls
The agent picks from these 4 Bourd MCP tools based on the prompt. You do not call them directly.
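You never call the tools yourself, but it helps to know roughly what the agent is doing on your behalf. A sketch of a session against the Bourd server using the MCP Python SDK, assuming a stdio launch (the bourd-mcp command is a placeholder) and guessing at the argument names, which you would take from the server's own tool listing:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command -- use whatever starts or proxies your Bourd MCP server.
server = StdioServerParameters(command="bourd-mcp", args=[])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the four tools and their real argument schemas.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Argument names below are guesses for illustration.
            prompts = await session.call_tool(
                "list_prompts", arguments={"workspace": "my-workspace"}
            )
            print(prompts.content)

asyncio.run(main())
```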
Pairs with
MCP agents chain across servers. Add these alongside Bourd and the same prompt can hand its output straight to the tools you already work in.