Running an Analysis
Running an analysis is the process of sending one or more of your saved prompts to the selected AI models. This is the core intelligence-gathering step in Bourd. The platform executes the prompts, captures the responses, and analyzes them for brand mentions, rankings, and citations.
This guide covers how to run analyses and interpret the results.
Prerequisites
Before running an analysis, you must have completed the initial setup: at minimum, created and saved the prompts you want to run and configured your brand and competitors in your workspace.
Running an Analysis
You can initiate an analysis run in two ways: for a single prompt or for multiple prompts in bulk.
From the Prompts List Page (Bulk Analysis)
This method is efficient for running a set of related prompts simultaneously.
- Navigate to the Prompts page.
- Select the checkboxes next to the prompts you wish to run.
- A bulk action bar will appear at the top of the list. Click the Run button.
From a Single Prompt’s Detail Page
This method is useful for testing a new prompt or getting a quick update on a specific query.
- Navigate to the Prompts page.
- Click on a prompt in the list to go to its detail page.
- Click the Run Analysis button in the top-right corner.
The Analysis Process
After initiating a run, you will be presented with the model selection dialog.
1. Select AI Models
In the “Run Analysis” modal, select the AI models you want to query.
- Model Selection: Check the boxes for the models you want to include in the analysis. Running a prompt across multiple models provides a broader understanding of your brand’s perception across the AI ecosystem.
- Credit Usage: The modal will display the estimated number of credits required for the run. Credits are consumed per prompt execution per model.
Click Run to start the analysis.
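To make the credit math concrete, here is a minimal sketch of the estimate the modal performs, assuming credits scale linearly per prompt per model. The function name and the default one-credit rate are illustrative assumptions, not documented Bourd values.

```typescript
// Hypothetical helper: estimate credits for a run.
// Assumes cost = prompts x models x per-execution rate, as described above.
function estimateCredits(
  promptCount: number,
  modelCount: number,
  creditsPerExecution = 1, // assumption: 1 credit per prompt per model
): number {
  return promptCount * modelCount * creditsPerExecution;
}

// Example: 5 prompts run across 3 models -> 15 executions.
console.log(estimateCredits(5, 3)); // 15
```

Because cost grows with both the number of prompts and the number of models, large bulk runs across many models are where credit usage adds up fastest.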
2. Monitor Execution
After starting the run, you will be redirected to the Results page, where you can monitor the status of the execution in real time.
- Pending: The run is queued and will begin shortly.
- In Progress: The prompts are actively being sent to the AI models.
- Completed: The run is finished, and all responses have been analyzed.
- Failed: The run could not be completed due to an error.
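The statuses above map naturally onto a small state type. The sketch below is a hypothetical model of that lifecycle, using illustrative names rather than Bourd’s actual API; it simply distinguishes runs that are still moving from runs that have finished.

```typescript
// Hypothetical model of the run lifecycle described above.
type RunStatus = "pending" | "in_progress" | "completed" | "failed";

// A run is terminal once it has either completed or failed;
// pending and in-progress runs may still change state.
function isTerminal(status: RunStatus): boolean {
  return status === "completed" || status === "failed";
}

// Example: decide whether to keep watching a run on the Results page.
const status: RunStatus = "in_progress";
if (!isTerminal(status)) {
  console.log("Run is still executing; check back shortly.");
}
```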
Viewing Analysis Results
Once a run is complete, you can view the detailed results.
Execution History
The Results page lists all historical and in-progress runs. Click on any completed run to view its summary.
Response Details
From the run summary page, click on a specific prompt’s response to open the detailed analysis view. This view is broken down into four key sections.
Response Content
This section displays the full, raw text response received from the AI model. This is the source data for all analysis.
Brand Mentions
This section provides a structured analysis of the brands mentioned in the response.
- Your Brand: If your primary brand was mentioned, it will be highlighted here, along with its rank (the order in which it was mentioned, e.g., 1st, 2nd, 3rd).
- Competitor Mentions: A list of any configured competitors that were mentioned, also with their rank.
This section is critical for understanding your competitive positioning. A rank of “1st” indicates a strong top-of-mind presence for that query.
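To make the ranking idea concrete, the sketch below models a response’s mention list and picks out your brand’s rank relative to competitors. The shapes and names are illustrative assumptions, not Bourd’s data model.

```typescript
// Hypothetical shape for one brand mention in a response.
interface BrandMention {
  brand: string;
  rank: number;        // 1 = mentioned first in the response
  isOwnBrand: boolean; // true for your primary brand
}

// Return your brand's rank, or null if it was not mentioned at all.
function ownBrandRank(mentions: BrandMention[]): number | null {
  const own = mentions.find((m) => m.isOwnBrand);
  return own ? own.rank : null;
}

// Example: your brand ranked 2nd behind one competitor.
const mentions: BrandMention[] = [
  { brand: "Competitor A", rank: 1, isOwnBrand: false },
  { brand: "Your Brand", rank: 2, isOwnBrand: true },
];
console.log(ownBrandRank(mentions)); // 2
```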
Citations
If the AI model provided any sources for its information, they are listed here. Each citation includes:
- Source URL: The direct link to the cited source.
- Anchor Text: The text from the response that is linked to the source.
Citation analysis helps you identify the content and domains that influence AI models.
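If you export raw response text for your own analysis, citations that appear as Markdown-style links can be pulled out with a simple pattern match. The sketch below assumes that link format; it is an illustration of the anchor-text-plus-URL pairing described above, not how Bourd’s own extraction works.

```typescript
// Hypothetical extractor: pull (anchor text, source URL) pairs from
// Markdown-style links such as [Example Report](https://example.com/report).
interface Citation {
  anchorText: string;
  sourceUrl: string;
}

function extractCitations(responseText: string): Citation[] {
  const linkPattern = /\[([^\]]+)\]\((https?:\/\/[^)\s]+)\)/g;
  return Array.from(responseText.matchAll(linkPattern), (m) => ({
    anchorText: m[1],
    sourceUrl: m[2],
  }));
}

// Example usage on an exported response snippet.
const sample = "According to [Example Research](https://example.com/report), ...";
console.log(extractCitations(sample));
// [{ anchorText: "Example Research", sourceUrl: "https://example.com/report" }]
```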
Metadata
This section provides technical details about the execution for quality assurance, including:
- Model Used: The specific AI model version (e.g., gpt-5 or grok-4).
- Token Usage: The number of input and output tokens consumed.
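Taken together, the four sections of the detail view can be thought of as one structured record per response. The interface below is a hypothetical summary of that shape, with illustrative field names rather than Bourd’s actual schema.

```typescript
// Hypothetical record tying together the four sections of the detail view.
interface ResponseAnalysis {
  responseContent: string;                          // raw text returned by the model
  brandMentions: { brand: string; rank: number }[]; // your brand and competitors
  citations: { anchorText: string; sourceUrl: string }[];
  metadata: {
    modelUsed: string; // e.g. "gpt-5" or "grok-4"
    inputTokens: number;
    outputTokens: number;
  };
}

// Example: a minimal record for one completed execution.
const example: ResponseAnalysis = {
  responseContent: "The leading options include Your Brand and Competitor A...",
  brandMentions: [{ brand: "Your Brand", rank: 1 }],
  citations: [{ anchorText: "Example Report", sourceUrl: "https://example.com" }],
  metadata: { modelUsed: "gpt-5", inputTokens: 120, outputTokens: 480 },
};
console.log(example.metadata.modelUsed);
```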
Permissions
- Required Role: Admin or Owner users can run analyses. Member roles are typically view-only and cannot initiate runs, though this may vary based on workspace configuration.
Troubleshooting
- Run Failed: This can happen if an AI model is temporarily unavailable or if there’s a configuration issue. Try running the analysis again. If the problem persists, check the system status page or contact support.
- No Brand Mentions: If a response does not mention your brand or competitors, it means the model did not find them relevant for that specific prompt. This is valuable intelligence in itself, indicating a potential gap in your brand’s AI visibility. Consider refining the prompt or reviewing your content strategy.
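If you export a response’s raw text, a quick case-insensitive check like the one sketched below can confirm that the brand string really is absent before you start refining the prompt. The helper is a hypothetical illustration, not a Bourd feature.

```typescript
// Hypothetical check: does the response text mention the brand at all?
// A simple case-insensitive substring test; real matching may also need
// to account for aliases or abbreviations of the brand name.
function mentionsBrand(responseText: string, brandName: string): boolean {
  return responseText.toLowerCase().includes(brandName.toLowerCase());
}

// Example usage on exported response text.
console.log(mentionsBrand("The top tools are A, B, and C.", "Your Brand")); // false
```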
Next Steps
After running an analysis, the next step is to review your results and identify trends.