
Running an Analysis

Running an analysis is the process of sending one or more of your saved prompts to the selected AI models. This is the core intelligence-gathering step in Bourd. The platform executes the prompts, captures the responses, and analyzes them for brand mentions, rankings, and citations.

This guide covers how to run analyses and interpret the results.

Prerequisites

Before running an analysis, you must have completed the following setup:

  1. Configured your Primary Brand and Competitors
  2. Created at least one Prompt

You can initiate an analysis run in two ways: for a single prompt or for multiple prompts in bulk.

From the Prompts List Page (Bulk Analysis)


This method is efficient for running a set of related prompts simultaneously.

  1. Navigate to the Prompts page.
  2. Select the checkboxes next to the prompts you wish to run.
  3. A bulk action bar will appear at the top of the list. Click the Run button.

From the Prompt Detail Page (Single Analysis)

This method is useful for testing a new prompt or getting a quick update on a specific query.

  1. Navigate to the Prompts page.
  2. Click on a prompt in the list to go to its detail page.
  3. Click the Run Analysis button in the top-right corner.

Selecting Models

After initiating a run, the Run Analysis modal opens. Select the AI models you want to query:

  • Model Selection: Check the boxes for the models you want to include in the analysis. Running a prompt across multiple models provides a broader understanding of your brand’s perception across the AI ecosystem.
  • Credit Usage: The modal will display the estimated number of credits required for the run. Credits are consumed per prompt execution per model.
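The credit rule above (one execution per prompt per model) makes the estimate simple arithmetic. A minimal sketch, assuming a flat per-execution cost; actual per-model pricing may differ:

```python
# Hedged sketch of the credit estimate shown in the Run Analysis modal.
# Assumes a flat cost per prompt execution per model; real per-model
# costs may vary.
def estimate_credits(num_prompts: int, model_ids: list[str],
                     cost_per_execution: int = 1) -> int:
    """Credits = prompts x models x cost per execution."""
    return num_prompts * len(model_ids) * cost_per_execution

# Example: 5 prompts across 3 models at 1 credit each -> 15 credits.
print(estimate_credits(5, ["gpt-5", "grok-4", "other-model"]))
```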

Click Run to start the analysis.

Monitoring the Run

After starting the run, you are redirected to the Results page, where you can monitor the execution status in real time. A run moves through the following states:

  • Pending: The run is queued and will begin shortly.
  • In Progress: The prompts are actively being sent to the AI models.
  • Completed: The run is finished, and all responses have been analyzed.
  • Failed: The run could not be completed due to an error.
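The lifecycle above can be modeled client-side as a small state type. This is an illustrative sketch with hypothetical names, not Bourd's API:

```python
from enum import Enum

# Hypothetical client-side model of the run states listed above.
class RunStatus(Enum):
    PENDING = "pending"          # queued, will begin shortly
    IN_PROGRESS = "in_progress"  # prompts are being sent to the models
    COMPLETED = "completed"      # all responses have been analyzed
    FAILED = "failed"            # the run could not be completed

def is_terminal(status: RunStatus) -> bool:
    """A run stops changing once it has completed or failed."""
    return status in (RunStatus.COMPLETED, RunStatus.FAILED)
```

A polling loop, for example, would keep refreshing until `is_terminal` returns true.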

Viewing Results

Once a run is complete, you can view the detailed results.

The Results page lists all historical and in-progress runs. Click on any completed run to view its summary.

From the run summary page, click on a specific prompt’s response to open the detailed analysis view. This view is broken down into four key sections.

Response

This section displays the full, raw text response received from the AI model. This is the source data for all analysis.

Brand Mentions

This section provides a structured analysis of the brands mentioned in the response.

  • Your Brand: If your primary brand was mentioned, it will be highlighted here, along with its rank (the order in which it was mentioned, e.g., 1st, 2nd, 3rd).
  • Competitor Mentions: A list of any configured competitors that were mentioned, also with their rank.

This section is critical for understanding your competitive positioning. A rank of “1st” indicates a strong top-of-mind presence for that query.
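To make the ranking concrete, here is a rough sketch of rank-by-first-mention over a response text, assuming simple case-insensitive substring matching; the platform's actual brand detection is presumably more robust (aliases, fuzzy matching):

```python
# Illustrative only: assigns ranks by the order in which each configured
# brand first appears in a response. Brand names are hypothetical.
def rank_mentions(response_text: str, brands: list[str]) -> dict[str, int]:
    text = response_text.lower()
    # First-occurrence position of each brand; -1 means not mentioned.
    positions = {b: text.find(b.lower()) for b in brands}
    mentioned = sorted((p, b) for b, p in positions.items() if p != -1)
    return {b: rank for rank, (_, b) in enumerate(mentioned, start=1)}

sample = "For CRM tools, consider Acme first, then BetaCorp."
print(rank_mentions(sample, ["BetaCorp", "Acme", "GammaSoft"]))
# {'Acme': 1, 'BetaCorp': 2}  -- GammaSoft is absent, so it gets no rank
```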

Citations

If the AI model provided any sources for its information, they are listed here. Each citation includes:

  • Source URL: The direct link to the cited source.
  • Anchor Text: The text from the response that is linked to the source.

Citation analysis helps you identify the content and domains that influence AI models.
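As an illustration of how anchor text and source URLs pair up, the sketch below extracts them from Markdown-style links. The link format is an assumption for the example, not a description of Bourd's actual parser:

```python
import re

# Hedged sketch: pulls (anchor text, source URL) pairs from a response
# that cites sources as Markdown links. Real model output may use other
# citation formats.
LINK_RE = re.compile(r"\[([^\]]+)\]\((https?://[^)\s]+)\)")

def extract_citations(response_text: str) -> list[tuple[str, str]]:
    return LINK_RE.findall(response_text)

resp = "See the [2024 CRM report](https://example.com/report) for details."
print(extract_citations(resp))
# [('2024 CRM report', 'https://example.com/report')]
```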

Metadata

This section provides technical details about the execution for quality assurance, including:

  • Model Used: The specific AI model version (e.g., gpt-5 or grok-4).
  • Token Usage: The number of input and output tokens consumed.

Permissions

Running an analysis requires the Admin or Owner role. Member roles are typically view-only and cannot initiate runs, though this may vary based on workspace configuration.

Troubleshooting

  • Run Failed: This can happen if an AI model is temporarily unavailable or if there is a configuration issue. Try running the analysis again. If the problem persists, check the system status page or contact support.
  • No Brand Mentions: If a response does not mention your brand or competitors, the model did not consider them relevant for that specific prompt. This is valuable intelligence in itself, pointing to a gap in your brand's AI visibility. Consider refining the prompt or reviewing your content strategy.

After running an analysis, the next step is to review your results and identify trends.