Analytics & Health

retestr.ai goes beyond simple pass/fail reports by providing deep insights into the stability and health of your test suite.

Test Health Dashboard

The main dashboard provides an at-a-glance view of your project's quality metrics.

Key Metrics

  • System Status: Real-time indicator of the retestr.ai platform health.
  • Pass Rate: The percentage of tests that passed in the last 50 runs. A dip here indicates a regression or increased flakiness.
  • Active Jobs: The number of tests currently running or queued.
  • Total Runs: The total number of test runs executed in the project.
  • Sparklines: Mini-charts showing the pass/fail status of recent runs, helping you spot patterns (e.g., "It started failing 5 runs ago").
  • Pass Rate Color Coding:
    • Green (>90%): Healthy.
    • Yellow (75-90%): Needs attention.
    • Red (<75%): Critical instability.
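
The pass-rate metric and its colour banding can be sketched as follows. This is an illustrative implementation, assuming run results are simple `"PASS"`/`"FAIL"` strings; the dashboard's internal data model is not documented here.

```python
# Sketch of the dashboard's pass-rate metric and colour banding.
# Run results as "PASS"/"FAIL" strings is a hypothetical data shape.

def pass_rate(results: list[str], window: int = 50) -> float:
    """Percentage of passing runs within the most recent `window` runs."""
    recent = results[-window:]
    if not recent:
        return 0.0
    return 100.0 * sum(r == "PASS" for r in recent) / len(recent)

def health_band(rate: float) -> str:
    """Map a pass rate to the dashboard's colour coding."""
    if rate > 90:
        return "green"   # Healthy
    if rate >= 75:
        return "yellow"  # Needs attention
    return "red"         # Critical instability

rate = pass_rate(["PASS"] * 45 + ["FAIL"] * 5)
print(rate, health_band(rate))  # 90.0 yellow
```

Note that 90% falls in the yellow band: green requires strictly more than 90%.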

Flake Detection

Flaky tests (tests that pass and fail intermittently without code changes) are the enemy of trust in CI. retestr.ai automatically identifies them.

How it works

The system analyzes the state transitions of your tests over time. If a test frequently flips between PASS and FAIL within a short window, it is flagged as Flaky.

  • Flakiness Score: A score from 0-100 indicating how unstable a test is.
  • Top Flaky Tests: The dashboard lists the most unstable tests so you can prioritize fixing them.
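
The transition-based idea can be sketched like this: count PASS/FAIL flips within a recent window and normalise to 0-100. The production scoring formula is not documented here, so this is only an illustration of the concept.

```python
# Hypothetical sketch of transition-based flake scoring: count PASS<->FAIL
# flips in a recent window and normalise to 0-100.

def flakiness_score(history: list[str], window: int = 20) -> int:
    """0 = perfectly stable, 100 = flips on every consecutive run."""
    recent = history[-window:]
    if len(recent) < 2:
        return 0
    flips = sum(a != b for a, b in zip(recent, recent[1:]))
    return round(100 * flips / (len(recent) - 1))

print(flakiness_score(["PASS", "FAIL", "PASS", "PASS", "FAIL", "PASS"]))  # 80
```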

Recent Regressions

The dashboard highlights tests that were previously passing but have recently started failing. This list is your "To-Do" list for fixing broken features.
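
One possible detection rule, shown purely as a sketch (the actual criteria are not documented here): a test counts as a recent regression when its latest run failed after a streak of passes.

```python
# Hypothetical regression rule: latest result is FAIL after a streak
# of passes. The streak length of 3 is an illustrative assumption.

def recent_regressions(histories: dict[str, list[str]],
                       min_passing_streak: int = 3) -> list[str]:
    regressed = []
    for name, runs in histories.items():
        if len(runs) > min_passing_streak and runs[-1] == "FAIL":
            prior = runs[-1 - min_passing_streak:-1]
            if all(r == "PASS" for r in prior):
                regressed.append(name)
    return regressed

histories = {
    "checkout": ["PASS", "PASS", "PASS", "PASS", "FAIL"],  # regression
    "login":    ["FAIL", "PASS", "FAIL", "PASS", "FAIL"],  # flaky, excluded
    "search":   ["PASS", "PASS", "PASS", "PASS", "PASS"],  # healthy
}
print(recent_regressions(histories))  # ['checkout']
```

Excluding flaky histories keeps the regression list focused on genuinely broken features rather than known-unstable tests.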

Visual Performance Metrics

Visual regression isn't just about pixels; it's also about performance. retestr.ai extracts key Web Vitals from the Playwright trace of every run.

  • LCP (Largest Contentful Paint): How long it takes for the main content to load.
  • CLS (Cumulative Layout Shift): How much the layout shifts unexpectedly.
  • FPS (Frames Per Second): Frame rate during animations or WebGL rendering.

These metrics are stored alongside your visual results, allowing you to catch "invisible" regressions where the app looks correct but feels sluggish.
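
As a sketch of how stored vitals could be screened for "invisible" regressions, the check below applies Google's commonly cited "good" thresholds (LCP ≤ 2.5 s, CLS ≤ 0.1). The field names and the 45 fps floor are illustrative assumptions, not the platform's actual defaults.

```python
# Flag stored Web Vitals that exceed common thresholds.
# Field names ("lcp_ms", "cls", "fps") and the fps floor are assumptions.

THRESHOLDS = {"lcp_ms": 2500, "cls": 0.1}

def vitals_issues(metrics: dict[str, float], min_fps: float = 45.0) -> list[str]:
    issues = []
    if metrics.get("lcp_ms", 0) > THRESHOLDS["lcp_ms"]:
        issues.append("LCP too slow")
    if metrics.get("cls", 0) > THRESHOLDS["cls"]:
        issues.append("layout shift too high")
    if metrics.get("fps", min_fps) < min_fps:
        issues.append("frame rate too low")
    return issues

print(vitals_issues({"lcp_ms": 3100, "cls": 0.02, "fps": 58}))  # ['LCP too slow']
```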

Cost Intelligence

As you scale, keeping track of testing costs is vital. The Usage & Billing dashboard provides transparency into your consumption.

  • Daily Usage: See how many screenshots/runs were consumed each day.
  • Forecast: A linear regression model predicts your end-of-month usage based on current velocity, helping you avoid overage charges.
  • Breakdown: View usage by Project or User to identify who is running the most tests.
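
The linear-regression forecast can be sketched with an ordinary least-squares fit over cumulative daily usage (day number against screenshots consumed so far). The production model's exact inputs are not documented, so treat this as an illustration of the technique.

```python
# Least-squares sketch of the end-of-month usage forecast.
# Input: cumulative usage per day so far this month (hypothetical shape).

def forecast_month_end(daily_cumulative: list[float],
                       days_in_month: int = 30) -> float:
    n = len(daily_cumulative)
    xs = range(1, n + 1)
    mean_x = sum(xs) / n
    mean_y = sum(daily_cumulative) / n
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(xs, daily_cumulative))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * days_in_month + intercept

# 100 screenshots/day for 10 days projects to ~3000 by day 30
print(round(forecast_month_end([100 * d for d in range(1, 11)])))  # 3000
```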

Run Progress & ETA

When a test suite is running, you don't have to guess when it will finish.

  • Real-time Progress Bar: Shows the exact percentage of completed tests.
  • Smart ETA: The "Time Remaining" is calculated based on the historical average duration of the specific tests in the current run.
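
One way to sketch the ETA calculation: sum the historical average durations of the tests still pending, divided by worker parallelism. The data shape, the parallelism divisor, and the fallback duration for tests with no history are all assumptions.

```python
# Sketch of a history-based ETA. `avg_duration` maps test name to its
# historical mean runtime in seconds; `default` covers unseen tests.

def eta_seconds(pending: list[str],
                avg_duration: dict[str, float],
                workers: int = 1,
                default: float = 30.0) -> float:
    remaining = sum(avg_duration.get(t, default) for t in pending)
    return remaining / max(workers, 1)

avgs = {"login": 12.0, "checkout": 48.0, "search": 20.0}
print(eta_seconds(["checkout", "search"], avgs, workers=2))  # 34.0
```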

Relative Timestamps

Throughout the application, times are displayed relatively (e.g., "5 mins ago", "2 hours ago") to help you quickly understand the recency of results. Hovering over any timestamp reveals the exact date and time.

Reviewer Leaderboard (Gamification)

To encourage team engagement and "clean code", retestr.ai includes a Leaderboard widget on the dashboard home screen.

Scoring System

Points are awarded for activities that improve quality or maintain the test suite:

  • Review (5 pts): Approving or rejecting a visual diff.
  • Contribution (2 pts): Creating or updating a test case.
  • Execution (1 pt): Manually triggering a test run.
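
The point values above can be tallied as in this sketch; the event representation (user name, activity kind) is a hypothetical shape for illustration.

```python
# Tally leaderboard points using the documented values:
# review = 5, contribution = 2, execution = 1.
from collections import Counter

POINTS = {"review": 5, "contribution": 2, "execution": 1}

def leaderboard(events: list[tuple[str, str]]) -> list[tuple[str, int]]:
    scores: Counter[str] = Counter()
    for user, kind in events:
        scores[user] += POINTS.get(kind, 0)
    return scores.most_common()

events = [("ana", "review"), ("ana", "execution"),
          ("bo", "contribution"), ("bo", "contribution")]
print(leaderboard(events))  # [('ana', 6), ('bo', 4)]
```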

Time Periods

You can toggle the leaderboard to view top contributors for the last 7, 30, or 90 days, helping to recognize both recent heroes and long-term maintainers.

ROI & Time Saved Widget

Quantify the value of your automated visual testing efforts directly on the dashboard.

The ROI & Time Saved widget tracks your team's usage and provides a gamified summary of:

  • Hours Saved: Calculated by multiplying the number of tests run by the estimated time it would take a human to manually perform the same checks.
  • Money Saved: Calculated by multiplying the "Hours Saved" by a configurable average "Hourly Rate" for QA engineers.
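
The arithmetic behind the widget reduces to the sketch below. The per-check manual time and hourly rate are the configurable inputs mentioned above; the example values are illustrative only.

```python
# ROI arithmetic: hours saved = runs x estimated manual minutes per check;
# money saved = hours saved x configured hourly rate. Example values only.

def roi(tests_run: int,
        manual_minutes_per_check: float = 5.0,
        hourly_rate: float = 50.0) -> tuple[float, float]:
    hours_saved = tests_run * manual_minutes_per_check / 60.0
    money_saved = hours_saved * hourly_rate
    return hours_saved, money_saved

hours, money = roi(1200)  # 1200 automated checks this month
print(hours, money)       # 100.0 5000.0
```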

This feature provides immediate visibility into the return on investment of your test suite, making it easy to justify the value of automation to stakeholders.