Quality Gates
Beyond visual regression, retestr.ai provides powerful quality gates to ensure your application is performant, accessible, and error-free.
Automated Quarantine Workflow
Flaky tests (tests that pass and fail intermittently without code changes) destroy trust in your test suite. retestr.ai automatically detects and quarantines flaky tests so they don't break your builds.
How it Works
- Detection: The system monitors the pass/fail history of every test case over a rolling 24-hour window.
- Quarantine: If a test flips status (Pass ↔ Fail) more than 3 times in 24 hours, it is automatically marked as Quarantined.
- Impact: Quarantined tests continue to run, but their failures do not fail the overall Test Run. This allows you to monitor the test without blocking deployment pipelines.
- Recovery: Once a quarantined test achieves 10 consecutive passing runs, it is automatically Un-Quarantined and returns to normal enforcement.
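The detection and recovery rules above can be sketched as a few small functions. This is an illustrative sketch only, not retestr.ai's actual implementation; the thresholds mirror the documented defaults, and the function names and the `results` shape are hypothetical.

```javascript
// Illustrative sketch of the quarantine rules (not retestr.ai internals).
// `results` is a chronologically ordered array of { passed, timestamp }
// entries for a single test case; timestamps are in milliseconds.

const WINDOW_MS = 24 * 60 * 60 * 1000; // rolling 24-hour window
const MAX_TRANSITIONS = 3;             // flips allowed before quarantine
const RECOVERY_STREAK = 10;            // consecutive passes to un-quarantine

function countTransitions(results, now) {
  // Only look at results inside the rolling 24-hour window.
  const recent = results.filter(r => now - r.timestamp <= WINDOW_MS);
  let flips = 0;
  for (let i = 1; i < recent.length; i++) {
    if (recent[i].passed !== recent[i - 1].passed) flips++;
  }
  return flips;
}

function shouldQuarantine(results, now) {
  return countTransitions(results, now) > MAX_TRANSITIONS;
}

function shouldUnquarantine(results) {
  // The last 10 runs must all be passes.
  const tail = results.slice(-RECOVERY_STREAK);
  return tail.length === RECOVERY_STREAK && tail.every(r => r.passed);
}
```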
Audit Logs
All quarantine and un-quarantine events are logged in the Audit Logs (Settings > Audit Logs) for full traceability, including the reason (e.g., "High Flakiness", "Stabilized").
Notifications
When a test is quarantined, retestr.ai automatically sends alerts to keep your team informed:
- Email Alerts: A batched email digest is sent to all Organization Admins and Editors, listing the quarantined tests and the reason (e.g., "High Flakiness (6 transitions in 24h)").
- Webhooks: The system dispatches test:quarantined and test:unquarantined events to your configured webhooks, allowing you to integrate with Slack, Microsoft Teams, or custom internal tools.
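A webhook consumer might route these events to a chat channel like so. The event names come from the docs above, but the payload fields (testName, reason) are assumptions about the shape and may differ from what retestr.ai actually sends; check a real payload before relying on them.

```javascript
// Hypothetical consumer for quarantine webhook events. The payload fields
// (testName, reason) are assumed, not documented retestr.ai schema.

function formatSlackMessage(event) {
  switch (event.type) {
    case 'test:quarantined':
      return `:warning: *${event.testName}* was quarantined (${event.reason}). ` +
             'Its failures will no longer fail the run.';
    case 'test:unquarantined':
      return `:white_check_mark: *${event.testName}* stabilized and is back ` +
             'under normal enforcement.';
    default:
      return null; // ignore unrelated event types
  }
}
```

In practice you would parse the webhook's JSON body in your HTTP handler, call formatSlackMessage, and POST the result to a Slack or Teams incoming-webhook URL.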
Automated Accessibility Guardrails
Ensure your application is accessible to all users by automatically running accessibility checks on every visual snapshot.
retestr.ai integrates with axe-core, the industry standard for accessibility testing.
How it works
When enabled, the runner automatically injects axe-core into the page before taking a screenshot. It scans the page for violations (WCAG 2.0, 2.1, etc.) and reports them alongside the visual diff.
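Conceptually, the reported violations are the axe-core results filtered to the impact levels you configure. The sketch below uses axe-core's real result shape (a violations array whose entries carry an impact of minor, moderate, serious, or critical); the helper function itself is illustrative, not retestr.ai code.

```javascript
// Filter an axe-core results object down to the configured impact levels.
// In the browser, `results` would come from `await axe.run()`; here the
// helper is pure so it can run anywhere.

function violationsToReport(results, config) {
  if (!config.checkViolations) return [];
  return results.violations.filter(v => config.impact.includes(v.impact));
}
```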
Configuration
You can enable accessibility checks in your Test Case settings:
{
"checkViolations": true,
"impact": ["critical", "serious"]
}
If violations are found, the test can be configured to fail, preventing inaccessible code from reaching production.
Strict Console Guardrails
Catch hidden errors that might not be visually obvious.
Fail on Console Errors
You can configure your tests to fail immediately if console.error() is called during execution. This is critical for catching:
- React hydration errors.
- Unhandled promise rejections.
- Third-party script failures.
- Missing assets (404s logged to console).
Ignore Patterns
Sometimes, third-party libraries (such as ads or analytics scripts) log errors you can't control. You can define Ignore Patterns using regular expressions to suppress this known noise.
{
"failOnConsoleError": true,
"ignorePatterns": [
"Recaptcha",
"Analytics"
]
}
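The decision the runner makes for each console error can be sketched as follows. This is illustrative logic only (the function name is hypothetical), but it honors the two config fields shown above: each ignore pattern is treated as a regular expression tested against the error message.

```javascript
// Decide whether a console.error() call should fail the run, given the
// failOnConsoleError / ignorePatterns config shown above (illustrative).

function shouldFailOnConsoleError(message, config) {
  if (!config.failOnConsoleError) return false;
  // Any matching ignore pattern suppresses the failure.
  return !config.ignorePatterns.some(p => new RegExp(p).test(message));
}
```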
Audit Mode
For ultra-fast quality checks, you can run tests in Audit Mode (see Running Tests).
In Audit Mode:
- No Screenshots: Visual comparison is skipped entirely to save time and resources.
- Strict Enforcement: Both Console and Network guardrails are automatically enabled and enforced, even if disabled in the test settings.
- Purpose: Ideal for checking "Is the site up?" and "Are there any JS errors?" without waiting for visual processing.
Strict Network Guardrails
Ensure API stability and prevent broken links.
Fail on Network Errors
You can configure your tests to fail if any network request returns a 4xx (Client Error) or 5xx (Server Error) status code. This catches:
- Broken images (404).
- Failed API calls (500).
- Authentication issues (401/403).
Ignore Patterns
If your application intentionally expects certain errors (e.g., when testing a 404 page), or has noisy third-party trackers that fail often, you can ignore them using regex patterns.
{
"failOnNetworkError": true,
"ignorePatterns": [
"google-analytics",
"api/non-critical-endpoint",
"expected-404"
]
}
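Per request, the guardrail check amounts to the following sketch (illustrative only; the function name is hypothetical). A response is a violation if its status is 4xx/5xx and its URL matches none of the configured ignore patterns.

```javascript
// Evaluate a finished network request against the guardrail config shown
// above (illustrative; retestr.ai's internal logic may differ).

function isNetworkViolation(response, config) {
  if (!config.failOnNetworkError) return false;
  if (response.status < 400) return false; // 2xx/3xx are fine
  // URLs matching an ignore pattern are exempt (expected errors, trackers).
  return !config.ignorePatterns.some(p => new RegExp(p).test(response.url));
}
```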
Visual Performance Metrics
retestr.ai automatically extracts performance metrics from the Playwright trace to detect performance regressions.
Metrics Tracked
- LCP (Largest Contentful Paint): Time until the largest content element in the viewport finishes rendering.
- CLS (Cumulative Layout Shift): How much the layout shifts unexpectedly while the page loads and runs.
- FPS (Frames Per Second): Frame rate during animations (when available).
These metrics are displayed in the Performance Card on the Run Details page.
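For reference, CLS is derived from the browser's layout-shift performance entries (observed via a PerformanceObserver in the page). The accumulation below follows the original definition: sum every shift not caused by recent user input. Note that the current web-vitals definition refines this by grouping shifts into session windows and taking the largest window; this sketch omits that refinement.

```javascript
// Simplified CLS accumulation from Layout Instability API entries: sum the
// `value` of every layout-shift entry that was not triggered by recent user
// input. (Omits the session-windowing used by the current web-vitals spec.)

function cumulativeLayoutShift(entries) {
  return entries
    .filter(e => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}
```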
Layout Shift Debugging
If a high CLS score is detected, the AI Smart Triage will often flag "Layout Shift" as the cause of the visual diff, helping you correlate performance issues with visual bugs.
Custom Metric Assertions
For application-specific metrics (e.g., "Game Load Time", "Memory Usage"), you can report custom values directly from your test script.
// In your test script
// Note: performance.memory is a non-standard API, available only in
// Chromium-based browsers.
const memory = performance.memory.usedJSHeapSize;
window.retestr.reportMetric('JS Heap (MB)', memory / 1024 / 1024);
These values are tracked over time, allowing you to spot trends and regressions.