CI/CD Dashboard Wiki

This page explains the upcoming qacloud CI/CD dashboard, what users will be able to practice, and how the experience is planned to work once the feature is released.

Status: in test now

The CI/CD dashboard is currently in test: the concept, payload contract, storage model, and dashboard flow are being validated. It is not yet a finished public feature; a broader rollout will follow.

What Users Can Practice

The goal is to let users go beyond “my test passed locally” and practice the workflow that real teams use after execution. Instead of stopping at the test runner, they will be able to publish a result summary and see whether a deployment would be allowed or blocked.

Smoke and sanity coverage

Submit quick health-check suites and confirm whether a release gate would stay open for a target application.

Regression reporting

Track pass rate, failed tests, and run duration across repeated regression submissions.
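Tracking across repeated submissions can be sketched as a small aggregation over run metrics. This is illustrative only: the run dicts mirror the "metrics" block of the planned payload, and the helper names (`pass_rate`, `summarize_runs`) are not part of any qacloud API.

```python
# Sketch of tracking regression metrics across repeated submissions.
# Each run dict mirrors the "metrics" block of the planned payload;
# the helpers here are illustrative, not a qacloud API.

def pass_rate(metrics: dict) -> float:
    """Passed tests as a fraction of total (0.0 for an empty run)."""
    total = metrics.get("total", 0)
    return metrics.get("passed", 0) / total if total else 0.0

def summarize_runs(runs: list[dict]) -> dict:
    """Aggregate pass rate, failures, and duration across submissions."""
    return {
        "runs": len(runs),
        "worst_pass_rate": min((pass_rate(r) for r in runs), default=0.0),
        "total_failed": sum(r.get("failed", 0) for r in runs),
        "avg_duration_ms": (
            sum(r.get("durationMs", 0) for r in runs) / len(runs) if runs else 0.0
        ),
    }

history = [
    {"total": 12, "passed": 12, "failed": 0, "durationMs": 52000},
    {"total": 12, "passed": 10, "failed": 2, "durationMs": 61000},
]
summary = summarize_runs(history)
```
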

API and UI automation publishing

Send summaries from Playwright, Cypress, Jest, Pytest, Newman, or custom scripts without coupling the feature to one framework.
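Because each runner reports counts under its own field names, framework-agnostic publishing amounts to a tiny adapter that maps tool output onto the shared metrics shape. The sketch below assumes pytest-style counts as input; the field names are examples, not actual tool output formats.

```python
# Hypothetical adapter: map framework-specific counters onto the
# shared "metrics" shape before publishing. The pytest-style input
# below is an example, not pytest's real output format.

def to_metrics(total, passed, failed, skipped=0, flaky=0, duration_ms=0):
    return {
        "total": total, "passed": passed, "failed": failed,
        "skipped": skipped, "flaky": flaky, "durationMs": duration_ms,
    }

# e.g. a pytest-style summary parsed into counts
pytest_counts = {"passed": 10, "failed": 1, "skipped": 1}
metrics = to_metrics(
    total=sum(pytest_counts.values()),
    passed=pytest_counts["passed"],
    failed=pytest_counts["failed"],
    skipped=pytest_counts["skipped"],
)
```

A similar adapter for Playwright, Cypress, or Newman would only differ in how the counts are read out, which is what keeps the feature decoupled from any one framework.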

Deployment gate thinking

Practice how quality signals influence a release decision instead of treating automation as a detached script.

How It Will Work

  1. User signs in to qacloud and copies their API key.
  2. User runs a local suite or CI pipeline for one qacloud application such as Market, Bank, Hotel, Rental, TaskTracker, Ticket, Crypto, DataHub, or Sandbox.
  3. User posts a lightweight JSON summary to POST /api/deployment-gate/reports.
  4. qacloud stores the run and evaluates a simple gate rule.
  5. User opens the dashboard and reviews status, pass rate, failures, and recent history.
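Step 3 can be sketched with the standard library alone. The endpoint path comes from this page, but the base URL and the API key header name ("X-Api-Key") are assumptions, not a documented qacloud contract. Building the request is separated from sending it so the payload can be inspected first.

```python
# Sketch of step 3: publishing the JSON summary.
# Assumptions: the base URL is a placeholder, and "X-Api-Key" is an
# assumed header name, not a documented qacloud contract.
import json
import urllib.request

def build_report_request(base_url: str, api_key: str, report: dict) -> urllib.request.Request:
    body = json.dumps(report).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/deployment-gate/reports",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "X-Api-Key": api_key,  # assumed header name
        },
    )

req = build_report_request(
    "https://qacloud.example",  # placeholder host
    "MY_API_KEY",
    {"application": "bank", "metrics": {"total": 1, "passed": 1, "failed": 0}},
)
# To actually send it: urllib.request.urlopen(req)
```
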

Planned release rule for the first version

The first version is intentionally simple: failed = 0 means release allowed, and failed > 0 means release blocked.
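Expressed directly in code, the planned rule is a one-line check on the reported metrics:

```python
# The planned first-version gate rule: a release is allowed only when
# the reported run has zero failed tests. A missing "failed" field is
# treated as 0 here, which is a choice made for this sketch.

def gate_allows_release(metrics: dict) -> bool:
    return metrics.get("failed", 0) == 0
```
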

Planned Scope

Ownership: One shared CI/CD dashboard per user account.
Grouping: Reports scoped by application, project name, suite name, branch, and environment.
Writers: Local scripts and hosted CI providers will both be supported.
Readers: Users will review a dedicated dashboard plus a compact summary card in the profile area.
Stored detail: Compact metadata only: totals, failures, pass rate, provider, links, and trimmed failure messages.

Why the design is application-first

This feature is planned around the existing qacloud apps, not around the shell. Users should be able to practice release quality for a specific application and suite, then compare runs across apps from one account.

Planned Payload

The initial contract is lightweight so users can send results from different tools without building a large custom integration.

{
  "application": "bank",
  "projectName": "Banking App - UI Tests",
  "suiteName": "Checkout smoke",
  "provider": "GitHub Actions",
  "branch": "main",
  "environment": "staging",
  "metrics": {
    "total": 12,
    "passed": 12,
    "failed": 0,
    "skipped": 0,
    "flaky": 0,
    "durationMs": 52000
  },
  "failures": []
}

That shape is designed to support smoke, sanity, regression, and mixed API/UI reporting without storing oversized raw artifacts in the primary record.
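The "compact metadata only" idea from Planned Scope can be sketched as a reduction step that keeps totals and trimmed failure messages while dropping raw artifacts. Everything here is illustrative: `compact_record` and the 200-character cap are assumptions for this sketch, not part of the qacloud contract.

```python
# Sketch of storing compact metadata only: totals, pass rate, and
# trimmed failure messages. The function name and the 200-character
# cap are illustrative choices, not a documented qacloud contract.

MAX_MESSAGE_LEN = 200

def compact_record(report: dict) -> dict:
    metrics = report.get("metrics", {})
    total = metrics.get("total", 0)
    return {
        "application": report.get("application"),
        "provider": report.get("provider"),
        "failed": metrics.get("failed", 0),
        "passRate": metrics.get("passed", 0) / total if total else 0.0,
        "failures": [
            {
                "name": failure.get("name"),
                # trim long assertion output so the primary record stays small
                "message": (failure.get("message") or "")[:MAX_MESSAGE_LEN],
            }
            for failure in report.get("failures", [])
        ],
    }

record = compact_record({
    "application": "bank",
    "provider": "GitHub Actions",
    "metrics": {"total": 4, "passed": 3, "failed": 1},
    "failures": [{"name": "test_login", "message": "AssertionError: " + "x" * 500}],
})
```
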

What Comes Next

During the test phase, qacloud is validating the dashboard experience, summary card, app-level grouping, and report ingestion flow. After that, users will see a public rollout with clearer onboarding and guidance for Playwright, Jest, and Pytest style submissions.