
Cysic Auction Mechanism

1. Overview

The Cysic auction mechanism determines how proving tasks are assigned, how prover and verifier rewards are calculated, and how underperforming nodes are penalized. If you run a prover or verifier node, understanding this flow helps you choose a better bid, reserve the right amount of CYS, and estimate your earnings more accurately.

2. Why the Auction Exists

The mechanism is designed to balance four goals:

  • Fairness: tasks are allocated through competitive bidding under a maximum requester-defined price.
  • Efficiency: faster and more reliable operators are more likely to complete tasks successfully and earn rewards.
  • Security: failing nodes can be penalized through slashing.
  • Incentive alignment: reserved CYS increases long-term commitment and can improve payout outcomes.

3. Task Parameters

Each proving task is published by a requester with three core parameters:

  • Task difficulty (task_difficulty): measured in cycles.
  • Maximum acceptable bid (bid_max): the highest unit price the requester will accept, denominated in CYS per million cycles.
  • Task deadline (task_ddl): the maximum time allowed to complete the task.
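These three parameters can be sketched as a simple data structure. The `ProvingTask` class, its field types, and the example values below are illustrative assumptions, not the actual Cysic API:

```python
from dataclasses import dataclass

@dataclass
class ProvingTask:
    """Illustrative container for the three core task parameters."""
    task_difficulty: int  # task size, measured in cycles
    bid_max: float        # highest acceptable unit price, in CYS per million cycles
    task_ddl: int         # deadline: maximum time allowed, e.g. in seconds

# Example: a 50M-cycle task capped at 2 CYS per million cycles, due in 10 minutes
task = ProvingTask(task_difficulty=50_000_000, bid_max=2.0, task_ddl=600)
```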

4. What Provers Must Prepare

Before a prover can participate meaningfully in auctions, it needs:

  • Reserved CYS: a prover must reserve the minimum required amount of CYS to be eligible.
  • A configured bid price: the price in the prover config is used as the reference for future auctions.
  • Sufficient performance: the prover software benchmarks hardware capability automatically to decide whether the node can finish tasks before the deadline.

5. How Prover Selection Works

5.1 Auction Flow

When a task is published:

  1. Task broadcast: the requester publishes the task, which is propagated to all registered provers.
  2. Capability assessment: each prover checks whether it can complete the task before task_ddl.
  3. Bid submission: eligible provers submit their configured bids. Any bid above bid_max is discarded automatically.
  4. Bid selection: once bidding closes, valid bids are sorted from low to high. Let N be the number of valid bids.
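Steps 3 and 4 amount to a filter-and-sort. The helper below is a minimal sketch, assuming bids arrive as `(prover_id, price)` pairs:

```python
def collect_valid_bids(bids, bid_max):
    """Discard any bid above bid_max, then sort the rest from low to high.

    bids: iterable of (prover_id, price) pairs; price in CYS per million cycles.
    """
    valid = [(pid, price) for pid, price in bids if price <= bid_max]
    return sorted(valid, key=lambda b: b[1])

bids = [("p1", 2.5), ("p2", 1.0), ("p3", 1.8)]
print(collect_valid_bids(bids, bid_max=2.0))  # [('p2', 1.0), ('p3', 1.8)]
```

Here `p1`'s bid of 2.5 exceeds `bid_max` and is dropped before sorting.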

5.2 Winner Selection

The winner selection then works like this:

  • The candidate pool starts from the second-lowest valid bid and can include up to 8 entries total, covering ranks 2 through min(N, 9).
  • If fewer than 9 valid bids exist, the pool includes all bids from rank 2 through rank N.
  • Three provers are selected uniformly at random from that candidate pool.
  • If two provers bid the same price, the prover with the larger reserve ranks higher at the pool boundary.
  • The selected bid price (bid_select) is the lowest price inside the candidate pool, which is equivalent to the second-lowest valid bid overall.
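The rules above can be sketched in a few lines. This is hypothetical code, assuming `sorted_bids` is already sorted low to high with the reserve-based tie-break applied, and that N ≥ 4 so the pool holds at least three entries:

```python
import random

def select_winners(sorted_bids, rng=random):
    """Pick three winners and the settlement price per section 5.2.

    sorted_bids: valid (prover_id, price) pairs, sorted from low to high.
    The candidate pool covers ranks 2 through min(N, 9); three winners
    are drawn uniformly without replacement; bid_select is the lowest
    price in the pool, i.e. the second-lowest valid bid overall.
    """
    n = len(sorted_bids)
    pool = sorted_bids[1:min(n, 9)]   # ranks 2..min(N, 9), up to 8 entries
    winners = rng.sample(pool, k=3)   # three uniform draws, no replacement
    bid_select = pool[0][1]           # lowest price inside the pool
    return winners, bid_select
```

Note that the rank-1 (lowest) bidder never appears in the pool; its bid only shapes the market via the sort order.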

6. What This Means for Your Prover Bid

Your configured bid affects both selection probability and reward level:

  • A lower bid makes it more likely that your prover enters the candidate pool.
  • A higher bid can improve reward per task, but also increases the chance that your prover is excluded from the pool.
  • The lowest valid bid does not automatically win. Instead, it helps set the market price, while actual winners are randomly drawn from the candidate pool beginning at the second-lowest bid.

For setup details, see How to Run a Prover Node.

7. Prover Reward Distribution

The total prover reward pool is:

\[ task\_reward\_prover = bid\_select \times task\_difficulty \times 80\% \]

Rewards are then split among successful provers after verifier confirmation:

  • If 2 provers succeed, the reward is split 7:3, assigned randomly between them.
  • If 3 provers succeed, the reward is split 7:2:1, assigned randomly among them.
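As a worked sketch of the pool and split (hypothetical helper; `task_difficulty` is assumed here to be expressed in millions of cycles so its unit matches the per-million-cycle unit of `bid_select`):

```python
import random

def prover_rewards(bid_select, task_difficulty, n_success, rng=random):
    """Split the 80% prover pool 7:3 or 7:2:1, assigned in random order."""
    pool = bid_select * task_difficulty * 0.80
    ratios = {2: (7, 3), 3: (7, 2, 1)}[n_success]
    shares = [pool * r / sum(ratios) for r in ratios]
    rng.shuffle(shares)  # random assignment among the successful provers
    return shares

# Example: bid_select = 2 CYS/Mcycles, 50 Mcycle task, 3 successful provers
print(sorted(prover_rewards(2.0, 50, 3), reverse=True))  # [56.0, 16.0, 8.0]
```

The pool here is 2.0 × 50 × 80% = 80 CYS, split 56 / 16 / 8 among the three provers.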

8. Failure and Slashing

If a prover misses the deadline:

  • The task is reassigned to a backup prover.
  • The failing prover is slashed according to:
\[ slash = \beta \times bid\_max \times task\_difficulty \]

Where β = 1 initially, subject to future adjustment.
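The slashing rule is a one-liner (hypothetical helper, using the same units as the reward formulas above):

```python
def slash_amount(bid_max, task_difficulty, beta=1.0):
    """Penalty for a missed deadline: slash = beta * bid_max * task_difficulty."""
    return beta * bid_max * task_difficulty

# Example: missing a 50 Mcycle task capped at 2 CYS/Mcycles costs 100 CYS
print(slash_amount(bid_max=2.0, task_difficulty=50))  # 100.0
```

Because the slash is scaled by `bid_max` rather than the prover's own bid, the penalty does not shrink when a prover underbids.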

9. Verifier Rewards

Verifier rewards are funded from the remaining 20% of the task reward.

  • 20 verifiers are randomly selected for each task. The selection is determined by the block hash at task creation time, keeping the verifier set unpredictable.
  • The verification round closes once 60% of them, meaning at least 12 verifiers, have submitted results.
  • Only verifiers that submit before the task closes share the verifier reward.

The verifier reward formula is:

\[ verifier\_reward = \frac{bid\_select \times task\_difficulty \times 20\%}{n_{submitted}} \]

Where n_submitted is the number of verifiers that submitted on time.
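A minimal sketch of this payout (hypothetical helper; as above, `task_difficulty` is assumed to be in millions of cycles):

```python
def verifier_reward(bid_select, task_difficulty, n_submitted):
    """Per-verifier payout: the 20% verifier pool split evenly among
    the n_submitted verifiers that submitted before the task closed."""
    return bid_select * task_difficulty * 0.20 / n_submitted

# Example: 12 of 20 verifiers submit on time for a 50 Mcycle task at 2 CYS/Mcycles
print(verifier_reward(2.0, 50, 12))  # each timely verifier gets 20/12 CYS
```

Note that a faster-closing round (fewer timely submitters, down to the 12-verifier floor) means a larger share for each verifier that did submit.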

For setup details, see How to Run a Verifier Node.

10. Reserve-Weighted Incentives

Prover rewards can be increased based on reserve share:

\[ final\_task\_reward\_prover = task\_reward\_per\_prover \times \big(1 + \gamma \times R\big) \]

Where:

  • γ is initially 0.25.
  • R = reserve_token / total_reserve_token.

This means provers with larger reserves can receive proportionally higher rewards.
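The multiplier can be sketched as follows (hypothetical helper mirroring the formula above):

```python
def final_prover_reward(task_reward_per_prover, reserve_token,
                        total_reserve_token, gamma=0.25):
    """Boost a prover's base share by its reserve fraction R."""
    r = reserve_token / total_reserve_token  # R = reserve / total reserve
    return task_reward_per_prover * (1 + gamma * r)

# Example: holding 10% of all reserved CYS boosts a 56 CYS share by 2.5%
print(final_prover_reward(56.0, reserve_token=100, total_reserve_token=1000))
```

With γ = 0.25, even a prover holding the entire reserve pool (R = 1) caps out at a 25% boost, so the reserve incentive stays bounded.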

11. Security Considerations

  • Sybil resistance: token reservation reduces the risk of Sybil attacks by requiring economic commitment.
  • Collusion prevention: pricing determined by the second-lowest bid helps reduce collusion risk, while randomized winner selection prevents a single high-throughput prover from monopolizing task allocation.
  • Reliability enforcement: slashing makes underperforming provers bear economic costs.
  • Verifier integrity: distributing rewards among multiple verifiers improves proof validation robustness.

12. Key Takeaways for Node Operators

  • Prover operators should set bids carefully, maintain enough reserve, and make sure hardware can meet deadlines consistently.
  • Verifier operators should prioritize reliable uptime and low-latency connectivity because only timely submissions earn rewards.
  • Both roles should keep enough CYS available for reserve requirements and transaction fees.