This section provides technical specifications and reference information for Confidence Flags.
For conceptual explanations of how flags work, see Flag Concepts.
Rules
Key Characteristics
- Priority-based evaluation: Rules evaluate in priority order, with lower priority numbers evaluated first
- First match wins: The first rule that matches determines the variant
- Required state: Newly created rules start disabled—you must explicitly enable them
- Segment dependency: Rules require an allocated segment to function
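The evaluation order described above (priority order, first match wins, disabled rules skipped) can be sketched as follows. This is a minimal illustration, not the Confidence API: the `Rule` shape, the `evaluate` function, and the use of a plain set as a stand-in for a segment are all assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    name: str
    priority: int   # lower priority numbers evaluate first
    enabled: bool   # newly created rules start disabled
    segment: set    # user ids eligible for this rule (stand-in for a real segment)
    variant: str

def evaluate(rules: list[Rule], user_id: str) -> Optional[str]:
    # Priority-based evaluation: lower priority numbers first.
    for rule in sorted(rules, key=lambda r: r.priority):
        if not rule.enabled:
            continue              # disabled rules never match
        if user_id in rule.segment:
            return rule.variant   # first match wins
    return None                   # no rule matched

rules = [
    Rule("beta-users", priority=1, enabled=True, segment={"u1"}, variant="treatment"),
    Rule("everyone", priority=2, enabled=True, segment={"u1", "u2"}, variant="control"),
]
evaluate(rules, "u1")  # -> "treatment": the more specific rule has lower priority
evaluate(rules, "u2")  # -> "control"
```

Note how the more specific rule ("beta-users") is given the lower priority number so it is checked before the general rule, matching the best practice below.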
Rule Composition
- Segment: Defines who is eligible for the rule
- Assignment specification: Defines variant assignments using bucket-based randomization
- Targeting key selector: Specifies which field from the evaluation context to use for randomization (defaults to targeting_key)
- Priority: Determines evaluation order
- Enabled state: Controls whether the rule is active
Best Practices
- Create rules in a disabled state, test thoroughly, then enable
- Use meaningful segment names that clearly describe the target audience
- Put more specific rules before general rules (using priority)
- Use bucket count 100 for percentage-based splits (or 1000 for per-mille precision)
- If the targeting key field is missing or null in the evaluation context, the rule doesn't match (an empty string "" is valid)
Variant Assignment
Bucket Ranges
- bucketCount defines how many buckets to divide users into
- Each assignment specifies one or more bucket ranges (for example, {lower: 0, upper: 50})
- Ranges are inclusive at lower bound, exclusive at upper bound
- Common pattern: Use bucketCount: 100 for percentage-based splits (a 50/50 split is buckets 0-50 and 50-100)
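The bucketing rules above can be sketched as a deterministic hash into a bucket, checked against ranges that are inclusive at the lower bound and exclusive at the upper bound. The SHA-256-based hashing scheme and the function names here are illustrative assumptions, not Confidence's actual algorithm:

```python
import hashlib

def bucket_for(targeting_key: str, salt: str, bucket_count: int = 100) -> int:
    """Deterministically map a targeting key to a bucket in [0, bucket_count)."""
    digest = hashlib.sha256(f"{salt}:{targeting_key}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % bucket_count

def in_range(bucket: int, lower: int, upper: int) -> bool:
    # Inclusive at the lower bound, exclusive at the upper bound.
    return lower <= bucket < upper

# A 50/50 split with bucketCount 100: {lower: 0, upper: 50} vs {lower: 50, upper: 100}.
b = bucket_for("user-42", salt="my-flag")
variant = "A" if in_range(b, 0, 50) else "B"
```

Because the hash is deterministic, the same targeting key always lands in the same bucket for a given flag, which is what keeps a user's variant stable across evaluations.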
Assignment Types
- Variant assignment: Assigns a bucket range to a specific variant (most common use case)
- Use for A/B tests, multivariate tests, rollouts
- Fall-through assignment: Passes assignment to the next matching rule, but logs an assignment event. Suitable for logging which users matched a segment without changing their experience
- Creates complex rule chains where different segments handle different aspects
- Client default assignment: Returns the default values specified by each client. Allows clients to define their own fallback behavior. Suitable for gradual rollouts or feature toggles
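The three assignment types can be contrasted in a small resolver sketch: a variant assignment returns immediately, a fall-through logs an assignment event and continues to the next matching rule, and a client default returns the client's own fallback. The `Assignment` shape and `resolve` function are hypothetical stand-ins, not the Confidence API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assignment:
    kind: str                      # "variant" | "fallthrough" | "client_default"
    variant: Optional[str] = None  # only set for "variant" assignments

def resolve(matched: list[tuple[str, Assignment]], client_default: str) -> str:
    """Walk the assignments of the rules that matched, in priority order."""
    for rule_name, a in matched:
        if a.kind == "variant":
            return a.variant               # bucket range mapped to a specific variant
        if a.kind == "fallthrough":
            # Log an assignment event, then keep evaluating later rules.
            print(f"assignment event: matched {rule_name}")
            continue
        if a.kind == "client_default":
            return client_default          # client-defined fallback behavior
    return client_default

chain = [
    ("logging-segment", Assignment("fallthrough")),
    ("experiment-segment", Assignment("variant", "treatment")),
]
resolve(chain, "control")  # logs the fall-through event, then returns "treatment"
```

This also shows why a fall-through is useful on its own: the "logging-segment" rule records which users matched without changing which variant they ultimately receive.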
Targeting Key Selector
The targetingKeySelector specifies which field from the evaluation context to use for randomization. Common patterns:
- targeting_key (default): General user identifier
- user_id: User-level randomization
- device_id: Device-level randomization (same user, different devices get different variants)
- session_id: Session-level randomization (new variant each session)
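Selector lookup, including the missing/null behavior noted under Best Practices, can be sketched as below. The function name is hypothetical; only the rules it encodes (default selector targeting_key, missing or null means no match, empty string is valid) come from this document:

```python
from typing import Optional

def select_targeting_key(context: dict, selector: str = "targeting_key") -> Optional[str]:
    """Return the randomization key, or None if the rule can't match."""
    value = context.get(selector)
    if value is None:
        return None   # field missing or null: the rule doesn't match
    return value      # note: an empty string "" is a valid key

select_targeting_key({"targeting_key": "u1"})        # -> "u1" (default selector)
select_targeting_key({"device_id": "d9"}, "device_id")  # -> "d9"
select_targeting_key({}, "user_id")                  # -> None: missing, no match
select_targeting_key({"user_id": ""}, "user_id")     # -> "": valid key
```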
Archive Flags
Archive Behavior
When you archive a flag:
- Flag still exists and you can reference it
- Resolve requests return the user-specified default value
- Flag appears as archived in the UI
- Historical data remains available
- You can’t use it in new experiments
- SDKs return default values for all users
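The resolve behavior for archived flags can be summarized in a one-function sketch. The flag representation and function name are assumptions for illustration, not the Confidence SDK:

```python
def resolve_flag(flag: dict, client_default: str) -> str:
    # An archived flag still exists and can be referenced, but resolve
    # requests return the client-specified default value for all users.
    if flag.get("archived"):
        return client_default
    return flag.get("active_variant", client_default)

resolve_flag({"archived": True, "active_variant": "treatment"}, "control")   # -> "control"
resolve_flag({"archived": False, "active_variant": "treatment"}, "control")  # -> "treatment"
```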
When you archive a segment:
- Segment enters the ARCHIVED state
- You can’t use it in new rules
- Existing rules using the segment continue to work
- Confidence preserves historical allocation data
- Archiving frees up coordination space for other segments
When to Archive
Archive flags when:
- Feature has been fully rolled out to all users
- Experiment has concluded and you chose a winner
- Feature is being permanently removed
- Flag is deprecated and no longer needed
Archive segments when:
- Experiment using the segment has ended
- Targeting criteria are no longer relevant
- Consolidating segments
- You created the segment for testing only
Before You Archive
- Check that no active experiments depend on the resource
- Consider removing archived segments from rules (optional)
- Document why you’re archiving the resource
- Notify team members about the archival
After You Archive
- Verify the resource shows as archived
- Check that the change doesn’t affect active experiments
- Update documentation to reflect the change
- Remove feature flag code from application (if applicable)
- Archive related resources (segments for a flag, flags using a segment)
Find Archival Candidates
List flags and segments, then review for:
- Flags/segments with no recent activity
- Resources from completed experiments
- Flags with all rules disabled
- Segments in ALLOCATED state with no active rules
- Test resources no longer in use
Clean Up Application Code
After archiving a flag, remove the flag code from your application and use the winning variant’s behavior directly.
Best Practices
- Archive promptly—don’t let unused resources accumulate
- Document reasons and timing of archival
- Clean up in stages: archive first, then remove code later
- Review regularly for archival candidates
- Coordinate with team when archiving shared resources