OmegaEngine · Feedback & RLHF
Feedback Grading lets operators review OmegaEngine decisions and submit corrections. Each grade feeds a continuous learning loop that recalibrates risk scores, tunes policy weights, and reduces false positives over time.
1. OmegaEngine evaluates a request and returns a decision with a risk score, policy outcome, and audit trail.
2. A human reviews the decision and marks it as correct, incorrect, or needs-review, with optional notes.
3. Grades feed into Bayesian recalibration, adjusting risk thresholds and policy weights for future decisions.
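The recalibration step can be pictured with a minimal Beta-Bernoulli sketch. This is illustrative only: OmegaEngine does not document its internal math, and the field names, prior, and threshold rule below are all assumptions.

```typescript
// Illustrative sketch only -- not OmegaEngine's documented algorithm.
// Each grade is treated as a Bernoulli observation of decision correctness,
// tracked with Beta pseudo-counts; the risk threshold is nudged when the
// posterior accuracy drops (i.e., too many false positives).
interface Calibration {
  alpha: number;     // pseudo-count of CORRECT grades
  beta: number;      // pseudo-count of INCORRECT grades
  threshold: number; // risk score above which requests are blocked
}

function applyGrade(cal: Calibration, grade: "CORRECT" | "INCORRECT"): Calibration {
  const alpha = cal.alpha + (grade === "CORRECT" ? 1 : 0);
  const beta = cal.beta + (grade === "INCORRECT" ? 1 : 0);
  // Posterior mean accuracy under a Beta(alpha, beta) posterior.
  const accuracy = alpha / (alpha + beta);
  // Raise the threshold slightly while accuracy is poor; hold it otherwise.
  const threshold = accuracy < 0.9 ? Math.min(cal.threshold + 0.01, 1) : cal.threshold;
  return { alpha, beta, threshold };
}

// Usage: one INCORRECT grade shifts the posterior and the threshold.
let cal: Calibration = { alpha: 1, beta: 1, threshold: 0.7 };
cal = applyGrade(cal, "INCORRECT");
```

The point of the sketch is the shape of the loop, not the constants: each grade updates a posterior over decision quality, and the threshold moves only when the evidence warrants it.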
CORRECT: The decision was accurate. Reinforces the current policy weights and risk thresholds.
INCORRECT: The decision was wrong. Triggers recalibration with the provided correct outcome.
NEEDS_REVIEW: The decision was ambiguous. Flags it for senior review and excludes it from auto-calibration.
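The three grades map to string literals in the SDK calls below. A hypothetical TypeScript shape for the grade payload, inferred from the examples (the optional fields and the validation rule are assumptions, not a documented schema):

```typescript
// Inferred from the SDK examples; anything beyond decisionId and grade
// is an assumption about which fields are optional.
type Grade = "CORRECT" | "INCORRECT" | "NEEDS_REVIEW";

interface GradeInput {
  decisionId: string;
  grade: Grade;
  correctOutcome?: "ALLOW" | "BLOCK"; // assumed required when grade is "INCORRECT"
  reason?: string;
  graderId?: string;
}

// Client-side sanity check before submitting a grade.
function validateGrade(g: GradeInput): string[] {
  const errors: string[] = [];
  if (!g.decisionId) errors.push("decisionId is required");
  if (g.grade === "INCORRECT" && !g.correctOutcome) {
    errors.push("correctOutcome is required for INCORRECT grades");
  }
  return errors;
}
```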
// Grade a single decision
await omega.feedback.grade({
  decisionId: "dec_abc123",
  grade: "INCORRECT",
  correctOutcome: "ALLOW",
  reason: "False positive — vendor was already approved in Q4 review",
  graderId: "operator@company.com",
});

// Bulk feedback for training pipelines
await omega.feedback.bulkGrade([
  { decisionId: "dec_001", grade: "CORRECT" },
  { decisionId: "dec_002", grade: "CORRECT" },
  { decisionId: "dec_003", grade: "INCORRECT", correctOutcome: "BLOCK" },
]);
// Returns:
// { processed: 3, accepted: 3, calibrationTriggered: true }

Track grading coverage and policy improvement over time:
Grading coverage: Percentage of decisions that have been graded. Target: above 80% for high-risk domains.
Decision accuracy: Rolling 30-day accuracy of OmegaEngine decisions versus human grades. Shows calibration impact.
Inter-rater agreement: Inter-rater reliability across your grading team. Highlights ambiguous policy areas.
Grading latency: Median time from decision to grade. Faster grading means faster recalibration cycles.
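The first two metrics are straightforward to compute locally from graded decisions. A sketch, assuming a simple record shape (the fields below are an assumption, not the documented export format):

```typescript
// Assumed record shape for a decision plus its (optional) human grade.
interface GradeRecord {
  decisionId: string;
  engineOutcome: "ALLOW" | "BLOCK";
  grade?: "CORRECT" | "INCORRECT" | "NEEDS_REVIEW"; // absent if not yet graded
}

// Fraction of decisions that have received any grade at all.
function gradingCoverage(records: GradeRecord[]): number {
  if (records.length === 0) return 0;
  const graded = records.filter((r) => r.grade !== undefined).length;
  return graded / records.length;
}

// Fraction of graded decisions marked CORRECT. NEEDS_REVIEW is excluded,
// mirroring its exclusion from auto-calibration.
function decisionAccuracy(records: GradeRecord[]): number {
  const scored = records.filter((r) => r.grade === "CORRECT" || r.grade === "INCORRECT");
  if (scored.length === 0) return 0;
  return scored.filter((r) => r.grade === "CORRECT").length / scored.length;
}
```

In practice these would be computed server-side over a rolling window; the sketch just makes the denominators explicit (all decisions for coverage, scored decisions only for accuracy).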
Access the full feedback dashboard to review pending decisions, submit grades, view calibration metrics, and export training data for fine-tuning.