OMEGAENGINE · RISK SCORING
OmegaEngine turns every decision into a structured risk object with scores, levels, flags, and human-review signals that you can trust in production.
Each decision response includes a compact, opinionated risk model.
You can use these fields to build dashboards, alerts, routing rules, and approval queues without designing your own risk taxonomy.
Every decision embeds risk signals directly into the JSON:
{
  "summary": "AI agent wants to refund $250 to a VIP customer to prevent churn.",
  "topic": "customer_support",
  "riskLevel": "MEDIUM",
  "riskScore": 42,
  "safetyScore": 91,
  "needsHumanReview": false,
  "ethicsConcern": false,
  "rewardScore": 8,
  "regretScore": 3,
  "probabilityOfSuccess": 0.86,
  "probabilityOfRegret": 0.14,
  "timeHorizon": "SHORT_TERM",
  "policyOutcome": "ALLOW",
  "policyTags": ["customer_refund", "vip_treatment"],
  "recommendedAction": "Approve the $250 refund and log a retention credit."
}

LOW RISK
Routine, reversible, low blast radius. Safe to auto-approve.
MEDIUM RISK
Meaningful but contained risk. Ideal for agent + human hybrid flows.
HIGH RISK
Potential legal, financial, or safety impact. Must go to humans.
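The three tiers above suggest a straightforward routing rule. A minimal sketch in TypeScript, assuming the route names below are your own (they are illustrative, not part of the OmegaEngine API):

```typescript
type RiskLevel = "LOW" | "MEDIUM" | "HIGH";

// Illustrative route names for your approval pipeline.
type Route = "AUTO_APPROVE" | "HYBRID_REVIEW" | "HUMAN_ONLY";

function routeForRiskLevel(level: RiskLevel): Route {
  switch (level) {
    case "LOW":
      return "AUTO_APPROVE"; // routine, reversible, low blast radius
    case "MEDIUM":
      return "HYBRID_REVIEW"; // agent acts, human spot-checks
    case "HIGH":
      return "HUMAN_ONLY"; // legal, financial, or safety impact
  }
}
```

Because `RiskLevel` is a closed union, the switch is exhaustive and the compiler will flag any tier you forget to handle.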
Use risk scores to gate actions, route to humans, or escalate.
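If you consume the raw JSON rather than the SDK, it helps to type the payload. A sketch based on the example response above; the interface name `DecisionResponse` and the scale comments are assumptions, not official SDK exports:

```typescript
type RiskLevel = "LOW" | "MEDIUM" | "HIGH";

// Assumed shape of the decision payload, derived from the example response.
interface DecisionResponse {
  summary: string;
  topic: string;
  riskLevel: RiskLevel;
  riskScore: number; // 0-100 scale assumed from the example (higher = riskier)
  safetyScore: number; // 0-100 scale assumed from the example (higher = safer)
  needsHumanReview: boolean;
  ethicsConcern: boolean;
  rewardScore: number;
  regretScore: number;
  probabilityOfSuccess: number; // 0-1
  probabilityOfRegret: number; // 0-1
  timeHorizon: string; // e.g. "SHORT_TERM"
  policyOutcome: string; // e.g. "ALLOW"
  policyTags: string[];
  recommendedAction: string;
}

// The example response, now type-checked:
const example: DecisionResponse = {
  summary: "AI agent wants to refund $250 to a VIP customer to prevent churn.",
  topic: "customer_support",
  riskLevel: "MEDIUM",
  riskScore: 42,
  safetyScore: 91,
  needsHumanReview: false,
  ethicsConcern: false,
  rewardScore: 8,
  regretScore: 3,
  probabilityOfSuccess: 0.86,
  probabilityOfRegret: 0.14,
  timeHorizon: "SHORT_TERM",
  policyOutcome: "ALLOW",
  policyTags: ["customer_refund", "vip_treatment"],
  recommendedAction: "Approve the $250 refund and log a retention credit.",
};
```

Typing the payload up front lets gating logic like the example below rely on the compiler instead of runtime field checks.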
Node / TypeScript example
import { omegaClient } from "@/sdk/omegaengine";

async function handleRefundRequest(input: {
  amount: number;
  customerTier: "STANDARD" | "VIP";
}) {
  const decision = await omegaClient.decide({
    scenario: "AI support agent wants to refund a customer",
    context: input,
  });

  if (decision.riskLevel === "HIGH" || decision.needsHumanReview) {
    // Send to human queue
    await createApprovalTask({
      type: "REFUND",
      amount: input.amount,
      reason: decision.summary,
      riskScore: decision.riskScore,
    });
    return { status: "PENDING_REVIEW", decision };
  }

  // Auto-approve
  await issueRefund(input.amount);
  return { status: "APPROVED", decision };
}

Want help mapping riskLevel and riskScore to your approval flows?
Talk to our risk team →