# OmegaEngine · Arbitration
Multi-Model Arbitration sends the same prompt to multiple LLMs simultaneously, scores each response against your policies, and returns the one with the lowest risk — automatically.
1. Your prompt is sent to 2–6 models in parallel via the Gateway; responses stream back concurrently.
2. The policy engine evaluates each response for risk score, safety flags, compliance tags, and cost.
3. The response with the best composite score is returned; losing responses are logged for audit.
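The steps above can be sketched in TypeScript. Everything here is an illustrative stand-in: `callModel` and `riskScore` are hypothetical placeholders, not the actual Gateway or policy engine.

```typescript
type Scored = { model: string; response: string; risk: number };

// Hypothetical stand-in for a Gateway call to one model.
async function callModel(model: string, prompt: string): Promise<string> {
  return `${model} answer to: ${prompt}`;
}

// Hypothetical stand-in for the policy engine's risk scoring.
function riskScore(response: string): number {
  return (response.length % 100) / 100; // placeholder heuristic in [0, 1)
}

async function arbitrateSketch(models: string[], prompt: string): Promise<Scored> {
  // Step 1: fan out to all models in parallel.
  const responses = await Promise.all(
    models.map(async (model) => ({ model, response: await callModel(model, prompt) }))
  );
  // Step 2: score each response.
  const scored = responses.map((r) => ({ ...r, risk: riskScore(r.response) }));
  // Step 3: return the lowest-risk response; the losers would be logged for audit.
  return scored.reduce((best, cur) => (cur.risk < best.risk ? cur : best));
}
```

The key design point is that scoring happens per response after the parallel fan-out, so adding a model widens the comparison without serializing the calls.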
Configure how OmegaEngine ranks competing responses:
| Criterion | Description | Default Weight |
|---|---|---|
| `lowest_risk_score` | Prefer the response with the lowest risk score | 0.40 |
| `highest_safety` | Maximize the safety-flag pass rate | 0.25 |
| `lowest_cost` | Prefer cheaper model responses | 0.15 |
| `lowest_latency` | Prefer faster responses | 0.10 |
| `policy_compliance` | Prefer the highest policy-tag match rate | 0.10 |
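For intuition, the default weights can be read as a weighted sum. The sketch below assumes each metric has already been normalized to [0, 1] with higher = better (e.g. risk becomes 1 − risk); the exact normalization OmegaEngine applies is not shown here.

```typescript
// Default weights from the table above; they sum to 1.0.
const DEFAULT_WEIGHTS: Record<string, number> = {
  lowest_risk_score: 0.40,
  highest_safety: 0.25,
  lowest_cost: 0.15,
  lowest_latency: 0.10,
  policy_compliance: 0.10,
};

// Composite score: weighted sum of normalized per-criterion scores.
// Missing criteria contribute 0.
function composite(normalized: Record<string, number>): number {
  return Object.entries(DEFAULT_WEIGHTS).reduce(
    (sum, [criterion, weight]) => sum + weight * (normalized[criterion] ?? 0),
    0
  );
}

// A model that is perfect on every criterion scores ~1.0;
// a model that only halves the risk criterion scores 0.4 * 0.5 = 0.2.
const perfect = composite({
  lowest_risk_score: 1, highest_safety: 1, lowest_cost: 1,
  lowest_latency: 1, policy_compliance: 1,
});
```

Because the weights sum to 1, the composite stays in [0, 1] and weight changes trade off directly against each other.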
```ts
const result = await omega.arbitrate({
  prompt: "Recommend investment strategy for retirement",
  models: ["gpt-5.4", "claude-opus-4.6", "gemini-3.1-pro"],
  policy: "financial-advice-v2",
  selectionCriteria: "lowest_risk_score",
});

// result.selectedModel → "claude-opus-4.6"
// result.response → "Based on your risk tolerance..."
// result.scores → {
//   "gpt-5.4": { risk: 0.72, safety: 0.91, cost: 0.003 },
//   "claude-opus-4.6": { risk: 0.31, safety: 0.98, cost: 0.004 },
//   "gemini-3.1-pro": { risk: 0.45, safety: 0.95, cost: 0.002 },
// }
// result.auditId → "aud_abc123"
```

- Compare model outputs for investment advice, loan decisions, and fraud alerts, then pick the most conservative.
- Validate clinical suggestions across multiple models to reduce hallucination risk.
- Cross-check contract analysis and regulatory interpretation for accuracy.
- Let your agent query multiple models before taking irreversible actions.