OMEGAENGINE · LOGGING & AUDIT
OmegaEngine logs every decision through a structured RequestLog row: inputs, model, risk signals, policy outcome, probabilities, ethics flags, and audit anchors. This is your real moat: explainable, compliant, production-grade judgment—not just “LLM vibes”.
Every call to /api/decision, /api/commonsense, or related endpoints writes a single row to RequestLog in Postgres. No extra work required.
Core fields
Judgment & risk signals
Policy & ethics layer
Outcome & audit
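The field groups above all land on a single RequestLog row. A hypothetical example row (field names follow the camelCase convention used in the API examples on this page; every value here is purely illustrative):

```json
{
  "id": "log_123",
  "createdAt": "2025-01-15T12:34:56Z",
  "scenario": "refund_request",
  "topic": "payments",
  "riskLevel": "MEDIUM",
  "riskScore": 0.42,
  "safetyScore": 0.91,
  "probabilityOfSuccess": 0.78,
  "probabilityOfRegret": 0.12,
  "policyOutcome": "APPROVED_WITH_REVIEW",
  "needsHumanReview": true,
  "ethicsConcern": false
}
```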
You can query the same data that powers the Logs Control Room via /api/logs. This endpoint is authenticated and rate-limited just like decision endpoints.
Request
GET /api/logs?limit=50&cursor=... HTTP/1.1
Host: omegaengine.ai
Authorization: Bearer OMEGAENGINE_API_KEY
Content-Type: application/json
Example · cURL
curl -X GET "https://omegaengine.ai/api/logs?limit=20" \
  -H "Authorization: Bearer $OMEGAENGINE_API_KEY" \
  -H "Content-Type: application/json"
Example · TypeScript (Node)
const OMEGAENGINE_API_KEY = process.env.OMEGAENGINE_API_KEY!;
type RequestLog = {
  id: string;
  createdAt: string;
  scenario: string | null;
  topic: string | null;
  riskLevel: "LOW" | "MEDIUM" | "HIGH" | null;
  riskScore: number | null;
  safetyScore: number | null;
  probabilityOfSuccess: number | null;
  probabilityOfRegret: number | null;
  policyOutcome: string | null;
  needsHumanReview: boolean | null;
  ethicsConcern: boolean | null;
  // ...plus all other fields
};

async function fetchRecentDecisions(limit = 50): Promise<RequestLog[]> {
  const res = await fetch(
    `https://omegaengine.ai/api/logs?limit=${limit}`,
    {
      headers: {
        "Authorization": `Bearer ${OMEGAENGINE_API_KEY}`,
        "Content-Type": "application/json",
      },
    }
  );
  if (!res.ok) {
    throw new Error(`Failed to fetch logs: ${res.status}`);
  }
  const json = await res.json();
  // Depending on the API version, the body may be wrapped as { data: RequestLog[] }.
  return Array.isArray(json) ? json : (json.data ?? []);
}
fetchRecentDecisions()
  .then((rows) => {
    console.log(
      "Latest high-risk decisions:",
      rows.filter((r) => r.riskLevel === "HIGH")
    );
  })
  .catch(console.error);

Most teams forward RequestLog data into their existing observability stack (Datadog, Splunk, BigQuery, Snowflake, an S3 data lake, etc.).
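The `cursor` parameter on /api/logs supports paging through windows larger than a single response. A minimal sketch, assuming the endpoint returns a `{ data, nextCursor }` shape (verify against the response of your API version):

```typescript
type Page<T> = { data: T[]; nextCursor?: string };

// Lazily yields rows across pages until the server stops returning a cursor.
async function* paginate<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): AsyncGenerator<T> {
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    for (const row of page.data) yield row;
    cursor = page.nextCursor;
  } while (cursor);
}
```

In production, `fetchPage` would wrap the authenticated `fetch` call shown earlier, passing `cursor` along as a query parameter.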
Example pattern · daily export job
// Pseudo-code: run this as a cron to ship OmegaEngine logs
// from /api/logs into your SIEM or data warehouse.
async function exportOmegaLogsSince(timestamp: string) {
  const res = await fetch(
    `https://omegaengine.ai/api/logs?limit=500&since=${encodeURIComponent(timestamp)}`,
    {
      headers: {
        "Authorization": `Bearer ${process.env.OMEGAENGINE_API_KEY}`,
        "Content-Type": "application/json",
      },
    }
  );
  if (!res.ok) {
    throw new Error(`Failed to fetch logs: ${res.status}`);
  }
  const data = await res.json();
  const rows = Array.isArray(data) ? data : (data.data ?? []);

  // 1) Normalize / map only the fields you care about
  const events = rows.map((row) => ({
    ts: row.createdAt,
    scenario: row.scenario,
    topic: row.topic,
    risk_level: row.riskLevel,
    risk_score: row.riskScore,
    safety_score: row.safetyScore,
    policy_outcome: row.policyOutcome,
    needs_human_review: row.needsHumanReview,
    ethics_concern: row.ethicsConcern,
    model: row.model,
    latency_ms: row.latencyMs,
    tokens_used: row.tokensUsed,
    endpoint: row.endpoint,
    client_tag: row.clientTag,
  }));

  // 2) Push into your SIEM / warehouse of choice
  await sendToDatadog(events);
  // or sendToSplunk(events), sendToS3(events), sendToSnowflake(events)...
}

If you want help wiring OmegaEngine logs into your specific compliance / audit stack, talk to us and we'll ship a tailored integration.
Anyone can wire an LLM to an agent. Very few teams can show structured, explainable, policy-aware judgment logs for every AI decision in production.