EU AI Act Compliance
OmegaEngine aligns with obligations under the EU AI Act (Regulation (EU) 2024/1689), including general-purpose AI (GPAI) requirements, deployer responsibilities, and controls for high-risk AI systems. OmegaEngine acts primarily as a judgment-layer safety component; it does not perform biometric categorization, social scoring, or any other practice prohibited under Article 5.
1. Classification
- OmegaEngine = GPAI component (general-purpose)
- Customers = deployers
- Designed for safe integration into high-risk use cases, including:
  - HR, credit, and lending decisions
  - Healthcare triage decisions
  - Access to essential services
  - Law enforcement (non-biometric)
2. Required Controls Implemented
- Logging & traceability (Art. 12)
- Technical documentation & model cards (Art. 11)
- Post-market monitoring hooks (Art. 72)
- Transparency obligations (Art. 50)
- Data governance: minimization + lawful basis (Art. 10)
- Human oversight assistive features (Art. 14)
- Risk management: structured scoring (Art. 9)
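As a rough illustration of how the logging, traceability, and risk-scoring controls above might surface to an integrator, the sketch below builds a single traceability record. The class and field names are assumptions made for this example, not OmegaEngine's documented API.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

# Hypothetical shape of a traceability record; field names are
# illustrative assumptions, not OmegaEngine's actual schema.
@dataclass
class DecisionRecord:
    record_id: str       # unique ID for cross-referencing receipts
    timestamp: float     # Unix time of the judgment
    policy_version: str  # policy bundle that produced the decision
    input_digest: str    # hash of the input, not the raw payload (data minimization)
    risk_score: float    # structured risk score feeding risk management
    action: str          # e.g. "ALLOW", "REVIEW", "BLOCK"

def make_record(policy_version: str, input_digest: str,
                risk_score: float, action: str) -> DecisionRecord:
    return DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=time.time(),
        policy_version=policy_version,
        input_digest=input_digest,
        risk_score=risk_score,
        action=action,
    )

record = make_record("policy-v3.2", "sha256:ab12...", 0.81, "REVIEW")
print(json.dumps(asdict(record), indent=2))
```

Storing a digest instead of the raw input keeps the log traceable while honoring the data-minimization control listed above.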
3. Deployers using OmegaEngine
Customers integrating OmegaEngine into high-risk systems must:
- Provide human oversight for BLOCK/REVIEW actions
- Maintain internal documentation of use cases
- Ensure a lawful basis for data inputs
- Configure their own retention windows
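A minimal sketch of how a deployer might wire these obligations into an integration, assuming a hypothetical callback-based configuration (none of these names come from OmegaEngine's documentation):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical deployer-side configuration; names are illustrative.
@dataclass
class DeployerConfig:
    retention_days: int                        # deployer-chosen retention window
    on_escalation: Callable[[str, str], None]  # human-oversight hook

def route_decision(action: str, case_id: str, cfg: DeployerConfig) -> str:
    """Escalate BLOCK/REVIEW outcomes to a human reviewer queue."""
    if action in ("BLOCK", "REVIEW"):
        cfg.on_escalation(case_id, action)  # human oversight for risky actions
        return "queued_for_human_review"
    return "auto_processed"

escalations = []
cfg = DeployerConfig(
    retention_days=90,  # deployer sets this to match its own retention policy
    on_escalation=lambda case_id, action: escalations.append((case_id, action)),
)

print(route_decision("REVIEW", "case-001", cfg))  # queued_for_human_review
print(route_decision("ALLOW", "case-002", cfg))   # auto_processed
```

The point of the sketch is the division of responsibility: OmegaEngine emits an action, and the deployer decides retention and supplies the human reviewer path.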
4. Prohibited AI Practices
OmegaEngine **does not** perform, and cannot be configured to perform, any of the following:
- Biometric categorization
- Social scoring
- Manipulation of vulnerable groups
- Emotion inference in workplaces or schools
- Prediction of criminality or protected attributes
5. Documentation & Evidence
OmegaEngine auto-generates:
- Input/output logs
- Policy version applied
- Risk score metadata
- Time-stamped decision receipts
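The evidence items above could combine into a single time-stamped decision receipt. The shape below is a sketch under assumed field names, not OmegaEngine's actual output format; the self-digest is one common way to make such a receipt tamper-evident.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_receipt(input_text: str, output_text: str,
                  policy_version: str, risk_score: float) -> dict:
    """Assemble an illustrative receipt: inputs/outputs are hashed,
    the receipt is timestamped, and a digest binds the fields together."""
    body = {
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "policy_version": policy_version,
        "risk_score": risk_score,
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    # A digest over a canonical serialization makes later edits detectable.
    canonical = json.dumps(body, sort_keys=True).encode()
    body["receipt_sha256"] = hashlib.sha256(canonical).hexdigest()
    return body

receipt = build_receipt("user prompt", "model answer", "policy-v3.2", 0.42)
print(json.dumps(receipt, indent=2))
```

An auditor can recompute the digest from the other fields to confirm the receipt has not been altered since issuance.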