Frequently Asked Questions

Find answers to common questions about DIFC Regulation 10 certification and our MISSION+ methodology.

What is DIFC Regulation 10?

DIFC Regulation 10 establishes the AI System Certification Framework for any organisation developing, deploying, or operating High-Risk Processing Systems in the DIFC. It ensures AI is built and operated with transparency, fairness, accountability, human oversight, and safety. Companies using AI for scoring, classification, automation, decision-making, or processing personal data may be required to certify some or all of their systems. MISSION+ acts as an Accredited Certification Body (ACB) to review, assess, and certify these systems.
What qualifies as a High-Risk Processing System?

A system is considered High-Risk if it:
- Processes personal or sensitive data
- Uses autonomous or semi-autonomous algorithms
- Produces decisions with material impact on individuals or organisations
- Could cause significant harm if misused or breached

Examples:
- Credit scoring and lending models
- Employee screening and hiring systems
- Insurance underwriting models
- Fraud detection systems
- Autonomous decision engines
What are the autonomy levels, and which require certification?

- A1 - Assisted: Suggests actions only
- A2 - Human-in-the-Loop: Proposes outputs; a human approves
- A3 - Policy-Bounded Autonomy: Acts automatically within guardrails
- A4 - Fully Autonomous: Executes without real-time human oversight

Only A3 and A4 typically fall under Regulation 10's certification regime. MISSION+ categorises the system during Phase 1 of the assessment.
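As an illustration only, the four levels can be encoded as a simple decision rule. This is a minimal sketch with made-up input flags (`acts_automatically`, `human_approves_each_output`, `bounded_by_policy_guardrails`); the actual Phase 1 categorisation is a qualitative assessment, not a three-flag test.

```python
from enum import Enum

class AutonomyLevel(Enum):
    A1_ASSISTED = "A1"           # Suggests actions only
    A2_HITL = "A2"               # Proposes outputs; a human approves
    A3_POLICY_BOUNDED = "A3"     # Acts automatically within guardrails
    A4_FULLY_AUTONOMOUS = "A4"   # Executes without real-time human oversight

def categorise(acts_automatically: bool, human_approves_each_output: bool,
               bounded_by_policy_guardrails: bool) -> AutonomyLevel:
    # Toy triage from three yes/no answers. The real assessment is a
    # qualitative review; these flags are illustrative, not regulatory terms.
    if not acts_automatically:
        return (AutonomyLevel.A2_HITL if human_approves_each_output
                else AutonomyLevel.A1_ASSISTED)
    return (AutonomyLevel.A3_POLICY_BOUNDED if bounded_by_policy_guardrails
            else AutonomyLevel.A4_FULLY_AUTONOMOUS)

level = categorise(acts_automatically=True, human_approves_each_output=False,
                   bounded_by_policy_guardrails=False)
certifiable = level in (AutonomyLevel.A3_POLICY_BOUNDED,
                        AutonomyLevel.A4_FULLY_AUTONOMOUS)
print(level.value, "-> certification regime likely applies" if certifiable else "")
```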
Who must comply?

Any organisation operating in DIFC that deploys or develops High-Risk AI systems must comply, including:
- Banks and financial institutions
- Fintechs
- Insurers and brokers
- HR tech platforms
- Healthcare systems
- Logistics and mobility platforms
- Enterprise AI builders

MISSION+ provides Pre-Assessment Scoping to confirm whether certification applies.
What documentation will MISSION+ request?

MISSION+ performs a structured Documentation Intake (Phase 1.2), including:
- AI Governance Framework
- AI Risk Management Framework
- Data Protection Impact Assessment / AI-DPIA
- Model Cards & Dataset Cards
- System architecture diagrams
- Risk registers (operational, privacy, cyber, algorithmic)
- Human Oversight (HITL) procedures
- ASO appointment letter & training records
- Data lineage documentation
- Internal control and audit records

A full Document Verification Checklist is included in Appendix A (MISSION+ methodology).
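For teams unfamiliar with model cards, the sketch below shows what a minimal one might contain. The field names follow common model-card practice rather than any schema prescribed by Regulation 10, and every value is hypothetical.

```python
# A minimal illustrative Model Card, rendered as a Python dict. The structure
# and all values are assumptions for illustration only, not a Regulation 10
# or MISSION+ template.
model_card = {
    "model_name": "credit-scoring-v3",          # hypothetical system
    "version": "3.2.0",
    "intended_use": "Consumer credit limit decisions within DIFC",
    "autonomy_level": "A3",                     # see autonomy levels above
    "training_data": {
        "dataset_card": "datasets/credit_2024.md",
        "lineage": "warehouse -> feature store -> train split",
    },
    "performance": {"auc": 0.91, "evaluated_on": "holdout_2024Q4"},
    "fairness_tests": ["disparate_impact", "equal_opportunity"],
    "human_oversight": "HITL review for declines; ASO escalation path",
    "limitations": "Not validated for SME lending",
}
```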
What methodology does MISSION+ follow?

MISSION+ combines:
- NIST AI RMF (Govern-Map-Measure-Manage)
- The MISSION+ How We Work philosophy (execution-first, evidence-led)
- DIFC Regulation 10 certification requirements

Our approach is:
- Evidence-based, not checkbox-based
- Product-led, with deep TEVV (Testing, Evaluation, Verification, Validation)
- Human-centric, validating HITL and ASO roles
- Risk-based, prioritising systems with real-world impact

The full nine-phase methodology runs from Pre-Audit → Testing → Certification → Monitoring → Recertification.
What testing does MISSION+ perform?

MISSION+ conducts comprehensive TEVV testing aligned with NIST guidance and Regulation 10, using statistically valid sample sizes informed by NIST SP 1270, NIST SP 800-53, and the AI RMF MEASURE function. A worked sample-size sketch follows below.
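Neither Regulation 10 nor the NIST documents cited above prescribe a single formula, but as an illustration, a common way to size a sample for estimating an error or bias rate at a given confidence level and margin of error is the classic proportion formula, sketched here:

```python
import math

def sample_size(confidence_z: float = 1.96, margin: float = 0.05,
                p: float = 0.5) -> int:
    """Classic sample size for estimating a proportion.

    n = z^2 * p * (1 - p) / e^2
    confidence_z: z-score for the confidence level (1.96 ~ 95%)
    margin:       acceptable margin of error
    p:            expected proportion (0.5 is the conservative worst case)

    Illustrative only -- this exact formula is an assumption, not a
    Regulation 10 or NIST mandate.
    """
    return math.ceil(confidence_z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())             # 385 decisions at 95% confidence, +/-5%
print(sample_size(margin=0.02))  # 2401 decisions at 95% confidence, +/-2%
```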
What are the requirements for the ASO?

The ASO must be:
- Appointed before the AI system is operational
- Independent from the development team
- Competent in AI risk, governance, and oversight
- Able to access all logs, evidence, and incidents
- Responsible for monitoring, reporting, and escalation

MISSION+ validates ASO competency and independence during Phase 3 - Governance Review.
How long does certification take?

The regulation sets a 90-day maximum from the date the application is complete. MISSION+ typically follows:
- Pre-Assessment & Scoping: 1-2 weeks
- Evidence Intake & System Mapping: 2-4 weeks
- Testing & Technical Assurance (TEVV): 3-6 weeks
- Reporting & Decision: 2-3 weeks

Total: 6-12 weeks, depending on the completeness of documentation and system complexity.
What are the possible certification outcomes?

MISSION+ may issue:
- Certification Granted
- Certification Granted with Conditions
- Certification Deferred (Pending Remediation)
- Certification Refused

Certification is valid for 5 years, subject to ongoing monitoring and annual attestation.
What does post-certification monitoring involve?

MISSION+ conducts:
- Quarterly drift monitoring
- Annual attestation reviews
- Review of incidents, complaints, and model updates
- Triggered reassessments if major changes occur
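As one illustration of what quarterly drift monitoring can look like in practice, the sketch below computes the Population Stability Index (PSI) between a certification-time score distribution and the current quarter's. PSI and its conventional thresholds are a common industry technique, assumed here for illustration; Regulation 10 does not mandate a specific drift metric.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a current sample.

    Rule of thumb (an industry convention, not a regulatory threshold):
    < 0.1 stable, 0.1-0.25 watch, > 0.25 significant shift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at certification time
current = rng.normal(0.3, 1.0, 10_000)   # this quarter's scores (shifted)
print(f"PSI = {psi(baseline, current):.3f}")  # > 0.1 would flag for review
```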
When is reassessment required?

Reassessment is required when:
- A model or dataset changes materially
- A new high-risk use case is added
- Governance processes change
- Significant incidents or breaches occur
- Regulators request a review

Every 5 years, full re-certification is mandatory.
When can certification be suspended or revoked?

Suspension or revocation occurs if:
- High-risk issues remain unresolved
- Evidence of harm or misuse emerges
- Governance failures occur
- The system materially deviates from certified behaviour
- Misleading claims about certification are made

The ACB must notify the Commissioner within 30 days.
How can MISSION+ help?

MISSION+ offers:
- Pre-assessment and eligibility confirmation
- Gap analysis and readiness assessment
- Documentation support (model cards, governance packs, DPIAs)
- Technical testing and evidence generation
- Certification audit and decision
- Post-certification monitoring, attestation, and re-certification preparation

We also offer Lite, Pro, and Premium packages for readiness and certification support.
How should we prepare for certification?

We recommend the following steps:
1. Conduct an internal AI & Data Governance review
2. Establish HITL and ASO oversight
3. Document all models, datasets, risks, and decisions
4. Ensure transparency artefacts (model cards, DPIAs, risk registers) exist
5. Conduct internal TEVV tests before applying

MISSION+ can conduct a 1-week readiness check to assess maturity.
What are the most common gaps found in audits?

Based on MISSION+ audits:
- Missing or incomplete model & dataset documentation
- No AI-DPIA or insufficient risk assessments
- Weak human oversight or no HITL logs
- Inadequate bias & fairness testing
- No incident response or safety controls
- ASO not appointed or not independent
- Gaps in data lineage or governance documentation
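To make "bias & fairness testing" concrete, here is a minimal sketch of one widely used check, the disparate impact (selection-rate) ratio, on hypothetical approval decisions. The four-fifths (0.8) threshold comes from US employment guidance and is assumed here purely for illustration; Regulation 10 does not prescribe specific fairness metrics.

```python
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the comparison group's selection rate to the reference group's.

    The 0.8 ("four-fifths") threshold is a common convention, used only as
    an illustration -- it is not a Regulation 10 requirement.
    """
    rate_a = approved[group == "A"].mean()  # reference group selection rate
    rate_b = approved[group == "B"].mean()  # comparison group selection rate
    return rate_b / rate_a

# Hypothetical decisions: 1 = approved, 0 = declined
approved = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group    = np.array(["A", "A", "A", "A", "A", "A",
                     "B", "B", "B", "B", "B", "B"])
ratio = disparate_impact_ratio(approved, group)
print(f"DI ratio = {ratio:.2f}", "-> flag for review" if ratio < 0.8 else "")
```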
Do third-party or embedded AI models require certification?

Yes, if they are:
- Embedded in a high-risk system
- Producing decisions that materially affect individuals
- Performing automated scoring, classification, or recommendations
Can we remediate gaps and still achieve certification?

Yes. Remediation must be completed within 12 months, per Regulation 10.
How do we get started?

You can begin with a free Pre-Assessment Consultation, in which MISSION+:
- Confirms eligibility
- Identifies system boundaries
- Provides an Audit Plan & Testing Strategy
- Estimates cost and timeline
How do I submit a complaint about a Certified System?

MISSION+ maintains a publicly available and simple complaints-handling procedure, as required under Regulation 10. Any individual, data subject, organisation, or stakeholder may submit a complaint if they believe:
- A Certified System is not operating as certified
- There is a data protection or fairness concern
- There has been a breach of obligations under Regulation 10
- Their personal data has been affected by the certified AI system

Complaints can be submitted in writing to our dedicated mailbox: reg10-complaints@mission.plus
Who can submit a complaint?

MISSION+ accepts complaints from:
- Data subjects whose personal data has been processed by a Certified System
- Customers, partners, or stakeholders of the Certification Applicant (CA)
- Regulators or government bodies
- Employees or insiders raising concerns
- Any individual or entity identifying a potential infringement

MISSION+ treats all complaints seriously and equally.
What happens after I submit a complaint?

When MISSION+ receives a complaint, we:
1. Acknowledge receipt within 14 days, naming the investigator assigned to the case
2. Begin an independent investigation following our internal procedures
3. Provide either a full response within 14 days, or a timeline for when the full investigation will conclude
4. Aim to provide a written final response within eight weeks, except in complex cases

Full details are set out in our Complaints Policy and Procedure. MISSION+ does not charge any fee for submitting or processing a complaint.

Still have questions? We're here to help.

Contact Our AI Expert