The UK Gambling Commission (UKGC) has issued a serious warning about the use of artificial intelligence in anti-money laundering (AML) systems across the iGaming industry. As more operators embrace AI to automate monitoring and risk detection, the regulator is seeing signs of overconfidence — and underperformance.
According to the UKGC, some gambling firms are implementing AI and behavioural models that generate customer risk scores without fully understanding how those algorithms work. In multiple cases, algorithms required long periods of data accumulation before producing a score — a delay that allowed high-risk transactions to occur undetected. Even more concerning: operators often couldn’t explain why a model triggered a flag — or failed to.
These gaps in explainability and oversight are especially problematic given that many AI tools are delivered by external vendors. The regulator noted instances where consultants had inserted incorrect logic or risk definitions into compliance documents, undermining the reliability of AML systems.
The UKGC has made clear that while AI can support compliance, it cannot replace clear governance. Operators will now be required to explain algorithmic logic, weightings, and thresholds — and the Commission plans to test these models against real-world cases. Transparency, traceability, and human supervision are now essential criteria.
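What "explainable logic, weightings, and thresholds" can look like in practice is sketched below. The factor names, weights, and cut-off are invented for illustration (the UKGC does not mandate any particular values); the design point is that every score decomposes into named, weighted contributions a compliance team can point to.

```python
# Hypothetical transparent risk model: named factors, explicit weights, explicit
# threshold — so a team can state exactly why a customer was (or was not) flagged.
WEIGHTS = {"deposit_velocity": 0.5, "country_risk": 0.3, "payment_method": 0.2}
FLAG_THRESHOLD = 0.7  # illustrative cut-off, not a regulator-mandated value


def score_customer(factors: dict[str, float]) -> tuple[float, dict[str, float]]:
    """Return the overall score plus each factor's weighted contribution."""
    contributions = {name: WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS}
    return sum(contributions.values()), contributions


score, breakdown = score_customer(
    {"deposit_velocity": 0.9, "country_risk": 0.8, "payment_method": 0.1}
)
print(round(score, 2))                          # 0.71
print(score >= FLAG_THRESHOLD)                  # True — and the breakdown says why
print(round(breakdown["deposit_velocity"], 2))  # 0.45: velocity drove most of the flag
```

A model structured this way can answer the regulator's core question — why did this flag fire? — directly from its own arithmetic, which opaque vendor-supplied models often cannot.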
The Commission will also assess whether written AML policies align with system configurations. In many firms, that consistency is still missing.
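The policy-versus-configuration check the Commission describes can be automated in a few lines. The parameter names and values below are hypothetical; the idea is to diff the thresholds written in the AML policy document against what the live monitoring system is actually configured with.

```python
# Hypothetical consistency check: values stated in the written AML policy
# versus values actually deployed in the monitoring system's configuration.
policy = {"deposit_alert_gbp": 2000, "review_score": 0.7}    # from the policy document
deployed = {"deposit_alert_gbp": 5000, "review_score": 0.7}  # from the live config

mismatches = {
    key: (policy[key], deployed.get(key))
    for key in policy
    if policy[key] != deployed.get(key)
}
print(mismatches)  # {'deposit_alert_gbp': (2000, 5000)} — policy says one thing, system another
```

Running a check like this on every configuration change would surface exactly the gaps the UKGC says many firms still have.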
As AI becomes a foundational layer in modern iGaming operations, this development is a stark reminder: intelligence without accountability introduces risk. For AI to enhance compliance, operators must pair it with structured governance, internal education, and ongoing validation.