FCP 2026: 4 Crucial Topics for the Future of Financial Crime Prevention
The stakes are high: financial institutions collectively spend billions each year on combating financial crime.
In the Netherlands alone, more than €1.4 billion is spent annually on money laundering prevention, with an additional €1 billion in administrative burdens for companies and individuals [1]. These costs highlight not only the scale of the challenge, but also the urgent need for technological solutions, such as AI, that are both effective and efficient.
With the rapid rise of artificial intelligence (AI) and increasingly complex regulatory expectations, the field is changing at an unprecedented pace. Keeping up with new developments demands both the right technology and a solid grasp of its principles to maintain fairness, transparency, and effectiveness.
To explain these developments and how to integrate them effectively, we’re launching a new blog series on the future of FCP, exploring how organizations can balance innovation with responsibility. We will dive into the most pressing challenges in FCP modeling and show how AI and machine learning are shaping its development. Along the way, we’ll share insights on building trust and reliability in next-generation models, along with practical tips, discussion points, and industry perspectives for those working with these emerging technologies.
What to Expect
Each post in this four-part series will focus on a topic that is crucial to the present and future of the FCP domain. The posts follow the typical lifecycle of model development, starting with development and concluding with validation:
- Bias and Fairness: AI models are powerful, but they don’t always treat everyone equally, as seen in the “Toeslagenaffaire” (childcare benefits scandal) in the Netherlands or the COMPAS case in the United States. In high-stakes domains like FCP, the consequences can be just as severe. In the rush to make AI models “fair”, many organizations fall into the trap of chasing strict statistical parity at the expense of performance and context. In FCP, this tension is especially acute: regulators push for ever-higher recall, while fairness requirements demand balanced precision across different groups. The result? Conflicting objectives, artificial “levelling down,” and potential compliance risks under GDPR. This article challenges the conventional bias-mitigation mindset and advocates a more mature, context-aware approach to fairness.
- Explainability: AI and ML models go hand in hand with the challenge of explainability. These technologically advanced models are frequently described as black boxes: powerful, yet difficult to interpret. In practice, model developers often outsource the responsibility for model interpretability to SHAP or similar feature-importance methods, and call the job done. But does this really provide meaningful insight into how a model behaves? In FCP, explainability must go beyond technical metrics; it should focus on the people who use these models every day. The real value of AI in this space often lies in how it supports analysts – helping them interpret alerts, make informed decisions, and justify their actions. This article explores how shifting from model-centric to human-centric explainability can transform AI from a black box into a trusted, collaborative partner in the fight against financial crime.
- Model Drift and Data Drift: The financial world is always in motion. As technologies evolve and customer behaviour shifts, models built to detect financial crime can quickly become outdated. This article explores the importance of monitoring data drift (changes in input patterns) and model drift (changes in the relationships between inputs and outcomes). From building models that can withstand shifting behaviour to setting smart thresholds and review cycles, it highlights how banks can keep their FCP models relevant and reliable.
- The Model Validation Process: Model validation in FCP has long followed the same framework as credit risk – a framework often characterized by manual, time-consuming, and inflexible processes. With criminals constantly adapting their tactics and regulators becoming ever more demanding, this traditional approach is no longer sustainable. This article explores how generative AI can transform model validation without replacing human expertise. When implemented with care and oversight, AI elevates validators, ensuring their skills are applied where they matter most and keeping FCP programs agile in a rapidly evolving landscape.
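To make the fairness tension above concrete: a first diagnostic step is simply measuring precision and recall per group, so the trade-off between regulator-driven recall and group-balanced precision becomes visible. The sketch below is a minimal, illustrative example in plain Python; the function name, the groups “A”/“B”, and the toy sample are assumptions for demonstration, not a prescribed methodology.

```python
from collections import defaultdict

def group_metrics(records):
    """Per-group precision and recall from (group, y_true, y_pred) triples.

    A large gap between groups is a first signal of the fairness tension
    discussed above, not a verdict on its own: context matters.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_pred and y_true:
            c["tp"] += 1       # alerted, truly suspicious
        elif y_pred and not y_true:
            c["fp"] += 1       # alerted, but a false alarm
        elif y_true:
            c["fn"] += 1       # missed a truly suspicious case
    metrics = {}
    for group, c in counts.items():
        alerted = c["tp"] + c["fp"]
        actual = c["tp"] + c["fn"]
        metrics[group] = {
            "precision": c["tp"] / alerted if alerted else None,
            "recall": c["tp"] / actual if actual else None,
        }
    return metrics

# Tiny illustrative sample: (group, true label, model alert)
sample = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 0, 0), ("B", 0, 1),
]
print(group_metrics(sample))
```

On this toy sample, group B ends up with perfect recall while group A misses half its true cases: exactly the kind of disparity that a naive push for parity might “fix” by levelling performance down rather than up.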
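The data-drift monitoring mentioned above can be quantified with a standard statistic such as the Population Stability Index (PSI), which compares the score distribution at model approval with the distribution seen in production. The sketch below is a self-contained illustration; the binning scheme, the small floor for empty bins, and the 0.1/0.25 thresholds in the docstring are common rules of thumb that each institution would tune, not regulatory standards.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples.

    Common rule of thumb (an assumption, tune per institution):
    < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative data: a uniform score baseline vs. a shifted recent sample.
baseline = [i / 100 for i in range(100)]
recent = [min(1.0, i / 100 + 0.2) for i in range(100)]
print(round(psi(baseline, recent), 3))
```

Running the same statistic on a schedule (e.g. monthly, per feature and per model score) is one concrete way to implement the “smart thresholds and review cycles” the drift post will discuss.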
By progressing through these topics in order – from identifying risks of bias, to ensuring transparency, to monitoring drift, and finally to examining validation frameworks – the series builds a clear narrative that links principles to practice. Responsible innovation is not just about ticking boxes on a regulatory checklist; it’s about creating an FCP framework that people can trust. These blogs offer insights and perspectives for making this happen.
Looking Ahead
This series of upcoming blog posts is just the beginning. Each topic opens the door to deeper discussions, case studies, and practical applications. Whether you’re a risk manager, data scientist, compliance officer, or executive, the series provides insights into the challenges and practical approaches that financial institutions can take to address them.
Citations
- [1] Grootbanken voorzien verlies van duizenden banen bij witwascontroles (“Major banks foresee the loss of thousands of jobs in anti-money-laundering checks”), Financieel Dagblad, October 2nd