In brief
On 27 January 2026 the Financial Conduct Authority (FCA) launched the Mills Review to examine the long-term impact of AI on financial services. Led by Sheldon Mills, this initiative invites industry feedback to help shape how AI might transform consumer experiences, market structures, and regulatory approaches in retail financial services. The call for input closes on 24 February, following which Mills will present recommendations to the FCA board in the summer, culminating in an external publication to foster informed debate.
In more detail
The Mills Review
The review focuses on four main areas:
- AI technology evolution: By 2030, AI systems are expected to become more autonomous and adaptive, with agentic AI increasingly capable of independent decision-making and continuous learning. In addition, multimodal AI, which integrates text, speech, images and other data, could radically alter customer journeys and service delivery. The FCA is keen to understand how firms currently expect to use AI systems within their business, and their plans for how that use will evolve.
- Impact on markets and firms: AI is already generating efficiencies across core activities within financial services firms, and its impact is set to grow. As customers delegate decisions to AI, competition among financial services firms could increase through the development of novel value propositions; however, new forms of market power may also emerge, for example if certain AI providers favour particular financial services firms. Firms will need to adapt to maintain relevance and competitiveness. The FCA is keen to understand the potential impact on, among other things, market structure and customer relationships, and to explore whether AI systems provide services functionally equivalent to regulated activities such as advice or intermediation while remaining outside the regulatory perimeter.
- Consumer trends: Consumers may increasingly interact with financial services via AI interfaces, with expectations likely to shift towards highly personalised, automation-driven financial journeys alongside reduced tolerance for friction, poor value or opaque outcomes. While this has the potential to improve financial outcomes, it also raises concerns about reliance on unregulated advice, vulnerability to mis-selling, model bias or hallucination risks, and AI-driven fraud. The FCA is keen to hear views on both the opportunities and the challenges this presents.
- Regulatory approach: The FCA intends to remain an outcomes-focused regulator, leveraging existing frameworks such as the Consumer Duty and the Senior Managers and Certification Regime (SMCR) to ensure flexibility and compliance. The regulator is seeking input on how it can take advantage of the opportunities AI presents, but also, among other things, on whether existing frameworks are suitable for the regulation of AI. For example, the FCA will assess how relevant senior managers under the SMCR can continue to discharge their responsibilities for the deployment and maintenance of AI systems, and whether Consumer Duty expectations should be revised to account for the impact of AI.
Key considerations for AI use in retail financial services
Use of AI to date
To date, AI has been deployed in the financial services sector for a variety of purposes, including assisting with creditworthiness assessments, enhancing customer due diligence and financial crime monitoring (such as fraud detection), and providing customer support interfaces such as online chatbots. Each of these use cases introduces distinct risks, however, and requires a careful assessment of data protection issues. For example, AI-driven credit assessments can inadvertently perpetuate bias if historical data contains underlying prejudices. Similarly, AI chatbots may occasionally provide unclear or inaccurate responses, an issue which may be particularly acute for vulnerable customers. The FCA is aware of these risks, as highlighted by Sheldon Mills in his speech launching the review. As AI becomes more advanced and autonomous, these risks could increase.
There are also challenges for firms in applying their responsibilities and obligations under the SMCR to the adoption and use of AI, which may have resulted in slower uptake than might otherwise have been the case. Given the FCA's position that senior managers should maintain oversight of AI deployment, for example, the lack of "explainability" of many AI models could be seen to conflict with the SMCR's requirement that senior managers demonstrate they understand and control risks within their areas of responsibility.
The FCA's evolving approach
At present, it appears that the FCA intends to maintain its current approach to regulating AI, refraining from introducing additional or bespoke rules specifically for AI. Instead, the FCA will continue to rely on existing principles-based and outcomes-focused regulatory frameworks to guide firms' adoption and use of AI, giving firms a degree of flexibility in their approach. However, this approach is not without challenges: the FCA's supervisory approach has been reactive, leaving firms with little practical clarity or guidance on how to apply existing regulatory frameworks to their AI deployment, and pushing the burden onto firms to form their own position.
We may, however, see this approach evolve in the coming months, given the suggestion in a recent House of Commons Treasury Committee report (published a week before the launch of the Mills Review) that this lack of clarity from the FCA may have contributed to a chilling effect on AI adoption. In response, the Treasury Committee has recommended that the FCA should issue guidance on the application of its existing regulatory frameworks to AI usage, that HM Treasury should designate major AI and cloud providers as critical third parties under the Critical Third Parties regime (giving the regulators greater supervisory control over these providers), and that the Bank of England and the FCA should conduct AI-specific stress testing. The Treasury Committee expects to see action on the first two of these recommendations by the end of 2026.
Going forward
It is evident that AI will play an increasingly prominent role in financial services. In particular, well-designed and robust AI tools can drive efficiency and deliver cost savings, helping firms meet the Consumer Duty's expectations around price and value. However, firms must ensure that any AI tools or services they adopt are thoroughly evaluated against the FCA's evolving regulatory expectations.
Firms that are especially reliant on AI, or that plan to increase their use of such technologies, should consider responding to the FCA's Call for Input. The findings of the Mills Review will influence how the FCA interprets and applies regulatory requirements to AI tools and software, providing an opportunity for stakeholders to help shape the future regulatory landscape.