The application of artificial intelligence (AI) in professional regulation is accelerating, particularly in areas involving high-volume decision-making, such as fitness to practise (FtP). Regulators are exploring AI tools to improve consistency, efficiency, and oversight. However, the use of AI in FtP processes raises critical ethical, legal, and human rights questions. This briefing outlines where AI might offer value, identifies key risks, and provides recommendations for responsible regulatory use.
Where AI could add value
As it stands, sole reliance on AI would carry substantial risk; however, AI could be used to support the FtP process and regulatory oversight. Examples include:
- Case triage and prioritisation: Algorithms can help prioritise high-risk cases against pre-set criteria, improving speed and resourcing (a sketch of rule-based triage follows this list).
- Decision-support tools: AI can provide panels with data on precedent and comparable sanctions to support consistency.
- Risk modelling: Predictive analytics can help identify emerging risk areas or patterns of harm across professions, specialisms, and employers, or by region in terms of socio-economic and population trends.
- Systemic and structural insight: Pattern recognition tools can help identify systemic issues (e.g. workplace culture, geographical trends) from aggregated FtP data. Structural insight could include assessing the types of policies referring employers have in place and how often they are applied, or even how often other routes such as Freedom to Speak Up Guardians or the Patient Advice and Liaison Service (PALS) have been accessed prior to referral to a professional regulatory body. This richness of data could help inform fairer FtP structures and processes.
- Bias auditing: AI can assist in detecting disproportionality in referrals, outcomes, and processes when combined with disaggregated data (a sketch of a simple disproportionality check also follows this list).
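To make the triage point concrete, here is a minimal sketch of what transparent, rule-based prioritisation might look like. Everything in it is hypothetical: the `Referral` fields, the criteria, and the weights are invented for illustration and would in practice need to be set, published, and regularly reviewed by the regulator.

```python
from dataclasses import dataclass

@dataclass
class Referral:
    """Minimal, hypothetical view of an FtP referral record."""
    alleged_patient_harm: bool
    prior_referrals: int
    interim_order_in_place: bool
    still_practising: bool

# Pre-set, human-authored criteria and weights: every factor and its
# contribution to the score is visible and open to challenge.
TRIAGE_RULES = [
    ("alleged patient harm",   lambda r: r.alleged_patient_harm,   50),
    ("repeat referral",        lambda r: r.prior_referrals >= 2,   20),
    ("interim order in place", lambda r: r.interim_order_in_place, 15),
    ("still practising",       lambda r: r.still_practising,       15),
]

def triage_score(referral: Referral) -> tuple[int, list[str]]:
    """Return a priority score and the reasons behind it, so a human
    case officer can review, question, and override the result."""
    score, reasons = 0, []
    for label, applies, weight in TRIAGE_RULES:
        if applies(referral):
            score += weight
            reasons.append(label)
    return score, reasons

score, reasons = triage_score(Referral(
    alleged_patient_harm=True, prior_referrals=3,
    interim_order_in_place=False, still_practising=True))
print(score, reasons)  # 85 ['alleged patient harm', 'repeat referral', 'still practising']
```

A design like this keeps every factor and weight visible, so a case officer can see exactly why a case scored highly and can override the result; that explainability matters for the Article 6 concerns discussed below.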
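Similarly, a bias audit could begin with a simple disproportionality screen over disaggregated data. The sketch below compares referral rates between two groups and flags statistically significant gaps for human review; the groups, counts, and flagging thresholds are invented for illustration, and a real audit would use properly disaggregated regulatory data and more careful statistical methods.

```python
# All groups, counts, and thresholds below are invented for illustration.
from scipy.stats import chi2_contingency

# Referral counts against register size, disaggregated by (hypothetical) group.
counts = {
    "group_a": {"referred": 120, "registered": 10_000},
    "group_b": {"referred": 260, "registered": 12_000},
}

rates = {g: c["referred"] / c["registered"] for g, c in counts.items()}
rate_ratio = rates["group_b"] / rates["group_a"]

# 2x2 contingency table: referred vs. not referred, per group.
table = [
    [counts[g]["referred"], counts[g]["registered"] - counts[g]["referred"]]
    for g in ("group_a", "group_b")
]
chi2, p_value, _, _ = chi2_contingency(table)

print(f"referral rate ratio: {rate_ratio:.2f}, p = {p_value:.4f}")
if rate_ratio > 1.25 and p_value < 0.05:  # illustrative flagging threshold
    print("flag for human review: possible disproportionality in referrals")
```

A screen like this does not establish discrimination on its own; it only surfaces gaps that human evaluators, ideally with the diversity of representation discussed under the risks below, would then examine.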
Risks and ethical concerns
Despite its potential, AI in FtP introduces significant risks:
- Data bias: AI systems trained on past FtP data may reflect historic biases (e.g. over-referral of, or harsher outcomes for, racialised or migrant professionals). It is worth considering how bias is currently managed in regulatory bodies: are there impartial critical evaluators, and is there diversity of representation and knowledge among those evaluating AI systems?
- Opacity: Many AI tools operate as “black boxes,” with decision processes that cannot be easily explained or challenged.
- Procedural fairness: Use of AI could undermine Article 6 (ECHR) rights if it compromises transparency, access to redress, or meaningful human oversight. (Article 6 of the European Convention on Human Rights guarantees the right to a fair trial in both criminal and civil proceedings: a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law.)
- Over-reliance: Panels may defer to AI outputs even when professional judgement is more appropriate.
- Discriminatory outcomes: If AI reinforces patterns of disproportionate scrutiny or sanctioning, it could entrench discrimination; regulators are expected to ensure their processes do not amplify or impose discrimination on the individuals they assess under the FtP process.
Legal and human rights considerations
Regulators have a legal duty to ensure FtP processes are fair, proportionate, and non-discriminatory. Key legal considerations include:
- Equality Act 2010: Systems must be screened for indirect discrimination, especially where protected characteristics intersect with referral or sanction trends.
- GDPR & data ethics: AI systems using personal or sensitive data must be transparent, lawful, and subject to human review.
- Right to a Fair Trial (Article 6): Registrants must be able to understand and challenge decisions, including any AI-derived recommendations.
Recommendations
To explore AI responsibly in FtP, regulators should:
- Use AI for support, not substitution: AI should assist, not replace, human judgement.
- Conduct equality impact assessments: All tools must be assessed for potential bias and unintended consequences.
- Ensure transparency and explainability: Registrants must be able to understand how AI-informed insights were used.
- Involve diverse stakeholders: Include practitioners, ethicists, and lay members in AI tool design, procurement, and oversight.
- Pilot and evaluate before full use: Start with small-scale trials and commit to robust evaluation.
- Maintain clear communication: Be transparent with the public and profession about where and how AI is used.
Conclusion
AI has the potential to support safer, fairer regulatory systems, but only if its use is deliberate, ethical, and embedded in transparent human oversight. FtP decisions carry life-changing consequences and must be grounded in trust, not just efficiency. Regulators must lead with responsibility, ensuring that new tools serve justice, not undermine it.