Risks

While integrating generative AI into physiotherapy offers numerous benefits, it also presents significant challenges that must be carefully considered. These risks include potential impacts on practitioner skills and patient relationships, as well as concerns about data security and decision-making accountability. Understanding these challenges is essential for physiotherapists to use generative AI responsibly and ethically. By acknowledging and proactively addressing these concerns, the physiotherapy community can develop approaches that harness AI's capabilities while safeguarding against its potential drawbacks.

We also note that the Council of the European Union has approved the Artificial Intelligence (AI) Act, a landmark law that harmonises AI rules across the EU based on a risk-based approach: the greater the potential harm to society, the stricter the requirements. The regulation also applies to non-EU companies, such as those in Switzerland, if their AI systems are used within the EU or affect people in the EU. Under this framework, most generative AI systems are considered limited-risk and are subject to specific transparency obligations, such as informing users that they are interacting with AI.

For physiotherapists, the Act's risk-based framework means ensuring transparency when using AI tools, in particular by informing patients when AI is involved in their care, while maintaining appropriate data protection standards. Most AI applications in physiotherapy fall into the limited-risk category, though systems used for diagnosis or treatment planning may face stricter requirements.


Deskilling and over-reliance on AI

Excessive dependence on AI tools may compromise the development of essential clinical skills and decision-making abilities (Natali et al., 2025).

  • Potential loss of intuition and hands-on expertise as practitioners become overly dependent on technology to suggest management pathways

  • Challenges in maintaining competency or discerning high-value care when AI systems are unavailable or provide incorrect recommendations

  • Gradual undervaluing of traditional physiotherapy skills and clinical experience in favour of AI-driven approaches

Data privacy and security

The effective use of AI may require the sharing of patient data, raising significant privacy and security concerns (Chen & Esmaeilzadeh, 2024).

  • Increased risk of data breaches and unauthorised access to sensitive patient information, for example, by uploading personal medical information to generative AI platforms

  • Potential concerns around misuse of patient data for commercial or unauthorised purposes

  • Patient discomfort with extensive data collection and AI-driven input into their care, either through lack of informed consent, or lack of autonomy in decision-making

  • Complex challenges in maintaining governance and compliance with evolving data protection regulations

Errors and liability

AI systems may make mistakes, particularly in complex or unusual cases, raising questions about responsibility and accountability.

  • Uncertainty about who bears responsibility when AI-assisted decisions lead to adverse outcomes

  • Lack of clear liability and reporting frameworks for cases involving AI-generated recommendations (Price et al., 2019)

  • Particular challenges in managing complex or unusual cases that fall outside AI's training data

  • Increasing complexity in maintaining compliance with professional and legal standards

Depersonalisation of care

The increased use of AI could potentially compromise the human element essential to physiotherapy practice.

  • Potential negative impact on therapeutic relationships as technology increasingly mediates patient interactions

  • Risk of reduced human interaction and/or replacement of hands-on time with patients

  • Gradual erosion of empathetic care elements that are fundamental to the profession

  • Patients may misinterpret AI interactions (e.g. with chatbots) as human-like, a phenomenon known as the Eliza effect, which can distort expectations and undermine authentic therapeutic communication

Bias and fairness

AI systems may perpetuate or amplify existing biases, leading to inequitable treatment outcomes, especially toward minority cultural or clinical populations (Sarkar & Bates, 2024).

  • Risk of unfair outcomes for underrepresented groups in generative AI training datasets

  • Limited representation of diverse patient populations in AI system development

  • Significant concerns that bias in AI training data may skew clinical recommendations, creating ongoing challenges in ensuring AI systems consider all patient demographics equally

Discussion questions on the risks of integrating AI into clinical practice