Risks
Last updated
While integrating generative AI into physiotherapy offers numerous benefits, it also presents significant challenges that must be carefully considered. These risks include potential impacts on practitioner skills and patient relationships, as well as concerns about data security and decision-making accountability. Understanding these challenges is essential for physiotherapists to use generative AI responsibly and ethically. By acknowledging and proactively addressing these concerns, the physiotherapy community can develop approaches that harness AI's capabilities while safeguarding against its potential drawbacks.
We should also note that the Council of the European Union has approved the Artificial Intelligence (AI) Act, a landmark law that harmonises AI rules across the EU based on a risk-based approach: the greater the potential harm to society, the stricter the requirements. The regulation also applies to non-EU companies if their AI systems are used within the EU or affect people in the EU. Under this framework, most generative AI systems are considered limited-risk and are subject to specific transparency obligations, such as informing users that they are interacting with AI.
Deskilling and over-reliance on AI
Excessive dependence on AI tools may compromise the development of essential clinical skills and decision-making abilities (Natali et al., 2025).
Potential loss of intuition and hands-on expertise as practitioners become overly dependent on technology to suggest management pathways
Challenges in maintaining competency or discerning high-value care when AI systems are unavailable or provide incorrect recommendations
Gradual undervaluing of traditional physiotherapy skills and clinical experience in favour of AI-driven approaches
Data privacy and security
The effective use of AI may require the sharing of patient data, raising significant privacy and security concerns.
Increased risk of data breaches and unauthorised access to sensitive patient information (for example, by uploading personal medical information to generative AI platforms)
Potential concerns around misuse of patient data for commercial or unauthorised purposes
Patient discomfort with extensive data collection and AI-driven input into their care, whether through a lack of informed consent or reduced autonomy in decision-making
Complex challenges in maintaining governance and compliance with evolving data protection regulations
Accountability and decision-making
AI systems may make mistakes, particularly in complex or unusual cases, raising questions about responsibility and accountability.
Ongoing uncertainty about who bears responsibility when AI-assisted decisions lead to adverse outcomes
Lack of clear liability and reporting frameworks for cases involving AI-generated recommendations
Particular challenges in managing complex or unusual cases that fall outside AI's training data
Increasing complexity in maintaining compliance with professional and legal standards
Loss of human connection
The increased use of AI could potentially compromise the human element essential to physiotherapy practice.
Potential negative impact on therapeutic relationships as technology increasingly mediates patient interactions
Risk of reduced human interaction and/or replacement of hands-on time with patients
Gradual erosion of empathetic care elements that are fundamental to the profession
Patients may misinterpret AI interactions (e.g. with chatbots) as human-like, a phenomenon known as the Eliza effect, which can distort expectations and undermine authentic therapeutic communication
Bias and fairness
AI systems may perpetuate or amplify existing biases, leading to inequitable treatment outcomes, especially toward minority cultural or clinical populations (Sarkar & Bates, 2024).
Risk of unfair outcomes for underrepresented groups in generative AI training datasets
Limited representation of diverse patient populations in AI system development
Concerns that bias in AI training data may skew clinical recommendations, creating ongoing challenges in ensuring AI systems consider all patient demographics equally
Questions for reflection
How can physiotherapists maintain their clinical reasoning skills while benefiting from AI assistance?
What measures would make you and your patients feel comfortable about the use of AI and data privacy in your practice?
How should responsibility and liability be assigned in cases where AI-supported reasoning and decision-making leads to adverse outcomes?
In what ways might the increased use of AI in physiotherapy affect the therapeutic relationship between practitioner and patient?
How can we ensure that AI systems used in physiotherapy practice are fair and unbiased across diverse patient populations?