

President's Perspective

Artificial Intelligence Use at FSBPT: A Measured and Evidence-Based Approach 

David Relling, PT, PhD

Artificial intelligence (AI) is a frequent topic in the media, higher education, business, healthcare, and, more recently, regulation. Although forms of AI have existed for decades, the topic became mainstream with the introduction of large language models (LLMs) and generative AI (GAI) tools such as ChatGPT. Whether or not you use one of the more notable GAI systems from OpenAI, Google, Microsoft, or others, you may already have experience with AI. For example, website chatbots that answer questions, automated voice answering systems, and spam email filters all rely on some form of AI.

To better understand AI, it helps to recognize some of the underlying terms: algorithms, data analytics, and machine learning. An algorithm is a set of instructions that specifies the steps an AI system follows to produce a result. The algorithms underlying AI systems like ChatGPT are first trained on large datasets through a process called machine learning, which improves the system's decision-making and the quality of its output or actions. In the case of ChatGPT, the training dataset consists largely of text gathered from the internet. Since their inception, the proprietary algorithms of these GAIs have used data analytics to evaluate and improve their output. Data analytics also help incorporate information submitted by users into answers to future "prompts" (i.e., questions or commands). Similarly, in healthcare, AI systems trained on large datasets have been developed to identify abnormalities in medical imaging, skin lesions, and other areas.

Healthcare AI systems can be very helpful when the dataset is large and the algorithm is well designed. However, systems that rely on enormous datasets and proprietary, undisclosed algorithms have raised concerns about potential bias and inaccuracy. In one widely cited study (Obermeyer et al., 2019), an AI algorithm used to recommend preventative care services for patients with complex medical conditions was found to be racially biased. The algorithm, which did not include race as a factor, was built on a large dataset to predict which patients with complex health situations should be enrolled in preventative care/disease management programs or referred to their primary care provider. On further analysis, the researchers found that the algorithm used healthcare costs as a proxy for the severity of health risk; as a result, more disease management services were directed to patient groups that had previously accessed and paid for more healthcare. The confounding issue was that access to healthcare services is not equal across races, so the lower historical healthcare expenditures of patients of color in the training data led the algorithm to recommend fewer preventative programs and services for them. Optimal clinical decisions by healthcare practitioners therefore require knowledge of the dataset, the algorithm, and the software used to train the AI, yet in many situations those details are not readily available because the software is proprietary. Completely delegating the healthcare practitioner's knowledge, clinical expertise, and patient-specific judgment to unknown AI algorithms can cause harm and quickly escalate into a regulatory issue.
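
For readers who want to see this mechanism concretely, the following is a minimal, hypothetical sketch in Python. It is not the study's actual model or data; the groups, the access disparity, and all numbers are invented purely to illustrate how training on cost as a proxy for health need reproduces an access disparity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True health need (e.g., count of chronic conditions), identical in
# distribution across both groups.
need = rng.poisson(lam=3.0, size=n)

# Hypothetical disparity: group B historically accesses less care, so it
# incurs lower cost than group A at the same level of need.
group_b = rng.random(n) < 0.5
access = np.where(group_b, 0.6, 1.0)
cost = need * access * 1000 + rng.normal(0, 300, n)

# The "algorithm": rank patients by cost (the proxy label) and enroll the
# top 10% in the disease-management program.
enrolled = cost >= np.quantile(cost, 0.90)

print(f"Enrollment rate, group A: {enrolled[~group_b].mean():.1%}")
print(f"Enrollment rate, group B: {enrolled[group_b].mean():.1%}")
print("Average true need of enrollees, A vs. B:",
      round(need[enrolled & ~group_b].mean(), 2), "vs.",
      round(need[enrolled & group_b].mean(), 2))
```

In this toy simulation, group B is enrolled at a much lower rate even though its enrollees are, on average, sicker at the point of enrollment, which is the same pattern the study reported with real patients.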

How does this apply to the regulation of physical therapy? The sudden recognition and expansion of AI in healthcare creates a challenge for regulatory bodies like state licensing boards, since they do not regulate the AI system itself but rather the licensee using it. It is important for regulatory bodies to educate their licensees about topics such as accountability, informed consent, digital privacy, and the ethical principles involved in incorporating AI into clinical practice. The FSBPT Model Practice Act may be helpful in guiding regulatory boards and licensees through this process. The Model Practice Act assigns accountability to the physical therapist, who is "fully responsible for managing all aspects of the physical therapy care of each patient/client," with or without the assistance of AI. Additionally, the physical therapist and physical therapist assistant must strictly adhere to privacy requirements and maintain a high degree of confidentiality: "information relating to the physical therapist-patient/client relationship is confidential" and shall not be communicated to a third party, such as a generative AI tool, that is not involved in the patient's/client's care without the written authorization of the patient/client.

FSBPT has recognized the rapid rise of AI technology, along with its potential positive and adverse implications for regulation, for safe and effective physical therapy practice, and for the organization. To inform our members, FSBPT has hosted multiple webinars and presentations dating back to the 2020 annual education meeting. At the most recent annual education meeting in October 2024, there were four sessions related to AI: two keynote sessions and two breakout sessions. Presenters from the Northeastern University Institute for Experiential AI delivered a keynote on ethical considerations for AI implementation in healthcare and research, followed by a breakout session on ethical considerations with a greater focus on the initial use of AI systems. A second keynote, by Frank Meyers of the Federation of State Medical Boards (FSMB) staff, described FSMB's response to AI and the challenges clinicians and regulators face in managing AI use. The FSMB published a position paper on this topic in April 2024 that you may find very helpful. In addition, FSBPT exam volunteers and staff presented on FSBPT's exploration and early trials of AI for the National Physical Therapy Examination (NPTE) program (see the article in the 2024 FSBPT Forum).

Successful completion of the NPTE is necessary for licensure in each jurisdiction and is therefore a critical piece of our collective mission to protect the public. The security, validity, and reliability of the NPTE are paramount to an effective examination development program, which requires substantial human and financial resources. This is an area where AI may be very helpful to FSBPT and our many examination volunteers. In initial use at FSBPT, AI has generated thousands of draft items in a secure, closed environment. AI also shows potential to assist with editing for spelling, grammar, and item style, and with identifying items that test similar content or may cue a test taker to the correct answer (i.e., "enemy" items). This early introduction of AI appears to allow exam volunteers and staff to focus on applying their physical therapy knowledge and clinical skills to ensure the relevance and quality of NPTE items. There may be other areas where AI can help FSBPT and our members complete the important work of the licensure process more efficiently during this time of regulatory scrutiny. As an organization, we will continue to seek out and share the potential benefits of AI for regulation through webinars, educational sessions, and research initiatives through the Healthcare Regulatory Research Institute (HRRI).

The potential benefits of AI for the important work of regulators must be balanced against the necessary security, reliability, validity, and ethical principles. Technological advances can generate excitement and, with it, a sense of urgency to implement. Our collective mission of protecting the public by promoting safe and effective physical therapy requires a measured and evidence-based response to ensure the best outcomes for both consumers and providers of physical therapy services.



References: 

Federation of State Boards of Physical Therapy. 2022. Model Practice Act for Physical Therapy, 7th ed. https://www.fsbpt.org/Free-Resources/Regulatory-Resources/Model-Practice-Act.

Federation of State Medical Boards. 2024. Navigating the Responsible and Ethical Incorporation of Artificial Intelligence into Clinical Practice. April 2024. https://www.fsmb.org/siteassets/advocacy/policies/incorporation-of-ai-into-practice.pdf.

Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations." Science 366 (6464): 447–53.
