Evidence-Based Selection of a Fall Risk Assessment Tool: A Program Evaluation Review

Fall prevention strategies are a consistent topic of discussion in healthcare regarding patient safety, as patient falls are costly to both the patient and the organization. This project uses the CDC Framework for Program Evaluation to assess the fall prevention policy of a local hospital system, with particular emphasis on its fall risk assessment tool, Hester Davis. This project also explores the risks and benefits of adopting an alternative fall risk assessment tool, predictive analytics. Predictive analytics uses electronic health record (EHR) data analysis to provide a highly individualized patient fall risk score based on a large variety of patient and environmental factors. Comparative analysis of the two tools was performed across 104 chart reviews, which provided evidence for the use of predictive analytics. Recommendations are provided for the development of a new fall prevention policy that includes predictive analytics as the primary fall risk assessment tool. Based on these recommendations, this project also includes a competency-based orientation toolkit, which can be put into place should the organization choose to transition the policy to utilize predictive analytics as the primary fall risk assessment.

This concern is reflected within the local hospital organization, which includes the incidence of falls with hip fracture on its hospital quality and patient safety dashboard.
In addition to the risk of physical injury associated with hospital falls, there is also an emotional toll to consider. Patients, their families, and hospital employees may suffer emotional distress related to falls. Jayasinghe et al. (2014) reported that a significant percentage of elderly patients experience posttraumatic stress after a fall. This is often associated with a decrease in mobility related to the fear of falling again and a loss of confidence in balance (Ang et al., 2018). Families also experience distress related to the fear of the patient falling again, loss of confidence in their ability to care for the patient, and social isolation (Ang et al., 2018). Employees may likewise suffer mental distress related to patient falls and may physically injure themselves while trying to prevent a patient from falling.
In addition to the obvious concerns for patient safety, falls are also financially detrimental. Because in-hospital falls are considered a hospital-acquired condition, the facility is not reimbursed for care associated with the fall. According to Spetz et al. (2015), the cost of a no-injury fall ranges from $1,100 to $2,000, injury fall costs range from $7,000 to $15,000, and serious injury fall costs range from $17,500 to $31,000. These costs are associated with surgical interventions, imaging, and extensions in length of stay (Fields et al., 2015). The facility is 100% responsible for these costs. Additionally, legal action and associated costs may follow if the patient or family finds fault with the providers in relation to the fall.
With so many implications in healthcare, fall prevention is a clear priority at healthcare facilities across the nation. This has spurred the creation of many fall risk screening tools and fall risk interventions. This organization utilizes the Hester Davis Fall Risk Assessment Screening (HDFRAS) to determine risk status and appropriate intervention to prevent falls.
Hester Davis is a validated tool that determines fall risk based on several factors, such as history of falls, medications, mobility, and mental status. Per policy, the Hester Davis assessment should be completed once per 12-hour shift, and fall prevention interventions are based on the score produced (MWHC Policy Database, 2021). While the tool is validated for use, studies have found it to perform poorly in practice. Kaiser et al. (2021) found HDFRAS had minimal ability to distinguish between high- and low-risk patients when they compared scores for patients who did and did not experience a hospital fall. Beyond these limitations, the tool is also used ineffectively within the organization. Many fall risk factors can change over the course of a 12-hour shift and may shift a patient from low risk to high risk. This creates concern that a patient may become high risk but go hours without high-risk interventions until the next shift assessment.
With these risks in mind, there is a clear opportunity for improvement within the organization. Fall prevention is an organizational goal, which increases stakeholder engagement. Administration wants falls to become a never event, and nursing staff will support a program that efficiently and accurately assesses patients. An automated, predictive analysis program could provide a better assessment than the standard evaluation tool currently used.
Stakeholder engagement supported a thorough analysis and comparison of the predictive analysis program to validate the program effectiveness.

Purpose of the Program Evaluation Project
This project focused on comparing a new predictive analysis fall score program with the standard Hester Davis Fall Risk Assessment Screening currently in use to determine if a new program would provide more accurate fall risk data and be more effective in preventing falls.
Hospitalized inpatients were the target population, as this population is routinely assessed per hospital protocol. It was also critical to assess this population as these patients are hospitalized typically over several days and the fall risk scores can fluctuate from day to day. These patients are at the highest risk for poor fall risk evaluation, as the assessments are completed in the morning when the primary nurse may not have gathered adequate objective data to accurately assess the patient's fall risk status. A dynamic predictive analysis program collects that objective data based on prior documentation and active patient events, such as narcotic administration or procedures requiring anesthesia.
This project evaluated the effectiveness of an embedded predictive analysis program for fall risk assessment by comparing the program results to the HDFRAS results via individualized chart reviews over the course of four weeks. This objective provided data on the accuracy of the new fall analysis program, as well as provided insight into the need for adapting the fall prevention policy.
The project evaluated the readiness for change through examination of additional factors such as administration and staff buy-in and required training for understanding and use of a new fall risk assessment tool. This data collection was essential for the development of a potential policy change. Readiness for change was assessed through feedback from the presentation of findings to stakeholder groups and subsequent meetings with the interdisciplinary team working on the project.

Program Problem Statement
For adult inpatients (P), how does the establishment of an EHR vendor-embedded predictive analytic continuous fall risk warning system (I), compared to utilizing the Hester Davis falls risk assessment (C), influence the adult inpatient falls risk program evaluation (O)?

Population
The population for this program evaluation was adult inpatients. This includes all patients, age 18 and above that are admitted to a hospital unit, either through the emergency room or through a direct admission process. Pediatric patients are excluded as they have varying fall risk factors and require a separate assessment tool.

Intervention
The intervention evaluated was an electronically embedded predictive analysis model that utilizes data available from the electronic health record. The automated fall risk model evaluates patient fall risk status based on information available throughout the electronic chart and updates in real time as patient variables change during the hospital stay. This data encompasses all areas known to be fall risk factors, including diagnoses, lab results, medication administration, clinical documentation, and even written progress notes from all disciplines. The system provides a fall risk score and creates a system alert indicating that the patient is a fall risk and that interventions should be put in place to prevent falls.
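The recalculation pattern can be illustrated with a small sketch. The vendor's embedded model and its inputs are proprietary, so the feature names, weights, and alert threshold below are purely hypothetical; the sketch only shows the general mechanism of recomputing a risk score whenever a new event is charted.

```python
import math

# Hypothetical alert cutoff and logistic-style weights; the real vendor
# model's features and coefficients are proprietary and not shown here.
HIGH_RISK_THRESHOLD = 0.5
WEIGHTS = {
    "prior_fall": 1.2,
    "sedating_med_last_4h": 0.9,
    "abnormal_sodium": 0.5,
    "unsteady_gait_documented": 1.0,
}
BIAS = -2.0

def fall_risk_score(features: dict) -> float:
    """Map the current EHR-derived feature flags (0/1) to a 0-1 risk score."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def on_ehr_event(features: dict, event: str) -> tuple[float, bool]:
    """Recalculate risk when a new event (e.g., a narcotic dose) is charted."""
    features[event] = 1
    score = fall_risk_score(features)
    return score, score >= HIGH_RISK_THRESHOLD
```

In this pattern, charting a sedating medication immediately raises the score and can trip the high-risk alert between shift assessments, which is the behavior the embedded model is described as providing.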

Comparison
The intervention was compared to the Hester Davis fall risk assessment screening that is currently in use. Hester Davis is a validated assessment tool that evaluates patients on the following categories: age, date of last fall, mobility, medications, mental status, toileting needs, volume and electrolyte status, communication, and behavior (Hester & Davis, 2013). Each category has varying levels that are selected through nursing assessment and those points contribute to the total score. An assessment score of 10 or higher equates to a high fall risk status.
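The arithmetic of the scale is a simple sum against a cutoff. As a minimal sketch (the per-category point values shown are illustrative placeholders, not the published Hester Davis values; only the cutoff of 10 comes from the source above):

```python
# High-risk cutoff per Hester & Davis (2013), as cited above; the example
# category points below are illustrative, not the published scale values.
HIGH_RISK_CUTOFF = 10

def hester_davis_total(category_points: dict) -> tuple[int, str]:
    """Sum the nurse-selected points across categories and classify risk."""
    total = sum(category_points.values())
    return total, "high risk" if total >= HIGH_RISK_CUTOFF else "low risk"

# Example assessment across the nine categories (hypothetical point values)
example = {
    "age": 2, "date_of_last_fall": 3, "mobility": 2, "medications": 2,
    "mental_status": 1, "toileting_needs": 0, "volume_electrolytes": 0,
    "communication": 0, "behavior": 0,
}
```

Here the example patient totals 10 points and would be flagged high risk.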

Outcome
The outcome of this program evaluation is the influence the analysis and evaluation have on the fall risk program within the organization's hospitals. The possible outcomes included a change in policy, no change in policy, or the integration of an additional assessment piece based on the program evaluation.

Utility of Program Review
This program evaluation was necessary to determine which fall assessment program provided the most thorough fall risk evaluation and the greatest potential reduction in falls.
Hospital administrators are key stakeholders as the organization continues a journey to high reliability, quality, and safety. Accuracy of fall assessments and reduction of falls are key components of improving patient safety; therefore, the evaluation of a new tool was beneficial in determining the impact it could have on fall prevention within the organization.
Nursing and clinical staff are also key stakeholders in this evaluation, as these groups are the primary caregivers assessing fall risk status and implementing fall prevention interventions. An automated prediction analysis tool could reduce the active assessment burden while still providing accurate fall risk information. Nursing can safely implement appropriate fall interventions, without relying on a rushed report from a previous nurse or an early morning assessment that may not be focused on mobility.
The program evaluation also impacts the practice setting, inpatient units. Alarm fatigue continues to be closely studied and has a profound impact on busy inpatient units. Ruskin & Hueske-Kraus (2015) discuss avoiding over-monitoring as an appropriate solution to improve staff response times. A more accurate fall assessment could lead to a reduction in unnecessary alarms and alerts, creating a better response to the bed alarm intervention. A safer hospital environment improves patient and employee satisfaction.
The population of adult inpatients clearly benefits from this evaluation, as it determines the safest and best practice for fall prevention. With falls ranging from no injury to serious injuries such as hip fractures or subdural hematomas, the patient population is the most important beneficiary of this program evaluation. The goal of healthcare is to keep patients healthy and safe, and fall prevention directly impacts this goal, making it essential to the patient population.

Analytical Framework
This program evaluation is guided by the Centers for Disease Control and Prevention's (CDC) Program Evaluation Framework, which includes six steps (A Framework for Program Evaluation, 2017). First, stakeholders are engaged. Describing the significance of the practice problem in terms of patient safety and finance brought the need for evaluation to the forefront and encouraged participation and cooperation with the process. This flows naturally into the next step, describing the program, as the need was previously identified and a potential solution outlined. Describing the program not only considers how to move forward with program evaluation, but also identifies the resources required, the outcomes expected, and any factors that will contribute to or inhibit the evaluation. The third step, focusing the evaluation, is one of the most critical aspects of this paper. It provides more detail about utility, feasibility, propriety, and accuracy. This step outlines the worth of the program evaluation: is the effort worth the information it may yield? This question is addressed throughout this paper with information regarding usefulness, budget and timeline, and eventual program recommendations.
The next three steps involve synthesis of information regarding the program and are notably influenced by the Johns Hopkins Nursing Evidence-Based Practice model (JHNEBP).
This model provides materials and practices for gathering, synthesizing, and utilizing evidence (Dang, et al., 2022).
Step four, gathering credible evidence, was completed through a comprehensive literature search and appraisal. The evidence is explored, and results are reported in an evidence table that includes evidence grading using the JHNEBP appraisal tools, so the quality of the research is clear. The evidence supports keeping the program evaluation grounded in evidence-based practice, which leads to step five, justify conclusions (A framework, 2017).
The evidence is read and critically evaluated to determine recommendations and support for the program. This step is completed through critical appraisal and theme development, in which the author extracts commonalities in the evidence and allows them to shape the evidence for the program evaluation. Lastly, ensuring use and sharing lessons learned is the final step.
This outlines the program review results, plan for dissemination of the findings, and the plan for sustainability within the organization through the creation of a competency-based orientation toolkit.

Evidence Search Strategy, Results, and Evaluation
Following the analytic framework steps for program evaluation, the literature was reviewed to gather evidentiary support for the program. The following section provides information on how the literature search was conducted, the articles reviewed, and the strength and quality of evidence in the articles.

Search Strategy
The search strategy utilized the databases CINAHL Complete and ProQuest. Keywords used for the search included fall risk, adult inpatients, and electronic health record data, abbreviated EHR. After the initial search, filters were applied and included academic journals only, English language articles, articles within the time frame of 2002-2022, and adult age only, defined as over 18 years old. Inclusion criteria consisted of articles addressing the use of electronic health record data in developing inpatient fall risk status. Exclusion criteria consisted of articles related to fall prevention in outpatient or home settings and articles that incorporated other inpatient preventative screening, such as pressure ulcer prevention. The search included abstract and full text results.

Results
The search of the two databases yielded 33 results after the filters were applied. Two additional articles were identified in the references of another article and were included in the search results. Refer to Figure 1 in Appendix A for a completed PRISMA diagram of the evidence search results.
Two articles were removed as duplicates across the database searches. Thirty-one articles were screened for inclusion in the literature search, and 19 of those records were excluded. Articles were excluded when they addressed the use of EHR data in creating fall prevention interventions rather than in developing fall risk alert models. Other excluded articles addressed fall risk screenings but did not utilize EHR data in the scoring process. In short, articles that did not directly use EHR data as a means of determining fall risk were excluded from the literature search.
Articles included in the evidence evaluation relate to the use of EHR data to develop predictive analytics for fall risk. None of the articles directly compare this intervention to the Hester Davis assessment; therefore, additional articles related to the validity of the Hester Davis were also included for comparison. This creates a total of 12 research articles selected for evaluation and program analysis.

Evaluation
Each article was evaluated for strength and quality of evidence according to the Johns Hopkins Nursing Evidence-Based Practice (JHNEBP) Model (Dang et al., 2022). Table 1 provides a brief description of the requirements for the strength-of-evidence levels and the quality-of-evidence grades. Table 2, located in Appendix A, includes a summary of the primary evidence and includes ten articles. Of these ten articles, three were rated level II strength and seven were rated level III strength. Of the ten articles, seven were grade A regarding quality of evidence, and three were grade B. Table 3, located in Appendix B, contains the two systematic reviews evaluated in the literature. Both systematic reviews were determined to be level II strength, with one review receiving an A quality grade and the other receiving a grade B.
None of the articles met criteria for level I strength. This is a limitation of the evidence as no randomized controlled trials were located to evaluate. However, the available evidence did provide studies with adequate strength and quality evidence. Most articles displayed sufficient sample sizes with detailed literature and methods discussions, and consistent recommendations. Articles that received a grade B for quality related to a smaller sample size or slightly less thorough literature reviews. Overall, the quality of evidence is sufficient and reliable, providing adequate information to critically appraise the use of EHR data in developing fall risk status.

Fall Risk Factors
Utilizing electronic health record (EHR) information in research remains a relatively new field that has provided an enormous amount of patient details to analyze. Many of the studies in this literature search detailed the wealth of information EHR data provides on fall risk factors.
Standard fall risk assessments use a few data points, manually entered by nursing, to estimate a patient's fall risk status. These assessments are based on clinician judgment and have been found to have low sensitivity and specificity for identifying fall risk patients (Matarese et al., 2014; Oliver et al., 2004), as summarized in Table 3, found in Appendix B. In their systematic review of 13 assessment studies, Matarese et al. (2014) noted that they were unable to recommend a particular screening and that further research was required to develop an accurate one. Unfortunately, none of these studies reviewed the Hester Davis scale, but it is important to note that screenings of a similar nature to Hester Davis were found inadequate in the clinical setting.
Jung & Park (2017) used EHR data to isolate factors contributing to falls at the time of occurrence and found eighteen different variables that increased fall risk. Similarly, Giles et al. (2006) drew EHR data from the plan of care and standard nursing assessments and observations to accurately predict fall risk. These factors are not typically included in standard fall risk assessments. Cho et al. (2018) also used alternate EHR data, such as lab tests and vital signs, to develop fall risk status. This is an excellent example of fall risk factors that can change frequently and are typically missed on a standard assessment completed once during a shift.
This evidence indicates that on analysis, there are many factors that contribute to fall risk status, but often these factors are not all addressed in standardized screening assessment.
Furthermore, these factors can be highly individualized and vary throughout patient populations, which again is not accounted for in standard assessments. EHR data captures fall risk data from every available category within the patient's record and provides a more accurate risk assessment.

Predictive Analytic Ability
A strong theme throughout the literature search was the ability of EHR analysis to predict which patients would fall. Many of the studies were retrospective analyses, completed through reviews of previously admitted patients. This gave the studies an opportunity to evaluate fall risk scores for patients with known falls, to determine if the data available at the time would truly reflect the apparent high fall risk status. Six of the studies found EHR data reflected accurate fall risk status when compared with the rate of falls (Cho et al., 2018; 2019; Giles et al., 2006; Jung & Park, 2017; 2019; Moskowitz et al., 2020). Three of the studies formulated structured, fall-predictive models from EHR data analysis that were found to have high sensitivity and specificity when compared to standard, manual screening tools such as the Morse Fall Scale, Hendrich II, and STRATIFY (Cho et al., 2021; Jung & Park, 2019; Lindberg et al., 2020).
Also important to note, one study found no statistically significant difference between the EHR predictive analysis program and the standard Johns Hopkins Fall Risk Assessment Tool (Rivera et al., 2021). This study was cut short by the start of the COVID-19 pandemic, but it did recommend creating a hybrid assessment in which EHR predictive analysis for falls is an included factor in the standard nurse-completed fall screening documentation.
This theme throughout the literature suggests that predictive analysis is fully capable of accurately determining fall risk and warrants further testing in real time.
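The sensitivity and specificity reported in these retrospective studies reduce to simple proportions over the confusion counts. A minimal sketch of that arithmetic (the counts in the example are hypothetical and not drawn from any of the cited studies):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity: share of patients who fell that the model flagged high risk.
    Specificity: share of patients who did not fall that it left unflagged."""
    return tp / (tp + fn), tn / (tn + fp)
```

For instance, if a retrospective model flagged 18 of 20 fallers as high risk and left 70 of 80 non-fallers unflagged, sensitivity would be 0.90 and specificity 0.875.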

Individualized Care
Another theme discussed throughout the evidence was the ability of EHR fall data analysis to create a patient-specific fall risk evaluation and help create more problem-targeted interventions. This gives facilities the ability to provide more patient-centered care, the preferred approach. Giles et al. (2006) discuss the use of generic fall risk interventions, which often yield unsatisfactory results. With EHR fall risk analysis, the scores can be easily broken down, and the areas creating the highest risk can be targeted with more specific interventions. Jung & Park (2017) further support this notion through examining factors present at the time of the fall.
For example, they noted that a significant factor was where the patient was within the length of stay: the EHR data reflected that patients early in their stay had higher fall rates. This isolates an individualized factor that can be independently addressed through tailored intervention. Rivera et al. (2021) took the automated model a step further, populating targeted fall interventions according to the analytic result provided by the program.
These articles collectively reflect the ability of EHR-driven analysis to improve patient-centered care. Fall risk is a highly variable calculation across the adult patient population.
Patients in their 30's likely require different fall interventions than patients in their 80's. EHR analysis can help drive the fall risk prevention program through interpretation of population specific variables that lead to population specific interventions.

Evidence-based Recommendation Statement
Based on the evidence review, it is recommended that the organizational program evaluation of EHR data-based predictive analysis continue and that the technology be incorporated into the fall prevention program. It should be noted that the evidence search lacked level I strength of evidence, as no randomized controlled trials were located. Of the 12 articles reviewed, all obtained level II or III strength and had high quality grades, rating either an A or B. Most of the studies are retrospective, meaning the use of EHR fall risk data analysis has not been widely tested in real-life settings. The evidence is also limited by the lack of direct comparison to the Hester Davis screening, although many similar screenings were compared to EHR predictive analysis, and the performance of those screenings fell short (Cho et al., 2019).
Although these limitations are recognized, the data from the retrospective studies is consistent, promising, and merits incorporation into practice. As the evidence in the fall prevention area continues to develop, there are already many areas of data analytics in use within the electronic medical record system of the organization, such as modified early warning system (MEWS) scores, sepsis alerts, and clinical decision support. Incorporating this technology into fall prevention is a natural and prudent step to make. While cost for the model is a factor, the transition would be smooth as clinical staff are accustomed to using these tools already, as previously mentioned. By utilizing a combined approach to fall prevention, there will be a reduction in required education and policy alterations. For these reasons, it is the recommendation of this evaluation to incorporate the EHR data-based predictive analysis program into the existing fall prevention protocols.

Program Analysis and Evaluation Plan
Analyzing the existing fall prevention program and evaluating for potential changes was most thoroughly completed by using the CDC Program Evaluation Framework ("A Framework," 2017). This framework includes six steps for program evaluation which are detailed to this specific fall prevention program analysis and evaluation in the following section.

Engage Stakeholders
Stakeholder engagement is essential to any program evaluation, as support is needed from the people the program impacts most. One critical standard of this step is the identification of stakeholders ("Evaluation standards," 2021), which was initiated with the significance of the practice problem. A stakeholder power/interest grid was also used to identify which stakeholders should be prioritized (see Appendix F). Administration and nursing staff are identified as the main stakeholders in this project. The main appeal of this project for both groups is a potential decrease in patient falls, thus increasing patient safety, which is a top priority for the organization on its journey to becoming a highly reliable organization. Organizational council meetings are interdisciplinary, including administration and clinical staff. Presentation of the problem in this setting created interest, engagement, and support for the project from the necessary stakeholders.

Describe the Program
Standards for step two include program documentation and context analysis ("Evaluation standards," 2021). These are displayed in Figure 2, a logic model found in Appendix C. The logic model is an illustration of why this project is needed and what results may be expected on its completion. Cho et al. (2019) validated the use of an EHR data-driven predictive analysis model for fall risk status and have since actively tested this model as a primary fall risk screening tool (Cho et al., 2021). This evidence highlights a new opportunity for patient safety. This project begins with analysis of the current Hester Davis fall screening, explores the possibility of a significant change to the fall risk prevention program, and considers the potential impact on fall rates, patient safety, and workflow efficiency.
A key aspect of the logic model is its contextual factors, which are uncontrollable but do impact the project process ("Evaluation standards," 2021). Cost is a clear contextual factor, as the price of the program is fixed and necessary to the program. Fall risk interventions are another contextual factor, as the program analysis relates to fall risk assessment processes rather than preventative interventions. While this may be an adaptable factor after the assessment changes are complete, it may still impact how stakeholders view the results of the evaluation.

Focus the Evaluation Design
The significant impact of this program analysis and evaluation lies in the potential benefit to patient safety and cost savings as it strives to find a cost-effective solution to fall prevention.
Close analysis of the Hester Davis scale was completed through fall risk and fall data review.
This coincided with evaluation of the EHR data predictive analysis, and comparisons were completed. This information revealed a need for change. Table 4 in Appendix D illustrates the timeline used for the program analysis and includes a proposed schedule for the policy change process.
The program analysis and evaluation budget, in Table 5 of Appendix E, displays the potential costs of this program analysis. The budget was not impacted by the cost of the EHR predictive analysis model, as it was built into the originally purchased EHR but not yet in use.

Gather Credible Evidence
The purpose of the program analysis and evaluation was to determine if the current fall prevention program could be improved or changed to provide better fall prevention. Tables 2 and 3, in Appendices A and B, display credible evidence supporting the evaluation's consideration of promising new ways to utilize EHR data to prevent falls. This evidence supported the need to gather organizational evidence before change could be implemented. Data was collected from chart reviews comparing the Hester Davis scores of fall patients directly with the EHR-calculated risk status number. A trained and qualified team collected this data to ensure it was high-quality information. This data was used to determine the need for a change in fall prevention policies.
These processes are set in place to achieve the standards for reliable and valid information ("Evaluation standards," 2021).

Justify Conclusions
When the program analysis was completed, the results were interpreted and reviewed, and recommendations were made for the future of the fall prevention program. This step is largely based on statistical analysis to determine the official recommendation of the program analysis ("Evaluation standards," 2021). Data collected throughout the analysis revealed whether changes would be beneficial to the fall risk prevention program. Although the numbers tell a clear story, judgment is still involved in making the right choice for the setting and population.
Discussion with the clinical informatics council on other factors impacting the program aided in developing a formal recommendation at the end of the program analysis period. The formal recommendation was provided to administrative stakeholders, along with an explanation of the data supporting the new recommendation.

Ensure Use and Share Lessons Learned
Review and analysis of the existing fall prevention policy, procedures, assessment tools, and policy outcomes was completed. Further analysis of the two fall prevention tools, Hester Davis and predictive analytics, was completed through comparative chart reviews, providing results on the accuracy of each program. The information systems team was able to provide a four-week pilot period for predictive analytics, which allowed for this comparison. Analysis of the Hester Davis assessment tool was completed through chart reviews that evaluated the Hester Davis score for accuracy at 24 hours after admission and at discharge. A total of 104 patient charts were reviewed over a four-week period (see Appendix G for the evaluation tool used). The patient population consisted of general medical and general surgical patients on one acute care inpatient unit. Patient data including assessment information, laboratory results, medications, and mobility were analyzed to determine the accuracy of the fall risk assessment tools. The results of this analysis reflected that Hester Davis was 75% accurate 24 hours post-admission and 70% accurate at discharge. In comparison, predictive analytics was 97% accurate 24 hours post-admission and 96% accurate at discharge. Of the instances in which the tools reflected different results 24 hours post-admission, predictive analytics was the more accurate tool 86% of the time. In the same circumstance at discharge, predictive analytics was the more accurate tool 81% of the time.
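The accuracy figures in the chart review reduce to simple proportions. A brief sketch of the arithmetic (the per-chart counts were not reported, so the example counts other than the 104-chart total are hypothetical):

```python
def accuracy(correct: int, total: int) -> float:
    """Share of chart reviews in which a tool matched the patient's true risk."""
    return correct / total

def discordant_accuracy(tool_correct: int, discordant: int) -> float:
    """Among reviews where the two tools disagreed, the share in which a
    given tool was the accurate one."""
    return tool_correct / discordant
```

For example, 78 correct assessments out of 104 charts yields a 75% accuracy, and a tool correct in 6 of 7 discordant reviews would be the more accurate tool roughly 86% of the time.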
Patient falls that occurred during the four-week study period were also reviewed through safe reports and chart review to determine whether the patients were correctly scored by Hester Davis and predictive analytics. Two patient falls were recorded during this period. One patient was accurately assessed as a high fall risk but did not have appropriate interventions in place. The second patient was not appropriately assessed as a high fall risk; therefore, appropriate interventions were not in place to prevent the fall.
This information indicates that the desired outcomes of the fall prevention policy are not fully achieved with the current practices in place.

Program Evaluation Discussion and Recommendations
The role of data analysis in healthcare is ever-increasing, allowing for improvements in safety and patient-centered care. Utilization of predictive analytics to determine fall risk status has been found to decrease falls and reduce costs. Oh-Park et al. (2021) noted a 39% decrease in falls with the use of predictive analytics in fall prevention. While traditional fall risk assessment tools, such as Hester Davis, have been validated and are generally still used today, they are limited in many ways. A program review was completed to appraise the current fall prevention program and determine whether the use of predictive analytics could enhance patient safety outcomes. A standard fall risk tool like Hester Davis has many limitations that may lead to an inappropriate fall score for the patient. Such tools are dependent on adequate nursing understanding of the assessment and accurate nursing documentation, and they use limited data points; Hester Davis uses only eight categories, each rated on a scale. Throughout the chart review process, Hester Davis was found to be inaccurately completed, or to provide an inaccurate assessment, in 25% of charts 24 hours post-admission and 30% at discharge, indicating that a substantial number of patients were not receiving appropriate fall risk interventions. In contrast, predictive analytics uses over twenty data points to calculate a fall risk score. Each score is specific to the patient, rather than relying on general categories that attempt to encompass all patients. While predictive analytics does draw on several areas of nursing documentation when calculating a score, it is not fully dependent on nursing assessment input. This leaves less room for human error in predicting fall risk.
Per the facility policy, Hester Davis is required shift documentation, completed once per 12-hour shift. Many things can change during that time that alter a patient's fall risk status and may not be captured until the following shift, leaving the patient without fall risk interventions for hours. Predictive analytics updates the fall risk score every four hours, reflecting a current score throughout the shift. The objective of this project was to evaluate the current fall risk assessment tool in use and provide recommendations for the future state of fall prevention for the organization. Through current practice analysis, it was determined that Hester Davis has many limitations that may be leading to inaccurate fall risk assessments and potentially increasing falls. The predictive analytics program has scholarly evidence to suggest it is more effective at assessing an individual's fall risk status. Therefore, a competency-based orientation toolkit (CBOT) was developed to prepare the organization for the future use of predictive analytics, increasing fall risk accuracy and improving fall prevention strategies. Limitations of this evaluation include limited data for patients in observation for less than 12 hours, as predictive analytics may not yet have enough data to calculate an accurate score; this creates the basis for continuing use of Hester Davis on admission. Data were also limited to a comparison on one medical-surgical unit. That patient population is varied, but it would be useful to evaluate other populations, such as intensive care and progressive care.

Dissemination Plan
The project results were first disseminated throughout the organization. Results were presented to the small project team, which consisted of members from multiple disciplines, including information systems, regulatory affairs, and nursing education. The team determined further appropriate organizational groups for presentation. These included the falls committee, consisting of nursing and upper administration, and the quality improvement council, a shared governance group for nursing.
Further public dissemination of the project will occur in two ways. The program evaluation and practice recommendations will be written up and submitted to the University of St.
Augustine for Health Sciences for review as a Doctor of Nursing Practice Scholarly Project. The document will be submitted to the university's Scholarship and Open Access Repository (SOAR), which houses student and faculty research work. Additionally, the project topic is appropriate and may be of interest to clinical informatics journals for more widespread dissemination. Computers, Informatics, Nursing (CIN) is an appropriate journal for this project, as predictive analytics in healthcare continues to be a field of interest, and the journal submission process is available on the website. This dissemination plan allows the evaluation team to answer questions and hear concerns from leadership as they consider making this fall prevention policy change throughout the organization. Further dissemination to the public will occur through university channels and potential journal publication.

Conclusion
The purpose of this fall prevention program analysis and evaluation project was to investigate potential improvements to fall prevention policies. This was achieved through a rigorous review of the problem, an exploration of the literature, analysis of the evidence, formulation of a recommendation, and completion of a plan for analysis and evaluation of the fall program.
Inpatient falls continue to plague healthcare as a costly mistake, to both the patient and the organization. The evidence supports the use of an automated EHR predictive analytics model for fall risk, as it provides accurate results, decreases human error in assessment, and improves workflow efficiency. It is the recommendation of this program review to incorporate a predictive analytics program for fall risk scoring into the fall prevention program.

Current Process Analysis Tool

How to use this tool:
• Identify who will conduct the mapping and who will be on the mapping team.
The mapping team should include at least two frontline staff on the Implementation Team and at least one person who has experience with process maps. Try to use the same team members if more than one process is mapped.
• Have the Implementation Team identify and define every step in the current process for fall prevention.
• Define a beginning, an end, and a methodology for all of the processes to be mapped. For example, some processes are mapped through the method of direct observation of the process taking place, while others can be mapped by knowledgeable stakeholders talking through and documenting each step in the process.
• When defining a process, think about staff roles in the process, the tools or materials staff use, and the flow of activities.
• Everything is a process, whether it is admitting a patient, serving meals, assessing pain, or managing a nursing unit. Identify key processes involving fall prevention. The goal of defining a process is to home in on patient safety vulnerabilities and potential failures in the current process.
• Examples of processes might include initial fall risk factor assessments (e.g., when does it occur, who does it, what happens if a patient is found to have risk factors) or postfall management.
Determine if there are any gaps and problems in your current processes, and use the results of this analysis to systematically change these processes.

PROCESS ANALYSIS PROCEDURES
• Take time to brainstorm and listen to every team member.
• Make sure the process is understood and documented.
• Make each step in the process very specific.
• Use one post-it note, index card, or scrap piece of paper for each step in the process.
• Lay out each step, move steps, and add and remove steps until the team agrees on the final process.
• If a process does not exist (for example, there is no process to assess fall risk factors upon admission and readmission), identify the related processes (for example, the process for admission and readmission).
• If the process is different for different shifts, identify each individual process.
Example: Process for Making Buttered Toast
9. Use knife to cut pat of butter.
10. Use knife to spread butter on toast.
IDENTIFY THE STEPS OF YOUR DEFINED PROCESS:
• Press people for details.
• At the end of the gap analysis, compile the results in a document that displays each step so that team members have the map of the current process in front of them during the team discussion (Step 2).
HOLD TEAM DISCUSSION.

EVALUATE YOUR CURRENT PROCESS AS YOU DEFINE IT:
• What policies and procedures do we have in place for this process?
• What forms do we use?
• How does our physical environment support or hinder this process?
• Which staff are involved in this process?
• Which parts of this process do not work?
• Do we duplicate any work unnecessarily? Where?
• Are there any delays in the process? Why?

Competency Based Orientation Toolkit for the Use of Predictive Analytics Fall Prevention
Program

Purpose Statement
The purpose of this CBO toolkit is to inform, educate, and prepare organizational staff on the use of the Predictive Analytics based fall prevention program to improve patient safety and decrease fall-related safety events within the organization.

Audience
Target audiences for this CBO toolkit include the following:
• All adult inpatient nurses
• IS: Information systems is the department responsible for building and releasing the predictive analytics program, as well as tracking and troubleshooting any program problems with the EHR

Implementation Strategy
The implementation strategy outlines the process for transitioning to the use of a new fall prevention policy using predictive analytics.

Evaluation Strategy and Tools
• Current process analysis
• 10 weekly chart audits, performed by nursing leadership, to monitor adherence rates to new fall prevention policy documentation.
• RCA completed on reported falls using safe reports and review with involved departments.
• Update falls monthly in the Patient Safety Dashboard in SharePoint.
• Monitor and discuss concerns/trends in the monthly Fall Committee meeting.

Stakeholder engagement and analysis tools
• Stakeholder analysis performed using a power/interest grid (see Appendix F) identifying hospital administrators, information systems, nursing staff, and patients as stakeholders.
• Leadership support assessment (see Appendix J)
• Patient Safety Dashboard information provides the organization-wide falls rate on a monthly, quarterly, and annual basis. Comparison of the falls rate creates stakeholder engagement throughout the organization, particularly for hospital administration.
• Unit falls are reviewed monthly at staff meetings. Staff are also informed of increases in the fall rate and in the number of serious injury events related to falls.
• Patients will be provided an information sheet on the risk of falls in the welcome packet.
• Project communication will occur through Teams meetings, IS request tickets, and committee meetings. RNs will be alerted to the policy change via email, staff huddles, and Microsoft Teams notifications. They will be provided a completion date for the necessary education.

Communication Planning Tools
Patient communication of falls procedures will be provided through verbal education from the nursing staff and through a patient hand out provided in the welcome packet (see Appendix L).

Proposed Fall Risk Policy Update to include Predictive Analytics scoring
The purpose of this policy is to establish clear standards of care for fall risk assessment, fall interventions, and post-fall guidelines.
1. All patients will be treated as a high fall risk until determined otherwise.
2. The Hester Davis fall risk assessment tool will be used and documented from admission through 12 hours after admission. This window allows the predictive analytics program to gather enough information to provide an accurate fall risk score. The score will correlate to low, moderate, or high fall risk.
3. Nursing will acknowledge the fall risk score in the safety flowsheet. Nursing may disagree with the determined fall risk status and select a more appropriate status but must provide a comment for explanation.
4. Fall prevention interventions will be implemented based on the risk level. These interventions will cascade down in the flowsheets for selection and documentation.
5. Predictive analytics will reassess fall risk status every four hours.
6. A best practice advisory (BPA) will notify nursing of any interval changes in fall risk status and provide updated interventions in the flowsheets.
7. Nursing will educate the patient regarding fall risk status and interventions that are being enacted.
8. Fall risk interventions for low, moderate, and high risk will remain the same.

Associated tools:
• CBOT for Predictive Analytics Tool (see Appendix I)
• Current Process Analysis (see Appendix H)
• Root Cause Analysis form (see Appendix O)
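The policy's score-to-risk mapping can be sketched as a simple threshold function. The cutoff values below (30 and 60) are assumptions chosen only to be consistent with the scenario examples that follow; the actual thresholds are configured in the EHR and are not specified in this policy:

```python
# Hypothetical score-to-tier mapping; the 30/60 cutoffs are ASSUMED for
# illustration and are not the organization's configured thresholds.
def risk_tier(score: int) -> str:
    """Map a predictive analytics fall risk score to a risk tier."""
    if score >= 60:
        return "high"
    if score >= 30:
        return "moderate"
    return "low"

print(risk_tier(72))  # -> high (Scenario 1 reassessment)
print(risk_tier(29))  # -> low (Scenario 2 pre-op score)
```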

Scenario Examples of Process in Use
Scenario 1
A 75-year-old female patient is admitted with a diagnosis of bacterial pneumonia. She is alert and oriented and is currently on room air, with IV antibiotics infusing. She uses a cane to ambulate at home and reports feeling short of breath and weak. You complete her Hester Davis assessment and receive a score of 7. This correlates to low risk, and you educate the patient on low-risk fall interventions.
On the following shift, predictive analytics provides a fall risk score of 72. The next nurse reads the fall risk criteria next to the score, which indicate that the patient has had abnormal vital signs and abnormal labs, received a dose of pain medicine 5 hours ago, received 2 of her home blood pressure medicines, now requires oxygen, and continues to have weakness and shortness of breath. All these factors contributed to an increase in her fall risk score. The RN assesses the patient and agrees with the high fall risk status. The nurse puts high fall risk interventions into place immediately, and the patient is educated on these new interventions.

Scenario 2
You assume care of a 32 yo male admitted for appendicitis. He is scheduled to undergo an appendectomy at 0800. His current predictive analytics fall risk score is 29. The predictive analytics summary reports his current medication regimen and abnormal labs are influencing his score. Based on his use of narcotics, you disagree with a low risk assessment, document this in the flowsheet, and implement moderate risk interventions.
He returns from surgery several hours later. He received general anesthesia and several doses of IV pain medication, and he had low blood pressure and low respirations in the PACU. Predictive analytics now indicates a fall risk score of 71. You put high fall risk interventions into place.
Two days later, the patient requires only Tylenol for pain, and his vital signs are stable. He is ambulating around the unit independently and is awaiting discharge.
Predictive analytics provides a fall risk score of 22 and low fall risk interventions are put into place.

Root Cause Analysis Form
• Reporter: (anonymous option available)
• MRN:
• Date:
• Department involved:
• Secondary department involved:
• Unit census:
• Staffing:
• Type of event: Fall
• Level of harm: ranges from "no harm" to "patient death"
• Describe the event: free text
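As a data-structure sketch, the form fields above could be captured as follows; the field names and types are illustrative assumptions, not the reporting system's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative structure for the RCA form fields listed above;
# names and types are assumptions for the sketch only.
@dataclass
class RcaReport:
    reporter: Optional[str]          # None when reported anonymously
    mrn: str
    date: str
    department: str
    secondary_department: Optional[str]
    unit_census: int
    staffing: str
    description: str                 # free text
    event_type: str = "Fall"
    level_of_harm: str = "no harm"   # ranges from "no harm" to "patient death"
```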