+++
History of Injury Prevention
++
Historically, injuries were seen as “accidents” that could not be predicted and, therefore, could not be prevented. This limited perspective produced a passive, narrowly scoped approach to injury prevention that had little effect.12,13 Over the last 100 years, several visionary individuals had successive insights that established the public health basis for injury prevention. Their frameworks produced the rational approach that now guides effective injury prevention.
++
In 1916, a volunteer pilot in the Canadian Royal Flying Corps named Hugh DeHaven was on his final training flight when his plane collided with another and fell 500 ft to the ground.14 The gunner of the plane died, but DeHaven survived with significant injuries, spending the rest of his military service as a clerk involved in the collection of bodies during World War I.14 He noticed distinct injury patterns among the dead and began to theorize that the design of a plane’s interior might affect, or even prevent, the injuries sustained by its occupants. These observations led to the earliest developments of modern injury prevention. By applying engineering principles to injury events, DeHaven created the biomechanical foundation of injury science that ultimately led to the development of automotive safety belts.15,16
++
The epidemiologist John E. Gordon built on DeHaven’s foundation with another novel perspective, pointing out that injuries can be evaluated using the standard epidemiologic framework of host, agent, and environment (Table 3-2). Like any other condition affecting human health, Gordon explained, injuries were not random but occurred in recognizable patterns across time and populations.17 This conceptual evolution was a paradigm shift: from single-cause explanations that inadequately described the injury event, and therefore limited prevention opportunities, to a multifactorial understanding of its components. This shift allowed injuries to be studied from several perspectives and opportunities for prevention to be identified.18
++
++
The fundamental work of DeHaven and Gordon in applying public health principles to injury set the stage for the most notable of the early pioneers of injury prevention, William Haddon, the first director of the National Highway Traffic Safety Administration (NHTSA). Haddon is best known for expanding Gordon’s epidemiologic framework by incorporating a temporal element into the host-agent-environment schema, which ultimately became known as Haddon’s Matrix (Table 3-3).19 The pre-event phase examines the host, agent, and environment factors that influence the likelihood that an event capable of producing an injury (such as a car crash) will occur. A host factor in the pre-event phase would be alcohol impairment; agent factors could include brake condition or maintenance; and an environmental factor could be road condition. During the event phase, factors influence the probability that the event (ie, the car crash) will result in an injury and, if so, to what extent. A host factor during the event could be seatbelt use, an agent factor might be the crush resistance of the car, and an environmental factor could be the presence or absence of dividers that keep the car from ricocheting into oncoming traffic. In the post-event phase, the same three components (host, agent, and environment) can be evaluated for factors that influence the ultimate consequences of the injury.
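Haddon’s Matrix is, in effect, a small grid of temporal phases crossed with the three epidemiologic factors. As an illustration only, the motor-vehicle examples above can be arranged in a simple data structure (the post-event entries are invented for completeness and are not drawn from the text):

```python
# Haddon's Matrix for a motor-vehicle crash.
# Rows are temporal phases; columns are host, agent, and environment factors.
haddon_matrix = {
    "pre-event": {
        "host": "alcohol impairment",
        "agent": "brake condition and maintenance",
        "environment": "road condition",
    },
    "event": {
        "host": "seatbelt use",
        "agent": "crush resistance of the car",
        "environment": "dividers separating oncoming traffic",
    },
    "post-event": {
        # Illustrative post-event factors, not specified in the text.
        "host": "bystander first-aid knowledge",
        "agent": "fuel-system fire risk",
        "environment": "EMS response time",
    },
}

def factors(phase):
    """Return the host/agent/environment factors for a given phase."""
    return haddon_matrix[phase]

print(factors("event")["host"])  # → seatbelt use
```

Framing the matrix as data makes the point of the framework concrete: each of the nine cells is a distinct place to look for an intervention.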
++
++
Haddon further leveraged this conceptual framework to develop 10 strategies that form the foundation of most current injury prevention and control efforts (Table 3-4).19 Underlying most of these strategies is the concept pioneered by DeHaven: separating the injury-producing “energy” from the host.18,19 Haddon’s work marks the most pronounced shift from a simplistic, single-cause, individual-level perspective of injury events to one of complex, multifactorial, societal-level causation. His approach also integrated multidisciplinary involvement into injury prevention, including clinicians, epidemiologists, engineers, law enforcement agencies, policy experts, educators, and mental health experts.
++
+++
Principles of Injury Prevention
++
Most interventions can be classified as either active or passive on the part of the person being protected. Active interventions involve a behavior change and require people to perform an act, such as putting on a helmet, fastening a seatbelt, or using a trigger lock on a handgun. Passive interventions require no action by those being protected and are built into the design of the agent or the environment, such as airbags or the separation of vehicle routes from pedestrian walkways. Passive interventions are generally considered more reliable than active ones19,20; however, many interventions considered passive still carry an inherent active component, even if it operates at the societal or political level, such as passing legislation to require certain safety features in automobiles. The number of times an active intervention must be performed to remain effective is also a consideration: a seatbelt must be used on every trip to be effective, while a vaccine usually requires active participation only for a limited time interval to confer long-term protection.
++
Another framework often applied to injury prevention strategies is that of the “three E’s”: (1) enforcement and legislation, (2) education and behavior change, and (3) engineering and environmental modifications. Initially, education was the main focus of injury prevention, yet behavior change through educational interventions in isolation can be difficult to achieve when they are applied uncritically, without a strong framework and thorough evaluation. A comprehensive report suggests that the most effective interventions are engineering/environmental, followed by enforcement, and lastly education.21 Educational interventions are usually most effective when complemented by modalities from the other “E’s”; that is, the most effective injury strategies typically combine components of all three. An example is the child safety seat, an engineering solution that was successfully implemented only through effective education campaigns and careful enforcement.21
++
Other factors that must be considered when choosing and implementing injury control strategies are fidelity and adaptability. Fidelity refers to the degree to which a program is implemented as intended, and it has been found to influence a program’s measured effectiveness.22 While fidelity to the intended implementation is critical to achieving desirable outcomes, contexts may differ widely, from the socioeconomic characteristics of the population served to cultural nuances that influence how a program is delivered. Adaptability is the ability of a program to be modified so that it applies in a specific context. An effective injury control program must strike an appropriate balance between fidelity to established, evidence-based methodology and sufficient adaptability to remain relevant to the specific population being served. Often, the fidelity and adaptability of a specific program will influence its prioritization among potential injury control interventions.
++
Prioritization of targets for injury control intervention depends on multiple factors.18 The frequency and severity of a type of injury are fundamental to whether investments should be made to prevent or improve treatment of that injury; that is, a solid evidence base for the epidemiology of injury is key to prioritization. Certain injuries may occur frequently, but if their consequences in terms of severity are minimal, there may be a more important target for injury prevention or control. The cost of injuries, in terms of both direct health care costs and indirect societal and economic effects, must also be considered. Effective arguments for implementing an injury control program can be made if savings from averted injury-associated costs are demonstrated, and awareness of cost-effectiveness analyses as a tool for advocacy is steadily increasing.23,24,25 Understanding the resources available to fund and sustain an intervention is also a primary consideration and will clearly influence the intervention chosen. Finally, less easily quantified but equally important are the acceptability and feasibility (including political feasibility) of a program in the community.18 When several acceptable strategies for injury control are available, prioritizing them may be difficult. Obviously, the most effective strategy should be prioritized; often, however, a mixed strategy is most effective and should be used if resources allow.26 When choosing between primarily active and primarily passive interventions, the passive intervention is usually favored as the more reliable approach.19,20 Finally, sustainability is essential if a program is to provide long-term effect, so assessing whether a program can ultimately be accepted and sustained may influence the decision to adopt it.
An “institutionalized” program is one that achieves ongoing support and commitment from the agency, organization, or community in which it is based.12
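The cost-effectiveness comparisons mentioned above often reduce to a simple ratio of program cost to injuries averted. A minimal sketch follows; the program names and all figures are invented purely for illustration:

```python
# Hypothetical cost-effectiveness comparison of two candidate interventions.
# All names and numbers are illustrative, not real program data.
programs = {
    "helmet giveaway": {"annual_cost": 120_000, "injuries_averted": 40},
    "road redesign":   {"annual_cost": 900_000, "injuries_averted": 250},
}

def cost_per_injury_averted(program):
    """Dollars spent per injury averted: lower is more cost-effective."""
    return program["annual_cost"] / program["injuries_averted"]

for name, data in programs.items():
    print(f"{name}: ${cost_per_injury_averted(data):,.0f} per injury averted")
```

Even this crude ratio makes the advocacy argument concrete: the intervention with the lower cost per injury averted is, all else being equal, the stronger candidate, though acceptability, feasibility, and sustainability still weigh on the final choice.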
++
Certain common characteristics run through many successful injury prevention programs: a multidisciplinary approach, community involvement, and ongoing evaluation of both the process and the outcome of the program. Depending on the targeted injury type, a program might draw contributions from health care professionals, public health practitioners, epidemiologists, psychologists, manufacturers, traffic safety and law enforcement officials, experts in biomechanics, educators, and individuals working in media, advertising, and public relations. Health care professionals might include those in primary care, such as pediatricians, and those involved in acute trauma care. Finally, individual members of the public might be involved.20,27
+++
Injury Control: From Surveillance to Dissemination
++
The public health approach can be applied to injury prevention and control just as it is applied to any problem at the population level. It comprises the following components18:
++
Surveillance
Risk factor identification
Ascertaining natural history
Intervention
Evaluation
Dissemination
++
The components of a comprehensive injury prevention program are demonstrated in Box 3-1.
++
Box 3-1: Components of an Injury Prevention Program
++
Evaluation is an essential component of any injury prevention program, allowing implementers to assess program effectiveness and make appropriate improvements. It also provides quantitative information for funders, increasing the program’s accountability and support, and ensures that resources are used in a beneficial and cost-effective way.28,29 While the science of program evaluation is extensive, a comprehensive review is beyond the scope of this discussion; a brief overview of the necessary components and underlying standards is provided instead.
++
Program evaluation should be built into the program from inception, including during the development of theoretical frameworks supporting the program’s premise. For example, a logic model developed in anticipation of forming a program should be assessed for the validity of underlying assumptions before implementation of the program.29 Early stakeholder engagement is key to this process, as stakeholders can often identify gaps in the theoretical basis of a proposed program and help supply alternatives or solutions. As a program is developed, it is important to have discrete, agreed-upon metrics by which to assess the program, so that progress can be measured and seen by all those involved. One helpful approach is to achieve consensus responses to the series of essential questions outlined in Box 3-2. Included are examples of responses based on a violence intervention program.
++
Box 3-2: Fundamental Questions Upon Which to Build Effective Program Evaluation
What will be evaluated?
What criteria will be used to judge program performance?
- The number of people screened and enrolled
- The percentage of client needs that are addressed
- The violent injury recidivism rate in San Francisco
What standards of performance on the criteria must be reached for the program to be considered successful?
- 80% of patients aged 10–30 years who are activated as a trauma at SFGH will be screened for risk factors
- 90% of patients who screen as “high risk” for injury recidivism will be offered Wraparound services
- 50% of clients approached will be enrolled as Wraparound clients
- Identified client needs will be met 75% of the time
- The violent injury recidivism rate for those aged 10 to 30 years will be reduced by 50%
What evidence will indicate performance on the criteria relative to the standards?
- Institutional trauma registry data will supply the number of eligible patients
- Case manager records will demonstrate client enrollment
- Case manager records and client surveys/interviews will reflect whether client needs were met
- Trauma registry data will be reviewed for violent injury recidivism
What conclusions about program performance are justified based on the available evidence?
- Are changes in injury recidivism correlated with Wraparound services?
- Which Wraparound services are most correlated with reduction in violent injury recidivism?
- What areas of Wraparound need strengthening, improvement, or adjustment?
- Is this a model that can be used in other communities?
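Standards of the kind listed in the box lend themselves to a simple automated check. The sketch below compares measured values against the stated thresholds; only the thresholds are taken from the box, and all measured figures are hypothetical:

```python
# Performance thresholds drawn from the box; names are shorthand labels.
standards = {
    "screening rate": 0.80,         # eligible trauma patients screened
    "services offered rate": 0.90,  # high-risk patients offered Wraparound
    "enrollment rate": 0.50,        # approached clients who enroll
    "needs met rate": 0.75,         # identified client needs addressed
}

# Hypothetical program data, invented for illustration.
measured = {
    "screening rate": 0.84,
    "services offered rate": 0.91,
    "enrollment rate": 0.46,
    "needs met rate": 0.78,
}

def evaluate(measured, standards):
    """Return, for each criterion, whether the measured value meets its standard."""
    return {name: measured[name] >= threshold
            for name, threshold in standards.items()}

results = evaluate(measured, standards)
for name, met in results.items():
    print(f"{name}: {'met' if met else 'NOT met'}")
```

With these invented figures, the program would meet three of the four standards and miss the enrollment target, flagging enrollment as the component needing adjustment.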
++
Program evaluation focuses on two broad areas: formative evaluation and summative evaluation (the latter including results, impact, and outcome evaluation).29 Addressing these two areas often requires a combination of qualitative and quantitative evaluation metrics. Formative measures may include measurable goals inherent to the program’s development, such as acquisition of human and capital resources or construction of program components. Summative measures are specific to the delivery of the program, the outcomes of interest that the program is intended to influence, and the impact the program is having on the community.29 They can be a combination of process measures, impact measures, and outcome measures.30 Process measures assess the success of delivering the program’s services to the intended community; for a falls prevention program, an example would be the number of people educated or the number of handrails installed in response to program initiatives. Impact measures evaluate the immediate, short-term effects of the program; for a falls prevention program, this could be an increase in knowledge about falls prevention among participants. Outcome measures for the same program would include the number of fall events or fall-related injuries seen before and after implementation. These measures should be selected early in program development, so that evaluation is built into the program’s infrastructure.
++
For injuries, data sources used to evaluate outcome measures vary in terms of capture and resource expenditure (Fig. 3-1). Well-designed community-based surveys with sophisticated sampling strategies are costly and not sustainable on a longitudinal basis, but they provide a good estimate of injury incidence per population. Trauma registries are extremely useful tools for injury surveillance and are present in all level-I trauma centers; however, the population captured is subject to selection biases based on severity and geography. Understanding the nature of the data and its inherent selection biases is critical to building and interpreting an injury surveillance system.
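The process/impact/outcome distinction for the falls prevention example can be made concrete with a small sketch. All counts and scores below are hypothetical, chosen only to show how the three measure types sit side by side:

```python
# Hypothetical data for a falls prevention program (illustration only).

# Process measures: delivery of the program's services to the community.
process = {"people_educated": 420, "handrails_installed": 130}

# Impact measures: immediate, short-term effects (e.g., knowledge gain).
impact = {"knowledge_score_pre": 54.0, "knowledge_score_post": 71.0}

# Outcome measures: fall-related injuries before vs. after implementation.
falls_before = 96
falls_after = 63

def percent_reduction(before, after):
    """Percent reduction in events from the pre- to post-implementation period."""
    return 100.0 * (before - after) / before

print(f"Fall-related injuries reduced by "
      f"{percent_reduction(falls_before, falls_after):.1f}%")
```

Tracking all three layers together is what lets an evaluator distinguish a program that was delivered but had no effect (good process, flat outcomes) from one that worked as intended.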
++
++
The steps in program evaluation have been well-defined by the United States Centers for Disease Control and Prevention, among others (Fig. 3-2).31,32,33 The six steps are as follows:
++
1. Engage stakeholders (people involved in program operations, the population served, funders, and others affected).
2. Describe the program (needs assessed, expected effects, context, logic model/theoretical framework, activities).
3. Focus the evaluation (identify the measurable effects that are most important to stakeholders).
4. Gather the evidence (indicators, data sources, quality and quantity of data available).
5. Justify conclusions (link conclusions to the evidence gathered and assess them against the predetermined standards set by stakeholders).
6. Share lessons (dissemination of results and lessons learned improves the likelihood that the program will be used).
++
++
The standards underlying these six steps strive to ensure that program evaluation is useful to users and stakeholders, ethically conducted, accurate, and feasible.31,32 Ultimately, program evaluation should improve the program, identify its successful and unsuccessful components, and ensure that we invest in strategies that work while discontinuing those that do not.
++
For those interested in a more comprehensive review of program evaluation or further in-depth reading on the techniques and tools of program evaluation, references are provided.29,31,32,33,34