There is lots of news about the upcoming 2016 Summer Olympic and Paralympic Games. Will Rio be ready? Will there be doping sanctions? Which athletes will represent the United States? After the CDC posted a Level 2 Alert in response to the threat of Zika, stellar US athletes in golf and soccer gave notice that they were not going to Rio. How will other countries respond to various health alerts? Will it really be a competition among the world’s top athletes?
I flash back to freshman orientation. My first day on campus, I head over to the natatorium to meet Dutch, the coach of the women’s swim team. As the team gathers, I realize that I am the only member of the women’s collegiate swim team who is totally and completely ‘green’. I’d taught every level of Red Cross and YMCA swimming lessons, was an experienced lifeguard, and had been swimming almost since I could walk. But I’d never swum in a meet…never even attended one.
The college had just hired Dutch, whose dream was to build an Olympic-caliber team. To this day I have no idea why he gave me a swimming scholarship, but he did. One of my roommates was an Olympic gold medalist in the breaststroke. Another held a national record in the backstroke. Virtually every other member of the team was at least a national record holder, and all were internationally recognized. I became their unofficial mascot. When I swam in a meet, they clapped because I finished! I would not be surprised if my record still stands for the slowest collegiate 50-yard crawl. The data was clear. Dutch knew that my talents lay in teaching, not racing, so he shifted me to leading swimming classes. There I excelled; truly, a much better fit.
Back to program evaluation. Whether for a collegiate swimming program, an Olympic training program, or the Olympics themselves, data drives decisions. Ongoing, rigorous evaluation of training, individual performance, facilities, and more is the key to success. Program evaluation is a process of data collection conducted according to a specific set of guidelines. Performance data is collected, analyzed, and interpreted, and conclusions are drawn. Winners are announced and awards bestowed. There are tears of joy and groans of disappointment. One becomes a gold medal winner. Another becomes a superb coach. Participants move forward or are redirected. The foundation for excellence is good data that is regularly collected and analyzed, then fed back into the system for performance improvement. It is all about program evaluation.
Using Patton’s approach to utilization-focused evaluation, the CDC defines program evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development.” Further, the CDC writes: “Program evaluation does not occur in a vacuum; rather, it is influenced by real-world constraints.”
The NNPRFTC’s Accreditation Standard 3 – Evaluation is focused on how NP postgraduate training programs use systematic formative (ongoing) and summative (final) data collection to conduct quality assurance, assess training effectiveness, communicate, and build capacity. An important component of evaluation is that the findings should be clearly integrated into a programmatic feedback loop that moves the program forward. For example, information about the effectiveness of individual clinical preceptors should be shared with the individual preceptor and incorporated into that person’s continuing education and/or performance improvement plans.
Standard 3 has 11 evaluation elements that cover institutional, programmatic, trainee, instructor and staff performance. Specifically, there are components for: program curriculum; trainee performance, feedback, and remediation as necessary; clinical faculty/instructor and support staff performance, feedback, and remediation as necessary; adequacy of organizational support including operations and finances; and overall programmatic self-evaluation including outcome measures and corresponding action plans.
Drawing again from the CDC guide on program evaluation: “What distinguishes program evaluation from ongoing informal assessment is that program evaluation is conducted according to a set of guidelines. With that in mind … (e)valuation should be practical and feasible and conducted within the confines of resources, time, and political context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness.”
I’d like to share one more quote on the importance of program evaluation from Research to Results Briefs: “While conducting an evaluation may seem complicated, expensive, or even overwhelming, it is important to remember that program evaluations serve as tools to improve programs. Simply put, program evaluations are conducted to make programs better. Evaluations benefit programs at every stage of implementation. For start-up programs, evaluations can provide process data on the successes and challenges of early implementation; and, for more mature programs, evaluations can provide outcome data on program participants. While evaluation is not without challenges, the information obtained from a program evaluation can help to streamline and target program resources in the most cost-efficient way by focusing time and money on delivering services that benefit program participants and providing staff with the training they need to deliver these services effectively. Data on program outcomes can also help secure future funding.”
In closing, adapting the 5 reasons identified by the U.S. Department of Health and Human Services, Administration for Children and Families, program evaluation contributes to program success because:
- Program evaluation can find out “what works” and “what does not work.”
- Program evaluation can showcase the effectiveness of a program to the community and to funders.
- Program evaluation provides timely, targeted feedback to program participants.
- Program evaluation can improve staff’s front line practice with participants.
- Program evaluation can increase a program’s capacity to conduct a critical self-evaluation and plan for the future.
Next, we will explore how to create a reliable and valid system for program evaluation.
Wrapping up with George Bernard Shaw’s take on why ongoing measurement is essential: “The only man who behaves sensibly is my tailor; he takes my measurements anew every time he sees me, while all the rest go on with their old measurements and expect me to fit them.”
Until next time,
Candice