Brian Roberts
LT8000 F19 Mini-Project 4: Case Study
Background
History of Evaluation
The stages of project development, such as initiating, planning, and executing, are parts of a process meant to ensure that goals are met and stakeholders are satisfied. One major element of a project’s life cycle is evaluation. Numerous evaluation models and processes have been developed over the years, and within the field of instructional design, Michael Scriven contributed several key concepts. He coined the terms used to distinguish formative evaluation, conducted during a project, from summative evaluation, conducted at its end. Scriven defines evaluation as a process conducted to investigate the results of efforts such as instructional development (Johnson & Bendolph, 2018). Despite the age of many evaluation models, several remain widely used in projects within the fields of instructional design and human performance improvement. Daniel L. Stufflebeam’s CIPP model, developed during the 1960s, uses four types of evaluation, examining context, input, process, and product, as parts of one comprehensive evaluation. Within his CIPP model, Stufflebeam (2015) defines evaluation as “the systematic process of delineating, obtaining, reporting, and applying descriptive and judgmental information about some object’s quality, cost-effectiveness, probity, feasibility, safety, equity, and significance.”
Relevance to Design
Throughout the field of instructional design and technology, evaluation appears as an iterative component of many processes, such as the phases of ADDIE. The four parts of the CIPP model serve as questions that guide decision making throughout a project’s development, and they align closely with the phases of instructional design. Stufflebeam (2015) frames the process with four questions, asking the evaluator “[w]hat needs to be done? How should it be done? Is it being done? Is it succeeding?” The initial part of CIPP, context evaluation, examines goals, much like a needs assessment in the analyze phase. Input evaluation asks questions about planning that parallel the design and develop phases. Process evaluation asks about actions, matching the implement phase. Product evaluation examines outcomes through both formative and summative measures, corresponding to the evaluate phase.
Overview
The goal of this case study is to examine the importance and implications of the evaluation process as a critical element of the design process for instruction and performance improvement. The following sections address implications observed across several articles on the evaluation process in contexts such as training and academics. The implications reviewed form the basis for an analysis of a case study that utilized the CIPP model to evaluate a suicide prevention program developed by a government agency in Taiwan.
Implications of Evaluation
The process of evaluation can have several implications for all parties in a project, including the project team, stakeholders, and learners. Awareness of evaluation’s role across a project’s development phases and scope matters when determining success and the need for adjustments during and after development. A critical element of meaningful evaluation is the collection and organization of data; without useful data, it is difficult to measure a project’s effectiveness or worth. Key implications of conducting evaluations include determining whether outcomes are met, verifying that the resources necessary for success are in place, and assessing the efficiency and effectiveness of the product. For context, three of the four evaluation parts of the CIPP model are used below as points of comparison across cases of program evaluation.
Related to the process evaluation part of the CIPP model, Waters (2011) determined through summative evaluation that the design and delivery of a summer reading program were inefficient and produced no measurable improvement in participants’ reading skills. Surveys and log sheets collected from parents as part of the summative evaluation indicated that participation was lower than expected and that the training and participation requirements were too demanding for most families that initially signed up. The evaluation results directed adjustments made before a second iteration of the program was conducted. One takeaway is that regular communication with participants during the study would have revealed more about the program’s success and suggested interventions that could have increased participation (Waters, 2011).
Similar to the input evaluation part of the CIPP model, Burns (2015) examines the importance of data for evaluation when measuring the effectiveness of healthcare programs that collect client-specific data rather than program-level data sets. The struggle with such evaluation is that “single-subject data do not lend themselves easily to aggregation across groups to facilitate commentary on an entire program’s effectiveness” (Burns, 2015). To accommodate the diverse interests in a healthcare program, the evaluators worked with behavior analysts to create categories for organizing data before the program began, proactively preparing for future program-level decisions. This attention to planning parallels the input evaluation phase of CIPP and helps ensure accurate measurement of criteria throughout the phases of a design model such as ADDIE.
Utilizing the CIPP model in a recent evaluation of a nursing program, Lippe and Carter (2018) share a success story of following Stufflebeam’s model to gain insight into the effectiveness of a prelicensure nursing program, identifying redundancies, missing content, and learners’ ability to meet program outcomes. For their purposes, the CIPP model was chosen as a summative evaluation of the program. The product evaluation, conducted through formative and summative student surveys, provided the most insight for the nursing program. Results confirmed the program’s ability to support students appropriately and identified redundancies in the curriculum, as well as content issues that exposed variable mastery among students of end-of-life care skills for elderly patients. The product evaluations provided essential feedback to inform decisions to adjust the curriculum for further progress in an already successful nursing program (Lippe & Carter, 2018).
Case Study of Suicide Prevention Programming
As part of an initiative to reduce suicides in Kaohsiung City, Taiwan, a healthcare program recently utilized Stufflebeam’s CIPP model to evaluate its efforts three years after launching the Kaohsiung Suicide Prevention Center (KSPC). Ho et al. (2011), as the evaluation team, chose the CIPP model based on its longstanding popularity for program evaluation and used it to determine the effectiveness of efforts in Kaohsiung toward the main goal of lowering the suicide rate in Taiwan’s second-largest city.
The study followed all four parts of the CIPP model as a summative evaluation of the effectiveness of the KSPC’s programming. The context evaluation identified goals such as the need to reduce the suicide rate for Kaohsiung, which ranks higher than the national average for suicide as a cause of death in Taiwan. The input evaluation gathered information related to the establishment of the KSPC in 2006, including its resources, funding, and staffing for 2006, 2007, and 2008. Because the KSPC is modeled after Australia’s suicide prevention model, the process evaluation examined each of its three strategy levels: universal, selective, and individual. Universal strategies include educational handouts, the KSPC website and its activity, and suicide data analysis, all targeting the entire city through the center’s outreach efforts. Selective strategies include services available for the public to seek out, such as a 24-hour crisis line and gatekeepers from the medical services industry. Individual strategies are the direct interactions between the KSPC and clients: telephone follow-up visits, mental health referrals from gatekeepers, support for a more effective reporting system for suicide-related deaths, intervention methods, and expanded KSPC resources. The product evaluation focused on the three elements that proved most important among the initiatives: the crisis line, the reporting system, and client follow-ups.
The CIPP evaluation results included several discoveries that helped determine the program’s efficacy and supported decisions to maintain it and request continued funding. The context evaluation discovered that suicide rates, and suicide as a cause of death, were considerably underreported. This finding informed several of the KSPC’s goals and initiatives, such as improving the reporting systems used by medical professionals, and shaped the programming elements that were developed (Ho et al., 2011). The input evaluation reflected on the budget and financial needs for the KSPC to run effectively, noting the resourcefulness of partnerships created with local hospitals during the evaluation period; among input factors, funding was the greatest identified shortfall. Each of the three strategies within the process evaluation revealed successes and failures of program efforts, such as increased use of educational materials and services like the crisis line, and a slow start to the gatekeeper training program, which initially lacked incentives but managed substantial growth after the first year. The product evaluation identified program success in a decrease in reported suicides during the evaluation period alongside an increase in client use of services and educational materials.
Case Study Analysis
Keeping in mind the implications drawn from the referenced cases of program evaluation, the KSPC appears to have conducted an effective yet modified version of the CIPP evaluation model. Further data or analytics would better inform the context evaluation regarding the previously underreported suicide rates and incorrectly reported causes of death, and would also inform the input, process, and product evaluations. Additional perspective on the increased accuracy of reporting, compared against the impact of efforts to improve cause-of-death reporting, could inform the product evaluation more effectively. Ho et al. (2011) mention that one of the KSPC’s strategies is addressing the inaccurate cause-of-death reporting system, but an additional comparison of trends in estimated annual data against collected data as part of the process evaluation would give a better perspective on the reported success. Comparing trends against observed data may help in assessing the program’s strength not only in reducing the suicide rate but also in correcting reporting accuracy.
One weakness within the input evaluation is its focus on financial constraints and comparisons to other countries’ programs; it does not include the criteria outlined in Stufflebeam’s (2015) checklist for determining what inputs the program needs to function properly. One implication of evaluation is determining whether program outcomes are the result of efficient and effective delivery, and a weakness of the study is that the evaluators determine that the KSPC programming is effective but fail to determine whether it is efficient given the resources available. Within the product evaluation, Ho et al. (2011) state, “All four indicators are improving year by year. They show that the KSPC accomplishes targeted aims and improvements in all four indicator areas. Also, decreasing trends in suicide rates reveal the KSPC’s ability to provide effective interventions and its contribution toward decreasing suicide in high risk groups.” The sustainability and effectiveness evaluations are sub-parts of Stufflebeam’s (2015) CIPP model that examine whether a program can function efficiently over the long term; these additional sub-evaluations would have been beneficial in the KSPC evaluation to determine program success and to justify ongoing annual funding.
Summary
Overall, evaluation can have numerous goals and outcomes. Its most critical contribution is the insight gained to inform adjustments or to determine whether a project meets the needs of clients and stakeholders. Through years of adjustments and expansions to the CIPP model, Stufflebeam developed an evaluation framework still widely used decades after its inception. The discussion of other evaluation cases alongside the KSPC program shows that, while every evaluation has strengths and weaknesses, a properly conducted evaluation provides measured feedback on whether a program’s efforts yielded the results necessary to continue delivery, with adjustments where needed. The evaluation by Ho et al. (2011) may have a few small weaknesses, but it did appear to effectively inform stakeholders on the main goal: determining whether the KSPC is successful at reducing suicide as a cause of death within Kaohsiung, Taiwan.
References
Burns, C. E. (2015). Does my program really make a difference? Program evaluation utilizing aggregate single-subject data. American Journal of Evaluation, 36(2), 191–203. https://doi.org/10.1177/1098214014540032
Ho, W.-W., Chen, W.-J., Ho, C.-K., Lee, M.-B., Chen, C.-C., & Chou, F. (2011). Evaluation of the suicide prevention program in Kaohsiung City, Taiwan, using the CIPP evaluation model. Community Mental Health Journal, 47(5), 542–550. https://doi.org/10.1007/s10597-010-9364-7
Johnson, R. B., & Bendolph, A. (2018). Evaluation in instructional design: A comparison of the major evaluation models. In Reiser, R. A. & Dempsey, J. V. (Eds.), Trends and issues in instructional design and technology (4th ed., pp. 87-96). Retrieved from VitalSource.com.
Lippe, M., & Carter, P. (2018). Using the CIPP model to assess nursing education program quality and merit. Teaching & Learning in Nursing, 13(1), 9–13. https://doi.org/10.1016/j.teln.2017.09.008
Stufflebeam, D. (2015). CIPP Model Evaluation Checklist: A Tool for Applying the CIPP Model to Assess Projects and Programs [PDF file]. Retrieved from https://wmich.edu/evaluation/checklists
Waters, K. R. (2011). The importance of program evaluation: A case study. Journal of Human Services, 31(1), 83–93. Retrieved from https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,shib&db=eue&AN=67378218&site=ehost-live&scope=site&custid=gsu1