This class took a favorable turn for me in Week 5, when we reviewed the Objectives/Outcomes worksheets. Reading Chapters 3 and 4 of Cennamo and Kalk's book, "Outcomes" and "Assessments," made the concept of writing learning outcomes seem easy. My immediate takeaway was to make sure the learning outcomes speak to the expected post-training performance rather than the in-training expectation. Another important takeaway was to avoid verbs that cannot be measured, such as "understand" and "know" (Cennamo & Kalk, 2019, pp. 69-70). While the task seemed conceptually easy, the actual practice was a bit challenging. I appreciated the opportunities that the ABCE Instructional Objectives, Parts of Objectives, and Identify Learning Outcomes activities afforded me. I thoroughly enjoyed reviewing each question of each exercise with Dr. Richardson and my classmates. Besides being able to gauge my understanding by comparing my answers against the correct ones, I gained insight into my classmates' reasoning through the classroom discussions. I challenged myself to practice at work by renaming and rewriting some learning outcomes. As I tried to rewrite some of the course outcomes, I was able to identify a gap in our assessments: I could not write a measurable outcome because we did not have robust tools to check for understanding. This rewriting difficulty supported the Essential Triangle of Instructional Design (Cennamo & Kalk, 2019, p. 10), in that outcomes must align with assessments and activities.
The next week brought the opportunity to develop learning assessments. My immediate takeaway was that, in some instances, it is okay to write the learning assessment before writing the learning outcome. The idea that the chosen assessment type should replicate the actual environment and support the learning outcome seems like common sense. Cennamo and Kalk (2019) said it best: the test format should not interfere with the learner's ability to demonstrate mastery of the skill (p. 73). However, it is common for learning teams to do "what we have always done" and rely only on multiple choice or gamification rather than practical application. It is also essential to find different ways to test for understanding; it is easy to fall into a rut, especially when the material is the same but the people are different. So, I appreciated the "Dipsticks: Efficient Ways to Check for Understanding" resource. I was pleasantly surprised to see one idea that I already use in training, the Advertisement: after teaching product knowledge, I have the class create and present advertisements. Other possible assessments that I will try are the 3-2-1 and the Color Codes. Given the length of our training period, our most utilized assessments are Kahoot games and role-plays. However, as I attempted to rewrite our learning objectives, it occurred to me that Kahoot is not representative of the working environment. While Kahoot is useful for post-module reinforcement, it is vital that the final exam be a simulation of the role.
While the case studies and activities afforded several practical applications, there are some themes I would like to develop further. An example is the instructional sub-skills analysis from Case Study #3. Although I performed the task, I am not quite sure where it fits in the "checklist" of the design process. I fear that the entire picture will not come together until the final project, when it will be too late. I always try to apply the material to my current role, but I do not see how this process fits into new-hire corporate training. As I write this, though, I realize that one application could be a partnership with the hiring team: we could conduct a sub-skills analysis to identify viable candidates. Still, this is a concept I would like to see in more practical application, showing how the analysis feeds into the rest of the design process.