IDT Reflections – IDs are the SMEs of the delivery of the content

I truly believe IDT is still an emerging field despite its lengthy history.  This is evident to me because most of the literature on IDT centers on just three industries – military, healthcare, and higher education – despite the field’s ability to benefit nearly any industry once practitioners there learn about it.

In fact, one of the most humbling experiences happened when my mom asked me to describe what I do so she could explain it to her friends.  She didn’t want to say, “Well, she is finishing her Master’s next May in Instructional Design & Technology. No, she’s not a computer designer; no, she’s not a teacher; no, I did not mean Industrial (for context, I have a B.S. in Industrial Engineering).  She, well, she, she …”  I can appreciate that, and at the same time take note that our field clearly has not broken into the mainstream quite yet.  Given this, I helped her understand that our field is growing and that what I am really interested in doing is 1) working with instructors to build more interactive, learner-driven classes and 2) developing online learning modules, both with a focus on adult learners.  An example I gave her to use is that oftentimes instructors are the SMEs on the content while IDs are the SMEs on the delivery of the content. BAM! I saw the light bulb go off!

Afterward, Mom, Dad, and I had a great conversation about how IDs could be used in trials to help lawyers educate the jury on the specific key points they want the members to take away from the arguments, and how IDs could be used in advertising.  Both ideas are highly divergent from the roots of ID, but after a lengthy debate/conversation, I could see how IDs could use their skills in both of these fields.  Food for further thought, I guess…

Making Connections

So I am writing to you from Eindhoven, Netherlands!  It has been a great trip, and one of the highlights was meeting a fellow named Jan (pronounced “Yawn” – the Dutch form of John).  I had the pleasure of meeting him at Fontys, a college here in Eindhoven specializing in training teachers of primary students (ages 5–12).  He is the regional Innovation Manager across the Netherlands (working in five colleges, mostly in the southeast).  We discussed online learning, similarities and differences in adult education, and the software we use while teaching/learning.  The whole experience was delightful and carried on for 2.5 hours (more than I anticipated, but every minute was well worth it).  I mention all of this because I truly enjoyed hearing about the assessment/evaluation protocol they use in their program.

360 Review – Not Unlike Job Performance Assessments

In the Fontys program he is leading, new teachers receive formative feedback almost instantaneously and from various sources, including the instructor, a mentor, and at least one peer reviewer.  Each of these assessors reviews the draft work product and provides in-depth feedback on its delivery, content, and overall consistency with the aims of the program.  The new teachers can then make the necessary adjustments to their work product before submitting it to the instructor for final review.  I really liked this idea because the main focus was on the learner producing an outstanding work product, not just receiving a passing grade on the assignment.  Jan pointed out that no one would even consider submitting a work product for final review until their reviewers had essentially given them a “great job” in the feedback – thus there are no surprises when students submit their work for review.  They already know they have done well and are proud of the work they have produced.

I really like this way of centering on the learner and teaching them how to seek feedback, give feedback, incorporate feedback, and, lastly, take more ownership of their own studies – translating it more into actual work practices (rather than keeping it solely in academia).  I know that I will have more thoughts/ideas around this; however, I am still processing our conversation and wanted to share it while it was still fresh.

Formative Feedback vs Formative Evaluation

It appears that this is a recurring theme in my evaluation exploration…

What is the difference between formative feedback and formative evaluation?

And to me the difference is clear: who is receiving the attention and when it is given.

I’ll elaborate. Since I am a student, and because all of my formal studies have suggested I do so, I will explain this from the perspective of the learner.

Formative Feedback – information communicated to the learner by the instructor while the learner is applying new information, intended to modify the learner’s thinking or behavior for the purpose of improving his/her understanding.

Conversely…

Formative Evaluation – information communicated to the instructor by the learner after a lesson ends, intended to modify the instructor’s approach to teaching the material.

So while they sound very similar, these terms apply to separate audiences for very different purposes.  Hopefully this helps to cement your understanding – but if it doesn’t, please provide me with some (choose one: FEEDBACK or EVALUATION) to help me modify my explanation. 🙂

From 3rd Person to 1st Person

Providing formative feedback during first-hand experiences, especially for processes, may be easier than you think with the introduction of Google Glass.

Google Glass can be used to record instructors demonstrating a process, and learners can then replay the video while attempting the new process first-hand – thus making the transition to a first-hand experience smoother.

For example, watch this video on how to intubate a patient using Google Glass.

It would be interesting to assess students who are learning a new procedure (be it intubating a patient, operating a piece of lab equipment, or changing a tire) via an interactive video shot from a first-person perspective using Glass. Ideally, this would help users form a more concrete understanding of how to proceed before an actual hands-on experience.

Evaluating Video Clips

This week I read an interesting article: “Do medical students watch video clips in eLearning and do these facilitate learning?” by Kalle Romanov and Anne Nevgi.

The article found that:

Almost 20% of third-year medical students neglected video clips as a multimedia learning tool.

Female medical students more actively used multimedia content in eLearning.

Video-watchers more frequently used the collaborative discussion tools.

Students who watched video clips were more active in using collaborative eLearning tools and achieved higher course grades.

I can say from personal experience that I actively participate in all my online courses and discussion boards, which I think contributes greatly to my overall comprehension of the material presented.  It is sad to me that, even though those who watched the video clips did in fact show greater learning (through evaluation, of course), 20% of the students did not even attempt to watch the clips. I wonder whether, if the study were conducted again, there would be 1) increased collaboration and multimedia interaction and 2) decreased gender differences, as I would anticipate that the general audience participating in eLearning modules may have changed even in just seven years.

Additionally, given this week’s reading material on Problem-Based Learning (PBL), I was excited to see it referenced in the article:

A recent study showed that medical students regarded online discussion as useless when integrated with face-to-face contact in tutorial problem-based learning (PBL) groups (de Leng et al. 2006).

Other Timelines

I love seeing everyone’s work and learning from it. In case you do too, I have included some other people’s timelines (no guarantees the links will work forever).

Evaluating my Evaluation Discussion

As I associate most with Kirkpatrick’s four levels of evaluation, I will reflect on last week’s discussion.

LEVEL 1 – REACTION

I believe the class enjoyed the discussion, particularly my example case study, where I tried to give an example of each evaluation model discussed in the book using how we evaluate instructors and courses here at GSU.  I base this on the positive comments given at the end of the presentation and the participation throughout.

LEVEL 2 – LEARNING

While I did not implement an assessment at the end of the discussion, I did ask the class to jot down the key points they took away from the reading and the discussion.  Although very few people participated in the exercise – an indication that I may have lost some of them – the ones who did answer the question touched on the main points we discussed as a group (norm- vs. criterion-referenced testing, formative vs. summative evaluation, and the five evaluation models presented).

LEVEL 3 – BEHAVIOR & LEVEL 4 – RESULTS

These cannot and will not be measured – as with most studies, I will not cover these most crucial aspects due to cost and time constraints.

All that said, I did enjoy the discussion and look forward to the two discussions my classmates are leading tonight!

Discussing Evaluation Tonight

In tonight’s class I will be leading a discussion on the latest reading from our text, Trends and Issues in Instructional Design and Technology, on “Evaluation in Instructional Design: A Comparison of Evaluation Models.”  I chose this topic because I was fascinated by criterion-referenced testing.  It hadn’t occurred to me that there would be any other method – despite having taken a norm-referenced test or two in my lifetime :::cough::: SAT :::cough:::

If you are really interested in seeing what I will be presenting tonight you can preview it here.

Wish me luck! 🙂