Name: Jonathan Pratt
Title: Problem-based Learning in Higher Education STEM Courses in Disparate Nations.
Introduction:
It is commonly accepted that learners flourish in an active, engaging environment. All else being equal, an authentic learning situation that approximates the actual goal task is normally superior to a purely declarative or rote learning process, in terms of both learner engagement and expected learning outcomes. Problem-based Learning (PBL) emerged in the 1960s as a move toward a more learner-centric model of instructional design. It was originally developed to give students authentic tasks, on the assumption that tasks should be as close to real-world situations as possible, including the possibility that they are initially ill-defined. To examine the relationship between active learning and Problem-based Learning, and the resulting quality of outcomes, two case studies are presented here.
Overview of the case(s):
Ting et al (2019) examine the question of active learning as an outcome of PBL very directly, while treating improved performance as a natural result of increased active learning. They do this with a calculus course at a Hong Kong university. The course typically has low levels of learner engagement in a traditional, lecture-based environment. Through PBL and gamification, they increase engagement and thus improve learner outcomes. Indeed, given the way Ting et al operationalized learner activity, as time spent in active learning and with active engagement, one might think it would be hard for student outcomes not to improve if those factors increased. Although they used objective metrics to assess learning (concept inventories and exam scores), Ting et al's measurement of the engagement and activity factors hinged on student self-assessment.
Meanwhile, Yadav et al (2011) criticize studies of PBL for focusing largely on student surveys and self-assessments. Instead, they focus on objective performance measures, to some degree in opposition to students' own reports of how well they had learned. Yadav et al measure improved learning outcomes under Problem-based instruction compared to non-Problem-based instruction (via pre- and post-class quantitative assessments, comparing Problem-based and non-Problem-based instruction groups). Paradoxically, Yadav et al found that students believed, contrary to the evidence, that they learned more from the non-Problem-based activities than from the Problem-based ones.
Solutions Implemented:
The originating problem underlying Ting et al’s study is easily defined:
“Despite extensive evidence-based research on the benefits of active learning strategies on student achievement […] a large number of STEM instructors in Hong Kong do not put these strategies into practice due to lack of incentives and support from their tertiary institutions.” (2019)
They go on to describe a “competitive, exam-oriented” education culture dominated by lectures and expository teaching that place undue emphasis on abstract and declarative knowledge. Not surprisingly, Hong Kong students tend to find courses tedious, boring, and irrelevant. Thus, inattention, passivity, and “off-task” behaviors are common in the territory's classrooms in general. This problem does not stop at middle and high school, but persists into the university level (2019).
Ting et al approach the problem of active learning through both Problem-based Learning and gamification. This contrasts with the norm in Hong Kong, where instruction reflects a long-standing bias towards passive learning and teaching, delivered by teachers who mostly learned through those methods themselves. The drawbacks of passive approaches such as lectures and testing are general student boredom and lack of involvement (Ting et al, 2019).
One thing we can say about Ting et al is that they do not hold back in their use of tools to increase engagement. They combine a collaborative, game-based environment with Problem-based Learning and succeed in increasing classroom engagement. The main criticism of this case as an example is that the variables become confounded when so many tools are deployed at once, even if that deployment is successful. This is where Yadav et al become useful contributors.
Yadav et al take a slightly different approach, though still in postsecondary STEM education, by focusing almost entirely on objective learning outcomes. Because engineering at that time was commonly taught through traditional lectures, declarative testing, and somewhat inauthentic exercises meant to be combined later into knowledge of the target skill, some of the same problems of student disengagement noted by Ting et al may implicitly have been present in Yadav et al's classes as well. Indeed, Yadav et al say they were motivated by the then-contemporary trend toward more engaging and authentic learning. Their environment is thus friendlier to Problem-based Learning, and to innovation in general, than the rigid environment Ting et al describe in their paper. Yet Yadav et al still obtain similar results in terms of better learning outcomes, and they do so solely through Problem-based Learning, without the other variables present in Ting et al's 2019 study.
As I said earlier, the most interesting facet of Yadav et al's paper for comparison with Ting et al is that students do not rate their learning in Problem-based classes higher than in traditional classes. In fact, some of Yadav et al's subjects seem to prefer the traditional classes, at least as presented by their instructors, and find the Problem-based Learning too nebulous. Nonetheless, Problem-based Learning produces better results as measured quantitatively and objectively (Yadav et al, 2011).
Outcomes:
We see university STEM students measurably learning more from Problem-based Learning activities (as noted by Ting et al in 2019 as well as Yadav et al in 2011). We would expect this given greater learner engagement and more active student learning. However, it also seems that students' assessment of their own learning, based on confidence in what they know, may not be accurate at all. In fact, as Yadav et al (2011) find, students' perception of their own learning may well decrease during Problem-based Learning even as their objectively measured learning improves, as experimental subjects report:
“I felt at times I did not know much of what was going on [with PBL] because the students had to teach themselves the material and I was sometimes unclear on what the material was.”
Another student in Yadav et al’s (2011) experiment also complains:
“Doing the projects would have been much better if the [electronic] components had been explained in class. I had never before seen the components we were using and was unsure how to [electronically] connect them to solve our problems. Also, I had a hard time understanding what was going on in class when it was not being explained in class.”
These kinds of comments can be understood given the standard methodology of Problem-based Learning, in which students are presented with an authentic and potentially ill-defined problem. The authors of the study recognize this and conclude that some classes may require greater scaffolding. It is interesting, though, that the students appear to have some expectation as to what it feels like to learn or understand something, and might fail to recognize their own learning if those expectations are defied.
When Ting et al implement their Problem-based method, which is also gamified and collaborative, they see statistically significant increases in test performance and measured conceptual understanding. Additionally, in contrast to Yadav et al's groups, those Hong Kong students' self-reported levels of engagement are much higher. One very positive result Ting et al note is that students with less background knowledge were able to catch up with, or even exceed, the performance of students with more background knowledge in the subject matter.
Implications:
The clearest implication builds logically on what is already understood by most educators and instructional designers, particularly in the West. As I said in the introduction, it is rather uncontroversial common knowledge in the field of education that course designs promoting active and engaged students are superior to those that do not. Indeed, some sort of active process on the part of the learner, at the very least during points of evaluation, is required by most methods of teaching simply to determine that any learning has occurred at all.
Given the usefulness of active processes on the part of learners, instructional design methods that require or strongly encourage student engagement and active learning are preferable to those that do not. Problem-based Learning is one such method, and, as expected, it creates better learning outcomes. The increase in learning outcomes proved very strong in Ting et al's 2019 study, where the control group had an extremely unengaging course design and the experimenters included more than one element to increase engagement.
Gamification is sometimes used to create engagement and active learning by itself; indeed, some of its elements very much promote learner activity (see Toda et al, 2019, for a list of elements that could be considered “gamification”). On its own, gamification can leverage psychological factors such as competitive instincts or team involvement to create more engagement (Toda et al, 2019), but it may lack the authenticity called for by Problem-based instruction. Combining these two powerful methods, however, evidently creates a serious weapon in an instructional designer's arsenal. Gamification along with Problem-based Learning very effectively overcame deep inertia of learner disengagement in Ting et al's study (2019).
Given the potent combination of Problem-based Learning, a collaborative learning environment, and gamification used by Ting et al (2019), one might object that the variables are too muddled to draw any conclusions about Problem-based Learning alone from that study. However, Yadav et al's 2011 study provides a case of a clear increase in performance measures despite students' opinions of how well they learned in a Problem-based environment. In that case, even the variable of students' opinions of their own outcomes is independent of their objective learning outcomes in a Problem-based environment as compared to a non-Problem-based one.
References:
Ting, F. S. T., Lam, W. H., & Shroff, R. H. (2019). Active Learning via Problem-Based Collaborative Games in a Large Mathematics University Course in Hong Kong. Education Sciences, 9(3), 172. https://doi.org/10.3390/educsci9030172
Toda, A. M., Klock, A. C. T., Oliveira, W., Palomino, P. T., Rodrigues, L., Shi, L., Bittencourt, I., Gasparini, I., Isotani, S., & Cristea, A. I. (2019). Analysing gamification elements in educational environments using an existing Gamification taxonomy. Smart Learning Environments, 6(1). https://doi.org/10.1186/s40561-019-0106-1
Yadav, A., Subedi, D., Lundeberg, M. A., & Bunting, C. F. (2011). Problem-based Learning: Influence on Students' Learning in an Electrical Engineering Course. Journal of Engineering Education, 100(2), 253–280. https://doi.org/10.1002/j.2168-9830.2011.tb00013.x