SP 2024: Learning analytics project in Secondary science education (Hyunkyu Han)

Title: Learning analytics project in Secondary science education

Authors: Lee, H. S., Pallant, A., Pryputniewicz, S., Lord, T., Mulholland, M., & Liu, O. L.

Selected Case: Automated text scoring and real-time adjustable feedback: Supporting revision of scientific arguments involving uncertainty. Science Education, 103(3), 590-622.

1. Introduction

Learning analytics is defined as “the measurement, collection, analysis, and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs” (Siemens & Long, 2011). In STEM education, learning analytics captures the richness of students’ responses, unveils the underlying mechanisms of learning, and generates visual presentations of complex datasets (Li & Lajoie, 2022). Text-based automated assessment and feedback systems perform automated grading, provide feedback, evaluate student understanding and engagement, maintain academic integrity, and assist in teaching evaluation. These systems support conceptual understanding by analyzing the content of students’ written responses, providing instructors with classifications by meaning, and promoting student reflection and metacognition. Additionally, such systems facilitate problem identification and troubleshooting through detailed knowledge of student responses, grades, and automated feedback (Gao et al., 2024).
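To make this mechanism concrete, the sketch below illustrates one common approach to automated text scoring: a supervised classifier trained on human-scored responses, which can then score new responses instantly. The data and model choices here are hypothetical illustrations, not the scoring engine HASbot actually uses.

```python
# A generic sketch of supervised automated text scoring: a classifier is
# trained on human-scored responses and then scores new responses instantly.
# All data below are hypothetical; this is not HASbot's actual scoring engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: student responses paired with human rubric scores.
responses = [
    "The water table drops because wells pump groundwater faster than rain refills it.",
    "Water goes down.",
    "Pumping lowers the aquifer, but the model may not capture seasonal recharge.",
]
human_scores = [3, 1, 4]

# TF-IDF features feed a logistic regression trained to mimic the human scores.
scorer = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
scorer.fit(responses, human_scores)

# A new response receives an immediate machine score, enabling real-time feedback.
print(scorer.predict(["Groundwater levels fall when extraction exceeds recharge."]))
```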

This paper examines the technology of automated assessment and formative feedback, focusing on HASbot, a system designed to support the writing of uncertainty-infused scientific arguments. HASbot has been integrated into a secondary Earth Science class, where it assesses students’ argumentation and uncertainty articulation and provides immediate scaffolds, effectively enhancing students’ achievement in writing scientific arguments and demonstrating the potential of automated feedback systems to improve learning outcomes (Figure 3).

2. Overview of the Case

In a secondary Earth Science class, students use data to develop scientific arguments, mirroring the work of environmental scientists. However, real-world data are often incomplete and uncertain, which complicates the argumentation process. To address this, an uncertainty-infused scientific argumentation framework, modeled on authentic environmental science practices, was created. Using professional-level data sourced either directly from scientists or generated by environmental models (Figure 1), students write and revise scientific arguments that address scientific questions. Students evaluate these data to determine their suitability as evidence for their arguments and consider the strengths and weaknesses of the data collection and interpretation methods. Students build their arguments in four parts: (1) claim, (2) explanation for the claim, (3) uncertainty rating, and (4) uncertainty attribution. This approach helps students link their evidence to their claims and critically assess their certainty. HASbot provides automated feedback after students submit their scientific arguments: it displays a score bar whose colors indicate the machine score (diagnostic) and provides a statement of what students should consider in order to improve the score (suggestive) (Figure 2). A sketch of this two-part feedback structure follows Figure 2.

 

[Figure 1] Sample task with water model

 

[Figure 2] HASbot’s feedback on four parts of the arguments
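The diagnostic-plus-suggestive pattern shown in Figure 2 can be summarized in a few lines of code. The score bands, colors, and messages below are hypothetical stand-ins for illustration, not HASbot’s actual rubric or wording.

```python
# An illustrative sketch of HASbot-style feedback: a diagnostic score bar color
# plus a suggestive revision prompt. Score bands, colors, and messages are
# hypothetical stand-ins, not the system's actual rubric or wording.
SCORE_COLORS = {0: "red", 1: "red", 2: "orange", 3: "yellow", 4: "green"}

SUGGESTIONS = {
    "claim": "Consider whether your claim directly answers the scientific question.",
    "explanation": "Consider citing specific data from the model to support your claim.",
    "uncertainty_rating": "Consider how well your evidence justifies your certainty level.",
    "uncertainty_attribution": "Consider which limitations of the data or model affect your certainty.",
}

def feedback(element: str, machine_score: int) -> str:
    """Combine a diagnostic color-coded score with a suggestive revision prompt."""
    color = SCORE_COLORS[machine_score]
    return f"[{color} score bar: {machine_score}/4] {SUGGESTIONS[element]}"

print(feedback("explanation", 2))
```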

 

3. Solutions Implemented

The study involved 343 secondary students enrolled in Earth Science and Environmental Science classes. The experimental module focused on freshwater availability and sustainability (the ‘water module’). Students engaged with the three-dimensional water model featured in the module and completed five to six questions in about 45 minutes. The water module included nine scientific argumentation tasks, each following the sequence outlined in Figure 3.

[Figure 3] HASbot Learning Process

 

The effectiveness of HASbot was evaluated through pre- and post-tests. A linear regression analysis was conducted to estimate the impact of HASbot, controlling for variables such as gender, language proficiency, and prior computer experience. Additionally, submissions and revisions were analyzed to identify patterns in student engagement with the argumentation tasks, and screencast videos were used to analyze students’ group work.
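For concreteness, an analysis of this shape could be expressed as follows. The file and column names (posttest, pretest, hasbot_revisions, gender, ell_status, prior_computer_exp) are assumed for illustration; the sketch mirrors the design of the analysis rather than reproducing the study’s exact model.

```python
# A hedged sketch of the regression described above, using statsmodels OLS.
# The file name and column names are assumptions, not the study's actual coding.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hasbot_students.csv")  # hypothetical dataset: one row per student

# Post-test performance regressed on HASbot engagement and covariates.
model = smf.ols(
    "posttest ~ pretest + hasbot_revisions"
    " + C(gender) + C(ell_status) + C(prior_computer_exp)",
    data=df,
).fit()
print(model.summary())
```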

 

4. Outcomes

[Table 1] Pre to post-test gains in uncertainty-infused scientific argumentation

 

The findings indicated significant gains from pre- to post-test, showing HASbot’s substantial impact on students’ achievement in uncertainty-infused scientific argumentation (Table 1). The large effect size of 1.39 in uncertainty attribution indicates a marked enhancement in students’ ability to assess and express uncertainty in scientific argumentation, a critical scientific skill. The linear regression analysis showed that the HASbot experience was a significant predictor of post-test performance in scientific argumentation: students improved their argumentation abilities regardless of gender, English language learner status, and prior computer-based science learning experience. Students revised their arguments more frequently when working with data they had generated themselves than with statistical data from models, and most revisions focused on the explanation or uncertainty attribution elements. With ongoing real-time feedback, students engaged in evidence-based reasoning as well as critical reasoning about the limitations of their evidence, and learned to articulate sources of uncertainty during argumentation.
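For readers less familiar with effect sizes, the sketch below shows how a standardized mean difference (Cohen’s d) is computed, assuming that is the metric behind the reported 1.39; the scores are invented for illustration and do not reproduce the study’s data.

```python
# A worked sketch of the effect-size computation, assuming the reported value
# is a standardized mean difference (Cohen's d). Scores below are hypothetical.
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Mean pre-to-post gain divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

pre = np.array([1.0, 1.5, 2.0, 1.0, 2.5])   # hypothetical pre-test scores
post = np.array([2.5, 3.0, 3.5, 2.0, 4.0])  # hypothetical post-test scores
print(round(cohens_d(pre, post), 2))        # prints 1.93 for these made-up data
```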

 

5. Implications

The deployment of HASbot, an automated feedback system, effectively enhanced students’ abilities in scientific argument writing regardless of their backgrounds. The system’s specific feedback, grounded in empirical evidence and theoretical frameworks, identifies areas needing improvement and raises students’ engagement and achievement with scientific arguments. The study concludes that automated scoring and feedback are valuable for educational progress, but it also stresses the importance of careful development and use of these technologies: accurate automated scoring models must be built on high-quality rubrics derived from human expertise. In conclusion, this study demonstrates the innovative potential of systems like HASbot in advancing STEM education. As learning analytics technologies evolve, it is crucial to enhance and adapt these systems to align with pedagogical goals in STEM education.

 

References

Gao, R., Merzdorf, H. E., Anwar, S., Hipwell, M. C., & Srinivasa, A. (2024). Automatic assessment of text-based responses in post-secondary education: A systematic review. Computers and Education: Artificial Intelligence, 100206.

Lee, H. S., Pallant, A., Pryputniewicz, S., Lord, T., Mulholland, M., & Liu, O. L. (2019). Automated text scoring and real-time adjustable feedback: Supporting revision of scientific arguments involving uncertainty. Science Education, 103(3), 590-622.

Li, S., & Lajoie, S. P. (2022). Promoting STEM education through the use of learning analytics: A paradigm shift. In Artificial Intelligence in STEM Education (pp. 211-224). CRC Press.

Siemens, G., & Long, P. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 46(5), 30.
