Please use this reference to cite this resource: doi:10.22028/D291-40765
Titel: Comparing Two Subjective Rating Scales Assessing Cognitive Load During Technology-Enhanced STEM Laboratory Courses
Author(s): Thees, Michael
Kapp, Sebastian
Altmeyer, Kristin
Malone, Sarah
Brünken, Roland
Kuhn, Jochen
Language: English
Journal: Frontiers in Education
Volume: 6
Publisher/Platform: Frontiers
Year of publication: 2021
Keywords: cognitive load
differential measurement
rating scale
validity
split-attention effect
STEM laboratories
multitrait–multimethod analysis
DDC subject group: 370 Education
Document type: Journal article
Abstract: Cognitive load theory is considered universally applicable to all kinds of learning scenarios. However, instead of a universal method for measuring cognitive load that suits different learning contexts or target groups, there is a great variety of assessment approaches. Particularly common are subjective rating scales, which even allow for measuring the three assumed types of cognitive load in a differentiated way. Although these scales have been proven to be effective for various learning tasks, they might not be an optimal fit for the learning demands of specific complex environments such as technology-enhanced STEM laboratory courses. The aim of this research was therefore to examine and compare the existing rating scales in terms of validity for this learning context and to identify options for adaptation, if necessary. For the present study, the two most common subjective rating scales that are known to differentiate between load types (the cognitive load scale by Leppink et al. and the naïve rating scale by Klepsch et al.) were slightly adapted to the context of learning through structured hands-on experimentation, where elements such as measurement data, experimental setups, and experimental tasks affect knowledge acquisition. N = 95 engineering students performed six experiments examining basic electric circuits in which they had to explore fundamental relationships between physical quantities based on the observed data. Immediately after the experimentation, the students answered both adapted scales. Various indicators of validity were analyzed, considering the scales' internal structure and their relation to variables such as group allocation (participants were randomly assigned to two conditions with contrasting spatial arrangements of the measurement data). For the given dataset, the intended three-factorial structure could not be confirmed, and most of the a priori-defined subscales showed insufficient internal consistency.
A multitrait–multimethod analysis did not provide sufficient convergent and discriminant evidence between the scales. The two contrasted experimental conditions were expected to result in different ratings for extraneous load, a difference that was detected by only one of the adapted scales. As a further step, two new scales were assembled based on the overall item pool and the given dataset. They revealed a three-factorial structure in accordance with the three types of load and seemed to be promising new tools, although their subscales for extraneous load still suffer from low reliability scores.
DOI of the first publication: 10.3389/feduc.2021.705551
URL of the first publication: https://doi.org/10.3389/feduc.2021.705551
Link to this record: urn:nbn:de:bsz:291--ds-407650
hdl:20.500.11880/36636
http://dx.doi.org/10.22028/D291-40765
ISSN: 2504-284X
Date of entry: 20-Oct-2023
Faculty: HW - Fakultät für Empirische Humanwissenschaften und Wirtschaftswissenschaft
Department: HW - Bildungswissenschaften
Professorship: HW - Prof. Dr. Roland Brünken
Collection: SciDok - Der Wissenschaftsserver der Universität des Saarlandes

Files in this record:
File: feduc-06-705551.pdf
Size: 1.37 MB
Format: Adobe PDF


This resource was published under the following copyright terms: Creative Commons license