Session Information
22 SES 12 A, Teaching and Learning: Students' Agency
Paper Session
Contribution
Scientific models are specialised representations of a concept, process, or phenomenon, constructed to illustrate, explain, or predict (Lee, Chang, & Wu, 2017). Educators and researchers agree that models are of great importance in the generation, evaluation, and communication of scientific knowledge, especially in science pedagogy (Krell, Upmeier zu Belzen, & Krüger, 2012; Speth et al., 2014). As a consequence, models have been included in the standards and required curricula for science at K-12 and university levels in multiple countries (AAAS, 2011; NGSS Lead States, 2013; UK Department of Education, 2015; KMK [Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der BRD], 2005).
Previous work on how context affects modelling has explored students' epistemological understandings of modelling using models that were presented to the students (Gobert et al., 2011; Krell et al., 2012). Students' explanations of the purpose of models in biology varied across contexts and also varied when students were presented with contextualised versus decontextualised models (Krell et al., 2012). Schwarz (2002) reported that aspects of meta-modelling knowledge also varied with context. However, we do not know how context influences the models that students themselves construct.
Having students construct models has an important consequence: constructed models have additional attributes that can be measured to gain insights into student thinking and reasoning that are not captured in narrative responses. Beyond the content of a model, its architecture can tell us about aspects of students' cognitive structures (CS; Ifenthaler, Masduki, & Seel, 2011). A cognitive structure is a mental framework that stores and connects information about concepts (Ausubel, 1963; Ifenthaler et al., 2011). Student-generated models are partial representations of their mental models, which, in turn, are products of their CS (Dauer, Momsen, Speth, Makohon-Moore, & Long, 2013; Ifenthaler et al., 2011). Changes in the architecture of models over time have been used to understand changes in CS and to demonstrate differences in expert and novice understanding of a biological system (Dauer et al., 2013; Hmelo-Silver & Pfeffer, 2004).
While considerable work has linked aspects of students' CS to the architecture of their external representations, we do not yet know whether model architecture is sensitive to item feature context. In this study, we ask whether item feature context (i.e., the variables in a question prompt) affects the architecture of students' constructed models of evolution by natural selection.
This study is a collaboration between researchers at Michigan State University (USA) and the Technical University of Denmark.
Method
This study is being conducted at a large, public university in the midwestern United States. Data for these analyses came from student responses in large introductory biology courses for majors covering the content domains of genetics, evolution, and ecology. We designed four isomorphic prompts with the following basal structure: “(Taxon) has (trait). How would biologists explain how a (taxon) with (trait) evolved from an ancestral (taxon) without (trait)?” The contextual features of the prompts varied in taxon (human vs. cheetah) and type of trait. We chose cheetahs and humans as focal taxa because prior studies have examined how students reason when the taxon is ‘cheetah’ (Nehm & Ha, 2011), whereas we have little information on their reasoning about ‘humans’. From the four prompts, we created two forms of the assessment, each containing two prompts that differed in taxon. To control for potential order effects, each form was further divided into sub-forms that differed in the order in which each taxon appeared. Each student provided model-based responses to two prompts (same type of trait, different taxa). We analysed the models for aspects of model architecture such as size (number of structures) and complexity (Web-Causality Index, WCI). We then analysed the data to explore the effects of prior achievement on variation in model architecture. We also intend to analyse the data to explore the effects of other demographic variables on the variation observed.
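To make the architecture measures concrete, the sketch below shows one way such metrics could be computed if each student model is encoded as a list of directed cause-effect links. This is a minimal, illustrative sketch only: the abstract does not reproduce the published WCI formula (see Dauer et al., 2013), so the complexity measure below is a stand-in branching proxy, and all function and variable names are hypothetical.

    # Illustrative sketch only. The complexity measure is a branching proxy,
    # not the published Web-Causality Index; names here are hypothetical.
    from collections import defaultdict

    def model_metrics(edges):
        """Compute size and a branching-based complexity proxy for a model.

        edges: list of (cause, effect) pairs, each naming a structure.
        Returns (size, complexity): size is the number of distinct
        structures; complexity is the proportion of structures that
        participate in web-like (branched) relations rather than a
        single linear chain.
        """
        structures = set()
        out_links = defaultdict(int)
        in_links = defaultdict(int)
        for cause, effect in edges:
            structures.update((cause, effect))
            out_links[cause] += 1
            in_links[effect] += 1

        size = len(structures)
        # A structure is 'web-like' if it branches: more than one
        # outgoing or more than one incoming causal link.
        webbed = sum(1 for s in structures
                     if out_links[s] > 1 or in_links[s] > 1)
        complexity = webbed / size if size else 0.0
        return size, complexity

    # Hypothetical student model of selection on cheetah speed:
    example = [("variation", "speed"), ("speed", "survival"),
               ("survival", "reproduction"), ("variation", "reproduction")]
    print(model_metrics(example))  # -> (4, 0.5)

Under this encoding, two models with the same number of structures (size) can still differ markedly in complexity, which is the distinction the WCI is meant to capture.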
Expected Outcomes
We have conducted a preliminary analysis of the responses obtained from one course. Our results indicate that contextual features elicit differences in model architecture. While context did not significantly affect model size, complexity did vary with context. Hmelo-Silver and Pfeffer (2004) found that while models constructed by experts and novices had almost the same number of structures, the experts' models had significantly more behaviours. This indicated that although the actual size of the CS was comparable in terms of number of structures, there were significant differences in the degree to which experts and novices connected those structures within their CS (Ifenthaler et al., 2011). Our results indicate that, despite being presented with the two prompts at the same time, students use a more novice-like approach when responding to prompts in which the context is human and reason in a more expert-like way when the context is non-human.

We also found that prior performance affected the degree to which context influenced student reasoning. Middle-achieving students constructed models that were unaffected by context in terms of both size and complexity. Students with lower GPAs showed the highest variation in complexity, had the highest mean complexity when responding to cheetah prompts, and constructed small models in both contexts. High-achieving students constructed models that were low in both size and complexity. Middle achievers are often the 'forgotten' tertile: resources are mostly directed towards high- and low-achieving students. However, our research indicates that this tertile behaves in a more expert-like manner and deserves additional investigation.

Our work furthers our understanding of seemingly insignificant factors (prompt contexts) that may influence student thinking in ways that obscure our interpretations of their understanding. This will be valuable in designing assessments that truly measure student understanding and in tailoring equitable instruction.
References
American Association for the Advancement of Science [AAAS]. (2011). Vision and change in undergraduate biology education: A call to action. Washington, DC.

Ausubel, D. G. (1963). Cognitive structure and the facilitation of meaningful verbal learning. Journal of Teacher Education, 14(2), 217–222.

Dauer, J. T., Momsen, J. L., Speth, E. B., Makohon-Moore, S. C., & Long, T. M. (2013). Analyzing change in students' gene-to-evolution models in college-level introductory biology. Journal of Research in Science Teaching, 50(6), 639–659.

Gobert, J. D., O'Dwyer, L., Horwitz, P., Buckley, B. C., Levy, S. T., & Wilensky, U. (2011). Examining the relationship between students' understanding of the nature of models and conceptual learning in biology, physics, and chemistry. International Journal of Science Education, 33(5), 653–684.

Hmelo-Silver, C. E., & Pfeffer, M. G. (2004). Comparing expert and novice understanding of a complex system from the perspective of structures, behaviors, and functions. Cognitive Science, 28(1), 127–138.

Ifenthaler, D., Masduki, I., & Seel, N. M. (2011). The mystery of cognitive structure and how we can detect it: Tracking the development of cognitive structures over time. Instructional Science, 39(1), 41–61.

KMK [Sekretariat der Ständigen Konferenz der Kultusminister der Länder in der BRD]. (2005). Bildungsstandards im Fach Biologie für den Mittleren Schulabschluss. München & Neuwied: Wolters Kluwer.

Krell, M., Upmeier zu Belzen, A., & Krüger, D. (2012). Students' understanding of the purpose of models in different biological contexts. International Journal of Biology Education, 2(2), 1–34.

Lee, S. W. Y., Chang, H. Y., & Wu, H. K. (2017). Students' views of scientific models and modeling: Do representational characteristics of models and students' educational levels matter? Research in Science Education, 47(2), 305–328.

Nehm, R. H., & Ha, M. (2011). Item feature effects in evolution assessment. Journal of Research in Science Teaching, 48(3), 237–256.

NGSS Lead States. (2013). Next Generation Science Standards: For states, by states.

Schwarz, C. V. (2002). Is there a connection? The role of meta-modeling knowledge in learning with models. In Keeping learning complex: The proceedings of the Fifth International Conference of the Learning Sciences (ICLS). Mahwah, NJ: Erlbaum.

Speth, E. B., Shaw, N., Momsen, J., Reinagel, A., Le, P., Taqieddin, R., & Long, T. (2014). Introductory biology students' conceptual models and explanations of the origin of variation. CBE Life Sciences Education, 13(3), 529–539.

UK Department of Education. (2015). National curriculum in England: Science programmes of study.