Evaluating ourselves to death. An alternative way to compare on- and offline learning.
Author(s):
Conference:
ECER 2009
Format:
Paper

Session Information

11 SES 03 A, E-learning and informal learning

Paper Session

Time:
2009-09-28
14:00-15:30
Room:
HG, HS 46
Chair:
José Cajide

Contribution

Media-based learning and e-learning are currently topical. Current trends focus on cooperative and collaborative learning, reflecting the importance of Web 2.0 applications and media literacy. Institutions of higher education seem to have accepted this challenge in recent years: online and blended learning arrangements have become important parts of the curriculum, with great potential. Interaction and activity rate highly, and formal learning is becoming more and more informal. As this is an antagonism, didactical issues and practical benefits remain ambiguous. It is still relevant to evaluate and measure the added value of learning scenarios in order to develop concepts for the use of new media. To measure the success of e-learning, comparison studies are used in most cases: one setting is compared to another. But are such comparisons useful? We are "evaluating ourselves to death", considering the surge of evaluation studies conducted with various designs, various claims and definitions, and various topics. One part of these studies reports no significant difference between the settings, another finds that e-learning is more successful, while a third concludes that the best outcomes are achieved with traditional learning. Thus, reliable conclusions are not possible. Even worse, in the end it is not even clear what learning outcome really is, as the definitions vary across the plurality of studies (cp. Preussler & Baumgartner 2006). So, what makes learning successful, and how can this be detected? In education this is evaluated by measuring the learning outcome, which becomes an indicator for the construct of learning quality; but learning outcome is itself a construct.
Secondly, there is another problem: when comparing the learning outcomes of two (or more) different settings, the same assessment is usually used for both groups, which carries the risk of capturing only one specific dimension of learning. In this work a meta-evaluation was conducted in order to find out how this problem is dealt with in practice (Preussler 2008). Evaluation studies on the relationship between e-learning and learning outcome, and on the influence of e-learning respectively, were analysed and compared. One hypothesis was that learning outcome cannot be operationalised clearly and that unspecific comparisons of e-learning and traditional learning cannot be conducted in an unrestrictedly meaningful way.

Method

In order to analyse and evaluate existing studies, a meta-evaluation was conducted. This was done as a review (cp. Cook & Gruder, 1978:17), which did not explicitly focus on the representativeness of the single results but documented the differing perceptions of learning outcome and their operationalisation and measurement. After the specification of the relevant criteria, 11 primary studies were included in the meta-evaluation. The coding was done using the methods of deductive content analysis (cp. Mayring, 1993; Widmer, 1996:64).

Expected Outcomes

While reasonable and practical procedures lead to good feasibility of the single studies, the studies score far worse with regard to accuracy (validity and reliability) and especially to the adequacy of research decisions. Different conditions were either measured with the same tests, or, where the learning objectives were properly tested at the same level of cognitive processes, the design of the assessment was not related to the intended learning objectives. For future studies comparing the outcome of e-learning to traditional learning, the following recommendations are given:
1. The intended learning objectives of both groups should be at the same level of cognitive process dimensions (Anderson & Krathwohl 2001).
2. The assessment should be adequate to these objectives.
3. The design of the assessment should correspond to the learning arrangement in the test groups.

References

Anderson, Lorin W. & Krathwohl, David R. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Addison Wesley Longman.
Cook, Thomas D. & Gruder, Charles L. (1978). Metaevaluation research. Evaluation Quarterly, 2, 5-51.
Mayring, Philipp (1993). Qualitative Inhaltsanalyse. Grundlagen und Techniken [Qualitative content analysis: Foundations and techniques]. 4th, expanded edition. Weinheim: Beltz.
Preußler, Annabell (2008). Wir evaluieren uns zu Tode. Möglichkeiten und Grenzen der Bewertung von Online-Lernen [We are evaluating ourselves to death: Possibilities and limits of assessing online learning]. [WWW document: http://deposit.fernuni-hagen.de/505].
Preussler, Annabell & Baumgartner, Peter (2006). Qualitätssicherung in mediengestützten Lernprozessen – Zur Messproblematik von theoretischen Konstrukten [Quality assurance in media-supported learning processes: On the measurement problem of theoretical constructs]. In: Sindler, Alexandra (Ed.), Qualitätssicherung im eLearning. Reihe Medien in der Wissenschaft. Münster: Waxmann, pp. 73-85.
Widmer, Thomas (1996). Meta-Evaluation: Kriterien zur Bewertung von Evaluationen [Meta-evaluation: Criteria for assessing evaluations]. Bern: Haupt.

Author Information

University of Duisburg-Essen
Mediendidaktik und Wissensmanagement
Duisburg
