Session Information
09 SES 01 B, E-Assessment and Model-Test Critically Discussed
Paper Session
Contribution
E-learning represents an evolution in how education is conceived, driven by technological capabilities for which researchers must design new theories that allow its full potential to be realised for users. In higher education, institutions all over the world now offer instruction through various systems of distance learning. While in this article we view e-learning as an agent of innovation in the full sense of the word, theories of teaching and learning created specifically for face-to-face education are still much in evidence, adapted rather than re-invented for virtual settings.
One area of e-learning in which we have detected particular difficulty in terms of innovation is assessment. While research in recent years has increased notably throughout the world, developing new models and objects of assessment, much remains to be done to foster innovation in e-assessment, given that, as Bevitt (2014) affirms, there is a significant gap in the literature.
In any case, until 2002, in-depth studies of the possibilities for innovation in e-assessment were scarce, and those that did exist focused more on the classification of assessment into formative and summative (Birnbaum, 2001; Scriven, 1996), as in the case of the Input-Process-Output model (Mehrotra, Hollister & McGahey, 2001) and others dealing with similar elements (Stufflebeam, 2000; Rockwell, Furgason & Marx, 2000; Potts & Hagan, 2000; Forster & Washington, 2000). At a later stage, Moore, Lockee & Burton (2002) defended “formative discussion”, along with models such as the design review, expert review, small-group review, fieldwork and process review models. However, as we have indicated, none of these earlier studies pointed to the possibility of assessment without the physical presence of the student, still less on the basis of formative, continuous assessment with all of the guarantees demanded of knowledge testing in higher education, at least in the humanities and social sciences.
Our study concurs with Gikandi, Morrow & Davis (2011) in the view that e-assessment has characteristics distinct from those of face-to-face contexts, owing in particular to the asynchronous nature of online interactivity. For this reason, it demands from instructors a re-thinking of online pedagogy, aimed at developing effective strategies of formative assessment. It involves, as noted by Rodríguez & Ibarra (2011), a conception which is open, flexible and knowledge-sharing, and which focuses attention on techniques for promoting and maximizing learning opportunities, whether through tasks, feedback or the processes of peer assessment and self-assessment. Fitzgerald, Hackling & Dawson (2013), meanwhile, highlight the advantages of collecting data which can later be assessed by videoconference.
The methods and strategies defined here are those which might bring about innovative change in the university classroom, as they will determine the type of assessment tasks that instructors establish with these technologies. Technology by itself, after all, does not constitute innovation; rather, it is shifts in attitude and paradigm that enable the exploration of new methods of learning assessment.
Specifically, the objectives were:
- Present and examine the viability of a learning assessment system for virtual teaching environments, with guarantees of knowledge accreditation and without the necessity of the university student’s physical presence.
- Collaborate in the innovation and development of e-learning as an agent of educational change, particularly in the development of university teaching methodology that facilitates adaptation to the European Higher Education Area.
- Test the suitability of virtual classrooms as instruments of learning assessment.
- Verify the usefulness of the videoconference interview as a valid element for the type of online assessment tested.
Method
The characteristics of our research and the type of data examined led us to select grounded theory as the method best suited to the purpose and execution of the present study. We agree with Strauss & Corbin (1990, p. 10) that this methodology allows us to discover theories that “lie dormant in the data”. Through grounded theory, according to Strauss (1987), theories can be generated from texts collected in natural contexts, the findings being, in effect, theoretical formulations of reality. It is thus differentiated from other methods by its emphasis on theory construction. The same author defines its basic procedures as data collection, coding and memoing (analytical annotation). To develop a theory, it is essential that the categories identified be outlined, constructed and interrelated; these constitute the conceptual element of the theory, revealing the relationships between themselves and the data collected.

The research project within which this study falls can be represented in three phases or study types that we have called exploration, verification and saturation. For reasons of space, what follows refers exclusively to the third of these phases, the saturation study, which was carried out during the 2015/16 academic year. The saturation study enables us to define the theoretical outline of the e-assessment model under examination and to verify it by generating a theory.

The method we have followed is based principally on the strategy of constant comparison. According to Bustingorry, Sánchez & Marina (2006), this strategy combines inductive category generation with the simultaneous comparison of all observed social incidents. As each social phenomenon or incident is registered, classified and assigned to a category or type, it is also compared with those already classified within the same category; later, the process gradually evolves from comparing one incident with other incidents to comparing it with the properties of the category. The discovery of relationships and the generation of hypotheses thus begin with the analysis of initial observations and are continuously refined throughout data collection and analysis, with constant feedback guiding the categorization process. As newer events are compared with earlier ones in this way, new typological dimensions, as well as new relationships, can be revealed.
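For readers who find a procedural sketch helpful, the constant-comparison loop described above can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the analysis software used in this study: the fieldnote "incidents", the word-overlap similarity measure and the 0.5 threshold are all hypothetical stand-ins for the researcher's qualitative judgement.

```python
# Illustrative sketch of constant comparison only; not the study's actual tooling.
# Incident texts, the similarity measure and the threshold are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Category:
    label: str
    incidents: list = field(default_factory=list)  # incidents classified here
    properties: set = field(default_factory=set)   # emerging properties (memos)

def similarity(incident: str, category: Category) -> float:
    """Toy stand-in for the researcher's judgement: the share of the
    category's property terms that recur in the new incident."""
    if not category.properties:
        return 0.0
    words = set(incident.lower().split())
    return len(words & category.properties) / len(category.properties)

def constant_comparison(incidents, threshold=0.5):
    """Compare each new incident with existing categories; refine or open one."""
    categories = []
    for incident in incidents:
        # Compare the new incident with the properties of existing categories.
        best = max(categories, key=lambda c: similarity(incident, c), default=None)
        if best is not None and similarity(incident, best) >= threshold:
            best.incidents.append(incident)
            best.properties |= set(incident.lower().split())  # memoing step
        else:
            # No close match: a new typological dimension emerges.
            new = Category(label=f"category_{len(categories) + 1}")
            new.incidents.append(incident)
            new.properties = set(incident.lower().split())
            categories.append(new)
    return categories

# Hypothetical fieldnote "incidents":
notes = [
    "student asks for feedback on the group task",
    "tutor gives feedback on the group task by videoconference",
    "student completes the self-assessment form",
]
for cat in constant_comparison(notes):
    print(cat.label, "->", cat.incidents)
```

In the real procedure, of course, the "similarity" judgement is the analyst's, and category properties are elaborated through memoing rather than word counts; the sketch only mirrors the loop structure: compare each incident first with classified incidents, then with category properties, and let new categories emerge when no fit is found.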
Expected Outcomes
In this study we have reached theoretical saturation and generated a grounded theory for a specific e-learning assessment model, one of particular current interest in that it does not require the physical presence of the student. Our analysis of this setting has enabled us to establish the following theory on our research topic, which we present in the form of recommendations for an e-assessment process that would provide accreditation of student learning in virtual environments.

First, in relation to general aspects of assessment, we agree with the studies of Meyen et al. (2002) and Daly et al. (2010) that, for the accreditation of student learning, e-assessment must be highly formative in character. We have observed that it is essential for students to learn in a meaningful way, and for their assessment to be integrated into the actual process of teaching/learning. With respect to the assessment of individual and group activities, for e-assessment to provide accreditation of the student's learning, we consider that the competencies to be assessed must be integrated into the teaching/learning process itself. In the category of final assessment, we found that, if a final assessment is carried out, it should be competency-based. Lastly, in the category of assessment tools, we find it necessary to encourage consistency in assessment, for which we recommend that, as far as possible, a variety of synchronous and asynchronous tools be used. In the case of the synchronous virtual classroom, it is essential that formative and assessment activities be pedagogically designed, since such classrooms allow the face-to-face contact that asynchronous platforms lack.
References
Agostinho (2005). Naturalistic inquiry in e-learning research. International Journal of Qualitative Methods, 4(1), 13-26.
Bahous, R. & Nabhani, M. (2011). Assessing education program learning outcomes. Educational Assessment, Evaluation and Accountability, 23(1), 21-39.
Bevitt, S. (2014). Assessment innovation and student experience: a new assessment challenge and call for a multi-perspective approach to assessment research. Assessment & Evaluation in Higher Education (in press).
Birnbaum, B. W. (2001). Foundations and practices in the use of distance education. Lewiston: Mellen Press.
Hew, K., Liu, S., Martinez, R., Bonk, C. & Lee, J. Y. (2004). Online education evaluation: what should we evaluate? Retrieved from the ERIC database.
Lejk, M. & Wyvill, M. (2010). The effect of the inclusion of self-assessment with peer assessment of contributions to a group project: a quantitative study of secret and agreed assessments. Assessment & Evaluation in Higher Education, 26(6), 616-636.
Lu, J. & Zhang, Z. (2012). Understanding the effectiveness of online peer assessment: a path model. Journal of Educational Computing Research, 46(3), 313-333.
Mehrotra, C. M., Hollister, C. D. & McGahey, L. (2001). Distance learning: principles for effective design, delivery, and evaluation. Thousand Oaks: Sage Publications.
Meyen, E., Aust, R., Bui, Y., Ramp, E. & Smith, S. (2002). The Online Academy formative evaluation approach to evaluating online instruction. The Internet and Higher Education, 5, 89-108.
Moore, M., Lockee, B. & Burton, J. (2002). Measuring success: evaluation strategies for distance education. Educause Quarterly, 25(1), 20-26.
Noskova, T., Pavlova, T. & Yakovleva, O. (2014). Communicative competence development for future teachers. The New Educational Review, 38(4), 189-199.
Pachler, N., Daly, C., Mor, Y. & Mellar, H. (2010). Formative e-assessment: practitioner cases. Computers & Education, 54, 715-721.
Patton, M. Q. (2001). Qualitative research and evaluation methods (2nd ed.). Thousand Oaks: Sage Publications.
Potts, M. K. & Hagan, C. B. (2000). Going the distance: using systems theory to design, implement, and evaluate a distance education program. Journal of Social Work Education, 36(1), 131-145.
Rockwell, K., Furgason, J. & Marx, D. B. (2000). Research and evaluation needs for distance education: a Delphi study. Online Journal of Distance Learning Administration, 3(3).
Weschke, B. & Canipe, S. (2010). The faculty evaluation process: the first step in fostering professional development in an online university. Journal of College Teaching & Learning, 7(1), 45-58.