Session Information
Paper Session
Contribution
Discriminating the accuracy and reliability of information is not only a foundational skill across all disciplines, but also a skill required in everyday life. It is a central component of digital literacy (using technologies to find, use and disseminate information). In the past, access to information was mediated by gatekeepers such as librarians, editors and publishers, which meant that a great deal of quality control had already occurred before information was made available. Today the internet provides students with unmediated access to a vast amount of digital information of widely varying accuracy and reliability. Students usually choose these sources over subscribed resources, often exclusively, for their convenience and familiarity (Denison & Montgomery, 2012; Mbabu, Bertram, & Varnum, 2013).
There is widespread concern that students lack the motivation and experience to evaluate sources for accuracy and reliability. A common finding is that students see their information search as having a definite answer and expect to find it from a single source with no further investigation needed (Connaway, Hood, Lanclos, White, & Le Cornu, 2013; Ostenson, 2013). Students overwhelmingly make judgments based on the ‘look’ of a site (Bartlett & Miller, 2011) and use a site’s ranking in search engines as an indicator of information quality (Connaway et al., 2013). Students are also subject to the widespread tendency to seek evidence that supports their pre-existing position on a topic, rather than looking for alternative viewpoints (Stapleton & Helms-Park, 2006).
The most common approach to developing student judgment in this area has been through checklists or guidelines of various forms, such as tick-boxes for yes/no responses, categories with bullet points to consider, scales, scores, flowcharts or acronyms (Ostenson, 2013; Mandalios, 2013). As such, they do not necessarily help students to identify which web resources are appropriate to use and which are not (Dahl, 2009). They also tend to be time consuming and labour intensive, fail to engage students, and do not guarantee that students will use the criteria beyond the task at hand. Checklists also fail to reproduce the ways in which experts make judgments of accuracy and reliability, and so do not put students on the path to expertise, which is necessarily context dependent.
For the non-academic content now available on the internet, criteria for judgement include currency, relevance, accuracy, authority, coverage, and purpose or objectivity (Mandalios, 2013; Ostenson, 2013). The use of a checklist in this context has, not surprisingly, been found to be rigid and ineffective, and does not facilitate the development of the critical thinking required for dealing with complex, multifaceted content (Meola, 2004). Giving students criteria by which to evaluate information does not result in them making more sophisticated judgments (Walraven et al., 2009), and can oversimplify issues that are complicated or controversial (Ostenson, 2013).
For sustained, long-term improvement in judgments of accuracy and reliability, teaching approaches need to address how the motivation and purpose of the user affect the effort they put into evaluating sources (Metzger, 2007). Hence this research examines a case study of a teaching program which aimed to support students in developing sophisticated judgments of accuracy and reliability by having them create their own criteria through the construction of a creative decision-making tool. Additionally, it has increasingly been recognised that collaboration between librarians and faculty staff can better support sophisticated skill development (Mbabu, Bertram, & Varnum, 2013).
Method
Expected Outcomes
References
Bartlett, J., & Miller, C. (2011). Truth, Lies and the Internet: A Report into Young People's Digital Fluency. London: Demos.
Connaway, L. S., Hood, E. M., Lanclos, D., White, D., & Le Cornu, A. (2013). User-centered decision making: A new model for developing academic library services and systems. IFLA Journal, 39(1), 20-29.
Dahl, C. (2009). Undergraduate research in the public domain: The evaluation of non-academic sources online. Reference Services Review, 37(2), 155-163.
Denison, D. R., & Montgomery, D. (2012). Annoyance or delight? College students' perspectives on looking for information. Journal of Academic Librarianship, 38(6), 380-390. doi: 10.1016/j.acalib.2012.08.007
Mandalios, J. (2013). RADAR: An approach for helping students evaluate Internet sources. Journal of Information Science, 39(4), 470-478.
Mbabu, L. G., Bertram, A., & Varnum, K. (2013). Patterns of undergraduates' use of scholarly databases in a large research university. Journal of Academic Librarianship, 39(2), 189-193. doi: 10.1016/j.acalib.2012.10.004
Meola, M. (2004). Chucking the checklist: A contextual approach to teaching undergraduates Web-site evaluation. portal: Libraries and the Academy, 4(3), 331-344.
Metzger, M. J. (2007). Making sense of credibility on the Web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078-2091.
Ngo, L. (2012). How to Evaluate Electronic Resources. University of California Berkeley Library. Retrieved 20/02/13.
Ostenson, J. (2013). Reconsidering the checklist in teaching Internet source evaluation. portal: Libraries and the Academy, 14(1), 33-50.
Stake, R. E. (2000). Case studies. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications.
Stapleton, P., & Helms-Park, R. (2006). Evaluating Web sources in an EAP course: Introducing a multi-trait instrument for feedback and assessment. English for Specific Purposes, 25(4), 438-455.
Tytler, R., Prain, V., Hubber, P., & Waldrip, B. (Eds.). (2013). Constructing Representations to Learn in Science. Rotterdam: Sense Publishers.
Walraven, A., Brand-Gruwel, S., & Boshuizen, H. (2009). How students evaluate information and sources when searching the World Wide Web for information. Computers & Education, 52(1), 234-246.