Session Information
12 SES 08, Paper Session
Contribution
Social tagging services allow users to freely annotate digital objects, resulting in data structures known as folksonomies. Social bookmarking services (SBS) apply social tagging to web resources by facilitating the collaborative collection and annotation of favorite web sites. Several comparisons, both quantitative and qualitative, between user-generated annotations and their professionally created counterparts have been conducted, though mostly for library records. Such comparisons are of interest to providers of digital libraries and related products because they offer insights into how internet users with different degrees of expertise annotate resources. One of the first analyses of folksonomies was conducted by Golder and Huberman (2006). Lu et al. (2010), for example, show in a quantitative comparison of Library of Congress keywords and LibraryThing social tags for 8,562 book records that, although only 2.2% of the social tags are also used as keywords, these common terms account for 50.1% of the keywords. The common terms are also used much more often as social tags than other terms (average frequency of 33.5 versus 5.3). Rolla (2009), on the other hand, conducted a qualitative study of 45 records from the same two sources as Lu et al. (2010) and found that the social tags include both broader and narrower terms than the keywords, but always add at least one content-related concept not present in the keywords.
This paper presents a comparison of professionally assigned keywords for web resources from an editorially curated educational web portal with the corresponding user-generated annotations from SBS. The study focuses on a statistical description of the two types of data sources and an analysis of their agreement. The description covers properties such as the number and length of terms at the resource level as well as at the vocabulary level, and the term frequencies. Its aim is to indicate structural similarities and differences between the two types of annotation.
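To illustrate the kind of description intended here, the following minimal sketch computes resource-level and vocabulary-level statistics and a simple vocabulary overlap for two annotation sources. The data structures, field names, and example terms are illustrative assumptions, not the study's actual data, and vocabulary overlap stands in here for only one possible agreement measure.

```python
from collections import Counter
from statistics import mean

# Hypothetical input: each source maps a resource ID to its list of terms
# (professional keywords vs. social tags); all values are invented examples.
keywords = {
    "res-1": ["climate change", "geography", "secondary education"],
    "res-2": ["fractions", "mathematics"],
}
social_tags = {
    "res-1": ["climate", "geography", "lesson", "climate change"],
    "res-2": ["maths", "fractions", "worksheet"],
}

def describe(annotations):
    """Resource-level and vocabulary-level descriptive statistics."""
    term_lists = list(annotations.values())
    all_terms = [t for terms in term_lists for t in terms]
    freq = Counter(all_terms)
    return {
        "terms_per_resource": mean(len(terms) for terms in term_lists),
        "mean_term_length": mean(len(t) for t in all_terms),
        "vocabulary_size": len(freq),
        "mean_term_frequency": mean(freq.values()),
    }

def vocabulary_agreement(a, b):
    """Share of each source's vocabulary that also occurs in the other."""
    vocab_a = {t for terms in a.values() for t in terms}
    vocab_b = {t for terms in b.values() for t in terms}
    common = vocab_a & vocab_b
    return {
        "common_terms": len(common),
        "share_of_a": len(common) / len(vocab_a),
        "share_of_b": len(common) / len(vocab_b),
    }

print(describe(keywords))
print(describe(social_tags))
print(vocabulary_agreement(keywords, social_tags))
```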
Method
Expected Outcomes
References
Golder, Scott A.; Huberman, Bernardo A. (2006): Usage patterns of collaborative tagging systems. In Journal of Information Science 32 (2), pp. 198–208. DOI: 10.1177/0165551506062337.
Lu, Caimei; Park, Jung-ran; Hu, Xiaohua (2010): User tags versus expert-assigned subject terms: A comparison of LibraryThing tags and Library of Congress Subject Headings. In Journal of Information Science 36 (6), pp. 763–779. DOI: 10.1177/0165551510386173.
Rolla, Peter J. (2009): User tags versus subject headings. Can user-supplied data improve subject access to library collections? In Library Resources and Technical Services 53 (3), pp. 68–77.