Session Information
09 SES 08 B, Assessment in Higher Education (I)
Paper Session
Time: 2009-09-30, 08:30-10:00
Room: HG, Marietta-Blau-Saal
Chair: Nadja Pfuhl
Contribution
The Bologna agreement on reforming higher education within the European Union has brought about the implementation of the Bachelor and Master system and increased student mobility, which in turn have increased the demographic heterogeneity of students admitted to higher education. Higher educational institutions are thus faced with the challenge of determining which of these students are likely to perform well in the required academic work; consequently, admission testing has become an unavoidable concern. Admission testing in higher education generally emphasizes the measurement of abilities previously acquired within the academic setting. This approach is substantiated by studies showing the predictive validity of grades obtained during prior education. However, curricula and the quality of teaching differ so much among universities and among countries that it is not reasonable to assume that a standardized admission test measuring previous scholastic ability suffices. Moreover, the notion that past performance is the best predictor of future performance holds only insofar as no considerable change has occurred in the individual or in the individual’s environment (Guthke & Beckmann, 2003). Likewise, applicants who did not follow the conventional route of a given educational system face an arduous undertaking in gaining entry to the academic world. Admission testing is so focused on what an applicant has learned by the time of test taking that the other side of the coin is largely disregarded, namely the level at which the same applicant is able to learn during the actual course of study.
An alternative admission measure is a test consisting of study-related tasks characteristic of those that applicants would eventually encounter in the educational program of their choice. Such a measure augments conventional cognitive tests: it provides a sample of study-related skills, whereas conventional tests provide an index of general intelligence. In addition, the development of such a test should lead to the identification and inclusion of predictors of academic performance in specific disciplines such as medicine and psychology.
This paper focuses on the development and validation of a test consisting of study-related tasks, designed for students who wish to enter a pre-Master program. The main goal of the test is to select those applicants who are likely to perform well in the pre-Master program.
Method
The sample consists of 171 student applicants who performed tasks on theoretical conception, theoretical application, academic writing, English reading comprehension, and basic statistical comprehension. Each task was rated independently by two raters on a 4-point scale, with 1 as the lowest score; each score point is defined by a set of criteria that a student’s response has to meet. Inter-rater agreement was measured using weighted Kappa, in which disagreements are weighted by the distance between the assigned scores: a disagreement between a score of 1 from one rater and 2 from the other is penalized less than a disagreement between scores of 1 and 4.
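The distance-weighted agreement described above can be sketched as follows. This is a minimal illustration with made-up ratings, not the study's data, and it assumes linear distance weights (one common weighting scheme; the abstract does not specify which was used):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat=4):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale.

    Disagreement weight grows with the distance between the two scores,
    so a (1, 2) disagreement counts against agreement less than a (1, 4) one.
    """
    r1 = np.asarray(r1) - 1  # shift scores 1..n_cat to indices 0..n_cat-1
    r2 = np.asarray(r2) - 1
    # Observed joint proportion matrix over score pairs
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Expected proportions under independence of the two raters
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement weights proportional to |i - j|
    i, j = np.indices((n_cat, n_cat))
    w = np.abs(i - j) / (n_cat - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()

# Illustrative ratings (hypothetical): mostly adjacent disagreements
rater1 = [1, 2, 3, 4, 2, 3, 1, 4]
rater2 = [1, 2, 3, 3, 2, 4, 2, 4]
kappa = weighted_kappa(rater1, rater2)
```

Perfect agreement yields a kappa of 1, and a disagreement of 1 and 4 lowers kappa more than a disagreement of 1 and 2, matching the weighting rationale described in the text.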
Expected Outcomes
Since the scores are ordinal, confirmatory factor analysis for categorical data was used to assess construct validity. This type of factor analysis employs polychoric correlations and robust weighted least squares (WLS) for parameter estimation (Flora & Curran, 2004). Subsequently, the validity of the admission test in predicting performance in the pre-Master program was assessed using hierarchical regression: grade averages in high school and during the last two years of study at a post-secondary institution were entered as predictors alongside the admission-test scores. Initial results support both the construct and the predictive validity of the test.
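The hierarchical regression step can be sketched as follows. The data below are simulated purely for illustration (variable names, distributions, and coefficients are assumptions, not the study's data); the sketch shows how the incremental R² of the admission test, over and above prior grades, would be assessed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 171  # sample size reported in the abstract

# Hypothetical, simulated predictors and outcome (not the study's data):
hs_gpa = rng.normal(7.0, 0.8, n)      # high-school grade average
prior_gpa = rng.normal(7.2, 0.7, n)   # grades in last two years of prior study
admission = rng.normal(0.0, 1.0, n)   # admission-test total score
outcome = 0.3 * hs_gpa + 0.3 * prior_gpa + 0.5 * admission + rng.normal(0, 1, n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: prior grades only; Step 2: add the admission-test score
r2_step1 = r_squared([hs_gpa, prior_gpa], outcome)
r2_step2 = r_squared([hs_gpa, prior_gpa, admission], outcome)
delta_r2 = r2_step2 - r2_step1  # incremental validity of the admission test
```

A positive and substantial ΔR² in Step 2 would indicate that the admission test predicts pre-Master performance beyond what prior grades already explain.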
References
Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9, 466-491.
Guthke, J., & Beckmann, J. F. (2003). Dynamic assessment with diagnostic programs. In R. J. Sternberg, J. Lautrey & T. I. Lubart (Eds.), Models of Intelligence: International Perspectives (pp. 227-242). Washington, DC: American Psychological Association.