Instructors in a language program often find themselves responsible for overseeing, creating, or revising a placement test. Yet despite the widespread use of placement tests in modern foreign language programs in the United States, many instructors faced with revising or replacing such a test feel ill equipped to complete the task with the level of professionalism they would like. The result can be tests that collect the wrong language data, misinterpreted test scores, and faulty conclusions about a student's language ability.
In this two-week workshop, participants will gain a solid understanding of the fundamentals of creating sound language tests, with particular emphasis on designing tests to support placement decisions. In the morning sessions, participants will be introduced to testing concepts in a clear and non-threatening manner; no previous statistical or measurement knowledge is assumed, and discussion of real-world issues and problems from participants' home institutions is welcome. In the afternoon sessions, participants will get hands-on practice creating test items and analyzing test results, with an emphasis on using commonly available computer programs (e.g., Excel) to facilitate the analysis. Participants are encouraged to bring data sets from their program's placement tests to practice setting up, analyzing, and interpreting their data.
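As a purely illustrative taste of the kind of analysis covered in the afternoon sessions, the sketch below estimates a test's internal-consistency reliability (Cronbach's alpha) from an examinee-by-item score matrix. It is not part of the workshop materials: it uses Python and NumPy rather than Excel, and the function name and score data are invented for illustration. The same computation can be reproduced in Excel with VAR.S on each item column and on the total-score column.

```python
# Illustrative sketch only: estimating internal-consistency reliability
# (Cronbach's alpha) from an examinee-by-item score matrix.
# The scores below are invented purely for demonstration.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a score matrix (rows = examinees, columns = items)."""
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 6 examinees answering 5 dichotomously scored items (1 = correct).
scores = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```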
Session topics will include:
– Placement, curriculum, and articulation
– Determining test reliability and validity
– Interpreting test results and setting cut scores
– Heritage students and study-abroad returnees
– Maintaining test quality over time
– Test fairness and test bias
– Testing specific skills
– Alternative test types