Resources for Middle Eastern Language Programs

2011 Western Consortium Middle East Language
Program Evaluation Workshop

"Making the most of program evaluation"

Sponsored and hosted by the National Middle East Language Resource Center & the Center for Middle Eastern Studies at the University of Texas at Austin; Facilitated by the University of Hawaii National Foreign Language Resource Center

Scroll down to see the schedule of events, and to download PowerPoint presentations, handouts, and discussion summaries.


1:30-2:30    Keynote Address by John Norris: High-value evaluation strategies in foreign language education

Summary: Middle East language programs (MELPs) face the need to engage in evaluation for a variety of reasons, including in particular mounting pressures to respond to questions about the value and effectiveness of contemporary language education in the U.S. Given these and related demands, how can MELPs pursue evaluation in ways that support our efforts, improve our teaching, learning, and other activities, and demonstrate the value of what we do to a variety of audiences? This presentation will review key findings that are emerging from current research and practice in language program evaluation, highlighting particularly useful strategies for initiating, sustaining, and acting upon evaluation, both within individual programs and across the discipline.
[View PowerPoint]

Suggested short readings:

     1. Norris, J. M., & Watanabe, Y. (2011). Program evaluation. In C. A. Chapelle (Ed.), The encyclopedia of applied linguistics. New York: Wiley-Blackwell.

     2. Davis, J. McE., Sinicrope, C., & Watanabe, Y. (2009). College foreign language program evaluation: Current practice, future directions. In J. M. Norris, J. McE. Davis, C. Sinicrope, & Y. Watanabe (Eds.), Toward useful program evaluation in college foreign language education (pp. 209-226). Honolulu: University of Hawai'i, National Foreign Language Resource Center.

Background text:

     Norris, J. M., Davis, J. McE., Sinicrope, C., & Watanabe, Y. (Eds.) (2009). Toward useful program evaluation in college foreign language education. Honolulu: University of Hawai'i, National Foreign Language Resource Center.


2:30-2:45     Break

2:45-4:15     Workshop by John Davis: Using surveys for understanding and improving foreign language programs

Summary: Surveys are often the first method we think of for collecting data in program evaluations, yet the development and use of good surveys may be less straightforward than presumed. This workshop provides advice (and examples) on using surveys in tertiary language programs, from the beginning planning stages through to reporting and acting on survey findings. The overall goal of the workshop is to help language educators develop and administer quality surveys that produce useful information for various program development and evaluation aims.
[View PowerPoint and handout]

Suggested short readings:

     3. Walther, I. C. (2009). Developing and implementing an evaluation of the foreign language requirement at Duke University. In J. M. Norris, J. McE. Davis, C. Sinicrope, & Y. Watanabe (Eds.), Toward useful program evaluation in college foreign language education (pp. 117-138). Honolulu: University of Hawai'i, National Foreign Language Resource Center.

     4. Pfeiffer, P. C., & Byrnes, H. (2009). Curriculum, learning, and the identity of majors: A case study of program outcomes evaluation. In J. M. Norris, J. McE. Davis, C. Sinicrope, & Y. Watanabe (Eds.), Toward useful program evaluation in college foreign language education (pp. 183-208). Honolulu: University of Hawai'i, National Foreign Language Resource Center.

Background text:

     Brown, J. D. (2001). Using surveys in language programs. Cambridge: Cambridge University Press.

4:15-4:30     Break

4:30-6:00     Roundtable discussion #1: How can we make the most of the mandated program review process?

Panelists: John Norris (facilitator), Kirk Belnap, Nahal Akbari, Mahmoud Al-Batal

Summary: Academic programs are regularly encouraged or required to engage in so-called 'program review', typically involving a self-study and a brief site visit by faculty from peer programs or other domain experts. Unfortunately, the utility of such reviews is often threatened by a variety of challenges, including the lack of a guiding framework or evaluation questions, minimal or non-participation by important stakeholders, inadequate/invalid/unreliable data to illuminate program activities and outcomes, and external reviewers with insufficient understanding of the target program and/or of evaluation purposes and methods. In this roundtable discussion, participants offer suggestions for how to improve program reviews, with an eye towards developing recommendations for practice in MELPs.
[View Norris Handout] [View Panel Discussion]

Suggested short readings:

     5. Carsten-Wickham, B. (2008). Assessment and foreign languages: A chair's perspective. ADFL Bulletin, 39(2&3), 36-43.

     6. McAlpine, D., & Dhonau, S. (2007). Creating a culture for the preparation of an ACTFL/NCATE program review. Foreign Language Annals, 40(2), 247-259.

Background text:

     Bresciani, M. (2006). Outcomes-based academic and co-curricular program review. Sterling, VA: Stylus.


9:00-12:00    Evaluation at work: Presentation sessions

9:00-9:40       Bonnie Sylwester & Yukiko Watanabe: Why outcomes-based evaluation? In search of value and impact

Summary: Current accountability and accreditation systems require college foreign language (FL) programs--including academic programs, National Resource Centers, area studies programs, and others--to engage in evaluation of program- or project-level outcomes, though often such activities are seen as daunting and bureaucratic. How can we build evaluative culture within organizations and create a proactive evaluation framework that addresses the demand for outcomes? The presenters will provide examples of (a) transformative organizational and evaluative culture in FL departments and National Resource Centers as well as (b) changes in curriculum, pedagogical practices, and project designs as we engaged faculty and staff in stating, mapping, and assessing/evaluating outcomes. We explore the strategies and factors that seem to impact valuing of assessment and evaluation, as well as the value contributed by these processes.
[View PowerPoint and handout]

Suggested short readings:

     7. Norris, J. M. (2006). The why (and how) of student learning outcomes assessment in college FL education. Modern Language Journal, 90(4), 590-597.

     8. Houston, T. (2005). Outcomes assessment for beginning and intermediate Spanish: One program's process and results. Foreign Language Annals, 38(3), 366-376.

Background text:

     Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th Ed.). Thousand Oaks, CA: Sage. (Chapters 7, 8, 9, 10)

     Walvoord, B. E. (2004). Assessment clear and simple: A practical guide for institutions, departments, and general education. San Francisco: Jossey-Bass.

9:40-10:20   Martha Schulte-Nafeh: Embedded assessment: Identifying outcomes, indicators, and course-embedded assessment methods

Materials: Below is a list of generic learning outcomes for 1st through 6th semester Arabic, developed by graduate students at the University of Texas at Austin in a curriculum development course.

Arabic 1st semester outcomes
Arabic 2nd semester outcomes
Arabic 3rd semester outcomes
Arabic 4th semester outcomes
Arabic 5th semester outcomes
Arabic 6th semester outcomes

Original title of the presentation: Making the most of evaluation requirements in grant-funded projects and programs: How to achieve objectivity and utility

Suggested short readings:

     9. Elder, C. (2009). Reconciling accountability and development needs in heritage language education: A communication challenge for the evaluation consultant. Language Teaching Research, 13(1), 15-33.

     10. Mackay, R. (1994). Undertaking ESL/EFL programme review for accountability and improvement. ELT Journal, 48(2), 142-149.

Background text:

     Kiely, R., & Rea-Dickins, P. (2005). Program evaluation in language education. New York: Palgrave Macmillan.

10:20-10:40     Break

10:40-11:20  Nahal Akbari: Using program logic models to understand and improve Persian language programs

Summary: College language programs are often critiqued for lacking clear curricular scope and sequence, meaningful articulation across courses/semesters/years of study, or valued outcomes that respond to specific societal and educational needs. At the same time, it is clear that language programs consist of multiple elements, from materials and instruction to trained teachers to fitting assessments, all of which interact to produce the educational experience. How can the distinct parts of a language program be combined intentionally into an overall effective educational design? How can our theories about language teaching and learning be translated consistently into practice across courses and within the different pedagogic efforts we make? In this presentation, we report on the use of "logic models" as one way of literally mapping out the various elements of a language program and demonstrating how they are linked together. Using the example of the Persian language programs at University of Maryland, we show how logic models can help to explicate the theory underlying our educational program, the needs to which the program responds, the outcomes it seeks to achieve, and the pedagogic practices we pursue. Further, we highlight the contribution of logic models to identifying strengths and weaknesses, as well as indicating aspects of the program which may require evaluation and/or improved design.
[View PowerPoint]

Suggested short readings:

     Innovation Network, logic model workbook:

     Example of logic modeling for a large-scale language program:

Background text:

     University of Wisconsin Extension Service, logic model course:

     Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Mahwah, NJ: Erlbaum.

11:20-12:00  Esther L. Raizen and Joanna Caravita: The Use of 'Sabras' as Mentors for Advanced Hebrew Students

Summary: In the spring of 2010, we offered the course "Hebrew via Popular Culture," an upper-division course conducted entirely in Hebrew. The course immersed students in a variety of cultural issues, and because of the heavy reliance on current events and blogs/talkbacks, fairly quickly focused on three aspects of opposition in Israeli society: political left and right, religious and secular, and Ashkenazi/Sephardic Jews. Two weeks into the course students were assigned individual mentors from the Israeli community, either from their parents' generation or from an earlier generation. The mentorship experience was meant to add cultural depth in terms of both time span and emotional attachment to historical and social issues. It was also designed to provide broader exposure to the language. In this presentation we will discuss the parameters of student-mentor work and relationships, and the impact of the mentoring component of the course, as evaluated in the spring of 2010 and again in the summer of 2011.
[View PowerPoint]

Suggested short reading:

      Sinicrope, C., Norris, J. M., & Watanabe, Y. (2007). Understanding and assessing intercultural competence: A summary of theory, research, and practice. Second Language Studies, 26(1).

Background text:

      Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.

12:00-1:30    Lunch (on your own)

1:30-3:15     Evaluation-topic-specific breakout sessions, facilitated by UH team and other presenters

Summary: This session provides an opportunity for individuals to meet and discuss evaluation issues specific to their programs and interests. Topics (4-6 in total) will be determined based on a survey of attendees' interests. Facilitators will provide a short overarching commentary on the particular topic, and each group will plan to report back.

Discussion topics:
1. How should outcomes assessment help our programs? Stating and assessing outcomes with an eye towards use and impact.
2. What is the best way to get started with program evaluation? Strategies for initiating feasible, useful evaluation projects.
3. How can we develop an 'evaluation culture' in our programs? Encouraging participation, buy-in, and a willingness to change.
4. What are the alternatives for collecting data in language programs? Key methods and ethics for empirical evaluation practice.

Framing questions for each group:

      - What are the key challenges associated with your particular topic, in MELPs?

      - Are there any good examples of practice that can/should be shared?

      - Which strategies might be pursued by ME language educators in responding to the challenges associated with this topic?

3:15-3:30     Break

3:30-4:30     Reporting session

Breakout groups report back to full group on challenges, examples, strategies discussed in breakout sessions, with an eye towards informing the Sunday strategic planning session.
[View discussion highlights]

4:30-4:45    Break

4:45-6:30    Roundtable discussion #2: Assessing and otherwise gathering data on diverse program outcomes: Moving beyond 'how do we measure?'

Panelists: John Norris (facilitator), Martha Schulte-Nafeh, Esther Raizen, Ahmet Okal

Summary: Evaluation calls upon empirical data as a primary basis for informing decisions and taking actions in language programs. Yet there are numerous possible methodologies for gathering data, from assessments of student learning, to observations of how well programs are delivered, to perceptions of satisfaction and impact. Indeed, many of the outcomes associated with language programs may defy easy 'measurement'. In this roundtable discussion, participants will provide insights into useful methods for collecting meaningful data on the distinct kinds of outcomes (learning and otherwise) that ME language programs seek to encourage.
[View Norris Handout] [View Panel Discussion]

Suggested short readings:

     11. Warford, M. K. (2006). Assessing target cultural literacy: The Buffalo State experience. ADFL Bulletin, 37(2-3), 47-57.

     12. Gorsuch, G. (2009). Investigating second language learner self-efficacy and future expectancy of second language use for high-stakes program evaluation. Foreign Language Annals, 42(3), 505-540.

Background text:

     Teagle Foundation publication on 'assessing the sublime' (intended as one point of reference, not as a source book or recommended guide):


9:00-11:00    Strategic planning session, facilitated by UH team: The minimum that evaluation needs to accomplish in Middle East Language Programs
[View PowerPoint and handout]

11:00-12:00  Open discussion

12:00-12:30  Workshop evaluation 

12:30            Boxed lunch