Common outcome measures
Tests & Exams
Adapted from the University of Massachusetts, Amherst “Program-Based Review & Assessment Tools & Techniques for Program Improvement” (April 2017) and from the Marymount University Assessment Handbook (2015).
Selecting a standardized instrument (developed outside the institution for application to a wide group of students using national/regional norms and standards) or a locally-developed assessment tool (created within the institution, program or department for internal use only) depends on specific needs and available resources. Knowing what you want to measure is key to successful selection of standardized instruments, as is administering the assessment to a representative sample in order to develop local norms and standards. Locally-developed instruments can be tailored to measure specific performance expectations for a course or group of students.
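As a hypothetical illustration of developing local norms from a representative sample, the sketch below converts a raw score into a percentile rank against a small norming sample. The scores and the half-tie convention are assumptions for illustration, not part of any particular instrument.

```python
# Minimal sketch: percentile ranks from a local norming sample.
# The sample scores below are invented for illustration.
from bisect import bisect_left, bisect_right

norming_sample = sorted([48, 55, 55, 62, 67, 71, 74, 80, 85, 90])

def percentile_rank(score):
    """Percent of the norming sample scoring below `score`,
    counting half of any tied scores (a common convention)."""
    below = bisect_left(norming_sample, score)
    ties = bisect_right(norming_sample, score) - below
    return 100.0 * (below + 0.5 * ties) / len(norming_sample)

print(percentile_rank(71))  # a raw score of 71 falls at the 55th percentile here
```

Once such a table is built from a representative local sample, later students' raw scores can be interpreted against it in place of (or alongside) national norms.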
Locally-developed instruments are directly linked to the local curriculum and can identify student performance on a set of locally-important criteria. Putting together a local tool, however, is time-consuming, as is developing a scoring key/method. There is also no comparison group, so performance cannot be compared to state or national norms. Standardized tests are immediately available for administration and are therefore less expensive to adopt than developing local tests from scratch. Changes in performance can be tracked and compared to norm groups, and subjectivity/misinterpretation is reduced. However, standardized measures may not link to local curricula, and purchasing the tests can be expensive. Test scores may also not contain enough locally-relevant information to be useful.
Many course-level Student Learning Outcomes (SLOs) can be assessed by examinations given within the course. In some cases the outcomes measured by the examinations will be identical to the program’s student learning outcomes, and the exam questions will assess both course and program outcomes. With some creativity, exam questions can also be written to cover broader program SLOs without losing their validity for course grading. In programs without capstone courses, it might be possible to write a coordinated set of exam questions that provide a fuller picture of student learning when administered in exams across a series of courses.
Standardized and certification exams
In some disciplines, national standardized or certification exams exist which can be used as measures if they reflect the program’s learning outcomes. The examination usually cuts across the content of specific courses and reflects the externally valued knowledge, skills and abilities of a program.
Pre-test/post-test evaluations
This method of assessment uses locally developed and administered tests and exams at the beginning and end of a course or program in order to monitor student progression and learning across pre-defined periods of time. Results can be used to identify areas of skill deficiency and to track improvement within the assigned time frame. Tests used for assessment purposes are designed to collect data that can be used along with other institutional data to describe student achievement.
Pre-test/post-test evaluations can be an effective way to collect information on students when they enter and leave a particular program or course, and provide assessment data over a period of time. They can sample student knowledge quickly and allow comparisons between different student groups, or the same group over time. They do, however, require additional time to develop and administer and can pose problems for data collection and storage. Care should be taken to ensure that the tests measure what they are intended to measure over time (and that they fit with program learning objectives) and that there is consistency in test items, administration, and application of scoring standards.
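For instance, once matched pre- and post-test scores are collected, tracking change can be as simple as computing each student's gain score. The sketch below assumes invented student IDs and scores.

```python
# Minimal sketch: average gain from matched pre-test/post-test scores.
# Student IDs and scores are invented for illustration.
from statistics import mean

pre_scores = {"s01": 52, "s02": 61, "s03": 47, "s04": 70}
post_scores = {"s01": 74, "s02": 80, "s03": 65, "s04": 78}

# Only students who completed both tests contribute a gain score.
gains = {sid: post_scores[sid] - pre_scores[sid]
         for sid in pre_scores if sid in post_scores}

avg_gain = mean(gains.values())
print(f"Average gain: {avg_gain:.2f} points")  # Average gain: 16.75 points
```

Matching on a stable student identifier, as above, avoids the common pitfall of comparing unmatched group averages when attrition differs between the two test administrations.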
Performance Assessments
Adapted from the California State University, Bakersfield, PACT Outcomes Assessment Handbook (1999) and from the Marymount University Assessment Handbook (2015).
Performance assessment uses student activities to assess skills and knowledge. These activities include class assignments, auditions, recitals, projects, presentations, and similar tasks. At its most effective, performance assessment is linked to the curriculum and uses real samples of student work. This type of assessment generally requires students to use critical thinking and problem-solving skills within a context relevant to their field or major. The performance is rated by faculty or qualified observers, and assessment data are collected. The student receives feedback on the performance and evaluation.
Strengths and Weaknesses: Performance assessment can yield valuable insight into student learning and provides students with comprehensive information on improving their skills. Communication between faculty and students is often strengthened, and the opportunity for student self-assessment is increased. Performance assessment, like all assessment methods, depends on clear statements of learning objectives. This type of assessment is also labor-intensive, is sometimes separate from the daily routine of faculty and students, and may be seen as an intrusion or an additional burden. Articulating the skills that will be examined and specifying the criteria for evaluation may be both time-consuming and difficult.
Analysis of papers
Course papers can be used as measures of student learning outcomes. Because students create these papers for a grade, they are motivated to do well, and the papers may therefore reflect their best work. Assessing them for program purposes typically requires a separate rubric that focuses on program learning outcomes. Faculty committees can read these same papers to assess the attainment of program SLOs. In most cases, this second reading should be done by someone other than the instructor, or by others along with the instructor, because the purpose of the assessment is different from grading. Scoring rubrics for the papers, based on the relevant learning outcomes, should be developed and shared with faculty raters prior to rating to promote interrater reliability.
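To make "interrater reliability" concrete, the sketch below computes simple percent agreement and Cohen's kappa for two hypothetical raters scoring the same six papers against a three-level rubric. The ratings are invented for illustration.

```python
# Minimal sketch: percent agreement and Cohen's kappa for two raters
# applying the same rubric to six papers. Ratings are invented.
from collections import Counter

rater1 = ["exceeds", "meets", "meets", "below", "meets", "exceeds"]
rater2 = ["exceeds", "meets", "below", "below", "meets", "meets"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # raw agreement

# Chance agreement from each rater's marginal category frequencies
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum(c1[c] * c2[c] for c in c1.keys() & c2.keys()) / (n * n)

kappa = (observed - expected) / (1 - expected)
print(f"agreement={observed:.2f}, kappa={kappa:.2f}")
```

Kappa values near 1 indicate strong agreement beyond chance; low values suggest the rubric descriptors or rater norming sessions need refinement before the ratings are used as assessment evidence.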
Analysis of projects and presentations
Products other than papers can also be assessed for attainment of program learning outcomes. For example, if students are required to give oral presentations, other faculty and even area professionals can be invited to these presentations and can serve as outside evaluators using the same rubric as other raters.
Analysis of performances
In some areas, such as teaching or counseling, analysis of student classroom teaching, mock counseling sessions, or other performances can provide useful measures of student learning. A standardized evaluation form is necessary to ensure consistency in assessment. One advantage of using performances is that they can be videotaped for later analysis.
Internship supervisor evaluations
If the program has a number of students who are doing relevant internships or other work-based learning, standard evaluations by supervisors, using a rubric designed to measure particular learning outcomes across the duration of the internship, may provide data on attainment of learning outcomes. In addition, when programs exercise control over the content of internships, those settings can serve as capstone experiences where students can demonstrate their knowledge, skills, and abilities.
Portfolio Evaluations
Adapted from the California State University, Bakersfield, PACT Outcomes Assessment Handbook (1999), and the University of Wisconsin, Madison, Outcomes Assessment Manual I (2000).
Portfolios are collections of student work over time that are used to demonstrate student growth and achievement in identified areas. Portfolios can offer information about student learning, assess learning in general education and the major, and evaluate targeted areas of instruction and learning. A portfolio may contain all or some of the following: research papers, process reports, tests and exams, case studies, audiotapes, videotapes, personal essays, journals, self-evaluations, and computational exercises. Portfolios are often useful, and sometimes required, for certification, licensure, or external accreditation reviews.
Portfolios not only demonstrate learning over time, but can be valuable resources when students apply to graduate school or for jobs. Portfolios also encourage students to take greater responsibility for their work and open lines of discussion between faculty and students and among faculty involved in the evaluation process. Portfolios are, however, costly and time-consuming and require extended effort on the part of both students and faculty. Also, because portfolios contain multiple samples of student work, they are difficult to assess and to store and may, in some contexts, require too much time and effort from students and faculty alike.
Enlist the assistance of assessment and testing specialists when you plan to create, adapt, or revise assessment instruments. Staff in the Office of Curriculum, Learning Design & Academic Assessment are happy to assist you in finding the appropriate resources and helping you to design the assessment. Areas in which you might want to seek assistance include:
- ensuring validity and reliability of test instruments;
- ensuring validity and reliability of qualitative methods;
- identifying appropriate assessment measurements for specific goals and tasks; and
- analyzing and interpreting quantitative and qualitative data collected as part of your assessment plan.
Adapted from Western Washington University’s Tools & Techniques for Program Improvement: Handbook for Program Review & Assessment of Student Learning (2006)