Interpreting assessment results
Adapted from the University of Massachusetts Amherst, “Program-Based Review & Assessment: Tools & Techniques for Program Improvement” (April 2017), and from the Marymount University Assessment Handbook (2015)
Now that you have summarized your findings, the next step is to interpret them. To do so, you will want to consider the following:
- What are the performance standards?
- What results do you expect to obtain?
- What crucial questions do you want to discuss with faculty concerning the results?
Establishing Performance Standards
When interpreting assessment results, it is useful to set a performance standard that specifies the acceptable level of student work or response. For each learning outcome, the program should ask, “What is an acceptable performance standard for this learning outcome?” This performance standard may be a passing score on an exam, a rubric rating of “meets program standards” on a student paper, or another indicator of the quality of student work.
Suskie (2009) offers the following suggestions for setting specific, appropriate standards:
- Do some research, perhaps with peer institutions or professional associations.
- Involve others in the discussion such as students, employers, and faculty members teaching in other programs.
- Use samples of student work to inform the discussion of setting expectations.
Interpreting Assessment Results
By setting expected results for the percentage of students meeting or exceeding performance standards before data collection begins, the program can gauge its effectiveness in helping students meet the learning outcomes. For example, suppose 75% of students met the performance standard set by the department for the outcome measure on ethical reasoning. Comparing this to the expected result of 85% meeting the performance standard reveals an area for improvement.
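To make that comparison concrete, here is a minimal Python sketch of the calculation described above. The rubric scores, cutoff, and target are illustrative values, not figures prescribed by this handbook.

```python
# Minimal sketch: compare the observed percentage of students meeting a
# performance standard against the target set before data collection.
# All values below are hypothetical.

def percent_meeting_standard(scores, cutoff):
    """Percentage of students whose score meets or exceeds the cutoff."""
    meeting = sum(1 for s in scores if s >= cutoff)
    return 100 * meeting / len(scores)

# Hypothetical rubric scores for an ethical-reasoning outcome (1-4 scale).
scores = [4, 3, 2, 3, 4, 2, 3, 1, 3, 4, 2, 3]
cutoff = 3     # rubric rating of "meets program standards"
target = 85.0  # expected percentage, set before data collection

observed = percent_meeting_standard(scores, cutoff)
print(f"{observed:.0f}% met the standard (target: {target:.0f}%)")
if observed < target:
    print("Observed result falls short of the target: an area for improvement.")
```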
Suskie also offers the following tips for setting targets for collective performance:
- Express targets as percentages rather than as means; percentages are easier for audiences to understand.
- Vary targets depending on the circumstances.
- Consider multiple targets (e.g., at least 90% of students score above the adequate level, and at least 30% score above the exemplary level); a sketch of checking such targets follows this list.
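As an illustration of that last tip, the following sketch checks two collective targets against the same hypothetical rubric scores used above; the 90% and 30% thresholds simply echo the example in the list.

```python
# Sketch: check several collective targets against one set of rubric scores.
# Scores and thresholds are hypothetical, echoing the example above.

scores = [4, 3, 2, 3, 4, 2, 3, 1, 3, 4, 2, 3]

# (label, minimum rubric score, target percentage of students)
targets = [
    ("adequate or better (score >= 2)", 2, 90.0),
    ("exemplary (score >= 4)", 4, 30.0),
]

for label, cutoff, target in targets:
    pct = 100 * sum(s >= cutoff for s in scores) / len(scores)
    status = "met" if pct >= target else "not met"
    print(f"{label}: {pct:.0f}% (target {target:.0f}%) -> {status}")
```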
Suskie (2009) identifies the following types of benchmarks or standards for framing expectations (see the table below). If you are examining findings for the first time, you will most likely want to ensure you are achieving Local Standards, the Capability Benchmark, and the Strengths and Weaknesses Perspective. If you make changes to your program and then re-assess student learning, you would use the Value-Added Benchmark and the Historical Trends Benchmark.
The remaining standards and benchmarks in the table rely mostly on coordinating similar assessments across programs or institutions. For example, many fields that require accreditation, such as Nursing, Engineering, and Education, have standardized tests tied to accreditation. One benefit of having such tests in place is the ability to determine External Peer Benchmarks and Best Practices Benchmarks.
Types of Benchmarks or Standards (Suskie, 2009)

| Benchmark or standard | Question it answers |
| --- | --- |
| Local Standards | Are students meeting our own standards? |
| Capability Benchmark | Are our students doing as well as they can? |
| Strengths and Weaknesses Perspective | What are our students’ areas of strengths and weaknesses? |
| Value-Added Benchmark | Are our students improving? |
| Historical Trends Benchmark | Is our program improving? |
| External Standards | Are students meeting standards set by someone else? |
| Internal Peer Benchmark | How do our students compare to others within Ball State? |
| External Peer Benchmark | How do our students compare with those of other universities that are similar to Ball State? |
| Best Practices Benchmark | How do our students compare to the best of their peers? |
Which standard or benchmark you should use depends on the purpose of the assessment. For example, if the assessment is being conducted for purposes of accountability or accreditation, you might compare a group of students’ performance on a certification exam against national norms, against the performance of students in the best programs in the country, or against students at peer institutions. If your purpose is self-analysis and improvement, you might instead compare your own students’ performance at another point in time or across groups of students.
Once you have determined the performance standards and expected results for the learning outcome, compare the results with the specified performance standard and discuss the implications of the data as they relate to the program. Discuss both strengths and areas for improvement: showcasing program success is just as important as identifying areas for improvement when it comes to making data-based decisions about the program.
Crucial Questions to Discuss Concerning Your Results
Adapted from the Marymount University Assessment Handbook (2015)
Consider the extent to which your findings can help you answer the following questions:
- What do the data say about your students’ mastery of the program’s learning outcomes?
- Do you see indications in student performance that point to weakness in any particular content areas, skills, etc.?
- Do you see areas where performance is acceptable but not outstanding, and where you would like to see a higher level of performance?
- Are there areas where your students are outstanding? How might you learn from the curriculum design used to develop these learning strengths to improve student learning in other areas?
- Are they consistently weak in some respects? How might you adjust the program’s curriculum design to improve student learning in these areas?
These are compelling and central questions for faculty, administrators, students, and external audiences alike. If your assessment information can shed light on these issues, the value of your efforts will become all the more apparent.
Including program faculty in all steps of the assessment process is important to ensure its meaningfulness and effectiveness. The inclusion of faculty insights is probably most important in interpreting results and identifying strategies for improving student learning. The methods used for sharing results are driven by the character of the department, with some departments poring over all the data generated and others simply reviewing the summary analysis outlined in Section IV of the handbook. Summary reports of assessment results, together with the University Assessment Committee’s review of the previous year’s report, will typically facilitate rich discussion and generate useful interpretations for the assessment report.
Enlist the assistance of assessment and testing specialists when you plan to create, adapt, or revise assessment instruments. Staff in the Office of Curriculum, Learning Design & Academic Assessment (LINK) are happy to assist you in finding the appropriate resources and in designing the assessment. Areas in which you might want to seek assistance include:
- ensuring validity and reliability of test instruments;
- ensuring validity and reliability of qualitative methods;
- identifying appropriate assessment measurements for specific goals and tasks; and
- analyzing and interpreting quantitative and qualitative data collected as part of your assessment plan.
Adapted from Western Washington University’s Tools & Techniques for Program Improvement: Handbook for Program Review & Assessment of Student Learning (2006)