Improve the assessment
When academic units are in the developing stages of assessment, the assessment process itself often needs improvement before it can yield actionable results: results that are reliable and valid enough that faculty can (a) use them to improve the curriculum, or (b) feel comfortable disseminating them publicly.
Areas to consider when improving an assessment approach include:
- Determine where and when to conduct assessment
- Design outcome measures (assignments, tests, etc.)
- Collect data
- Analyze, interpret and evaluate findings
Evaluating Measures
It is possible to evaluate outcome measures by asking the three questions found in Tool 3: Questions for Evaluating Outcome Measures. If faculty and chairs are able to answer “yes” to all three questions, it is likely that a strong set of measures has been developed.
Tool 3: Questions for Evaluating Outcome Measures
- Does the measure provide sufficient data and information to analyze the learning outcome?
- Does the measure require a reasonable amount of work to collect?
- Does the measure establish performance standards to help guide the analysis?
Discuss planned curricular or program improvements for this year based on the assessment of the outcome.
This section describes the plan of action for the coming year. Planned improvements usually address one of the following areas:
- Courses supporting learning outcomes
- Learning outcomes
- Measures (rubrics, tests, surveys)
EXAMPLE: Improving Outcome Measures
The following illustration shows how the questions in Tool 3: Questions for Evaluating Outcome Measures can be used to evaluate outcome measures. This example builds on the learning outcome developed in section one.
Upon successful completion of this program, students will be able to apply ethical reasoning in discussing an ethical issue.
A department first uses an indirect measure:
Two questions from the Graduating Student Survey:
For each of the following skills, please indicate how well you believe your education prepared you to:
- Determine the most ethically appropriate response to a situation.
- Understand the major ethical dilemmas in your field.
Students respond to these questions by indicating their choice on a scale ranging from “Poor” to “Excellent.”
We will evaluate this outcome measure by asking the following questions:
1. “Does the measure provide sufficient data and information to analyze the learning outcome?”
a. No, because this evidence is only the student’s opinion. It is an indirect measure, and indirect measures are not sufficient by themselves to analyze learning outcomes; the measure does not look at the student’s actual ability to “apply ethical reasoning in discussing an ethical issue.”
2. “Does the measure require a reasonable amount of work to collect?”
a. Yes, the amount of work required is reasonable.
3. “Does the measure establish performance standards to help guide the analysis?”
a. No, it does not provide a performance standard to help guide the analysis, though one could be developed regarding the students’ opinions.
The department revises the outcome measure to a direct measure, meeting the minimum requirement at NAU that each learning outcome be assessed using at least one direct measure of student learning:
A paper, taken from each student’s portfolio, in which the student discusses an ethical issue. The papers are rated by each faculty member using a rubric specifically designed to measure the application of ethical reasoning.
We evaluate this outcome measure by asking the same three questions as before:
1. “Does the measure provide sufficient data and information to analyze the learning outcome?”
a. Yes, the measure directly measures students’ application of ethical reasoning.
2. “Does the measure require a reasonable amount of work to collect?”
a. No, the faculty may object to having to read all of the student papers and may deem this measure too much work.
3. “Does the measure establish performance standards to help guide the analysis?”
a. No, there is no specific performance standard established.
The department revises the outcome measure to:
Student papers that discuss ethical issues are extracted from student portfolios. Each paper is rated by two faculty members on a rubric designed to measure the application of ethical reasoning. The mid-point of the rubric (a rating of 3) provides a description of the performance standard required by the program. The mid-point states that the paper, “Identifies the key stakeholders, states one ethical approach in their discussion, discusses both the benefits and risks associated with the ethical issue, shows consideration of key stakeholder interests, uses at least one normative principle in discussing the issue.”
Revisiting the three questions reveals the strengths of this outcome measure.
1. “Does the measure provide sufficient data and information to analyze the learning outcome?”
a. Yes, the measure directly measures students’ ability to apply ethical reasoning.
2. “Does the measure require a reasonable amount of work to collect?”
a. Yes, it is less burdensome for the faculty to collect these data than with the previous outcome measure.
3. “Does the measure establish performance standards to help guide the analysis?”
a. Yes, it provides a performance standard to help guide the analysis.
*Adapted from the Marymount University Assessment Handbook (2015)