All of our programs engage in regular assessment practices to ensure that our students receive the best possible education, one that leads to successful careers.
Program review is a pivotal part of that assessment practice, since it establishes a common structure and a timetable for a comprehensive self-study that leads to important institutional decisions about effective future directions for the program. Program review draws on a series of assessment tools, including student, faculty, graduate, and employer surveys. It involves the analysis of retention data, changes in the targeted fields, suggestions from advisory boards, and much more.
Program review is not, however, the only opportunity for program assessment. Effective program review is, in fact, the result of programs' ongoing practice of collecting data on student learning each year. While surveys are, of course, important mechanisms for collecting data on student, graduate, and employer perceptions, they are not the only, or even the most effective, way of assessing whether our students have the skills and knowledge we expect of them.
There are many ways for programs to collect data on student learning. First, of course, it is vital to have clearly articulated learning outcomes that specify the kinds of skills and knowledge programs expect of their graduates. These outcomes should not be general; they need to be specific and measurable. While we all want our graduates to get jobs in their fields, and while placement rates are an important measure of a program's effectiveness, placement rates are not learning outcomes. They do not tell us what skills or knowledge we expect our graduates to be able to demonstrate.
Certification tests can be important measures of student knowledge relative to the expectations of a professional area, and they are an essential part of many Ferris programs. We are justly proud of how well our graduates do on certification tests. When we receive an analytic report of the results that identifies our students' strengths and weaknesses, these tests become invaluable tools for assessing learning outcomes as well. Where testing services do not provide such an analysis, sample tests, when available, can offer an opportunity to analyze student performance by target area. For example, one would want to know how well students in our Pre-Optometry program did on the optics section of the Optometry Admissions Exam compared with how well they did on the organic chemistry section.
Capstone courses can play a vital role in program assessment. Capstones often offer the opportunity to evaluate, using a clear rubric, students' performance on simulations, projects, or internships that require them to synthesize the skills and knowledge learned during the program. The data can be collected with a clear eye toward measuring how well the program has prepared students to meet its key targeted outcomes. If students seem weak in an area considered important to the program, such as teamwork, this finding could prompt deliberation on whether the skill needs to be re-emphasized elsewhere in the curriculum.
Course assessment can provide useful information for program assessment, especially when key projects that assess program outcomes can be embedded in course work. For example, in Advanced Business Writing, students are required to write a long analytic report in which they analyze business problems and offer recommendations. These reports can be used not only to rate the effectiveness of the course but also to provide information to the program. Clearly, these works could be assessed to evaluate the professional writing skills of upper-level business students. They could also be evaluated to gauge other key skills, such as the ability to analyze a problem, the use of core disciplinary knowledge, and the ability to draw on business resources. Course assessment data can thus be collected and evaluated as a way of answering program-level questions about student learning.