Capstones are often the final courses or internships in a program. They provide an excellent opportunity to determine whether students have acquired the central knowledge and skills identified as the program's expected outcomes. Many different approaches can be used effectively in capstone courses. If you are interested in developing your capstone assessment, contact Robert VonderOsten at 591-2916. Many people on campus have expertise in different models and can work with you to enhance your own assessment effort.
The key to capstone assessment is determining students' strengths and weaknesses in meeting your outcome expectations so that you can decide how to develop the program to improve student performance.
Capstone assessment can also address broader skills important to our graduates, such as problem solving, computer literacy, teamwork, communication, and even the reading of professional material.
Simulations and Team Projects
Students can be provided with situations or problems that model the kinds of work expected of them. Working individually or as part of a team, they must bring together the learning emphasized in the program. A rubric that provides a model for evaluating the different skill sets can be used to analyze how successfully students apply each expectation. Evaluations can be done by the faculty member, by teams of faculty members from outside the program, or even by outside representatives from the field. It is important to make this more than an evaluation of particular students by identifying patterns of performance. Are there elements that most students perform well? Are there areas of consistent under-performance by a number of students? Such data can be useful in determining whether any area of the curriculum needs strengthening. For example, if one expectation is that students work effectively in a team to solve a problem, but teamwork is fairly contentious and disorganized, then the program may need to make training and practice in teamwork a more significant part of the curriculum.
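Aggregating rubric scores across students makes these patterns easy to spot. A minimal sketch in Python, using entirely hypothetical criteria, scores on an assumed 1–4 rubric scale, and an arbitrary flagging threshold:

```python
# Hypothetical rubric scores (1-4 scale), one list of student scores
# per rubric criterion. Criteria and values are illustrative only.
scores = {
    "problem analysis":   [4, 3, 4, 3, 4],
    "technical solution": [3, 4, 3, 4, 3],
    "teamwork":           [2, 1, 2, 2, 1],
    "communication":      [3, 3, 4, 3, 4],
}

# Average each criterion and flag those below a chosen threshold
# (2.5 here is an assumption, not a standard).
THRESHOLD = 2.5
averages = {crit: sum(vals) / len(vals) for crit, vals in scores.items()}
weak_areas = [crit for crit, avg in averages.items() if avg < THRESHOLD]

for crit, avg in averages.items():
    print(f"{crit}: {avg:.2f}")
print("Needs curricular attention:", weak_areas)
```

In this made-up data, teamwork averages 1.6 while the other criteria average above 3, so teamwork would be flagged as the area where the curriculum may need strengthening.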
Portfolios
Students may be asked to maintain a portfolio of their work over the course of the curriculum and then prepare and present it as part of a capstone course; final projects might also be included in the portfolio. Such portfolios may be presented by students or simply collected and evaluated. Again, the portfolios need to be organized and evaluated according to a pre-established rubric that identifies the key areas to assess and the criteria that measure the key outcomes expected of graduates. They may be evaluated by faculty, by people in the field, or by a combination. The key is to look for patterns. If even 20% of the portfolios for the Technical Communication program were found to show weak proofreading skills (not in fact the case), this would be a serious matter that would need to be addressed in the curriculum.
There are many different ways to collect and evaluate portfolios. Students may be asked to select their best work, to include work representative of different outcome areas or different courses, or to include work that shows their development. The portfolio can be assessed as a whole, or later work can be assessed against earlier work to measure development.
Tests
Tests can be effective assessment measures if constructed carefully. If the goal of capstone assessment is to evaluate student learning across the program, not just in that course, the test cannot simply cover the material of the capstone itself. It needs to be tied to the core outcomes, with questions relevant to each of them. The analysis of the test results should look not just at the performance of individual students but at the general success of students on the key questions. If a program's goal is for students to be able to analyze whether a specific material can withstand a measurable stress, a high success rate by most students on most such questions would indicate curricular success. If, however, fifty percent of students had problems with a majority of such questions, faculty in the program would need to consider whether the problem lies with the test or with the learning of the material. Perhaps the relevant course falls early enough in the curriculum that the material also needs to be incorporated and reviewed in later courses.
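The same kind of pattern analysis applies to test results: compute a success rate for each core outcome rather than only a score per student. A sketch with made-up data (the outcome names, results, and 50% cutoff are all illustrative assumptions):

```python
# Hypothetical per-outcome results: True means the student succeeded on
# the questions tied to that core outcome.
results = {
    "stress analysis":    [True, False, True, False, False, False],
    "material selection": [True, True, True, True, False, True],
}

# Success rate per outcome; rates below 50% flag either a flawed set of
# questions or a gap in the curriculum worth investigating.
rates = {outcome: sum(r) / len(r) for outcome, r in results.items()}
flagged = [o for o, rate in rates.items() if rate < 0.5]

for outcome, rate in rates.items():
    print(f"{outcome}: {rate:.0%}")
print("Review needed:", flagged)
```

Here only a third of students succeeded on the stress analysis questions, so that outcome would be flagged for review, while material selection (five of six students) would not.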
External certification tests can also be useful measures of a program's success. However, students' overall scores on such instruments are not sufficient information. If possible, it is useful to have an analysis of student performance on different areas of the test. If the certification test does not provide such analytic information, in-house instruments, even sample certification tests, would be necessary to gain a better picture of the strengths and weaknesses of student learning.
Internships and Clinical Experiences
Internships and clinical experiences are an integral and important part of a Ferris education. They provide students with an opportunity to practice what they have learned in a work situation while they can still get valuable feedback on their performance. Internships and clinical experiences are also excellent assessment opportunities. A structured rubric, completed either by the professional in the field or by a supervising faculty member, can provide analytic information about our students' performance. By identifying patterns of performance across a number of students, programs can gain solid data on where students are best prepared and where they would benefit from additional learning opportunities.
Specific Projects or Assignments
Students can be given a specific project or assignment that allows programs to evaluate how well they have prepared the students to meet the expectations of the program. Students in an English major, for example, might be asked to prepare a literary analysis of a work of their choice suitable for presentation at a conference that accepts student presentations. The work can then be evaluated with an analytic rubric. If thirty percent of the resulting reports scored weak in integrating secondary sources into the critique, then this could be identified as a part of the curriculum that needs to be strengthened. For projects or assignments to provide useful information about learning in the program, they cannot be specific to the capstone course alone but must draw on the overall learning in the program.