Methods: A subcommittee of the Instructional Programs Committee, composed of faculty and students, developed and administered a single-question survey to gather the opinions of students and faculty regarding their satisfaction with courses in the university curriculum, not limited to syllabi or method of instruction. The University includes a College of Acupuncture and Oriental Medicine (CAOM) and a College of Chiropractic (CC), both of which were surveyed. To protect confidentiality and avoid influencing responses, no personal identifying information was requested from respondents. Student class representatives administered a paper survey instrument during the 13th and 14th weeks of a 15-week trimester in Spring 2005. An electronic survey was used for students who could not be reached in a classroom setting. Microsoft Excel software was used for both data entry and analysis.
Results: In the CAOM, 97% of the students were surveyed and a response rate of 60% was obtained, although some error may have been introduced by inconsistencies in administration of the instrument. Of those responding, 84% indicated an overall “satisfaction”, 10% indicated a “lack of satisfaction”, and 6% did not respond. Of some 80 students in the CAOM internship programs who were polled via e-mail, only 4 responded. In the CC, 55% of students present in class on the days the survey was administered responded, with the same possible error as noted for the CAOM survey. Of those responding, 75% indicated an overall “satisfaction”, 24% indicated a “lack of satisfaction”, and 1% did not respond. In general, the majority of faculty responding from both colleges indicated an overall “satisfaction” with their courses.
Discussion: The intent of this survey was to provide evidence of the utility of this method of program assessment and to identify barriers to using such a measurement. No single reason could be identified for the inconsistent student response rates across courses within the same term in both programs. Areas identified as requiring more attention in future efforts included inconsistencies in administration, the choice between manual polling and confidential electronic polling, respondent fatigue due to repetition, time and labor efficiency, and funding. It is important to determine an appropriate evaluation design that collects information capable of informing the decision-making process while also addressing research integrity. It is also crucial to collect such information periodically and to analyze the data critically before making decisions regarding academic restructuring.
This abstract is reproduced with the permission of the publisher.