Publication:
Undergraduate Students’ Role in Spreading and Controlling the Assessment System

Full text at PDC
Publication Date
2015-03
Publisher
L. Gómez Chova, A. López Martínez, I. Candel Torres, IATED Academy
Abstract
The Clinical-Basic Sessions (CBS) are a compulsory practical activity in the new Degree in Medicine curriculum of the Complutense University of Madrid, integrated into the strategy for improving transversal competences. Students can join a team that studies and presents a clinical case, either as speakers or, if they are sixth-year students, as tutors, or they can act as listener assessors of the public presentation of the cases and the speakers. Teachers can tutor a case and evaluate the students participating in it, or act as assessors of the speakers in a classroom where other clinical cases are presented. In 2013/2014, important changes were introduced in the evaluation process and criteria. First, objective tools were used so that students knew which aspects would be evaluated and evaluators had a guide for their assessment. Two rubrics were created: one for the continuous evaluation of the students in each clinical case (rubric A), and another for listener assessors to evaluate the speakers' presentations (rubric B). In addition, the continuous evaluation was carried out not only by tutor teachers but also by the students in each team (peer evaluation). Second, the evaluation became electronic: rubrics A and B were transferred into forms A and B using Google Drive, so evaluators had to complete these forms to submit their assessments. An attendance control system had to be implemented to verify the listener assessors' participation. For this purpose, a double-code system was set up, with one code for each case and a second code for each case's assessor; both codes were needed to complete the online forms. A new figure was created to inform participants properly: the Tutor Student for the Evaluation Control, a sixth-year student assigned to no clinical case. Their role was essential: informing the working teams, explaining the rubrics and the student assessor role, briefing the listener evaluators in the classrooms, managing the attendance control and explaining how to submit the assessments digitally. Assessors were also asked to complete the satisfaction surveys included in the forms. Several documents were prepared and uploaded to a Virtual Campus space to make these tasks easier. In total, 39 students worked as Tutor Students for the Evaluation Control, explaining the new system to 104 teams of teachers and students and managing the 26 classrooms where 52 cases were presented. The results were positive: there were no incidents despite the massive participation, 33% of the students enrolled in the degree participated in the clinical cases, and 70% took part as listener assessors. The electronic forms were submitted promptly, with 50% of the submissions collected between the 4th and 7th day, and participation in the surveys surpassed 90%. Finally, 70% of the survey respondents recommended maintaining the evaluation system. We can conclude that the new figure has achieved its aims, contributing decisively to the implementation of the CBS evaluation system. Despite this success, further work is needed to consolidate the rubrics and to move forward towards learning by assessing.
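The double-code attendance check described above can be pictured as a simple validation rule: a rubric B submission counts only if both the case code and the assessor code match the records for that clinical case. The sketch below is a hypothetical Python rendering of that logic (the actual system embedded the codes in Google Drive forms); all identifiers and codes in it are invented for illustration.

# Hypothetical sketch of the double-code attendance check; the real
# system relied on codes typed into online Google Drive forms.

CASE_CODES = {"case-17": "QX4R"}                 # one code per clinical case
ASSESSOR_CODES = {"case-17": {"A7P2", "K9M1"}}   # one code per assessor and case

def accept_submission(case_id, case_code, assessor_code):
    """Accept a rubric B form only when both codes prove attendance."""
    if CASE_CODES.get(case_id) != case_code:
        return False                             # wrong or missing case code
    return assessor_code in ASSESSOR_CODES.get(case_id, set())

# A listener assessor who attended case-17 can submit; others cannot.
assert accept_submission("case-17", "QX4R", "A7P2")
assert not accept_submission("case-17", "QX4R", "ZZZZ")

Requiring both codes means that knowing only the publicly announced case code is not enough: each assessor must also have received their personal code in the classroom, which is what ties the submitted form to actual attendance.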
Description
Contribution presented at the 9th International Technology, Education and Development Conference, held in Madrid, 2-4 March 2015. Paper no. 280. Topic: Pedagogical and Didactical Innovations. Evaluation and Assessment of Student Learning.