PD Handbook

Evaluations: Trainees, Faculty, and the Program

Last updated: January 13, 2020

Authors

  1. Carl Haisch, MD; Professor of Surgery; Surgical Immunology and Transplantation, East Carolina University, Greenville, NC
  2. Marcie Feinman, MD, MEHP (M.Ed), FACS; Program Director, General Surgery Residency; LifeBridge Health, Baltimore, MD

Introduction

Evaluations are a crucial part of an education and training program, allowing for assessment and self-reflection. While the ACGME requires specific types of evaluations, all evaluations should aim to be timely, effective, and formative.

Evaluations can be either formative or summative. Formative evaluation is designed to help trainees identify areas for improvement and develop an action plan. Summative evaluation is utilized to determine whether knowledge or skills have been achieved. The evaluation tools used for formative and summative evaluation may be the same, but their utilization varies based on the intent of the evaluation. For example, an OSCE (objective structured clinical examination) can be formative if the trainee receives immediate feedback about their performance with identified areas for improvement, or summative if the trainee takes the exam and receives a grade at the completion of a learning module or rotation.


The ACGME Perspective for Trainees

Feedback is ongoing information provided regarding aspects of one’s performance, knowledge, or understanding. The faculty empower residents to provide much of that feedback themselves in a spirit of continuous learning and self-reflection. Feedback from faculty members in the context of routine clinical care should be frequent, and need not always be formally documented.

Formative and summative evaluation have distinct definitions. Formative evaluation is monitoring resident learning and providing ongoing feedback that can be used by residents to improve their learning in the context of provision of patient care or other educational opportunities. More specifically, formative evaluations help:

  • residents identify their strengths and weaknesses and target areas that need work 
  • program directors and faculty members recognize where residents are struggling and address problems immediately 

Summative evaluation is evaluating a resident’s learning by comparing the resident’s performance against the goals and objectives of the rotation and the program. Summative evaluation is utilized to make decisions about promotion to the next level of training, or program completion.

End-of-rotation and end-of-year evaluations have both summative and formative components. Information from a summative evaluation can be used formatively when residents or faculty members use it to guide their efforts and activities in subsequent rotations and to successfully complete the residency program.

Feedback, formative evaluation, and summative evaluation compare intentions with accomplishments, enabling the transformation of a neophyte physician to one with growing expertise.

Faculty members should provide feedback frequently throughout the course of each rotation. Residents require feedback from faculty members to reinforce well-performed duties and tasks, as well as to correct deficiencies. This feedback will allow for the development of the learner as they strive to achieve the Milestones. More frequent feedback is strongly encouraged for residents who have deficiencies that may result in a poor final rotation evaluation.


ACGME Requirements for Trainees

  1. Faculty members must directly observe, evaluate, and frequently provide feedback on resident performance during each rotation or similar educational assignment. (Core)
  2. Evaluation must be documented at the completion of the assignment. (Core)
    1. V.A.1.b).(1) For block rotations of greater than three months in duration, evaluation must be documented at least every three months. (Core)
    2. V.A.1.b).(2) Longitudinal experiences, such as continuity clinic in the context of other clinical responsibilities, must be evaluated at least every three months and at completion. (Core)
  3. The program must provide an objective performance evaluation based on the Competencies and the specialty-specific Milestones, and must: (Core)
    1. V.A.1.c).(1) use multiple evaluators (e.g., faculty members, peers, patients, self, and other professional staff members). (Core)
  4. The program director or their designee, with input from the Clinical Competency Committee, must:
    1. V.A.1.d).(1) meet with and review with each resident their documented semi-annual evaluation of performance, including progress along the specialty-specific Milestones. (Core)

In summary, at a minimum, residents must receive multi-source evaluations (from faculty members, peers, patients, self, and other professional staff members) based on the Competencies and Milestones on at least a semi-annual basis. These evaluations do not include the summative evaluation that “is utilized to make decisions about promotion to the next level of training, or program completion.”

  1. Examples of Mid-Year and End-of-Year Resident Evaluations
    2018 Mid Year Evaluation (revised); 2019 Year-End Evaluation Template
  2. Multisource Evaluations: Examples of Peer, Self, Nursing, and Patient Evaluations
    Resident Self Eval; Resident Peer Eval; Nursing Evaluation; Patient Evaluation
  3. Presentation Evaluation
    Eval Presentation by resident
  4. Faculty Evaluation of Resident
    Eval General Surgery
  5. Laparoscopic cholecystectomy evaluation
    Eval OR LAP GB

Evidence

  1. This paper studied a modified Milestones global evaluation and found that this tool can be successfully adopted for semiannual assessments of resident performance.
    Borman_Milestones and Residents
  2. This study evaluated the use of QuickNotes, an electronic web-based review system for evaluating resident performance relative to established milestones.  The use of QuickNotes has been associated with an overall increased level of satisfaction in the evaluation process by both faculty and residents.
    Hartramft_Eval Residents Using Milestones

American Board of Surgery (ABS) Assessments

In June 2012, the ABS approved a requirement for assessment of operative and clinical performance during residency. These assessments do not have to be submitted to the ABS as part of the Qualifying Examination application packet; instead, the Program Director attests that 12 assessments (6 operative performance assessments and 6 clinical performance assessments) have been completed. Residents should be encouraged to complete these assessments prior to their Chief Resident year.

The link to the ABS requirement and assessments can be found at:

www.absurgery.org/default.jsp?certgsqe_resassess


The ACGME Perspective for Faculty

The quality of the faculty’s teaching and clinical care is a determinant of the quality of the program and the quality of the residents’ future clinical care. Therefore, the program has the responsibility to evaluate and improve the program faculty members’ teaching, scholarship, professionalism, and quality care.  


ACGME Requirements for Faculty

  1. The program must have a process to evaluate each faculty member’s performance as it relates to the educational program at least annually. (Core)
    1. This evaluation must include a review of the faculty member’s clinical teaching abilities, engagement with the educational program, participation in faculty development related to their skills as an educator, clinical performance, professionalism, and scholarly activities. (Core)
    2. This evaluation must include written, anonymous, and confidential evaluations by the residents. (Core)
  2. Faculty members must receive feedback on their evaluations at least annually. (Core)
  3. Results of the faculty educational evaluations should be incorporated into program-wide faculty development plans. (Core)

Faculty evaluation arguably represents one of the more difficult tasks for Program Directors. These evaluations are important as a method of objectively measuring educational potential and performance. In addition, they allow the trainees to assess the educational value of any particular teaching faculty member. It is the responsibility of the Program Director to obtain “written, anonymous, and confidential evaluations” that are reviewed with the faculty at least annually.

  1. Example of a resident evaluation of faculty
    Resident Evaluation of Faculty

Literature

  1. This study evaluates the implementation of faculty milestones, akin to resident milestones, which provide improved data about attending surgeons’ teaching and standardize faculty evaluations by residents.
    Shah_Faculty Milestones
  2. This paper describes the development and validation of robust evaluation tools that provide surgeons with insight into their clinical teaching performance.  The SETQ tools for the evaluation of surgeons’ teaching performance appear to yield reliable and valid data.
    Boerebach_Teaching Performance

The ACGME Perspective for Program Evaluations

In order to achieve its mission and train quality physicians, a program must evaluate its performance and plan for improvement in the Annual Program Evaluation. Performance of residents and faculty members is a reflection of program quality, and can be assessed using metrics that reflect the goals that a program has set for itself. The Program Evaluation Committee utilizes outcome parameters and other data to assess the program’s progress toward achievement of its goals and aims.


ACGME Requirements for the Program

Program Evaluation and Improvement

The program director must appoint the Program Evaluation Committee to conduct and document the Annual Program Evaluation as part of the program’s continuous improvement process. (Core)

The Annual Program Evaluation represents an opportunity to critically evaluate and improve the program. It is the responsibility of the Program Director to appoint the Program Evaluation Committee (PEC) and utilize it to identify areas to sustain and areas to improve. Involvement of resident trainees is highly recommended, as they can provide insight into the nuances of the educational environment.


Simulation Evaluation

Simulation is one activity that lends itself to formative evaluation. Per the ACGME, “I.D.1.b) Programs must provide for simulation and skills laboratories.” In addition, “II.D.1. Personnel should be available for administration of program components, including support for faculty member and resident scholarly activity, and for simulation. (Core)” In fact, the ACGME goes so far as to say, “IV.C.3. The program must implement a level-specific, simulation-based curriculum that complements clinical rotations in the development of technical and non-technical skills. (Core)” This type of formative assessment is especially conducive to evaluating non-technical skills, such as communication and professionalism (both ACGME core competencies).

The key to successful simulation is the debriefing after the activity. Debriefing is a type of reflective practice, which is essential for continuous learning, as illustrated by Kolb in his experiential learning theory. Many tools exist to facilitate debriefing. The TEAM Debrief Tool was utilized by the University of Wisconsin during trauma simulations and led to an increase in learner self-assessment relative to direct performance feedback (Thompson, 2018). The NOTECHS (non-technical skills) scale was adapted from the aviation industry for use in surgery; T-NOTECHS is a variant applied to trauma scenarios, and data show that this tool can reliably evaluate non-technical skills during these simulations (Sevdalis, 2008).

I am a believer in “perfect practice makes perfect.” As such, interrupting simulations for “within-event debriefing” is a tool that can be utilized to ensure that mistakes are corrected in real time. The simulation can be stopped, the facilitator can guide a debrief about the events that just occurred, and the simulation can be restarted from an earlier point to drill the skills again. Alternatively, the simulation can be allowed to run its course prior to debriefing. There are many ways to structure a debrief, but all end with cementing a plan of action for future scenarios (real or simulated).

Table/Figure

For additional details regarding simulation debriefing, please read the article More Than One Way to Debrief: A Critical Review of Healthcare Simulation Debriefing Methods by Taylor Sawyer.

References

Sawyer T, Eppich W, Brett-Fleegler M, Grant V & Cheng A. (2016). More than one way to debrief: A critical review of healthcare simulation debriefing methods. Simulation in Healthcare, 11(3), 209-217.

Sevdalis N, Davis R, Koutantji M, Undre S, Darzi A & Vincent C. (2008). Reliability of a revised NOTECHS scale for use in surgical teams. The American Journal of Surgery, 196(2), 184-190.

Thompson R, Sullivan S, Campbell K, Osman I, Statz B & Jung HS. (2018). Does a written tool to guide structured debriefing improve discourse? Implications for interprofessional team simulation. Journal of Surgical Education, 75(6), 240-245.