CDEM Voice – FOAMonthly


Online Source: http://www.facultyfocus.com/articles/teaching-professor-blog/can-learn-end-course-evaluations/

With Match behind us, we are entering the last quarter of the academic year. Many will reflect on the progress of graduating medical students and residents and anticipate the arrival of new medical students and interns. Along with that reflection and anticipation, medical schools are likely to be delivering end-of-course evaluations. An article by Dr. Maryellen Weimer on the website Faculty Focus entitled What Can We Learn from End-of-Course Evaluations? discusses how to use end-of-course evaluations to improve the quality of your teaching and your students’ learning.

First, mindset is important. End-of-course evaluations should be viewed as an opportunity for improvement; regardless of how good (or bad) the scores are, there is always room to improve the learning experience for students. Next, be curious. Use global ratings to ask yourself questions about your teaching style and why it is or is not effective for your learners. The article references the Teaching Perspectives Inventory (http://www.teachingperspectives.com/tpi/), which provides information about your instructional strategies and can also offer useful insights for Educator Portfolios and educational philosophy statements. Finally, be specific and timely in the feedback you request from your students. The start-stop-continue method has been shown to improve the quality of student feedback: ask students what you should start doing, stop doing, and continue doing. Course directors can also share their interpretations of the feedback and develop an action plan for change and quality monitoring. End-of-course evaluations no longer need to evoke a sense of dread!

 

Kendra Parekh, MD


CDEM Voice – Research Column


How to appropriately analyze a Likert scale in medical education research

 

A common tool in both medical education and medical education research is the Likert scale, an ordinal scale typically using 5 or 7 levels. Despite its regular use, its interpretation and statistical analysis continue to be a source of controversy and consternation. While the Likert scale is numerically based, it is not a continuous variable but rather an ordinal one. The question is then how to correctly analyze the data.

In the strictest sense, ordinal data should be analyzed using non-parametric tests, as the assumptions necessary for parametric testing are not necessarily met. Investigators and readers are often more familiar with parametric methods and more comfortable with the associated descriptive statistics, which may lead to their inappropriate use. Mean and standard deviation are invalid descriptive statistics for ordinal scales, as are parametric analyses based on a normal distribution. Non-parametric statistics do not require a normal distribution and are therefore always appropriate for ordinal data. Common examples of parametric tests are the t-test, ANOVA, and Pearson correlation; the corresponding non-parametric tests are the Wilcoxon rank-sum test, the Kruskal-Wallis test, and the Spearman correlation.
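
To make this concrete, here is a minimal sketch in Python with scipy using hypothetical 5-point Likert responses (the data, group names, and sample sizes are illustrative assumptions, not drawn from any study). It reports medians and interquartile ranges as descriptive statistics and applies the non-parametric tests named above.

```python
# Minimal sketch (hypothetical data): non-parametric analysis of Likert responses.
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses for a control and an intervention group
control      = np.array([2, 3, 3, 4, 2, 3, 1, 4, 3, 2])
intervention = np.array([4, 3, 5, 4, 4, 3, 5, 2, 4, 4])

# Descriptive statistics appropriate for ordinal data: median and interquartile range
for name, grp in [("control", control), ("intervention", intervention)]:
    q1, med, q3 = np.percentile(grp, [25, 50, 75])
    print(f"{name}: median={med}, IQR={q1}-{q3}")

# Two independent groups -> Wilcoxon rank-sum / Mann-Whitney U
# (the non-parametric analogue of the t-test)
u_stat, p_u = stats.mannwhitneyu(control, intervention, alternative="two-sided")
print(f"Mann-Whitney U={u_stat}, p={p_u:.3f}")

# Three or more groups -> Kruskal-Wallis (non-parametric analogue of one-way ANOVA)
third_group = np.array([3, 2, 4, 3, 3, 2, 4, 3, 2, 3])
h_stat, p_kw = stats.kruskal(control, intervention, third_group)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_kw:.3f}")

# Association between two ordinal variables -> Spearman rank correlation
rho, p_rho = stats.spearmanr(control, intervention)
print(f"Spearman rho={rho:.2f}, p={p_rho:.3f}")
```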

The confusion and controversy arise because parametric testing of ordinal data may be appropriate, and in fact more powerful than non-parametric testing, provided certain conditions hold. Parametric tests require assumptions such as normally distributed data, equal variance in the population, linearity, and independence. If these assumptions are violated, a parametric statistic cannot be applied. Care must also be taken to ensure that averaging the data is not misleading. This can occur when the data cluster at the extremes, producing a seemingly neutral average. For instance, if we used a Likert scale to evaluate the current polarized political climate, responses would likely cluster at the extremes, yet the mean might lead us to believe everyone is neutral.
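
As a quick, made-up illustration of how clustering at the extremes can masquerade as neutrality (not an example from the referenced article):

```python
# Minimal sketch (hypothetical data): responses clustered at the extremes of a
# 5-point scale produce a misleadingly "neutral" mean.
import numpy as np

polarized = np.array([1, 1, 1, 1, 1, 5, 5, 5, 5, 5])
print(np.mean(polarized))                         # 3.0 -- looks "neutral"
print(np.unique(polarized, return_counts=True))   # but no one actually answered 3
```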

Frequently, the responses on a Likert scale are averaged and the means are compared between the control and intervention groups (or before and after implementation of an educational tool) using a t-test or ANOVA. While these are the correct statistical analyses for comparing means, one cannot calculate a meaningful mean for a Likert scale: it is not a continuous numerical variable, and because the distance between values may not be equal, it is not interval data either. For example, in a study comparing mean arterial blood pressures between an experimental drug and placebo, there is a continuous numerical variable for which a mean can be calculated in each study group. In contrast, for a Likert scale of 1-5, the values are ordinal classifications and there are no responses of 1.1, 2.7, 3.4, or 4.2. Therefore, a mean of 3.42 for the control group and 3.86 for the intervention group does not fall within the pre-defined ordinal response categories of the Likert scale.

One approach is to dichotomize the data into “yes” and “no” categories. For example, on a scale from 1-5 with 3 being “average,” one could group responses into >3 or <3. Dichotomizing the data is also a mechanism to increase power. An exception is when one uses a series of questions, averages each individual’s responses to create a single composite score, and then compares the composite scores across groups. Under this scenario, comparing means may be appropriate, since the data have been converted into a continuous variable.

After dichotomizing, one can use a Fisher’s exact or a chi-squared test to analyze the data. Stay tuned for a future explanation of the differences between Fisher’s exact and chi-squared analyses!
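
A minimal sketch of this dichotomize-then-test approach, again with hypothetical data in Python/scipy. How to handle the midpoint response of 3 is a design choice; for illustration this sketch groups it with the lower half.

```python
# Minimal sketch (hypothetical data): dichotomize 5-point Likert responses around
# the "average" of 3, then compare groups with chi-squared or Fisher's exact test.
import numpy as np
from scipy import stats

control      = np.array([2, 3, 3, 4, 2, 3, 1, 4, 3, 2])
intervention = np.array([4, 3, 5, 4, 4, 3, 5, 2, 4, 4])

def dichotomize(responses, cutoff=3):
    """Count responses above the cutoff vs. at-or-below it."""
    above = int(np.sum(responses > cutoff))
    return above, len(responses) - above

# 2x2 contingency table: rows = groups, columns = (>3, <=3)
table = np.array([dichotomize(control), dichotomize(intervention)])
print(table)

# Chi-squared test of independence (suitable when expected cell counts are adequate)
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_chi2:.3f}")

# Fisher's exact test (often preferred for small samples / sparse tables)
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"odds ratio={odds_ratio:.2f}, p={p_fisher:.3f}")
```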

Understanding the statistics can help improve experimental design and avoid inappropriate applications of statistical analyses that yield erroneous conclusions.

 

Jason J. Lewis, MD & David Schoenfeld, MD, MPH

Beth Israel Deaconess Medical Center/Harvard Medical School

Reference:

Boone, H.N. and Boone, D.A. (2012, April). Analyzing Likert Data. Retrieved from: https://joe.org/joe/2012april/tt2.php. Accessed February 16, 2017.

CDEM VOICE – Committee Update


The NBME EM Advanced Clinical Examination Task Force was formed in 2011. The task force is made up of CDEM members who are current or previous clerkship directors along with NBME staff. The task force was charged with the development of an EM Advanced Clinical Examination (ACE). The task force has been responsible for developing the test blueprint, finalizing the test content and assigning appropriate weights to specific categories of disease across various physician tasks. Task Force members have also been responsible for generating new items to fill gaps in the test question pool. To date, members have written and reviewed many hundreds of questions. The EM ACE was first made available in April 2013. Although the test is designed as a knowledge base assessment for students completing a required 4th year clerkship in EM, the test is also taken by many 3rd year students. In the 2015–2016 academic year, almost 5,000 medical students across the country completed the examination (4th-year students, n = 3,752; 3rd-year students, n = 995).

This past year, the task force, working with the NBME, conducted a web-based study to establish grading guidelines for the EM ACE. Medical school faculty representing 27 different institutions participated in this study. The task force has published multiple abstracts regarding the EM ACE examination. We continue to meet on an annual basis and are currently collaborating with the NBME on a number of research initiatives.

  1. Miller ES, Wald DA, Hiller K, Askew K, Fisher J, Franzen D, Heitz C, Lawson L, Lotfipour S, McEwen J, Ross L, Baker G, Morales A, Butler A. Initial Usage of the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination.  Acad Emerg Med. 2015;22:s14.
  2. Miller ES, Wald DA, Hiller K, Askew K, Fisher J, Franzen D, Heitz C, Lawson L, Lotfipour S, McEwen J, Ross L, Baker G, Morales A, Butler A. National Board of Emergency Medicine Advanced Clinical Examination 2014 Post-Examination Survey Results. Acad Emerg Med. 2015;22:s109.
  3. Fisher J, Wald DA, Orr N, et al.  National Board of Medical Examiners’ Development of an Advanced Clinical Examination in Emergency Medicine.  Ann Emerg Med. 2012;60:s190-191.
  4. Ross L, Wald DA, Miller ES, et al.  Developing Grading Guidelines for the NBME® Emergency Medicine Advanced Clinical Examination.  Accepted for publication West J Emerg Med 2017.
 

David A. Wald, DO

On behalf of the NBME EM ACE Task Force

CHAIR:
David Wald
MEMBERSHIP:
  • David A. Wald
  • Doug Franzen
  • Jonathan Fisher
  • Kathy Hiller
  • Emily Miller
  • Luan Lawson
  • Kim Askew
  • Jules Jung
  • Cory Heitz
Prior Members:
  • Shahram Lotfipour
  • Jill McEwen

CDEM Voice – Member Highlight


Sharon Bord, MD, FACEP

Assistant Professor

Associate Director, Medical Student Education

Department of Emergency Medicine

The Johns Hopkins University School of Medicine


  1. What is your most memorable moment of teaching?

I would say that teaching is about lots of little moments rather than one big memorable moment: the thank you at the end of a shift from a student on their first rotation, sharing Match Day envelope opening with students I have advised, and seeing students years after they graduated presenting an abstract or giving a presentation at a conference. Putting all of these experiences together makes my job worthwhile.

  2. Who or what is your biggest influence?

I think that my kids are my biggest influence. I have two girls, ages 4 and 6, and ultimately I want to be a wonderful role model for them. I want to teach them that they can do anything they want with their lives and to dream big. It is sometimes hard because with our job we can miss out on weekends, holidays, and other special occasions, but they are starting to understand the importance of what I do and why I do it, and I think that this will only continue to grow over the years.

  3. Any advice for other clerkship directors?

Over the years I have learned to listen to the students. They generally have their finger on the pulse of the medical school and have the best understanding of how the different clerkship experiences compare to one another. Also, keep things fun and interactive! My goal is to get people excited about Emergency Medicine; who knows when a student will be in a position to save someone’s life? I want them to feel empowered and prepared to act quickly.

  4. What is your favorite part about being an educator/director?

My favorite part about being an educator is giving back. Being an educator is more than just reaching students on their clerkship. It involves teaching nurses, physician assistants, and patients. To me it means setting a strong and positive example for those around me. And then, at the end of a shift or encounter, receiving a genuine thank you can really make your day!

  5. Any interesting factoids you would like to share?

I got accepted into medical school off the wait list on August 8th, just two weeks before classes started. Throughout medical school I was one of the hardest-working students, mainly because I felt I had something to prove. All that hard work paid off when I was inducted into AOA during graduation week. Following medical school I went on to an amazing training program at Boston Medical Center, and now I am working as an educator at The Johns Hopkins School of Medicine. I am just so thankful that someone took a chance on me that early August day and completely changed the trajectory of my life!

I tell students all the time, the hardest part is getting into medical school!

CDEM Voice – FOAMonthly


Introducing Medical Students to Social Media

Online Source: https://www.amsa.org/social-media-guidelines-medical-students-physicians/


 

The modern medical student is no stranger to social media networks, most likely having never lived in a world without internet readily available in schools, homes, and libraries. As sources of readily consumable information, many students look to Twitter, Facebook, blogs, and podcasts to supplement their medical learning and to interact with others who share their interests. While high-quality evidence is lacking, many studies have found social media to be a satisfactory learning platform for students. However, due to the informal nature of most social media networks, it can be difficult to adhere to principles of professionalism, patient safety and privacy, and transparency. Many medical schools have developed guidelines regarding online behavior, but there is no clear consensus on what level of involvement a physician in training should have. In the same way that you would guide students on how to maintain professionalism at the bedside, how can you appropriately guide your students to interact on social media platforms?

AMSA, the American Medical Student Association, has a helpful introductory guide to the basic principles of navigating the many pitfalls associated with medical information on social media. The guide advises students (and physicians) to respect patient privacy and copyright laws and to adhere to standards of professionalism. One can easily imagine how a single tweet designed to educate could violate patient confidentiality, opening the user up to a minefield of medico-legal issues. In contrast to usual social media behavior, AMSA recommends that users be clear about their identity, disclose potential conflicts of interest, and exercise caution when listing professional affiliations or advocating political views. With these basics under their belts, your students can benefit from the medical dialogue on social media without sacrificing their burgeoning professional reputations.

 

 

Emily Brumfield, MD
@DrSadieHawkins
Assistant Director of Undergraduate Medical Education
Department of Emergency Medicine
Vanderbilt University Medical Center

CDEM VOICE – Committee Update


– An opportunity for involvement and a call for nominations – 


Greetings and Happy New Year from the CDEM Awards Committee. The committee has both the honor and the pleasure of recognizing medical student faculty educators from across the country for their dedication, innovation, and achievement in undergraduate medical education. It is a great opportunity to get involved (we are looking for new members to join us) and to reward our colleagues for the work they do. The committee solicits nominations for four awards at the beginning of each calendar year, to be presented at the annual CDEM business meeting each spring:

  • CDEM Clerkship Director of the Year Award
  • CDEM Young Educator of the Year Award
  • CDEM Distinguished Educator Award
  • CDEM Award for Innovation in Medical Education

For a description of each award, please go to CDEM Awards.


Please nominate yourself or someone you know!

Each nomination packet must include:

  1. Letter of Support from the nominator specifying which award the candidate is being nominated for, how the nominee fulfills the award criteria, and the relationship of the letter writer to the nominee (self-nominations are welcome).
  2. Curriculum Vitae and/or Teaching Portfolio of the nominee.
  3. Nominations may also include other supporting documentation, such as a detailed description of curricular innovations, teaching evaluations, letters of support from colleagues, supervisors, or students, funded grant or project applications, and publications.

All applications and questions should be sent to CDEM@saem.org.

Importantly, the DEADLINE for submission is March 17.

David Cheng MD & Sarkis Kouyoumijian MD

CDEM Awards Committee Co-Chairs

CDEM Voice – Topic 360



I vividly remember my first patient as an academic attending. It was many years ago now, but the encounter is still sharp in my memory and one I still reflect on. The patient was a male in his 20s, post cardiac arrest, presenting in sinus rhythm thanks to the great work of our prehospital providers, but now in need of a more definitive airway. I prepped the new intern on patient positioning, technique, medications, backup plans, confirmatory devices/methods, and post-intubation management, and after all of that we proceeded. The new EM-1 approached the patient with great outward confidence and promptly and expertly converted the endotracheal tube into a nasogastric tube. The one phrase that sticks out in my mind came when I asked, “Are you in?” (keep in mind this was before the days of video laryngoscopy), and the not-so-confident response was “I think so…?” The patient did very well despite our initial attempt, with no complications and no desaturation, and I have reflected back on this interaction many times over the years. You see, I spent all my time on preparation and amazing instruction (if I do say so myself), but had I prepared my new resident for failure? Why wasn’t that part of my plan and instruction? Was I afraid of even mentioning failure? I think it’s safe to assume that my intern reflected on that interaction as much as I did in the days to weeks that followed.

For some of our residents, the post-graduate training period may represent the first time in their life (professional or personal) that they experience a significant setback, mistake, or failure with negative consequences. This may take the form of a clinical decision (or indecision), interaction(s) with colleagues or patients, or just the stress of the training. The students and residents that we are lucky enough to mentor and teach are (for the most part) extraordinarily gifted and driven individuals, many of whom have never failed.

In her book Mindset: The New Psychology of Success, Dr. Carol Dweck discusses how an individual’s mindset can be a great determinant of their success. She makes the distinction between what is referred to as a ‘Fixed’ versus a ‘Growth’ mindset. I can tell you that this book has helped me on numerous occasions to have more meaningful interactions with my students and residents (and faculty), and has helped me understand how to help our trainees work through a setback in a much more beneficial way. In the book, Dr. Dweck explains that praising learners for their talent instead of their hard work does not build mental toughness or confidence, does not help develop resiliency, and can actually be counterproductive.

[Figure: Growth vs. Fixed mindset]

 

I suspect that many of us in medical education might recognize some examples of a Fixed mindset: the defensive pushback you encounter when discussing a “miss” with a resident or student, or the frustration and anger voiced after critical feedback on an evaluation or during a faculty interaction. If a learner seems to give up in response to this type of feedback or feels that they are “no good,” this should be an indication to us of the learner’s mindset at that time. Developing a growth mindset will take time, and my hope is that we view it as a wonderful opportunity to build trust with our learners and demonstrate to them that their intelligence and talent do not determine their worth. The “Growth” mindset is one that welcomes challenges and opportunities, while the “Fixed” mindset places more value on praise and accolades.

After reading existing work on this concept in other fields, I have tried to incorporate it into our resident and student curriculum. I stopped trying to “win” the crucial conversations I was having with my residents and students and have instead begun to listen in order to understand their mindset. I suppose you would have to ask them, but I feel that our interactions are now much more meaningful and understanding.

Bo Burns, DO FACEP
George Kaiser Foundation Chair in Emergency Medicine
Associate Professor & Program Director
Department of Emergency Medicine
University of Oklahoma School of Community Medicine