CDEM Voice – Research Column


Choosing Wisely: Chi-Square vs. Fisher’s Exact

Choosing the right statistical test helps us arrive at the true answer.  Much as in clinical practice, where we weigh the risks and benefits of diagnostic testing, the same holds true in statistical testing.  Every test has limitations and a risk of producing a false positive or false negative, which is why it is important to choose the optimal test.

In educational research, we often find ourselves analyzing data arranged in a contingency table and then having to choose the “right” test.  Both Fisher’s exact test and the chi-square test can be used.  To choose the best test for your data, you must understand how the tests work and their limitations.

The chi-square test for independence compares variables in a contingency table.  It is a particularly useful statistic because, in addition to determining whether a significant difference is observed, it also helps identify which categories are responsible for those differences.  As a non-parametric test, it does not require assumptions about the distribution from which the data are drawn, but it does have its own requirements that must be met for a useful and valid result.

To use a chi-square test, the data should be counts or frequencies (rather than percentages) from a sufficiently large sample.  The categories must be mutually exclusive (for example, intervention vs. control group) and the observations independent, not paired.  There can be only two variables, though each variable can have multiple levels (for example, the 5-level Likert scale).  Finally, the expected count must be at least 5 in at least 80% of the cells in the table.  For instance, in a 2×2 contingency table, if any of the four cells has an expected count below 5, the chi-square test becomes unreliable.  A good rule of thumb is that a sample size of at least five times the number of cells will usually satisfy this final assumption.
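As a concrete illustration, the expected-count assumption can be checked directly, since most statistical software reports the expected counts alongside the test statistic. The sketch below assumes Python with SciPy and uses made-up counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = intervention vs. control, columns = pass vs. fail
table = np.array([[30, 10],
                  [22, 18]])

chi2, p, dof, expected = chi2_contingency(table)

# Rule of thumb from the text: expected count >= 5 in at least 80% of cells
assumption_ok = (expected >= 5).mean() >= 0.80
print("expected counts:\n", expected)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, assumption satisfied: {assumption_ok}")
```

Here every expected count is well above 5, so the chi-square result can be trusted; had any cell fallen below 5, an exact test would be the safer choice.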

While the chi-square is a very useful test for determining whether a significant difference is observed, it provides little information about the strength or magnitude of that difference.  With a large enough sample we can achieve statistical significance even when the association is weak.  To determine the strength of the association, a statistic such as Cramér’s V can be applied.  In addition to being sensitive to large samples, the chi-square test is also sensitive to small frequencies: if the expected count in any cell is below 5, or more than 20% of cells have expected counts below 5, the approximation used to calculate the chi-square statistic becomes unreliable and risks either a type I or type II error.
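Cramér’s V is simple to compute from the chi-square statistic itself. The sketch below (Python with SciPy, hypothetical counts) shows a table where the p-value crosses the significance threshold yet the association is weak:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical large-sample table: significant p-value, but a weak association
table = np.array([[40, 60],
                  [55, 45]])

chi2, p, dof, _ = chi2_contingency(table, correction=False)

# Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1))), ranging from 0 to 1
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"p = {p:.4f}, Cramér's V = {cramers_v:.3f}")
```

For this table the p-value is below 0.05, yet V is only about 0.15: a statistically significant but weak association, exactly the situation described above.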

Low expected cell frequencies are often encountered in small-sample educational research or clinical trials.  This is where Fisher’s exact test is superior.  Fisher’s exact test is just that: exact.  It does not rely on an approximation as the chi-square test does, and it therefore remains valid for small samples.  As the sample size grows, the p-value generated by a chi-square test approaches that of a Fisher’s exact test, which has the added benefit of remaining valid at large sample sizes as well.
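To see the two tests side by side on a small sample, here is a sketch (Python with SciPy, hypothetical counts chosen so the expected counts fall below 5):

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical small-sample 2x2 table (n = 20)
table = [[1, 9],
         [8, 2]]

odds_ratio, p_fisher = fisher_exact(table)
chi2, p_chi2, dof, expected = chi2_contingency(table)

print("expected counts:\n", expected)   # cells of 4.5 violate the chi-square assumption
print(f"Fisher's exact p = {p_fisher:.4f}")
print(f"chi-square p     = {p_chi2:.4f}")
```

Because half the expected counts are below 5, the chi-square approximation is unreliable here; the Fisher’s exact p-value is the one to report.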

Historically, statistical tests using approximations such as the chi-square were preferred because of the arduous calculations required for exact tests.  With powerful modern computers these calculations are easy to perform, generating exact values that do not carry as significant a risk of type I or type II error from small sample sizes.  While typically used only for 2×2 tables, Fisher’s exact test can be applied to larger contingency tables provided you have ample computing power.

Jason J. Lewis, MD    &    David Schoenfeld, MD, MPH

Beth Israel Deaconess Medical Center/Harvard Medical School

 

 

CDEM Voice – FOAMonthly


Curating FOAMed Video Resources for your Students

Featured Sites: Vimeo and YouTube

 

As seen in the ED…

Attending: The patient in room 12 needs a paracentesis, do you know how to do one?

Student: No, but I watched a video online one time!

The old adage of “see one, do one, teach one” has now become “watch a video, do one, tell someone else about the video.” Modern medical students are sophisticated navigators of online repositories and increasingly rely on supplemental online resources (i.e., not regulated by you) to complement their learning. Videos can be especially helpful in procedural teaching, but how can we as educators ensure our students are exposed to high-quality teaching and high-fidelity simulations? Thankfully, there’s no need to create fresh digital media on your own – numerous open-access repositories already exist. But in that sea of information, how can you curate the collection to best target your learners?

Using online platforms such as Vimeo or YouTube, you can select videos that others have made, add them to a personal collection, and share the collection with your students.  A quick search of “emergency medicine” on either site will show videos from trusted sources such as EMRA, HQMedEd, and specific residency/fellowship programs (as well as some less trustworthy options). If you use Vimeo, it’s simple to create a new group or channel and quickly add videos to it.  To see what I created in less than 5 minutes, follow this link.  A “group” facilitates comments and discussion, while a “channel” is just a playlist of your selected videos.

Ideal for asynchronous learning, a curated collection of videos can also be used to replace a power point presentation filled with embedded videos, or to introduce a procedure before bedside or simulation teaching.  Each group or channel can be public or private (accessible via email invitation on YouTube or shared link on Vimeo), depending on your targeted audience.  Happy curating!

 

Emily Brumfield, MD

Assistant Professor of Emergency Medicine

Assistant Director of Undergraduate Medical Education

Vanderbilt Department of Emergency Medicine

Emily.brumfield@vanderbilt.edu

CDEM Voice – Member Highlight


Nikita K Joshi MD, FACEP

Assistant Clerkship Director, Emergency Medicine

Stanford University

Chief People Office for Academic Life in Emergency Medicine

Twitter – @njoshi8

Email – njoshi8@gmail.com



1. What is your most memorable moment of teaching?

My most memorable teaching moment is probably from high school. For some reason I really enjoyed the Krebs cycle, and I used to help my friends understand it after school. I would even use a whiteboard to draw out the whole cycle over and over again. I guess it is pretty clear that I’ve always enjoyed teaching.

2. Who or what is your biggest influence?
Academically speaking, probably one of the greatest influences is Dr. Christopher Doty, my program director in residency. I found him to be a strong and dedicated leader, and someone who continues to inspire me as an educator. Personally, my biggest influence is probably my husband. We met in college and over the years have grown together and shared some pretty awesome experiences. I definitely would not be who I am today without him.

3. Any advice for other clerkship directors?
My advice would be that medical students want to learn and also want to feel appreciated. They want to feel like they are part of the team. This is especially challenging in the busy emergency department, but the worst thing to do is to have faculty and residents ignore the student and make them feel like a burden. Definitely not conducive to learning. No matter how great the curriculum, if the clinical setting is not inviting, then it will not be a good learning experience.
4. What is your favorite part about being an educator/director?
My favorite part is thinking of new ways to keep the curricula exciting. There are always new educational technologies and content to consider and add, such as simulation a few years ago and now social media resources. I also love getting inspired by CORD and CDEM for new ideas to shake things up.

5. Any interesting factoids you would like to share?
I am a basketball fan! Which is only natural, as I was born in Chapel Hill, North Carolina when Michael Jordan was there playing college basketball. I also grew up in Chicago in the 90s and got to witness firsthand some of the greatest years in NBA history, courtesy of Jordan.  I went to college in Cleveland and was privileged to see LeBron James play in a high school game and in the McDonald’s high school all-star game. Now I live in the San Francisco area and get to witness the pretty awesome basketball skills of Steph Curry. Regardless of all the greats there have been and those that will be in the future, I will always believe that Jordan is the greatest player to ever play in the NBA.

 

CDEM Voice – Topic 360

 

The Burning Question

A snapshot from the Emergency Medicine Physicians Wellness and Resilience Summit

What is it that separates Emergency Physicians with 30-year-long careers from those who burn out after less than a decade? Why is the rate of burnout higher in our field than in any other medical specialty? What can we do to help stem the epidemic of burnout amongst Emergency Medicine physicians, residents, and students? These questions and many others were tackled at the Emergency Medicine Physicians Wellness and Resilience Summit, held in Dallas in February.

 

Shanafelt’s eye-opening study in 2015 demonstrated a steadily rising rate of burnout amongst physicians. This study showed that, between 2011 and 2014, the rate of physicians endorsing at least one symptom of burnout increased from 45% to 54%. The same study revealed that, though Emergency Physicians (EPs) report a higher level of satisfaction with their work-life balance than most specialties, the rate of burnout amongst EPs is the highest of any specialty (Shanafelt 2015). This high level of burnout amongst EPs has been echoed in subsequent studies. The Medscape Lifestyle Study in 2017 re-demonstrated the steadily increasing rates of burnout amongst all physicians and showed that nearly 60% of Emergency Physicians experience symptoms of burnout, the highest of any specialty.

 

Even medical students have demonstrated higher levels of burnout than their peers. Brazeau et al demonstrated that matriculating medical students have a lower rate of burnout and depressive symptoms than their age-similar college-graduate peers, but somewhere in medical school that relationship flips, and medical students develop higher levels of burnout and depression than their peers (Brazeau 2014).

 

There are many reasons why the stressors of medicine, and Emergency Medicine in particular, can cause high rates of burnout and stress. These stressors can vary in importance throughout one’s career. Medical students may find the lack of control and lack of autonomy most frustrating while seasoned providers may be most challenged by the demands of electronic documentation, irregular hours, and lack of administrative support.

 

The Wellness and Resilience Summit brought together representatives from all of the major Emergency Medicine Groups, including ACEP, AAEM, AACEM, CORD, SAEM, EMRA, RSA, ACGME, and CDEM, to discuss potential solutions to the burnout epidemic. Many ideas were considered as potential areas for intervention or further investigation. All of the findings are currently being written up and will be published to help open a dialogue in our field.

 

The discussion that focused on our medical students touched on potential initiatives to help teach resilience. More resilient individuals are less susceptible to the stressors of our job and experience less burnout. Emergency Medicine is a stressful field, and we want to give our students and residents the tools they need to have long, rewarding careers. The next step for CDEM is to start investigating the role we can play in mitigating burnout. Through the cooperation of multiple professional organizations, we can help reverse the tide of ever-increasing burnout in our field.

 

Emily Fisher MD

on behalf of the Emergency Medicine Physicians Wellness and Resilience Summit

 

Brazeau C, Shanafelt T, Durning S, Massie SF, Eacker A, Moutier C, Satele DV, Sloan JA, Dyrbye LN. Distress Among Matriculating Medical Students Relative to the General Population. Academic Medicine. Nov 2014; 89(11): 1520-1525. doi: 10.1097/ACM.0000000000000482

Shanafelt TD, Hasan O, Dyrbye LN, Sinsky C, Satele D, Sloan J, West CP. Changes in Burnout and Satisfaction with Work-Life Balance in Physicians and the General US Working Population Between 2011 and 2014. Mayo Clin Proc. Dec 2015;90(12):1600-13. doi: 10.1016/j.mayocp.2015.08.023.

CDEM Voice – FOAMonthly


http://www.facultyfocus.com/articles/teaching-professor-blog/can-learn-end-course-evaluations/

With Match behind us, we are entering the last quarter of the academic year. Many will reflect on the progress of graduating medical students and residents and anticipate the arrival of new medical students and interns. Along with that reflection and anticipation, medical schools are likely to be delivering end-of-course evaluations. An article by Dr. Maryellen Weimer on the website Faculty Focus entitled What Can We Learn from End-of-Course Evaluations? discusses how to use end-of-course evaluations to improve the quality of your teaching and your students’ learning.

First, mindset is important. End-of-course evaluations should be viewed as an opportunity for improvement; regardless of how good (or bad) the scores, there is always an opportunity to improve the learning experiences for students. Next, be curious. Use global ratings to ask yourself questions about your teaching style and why it is or is not effective for your learners. The article references the Teaching Perspectives Inventory (http://www.teachingperspectives.com/tpi/) which is helpful in providing information about your instructional strategies and can also provide useful insights for Educator Portfolios and educational philosophy statements. Finally, we need to be specific and timely in the feedback we request from our students. The start-stop-continue method has been shown to improve the quality of student feedback. Ask students what you should start doing, stop doing, and continue doing. Course directors can also share their interpretations of the feedback and develop an action plan for change and quality monitoring. End-of-course evaluations no longer need to evoke a sense of dread!

 

Kendra Parekh, MD


CDEM Voice – Research Column


How to appropriately analyze a Likert scale in medical education research

 

A common tool in both medical education and medical education research is the Likert scale.  The Likert scale is an ordinal scale, typically with 5 or 7 levels. Despite regular use of the scale, its interpretation and statistical analysis continue to be a source of controversy and consternation.  While the Likert scale is numerically based, it is not a continuous variable but an ordinal one. The question, then, is how to correctly analyze the data.

In the strictest sense, ordinal data should be analyzed using non-parametric tests, as the assumptions necessary for parametric testing do not necessarily hold.  Investigators and readers are often more familiar with parametric methods and more comfortable with the associated descriptive statistics, which may lead to their inappropriate use.  Mean and standard deviation are invalid descriptive statistics for ordinal scales, as are parametric analyses based on a normal distribution.  Non-parametric statistics do not require a normal distribution and are therefore always appropriate for ordinal data. Common examples of parametric tests are the t-test, ANOVA, and Pearson correlation; the corresponding non-parametric tests are the Wilcoxon rank-sum, Kruskal-Wallis test, and Spearman correlation.
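As a minimal sketch of the non-parametric approach, the example below compares two groups of invented 5-point Likert responses with the Wilcoxon rank-sum (Mann-Whitney U) test, assuming Python with SciPy:

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert responses from two independent groups
control      = [2, 3, 3, 2, 4, 3, 2, 3, 1, 3]
intervention = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4]

# Mann-Whitney U (Wilcoxon rank-sum): compares the two ordinal
# distributions without any normality assumption
stat, p = mannwhitneyu(control, intervention, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```

The test works on ranks rather than raw values, which is exactly why it is safe for ordinal data: it never assumes that the distance from 2 to 3 equals the distance from 3 to 4.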

The confusion and controversy arise because parametric testing may be appropriate, and in fact more powerful than non-parametric testing of ordinal data, provided certain conditions hold.  Parametric tests require assumptions such as normally distributed data, equal variance in the population, linearity, and independence.  If these assumptions are violated, a parametric statistic cannot be applied. Care must also be taken to ensure that averaging the data is not misleading.  This can occur when the data cluster at the extremes, producing a neutral-looking average. For instance, if we used a Likert scale to evaluate the current polarized political climate, responses would likely cluster at the extremes, yet the mean might lead us to believe everyone is neutral.
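A tiny worked example of this pitfall, using only Python’s standard library and invented responses:

```python
from collections import Counter
from statistics import mean

# Hypothetical polarized responses on a 1-5 scale, clustered at the extremes
responses = [1, 1, 1, 1, 5, 5, 5, 5]

print(mean(responses))     # 3.0 -- the "neutral" midpoint
print(Counter(responses))  # yet nobody actually answered 3
```

The mean of 3.0 suggests a neutral group, while the frequency counts reveal that every single respondent held an extreme view.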

Frequently, responses on a Likert scale are averaged and the means compared between the control and intervention groups (or before and after implementation of an educational tool) using a t-test or ANOVA.  While these are the correct statistical analyses for comparing means, one cannot calculate a true mean for a Likert scale: it is not a continuous numerical value, and the distance between values may not be equal, so it is not interval data either.  For example, in a study comparing mean arterial blood pressures between an experimental drug and placebo, there is a continuous numerical variable for which a mean can be calculated in each study group. In contrast, for a 1-5 Likert scale, the values are ordinal classifications; there are no responses of 1.1, 2.7, 3.4, or 4.2. Therefore, a mean of 3.42 for the control group and 3.86 for the intervention group does not fall within the pre-defined ordinal category responses of the Likert scale.

One approach is to dichotomize the data into “yes” and “no” categories.  For example, on a scale from 1-5 with 3 being “average,” one could group responses into >3 or <3 (setting aside the neutral responses).  Dichotomizing the data is also a mechanism to increase the power. An exception to this is if one is using a series of questions and averaging the individual’s responses to create a single composite score, and then comparing the composite scores across groups. Under this scenario, comparing means may be appropriate, since the data have been converted into a continuous variable.
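The dichotomize-then-test workflow can be sketched as follows (Python with SciPy; the `dichotomize` helper, the invented responses, and the convention of dropping neutral responses are all illustrative assumptions, not the only valid choices):

```python
from scipy.stats import fisher_exact

# Hypothetical 1-5 Likert responses for two groups
control      = [2, 3, 4, 2, 5, 3, 1, 4, 2, 3]
intervention = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

def dichotomize(responses, neutral=3):
    """Count responses above vs. below the neutral point, dropping
    neutral responses (one possible convention)."""
    above = sum(r > neutral for r in responses)
    below = sum(r < neutral for r in responses)
    return [above, below]

# Build the resulting 2x2 contingency table and run an exact test on it
table = [dichotomize(control), dichotomize(intervention)]
odds_ratio, p = fisher_exact(table)
print(table)
print(f"p = {p:.4f}")
```

Once dichotomized, the data form an ordinary 2×2 contingency table, so the chi-square versus Fisher’s exact considerations discussed elsewhere in this column apply directly.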

After dichotomizing, one can use a Fisher’s exact or chi-square test to analyze the data.  Stay tuned for a future explanation of the differences between the Fisher’s exact and chi-square analyses!

Understanding the statistics can help improve experimental design and avoid inappropriate applications of statistical analyses that yield erroneous conclusions.

 

Jason J. Lewis, MD    &    David Schoenfeld, MD, MPH

Beth Israel Deaconess Medical Center/Harvard Medical School

Reference:

Boone, H.N. and Boone, D.A. (2012, April). Analyzing Likert Data. Retrieved from: https://joe.org/joe/2012april/tt2.php. Accessed February 16, 2017.

CDEM VOICE – Committee Update


 

The NBME EM Advanced Clinical Examination Task Force was formed in 2011.  The task force is made up of CDEM members who are current or previous clerkship directors, along with NBME staff.  The task force was charged with the development of an EM Advanced Clinical Examination (ACE) and has been responsible for developing the test blueprint, finalizing the test content, and assigning appropriate weights to specific categories of disease across various physician tasks.  Task force members have also been responsible for generating new items to fill gaps in the test question pool; to date, members have written and reviewed many hundreds of questions.  The EM ACE was first made available in April 2013.  Although the test is designed as a knowledge-base assessment for students completing a required 4th-year clerkship in EM, it is also taken by many 3rd-year students.  In the 2015-2016 academic year, almost 5,000 medical students across the country completed the examination (4th-year students, n = 3,752; 3rd-year students, n = 995).

This past year, the task force, working with the NBME, conducted a web-based study to establish grading guidelines for the EM ACE.  Medical school faculty representing 27 different institutions participated in this study.  The task force has published multiple abstracts regarding the EM ACE examination.  We continue to meet on an annual basis and are currently collaborating with the NBME on a number of research initiatives.

1. Miller ES, Wald DA, Hiller K, Askew K, Fisher J, Franzen D, Heitz C, Lawson L, Lotfipour S, McEwen J, Ross L, Baker G, Morales A, Butler A. Initial Usage of the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination. Acad Emerg Med. 2015;22:s14.
2. Miller ES, Wald DA, Hiller K, Askew K, Fisher J, Franzen D, Heitz C, Lawson L, Lotfipour S, McEwen J, Ross L, Baker G, Morales A, Butler A. National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination 2014 Post-Examination Survey Results. Acad Emerg Med. 2015;22:s109.
3. Fisher J, Wald DA, Orr N, et al. National Board of Medical Examiners’ Development of an Advanced Clinical Examination in Emergency Medicine. Ann Emerg Med. 2012;60:s190-191.
4. Ross L, Wald DA, Miller ES, et al. Developing Grading Guidelines for the NBME® Emergency Medicine Advanced Clinical Examination. Accepted for publication, West J Emerg Med, 2017.
 

David A. Wald, DO

On behalf of the NBME EM ACE Task Force

CHAIR:
David Wald

MEMBERSHIP:
- David A. Wald
- Doug Franzen
- Jonathan Fisher
- Kathy Hiller
- Emily Miller
- Luan Lawson
- Kim Askew
- Jules Jung
- Cory Heitz

Prior Members:
- Shahram Lotfipour
- Jill McEwen