To see the other types of publications on this topic, follow the link: Clinical competency exam.

Journal articles on the topic 'Clinical competency exam'

Consult the top 50 journal articles for your research on the topic 'Clinical competency exam.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wulandari, Patricia, Rachmat Hidayat, and Carla R. Marchira. "Profile of Personality and Psychopathology Dimensions of Indonesian Medical Students who Failed in Medical Doctor Competency Exams (UKMPPD)." Scientia Psychiatrica 1, no. 2 (April 13, 2020): 9–15. http://dx.doi.org/10.37275/scipsy.v1i2.7.

Abstract:
Introduction: The medical doctor competency exam (UKMPPD) in Indonesia is a final test that medical students must pass before being awarded a medical doctor's degree. The exam is well intentioned, aiming to raise the standards and quality of Indonesian medical graduates. Like any policy, however, it has two sides: it improves standards, but it also creates new problems. Students' fear of the competency test often triggers new psychological problems, and failure can cause prolonged disappointment and sadness, which in turn may lead to depression. This is the first study to describe personality and psychopathology dimension data from UKMPPD participants who failed the test. Method: This exploratory descriptive study presented narratives of the personality and psychopathology dimensions of unsuccessful UKMPPD participants. It was conducted at the Faculty of Medicine, Universitas Sriwijaya, Palembang, Indonesia. Each participant's personality and psychopathology dimensions were assessed with the MMPI-2 (Minnesota Multiphasic Personality Inventory-2), which yields data on clinical psychological condition, work capacity, interpersonal relationships, work abilities, and capacity to develop one's own potential. Result: The subjects were UKMPPD participants who failed the first sitting. Of the 7 subjects, 2 (28.5%) passed on the second attempt and 3 (43%) passed on the third, while 2 (28.5%) had still not passed after the fifth attempt. Strikingly, all 7 participants who failed the UKMPPD experienced depression. Conclusion: Medical students who failed the UKMPPD showed an inability to develop their own potential, resulting in depression after exam failure.
2

Birkhoff, Susan D., and Carol Donner. "Enhancing Pediatric Clinical Competency with High-Fidelity Simulation." Journal of Continuing Education in Nursing 41, no. 9 (May 7, 2010): 418–23. http://dx.doi.org/10.3928/00220124-20100503-03.

3

Rogers, Jennifer Lynn, and Katy Garth. "Implementation of a Formative Objective Structured Clinical Exam to assess self evaluation in a rural BSN-DNP program." Journal of Nursing Education and Practice 10, no. 12 (October 19, 2020): 69. http://dx.doi.org/10.5430/jnep.v10n12p69.

Abstract:
Background and objective: The role of self-assessment in competency-based education has been controversial. The Objective Structured Clinical Exam (OSCE) has been used to assess competencies across the health professions, but its role as a method of self-assessment for nursing students has received little attention. The objective was to implement a low-cost pilot OSCE in a rural BSN-DNP program and compare graduate nursing students' perceived self-evaluation of competencies with their actual OSCE performance. Methods: Eight students enrolled in a small, rural Bachelor of Science in Nursing to Doctor of Nursing Practice (BSN-DNP) program in the Family Nurse Practitioner (FNP) specialty track were required to complete an OSCE. Participating graduate students completed a Self-Assessment of Competency questionnaire before performing the OSCE, and the results were compared with their actual OSCE performance. Using available resources, undergraduate students in the institution's BSN program served as standardized patients. Results: Students' perceived self-assessment of competence was rated higher than their actual performance in subjective and objective data collection and in implementation of a plan. Their actual performance was superior to their perceived self-assessment in communication with the patient. Conclusions: Without competency-based self-assessments, students can be unaware of their strengths and weaknesses. The OSCE provides faculty and students with objective measures for self-evaluation and should be considered as a component of competency-based education in rural nursing institutions.
4

Gonsalves, Catherine, and Zareen Zaidi. "Hands in medicine: understanding the impact of competency-based education on the formation of medical students’ identities in the United States." Journal of Educational Evaluation for Health Professions 13 (August 31, 2016): 31. http://dx.doi.org/10.3352/jeehp.2016.13.31.

Abstract:
Purpose: There have been critiques that competency training, which defines the roles of a physician by simple, discrete tasks or measurable competencies, can cause students to compartmentalize and focus mainly on being assessed without understanding how the interconnected competencies help shape their role as future physicians. Losing the meaning and interaction of competencies can result in a focus on ‘doing the work of a physician’ rather than identity formation and ‘being a physician.’ This study aims to understand how competency-based education impacts the development of a medical student’s identity. Methods: Three ceramic models representing three core competencies ‘medical knowledge,’ ‘patient care,’ and ‘professionalism’ were used as sensitizing objects, while medical students reflected on the impact of competency-based education on identity formation. Qualitative analysis was used to identify common themes. Results: Students across all four years of medical school related to the ‘professionalism’ competency domain (50%). They reflected that ‘being an empathetic physician’ was the most important competency. Overall, students agreed that competency-based education played a significant role in the formation of their identity. Some students reflected on having difficulty in visualizing the interconnectedness between competencies, while others did not. Students reported that the assessment structure deemphasized ‘professionalism’ as a competency. Conclusion: Students perceive ‘professionalism’ as a competency that impacts their identity formation in the social role of ‘being a doctor,’ albeit a competency they are less likely to be assessed on. High-stakes exams, including the United States Medical Licensing Exam clinical skills exam, promote this perception.
5

Cham, Kwang Meng, and Anthea L. Cochrane. "A digital resource to assess clinical competency." Clinical Teacher 17, no. 2 (May 29, 2019): 153–58. http://dx.doi.org/10.1111/tct.13030.

6

AlEnezi, Saad H., Abdullah M. Alfawaz, Adi Mohammed Al Owaifeer, Saad M. Althiabi, and Khalid F. Tabbara. "Assessment of Ophthalmology Residency Programs in Saudi Arabia: A Trainee-Based Survey." Journal of Medical Education and Curricular Development 6 (January 2019): 238212051985506. http://dx.doi.org/10.1177/2382120519855060.

Abstract:
Purpose: To assess the satisfaction and competency of Saudi ophthalmology residents and compare their performance against International Council of Ophthalmology (ICO) standards. Methods: A cross-sectional web-based survey of senior ophthalmology residents (postgraduate years [PGY] 3-4) and recent graduates (2010 to 2015) assessed various aspects of training. The questionnaire sent to participants was divided into 3 main domains: demographics, training program evaluation, and preparedness for board exams and clinical practice. Results: Of the 145 invitees, 120 (82.8%) responded. Fifty percent of respondents reported overall satisfaction with the program. Clinical exposure was reported as adequate in most subspecialties except refraction and low vision rehabilitation, for which inadequate exposure was reported by 55.8% and 95.8%, respectively. Surgical exposure was reported as adequate only for phacoemulsification (58.3%) and strabismus surgery (68.3%), and 89% of respondents reported performing fewer than 80 phacoemulsification cases. Of the respondents who had graduated, most (89.7%) passed the final board exam on the first attempt, and 73.5% reported that residency training prepared them well for the board exam. Ongoing clinical and call duties were reported as having a negative impact on exam performance. Conclusions: Saudi ophthalmology residents demonstrate a high level of clinical competency. However, additional efforts should aim at improving surgical training to increase the level of satisfaction among residents and improve the quality of training to meet international standards.
7

Miller, Janice E., Ian R. Han, William A. Dafoe, and Jay Gillespie. "AN OBJECTIVE STRUCTURED CLINICAL EXAM FOR ASSESSING COMPETENCY OF ACSM EXERCISE SPECIALISTS." Medicine & Science in Sports & Exercise 24, Supplement (May 1992): S2. http://dx.doi.org/10.1249/00005768-199205001-00011.

8

Suwardianto, Heru, and Vitaria Wahyu Astuti. "Competency In Critical Care Nursing With Approach Methods Journal Sharing of Critical Care (JSCC) In Nursing Profession Students." STRADA Jurnal Ilmiah Kesehatan 9, no. 2 (November 1, 2020): 686–93. http://dx.doi.org/10.30994/sjik.v9i2.361.

Abstract:
The results showed that most respondents had good critical nursing competency scores, including the primary assessment: airway assessment (53.8%), breathing assessment (56.4%), circulation assessment (61.5%), disability assessment (56.4%), and exposure assessment (59%); professionalism (56.4%); critical nursing care competencies (79.5%); clinical reasoning process (71.8%); patient safety (61.5%); and critical care exam score (46.2%). Pearson tests showed that the primary assessment components airway assessment (p = 0.038), circulation assessment (p = 0.029), and exposure assessment (p = 0.023), as well as competence in critical nursing care (p = 0.049), clinical reasoning process (p = 0.028), and patient safety (p = 0.001), had a significant relationship with the critical care exam score. Implementing the journal sharing of critical care learning method had a positive impact on competencies and resulted in good student competencies.
9

Karabilgin, Ozlem Surel, Kevser Vatansever, Suleyman Ayhan Caliskan, and Halil İbrahim Durak. "Assessing medical student competency in communication in the pre-clinical phase: Objective structured video exam and SP exam." Patient Education and Counseling 87, no. 3 (June 2012): 293–99. http://dx.doi.org/10.1016/j.pec.2011.10.008.

10

MacQuillan, Elizabeth L., Jennifer Ford, and Kristin Baird. "Increased competency of dietitian nutritionists’ physical examination skill after a simulation-based education in the United States." Journal of Educational Evaluation for Health Professions 17 (December 14, 2020): 40. http://dx.doi.org/10.3352/jeehp.2020.17.40.

Abstract:
Purpose: This study aimed to translate simulation-based dietitian nutritionist education into clinical competency attainment in a group of practicing Registered Dietitian Nutritionists (RDNs). Using a standardized instrument to measure performance on a newly required clinical skill, the Nutrition Focused Physical Exam (NFPE), competence was measured before and after a simulation-based education (SBE) session. Methods: Eighteen practicing RDNs were recruited through their employer, the Spectrum Health system. Following a pre-brief session, participants completed an initial 10-minute encounter, performing the NFPE on a standardized patient (SP). Participants then completed a 90-minute SBE training session on skills within the NFPE, including hands-on practice and role play, followed by a post-training SP encounter. Video recordings of the SP encounters were scored to assess competence in seven skill areas within the NFPE, yielding measures of initial competence and change in competence. Results: Initial competence rates ranged from 0% to 44% of participants across the seven skills assessed. The only competency on which participants initially scored in the "meets expectations" range was "approach to the patient." When raw competence scores were assessed for change from pre- to post-SBE training, a paired t-test indicated significant increases in all seven competency areas following the simulation-based training (P < .001). Conclusion: This study showed the effectiveness of SBE training for increasing the competence scores of practicing dietitian nutritionists on a defined clinical skill.
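The pre/post comparison described in the abstract above rests on a paired t-test. As a minimal sketch of that kind of analysis, with hypothetical scores rather than the study's data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t-test: t statistic and degrees of freedom for before/after scores."""
    diffs = [b - a for a, b in zip(pre, post)]  # per-participant change
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical pre/post competence scores (0-100 scale), for illustration only
pre = [40, 35, 50, 45, 38, 42]
post = [70, 68, 80, 75, 66, 72]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")  # a large positive t indicates scores rose after training
```

In practice the same test would usually be run with `scipy.stats.ttest_rel`, which also returns the P value.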
11

Lalka, Andy, Ryan Caldwell, Andrew Black, and Frank A. Scott. "An Evaluation of the Effectiveness of a Medical School Musculoskeletal Curriculum at an Academic Medical Center." Higher Learning Research Communications 8, no. 2 (December 26, 2018): 55–63. http://dx.doi.org/10.18870/hlrc.v8i2.422.

Abstract:
Background: Musculoskeletal disorders are common medical problems encountered by physicians and affected 126.6 million Americans in 2012, yet musculoskeletal education has been inadequate in United States medical schools. Objective: To determine the musculoskeletal competency of third-year medical students. Methods: A cross-sectional, 25-question, nationally validated musculoskeletal competency exam was given to third-year medical students. A survey was given to second- and third-year medical students to assess their level of interest in musculoskeletal medicine and gather feedback on the curriculum. Results: The mean score on the competency exam was 69.0%, and 48/107 (44.9%) students reached the minimum passing score of 70%. Free-response feedback from both classes featured themes of more hands-on learning, a longer clinical block, and more small-group learning sessions. Conclusions: Third-year medical students scored relatively well on the exam. Student feedback suggests the 2-week musculoskeletal block is useful and relevant to their future careers.
12

Foley, Tony, Kathleen McLoughlin, Elaine K. Walsh, Paul Leggett, Muríosa O'Reilly, Molly Owens, and Aisling A. Jennings. "The candidate perspective of the clinical competency test (CCT) of the MICGP examination: a mixed-methods study." BJGP Open 2, no. 3 (September 4, 2018): bjgpopen18X101605. http://dx.doi.org/10.3399/bjgpopen18x101605.

Abstract:
Background: The clinical competency test (CCT) was introduced by the Irish College of General Practitioners (ICGP) in 2015. Similar to the clinical skills assessment (CSA) of the Membership of the Royal College of General Practitioners (MRCGP) exam, the CCT is a modified objective structured clinical examination (OSCE). Aim: The aim of this study was to evaluate the MICGP CCT from the candidates' perspective, to gain insight into their views of its fairness, relevance, and acceptability. Design & setting: This mixed-methods study was conducted with GP registrars in Ireland. Method: The study was conducted in two phases. First, focus groups were held with participants who had previously undertaken the CCT to explore their experience of it. Second, findings from the focus groups informed the development of an online questionnaire, which was sent to all GP registrars who completed the CCT in the 2017 summer sitting. Results: Two focus groups were held with a total of nine participants. The online questionnaire was then emailed to 134 registrars, 83 of whom completed it in full. Registrars reported that the CCT is a fair exam and is relevant to daily general practice. They considered it a comprehensive assessment with a positive educational impact, but they were challenged by time restrictions and found it financially and emotionally stressful. Conclusion: This is the first study to evaluate the candidate's perspective of an exit GP membership exam in the UK or Ireland. The CCT is well regarded by registrars, and the study results will help inform its future development.
13

Astion, Michael L., Sara Kim, Amanda Nelson, Paul J. Henderson, Carla Phillips, Claudia Bien, Lynn Mandel, Adam R. Orkand, and James S. Fine. "A Two-Year Study of Microscopic Urinalysis Competency Using the Urinalysis-Review Computer Program." Clinical Chemistry 45, no. 6 (June 1, 1999): 757–70. http://dx.doi.org/10.1093/clinchem/45.6.757.

Abstract:
Background: The microscopic examination of urine sediment is one of the most commonly performed microscope-based laboratory tests, but despite its widespread use, there has been no detailed study of the competency of medical technologists in performing this test. One reason for this is the lack of an effective competency assessment tool that can be applied uniformly across an institution. Methods: This study describes the development and implementation of a computer program, Urinalysis-Review™, which periodically tests competency in microscopic urinalysis and then summarizes individual and group test results. In this study, eight Urinalysis-Review exams were administered over 2 years to medical technologists (mean, 58 technologists per exam; range, 44–77) at our academic medical center. The eight exams contained 80 test questions, consisting of 72 structure identification questions and 8 quantification questions. The 72 structure questions required the identification of 134 urine sediment structures consisting of 63 examples of cells, 25 of casts, 18 of normal crystals, 8 of abnormal crystals, and 20 of organisms or artifacts. Results: Overall, the medical technologists correctly identified 84% of cells, 72% of casts, 79% of normal crystals, 65% of abnormal crystals, and 81% of organisms and artifacts, and correctly answered 89% of the quantification questions. The results are probably a slight underestimate of competency because the images were analyzed without knowledge of urine chemistry results. Conclusions: The study shows the feasibility of using a computer program for competency assessment in the clinical laboratory. In addition, the study establishes baseline measurements of competency that other laboratories can use for comparison, and which we will use in future studies measuring the effect of continuing education efforts in microscopic urinalysis.
14

Kaf, Wafaa A., Caleb G. Masterson, Nancy Dion, Susan L. Berg, and Mohamed K. Abdelhakiem. "Optimizing Otoscopy Competency in Audiology Students through Supplementary Otoscopy Training." Journal of the American Academy of Audiology 24, no. 09 (October 2013): 859–66. http://dx.doi.org/10.3766/jaaa.24.9.9.

Abstract:
Background: The scope of practice in audiology encompasses proficiency in visual inspection of the ear canal and tympanic membrane (TM), as well as otoscopy interpretation skills to distinguish normal from abnormal conditions of the outer and middle ear. Audiology students can develop otoscopy skills through education and supervised training, and studies have shown that additional otoscopy training increased skills in medical students and general practitioners. However, educational and supervised practice targeting otoscopy competency during audiology graduate coursework is lacking, and no studies have attempted to determine otoscopy skills among audiology students. Purpose: To determine the effectiveness of an otoscopy training model on the clinical competency and confidence of audiology students in performing and interpreting otoscopy. Research Design: An experimental treatment design with random assignment to treatment and control groups and delayed treatment for the control group. Study Sample: Thirty-two first- and second-year audiology graduate students enrolled in a pediatric audiology class participated in this study. Students were randomly assigned to the control (n = 16, 14 females) or experimental (n = 16, 14 females) group. Intervention: Participants in the experimental group received supplementary otoscopy training comprising didactic otoscopy lectures and clinical training using manikin ears. The control group completed the same pretest and posttest and then a third assessment (posttest 2) after receiving the same training. Data Collection and Analysis: Knowledge and skills in otoscopy were evaluated at three times: (a) pretraining, (b) upon completion of training by the experimental group, and (c) upon completion of training by the control group. The evaluation consisted of a written exam, a clinical exam, and a self-perception rating of confidence. Written and clinical (otoscopy manikin) exam scores were analyzed via two-way analyses of variance (ANOVAs), and the chi-square (χ2) statistic was used to evaluate the effects of training on the confidence of students in both groups. Results: Both the experimental and control groups demonstrated significantly increased overall competency in otoscopy following the training model with didactic and laboratory components. Posttest confidence ratings increased in all groups, with no significant differences between groups. Conclusions: The need for supplementary otoscopy training was indicated by the low pretest knowledge and clinical competency scores of audiology students. After completing the training, both groups showed significant improvement in knowledge and competency. The results also suggest that perceived confidence ratings may be misleading in determining students' clinical otoscopy skills.
15

Tomlinson, Mark W., Sara A. Brumbaugh, Marin O'Keeffe, Richard L. Berkowitz, Mary D'Alton, and Michael Nageotte. "Electronic Fetal Monitoring Credentialing Examination: The First 4000." American Journal of Perinatology Reports 10, no. 01 (January 2020): e93-e100. http://dx.doi.org/10.1055/s-0040-1705141.

Abstract:
Objective: Recognized variability in fetal heart rate interpretation led the Perinatal Quality Foundation (PQF) to develop a credentialing exam. We report an evaluation of the first 4,000-plus PQF Fetal Monitoring Credentialing (FMC) exams. Study Design: The PQF FMC exam is an online assessment for obstetric providers and nurses. It contains two question types: traditional multiple-choice questions evaluating knowledge, and Script Concordance Theory (SCT) questions evaluating judgment. Reliability was measured with McDonald's total omega and Cronbach's alpha, and Pearson's correlations between knowledge and judgment were calculated. Results: From February 2014 through September 2018, 4,330 individuals took the exam. A total of 4,057 records were suitable for reliability analysis: 2,105 (52%) physicians, 1,756 (43%) nurses, and 196 (5%) certified nurse midwives (CNMs). As a measure of test reliability, total omega was 0.80 for obstetric providers and 0.77 for nurses. There was only moderate correlation between knowledge and judgment scores for obstetric providers (0.38) and for nurses (0.43). Conclusion: The PQF FMC exam is a reliable, valid assessment of both electronic fetal monitoring (EFM) knowledge and judgment, and it evaluates EFM skills essential for practical credentialing. The modest correlation between knowledge and judgment scores suggests that knowledge alone does not ensure clinical competency.
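Of the reliability coefficients named in the abstract above, Cronbach's alpha is the simplest: it is computed from per-item variances and the variance of respondents' total scores. A minimal sketch of the formula, not the PQF's actual scoring code:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: one list of scores per exam item, each aligned across the same
    respondents. Population variance is used consistently throughout.
    """
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Three perfectly consistent items -> alpha of 1.0; real exams land lower
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

Values around 0.8, like the omega figures reported above, are conventionally read as acceptable reliability for a credentialing instrument.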
16

Tyler, Staci, Erica Bourbon, Shannon Cox, Nanci Day, Chris Fineran, Dena Rexford, Jessica Rinas, Kim Shumate, and Peggy Ward-Smith. "Clinical Competency, Self-Efficacy, and Job Satisfaction." Journal for Nurses in Staff Development 28, no. 1 (2012): 32–35. http://dx.doi.org/10.1097/nnd.0b013e318240a703.

17

DiMauro, Kathleen, and Leslie Bobb Mack. "A Competency-Based Orientation Program for the Clinical Nurse Specialist." Journal of Continuing Education in Nursing 20, no. 2 (March 1989): 74–78. http://dx.doi.org/10.3928/0022-0124-19890301-08.

18

Hazell, Wayne, and Ian R. Rogers. "Update on the fellowship exam and its relation to modern educational principles and clinical competency." Emergency Medicine Australasia 17, no. 3 (June 7, 2005): 263–65. http://dx.doi.org/10.1111/j.1742-6723.2005.00732.x.

19

Isaak, Robert, Fei Chen, Gene Hobbs, Susan M. Martinelli, Marjorie Stiegler, and Harendra Arora. "Standardized Mixed-Fidelity Simulation for ACGME Milestones Competency Assessment and Objective Structured Clinical Exam Preparation." Medical Science Educator 26, no. 3 (June 27, 2016): 437–41. http://dx.doi.org/10.1007/s40670-016-0277-0.

20

Harjanto, Totok, Widowati Budi Pratiwi, Sunika Puspasuci, and Lesiana Eka Hapsari. "Factors Affecting the Students’ Coping Strategy Dealing with National Nurse Competence Examination: Which are More Related?" Jurnal Keperawatan Soedirman 13, no. 1 (March 1, 2018): 32. http://dx.doi.org/10.20884/1.jks.2018.13.1.784.

Abstract:
The national nurse competency examination (UKNI) aims to verify that nurses meet the standards required for a Nurse's Register License. The examination can induce anxiety that affects students' readiness, performance, and graduation. The objective of this study was to describe coping strategies in dealing with the national nurse competency examination and two related factors: anxiety and perception. A descriptive study was performed on 80 nursing students in the clinical rotation nursing program who enrolled in the March 2017 nurse competency examination. The German Test Anxiety Inventory (TAI-G), a perception questionnaire, and the Coping Strategy Indicators (CSI) were used to measure exam anxiety, students' perception, and coping strategy, respectively. Regarding perception, 37 students (46.3%) demonstrated good perception and the remaining 43 (53.8%) showed poor perception. Forty-two students (52.5%) experienced low anxiety and the rest (47.5%) experienced high anxiety. Regarding coping strategies for the competency examination, 44 students (55%) were in the good category, while 36 students (45%) were in the poor category. This study implies that information about the national nurse competency examination should be provided as early as possible, so that students can prepare adequately.
21

Rodriguez Rivera, Lourdes, Cynthia Rodriguez Rivera, Alberto Zabala Soler, Rey Pagan Rivera, Luis Rodriguez, and Carlos Garcia-Gubern. "The Use of Simulation Games and Tabletop Exercises in Disaster Preparedness Training of Emergency Medicine Residents." Prehospital and Disaster Medicine 34, s1 (May 2019): s82. http://dx.doi.org/10.1017/s1049023x19001729.

Abstract:
Introduction: Emergency physicians play a frontline role in hospital disaster responses and require appropriate training. Aim: The aim of the current study was to pilot and compare the effectiveness of two emergency preparedness teaching interventions: traditional lecture-based instruction (LEC) and interactive simulation/game-based teaching (SIM). Methods: A two-group randomized pre- and post-test design was implemented in the didactic curriculum of the Emergency Medicine (EM) Residency Training Program at San Lucas Episcopal Hospital in Ponce, Puerto Rico. Residents (n = 23) completed either a LEC (control) or SIM teaching module (single day, one to two hours) focusing on emergency preparedness concepts, disaster-related clinical decision-making, and physician responsibilities during hospital disaster protocols. Knowledge-based multiple-choice exams and scenario-based competency exams were administered at three time points: one week pre-intervention, immediately post-training, and two weeks post-training. Test scores were compared between groups at each time point using the Mann-Whitney U test. Results: Following the teaching interventions, no significant differences were found between the LEC and SIM groups in knowledge-based exam performance (LEC 81.1% [9.4] vs. SIM 74.9% [12.1]; U = 42.50, p = 0.15) or scenario-based exam performance (LEC 80.0% [9.7] vs. SIM 80.2% [9.2]; U = 62.00, p = 0.83), suggesting both teaching methods were similarly effective. Indeed, knowledge-based exam scores improved two-fold and scenario-based exam scores improved by over 50% immediately after training relative to baseline. Two weeks post-training, a significant decrease in scenario-based exam performance was found in the LEC group relative to the SIM group (LEC 63.1% [11.6] vs. SIM 75.4% [11.5]; U = 91.50, p = 0.036), suggesting residents who train with simulations retain scenario-based concepts better than those who train with lectures alone. Discussion: The current study highlights the potential dual value of incorporating simulation training into EM emergency preparedness curricula to improve both knowledge and retention of physician disaster responsibilities.
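The between-group comparisons in the abstract above use the Mann-Whitney U test, whose U statistic is simply a count of pairwise "wins" between the two samples. A minimal sketch with hypothetical scores, not the study's data:

```python
def mann_whitney_u(x, y):
    """U statistic for sample x vs. y: the number of (xi, yj) pairs where
    xi exceeds yj, with ties counted as half a win. Fine for small samples;
    real analyses would use scipy.stats.mannwhitneyu, which also gives the
    P value (with tie and continuity corrections)."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

# Hypothetical exam scores for two teaching groups, for illustration only
lec = [63, 60, 70, 58]
sim = [75, 72, 80, 68]
u_lec = mann_whitney_u(lec, sim)
u_sim = mann_whitney_u(sim, lec)
print(u_lec, u_sim)  # the two U values always sum to len(x) * len(y)
```

Because it compares ranks rather than raw scores, the test makes no normality assumption, which suits the small group sizes reported in the study.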
22

Hose, Michal Kalli, John Fontanesi, Manjulika Woytowitz, Diego Jarrin, and Anna Quan. "Competency based clinical shoulder examination training improves physical exam, confidence, and knowledge in common shoulder conditions." Journal of General Internal Medicine 32, no. 11 (August 7, 2017): 1261–65. http://dx.doi.org/10.1007/s11606-017-4143-6.

23

Schuster, Gregory M., Ronald J. Hunt, and Harold J. Haering. "Effect of a Pilot Preclinical Incentive Program on Dental Students’ Performance on a Clinical Competency Exam." Journal of Dental Education 81, no. 1 (January 2017): 96–100. http://dx.doi.org/10.1002/j.0022-0337.2017.81.1.tb06251.x.

24

Blanzola, Cheryl, Roslyn Lindeman, and Major L. King. "Nurse Internship Pathway to Clinical Comfort, Confidence, and Competency." Journal for Nurses in Staff Development (JNSD) 20, no. 1 (January 2004): 27–37. http://dx.doi.org/10.1097/00124645-200401000-00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Samuels, Elias M., Thomas E. Perorazio, Brenda Eakin, Ellen Champagne, and Marilyn Lantz. "2244." Journal of Clinical and Translational Science 1, S1 (September 2017): 46. http://dx.doi.org/10.1017/cts.2017.167.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: The first goal of this project is to test the reliability and validity of an objective structured clinical exam (OSCE) that was designed to assess competency in clinical and translational research. The second goal is to evaluate the impact of MICHR's Summer Research Program on the participating trainees' competency development. METHODS/STUDY POPULATION: The methodology used for this study was reviewed and exempted from oversight by the U-M Institutional Review Board (HUM00113293). The participants in the study include 17 pre-doctoral students in health professions programs at U-M who participated in MICHR's Summer Research Program. The Research OSCE was administered using a pretest, post-test design. The pretest was administered once during the 1st week of the program in the summer of 2016 and the post-test during the 10th week of the program. The Research OSCE was proctored and rated by trained staff members. We will assess the reliability of the Research OSCE using Generalizability Theory (Webb et al., 2006), and the construct validity of the Research OSCE will be tested using factor analysis and other statistical analyses. Growth in the competence of the trainees participating in the Summer Research Program will be evaluated by testing for significant differences between their pretest and post-test scores. RESULTS/ANTICIPATED RESULTS: We anticipate that this study will show that the Research OSCE is a reliable competency assessment with proven construct validity. We also anticipate that the use of the Research OSCE will show that the trainees participating in the Summer Research Program experienced a gain in competence during the course of the 10-week program. DISCUSSION/SIGNIFICANCE OF IMPACT: This project uses a common and standardized testing approach. The primary goal of this project is to evaluate the reliability and validity of an OSCE to assess competency in clinical and translational research.
It represents a new application for a well-studied testing method used extensively in the health professions to assess the clinical competency of health practitioners. This project will lead to a better understanding of (a) the reliability and validity of the Research OSCE designed to test research competency and (b) the effectiveness of the Summer Research Program curriculum in better preparing participants to conduct clinical and translational research. Showing how a specific competency assessment can be used for this purpose will provide the administrators, evaluators, and other stakeholders of clinical and translational research training programs with information that can be used to design more rigorous and relevant evaluations of their research training programs.
APA, Harvard, Vancouver, ISO, and other styles
26

Bobos, Pavlos, Dimitra V. Pouliopoulou, Alexandra Harriss, Jackie Sadi, Alison Rushton, and Joy C. MacDermid. "A systematic review and meta-analysis of measurement properties of objective structured clinical examinations used in physical therapy licensure and a structured review of licensure practices in countries with well-developed regulation systems." PLOS ONE 16, no. 8 (August 3, 2021): e0255696. http://dx.doi.org/10.1371/journal.pone.0255696.

Full text
Abstract:
Background The Objective Structured Clinical Examination (OSCE) is a commonly used tool internationally to assess clinical competency. Physical therapy (PT) licensure processes vary internationally. The OSCE is the tool used in Canada to assess clinical competency for PT graduates seeking licensure. Previous studies that examined the measurement properties of OSCEs present contradictory results. Objectives The first objective was to investigate the reliability and validity of OSCEs when administered to PTs during their education or as part of a licensure process. The second objective was to conduct a structured review to report PT educational and licensing components and policies in 17 countries with well-developed PT regulation systems. Methods An electronic search was performed in four databases from inception to 31st March 2021 to identify relevant articles. Two reviewers performed the critical appraisal of the included studies using a validated quality assessment tool. We deployed a random effects meta-analysis on reliability and validity estimates of OSCEs and examined sources of heterogeneity with univariate meta-regressions. We searched websites of professional regulatory bodies and associations for data on educational and licensing components and policies. Educational and licensing components across countries were synthesized descriptively. Results A pooled estimate of Cronbach's alpha of 0.55 (95% CI: 0.41, 0.67) was determined for OSCEs. The pooled estimate of Intraclass Correlation Coefficient (ICC) between assessors was 0.77 (95% CI: 0.70, 0.83). The pooled estimate of Pearson Correlation between multiple OSCE stations' scores was 0.27 (95% CI: 0.15, 0.39); and between each station score and the total score was 0.71 (95% CI: 0.61, 0.79). The pooled estimates for kappa coefficients were 0.75 (95% CI: 0.58, 0.86) and 0.84 (95% CI: 0.72, 0.91) for intra-rater and inter-rater reliability of the standardised patient, respectively.
Of the 17 included countries, Canada (excluding Quebec) was the only country that required both a clinical and a written competency exam following graduation from an accredited PT program. Two countries (USA, UAE) required a written competency exam. The remaining 14 countries did not require an additional competency examination after completion of degree requirements from an accredited program. Conclusions We found weak evidence that OSCE examination items are internally consistent when used to assess PTs. Canada (excluding Quebec) is the only country out of 17 implementing a national clinical competency examination for their PT graduates to achieve licensure after completing professional degree requirements.
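The internal-consistency estimates pooled in this review are Cronbach's alpha values. As a hypothetical illustration of how alpha is computed from item-level scores (not the review's data or code):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from item-level scores.

    item_scores: one list per item, aligned by respondent.
    """
    k = len(item_scores)
    sum_item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical ratings of 4 examinees on 3 OSCE items
items = [[2, 4, 3, 5], [3, 5, 4, 5], [4, 5, 5, 6]]
print(round(cronbach_alpha(items), 4))  # → 0.9533
```

Population variance (`pvariance`) is used throughout; alpha is unchanged as long as the same variance convention is applied to both the items and the totals.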
APA, Harvard, Vancouver, ISO, and other styles
27

Hoepner, Jerry K., and Abby L. Hemmerich. "Using Formative Video Competencies and Summative In-Person Competencies to Examine Preparedness for Entry-Level Professional Practice." Seminars in Speech and Language 41, no. 04 (July 22, 2020): 310–24. http://dx.doi.org/10.1055/s-0040-1713782.

Full text
Abstract:
A key element of competency-based education is assessment. Effective assessment requires access to a core set of expectations that match a learner's level of preparation. Miller's triangle provides a framework for establishing appropriate expectations that move learners from novice to entry-level clinicians. Formative assessment and feedback are a crucial part of facilitating learning in this context. A pilot investigation was conducted to examine the effects of a formative, video competency on performance in a summative, live competency. Rubrics were used to score performance on two competencies, an oral mechanism exam (OME) and a clinical bedside swallowing examination (CBSE). Performance on the OME was significantly improved in the summative competency, compared with the formative, video competency. Performance on the CBSE did not change from formative to summative competency. Assessment in competency-based education is important as a measure of readiness for entry-level practice. Formative assessment and feedback can improve preparedness and performance on summative competencies. Detailed, criterion-referenced assessment tools are crucial to identifying performance. While the OME rubric used in this investigation appears to meet that standard, it is likely that the CBSE rubric was not specific enough to detect changes.
APA, Harvard, Vancouver, ISO, and other styles
28

Zechariah, Sunitha, Jennifer L. Waller, Gianluca De Leo, Judith Stallings, Ashley J. Gess, and Leigh Lehman. "Content and Face Validation of a Novel, Interactive Nutrition Specific Physical Exam Competency Tool (INSPECT) to Evaluate Registered Dietitians’ Competence: A Delphi Consensus from the United States." Healthcare 9, no. 9 (September 17, 2021): 1225. http://dx.doi.org/10.3390/healthcare9091225.

Full text
Abstract:
The nutrition-focused physical examination (NFPE) is an integral component of nutrition assessment performed by registered dietitian nutritionists (RDNs) to determine signs of malnutrition and other nutrition-related complications. Increased use of this essential skill among RDNs and the transformation of dietetics education to a competency-based model in the near future calls for appropriately validated tools to measure RDNs’ NFPE competence. To fill the need for a validated competency tool, this study developed an Interactive Nutrition-Specific Physical Exam Competency Tool (INSPECT) utilizing the initial 70 items identified in the first phase of the study. The second phase of this study aimed to test the preliminary version of the INSPECT for content and face validity. An expert panel of 17 members provided consensus recommendations through the Delphi process. Internal consistency of the consensus was measured with Cronbach’s alpha (α) and α of ≥0.70 was defined as acceptable a priori. Inter-rater agreement among the expert panel was determined using the intraclass correlation coefficient (ICC) and an a priori ICC of 0.75 to 0.9 was established as good and >0.9 as excellent agreement. The results showed acceptable face validity (α = 0.71) and excellent content validity for the INSPECT, with an internal consistency of α = 0.97 in the first round and α = 0.96 in the second round. The inter-rater agreement was also excellent with ICC = 0.95 for each of the Delphi rounds. A total of 52 items were retained from the preliminary version of the INSPECT. Open feedback from the experts allowed for the consolidation of 11 similar items for better scoring and evaluation and thus, a total of 41 items were included in the final version of the INSPECT. The final version of the INSPECT is currently being studied in real-life, multi-site clinical settings among practicing RDNs to examine construct validity, reliability, and item-level psychometric properties. 
Ultimately, the validated INSPECT will be available for the competency evaluation of RDNs practicing in clinical settings.
APA, Harvard, Vancouver, ISO, and other styles
29

Limen, Gilbert, Joshua Runtuwene, and Christillia Wagiu. "Hubungan Tingkat Kecemasan dalam Menghadapi UKMPPD OSCE dengan Nilai UKMPPD Mahasiswa Fakultas Kedokteran Universitas Sam Ratulangi." JURNAL BIOMEDIK (JBM) 10, no. 3 (December 18, 2018): 159. http://dx.doi.org/10.35790/jbm.10.3.2018.21981.

Full text
Abstract:
Abstract: Exams are a potential stressor that can cause anxiety among students. As an exit exam, the medical competency examination (UKMPPD) consists of two parts: a multiple-choice computer-based test and an objective structured clinical examination (OSCE). The anxiety level during the latter part, in which the cognitive, psychomotor, and professional behaviour aspects of examinees are tested, is considered the highest. The passing grade of the exam, as one criterion used for important decisions, can also be another source of anxiety. Anxiety may impact performance during the exam and consequently the passing grade. This study aimed to evaluate the correlation between the anxiety level right before the medical competency examination OSCE and the August 2018 OSCE final results. This was an analytical study with a cross-sectional design. Respondents were all students partaking in the OSCE at Sam Ratulangi University Medical School. The Hamilton Anxiety Rating Scale was used to measure the anxiety level. The OSCE results were retrieved from the Academic Department. Data were analyzed with the Spearman correlation test, which obtained a P value of 0.289. Overall, 81.20% of respondents experienced anxiety; however, the majority (43.50%) had only mild anxiety. The median score of the August 2018 OSCE was 80.00. Conclusion: There is no correlation between anxiety level right before the OSCE and the August 2018 final results. Keywords: anxiety level, medical competency examination, OSCE score
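The analysis in this abstract uses the Spearman correlation test. A small self-contained sketch (with hypothetical anxiety and OSCE scores, not the study's data) showing Spearman's rho as the Pearson correlation of the rank vectors:

```python
def rank(values):
    """1-based ranks; tied values share their average rank."""
    svals = sorted(values)
    rev = svals[::-1]
    n = len(values)
    # average of a value's first and last 1-based positions in sorted order
    return [(svals.index(v) + 1 + n - rev.index(v)) / 2 for v in values]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    return pearson(rank(x), rank(y))

# Hypothetical anxiety scores paired with OSCE scores
anxiety = [12, 30, 18, 25, 7]
osce = [85, 78, 80, 74, 90]
print(round(spearman(anxiety, osce), 3))  # → -0.9
```

A library routine such as `scipy.stats.spearmanr` would additionally return the p-value used to judge significance (the study reports P=0.289).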
APA, Harvard, Vancouver, ISO, and other styles
30

Sarfraz, Farrukh, Fahad Sarfraz, Imran Jawad, Mohammad Zia-Ul-Miraj, Rizwan Zafar Ahmad, and Jawairia Saleem. "OSCE: An Effective Tool of Assessment for Medical Students." Pakistan Journal of Medical and Health Sciences 15, no. 8 (August 30, 2021): 2235–39. http://dx.doi.org/10.53350/pjmhs211582235.

Full text
Abstract:
Background: Different tools are used to assess the competency of a student. Since its introduction in 1975 by Dr. Harden and his team, the OSCE has made tremendous strides and has been used successfully to assess the clinical competencies of medical students globally. The OSCE is an assessment tool in which the student is observed performing different tasks at specified stations. The current study examined medical students' perceptions of the OSCE examination, giving room for constructive criticism and further improvement of the system wherever required. Objective: To explore the views of final year MBBS students of Azra Naheed College about the OSCE. Material and Method: Study design: Quantitative, cross-sectional study. Settings: Azra Naheed College, Lahore. Duration: Six months, i.e., 1st July 2020 to 31st December 2020. Data Collection procedure: After informed consent and appropriate briefing, the questionnaire was distributed among the final year medical students of Azra Naheed Medical College. The questionnaire developed by Russell et al. was used. Results: Of the 148 students who participated in the study, 66 (45%) were female and 82 (55%) were male. The majority of the students were satisfied with the quality of the exam: 29.7% were aware of the nature of the exam, 52.7% were satisfied that the syllabus taught was asked in the exam, and 58.1% were satisfied with the time allocated for each station. The majority (60%) considered the OSCE an exam of a practical nature that is not biased by gender or ethnicity. More than 50% of the students were satisfied with the standard of the exam. At the same time, more than 50% of students considered the essay exam the easiest format of assessment. However, the OSCE was considered the fairest form of assessment (73%), and 68.9% perceived that learning is enhanced more by MCQs than by other formats of assessment. Conclusion: The perception of students about the OSCE as an assessment tool was very encouraging, as it not only provided them the opportunity to highlight their weaknesses but also helped them to perform well in the exam, manage time during the exam, and overcome the stress that influenced their results. Key words: OSCE, Objective, Examinations, Clinical skills, qualitative analysis
APA, Harvard, Vancouver, ISO, and other styles
31

Hansen, Jamie, and Marilyn Bratt. "Effect of Sequence of Simulated and Clinical Practicum Learning Experiences on Clinical Competency of Nursing Students." Nurse Educator 42, no. 5 (2017): 231–35. http://dx.doi.org/10.1097/nne.0000000000000364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Nakajima, Mikiko Aoyagi, and Keith W. Freesemann. "Help-Seeking Behaviors Among Athletic Training Students in the Clinical Education Setting: A Pilot Study." Athletic Training Education Journal 8, no. 4 (December 1, 2013): 115–23. http://dx.doi.org/10.4085/0804115.

Full text
Abstract:
Context Help-seeking is an important self-regulating and proactive strategy that prepares students to be successful learners. It is particularly important in the clinical education setting, in which students must actively engage in learning. Objective To determine both the type of help-seeking behaviors used by athletic training students in the clinical education setting and the relationship between help-seeking behaviors and achievement in their athletic training program. Design Cross-sectional exploratory study. Setting Online survey. Patients or Other Participants Athletic training students from one Commission on Accreditation of Athletic Training Education–accredited athletic training program. Data Collection and Analysis An online survey was developed using previously validated help-seeking and general self-efficacy scales and several demographic questions. Factorial multivariate analysis of variance, multivariate analysis of covariance, and univariate analyses determined differences among respondents' demographic characteristics and other variables. Results A total of 38 athletic training students responded to the online survey. There was a significant main effect for passing/failing of competency exams (Wilks λ = 0.680, F = 3.061, P = .034), semester (Wilks λ = 0.485, F = 6.905, P = .001), and interaction effect (Wilks λ = 0.591, P = .007). Follow-up analysis showed that first-semester students who passed had significantly lower scores for avoidance of help-seeking (M = 1.229 ± 0.282) compared to first-semester students who did not pass (M = 1.994 ± 0.079; P = .004). Conclusions Students typically engaged in help-seeking behaviors beneficial for learning (ie, instrumental help-seeking). However, students who engaged in avoidance help-seeking had lower achievement scores when measured by the passing/failing of their competency exam at the end of their respective semester.
Preceptors and athletic training educators are encouraged to detect the type of help-seeking behaviors students use and guide them to those that are conducive to learning and success.
APA, Harvard, Vancouver, ISO, and other styles
33

Langenau, Erik E., Gina Pugliano, and William L. Roberts. "Relationships between high-stakes clinical skills exam scores and program director global competency ratings of first-year pediatric residents." Medical Education Online 16, no. 1 (January 2011): 7362. http://dx.doi.org/10.3402/meo.v16i0.7362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ginsburg, Liane R., Deborah Tregunno, Peter G. Norton, Sydney Smee, Ingrid de Vries, Stefanie S. Sebok, Elizabeth G. VanDenKerkhof, Marian Luctkar-Flude, and Jennifer Medves. "Development and testing of an objective structured clinical exam (OSCE) to assess socio-cultural dimensions of patient safety competency." BMJ Quality & Safety 24, no. 3 (November 14, 2014): 188–94. http://dx.doi.org/10.1136/bmjqs-2014-003277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Hayward, Nathan, Anders Sideris, Nathaniel Marshall, Michael Burri, and Stuart G. Mackay. "815 Australian Surgery Trainee Education for Contemporary Airway Management of OSA: A Pilot Randomised Controlled Study." Sleep 44, Supplement_2 (May 1, 2021): A318—A319. http://dx.doi.org/10.1093/sleep/zsab072.812.

Full text
Abstract:
Abstract Introduction In Australia, ASOHNS delivers no formal curriculum for training OHNS, or levels of competency required, to assess and treat complex OSA patients. Australian OHNS trainee confidence, knowledge, and exposure to complex multi-level OSA surgery are lacking. Lack of exposure to a sufficient complex OSA surgery case load has been identified as a major weakness in training within a recently published international survey. This study was a randomized clinical trial evaluating the effect of Australian OHNS trainee exposure to education materials, compared with no exposure, on Sleep Surgery specific examination performance (multiple choice and short written answer). Methods 70 accredited and 45 unaccredited OHNS trainees were invited to participate in this trial. Participants were randomly assigned to Sleep Surgery educational material exposure or no exposure to those materials. Those randomized to the exposure group were provided educational material and were given 2 weeks of exposure time prior to the exam. Each participant then completed an online exam, consisting of 40 multiple choice questions and 1 short answer question (marked by a field expert). Differences between exposure and control group means were tested using independent t-tests. Results 24 trainees were allocated to exposure and 22 to control. 33 participants attempted the examination. There were no significant differences between groups in the multiple choice (mean difference 1.3 ± 1.6 [3.3%], p=0.41) or written exam test scores (mean difference 1.8 ± 1.2 [9.0%], p=0.14). Accredited trainees performed better in the written exam (mean difference 2.6 ± 1.1 [13.0%], p=0.03). The mean test score in a separate exploratory group of 2 sleep fellowship trained OHNS was considerably higher in both exams. Conclusion This study suggests that exposure to formal education material may improve understanding of sleep surgery.
Accredited trainees performed better than unaccredited trainees but the difference was small. Poor test performance in both groups may indicate further formal sleep surgery teaching is required in the ASOHNS training curriculum. Further research is required to identify the best ways possible to educate OHNS trainees in the complex and nuanced decision making required for OSA patients. Support (if any) Illawarra Health and Medical Research Institute Grant 2019.
APA, Harvard, Vancouver, ISO, and other styles
36

Yim, Mi Kyoung. "Reforms of the Korean Medical Licensing Examination regarding item development and performance evaluation." Journal of Educational Evaluation for Health Professions 12 (March 17, 2015): 6. http://dx.doi.org/10.3352/jeehp.2015.12.6.

Full text
Abstract:
Purpose: The Korean Medical Licensing Examination (KMLE) has undergone a variety of innovative reforms implemented by the National Health Personnel Licensing Examination Board (NHPLEB) in order to make it a competency-based test. The purpose of this article is to describe the ways in which the KMLE has been reformed and the effect of those innovations on medical education in Korea. Methods: Changes in the KMLE were traced from 1994 to 2014 by reviewing the adoption of new policies by the NHPLEB and the relevant literature. Results: The most important reforms that turned the examination into a competency-based test were the following: First, the subjects tested on the exam were revised; second, R-type items were introduced; third, the proportion of items involving problem-solving skills was increased; and fourth, a clinical skills test was introduced in addition to the written test. The literature shows that the above reforms have resulted in more rigorous licensure standards and have improved the educational environment of medical schools in Korea. Conclusion: The reforms of the KMLE have led to improvements in how the competency of examinees is evaluated, as well as improvements in the educational system in medical schools in Korea.
APA, Harvard, Vancouver, ISO, and other styles
37

Choi, Erin, Sonia Khan, Laxmi Chintakayala, Katherine Holder, Bernardo Galvan, and Steven Berk. "Classic clinical descriptions of disease: curing medical education with a dose of the past." Southwest Respiratory and Critical Care Chronicles 9, no. 37 (January 28, 2021): 54–59. http://dx.doi.org/10.12746/swrccc.v9i37.799.

Full text
Abstract:
The importance of clinical skills, including obtaining patient history and performing physical examination, has been de-emphasized in the modern medical school curriculum. With advancements in diagnostic technologies, the clinical presentation of diseases in medical textbooks has been simplified, diminished, and largely replaced with detailed pathophysiology and laboratory findings. The implementation of the United States Medical Licensing Exam (USMLE) Step 1 has also contributed in pushing medical education toward classroom-based learning rather than emphasizing clinical experience. Clinical skills competency is crucial to accurately diagnose patients and simultaneously lowers health care costs by not relying on unneeded diagnostic tests. To address this gap in medical knowledge, a group of students at Texas Tech University Health Sciences Center, Lubbock, Texas, have created a website documenting classic clinical disease descriptions written by some of the renowned physicians from the 19th and 20th centuries, including Osler, Flint, Gowers, etc. This website will continue to grow and will be a useful tool for professors, physicians, and medical students.
APA, Harvard, Vancouver, ISO, and other styles
38

Tsao, Peg, Veronica Haight, Ashley Dunn, Lisa Jackson, and Steven Goodman. "2007 The clinical research operations program: Educating clinical research staff." Journal of Clinical and Translational Science 2, S1 (June 2018): 61. http://dx.doi.org/10.1017/cts.2018.228.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: The Clinical Research Operations Program is a free educational program designed to educate clinical research personnel on the conduct of clinical research (CR). The participant completes 16 required core sessions (24 h), 4 elective sessions (4 h), and passes the final exam to receive a certification in CR operations at Stanford. Sessions focus on the 9 domains of CR (established by the Joint Task Force for Clinical Trial Competency), such as Ethical & Participant Safety Considerations, Clinical Study Operations, & Data Management/Informatics. METHODS/STUDY POPULATION: Sessions are taught by volunteer lecturers. Participants may also attend the sessions without pursuing the certification. The program objective is to provide easy-access education in CR in order to increase regulatory compliance and staff retention, and to improve CR at Stanford. The program targets CR coordinators; however, staff, postdocs, fellows, and faculty also participate. RESULTS/ANTICIPATED RESULTS: Since the program's launch in January 2017, 119 individuals have enrolled in the certification program. The most represented group is the Department of Medicine. Sessions consistently reach their maximum capacity with a waiting list. Each core session requires that the participant complete an evaluation (Likert scale, 1–5) of the registration process (4.5/5), the class environment (4.6/5), the presented content (4.5/5), and the instructor (4.6/5). Data from these evaluations are positive to date and are used to continually refine the program. DISCUSSION/SIGNIFICANCE OF IMPACT: N/A.
APA, Harvard, Vancouver, ISO, and other styles
39

Prislin, Michael D., Sue Ahearn, and John Boker. "Do Simulation-Based Skill Exercises and Post-Encounter Notes Add Additional Value to a Standardized Patient-Based Clinical Skills Examination?" Education Research International 2011 (2011): 1–5. http://dx.doi.org/10.1155/2011/107861.

Full text
Abstract:
Background. Standardized patient (SP) clinical assessments have limited utility in assessing higher-level clinical competencies. This study explores the value of including simulation exercises and post-encounter notes in an SP clinical skills examination. Methods. Two exercises involving cardiac auscultation and ophthalmic funduscopy simulations, along with written post-encounter notes, were added to an SP-based performance examination. Descriptive analyses of students' performance and correlations with SP-based performance measures were obtained. Results. Students' abilities to detect abnormalities on physical exam were highly variable. There were no correlations between SP-based and simulation-derived measures of physical examination competency. Limited correlations were found between students' abilities to perform and document physical examinations and their formulation of appropriate differential diagnoses. Conclusions. Clinical simulation exercises add depth to SP-based assessments of performance. Evaluating the content of post-encounter notes offers some insight into students' integrative abilities, and this appears to be improved by the addition of simulation-based post-encounter skill exercises. However, further refinement of this methodology is needed.
APA, Harvard, Vancouver, ISO, and other styles
40

Cummings, Joanna, and Diane Stadler,. "Building Clinical Nutrition Capacity in Lao PDR: A Novel Clinical Nutrition Educational Model to Provide Interventions to Treat Malnutrition and Non-Communicable Diseases." Current Developments in Nutrition 4, Supplement_2 (May 29, 2020): 597. http://dx.doi.org/10.1093/cdn/nzaa048_003.

Full text
Abstract:
Abstract Objectives Malnutrition is the number one health priority in Lao PDR, where 36% of children under the age of five are stunted and 27% are underweight. With escalating rates of diabetes and other non-communicable diseases compounding the problem, hospital-based nutrition interventions are needed. A partnership between OHSU and the Lao Ministry of Health is working to fill the gap in knowledge and application by providing clinical nutrition education to health care providers. Methods A clinical nutrition needs assessment was conducted in early 2016 and informed the design of a 1000-hour, evidence-based applied clinical dietetics certificate program for health professionals. Aligning with the 2015 International Dietitian Education Program (IDE) standards, the curriculum was adapted to be culturally relevant and appropriate. Semi-structured interviews, open-ended questions, and pre- and post-tests were used to evaluate the program. Results As of January 2020, 32 clinicians have successfully completed the program. At matriculation, the mean pre-test score was 43%, and 50% of students could not calculate BMI or waist-to-hip ratio, determine individual energy, nutrient and fluid requirements, provide individualized medical nutrition therapy, or manage severe acute malnutrition. The mean final exam score was 84%, with the greatest improvements in malnutrition (+94%) and chronic disease (+62%) knowledge and application. Upon completion, 98% of students successfully met IDE competency standards, conducted nutrition-focused physical exams, administered culturally appropriate nutrition interventions, and provided nutrition education to medical teams at provincial hospitals. Conclusions Providing hands-on training on nutrition assessment, differential diagnosis, and treatment plan design and implementation enables health care providers in Lao PDR to better treat patients with malnutrition and non-communicable diseases that are prevalent within this developing country.
This educational curriculum may serve as a model for other developing countries. Funding Sources OHSU Global-SE Asia, OHSU Graduate Programs of Human Nutrition and the Vejdusit Foundation
APA, Harvard, Vancouver, ISO, and other styles
41

Romadhoni, Romadhoni, Gandes Retno Rahayu, and Umatul Khoiriyah. "IDENTIFIKASI MOTIVASI DAN DUKUNGAN YANG DIPERLUKAN MAHASISWA RETAKER UJI KOMPETENSI MAHASISWA PROGRAM PROFESI DOKTER." Jurnal Pendidikan Kedokteran Indonesia: The Indonesian Journal of Medical Education 10, no. 1 (March 31, 2021): 75. http://dx.doi.org/10.22146/jpki.48329.

Full text
Abstract:
Background: Around 10% of medical students fail their final exam. In Indonesia, some students have failed the competency test for medical profession program students (UKMPPD) as many as 14 times. One impact of this failure is the occurrence of mental health disorders, and these students need support beyond guidance on clinical knowledge and skills. This study aimed to identify the motivation of UKMPPD retaker students and the support they need in order to achieve graduation. Methods: This is a qualitative study with a phenomenological approach; data were obtained through in-depth interviews and focus group discussions (FGD) with 16 respondents and analyzed using thematic analysis. Results: Intrinsic motivation predictors originated from learning independence, relatedness, and low perceived competence. Extrinsic motivation predictors originated from external regulation, namely the UKMPPD rules. Amotivation predictors originated from anxiety and the limit on the study period. The support respondents needed from their medical schools took the form of a psychological approach, exemption of retaker students from selection tests, selection tests compatible with the blueprint and rules of the UKMPPD multiple-choice question exam, and the opportunity to pass through other examination methods. Motivational conditions can change with the factors that influence them: motivation may increase, decrease, or persist. Conclusion: The motivational conditions of retaker students range from amotivated to motivated (externally and internally). Support from the medical school can help students maintain and increase their motivation.
APA, Harvard, Vancouver, ISO, and other styles
42

Nguyen, Laura, Kim Tardioli, Matthew Roberts, and James Watterson. "Development and incorporation of hybrid simulation OSCE into in-training examinations to assess multiple CanMEDS competencies in urologic trainees." Canadian Urological Association Journal 9, no. 1-2 (February 5, 2015): 32. http://dx.doi.org/10.5489/cuaj.2366.

Full text
Abstract:
Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies. Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis. Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE in 15 minutes. Communicator and knowledge scores were similar among all PGYs. Scores in technical skills were higher in PGY-5 compared with PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for realistic demonstration of competencies. Equipment cost was $218 for the exam station. Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective; it also provided a framework for future development of similar OSCE stations to assess resident competencies across multiple domains.
APA, Harvard, Vancouver, ISO, and other styles
43

Ong, John, Carla Swift, Nicholas Magill, Sharon Ong, Anne Day, Yasseen Al-Naeeb, and Arun Shankar. "The association between mentoring and training outcomes in junior doctors in medicine: an observational study." BMJ Open 8, no. 9 (September 2018): e020721. http://dx.doi.org/10.1136/bmjopen-2017-020721.

Full text
Abstract:
Objective: To determine quantitatively if a positive association exists between the mentoring of junior doctors and better training outcomes in postgraduate medical training within the UK. Design: Observational study. Participants: 117 trainees from the East of England Deanery (non-mentored group) and the recently established Royal College of Physicians (RCP) Mentoring scheme (mentored group) who were core medical trainees (CMTs) between 2015 and 2017 completed an online survey. Trainees who received mentoring at the start of higher specialty training, incomplete responses, and trainees who were part of both the East of England Deanery and the RCP Mentoring scheme were excluded, leaving 85 trainees in the non-mentored arm and 25 trainees in the mentored arm. Responses from a total of 110 trainees were analysed. Main outcome measures: Pass rates of the various components of the Membership of the Royal College of Physicians (MRCP) (UK) examination (MRCP Part 1, MRCP Part 2 Written and MRCP Part 2 PACES), pass rates at the Annual Review of Competency Progression (ARCP), trainee involvement in significant events, clinical incidents or complaints, and trainee feedback on career progression and confidence. Results: Mentored trainees reported higher pass rates of the MRCP Part 1 exam versus non-mentored trainees (84.0% vs 42.4%, p<0.01). Mentored international medical graduates (IMGs) reported higher pass rates than non-mentored IMGs in the MRCP Part 2 Written exam (71.4% vs 24.0%, p<0.05). ARCP pass rates in mentored trainees were observed to be higher than in non-mentored trainees (95.8% vs 69.9%, p<0.05). Rates of involvement in significant events, clinical incidents and complaints in both groups did not show any statistical difference. Mentored trainees reported higher confidence and career progression. Conclusions: A positive association is observed between the mentoring of CMTs and better training outcomes. 
Further studies are needed to investigate the causative effects of mentoring in postgraduate medical training within the UK.
APA, Harvard, Vancouver, ISO, and other styles
44

Geisler, Paul R., Chris Hummel, and Sarah Piebes. "Evaluating Evidence-Informed Clinical Reasoning Proficiency in Oral Practical Examinations." Athletic Training Education Journal 9, no. 1 (May 1, 2014): 43–48. http://dx.doi.org/10.4085/090143.

Full text
Abstract:
Clinical reasoning is the specific cognitive process used by health care practitioners to formulate accurate diagnoses for complex patient problems and to set up and carry out effective care. Athletic training students and practitioners need to develop and display effective clinical reasoning skills in the assessment of injury and illness as a first step towards evidence-based functional outcomes. In addition to the proper storage of and access to appropriate biomedical knowledge, an equally important component of effective clinical reasoning is the ability to select and interpret various conclusions from the mounting quantity of evidence-based medicine (EBM) sources. In assessing injury and illness, this competency is particularly reliant upon experience, skill execution, and available evidence pertaining to the diagnostic accuracy and utility of various special tests and physical examination procedures. In order to both develop and assess the ability of our students to integrate EBM into their clinical reasoning processes, we have designed exercises and evaluations that pertain to evidence-based clinical decision making during oral practical examinations in our assessment of athletic injury labs. These integrated oral practical examinations are designed to challenge our students' thinking and clinical performance by providing select key features of orthopaedic case pattern presentations and asking students to choose the diagnostic tests that best fit that particular case. Students must not only match the appropriate special/functional tests to the case's key features, but also choose and explain how useful the chosen tests are for the differential diagnosis process, relative to the best diagnostic evidence. 
This manuscript will present a brief theoretical framework for our model and will discuss the process we use to evaluate our students' ability to properly select, perform, and explain various orthopaedic examination skills and the relevant evidence available. Specific examples of oral practical exam modules are also provided for elucidation.
APA, Harvard, Vancouver, ISO, and other styles
45

Apiratwarakul, Korakot, Kamonwon Ienghong, Nichaphat Tongthummachat, Takaaki Suzuki, Somsak Tiamkao, and Vajarabhongsa Bhudhisawasdi. "Assessment of Competency of Point-of-Care Ultrasound in Emergency Medicine Residents during Ultrasound Rotation at the Emergency Department." Open Access Macedonian Journal of Medical Sciences 9, E (April 20, 2021): 293–97. http://dx.doi.org/10.3889/oamjms.2021.5954.

Full text
Abstract:
BACKGROUND: Point-of-care ultrasound (POCUS) is a core competency in Emergency Medicine (EM) residency training. However, there are many methods that can be used to evaluate this competency, and the best practices for teaching ultrasonography to residents have yet to be determined. AIM: The researchers aimed to evaluate the POCUS knowledge and skills of EM residents after they participated in POCUS training during their first ultrasound rotation in the Emergency Department. METHODS: A curriculum was developed in the form of a 2-week rotation in the EM residency program at the Department of EM at Khon Kaen University's Srinagarind Hospital. It consisted of didactic lectures, bedside ultrasound training, a journal club, and image review. Assessment tools were developed, including a knowledge exam administered to each resident before and after the rotation. Furthermore, an ultrasound skills test was developed for use at the end of the first year of the EM residency program. RESULTS: Nine EM residents completed their rotations and the tests. The average pre-training and post-training scores were 5.25 ± 1.03 and 8.50 ± 1.20, respectively; the mean difference between pre- and post-test scores was 3.25 ± 1.28 (95% CI −4.321, −2.178). On the ultrasound skills test, the average total score was 26.13 out of 30 (87.1%). The residents scored higher on image acquisition (87.5%) and image interpretation (87.5%); for clinical decision-making, the average score was 75%. The survey questions indicated that, of all the academic activities, "bedside ultrasound" most encouraged the residents to learn POCUS and was given the highest score (4.75 out of 5). CONCLUSIONS: The 2-week ED ultrasound rotation improved the residents' EM ultrasound knowledge and skills.
APA, Harvard, Vancouver, ISO, and other styles
46

Chaplin, T., L. McMurray, and A. K. Hall. "LO097: A novel curriculum for assessing competency in resuscitation at the foundations of discipline level of training." CJEM 18, S1 (May 2016): S63. http://dx.doi.org/10.1017/cem.2016.134.

Full text
Abstract:
Introduction / Innovation Concept: Junior residents are often the first physicians who attend to the acutely unwell floor patient, especially at night and on weekends. The ‘Nightmares Course’ at Queen’s University was designed to address an Entrustable Professional Activity (EPA) relevant to several residency programs at the ‘Foundations of Discipline’ level of training: “to manage the acutely unwell floor patient for the first 5-10 minutes until help arrives”. In keeping with competency based medical education principles, this course offers longitudinal and repetitive practice and assessment. We have also designed a summative objective structured clinical exam (OSCE) in order to identify trainees who require additional remedial practice of this EPA. Methods: We developed simulated cases that reflect common but “scary” calls to the floor. We then, using a modified Delphi process with experts in resuscitation, defined relevant milestones applicable to the Foundations of Discipline level of training in order to inform our formative assessment. We also modified the Queen’s Simulated Assessment Tool (QSAT) to adopt CBME terminology and this will be used to provide a summative assessment during a four-scenario OSCE in the spring. Residents with QSAT scores below the competency threshold will be enrolled in a remediation course. Curriculum, Tool, or Material: Weekly sessions were led by staff physicians and were offered to first-year residents from internal medicine, core surgery, obstetrics and gynecology, and anesthesiology over the academic year. Each resident participated in one session every 4-week block. Sessions were organized into themes such as “shortness of breath” or “decreased level of consciousness” and involved three high-fidelity simulated cases with a structured debrief following each case. Formative feedback was given following each case. 
Conclusion: The Nightmares Course is a novel simulation-based, multidisciplinary curriculum in resuscitation medicine. It includes longitudinal practice and repetitive assessment, as well as summative testing and remediation of an EPA common to several residency programs.
APA, Harvard, Vancouver, ISO, and other styles
47

Levine, Oren Hannun, Ines B. Menjak, Stephanie Yasmin Brule, Meghan McConnell, Sukhbinder K. Dhesy-Thind, Som Mukherjee, and Melissa C. Brouwers. "The PULSES project: Teaching the vital elements of code status discussions to oncology residents." Journal of Clinical Oncology 35, no. 15_suppl (May 20, 2017): 10024. http://dx.doi.org/10.1200/jco.2017.35.15_suppl.10024.

Full text
Abstract:
10024 Background: Discussions with cancer patients around cardiopulmonary resuscitation (CPR), or ‘code status,’ are often led by trainees in oncology, but formal education for this competency is lacking. In this study, we developed and tested a novel communication tool, the PULSES framework, for informed code status decision-making (a six-step approach summarized by the PULSES acronym [Table 1]), through an educational workshop. Methods: A multicentre randomized controlled trial was carried out at 3 academic cancer centres in Ontario, Canada. Residents in medical oncology (MO) and radiation oncology (RO) programs completed a workshop and an objective structured clinical exam (OSCE). Participants were randomized to complete the training before the OSCE (experimental arm) or after the OSCE (control arm). Randomization was stratified for centre and oncology discipline. Expert raters evaluated communication with two rating tools: the novel PULSES scale and the communication skills assessment form (CSAF), a validated benchmark tool that is not specific to oncology content. The primary outcome was improvement in PULSES scores. Results: Forty-six residents consented to participate (28 RO and 18 MO). Groups were well balanced for program and year of training. Participants in the experimental group had higher mean PULSES scores than those in the control group (80.4±13.5 vs 63.4±9.7; p < .001; maximum score = 108). There was no significant effect for program and no significant interaction between program and training condition. Scores for the PULSES and CSAF scales were highly correlated (R = 0.864). Conclusions: The PULSES training improved performance among oncology residents for code status discussions. Improved communication scores were not scale-specific. The PULSES framework offers a standardized approach and can be incorporated into competency-based curricula for postgraduate oncology programs. 
Future work will explore whether communication training in this area impacts patient-level outcomes. [Table: see text]
APA, Harvard, Vancouver, ISO, and other styles
48

Levine, Oren Hannun, Ines B. Menjak, Stephanie Yasmin Brule, Meghan McConnell, Sukhbinder K. Dhesy-Thind, Som Mukherjee, and Melissa C. Brouwers. "The PULSES project: Teaching the vital elements of code status discussions to oncology residents." Journal of Clinical Oncology 35, no. 31_suppl (November 1, 2017): 46. http://dx.doi.org/10.1200/jco.2017.35.31_suppl.46.

Full text
Abstract:
46 Background: Discussions with cancer patients around cardiopulmonary resuscitation, or ‘code status,’ are often led by trainees in oncology, but formal education for this competency is lacking. In this study, we developed and tested a novel communication tool, the PULSES framework, for informed code status decision-making (a six-step approach summarized by the PULSES acronym [see Table]), through an educational workshop. Methods: A multicentre randomized controlled trial was carried out at 3 academic cancer centres in Ontario, Canada. Residents in medical oncology (MO) and radiation oncology (RO) programs completed a workshop and an objective structured clinical exam (OSCE). Participants were randomized to complete the training before the OSCE (experimental arm) or after the OSCE (control arm). Randomization was stratified for centre and oncology discipline. Expert raters evaluated communication with two rating tools: the novel PULSES scale and the communication skills assessment form (CSAF), a validated benchmark tool that is not specific to oncology content. The primary outcome was improvement in PULSES scores. Results: Forty-six residents consented to participate (28 RO and 18 MO). Groups were well balanced for program and year of training. Participants in the experimental group had higher mean PULSES scores than those in the control group (80.4±13.5 vs 63.4±9.7; p<.001; maximum score = 108). There was no significant effect for oncology program and no significant interaction between program and training condition. Scores from the PULSES and CSAF scales were highly correlated (R = 0.864). Conclusions: The PULSES training improved performance among oncology residents for code status discussions. Improved communication scores were not scale-specific. The PULSES framework offers a standardized approach and can be incorporated into competency-based curricula for postgraduate oncology programs. 
Future work will explore whether communication training in this area impacts patient-level outcomes. [Table: see text]
APA, Harvard, Vancouver, ISO, and other styles
49

McAlpine, Kristen, and Stephen Steele. "Missing the mark: Current practices in teaching the male urogenital examination to Canadian undergraduate medical students." Canadian Urological Association Journal 10, no. 7-8 (August 16, 2016): 281. http://dx.doi.org/10.5489/cuaj.3679.

Full text
Abstract:
Introduction: The urogenital physical examination is an important aspect of patient encounters in various clinical settings. Introductory clinical skills sessions are intended to provide support and alleviate students' anxiety when learning this sensitive exam. The techniques each Canadian medical school uses to guide their students through the initial urogenital examination have not been previously reported. Methods: This study surveyed pre-clerkship clinical skills program directors at the main campus of English-speaking Canadian medical schools regarding the curriculum they use to teach the urogenital examination. Results: A response rate of 100% was achieved, providing information on the resources and faculty available to students, as well as the manner in which students were evaluated. Surprisingly, over one-third of the Canadian medical schools surveyed failed to provide a setting in which students perform a urogenital examination on a patient in their pre-clinical years. Additionally, almost 50% of Canadian medical schools reported no formal evaluation of this skill set prior to the clinical training years. Conclusions: To ensure medical students are confident and accurate in performing a urogenital examination, it is vital they be provided the proper resources, teaching, and training. As we progress towards a competency-based curriculum, it is essential that increased focus be placed on patient encounters in undergraduate training. Further research to quantify students' exposure to the urogenital examination during clinical years would be of interest. Without this commitment by Canadian medical schools, we are doing a disservice not only to the medical students, but also to our patient population.
APA, Harvard, Vancouver, ISO, and other styles
50

Rotenberg, C., and S. Field. "P110: Are there differences in student academic and clinical performance after rotations at tertiary or community care Emergency Medicine teaching sites?" CJEM 21, S1 (May 2019): S103—S104. http://dx.doi.org/10.1017/cem.2019.301.

Full text
Abstract:
Introduction: Canadian undergraduate medical Emergency Medicine (EM) rotations are often completed at either tertiary care centres or regional community hospitals. While the latter offer students exposure to different practice settings and population needs, many students perceive that teaching at tertiary care EM departments is superior to that in community hospitals. At our institution, third-year undergraduate medical students complete a three-week EM rotation at either a tertiary centre or a community hospital. We compared academic and clinical performance between students trained in tertiary care centres and students trained in community hospitals. Methods: Academic and clinical performance in EM was evaluated based on the results of an EM-specific multiple choice examination (MCQE) and an annual Objective Structured Clinical Exam (OSCE) assessing competency in a broad range of clinical scenarios commonly addressed in EM. The 40-question MCQE is administered quarterly, and a mix of old and new questions is used to ensure consistency. The OSCE is administered annually and relies on the same principle to remain consistent. OSCE scores are binary: pass or fail. We reviewed MCQE and OSCE scores from three consecutive cohorts of students. Students were pooled into two groups, tertiary and community, based on the site of their EM rotation. Mean MCQE and OSCE performance were compared between the two groups of students using two-tailed unpaired t-tests. Chi-squared tests were used to identify significant differences in scores between cohorts. Results: MCQE and OSCE scores from 312 students over three consecutive cohorts were analyzed. Cohorts included 104, 100, and 108 students, with 61% trained in tertiary centres (N = 191). Students trained in tertiary centres had a mean MCQE score of 77%; students from community centres had a mean score of 78%. There was no significant difference in MCQE scores between tertiary- and community-trained students (p = 0.6099). 
The OSCE pass rate was 97% for students trained in tertiary centres and 98% for students trained in community centres. OSCE pass rates were not significantly different between the two groups (p = 0.8145). Conclusion: Despite student perceptions that training in tertiary care EM centres was superior, objective analysis showed that academic and clinical performance were similar regardless of training site.
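The two-tailed unpaired t-test used in the abstract above to compare mean MCQE scores between the two groups can be sketched as follows. This is a minimal illustration only: the `unpaired_t_test` helper and the score lists are hypothetical, not the study's actual data or analysis code, and the sketch assumes equal variances (pooled standard deviation).

```python
import math
from statistics import mean, stdev

def unpaired_t_test(a, b):
    """Two-tailed unpaired (Student's) t statistic and degrees of freedom
    for two independent samples, assuming equal variances."""
    na, nb = len(a), len(b)
    # Pooled variance across the two groups
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical exam scores (percent) for two training sites
tertiary = [77, 80, 74, 79, 76]
community = [78, 81, 75, 80, 77]
t, df = unpaired_t_test(tertiary, community)
print(round(t, 3), df)  # a small |t| gives a large p-value: no evidence of a difference
```

The t statistic would then be converted to a p-value from the t distribution with `df` degrees of freedom (e.g. via `scipy.stats`, which also provides this whole test as `ttest_ind`).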
APA, Harvard, Vancouver, ISO, and other styles