4/3/2023

Data reliability and validity

Objective: To test the reliability and validity of morbidity data recorded by general practitioners (family physicians) on structured recording forms in active data collection systems.

Methods: The consulting general practitioner recorded the problems managed at 347 video-taped doctor-patient contacts. Two observers independently viewed the video-tapes and completed a recording form for each. Problems were centrally coded using the International Classification of Primary Care (ICPC). Positive agreement regarding the morbidity managed at matched contacts was assessed at three levels of specificity (chapter, chapter-component, specific rubric), and agreement taking negative agreement into account was assessed using Cohen's Kappa.

Results: The overall distribution of morbidity did not differ between observers. Positive mean agreement was 78.8% at chapter level, 69.6% at chapter-component level, and 55.9% at rubric level. Kappa statistics showed agreement better than chance in all chapters, but failed to reach significance at rubric level in three chapters.
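The two agreement measures mentioned above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical observer data, not the study's actual analysis code: `cohens_kappa` chance-corrects raw agreement using the raters' marginal distributions, and `positive_agreement` computes specific positive agreement for a single code as 2a / (2a + b + c).

```python
from collections import Counter

def cohens_kappa(obs1, obs2):
    """Cohen's kappa for two raters' categorical codes.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance from the marginals.
    """
    assert len(obs1) == len(obs2)
    n = len(obs1)
    p_o = sum(a == b for a, b in zip(obs1, obs2)) / n
    c1, c2 = Counter(obs1), Counter(obs2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def positive_agreement(obs1, obs2, code):
    """Specific positive agreement for one code: 2a / (2a + b + c),
    where a = contacts both raters assign the code,
    b, c = contacts only one rater assigns it."""
    a = sum(x == code and y == code for x, y in zip(obs1, obs2))
    b = sum(x == code and y != code for x, y in zip(obs1, obs2))
    c = sum(x != code and y == code for x, y in zip(obs1, obs2))
    return 2 * a / (2 * a + b + c)

# Hypothetical ICPC chapter codes for five matched contacts:
obs1 = ["R", "R", "K", "K", "D"]
obs2 = ["R", "K", "K", "K", "D"]
print(cohens_kappa(obs1, obs2))            # chance-corrected agreement
print(positive_agreement(obs1, obs2, "K")) # positive agreement on "K"
```

In practice, positive agreement is reported per code and averaged, which is why it falls as coding becomes more specific (chapter to rubric): narrower categories give raters more ways to disagree.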