Research Article | Peer-Reviewed

Test Item Writing Competence Among Oman College of Health Sciences Nurse Faculty

Received: 31 October 2024     Accepted: 12 November 2024     Published: 29 November 2024
Abstract

Background: The demand for nursing faculty to deliver high-quality teaching and assessment has surged, emphasizing the need for accurate learning assessment through effective testing and outcome measurement. Yet, the literature reveals that many nursing faculty are underprepared in measurement and evaluation, lacking essential competencies in test development and often failing to follow established guidelines. While both teacher-made tests (TMT) and standardized test scores are used to inform nursing faculty about what students have learned, the validity and reliability of TMT remain a significant concern. This study, guided by Social Justice Theory, assessed nursing faculty competencies in TMT development within the Oman College of Health Sciences, evaluating the fairness and effectiveness of these assessments based on content coverage, difficulty level, and test validity and reliability. Methodology: Descriptive statistics were used to describe the sample; Pearson's correlation was used to determine the relationship between TMT and committee-designed standardized end-of-semester final examinations (CDESFE). Results: Results showed a strong positive correlation between student scores on TMT (M = 23.95, SD = 4.74) and CDESFE (M = 31.99, SD = 6.22), r(1672) = .613, p < .001. MANOVA indicated no significant differences between TMT and CDESFE with respect to best-practice guidelines and item analysis (Wilks' Λ = .93, F(2, 21) = .78, p = .47, partial η² = .69). Multiple regression analysis further demonstrated that TMT and CDESFE scores together significantly predict students' overall academic achievement (F(2, 1671) = 2241, p < .001, R² = .73), underscoring the predictive value of both testing methods for student success. Conclusion: Given the gap in how nurse faculty implement TMT, there are potential negative consequences for students' progress toward the licensure examination. This study contributes to nursing science and education by supporting objective, efficient, fair, and equitable assessment measures in the classroom setting. These findings may also transfer to the clinical setting, where nursing students, staff, and faculty are assessed during and after educational sessions and workshops.
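As a concrete illustration of the analysis pipeline the abstract describes, the sketch below reruns the two core procedures (Pearson's correlation and multiple regression) in Python on synthetic data. This is not the author's code: the column names, the data-generation step, and the use of scipy/statsmodels are assumptions; only the sample size implied by r(1672) and the reported means and standard deviations are taken from the abstract.

```python
# Illustrative sketch only -- not the study's actual code or data.
# Synthetic scores are scaled to the means/SDs reported in the abstract;
# everything else (column names, noise model) is an assumption.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1674  # r(1672) implies df = n - 2, so n = 1674 student records

tmt = rng.normal(23.95, 4.74, n)                # teacher-made test scores
cdesfe = 0.80 * tmt + rng.normal(12.8, 4.8, n)  # correlated final-exam scores
overall = 0.5 * tmt + 0.5 * cdesfe + rng.normal(0, 3.0, n)  # hypothetical achievement

df = pd.DataFrame({"tmt": tmt, "cdesfe": cdesfe, "overall": overall})

# Pearson's correlation between TMT and CDESFE scores
r, p = pearsonr(df["tmt"], df["cdesfe"])
print(f"r({n - 2}) = {r:.3f}, p = {p:.3g}")

# Multiple regression: both test types predicting overall achievement
model = smf.ols("overall ~ tmt + cdesfe", data=df).fit()
print(f"F(2, {int(model.df_resid)}) = {model.fvalue:.0f}, "
      f"p = {model.f_pvalue:.3g}, R² = {model.rsquared:.2f}")
```

Under these assumptions the printed R² is read the same way as the reported value: an R² of .73 means the two exam scores jointly account for roughly three-quarters of the variance in overall achievement.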

Published in Teacher Education and Curriculum Studies (Volume 9, Issue 4)
DOI 10.11648/j.tecs.20240904.15
Page(s) 138-151
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Teacher-Made Test, End-of-Semester Final Examination, Students’ Test Scores, Academic Achievement, Best Practice Guidelines, Item Analysis

References
[1] National League for Nursing. (2020, November). The fair testing imperative in nursing education: A living document from the National League for Nursing.
[2] Rawls, J. (1971). A theory of justice. The Belknap Press of Harvard University Press.
[3] Wright, C. D., Huang, A. L., Cooper, K. M., & Brownell, S. E. (2018). Exploring differences in decisions about exams among instructors of the same introductory biology course. International Journal for the Scholarship of Teaching and Learning, 12(2).
[4] Simsek, A. (2016). A comparative analysis of common mistakes in achievement tests prepared by school teachers and corporate trainers. European Journal of Science and Mathematics Education, 4(4), 477–489.
[5] Asim, A. E., Ekuri, E. E., & Eni, E. I. (2013). A diagnostic study of pre-service teachers’ competency in multiple-choice item development. Research in Education, 89(1), 13–22.
[6] American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. AERA.
[7] Haladyna, T., & Rodriguez, M. (2013). Developing and validating test items. Routledge.
[8] Ing, L., Musah, M. B., Al-Hudawi, S., Tahir, L. M., & Kamil, M. N. (2015). Validity of teacher-made assessment: A table of specification approach. Asian Social Science, 11(5), 193–200.
[9] Nedeau-Cayo, R., Laughlin, D., Rus, L., & Hall, J. (2013). Assessment of item-writing flaws in multiple-choice questions. Journal for Nurses in Professional Development, 29(2), 52–57.
[10] Ugodulunwa, C. A., & Wakjissa, S. G. (2016). What teachers know about validity of classroom tests: Evidence from a university in Nigeria. Journal of Research & Method in Education, 6(3), 14–19.
[11] Oermann, M. H., Saewert, K. J., Charasika, M., & Yarbrough, S. S. (2009). Assessment and grading practices in schools of nursing: National survey findings part I. Nursing Education Perspectives, 30(6), 352–357.
[12] McDonald, M. (2014). The nurse educator’s guide to assessing learning outcomes (3rd ed.). Jones & Bartlett Learning.
[13] Miller, M. D., Linn, R. L., & Gronlund, N. E. (2009). Measurement and assessment in teaching (10th ed.). Pearson Education.
[14] Bristol, T. J., Nelson, J. W., Sherrill, K. J., & Wangerin, V. S. (2018). Current state of test development, administration, and analysis: A study of faculty practices. Nurse Educator, 43(2), 68–72.
[15] Hijji, B. (2017). Flaws of multiple choice questions in teacher-constructed nursing examinations: A pilot descriptive study. Journal of Nursing Education, 56(8), 490–495.
[16] Kinyua, K., & Okunya, L. O. (2014). Validity and reliability of teacher-made tests: Case study of year 11 physics in Nyahururu District of Kenya. African Educational Research Journal, 2(2), 61–72.
[17] Zhang, Z., & Burry‐Stock, J. A. (2003). Classroom assessment practices and teachers’ self‐perceived assessment skills. Applied Measurement in Education, 16(4), 323–342.
[18] Brown, G., & Abdulnabi, H. (2017). Evaluating the quality of higher education instructor-constructed multiple-choice tests: Impact on student grades. Frontiers in Education, 2, 24.
[19] D'Sa, J. L., & Visbal-Dionaldo, M. L. (2017). Analysis of multiple-choice questions: Item difficulty, discrimination index and distractor efficiency. International Journal of Nursing Education, 9(3), 109–114.
[20] Amelia, R., Sari, A., & Astuti, S. (2021). Chemistry learning outcomes assessment: How is the quality of the tests made by the teacher? Journal of Educational Chemistry (JEC), 3(1), 11–22.
[21] Ramadhan, S., Sumiharsono, R., Mardapi, D., & Prasetyo, Z. K. (2020). The quality of test instruments constructed by teachers in Bima Regency, Indonesia: Document analysis. International Journal of Instruction, 13(2), 507–518.
[22] Moore, L. C., Goldsberry, J., Fowler, C., & Handwerker, S. (2021). Academic and nonacademic predictors of BSN student success on the HESI Exit Exam. Computers, Informatics, Nursing: CIN, 39(10), 570–577.
[23] Gillespie, M. D., & Nadeau, J. W. (2019). Predicting HESI® Exit Exam success: a retrospective study. Nursing Education Perspectives, 40(4), 238–240.
[24] Spurlock, D. R., Jr, & Hunt, L. A. (2008). A study of the usefulness of the HESI Exit Exam in predicting NCLEX-RN failure. The Journal of Nursing Education, 47(4), 157–166.
Cite This Article
  • APA Style

    Ambusaidi, M. K. (2024). Test Item Writing Competence Among Oman College of Health Sciences Nurse Faculty. Teacher Education and Curriculum Studies, 9(4), 138-151. https://doi.org/10.11648/j.tecs.20240904.15


    ACS Style

    Ambusaidi, M. K. Test Item Writing Competence Among Oman College of Health Sciences Nurse Faculty. Teach. Educ. Curric. Stud. 2024, 9(4), 138-151. doi: 10.11648/j.tecs.20240904.15


    AMA Style

    Ambusaidi MK. Test Item Writing Competence Among Oman College of Health Sciences Nurse Faculty. Teach Educ Curric Stud. 2024;9(4):138-151. doi: 10.11648/j.tecs.20240904.15


