This paper reports the series of steps applied in the instrument-building process to ensure the validity and reliability of the evaluation scales developed in the study. The scales are later used in the main study to measure the implementation of evaluation of policies and programmes and, simultaneously, its antecedents (evaluation capacity building (ECB) factors) and consequences (evaluation use) in the Malaysian public sector. Eight constructs are used to measure the proposed framework: five measure the ECB factors, namely evaluation office (EO), internal evaluators (IE), evaluation information system (EIS), financial resources (FR), and evaluation regulatory framework (ERF); one measures the implementation of evaluation; and two, namely accountability and organisational learning, measure evaluation use. To ensure the content validity of the scales, a pre-test session with six practitioners and a content review session with three experts from industry and academia were conducted before the pilot study commenced. The pre-test session with the practitioners helped validate the important constructs of the study, while the content review session with the experts confirmed that the instrument met the requirements of relevance, clarity, and technical quality. Subsequently, a total of 50 respondents who are directly involved in evaluation-related activities at selected divisions in various ministries, or who had previous posting experience in related divisions, were chosen as the pilot study sample. Reliability tests using the Statistical Package for the Social Sciences (SPSS) version 22 revealed Cronbach's alpha scores between 0.732 and 0.923, well above the minimum acceptable value of 0.70. Therefore, based on the feedback from the pre-test respondents and the content review by the experts, coupled with the reliability results of the pilot test, the scales can be accepted as valid and reliable.
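The reliability criterion above rests on Cronbach's alpha, α = (k/(k−1))(1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item, and σ²ₜ the variance of the summed scale. As a minimal illustrative sketch (not part of the original analysis, which used SPSS version 22; the Likert-type responses below are hypothetical, not data from the study), the computation can be reproduced as follows:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 5 respondents x 4 items
responses = np.array([
    [4, 4, 5, 5],
    [3, 3, 4, 3],
    [5, 4, 5, 4],
    [2, 3, 2, 2],
    [4, 5, 4, 5],
])
alpha = cronbach_alpha(responses)
print(round(alpha, 3))  # a scale is accepted here when alpha >= 0.70
```

A scale whose alpha falls below 0.70 would typically be revised (e.g. by dropping weakly correlated items) before the main study.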
In-Text Citation: (Hashim & Ahmad, 2020)
To Cite this Article: Hashim, R. M., & Ahmad, J. (2020). Validity and Reliability of Scales on the Implementation of Evaluation in Measuring Its Antecedents and Consequences in the Malaysian Public Sector. International Journal of Academic Research in Business and Social Sciences, 10(1), 264–292.
Copyright: © 2020 The Author(s)
Published by Human Resource Management Academic Research Society (www.hrmars.com)
This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at: http://creativecommons.org/licences/by/4.0/legalcode