Online Learning Quality Measurement Application for Higher Education: Development and Analysis Using ISO 9126

Abstract

This study aims to develop an application for measuring the quality of online learning in higher education based on the analytic hierarchy process (AHP) model and to analyze the quality of that application using the ISO 9126 standard. This is a research and development (R&D) study. The application was developed using the software development life cycle (SDLC) with the waterfall model, and its quality was analyzed against the six ISO 9126 characteristics: functionality, reliability, efficiency, maintainability, usability, and portability. The results show that the AHP-based application for measuring the quality of online learning in higher education was successfully developed using the SDLC waterfall model. Furthermore, the ISO 9126 quality analysis shows that the application scores excellent on average and is suitable for assessing the quality of online learning in higher education.
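The core of the AHP model referenced above is deriving criterion weights from a pairwise-comparison matrix and checking its consistency. The following minimal sketch illustrates this mechanism; the three criteria and matrix values are hypothetical placeholders, not taken from the study's instrument.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three online-learning
# quality criteria (e.g. content, interaction, assessment) on Saaty's
# 1-9 scale. A[i][j] = how much criterion i outweighs criterion j.
A = np.array([
    [1.0,  3.0, 5.0],
    [1/3,  1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Priority vector: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# where RI is Saaty's random index (0.58 for n = 3).
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))  # CR < 0.1 is acceptable
```

A consistency ratio below 0.1 is the conventional threshold for accepting the judgments; otherwise the evaluator would revise the pairwise comparisons before using the weights.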


Keywords: online learning, higher education, analytic hierarchy process (AHP), ISO 9126
