The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model
Currently, it is difficult to grade students' programming assignments effectively. The objective of this work was therefore to create an Automatic Programming Assessment Tool (APAT) with a grading rubric mapped to Bloom's Taxonomy...
Saved in:
Main Author: | Muhammad Huzaifah Ismail |
---|---|
Format: | thesis |
Language: | eng |
Published: | 2022 |
Subjects: | QA Mathematics |
Online Access: | https://ir.upsi.edu.my/detailsg.php?det=10010 |
id | oai:ir.upsi.edu.my:10010 |
---|---|
record_format | uketd_dc |
institution | Universiti Pendidikan Sultan Idris |
collection | UPSI Digital Repository |
language | eng |
topic | QA Mathematics |
spellingShingle | QA Mathematics Muhammad Huzaifah Ismail The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
description | Currently, it is difficult to grade students' programming assignments effectively. The objective of this work was therefore to create an Automatic Programming Assessment Tool (APAT) with a grading rubric mapped to Bloom's Taxonomy. To guarantee that such a novel tool has appropriate quality attributes, APAT was developed according to Software Engineering (SE) principles, namely software specification, software development, and software verification. The evaluation of the tool focused on its usability and effectiveness. The tool's usability was assessed through a Heuristic Assessment involving eight lecturers from the Faculty of Art, Computing and Creative Industry, Sultan Idris Education University, with data gathered using WebUSE. The tool's effectiveness in assessing student learning was evaluated using Analysis of Variance (ANOVA). Analysis of the survey data showed that the lecturers gave the proposed prototype a high rating, and the ANOVA test revealed significant differences in students' learning outcomes between groups. According to both findings, APAT is highly usable and effective from the standpoints of practicality and assessment, respectively. Thus, teaching professionals can use this innovative assessment tool to enhance the grading of students' programming work. |
format | thesis |
qualification_name | |
qualification_level | Master's degree |
author | Muhammad Huzaifah Ismail |
author_facet | Muhammad Huzaifah Ismail |
author_sort | Muhammad Huzaifah Ismail |
title | The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
title_short | The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
title_full | The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
title_fullStr | The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
title_full_unstemmed | The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model |
title_sort | development of automatic programming assessment tool (apat) that applies learning taxonomy as its grading model |
granting_institution | Universiti Pendidikan Sultan Idris |
granting_department | Fakulti Seni, Komputeran dan Industri Kreatif |
publishDate | 2022 |
url | https://ir.upsi.edu.my/detailsg.php?det=10010 |
_version_ | 1804890508661620736 |
spelling | oai:ir.upsi.edu.my:10010 2024-04-05 The development of Automatic Programming Assessment Tool (APAT) that applies learning Taxonomy as its grading model 2022 Muhammad Huzaifah Ismail QA Mathematics 2022 thesis https://ir.upsi.edu.my/detailsg.php?det=10010 text eng closedAccess Masters Universiti Pendidikan Sultan Idris Fakulti Seni, Komputeran dan Industri Kreatif |