Control Systems and Computers, N3, 2024, Article 3

UDC 004.05+004.942+004.416.2+004.415.532.3

O.V. Kolchyn, PhD (Comp. Sci.), Senior Researcher, V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine, Academician Glushkov Ave., 40, Kyiv, Ukraine, 03187, ORCID: https://orcid.org/0000-0001-7809-536X, kolchin_av@yahoo.com

S.V. Potiyenko, PhD (Comp. Sci.), Senior Researcher, V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine, Academician Glushkov Ave., 40, Kyiv, Ukraine, 03187, ORCID: https://orcid.org/0000-0001-9462-599X, stepan.potiyenko@gmail.com

INTERACTIVE TEST SCENARIO VALIDATION AND DEBUGGING METHOD BASED ON FORMAL MODEL

Introduction. Model-based test case generation is a popular white-box testing strategy. It reduces the time spent on developing a test suite and can improve the level of coverage. However, many reports point to shortcomings of such generated test cases: poor quality and questionable efficiency.

Purpose. The main goal of the proposed method is cost-effective validation, assessment, and debugging of generated test cases. The method helps improve the quality and efficiency of the test cases and makes their scenarios meaningful and goal-oriented. It also provides debugging facilities and simplifies data-dependency analysis and test-scenario editing.

Methods. We propose an interactive post-processing method that allows the user to (1) analyze the path examined by a test case and (2) make safe changes to that path, eliminating its shortcomings while leaving the coverage targets of the test case intact. The method is based on visualization of the path along the control-flow graph of the model, enriched with the actual evaluation history of all variables and with possible alternative variants of behavior. For the consistent substitution of concrete values into signal parameters, which determine the artifacts of the test environment (files, databases, etc.) and check boundary cases (in condition predicates, array indexing, etc.), a method of interactive specification of symbolic traces has been developed.
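The idea of pairing a test path with the evaluation history of every variable can be illustrated with a minimal sketch (this is not the authors' implementation; the `Step`/`PathTrace` names and structure are purely illustrative): replaying a path over control-flow-graph nodes while recording, per variable, the sequence of assignments, so that a "why?" query about a variable's value can be answered from its history.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    node: str      # control-flow-graph node visited by the test path
    assigns: dict  # variable -> value assigned at this node

@dataclass
class PathTrace:
    steps: list
    history: dict = field(default_factory=dict)  # var -> [(node, value), ...]

    def replay(self):
        # Walk the path once, accumulating each variable's evaluation history.
        for step in self.steps:
            for var, value in step.assigns.items():
                self.history.setdefault(var, []).append((step.node, value))
        return self.history

    def why(self, var):
        # Answer "why does var have its current value?" by returning
        # the chain of assignments that produced it along the path.
        return self.history.get(var, [])

trace = PathTrace(steps=[
    Step("init", {"x": 0}),
    Step("loop", {"x": 1, "flag": True}),
    Step("exit", {"x": 2}),
])
trace.replay()
print(trace.why("x"))  # [('init', 0), ('loop', 1), ('exit', 2)]
```

In the actual method this history is shown alongside the path in the control-flow graph, together with admissible alternative branches at each node.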

Results. The role of the user in deciding whether to add a test case to the project test suite and whether to change it remains crucial, but to reduce labor intensity the following processes are automated: evaluation of test scenarios according to objective characteristics (coverage level, defect-detection ability, data cohesion, etc.); highlighting of possible alternatives for corrections; consistent updating of computations after each correction. A prototype was developed based on the proposed methods. The empirical results demonstrated a positive impact on the overall efficiency (defect-detection ability and reduced resource consumption) and quality (meaningfulness, readability, maintainability, usefulness for debugging, etc.) of the generated test suites. The method makes automatically generated test cases trustworthy and usable.
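The automated evaluation step can be sketched as a simple scoring function over the objective characteristics named above; the characteristics come from the abstract, but the metric definitions and weights below are illustrative assumptions, not the article's actual formula.

```python
def score_test_case(covered, targets, kills, mutants, cohesion):
    """Illustrative test-scenario score combining three characteristics:
    coverage level, defect-detection ability, and data cohesion.
    Weights (0.5 / 0.3 / 0.2) are assumptions for the sketch."""
    coverage = len(covered & targets) / len(targets) if targets else 0.0
    detection = kills / mutants if mutants else 0.0  # e.g. mutation score
    return 0.5 * coverage + 0.3 * detection + 0.2 * cohesion

# Example: a test case covering 3 of 4 targets, killing 4 of 10 mutants,
# with a data-cohesion measure of 0.8.
s = score_test_case(covered={1, 2, 3}, targets={1, 2, 3, 4},
                    kills=4, mutants=10, cohesion=0.8)
print(round(s, 3))  # 0.5*0.75 + 0.3*0.4 + 0.2*0.8 = 0.655
```

In the method itself such scores support, rather than replace, the user's decision about including a test case in the suite.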

Conclusion. The proposed toolkit significantly reduces the time spent studying test-generation results, validating the obtained tests, and editing them. Unlike existing simulation methods, the proposed method not only reports the values of variables but also explores the history of their computation, additionally provides information about admissible alternatives, and helps answer questions like “why?” and “what if?”. In future work, we plan to improve the localization of the causes of test failure at the execution phase to speed up the search for defects.

Download full text! (In Ukrainian)

Keywords: model-based testing, test case generation, debugging, test case validation.


Received  14.04.2024