Study and Prioritization of Dependency-oriented Test Code for Efficient Regression Testing
In modern software processes, testing is performed alongside development to detect errors as early as possible and to ensure that changes do not negatively affect the system. However, during development the test suite is frequently expanded to cover new features and tends to grow quickly. Given the resource and time constraints of re-executing large test suites, it is important to develop techniques that reduce the effort of regression testing.
Unit testing is at the core of test-driven development, where testers need to exercise a class or component in isolation from some of its dependencies. Typical reasons for excluding dependencies include the high cost of invoking them (e.g., slow network or database operations, commercial third-party web services) and the potential interference of bugs in the dependencies themselves. In practice, mock objects have been used in software testing to simulate such missing dependencies. However, because dependencies are excluded, mock-object-based testing is not suitable for performance regression or backward-incompatibility regression testing.
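To make the mock-object idea concrete, the following sketch uses Python's `unittest.mock` to stand in for a slow external dependency; the `CheckoutService` class and its `charge` call are illustrative names invented for this example, not taken from any particular system under study.

```python
from unittest.mock import Mock

class CheckoutService:
    """Unit under test: depends on an external payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        # In production this call would hit a slow third-party web service.
        receipt = self.gateway.charge(amount)
        return receipt["status"] == "ok"

# Replace the real gateway with a mock: the test runs fast and is
# isolated from bugs or outages in the dependency itself.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

service = CheckoutService(gateway)
result = service.checkout(42)

assert result is True
gateway.charge.assert_called_once_with(42)  # verify the interaction
```

Note that because the mock never executes the real dependency, the test says nothing about the dependency's latency or its behavior across versions, which is exactly why such tests cannot catch performance or behavioral regressions.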
Even a small performance degradation can have severe consequences, and running performance tests requires extra time and resources; it can also be hard for a developer to understand the performance impact of a change from only a few runs. Furthermore, it is common during software evolution for a single code change to affect several test cases. Our proposed method therefore focuses on prioritizing the performance test suite via performance impact analysis of changes.
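A minimal sketch of impact-based prioritization, assuming a static coverage map from tests to the methods they invoke (the test names, method names, and greedy ordering below are illustrative assumptions, not the paper's actual algorithm): tests that exercise more changed methods are scheduled first.

```python
# Methods modified by the change under analysis (hypothetical names).
changed_methods = {"Parser.parse", "Cache.evict"}

# Assumed static coverage map: test name -> methods it invokes.
coverage = {
    "test_parse_empty":  {"Parser.parse"},
    "test_cache_policy": {"Cache.evict", "Cache.put"},
    "test_end_to_end":   {"Parser.parse", "Cache.evict", "Cache.put"},
    "test_logging":      {"Logger.info"},
}

def impact(test):
    """Number of changed methods this test touches."""
    return len(coverage[test] & changed_methods)

# Tests touching no changed code can be deferred or skipped; the rest
# are ordered by descending impact so likely-affected tests run first.
ordered = sorted((t for t in coverage if impact(t) > 0),
                 key=impact, reverse=True)
print(ordered)
# ['test_end_to_end', 'test_parse_empty', 'test_cache_policy']
```

A real performance-impact analysis would also weight each test by the measured cost of the changed methods on its execution path, but the ordering step itself follows this pattern.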
Nowadays, due to frequent technological innovation and market changes, software libraries evolve very quickly. To ensure that existing client applications are not broken by a library update, backward compatibility has always been one of the most important requirements in the evolution of software platforms and libraries. Previous studies on this topic mainly focus on API signature changes between consecutive library versions, but behavioral changes in APIs whose signatures are untouched are actually more dangerous and cause most real-world bugs, because they cannot be easily detected. Our study categorizes behavioral backward incompatibilities by incompatible behavior and invocation condition. We propose to compare the incompatibilities detected in regression testing with those causing real-world bugs, and to prioritize test cases based on backward incompatibilities in dependencies.
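The following hypothetical example illustrates what a behavioral backward incompatibility looks like: the function `slugify` (an invented name, not from any real library) keeps the same signature across two versions, yet behaves differently under a specific invocation condition, so no signature-level check would flag it.

```python
def slugify_v1(title):
    """Library version 1.x: lowercases and joins words with '-'."""
    return "-".join(title.lower().split())

def slugify_v2(title):
    """Version 2.0: identical signature, but now also strips punctuation.
    Clients that relied on punctuation being preserved silently break."""
    cleaned = "".join(c for c in title if c.isalnum() or c.isspace())
    return "-".join(cleaned.lower().split())

# Invocation condition that exposes the incompatibility:
# inputs containing punctuation produce different results.
assert slugify_v1("Hello, World!") == "hello,-world!"
assert slugify_v2("Hello, World!") == "hello-world"
# Inputs without punctuation behave identically, so sparse test suites
# can easily miss the change.
assert slugify_v1("Hello World") == slugify_v2("Hello World")
```

Categorizing such cases by the incompatible behavior (here, altered return value) and the invocation condition (inputs with punctuation) is what allows test cases covering those conditions to be prioritized.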