Automated program repair (APR) tools apply fault localization (FL) techniques to identify the locations of likely faults to repair. The effectiveness, performance, and repair quality of APR depend in part on the FL technique used. If FL does not identify the location of a fault, an APR tool cannot repair it. If FL assigns the actual faulty statement a low priority for repair, APR performance degrades because more time is needed to find a potential repair. Repair quality also suffers, because APR will modify fault-free statements that are assigned a higher repair priority than the actual faulty statement.
In this paper, we conducted a controlled experiment to evaluate the impact of ten FL techniques on APR effectiveness, performance, and repair quality, using a brute-force APR tool applied to faulty versions of the Siemens suite and two larger programs: space and sed. All FL techniques were effective in identifying all faults; however, Wong3 and Ample1 were the least effective, assigning the faulty statement the lowest repair priority in more than 26% of the trials. APR performance was significantly worst with Ample1, which generated a large number of variants in 29.11% of the trials and took a long time to produce potential repairs. Jaccard produced higher-quality repairs: it generated more validated repairs (potential repairs that pass a set of regression tests) and potential repairs that failed fewer regression tests. Its performance is also noteworthy, since it never generated a large number of variants to produce potential repairs, in contrast to the alternatives.
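The FL techniques compared here are spectrum-based: each ranks statements by a suspiciousness score computed from per-test coverage. As an illustration, the following minimal sketch (our own code, not part of any of the evaluated tools) computes the standard Jaccard metric, ef / (ef + nf + ep), where ef and ep count failing and passing tests that execute a statement, and nf counts failing tests that do not:

```python
def jaccard_suspiciousness(coverage, outcomes):
    """Rank statements by the Jaccard fault-localization metric.

    coverage: dict mapping test id -> set of executed statement ids
    outcomes: dict mapping test id -> True if the test passed
    Returns statement ids sorted most-suspicious first.
    """
    total_failed = sum(1 for ok in outcomes.values() if not ok)
    stmts = set().union(*coverage.values())
    scores = {}
    for s in stmts:
        ef = sum(1 for t, cov in coverage.items() if s in cov and not outcomes[t])
        ep = sum(1 for t, cov in coverage.items() if s in cov and outcomes[t])
        nf = total_failed - ef  # failing tests that miss this statement
        denom = ef + nf + ep
        scores[s] = ef / denom if denom else 0.0
    return sorted(scores, key=scores.get, reverse=True)

# Toy spectrum: statement 3 is executed only by the failing test t1,
# so it receives the highest suspiciousness and is repaired first.
cov = {"t1": {1, 2, 3}, "t2": {1, 2}, "t3": {1}}
res = {"t1": False, "t2": True, "t3": True}
print(jaccard_suspiciousness(cov, res))  # [3, 2, 1]
```

A statement covered by many passing tests (here, statement 1) is pushed down the ranking, which is exactly the priority effect the experiment measures: the deeper the actual faulty statement sits in this list, the more fault-free statements APR mutates first.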
MUT-APR was built by adapting the GenProg version 1 framework. Unlike GenProg, which reuses existing code in the subject program to repair faults, MUT-APR applies a set of PMOs that construct new operators to replace faulty ones, creating new variants within a genetic algorithm. For this paper, we replaced the genetic algorithm in MUT-APR with a brute-force search algorithm, and we added support for different FL techniques by applying the changes developed by Qi et al. (GenProg-FL).
MUT-APR source code: MUT-APR
To evaluate our approach, we used six C programs from the Software-artifact Infrastructure Repository (SIR), along with comprehensive sets of test inputs. Four come from the Siemens suite: tcas, replace, schedule2, and tot_info. We also used two larger programs: space and sed. For each program, we selected versions seeded with the fault of interest.
Experiment data: data_sqjresults