Abstract

Branching and merging are common practices in collaborative software development. They increase developer productivity by fostering teamwork, allowing developers to independently contribute to a software project. Despite such benefits, branching and merging come at a cost: the need to merge software and to resolve merge conflicts, which often occur in practice. While modern merge techniques, such as 3-way or structured merge, can resolve many such conflicts automatically, they fail when the conflict arises not at the syntactic but at the semantic level. Detecting such conflicts requires understanding the behavior of the software, which is beyond the capabilities of most existing merge tools. As such, semantic conflicts can only be identified and fixed with significant effort and knowledge of the changes to be merged. While semantic merge tools have been proposed, they are usually heavyweight, based on static analysis, and need explicit specifications of program behavior. In this work, we take a different route and explore the automated creation of unit tests as partial specifications to detect unwanted behavior changes (conflicts) when merging software. We systematically explore the detection of semantic conflicts through unit-test generation. Relying on a ground-truth dataset of 38 software merge scenarios extracted from GitHub, we manually analyzed them to investigate whether semantic conflicts exist. Next, we applied test-generation tools to study their detection rates. We propose improvements (code transformations) and study their effectiveness, and we qualitatively analyze the detection results to propose future improvements. For example, we analyze the generated test suites in false-negative cases to understand why the conflict was not detected. Our results show the feasibility of using test-case generation to detect semantic conflicts: the method is versatile, requires only limited deployment effort in practice, and does not require explicit behavior specifications.

Using Regression Testing for Detecting Semantic Conflicts

We present here our dataset of 40 changes to the same declarations, taken from 38 merge scenarios mined from 28 different projects. For 15 of these changes, we observe the occurrence of semantic conflicts (column Semantic Conflict). Using Regression Testing with Testability Transformations, we were able to automatically detect 4 of these conflicts.

Here you can find the list of changes to the same declarations that we analyzed in our study. Each change is represented as a row in the table below.

  • In column Semantic Conflict, we report whether the change represents a semantic conflict based on our notion of local interference (manual analysis).
  • In column Detected Conflict, we report whether the semantic conflict was detected in our study using regression testing.
  • In columns Test Suites for Original Version and Test Suites for Transformed Version, you can find the generated test suites for the original and transformed versions, respectively, of each merge scenario.

For additional details regarding our dataset, you can find further information here. We provide a description for each row of the table above, such as whether the associated changes represent a conflict, a summary of the changes performed by each parent commit, and, when applicable, a test case revealing the conflict.

For additional details regarding the set of test cases that detected semantic conflicts, please check this file. It lists the test cases of each test suite that detected any of the semantic conflicts.

Study Replication

Here you can find links to the scripts and dataset we used to perform our study. To support replications, we provide our sample of merge scenarios as a dataset. This dataset contains the build files our scripts require to generate test suites with the unit test generation tools and to execute these suites against the different versions of each merge scenario.
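For intuition about what the execution step involves, a generated suite is an ordinary JUnit suite, so running it against a given version amounts to running JUnit with that version's jar on the classpath. In the sketch below, the jar names and the test class name RegressionTest0 are hypothetical; the actual suites and build files are in our dataset.

      # Hypothetical names for illustration; the actual jars and compiled
      # test suites come from our dataset and generated test folders.
      java -cp junit-4.12.jar:hamcrest-core-1.3.jar:original-version.jar:generated-tests \
           org.junit.runner.JUnitCore RegressionTest0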

Below, we present in detail how our scripts can be used to replicate our study using our dataset, or to perform a new study using another sample.

Replicating the study

We recommend following the steps below when replicating our study; a consolidated command-line sketch is given after the list.

  1. Getting the build files - We provide the sample of our study as a dataset containing the jar files for the original and transformed versions of each merge scenario commit. Clone the GitHub project and run the script get_sample.py. As a result, the file results_semantic_study.csv will be created, which is required as input for the next step.
  2. Setting up the scripts - Once the local dataset is available, clone the project with our scripts, which generate and execute test suites using unit test generation tools. After cloning, rename the configuration file env-config.template.json to env-config.json. Next, set path_hash_csv to the path of the file generated in the previous step. Additional information for setting up other options is given on the GitHub project page.
  3. Running the study - Our scripts require Python 3.6.x. On a Linux terminal, you can simply call semantic_study.py. As a result, the file semantic_conflict_results.csv and the folder output-test-dest, which groups the generated test suites, will be created.
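To make these steps concrete, the sketch below chains them on a Linux terminal. It is only an illustration: the repository URLs are placeholders rather than actual links from our artifacts, and the env-config.json edit shows only the path_hash_csv option described above.

      # Illustrative replication sketch; <...> are placeholders, not
      # actual repository URLs from our artifacts.

      # Step 1: clone the dataset project and fetch the sample
      git clone <dataset-project-url> dataset
      cd dataset
      python get_sample.py                 # creates results_semantic_study.csv

      # Step 2: clone the scripts project and set up the configuration
      cd ..
      git clone <scripts-project-url> scripts
      cd scripts
      mv env-config.template.json env-config.json
      # edit env-config.json so that path_hash_csv points to the CSV above, e.g.:
      #   "path_hash_csv": "../dataset/results_semantic_study.csv"

      # Step 3: generate and execute the test suites (requires Python 3.6.x)
      python3.6 semantic_study.py          # creates semantic_conflict_results.csv
                                           # and the output-test-dest folder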

Running a new study

  1. Building Merge Scenarios - For our study, we used the MiningFramework to automatically create the build files associated with each merge scenario commit on Travis. For a new sample, you can provide a list of projects, and MiningFramework will use Travis to generate the jar files associated with the original and transformed versions of each merge scenario. On the MiningFramework project page, you can find additional information about how this framework works.
  2. Testability Transformations - For each merge scenario evaluated in our study, we consider an original and a transformed version of each commit. These transformations are performed using a jar file that we implemented. MiningFramework already applies these transformations, so you do not need to call the jar for each commit. In case you want to apply the transformations manually, you can visit the project page and look for instructions on how to apply them.
  3. Running the study - After generating the jar files, MiningFramework creates a local file results_semantic_study.csv with the information for each merge scenario mined by the framework. This file must be given as input to the scripts that generate and execute test suites. From this point, follow the steps of Replicating the study presented above, starting at step 2 (see the sketch after this list).
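As with the replication sketch above, a short outline of the hand-off between MiningFramework and our scripts may help; the MiningFramework invocation itself is documented on its project page and is not reproduced here.

      # Steps 1-2: run MiningFramework as described on its project page;
      # it builds the original and transformed jar files on Travis and
      # writes results_semantic_study.csv locally (invocation omitted).

      # Step 3: reuse the replication scripts, starting from step 2 above
      mv env-config.template.json env-config.json
      # point path_hash_csv at the results_semantic_study.csv produced above
      python3.6 semantic_study.py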