Joint Artifact Evaluation Track and ROSE Festival - Call for Papers

Goal and Scope

The ICSME 2022 Joint Artifact Evaluation Track and ROSE Festival is a track that aims to celebrate open science in Software Engineering research.

The Artifact Evaluation Track will assess artifacts submitted by the authors of papers accepted for publication in any of the ICSME, SCAM, and VISSOFT archival tracks whose papers appear in the proceedings, and will award badges, to be displayed in those papers, based on their contributions to open science.

The ROSE Festival (Recognizing and Rewarding Open Science in SE), a special track within ICSME, is a venue where researchers can receive public credit for facilitating and participating in open science in Software Engineering. For ICSME 2022, the ROSE Festival will (a) host lightning talks about the accepted artifacts, and (b) present invited talks about open science in Software Engineering research.

We invite authors of papers of all lengths accepted to the ICSME, SCAM, and VISSOFT 2022 Technical Tracks to submit artifacts associated with those papers to the ICSME Artifact Evaluation Track. We also welcome submissions from previous editions of ICSME, SCAM, and VISSOFT, provided that the artifacts were not submitted to artifact evaluation at the time. Papers with artifacts that meet our review criteria will be awarded badges, noting their contributions to open science in Software Engineering. If an artifact is accepted, authors will be invited to give a lightning talk on the artifact during the ROSE Festival at ICSME 2022, and we will work with IEEE to add badges corresponding to the Available, Reusable, Reproduced, and Replicated categories defined in the table below to the electronic versions of the paper(s). Artifacts of interest include (but are not limited to) the following:

  • Software, i.e., implementations of systems or algorithms potentially useful in other studies.
  • Automated experiments that replicate the study in the accepted paper.
  • Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.
  • Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.
  • Qualitative artifacts such as interview scripts and survey templates.

This list is not exhaustive, so authors are asked to email the chairs before submitting if their proposed artifact is not of one of these types.

Evaluation Criteria

 

 

| Badge level | Category | Badge awarded | Criteria |
|---|---|---|---|
| Available | — | Open Research Objects (ORO) | Placed on a publicly accessible archival repository. A DOI or link to this persistent repository along with a unique identifier for the object is provided. Artifacts have not been formally evaluated. |
| Functional | Artifacts Evaluated | No badge | Artifacts documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation. |
| Reusable | Artifacts Evaluated | Research Objects Reviewed (ROR) | Functional + very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to. |
| Reproduced | Results Validated | Results Reproduced (ROR-R) | The artifacts provided by the original authors are Functional + the main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the original authors. |
| Replicated | Results Validated | Results Replicated (RER) | The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts. |

Note: the Results Validated badges (Reproduced and Replicated) are awarded to the study being replicated or reproduced.

The ICSME artifact evaluation track uses a single-anonymous review process. Artifacts will be evaluated using 1) the criteria summarized in the table above, 2) the quality of the documentation produced by the authors as described in Documenting the Artifact below, and 3) the process described in Reviewing the Artifact below.

The goal of this track is to encourage reusable research products. Hence, no Functional badges will be awarded.

 

Special Issue

Authors of the best artifacts will be invited to submit an extended description of the artifact to a special issue of the Journal of Systems and Software (https://www.sciencedirect.com/journal/journal-of-systems-and-software). Papers in this journal track are typically between three and six pages long.

 

Best Artifact Award

There will be a Best Artifact Award for each venue (ICSME, VISSOFT, SCAM) to recognize the effort of authors creating and sharing outstanding research artifacts.

Submission and Review

Note that all submissions, reviewing, and notifications for this track will be via the ICSME 2022 EasyChair conference management system (“Artifact Evaluation” Track).

• During the first week after submission, the kick-the-tires period, authors may be asked to provide clarifications if the reviewers have problems with the artifacts.
• After the kick-the-tires period, the full reviewing process starts.
• Authors will be notified of final decisions on the notification date.

Authors of the papers accepted to the tracks must perform the following steps to submit an artifact for the Available (ORO) and Reusable (ROR) badges:

1. Prepare the artifact
2. Make the artifact available
3. Document the artifact
4. Declare conflicts of interest
5. Submit the artifact

See Documenting the Artifact below for more details on how to submit for the Reproduced (ROR-R) and Replicated (RER) badges.

Preparing the Artifact

There are two options depending on the nature of the artifacts: an Installation Package or a Simple Package. In both cases, the configuration and installation of the artifact should take less than 30 minutes; otherwise, the artifact is unlikely to be endorsed, simply because the committee will not have sufficient time to evaluate it.

• Installation Packages: If the artifact consists of a tool or software system, then the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instructions, code, and data such that a computer scientist with reasonable knowledge of scripting, build tools, etc. could install, build, and run the code. If the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools (see the example sketch after this list). Similarly, if the artifact requires specific hardware, this should be clearly documented in the requirements (see Documenting the Artifact). Note that we expect the artifacts to have been vetted on a clean machine before submission.
• Simple Package: If the artifact contains only documents that can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration), the authors can just save all documents in a single package file (zip or tar.gz).
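For illustration, a container image can be quite small. The following is a minimal sketch of a Dockerfile for a hypothetical Python-based artifact; the file names requirements.txt and run_experiments.py are placeholders, so adapt the base image and entry point to your own artifact:

```dockerfile
# Hypothetical sketch: container image for a Python-based artifact.
FROM python:3.10-slim

WORKDIR /artifact

# Install the artifact's dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the remaining code and data into the image.
COPY . .

# Default command: reproduce the paper's results end to end.
CMD ["python", "run_experiments.py"]
```

Reviewers can then build and run the artifact with `docker build -t artifact .` followed by `docker run artifact`, without installing anything beyond Docker itself.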

Making the Artifact Available

Authors need to make the packaged artifact (installation package or simple package) available so that the Evaluation Committee can access it. We suggest a link to a public repository or to a single archive file in a widely available archive format. If the authors are aiming for the Available badge, the artifact needs to be publicly accessible. Note that links to personal websites or to temporary drives (e.g., Google Drive) are non-persistent, and thus artifacts placed in such locations will not be considered for the Available badge. Examples of persistent storage providers that offer DOIs are Zenodo, figshare, and the Open Science Framework. For larger files like VirtualBox images, we recommend the use of such open repositories. Institutional repositories are acceptable. In all cases, repositories used to archive data should have a declared plan to enable permanent accessibility.

If the authors are not aiming for the Available badge, the artifact does not have to be publicly accessible for the review process. In this case, the authors are asked to provide a private link or a password-protected link.

Documenting the Artifact

Authors need to write and submit documentation explaining how to obtain the artifact package, how to unpack it, how to get started, and how to use the artifact in sufficient detail. The artifact submission should describe only the technicalities of the artifact and uses of the artifact that are not already described in the paper. The submission should contain the following documents (in markdown plain-text format within the submission root folder); an example layout follows this list:

• A README.md main file describing what the artifact does and how and where it can be obtained (with hidden links and access password if necessary). It should also contain a clear, step-by-step description of how to reproduce the results presented in the paper.
• A LICENSE.md file describing the distribution rights. Note that to qualify for the Available badge, the license needs to be some form of open source license.
• A REQUIREMENTS.md file describing all necessary software/hardware prerequisites.
• An INSTALL.md file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation. This could be, for instance, information on what output to expect that confirms that the code is installed and working, and that the code is doing something interesting and useful.
• A copy of the accepted paper in PDF format.
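As a point of reference, the submission root folder might be laid out as in the following hypothetical sketch (the folder names beyond the required files are illustrative):

```
artifact-root/
├── README.md         # what the artifact does + step-by-step reproduction guide
├── LICENSE.md        # distribution rights (open source for the Available badge)
├── REQUIREMENTS.md   # software/hardware prerequisites
├── INSTALL.md        # installation instructions + basic smoke test
├── paper.pdf         # copy of the accepted paper
├── code/             # scripts/tools used to produce the results
└── data/             # input data and/or generated outputs
```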

For Reproduced (ROR-R) submissions:

• The original authors must have made artifacts available for use in the reproduction. The artifact must meet the criteria for the Available (ORO) badge. You do not need to document the artifact.
• Submissions for the Reproduced (ROR-R) badge must fall within the scope of topics covered by ICSME, SCAM, and VISSOFT.
• If the submission is accepted, we will recommend that the original paper receive a Reproduced (ROR-R) badge and, if not already applied, an Available (ORO) badge.
• You must submit a README.md file in markdown plain-text format, containing the following details (see the skeleton after this list):
  • A link to the reproduction study (e.g., a DOI link to a publisher site or to a pre-print).
  • A link to the original study.
  • A link to the artifacts used in the reproduction.
  • An explanation of whether the study is a partial or complete reproduction.
  • An explanation of what was reproduced, and the results of the reproduction.
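A minimal README.md for a Reproduced (ROR-R) submission might follow this hypothetical skeleton (all titles, DOIs, and findings are placeholders):

```markdown
# Reproduction of "<Original Paper Title>"

- Reproduction study: https://doi.org/<placeholder>
- Original study: https://doi.org/<placeholder>
- Artifacts used: https://doi.org/<placeholder>

## Scope
Partial reproduction: covers RQ1 and RQ2 of the original study.

## What was reproduced and results
We re-ran the original analysis scripts on the published dataset; the main
results agreed with the reported values within the expected tolerance.
```

A README.md for a Replicated (RER) submission is analogous, minus the link to the original artifacts.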

For Replicated (RER) submissions:

• Submissions for the Replicated (RER) badge must fall within the scope of topics covered by ICSME, SCAM, and VISSOFT.
• If the submission is accepted, we will recommend that the original paper receive a Replicated (RER) badge.
• If you have made new artifacts available as part of your replication, you may apply for the Available (ORO) and Reusable (ROR) badges only if your replication is an accepted submission to ICSME, SCAM, or VISSOFT.
• You must submit a README.md file in markdown plain-text format, containing the following details:
  • A link to the replication study (e.g., a DOI link to a publisher site or to a pre-print).
  • A link to the original study.
  • An explanation of whether the study is a partial or complete replication.
  • An explanation of what was replicated, and the results of the replication.

     

Reviewing the Artifact

The review process will be interactive, as follows:

• Kick-the-tires: Before the actual evaluation, reviewers will check the integrity of the artifact and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that won’t start, immediate crashes on the simplest example, etc.). The Evaluation Committee may contact the authors to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Authors are required to respond promptly (within 48 hours) to help reviewers resolve any technical issues with their artifact submission. This phase will be 1 week long (this is a hard limit), and the authors will be given 2 chances to clarify/fix the problems within this period. If the problems remain after these interactions, the submission will be rejected. The number of interactions is limited to keep the reviewers’ workload at a reasonable level.
• Artifact assessment: Reviewers evaluate the artifacts and provide comments via EasyChair.
• Notification: Authors are informed of the outcome.

Additional information for AUTHORS:

• Make sure that a link to the artifact is in the paper submission and that it remains there in the camera-ready (CR) version. The process for awarding badges is conducted after the CR deadline.
• Download the artifact that you have submitted on a clean install and follow your own instructions to limit the potential problems that the reviewers might encounter. Include at the end of INSTALL.md the configuration on which the installation was tested.
• For software artifacts, consider preparing a virtual image in addition to your self-contained artifact: this greatly reduces possible problems on the reviewers’ side and will benefit future users of the artifact. Some options include Docker, VirtualBox, Vagrant, and Packer. Non-software artifacts (e.g., datasets) should be distributed as a single archive (no need for a VM).
• There will be a maximum of 2 interactions with the reviewers during the kick-the-tires phase. Please make sure that your answers are clear and provide as much detail as possible to resolve the problems that the reviewers are having.
• Reviewers should not need to figure out on their own what the input is for a specific step or what output is produced (and where). All usage instructions should be explicitly documented in the step-by-step instructions of the README.md file.
• Provide an explicit mapping between the results and claims reported in the paper and the steps listed in the README.md for easy traceability (see the hypothetical excerpt after this list).
• Place any additional information that you think might be useful, but that does not fit the required documents, in a separate ADDITIONAL_INFORMATION.md file.
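For example, the traceability mapping in README.md could be as simple as the following hypothetical excerpt (the table, figure, step numbers, and output paths are placeholders):

```markdown
## Mapping of paper claims to README steps

| Claim in the paper             | README step | Expected output     |
|--------------------------------|-------------|---------------------|
| Table 2 (accuracy per project) | Step 3      | results/table2.csv  |
| Figure 4 (runtime comparison)  | Step 5      | results/figure4.pdf |
```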

Additional information for REVIEWERS:

• We adopt the following definitions:
  • Documented: At minimum, an inventory of artifacts is included, and sufficient description is provided to enable the artifacts to be exercised.
  • Consistent: The artifacts are relevant to the associated paper and contribute in some inherent way to the generation of its main results.
  • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package, then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
  • Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.
• Your goal is to gain sufficient confidence that the results of the paper can be obtained by relying on the submitted artifact. You are free to employ any strategy that allows you to do so, such as inspecting the code, rebuilding it from scratch, changing the code/data, etc. However, none of these are mandatory.
• We do not expect an exhaustive inspection of the original paper’s results. However, we do recommend a partial inspection or, at minimum, a spot-check of the produced data sufficient to gain confidence in the use of the artifact to produce the published research results.
• Some artifacts are difficult to run and might require additional hardware/software resources. These will be evaluated on a case-by-case basis, as typical strategies for evaluation might not be applicable.
• For the Available badge, artifacts do not need to have been formally evaluated in order for an article to receive the badge. In addition, they need not be complete in the sense described above. They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.
• For the Reusable badge, the artifacts associated with the paper are of a quality that significantly exceeds minimal functionality. That is, they have all the qualities of the Functional level, but, in addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
• For the Replicated badge, there must be enough evidence that the results of the paper have been produced independently, i.e., without the use of author-supplied artifacts.
• For the Reproduced badge, it must be clear what was supplied by the authors of the original work, and there must be a link to that material.
• For the Replicated and Reproduced badges, exact replication or reproduction of results is not required, or even expected. Instead, the results must be in agreement to within a tolerance deemed acceptable for experiments of the given type. In particular, differences in the results should not change the main claims made in the paper.

     

Submission Link

Please use the following link: https://easychair.org/conferences/?conf=icsme2022 and choose the appropriate track.

Important Dates

**All submission dates are at 23:59 AoE (Anywhere on Earth, UTC-12).**

• Artifact Submission: August 26th, 2022
• Author Notification: September 16th, 2022

Track Chairs

Maria Papoutsoglou (mpapou02@cs.ucy.ac.cy), University of Cyprus, Cyprus

Christoph Treude (christoph.treude@unimelb.edu.au), University of Melbourne, Australia