Artifact Evaluation for TACAS 2022
As in 2021, TACAS 2022 will include an artifact evaluation (AE) for all types of papers. There will be two rounds of the AE: For regular tool papers and tool demonstration papers, artifact evaluation is compulsory (see the TACAS 2022 call for papers) and artifacts must be submitted to the first round; for research and case study papers, it is voluntary, and artifacts may be submitted to either the first or the second round. All accepted papers with accepted artifacts will receive a badge.
Artifacts and Evaluation Criteria
An artifact is any additional material (software, data sets, machine-checkable proofs, etc.) that substantiates the claims made in the paper and ideally makes them fully replicable. As an example, a typical artifact would consist of the tool (in binary or source code form) and its documentation, the input files (e.g., models analysed or programs verified) used for the tool evaluation in the paper, and a configuration file or document describing the parameters used in the experiments. The Artifact Evaluation Committee will read the corresponding paper and evaluate the submitted artifact w.r.t. the following criteria:
- consistency with and replicability of results presented in the paper,
- completeness,
- documentation and ease of use,
- availability in a permanent online repository.
The evaluation will be based on the linked guidelines, and the AEC will decide which of the badges can be assigned to a given artifact and added to the title page of the paper in case of acceptance.
Compulsory AE for Tool and Tool Demonstration Papers
Regular tool papers and tool demonstration papers must be accompanied by an artifact, submitted for evaluation by the Artifact Evaluation Committee at paper submission time. In general, these papers are expected to satisfy the requirements for the “Functional” and “Available” badges. The results of the artifact evaluation will be taken into consideration in the paper reviewing and rebuttal phase of TACAS 2022. The fact that not all experiments may be reproducible (e.g., due to high computational demands) or that the tool cannot be made available (e.g., due to proprietary restrictions) does not lead to automatic rejection of the paper.
Artifact Evaluation for Research and Case Study Papers
Authors of research papers and case study papers are also invited to submit an artifact. In this case, the submission is voluntary. If the artifact is submitted at the same time as the paper, then it will be reviewed immediately by the Artifact Evaluation Committee (AEC) and the results of the evaluation can be taken into consideration during the paper reviewing and rebuttal phase of TACAS 2022. Authors of accepted papers who did not submit an artifact will be invited to submit an artifact after notification. Authors of artifacts that are accepted by the AEC will receive one or more of the badges (for Functionality, Reusability, and Availability) that can be shown on the title page of the corresponding paper.
Artifact Submission
An artifact submission consists of
- an abstract that summarizes the artifact and its relation to the paper,
- a .pdf file of the paper (uploaded via EasyChair), which may be modified from the submitted version to take reviewers’ comments into account (if submitted in the second round),
- a link to a .zip file (available for download) containing
  - a directory with the artifact itself,
  - a text file named License.txt that contains the license for the artifact (it is required that the license at least allows the Artifact Evaluation Committee to evaluate the artifact w.r.t. the criteria mentioned above), and
  - a text file called Readme.txt that contains detailed, step-by-step instructions on how to use the artifact to replicate the results in the paper, and
- an indication whether and how the artifact will be made publicly available and archived permanently, if the paper is accepted.
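Assuming the layout above, packaging a submission might look like the following sketch. All directory and file names except License.txt and Readme.txt are illustrative, not mandated; Python's standard-library zipfile module is used so that no separate zip tool is needed:

```shell
# Illustrative layout only; the actual artifact contents are up to you.
mkdir -p submission/artifact
echo "demo tool placeholder" > submission/artifact/run.sh
echo "License text goes here (must at least permit AEC evaluation)." > submission/License.txt
echo "Step-by-step replication instructions go here." > submission/Readme.txt
# Package everything into a single .zip for upload
# (python3 -m zipfile avoids depending on a separate zip utility):
python3 -m zipfile -c artifact.zip submission/
# Sanity check: list the archive contents before uploading the link.
python3 -m zipfile -l artifact.zip
```

Listing the archive before submission is a cheap way to catch a missing Readme.txt or License.txt at the top level of the .zip.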
Artifact submission is handled via EasyChair. Artifacts must be submitted in the artifact evaluation track, with the title “Artifact for Paper (papertitle)” and the same authors as the submitted paper.
Guidelines for Artifacts
We expect authors to package their artifact and write their instructions such that Artifact Evaluation Committee (AEC) members can evaluate the artifact using the TACAS 2022 Artifact Evaluation Virtual Machine for VirtualBox, available via Zenodo. The virtual machine is based on an Ubuntu 20.04 LTS GNU/Linux operating system with the following additional packages: build-essential, cmake, clang, mono-complete, openjdk-8-jdk, python3.8, pip3, ruby, and a 32-bit libc. Moreover, the VirtualBox guest additions are installed on the VM; it is therefore possible to connect a shared folder from the host computer. The login and password of the default user are “tacas22” / “tacas22”.
If the artifact requires additional software or libraries that are not part of the virtual machine, the instructions must include all necessary steps for their installation and setup. Any software that is not already part of the virtual machine must be included in the .zip file. AEC members will not download software or data from external sources, and the artifact must work without a network connection. In case you feel that this VM will not allow an adequate replication of the results in your paper, please contact the AEC chairs prior to artifact submission (also see below).
It is to the advantage of authors to prepare an artifact that is easy to evaluate by the AEC. Some guidelines:
- Document in detail how to replicate most, or ideally all, of the (experimental) results of the paper using the artifact.
- Keep the evaluation process simple: provide easy-to-use scripts and detailed documentation that assumes minimal expertise on the part of the user.
- For experiments that require substantial resources (hardware or time), provide a way to replicate a subset of the paper's results with modest resources (RAM, number of cores), so that the results can be reproduced on a variety of hardware platforms, including laptops, in a reasonable amount of time. Do include the full set of experiments as well (for reviewers with sufficient hardware or time), but make it optional.
- State the resource requirements, or the environment in which you successfully tested the artifact, in the instructions file (RAM, number of cores, CPU frequency).
- Do not submit a virtual machine; only submit your files, which AEC members will copy into the provided virtual machine.
- State which of the three badges (Functional, Reusable, Available) you apply for. (Note that Functional and Available are expected for papers where artifact evaluation is compulsory.)
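As a purely hypothetical illustration of the guidelines above, a top-level driver script could expose a cheap "smoke" subset of the experiments alongside the full set. Every name, path, and timeout below is made up, not part of the TACAS requirements:

```shell
#!/bin/sh
# Hypothetical driver script: "smoke" replicates a small subset of the
# paper's results quickly; "full" reruns every experiment.
set -eu
MODE="${1:-smoke}"
case "$MODE" in
  smoke) BENCHMARKS="benchmarks/small"; TIMEOUT=60   ;;
  full)  BENCHMARKS="benchmarks";       TIMEOUT=3600 ;;
  *) echo "usage: $0 [smoke|full]" >&2; exit 1 ;;
esac
echo "mode=$MODE benchmarks=$BENCHMARKS timeout=${TIMEOUT}s" > run.log
# The real tool invocation would go here, e.g. (illustrative only):
# for f in "$BENCHMARKS"/*.input; do
#   timeout "$TIMEOUT" ./mytool "$f" >> results.csv
# done
```

Defaulting to the smoke mode means a reviewer who simply runs the script gets a quick end-to-end check before committing hours to the full experiment set.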
Members of the AEC will use the submitted artifact for the sole purpose of artifact evaluation. We do, however, encourage authors to make their artifacts publicly and permanently available.
Possibility for exemption
In case your experiments cannot be replicated inside the provided VM, please contact the AEC chairs before submission. Possible reasons include the need for special hardware (GPUs, compute clusters, Bluetooth devices, robots, etc.), software licensing issues, or the need to access the internet. In any case, you are encouraged to submit a complete artifact, so that reviewers who do have access to the required resources have the option to replicate the experiments.
Important Dates
All dates refer to 23:59 "anywhere on Earth" (UTC-12) on that day.
First round:
2021-11-04 | Artifact submission deadline (mandatory for tool and tool demo papers, optional for research and case study papers)
2021-11-24 | Communication with authors in case of technical problems with the artifact
2021-12-08 | Notification of AE reviews (results will also be communicated to the TACAS paper reviewers and considered for the paper acceptance decision)

Second round:
2022-01-05 | Artifact submission deadline (optional for accepted research and case study papers that did not submit to the first round)
TBA | Communication with authors in case of technical problems with the artifact
2022-02-16 | Notification of AE reviews
TBA | Extended deadline for updating the camera-ready paper with AEC badges (if the artifact is accepted)
Artifact Evaluation Chairs
- Swen Jacobs (CISPA Helmholtz Center for Information Security)
- Andrew Reynolds (University of Iowa, USA)
Artifact Evaluation Committee
- Stefanie Mohr (Technical University of Munich)
- Veronika Šoková (FIT Brno University of Technology)
- Philip Offtermatt (Université de Sherbrooke)
- Mathias Fleury (University of Freiburg)
- Mitja Kulczynski (Kiel University)
- Tim Quatmann (RWTH Aachen University)
- Mathias Preiner (Stanford University)
- Etienne Renault (LRDE)
- Michael Backenköhler (Saarland University)
- Debasmita Lohar (Max Planck Institute for Software Systems)
- Malte Mues (TU Dortmund)
- Yuki Nishida (Kyoto University)
- Maximilian Alexander Köhl (Saarland University)
- Michael Schwarz (Technische Universität München)
- Yong Li (Institute of Software, Chinese Academy of Sciences)
- Jiří Pavela (FIT VUT)
- Lei Shi (University of Pennsylvania)
- Matthew Sotoudeh (University of California, Davis)
- Maurice Laveaux (Eindhoven University of Technology)
- Ali Shamakhi (Tehran Institute for Advanced Studies)
- Morten Konggaard Schou (Aalborg University)
- Fabian Meyer (RWTH Aachen University)
- Kush Grover (Technical University of Munich)
- Priyanka Darke (Tata Consultancy Services)
- Sebastian Biewer (Saarland University)
- Benjamin Bisping (TU Berlin)
- Felipe Gorostiaga (IMDEA Software Institute)
- Pavel Andrianov (ISP RAS)
- Daniela Kaufmann (Johannes Kepler University Linz)
- Muhammad Osama (Eindhoven University of Technology)
- José Proença (CISTER-ISEP and HASLab-INESC TEC)
- Marek Chalupa (Faculty of Informatics, Masaryk University)
- Olav Bunte (Eindhoven University of Technology)
- Damien Busatto-Gaston (Université Libre de Bruxelles)
- Makai Mann (MIT Lincoln Laboratory)
- Jip Spel (RWTH Aachen University)
- Joseph Scott (University of Waterloo)
- Mouhammad Sakr (University of Luxembourg)
- Hans-Jörg Schurr (Inria - Nancy - Grand Est)