Artifact Evaluation for TACAS 2023
TACAS 2023 will include an artifact evaluation (AE). There are two separate deadlines for the artifact submission, depending on the paper category:
- For regular tool papers and tool demonstration papers, the artifact evaluation is compulsory and the artifact must be submitted before the paper’s acceptance notification.
- For accepted research and case study papers, the artifact evaluation is optional and the artifact may be submitted shortly after the paper’s acceptance notification.
Artifacts and Evaluation Criteria
An artifact is any additional material (software, data sets, machine-checkable proofs, etc.) that substantiates the claims made in the paper and ideally makes them fully replicable. As an example, a typical artifact would consist of the tool (in binary or source code form) and its documentation, the input files (e.g., models analyzed or programs verified) used for the tool evaluation in the paper, and a configuration file or document describing the parameters used in the experiments. The Artifact Evaluation Committee (AEC) will read the corresponding paper and evaluate the submitted artifact w.r.t. the following criteria:
- consistency with and replicability of results presented in the paper,
- completeness,
- documentation and ease of use,
- availability in a permanent online repository.
The evaluation will be based on the linked guidelines. The AEC will decide which of the badges, among Functional, Reusable, and Available, will be assigned to a given artifact and added to the title page of the paper in case of acceptance.
Compulsory AE for Tool and Tool Demonstration Papers
Regular tool papers and tool demonstration papers are required to submit an artifact for evaluation by November 10, 2022. These artifacts are in general expected to satisfy the requirements for the "Functional" and "Available" badges. The results of the artifact evaluation will be taken into consideration in the paper reviewing and rebuttal phase of TACAS 2023. The fact that not all experiments are reproducible (e.g., due to high computational demands) or that the tool cannot be made available (e.g., due to proprietary restrictions) does not automatically lead to rejection of the paper. However, the authors must clarify whether any of the above conditions apply when submitting the artifact (or request an exemption from the AE; see below).
Optional Artifact Evaluation for Accepted Research and Case Study Papers
Authors of accepted research papers and case study papers are also invited to submit an artifact; in this case, the submission is voluntary. Invitations to submit will be sent two weeks after the acceptance notification.
Artifact Submission
The artifact submission is handled via EasyChair. Artifacts must be submitted in the "TACAS 2023 - Artifact Evaluation" track, with the title "Artifact for Paper (title of the original paper)" and the same authors as the submitted paper. An artifact submission consists of:
- An abstract that summarizes the artifact and its relation to the paper (inserted in EasyChair),
- A .pdf file of the paper (uploaded via EasyChair). In the case of accepted research and case study papers, the file can be modified from the submitted version to take reviewers’ comments into account,
- A .zip file (uploaded via EasyChair) containing:
  - a text file named License.txt that contains the license for the artifact (the license must at least allow the AEC to evaluate the artifact w.r.t. the criteria mentioned above),
  - a text file called Readme.txt that contains the following (a minimal example sketch follows this list):
    - ARTIFACT LINK: a (working) public link to the artifact,
    - ADDITIONAL REQUIREMENTS: any additional software or hardware requirements for running the artifact, such as proprietary software that needs a license or particular hardware resources (e.g., GPUs),
    - EXPERIMENT RUNTIME: the total runtime (or at least an estimate for specific hardware) needed to run the experiments and/or a sufficient subset of the experiments for evaluating the artifact, and
    - REPRODUCIBILITY INSTRUCTIONS: detailed, step-by-step instructions on how to set up and use the artifact to replicate the results in the paper (note: the instructions must not assume any specific knowledge apart from basic usage of a Linux system).
- A .zip file (not submitted through EasyChair but made available for download) of the artifact containing all the data (code, binaries, scripts, benchmarks, dependencies, etc.) necessary to run the artifact on the TACAS ’23 virtual machine (note: not the virtual machine itself; see the instructions below).
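To make the expected Readme.txt structure concrete, here is a minimal sketch; the DOI, tool name, scripts, and runtimes are hypothetical placeholders, not prescribed values:

    ARTIFACT LINK:
      https://doi.org/10.5281/zenodo.0000000  (placeholder DOI)

    ADDITIONAL REQUIREMENTS:
      None beyond the TACAS 2023 VM; ~8 GB RAM recommended.

    EXPERIMENT RUNTIME:
      Full experiments: ~20 hours on a 4-core 2.6 GHz machine.
      Representative subset (Table 1): ~30 minutes.

    REPRODUCIBILITY INSTRUCTIONS:
      1. Copy artifact.zip into the VM and unzip it.
      2. cd artifact && ./install.sh    (installs bundled dependencies, offline)
      3. ./run.sh subset                (replicates the representative subset)
      4. ./run.sh all                   (optional: full experiments)
      Output is written to results/; compare with expected/.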
Guidelines for Artifacts
We expect authors to package their artifact and write their instructions such that AEC members can evaluate the artifact using the TACAS 2023 Artifact Evaluation Virtual Machine for VirtualBox, available via Zenodo.
The virtual machine is based on an Ubuntu 22.04 LTS GNU/Linux operating system with the following additional packages: build-essential, cmake, clang, mono-complete, openjdk-8-jdk, python3.10, pip3, ruby, and a 32-bit libc. Moreover, VirtualBox guest additions are installed on the VM; it is therefore possible to connect a shared folder from the host computer. If the artifact requires additional software or libraries that are not part of the virtual machine, the instructions must include all necessary steps for their installation and setup. Any software that is not already part of the virtual machine must be included in the .zip file. AEC members will not download software or data from external sources, and the artifact must work without a network connection. In case you feel that this VM will not allow an adequate replication of the results in your paper, please contact the AEC chairs prior to artifact submission.
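Since AEC members will not download anything, installation steps for bundled dependencies must work entirely offline. A sketch of what such steps might look like in your instructions, assuming the artifact ships a hypothetical deps/ directory with the needed package files:

    # inside the VM, from the unpacked artifact directory
    sudo dpkg -i deps/*.deb        # install bundled Debian packages, no network needed
    pip3 install --no-index --find-links deps/wheels -r requirements.txt
                                   # install bundled Python wheels without network access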
It is to the authors’ advantage to prepare an artifact that is easy for the AEC to evaluate. Some guidelines:
- Your artifact should not be anonymized (in contrast to your paper, which should be anonymized). Moreover, for practical reasons you should not rename your tool in the paper or the artifact, even if keeping the name entails a potential loss of anonymity. For instance, if you submit a paper/artifact on a new tool called XYZ5 that has a clear connection to a tool called XYZ4 (whose authors are publicly known), then you should keep calling your tool XYZ5 and not hide its relationship with XYZ4. Your paper/artifact will not be rejected because of this.
- Document in detail how to replicate most, or ideally all, of the experimental results of the paper using the artifact.
- Keep the evaluation process simple through easy-to-use scripts, and provide detailed documentation that assumes minimal user expertise.
- For experiments that require a large amount of resources (hardware or time), we strongly recommend providing a way to replicate a representative subset of the results from the paper with reasonably modest resources (RAM, number of cores), so that the results can be reproduced on various hardware platforms, including laptops, in a reasonable amount of time. Do include the full set of experiments as well (for reviewers with sufficient hardware or time), but make it optional (see the driver-script sketch after this list).
- State the resource requirements (RAM, number of cores, CPU frequency), or the environment in which you successfully tested the artifact, in the Readme.txt file.
- Do not submit a virtual machine; only submit your files, which AEC members will copy into the provided virtual machine.
- For the "Available" badge, you have to upload your artifact to a permanent repository (e.g., Zenodo, figshare, or Dryad) that provides a Digital Object Identifier (DOI) and use that link in your submission. So, for obtaining the Available badge you cannot use non permanent repositories (e.g., institutional website, github, Google Drive, Dropbox).
Members of the AEC will use the submitted artifact for the sole purpose of artifact evaluation. We do, however, encourage authors to make their artifacts publicly and permanently available.
Please note that the reviewers will only have a limited time to reproduce the experiments, and they will likely use a machine that is different from yours. Thus, again, if your experiments need a significant amount of time (e.g., longer than a few days), please prepare a representative subset of experiments that can be run in a shorter amount of time (ideally, several hours). Lastly, test your artifact in the provided virtual machine, ideally on more than one host machine.
Possibility for Exemption
Under particular conditions, tool papers and tool demonstration papers may be exempted from submitting an artifact, from using the provided virtual machine, or from having to obtain both the "Functional" and "Available" badges.
Possible reasons for such an exemption include the need for special hardware (GPUs, compute clusters, Bluetooth devices, robots, etc.), software licensing issues, or the need to access the internet. Note that even if your experiments need special resources, you are encouraged to submit a complete artifact, ideally one that replicates at least a subset of the experiments. This way, the reviewers have the option to replicate the experiments if they have access to the required resources.
Important Dates
All dates refer to 23:59 "anywhere on Earth" (UTC-12) on that day.
Deadlines for tool papers and tool demonstration papers:
- November 10, 2022: Artifact submission deadline for tool and tool demonstration papers (mandatory)
- November 20–21, 2022: Communication with authors in case of technical problems with the artifact
Deadlines for accepted research and case study papers:
- January 5, 2023: Artifact submission deadline for accepted research and case study papers (optional)
- January 20–22, 2023: Communication with authors in case of technical problems with the artifact
Artifact Evaluation Chairs
- Sergio Mover (Ecole Polytechnique, France)
- Grigory Fedyukovich (Florida State University, USA)
Artifact Evaluation Committee
- Ahmed Irfan -- SRI International, USA
- Aleksandr Fedchin -- Tufts University, USA
- Alexander Bork -- Rheinisch-Westfälische Technische Hochschule Aachen, Germany
- Andres Noetzli -- Stanford University, USA
- Anton Xue -- University of Pennsylvania, USA
- Baoluo Meng -- GE Global Research, USA
- Denis Mazzucato -- Ecole Normale Superieure, France
- Dmitry Mordvinov -- Saint-Petersburg State University, Russia
- Dongjoo Kim -- Seoul National University, Korea
- Emanuele De Angelis -- Istituto di Analisi dei Sistemi ed Informatica "Antonio Ruberti", Italy
- Federico Mora -- University of California, Berkeley, USA
- Felipe R. Monteiro -- Amazon Web Services, USA
- Hadar Frenkel -- CISPA – Helmholtz Center for Information Security, Germany
- Hansol Yoon -- The Republic of Korea Air Force
- Hari Govind Vediramana Krishnan -- University of Waterloo, Canada
- Jingbo Wang -- University of Southern California, USA
- Jip J. Dekker -- Monash University, Australia
- Jiří Pavela -- Brno University of Technology, Czech Republic
- Leonardo Alt -- Ethereum Foundation, Germany
- Martin Jonas -- Fondazione Bruno Kessler, Italy
- Martin Blicha -- Università della Svizzera italiana, Switzerland, and Charles University, Czech Republic
- Olli Saarikivi -- Microsoft Research, USA
- Pamina Georgiou -- Vienna University of Technology, Austria
- Pedro Henrique Azevedo de Amorim -- Cornell University, USA
- Priyanka Darke -- Tata Consultancy Services, India
- Saeid Tizpaz Niari -- University of Texas at El Paso, USA
- Satoshi Kura -- National Institute of Informatics, Japan
- Srinidhi Nagendra -- Chennai Mathematical Institute, India
- Sumanth Prabhu -- Indian Institute of Science and Tata Research Development and Design Centre, India
- Thomas Møller Grosen -- Aalborg University, Denmark
- Timothy Alberdingk Thijm -- Princeton University, USA
- Zafer Esen -- Uppsala University, Sweden