Welcome to DBTest


About DBTest

With the ever-increasing amount of data being stored and processed, there is an ongoing need to test database management systems, and data-intensive systems in general. In particular, emerging technologies such as non-volatile memory impose new challenges (e.g., avoiding persistent memory leaks and partial writes), and novel system designs involving FPGAs, GPUs, and RDMA call for additional attention and sophistication.

Building on the success of the seven previous workshops, the goal of DBTest 2020 is to bring together researchers and practitioners from academia and industry to discuss key problems and ideas related to testing database systems and applications. The long-term objective is to reduce the cost and time required to test and tune data management and processing products, so that users and vendors can spend more time and energy on actual innovation.

Topics Of Interest

  • Testing of database systems, storage services, and database applications
  • Testing of database systems using novel hardware and software technology (non-volatile memory, hardware transactional memory, …)
  • Testing heterogeneous systems with hardware accelerators (GPUs, FPGAs, ASICs, …)
  • Testing distributed and big data systems
  • Testing machine learning systems
  • Specific challenges of testing and quality assurance for cloud-based systems
  • War stories and lessons learned
  • Performance and scalability testing
  • Testing the reliability and availability of database systems
  • Algorithms and techniques for automatic program verification
  • Maximizing code coverage during testing of database systems and applications
  • Generation of synthetic data for test databases
  • Testing the effectiveness of adaptive policies and components
  • Tools for analyzing database management systems (e.g., profilers, debuggers)
  • Workload characterization with respect to performance metrics and engine components
  • Metrics for test quality, robustness, efficiency, and effectiveness
  • Operational aspects such as continuous integration and delivery pipelines
  • Security and vulnerability testing
  • Experimental reproduction of benchmark results
  • Functional and performance testing of interactive data exploration systems
  • Traceability, reproducibility, and reasoning for ML-based systems


Paper Submission

Authors are invited to submit original, unpublished research papers that are not being considered for publication in any other forum.

Submission Guidelines

EasyChair


March 6, 2020 / 11:59PM US EDT


April 3, 2020 / 11:59PM US EDT

Notification Of Outcome

April 17, 2020 / 11:59PM US EDT

Camera-Ready Copy

Program Committee

Anastasia Ailamaki, EPFL, Switzerland
Anisoara Nica, SAP SE, Canada
Anja Grünheid, Google, USA
Artur Andrzejak, University of Heidelberg, Germany
Caetano Sauer, Tableau, Germany
Danica Porobic, Oracle, USA
Eric Lo, Chinese University of Hong Kong, Hong Kong
Eva Sitaridi, Amazon Web Services, USA
Hannes Mühleisen, CWI, The Netherlands
Ioana Manolescu, INRIA, France
Joy Arulraj, Georgia Tech, USA
Julia Stoyanovich, NYU, USA
Leo Giakoumakis, Snowflake, USA
Renata Borovica-Gajic, University of Melbourne, Australia
Wolfram Wingerath, Baqend, Germany
Zsolt Istvan, IMDEA, Spain

Workshop Co-Chairs


ITU Copenhagen, Denmark


SAP SE, Germany

Steering Committee


TU Darmstadt, Germany


SAP SE, Germany


TU Berlin, Germany

Program Schedule

Soon To Be Announced