
Johan Larsson

Doctoral student

Benchopt: Reproducible, efficient and collaborative optimization benchmarks

Authors

  • Thomas Moreau
  • Mathurin Massias
  • Alexandre Gramfort
  • Pierre Ablin
  • Pierre-Antoine Bannier
  • Benjamin Charlier
  • Mathieu Dagréou
  • Tom Dupré la Tour
  • Ghislain Durif
  • Cassio F. Dantas
  • Quentin Klopfenstein
  • Johan Larsson
  • En Lai
  • Tanguy Lefort
  • Benoit Malézieux
  • Badr Moufad
  • Binh T. Nguyen
  • Alain Rakotomamonjy
  • Zaccharie Ramzi
  • Joseph Salmon
  • Samuel Vaiter

Editors

  • S. Koyejo
  • S. Mohamed
  • A. Agarwal
  • D. Belgrave
  • K. Cho
  • A. Oh

Summary, in English

Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice. Yet the rapid development of the field poses several challenges: researchers are confronted with a profusion of methods to compare, limited transparency and consensus on best practices, and tedious re-implementation work. As a result, validation is often very partial, which can lead to wrong conclusions that slow down the progress of research. We propose Benchopt, a collaborative framework to automate, reproduce and publish optimization benchmarks in machine learning across programming languages and hardware architectures. Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments. To demonstrate its broad usability, we showcase benchmarks on three standard learning tasks: ℓ2-regularized logistic regression, the Lasso, and ResNet18 training for image classification. These benchmarks highlight key practical findings that give a more nuanced view of the state of the art for these problems, showing that for practical evaluation the devil is in the details. We hope that Benchopt will foster collaborative work in the community and thereby improve the reproducibility of research findings.
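
As a rough illustration of the kind of experiment the summary describes, the sketch below (plain NumPy, not the Benchopt package itself) compares two textbook Lasso solvers, proximal gradient descent (ISTA) and cyclic coordinate descent, by recording suboptimality against wall-clock time. These are the convergence curves that Benchopt automates across solvers, datasets, languages and hardware. The problem size, regularization level and iteration budgets are arbitrary choices made only for this example.

```python
# Illustrative sketch (not the Benchopt API): a convergence-curve benchmark
# for the Lasso, comparing two standard solvers on a small synthetic problem.
import time
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p) * (rng.random(p) < 0.05)  # sparse ground truth
y = X @ w_true + 0.1 * rng.standard_normal(n)
lmbd = 0.1 * np.max(np.abs(X.T @ y))  # arbitrary regularization strength

def objective(w):
    # Lasso objective: 0.5 * ||y - Xw||^2 + lambda * ||w||_1
    return 0.5 * np.sum((y - X @ w) ** 2) + lmbd * np.sum(np.abs(w))

def ista(n_iter):
    # Proximal gradient descent with step 1 / L, L = ||X||_2^2.
    L = np.linalg.norm(X, ord=2) ** 2
    w = np.zeros(p)
    for _ in range(n_iter):
        w = w - X.T @ (X @ w - y) / L
        w = np.sign(w) * np.maximum(np.abs(w) - lmbd / L, 0.0)
    return w

def cd(n_iter):
    # Cyclic coordinate descent with soft-thresholding updates.
    w = np.zeros(p)
    residual = y - X @ w
    col_norms = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            old = w[j]
            rho = X[:, j] @ residual + col_norms[j] * old
            w[j] = np.sign(rho) * max(abs(rho) - lmbd, 0.0) / col_norms[j]
            if w[j] != old:
                residual -= X[:, j] * (w[j] - old)
    return w

# Approximate the optimal value with long runs, then report suboptimality
# curves the way a benchmark runner would.
best = min(objective(ista(2000)), objective(cd(200)))
for name, solver, budgets in [("ISTA", ista, [10, 100, 1000]),
                              ("CD", cd, [1, 10, 100])]:
    for n_iter in budgets:
        t0 = time.perf_counter()
        w = solver(n_iter)
        dt = time.perf_counter() - t0
        print(f"{name:5s} n_iter={n_iter:5d}  time={dt:.3f}s  "
              f"suboptimality={objective(w) - best:.2e}")
```

In the actual framework, each solver lives in its own plugin file and the benchmark is launched from the command line, so the comparison above would be produced automatically rather than hand-coded; this sketch only shows the measurement being standardized.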

Department/s

  • Department of Statistics

Publication date

2022-12-06

Language

English

Pages

25404-25421

Publication/Series

Advances in Neural Information Processing Systems

Volume

35

Document type

Conference paper

Publisher

Curran Associates, Inc.

Topic

  • Probability Theory and Statistics

Keywords

  • Logistic regression
  • Machine learning

Conference name

36th Conference on Neural Information Processing Systems, NeurIPS 2022

Conference date

2022-11-28 - 2022-12-09

Conference place

New Orleans, United States

Status

Published

Project

  • Optimization and Algorithms in Sparse Regression: Screening Rules, Coordinate Descent, and Normalization

ISBN/ISSN/Other

  • ISSN: 1049-5258
  • ISBN: 9781713871088