
CodaLab and Codabench newsletter

What happened in 2025?

2025 was a year of transition and consolidation for our community. After 13 years of service, CodaLab Competitions was officially phased out, closing an important chapter in the history of open scientific challenges. At the same time, Codabench matured into the central platform for benchmarking, concentrating both usage and development efforts.

Beyond the symbolic handover, the year was marked by strong community engagement, growing activity on Codabench, and steady progress on the software itself. This newsletter offers a snapshot of that journey: key numbers, standout competitions, and the latest advances shaping the platform.


Bye bye, CodaLab!

After 13 years and millions of submissions, CodaLab Competitions and its main servers were officially phased out at the end of 2025, passing the torch to Codabench.

Today, Codabench is where the community's energy and development efforts are fully focused. As a modernized evolution of the CodaLab platform, it preserves familiar workflows while introducing improved performance, live logs, greater transparency, data-centric benchmarks, and more.

If you haven't made the transition yet as an organizer, good news: CodaLab bundles are fully compatible with Codabench, making the move straightforward. The process is documented step by step here: How to transition from CodaLab to Codabench
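To make the move concrete, here is a minimal sketch of what a competition bundle looks like as an archive, assembled in memory with Python's standard library. The file names and YAML keys shown (`title`, `docker_image`, the `pages/` and `scoring_program/` folders) are illustrative only; consult the Codabench documentation linked above for the authoritative bundle specification.

```python
# Hypothetical sketch: a competition bundle is essentially a zip archive
# containing a competition.yaml plus pages and a scoring program.
# File names and YAML keys here are illustrative, not the official spec.
import io
import zipfile


def make_bundle() -> bytes:
    """Build an in-memory zip resembling a minimal competition bundle."""
    competition_yaml = "\n".join([
        "title: My Competition",          # illustrative key
        "docker_image: python:3",         # illustrative key
    ])
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("competition.yaml", competition_yaml)
        zf.writestr("pages/overview.md", "# Overview\n")
        zf.writestr("scoring_program/score.py", "print('scoring')\n")
    return buf.getvalue()


bundle = make_bundle()
with zipfile.ZipFile(io.BytesIO(bundle)) as zf:
    # prints ['competition.yaml', 'pages/overview.md', 'scoring_program/score.py']
    print(sorted(zf.namelist()))
```

Because the bundle format is shared between the two platforms, an archive built this way can be uploaded on either side, which is what makes the transition straightforward.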

Some statistics

Codabench continued to grow strongly throughout the year, reaching 519 public competitions created and welcoming 31,608 new users! Daily activity also increased steadily, from around 500 submissions per day in January to over 1,000 daily submissions by December, reflecting sustained community engagement.

CodaLab, while entering its sunset phase, still saw 100 public competitions created and 14,854 new users over the year. Submission activity peaked in March (around 850 submissions per day), before gradually declining to fewer than 200 daily submissions in December, as usage progressively shifted towards Codabench.

A total of $269,000 in prize money was awarded to competition participants in 2025.

Spotlight on competitions

2025 featured many notable competitions across scientific and industrial fields. From NeurIPS and ICML to challenges in health and medical research, environmental science, industrial applications, language processing, and education, the diversity of topics continued to grow.

NeurIPS and ICML

  • EEG Foundation Challenge, a NeurIPS 2025 competition aiming to advance electroencephalogram (EEG) decoding by addressing two critical challenges: (1) building models that can transfer knowledge from any cognitive EEG task to an active task, and (2) creating representations that generalize across subjects. With 1,220 participants, it was the most popular competition of the year.
  • NeurIPS 2025 Weak Lensing Uncertainty Challenge, exploring uncertainty-aware and out-of-distribution detection AI techniques for Weak Gravitational Lensing Cosmology.
  • NeurIPS 2025: Fairness in AI Face Detection Challenge, where the goal is to advance the development of fair and robust AI-generated face detection systems by addressing the critical challenge of fairness generalization under real-world deployment conditions.
  • ICML 2025 AI for Math Workshop & Challenge 1 - APE-Bench I, designed to evaluate systems that can automate proof engineering in large-scale formal mathematics libraries.

Health and medical research

Environmental research

  • MIT ARCLab Prize for Space AI Innovation 2025, whose objective is to develop cutting-edge AI algorithms for nowcasting and forecasting space-weather-driven changes in atmospheric density across low Earth orbit, using historical space weather observations.
  • TreeAI4Species Competition: Semantic Segmentation and Object detection, studying algorithms for identifying tree species from high-resolution aerial imagery.
  • Water Scarcity, leveraging data science to address water scarcity issues through simulations.

Industrial applications

Natural Language Processing

SemEval (Semantic Evaluation) is an international series of shared tasks in natural language processing that provides standardized benchmarks to evaluate and compare systems on semantic understanding challenges. More than 12 tasks (with sub-tracks) were organized on Codabench in 2025, accounting for more than 20,000 submissions.

Other notable NLP benchmarks:

Education

A huge thank you to everyone in the community for these outstanding scientific contributions across a wide variety of fields. You can discover many more challenges in the public competition listing.


Novelty in the software

Our contributor community was very active, with 139 pull requests merged this year. Many new features, bug fixes, and back-end changes landed; we highlight some of them below.

New features for participants and organizers

  • Public datasets listing: https://www.codabench.org/datasets/public/?page=1
  • Croissant standard compatibility
  • New documentation website: https://docs.codabench.org
  • Users can delete their submissions and manage their individual storage
  • Leaderboards are now public for everyone without login required
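
Regarding the Croissant compatibility mentioned above, the format is a JSON-LD vocabulary for describing datasets. The snippet below is a simplified, schema.org-level sketch of what such metadata looks like; the full Croissant specification adds its own vocabulary (record sets, file objects, `cr:` terms), so treat the field names here as illustrative rather than exhaustive.

```python
# Simplified sketch of Croissant-style dataset metadata (JSON-LD).
# Real Croissant files conform to the official spec and include more
# structure (recordSets, cr: vocabulary); this only shows the rough shape.
import json

metadata = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "Dataset",
    "name": "example-dataset",
    "description": "A toy dataset description.",
    "distribution": [
        {
            "@type": "FileObject",
            "name": "data.csv",
            "contentUrl": "https://example.org/data.csv",  # placeholder URL
            "encodingFormat": "text/csv",
        }
    ],
}

serialized = json.dumps(metadata, indent=2)
print(serialized)
```

Exposing metadata in this shape is what lets datasets published on the platform be discovered and loaded by Croissant-aware tooling.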

Back-end changes for developers and self-hosters

  • Using Playwright instead of Selenium for automatic tests
  • Logs are now colored and easier to read
  • Version upgrades for Django and other packages

What's to come

The trend is to make the project easier to deploy for independent hosts.

  • Unified and lighter compute worker image, making it more stable
  • Moving the compute worker into its own repository, so it can be reused more easily by other projects if needed
  • Django Admin upgrades to make it easier to manage the website as a site admin

Community

Reminder on our communication tools:

  • Join our Google forum to announce your competitions and events
  • Contact us with any questions: info@codabench.org
  • Open an issue on GitHub to share suggestions

Please cite this paper when working with Codabench:

@article{codabench,
   title = {Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform},
   author = {Zhen Xu and Sergio Escalera and Adrien Pavão and Magali Richard and
               Wei-Wei Tu and Quanming Yao and Huan Zhao and Isabelle Guyon},
   journal = {Patterns},
   volume = {3},
   number = {7},
   pages = {100543},
   year = {2022},
   issn = {2666-3899},
   doi = {10.1016/j.patter.2022.100543},
   url = {https://www.sciencedirect.com/science/article/pii/S2666389922001465}
}

Closing words

Thank you for reading our newsletter. We're not done yet: more projects, more challenges, and more science ahead. Our open platform is becoming a key player in building reliable and innovative AI benchmarks. See you on Codabench!
