Research Paper arXiv:2512.22334

SciEvalKit: An Open-source Evaluation Toolkit for Scientific General Intelligence

Yiheng Wang, Yixin Chen, Shuo Li, Yifan Zhou, Bo Liu, Hengjian Gao, Jiakang Yuan, Jia Bu, Wanghan Xu, Yuhao Zhou, Xiangyu Zhao, Zhiwang Zhou, Fengxiang Wang, Haodong Duan, Songyang Zhang, Jun Yao, Han Deng, Yizhou Wang, et al.

Abstract

We introduce SciEvalKit, a unified benchmarking toolkit designed to evaluate AI models for science across a broad range of scientific disciplines and task capabilities. Unlike general-purpose evaluation platforms, SciEvalKit focuses on the core competencies of scientific intelligence: Scientific Multimodal Perception, Scientific Multimodal Reasoning, Scientific Multimodal Understanding, Scientific Symbolic Reasoning, Scientific Code Generation, Scientific Hypothesis Generation, and Scientific Knowledge Understanding.

It supports six major scientific domains, ranging from physics and chemistry to astronomy and materials science. SciEvalKit builds on a foundation of expert-grade scientific benchmarks curated from real-world, domain-specific datasets, ensuring that tasks reflect authentic scientific challenges.

The toolkit features a flexible, extensible evaluation pipeline that enables batch evaluation across models and datasets, supports custom model and dataset integration, and provides transparent, reproducible, and comparable results. By bridging capability-based evaluation and disciplinary diversity, SciEvalKit offers a standardized yet customizable infrastructure to benchmark the next generation of scientific foundation models and intelligent agents.

Key Features

7 Core Capabilities

Perception, Reasoning, Understanding, Symbolic Reasoning, Code Generation, Hypothesis Generation, Knowledge Understanding

6 Scientific Domains

Physics, Chemistry, Biology, Astronomy, Materials Science, and more

Extensible Pipeline

Flexible evaluation with custom model and dataset integration
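Toolkits that advertise custom model and dataset integration typically expose it through a plugin-style registry: users register their own model or dataset class under a name, and the batch-evaluation loop looks components up by name. The sketch below illustrates that general pattern only; every identifier in it (`register_model`, `MODEL_REGISTRY`, `evaluate`, the `generate` method) is hypothetical and is not SciEvalKit's actual API.

```python
# Hypothetical sketch of a plugin-style registry, a common pattern behind
# "custom model and dataset integration" in evaluation toolkits.
# All names here are illustrative -- NOT the real SciEvalKit API.

MODEL_REGISTRY = {}


def register_model(name):
    """Decorator that records a model class under a string key."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator


@register_model("echo")
class EchoModel:
    """Trivial stand-in model: answers with the prompt itself."""
    def generate(self, prompt):
        return prompt


def evaluate(model_name, dataset):
    """Run one registered model over (question, answer) pairs,
    returning exact-match accuracy."""
    model = MODEL_REGISTRY[model_name]()
    correct = sum(model.generate(q) == a for q, a in dataset)
    return correct / len(dataset)
```

A user would then add their own model with the same decorator and score it via `evaluate("my_model", my_dataset)`; batch evaluation across models is just a loop over `MODEL_REGISTRY` keys.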

Open Source

Community-driven development for AI4Science progress

Citation

@article{wang2025scievalkit,
  title={SciEvalKit: An Open-source Evaluation Toolkit for Scientific General Intelligence},
  author={Wang, Yiheng and Chen, Yixin and Li, Shuo and others},
  journal={arXiv preprint arXiv:2512.22334},
  year={2025}
}