HydroBench: Jupyter supported reproducible hydrological model benchmarking and diagnostic tool

Edom Moges, Benjamin L. Ruddell, Liang Zhang, Jessica M. Driscoll, Parker Norton, Fernando Perez, Laurel G. Larsen

Research output: Contribution to journal › Article › peer-review

Abstract

Evaluating whether hydrological models are right for the right reasons demands reproducible model benchmarking and diagnostics that evaluate not just statistical predictive performance but also internal processes. Such benchmarking and diagnostic efforts benefit from standardized methods and ready-to-use toolkits. Using the Jupyter platform, this work presents HydroBench, a model-agnostic benchmarking tool consisting of three sets of metrics: 1) common statistical predictive measures, 2) hydrological signature-based process metrics, including a new time-linked flow duration curve, and 3) information-theoretic diagnostics that measure the flow of information among model variables. As a test case, HydroBench was applied to compare two model products (calibrated and uncalibrated) of the National Hydrologic Model - Precipitation Runoff Modeling System (NHM-PRMS) at the Cedar River watershed, WA, United States. Although the uncalibrated model had the higher predictive performance, particularly for high flows, the signature-based diagnostics showed that it overestimates low flows and poorly represents recession processes. Helping to explain why low flows may have been overestimated, the information-theoretic diagnostics indicated a stronger flow of information from precipitation through snowmelt to streamflow in the uncalibrated model than in the calibrated model, in which information flowed more directly from precipitation to streamflow. This test case demonstrated HydroBench's capability for process diagnostics and for evaluating both predictive and functional model performance, along with the tradeoffs between them. Such a benchmarking tool not only gives modelers a comprehensive model evaluation system but also provides an open-source resource that the hydrological community can develop further.
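To make the three metric families concrete, the sketch below illustrates in plain NumPy the kind of quantities the abstract refers to: Nash-Sutcliffe and Kling-Gupta efficiencies as statistical predictive measures, a flow duration curve as a hydrological signature, and lagged mutual information as a simplified stand-in for the information-flow diagnostics (the paper's own diagnostics are transfer-entropy based and time-linked). This is a minimal sketch under those assumptions, not HydroBench's actual API; all function names and the synthetic data are illustrative only.

```python
# Illustrative sketch (not HydroBench's actual API) of the three metric families
# named in the abstract. Function names and synthetic data are assumptions.
import numpy as np


def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than mean(obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation, variability and bias ratios."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)


def flow_duration_curve(q):
    """Exceedance probabilities vs. flows sorted high-to-low (the FDC signature)."""
    q_sorted = np.sort(np.asarray(q, float))[::-1]
    exceedance = np.arange(1, q_sorted.size + 1) / (q_sorted.size + 1)
    return exceedance, q_sorted


def lagged_mutual_information(x, y, lag=1, bins=10):
    """Histogram-based mutual information (bits) between x(t-lag) and y(t),
    a simple proxy for lag-resolved information flow between model variables."""
    x_past, y_now = np.asarray(x[:-lag], float), np.asarray(y[lag:], float)
    joint, _, _ = np.histogram2d(x_past, y_now, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x(t-lag)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y(t)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    precip = rng.gamma(2.0, 3.0, size=365)                   # synthetic forcing
    obs = np.convolve(precip, [0.5, 0.3, 0.2])[:365] + 1.0   # synthetic "observed" flow
    sim = obs * 1.1 + rng.normal(0.0, 0.5, size=365)         # biased, noisy "model"

    print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
    p, q = flow_duration_curve(sim)
    print("Q5 / Q95 (high / low flow):", np.interp([0.05, 0.95], p, q).round(2))
    print(f"MI[P(t-1); Q(t)] = {lagged_mutual_information(precip, sim):.3f} bits")
```

In HydroBench itself, comparable computations are run in Jupyter notebooks against calibrated and uncalibrated NHM-PRMS output; the sketch above is only meant to make the abstract's metric categories tangible.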

Original language: English (US)
Article number: 884766
Journal: Frontiers in Earth Science
Volume: 10
DOIs
State: Published - Sep 30, 2022

Keywords

  • Hydrological Modeling
  • Kling-Gupta
  • Model Benchmarking
  • Model Diagnostics
  • Model Evaluation
  • Nash-Sutcliffe
  • Reproducibility
  • Uncertainty Analysis

ASJC Scopus subject areas

  • General Earth and Planetary Sciences
