Judging

Panel

Submissions to both tracks will be judged together by the following panel of environmental data science experts. Judging is based on adherence to open science and FAIR practices, alignment with the objectives of the EDS book, and tutorial contributions, with scores adjusted to account for the amount of resources provided by the authors of the original paper.

Role

The judges' role is to evaluate the notebooks submitted by participants and to select the top-performing ones against a set of criteria. Here is some guidance for judges:

  • Read the guidelines: carefully read the evaluation guidelines and criteria to ensure each notebook is assessed appropriately.

  • Evaluate the notebook: evaluate each notebook for accuracy, completeness, and clarity.

  • Verify the reproduction process: verify that the team has followed the reproduction process as closely as possible, and note any deviations or challenges the team encountered.

  • Evaluate the findings: evaluate the accuracy and reliability of the team’s findings and compare them to the original study’s results, considering any additional insights or observations the team provides.

  • Rank the notebooks: rank the notebooks based on these evaluations and select the top-performing ones (a minimal scoring sketch follows this list).

  • Provide feedback: provide feedback to the teams, highlighting any strengths and weaknesses of their notebooks and offering suggestions for improvement.

  • Maintain confidentiality: maintain confidentiality and do not share any information about the reports with others.

  • Meet the deadlines: meet the deadlines set by the organizers and submit evaluations on time.
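
How individual scores are combined into a ranking is up to the panel; the sketch below shows one possible approach, with purely illustrative criterion names and weights (nothing here is prescribed by the organizers): weight the per-criterion scores and average them across judges.

```python
# Hypothetical helper for aggregating judges' scores and ranking submissions.
# Criterion names, weights, and team names are illustrative assumptions only.
from statistics import mean

WEIGHTS = {"accuracy": 0.3, "completeness": 0.2, "clarity": 0.2,
           "reproducibility": 0.2, "documentation": 0.1}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine one judge's per-criterion scores (e.g. 0-5) into a weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

def rank_submissions(evaluations: dict[str, list[dict[str, float]]]) -> list[tuple[str, float]]:
    """Average each team's weighted score across judges and sort best-first."""
    totals = {team: mean(weighted_score(s) for s in judge_scores)
              for team, judge_scores in evaluations.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    evaluations = {
        "team-a": [{"accuracy": 4, "completeness": 5, "clarity": 4,
                    "reproducibility": 5, "documentation": 4}],
        "team-b": [{"accuracy": 3, "completeness": 4, "clarity": 5,
                    "reproducibility": 4, "documentation": 5}],
    }
    for team, score in rank_submissions(evaluations):
        print(f"{team}: {score:.2f}")
```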

It is important for judges to be objective and impartial, and to provide constructive feedback to help participants improve their reproducibility skills.

Criteria

Judges score submissions against two categories of criteria to ensure that each team’s work is of high quality and meets the standards of reproducible research.

General

Here are the general criteria to be used to score submissions:

  • Accuracy: How closely did the team’s results match the original study’s results?

  • Completeness: Did the team include all necessary information to reproduce the study? Did they identify any limitations or caveats in the reproduction process?

  • Clarity: Was the team’s report easy to understand? Did they provide clear explanations of their methods and findings?

  • Reproducibility: Did the team follow the reproduction process as closely as possible? Were they able to reproduce the original study’s results? (See the re-execution sketch after this list.)

  • Documentation: Did the team provide clear and thorough documentation of their reproduction process, including any scripts or code used?

  • Additional insights: Did the team provide any additional insights or observations that were not included in the original study?
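
For the reproducibility criterion in particular, a judge will typically want to re-execute the submitted notebook end to end before comparing its outputs with the original study. Below is a minimal, illustrative sketch using nbformat and nbconvert; the filename submission.ipynb is a hypothetical placeholder, and it assumes the team’s dependencies are already installed in the judge’s environment.

```python
# Minimal sketch: re-execute a submitted notebook end to end.
# "submission.ipynb" is a hypothetical placeholder for the team's notebook;
# the team's dependencies are assumed to be installed already.
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

nb = nbformat.read("submission.ipynb", as_version=4)

# Run every cell in order; an exception here means the notebook did not
# reproduce cleanly in this environment.
ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
ep.preprocess(nb, {"metadata": {"path": "."}})

# Save the freshly executed copy so its outputs can be compared with the
# outputs the team committed and with the original study's results.
nbformat.write(nb, "submission-executed.ipynb")
```

The executed copy can then be compared, cell by cell, against the outputs the team committed and against the figures and numbers reported in the original paper.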

Jupyter notebook

Here are some technical aspects of a Jupyter Notebook that can be reviewed (a hedged sketch of automating a few of these checks follows the list):

  • Code quality: Review the code in the notebook to ensure it is well-documented, easy to understand, and follows best practices for code organization and style.

  • Output and visualizations: Check that the outputs and visualizations in the notebook are accurate, clearly labeled, and easy to interpret.

  • Data management: Evaluate the team’s data management practices to ensure that the data used in the analysis is well-documented, properly formatted, and follows best practices for data management.

  • Text quality: Assess the quality of the text in the notebook, including the clarity, accuracy, and organization of the written explanations and descriptions.
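
Parts of this technical review can be assisted with simple, non-authoritative checks. The sketch below uses nbformat to flag code cells without comments, code cells whose stored outputs contain errors, and the share of narrative markdown cells; the filename and the heuristics are illustrative assumptions, not official checks.

```python
# Illustrative checklist helper for reviewing a notebook's technical aspects.
# The notebook path and the heuristics below are assumptions, not official checks.
import nbformat

nb = nbformat.read("submission.ipynb", as_version=4)  # hypothetical filename

code_cells = [c for c in nb.cells if c.cell_type == "code"]
markdown_cells = [c for c in nb.cells if c.cell_type == "markdown"]

# Code quality heuristic: code cells that contain no comments at all.
uncommented = [i for i, c in enumerate(code_cells) if "#" not in c.source]

# Output check: code cells whose stored outputs include an error traceback.
errored = [i for i, c in enumerate(code_cells)
           if any(out.get("output_type") == "error" for out in c.get("outputs", []))]

# Text quality heuristic: share of cells that are narrative markdown.
markdown_ratio = len(markdown_cells) / max(len(nb.cells), 1)

print(f"code cells without comments: {uncommented}")
print(f"code cells with error outputs: {errored}")
print(f"markdown share of all cells: {markdown_ratio:.0%}")
```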