Selection Criteria and Scoring System - Cycle 1
Proposals reviewed by the topical panels are subject to a two-stage review process: 1) preliminary grading and triage; and 2) the review meeting.
In the preliminary grading and triage phase, each proposal is reviewed and graded by 5 or 6 panel members; those preliminary grades are used to develop a ranked list. Proposals will be assessed on an absolute scale against three primary criteria described in the Call for Proposals with a separate grade given for each:
- The scientific merit of the program and its contribution to advancement of knowledge – how does the proposed investigation impact our knowledge within the specific sub-field?
- The program’s impact for astronomy in general – are there implications for other science areas and/or insights into larger-scale questions?
- A demonstration that the unique capabilities of JWST are required to achieve the science goals – suitability for JWST; how much of an advantage do JWST data offer over other facilities? This applies to both GO and AR proposals; Theory proposals should have broad applicability to JWST observational programs.
Where appropriate, reviewers should also consider the strength of the analysis plan described in the proposal in assessing the first two criteria.
Descriptions of additional criteria by type of proposal are given in the Proposal Selection Process section of the Call for Proposals.
While reviewing proposals, if you notice any issues with proposal template formatting, page-limit violations, or resource requests, please contact SPG to discuss before downgrading that proposal.
The scoring should be on an absolute scale with the framework set by the following criteria.
| Impact within the sub-field | Impact for astronomy in general | Suitability for JWST |
| --- | --- | --- |
| Potential for transformative results | Transformative implications for one or more other sub-fields | Science goals can only be achieved with JWST |
| Potential for major advancement | Major implications for one or more other sub-fields | Major advantages in using JWST over other facilities |
| Potential for moderate advancement | Some implications for one or more other sub-fields | Some advantages in using JWST over other facilities |
| Potential for minor advancement | Minor impacts on other sub-fields | Minor advantages in using JWST over other facilities |
| Limited potential for advancing the field | Little or no impact for other sub-fields | JWST offers little or no advantage over other facilities, or the advantages of using JWST are unclear |
Reviewers may submit grades in decimal form, but please limit to one decimal place.
Reviewers must submit all three grades for all proposals. For archival programs, the data may be well suited to the proposed program, but reviewers must consider the extent to which JWST data are critical in pursuing the science objectives, or whether those objectives could be met using data from another facility.
The following examples aim to give guidance in applying these rubrics to grading proposals; reviewers should use their best judgement.
Case 1. Mid-IR observations of gas in young stars
- Impact within the sub-field: Highly significant improvement in our understanding of gas flow in young stars
- Impact out of field: Potential for significant changes in our understanding of gas flows in a wide range of other environments
- Suitability for JWST: Mid-IR observations are essential to achieve the science goals and can only be acquired through JWST observations
Case 2. Analysis of archival near-IR imaging of a nearby galaxy for stellar population investigations
- Impact within the sub-field: Major advance in understanding stellar populations in that galaxy
- Impact out of field: Some implications for stellar populations and stellar evolution in other galaxies
- Suitability for JWST: The increased spatial resolution offered by JWST provides some advantages over other facilities in addressing the science goals. The analysis offers significant improvements and/or additional value with respect to the original use of the data.
Case 3. Near-IR/Mid-IR spectroscopy of an emission-line galaxy
- Impact within the sub-field: Moderate increase in understanding of the prevalence of star formation in that galaxy
- Impact out of field: Minor implications for the properties of other galactic systems, but no wider impact
- Suitability for JWST: Mid-IR data are not required for the science case; limited gains in performance at near-IR wavelengths as compared with larger ground-based facilities
Case 4. Developing theoretical tools to characterize dust in Galactic star-forming regions
- Impact within the sub-field: Potential significant increase in understanding of chemical composition in dusty environments
- Impact out of field: Results have significant implications for interpreting dust composition in other galaxies
- Suitability for JWST: The theoretical analysis will enable additional JWST observational programs.
The three grades for each reviewer are normalized to have a mean of 3 and a standard deviation of 1. The overall grade for that reviewer is the straight average of the three normalized grades.
The preliminary grade for each proposal is determined by averaging the overall grades from each reviewer.
The preliminary grades are used to create a rank order list for each panel and the lowest-ranked proposals (typically ~40%) are triaged from further discussion.
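The normalization, averaging, and triage steps above can be sketched as follows. This is an illustrative sketch only, not the actual STScI implementation: the data layout (a dict of per-reviewer grade tuples), the use of the population standard deviation, and the assumption that higher grades are better are all assumptions made for the example.

```python
import statistics

def normalize(grades, target_mean=3.0, target_sd=1.0):
    """Rescale one reviewer's grades on one criterion to mean 3, SD 1."""
    mean = statistics.mean(grades)
    sd = statistics.pstdev(grades)  # population SD is an assumption
    if sd == 0:
        return [target_mean] * len(grades)
    return [target_mean + target_sd * (g - mean) / sd for g in grades]

def preliminary_grades(panel):
    """panel maps reviewer -> {proposal: (grade1, grade2, grade3)}."""
    overall = {}  # proposal -> list of per-reviewer overall grades
    for graded in panel.values():
        proposals = sorted(graded)
        # normalize each criterion separately across this reviewer's proposals
        per_criterion = [
            normalize([graded[p][i] for p in proposals]) for i in range(3)
        ]
        for j, p in enumerate(proposals):
            # overall grade: straight average of the three normalized grades
            overall.setdefault(p, []).append(
                statistics.mean(c[j] for c in per_criterion)
            )
    # preliminary grade: straight average over the reviewers
    return {p: statistics.mean(gs) for p, gs in overall.items()}

def triage(prelim, fraction=0.4):
    """Rank by preliminary grade; drop the bottom ~40% from discussion."""
    ranked = sorted(prelim, key=prelim.get, reverse=True)
    cut = round(len(ranked) * (1 - fraction))
    return ranked[:cut], ranked[cut:]
```

Normalizing each reviewer separately removes systematic differences in how harshly individual reviewers grade before the scores are combined.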
Grading proposals in the virtual panel discussion
Grading is on the same absolute scale and uses the same grading rubric as the preliminary grades. Reviewers may submit grades in decimal form but please limit to one decimal place.
The grades are normalized following the same scheme used for the preliminary grades; the overall grade for each reviewer is the straight average of the three normalized grades. The grade for each proposal is determined by averaging the overall grades from the reviewers. Once the grading is complete for all proposals, the rank order list is created.
Final ranking of proposals
All proposals are ranked together regardless of type (GO, Survey, AR) and scale (Small, Medium). Each panel has a nominal time allocation (N hours), which marks the recommendation line for the panel. AR and Medium proposals above that line will be recommended for implementation.
Panel members should review the rank order list to determine whether the highly-ranked proposals above the nominal cutoff line reflect the panel consensus and whether they provide an appropriate science balance for the panel. Panels should pay particular attention to the ranking near the 1N line for the panel.
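Locating the nominal cutoff line amounts to accumulating requested time down the rank order list until the panel's allocation is reached. The sketch below is a simplification under assumed inputs (a ranked list and an hours mapping); in practice AR proposals carry no observing time and panels exercise judgment for proposals near the line.

```python
def recommendation_line(ranked, hours, allocation):
    """Walk the rank-ordered list, accumulating requested time, and
    return the proposals above the panel's nominal N-hour line."""
    total = 0.0
    above = []
    for proposal in ranked:
        requested = hours.get(proposal, 0.0)  # e.g. AR proposals request no time
        if total + requested > allocation:
            break  # this proposal falls below the line
        total += requested
        above.append(proposal)
    return above
```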
There may be a consensus that some science areas have been unduly favored. If a panel identifies two highly-rated proposals that cover very similar science, the panel can discuss those two proposals together to decide whether they should keep both or select just one. This is irrespective of the scale of the proposals and where they are in the relative ranking. Any panelist conflicted on either proposal may not participate in the discussion.
Similarly, there may be circumstances where the chair identifies highly ranked proposals that have a science overlap with proposals highly ranked by another panel. In those cases, the panel members can make a consensus decision to re-rank (but not re-grade) proposals to provide an appropriate reflection of the science topics reviewed by the panel.
Panels may wish to re-rank proposals to better reflect the consensus viewpoint. In that case, panels may only compare proposals in pairwise fashion: if a proposal is raised for discussion, it may only be compared against the proposals immediately adjacent in the ranking, regardless of their type (GO, AR) and scale (Small, Medium). The pairwise comparison minimizes panel conflicts and ensures that there is an explicit consensus on re-ranking any proposal.
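The adjacent-pair rule means each re-ranking step is a single swap with an immediate neighbour, agreed by explicit consensus. In the sketch below, the `consensus_prefers` callback is a hypothetical stand-in for the panel's decision; it is not part of any described tooling.

```python
def raise_one_slot(ranked, proposal, consensus_prefers):
    """Compare a proposal only against the proposal immediately above
    it; swap the pair if the panel consensus prefers the lower one."""
    i = ranked.index(proposal)
    if i > 0 and consensus_prefers(proposal, ranked[i - 1]):
        ranked[i - 1], ranked[i] = ranked[i], ranked[i - 1]
    return ranked
```

Moving a proposal several places therefore requires several explicit pairwise decisions, one per adjacent pair it passes.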
Beginning in Cycle 2, STScI Grants plans to implement a formulaic model to determine the base-level funding of approved JWST programs. This model will take into account the complexity of the program as determined by the TAC, so that difficult or complex analyses requiring extraordinary effort can receive an appropriate level of base funding. The Cycle 1 TAC members are asked to provide a complexity grade as part of the preliminary review to help us finalize the development of the funding model.
This complexity estimate does not factor into the grade assigned to the proposal and is only requested as part of the preliminary review.
The general idea in grading complexity is to consider the complete programmatic effort involved in seeing the proposed study through, from the notification of award to the dissemination of results, relative to other programs of the same proposal category and size. Programs are graded on a 5-step complexity scale from "Very Low" to "Very High".
For theory and software programs, one should consider the effort required to develop the deliverables and make them available. For observational and archival programs, one should consider additional factors such as the difficulty of working with the data products (from the standard STScI data pipeline or other higher-level processing from MAST), the analysis of those data to extract the signal sought, and the effort an expert might require to realize results. The estimate is necessarily subjective and is left to your discretion.
The results from the assessment of approved Cycle 1 proposals will be combined with the Financial Review Committee's assessment to explore the potential for developing a funding algorithm that can accelerate the disbursement of funds to approved proposals in later cycles.