Pre-screening and Approval for Review
NREPP identifies programs for review in three ways:
- Nominations from the field: SAMHSA announces an open submission period. The open submission period generally lasts several months and allows developers, researchers, practitioners, and other interested parties to submit programs for review.
- Environmental scans: SAMHSA and NREPP contractor staff conduct environmental scans (including literature searches, focus groups, public input, and interviews) to identify potential interventions for review.
- Agency nomination: SAMHSA identifies programs and practices addressing specific agency priorities.
Programs identified through the open submission process are prioritized for review.
Programs are pre-screened to ensure that at least one evaluation study meets the minimum criteria for review. Programs are then submitted to SAMHSA for approval to move into the review process. Applicants are notified whether their intervention has been accepted for review or rejected.
Literature Search and Screening
- A Federal Register notice issued July 7, 2015, announced the intent to re-review currently posted NREPP programs so that their reviews comport with the new review criteria and ratings.
- The changes to the review process will apply to all re-reviews of programs currently posted on NREPP. Approximately 110 programs posted on NREPP prior to September 2015 will be reviewed each year over the next 3 years. Program contacts/developers will be notified at least 45 days prior to the re-review date that their program has been selected for re-review. The re-review will follow the procedures presented on this page.
NREPP contractor staff contact the developer to request any additional evaluation studies and information about resources for dissemination and implementation (RFDI). Applicants will be asked to complete the RFDI checklist. Although SAMHSA has determined that programs no longer need RFDI materials to be reviewed, programs with such materials will be prioritized for review.
To ensure a comprehensive report, a literature search is conducted to identify other relevant evaluation studies. All evaluation materials are screened and NREPP contractor staff determine which studies and outcomes are eligible for review. SAMHSA has determined that all eligible outcomes will be reviewed, but that programs with positive impacts on outcomes and populations of interest will be prioritized over programs without positive impacts.
All evaluation studies that meet the minimum criteria are eligible for review. The minimum criteria include publication within the past 20 years (1995 or later) and falling within a 10-year time frame, as defined by the most recent eligible article of a study.
NREPP contractor staff identify two certified reviewers to conduct the review. (Note: re-reviews of programs posted on NREPP before September 2015 may be completed by one reviewer.) Reviewers must complete a Conflict of Interest form to confirm no conflict exists that would require recusal.
Review packets are sent to reviewers to assess the rigor of the study and the magnitude and direction of the program's impact on eligible outcomes. Studies identified during the submission process or by the literature search may be included in the review, but not all evaluation studies are necessarily included in the review packet. For instance, studies that will not be reviewed include those that do not meet the minimum criteria or assess only outcomes not included in the NREPP outcome taxonomy. The outcome taxonomy currently includes 16 mental health, 12 substance use, and 27 wellness outcomes, relevant to SAMHSA’s mission. An iterative process was applied to condense an extensive list of over 1000 outcome tags (extracted from studies reviewed on NREPP) into aggregate constructs. All outcomes were reviewed by scientists with expertise in the field of behavioral health and by SAMHSA staff.
Reviewers independently review the studies provided and calculate ratings using the NREPP Outcome Rating Instrument.
Outcomes are assessed on four dimensions. Study reviewers assign numerical values to each dimension in the NREPP Outcome Rating Instrument (with the exception of effect size). To support consistency across reviews, the dimensions include definitions, and the NREPP Outcome Rating Instrument provides other guidance that reviewers consider when rating findings. Reviewers also make note of any other information that should be highlighted as being of particular importance.
The study reviewer is responsible for making a reasonable determination as to the strength of the methodology, fidelity, and program effect, based on the provided documentation and his or her specialized knowledge of program evaluation and the subject matter. If the reviewers' ratings differ by a significant margin, a consensus conference may be held to discuss and resolve the differences.
In addition to this review by certified reviewers, NREPP staff also assess programs' conceptual frameworks.
To assess the strength of evidence, each eligible reported effect or measure is rated on four dimensions; each dimension includes multiple elements (for instance, the strength-of-methodology dimension includes design/assignment and attrition). Not all reported effects or measures are assessed: those measured by a single item that is either subjective or not well established are excluded, as are those for which no effect size can be calculated and findings not related to an NREPP outcome of interest. Once all eligible measures or effects are rated, the scores for all those falling under one outcome are combined, including reported measures or effects across studies.
The effect size calculations are based on standard statistical methods. NREPP staff calculate Hedges g effect sizes for both continuous and dichotomous (yes/no) outcomes. Whenever possible, NREPP staff calculate an effect size that is adjusted for baseline differences.
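As an illustration, the standard Hedges' g formula for a continuous outcome can be sketched as follows. This is a generic statistical sketch, not NREPP's actual code; it omits the baseline adjustment mentioned above, and all function and variable names are hypothetical.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardized mean difference between treatment
    and control groups, with a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd            # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample factor J
    return d * correction

# Example: a half-standard-deviation advantage for the treatment group
g = hedges_g(mean_t=10.5, mean_c=10.0, sd_t=1.0, sd_c=1.0, n_t=50, n_c=50)
# g is approximately 0.496 after the small-sample correction
```

For dichotomous (yes/no) outcomes, the effect is commonly converted to the same standardized-mean-difference scale (for example, via a log odds ratio transformation) before the correction is applied.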
Evidence Classes and Outcome Ratings
The outcome ratings are based on four dimensions: rigor, effect size, program fidelity, and conceptual framework. Most weight is given to the ‘design/assignment’ element of the rigor dimension and the confidence interval of the effect size(s) contributing to the outcome.
The rigor and fidelity elements contribute to the evidence score; the confidence interval of the effect size determines the ‘effect class’; and both combined determine the ‘evidence class’ for each component measure of the outcome. Evidence classes for each measure are then assigned numeric weights and averaged to create an outcome score. The outcome rating is based on a combination of the outcome score and the conceptual framework rating.
The graphic below summarizes all components of the outcome rating.
Components of the Final Outcome Rating
The evidence class for each reported effect is based on a combination of evidence score and effect class.
The evidence score is based on the rigor and fidelity dimensions and is rated as strong, sufficient, or inconclusive. For information about the elements of rigor and fidelity, see Program Review Criteria.
The Effect class is based on the confidence interval of the effect size:
- Favorable: Confidence interval lies completely within the favorable range (greater than .10)
- Possibly favorable: Confidence interval spans both the favorable (greater than .10) and trivial range (from -.25 to .10)
- Trivial: Confidence interval lies completely within the trivial range (from -.25 to .10) or spans the harmful and favorable range (from -.25 to greater than .10)
- Possibly harmful: Confidence interval spans both the harmful (lower than -.25) and trivial range (from -.25 to .10).
- Harmful: Confidence interval lies completely within the harmful range (lower than -.25)
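The five effect classes amount to a decision rule on the endpoints of the confidence interval. Below is a minimal sketch using the cutoffs stated above (.10 and -.25); the handling of values falling exactly on a boundary is an assumption, since the source does not specify it, and the function name is hypothetical.

```python
def effect_class(ci_lower, ci_upper, favorable_cut=0.10, harmful_cut=-0.25):
    """Classify an effect by the endpoints of its confidence interval."""
    # Favorable: CI lies entirely above the favorable cutoff
    if ci_lower > favorable_cut:
        return "Favorable"
    # Harmful: CI lies entirely below the harmful cutoff
    if ci_upper < harmful_cut:
        return "Harmful"
    # Trivial: CI entirely within the trivial range, or so wide that it
    # spans from the harmful range through the favorable range
    if (ci_lower >= harmful_cut and ci_upper <= favorable_cut) or \
       (ci_lower < harmful_cut and ci_upper > favorable_cut):
        return "Trivial"
    # Possibly favorable: CI spans the trivial and favorable ranges
    if ci_upper > favorable_cut:
        return "Possibly favorable"
    # Possibly harmful: CI spans the trivial and harmful ranges
    return "Possibly harmful"
```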
These two dimensions are then combined to categorize programs into one of seven evidence classes as depicted below.
Description of Evidence Classes
1. Highest quality evidence with confidence interval completely within the favorable range
2. Sufficient evidence with confidence interval completely within the favorable range
3. Sufficient or highest quality evidence with confidence interval spanning both the favorable and trivial ranges
4. Sufficient or highest quality evidence with confidence interval completely within the trivial range
5. Sufficient or highest quality evidence with confidence interval spanning both the harmful and trivial ranges
6. Sufficient or highest quality evidence with confidence interval completely within the harmful range
7. Limitations in the study design preclude reporting further on the outcome
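Combining the evidence score (strong, sufficient, or inconclusive) with the effect class then selects one of the seven evidence classes. The following is a hypothetical sketch that numbers the classes 1 through 7 in the order just listed; the numbering and function name are illustrative, not NREPP labels.

```python
def evidence_class(evidence_score, effect_class):
    """Map an evidence score and an effect class to one of the seven
    evidence classes, numbered in the order described above."""
    if evidence_score == "inconclusive":
        # Class 7: study limitations preclude reporting further
        return 7
    if effect_class == "Favorable":
        # Classes 1 and 2 distinguish highest quality from sufficient
        return 1 if evidence_score == "strong" else 2
    # Classes 3-6 apply to sufficient or highest quality evidence alike
    return {"Possibly favorable": 3, "Trivial": 4,
            "Possibly harmful": 5, "Harmful": 6}[effect_class]
```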
The outcome rating (see table below) is based on a combination of the outcome score and the conceptual framework rating. The outcome score is based on the evidence class(es) of each component measure. The conceptual framework is based on whether a program has clear goals, activities, and a theory of change.
Outcome Evidence Ratings
- Effective: The evaluation evidence has strong methodological rigor, and the short-term effect on this outcome is favorable. More specifically, the short-term effect favors the intervention group and the size of the effect is substantial.
- Promising: The evaluation evidence has sufficient methodological rigor, and the short-term effect on this outcome is likely to be favorable. More specifically, the short-term effect favors the intervention group and the size of the effect is likely to be substantial.
- Ineffective: The evaluation evidence has sufficient methodological rigor, but there is little to no short-term effect. More specifically, the short-term effect does not favor the intervention group and the size of the effect is negligible. Occasionally, the evidence indicates that there is a negative short-term effect; in these cases, the short-term effect harms the intervention group and the size of the effect is substantial.
- Inconclusive: Programs may be classified as inconclusive for two reasons: either the evaluation evidence has insufficient methodological rigor to determine the impact of the program, or the size of the short-term effect could not be calculated.
The ratings and descriptive information are compiled into a program profile.
A courtesy copy of the program profile is shared with the developer or submitter of the program for review, who may suggest revisions to the profile.
The final program profile is submitted to SAMHSA for review, approval, and posting on the NREPP website.
NREPP Appeal Request
Grounds for an Appeal
A formal appeal may be requested when an applicant believes that NREPP procedures or standards have been misapplied during the review of a program. Disagreements with an NREPP standard or process do not constitute adequate grounds for making an appeal; rather, the requestor must provide information that demonstrates the standard or process has been inaccurately executed.
An appeal request may be made after NREPP has informed the applicant that no further changes will be made to the program profile. NREPP regularly provides extensive clarifications and detailed information to applicants once they have received their program profile (e.g., criteria that were rated low, reasons for not including specific reported effects, etc.). However, if the discussion does not resolve or clarify the program applicant’s concerns, then the appeals process can begin.
An appeal may be made if:
- The applicant disagrees with the way in which a study design has been rated. For instance, an applicant might argue that a study design deemed a “compromised RCT” should instead be designated a “well-executed RCT.” In this case, additional information about the study design and randomization procedures can be provided to support the change of rating, based on NREPP’s review protocol.
- The applicant disagrees with the numbers used to rate attrition for the analytic sample.
- The applicant believes that the wrong data were used to determine effect sizes.
- The applicant believes that studies reflecting the same version of the program being reviewed have been excluded unnecessarily. In this instance, the applicant can provide additional information to demonstrate that the excluded studies evaluated the same version of the program reviewed.
- The applicant believes one or more outcome measures were excluded unnecessarily and can provide information to demonstrate that the measure is statistically reliable and relates to an outcome in the NREPP Outcomes Taxonomy.
- The applicant provides any other information regarding the misapplication of the design rating or other aspects of study rigor that directly impacts the rating.
The following are not considered grounds for an appeal:
- The applicant would like to add information to the key study findings section of the profile.
- The applicant disagrees with the naming of an outcome. NREPP outcomes are labeled using a specific taxonomy which was developed to standardize outcome labels across multiple studies and facilitate searches by registry users.
- The applicant disagrees about the aggregation of results across studies or about the way in which results were aggregated. It is standard procedure for NREPP to aggregate findings across studies looking at the same outcome, and the method for aggregating findings is applied consistently across programs reviewed.
- The applicant disagrees with the exclusion of subgroup and follow-up findings. NREPP solely reviews full sample findings evaluated at posttest, with posttest defined as the first post-intervention assessment of intervention effects.
- The applicant disagrees with the exclusion of indirect effects, and findings related to the mediation of program effects. NREPP outcome ratings are based solely on direct effects.
- The applicant disagrees with the methods used to calculate effect sizes.
- The applicant disagrees with one or more of the outcome-level ratings.
Once the appeals form has been received, the Chair of the Appeals Board will determine whether there are grounds for an appeal. If there are legitimate grounds, the Chair will move the process forward by contacting two NREPP-certified reviewers to reconsider the studies included and excluded in the review and/or re-review the outcomes in question, as applicable. The stages of an appeal include the following:
Each appellate reviewer will conduct an independent review of the studies and/or outcomes being appealed. The certified reviewers will review the studies in the program's evidence base and complete the NREPP Outcome Rating Instrument. The identities of the original reviewers will remain unknown to the appellate reviewers until the original reviewers are required to participate in the process.
After the two separate reviews are complete, the appellate reviewers will participate in a conference call led by the Chair to discuss the original review and outcome rating(s) and the concerns raised by the inquirer, and to reach consensus about the program's outcome rating(s). If the appellate reviewers agree the program's original outcome rating(s) should not be changed, they will provide a written explanation documenting the discussion and the reason for their final decision. If they believe revised outcome rating(s) are warranted, they will provide a written explanation describing how and why their scores differ from the scores of the original reviewers. A final conference call will be held including the Chair, Appeals Board members, and original reviewers to discuss the disagreement on the scoring instruments and come to a consensus on final outcome rating(s).
Once a final consensus rating is reached, NREPP staff will provide the inquirer with a written response describing the Appeals Board's final decision.
If warranted as a result of the appeal, changes to a program's outcome rating(s) or additions to information in the program profile will be made on the NREPP website.
The appeals process may take up to 90 days after receipt of a request for an appeal. Please complete the form available here and submit it to email@example.com to initiate the appeal.