Pennsylvania offers a variety of high-quality training and technical assistance resources to schools that wish to use RTI methodologies as part of a comprehensive SLD evaluation. “Models incorporating RTI typically involve identification based in part on the mass screening of all students and repeated probe assessments of the same core area, such as reading or math, in students who demonstrate risk characteristics. RTI models are dynamic and base identification on the assessment of ability change. By tying multiple assessments to specific attempts to intervene with a student, the construct of unexpected underachievement can be operationalized in part on the basis of an inadequate response to instruction that is effective with most individuals” (Fuchs & Fuchs, 1998; Gresham, 2002).
For more than ten years, the field has been exploring the best methodologies for determining whether a student’s response to intervention reflects adequate or inadequate growth. Researchers and practitioners alike have explored both qualitative and quantitative methods, including but not limited to visual analysis (the “eyeball” approach), the 3-point decision rule, the split-middle technique, the Tukey method, and linear regression models. The point is that there are many ways to qualify and/or quantify growth, and the field does not currently have a flawless method. The inherent challenge is to continue to investigate which methods prove the most reliable and valid for capturing realized growth (or the lack thereof) as a function of response to intervention. To date, Pennsylvania has provided extensive training and technical assistance in the area of RTI methodologies, with specific emphasis on the use of rate of improvement (ROI) to inform instructional intensification and goal setting, and as part of the assessment used to determine a child’s response to intervention prior to and within the SLD determination process.
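The linear-regression approach mentioned above can be sketched in a few lines: the rate of improvement (ROI) is the ordinary-least-squares slope of a student’s progress-monitoring scores over time. The scores and units below are hypothetical, purely for illustration.

```python
# Sketch: estimating a student's rate of improvement (ROI) as the
# ordinary-least-squares slope of weekly progress-monitoring scores.
# The scores are hypothetical, not drawn from the case studies.

def roi_least_squares(scores):
    """OLS slope of scores against week numbers 1..n (units per week)."""
    n = len(scores)
    weeks = range(1, n + 1)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

weekly_scores = [12, 14, 13, 16, 17, 19, 18, 21]  # e.g., digits correct in 2 min
print(round(roi_least_squares(weekly_scores), 2))  # → 1.21
```

The same slope is what a spreadsheet or a progress-monitoring system reports as ROI; the decision rules discussed above differ mainly in how they summarize these points, not in the underlying data.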
Pennsylvania’s graphic above illustrates the four components of the definition of SLD. The resources below explain how to use the data collected within a tiered system of support, or otherwise, to inform decisions about a given student’s eligibility for special education under the category of SLD within the context of state and federal regulations.
MTSS, RTI, and Specific Learning Disabilities (SLD) Determination in Pennsylvania - FAQs
PaTTAN - MTSS, RTI, and SLD Determination in Pennsylvania FAQs
This publication lists frequently asked questions and answers about MTSS, RTI, and SLD Determination in Pennsylvania.
Tools and Resources for Using an RTI Approach:
As indicated, the use of RTI methodologies helps practitioners determine the amount of growth that has occurred, that is, the progress a given student is making over time. There is growing consensus in the research that Student Growth Percentiles are reliable and valid indices of realized growth and may be utilized when making high-stakes decisions.
The ultimate accuracy of an RTI decision depends upon integrated instructional adjustments, repeated assessment with correctly chosen referents for evaluation, and the ruling out of potential threats to a valid decision (incorrect initial identification upon screening; technical error associated with the measure, such as low reliability or poor sensitivity; inadequate integrity of intervention implementation). The first decision that must be correct is the valid identification of students who are in need of intervention. Given that a technically adequate screening measure has been selected, the next most immediate threat to the validity of a screening decision is the base rate of risk within the sample of students being screened. When the base rate of risk is very high, as in the case of a class-wide learning problem, no screening measure can adequately identify the students who are truly at risk. In these cases, before the screening criterion can be applied, intervention trials must be delivered to introduce variation into the students’ scores. When class-wide intervention is used to change the base rate of risk in the sample, the screening can become technically useful and accurate in identifying those individual students who are truly in need of intervention.
Once individual students are identified as needing intervention, the next decision upon which the RTI validity rests is the correct selection of the intervention for the student. The use of standard protocol interventions has been shown to be effective, and some systems begin with a standard protocol, evaluate student progress, and then proceed to a more individually tailored intervention based upon student lack of growth in response to the standard protocol.
In fact, this is precisely how class-wide intervention is implemented according to the procedures described by VanDerHeyden and colleagues. Class-wide math intervention begins with a standard protocol. The progress of all students is monitored along with implementation integrity. Students who fail to grow at a pace comparable to their classmates and to reach an instructional target when the class as a whole has reached the mastery target are then recommended for individual intervention. When the individual intervention must be intensified through a problem-solving process, diagnostic assessment must be correctly carried out to identify the correct intervention targets, the correct intervention tactic, and the associated progress monitoring measures.
Various authors have written extensively about this process. Kovaleski and colleagues describe it as a “drill down” assessment necessary for assembling the right intervention, whereas Burns and colleagues refer to it as identifying the right “skill-by-treatment” interaction. In any case, the diagnostic assessment must reliably identify the intervention target skills and practices and verify the effect of the intervention on student learning before it is deployed in the classroom.
Once the individual intervention has been assembled, the most immediate likely threat to RTI validity is correct implementation of the intervention. To ensure correct implementation, teams must provide preliminary supports for intervention use (e.g., all needed materials, a written plan that details each observable step of the intervention, and in-class training for the teacher who will implement it). There should also be ongoing follow-up support to verify correct use of the intervention through in-class coaching.
It is only after these issues have been accounted for that teams should turn their attention to measuring the effect of the intervention on student response/growth; without doing so, growth alone (without reference to these other variables) is not interpretable. Importantly, growth may only be evaluated when the most pertinent threats to the RTI decision have been ruled out; in other words, when the team can be confident that the student required intervention, that the intervention was correctly matched to student need and improved performance, and that the teacher was supported to use the intervention with fidelity.
There are a number of ways to measure growth in response to intervention. Ideally, if teams intend to use a “typical” rate of growth against which to compare a given student’s progress, the conditions under which growth are measured should be as similar as possible (e.g., standard protocol intervention delivered to class or small group allows for students receiving the same intervention in the same context to be identified as growth-discrepant).
Benchmark criteria against which intervention success may be evaluated are a very useful alternative. For example, requiring that a student reach, given intervention, a level of proficiency that forecasts likely success in the core instructional environment is a rigorous RTI criterion that is functionally meaningful and easily interpreted by teams. An aimline can also be useful formatively during intervention to guide troubleshooting and adjustment, but as a means of evaluating intervention success, the aimline has the technical limitation of being susceptible to instability because of its association with the starting level of performance and the number of weeks allocated to intervention.
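To make the aimline’s limitation concrete, the sketch below computes an aimline from a baseline score, a goal score, and the number of weeks allocated; its slope is fixed entirely by those choices, which is the instability described above. The numbers are hypothetical.

```python
# Sketch: an aimline runs from the baseline score to the goal score over the
# intervention window. Its slope depends only on the starting level and the
# number of weeks allocated; change either and the expected trajectory changes.
# All values are hypothetical.

def aimline(baseline, goal, weeks):
    """Expected score at each week, from week 0 (baseline) to the final week."""
    slope = (goal - baseline) / weeks
    return [baseline + slope * w for w in range(weeks + 1)]

line = aimline(baseline=20, goal=44, weeks=12)  # slope of 2.0 per week
print(line[0], line[6], line[12])  # → 20.0 32.0 44.0
```

Shortening the window to 8 weeks with the same baseline and goal steepens the required slope to 3.0 per week, even though nothing about the student has changed, which is why the aimline is better suited to formative troubleshooting than to summative evaluation.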
Student Growth Percentiles provide the needed context to understand how a given student has grown by comparing that growth with the growth of grade-level peers, and they are popular referents in some RTI decision systems. These peers include students in the same grade who started with a similar scaled score and history of performance. Using a growth chart, practitioners can compare an individual student’s growth and achievement with peers’, which may help determine whether a dual discrepancy exists.
Measurement systems such as STAR, DIBELS, FastBridge, AIMSweb Plus, Spring Math, and others either have embedded or will be embedding the use of student growth percentiles to inform judgments of adequate or inadequate response to intervention (RTI) in the areas of reading and mathematics. Student Growth Percentiles (SGPs) are a widely accepted indicator of student progress used by many states for a variety of purposes, including instructional decisions and accountability reports. Though easily interpreted, SGP calculations are based on a sophisticated normative-growth analysis technique that both accounts for initial performance levels and provides appropriate context for evaluating whether students are growing at a typical rate. An SGP is determined by first calculating growth between the current test score and up to two previous scores, then comparing that growth to the growth of academic peers.
FastBridge, Group Growth Report
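As a rough sketch of the SGP idea (operational SGPs use quantile regression conditioned on up to two prior scores, which is more sophisticated than this), one can take the percentile rank of a student’s gain among the gains of academic peers who started at a similar level. All values below are hypothetical.

```python
# Simplified sketch of the SGP concept. Operational SGPs use quantile
# regression over up to two prior scores; here we just take the percentile
# rank of a student's gain among gains of similar-scoring peers.
# All data are hypothetical.

def simple_growth_percentile(student_gain, peer_gains):
    """Percentile rank (0-100) of the student's gain among peer gains."""
    below = sum(1 for g in peer_gains if g < student_gain)
    equal = sum(1 for g in peer_gains if g == student_gain)
    return 100.0 * (below + 0.5 * equal) / len(peer_gains)

peer_gains = [4, 6, 7, 8, 9, 10, 11, 12, 13, 15]  # similar-scoring peers' gains
print(simple_growth_percentile(5, peer_gains))  # → 10.0
```

A result near the 10th percentile, as here, would flag growth well below that of comparable peers, the kind of pattern the dual-discrepancy question asks about.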
The majority of students will respond adequately to intervention, but a small percentage will not. Before concluding that a student has responded inadequately, practitioners should discern whether there was fidelity of instruction and intervention, whether assessments were administered, scored, and interpreted accurately, whether a sufficient number of data points were collected, and whether the goals established for the student were high but realistic.
Checklist for Determining Response to Instruction and Intervention:
Was the intervention evidence-based (i.e., does it have the highest probability of producing desired results given its underlying, empirically validated methodology)?
Was the intervention implemented as intended, with attention to frequency, duration, and implementation accuracy (quality and quantity of dosage matched to need)?
Was a fidelity of intervention checklist used to confirm intervention integrity, and was intervention coaching support provided?
Were the assessments administered, scored and interpreted accurately?
Are there at least 9-12 data points to assess a TREND in the student’s response?
Is the student’s goal reasonable, neither too high nor too low, and does it make sense in relation to what is considered “adequate response” for the given student?
Practitioners then determine whether the student’s scores are improving and on track to reach the goal (maintain); improving but not on track to reach the goal within the current school year (increase intensity/dosage); or not improving (select a different intervention, conduct more assessment, and/or move to eligibility determination).
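The maintain/intensify/change logic in the final checklist item can be sketched as a simple decision rule; the labels are illustrative, not prescriptive.

```python
# Sketch of the three-branch decision rule from the checklist above.
# The return labels are illustrative, not official terminology.

def intervention_decision(improving, on_track):
    if improving and on_track:
        return "maintain"
    if improving:
        return "increase intensity/dosage"
    return "change intervention, assess further, and/or consider eligibility"

print(intervention_decision(improving=True, on_track=False))  # → increase intensity/dosage
```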
The case studies below walk through a student who evidenced “inadequate response” in a classroom setting without a class-wide math problem, and then inadequate response to intervention when there was a class-wide problem.
Middle School Case Study 1: “Failed RTI” in Case WITHOUT a Class-wide Math Problem
Tier 3 Problem Solving Team Meeting (Family, principal, classroom teacher, interventionist, school psychologist, guidance counselor, related service providers are at the table)
Family is at the table as part of the team and helps design plan with educators
Family helps implement a piece of the plan
School will print a PDF of student progress in Spring Math each week and send it to the family
This school uses multiple data sources. For additional resources, schools may wish to explore the National Center on Response to Intervention tools chart for screeners and progress-monitoring tools. In this case, Austin is identified as needing intervention. His classmates are thriving, and his performance indicates a high likelihood of math failure without intervention. The team identifies the following goals for Austin.
Austin’s Goals using Spring Math:
Austin will increase levels in intervention at least every 4 weeks with an intervention consistency score of at least 80%.
Austin’s post-intervention performance will be in the “Not At-Risk” range on the first goal skill for which he received intervention.
Austin will perform outside of the risk range on subsequent screenings.
To accomplish these goals, the team has elected to use Spring Math to guide Austin’s intervention in math. The first step for Austin will be to “drill down” and identify the correct intervention aligned to his specific learning needs, which is accomplished with brief assessment. The next step is to begin the intervention in the classroom. It will occur daily, require 15-20 minutes, and be delivered by the classroom teacher. The intervention includes procedures to build conceptual understanding and procedural knowledge, visual aids, games to promote fluency, and “think aloud” problems to help Austin create equivalent quantities and solve for unknown quantities. Antecedent supports for intervention use are provided to the teacher, including video demonstrations, troubleshooting checklists, and a local coach who will use the coach dashboard to know if and when in-class coaching support for the teacher is needed to ensure effective intervention delivery.
Intervention effects will be assessed weekly. Weekly assessment for children in Tier 3 intervention is essential so that adjustments to intervention implementation can be made to ensure upward growth for the student receiving the intervention. In Austin’s case, his performance is assessed on the targeted Intervention Skill, which is the starting target for his intervention (i.e., the most proximal prerequisite understanding that he lacks but that is necessary to mastering the Goal Skill).
In Austin’s case, intervention implementation is strong and he experiences adequate growth on the first two skills, reaching mastery for each within only a few weeks. He is slower to master the third Intervention Skill, but does reach mastery after 9 weeks of intervention.
Despite the content overlap between the intervention skills and the Goal Skill, Austin’s gains do not transfer or generalize to his Goal Skill performance. Even with strong intervention implementation and steady upward growth to mastery on the prerequisite skills, Austin does not experience sufficient growth on the Goal Skill. His rate of improvement on the Goal Skill is 0.2.
The team discusses his intervention data, summarized in the table above. There is converging evidence that Austin could be a candidate for intensified intervention services that might be provided through special education. He did not meet his intervention goals in RTI. His performance places him solidly in the at-risk category and he is likely to struggle in mathematics without continued, intensified support.
In Austin’s case, he meets the four criteria for eligibility for SLD.
Case Study Example 2: Failed RTI when there IS a Class-wide Problem
Let’s consider Austin again, but this time recognize that he is enrolled in a class that as a whole is at risk for mathematics failure. The team can consider the same data and write the same goals.
This time, however, the intervention plan will need to include class-wide intervention. Class-wide intervention is a powerful tactic that can very efficiently improve the performance of all students on immediate, proximal measures as well as on long-term, year-end test scores. Importantly for Austin and our case example, class-wide intervention also has the added benefit of making it technically possible to adequately identify individual students in need of Tier 3 intervention. In Austin’s case, Spring Math screening identified that there was a class-wide problem, recommended a class-wide intervention, and then guided the teacher to get started implementing it. The class-wide intervention requires 12-15 minutes per day to implement and generates a data point each week for monitoring progress and increasing intervention difficulty. Learning gains were rapidly apparent, and Spring Math tracked the growth of all students, recommending Austin for Tier 3 (individual) intervention after 4 weeks.
Here we can see Austin’s performance (blue line) in concert with the class median (red line). Austin remains in the risk range and has experienced weak growth, while his classmates have experienced upward growth and attained mastery of the skill targeted by the class-wide intervention.
Austin is recommended for Tier 3 intervention. Now Austin can proceed through individual intervention as he had in our previous case example.
Again, just as before, Austin experiences some growth with strongly implemented individual intervention that is aligned with his specific learning needs following a drill-down/diagnostic assessment directed by Spring Math. However, his growth does not generalize to the Goal Skill, despite strong content overlap. There is converging evidence (summarized below) that Austin may require intensified services such as could be provided through special education.
Austin meets the four eligibility criteria for SLD.
The reliability of progress-monitoring decisions depends on several factors:
The quality of the probes used: Use professional-grade probes from a respected vendor that maximize stability of measurement.
How many probes are administered during each assessment session: Three probes per session is better than one.
How many times per week the student is assessed: Multiple times per week is better than once.
How many weeks of assessment data are available? (The more the better)
If you assess once per week using one probe per session, you need 14 weeks of data to make a reliable decision about the student’s rate of growth/rate of improvement.
If you need to make a decision sooner than that, increase the number of probes per session or sessions per week.
Remember that progress monitoring starts in Tier 2 and increases in frequency in Tier 3.
Also, remember that these guidelines pertain to grade-level assessment, not instructional-level assessment.
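The “more weeks is better” guideline can be illustrated with the standard error of an OLS slope: for weekly probes at weeks 1 through n, the sum of squared deviations of the week numbers is n(n² − 1)/12, so the slope’s standard error shrinks rapidly as weeks accumulate. The residual standard deviation below is hypothetical.

```python
# Sketch: why more weeks of weekly probing yield a more trustworthy ROI.
# For probes at weeks 1..n, the standard error of the OLS slope is
# s / sqrt(n * (n**2 - 1) / 12), where s is the residual standard deviation
# of scores around the trend line. The s value here is hypothetical.
import math

def slope_standard_error(n_weeks, residual_sd):
    sum_sq = n_weeks * (n_weeks ** 2 - 1) / 12  # sum of squared week deviations
    return residual_sd / math.sqrt(sum_sq)

for n in (6, 10, 14):
    print(n, round(slope_standard_error(n, residual_sd=8.0), 2))
# → 6 1.91
#   10 0.88
#   14 0.53
```

Note how the uncertainty at 14 weeks is roughly a quarter of that at 6 weeks; adding probes per session or sessions per week reduces the residual noise per data point and achieves a similar effect sooner.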
For more information, see resources below.
ADDITIONAL TOOLS AND RESOURCES
RTI/SLD Fidelity Tool, February 2021
Dr. Jack Fletcher: Dyslexia Presentation: https://www.cde.state.co.us/coloradoliteracy/dr-jack-fletcher-dyslexia-presentation-3_16_17
Identifying Students with Learning Disabilities in a Response to Intervention Model: https://www.texasldcenter.org/videos/identifying-students-with-specific-learning-disabilities-in-a-response-to-i
NCLD’s RTI Leadership Forum – Jack Fletcher: https://www.youtube.com/watch?v=VzO__UvMO2k
Kovaleski, J. F., VanDerHeyden, A. M., & Shapiro, E. S. (2013). The RTI approach to evaluating learning disabilities. New York: The Guilford Press.
Response to Instruction and English Language Learners: https://nysrti.org/files/documents/resources/ell/rti__ell_-_thompson.pdf
Response to Intervention and English Language Learners: Instructional and Assessment Considerations: https://nysrti.org/professional-development/past-statewide-training/event:12-07-2009-8-30am-response-to-intervention-and-english-language-learners-instructional-and-assessment-considerations/
RTI: An Overview. The IRIS Center: https://iris.peabody.vanderbilt.edu/module/rti01/
National Research Center on Learning Disabilities: Responsiveness to Intervention in the SLD Determination Process: http://www.wrightslaw.com/info/rti.identify.sld.ncld.pdf
RTI Action Network: Practical Advice for RTI-Based SLD Identification: http://www.rtinetwork.org/rti-blog/entry/1/226
Reading Rockets: The Role of RTI in LD Identification: http://www.readingrockets.org/webcasts/4002
Perry A. Zirkel, University Professor Emeritus of Education and Law at Lehigh University: RTI and SLD: https://perryzirkel.com/publications/
Intervention Central: Chart Dog Graph Maker: http://www.interventioncentral.org/teacher-resources/graph-maker-free-online
PA’s Implementing Multi-Tiered Systems of Supports (MTSS) Fidelity Tool: Enhancing Response to Intervention (RTI): http://www.pattan.net/category/Resources/Instructional%20Materials/Browse/Single/?id=5a95bf04150ba0cc168b462f
Using Growth Models to Measure Child/Student Outcomes for State Systemic Improvement Plans: https://ideadata.org/sites/default/files/media/documents/2017-09/using-growth-models-to-measure-child.pdf
Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1–18.
Burns, M. K., Codding, R. S., Boice, C. H., & Lukito, G. (2010). Meta-analysis of acquisition and fluency math interventions with instructional and frustration level skills: Evidence for a skill-by-treatment interaction. School Psychology Review, 39, 69-83.
Burns, M. K., Riley-Tillman, T. C., & VanDerHeyden, A. M. (2012). RTI applications, Volume 1: Academic and behavioral interventions. New York: Guilford.
Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.
Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51(1), 19-57. doi:10.1016/j.jsp.2012.11.001
Codding, R., VanDerHeyden, A. M., Martin, R. J., & Perrault, L. (2016). Manipulating treatment dose: Evaluating the frequency of a small group intervention targeting whole number operations. Learning Disabilities Research & Practice, 31, 208-220.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.
Fletcher, J. M., Lyon, G. R., Fuchs, L. S., & Barnes, M. A. (2007). Learning disabilities: From identification to intervention. New York: The Guilford Press.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.
Good, R. H. (1990). Forecasting accuracy of slope estimates for reading curriculum-based measurement: Empirical evidence. Behavioral Assessment, 12, 179-193.
Jenkins, J. R., Graff, J. J., & Miglioretti, D.L. (2009). Estimating reading growth using intermittent CBM progress monitoring. Exceptional Children, 75, 151-163.
Kovaleski, J., VanDerHeyden, A. M., & Shapiro, E. (2013). The RTI approach to evaluating learning disabilities. New York, NY: Guilford.
Shinn, M. R., Gleason, M. M., & Tindal, G. (1989). Varying the difficulty of testing materials: Implications for curriculum-based measurement. The Journal of Special Education, 23, 223-233.
Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of methods. School Psychology Review, 18, 356-370.
VanDerHeyden, A. M., & Burns, M. K. (2005). Using curriculum-based assessment and curriculum-based measurement to guide elementary mathematics instruction: Effect on individual and group accountability scores. Assessment for Effective Intervention, 30, 15-31.
VanDerHeyden, A. M., McLaughlin, T., Algina, J., & Snyder, P. (2012). Randomized evaluation of a supplemental grade-wide mathematics intervention. American Educational Research Journal, 49, 1251-1284.
VanDerHeyden, A. M. & Witt, J. C. (2005). Quantifying the context of assessment: Capturing the effect of base rates on teacher referral and a problem-solving model of identification. School Psychology Review, 34, 161-183.