
Should You LoveCATs?

data and assessment Mar 30, 2023

Early literacy screening is one of the most important educational technologies available today. I believe districts are not taking advantage of the full opportunity available in universal screening when they choose to use Computer Adaptive Tests (CATs).

What is a CAT?

A CAT is essentially an achievement test used to identify risk status. Students take the test on a computer. The test draws from a bank of many, many items that have been ordered from easy to hard. Students are first presented with items somewhere in the middle of the difficulty scale, at a level indicated by their grade, or at a level based on a previous test score.

When the student responds correctly to an item or items, they are presented with more difficult items, based on the performance of other students who previously took the test. When the student responds incorrectly to an item, they are presented with easier items. Each student has their own unique path through the assessment.

The computer software calculates an estimate of the student's achievement level and offers recommendations for instructional topics, grouping arrangements, and risk status.
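The adaptive loop described above can be sketched in a few lines of code. This is an illustrative toy only, a simple up/down staircase over a difficulty-ordered item bank; commercial CATs such as MAP or Star use psychometric models (e.g., item response theory) rather than this rule, and the function and parameter names here are invented for the example.

```python
# Toy sketch of a CAT's adaptive item-selection loop (NOT any vendor's
# actual algorithm): correct answers move the student to harder items,
# incorrect answers to easier ones.

def run_cat(item_bank, start_index, answer_fn, num_items=10):
    """Walk a student through an item bank ordered from easy to hard.

    item_bank   -- list of items sorted by difficulty (easy -> hard)
    start_index -- where to begin (e.g., the middle, or by grade level)
    answer_fn   -- callable(item) -> True if the student answers correctly
    Returns a list of (item, correct) pairs: the student's unique path.
    """
    path = []
    index = start_index
    for _ in range(num_items):
        item = item_bank[index]
        correct = answer_fn(item)
        path.append((item, correct))
        # Correct answer -> step up to a harder item; incorrect -> easier.
        step = 1 if correct else -1
        index = max(0, min(len(item_bank) - 1, index + step))
    return path
```

Because each student's answers steer which items come next, two students can finish with very different item sequences, which is exactly why no two students share the same path through the assessment.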

Examples of CATs include MAP by NWEA, aReading by FastBridge, and Star Early Literacy by Renaissance.

Why CATs Aren't Useful for Early Literacy Screening

1. Not always accurate for screening

  • screening accuracy varies across measures
  • some CATs are not optimally accurate for finding students who are at risk
  • some CATs may be more accurate for identifying students who are predicted to be OK in the future
  • may or may not be as accurate as Curriculum Based Measures (CBM)

2. Not useful for planning instruction

  • reports often indicate a need for instruction in a skill when in fact the student may not have been presented with any items, or with very few items, that tested that skill
  • two students who have the same score may not have been presented with the same items or even have received items that measure the same skill area, and therefore don't have the same instructional needs

3. Not designed for progress monitoring

  • most authors and publishers correctly advise against using the test frequently
  • each time students are tested they are presented with a different set of test items
  • CBMs will have to be used for progress monitoring instead, which can result in additional training time and potential confusion for educators and parents
  • scores should not be used to form small groups

4. Not production responses

  • multiple choice items require lower-level selection responses rather than production responses
  • test items are sometimes different from what students are asked to do in class, so an inference has to be made from performance on the test to performance in the classroom

5. Not measures of the essential early literacy skills

  • the test item banks may include thousands of items representing every grade-level standard rather than focusing on the essential early literacy skills (phonemic awareness, phonics, vocabulary, fluency, and reading comprehension)

6. Not appropriate for pre-K and kindergarten

  • fine motor skills are required to use the computer keyboard or mouse

7. Not brief

  • testing can take students away from instruction for 15-30 minutes or more

8. Not standardized

  • each student has their own unique path through individualized test items

 9. Not criterion-referenced

  • scores are reported and interpreted relative to norms only, rather than including a criterion-referenced interpretation that provides the minimum expectation

10. Not easy to interpret

  • scores are statistically manipulated in a way that isn't transparent
  • scores are reported in a way that is difficult for most educators and parents to understand

11. Not able to aggregate across students

  • scores can't be aggregated for system-level planning and decision making related to curriculum, instruction, and resource allocation

12. Not given by teachers

  • teachers don't have the opportunity to sit with students and listen to them perform the essential early literacy skills

What To Use Instead

I believe universal screening offers the best opportunity for influencing future reading outcomes when the screening assessment is:

  • brief,
  • standardized,
  • reliable and valid,
  • criterion-referenced,
  • an indicator of the essential early literacy skills,
  • instructionally relevant,
  • inclusive of alternate forms for progress monitoring, and
  • predictive of future reading success.

CBMs such as Acadience Reading K-6 and DIBELS 8th Edition fit the criteria for universal screening and provide educators with the information they need to prevent reading failure and provide effective early reading intervention.
Disclosure: I worked for the authors of Acadience for twelve years and I occasionally contract with the authors and their publisher to provide training or to write content related to their assessments. I was not asked or paid by anyone to write this blog entry.

Dr. Stephanie Stollar is the founder of Stephanie Stollar Consulting LLC and the creator of The Reading Science Academy. She is a part-time assistant professor in the online reading science program at Mount St. Joseph University, and a founding member of a national alliance for supporting reading science in higher education.

You can follow Stephanie Stollar Consulting and the Reading Science Academy on Facebook, YouTube, Twitter, Instagram and LinkedIn, and contact her at [email protected].

⭐️ Get Dr. Stollar's free resources on the science of reading here!