Assessment Battery
Pre-testing:
Standardized Tests:
1. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Letter-Word Identification: Participants read aloud a list of increasingly difficult words.
2. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Passage Comprehension: Participants silently read passages and fill in the missing word.
3. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Word Attack: Participants read aloud a list of decodable nonwords of increasing difficulty.
4. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtests: General Information A and B: Participants answer questions about common objects.
5. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Picture Vocabulary: Participants are shown pictures of objects and asked to provide names.
6. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Reading Fluency: Participants are given 3 minutes to read a list of statements and decide if they are true or false.
7. Comprehensive Assessment of Spoken Language (CASL): (Carrow-Woolfolk)
Subtest: Grammatical Morphemes: Participants complete word analogies that have a morphological relationship.
8. Clinical Evaluation of Language Fundamentals – 4th Edition (CELF-4): (Semel, Wiig, & Secord)
Subtest: Understanding Spoken Paragraphs: Participants answer questions about passages orally presented by the examiner.
9. Comprehensive Assessment of Spoken Language (CASL): (Carrow-Woolfolk)
Subtest: Inference: Participants answer questions based on oral scenarios presented by the examiner. Because part of the information is omitted, participants must use background/world knowledge to infer the missing information.
10. Clinical Evaluation of Language Fundamentals – 4th Edition (CELF-4): (Semel, Wiig, & Secord)
Subtest: Recalling Sentences: Participants repeat sentences of increasing length and grammatical complexity.
11. Reading Inventory and Scholastic Evaluation (RISE): (SERP Institute)
This computer-administered test contains six subtests measuring word recognition/decoding, vocabulary, morphology, sentence processing, efficiency, and reading comprehension.
12. Lexia Rapid Assessment: (Lexia Learning Inc.)
This computer-adaptive test assesses word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension.
Motivation Surveys:
13. Intrinsic Motivation Inventory (IMI): (Ryan, 2002)
Participants are asked questions pertaining to their intrinsic motivation toward reading.
14. Reading Motivation Scale (RMS): (Guthrie & Wigfield, 2009)
Participants are asked questions pertaining to their motivation for, breadth of, and depth of reading.
15. Expectancy Value Questionnaire (EVQ): (Eccles & Wigfield, 1995; 2002)
Assesses participants’ cognitive appraisal of their expectancy for success and their affective evaluation of the learning target (i.e., reading).
“In-House” Developed Questionnaires:
16. Demographic Survey:
Participants are asked questions pertaining to their race, gender, age, country of birth, English language status, and educational history.
17. Computer Familiarity:
Participants are asked questions about their computer knowledge and use.
18. Reading Practices:
Participants are asked questions pertaining to their frequency of reading different types of print.
Examples of Experimental Tasks:
19. Letter-Sound Identification Task (Sound Symbol Test—Letter-Sound and Sound Combinations subtest; Lovett, Borden, DeLuca, Lacerenza, Benson, & Brackstone, 1994; Lovett, Lacerenza, Borden, Frijters, Steinbach, & De Palma, 2000): Participants are shown single letters or letter combinations and asked to give the sound the letter or letters make.
20. Challenge Words Task (Lovett et al., 1994; Lovett et al., 2000): Participants are shown multisyllabic words of increasing difficulty and asked to read them aloud.
Post-testing:
Standardized Tests:
1. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Letter-Word Identification: Participants read aloud a list of increasingly difficult words.
2. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Passage Comprehension: Participants silently read passages and fill in the missing word.
3. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Word Attack: Participants read aloud a list of decodable nonwords of increasing difficulty.
4. Woodcock-Johnson-III NU: (Woodcock, McGrew, & Mather)
Subtest: Reading Fluency: Participants are given 3 minutes to read a list of statements and decide if they are true or false.
5. Comprehensive Assessment of Spoken Language (CASL): (Carrow-Woolfolk)
Subtest: Grammatical Morphemes: Participants complete word analogies that have a morphological relationship.
6. Clinical Evaluation of Language Fundamentals – 4th Edition (CELF-4): (Semel, Wiig, & Secord)
Subtest: Understanding Spoken Paragraphs: Participants answer questions about passages orally presented by the examiner.
7. Comprehensive Assessment of Spoken Language (CASL): (Carrow-Woolfolk)
Subtest: Inference: Participants answer questions based on oral scenarios presented by the examiner. Because part of the information is omitted, participants must use background/world knowledge to infer the missing information.
8. Reading Inventory and Scholastic Evaluation (RISE): (SERP Institute)
This computer-administered test contains six subtests measuring word recognition/decoding, vocabulary, morphology, sentence processing, efficiency, and reading comprehension.
9. Lexia Rapid Assessment: (Lexia Learning Inc.)
This computer-adaptive test assesses word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension.
Motivation Surveys:
10. Intrinsic Motivation Inventory (IMI): (Ryan, 2002)
Participants are asked questions pertaining to their intrinsic motivation toward reading.
11. Reading Motivation Scale (RMS): (Guthrie & Wigfield, 2009)
Participants are asked questions pertaining to their motivation for, breadth of, and depth of reading.
12. Expectancy Value Questionnaire (EVQ): (Eccles & Wigfield, 1995; 2002)
Assesses participants’ cognitive appraisal of their expectancy for success and their affective evaluation of the learning target (i.e., reading).
13. Working Alliance Inventory: (Horvath, 1981, 1982; revised by Tracey & Kokotovic, 1989)
Participants are asked questions about their relationship and interactions with their teacher. The survey measures participants’ perceptions of the collaborative relationship between themselves and their teacher.
“In-House” Developed Questionnaires:
14. Computer Familiarity:
Participants are asked questions about their computer knowledge and use.
15. Reading Practices:
Participants are asked questions pertaining to their frequency of reading different types of print.
16. Exit Interview:
Participants are interviewed about their experiences in our intervention.
Examples of Experimental Tasks:
17. Letter-Sound Identification Task (Sound Symbol Test—Letter-Sound and Sound Combinations subtest; Lovett, Borden, DeLuca, Lacerenza, Benson, & Brackstone, 1994; Lovett, Lacerenza, Borden, Frijters, Steinbach, & De Palma, 2000): Participants are shown single letters or letter combinations and asked to give the sound the letter or letters make.
18. Challenge Words Task (Lovett et al., 1994; Lovett et al., 2000): Participants are shown multisyllabic words of increasing difficulty and asked to read them aloud.