
Wonders Approaching Level Weekly Assessment (given bi-weekly)

I used the Approaching Level Weekly Assessment to measure students' reading comprehension within the district-mandated curriculum. Students were given these assessments on a bi-weekly basis. Each assessment contained ten multiple-choice questions about a cold passage (a passage students had never read) that connected to the weekly focus and essential question. I chose these assessments as one piece of data because they aligned with the district curriculum and provided quantitative data on students' comprehension of a related passage. Because the passage was a cold read, it gave a true snapshot of students' comprehension after seeing a text for the first time, which told me more about individual comprehension than a warm read we had read and discussed all week. I used the data I gathered to plan for upcoming bi-weekly comprehension checks by monitoring which students struggled with fluency; these students had the text, questions, and answer choices read aloud to them, because I did not want difficulty with fluency and decoding to prevent them from demonstrating their comprehension of the text. I also used these bi-weekly comprehension checks to monitor student progress during my study, and the ongoing data informed the questions I planned for my guided reading groups. For example, when students demonstrated mastery of a particular comprehension questioning type, I moved their small-group focus to a different questioning type while continuing to review those they had mastered. Students who continued to struggle from week to week reviewed the same questioning type in small groups with additional scaffolding before another questioning type was added into the mix.

Fountas & Pinnell Benchmark (December pre-test and March post-test)

Fountas and Pinnell is the district-mandated fluency and comprehension assessment in my school district, and much of my research drew on Fountas and Pinnell's findings. Because I focused on Fountas and Pinnell's three levels of questioning throughout my research, I used their benchmarking texts and questioning format for my pre- and post-tests. Since my students had completed Fountas and Pinnell benchmarking in the fall, and in previous grades, they were familiar with the assessment method. This allowed me to keep a level of consistency and use common language in my questioning throughout my action research, both in assessments and in daily instruction. I chose the Fountas and Pinnell Benchmark format for my pre- and post-tests because it gave me a consistent platform for comparing data and results; because I used the Fountas and Pinnell materials and questioning format, the texts and questions remained consistent in their difficulty and levels of cognitive thinking across administrations. During the assessment, students were asked questions of all three questioning types: Within the Text, Beyond the Text, and About the Text. Students were given a leveled text to read, and after reading they were asked comprehension questions about the text. As I asked questions about the text, I scribed student responses on a recording sheet. If students had misconceptions or responded "I don't know," I probed further and modeled thinking through the question for the student (as long as doing so did not compromise any other questions). I assessed comprehension by rating each response on a 0-3 scale, with a level 3 response being the most accurate and detailed response that truly demonstrated understanding. I then averaged the response scores in each questioning section to determine the comprehension score for that section. Students could score a maximum of 9 points on the comprehension assessment, as there were 3 points possible per questioning type: Within the Text, Beyond the Text, and About the Text. Based on students' pre-test data, I planned to focus primarily on the types of questions where students demonstrated the highest need. I compared the pre-tests of the students within each of my guided reading groups and geared my first sets of questions toward the questioning type with the highest need. I continued to use my pre-test data, along with the data I collected through weekly observations and anecdotal notes, to guide my instruction.
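To make the scoring arithmetic concrete, the short sketch below walks through a hypothetical set of ratings (the numbers are illustrative only, not actual student data): each response is rated 0-3, the ratings within each questioning type are averaged, and the three section averages are combined for a total out of 9, as described above.

```python
# Minimal sketch of the scoring arithmetic described above.
# The ratings here are hypothetical, invented for illustration only.
from statistics import mean

ratings = {
    "Within the Text": [3, 2, 3],
    "Beyond the Text": [2, 1, 2],
    "About the Text":  [1, 2],
}

# Average the 0-3 ratings within each questioning type (max 3 per section).
section_scores = {section: mean(scores) for section, scores in ratings.items()}

# Combine the three section averages for the overall score (max 9).
total = sum(section_scores.values())

for section, score in section_scores.items():
    print(f"{section}: {score:.1f} / 3")
print(f"Comprehension score: {total:.1f} / 9")
```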

Anecdotal Notes

Each week I also kept a log of anecdotal notes to track qualitative data on how students' reading comprehension was progressing. For each guided reading group I prepared targeted questions in the Within the Text, Beyond the Text, or About the Text style. The questions were based on the book the group was reading, and I recorded student responses to those questions in the log. Each week students read a new book and were asked new questions; I asked an average of five scripted questions per story. When students responded, I rated each response on a 0-3 scale, with a level 3 response being the most accurate and detailed response that truly demonstrated understanding. I structured my scoring on a 0-3 scale to align with the Fountas and Pinnell model and maintain consistency while evaluating my students' comprehension each week; using the same scale also allowed me to compare their pre-test and post-test data directly with my weekly assessments. I planned questions for each group around just one of the questioning levels (within, beyond, about) each week, and when I planned my questions I used the common language from Fountas and Pinnell. This was best for my students because they became familiar with the language and were no longer intimidated by this style of questioning; they heard the questioning every day, so it became part of our daily learning rather than something they heard only three times a year for benchmark assessments. Based on student responses each week, I adjusted my questioning for each guided reading group depending on the level of their responses for each level of questioning. For example, if I observed one of my groups struggling with Beyond the Text questions, I made sure to target that type of questioning the next time I met with that group. During the last week, I asked all groups a mixture of Within the Text, Beyond the Text, and About the Text questions to evaluate their progress across all questioning levels.


Climate Survey

I was interested in learning about my students' attitudes toward our learning. I chose this assessment because surveys can be a powerful tool for anonymously evaluating student opinions and for self-reflecting on my teaching. Additionally, when students feel safe and like their teacher, they are more likely to take risks and work harder in the classroom. This survey asked students 12 multiple-choice questions, with response options of strongly agree, agree, disagree, and strongly disagree, plus 2 written-response questions. I had students complete the survey on paper rather than online because logging students onto the computers and getting them into the online survey would have wasted valuable instructional time. It was more work for me, since I had to enter all 21 of my students' responses into the online survey myself, but it was worth it to save the instructional time. The survey helped drive and improve my instruction throughout my action research. I viewed responses of "agree" and "strongly agree" as positive data. Statements students answered with "disagree" or "strongly disagree" marked areas I needed to improve as an educator in order to cultivate a classroom environment that promoted safety and was student-centered. For instance, the statements "I have lots of friends in the classroom," "Students in my class are friendly," "I behave in the classroom," "Other students behave in the classroom," "I like school," and "I have fun learning" were answered with a "disagree" or a "strongly disagree" by at least one student. I implemented new behavior management strategies to help my students make better choices and improve their behavior so we could get more out of instructional time. Based on this data, I also tried to incorporate new, fun learning opportunities to engage students in our learning.


