ISI Research Lab

University of California, Irvine


Comprehension Monitoring in Children

August 27, 2018 by isilearn

By Elham Zargar

Have you ever read a paragraph, only to realize you have not understood it? What do you usually do to fix this problem? Do you find yourself rereading the paragraph?

Although you may not be consciously aware of it, as a skilled reader you likely monitor your reading. This means that you are constantly evaluating your understanding of the text as you read. And once you find yourself having difficulty understanding, you are likely to take steps to repair the problem, usually by rereading the passage more carefully, rereading certain parts of it, or looking up the definitions of words you might not know.

This phenomenon is called comprehension monitoring.

Comprehension monitoring is formally defined as the conscious and unconscious strategies used to (1) identify and (2) repair misunderstandings that might occur during reading (Connor et al., 2015).

Although comprehension monitoring may seem to be automatic, both aspects of it are likely to involve conscious metacognitive awareness. Metacognition (the ability to think about thinking) is not as easy as it may sound, especially for younger students who are still developing metacognitive skills. Word knowledge calibration, the ability to accurately judge whether one knows a word's meaning, is also a metacognitive skill.

To give you an idea of how assessing one’s own word knowledge (metaknowledge or word knowledge calibration) may be difficult for a young student, let’s examine the following excerpt from a word knowledge calibration assessment developed by Connor and colleagues (in review). This assessment provides students with a short passage about an event or scenario, which purposely includes a complex target word – likely an unfamiliar one for students in 3rd through 5th grade. After reading the passage, the assessment requires the students to answer a few questions about the target word, in order to assess their word knowledge calibration.

Here is an example, with “pursuing” as the target word:

Spot and Rover were two naughty dogs. They played all day and never behaved themselves. One day, they saw a little boy pursuing a ball. They ran after the ball, knocking the little boy over. The little boy said, “Bad dogs!” and wept when Spot and Rover ran away with his ball.

Now let’s take a look at how a third grader named Jose (not his real name) responded to the assessment questions about this word.

Based on his first response, Jose believes that he knows what pursuing means. However, after prompting him to define the word, we can see that Jose assessed his word knowledge incorrectly. Despite not knowing the correct definition, he believes that he knows the word's meaning. This means he is unlikely to make the right inferences, because he substitutes buying for the real meaning of pursuing, which is chasing. He will perhaps form a mental representation of the dogs snatching the ball out of the boy’s hands while he is buying it, when in fact the boy is not holding the ball at all but is chasing it! Such a cascade of incorrect inferences will seriously interfere with Jose’s comprehension of the text.

According to the National Assessment of Educational Progress, only 36% of fourth graders in the United States were at or above reading proficiency on the reading assessment in 2015. Reading comprehension is described as one of the most complex human activities. To understand a sentence, simply reading and understanding each of the words is not sufficient (Perfetti & Stafura, 2014). Proficient reading comprehension requires making simple inferences and drawing conclusions while making judgments and connecting parts of text (Kendeou, McMaster & Christ, 2016). Although there are several different skills necessary for proficient reading (e.g., word decoding, reading fluency, vocabulary knowledge, prior knowledge, and working memory), many students failing at reading comprehension lack effective comprehension monitoring skills (Connor et al., 2015; Cain, Oakhill, & Bryant, 2004).

Recent studies have used eye-movement analyses to examine comprehension monitoring in young students. This methodology reveals how students process text while reading, without relying on their still-developing metacognitive skills. Investigating comprehension monitoring and how students process text can help us understand why children succeed or fail at reading comprehension. Researchers have used eye-movement studies to examine moment-to-moment information processing in reading, and they have found that the amount of time a reader spends looking at a word or phrase is a good estimate of the mental effort needed to process it.

In a recent eye-movement study, 5th grade students were presented with sentence pairs, where the first sentence in each item introduced an event or action that was explained further in the second sentence. The second sentence contained either a plausible or implausible word in relation to the context (Connor et al., 2015). For instance,

Last week Kyle flew to visit his family in another city. The large plane/truck was spacious and quickly transported them.

In the second sentence, plane is a plausible word in relation to the verb of the first sentence, flew, while truck is its implausible counterpart. The study investigated how 5th grade students responded to the target word in the two conditions by analyzing their eye movements while reading. Two measures are usually examined: gaze duration and rereading time. A longer gaze duration (the amount of time the reader looks at a word on first encountering it) is an approximate measure of the first aspect of comprehension monitoring, detecting an inconsistency. A longer rereading time (the amount of time spent rereading a word) is a good measure of repairing a misunderstanding. The figures below show how students' eye movements differ for the sentence pair containing the plausible word (first image) versus the implausible word (second image). In the second image, the circles are larger and more frequent, meaning that students generally fixated longer on the implausible word “truck” because of the confusion its implausibility caused.
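To make the two measures concrete, here is a small, hypothetical Python sketch of how gaze duration and rereading time could be computed from a simplified fixation log. This is not the study's actual analysis code; the data format (a time-ordered list of word-index/duration pairs) and the function name are assumptions for illustration only.

```python
# Hypothetical sketch: computing gaze duration and rereading time
# for one target word from a time-ordered fixation log.
# Each fixation is (word_index, duration_ms).

def gaze_and_rereading(fixations, target):
    """Return (gaze_duration, rereading_time) in ms for one target word."""
    gaze = 0          # first-pass time: fixations on the target before leaving it
    reread = 0        # time on the target after the reader has moved away once
    entered = False   # has the target been fixated yet?
    left = False      # has the reader left the target after entering it?
    for word, dur in fixations:
        if word == target:
            entered = True
            if left:
                reread += dur   # a return visit counts as rereading
            else:
                gaze += dur     # still in the first pass
        elif entered:
            left = True         # reader moved off the target word
    return gaze, reread

# Example: the reader fixates word 5 twice, moves on, then comes back.
fixes = [(3, 180), (5, 250), (5, 120), (6, 200), (5, 300)]
print(gaze_and_rereading(fixes, 5))  # (370, 300)
```

Under this toy model, a reader who lingers on "truck" at first encounter inflates gaze duration (detecting the inconsistency), while a reader who jumps back to it later inflates rereading time (attempting a repair).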

The findings of this study revealed that all students, regardless of their literacy skills, generally spent more time looking at the implausible words than at the plausible ones. This supports the idea that noticing you’ve misunderstood the text doesn’t depend much on your literacy skills and seems to be mostly unconscious and automatic. On the other hand, the study also showed that students with stronger literacy skills spent significantly more time rereading the implausible target words, attempting to repair their misunderstanding. How students respond to text after noticing a misunderstanding therefore appears to be the critical factor predicting their proficiency in reading comprehension. This suggests that students struggling with reading comprehension may lack the skills needed to repair misunderstandings while reading.

Recommended solutions:

As with any other literacy lesson, teaching students to monitor their comprehension needs to be done systematically and explicitly. Promoting comprehension monitoring and the use of repair strategies can be done using a number of research-based methods. In addition to being encouraged to consciously monitor their understanding of the text, students can use the following explicit strategies:

Finding the main idea. Scaffolding younger students to identify the main ideas of a text can help them consciously monitor their understanding. Students can be taught to ask themselves simple “what” and “who” questions about the passage and to retell or write the story in their own words (Jenkins, Heliotis, Stein, & Haynes, 1987).

Summarization. This is similar to finding the main idea, but students are not prompted by simple questions to identify the ideas. Summarization requires readers to regularly demonstrate their understanding of the text by identifying the main ideas, and it also encourages students to attend to the higher-level meaning of the text, leading to increased comprehension (Bransford, Brown, & Cocking, 2000). Although younger students might not yet have the skills to summarize passages as they read, with explicit instruction and training they can learn to identify the main ideas of a text and generate high-quality summaries.

Generating questions. Generating deep-level questions encourages active engagement, as it prompts students to question the meaning of the text as well as their understanding of it, helping them identify and repair gaps (Rosenshine, Meister, & Chapman, 1996). Moreover, this promotes comprehension by encouraging students to use their prior knowledge, focus on the main ideas, and summarize key points while reading.

Word knowledge. Additionally, word knowledge plays a crucial role in comprehension. Children can be taught multiple word-learning strategies for when they encounter unfamiliar words. These include dictionary use; morphemic analysis, a strategy for inferring the meaning of a word by examining its meaningful parts (e.g., prefixes, suffixes, and roots); and contextual analysis, using context clues in the text to determine the meaning of a word.

When students are taught to monitor their comprehension using these strategies, they are more likely to identify their confusion and repair their misunderstandings while reading. Although it may seem difficult at first, practicing such strategies will give students stronger comprehension monitoring and reading comprehension skills.

References

Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school: Expanded edition. Washington, D.C.: National Academies Press.

Cain, K., Oakhill, J., & Bryant, P. (2004). Children’s reading comprehension ability: Concurrent prediction by working memory, verbal ability, and component skills. Journal of Educational Psychology, 96, 31–42.

Connor, C. M., Radach, R., Vorstius, C., Day, S. L., McLean, L., & Morrison, F. J. (2015). Individual differences in fifth graders’ literacy and academic language predict comprehension monitoring development: An eye-movement study. Scientific Studies of Reading, 19(2), 114-134.

Jenkins, J. R., Heliotis, J. D., Stein, M. L., & Haynes, M. C. (1987). Improving reading comprehension by using paragraph restatements. Exceptional Children, 54(1), 54-59.

Kendeou, P., McMaster, K., & Christ, T. (2016). Reading comprehension: Core components and processes. Policy Insights from the Behavioral and Brain Sciences, 3, 162-169. doi: 10.1177/2372732215624707

National Assessment of Educational Progress. (2015). The nation’s report card. Retrieved from http://www.nationsreportcard.gov/reading_math_2015/#reading/acl?grade=4

Perfetti, C., & Stafura, J. (2014). Word knowledge in a theory of reading comprehension. Scientific Studies of Reading, 18, 22-37.

Rosenshine, B., Meister, C., & Chapman, S. (1996). Teaching students to generate questions: A review of the intervention studies. Review of Educational Research, 66(2), 181-221.

Digital Promise: United2Read Partnership and Improving Early Literacy

May 3, 2018 by isilearn

New United2Read Partnership Sets Out to Improve Early Literacy Skills

MAY 2, 2018 | BY BOB GRAMMAR

Despite considerable resources spent on improving early literacy skills in the past 15 years, reading proficiency scores at the end of third grade are largely unchanged. Research shows that children who fail to read proficiently by the end of third grade face serious academic and life challenges. They are more likely to be retained a grade, to underachieve in mathematics and science, and are four times more likely to drop out of high school. Yet only 37 percent of children in the United States learn to read proficiently by the 4th grade, and reading proficiency drops to 22 percent for high-needs students and children of color.

To help address this gap, Digital Promise, Learning Ovations, MDRC, and the University of California, Irvine have received a five-year U.S. Department of Education grant to support the United2Read project. Over the course of the grant, the project will bring Learning Ovations’ Assessment2Instruction (A2i) professional support system to more than 100,000 students in at least 300 schools across the United States.

Click here to read more.

The Journey to Online Writing: Tools for Individualized Learning

September 26, 2017 by Carol Connor

By Jenell Krishnan

When I enrolled in “Technology in the Classroom” as a part of my Master’s in Curriculum and Instruction the year my public school district adopted a one-to-one device program, I was hopeful. But after that first class my interior monologue went something like this: “I’m a teacher, not a technician!” “How am I supposed to meet all my students’ learning needs and use technology?” “If I don’t know how to use it, how can I expect my students to use it?” “What if…what if…what if?”
But after this self-doubt waned, I committed myself to a journey in search of answers to an important question.
“How can instructional technologies support my high school students’ learning in individualized ways?”
And in my pursuits of answers to this question, my “what ifs” of uncertainty became “what ifs” of possibility.

As a former ELA teacher in a 21st-century classroom equipped with various technologies, I learned to play many roles. Yes, I still provided direct instruction (think sage-on-the-stage), but I also became a flexible technician, digital literacy coach, and writing collaborator (more like a guide-on-the-side). I came to know and use a few digital tools that helped me better meet all my students’ learning needs. Once I recognized that some instructional technologies were making my practices more efficient (e.g., Edmodo and Poll Everywhere), as well as more inclusive (e.g., backchanneling; Krishnan & Poleon, 2013), I was ready to explore other technologies that would help me meet my students’ widely differing needs as writers.

Using my Professional Learning Network, both online and off, I started to vet the many resources available for writing instruction. Some free tools appeared engaging and student-centered but seemed to only meet the goal of using technology. Because I was focused on student learning and outcomes, I questioned how much value these tools would add. Would the time it would take to introduce the tool be worthwhile? Would it help my students’ learning in their other classes? Would they ever use the tool again?

A Tool for Individualized Student Research

This was not my reaction when I learned about Noodletools, a program designed by educators to support online research and online writing. At this point, I think it’s important to stress that online tools are only a support for evidence-based instructional practices. I must also note that it is crucial to understand each student’s digital literacy skills before introducing these tools into classroom practices. Through regular reading and writing tasks designed to develop students’ background knowledge and literacy skills (including digital literacy), I knew my 11th graders were ready to tackle a literature review scaffolded by Noodletools. Because this tool provides tailored scaffolding for each student’s pre-writing needs, I was empowered to encourage each student to choose a topic aligned with their own interests, an instructional practice that supports student writing motivation (Bruning & Horn, 2000).

Noodletools’ Dashboard helps students stay organized and focused during searches for scholarly research articles and while engaging in research writing.

Users are prompted to set writing goals (i.e., “To-do items”), a practice that aligns with recommendations in IES’ Educator’s Practice Guide for Teaching Secondary Students to Write Effectively—developed by experts in the field of writing pedagogy and writing research. Users can also save their research question(s), so each student is reminded of their specific research purpose each time they log in.

The digital notecards page asks students to differentiate between “Direct Quotation,” “Paraphrase,” and “My Ideas.” This is an important feature because some students may plagiarize unintentionally, due to a lack of awareness. I recommend encouraging students to write their in-text citation directly in each digital notecard to avoid this problem. Students can construct their writing outline or they can drag and drop notecards into an outline that they have previously created.

Areas in their outline that do not feature notecards can serve as a visual cue to students that they need to continue their search for source-based evidence. This represents both “Goal Setting” and “Planning,” two important elements of the writing process (p. 7). Once students complete the outline and develop notecards that address components of their outline, they will have a solid blueprint from which to start drafting.

From High School to College

But what impact did this technology have on students’ future source-based writing? Was it worth the trouble of learning how to use this platform? For one student, the answer was a resounding yes! In an email to me, this student, now majoring in Human Services, wrote:

“Our professor was truly impressed with the quality of work all of your former students had not only in their writing, but also in their research skills. And yes, I still use Noodletools to this day to help me out!”

Although anecdotal, this suggests that Noodletools has the potential to support students’ research writing at the college level and reflects efforts to support college students’ writing development through principles of Universal Design for Learning (Gradel & Edson, 2009).

The student success and improved motivation that I witnessed by using Noodletools to scaffold individual student research projects piqued my curiosity about other digital programs that support students’ writing development in individualized ways. This led me down the path to Automated Writing Evaluation (AWE) software.

Tools for Individualized Writing Feedback

AWE programs evaluate each student’s draft and some programs even provide formative writing feedback in real-time. Although the feedback algorithms are not perfect, there is still utility in these fallible tools. Most clearly, the instant feedback given to each draft is not possible during paper-based writing instruction.

Learn more about Revision Assistant, an AWE program, by watching this brief video.

In my experience, AWE encouraged students to write and revise, a crucial and often underappreciated component of the writing process. When they received automated feedback, I encouraged my students to think critically about the suggestions. They were coached to ask themselves, “Does this feedback help me meet my writing goals, or should I choose to do something different?” In turn, they became writing detectives, critically rereading their own words, and each other’s, in new and exciting ways.

One surprising consequence of using an AWE program during writing instruction was the unsolicited conversations sparked between writers. Students talked willingly about their writing in ways I had not seen previously. These peer-to-peer learning opportunities, along with the support of AWE, helped my students tackle significant revisions as well as minor edits in grammar and mechanics. Because of these revision efforts, I became less of a copy editor for my students’ writing and forged a path marked by more meaningful writing feedback.

Supporting Struggling Writers

I witnessed the development of one particular student’s writing as she took up what Geertz (1988) calls a “writerly identity” (p. 9). The writing support from all her teachers, the individualized learning affordances of the AWE program, and her own commitment to writing improvement all contributed to this student’s writing journey.

Julia (a pseudonym) entered 11th grade with 4th-grade-level writing skills. This likely contributed to her struggle with grade-level writing assignments. One writing task posed a significant problem, yet she would need to write this type of essay as part of the (former) New York State Regents examination, a prerequisite for high school graduation.

Click here to see Julia’s writing development from her initial essay in September to the last one in June when she used ETS’ Criterion, an AWE program, to support her revisions. Despite lingering concerns, Julia’s final piece represents many improvements.

Julia’s development of ideas and use of textual evidence are more apparent. She included a thesis statement, discussed her reasoning behind this claim, and demonstrated emerging use of punctuation. Ultimately, Julia’s progress is an example of how automated feedback can support individualized writing instruction, feedback, and student success.

In general, my classroom experience with AWE reflected the results of one study, which reported that AWE “encouraged more revision” and also saved teachers time (Warschauer & Grimes, 2008, p. 22).

Final Thoughts

In writing this, my hope is to inspire teachers to explore evidence-based instructional technologies that have the power to support student writing development in individualized ways. Whether it is a tool students can take with them to college or one that helps struggling writers practice the basics, teachers who take this road might make all the difference. 

Notes

Images used with permission from Noodletools, Inc.

References

Bruning, R., & Horn, C. (2000). Developing motivation to write. Educational Psychologist, 35(1), 25-37.

Geertz, C. (1988). Works and lives: The anthropologist as author. Stanford University Press.

Gradel, K., & Edson, A. J. (2009). Putting universal design for learning on the higher ed agenda. Journal of Educational Technology Systems, 38(2), 111-121.

Graham, S., Bruch, J., Fitzgerald, J., Friedrich, L., Furgeson, J., Greene, K., Kim, J., Lyskawa, J., Olson, C. B., & Smither Wulsin, C. (2016). Teaching secondary students to write effectively (NCEE 2017-4002). Washington, DC: National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of Education. Retrieved from the NCEE website: http://whatworks.ed.gov.

Grimes, D., & Warschauer, M. (2010). Utility in a fallible tool: A multi-site case study of automated writing evaluation. Journal of Technology, Learning, and Assessment, 8(6). http://www.jtla.org.

Krishnan, J., & Poleon, E. (2013). Digital Backchanneling: A strategy for maximizing engagement during a performance-based lesson on Shakespeare’s Macbeth. Teaching English with Technology, 13(4), 38-48.

Pecorari, D. (2003). Good and original: Plagiarism and patchwriting in academic second-language writing. Journal of Second Language Writing, 12(4), 317-345.

Warschauer, M., & Grimes, D. (2008). Automated writing assessment in the classroom. Pedagogies: An International Journal, 3(1), 22-36.

Promoting Advanced Literacy in Middle School with the Word Generation Curriculum

September 11, 2017 by Carol Connor

By Karen Taylor


How do you take middle school literacy improvement efforts beyond the English-language arts classroom? Several sets of recent standards encourage literacy learning across content areas (Common Core State Standards, C3 Framework, and NGSS), but how can middle school teachers work together to promote their students’ literacy?

WordGen Weekly is an interdisciplinary, supplementary curricular resource for middle schools that want to foster their students’ academic language and argumentation skills. It was developed through a collaboration among the Strategic Education Research Partnership (SERP), Boston Public Schools, and other districts in Massachusetts and Maryland, under the direction of Catherine Snow at Harvard University. Numerous foundations supported the development of the original WordGen Weekly series for grades 6-8. The U.S. Department of Education’s Institute of Education Sciences later supported the development of three additional Word Generation programs (WordGen Elementary, Science Generation, and Social Studies Generation) through Reading for Understanding grants.

Word Generation is now used by thousands of teachers across the U.S. In a typical WordGen Weekly unit centered on a discussable topic (for example, Cloning: Threat or Opportunity?), students learn relevant academic vocabulary words, and they learn about the controversial issue through a math activity, a science activity, and then a debate in social studies class. Classroom discussion is emphasized throughout the lessons. At the end of the week, students are challenged to apply higher order thinking in an argumentative writing piece where they synthesize their position on the topic.

These learning activities may sound time-consuming, but the organization of WordGen makes it fairly easy to implement. Students spend about 15-20 minutes per day using the program; the math, social studies, and science teachers each lead one of those segments per week, while ELA has two segments per week to accommodate the writing activity. (However, some schools choose to configure the time differently.)

A key ingredient to smooth school-wide implementation? A collaborative school culture already in place, although trying out WordGen could be a great way to start on the path of organizing teacher teams for professional learning.

WordGen has been the focus of a number of research studies and articles. For example, some of them focus on academic vocabulary, some address disciplinary literacy, and some of them point to the quality of classroom discussion as a promising instructional practice.

Considering the developmental period of early adolescence in relation to literacy learning, noted reading researcher Jeanne Chall put it this way: once students have cracked the code of learning to read, they can begin to move to the next level, reading to learn (Chall, 1983). From an education policy standpoint, a great deal of attention has been dedicated to reading instruction in the primary grades (K-3). In recent years, however, the subject of adolescent literacy has also gained traction (see the Carnegie report, Reading Next).

WordGen is a promising example of a middle school literacy resource that has emerged in the wake of Reading Next. And although programs such as WordGen and the newer educational standards emphasize literacy across subject areas, the issue of adolescent literacy has a long history (see this article by Vicki Jacobs for that broader context).

In a lesson video of the unit on secret wiretapping, Mr. Buttimer asks his students, “So why do some people think that secret or covert wiretapping is a bad idea? Why are people opposed to it?” A student responds, “Because, well it says (referring to the text in her WordGen notebook), they think wiretapping violates a person’s right to privacy.” In this brief conversational turn, Mr. Buttimer begins a discussion about the perspectives surrounding secret wiretapping, and later the students adopt various positions on the issue in order to have a classroom debate.

WordGen is freely available online.

Notes

Permission to use the images in this post was granted by Strategic Education Research Partnership in accordance with the following Creative Commons License.

References

Biancarosa, G., & Snow, C. E. (2004). Reading Next-A Vision for Action and Research in Middle and High School Literacy: A Report from Carnegie Corporation of New York. Washington DC: Alliance for Excellent Education.

Chall, J. (1983). Stages of Reading Development. New York: McGraw-Hill.

 
