IPI Categories

The development of the IPI Categories began in 1995-96 while I was the director of the Missouri Center for School Improvement at the University of Missouri.  Bryan Painter, a Ph.D. graduate assistant who worked with me in the Missouri Center for School Improvement, and I were intrigued with the notion of creating a process that could categorize learning experiences and student engagement during class time.  While I was a graduate assistant at the University of Nebraska in the early seventies, I worked on a project that quantified what students and teachers did into 34 categories. I was impressed with the data we collected in that project, but the system seemed too complex to be of practical value to schools on an ongoing basis.  Bryan and I reviewed the literature and concluded we would have to develop our own system because we did not find a process that matched our goals.  Even as recently as 2010, when the Southeast Regional Educational Laboratory searched the nation for viable strategies for studying cognitive engagement, the IPI stood alone as a unique approach. 

As Bryan and I developed the categories and process for collecting the data, we started with a focus on what teachers did in the classroom, but that clearly was not an adequate way to understand what students were doing.  We then focused on the nature of the instructional practices in which students were engaged, relying on the research that some practices were more productive than others.  We stayed with that focus for a while, hence the original name of the IPI.  Eventually, it became evident that describing instructional practices would not be adequate, because the same instructional practice might have very different impacts on what students did, how they were learning, and how they were thinking within and across classrooms.  The shift to a focus on how students were engaged, and more specifically on how students were thinking while engaged, seemed a natural evolution.  Bryan and I had the opportunity to use the IPI Categories to collect data for both research and school improvement between 1996 and 1998, and I continued to collect data in another school improvement project from 1998 to 2000.  I also had the opportunity to use the categories to collect research data as part of a national study of highly successful middle level schools from 2000 to 2002.  By that time, it was clear that the IPI could be of value to schools as a part of their school improvement initiatives.  It was also clear that the IPI could be a process to foster instructional change and that the longitudinal data from the IPI Process could be used to assess the impact of other initiatives designed to improve student academic success throughout a school.   

The value of the IPI Categories seemed to be in their simple, yet comprehensive nature.  The IPI Categories are simple enough to be used as the basis for accurate data collection, complex enough to distinguish four valuable forms of student cognitive engagement, and broad enough to cover all instances of student engagement that might occur as part of class/learning time.   And as we suspected in the mid-nineties as we began to finalize the category descriptions, the subsequent research about the relationships between the presence, or lack thereof, of the categories and student academic success has been consistent with the existing research and literature on learning of the past two decades.  As noted in detail elsewhere on this website, our IPI research has paralleled that of others, particularly regarding the importance of higher-order/deeper learning (IPI Categories 5 and 6), the importance of overall engaged time (Categories 2 through 6), and the powerful impact of student disengagement (Category 1) and teacher disengagement (Category 2) during class time.  Our research has also affirmed that the power of the IPI is in the process: how a faculty collects the data and how they collaboratively study it.  That also parallels the existing knowledge base about meaningful, sustained school improvement initiatives and collaborative professional growth. 


The six categories run the gamut from student disengagement to student engagement in higher-order/deeper thinking.  However, they are neither a continuum nor a hierarchy.  Categories 5 and 6 are not inherently better than Category 3 or Category 4; they are just distinctly different.  Throughout the course of a lesson, a learning unit, or a sequence of curricular objectives and related content, students generally need learning experiences in Categories 5 and 6, AND Categories 3 and 4.  Each category is a distinct way of codifying how students are engaged in learning during class time.  The classroom observation is focused on the students and their thinking, not the teacher or the instructional activity in which the students are engaged.  Over the years we have found that the six categories effectively cover the range of how students are engaged, probably due to their simplicity. 

Categories 5 and 6 are coded when students are engaged in higher-order/deeper (HO/D) thinking.  In the IPI process, HO/D is defined as times when the students are engaged cognitively in learning experiences that require analysis; critical thinking; decision making from analysis; reflection, goal setting, and strategizing from analysis; evaluation, conclusion, and synthesis from analysis; and creative and innovative thinking.  The primary distinction between a Category 5 and a Category 6 code is whether or not the students are collaborating verbally as an integral part of their higher-order/deeper thinking.  All forms of HO/D will be coded either 5 or 6.  Categories 5 and 6 (HO/D) have typically ranged between 15 and 20 percent of total data as a combined average of all grade levels K-12.  Interestingly, and not surprisingly, the percent of HO/D has diminished in recent years, moving from a range of 20-25% prior to the mid-2000s to a range of 15-20% today.  By school level, Categories 5 and 6 are highest in vocational/technical/career schools and in elementary schools and lowest in comprehensive high schools. 

Category 4 is easily distinguishable because the teacher is leading the learning experience and the students are attentive and complying with the teacher's leadership.  The students are not engaged in HO/D forms of thinking.  Category 4 typically averages between 30% and 40% of total data across all grade levels.  

Categories 2 and 3 are similar based on how the students are engaged in thinking but differ based on the role of the teacher during the learning experience.  For both categories, students are engaged in non-HO/D learning.  In the IPI process that means the students are engaged cognitively in recall and memorization; basic fact finding; simple understanding; knowledge/recall of facts, details, and elements; knowledge/recall of processes, algorithms, methods, and strategies; and practice to embed/internalize skills or processes.  In essence, the students are engaged in work that is generally a necessary form of learning, but the work is not HO/D.  It is a form of engagement that we often describe today as "surface" learning.  The distinction between Categories 2 and 3 is very important.  For Category 3, the teacher is attentive to, engaged with, or supportive of his/her students.  For Category 2, the teacher is not attentive to, engaged with, or supportive of the students' learning.  In essence, the teacher is doing something that is not directly related to the students' current learning experience.  Categories 2 and 3 together have averaged, over the years we have collected data, between 30% and 40% of IPI codes.  Category 3 has averaged about 25-30% while Category 2 has been about 5-10%.  Higher levels of Category 2 have a negative relationship with student achievement.

Category 1 is coded when the students are not engaged in learning related to the curriculum.  Category 1 has averaged about 3-5% over the years, but some schools have considerably more disengagement (some have 10%, 15%, and even 20+%).  Invariably, the schools with those higher percentages of Category 1 are schools with high percentages of students who are not passing state assessments.  Not surprisingly, our research has documented a very strong, inverse relationship between the percentage of Category 1 and student achievement. 
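As a rough illustration only (and emphatically not a substitute for the formal professional development described below), the distinctions among the six categories can be sketched as a simple decision function.  The boolean flags and the mapping of verbal collaboration to a specific category number are simplifying assumptions for this sketch; actual IPI coding involves many nuances these flags cannot capture.

```python
def ipi_category(students_engaged: bool,
                 higher_order_deeper: bool,
                 verbal_collaboration: bool,
                 teacher_led: bool,
                 teacher_attentive: bool) -> int:
    """Hypothetical sketch of the IPI category distinctions.

    All flags are illustrative simplifications; real coding
    requires formal training (see the workshop discussion below).
    """
    if not students_engaged:
        return 1  # students not engaged in curriculum-related learning
    if higher_order_deeper:
        # Categories 5 and 6 differ only by whether students are
        # collaborating verbally; which number maps to collaboration
        # is assumed here for illustration.
        return 5 if verbal_collaboration else 6
    if teacher_led:
        return 4  # teacher-led, students attentive, non-HO/D
    # Non-HO/D ("surface") work: Category 3 vs. 2 hinges on whether
    # the teacher is attentive to / supportive of the students.
    return 3 if teacher_attentive else 2
```

The ordering of the checks mirrors the prose above: disengagement is identified first, then the HO/D distinction, and only then the teacher's role during non-HO/D work.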

This section provides a general sense of the six categories and how they are coded.  However, please note that the ability to code accurately requires a significant amount of formal professional development.  A study of the importance of formal professional development for coding accuracy was conducted at the Middle Level Leadership Center at the University of Missouri in 2005.  The participants were post-masters graduate students in educational leadership.  They were not familiar with the IPI coding categories.  To begin the research project, they were provided with the handouts and materials used in an IPI Workshop and asked to learn how to code, just as one might attempt to learn to code based upon the information found in this website. The graduate students were then tested for coding accuracy.  Their coding accuracy averaged less than 20%.  The next day the same graduate students participated in an IPI Level I Basic Workshop.  They were then tested again for coding accuracy.  Their average coding accuracy was greater than 90%, a figure comparable to that of all participants in IPI Level I Workshops.  Clearly, there are too many nuances and subtleties involved in the coding process for individuals to develop coder validity, reliability, and inter-rater reliability without participation in at least several hours of formal professional development. 

The professional development necessary to be an IPI Data Collector is provided through the IPI Level I Workshop, a full-day workshop designed to develop the capacity to code with validity (accuracy) and reliability (consistent accuracy), and also the ability to code consistently across data collectors (inter-rater reliability).  To protect the integrity of the IPI Process, ONLY participants who complete the IPI Level I Workshop with a coding accuracy score of .80 or higher are given permission to code engagement data for a school.  A score of at least .90 is required to code for any form of research.  At the conclusion of each IPI Level I Workshop, an assessment is administered and the participants are provided with feedback about their coding accuracy via email. 

Information about the IPI Workshops can be found at other locations in the website.  For specific information about the availability of future IPI Level I Workshops, contact Dr. Valentine by email.