User Requirements

The process of codifying student engagement through classroom observations using a six-category system appears, on the surface, to be a rather simple endeavor. From the development of the IPI in the mid-1990s through its continued and expanded use today, it has been consistently evident that without a rigorous training component, data collectors lack validity, reliability, and inter-rater reliability, all necessary factors in the collection of good data for faculty collaborative study. Therefore, all data collectors and all facilitators of the faculty's collaborative study of the data are required to have successfully completed an IPI Level I Workshop. The eight-hour workshop is designed to prepare participants to collect their school's IPI data with validity, reliability, and inter-rater reliability, as well as to develop strategies for leading the faculty in the collaborative study of the data.

In the IPI Process, validity, reliability, and inter-rater reliability are defined as follows:

Data Collection Validity is the data collector's accuracy against the six IPI categories. High validity means the data collector is accurately coding classroom engagement observations according to the six categories. Validity, as used in the IPI Process and explained in an IPI Workshop, therefore means accuracy of data collection. At the conclusion of each IPI Workshop, participants take an IPI Coding Assessment, which provides feedback about their coding accuracy. An assessment score of .80 or higher is required for permission to collect data in a school setting for the purpose of school/instructional improvement. Each participant's feedback letter indicates whether the standard has been met and, if not, offers suggestions for increasing coding accuracy and options for retaking the assessment. A higher standard of .90 is required for permission to collect IPI data for research purposes, and over the years the standard has been higher still (.95) for university staff and graduate assistants who collected data for our university-based research.
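To make the thresholds concrete, the following is a minimal sketch, assuming the assessment score is the simple proportion of scenarios coded to match an expert answer key; the actual IPI Coding Assessment rubric is not published here, so the scoring function and the sample codes are illustrative assumptions only.

```python
# Hypothetical illustration: the validity score treated as the proportion of
# scenarios coded identically to an expert key. The real IPI scoring rubric
# may differ; the thresholds below are the ones stated in the text.

SCHOOL_STANDARD = 0.80    # minimum score to collect data for school improvement
RESEARCH_STANDARD = 0.90  # minimum score to collect data for research purposes

def validity_score(participant_codes, expert_codes):
    """Proportion of observations coded identically to the expert key."""
    assert len(participant_codes) == len(expert_codes)
    matches = sum(p == e for p, e in zip(participant_codes, expert_codes))
    return matches / len(expert_codes)

# Invented example: codes drawn from the six IPI categories (1-6).
expert      = [6, 5, 3, 2, 4, 1, 6, 5]
participant = [6, 5, 3, 2, 4, 1, 5, 5]

score = validity_score(participant, expert)
print(f"Validity score: {score:.2f}")
print("School data collection:  ", "approved" if score >= SCHOOL_STANDARD else "not yet")
print("Research data collection:", "approved" if score >= RESEARCH_STANDARD else "not yet")
```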

Data Collection Reliability is the data collector's accuracy across multiple similar observations. Reliability, as used in the IPI Process and explained in the IPI Workshop, means data collection accuracy over time for observations that should receive the same code. In other words, when a data collector sees student engagement of a particular type at 8:15 AM and then sees the same type of engagement at 10:20 AM, the observer assigns the same (correct) code to both scenarios. During an IPI Workshop, participants complete between 40 and 50 codes across scenarios that range from vastly different to highly similar and that span different classroom learning contexts and/or grade levels. That number and variety of scenarios is necessary to establish the coder's consistent competence.
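As a rough illustration of that consistency-over-time idea, reliability can be checked by grouping scenarios according to the category they should receive and verifying that the same code was assigned on every occurrence. This is a sketch with invented data, not a description of the workshop's actual scoring procedure.

```python
# Hypothetical illustration: scenarios that should carry the same IPI code
# must be coded identically each time they appear during the day.
from collections import defaultdict

# (time observed, correct code, participant's code) -- invented data
observations = [
    ("8:15 AM",  5, 5),
    ("9:05 AM",  3, 3),
    ("10:20 AM", 5, 5),   # same engagement type as 8:15 AM, coded the same
    ("1:40 PM",  3, 2),   # same type as 9:05 AM, coded differently
]

codes_by_type = defaultdict(list)
for time, correct, coded in observations:
    codes_by_type[correct].append((time, coded))

for correct, entries in codes_by_type.items():
    consistent = len({coded for _, coded in entries}) == 1
    times = ", ".join(t for t, _ in entries)
    print(f"Category {correct} ({times}): {'consistent' if consistent else 'inconsistent'}")
```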

Data Collection Inter-rater Reliability is the data collector's accuracy relative to other data collectors. The process for developing a data collector's validity, reliability, and inter-rater reliability is evident throughout an IPI Workshop. For example, participants' codes for the learning scenarios are determined independently and then shared with all other workshop participants. In addition, about half of the coding scenarios, including the codes made during real classroom visits during the workshop, are shared and discussed with colleagues as part of the participants' learning experience. As countless workshop participants have noted: "the degree to which we differed when we started the day (workshop) and the degree to which we agreed by the end of the (workshop) day was miraculous." The IPI Workshop is specifically designed so participants not only realize that they are growing in their individual coding skill, but also that they are growing together and building inter-rater reliability as they work through the day-long workshop. That realization is important for all data collectors because they must have confidence that the colleagues collecting data alongside them are coding just as accurately as they are throughout the school day.
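A common way to quantify this kind of between-rater agreement is simple percent agreement, optionally adjusted for chance with Cohen's kappa. Neither statistic is prescribed by the IPI materials above; the sketch below is a generic illustration with invented codes.

```python
# Hypothetical illustration: percent agreement and Cohen's kappa for two
# raters who coded the same classroom scenarios into the six IPI categories.
# Cohen's kappa is a standard chance-corrected agreement statistic; the IPI
# materials themselves do not specify which statistic, if any, is used.
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    po = percent_agreement(a, b)                                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)   # agreement expected by chance
    return (po - pe) / (1 - pe)

rater1 = [6, 5, 3, 5, 2, 4, 6, 1, 3, 5]  # invented codes
rater2 = [6, 5, 3, 4, 2, 4, 6, 1, 3, 5]

print(f"Percent agreement: {percent_agreement(rater1, rater2):.2f}")
print(f"Cohen's kappa:     {cohens_kappa(rater1, rater2):.2f}")
```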

A study of the importance of formal professional development for coding accuracy was conducted at the Middle Level Leadership Center at the University of Missouri in 2005. The participants were post-master's graduate students in educational leadership who were not familiar with the IPI coding categories. To begin the research project, they were provided with the handouts and materials used in an IPI Workshop and asked to learn how to code, just as one might attempt to learn to code from the information found on this website. The graduate students were then tested for coding accuracy, and their accuracy averaged less than 20%. The next day the same graduate students participated in an IPI Level I Basic Workshop and were tested again; their average coding accuracy was greater than 90%, a figure comparable to that of all participants in IPI Level I Workshops. Clearly, there are too many nuances and subtleties in the coding process for individuals to develop coder validity, reliability, and inter-rater reliability without at least several hours of formal professional development.

For information about future IPI Level I Workshops or to contact an IPI trainer in a particular region of the country, contact Jerry Valentine by email.