Questions about Impact and Attribution: Are Better Results Really Due to the New Curriculum?

It is difficult to construct and administer a single test or measure that can conclusively demonstrate superior student achievement and show that it was due to IMP and IMP alone. We have heard a variety of concerns regarding the impact of IMP and issues of attribution, that is, of cause and effect. For example, “maybe you got better results because…”

1.      You “stacked the deck.” The IMP students were better students to start with: they were brighter, had taken more math courses in middle school, or were somehow different in ways that biased the results in favor of IMP.

2.      You had the school’s better teachers teach the new curriculum. 

3.      IMP teachers graded their students more leniently.  Students could get wrong answers and still pass the course because you used “alternate assessments,” which don’t test for right answers.

4.      The gains in student achievement were due to the increased professional development of the IMP teachers. If you gave teachers who teach a pre-standards curriculum the same amount of professional development you gave IMP teachers, how do you know you would not get similar results?

5.      The effect is due to the use of collaborative learning groups, not the curriculum per se.

6.      It could be the “Hawthorne effect.” You got better results because it was new and students and teachers were getting increased attention.

7.      The IMP teachers had reduced course loads.

8.      You selected only those test questions that favored the new curriculum.

9.      You got the results because you tested only low-achieving students. The higher-achieving students don’t need IMP.

10.  You got the results because you tested IMP only with the “better” students. Lower-ability students would not be able to do this kind of math.

11.  Colleges may not accept new programs such as IMP in place of “algebra.” And even if they do, students will not be adequately prepared for a traditional college math course. Most colleges do not let students use graphing calculators, so IMP students will be at a disadvantage.

12.  Students who take these new curricula in high school are regularly placed in remedial math classes in college because their basic skills are so low.

 

All of the above statements sound plausible even if they have no basis in fact. They need to be taken seriously and addressed. In addition, different constituencies will value some indicators of success more than others. In short, the data collected must be “psychologically real” to each constituent group in order to be accepted by that group. For example, the principal of a special-admission school said he would consider IMP successful only if it raised SAT scores by 30 points. Another principal was concerned about attendance rates and preventing dropouts, while a third was concerned about scores on the Pennsylvania System of School Assessment (PSSA).

In response to these methodological issues and the different values placed on different measures of success, our strategy has been to collect student achievement data from a variety of sources using a variety of indicators. At the same time, we have attempted to control or compensate for as many biasing factors as possible. Our goal was, and is, to determine whether the data converge in any one direction that could answer the bottom-line question of the costs and benefits of using a standards-based curriculum such as IMP.

We began collecting data during the first year of IMP’s implementation in Philadelphia in May 1994. Since then, we have used a variety of measures, which are listed in the Appendix under Student Achievement Measures. There are several reasons to believe that the superior results of an NCTM standards-based curriculum and student-centered instructional methods, as demonstrated with IMP in Philadelphia, are actually understated.

1.      All of the results documented herein were from pre-publication draft versions of IMP.

2.      All of the IMP teachers were teaching the curriculum for the first time.

3.      At each school, none of Philadelphia’s most able students were permitted to enroll in IMP.

4.      None of the 9th-grade students had previously been exposed to standards-based K-8 materials.

5.      Most of the results were based on students who had used IMP for only two and two-thirds years.

6.      IMP teachers did not operate in a “reform teaching culture,” but were often isolated in their departments. The same was often true for IMP students.

7.      Parents were unfamiliar with a standards-based curriculum, and some did not know how to help their children.

Based on our experience, we hypothesize that significantly greater gains in student achievement would accrue if the high school students taught with the IMP materials:

a)      had first been taught using a K-8 standards-based mathematics curriculum,

b)      used the final published version of the IMP texts,

c)      were taught by teachers with several years of experience teaching the same level of IMP, and

d)      were enrolled in a school that had adopted a standards-based mathematics program for all its students.

 

