Voices Vol 41 No 2


SPECIAL INTEREST GROUPS: Teacher Education

What Do English Learners' Standardized Test Scores Tell Us About Teacher Effectiveness?

By Gail Verdi

On February 24, 2012, the New York City Department of Education publicly released teacher ratings based on students’ standardized test scores.  The release prompted a series of articles focusing on the inherent flaws of the “value-added model” used to interpret the data:

February 25, 2012: Sharon Otterman and Robert Gebeloff wrote a piece in the New York Times titled In Teacher Ratings, Good Test Scores Are Sometimes Not Good Enough, in which they revealed that teacher rankings based on student test scores varied greatly even within what are considered the “best” public schools in New York City.  The system developed to analyze the data, the “value-added model,” required that results show a full distribution of teacher quality, from above average to below average, among teachers with similar student demographics and scores.  Because so many students in these schools scored high on the tests, and because the model nevertheless required a spread across the designations above average, average, and below average, teachers whose students actually scored well on the test were ranked ineffective.
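The mechanics Otterman and Gebeloff describe can be sketched with hypothetical numbers (invented for illustration, not the DOE’s actual formula): once a rating system forces a full spread of percentile ranks, someone must land at the bottom no matter how well every class actually scored.

```python
# Hypothetical illustration: five teachers whose classes all scored well,
# run through a forced percentile ranking with above/average/below labels.
scores = {"Teacher A": 95, "Teacher B": 93, "Teacher C": 91,
          "Teacher D": 90, "Teacher E": 88}  # every class scored high

ranked = sorted(scores, key=scores.get)      # lowest class average first
n = len(ranked)
for i, teacher in enumerate(ranked):
    pct = 100 * i / (n - 1)                  # spread ranks from 0 to 100
    label = ("below average" if pct < 33
             else "average" if pct < 67 else "above average")
    print(f"{teacher}: score {scores[teacher]}, percentile {pct:.0f}, {label}")
```

Teacher E, with a class average of 88, lands at the 0th percentile and is labeled “below average” purely because the ranking demands a bottom.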

February 26, 2012: Georgett Roberts published an article in the New York Post, Queens Parents Demand Answers Following Teacher’s Low Grades.  Pascale Mauclair was declared the worst teacher in the city on the basis of her students’ test scores.  Her salary was posted, parents were quoted as saying “I think she should be out,” and her picture was printed alongside the piece.  In a blog post on EdWize, Leo Casey explained The True Story of Pascale Mauclair.  There we learn that Mauclair is an ESL teacher at P.S. 11 (where a quarter of the students are English learners) and that her small, self-contained classes are made up of “recently arrived immigrant students who do not speak English.”  Casey goes on to argue that two factors contributed significantly to her distorted rating: (1) she teaches students with the highest academic needs, and (2) she works with a small number of students, so the sample size also skewed the result.  It is important to note that when Mauclair returned to work after this incident, she was greeted with applause by her colleagues and principal.  Still, the fact that she was publicly shamed, in contrast to what happened to teachers at more high-profile public schools, says something about how teachers of second language learners and students in poverty are treated.  Even Bill Gates commented on the use of the test scores in a New York Times op-ed, Shame Is Not the Solution, admitting that releasing the scores was a big mistake, and an even bigger mistake in the case of a teacher working with English learners.

March 5, 2012: Linda Darling-Hammond comments on the NYC debacle in Education Week (Value-Added Evaluation Hurts Teaching).  Darling-Hammond reviews the case of Mauclair and asks us to consider the following question: “Is this what we want to achieve with teacher-evaluation reform?”  She cites incidents in Tennessee, Washington, D.C., and Portugal where value-added methods have led to disastrous outcomes, such as punishing teachers for choosing to take tough assignments.  She argues that these tests can provide a snapshot of what is happening within a state, but that they “should not be used to make high-stakes decisions about teachers” for the following reasons: 

        1. Test score gains reflect more than one teacher’s impact on student performance.  Factors such as poverty, health, and access to books have a far greater impact on academic success.

        2. Teacher ratings vary across years, classes, and tests.  A teacher may rank at the bottom one year and at the top another.

        3. Teachers know they are being pressured to teach to multiple-choice tests, which reduces students’ opportunities to engage in higher-order thinking and problem-based learning. 

        4. These test scores paint a portrait of the student population a teacher instructs, not the quality of his or her teaching.  It is common sense that a class with many new English learners and students with disabilities will have lower test scores. 
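Darling-Hammond’s second point, and Casey’s note about Mauclair’s small classes, reflect a basic statistical fact: estimates from small samples are noisy.  A hypothetical simulation (the numbers are invented and do not reproduce any district’s model) shows how the same teacher, with the same true effect, can rate near the bottom one year and near the top the next when class sizes are small:

```python
import random
import statistics

random.seed(1)

def simulated_class_average(true_gain, class_size):
    """Average measured gain for one class: each student's score mixes
    the teacher's true effect with individual factors (noise)."""
    return statistics.mean(true_gain + random.gauss(0, 15)
                           for _ in range(class_size))

# The same hypothetical teacher (true gain of 5 points), rated over 10 years.
for size in (6, 60):
    yearly = [simulated_class_average(5, size) for _ in range(10)]
    print(f"class size {size:2d}: min {min(yearly):5.1f}, max {max(yearly):5.1f}")
```

With six students the yearly averages swing widely around the true gain of 5; with sixty they cluster near it.  The teacher has not changed, only the sample size.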

For those of us working in the field of teacher education in New Jersey, our SIG meeting at NJTESOL/NJBE’s annual conference would be a good place to explore how we might support teachers during this time of transition.  As Linda Darling-Hammond noted, there is nothing inherently wrong with using test scores to help teachers develop professionally.  We might therefore also consider some questions inspired by the articles outlined above.  For example: What do we want to achieve with teacher-evaluation reform in NJ?  What should TE/Higher Ed be doing to support the implementation of our new Core Curriculum Standards?  And since we know that initiatives are being considered to tie teacher effectiveness to the quality of teacher education programs, we should ask ourselves: How would we feel if the reporters who wrote about Ms. Mauclair had listed the name of the school where she earned her degree? 

Please join me at one of our two Teacher Education SIG meetings to consider how we will deal with the future of teacher education in New Jersey and how we can support teachers of English learners across the state as we prepare them for College and Career Readiness.  Our meetings are May 30th from 10:30-11:30 and May 31st from 10:30-11:30.  Please feel free to contact me with any items you would like me to put on the agenda, or any questions we should consider.  You can reach me at gverdi@njtesol-njbe.org or at 908-737-3908.

Gail Verdi is the Teacher Education SIG Representative.