Saturday, October 7, 2017

Masquerade

In one of the all-time favorite Broadway shows, The Phantom of the Opera, we get this stupendous chorus and dance:



The Masquerade: where everyone hides behind a mask and pretends to be someone different.

But who is that fellow who appears at the end? None other than the phantom, who has something to say about the theater.

Today, that is me. Let's talk about the masquerade of standardized testing, Common Core-style, a/k/a PARCC, Smarter Balanced, and variations on the theme, as it pertains to mathematics: in particular, the technology-enhanced items that have you convinced that at last, at long last, the states have a test that measures student achievement in a meaningful way.

Multi-Select:
     This type of item tries to do away with the test-taking technique of narrowing down multiple-choice answers until there is one obvious answer to choose. The student must evaluate a number of choices and select each one that is correct.

An example:  What is 4 + 3?
                             () 7
                             () 3 + 4
                             () 5 + 2
                             () 43
                             () 12
                             () 1

How does a student need to tackle this problem? By looking at each choice and deciding if it is correct or not!

This is not a new type of question: It is an old-fashioned TRUE/FALSE quiz item.
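The point can be made concrete with a short sketch (Python used purely for illustration; the stem and choices come from the example above):

```python
# Multi-select reduces to a series of independent TRUE/FALSE checks:
# the student evaluates each option against the target value on its own.
target = 4 + 3  # the stem: "What is 4 + 3?"

choices = {
    "7": 7,
    "3 + 4": 3 + 4,
    "5 + 2": 5 + 2,
    "43": 43,
    "12": 12,
    "1": 1,
}

# One TRUE/FALSE decision per choice -- no narrowing-down involved.
selected = [label for label, value in choices.items() if value == target]
print(selected)  # ['7', '3 + 4', '5 + 2']
```

Each choice stands or falls on its own, which is exactly how a TRUE/FALSE quiz works.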

Drag and Drop:
     This type of item presents open boxes and circles to be filled with numbers, variables (letters), and symbols from an answer bank.

An example:


Fill in the blank! Generations of school children have dealt with this type of quiz question and hated it because they had to think up something for the blank. But wait! Our newfangled CC tests give them an assist: all they have to do is grab something from the bank for the blank.

Matching:
     Let me quote from Florida's Item Specifications for Grade 8 mathematics to give you an idea of this one: "The student checks a box to indicate if information from a column header matches information from a row."
     Which, as every student knows, can be worked out by making all the obvious matches and then seeing what's left. Since these item types don't ask for more than 3 or 4 matches, once the student works out the obvious ones, all that's left is to connect the one remaining pair, which must go together.
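The elimination strategy described above can be sketched in a few lines (Python, with a made-up set of rows, columns, and "known" matches for illustration):

```python
# A matching item with 4 pairs: the student knows 3 of the 4
# matches, and the last pair falls out by elimination.
rows = ["A", "B", "C", "D"]
cols = ["w", "x", "y", "z"]

# The pairs the student actually knows (hypothetical).
known = {"A": "x", "B": "w", "C": "z"}

# Whatever row and column are left over "have to go together."
leftover_row = next(r for r in rows if r not in known)
leftover_col = next(c for c in cols if c not in known.values())
known[leftover_row] = leftover_col

print(known)  # {'A': 'x', 'B': 'w', 'C': 'z', 'D': 'y'}
```

No knowledge of the fourth pair is required; the item's own structure supplies the answer.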

Drop-Down Menu:

    Meant to mimic "cloze reading," this item type asks students to complete a paragraph by choosing the correct response from a drop-down menu.

An example:


    Sorry that the screen capture is small, but hopefully you can see that all a student has to do is select one of the choices presented. Yes, this type of question is really multiple choice.

Equation Editor:

    At last, an item that requires a student to determine a correct answer without a list of choices or a 50-50 guess. Perhaps we finally have an item that truly measures student understanding and skill. But wait, take a look at this:


We are asking students to generate original thought, but there are two problems with this. One, the interface: the equation editor is hard to use, and students frequently ask for help during testing to get their desired response entered correctly. To which every smart teacher says, "I cannot help you," for fear of being accused of cheating. Two, students don't understand the response required. Once, a student asked me how to enter his response when the screen showed 'y =' and then the response box. He asked, "Do I put 'y =' into the box?" That would have resulted in an incorrect answer because the computer would have seen 'y = y =.' Yet the student had the correct answer. So these items don't measure student understanding of mathematics as much as they measure the student's ability to navigate the interface.

Free Response:

    At last, an item worthy of testing students. An example:


But this requires a human to score it, which negates the argument for computerized testing. In fact, it suggests that the best person to score a response is the student's teacher. Oops! We can't have that. So we'll advertise on Craigslist and other places for warm bodies to read the responses and assign a grade. I wonder how much time this item's response will get, when we have previously had reports from persons grading writing tests that they get about a minute per essay.

The Take-Away:

You have been told that computer testing has eliminated the limitations of standardized testing, in which students eliminate possibilities and guess/select the best answer. Nonsense. Most of these item types are old wine in new wineskins. The only genuinely new types come with drawbacks that limit their usefulness: the interface gets in the way, or we ask persons far less qualified than professional teachers to evaluate the responses and assign a grade.

Why does anyone think these Common Core era tests are better than what was done in the past?

Why does anyone think that these tests measure anything other than the test-taking skills a child possesses?



It is nothing more than a masquerade.
