With the falling of autumn leaves, we get another defense of the controversial program Teach For America. The argument falls along familiar lines: TFA teachers get better results, TFA teachers don't leave after two years, and TFA teachers are better than long-term substitutes.
The latest serving appeared in the Florida Times-Union on Wednesday, October 18: http://jacksonville.com/opinion/columnists/2017-10-18/guest-column-don-t-make-teach-america-scapegoat-larger-educational
The writer makes three arguments, which I shall answer:
1. Given the teacher shortages that exist, TFA fills positions that would otherwise require substitutes, who don't have the necessary qualifications to bring about student learning.
Stone Eggs: You have a point. If the Yankees, playing the final game of the League Championship Series against the Houston Astros, needed a pitcher and none were available, I would be a better selection than a six-year-old T-ball wunderkind. But I suck at sports; I couldn't put a pitch in the strike zone to save my life. This argument really has no merit.
Stop saying you're better than nothing and show how you, as a TFA recruit with five weeks of summer training, are qualified to step into a classroom. Describe that training! What are you doing in those five weeks that makes you the equal of a teacher-college graduate who has spent four years preparing for the job?
2. 60% of TFA teachers remain in the classroom beyond their commitment, which is better than the retention rate for other teachers.
Stone Eggs: We need a source for this claim. Ooh, I kept reading your column and found that this is your personal experience as you keep up with your friends. Hmm, anecdotal evidence is not persuasive when it comes to citing statistics. Or did you get the percentage from TFA, a biased source?
And what time period are you dealing with? Are you comparing two-year TFA retention against five-year general retention?
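The time window matters more than it looks. A quick sketch, under the purely illustrative assumption that both groups of teachers have the same constant annual attrition rate, shows why comparing a two-year figure against a five-year figure is meaningless:

```python
# Hypothetical sketch of the time-window problem: give two groups the SAME
# annual attrition rate, and the group measured over the shorter window still
# "retains" more teachers. The 10% rate is an assumption for illustration,
# not a real statistic.

def retention(annual_attrition, years):
    """Fraction of a cohort still teaching after the given number of years."""
    return (1 - annual_attrition) ** years

same_rate = 0.10  # identical assumed attrition for both groups
print(f"2-year retention: {retention(same_rate, 2):.0%}")  # 81%
print(f"5-year retention: {retention(same_rate, 5):.0%}")  # 59%
```

Identical behavior, very different-looking numbers. Any honest comparison has to use the same window for both groups.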
3. TFA corps members get better results than teachers, veteran and rookie, from traditional colleges of education.
Stone Eggs: You claim this on the basis of test results. A test that is invalid and unreliable, a test so bad that answering 28% of the items correctly is deemed a passing score.
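To see how low a bar 28% is, consider a sketch. The item format here is my assumption, not something the column states: on a test of four-option multiple-choice items, blind guessing alone scores 25% on average, a hair under that passing mark.

```python
# Sketch of why a 28% passing bar is alarming. Assuming a hypothetical test
# of 60 four-option multiple-choice items (both numbers are assumptions for
# illustration), pure random guessing averages 25% correct.
import random

random.seed(0)
ITEMS, OPTIONS, TRIALS = 60, 4, 10_000

def blind_guess_score():
    """Fraction correct when every answer is a random guess."""
    correct = sum(random.randrange(OPTIONS) == 0 for _ in range(ITEMS))
    return correct / ITEMS

average = sum(blind_guess_score() for _ in range(TRIALS)) / TRIALS
print(f"Average score from pure guessing: {average:.1%}")  # about 25%
```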
This is where all educational 'reform' falters. You say you produce better test results and pretend that means students learned better under your tutelage.
You are wrong. Could I ask what research you follow? Because everything I read, from carefully controlled studies designed to eliminate stray variables, says that test preparation (and frankly, that is what you are trained to do) produces better test results but a more poorly educated student.
Attack me if you will. No, I don't get the best test results in my building, much less my district. But teachers want my students the following year because they are the best prepared to move on, because I work on actual learning and understanding.
That is the irony of the Common Core. It creates the circumstances that produce the exact opposite of what it says it is after: critical and creative thinking.
Now for what you won't say: TFA is a <expletive-deleted>, yes I come from the Nixon era, expensive program.
If I grant you everything you claim, you would still fail a cost-benefit analysis. The latest DCPS contract with TFA (http://news.wjct.org/post/school-board-approves-new-tfa-contract) would bring in TFA recruits at a cost of $6,000 or more per recruit.
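A back-of-envelope sketch makes the scale concrete. The per-recruit fee is the figure from the contract story above; the cohort size is a hypothetical I chose for illustration, not a number from the contract:

```python
# Back-of-envelope cost sketch. The $6,000 per-recruit fee comes from the
# reported DCPS contract; the cohort size of 100 is an assumption for
# illustration only.

FEE_PER_RECRUIT = 6_000  # dollars, per the reported contract
cohort_size = 100        # hypothetical number of recruits

total_fees = FEE_PER_RECRUIT * cohort_size
print(f"Fees paid to TFA: ${total_fees:,}")  # $600,000
```

That is money on top of the salaries the district would pay any teacher.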
That money goes nowhere except into the very rich pockets of Teach For America, which at the end of 2016 held $343,162,094 in net assets. https://www.teachforamerica.org/about-us/annual-reports
Let me put this into perspective: 343 FREAKING MILLION DOLLARS! WHY DO THEY NEED TO CHARGE SCHOOL DISTRICTS THOUSANDS OF DOLLARS FOR EACH POSITION!!
I don't think I need to say any more. TFA is a pecuniary, self-serving institution that lost sight of its (unneeded) mission decades ago.
Sunday, October 22, 2017
Saturday, October 7, 2017
Stretch Goals
Way back in the 1960s, IBM was the dominant computer company. Indeed, the industry was known as Snow White and the Seven Dwarfs: IBM was Snow White, and the other computer makers, such as Burroughs, UNIVAC, NCR, Control Data, and Honeywell, held market shares so small that they were tiny compared to Big Blue, headquartered in Armonk, New York.
IBM was noted for insisting that everyone wear a suit with a white shirt. It was also known for setting achievable goals for its sales force. IBM believed people work harder when their goals are within reach. Clearly impossible stretch goals, by contrast, would have drained the sales force's motivation: why bother trying hard when there is no way to hit the mark?
Ah, stretch goals. I once worked for a man who set stretch goals. I clearly remember the day we sat together and looked at the goals for the business I had been put in charge of. I remember the tingling feeling in my body as I realized we could achieve the goals we had set. WE CAN DO THIS! And then he ratcheted the goals higher, in the belief that he had to keep them impossible so he could rant and rave at his personnel and make them work harder.
Oops. At that point, I realized he would never allow anyone to feel success, and I never again bothered myself about what he wanted.
Now we come to DCPS, a misguided school board, and their stretch goals: http://jacksonville.com/news/education/2017-10-05/duval-district-uses-new-formula-set-stretch-goals-2020
(Even Nikolai Vitti got this a year ago when he clashed with A S-J over setting goals that would motivate staff.)
What does it mean to have a new algorithm? Do they mean they developed a mathematical formula that leaves out human judgment?
While Board members celebrate their self-proclaimed excellent work, have they bothered to consult anybody who works at the schools? Principals? Teachers? You know, the people who actually make it happen and know better than anyone else what their school can achieve?
No, they did not. They don't bother because they really don't think the actual employees have any expertise in educating children.
If they did, they would have included principals and teachers in this goal-setting process.
They celebrate themselves because now they have set goals for each school instead of overall district goals, and they think they are the first ones to do this. Hello, exalted personages who sit on the dais once a month in public session: NO, you are not. It didn't work in the past, and it won't work now.
What's that? Why? Because you haven't included school-based personnel in the goal-setting process.
Oh, but your algorithm is the best idea since sliced bread? (And I hate that you force me to use that cliche.)
Just like Coca-Cola's secret formula and the Colonel's secret blend of herbs and spices (and may I add the student growth formula you refuse to release to teachers, so we could see exactly how you determine 50% of our annual evaluations), it's a BIG SECRET.
No one can know.
Is that because it is astoundingly, astonishingly excellent? Or is it more of your usual <ahem>? You refuse to tell people, yet you expect us to just trust you.
I hate to tell you this, but we don't. Take your stretch goals and go to the gym because they will not have any effect in this school system.
Not until you begin respecting teachers and other school-based personnel.
Masquerade
In one of the all-time favorite Broadway shows, The Phantom of the Opera, we get this stupendous chorus and dance:
The Masquerade: where everyone hides behind a mask and pretends to be someone different.
But who is that fellow who appears at the end? None other than the phantom, who has something to say about the theater.
Today, that is me. Let's talk about the masquerade of standardized testing, Common Core-style, a/k/a PARCC, Smarter Balanced, and variations on the theme, as it pertains to mathematics: in particular, the technologically enhanced items that have you convinced that at last, at long last, the states have a test that measures student achievement in a meaningful way.
Multi-Select:
This type of item tries to do away with the test-taking technique of narrowing down multiple choice answers until there is one obvious answer to choose. The student must evaluate a number of choices and select each one that is correct.
An example: What is 4 + 3?
() 7
() 3 + 4
() 5 + 2
() 43
() 12
() 1
How does a student need to tackle this problem? By looking at each choice and deciding if it is correct or not!
This is not a new type of question: It is an old-fashioned TRUE/FALSE quiz item.
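The point above can be sketched in code. The scoring rule here is my assumption for illustration, but it shows how a multi-select item reduces to a row of independent TRUE/FALSE calls:

```python
# Sketch of the post's point: a multi-select item is scored by judging each
# option independently, which makes it a row of TRUE/FALSE questions.
# The item mirrors the "What is 4 + 3?" example; the all-or-nothing scoring
# rule is an assumption for illustration.

options = ["7", "3 + 4", "5 + 2", "43", "12", "1"]
key = [True, True, True, False, False, False]  # which options equal 7

def judge(selections):
    """Score each option as its own TRUE/FALSE call; credit requires all six."""
    calls = [picked == correct for picked, correct in zip(selections, key)]
    return all(calls)

# A student who makes all six true/false judgments correctly:
print(judge([True, True, True, False, False, False]))  # True
# Missing one correct option is simply one wrong TRUE/FALSE call:
print(judge([True, False, True, False, False, False]))  # False
```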
Drag and Drop:
This type of item presents open boxes and circles to be filled with numbers, variables (letters), and symbols from an answer bank.
An example:
Fill in the blank! Generations of school children have dealt with this type of quiz question and hated it because they had to think up something for the blank. But wait! Our newfangled CC tests give them an assist: all they have to do is grab something from the bank for the blank.
Matching:
Let me quote from Florida's Item Specifications for Grade 8 mathematics to give you an idea of this one: "The student checks a box to indicate if information from a column header matches information from a row."
Which, as every student knows, can be worked out by making all the obvious matches and then seeing what's left. Since these items don't ask for more than three or four matches, once the student works out the obvious ones, all that's left is to connect the one remaining pair, which must go together.
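The elimination tactic is mechanical enough to write down. The letters in this sketch are hypothetical, but the logic is exactly what a student does on a four-pair matching item:

```python
# Sketch of the elimination tactic described above: in a four-pair matching
# item, a student who is sure of three matches gets the fourth for free,
# because only one row and one column remain. The pairings are made up.

rows = ["A", "B", "C", "D"]
cols = ["w", "x", "y", "z"]
known = {"A": "x", "B": "z", "C": "w"}  # the three matches the student knows

# Elimination: the one unmatched row must pair with the one unmatched column.
last_row = next(r for r in rows if r not in known)
last_col = next(c for c in cols if c not in known.values())
known[last_row] = last_col

print(last_row, last_col)  # the fourth pair requires no knowledge at all
```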
Drop-Down Menu:
Meant to mimic "Cloze Reading," this item type asks students to complete a paragraph by choosing the correct response from a drop-down menu.
An example:
Sorry that the screen capture is small, but hopefully you can see that all a student has to do is select one of the choices presented. Yes, this type of question is really multiple choice.
Equation Editor:
At last, an item that requires a student to determine a correct answer without a list of choices or a 50-50 guess. Perhaps we finally have an item that truly measures student understanding and skill. But wait, take a look at this:
We are asking students to generate original thought, but there are two problems. One, the interface. The equation editor is hard to use, and students frequently ask for help during testing to get their desired response entered correctly. To which every smart teacher says, "I cannot help you," for fear of being accused of cheating. Two, students don't understand the response required. Once, a student asked me how to enter his response when the screen showed 'y =' and then the response box. He asked, "Do I put 'y =' into the box?" That would have resulted in an incorrect answer because the computer would have seen 'y = y =.' Yet the student had the correct answer. So these items don't measure student understanding of mathematics as much as they measure the student's ability to navigate the interface.
Free Response:
At last, an item worthy of testing students. An example:
But this requires a human to score it, which negates the argument for computerized testing. In fact, it suggests that the best person to score a response is the student's teacher. Oops! We can't have that. So we'll advertise on Craigslist and elsewhere for warm bodies to read responses and assign grades. I wonder how much time each response will get, given previous reports from people grading writing tests that they get about a minute per essay.
You have been told that computer testing has eliminated the limitations of standardized testing, in which students eliminate possibilities and guess the best answer. Nonsense. Most of these item types are old wine in new bottles. The only types that are genuinely new come with limitations that blunt their usefulness: the interface gets in the way, or we ask people less qualified than professional teachers to evaluate the responses and assign a grade.
The Take-Away:
Why does anyone think these Common Core era tests are better than what was done in the past?
Why does anyone think that these tests measure anything other than the test-taking skills a child possesses?
It is nothing more than a masquerade.