David Williamson Shaffer, author of How Computer Games Help Children Learn, recently wrote about a November 2008 article detailing how a group of elected officials flunked a civic literacy test. The test was conducted by the Intercollegiate Studies Institute American Civic Literacy Program.

The article claims that the average score of public officials was 44% – making my 94% look totally awesome. So, as Shaffer asked, if I were an elected official, would I be twice as effective? Twice as likely to solve that whole Arab-Israeli problem? Twice as successful at balancing the budget? Twice as competent at dog catching?

Test results don’t lie. Based on the 33 questions asked by the ISIACLP, I’m ready to lead.

Perhaps looking at how the questions were selected can give us a few clues about why public officials did so poorly:

Thirteen of the 33 knowledge questions are taken from previous ISI surveys developed by ISI faculty advisors from universities around the country. Nine of the civic knowledge questions are taken from the U.S. Department of Education’s 12th grade NAEP test, and six from the U.S. naturalization exam. Two new knowledge questions were developed especially for this new survey and three are drawn from an “American History 101” exam posted online by www.InfoPlease.com.

Oops. Guess not. According to their test scores, they can’t pass the ISIACLP’s high-school-level equivalent of “Are You Smarter Than a 5th Grader?”

Okay . . . let’s back up a bit.

I’m pretty sure that there is no way I could step into a legislative role and perform twice as well as the person already in that position. (Though some days I’d like to try.)

So we’ve got a problem. I’m assuming that the point of releasing results from these types of tests is to try to predict, using 33 multiple choice questions, how well someone will actually do out in the big, bad world. So if elected officials average 44% on a civic literacy test, we assume the results are correct, i.e., the elected officials aren’t ready to lead.

They failed the test and the test can’t be wrong.

Shaffer suggests:

This is, of course, one of the most fundamental fallacies of our regime of high-stakes tests: we take performance on the test as an end in itself, never bothering to ask–much less test–whether the exams actually tell us anything useful about whether students can really do anything useful in the world.

He claims that these studies have been done and that they “routinely show that exams don’t give us much useful information.”

He quotes from his book:

Even students who do well on school tests cannot apply their knowledge to real-world problem solving. For example, one classic set of studies shows that students who have passed a physics course and can write Newton’s Laws of Motion down on a piece of paper still can’t answer even simple problems like “If you flip a coin into the air, how many forces are acting on it at the top of its trajectory?” Which is, of course, a problem that can be solved using Newton’s Laws.
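For readers who want to check themselves against the coin question: assuming air resistance is negligible (the standard classroom idealization), exactly one force acts on the coin at the top of its arc, and that is gravity.

```latex
% At the top of the trajectory (air resistance ignored),
% the only force on the coin is its weight:
\vec{F}_{\text{net}} = m\vec{g} \quad \text{(one force, pointing straight down)}
```

The common wrong answer adds a second, upward “force of the throw” that supposedly keeps the coin aloft. But by Newton’s first law, no force is needed to keep something moving, and once the coin leaves the hand, no such force exists. Students who can recite the laws on a test still routinely get this wrong.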

So we need to start asking questions about a test’s validity. Is the test actually measuring what we want it to measure?

In this case, probably not. We’re trying to make a connection between knowing the names of FDR’s New Deal programs and the ability to competently run a government department.

“Passing” the multiple choice test becomes an end in itself, and because our test data is flawed, our conclusions are just as flawed.

So is there an answer? A balance between core knowledge and 21st century skills? I think so.

I spent yesterday at the Kansas MACE conference, speaking and listening to a whole bunch of very smart people who said the same thing. A foundation of basic social studies knowledge is vital to creating successful citizens, but just as important is asking kids to apply that knowledge in ways that allow them to solve real-world problems, not just ace some multiple choice test.