The term assessment has been on my mind these past couple of months, in no small part because New Yorkers have been debating how to evaluate teacher impact, the degree to which teaching can be assessed relative to student performance, and even the reliability of test scores as predictors of future success and learning. I’ve especially enjoyed the perspective brought in by the Finnish school system, which is very unlike the U.S. model in its rejection of standardized testing in favor of classroom-based tests created by individual teachers.

We’re talking the NYC public school system here, not academia, so hardly anyone is debating whether teachers should be evaluated and student performance assessed. (I’ll get to the much more ambivalent feelings about assessment within higher ed below.) And although the media like to present teacher assessment as a battle between politicians (who blame teachers for poor student scores) and teachers’ unions (which point to larger systemic issues affecting learning), I believe most reasonable people would agree that teachers, parents, and neighborhood life all have an impact on how well children learn.

The central difficulty of assessment stems from the how: How can we manage to boil down the complex activities of teaching and learning to something quantifiable? This has been one of the questions looming over my own project. The various people I’ve spoken to in youth development and education have received my ideas enthusiastically, but they’ve all cautioned that the major challenge I’ll likely face will be presenting data convincing enough to secure grant money.

A teaching artist I spoke with said that evaluations need be nothing more than a simple set of content-related questions targeting what I want students to get out of my workshops, which I can administer pre- and post-program. So for my coming-of-age course, sample questions might include How do you define adulthood? or even Had you ever thought about the meaning of adulthood prior to this workshop? I am a bit ambivalent about this approach. Part of me is relieved to hear that teaching artists are able to get funding with such a crude assessment tool, but the better part of me remains unsatisfied with this type of survey. If I wish to take assessment seriously and treat it as more than an annoying bureaucratic hoop, then the evaluations I craft for my program(s) will have to track, in a very real way, the achievement of learning goals and give me feedback on how to improve my teaching practice.

Earlier this month I attended a panel at the MLA Convention called Assessing Assessment(s), which gave a largely dispiriting account of the state of assessment in college humanities departments. As one might expect, professors shared stories of the contrivances they’ve been forced to develop (or, worse, implement by top-down mandate) to measure student learning. And of course there was a lot of moaning and hand-wringing about how the sort of work we do cannot be accurately captured and measured by any method of counting.

Really? I am no expert in the sciences or social sciences, but I can safely presume that our colleagues in those fields also wish to endow their students with “critical thinking skills” and share with them the “joys of lifelong learning.” It does us no favors to continually set ourselves apart as scholars of the softer disciplines. Instead, we should come up with concrete, specific definitions of what it means to “think critically” within our disciplines, and not get so prickly whenever outsiders (often innocently) ask what the use of literary study is. Get over that allergic reaction. A defensive attitude is unbecoming and, moreover, unproductive.

Donna Heiland provided the rare ray of light on the panel. Much of her work with the Teagle Foundation involves grantmaking, so she has devoted a lot of time to thinking about assessment and accountability in the humanities. Her experience working with assessment professionals has taught her that data experts can be of tremendous help to professors (she especially lauded the Institutional Research folks at Stanford), so long as the professors are able to put in concrete terms what they most care about in their students’ learning.

This post has gone on a bit longer than expected, so in the next post I will summarize Donna Heiland’s very helpful article, “Approaching the Ineffable: Flow, Sublimity, and Student Learning,” downloadable here.
