Assessment and Evaluation: Sense or Nonsense?

My wife and I bought a new car not too long ago. A perfectly functional, if boring, hybrid sedan that gets fine gas mileage and will be on the road long after us. I tend to be a positive sort, but working with the slug of a car salesman brought me down. I still occasionally think of the nonsense with that car salesman, and it makes me want to run to the shower.

Nobody is Perfect

During our visit, the guy tried to up-sell us, downplayed the value of our trade-in, did the “Let me talk with the manager” nonsense, permitted a massive error (which we caught immediately) in the financing agreement, and periodically reminded us that we would be sent an assessment form via email that we needed to fill out.

Oh – and if he didn’t get perfect scores on the form, he would be fired. In other words, if we didn’t write that he was God’s gift to car buyers, his supposed firing would be on us, forever.

How do we respond to that?

The goal of assessment is no longer to use meaningful tools to gather sensible data by which we can evaluate the condition of an educational program (or of a business, or a teacher’s effectiveness via SROIs, or even an individual car sale).

Rather, it is now to cook the books with nonsense data to make the educational program or business or car sale look perfect. “Truth” is what I say it is, the facts be damned.

Assessment has gone off the deep end, so evaluation has become advertising, not judging based on sensible data. If we cannot trust our data to be representative of our system, then science, which relies on credible data at its core, ceases to be science.

Let’s dig into these ideas.

First, assessment and evaluation.

Assessment is gathering sensible data that measure the effectiveness of a program, organization, person, etc. The data can take all kinds of forms, from experiments to questionnaires to focus groups – small groups who discuss specific questions – to one-on-one visits.

Evaluation is the interpretation of the assessment data. You find meaning in the evaluation of assessment data.

An example I often use to illustrate these concepts is that of a runner who is having trouble with their knees.

An orthopedic specialist’s professional understanding dictates the tests to (forgive me, pun intended) run.

These tests might include:

  • bending the leg to look for mobility;
  • taking an MRI to see inside the knee;
  • looking at the wear pattern on the running shoes.


Each of these tests provides sensible and meaningful data. That’s the assessment part.
Maybe the knee can bend only a few degrees, and the MRI reveals an unexpected white line of fluid in the meniscus. The runner’s shoes might show unusual pronation. The assessment does not interpret – it only gathers data.

Now the doctor is ready for the evaluation part. This is where she uses the assessment data. Evaluating the MRI data shows a clear tear. The other tests can help confirm this evaluation – this diagnosis. Based on the evaluation, the doctor can recommend treatment, such as surgery.

Garbage in, garbage out.

Recall above that I discussed meaningful data. What makes data meaningful or not meaningful? Why is the doctor’s data meaningful and the car salesman’s, or, often, Yelp’s, not?


When the assessment is geared toward a certain outcome, the data is biased.

The car salesman was not interested in my opinion of his work. He was interested in having his opinion of his work transmitted to the survey using me as a ventriloquist’s dummy.

The data would have been nonsense – not meaningful – because his opinion of his work is not what the survey wanted to know.

Garbage in, garbage out. We could accurately call it Fake News.

Has assessment gone off the rails?

I love the apartment complex in which I live. We have a beautiful view of the regional bluffs near the place. It’s a walk to shopping and a light-rail ride to the city center.

Some weeks back, after the maintenance staff did some timely repairs (we love the maintenance staff as well!), I was asked to rate the place on Yelp. I was happy to do it. Great rating. It was my personal assessment of the place and the staff, which I believe readers can rely on. But can they rely on all the assessments? Food for thought. What about the Ratemyprofessors website? Hotels on TripAdvisor? Are those meaningful assessments that lead to reasonable evaluations? Why?


These days, you and I and so many others are flooded with assessment requests from restaurants, chain stores, cable TV services, even light-rail operators. How many restaurants have you visited in which the waitstaff says that their job depends on getting a superior assessment from you?

Assessments are getting a deserved bad rap because they are becoming a cudgel used against employees who fear for their jobs and, by extension, against customers, who are asked to protect those jobs.

Assessments are becoming anti-assessments, in which evaluation means nothing. Regarding the data, it’s nonsense in, nonsense out. Assessment and evaluation are too important to be part of the flatlining of meaning that’s going on now. We must find ways – and I don’t know what those are – to return assessment and evaluation to the realm of science. Only then can they regain their meaning in the realm of education.


The Grizzled Teacher (TGT) has taught at eight public universities, one private college, two “flagships”, and several regional state schools during 39 years in postsecondary education. TGT has directed and taught in a 6,000-student first-year science program with a typical class size of 300 students, has taught graduate courses with 5 students, and has worked with a thousand faculty and instructional staff and more than 10,000 students. TGT was on short-term contracts for many years, has been tenured for many more, and has won two-dozen university, state, and national teaching awards. TGT hasn’t seen it all, but has seen a heckuva lot.

After all these years, I still don’t know all the answers, but I’m getting better at knowing the questions.

To quote the late, great Joan Rivers, “Can we talk?”

Submit a pedagogical question or comment to the Office of Teaching and Learning for answers in an upcoming blog post.

4 Replies to “Assessment and Evaluation: Sense or Nonsense?”

  1. This is exactly what I have been railing about for a decade in regard to assessment and SROIs. They are meaningless based on the way the students perceive them and respond to them. Students have been trained by the internet to indicate whether they “like” a product/service or “didn’t like” it. Frequently, good educational experiences are challenging, and many students “don’t like” the challenge or don’t understand why they are being challenged, particularly in 100-level gen. ed. courses. It can be years down the road before they recognize the value of their gen. ed. courses, and that’s way too late for an SROI. If professors pander to “enjoyability” and “likes” on their course evaluations, then we won’t be offering meaningful, challenging courses.

    1. James Younger – This is an interesting and important comment. It is true that SROIs reveal bias based on a number of variables including gender, race, mother tongue, etc. Still, we need student voices about their classes and teachers, because they are the ones being taught. How can we make that happen in ways that are productive instead of destructive? One way is to hear those voices regularly during the semester, rather than the often adversarial time after the semester. The Grizzled Teacher will write about that in the next TGT post. An additional thought: What is the purpose of what teachers do? Is it only to have students learn things, or is it to have them be able to question well? Or even to be motivated to want to learn more after the class is completed? Knowing the purpose of teaching helps us to design tools and strategies to measure teaching effectiveness that go well beyond the SROIs.
      -Posted at the request of The Grizzled Teacher

  2. You are spot on with the assessments. Even oil changes are graded, and when I didn’t give a perfect score, I have had the dealership call and offer free oil changes if I would change my scores to reflect better numbers than they deserved. It’s a sad world where you can’t fail and thereby learn from your mistakes.

    1. Chris Hamre – A grizzled “thanks!” Your experience is way too common, and yep, far too often, we start from the standpoint of assuming perfect grades for work that is merely expected. For me, it’s all an issue of trust. Extending past the classroom, or even the car dealership repair shop, in the U.S., our institutions are built and work because there is trust – at least that’s the ideal. We have trust in our banks, in social security, in retirement accounts or pensions, in the long-term escalation of house prices. Where that trust is breached, as happened a decade ago, society begins to break down in all kinds of ways. This is certainly true with online ratings sites such as Amazon, Yelp, and TripAdvisor, where the ratings no longer mean a lot because of our (valid) lack of trust in the ratings. We must work toward a more trusting society, where the assessments (our opinions) have meaning because the recipients want our opinions rather than having us claim their opinions as our own.
      -Posted at the request of The Grizzled Teacher
