Since online learning often separates teachers from learners across time and distance, we rely on evaluations – in the form of tests, quizzes and assessments – to gauge each student’s comprehension of the content (and to judge how well the course designers presented their information).

But what makes a good test question? Is it meant to challenge a student? Should it stump as many students as possible? If every student answers a question correctly, does that mean your question is too easy, or is it a perfect example of an effective test question?

To find out, let’s begin by reminding ourselves why we test our students in the first place.

The Purpose of a Good Test Question

As a general rule, a good question tests the six levels of intellectual skill described in Bloom’s Taxonomy:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

Going further, Cornell University’s Center for Teaching Excellence provides a helpful summary of the characteristics of a “good question”:

  • Intention: Did the question assess what you intended to assess?
  • Demonstration: Did learners demonstrate that they learned what they needed to learn?
  • Progress: Were learners able to show progress in their learning?
  • Motivation: Did the question help motivate learners to further their academic pursuits of the subject matter?
  • Distinction: Did the question help distinguish learners from “non-learners”?

Notice that these guidelines have nothing to do with the structure of the question itself.

Whether your questions involve True/False answers, Multiple Choice responses, Matching items, Fill-in-the-Blanks or Essay responses, good questions must demonstrate all of these traits.

How to Create Effective Tests and Quizzes

At the heart of any good question is an understanding of the learning outcomes that the questions are seeking to measure. Before you develop your question bank, revisit the objectives of the course to ensure your questions are built with those objectives in mind.

For example, if the objective of a course is to ensure that students are capable of executing the basic functions of trigonometry, you might begin to formulate your final exam by first listing all the necessary functions you would expect a student to be able to execute by the end of your course. This will give you a checklist of “must-have” test questions, and provide a structure for the progression of the test.

For a more subjective topic, like political theory, you might first list all the key concepts you’d expect a student to be able to explain by the end of your course, as well as the critical thinking skills you’d expect them to be able to employ. Then you could devise an exam which includes all the necessary topics while simultaneously testing the students’ cognitive functions in their explanation of those terms.

From there, you can decide which question formats best serve those purposes.

While choosing from a series of Multiple Choice or True/False answers may be sufficient to prove a student’s familiarity with glossary terms or basic comprehension of functions, those formats also allow for “educated guesses,” which may not be enough to prove a student truly understands the underlying concepts. Thus, you should also include Essay, Fill-in-the-Blank, and other open-ended question formats that require a student not just to deduce (or guess) the correct answer but to apply their knowledge and rhetorical reasoning, or, in the case of mathematics, to prove they can actually perform the computations.

How Hard Should a Test Be?

Experts differ in their answers to this question, but the general consensus seems to be: harder is better, with a caveat.

Students who feel “put on the spot” or otherwise expected to achieve errorless results in a difficult situation are reportedly more likely to retain the correct information afterward, even if they make mistakes. The caveat? For this approach to work best, students must also have the opportunity to review their responses and understand what they got wrong. (Understanding why an answer is wrong also helps with retention.)

However, it’s critical to note that a question’s difficulty should be derived from the challenge it presents, not from any complexity in the way it’s phrased. As a recent incident in the UK proved, students of all ages can feel “demoralized” if they struggle to even understand the questions on a test.

Thus, if you’re presenting questions in such a manner that your students will barely be able to answer them, whether by writing them at an advanced reading level or by making them purposely obtuse, you’re not truly testing your students’ knowledge. You’re making them jump through needless hoops for the sake of appearing “challenging,” which can result in lower scores and a dislike of the material.

Testing the Test-Makers

Not sure if your test is too hard? Ask a beta tester in your target audience to take it before you administer it to your class.

For example, a grad student or teacher’s aide in the field should have no real trouble passing a test for undergrads, nor should a senior manager whose department is receiving employee training on a specific topic. If they struggle, you may want to step your difficulty level down a notch or two.

After all, if no one passes your test, it may be you, and not your students, who needs a refresher.

Image: “Quiz” by Animated Heaven, via Flickr Creative Commons License