Multiple-Choice Questions: 4 Misunderstandings – eLearning Industry

Misunderstandings About Multiple-Choice Questions

Here’s a made-up version of the start of a recent Twitter discussion, based on a real conversation:

My post: Multiple-choice question distractors (incorrect answers) need to be plausible for them to work as needed.

Response to my post: They can’t be plausible because then they’d be confusing!

There are too many misunderstandings about multiple-choice questions. For example, some question writers mistakenly think all distractors need to be obviously false to avoid confusing test takers. Research tells us instead to make distractors plausible rather than obviously false, so that they don’t help test takers guess and so that they mimic the actual thinking process we want to assess (Shrock and Coscarelli, 1998, 2007).

One of the main reasons question writers so often write multiple-choice questions poorly is that they hold misunderstandings about how to write them. Consider the following multiple-choice assessment question for people working with produce in a grocery store:

What does it mean to “rotate a display” in the produce department? (Select the correct answer.)
a) Lift the display and rotate it 90 degrees.
b) Swap fruit or vegetables from one end of the store to the other end.
c) Move the oldest produce to the top of the display and add fresher produce from stock to the bottom of the display.
d) None of the above.

You probably don’t do this task, but I’m pretty sure you can figure out the correct answer. The answer choices make it easy to eliminate incorrect answers. A and B are hardly feasible, even if you’re not very knowledgeable about this task. Many test takers know that “None of the above” is rarely the correct answer, because it wastes testing time to ask a question to which none of the answers is correct. That leaves C, which is also the longest and most detailed answer, another clue that it’s correct. And it is.

What Are The 4 Misunderstandings?

Misunderstandings lead to poor assessments and bad information. Understanding how to write multiple-choice questions well allows us to create efficient and effective assessments that provide valuable information. Here are the myths about multiple-choice questions (MCQs) I’ll address next:

  1. MCQs are simply not effective.
  2. MCQs measure content recall only.
  3. MCQs are easy to guess; therefore, results tell us little about what people know and can do.
  4. MCQs are generally written poorly; the question format is typically worthless.

What’s Wrong With Each Misunderstanding?

Let’s discuss why these misconceptions are typically incorrect and what is true, based on research evidence, instead.

1. MCQs Are Simply Not Effective

Research tells us that multiple-choice questions, when well written, can efficiently and effectively measure a wide range of knowledge and skills, including higher-level skills (Gierl et al., 2017; Haladyna, 1997, 2004; Haladyna and Rodriguez, 2013). So, if we learn to write them well, we can assess higher-level thinking such as making decisions and solving problems.

Maybe you think that efficiency isn’t important, but reducing the cost and time of assessment is a big deal. When assessment isn’t efficient, it often doesn’t happen. For example, let’s say you need to regularly assess whether your customers are using your application correctly, because incorrect use frequently leads to serious problems. To ensure that clients are using the application correctly, you’d like to meet with them and watch them perform specific tasks so you can fix problems quickly.

It’d be great to watch new users to see what they do and don’t understand. The difficulty is that this requires a lot of staff and customer time, so it’s unlikely to happen. But assessing new user skills is mission-critical, so there must be a more efficient way!

The efficiency of multiple-choice questions also makes it possible to assess a representative sample of the needed knowledge and skills (Haladyna, 1997). And assessing a representative sample of knowledge and skills improves the validity of our assessments. The bottom line is that research shows that well-written and relevant multiple-choice questions are efficient and effective.

2. MCQs Measure Content Recall Only

Shrock and Coscarelli (1998) tell us the best way to improve multiple-choice question writing is to write questions above the recall level; we can measure higher-level thinking if we write them well. Let’s look at two example questions, one assessing recall and one assessing higher-level skills, that we might write for the multiple-choice assessment of whether customers are using the application correctly.

Example of a recall question for this content:

Which of the following menus should you use to modify a customer profile?
a) Finance
b) Terms
c) Setup

Example of a higher-level question for this content:

A vendor contact who places weekly orders has a new email address. She sent her new email address yesterday through the app messaging system. If we don’t quickly update her email address, she may: (Select the best answer.)
a) Lose QB-2 access to the product ordering system
b) Incur an unexpected late fee because of our delay
c) Experience errors when using our messaging system

The recall question above is trivial because knowing which menu to use isn’t highly critical to whether customers can properly use the application. According to Shrock and Coscarelli and other researchers, we shouldn’t waste assessment time on trivial matters. The higher-level question targets a common and problematic issue that should be assessed, because doing it wrong leads to data and organizational problems.

3. MCQs Are Easy To Guess; Therefore, Results Tell Us Little About What People Know And Can Do

Multiple-choice questions are often easy to guess. But this isn’t the fault of multiple-choice questions; it’s the fault of writing them poorly. To make sure they are hard to guess, we must write them without clues to the correct answer. There are 4 common clues multiple-choice questions shouldn’t have but frequently do:

  • Answer length
    A longer and more in-depth answer choice is a clue that the answer is correct. Well-written multiple-choice questions typically make sure that all answer choices are the same length.
  • Grammar issues
    When an answer choice doesn’t follow grammatically from the stem, it is a clue that the answer choice is incorrect. Therefore, all answer choices must follow grammatically from the stem.
  • Silly or obviously incorrect answer choices
    When an answer choice is silly or obviously wrong, it is a clue that the answer choice is incorrect. All answer choices need to be plausible so that we don’t clue test takers that specific choices are obviously incorrect.
  • Using “none of the above” as an answer choice
    Many test takers know that “none of the above” is rarely correct. That’s because it’s a waste of time to ask questions that offer no information about what test takers know and can do. We shouldn’t use “none of the above” as an answer choice.

Consider the following multiple-choice question. See any clues to the correct answer?

Includes a clue to the correct answer:

You are using your outdoor wood-burning ceramic fire pit. Which of the following should you do to reduce the chance that flying sparks will ignite other fires? (Select the best answer.)
a) Burn softer rather than harder woods.
b) Douse the fire with water to put it out.
c) Make sure there aren’t any combustible materials, such as dead vegetation, next to the fire pit.

The above example has a clue that answer choice C is correct because it’s longer and has more details. Here’s a fix to this poorly written multiple-choice question:

Doesn’t include a clue to the correct answer:

You are using your outdoor wood-burning ceramic fire pit. Which of the following should you do to reduce the chance that flying sparks will ignite other fires? (Select the best answer.)
a) Burn softer rather than harder woods.
b) Douse the fire with water to put it out.
c) Remove nearby flammable materials.

To eliminate the clue that C is the answer because it’s longer and has more depth, we rewrite the answer choices to be the same length. Answer C no longer stands out as the obviously correct answer.
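If you maintain a large bank of questions, two of these clues (length disparity and “none of the above”) are mechanical enough to check automatically. Here is a minimal Python sketch of such a check; the function name, the length threshold, and the sample data are my own illustrative choices, not a standard tool:

```python
# Illustrative sketch: flag two of the clue types discussed above
# (answer-length disparity and "none/all of the above") in a question's
# answer choices. The 1.5x threshold is an assumption, not a standard.

def find_clues(choices):
    """Return a list of warnings about common answer-choice clues."""
    warnings = []

    # Clue 1: one choice is much longer than the average of the others.
    if len(choices) > 1:
        lengths = [len(c) for c in choices]
        longest = max(lengths)
        avg_others = (sum(lengths) - longest) / (len(lengths) - 1)
        if longest > 1.5 * avg_others:
            culprit = choices[lengths.index(longest)]
            warnings.append(f"Length clue: '{culprit}' is much longer than the rest")

    # Clue 2: "none of the above" or "all of the above" as a choice.
    for c in choices:
        if c.strip().lower().rstrip(".") in ("none of the above", "all of the above"):
            warnings.append(f"Avoid '{c.strip()}' as an answer choice")

    return warnings

# The produce-rotation question from earlier, which has both problems.
choices = [
    "Lift the display and rotate it 90 degrees.",
    "Swap fruit or vegetables from one end of the store to the other end.",
    "Move the oldest produce to the top of the display and add fresher "
    "produce from stock to the bottom of the display.",
    "None of the above.",
]
for warning in find_clues(choices):
    print(warning)
```

A check like this can’t judge plausibility or grammatical fit, of course; those still require a human reviewer.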

4. MCQs Are Generally Written Poorly; The Question Format Is Typically Worthless

Yes, this myth is partially correct: multiple-choice questions are often poorly written! But they shouldn’t be. Research tells us how to write them so they are easy to understand, offer no clues, and measure important knowledge and skills.

Use What You Learned To Write Better MCQs

To make multiple-choice questions a better assessment tool, we need to write them well. In this article, I’ve offered some important tactics for writing good multiple-choice questions, including:

  • Write your questions above the recall level to assess essential and not trivial knowledge.
  • Write questions that ask people to make decisions and solve problems as they would in real life.
  • Don’t provide any length, grammar, ridiculousness, or “none of the above” or “all of the above” clues!
  • Make sure that all incorrect answers are plausible.
  • Make the answer choices the same length and depth.
  • Train your question writers to use evidence-based question-writing techniques.

Additional Resources:

Want more help? The following two resources can help you and your question writers improve question-writing skills using evidence-based tactics:

[1] Write Better Multiple-Choice Questions to Assess Learning: Measure What Matters— Evidence-Informed Tactics for Multiple-Choice Questions


[2] Are your multiple-choice questions causing problems?

References:

  • Gierl, Mark J., Okan Bulut, Qi Guo, and Xinxin Zhang. 2017. “Developing, analyzing, and using distractors for multiple-choice tests in education: A comprehensive review.” Review of Educational Research 87, no. 6: 1082–1116.
  • Haladyna, Thomas M. 1997. Writing test items to evaluate higher order thinking skills. Needham Heights, MA: Allyn and Bacon.
  • Haladyna, Thomas M. 2004. Developing and validating multiple-choice test items. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
  • Haladyna, Thomas M., and Michael Rodriguez. 2013. Developing and validating test items. New York: Routledge.
  • Shrock, Sharon A., and William C. Coscarelli. 1998. “Make the test match the job.” Interview by N. Chase, Quality 100.
  • Shrock, Sharon A., and William C. Coscarelli. 2007. Criterion-referenced test development: Technical and legal guidelines for corporate training and certification. 3rd ed. Silver Spring: Pfeiffer.

Image Credits:

  • Pixabay
  • Photo by Michael Burrows from Pexels