
Reinventing assessments: the forgotten power of open-ended questions
In vocational training, the open-ended question (also called a free-response question) is a question type that is used far too rarely in assessments, mostly because marking it is cumbersome. Yet the open-ended question has many advantages when it comes to assessing knowledge!
What is an open-ended question?
An open-ended question is a type of question that invites the learner to formulate a developed, reasoned answer, rather than choosing from predefined options (as in an MCQ) or giving a short, factual answer. It is not limited to a single correct answer: several formulations, points of view or approaches can be considered relevant, depending on the quality of the reasoning and the mastery of the subject.
Concretely, an open-ended question often asks the learner to explain, justify, analyze, compare, interpret, or propose. For example: “Explain how the Industrial Revolution transformed European societies in the 19th century” or “Justify the methodological choices of your experiment.”
An open-ended question does not necessarily call for a written answer; open-ended questions are also frequently asked orally, face to face.
When should you use an open-ended question?
Open-ended questions are particularly appropriate when the objective of the assessment goes beyond the simple recall of knowledge. They are used to measure the learner’s ability to:
- mobilize and articulate several pieces of knowledge to build a coherent response;
- exercise critical thinking and formulate personal reasoning;
- communicate ideas clearly and use specific disciplinary vocabulary;
- demonstrate a thorough understanding of concepts, rather than mechanical memorization.
They are therefore relevant in summative assessments, written productions, the resolution of complex problems, or exams where the ability to argue is essential.
With advances in digital assessment technology, it is now possible to let the respondent give a verbal answer to an open-ended question in an online questionnaire. The learner records their response instead of writing it, and the grader can listen to the recording and assign a grade. This format can be particularly effective for learning a foreign language or in a recruitment process.
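As a rough illustration of how such a spoken answer can be captured, here is a minimal sketch for a browser-based questionnaire using the standard MediaRecorder API; the upload endpoint and the fixed time limit are hypothetical placeholders, not the interface of any particular assessment platform.

```typescript
// Minimal sketch: capturing a spoken answer to an open-ended question in the browser.
// The "/api/answers/.../audio" endpoint is a hypothetical placeholder.
async function recordVerbalAnswer(questionId: string, durationMs: number): Promise<void> {
  // Ask the browser for microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);

  // When recording stops, bundle the audio and upload it for later review by the grader.
  recorder.onstop = async () => {
    const audio = new Blob(chunks, { type: recorder.mimeType });
    const form = new FormData();
    form.append("audio", audio, `answer-${questionId}.webm`);
    await fetch(`/api/answers/${questionId}/audio`, { method: "POST", body: form });
    stream.getTracks().forEach((track) => track.stop()); // release the microphone
  };

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs); // e.g. a 60-second speaking time
}
```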
The proven effectiveness of the free response
Several studies indicate that open-ended questions help mobilize higher-level cognitive skills, such as analysis, argumentation, or explanation, making them particularly useful for evaluating more than just memory.
Research on the testing effect repeatedly shows that producing an answer (generative recall) improves long-term retention more effectively than simply rereading the material, especially when the required response is generated (short or free answer) rather than merely selected from a list of options. For example, Butler & Roediger (2007) showed that, under conditions simulating a class, an initial short-answer test resulted in better final recall than rereading or a multiple-choice test. Numerous reviews and meta-analyses confirm that formats that require information to be generated have a greater learning effect than purely selective formats.
Open-ended questions provide information that MCQs do not reveal: they let us observe the structure of the reasoning, the intermediate steps, and the conceptual errors themselves (rather than only the option chosen). Recent studies show that constructed-response formats (constructed, short, or free responses) can offer high predictive validity and discrimination when they are well designed. For example, work in education and professional selection reports that constructed-response tests improve the validity of judgments and the accuracy of diagnosis compared with more classical question types. In addition, research on very short answer questions (VSAQs) shows good reliability and comparable discrimination, which makes these formats viable on a large scale if item writing and marking are carefully controlled.
Finally, from a practical and formative point of view, integrating open-ended questions into assessments produces two measurable benefits:
- better knowledge retention through generative recall;
- more usable information for the teacher (diagnosis of difficulties, adaptation of teaching).
Recent reviews and syntheses conclude that, although open-ended questions require more marking work, their contribution in terms of lasting learning and diagnostic information makes them particularly effective for assessing higher-level competencies (analysis, justification, synthesis). For these reasons, the literature recommends using them in a targeted manner (formative assessments, key exam items) and, where necessary, combining automation with marking scales to limit the correction burden.
Points to watch out for
However, open-ended questions take more time to mark and require explicit assessment criteria to ensure fairness between responses. They should therefore be avoided for large-scale assessments or for purely factual checks (for example, verifying knowledge of a definition or a formula).
A free response also requires that the respondent have enough time to write their answer, so it is not suited to short, rapid assessments.
In a digital assessment, it is also important to take into account the context in which the learner sits the exam. The answer to an open-ended question can be given on a mobile phone as long as the expected answer is short. If the objective is to obtain a more detailed answer running to several paragraphs, it is better for the answer to be given on a computer, which is more comfortable for writing long texts.
Asynchronous correction: AI to the rescue
Cleverly integrated into an assessment platform, artificial intelligence is a real opportunity to facilitate and encourage the use of open-ended questions in vocational training programs!
AI can greatly simplify marking by automatically carrying out a pre-analysis of each response. It provides the reviewer with initial information on the relevance and quality of the answer, as well as suggested comments or grades.
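As an illustration only, here is a minimal sketch of what such a pre-analysis could look like, assuming a generic text-generation service passed in as a callLanguageModel function; the prompt structure, the 0–10 scale and the returned fields are assumptions made for the example, not the API of any specific platform.

```typescript
// Minimal sketch of an AI pre-analysis of a free-text answer.
// callLanguageModel stands for an assumed, generic text-generation service; the fields
// returned here (suggestedGrade, strengths, weaknesses, draftComment) are illustrative.
interface PreAnalysis {
  suggestedGrade: number;   // e.g. on a 0-10 scale, to be confirmed or adjusted by the trainer
  strengths: string[];
  weaknesses: string[];
  draftComment: string;
}

async function preAnalyseAnswer(
  callLanguageModel: (prompt: string) => Promise<string>, // any text-generation backend
  question: string,
  criteria: string[],       // the explicit marking criteria for this question
  learnerAnswer: string
): Promise<PreAnalysis> {
  const prompt = [
    "You assist a human grader; you do not make the final decision.",
    `Question: ${question}`,
    `Marking criteria: ${criteria.join("; ")}`,
    `Learner's answer: ${learnerAnswer}`,
    "Reply with JSON: suggestedGrade (0-10), strengths, weaknesses, draftComment."
  ].join("\n");

  // The parsed output is only a suggestion, shown to the reviewer alongside the raw answer.
  return JSON.parse(await callLanguageModel(prompt)) as PreAnalysis;
}
```

In such a design, the suggestion is displayed next to the learner's answer so that the reviewer can accept, adjust, or override it.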
However, AI does not replace the trainer, who remains responsible for the final decision and, where necessary, adjusts the suggested grade or feedback.
This combination of technology and human judgment ensures reliable and consistent results while simplifying the marking process, making it easier to exploit the full potential of the open-ended question!