Best practices for creating questionnaires

November 20, 2023

Discover all our best practices for creating a questionnaire that is both effective and relevant to address your assessment challenges.

Best practices for designing effective questionnaires

Best practices for drafting questions

General best practices for creating a questionnaire

  • The set of questions should cover the domain uniformly,

  • Each question measures a single important knowledge point,

  • No traps or overly complex questions, and no answer choices that differ only in a minor detail,

  • A binary question contains two propositions: one correct, one incorrect. Limit binary questions, as they leave too much room to chance (a 50% guess rate) and therefore provide less information for the same time spent,

  • Ideally aim for 4 or 5 answer choices for each question,

  • The nature of the question determines the chosen type. It is good to vary question types, but there is no need to force a question into a specific type,

  • Do not hesitate to create variations of the same question, especially when tests are assembled by random selection from a question bank, as this adds diversity with minimal writing effort (see the sketch after the tip below),

  • Plan for a minimum of 20 questions per test, preferably 30 if possible, ideally around 40.

Tip: when creating a questionnaire, one way to address binary questions is to combine two binary questions on the same theme to create a multiple-choice question with 4 answer choices (2 true, 2 false).
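The random-selection point above can be made concrete with a short sketch: one variation is drawn per concept, so two learners rarely see exactly the same items. This is only an illustration under assumed data structures and names (QUESTION_BANK, assemble_test); it does not describe the platform's actual selection logic.

```python
import random

# Hypothetical question bank: each concept maps to interchangeable variations.
QUESTION_BANK = {
    "photosynthesis": [
        {"statement": "Which gas do plants absorb during photosynthesis?",
         "choices": ["CO2", "O2", "N2", "H2"], "answer": "CO2"},
        {"statement": "Photosynthesis mainly consumes which gas?",
         "choices": ["CO2", "O2", "N2", "CH4"], "answer": "CO2"},
    ],
    "deciduous_foliage": [
        {"statement": "Deciduous foliage is foliage that...",
         "choices": ["falls", "does not fall", "is needle-shaped", "never changes colour"],
         "answer": "falls"},
    ],
    # ... ideally 20 to 40 concepts in total
}

def assemble_test(bank, nb_questions, seed=None):
    """Pick distinct concepts at random, then one variation per concept."""
    rng = random.Random(seed)
    concepts = rng.sample(list(bank), k=min(nb_questions, len(bank)))
    return [rng.choice(bank[concept]) for concept in concepts]

for question in assemble_test(QUESTION_BANK, nb_questions=2, seed=1):
    print(question["statement"])
```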

Tips for writing the question statement

  • Prefer the interrogative form over the affirmative form for single-choice binary questions, so as to avoid writing a false assertion,

  • Simple and precise vocabulary,

  • Not too long except in the case of fill-in-the-blank texts,

  • Avoid double negations (e.g., it is not impossible...),

  • Do not hesitate to insert illustrations, even purely decorative ones; they make the statement more pleasant to read (a free image library is available).

How to write the answers and distractors of the questionnaire

  • Proposed answers consistent with each other,

  • Same type of formulation (noun phrases, full sentences),

  • Same level of detail (correct answers are often more precise, which can give them away),

  • Target common errors or confusions if possible,

  • Binary questions are useful in a learning context to highlight an essential piece of knowledge or an assertion (e.g., deciduous foliage is foliage: that falls / that does not fall). As mentioned earlier, they are not very discriminating and should be used in moderation,

  • True / False: by default, the answers "true" and "false" are pre-filled but can be modified (never/always or 2 propositions),

  • Text (word(s) freely typed by the learner to answer the question): several accepted propositions can be listed, separated by commas; the first one is the proposition displayed as the correct answer. There is no need to add variants for capitalization. Propositions should be short and unambiguous, and avoid asking for several words. The same applies to fill-in-the-blank texts (a sketch of such answer checking appears after this list),

  • Numeric (number(s) freely provided by the learner): depending on the case, either the exact value must be provided, or ranges of values can be accepted,

  • Matching: preferably present images in the left column to facilitate drag and drop (ideally at least 4 propositions),

  • Dropdown list: allows for fill-in-the-blank text with predefined propositions, limiting choices,

  • Free-response question: complements a questionnaire by reducing the role of chance, but requires manual correction.
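To make the rules for text and numeric answers more concrete, here is a minimal answer-checking sketch: it accepts any of several comma-separated propositions regardless of capitalization, and accepts a numeric value either exactly or within a range. The function names and values are illustrative assumptions, not the platform's actual matching rules.

```python
def check_text_answer(learner_input, accepted="larva, caterpillar"):
    """Accept any of the comma-separated propositions, ignoring case and spaces."""
    propositions = [p.strip().lower() for p in accepted.split(",")]
    return learner_input.strip().lower() in propositions

def check_numeric_answer(value, expected=9.81, tolerance=0.05):
    """Accept either the exact value or any value within the allowed range."""
    return abs(value - expected) <= tolerance

print(check_text_answer("Caterpillar"))   # True: capitalization is ignored
print(check_numeric_answer(9.8))          # True: within the accepted range
print(check_numeric_answer(10.2))         # False: outside the range
```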

Best practices for writing explanations

  • The explanation can detail why a certain answer is correct and another is incorrect. It is read in the context of the question and may refer to the question,

  • Do not mention the position of an answer in the explanation (for example: answer 2 is incorrect because...) since the answers may be shuffled in the test.

Advice on rules or course reminders

  • The rule or memo should be readable and understandable without having the question in view,

  • Prefer media such as sheets, which allow memos to be shared between questions and make maintenance easier (a correction is made on the sheet rather than in each question),

  • Same remark as for the explanation: do not mention the statement or answers since the question will no longer be displayed when the rule is presented in the personalized revision space.

Best practices for domains and tags

If possible, determine a list of criteria for categorizing questions: tags and domains refine the interpretation of a test score.

How to choose questions to create an effective questionnaire 

The selection of questions for a questionnaire depends on its purpose. If the questionnaire is a self-training questionnaire, the selection of questions can be done randomly, adapting to the user's history.

It is advisable to multiply questions addressing the same essential concept, while offering different question types, so as to better validate that concept.

If the questionnaire is a certifying assessment questionnaire, it is important to:

  • Define the objectives of the evaluation in advance,

  • Choose questions that align with the objectives (avoid expert questions if targeting beginners),

  • Choose questions without traps: do not give the impression that there might be traps,

  • Keep the structure of the questions consistent so that the learner does not have to dwell on the question mechanics and can focus on the underlying knowledge. Note, however, that different question types are not equally discriminating; for example, questions that require typed input provide more information than binary questions,

  • Keep the proportion of questions devoted to each objective in line with its share of the material covered,

  • Optionally, end with a few free-response questions for manual correction, which complement the automated evaluation well.

If there is a competency framework, it is very useful to have an inventory of questions that validate specific competencies or abilities. This point is consistent with the general best practice that says: each question measures a single important knowledge point.

Note that this inventory is easy to establish if questions are structured using the platform's domains or tags.
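As a minimal illustration of such an inventory (assuming hypothetical question records carrying domain and tag fields), the sketch below lists which questions validate each tag, which makes coverage gaps visible at a glance.

```python
from collections import defaultdict

# Hypothetical question records with domain/tag metadata
questions = [
    {"id": "Q1", "domain": "Safety", "tags": ["PPE", "regulation"]},
    {"id": "Q2", "domain": "Safety", "tags": ["PPE"]},
    {"id": "Q3", "domain": "Maintenance", "tags": ["lubrication"]},
]

def coverage_by_tag(questions):
    """List which questions validate each tag (competency)."""
    inventory = defaultdict(list)
    for q in questions:
        for tag in q["tags"]:
            inventory[tag].append(q["id"])
    return dict(inventory)

print(coverage_by_tag(questions))
# {'PPE': ['Q1', 'Q2'], 'regulation': ['Q1'], 'lubrication': ['Q3']}
```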

The number of questions included in the questionnaire strongly affects the reliability of a test or evaluation and depends on several criteria.

Test and validate the questionnaire 

After creating the questionnaire, it is necessary to validate it with multiple experts because some may propose overly technical questions, while others may suggest questions that are too simple.

An essential point in the deployment of evaluations is calibration, i.e., determining the score that must be achieved to pass. This exercise is often overlooked. The success threshold should not be set relative to performance (e.g., the bottom 30% of scores fail); the test should not systematically imply that a proportion of participants will fail.

Having written 40 questions on a certain theme, no one can precisely affirm that the expected level is 80% correct answers. The success threshold cannot simply be decreed: one can, of course, arbitrarily set a goal a priori, but it might later be discovered that entirely competent participants do not achieve the required score.

In short, it is imperative to calibrate the test. It is sufficient (and necessary) to have it taken by 5 participants whose level is deemed satisfactory. On the other hand, it is common to focus too much on the success rate of questions. It is good for an evaluation to include easy questions (with a success rate above 80%) and difficult questions (success rate below 20%).
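As a rough illustration of this calibration step, the sketch below derives a pass mark from the scores of a small reference group deemed competent, keeping a small safety margin below their weakest result. The margin and the scores are illustrative assumptions, not a prescribed formula.

```python
def calibrate_threshold(reference_scores, margin=5):
    """Set the pass mark just below the weakest score of the reference group.

    reference_scores: scores (in %) of participants whose level is deemed satisfactory.
    margin: safety margin (in points) subtracted from their minimum score.
    """
    return max(0, min(reference_scores) - margin)

# Scores of 5 reference participants considered to have the expected level
print(calibrate_threshold([72, 78, 81, 69, 75]))  # 64: threshold set after calibration
```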

Therefore, before certification, it is necessary to validate the test with a reference population to:

  • Gather feedback once more; it is less systematic than during the writing phase, but it helps check the accuracy and wording of the questions in a certification context,
  • Obtain initial statistics that help better select questions,
  • Calibrate the minimum level required for the test.

Conditions for taking the questionnaire

Include a training phase

When creating a questionnaire, it is important to question the conditions for taking it: for example, is the integration of a training phase relevant?

Yes, it is one of our recommendations, both as a learning tool and for the security of participants. At the end of the training phase, the participant should be able to say, "I now know that I have the expected level; I am ready to take the certification."

Specifics of certification

The conditions for taking the test are important because some have repercussions on the results:

  • For certification, you can choose 'single page' mode, in which the user sees all the questions, can answer them in whatever order suits them, and can go back to a previous answer until final validation,

  • Random order for presenting questions and answers,

  • No weighting (differentiated scores per question) unless the question type is statistically easier (e.g., binary question with a 50% success rate for random guessing, but we have seen that this type of question should be limited),

  • Partial grading (e.g., 100% / 50%) when multiple-choice questions have more than 2 correct answers (see the sketch after this list),

  • Time control for the entire test rather than a timer per question,

  • Consequences that validate certain abilities or that suggest moving on to another evaluation if the results are insufficient or incomplete.
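To illustrate the partial-grading point in the list above, here is a minimal scoring sketch for a question with several correct answers: full credit for a perfect selection, half credit for an incomplete but error-free selection, zero otherwise. The 100% / 50% / 0% scale follows the example above; the exact rule applied by a given platform may differ.

```python
def partial_score(selected, correct):
    """Return 1.0, 0.5 or 0.0 for a multiple-answer question."""
    selected, correct = set(selected), set(correct)
    if selected == correct:
        return 1.0   # all correct answers ticked and nothing else
    if selected and selected < correct:
        return 0.5   # only correct answers ticked, but some are missing
    return 0.0       # any wrong answer (or no answer) scores zero

print(partial_score({"A", "C"}, {"A", "C"}))   # 1.0
print(partial_score({"A"}, {"A", "C"}))        # 0.5
print(partial_score({"A", "B"}, {"A", "C"}))   # 0.0
```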

To create a quality questionnaire, it is essential to also determine the difficulty level of each question.

Easily create questions with AI

Creating a large question bank can be a time-consuming and tedious operation. To help trainers harness the full power of assessment, ExperQuiz offers an innovative feature that leverages artificial intelligence to generate questions automatically based on the chosen subject.

After selecting a theme, the AI will generate a series of questions as well as correct answers and distractors (incorrect ones). The trainer can then choose the ones that seem suitable to integrate, with a click, into a question bank.
