Hair 4e IM Ch08
Chapter 8
Designing the Questionnaire
1. Bad questions
2. Call records
3. Common methods variance (CMV)
4. Cover letter
5. Interviewer instructions
6. Introductory section
7. Questionnaire
8. Quotas
9. Research questions section
10. Response order bias
11. Screening questions
12. Sensitive questions
13. Skip questions
14. Structured questions
15. Supervisor instruction form
16. Unstructured questions
Copyright © 2017 McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill
Education.
Chapter 08 – Designing the Questionnaire
scaling, determine layout and evaluate questionnaire, obtain initial client approval, pretest, revise
and finalize questionnaire, and implement the survey.
A number of design considerations and rules of logic apply to the questionnaire development
process. The process requires knowledge of sampling plans, construct development, scale
measurement, and types of data. A questionnaire is a set of questions/scales designed to collect
data and generate information to help decision makers solve business problems. Good
questionnaires enable researchers to gain a true report of the respondent’s attitudes, preferences,
beliefs, feelings, behavioral intentions, and actions. Through carefully worded questions and
clear instructions, a researcher has the ability to focus a respondent’s thoughts and ensure
answers that faithfully represent respondents’ attitudes, beliefs, intentions, and knowledge. By
understanding good communication principles, researchers can avoid bad questions that might
result in unrealistic information requests, unanswerable questions, or leading questions that
prohibit or distort the respondent’s answers.
Survey information requirements play a critical role in the development of questionnaires. For
each objective, the researcher must choose types of scale formats (nominal, ordinal, interval, or
ratio); question formats (open-ended and closed-ended); and the appropriate scaling. Researchers
must be aware of the impact that different data collection methods (personal, telephone, self-
administered, computer-assisted) have on the wording of both questions and response choices.
With good questionnaires, the questions are simple, clear, logical, and meaningful to the
respondent, and move from general to specific topics.
The primary role of a cover letter should be to obtain the respondent’s cooperation and
willingness to participate in the project. Ten factors should be examined in developing cover
letters. Observing these guidelines will increase response rates.
When data are collected using interviews, supervisor and interviewer instructions must be
developed as well as screening forms and call record sheets. These documents ensure the data
collection process is successful. Supervisor instructions serve as a blueprint for training people to
complete the interviewing process in a consistent fashion. The instructions outline the process for
conducting the study and are important to any research project that uses personal or telephone
interviews. Interviewer instructions are used to train interviewers to correctly select a prospective
respondent for inclusion in the study, screen prospective respondents for eligibility, and
properly conduct the actual interview. Screening forms are a set of preliminary questions used to
confirm the eligibility of a prospective respondent for inclusion in the survey. Quota sheets are
tracking forms that enable the interviewer to collect data from the right type of respondents. All
of these documents help improve data collection and accuracy.
Chapter Outline
Opening Vignette: Can Surveys Be Used to Develop University Residence Life Plans?
The opening vignette in this chapter describes a survey instrument developed for a “Residence
Life” program to determine the housing needs of current and future students at a large university
(43,000 students). The objectives of the program revolved around the idea that high-quality on-
campus living facilities and programs could help attract new students to the university. MPC
Consulting Group, Inc., which specializes in the assessment of on-campus housing programs,
prepared a self-administered survey instrument to be delivered to existing students using the
university’s newly acquired “Blackboard” electronic learning management system. The survey
asked for demographic/socioeconomic characteristics, followed by questions concerning
students’ current housing situation and an assessment of those conditions, the importance of
housing characteristics, and intention of living in on-campus versus off-campus housing
facilities. The questionnaire, however, required viewing 24 screens and answering six different
“screener” questions that had respondents skipping back and forth between computer screens
depending on how they answered. The response was dismal: only 17 students responded, and
eight of those responses were incomplete. The university asked MPC to address the reasons
for the low response rate, the quality of the instrument, and the value of the data collected.
Most surveys are designed to be descriptive or predictive. Descriptive research designs use
questionnaires to collect data that can be turned into knowledge about a person, object, or issue.
In contrast, predictive survey questionnaires require the researcher to collect a wider range of
data that can be used in predicting changes in attitudes and behaviors as well as in testing
hypotheses.
Researchers should know the activities and principles involved in designing survey
questionnaires. A questionnaire is a document consisting of a set of questions and scales
designed to gather primary data. Good questionnaires enable researchers to collect reliable and
valid information. Advances in communication systems, the Internet and software have
influenced how questions are asked and recorded. Yet the principles followed in designing
questionnaires remain essentially the same.
A pilot study is a small-scale version of the intended main research study that includes all the
subcomponents of the main study, including the data collection and analysis from about 50 to
100 respondents who are representative of the main study’s defined target population.
Exhibit 8.1 lists the steps followed in developing survey questionnaires (PPT slide 8-4):
In the initial phase of the development process, the research objectives are agreed upon by the
researcher and bank management.
To select the data collection method, the researcher first must determine the data requirements
to achieve each of the objectives as well as the type of respondent demographic information
desired. In doing so, the researcher should follow a general-to-specific order.
In making these decisions, researchers must consider how the data are to be collected. For
example, appropriate questions and scaling often differ between online, mail, and telephone
surveys.
Question format: Unstructured questions are open-ended questions that enable respondents
to reply in their own words. There is no predetermined list of responses available to aid or
limit respondents’ answers. Open-ended questions are more difficult to code for analysis.
Perhaps more importantly, these questions require more thinking and effort on the part of
respondents. As a result, with quantitative surveys there are generally only a few open-ended
questions. Unless the question is likely to be interesting to respondents, open-ended questions
are often skipped.
Structured questions are closed-ended questions that require the respondent to choose from
a predetermined set of responses or scale points. Structured formats reduce the amount of
thinking and effort required by respondents, and the response process is faster. In quantitative
surveys, structured questions are used much more often than unstructured ones. They are
easier for respondents to fill out and easier for researchers to code. Examples of structured
questions are shown in Exhibit 8.2.
Wording: Researchers must carefully consider the words used in creating questions and
scales. Ambiguous words and phrases as well as vocabulary that is difficult to understand
must be avoided. Researchers must select words carefully to make sure respondents are
familiar with them, and when unsure, questionable words should be examined in a pretest.
Some question topics are considered sensitive and must be structured carefully to increase
response rates. Examples of sensitive questions include income, sexual beliefs or behaviors,
medical conditions, financial difficulties, alcohol consumption, and so forth. These types of
behaviors are often engaged in but may be considered socially unacceptable.
Guidelines for asking sensitive questions start with not asking them unless they are required
to achieve your research objectives. If they are necessary, assure respondents that their
answers will be kept completely confidential. Another guideline is to indicate the behavior is
not unusual.
Questions and scaling: Questions and scale format directly impact survey designs. To collect
accurate data researchers must devise good questions and select the correct type of scale.
Whenever possible, metric scales should be used. Also, researchers must be careful to
maintain consistency in scales and coding to minimize confusion among respondents in
answering questions.
Once a particular question or scale is selected, the researcher must ensure it is introduced
properly and easy to respond to accurately. Bad questions prevent or distort communications
between the researcher and the respondent. If the respondent cannot answer a question in a
meaningful way, it is a bad question. Some examples of bad questions are those that are:
Unanswerable either because the respondent does not have access to the information
needed or because none of the answer choices apply to the respondent.
Leading (or loaded) in that the respondent is directed to a response that would not
ordinarily be given if all possible response categories or concepts were provided, or if
all the facts were provided for the situation.
Double-barreled in that they ask the respondent to address more than one issue at a
time.
When designing specific questions and scales, researchers should act as if they are two
different people: one thinking like a technical, systematic researcher and the other like a
respondent. The questions and scales must be presented in a logical order. After selecting a
title for the questionnaire, the researcher includes a brief introductory section and any general
instructions prior to asking the first question. Questions should be asked in a natural general-
to-specific order to reduce the potential for sequence bias. Also, any sensitive or more
difficult questions should be placed later in the questionnaire after the respondent becomes
engaged in the process of answering questions.
Some questionnaires have skip or branching questions. Skip questions can appear anywhere
within the questionnaire and are used if the next question (or set of questions) should be
responded to only by respondents who meet a previous condition. Skip questions help ensure
that only specifically qualified respondents answer certain items. When skip questions are
used, the instructions must be clearly communicated to respondents or interviewers. If the
survey is online, then skip questions are easy to use and handled automatically.
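The automated skip handling mentioned above can be sketched in code. The following is a minimal illustration, not from the text; the question IDs and the skip rule are hypothetical:

```python
# Hypothetical skip-logic router for an online survey.
# Question IDs and the skip rule below are invented for illustration.

def next_question(current_id: str, answer: str) -> str:
    """Return the ID of the next question to display, applying skip rules."""
    # Skip rule: respondents who answer "No" to Q4 (a qualifying condition)
    # bypass the follow-up block Q5-Q8 entirely.
    skip_rules = {
        ("Q4", "No"): "Q9",
    }
    # Default sequential order when no skip rule applies.
    default_order = {"Q4": "Q5", "Q5": "Q6", "Q6": "Q7", "Q7": "Q8", "Q8": "Q9"}
    return skip_rules.get((current_id, answer), default_order.get(current_id, "END"))

print(next_question("Q4", "No"))   # Q9 -- follow-up block is skipped
print(next_question("Q4", "Yes"))  # Q5 -- respondent continues normally
```

In an interviewer-administered survey, the same rule would instead appear as a written instruction (e.g., "If NO, skip to Question 9").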
Respondents should be made aware of the time it will take to complete the questionnaire and
of their progress in completing it. This begins in the introductory section when the respondent
is told how long it will take to complete the questionnaire, but it continues throughout the
questionnaire. For online surveys this is easy and most surveys have an icon or some other
indicator of the number of questions remaining or progress toward completion.
Prior to developing the layout for the survey questionnaire, the researcher should assess the
reliability and validity of the scales. Once this is completed, the focus is on preparing
instructions and making required revisions.
D. Step 4: Determine Layout and Evaluate Questionnaire (PPT slides 8-9 to 8-12)
In good questionnaire design, questions flow from general to more specific information, and
end with demographic data. Questionnaires begin with an introductory section that gives the
respondent an overview of the research. The section begins with a statement to establish the
legitimacy of the questionnaire. Screening questions (also referred to as screeners or filter
questions) are used on most questionnaires. Their purpose is to identify qualified prospective
respondents and prevent unqualified respondents from being included in the study. It is
difficult to use screening questions in many self-administered questionnaires, except for
computer-assisted surveys. Screening questions are completed before the beginning of the
main portion of the questionnaire that includes the research questions. The introductory
section also includes general instructions for filling out the survey.
The second section of the questionnaire focuses on the research questions. This is called the
research questions section, and based on the research objectives the sequence is arranged
from general questions to more specific questions. If the study has multiple research
objectives then questions designed to obtain information on each of the objectives also should
be sequenced from general to specific. One exception to this would be when two sections of a
questionnaire have related questions. In such a situation the researcher would typically
separate the two sections (by inserting a nonrelated set of questions) to minimize the
likelihood that answers to one set of questions might influence the answers given in a second
set of questions. Finally, any difficult or sensitive questions should be placed toward the end
of each section.
The last section includes demographic questions for the respondents. Demographic questions
are placed at the end of a questionnaire because they often ask personal information and many
people are reluctant to provide this information to strangers. Until the “comfort zone” is
established between the interviewer and respondent, asking personal questions could easily
bring the interviewing process to a halt. The questionnaire ends with a thank-you statement.
Some researchers have recently expressed concerns that questionnaire design might result in
common methods variance (CMV) being embedded in respondent data collected from
surveys. Common methods variance is biased variance that results from the measurement
method used in a questionnaire rather than from the constructs the scales are intended to
measure. CMV is present
in survey responses when the answers given by respondents to the independent and dependent
variable questions are falsely correlated. The bias introduced by CMV is most likely to occur
when the same respondent answers at the same time both independent and dependent variable
questions that are perceptual in nature, and the respondent recognizes a relationship between
the two types of variables.
Questionnaire formatting and layout should make it easy for respondents to read and follow
instructions. If the researcher fails to consider questionnaire layout, the quality of the data can
be substantially reduced. The value of a well-constructed questionnaire is difficult to estimate.
The main function of a questionnaire is to obtain people’s true thoughts and feelings about
issues or objects. Data collected using questionnaires should improve understanding of the
problem or opportunity that motivated the research. In contrast, bad questionnaires can be
costly in terms of time, effort, and money without yielding good results.
The design of online surveys requires additional planning. A primary metric for traditional
data collection methods is the response rate. To determine response rates, the researcher must
know both the number of attempts to contact respondents and the number of completed
questionnaires. With mail
or phone surveys this task is relatively easy. With online surveys researchers must work with
the online data collection field service to plan how respondents will be solicited. If the data
collection service sends out a “blanket” invitation to complete a survey there is no way to
measure response rates. Even if the invitation to complete a survey is sent to an organized
panel of respondents, calculating response rates can be a problem. This is because panels may
involve millions of individuals with broad criteria describing them. Another issue is recruiting
of participants. If individuals are invited to participate and they decline, should they be
included in the response rate metric? Or should the response rate metric be based on just those
individuals who say they qualify and agree to respond, whether or not they actually respond?
To overcome these problems, researchers must work closely with data collection vendors to
identify, target, and request participation from specific groups so accurate response rates can
be calculated. Moreover, a clear understanding of how individuals are recruited for online
surveys is necessary before data collection begins.
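The ambiguity in defining the response rate metric can be shown with a small, hypothetical calculation (all figures are invented for illustration):

```python
# Illustrative calculation: the response rate changes dramatically depending
# on who is counted in the denominator. All numbers are hypothetical.

invited = 2000          # targeted invitations sent by the vendor
qualified_agreed = 600  # said they qualify and agreed to respond
completed = 450         # actually completed the questionnaire

rate_vs_invited = completed / invited          # denominator: everyone invited
rate_vs_agreed = completed / qualified_agreed  # denominator: only those who agreed

print(f"Response rate vs. all invited:      {rate_vs_invited:.1%}")  # 22.5%
print(f"Response rate vs. those who agreed: {rate_vs_agreed:.1%}")   # 75.0%
```

The same number of completed questionnaires yields very different rates depending on which denominator the researcher and the vendor agree to use, which is why that agreement must be reached before data collection begins.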
A related problem in calculating the response rate metric online is the possibility of
recruitment of participants outside the official online data collection vendor. For example, it is
not uncommon for an individual included in the official invitation to participate to recruit
online friends and suggest that they participate. Effective control mechanisms, such as a
unique identifier that must be entered before taking the survey, must be put in place ahead of
time to prevent this type of unsolicited response.
Another problem with online surveys is the length of time it takes some respondents to
complete the survey. To deal with this issue, researchers should make sure the online data
collection vendor measures “time for completion” as a metric.
Research firms are just beginning to analyze online questionnaire design issues. The
following three issues have been addressed to date:
The effect of response box size on length of answer in open-ended questions—
respondents will write more when boxes are bigger. Some research firms cope with this
by offering respondents the choice between small, medium, and large boxes so that the
box size does not affect their response.
Use of radio buttons versus pull-down menus for responses—if a response is
immediately visible at the top of the pull-down menu, it will be selected more often than
if it is simply one of the responses on the list. Thus, if a pull-down menu is used, the top
option should be “select one.”
Appropriate use of visuals
Online formats facilitate the use of improved rating and ranking scales, as well as extensive
graphics and animation. Programming surveys in the online environment offers the
opportunity to use graphics and scales in new ways, some of which are helpful and some of
which are not. As with traditional data collection methods and questionnaire design, the best
guideline is the KISS test—Keep It Simple and Short! Complex formats or designs can
produce biased findings and should be avoided.
Copies of the questionnaire should be given to all parties involved in the project. This is the
client’s opportunity to provide suggestions of topics overlooked or to ask any questions.
Researchers must obtain final approval of the questionnaire prior to pretesting. If changes are
necessary, this is where they should occur. Changes at a later point will be more expensive
and may not be possible.
F. Step 6: Pretest, Revise, and Finalize the Questionnaire (PPT slides 8-14 and 8-15)
The final version of the questionnaire is evaluated using a pretest. In the pretest, the survey
questionnaire is given to a small, representative group of respondents who are asked to fill out
the survey and provide feedback to researchers. The number of respondents is most often
between 10 and 20 individuals. In a pretest, respondents are asked to pay attention to words,
phrases, instructions, and question sequence. They are asked to point out anything that is
difficult to follow or understand. Returned questionnaires are checked for signs of boredom or
tiring on the part of the respondent. These signs include skipped questions or circling the
same answer for all questions within a group. The pretest helps the researcher determine how
much time respondents will need to complete the survey, whether to add or revise
instructions, and what to say in the cover letter.
Researchers working with complex questionnaires involving scale development or revision
often conduct a pilot study. The terms “pilot study” and “pretest” are sometimes used
interchangeably but actually are different. A pilot study is a small-scale version of the
intended main research study, including the data collection and analysis, and typically
involves 100 to 200 respondents who are similar to the study’s defined target population. In
some situations, a pilot study is conducted
to examine specific subcomponents of the overall research plan, such as the experimental
design manipulations, to see if the proposed procedures work as expected or if refinements are
needed.
The focus here is on the process followed to collect the data using the agreed-upon
questionnaire. The process varies depending on whether the survey is self-administered or
interviewer-completed. For example, self-completed questionnaires must be distributed to
respondents, and methods must be used to increase response rates. Similarly, with Internet
surveys the format, sequence, skip patterns, and instructions must be thoroughly checked after
the questionnaire is uploaded to the web. Thus, implementation involves following up to ensure all
previous decisions are properly implemented.
IV. The Role of a Cover Letter (PPT slides 8-17 and 8-18)
A cover letter is used with a self-administered questionnaire. The primary role of the cover letter
is to obtain the respondent’s cooperation and willingness to participate in the research project.
With personal or telephone interviews, interviewers use a verbal statement that includes many of
the points covered by a mailed or drop-off cover letter. Self-administered surveys often have low
response rates (25 percent or less). Good cover letters increase response rates. Exhibit 8.5
provides guidelines for developing cover letters (PPT slide 8-18).
Personalization
Identification of the organization
Clear statement of the study’s purpose and importance
Anonymity and confidentiality
General time frame of doing the study
Reinforce the importance of respondent’s participation
Acknowledge reasons for nonparticipation in survey or interview
Time requirements and incentive
When data are collected, supervisor and interviewer instructions often must be developed as well
as screening questions and call records. These mechanisms ensure the data collection process is
successful. Some of these are needed for interviews and others for self-administered
questionnaires.
A supervisor instruction form serves as a blueprint for training people on how to execute
the interviewing process in a standardized fashion; it outlines the process for conducting
a study that uses personal or telephone interviewers. The instructions include the following:
detailed information on the nature of the study
start and completion dates
sampling instructions
number of interviewers required
equipment and facility requirements
reporting forms
quotas
validation procedures
Exhibit 8.6 displays a sample page from a set of supervisor instructions for a restaurant study.
Interviewer instructions are used for training interviewers to correctly select a prospective
respondent for inclusion in the study, screen prospective respondents for eligibility, and to
properly conduct the actual interview. The instructions include the following:
detailed information about the nature of the study
start and completion dates
sampling instructions
screening procedures
quotas
number of interviews required
guidelines for asking questions
use of rating cards
recording responses
reporting forms
verification form procedures
Screening questions ensure the respondents included in a study are representative of the
defined target population. Screening questions are used to confirm the eligibility of a
prospective respondent for inclusion in the survey and to ensure that certain types of
respondents are not included in the study. This occurs most frequently when a person’s direct
occupation or a family member’s occupation in a particular industry eliminates the person
from inclusion in the study.
A quota is a tracking system that helps ensure that subgroups of respondents are
represented in the sample as specified. When a particular quota for a subgroup
of respondents is filled, questionnaires for that subgroup are no longer completed. If
interviewers are used they record quota information and tabulate it to know when interviews
for targeted groups have been completed. If the survey is online then the computer keeps track
of questionnaires completed to ensure quotas for targeted groups are met.
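As a rough sketch, the automatic quota tracking described above might work as follows; the subgroup labels and quota sizes are assumptions for illustration:

```python
# Minimal sketch of how an online survey system might enforce quotas.
# Subgroup names and quota sizes are invented for illustration.

quotas = {"male_18_34": 50, "female_18_34": 50}     # target completions
completed = {"male_18_34": 0, "female_18_34": 0}    # running tallies

def accept_respondent(subgroup: str) -> bool:
    """Accept a respondent only if that subgroup's quota is not yet filled."""
    if subgroup not in quotas or completed[subgroup] >= quotas[subgroup]:
        return False  # quota filled (or unknown subgroup): stop collecting
    completed[subgroup] += 1
    return True

completed["male_18_34"] = 50                  # simulate a filled quota
print(accept_respondent("male_18_34"))        # False: quota already met
print(accept_respondent("female_18_34"))      # True: still collecting
```

With interviewers, the same tally would be kept on a paper quota sheet and checked before each interview begins.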
Call records, also referred to as either reporting or tracking approaches, are used to estimate
the efficiency of interviewing. These records typically collect information on the number of
attempts to contact potential respondents made by each interviewer and the results of those
attempts. Call and contact records are often used in data collection methods that require the
use of an interviewer, but also can be used with online surveys. Information gathered from
contact records includes the following:
number of calls or contacts made per hour
number of contacts per completed interview
length of time of the interview
completions by quota categories
number of terminated interviews
reasons for termination
number of contact attempts
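Two of the efficiency metrics listed above can be computed directly from a call record. A hypothetical example (field names and figures are invented):

```python
# Hedged sketch: computing interviewer-efficiency metrics from a call record.
# The record fields and figures below are hypothetical.

call_record = {
    "contacts_made": 120,        # contact attempts logged by the interviewer
    "completed_interviews": 30,  # successfully completed questionnaires
    "hours_worked": 8,
}

contacts_per_hour = call_record["contacts_made"] / call_record["hours_worked"]
contacts_per_completion = (
    call_record["contacts_made"] / call_record["completed_interviews"]
)

print(f"Contacts per hour: {contacts_per_hour:.1f}")                       # 15.0
print(f"Contacts per completed interview: {contacts_per_completion:.1f}")  # 4.0
```

Comparing such ratios across interviewers is one way a supervisor might spot training problems or unusually difficult quota cells.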
Designing a Questionnaire to Survey Santa Fe Grill Customers (PPT slides 8-22 and 8-23)
The Marketing Research in Action in this chapter extends the discussion on questionnaire design.
It includes actual Screening Questions (Exhibit 8.7) and a Questionnaire (Exhibit 8.8) for Santa
Fe Grill. The research objectives guiding the survey are provided.
1. Based on the research objectives, does the self-administered questionnaire, in its current
form, correctly illustrate sound questionnaire design principles? Please explain why or why
not.
No, the self-administered questionnaire does not correctly illustrate sound
questionnaire design principles. It does not flow from the general to the specific. A better
order might be Selection Factors, Perception Measures, Relationship Measures, Life Style
Questions, and finally Classification Questions. In the Relationship section, Question 25
should come before Question 22, followed by the rest in the order given. Also, in Section
1: Life Style Questions, the term “Questions” in the subtitle should be changed to
“Features” because the belief statements are not framed in a question format. In addition, the
actual scale measurement used in Section 1 and Section 2 should be redesigned to eliminate
the redundancy of repeating “Strongly Disagree” and “Strongly Agree.”
2. Overall, is the current survey design able to capture the required data needed to address all
the stated research objectives? Why, or why not? If changes are needed, how would you
change the survey’s design?
Keeping the six research objectives in mind, Santa Fe Grill could make the following
changes to its survey design.
(i) To identify the factors people consider important in making casual dining restaurant
choice decisions
The restaurant decision factors are fairly well covered, and it is a good idea to include
an ordering scheme. A better approach would be to include an item for convenience of
location and let respondents list any other factors they consider important. It
would be more useful if some way could be devised to assign a relative importance
to what is now a simple ordinal scale. A constant-sum question would be a simple
way to get that information.
(ii) To determine the characteristics customers use to describe the Santa Fe Grill and its
competitor, Jose’s Southwestern Café
Customers’ description of the Santa Fe Grill is covered, but there is still the question
concerning coverage of the important factors. Are all possible descriptors included in
the questionnaire? Which descriptors do customers find most attractive?
(iv) To determine the patronage and positive word-of-mouth advertising patterns of the
restaurant customers
These factors seem to be covered. Is there any type of word-of-mouth promotion that
goes beyond recommending a place to a friend? That issue might need more attention
since personal recommendations are an important reason that people choose a place
to dine out.
(v) To assess the customer’s willingness to return to the restaurant in the future
Likelihood of return is covered in one of the questions.
(vi) To assess the degree to which customers are satisfied with their Mexican restaurant
experiences
There is just one question on satisfaction with an open-ended follow-up for people
who expressed dissatisfaction. Satisfaction is a multi-dimensional construct. There is,
however, a trade-off between getting complete information and having the
respondent get tired of answering questions.
3. Evaluate the “screener” used to qualify the respondents. Are there any changes needed?
Why, or why not?
The first screener is fine. It gets right to the characteristic of interest. The second screener
about having eaten at some other Mexican restaurant isn’t particularly useful. Why is it
relevant to have eaten at some other restaurant? There are no questions about comparing
Santa Fe Grill with other restaurants in the survey. The income question doesn’t add much.
There may be a valid reason for excluding people with low income, but they are clearly
part of Santa Fe Grill’s customer base: they were in the restaurant at the time of
the survey.
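Screener logic is essentially an early-exit check applied before the main questions begin. The sketch below illustrates the simplified screener suggested above, with only the first qualifying question retained; the field name is invented for illustration.

```python
# Illustrative screener logic (the question id is hypothetical).
# Per the critique above, only the first screener is kept: the
# respondent must be a Santa Fe Grill customer. The other-restaurant
# and income screeners are dropped.

def qualifies(answers):
    """Return True if the respondent passes the screener.

    `answers` maps screener question ids to responses, e.g.
    {"dined_at_santa_fe_grill": True}. An unanswered screener
    question is treated as a disqualification.
    """
    return answers.get("dined_at_santa_fe_grill", False) is True

# Qualified respondents continue to the main questionnaire;
# others are thanked and the interview is terminated.
qualified = qualifies({"dined_at_santa_fe_grill": True})
```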
4. Redesign questions # 26 to 29 on the survey using a rating scale that will enable you to
obtain the “degree of importance” a customer might attach to each of the four listed
attributes in selecting a restaurant to dine at.
Listed below are some reasons that many people might use in selecting a restaurant where
they want to dine. Think about your visits to casual dining restaurants in the last three
months. For each attribute listed below, please circle the number that indicates how
important that attribute is in your decision about where to dine when selecting a casual
restaurant, where 1 = not at all important, 2 = somewhat unimportant, 3 = neither
important nor unimportant, 4 = somewhat important, and 5 = extremely important.
Attribute 1 2 3 4 5
Price 1 2 3 4 5
Food quality 1 2 3 4 5
Atmosphere 1 2 3 4 5
Service 1 2 3 4 5
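Responses to the redesigned items can then be summarized by attribute, which is exactly the "degree of importance" information the original ranking questions could not provide. The sketch below is illustrative; the ratings are hypothetical responses on the 1-to-5 scale described above.

```python
# Illustrative summary of the redesigned importance items.
# Ratings are hypothetical responses on the 1-5 scale above
# (1 = not at all important ... 5 = extremely important).

ratings = {
    "Price":        [4, 5, 3, 4, 2],
    "Food quality": [5, 5, 4, 5, 4],
    "Atmosphere":   [3, 2, 4, 3, 3],
    "Service":      [4, 4, 5, 4, 5],
}

def mean_importance(scores):
    """Average importance rating for one attribute."""
    return sum(scores) / len(scores)

# Rank attributes from most to least important on average.
ranked = sorted(ratings, key=lambda a: mean_importance(ratings[a]), reverse=True)
```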
The type of question format (structured or unstructured) has a direct impact on survey
design. The advantage of using unstructured questions is that they provide a rich array of
information to the researcher, since there are no predetermined responses available to aid
(or limit) a respondent’s answer. The downside is that this type of question requires more
thinking and effort on the part of the respondent. If respondents fail to comprehend
what is being asked, they may leave the question blank, since in a self-administered
survey there is no interviewer present to intervene and address their questions or
concerns. Closed-ended questions require the respondent to choose from a predetermined
set of scale points or responses, which reduces the amount of thinking and effort
required of the respondent.
2. Explain the role of a questionnaire in the research process. What should be the role of the
client during the questionnaire development process?
question in the past, and what the outcomes were. Survey design is an iterative process, so
the research team should expect to revise a number of drafts of the survey instruments
before a pre-test, and expect to subject the instrument to further revisions after that.
3. What are the guidelines for deciding the format and layout of a questionnaire?
Decisions about “format” and “layout” are directly related to function: the
questionnaire must capture primary information from the target population to
address business or marketing information problems. Questionnaire
formatting and layout should make it easy for respondents to read and follow instructions.
After preparing the questionnaire but before submitting it to the client for approval, the
researcher should review the document carefully. The focus is on determining whether
each question is necessary and if the overall length is acceptable. Also, the researcher
checks to make sure that the survey meets the research objectives, that the scale format and
instructions work well, and that the questions move from general to specific.
4. What makes a question bad? Develop three examples of bad questions. Rewrite your
examples so they could be judged as good questions.
Bad questions are those that cause a fundamental “disconnect” between
researcher and respondent. They can be unanswerable, leading (or loaded), or
double-barreled. Listed below are three examples of “bad” questions and the remedies which flow
from the guidelines presented.
(i) “What do you think of the exceptionally poor customer service provided by the
circulation desk at our campus library?” This is a leading or loaded question. The phrase
“exceptionally poor” should be eliminated.
(ii) “Can you see a world in which the mall-of-the-present will give way to cyberspace and
the internet of the future?” This question is a brain-twister, full of odd conceptual
formulations and strange suggestions. It would be better phrased as “Will electronic
commerce eventually replace the need to buy items from shopping malls?”
(iii) “Will you vote for Freda Johnson in the upcoming election, and are you pleased with
her stance on gun control in our local community?” This is a double-barreled question. If it
were followed by two response categories (e.g., “Yes” and “No”), the responses could
not be segmented in a way that is of value to the researcher and the
campaign team. (People could vote for Freda yet disagree with her position on gun control
in the local area.) This question needs to be broken up into two separate questions: (a)
“Will you vote for Freda Johnson in the next election?” and (b) “Do you agree with her
position on gun control in the local community?”
A good questionnaire increases the probability of collecting high-quality primary data that
can be transformed into reliable and valid information. The layout of the instrument can
enhance the ability to provide valid and reliable data. A good questionnaire is designed to
ensure that all sampling units are asked relevant questions, in the same manner and order,
and responses are uniformly recorded. The instrument is designed to be understandable to
respondents; interesting enough to encourage completion; and economical to administer,
record, and analyze.
7. Unless needed for screening purposes, why shouldn’t demographic questions be asked up
front in a questionnaire?
Gathering demographic data is not the main purpose of the survey, so it should not be
covered in the main part of the questionnaire. Because demographic information is
sometimes considered sensitive by respondents, these questions are best left for the end of
the session—after respondents become comfortable with the idea of answering questions.
1. Assume you are doing exploratory research to find out students’ opinions about the
purchase of a new digital music player. What information would you need to collect? What
types of questions would you use? Suggest six to eight questions you would ask, and
indicate in what sequence you would ask them. Pretest the questions on a sample of
students from your class.
2. Assume you are conducting a study to determine the importance of brand names and
features of mobile phone handsets. What types of questions would you use—open-ended,
closed-ended, or scaled—and why? Suggest six to eight questions you would ask and in
what sequence you would ask them. Pretest the questions on a sample of students from
your class.
3. Discuss the guidelines for developing cover letters. What are some of the advantages of
developing good cover letters? What are some of the costs of a bad cover letter?
A good cover letter offers numerous advantages, but the primary one is that it acts
as a behavioral incentive, encouraging the respondent to complete and return the
survey within the data-collection deadlines agreed upon by the research team and
decision makers.
The downside of a bad cover letter is negative fallout: confusion about what the
project is about, doubts about its legitimacy, and damage to the image of the person
or organization sponsoring the request.
4. Using the questions asked in evaluating any questionnaire design (see Exhibit 8.4),
evaluate the Santa Fe Grill restaurant questionnaire. Write a one-page assessment.
Students’ responses will vary, but all should follow the considerations given in Exhibit 8.4.
These are as follows:
Are the questions in the questionnaire appropriate and complete for addressing the
research objectives?
Will the questions as designed provide the data in a sufficient form to address each
objective?
Does the introduction section include a general description of the study?
Are the instructions clear?
Do the questions and scale measurements follow a logical order?
Does the questionnaire begin with simple questions and then gradually lead to more
difficult or sensitive questions?
Are personal questions located at the end of the survey?
Are questions that are psychological in nature toward the end of the survey or
interview, but before demographics?
Does each section use a single measurement format?
Does the instrument end with a thank-you statement?
It is crucial to pretest a questionnaire, because no amount of input from the research
team and client can make the instrument “final” until it has faced its first real
test: being completed by members of the target population who will eventually fill it
out in the full, more expensive and time-consuming study. The critical issue in
pretesting is whether the respondents who complete it (typically 15–30 in number)
provide answers that are truly representative of the target population. Even when
random sampling methods are used for the pretest, some will doubt whether the “slice”
of responses obtained from this initial group bears any value for the information
problem and decision at hand. A useful rule of thumb: better to pretest the survey
than not to test it at all. It is rare that a survey returns from a pretest without
needing subtle but important modifications before it “goes to market.”