ATD Blog

Evidence-Based Survey Design: Do You Agree or Somewhat Agree?

Wednesday, July 31, 2019

You may have used the Likert scale in your survey questionnaires to measure various topics such as learner satisfaction, employee engagement, or organizational culture. When American social psychologist Rensis Likert developed a five-point response scale in the 1930s, his original wording was strongly approve, approve, undecided, disapprove, and strongly disapprove (Likert, 1932). Since then, the wording in the Likert scale has been changed to strongly agree, agree, neutral (or neither agree nor disagree), disagree, and strongly disagree.

How do you analyze and report the data obtained from the Likert scale? Some of you may report percentages (for example, strongly agree = 75%, agree = 13%, neutral = 5%, disagree = 3%, strongly disagree = 4%); some of you may report average scores (e.g., M=4.2). Which method should you use? Your decision can depend on whether you treat the Likert scale as an ordinal or interval scale.
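To make the two reporting styles concrete, here is a minimal Python sketch; the responses and the 1-to-5 coding (1 = strongly disagree, 5 = strongly agree) are hypothetical:

```python
from collections import Counter

# Hypothetical Likert responses, coded 1 (strongly disagree) to 5 (strongly agree)
responses = [5, 5, 4, 5, 3, 5, 5, 4, 2, 5]

# Reporting style 1: percentage breakdown per response option
# (treats the scale as ordinal/categorical)
counts = Counter(responses)
percentages = {code: 100 * counts[code] / len(responses) for code in range(1, 6)}

# Reporting style 2: average score (treats the scale as interval)
mean_score = sum(responses) / len(responses)

print(percentages)
print(f"M = {mean_score:.1f}")
```

Whether the second style (the mean) is defensible is exactly the ordinal-versus-interval question discussed next.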

To make the most appropriate choice for your data presentation, you need to know what ordinal and interval scales are about.

There are four levels of measurement scales, each of which produces different types of data (Stevens, 1946).

  • Nominal scales produce name-like data without any rank-ordering, as shown below:

What is your job title?
a. Instructional designer b. Trainer c. E-Learning developer d. Other

  • Ordinal scales produce rank-ordered data:

How often have you used the checklist since it was made available?
a. Never b. Seldom c. Sometimes d. Often e. Always

  • Interval scales also produce rank-ordered data, but the distance between any two consecutive points is the same, and any zero value on the scale is an arbitrary point rather than a true absence of the attribute:

How satisfied are you with the program?
Extremely dissatisfied 0 1 2 3 4 5 6 7 8 9 10 Extremely satisfied

(The scale could also run from negative to positive values, which illustrates that its zero point is arbitrary.)

  • Ratio scales produce continuous data with the same interval and a true zero value (e.g., an income value of $0 means no income, and a test score of zero means no correct answers):

What is your current income?

You can calculate average scores with interval and ratio data; however, it makes little sense to calculate average scores with ordinal data. Consider the frequency rating scale in the ordinal example above: one person might perceive the five levels as shown in Figure 1, while others perceive them differently. It would therefore not be fair (or correct) to assign 1, 2, 3, 4, and 5 to the five levels and calculate an average score across multiple respondents.

Figure 1. A possible perception of the five levels in the frequency scale.
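This point can be made concrete with a quick sketch: the same ordinal frequency responses yield different "averages" depending on the numbers assigned to the levels. Both codings below are hypothetical, and neither is more correct than the other, which is the problem:

```python
# Hypothetical frequency responses on the ordinal scale from the example above
responses = ["never", "seldom", "sometimes", "often", "always", "often", "sometimes"]

# Coding 1: the conventional equally spaced assignment
equal_spacing = {"never": 1, "seldom": 2, "sometimes": 3, "often": 4, "always": 5}

# Coding 2: one person's perception, where "seldom" feels much closer to "never"
skewed_spacing = {"never": 1, "seldom": 1.5, "sometimes": 3, "often": 4.5, "always": 5}

def mean(coding):
    """Average the responses under a given numeric coding of the levels."""
    return sum(coding[r] for r in responses) / len(responses)

# The "average" depends entirely on the arbitrary choice of coding
print(mean(equal_spacing))
print(mean(skewed_spacing))
```

Because the average shifts with the coding, an average over ordinal data reflects the analyst's spacing assumption as much as the respondents' answers.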

Similarly, the Likert scale is likely an ordinal scale, because it is difficult to say that the distance between strongly disagree and disagree is the same as the distance between disagree and neutral, and so on. Nevertheless, you may have coded the five Likert options as 1, 2, 3, 4, and 5, even though disagree may not fall exactly halfway between strongly disagree and neutral, and agree may not fall exactly halfway between neutral and strongly agree.

In fact, research (Worcester and Burns, 1975) shows that people perceive disagree to be close to strongly disagree and agree to be close to strongly agree, which makes the Likert scale an ordinal scale. However, when a modifier such as "slightly" is added to disagree and agree, people perceive disagree slightly to be close to the halfway point between strongly disagree and neutral, and agree slightly to be close to the halfway point between neutral and strongly agree, as illustrated in Figure 2.

Figure 2. An illustration of changed perceptions after adding a modifier Slightly to Disagree and Agree on the Likert scale (based on Worcester and Burns, 1975).

What do we learn from this research, and how can we apply this evidence to our survey design practice?

Based on this research evidence, it would be reasonable to add a modifier such as slightly or somewhat to disagree and agree, especially if you intend to use the Likert scale as an interval scale. Although the added wording may not make the Likert scale a perfect interval scale, it brings the scale close to one, allowing you to report and compare average scores of your survey data (for example, Branch 1 with the training program improved twice as much as Branch 2 without it).

You will also find that some online survey programs, such as Qualtrics, provide such wording (somewhat disagree and somewhat agree) as the default when you select a Likert scale (Figure 3), while other programs default to the conventional wording, disagree and agree. If your survey program populates disagree and agree by default, you may want to add slightly or somewhat.

Figure 3. A screenshot of Qualtrics showing a new survey item with an automatic choice of a five-point Likert scale.

This Insights article is one in a series of evidence-based survey design articles that I present to help practitioners make evidence-based decisions when designing surveys. For more information about this topic, along with a review of whether to include or exclude a midpoint in the Likert scale, please see this article published by my research team at Boise State University's Organizational Performance and Workplace Learning department.

About the Author

Yonnie Chyung, EdD, is a professor and associate chair of the Organizational Performance and Workplace Learning department at Boise State University. She teaches graduate courses on program evaluation, quantitative research, and survey design. She is the author of 10-Step Evaluation for Training and Performance Improvement (Sage 2019) and Foundations of Instructional and Performance Technology (HRD Press 2008). Yonnie provides consulting to organizations to perform statistical analysis on their organizational data and conduct program evaluations, often involving students in her research and consulting projects. Recently, she has been developing evidence-based survey design principles for training and performance improvement practitioners.