
Item sampling in service quality assessment surveys to improve response rates and reduce respondent burden: The “LibQUAL+® Lite” example

Bruce Thompson
Texas A&M University and Baylor College of Medicine, College Station, Texas, USA

Martha Kyrillidou
Association of Research Libraries, Washington, DC, USA

Colleen Cook
Texas A&M University, College Station, Texas, USA

Abstract
Purpose – Survey researchers sometimes develop large pools of items about which they seek participants' views. As a general proposition, library participants cannot reasonably be expected to respond to 100+ items on a given service quality assessment protocol. This paper describes the use of matrix sampling to reduce that burden on the participant.
Design/methodology/approach – Matrix sampling is a survey method that can be used to collect data on all survey items without requiring every participant to react to every survey question. The features of data from one such survey, the LibQUAL+® Lite protocol, are investigated, and participation rates, completion times, and result comparisons across the two administration protocols – the traditional LibQUAL+® protocol and the LibQUAL+® Lite protocol – are explored at each of four institutions.
Findings – Greater completion rates were realized with the LibQUAL+® Lite protocol.
Originality/value – The data from the Lite protocol might be the most accurate representation of the views of all the library users in a given community.
Keywords: Information services, Service quality assurance, Quality assessment, Surveys
Paper type: Research paper

Performance Measurement and Metrics, Vol. 10 No. 1, 2009, pp. 6-16. © Emerald Group Publishing Limited, 1467-8047. DOI 10.1108/14678040910949657. The current issue and full text archive of this journal is available at www.emeraldinsight.com/1467-8047.htm

Introduction
As Rowena Cullen noted, "focusing more energy on meeting [...] customers' expectations" is critical in the contemporary academic library environment, in part because "the emergence of the virtual university, supported by the virtual library, calls into question many of our basic assumptions about the role of the academic library, and the security of its future" (Cullen, 2001).

In this environment, as Danuta Nitecki has observed, "A measure of library quality based solely on collections [counts] has become obsolete" (Nitecki, 1996). Librarians have come to realize the wisdom of the words of the French philosopher and moralist François de La Rochefoucauld: "Il est plus nécessaire d'étudier les hommes que les livres" ["It is more necessary to study people than books"] (de La Rochefoucauld, 1613-1680). In the words of Bruce Thompson, "We only care about the things we measure" (Thompson, 2006), so we do not seriously care about service quality unless we listen to library users in various systematic ways. Within a service quality orientation, "only customers judge quality; all other judgments are essentially irrelevant" (Zeithaml et al., 1990).

LibQUAL+®
One service quality assessment tool that has been widely used in libraries around the world is LibQUAL+®. LibQUAL+® has three primary components. As noted elsewhere:

First, LibQUAL+® consists of 22 core items measuring perceived service quality with respect to (a) Service Affect, (b) Library as Place, and (c) Information Control. Each item is rated with respect to (a) minimally acceptable service expectations, (b) desired service expectations, and (c) perceived level of actual service quality [...] Second, the LibQUAL+® protocol solicits open-ended comments from users regarding library service quality [...] These comments are crucial, because here the participants elaborate upon perceived strengths and weaknesses, and sometimes offer suggestions for specific actions to improve service. Third, libraries using LibQUAL+® have the option of selecting five additional items from a supplementary pool of 100+ items to augment the 22 core items to focus on issues of local interest (Thompson et al., 2007).

LibQUAL+® data can be evaluated using any combination of three interpretation frameworks:
(1) location of perceptions within the "zones of tolerance" defined by minimally acceptable and desired expectations;
(2) benchmarking against peer institutions; and
(3) comparing changes in a given institution's data longitudinally over time.

In the ten years since its inception in 2000 (Thompson, 2007), LibQUAL+® has been used to collect data from more than 1.25 million library users at more than 1,000 institutions! LibQUAL+® has now been used in 22 different countries: the USA, Canada, Mexico, the Bahamas, Australia, New Zealand, the UK (England, Scotland, Wales), France, Ireland, Belgium, The Netherlands, Switzerland, Denmark, Finland, Norway, Sweden, Egypt, the United Arab Emirates, South Africa, Hong Kong, Singapore, and Japan. Currently, the system supports 15 languages: Afrikaans, American English, British English, Chinese (Traditional), Danish, Dutch, Finnish, French (Canadian), French (European), German, Japanese, Norwegian, Spanish, Swedish, and Welsh. The development and use of LibQUAL+® has been documented in a host of academic outlets (Cook et al., 2001a, 2001c, 2002, 2003; Cook and Heath, 2001; Heath et al., 2002; Thompson et al., 2001, 2005, 2007, 2008).

Purposes of the present article
Survey researchers sometimes develop large pools of items about which they seek participants' views.
For example, in the DigiQUAL® project (Association of Research Libraries), the item pool consists of more than 100 items. As a general proposition, library participants cannot reasonably be expected to respond to 100+ items on a given service quality assessment protocol. However, a survey method called "matrix sampling" can be used to collect data on all survey items without requiring every participant to react to every survey question. Here we investigate the features of data from one such survey – the LibQUAL+® Lite protocol.

LibQUAL+® Lite is a survey methodology in which:
• all users answer a few, selected survey questions (i.e. three core items); but
• the remaining survey questions are answered only by a randomly selected subsample of the users.

Thus, data are collected on all questions, but each user answers fewer questions, thereby shortening the required response time. Table I illustrates this survey strategy. In this example, all users complete three of the items (i.e. the first, second, and fourth items). But only Mary and Sue were randomly selected to complete the third item in the item pool, which was Service Affect item #2. Only Bob and Mary were randomly selected to complete the fifth item in the item pool, which was Service Affect item #3. Only Sue and Ted were randomly selected to complete the sixth item in the item pool, which was Information Control item #2.

On LibQUAL+® Lite, each participant completes only eight of the 22 core survey items. Every participant completes the same single Service Affect, single Information Control, and single Library as Place items, plus two of the remaining eight (i.e. nine minus the one core item completed by everyone) randomly selected Service Affect items, two of the remaining seven (i.e. eight minus the one core item completed by everyone) randomly selected Information Control items, and one of the remaining four (i.e. five minus the one core item completed by everyone) randomly selected Library as Place items.
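As a sketch of these assignment rules, the short function below builds one participant's eight-item Lite form. The item labels, the function name, and the assumption that the fixed core item is the first item of each dimension are illustrative only, not the actual LibQUAL+® implementation:

```python
import random

def assign_lite_items(seed=None):
    """Build one participant's 8-item Lite form: three fixed core
    items plus randomly sampled items from each dimension."""
    rng = random.Random(seed)
    # Hypothetical item labels; dimension sizes match the article:
    # 9 Service Affect, 8 Information Control, 5 Library as Place.
    affect = [f"SA-{i}" for i in range(1, 10)]
    control = [f"IC-{i}" for i in range(1, 9)]
    place = [f"LP-{i}" for i in range(1, 6)]
    # Assume the first item of each dimension is the core item
    # that every participant answers.
    fixed = [affect[0], control[0], place[0]]
    # Plus 2 of the remaining 8 Service Affect items, 2 of the
    # remaining 7 Information Control items, and 1 of the
    # remaining 4 Library as Place items, drawn at random.
    sampled = (rng.sample(affect[1:], 2)
               + rng.sample(control[1:], 2)
               + rng.sample(place[1:], 1))
    return fixed + sampled

form = assign_lite_items(seed=42)
print(len(form))  # 8 items per participant
```

Across many participants, every one of the 22 core items accrues responses, even though no single participant rates more than eight of them.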
Table I. Matrix sampling survey strategy

Item                     Bob   Mary   Bill   Sue   Ted
Service Affect #1*        X     X      X     X     X
Information Control #1*   X     X      X     X     X
Service Affect #2               X            X
Library as Place #1*      X     X      X     X     X
Service Affect #3         X     X
Information Control #2                       X     X
Library as Place #2      (completed by two randomly selected users)

Note: Items marked with an asterisk are completed by all participants.

Here we explore the features of the LibQUAL+® Lite protocol implemented at four university libraries in the USA. Specifically, we were interested in exploring participation rates, completion times, and result comparisons across the two protocols at each of the four institutions. An important meta-analysis of the literature conducted by Colleen Cook, Fred Heath, and Russell L. Thompson suggested that response rates should be improved with the use of the shorter protocol (Cook et al., 2000).

In our study, at one institution 70 percent of all LibQUAL+® survey invitees were randomly assigned the LibQUAL+® Lite protocol, while the remaining 30 percent of survey invitees were randomly assigned the long form of the protocol (i.e. all 22 core items). At the remaining three institutions, 50 percent of all survey invitees were randomly assigned the LibQUAL+® Lite protocol, while the remaining 50 percent were randomly assigned the long form.

Results

Survey completion rates
Table II presents the number of participants across the four institutions (assigned ID numbers 433, 3, 107, and 5 to assure their anonymity) and the two protocol forms (i.e. short and long).
As indicated in Table II, the actual participation rates for persons randomly assigned the LibQUAL+® Lite protocol (i.e. 73.7 percent, 59.5 percent, 55.0 percent, and 55.3 percent, respectively) were higher than the baseline percentages of persons at each institution asked to complete the short form (i.e. 70 percent, 50 percent, 50 percent, and 50 percent, respectively). Thus, these results clearly indicate that participants are more likely to complete the survey when the matrix sampling strategy is used to collect data on all the items in the item pool.

Table II. Ratios of completers across the two administration formats and four institutions

                        Institution
Group/statistic        433       3     107       5
Short                1,868     627     451     382
Long                   688     426     369     309
Total                2,536   1,053     820     691
Actual (percent)      73.7    59.5    55.0    55.3
Random (percent)      70.0    50.0    50.0    50.0
Difference (percent)   3.7     9.5     5.0     5.3

Note: At institution #433, 70 percent of all participants were randomly assigned the short form, while at the remaining three institutions 50 percent of all participants were randomly assigned the short form.

Table III presents the median survey completion times in seconds across the two protocols. These results indicate that completing the protocol with only eight versus all 22 core items took a little more than half as long as completing the full item set.

Table III. Median completion times in seconds across administration formats

              Institution
Format        433      3    107      5   Total
Long form   456.5  501.0  458.0  470.0   470.5
"Lite" form 276.0  300.0  290.0  291.5   285.0

Of course, participants in both groups completed other items (e.g. demographic self-descriptions), and were allowed to provide comments. Historically, about 40 percent of all LibQUAL+® participants write comments, and these qualitative data are at least as important as the quantitative data gathered on the protocol, because here users often present specific suggestions for library improvement! And persons writing longer comments would have taken longer to complete the survey regardless of which protocol they were randomly assigned.
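As an arithmetic check, the actual participation percentages and differences reported in Table II follow directly from the completer counts; this is only a reconstruction of the published figures, not LibQUAL+® code:

```python
# Completer counts from Table II (institutions 433, 3, 107, 5).
short = [1868, 627, 451, 382]        # completed the Lite (short) form
long_ = [688, 426, 369, 309]         # completed the long form
baseline = [70.0, 50.0, 50.0, 50.0]  # percent randomly assigned the short form

for s, l, b in zip(short, long_, baseline):
    # Short-form share of all completers at this institution.
    actual = round(100 * s / (s + l), 1)
    diff = round(actual - b, 1)
    print(f"actual {actual}% vs assigned {b}% -> difference {diff}%")
```

If short-form completers merely matched their assignment share, each difference would be near zero; the consistently positive gaps (3.7, 9.5, 5.0, and 5.3 points) are the completion-rate effect the article reports.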
Table IV presents the percentages of participants who both completed the survey once started and met the protocol inclusion criteria, across institutions and administration formats. In LibQUAL+®, participants are excluded from the dataset if they meet certain criteria. For example, on the longer protocol, if a participant answers more than 11 core items "not applicable", the participant's data are dropped, on the view that such a user for whatever reason does not have a definitive view of library service quality. Also, no participant can logically rate a service item higher on what is "minimally acceptable" than on "desired" service quality for the same item, and any person with an excessive number of such "inversions" is also omitted, on the view that the person is responding randomly rather than seriously.

Table IV. Percentages of participants who both completed the survey once started and met inclusion criteria across institutions and administration formats

              Institution
Format        433      3    107      5   Total
Long form   56.18  35.26  61.40  51.07   49.18
"Lite" form 66.08  51.44  73.57  60.54   62.91

The results shown in Table IV make clear that higher percentages of persons who start the LibQUAL+® Lite protocol actually complete the survey once they begin. Of course, some participants begin the survey, determine what the protocol is about, and return later to actually complete the survey. So, not all persons who fail to complete the survey in a given administration are actually non-responders.

LibQUAL+® Lite versus LibQUAL+® score comparisons
The present study was a randomized experiment. Because the participants were randomly assigned either the LibQUAL+® Lite protocol or the conventional LibQUAL+® protocol, the scores on the measures at a given institution should be similar, unless the protocols themselves caused score differences.

All LibQUAL+® scores (i.e. total, the three scales, and items) are scaled from 1 to 9, with 9 being the highest rating. Figure 1 presents 95 percent confidence intervals about means for total perception scores across the four institutions and two protocols.

[Figure 1. Ninety-five percent confidence intervals about means for total perception scores across four institutions and two protocols]
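The screening rules described for the long form can be sketched as follows. The record format, the function name, and the inversion cutoff are illustrative assumptions: the article states only that an "excessive" number of inversions leads to exclusion, without giving the exact threshold.

```python
NA = None  # a "not applicable" response

def is_excluded(ratings, max_na=11, max_inversions=2):
    """Screen one long-form record. `ratings` is a list of 22
    (minimum, desired, perceived) tuples, with NA for items the
    participant marked "not applicable". `max_inversions` is an
    assumed cutoff, not the documented LibQUAL+ value."""
    na_count = sum(1 for r in ratings if r is NA)
    if na_count > max_na:  # more than 11 "not applicable" answers
        return True
    # An inversion: minimally acceptable rated above desired service.
    inversions = sum(1 for r in ratings if r is not NA and r[0] > r[1])
    return inversions > max_inversions

# A record with 12 of 22 items marked "not applicable" is dropped:
record = [(3, 7, 5)] * 10 + [NA] * 12
print(is_excluded(record))  # True
```

The same kind of rule-based screening underlies the Table IV denominators: only records passing both checks count as meeting the inclusion criteria.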