
1 Introduction

There is a growing desire by commercial enterprises and customers to conduct transactions digitally, where possible [1]. For enterprises, this requirement is usually driven by cost reduction objectives. For customers, the convenience of online service support - available at all times and via multiple devices - is important in an increasingly complex and time-poor world, provided it does not cause them extra effort [2].

Effective online service support enables customers to find answers to service problems, and where possible to diagnose and fix them themselves, meaning that – if it works well - they may never need to come into direct contact with helpdesks. However, there is the risk of a trade-off between 100 % self-service and good customer experience, if for example online support information is hard for the customer to find, or where their expectations are not met. To offer a satisfactory self-support experience to customers, the enterprise needs to understand:

  • How customers’ need for support is expressed in their own language.

  • The contexts of where, when and how customers seek support.

  • What customers consider reliable sources of online information.

  • When customers abandon online help and seek assistance from a helpdesk.

  • How navigation and presentation of information can be optimized for self-support.

The objective of this paper is to present findings of research conducted to (a) identify the highest-priority customer support issues, and (b) identify the Information Seeking and Retrieval (IS and IR) strategies [3] that customers adopt to resolve these issues, within the context of the five “areas of understanding” listed above. This leads to recommendations for improving the search process, to help customers help themselves and consequently reduce customer support costs.

2 Business Rationale and Context for Project

British Telecommunications Plc has around 10 million residential customers and offers hundreds of products, individually and combined as packages. Consequently it has a significant customer service division to support these customers and their diverse needs, handling 10 billion minutes of inbound voice calls per year [2]. The help section of its online web presence is intended to be the first port of call for customers seeking help with service problems, so that customers can diagnose and fix their own service issues without needing to contact these helpdesks.

The customer support site we researched receives around half a million hits per week. Around 6000 searches are made per day via the site’s dedicated search engine. Some 2500 customer support articles are available, any of which can be returned based on the search terms entered. Articles are also accessible via the navigational structure of the site, through a series of tabs and drop-down menus, and from search engines operating outside the site (Google.com, for example). Given these volumes, prioritization is needed to determine where the most significant interventions can be made for the lowest cost. The first step was to identify the most significant topics on which customers were searching for help and accessing support articles. We did this by analyzing the following data inputs: Most-Searched Keywords, Article Ratings, Verbatim Feedback, and Click-Throughs.

3 Quantitative Analysis of IS and User Feedback Behaviors

Most Searched Terms. Using a prototype dashboard created in R Shiny [4], daily data feeds from web analytics were processed to present graphically, via the GUI, the searches made by customers over a 14-week period between October 2014 and January 2015. (The search engine only functioned within the boundary of the support section of the enterprise’s website). This was used to define the key issues customers were seeking support for in their own language.

The top ten single keywords and their associated search terms over this period were found to be (in descending order): Password, Change, Email, Number, Phone, Hub, Line, Broadband, Account, Mail.

This dashboard was also used to detect when most searches were conducted. Results show that most searches take place on Mondays, followed by Wednesdays, with the fewest searches conducted on Sundays. This suggests that customers choose days when most of them are at work to sort out service issues. This is explored further in the user tests.
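As an illustration of the kind of aggregation behind these two summaries, the following minimal sketch in R assumes a hypothetical search-log export with date and query columns; the file name and column names are illustrative rather than the actual analytics feed.

# Hypothetical search-log export with columns: date, query (illustrative only)
searches <- read.csv("search_log.csv", stringsAsFactors = FALSE)

# Top single keywords: split queries into lower-case words and count them
words <- unlist(strsplit(tolower(searches$query), "\\s+"))
head(sort(table(words), decreasing = TRUE), 10)

# Volume of searches by day of week, to see when customers seek help
sort(table(weekdays(as.Date(searches$date))), decreasing = TRUE)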

Ratings Against Returned Articles. Currently, customers are asked to rate each support article, via a row of stars at its end (Fig. 1):

After clicking on the stars (Fig. 1), a free-text box opens, offering customers the opportunity to provide verbatim feedback (see below).

Fig. 1. Ratings and feedback box presented to customers at the foot of support articles

All rated articles overwhelmingly received poor ratings: 71 % of all ratings given were “1”, compared with 18 % given “5”. Manual analysis of the verbatim comments showed that comments are very often negative even when an article has a high rating, suggesting customers pay less attention to giving representative ratings when they want to express an opinion via free text.

We therefore inferred that the most-rated articles represented the most significant customer topics. The articles with the most ratings were ranked over the same 14-week period to produce an average ranking (articles in very similar topic areas were grouped together). In summary, the most poorly rated article topic areas were, in order (a minimal sketch of this ranking step is given below):

  1. Fixing phone line faults.

  2. Parental controls.

  3. How to deal with email security.

  4. Broadband problems and fixes.

This list broadly matches the top ten search terms, with the exception of “Parental controls” (Footnote 1).
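A minimal sketch of the ranking step described above, assuming a hypothetical ratings export with week, article and rating columns (names are illustrative):

# Hypothetical ratings export with columns: week, article, rating (1-5)
ratings <- read.csv("article_ratings.csv", stringsAsFactors = FALSE)

# Share of each star rating, e.g. the proportion of "1" versus "5" ratings
round(prop.table(table(ratings$rating)) * 100, 1)

# Rank articles by how often they were rated in each week, then average the
# weekly ranks to obtain an overall ranking of the most-rated articles
counts <- aggregate(rating ~ week + article, data = ratings, FUN = length)
counts$rank <- ave(-counts$rating, counts$week, FUN = rank)
avg_rank <- aggregate(rank ~ article, data = counts, FUN = mean)
head(avg_rank[order(avg_rank$rank), ], 10)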

Verbatim Feedback Against Returned Articles. In conjunction with the above analysis, all verbatim feedback given against articles rated “1” was input into “Debatescape” [5] (a text and sentiment analytics tool) to generate simple word clouds. This gave a visual representation of the most frequently used words in the free-text comments box. According to Debatescape, during the 14-week period, the ten most used words entered in the comments box were (in descending order): Email, password, answer, account, phone, problem, service, work, broadband and mail.
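Debatescape itself is a proprietary tool; purely as an illustration of the word-frequency step, a comparable count over the comments left against “1”-rated articles could be sketched as follows (file and column names are assumptions):

# Hypothetical export of free-text comments with columns: rating, comment
feedback <- read.csv("verbatim_feedback.csv", stringsAsFactors = FALSE)
comments <- feedback$comment[feedback$rating == 1]

# Normalise, tokenise and drop common stop words before counting
text  <- gsub("[^a-z ]", " ", tolower(comments))
words <- unlist(strsplit(text, "\\s+"))
stop  <- c("", "the", "a", "to", "i", "my", "and", "of", "is", "it", "not")
head(sort(table(words[!words %in% stop]), decreasing = TRUE), 10)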

Again, these results reflect the top ten search terms (broadly relating to email, password and phone), with some discrepancies which relate to general experiences (e.g. service, answer, work).

Click-Throughs. Click-through data was the final input considered to understand key customer topics. A click-through is recorded when a user accesses the “Contact Us” page after reading an article, implying that the user was not satisfied with the support article returned and now wishes to contact a helpdesk. (A sketch of how such click-throughs can be derived from page-view logs is given below.)

Using data feeds from web analytics tools, the four articles which generated the highest click-throughs over the 14-week period were:

  1. “Help with usernames and passwords”.

  2. “Compromised email accounts”.

  3. “How to change or cancel an account” (Footnote 2).

  4. “I’ve got no broadband connection”.

Again, there are similarities between the search terms, ratings and verbatim feedback in terms of topic areas (passwords, email, broadband problems).
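As a minimal sketch of deriving such click-throughs from page-view logs (the production web analytics implementation differs; the session identifier, URL patterns and column names below are assumptions):

# Hypothetical page-view log with columns: session, timestamp, page
views <- read.csv("page_views.csv", stringsAsFactors = FALSE)
views <- views[order(views$session, views$timestamp), ]

# A click-through is counted when the next page viewed in the same session,
# immediately after a support article, is the "Contact Us" page
same_session <- head(views$session, -1) == tail(views$session, -1)
click_through <- same_session &
  grepl("^/help/article/", head(views$page, -1)) &
  tail(views$page, -1) == "/help/contactus"

# Articles most often followed directly by a "Contact Us" visit
head(sort(table(head(views$page, -1)[click_through]), decreasing = TRUE), 4)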

In summary the key customer topics thus distilled from these four analyses are:

  • Needing help with email and password problems

  • Needing help with broadband connection problems

  • Needing help with phone service

4 User Experience Testing

Qualitative User Experience Testing was carried out to explore the participants’ Information Seeking and Retrieval strategies [3] within scenarios based on the three topic areas identified, reflecting the original objective to understand:

  • How customers express their need for support in their own language.

  • The contexts of where, when and how users seek support.

  • What customers consider reliable sources of information.

  • When online help is abandoned and helpdesk assistance is needed.

  • How the navigation and presentation of information can be optimized to minimize calls to helpdesks.

Participants. Participants were selected as being users of broadband, email and telephony services but not BT customers, so they had no prior knowledge of the navigation of the site. There were five male and five female users, ranging in age from 31 to 55. All were computer users to some degree. None had IT- or HCI-based occupations (occupations included accountancy, university lecturer, full-time child carer, musician and HGV driver). All user tests were conducted using a laptop computer (while we acknowledge that users also access support via phones and tablets, a single device was used for consistency, so that we evaluated the entire journey rather than the differences presented by different access devices).

User Test Design. We followed Marchionini and White’s framework [6] for IS as a reference, giving consistency to the user tests, with slight adaptations:

  • The Recognise and Accept phases were given to the participants when the scenario was described.

  • The Formulate, Express, Examine, Re-formulate and Use phases were participant-driven with prompts from the researcher where necessary.

The tests were conducted sequentially, with each participant following their own journey against the following scenarios:

  • Scenario 1: “Imagine you are having problems accessing your email account, although your Internet connection seems to be OK. What would you do to solve this problem?”

  • Scenario 2: “Imagine you are experiencing poor service from your broadband connection. What would you do to solve this problem?”

  • Scenario 3: “Imagine you’ve picked up the phone and there is noise on the line. What would you do to solve this problem?”

Participants were asked to put themselves in the position of a BT customer. BT’s homepage was open on the laptop, although it was not a requirement for the participant to start their journey there, or to use it at all (until prompted). All user tests were conducted in a home environment and video recorded so that notes could be made after. Results were collated according to the five key points of the original objective.

Results

How customers express their need for support in their own language. When prompted, most participants entered what they considered general search terms because “if you’re more general with your search you can find what you want”. (In reality, the terms used are closer to closed-task searches [7].) Re-formulation of the query was rare, with participants more likely to follow other IR paths than to re-enter search terms.

There is variety in the “formulate” and “express” phases of IS within the tests. Requests were expressed predominantly via:

  (a) Short experiential statements, e.g.: “Email account blocked”, “email not working”, “broadband keeps cutting out”, “no dial tone”, “phone fault”.

  (b) Full experiential statements, e.g.: “My username is no longer working and I can’t access account”, “There is crackling on the phone line”.

  (c) Implicit requests for help: “Unblocking my email account”.

  (d) Explicit requests for help: “I have no connection, what can I do?”, “I need help with my connection”, “I have a terrible crackle on my line, please help”.

  (e) Searching directly for diagnostic tools: “Line fault check”.

For the phone line self-diagnostic in Scenario 3 there is a gap between customer language and how we ask customers to define their own problem. Of the available options in self-diagnosis, “some make sense, some not so… ‘ring trip’, I don’t know what that means… you’ve also got ‘unable to trip ringing’, I’m not really sure what that means…”. For the same scenario, one participant attempted to input a full telephone number into the line checker tool, including a space between the area code and the number. The box cannot contain this number of characters, so an error message is returned, but with no indication of why the input was wrong or of the correct way to enter the number.

When, Where and How Users Seek Support. Self-support is already embedded behavior for half the participants, who would conduct their own diagnostic tests before seeking support online. The search for support in this context involves drawing on previous experiences of dealing with the issues in the scenarios (e.g. checking PC, router and security settings, pop-ups and ad blockers, re-setting passwords and using line speed checkers), one commenting: “I’d always try to fix a problem myself rather than get someone else to do it”. For in-home broadband connection issues, workarounds using 3G- and 4G-compatible devices to access support are widely used. While this is encouraged, there is a risk of customers following poorly formed procedures and compounding the problem, although this is offset by the helpdesk calls avoided by those inclined to self-serve.

After accessing support articles, all apart from two would go through the diagnostic and fixing steps (wizards, desktop help, check my line etc.) suggested before contacting a helpdesk - or they would ask their partner or a friend to help them.

Two participants went straight to Google.com for help as a habit - “that’s what I tend to search everything with…you stick with one thing and you trust it”. These same participants would not start their journey on the BT homepage, “assuming it [Google] would take me there anyway”. Another participant commented that they would use Google if the BT website did not yield satisfactory responses, without re-formulating the search terms used.

Participants expressed they wanted to find solutions to problems as soon as possible, especially during a working day (for those who work at home this is especially important). This was reflected in the search trends outlined earlier, regarding preferred days of week for searching. Additionally, instant useful responses to searching are expected - currently over 1000 articles are returned on average after a search, which can be overwhelming. Two participants commented that they expect the answer to their question immediately, and not on the next page of 20 or so answers.

For those who cannot resolve issues during work-time the choice of support depends on “what I’ve got available to me at the time”. This participant is a driver and does not have ready access to Internet-connected devices during the day. His workaround is that he has support phone numbers stored in his mobile phone.

Time of day may also have an impact on the motivation of the customer to self-support: “I’m lazy [at the end of the day] and I’m not going to do the whole [self-diagnosis] thing….this is all very irritating, why can’t I just ring somebody…I’m bored now, I’m really tired.”

Reliability of Information - Forums. There was variability in the reliance and trust placed on forums. The two participants who went straight to Google.com preferred to use community forums to check if others had solved similar problems. This is based on previous positive experiences – one participant commented that forums had been used successfully for advice on fixing his car. Forums were considered more reliable “because it’s based on other people’s direct experience, and if they’ve had the same problem they might have come up with solutions or explanations which are slightly different to what the service provider suggested”. Another comments, “techy people put stuff on there because they know what they’re doing… I don’t”.

However, the immediacy of the need for information forms a barrier to using forums – it is unknown how long it may take for an answer to be made to a post. Dates and times of postings are important details for trusting forum posts.

Six participants however did not trust forums, preferring to rely on what their service provider suggests, fearing they would “blow up their computer” if they did anything else: “you’re the guys with the know-how…why wouldn’t I work through your advice before I tried [forums]?”. Lack of trust in forum contributors is also pervasive: “I don’t believe other people….Forums are usually full of people just talking rubbish.”

Reliability of Information - Ratings. Although customers are invited to give ratings to articles, there is no indication of why this is useful to them. It informs the “Answers others found helpful” list (see Fig. 2), but this is not obvious to participants. Rating stars are shown next to forum posts but are often not populated. As a result there is a general reluctance to actively engage in the rating process. Apart from the lack of noticeable benefit to the customer, this is also due to: lack of available time, not expecting to see ratings for support content, being unaccustomed to giving ratings to support content, and general apathy.

Fig. 2. Screenshot of an example article return screen

Participants commented that it appeared no-one else had rated articles either – this is especially true for the forums where stars appeared mostly blank - “For some reason I didn’t think it was asking me…it’s like when you’re reviewing a product and the stars are blank it’s because no-one’s reviewed it, so it’s almost as if I’d thought other people hadn’t reviewed it…”.

When ratings are given, they are highly polarized (this is supported by the quantitative analysis). This was observed by participants and underlines potential unreliability. One user commented he would only rate useful articles (positively) while another said he would only give poor ratings saying “if you get what you want, you very rarely report that”. Assumptions were expressed that all online ratings are only ever very negative or very positive: “usually you only get the extreme.”

The purpose of providing ratings was misinterpreted by two participants who assumed the rating system was for overall customer service, and again would only give very high or very low ratings.

Another participant comments: “I wouldn’t ever look to see a rating to help me decide that [reliability] so I don’t think it’s relevant to give my opinion to help other people.” His choice of what to trust is based on the relevance of the article title. For those who favored the contribution of the forums there is also a perceived lack of trust of ratings on the provider website.

Abandoning online help and using helpdesks. Three out of ten participants would go straight to a helpdesk to resolve the issues presented by the scenarios, without considering other alternative solutions.

The other seven referred to phoning the helpdesk as a “last resort”, preferring if needed a “web-chat” with a helpdesk agent. These users assume that, via this route, they will make contact with an agent immediately rather than having to go through IVR queues, hold the call or submit a form (for which the response timescale is unknown), commenting, “in this day and age you want to solve your problems right away, everyone at least expects to”.

Previous positive experience of other support sites’ web-chat facilities helps with acceptance: “I had a problem with Amazon and there was a little person [pop-up box] saying can I help you and he sorted it out straight away…which was good rather than having to talk to them on the phone.”

Additionally, a significant advantage of web-chat is that language barriers – particularly when dealing with off-shore call centres – are avoided. However, the participant who would always phone first finds web-chat a barrier because “it takes me a little bit longer, as I’m not a typist”, but is not averse to the idea of using it.

Navigation and Presentation of Support Information. The navigation of the help website is clear. All participants used the navigational tabs (predominantly “Help”) and the related drop-down menus - indeed, by the end of the user test one participant named this route as her “trusted favorite”. Considering the small proportion of customers who use search relative to the total number of articles accessed, this reflects the behaviors of the overall customer base. (Surprisingly, all participants needed to be prompted to enter words into the search bar, preferring to start their search journey with Google or via the current navigational design of the website.)

The most significant user interaction event occurred when users accessed the “Contact Us” page, predominantly near the end of the journey for Scenario 1. This represented a pivotal moment, when the online support journey was in danger of being abandoned. Specifically, participants disregarded an information box which said “Password stopped working?”. Eight of the ten participants did not notice this box, despite its bold red lettering. By this stage of the journey, users appeared fixated on seeking help and therefore did not consider anything outside the “Contact Us” area, thereby missing this information.

Other UI issues raised included: the absence of auto-correction for spelling in the search bar, the position of the ratings box at the end of articles (so not readily noticeable), and some misinterpretation of tab labeling (e.g. one participant, while looking for “Contact Us”, noticed a tab labeled “Find a Number” at the bottom of the screen; it is not clear where this leads, as she comments, “Does that mean a [helpdesk] number?”).

5 Recommendations

Supporting Customers in Their Own Language - Semantic Search. Given the variety in how search terms are expressed (see “Results”), it is suggested that a semantic search capability [8] be implemented to address not only this variety but also user intent and context. For example, search terms that were entered as requests for help should return information which reflects this.

Query expansion and spell-checking of search terms may also help filter searches, improving accuracy and reducing the number of returned articles.
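A minimal sketch of this pre-processing, assuming a small hand-built synonym map and a vocabulary drawn from article titles (both illustrative; a production semantic search layer would be considerably richer):

# Illustrative vocabulary and synonym map; a real system would derive these
# from article titles, product taxonomies and query logs
vocabulary <- c("password", "email", "broadband", "hub", "line", "account")
synonyms   <- list(password = c("passcode", "login"), email = c("mail", "e-mail"))

preprocess_query <- function(query) {
  words <- unlist(strsplit(tolower(query), "\\s+"))
  # Spell-check: snap each word to the closest vocabulary term if it is
  # within an edit distance of 2, otherwise keep it as typed
  corrected <- sapply(words, function(w) {
    d <- adist(w, vocabulary)
    if (min(d) <= 2) vocabulary[which.min(d)] else w
  })
  # Query expansion: add known synonyms so more relevant articles can match
  unique(unlist(lapply(corrected, function(w) {
    if (w %in% names(synonyms)) c(w, synonyms[[w]]) else w
  })))
}

preprocess_query("pasword not working")  # "password" "passcode" "login" "not" "working"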

Supporting Customers in Their Own Language - Editorial Changes. Some simple changes to the content can be effective; three are outlined here. First, from the user tests and from the verbatim analysis, it was clear that some users were using both the search bar and the comments box to enter direct requests for help. Two options are possible to resolve this: (a) provide a fuller explanation of the purpose of the boxes and what to expect as a result of entering information, and (b) present an alternative box where users can enter requests for help at an earlier stage in the journey, before the observed arrival at the “Contact Us” page. Second, for the phone line fault check example (see page 7), a simple but effective change would be to express the options available for diagnosing faults in customers’ language (rather than in the words by which the enterprise categorizes faults, as is currently the case). Finally, when asking customers to enter a phone number into the phone line fault checker, make allowances for how they normally read and write phone numbers – e.g. allow enough space for these variations within the tool and, if errors are still experienced, explain why the input did not work [9].
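For the final point, a minimal sketch of more tolerant phone-number handling for the line checker (the validation rule and message wording are illustrative, not the current implementation):

# Accept the number however the customer writes it (spaces, dashes, brackets),
# then validate the digits and explain any failure in plain language
check_line_number <- function(raw) {
  digits <- gsub("[^0-9]", "", raw)  # strip spaces, dashes and brackets
  if (nchar(digits) == 11 && substr(digits, 1, 1) == "0") {
    sprintf("Checking the line for %s ...", digits)
  } else {
    "That doesn't look like a full UK phone number. Please enter the area code and number, e.g. 01234 567890."
  }
}

check_line_number("01234 567890")  # accepted despite the space
check_line_number("1234 567890")   # returns a plain-language explanation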

Reliability of Information - Ratings. Many returned articles give instructions which may take customers away from the website or indeed the computer (if checking cables is involved), meaning they are unlikely to return to the online article at a later stage to rate it as useful or not (most will find what they want and leave, or will forget or not notice the ratings box). It is therefore recommended to ask customers to rate the usefulness of the entire experience instead (some assume this is what the ratings box is for anyway). This is currently done via other survey methods for overall customer experience and could be extended to include the online experience.

Ratings given are either very good or very poor, with little in between. Users assume this, and therefore may disregard them as reflecting only “the extreme”. In turn, for analysis, they are not reliable indicators of how well articles meet user needs. Depending on the article (i.e. excluding those with lists of actions, as above), the recommendation is to remove the request for numerical (star) ratings and present only a text feedback box which is (a) presented more prominently, (b) worded in a way which outlines the purpose and benefit of feedback to the customer, and (c) analysed with text analytics to distill the keywords and expressions raised. Text analytics and click-through data can then inform the content designer of which articles work for customers using qualitative data, and in turn support content can be confidently presented back to customers as “Answers others found helpful”. Conversely, articles which are not useful can be identified and re-worked as needed.
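As an illustrative sketch of how text feedback and click-through data could be combined to flag articles for re-working (the thresholds, column names and negative-term list are assumptions):

# Hypothetical per-article inputs: free-text feedback and click-through counts
feedback <- read.csv("article_feedback.csv", stringsAsFactors = FALSE)      # article, comment
clicks   <- read.csv("article_clickthroughs.csv", stringsAsFactors = FALSE) # article, clickthroughs, views

# Rough negative signal: share of comments containing obviously negative terms
feedback$negative <- grepl("not work|useless|no answer|didn't help", tolower(feedback$comment))
neg_share <- aggregate(negative ~ article, data = feedback, FUN = mean)

review <- merge(neg_share, clicks, by = "article")
review$clickthrough_rate <- review$clickthroughs / review$views

# Flag articles where both signals are poor as candidates for re-working
subset(review, negative > 0.5 & clickthrough_rate > 0.2)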

Reliability of Information - Forums. The test participants who valued the contribution of forums found them after searching on Google. For those who did not value them, they would never have looked to them for advice in the first place. As a result, there may be a case for not including forums on the support site. There is an element of risk in this, in that customers may look to un-moderated forums on the external Internet for solutions, but as this is a tried and tested method for some, it can be argued that customers will locate what is useful to them via trial and error. In this regard, space on the website can be used for provider-generated content only, which for many participants appeared to be the only trustworthy content available.

Changes in the UI to Minimize Online Journey Abandonment. Several minor changes to the UI can be recommended. However, the most significant recommended change concerns the following example:

This recommendation is based on the probability that the customer is, by this stage, fixated on contacting their provider (see Results, page 9), and reflects a participant’s comment that “If the message had been in the ‘Contact Us’ box I probably would have clicked on it”. Eight of the ten participants missed this box in the current UI (Fig. 3, left), so theoretically 80 % of customers are more likely to notice this box in the recommended UI (Fig. 3, right) and take action accordingly. In theory this could lead to a potential 80 % reduction in the resulting helpdesk contacts. (This would of course need further testing, with success measured through web analytics.)

Fig. 3. Recommended change to the position of the “Password stopped working?” box

6 Conclusions

Our first conclusion is that an enterprise’s macro-level understanding of the IS and IR strategies of its user base is a vital first step in prioritizing where to concentrate improvement efforts. For our research, this understanding was based on manually analyzing four sets of quantitative data. In future these inputs should be integrated and automated for efficiency - although the manual approach is workable for a research project, it is too labor- and time-intensive for continuous business operations.

Our second conclusion is that not all customers will want to, or be able to, help themselves by accessing all their support information online. With a diverse customer base and product range it is a worthy ambition to aim for a high percentage of support transactions to be conducted online without helpdesk contact, but in practical terms this may not always be possible. However, many of the recommendations from the user tests, from minor content-based changes (e.g. matching article language to customer language) to more significant changes (e.g. introducing more sophisticated semantic search engine capabilities), can enable the enterprise to maximize the expertise which users already demonstrate in dealing with their own problems, and their willingness to do so.

Our final conclusion is that, theoretically, by reducing operational costs for the vast majority of self-helpers, investment can be made to enhance customer support for the minority – e.g. for dealing with complex issues and to support more vulnerable and less confident customers. In conjunction with making interventions in the customer journey as above, this could result in customer support being optimized for all, with seamless integration between channels where needed [2]. This is a long-term opportunity and one which could be considered for all large enterprises.