As an end user, I’ve worked with a lot of software, and most software user assistance is far from great. Yep, I said it. And it’s true, mostly. I realize this article isn’t going to win me any new friends. I’m okay with that. But before you pass final judgment, please read on. I’m trying to help.
Why are most software help systems so bad? It’s not because the individuals designing them are incompetent (quite the contrary); it’s the lack of real-time feedback from end users.
Take the Microsoft Clippit Office Assistant, included in Microsoft Office from 97 through 2003. It was widely criticized by users as disruptive and annoying. In fact, Smithsonian Magazine called Clippy “one of the worst software design blunders in the annals of computing”.
For me, Clippy proactively offered letter-writing help regardless of how many times I clicked “Just type the letter without help.” It was not listening to my feedback.
Worse, Clippy’s constant tips just weren’t helpful, because its computer brain had no way of determining how to deal with me. And therein lies the problem.
True artificial intelligence doesn’t exist today and may never be able to replicate a human social experience. I say never because unless an artificial intelligence occupies a human body, it can’t properly experience the emotions, intuition, and feelings a human body produces when connected to its brain.
So why do we constantly try to use computers to do a human’s job? I’d argue we shouldn’t. Instead, new curation tools are emerging that allow software help assistance to become more intelligent over time: curation tools, operated by humans, that keep track of whether users find the help assistance helpful, plus analytics tools that clearly show which help phrases users search for and whether the results actually solved their issue.
So what am I talking about?
Let me start with an example. Suppose you’re in charge of user assistance for a new software product. You’ve developed what you believe to be a fantastic help experience. You’ve covered most of the use cases. Or so you believe.
The problem is that you don’t really know whether the user assistance is good or not. You don’t know because you’re not getting real-time feedback from the end user. Even if you cover all of the major use cases, you’re not going to cover the 1 + 1 use cases. What are those? I’ll give you another example.
I use Microsoft Outlook a lot. But I use it in my own context. I use Outlook for creating email, and I also use it as a low-budget CRM system. I combine email + Contacts in Outlook to create a CRM marketing tool that works fairly well. Yet the people who designed the help in Outlook didn’t anticipate my use case. I know because when I search the help files, nothing relevant shows up.
I bet Microsoft doesn’t know that I am searching for it either. And what little information they give me is not relevant, but do they know that? Probably not. How could they? They don’t ask. Clippy isn’t around to help either.
It gets more interesting when 1 + 1 = Word and Excel. There’s zero possibility Microsoft is anticipating all of the use cases in that scenario. Maybe a few high-level use cases, but that’s as far as they’re going to take it. They’re missing out on critical information they could be capitalizing on. And they’re one of the best in the business at user assistance.
Analytics, User Rating & Feedback, Curation
So what do I mean by missing out on critical information? What if user assistance influenced R&D and Sales? Imagine if your users gave you real time feedback on whether your tutorials, guides or procedures were helpful. Imagine if you could quickly discover what assistance users were searching on and not finding. Imagine if you knew what user assistance content was being accessed the most or the least.
The tools are available today. Tools that ask the user whether a tutorial was helpful; if not, a lightbox pops up and asks why. The information is collected and fed back to your team in the form of a report or dashboard.
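None of this requires exotic technology. As a rough sketch of the collection side (the class and field names here are my own illustration, not any vendor’s actual API), the helpful/not-helpful votes and the lightbox reasons boil down to a simple tally per help article:

```python
from collections import defaultdict


class FeedbackCollector:
    """Collects per-article 'Was this helpful?' votes plus free-text reasons."""

    def __init__(self):
        self.votes = defaultdict(lambda: {"helpful": 0, "not_helpful": 0})
        self.reasons = defaultdict(list)  # answers from the "why not?" lightbox

    def record(self, article_id, helpful, reason=None):
        key = "helpful" if helpful else "not_helpful"
        self.votes[article_id][key] += 1
        if not helpful and reason:
            self.reasons[article_id].append(reason)

    def report(self):
        """Summarize each article's helpfulness ratio for the team dashboard."""
        summary = {}
        for article_id, counts in self.votes.items():
            total = counts["helpful"] + counts["not_helpful"]
            summary[article_id] = {
                "total_votes": total,
                "helpful_ratio": counts["helpful"] / total,
                "reasons": list(self.reasons[article_id]),
            }
        return summary
```

In practice the `record` call would sit behind the thumbs-up/thumbs-down buttons on each help page, and `report` would feed the dashboard your writers review.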
Tools that watch all of the help searches and report the findings back to you. Armed with information about failed searches, you can build additional user assistance content that improves the user experience. Moreover, if a search term is being used more than you anticipated, building additional content may be necessary. Curating content becomes important as well: user assistance content that users rate as helpful should be promoted; content rated unhelpful should be reworked or archived.
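The search side can be sketched the same way. Again, this is an illustration under my own assumptions, not a real product’s interface: log every help search along with how many results it returned, then surface the frequent zero-result queries as content gaps and the unexpectedly popular phrases as candidates for expansion.

```python
from collections import Counter


class HelpSearchLog:
    """Tracks help searches and flags queries that return no results."""

    def __init__(self):
        self.all_searches = Counter()
        self.zero_result_searches = Counter()

    def record_search(self, phrase, result_count):
        phrase = phrase.strip().lower()  # fold case so variants count together
        self.all_searches[phrase] += 1
        if result_count == 0:
            self.zero_result_searches[phrase] += 1

    def content_gaps(self, top_n=10):
        """Most frequent searches that found nothing: candidates for new content."""
        return self.zero_result_searches.most_common(top_n)

    def unexpected_demand(self, threshold):
        """Search phrases used at least `threshold` times, most popular first."""
        return [(p, n) for p, n in self.all_searches.most_common()
                if n >= threshold]
```

My Outlook-as-CRM searches would show up at the top of `content_gaps`, which is exactly the signal the help team never sees today.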
Now your R&D department can identify areas for innovation (based on searches or direct feedback), and your sales team can identify cross-selling opportunities (you do need to track who is using the software for this to work – we use Marketo). Moreover, I suspect you may identify some additional training revenue opportunities when a user just doesn’t “get it” and needs help.
So, here’s my final point. At a high level, there is a tremendous amount of information that you’re not tracking today. And that’s why your user assistance is not as good as it could be. You may be great at what you do, but you can’t anticipate how the crowd is using your software. Sorry, but it’s a fact.
(Cross-posted @ Seek Omega )