Lately I've been pondering whether users are better served by full text search, such as the search functionality built into most browser-based help formats, or by a human-edited keyword search with ranked results.
Full text search
The full text search results in many help formats can be overwhelming to users. In a large help project, users could easily be presented with 50 to 100 topic titles, all listed alphabetically. How do they decide which is most relevant?
Are users likely to scroll through such a large set of topics, clicking anything that sounds promising until they find an answer to their question?
Full text search, however, does offer a complete set of results. If a single obscure reference to the search phrase is buried at the bottom of a topic, users will still be able to find that topic in the results.
But is a complete set of results the most helpful format for a majority of users?
What if the majority of users could find their answers faster by clicking a ranked list of results, similar to Google? Keyword search allows help authors to present what we think are the topics most likely to answer the majority of questions for a given search term.
If a user searches on "printing reports," chances are the "Printing reports" procedure is going to provide the answer, right? So shouldn't that topic appear at the top of the list of search results?
Keyword search can be tedious to set up. It requires indexing your content and studying past user search data. But maybe such an effort is worthwhile if more users find what they are looking for in the help.
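At its simplest, a hand-ranked keyword index is just a mapping from search terms to author-ordered topic lists. Here is a minimal sketch in Python; the topic titles and the `keyword_search` function are hypothetical illustrations, not features of any particular help authoring tool:

```python
# A hand-edited keyword index: each search term maps to a list of topics,
# ordered by the author's judgment of which topic most likely answers it.
# (Topic titles are made up for illustration.)
keyword_index = {
    "printing reports": [
        "Printing reports",            # ranked first: most likely answer
        "Report layout options",
        "Troubleshooting print jobs",
    ],
}

def keyword_search(query):
    """Return the author-ranked topics for a query, or an empty list."""
    return keyword_index.get(query.lower().strip(), [])

print(keyword_search("Printing reports"))
```

The ranking lives entirely in the index, which is why maintaining it means studying what users actually search for and reordering the lists accordingly.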
A combination of both
Eventually I think help authoring tools will offer a combination of full text search and ranked keyword search in the same help system. (Adobe, MadCap, are you listening?) That way we can present users with a few promising topics that will likely answer their question, followed by a complete list of topics for the users who are looking for less predictable instances of the search term.
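The combined approach described above could be sketched as follows: author-ranked keyword matches come first, and any remaining full text hits follow alphabetically, the way most help viewers already sort them. This is a speculative sketch, assuming a keyword index and a set of full text hits supplied by the tool:

```python
def combined_search(query, keyword_index, full_text_hits):
    """Merge ranked keyword results with full text results.

    keyword_index:  dict mapping search terms to author-ordered topic lists
    full_text_hits: every topic containing the search phrase

    Ranked matches appear first; the remaining full text matches follow
    in alphabetical order, with duplicates removed.
    """
    ranked = keyword_index.get(query.lower().strip(), [])
    remainder = sorted(t for t in full_text_hits if t not in ranked)
    return ranked + remainder

# Example: two hand-ranked topics, plus one topic found only by full text search.
index = {"printing reports": ["Printing reports", "Printer setup"]}
hits = ["Exporting data", "Printing reports", "Printer setup"]
print(combined_search("printing reports", index, hits))
```

Nothing about the merge is clever; the value is editorial, in deciding which few topics deserve the top of the list.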