Frowns, Sighs, and Advanced Queries -- How does search behavior change as search becomes more difficult?
Friday, September 17, 2010
At Google, we strive to make finding information easy, efficient, and even fun. However, we know that once in a while, finding a specific piece of information turns out to be tricky. Based on dozens of user studies over the years, we know that it's relatively easy for a human observer to tell when a user is having trouble finding information, simply by watching changes in language, body language, and facial expression.
Computers, however, don't have the luxury of observing a user the way another person can. Could a computer nonetheless tell that the user is struggling to find information?
We decided to find out. We first ran a study in the usability lab where we gave users search tasks, some of which we knew to be difficult. The first couple of searches always looked pretty much the same regardless of task difficulty: users formulated a query, quickly scanned the results, and either clicked on a result or refined the query. However, after a couple of unsuccessful searches, we started noticing interesting changes in behavior. In addition to sighing or starting to bite their nails, users sometimes began typing their searches as natural language questions, sometimes spent a very long time simply staring at the results page, and sometimes changed their approach to the task entirely.
We were fascinated by these findings as they seemed to be signals that the computer could potentially detect while the user is searching. We formulated the initial findings from the usability lab study as hypotheses which we then tested in a larger web-based user study.
The overall findings were promising: we found five signals that seemed to indicate that users were struggling with the search task. Those signals were: use of question queries, use of advanced operators, spending more time on the search results page, formulating the longest query in the middle of the session, and spending a larger proportion of the session time on the search results page. No single one of these signals is a strong enough predictor of users having problems in search tasks. However, when used together, we believe we can use them to build a model that will one day make it possible for computers to detect frustration in real time.
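To make the idea concrete, here is a minimal sketch of how the five signals might be extracted from a search session and combined into a single score. Everything below is illustrative: the event format, the keyword and operator lists, the 30-second dwell threshold, and the equal weighting are all assumptions for the sake of the example, not details from the paper, where a real model would learn weights from labeled sessions.

```python
def extract_signals(session):
    """Compute the five struggle signals from a search session.

    `session` is a hypothetical list of events, one per query, each a dict:
      query:        the query string
      serp_time:    seconds spent on the results page after this query
      offsite_time: seconds spent on clicked results (optional)
    """
    queries = [e["query"] for e in session]
    serp_times = [e["serp_time"] for e in session]
    total_time = sum(serp_times) + sum(e.get("offsite_time", 0) for e in session)

    # Illustrative heuristics for "question query" and "advanced operator"
    question_words = ("how", "what", "why", "where", "who", "when")
    advanced_ops = ('"', "site:", "intitle:", " OR ")

    # Index of the longest query in the session
    longest_idx = max(range(len(queries)), key=lambda i: len(queries[i]))

    return {
        # 1. Any query phrased as a natural language question
        "question_query": any(
            q.lower().split()[0] in question_words or q.endswith("?")
            for q in queries
        ),
        # 2. Any query using an advanced operator
        "advanced_operators": any(op in q for q in queries for op in advanced_ops),
        # 3. A long dwell on some results page (30 s threshold is an assumption)
        "long_serp_dwell": max(serp_times) > 30,
        # 4. Longest query formulated mid-session, not first or last
        "longest_query_mid_session": 0 < longest_idx < len(queries) - 1,
        # 5. Large share of session time spent on results pages
        "high_serp_time_share": sum(serp_times) / total_time > 0.5,
    }


def struggle_score(signals):
    """Equal-weight combination of the binary signals, in [0, 1]."""
    return sum(signals.values()) / len(signals)


# Example: a session that drifts toward a natural-language question
session = [
    {"query": "jaguar speed", "serp_time": 5, "offsite_time": 20},
    {"query": "how fast can a jaguar run in the wild", "serp_time": 45},
    {"query": "jaguar top speed mph", "serp_time": 10, "offsite_time": 5},
]
signals = extract_signals(session)
score = struggle_score(signals)  # 4 of 5 signals fire here, giving 0.8
```

In practice one would replace the equal weights with a classifier trained on sessions labeled as struggling or not, which is exactly why no single signal needs to be decisive on its own.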
You can read the full text of the paper here.