How Valuable Is Self-Reported Data?

I still hear about people conducting surveys of users as the sole or primary means of evaluating their product's design and usability. Surveying users seems simple enough, particularly on the web. After all, your users are already within reach and your product is fresh in their minds, so asking them to complete a quick survey seems like a no-brainer. I have also been told that the comments users make during a user-based usability evaluation are the "real findings" of that evaluation. Writing down these comments is certainly easy, and they come across as compelling evidence. After all, it was your user who made the comment, so it's hard to consider ignoring it. But is relying on user comments really getting us what we want?

What users tell us is what they are aware of, but that is not the whole story. They are also influenced by unconscious thoughts and by cues they don't consciously notice. It is the combination of these conscious and unconscious influences that shapes their behavior and performance.

Consider a survey conducted by Consumers Union, the nonprofit publisher of Consumer Reports. Through its Web Watch program, the well-known magazine surveyed 2,700 web users on the subject of medical websites. In the report, the authors stated that consumers rely too much on "style over substance" and care too much about a site's "look and feel." They stated that consumers "paid far more attention to superficial aspects of the information, the graphics or visual cues, than the content." As a result, the authors of the study concluded: "Consumers should be a little more savvy when they go online" and warned that consumers "may be exposing themselves to misleading or biased information."

This does sound pretty scary. The authors of the study even compared the consumers' self-reported criteria to the criteria supposedly used by healthcare professionals to evaluate the same websites. The professionals reported that they cared more about the content and the credentials of its authors than about the "superficial" elements of the websites.

The problem is that all of this data is self-reported. It reflects the respondents' assumptions about the criteria they use, not necessarily the full set of criteria actually at work. If we accept that influences outside of the user's awareness can affect performance, perhaps this survey's "finding" is not as bad as it sounds. We need to look beyond self-reported data and examine performance as well. Then we can better understand how much weight to give consumers' beliefs about the criteria they use to evaluate our products.

Luckily, performance data was also provided when the survey results were reported. In addition to the supposed criteria used, both groups of users (consumers and healthcare professionals) ranked the ten medical websites that were part of the study. The results, shown in the figure below, are interesting.

Looking at their behavior rather than their responses alone, we see that the consumers and the healthcare professionals agreed on four of the five top websites, and the difference in ranking for two of those websites was only one position. This certainly suggests that consumers may be getting it right after all, even if they are only consciously aware of the "superficial" aspects of the site design. It also suggests that healthcare professionals may be influenced by the graphics and visual aspects of sites more than they admit, and not solely by the content or the credentials of the content authors.

[Figure: Sites consumers visit versus those healthcare professionals visit]
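If you want to quantify this kind of agreement between two groups' rankings rather than judging it by eye, a rank correlation is one straightforward approach. The sketch below is a minimal illustration in Python; the site labels and rank orders are hypothetical placeholders, not the actual data from the Web Watch study.

```python
# A minimal sketch of comparing two groups' rankings of the same sites.
# The site names and rank orders below are hypothetical placeholders,
# not the actual data from the Consumer Reports Web Watch study.
from scipy.stats import spearmanr

sites = ["site_a", "site_b", "site_c", "site_d", "site_e",
         "site_f", "site_g", "site_h", "site_i", "site_j"]

# Rank assigned to each site by each group (1 = best), hypothetical values.
consumer_ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
professional_ranks = [2, 1, 3, 5, 4, 7, 6, 9, 10, 8]

# Spearman's rho measures agreement between the two rank orders:
# +1.0 means identical orderings, 0 means no relationship.
rho, p_value = spearmanr(consumer_ranks, professional_ranks)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Overlap in each group's top five, mirroring the "four of the five
# top websites" comparison described above.
top5_consumers = {s for s, r in zip(sites, consumer_ranks) if r <= 5}
top5_professionals = {s for s, r in zip(sites, professional_ranks) if r <= 5}
print(f"Top-5 overlap: {len(top5_consumers & top5_professionals)} of 5")
```

Spearman's rho is a natural fit here because rankings are ordinal data, so agreement in order matters more than agreement in any numeric score.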

The anthropologist Margaret Mead said: "What people say, what people do, and what people say they do are entirely different things." If we are to conduct our research and product evaluations correctly, we need to reconsider the primary importance often placed on self-reported data. We need to include other sources of behavioral data to get the complete picture of how users respond to our products.