Personal Glossaries on the WWW

6. Discussion


  1. Effect of Glossary on Reading Time

    When our experimental users read texts presented with either type of glossary tool, they took more time answering questions and understood the text better than when they read without any glossary. Our results confirm that it was only the answering time, not the reading time, that was affected by the presence of a glossary.

    One reason for the decrease in speed may be that users who knew the answers spent more time answering questions than users who simply stated that they did not know. Also, with better recall, it is reasonable to speculate that users would give more detailed answers; typing out these answers would generally take more time than giving simpler, shorter responses. A further experiment could measure the length of responses to the comprehension questions. Such an experiment would need to include a measure of revision (e.g., deletion of text with a keyboard, or re-drafting on paper) as part of the length. Dillon [1991] and McKnight et al. [1991] report that fill-in-the-blank tests are an ineffective measure of comprehension with hypertext; however, some form of multiple-choice test may be suitable.

  2. User Satisfaction

    The subjective results of the study show that users found the glossary tools both easy to use and useful. The analytical results show that user performance increased without any significant decrease in users' reading speed, meaning that the glossary tools were effective and pleasing to users at no significant cost to efficiency. While more detailed studies are needed to determine whether one glossary tool is superior to the other, it can be concluded from these results that the presence of a glossary tool, either automatic or user-updateable, improves users' experience with online texts.

  3. Effect of Glossary on Comprehension

    The results of the study indicate that the presence of glossary tools did indeed significantly improve users' performance. These results provide strong support for the idea that some type of glossary would be a useful addition either to individual websites or to web browsers. If further experiments find, as we suspect, that user-updateable glossaries lead to better comprehension than static glossaries, then it will be worthwhile to incorporate some type of personalizable glossary into web browsers. (For further speculation on the role of structured annotation and personal glossaries, see the hypertextual significance of this work section.)

    On the other hand, if there is little or no additional benefit to be gained by employing such complex technology, then designers of so-called portal websites and plug-in services (such as the Google toolbar for the Internet Explorer web browser and Googlebar for Mozilla and related browsers) will find something else that can be added to give extra value to web browsers. This possibility is particularly attractive when considered alongside the high degree of user satisfaction with the tools (reported above). Building on prior research into glossary interfaces [Wright, 1991; Black et al., 1992; Wright, 1993], our experimental results support the conclusion that we have eliminated all serious interface stumbling blocks for static glossaries presented in ordinary graphical browsers. This result, too, will help designers of plug-in tools for web browsers.

  4. Updateable Features

    None of the users employed the user-updateable glossary tool's specific functionality. Many users explained this behavior during the debriefing sessions by saying that there was little motivation to expend the effort of modifying the glossary, since they would not be using it after the study. They also explained that they would be more willing to use this functionality in real-life situations, where they would be able to access the glossary for a long time and where their changes would be permanent.

    Another reason that users may not have felt the need to use the updateable functionality could be that the articles used in the study were too short and were presented on a single page. The length and presentation of the articles provided little motivation for making or editing entries, since any new knowledge a user obtained came from the current article; any information that a user might add or change could easily be found again by scrolling to the appropriate location.

  5. Glossary Use

    Many users did not click on any glossary terms at all. Despite this, there was a significant increase in users' performance when they were given a glossary tool. This may be partly due to the effectiveness of the glossary tool itself and, in the case of users who did not click on any terms, to the highlighting of certain words and phrases by transforming them into glossary links. The emphasis on glossary terms in the text may lead users to notice and remember these words and phrases when they otherwise would not have, consequently improving their performance on the questions about the articles. Further experimentation is needed to confirm or reject this speculation.
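    The highlighting mechanism discussed above, transforming occurrences of glossary terms in the text into links, can be sketched as follows. This is a minimal illustration only; the markup, class name, and entry ids are assumptions, not the study's actual implementation.

    ```python
    import html
    import re

    def link_glossary_terms(text, glossary):
        """Mark up each occurrence of a glossary term as an HTML link
        to its glossary entry (hypothetical markup and entry ids).
        The glossary maps lowercase terms to entry-id strings."""
        # Try longer terms first so multi-word terms win over substrings.
        terms = sorted(glossary, key=len, reverse=True)
        pattern = re.compile(
            r"\b(" + "|".join(re.escape(t) for t in terms) + r")\b",
            re.IGNORECASE,
        )

        def to_link(match):
            term = match.group(0)
            entry_id = glossary[term.lower()]
            return (f'<a class="glossary" href="#{html.escape(entry_id)}">'
                    f'{html.escape(term)}</a>')

        return pattern.sub(to_link, text)

    print(link_glossary_terms(
        "Hypertext is linked text.",
        {"hypertext": "def-hypertext"},
    ))
    ```

    Even a reader who never follows such links still sees the highlighted terms, which is the effect speculated about above.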

  6. Next Steps

    We are conducting a more extensive analysis of variance than we presented in the Results sections (§6) to give a more realistic measure of the potential for false positive results.
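    The core computation in such an analysis of variance, a one-way F statistic across experimental conditions, can be sketched as follows. The groups here are invented for illustration; the study's actual factors and data are not reproduced.

    ```python
    def one_way_anova_f(groups):
        """Compute the one-way ANOVA F statistic for a list of
        sample groups (e.g., answering times per glossary condition)."""
        k = len(groups)                     # number of conditions
        n = sum(len(g) for g in groups)     # total observations
        grand_mean = sum(sum(g) for g in groups) / n
        # Between-group sum of squares: spread of the condition means.
        ss_between = sum(
            len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
        )
        # Within-group sum of squares: spread inside each condition.
        ss_within = sum(
            sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
        )
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    # Hypothetical answering times (minutes) under three conditions:
    # no glossary, static glossary, user-updateable glossary.
    no_gloss, static_gloss, updateable = [1, 2, 3], [2, 3, 4], [3, 4, 5]
    print(one_way_anova_f([no_gloss, static_gloss, updateable]))  # 3.0
    ```

    A larger F indicates that more of the variation in the measure is explained by the experimental condition than by variation within conditions.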

    To see if the mere presence of a glossary changes the way that people approach a text, a comparison should be made between a glossary that includes real definitions and one that includes nonsensical ones. Such an experiment would enable us to determine whether it is only the presence of highlighted terms that improves user comprehension. Furthermore, if users updated the nonsense definitions with realistic ones, then we could safely conclude that the glossary tool was something that readers would truly choose to use.

The section entitled hypertextual significance of this work may be profitably read at this point.


References

References for works cited in this text chunk appear below. References for all works cited are available in a separate chunk.

[Black et al., 1992]
A. Black, P. Wright, and K. Norman. Consulting on-line dictionary information while reading. Hypermedia, 4(3):145–169, 1992.
[Dillon, 1991]
Andrew Dillon. Readers' models of text structures: the case of academic articles. International Journal of Man-Machine Studies, 35:913–925, 1991.
[McKnight et al., 1991]
Cliff McKnight, Andrew Dillon, and John Richardson. Navigation Through Complex Information Spaces. In Cliff McKnight, Andrew Dillon, and John Richardson (editors). Hypertext in Context (ISBN 0-521-37488-X), Chapter 4. Cambridge University Press, 1991.
[Non-authoritative link: <URL:http://telecaster.lboro.ac.uk/HiC/chapter4.html>].
[Wright, 1991]
Patricia Wright. Cognitive overheads and prostheses: Some issues in evaluating hypertexts. In HT'91, pages 1–12. ACM Press, 1991.
<DOI:10.1145/122974.122975>.
[Wright, 1993]
Patricia Wright. To jump or not to jump: Strategy selection while reading electronic texts. In C. McKnight, A. Dillon, and J. Richardson (editors), Hypertext: A Psychological Perspective (ISBN 0-134-41643-0), chapter 6 (pages 137–152). Ellis Horwood, 1993.
[Non-authoritative link: <URL:http://telecaster.lboro.ac.uk/HaPP/chapter6.html>].

