This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
I come to the question of the future of text as a hypertext researcher with a particular interest in human factors.
I take my inspiration from Doug Engelbart who, along with his many ground-breaking achievements, espoused a vision of augmented human intelligence.
Although my research includes augmented reality (AR) and virtual reality (VR), I am considering text as written words.
My research focus has been mainly on discursive text in contrast to poetry and other literary forms.
The texts that this essay is concerned with are made of words that form connected statements representing ideas, unlike word clouds, for example.
Aware that this volume will be preserved in a static format, I have striven not to employ many dynamic textual features (such as hyperlinks and stretchtext), although I do use HTML's summary/detail disclosure element as an example, but only once.
Text is important — it has a function and a value that are part of the ways that ideas are developed and transmitted.
In the present, technology supports search, breadcrumb trails for navigation, visual representations of structure etc.
I imagine that the future of text is text that is presented by computational machinery.
I do not suggest that the computing machinery will be recognizable to us, nor that it will be under glass, as Nelson has described uneditable text such as much of the WWW.
A future of text augmented by computational machinery means that the possibilities inherent in text will be made manifest and magnified by the application of additional technology.
In my optimistic view of the future, one of the benefits of computational machinery is to make possible and manifest the connexions between and amongst ideas: to formalize them, to map new pathways, to inject oneself as an author, and to circulate those ideas and connexions. With text available through a distributed open network (e.g. the WWW), every person has the (theoretical) potential to be an author, a reader and a publisher.
Although all of this has been possible since the advent of written text, the widespread adoption of computing machinery (particularly networks) expands the scholarly potential.
My response to the question of what is the future of text has been influenced by many sources.
At this draft stage I do not intend to list most of them.
I am keenly aware of being influenced by Markson, in particular the plot-less novel Vanishing Point (), and many colleagues in the communities of hypertext authors and researchers.
I am particularly interested in text as palimpsest.
As Blustein et al. () wrote,
Notions of permanence attached to the written word are thought of as fetish; palimpsests (literally the residuum of erased text on parchment, metaphorically textual edits thought of as obscured in a final draft) are now marked by digital traces and tags.
Accordingly the ways that readers can mark their unique engagement and strategies … are changing.
I consider the future of text in two frames: as a reader of (or person who experiences) text, and as an author of text. We must recognize that these categories are fluid and often overlap. In the simple case, overlap will occur as readers alter original texts (by annotation, understood broadly) and thus become authors. More deeply, techniques and tools with traditional, designed uses are often subverted or appropriated. In print, Nabokov's () Pale Fire is a familiar example of a text that appears to be of only one type but is actually of another. Larsen used the path name feature of the Storyspace hypertext publishing software as a space for poetry in Samplers: Nine Vicious Little Hypertexts (). The path name feature was originally intended for documentation and reader guidance; in Larsen's text, each path name entry is a line in a poem that the reader can see when they look at the list.
I am consciously not addressing the rôle of publisher in part because, here too, the question of territory is complex.
Although I currently favour spatial hypertext for activities that are done individually, e.g. writing, brainstorming, and information triage, my focus in this Chapter is on more conventional forms of text.
I would like to see a future in which readers can freely annotate texts to make the texts their own without concern that the original text or their changes would ever be lost. For text not to be lost it must be recognised when it is found.
The 7 Issues
Of Halasz's () 7 Issues for hypertext, the most pertinent for this Chapter's vision are tailorability and extension, versioning and transclusion, and collaboration support. Obviously versioning is particularly important when texts can be changed. The activities of updating and extending texts encompass readers adding their own notes and hyperlinks (internally- and externally-pointing) to enhance texts, and authors correcting or expanding their texts. Transclusion is important in two ways: to support stretchtext (described below) and as the best way to support users creating their own documents by combining parts of others (akin to Victorian commonplace books).
Blustein & Noor () discuss glossaries as both ancillary materials to authors' texts and as stand-alone records of readers' notes. Those authors classify glossaries in four dimensions, two of which are relevant to the future of text: flexibility of location — in a single document or potentially available in many documents; and user — for use only by one reader or to be shared by multiple people, even if only one of them can alter the content.
That vision of glossaries will be stronger with transclusion to support the integration of snippets (i.e. segments of text that are copied from other texts, not to be confused with lexia, which are units of reading established by authors).
Such glossaries are works composed by readers using, and augmenting, text written by others. My vision of the future of text firmly includes such hypertextual works.
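To make the rôle of transclusion in such reader-made works concrete, the following TypeScript sketch shows one hypothetical way a glossary entry might reference, rather than copy, a span of another versioned text; the interfaces, field names and in-memory store are my own illustrative assumptions, not any existing system's API.

    // Hypothetical sketch: a glossary entry that transcludes a snippet by
    // reference to a pinned version of a source text, instead of copying it.
    interface SourceSpan {
      documentId: string;   // which text the snippet comes from
      version: string;      // pinned version, so later edits do not silently change the snippet
      start: number;        // character offset where the snippet begins
      end: number;          // character offset where it ends (exclusive)
    }

    interface GlossaryEntry {
      term: string;         // the word or phrase being glossed
      note: string;         // the reader's own words
      snippet?: SourceSpan; // optional transcluded passage supporting the note
      sharedWith: string[]; // empty for a private glossary; other readers' ids for a shared one
    }

    // A toy versioned store: documentId -> version -> full text.
    type VersionedStore = Map<string, Map<string, string>>;

    // Resolving the transclusion re-uses the author's text rather than duplicating it.
    function resolveSnippet(span: SourceSpan, store: VersionedStore): string {
      const text = store.get(span.documentId)?.get(span.version);
      return text ? text.slice(span.start, span.end) : "[source version unavailable]";
    }

Pinning a version in the reference is what lets the reader's compilation and the author's continuing revisions coexist without either being lost.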
A problem with most of today's HTML-based online texts is that they necessarily use WWW browsers' default link-following behaviour, which can be attention ripping: when links are followed, the entire visual context of the head (outgoing part) of the link is removed and replaced. This behaviour makes it difficult for readers as it reduces the coherence of what Kintsch and van Dijk call the surface (or most basic level) of the text.
Ted Nelson has suggested using document browsers that can display a universe of documents at once, allowing users to zoom in and out and pan to see the texts and to overlap documents in myriad ways. Concerns about overwhelming readers' attention in such interfaces make me seek better solutions.
Stretchtext (introduced by Nelson (); demonstrated by Fagerjord () inter alia) and other types of fluid links (as invented by Zellweger et al., ) are ways I imagine these problems will be alleviated in the future.
Stretchfilm (Fagerjord ) is surface text that expands to include more surface text, similar to how today's summary/detail disclosure element operates in HTML: the summary is always shown and the detail is shown only when activated, e.g. by being clicked like a link.
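As a rough illustration (assuming a browser environment; the element contents below are my own example, not drawn from any particular text), this TypeScript sketch builds the disclosure element and observes the reader's toggling of it:

    // The summary stays in the surface text at all times; the detail joins
    // the surface only while the reader holds the element open.
    const gloss = document.createElement("details");

    const summary = document.createElement("summary");
    summary.textContent = "palimpsest (always visible)";
    gloss.appendChild(summary);

    const detail = document.createElement("p");
    detail.textContent = "Shown only while the element is open, like stretchtext's expanded text.";
    gloss.appendChild(detail);

    document.body.appendChild(gloss);

    // The browser flips the element's `open` property when the summary is
    // activated; the standard `toggle` event lets a script react to that.
    gloss.addEventListener("toggle", () => {
      console.log(gloss.open ? "detail stretched open" : "detail collapsed");
    });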
Zellweger et al. () demonstrated multiple types of fluid text, but their intention was the same for them all: to provide the reader with an advance organizer, i.e. information about what will be found at the destination of the link.
All of the fluid links in the 1998 and 1999 articles act as ways to inform the reader of what is at the destination of the link (or in some cases to provide dynamic content in place of a traversal link).
One class of fluid link acts like stretchfilm; another uses the margin of the display to show the additional information.
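A crude approximation of the first kind can be sketched in TypeScript for today's browsers (this is my own hypothetical illustration, not Zellweger et al.'s implementation): a short gloss of the destination appears beside the link on hover, so the reader gains an advance organizer without losing their visual context.

    // Show a destination gloss next to a link on hover instead of navigating away.
    function addFluidPreview(link: HTMLAnchorElement, previewText: string): void {
      const preview = document.createElement("span");
      preview.textContent = previewText;      // the advance organizer
      preview.style.display = "none";
      preview.style.marginLeft = "0.5em";
      preview.style.fontStyle = "italic";
      link.insertAdjacentElement("afterend", preview);

      link.addEventListener("mouseenter", () => { preview.style.display = "inline"; });
      link.addEventListener("mouseleave", () => { preview.style.display = "none"; });
    }

    // Hypothetical usage: any link carrying a data-preview attribute gets a preview.
    document.querySelectorAll<HTMLAnchorElement>("a[data-preview]").forEach((link) => {
      addFluidPreview(link, link.dataset.preview ?? "");
    });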
Some studies have indicated that readers' success with hypertext is related to the constellation of psychological measures known as spatial ability.
Allen () and Juvina () have written extensively about this relationship and its implications.
Using spatial ability as an instance, I suggest that future text will have myriad forms of presentation that will be either automatically generated or under the control of the reader.
I imagine that the presentation will be personalized so that readers who want or require certain presentation styles (e.g. a presentation most suited to their spatial ability, personality, disability, or level of fluency) can be accommodated automatically by the technology with which they receive the text (today that would likely be an e-reader device, Web browser software, or a bound paper book or magazine).
Gobel & Bechhofer () identified Wikipedia as a part of the WWW in which all of Halasz's () 7 Issues had been successfully addressed.
Wikipedia (n.d.) is an interesting example of a place where the rôles of reader, author and publisher blend, and where ideas are connected. Wikipedia is at once about spreading knowledge and an example of social computing (according to Schuler's () definition).
The volunteers (and small number of paid staff) at Wikipedia do not use the platform to its full potential to map ideas or to create new knowledge as a community working together.
Why are we not yet in the future described above? Primarily because of a lack of coherent vision by those with the resources to bring together the many disparate projects that have striven to make that future (Bouvin, ). We need a cri de cœur.
For the technology to be harnessed to do what I described earlier, viz. to make possible and manifest the connexions between and amongst ideas, to formalize them, to map new pathways, to inject oneself as an author, and to circulate them, the value must be perceived and, regrettably, it will not be perceived unless it is also available for financial gain.
Better types of text cannot weaken understanding or degrade scholarship;
they will most likely lead to greater opportunities for all: writers, readers and publishers.
As the size of the knowledge network increases, its value (profitability) will increase many times over, according to Metcalfe's Law (Gilder, ), Beckstrom's Law, and Reed's Law (Hogg, ).
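For readers who want the usual rough statements of these laws, they can be written (in LaTeX notation, with n for the number of participants, unspecified constants of proportionality, and, for Beckstrom's Law, benefits B and costs C summed over users i and their transactions j) as:

    % Metcalfe's Law: value grows with the number of possible pairwise connexions.
    V_{\text{Metcalfe}} \propto n^{2}
    % Reed's Law: value grows with the number of possible sub-groups that can form.
    V_{\text{Reed}} \propto 2^{n}
    % Beckstrom's Law: value is the net benefit summed over users and their transactions.
    V_{\text{Beckstrom}} = \sum_{i=1}^{n} \sum_{j} \bigl( B_{i,j} - C_{i,j} \bigr)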
Scholars should strive to convince large publishers, governments, and others with substantial resources to support the vision of a future of text that will augment human potential and match the aspirations of Engelbart.
Gilder, G. Metcalfe's Law and Legacy. George Gilder's Telecosm, Forbes, 158–66.
Halasz, F. Seven Issues: Hypertext in the Era of the Web. ACM J. Comput. Doc. 25, 3 (), 109–114. DOI:10.1145/507317.507328.