Robert J. Sandusky
Laura J. Neumann
Emily N. Ignacio
The authors, members of the Illinois Digital Library Initiative (DLI) Social Sciences Team, conducted a small set of observations of Grainger Engineering Library users, primarily graduate students, during the summer of 1995. Our objectives were to collect data on the current patterns of usage of the following systems:
System                                          Observations
IEEE / IEE Publications Ondisc (version 4.40)*  3
INSPEC Database Ondisc (version 4.30)*          2
Illinet Online                                  4
Wilson Indexes (IBIS)                           2
OCLC FirstSearch                                2
WWW Graphical Browsers                          2
Engineering Index on CD-ROM                     2
CARL UnCover                                    1
* using UMI ProQuest interface
The results from these observations are intended to aid in the design of the Illinois Digital Library interface and retrieval infrastructure by presenting a picture of how people use currently available systems. In some cases, what was discovered during this process may prove enlightening to the DLI team. In other cases, this information will confirm some of the expectations that are held by the system builders and designers.
The first several observations were conducted as unobtrusively as possible. After a preliminary review of the data collected at that time, a short user debriefing guide was designed in order to collect richer data about the user's background, motivation for seeking information, and level of satisfaction at the conclusion of the search session.
Our findings are in the main consistent with our expectations: for example, that subject or keyword searching without use of a controlled vocabulary is the most commonly used search strategy; that barriers to use include a mismatch between the user's conception, or mental map, of the system they are using and the system itself; that users find the operation of the interface and underlying retrieval mechanisms difficult; that users rely upon information presented to them at the interface for guidance rather than seeking on-line help or librarian assistance; that, in this setting, these bibliographic tools are used primarily to locate journal articles; that people will print what's available, in the fullest format possible (i.e., either full text or citations with abstracts); and that users' searches often support tasks such as writing journal articles, theses, or dissertations.
Some more surprising findings from the analysis of these observations are that the reliability and availability of the bibliographic tools are low, and that there is a wide range of motivations for users to come into the library to use these systems, including use of these tools to support such tasks as surveying primary research fields, exploring unfamiliar cognate fields, and even looking for employment.
Before conducting the observations, the authors wrote a letter explaining our research and distributed it to the reference librarians at Grainger (see Appendix A). In addition, several copies of the letter were left in the Grainger reference area to alert the patrons to our activity. We would solicit users working at the bibliographic retrieval workstations in the reference area of Grainger Library, asking for their consent before beginning an observation. During the earlier observations, we merely watched the user's search session and took notes on their use of the system. After reviewing the fit between the data collected and our research questions, we modified our data collection method to include a short list of questions used to debrief the user at the conclusion of the observation. These questions were intended to elicit data about the user's background, motivation for seeking information, and level of satisfaction at the conclusion of the search session. We also modified our introductory remarks to the users in order to secure their permission to ask them questions (see Appendix B for a list of the questions asked). In all cases, the patrons' anonymity and confidentiality were preserved.
Length of the observation depended upon the scope and complexity of the user's search and, in some cases, the schedule of the observer. Some users conducted short and very specific searches, while others conducted long sessions in which they might search one or several systems exhaustively for information on a particular topic or set of topics. The time users worked on specific systems ranged from 5 to 60 minutes.
The data collected included demographic information (e.g., gender, academic level) and specific search actions and results. We attempted to capture as much detail as possible about the person's search terms, search strategy, results, problems, etc., in an unobtrusive manner. Any comments made by the user were also recorded. The observers took handwritten notes of the users' search activities during the course of the observations. These notes were transferred to word processing files for coding and analysis (see Appendix C for an example).
As the observations progressed, the Social Sciences Team conducted a thorough review of all of the data collected since late 1994. This wider set of data included in-depth interviews with faculty; focus group sessions with undergraduates, graduates, and faculty; and observations of general library usage in the Engineering Library and in the University's High School. One result of this review was the development of a comprehensive coding list that could be applied to the data gathered in any of the Social Science Team's research. This code list was used in coding the data from the observations. As the coding progressed, the code list was amended to reflect new concepts emerging from the data (see Appendix D).
The primary unit of analysis is the discrete user action within the larger framework of a search session. Examples of discrete user actions include entering a search term, selecting a citation item for review from a retrieval set, asking someone for help, or printing a citation or an article. At the end of the coding process, there were over 900 coded pieces of data. At a more general level, the concept of an observation as a unit of analysis must also be considered. For the purposes of this study, one observation is the interaction of one person with one specific system. That is, if person A is observed using System 1 and person B is observed using System 2, we would have a total of two observations. If person C is then observed using System 2, System 3, and System 4 during one contiguous period of time, we would have a total of five observations. The data we gathered represented a total of eighteen such interactions between people and systems. Three of the people observed used more than one system during the observation: one person used two systems, one person used three systems, and one person used five systems. A total of eleven people were observed; for the number of observations per system, refer to the list on the first page of this report.
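The counting rule described above can be sketched in a few lines of code; the person and system names below are hypothetical placeholders mirroring the example in the text, not the actual study data.

```python
# Illustrative sketch of the unit-of-analysis rule: one observation is one
# person's interaction with one specific system. These records mirror the
# hypothetical example in the text (persons A, B, and C).
interactions = [
    ("person A", "System 1"),
    ("person B", "System 2"),
    ("person C", "System 2"),
    ("person C", "System 3"),
    ("person C", "System 4"),
]

# Each (person, system) pair counts as one observation.
total_observations = len(interactions)  # 5, matching the example in the text

# Distinct people observed, regardless of how many systems each used.
people = {person for person, _ in interactions}  # 3 people: A, B, C

print(total_observations, len(people))  # prints: 5 3
```

By the same rule, the eighteen interactions in our data came from eleven people, since three people each contributed more than one observation.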
Each user action during a search session was assigned one or more codes. All search sessions were coded by the same member of the research team. The codes were inserted into a copy of the raw transcription file. A third file was created that associated some of the transcription text with each code applied to the observation (this step is analogous to the traditional cut-and-paste methods commonly used in naturalistic research). The coded data in this file was then sorted in numerical code order. After all of the observations were processed in this manner, the sorted data was combined into a single large file of coded observation data. This data was also copied into a spreadsheet file to allow us to sort the data by user, system, or user action. This also helped us gauge the relative frequency of code use.
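The sort-and-tally step can be sketched roughly as follows; the code numbers and transcription snippets are invented for illustration (the actual code list appears in Appendix D).

```python
from collections import Counter

# Hypothetical coded excerpts: (code number, transcription snippet).
# These codes and snippets are illustrative only.
coded_data = [
    (12, "enters keyword search term"),
    (3, "asks reference librarian for help"),
    (12, "adds a second subject term"),
    (7, "prints full text article"),
    (12, "narrows search by year"),
]

# Sort in numerical code order, as the combined observation file was sorted.
coded_data.sort(key=lambda item: item[0])

# Tally how often each code was applied, giving a rough sense of the
# relative frequency of each kind of user action.
frequencies = Counter(code for code, _ in coded_data)
print(frequencies.most_common(1))  # prints: [(12, 3)]
```

The spreadsheet step served the same purpose: re-sorting the coded rows by user, system, or action, and eyeballing which codes recur most often.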
This coding method allowed us to look at the large mass of data in multiple ways. The original transcriptions were available to provide overall context for each action within an observation and the combined data provided a rough sense of how frequently various activities were performed (see Appendix D). This data is amenable neither to statistical analysis nor hypothesis testing due to the relatively low number of observations and our use of convenience sampling in selecting who would be observed.
The rest of this section will summarize the findings from the observations using the following ten questions as organizing categories:
What search strategies/keys are used and why?
Bates (1989) provides a useful categorization of search strategies commonly employed by users when performing searches in manual environments. She defines footnote chasing, citation searching, journal run, area scanning, subject searches in bibliographies and abstracting and indexing services, and author searching. Our coding scheme included Bates's six categories along with some additional ones. For example, we included title searching because we observed its use during the observations. We also made a distinction between subject searches using controlled vocabularies (which we called controlled vocabulary searches) and subject searches not using controlled vocabularies (which we called keyword searches). The Illinois DLI project emphasizes the integration and use of controlled vocabularies in the final system, so we wanted to capture this distinction.
We observed subject searches (both keyword and controlled vocabulary), author searches, title searches, and footnote chasing, as discussed below.
We did not observe the strategies of journal run, citation searching, or area scanning in use in any observation discussed here.
Users bring a variety of material with them to help them with their search. The nature of the material they bring is likely to have a direct influence on their selection of a search strategy. Having a copy of an article turned to the reference section is an indication that the user intends to use a footnote chasing, or backward chaining, strategy. We saw users referring to this type of information before beginning searches using both subject terms and specific article titles as search terms.
The most frequently noted search strategy was the subject search. We noted two varieties of subject search. The majority, about three quarters, were keyword style searches and a minority (about one quarter) were searches using controlled vocabulary.
Authors' names were used for searching both as stand-alone search terms and in combination with subject terms. Title searches were relatively uncommon. Journal titles, keywords, and specific article titles were all submitted as title search terms.
We noted a set of behaviors associated with the search process that we termed search management behavior. Use of hand-held references like journal articles, handwritten notes, etc., is one component of search management that emerged as an interesting phenomenon during the observation process. These references in hand, as noted above, seem likely to influence the starting point for a search, and also to be related to the type of search strategy pursued. We became interested in the varieties of notes the users brought to the system to help them begin their searches. We observed copies of journal articles turned to the reference section, handwritten notes, and printouts from what we believe to be other bibliographic reference tools. We noted that users frequently made marks or other notations on their hand-held references. We plan to explore this further as the project unfolds.
Another set of behaviors associated with search management was use of system features for narrowing or widening the searches. The observations revealed four important categories of search management behavior: narrowing the search, widening the search, moving to another database, and abandoning the search. Narrowing the search was by far the most common technique applied. Most often the user narrowed the search by applying an additional subject term, usually a keyword rather than a term from a controlled vocabulary; typically, two subject or concept terms were used to find the retrieval set at their intersection. Narrowing by year was also common. Narrowing with an author's name was observed once. There were no instances where users actually widened their searches. Switching to another database available within a single interface was observed on one occasion. Three of the observed users used multiple systems during their search sessions. Two of these people used the same search terms on the different systems. The third user seemed to lack any specific focus for his activities.
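Narrowing with a second subject term amounts to a Boolean AND over the two retrieval sets, which can be pictured as a set intersection; the document IDs below are hypothetical.

```python
# Hypothetical retrieval sets of document IDs for two subject terms,
# e.g. "concrete" and "water pollution control".
results_term_one = {101, 102, 103, 104}
results_term_two = {103, 104, 105}

# Applying the second term narrows the result to the intersection:
# only documents indexed under both terms remain.
narrowed = results_term_one & results_term_two
print(sorted(narrowed))  # prints: [103, 104]
```

Widening a search would correspond to a union of the sets, which, as noted, we never saw a user attempt.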
We were sensitive to the possibility that search strategies themselves may be ineffective, come undone, or that serendipitous finds could result from activities directed toward some other goal. We noted one case in which a patron serendipitously located valuable information closely related to his topic while using the IEEE full text system for an unrelated reason. This user had been looking at the typical content of some journals that he was not familiar with in order to see if his paper could be published in these journals. While he was focused on that goal, ...he had found a few articles on topics similar to his, so he was planning on integrating those references into his own paper.
What mistakes are made? What are biggest problems/barriers to use?
It can be difficult to determine if a user is making mistakes in a relatively unobtrusive, naturalistic research setting. The observer is unlikely to detect if the user is encountering cognitive difficulties such as using an inappropriate system, inappropriate search terms, or inappropriate strategies. Other types of mistakes, such as typographical errors, selection of the wrong menu option or compact disc, etc., are more easily observed.
We tried to distinguish between different sorts of mistakes and barriers. The first category we labeled mistakes, which tend to be rather mundane: typographical errors, hitting the wrong key at a particular juncture in the session, etc. These events are relatively simple in nature and easy to correct, assuming the user notices them. The second category, also relatively mundane, comprises fairly typical but annoying events, like a printer running out of paper. The final category, which is more important in terms of designing and evaluating systems, we classified as barriers to use. This discussion focuses on that final category.
Barriers may exist in the user, in the system, or in the interaction between the two. We observed users unsure of the coverage of the systems they were using, which we classify as an example of an internal barrier. In one case we observed a user scrolling through the list of indices presented by the IBIS interface in order to choose an appropriate database. In another case, a user told us that he was unsure of his own understanding of and skills with the system, so he couldn't be sure he was making the best use of it.
There were several system-level failures noted that presented serious problems for the users of these on-line systems. In two of the three observations of use of Engineering Index, the entire system was completely unavailable for use during part or all of the user's session. In one case, the user had come to Grainger specifically to use EI only to find that it was down. The second EI user, in reaction to EI crashing and not recovering in the middle of an extensive search, remarked that "... this system is not ready for prime time."
It would be useful to collect and review the availability figures for EI in order to determine whether the EI system is inherently unreliable or our observations represent an unlikely coincidence. Other problems noted included frozen systems, where the computer was completely unusable until it was rebooted, and systems that spontaneously restarted mid-session. These sorts of system problems were also noticed when the Social Science Team performed a series of observations at the Grainger Reference Desk during late 1994 and early 1995. Almost half of the Reference Desk observations note some sort of computer system problem, such as lost connectivity. Not all of those problems were with EI, but they indicate that system availability and reliability continue to be a recurring problem. (This is noteworthy because potentially negative perceptions of the DLI could arise if the DLI systems face similar difficulties.)
There were cases in which it is difficult to determine whether the barrier exists in the system, the user, or in some combination of the two. For example, in several instances, the users acted impatiently, and this behavior could be interpreted as either a result of poor system performance or unrealistic user expectations. Other examples include situations where the system presents the user with options or information that the user feels is inappropriate at that juncture. In other cases it seems as if the system is not behaving as expected. For example, a user of Engineering Index who was reviewing a list of titles in the result set performed the following actions:
Presses 1 to see the first entry - [system] beeps; tries to mark 1 - [system] beeps; [user] can't reformat anything so that it displays more than titles; [user] tries to print 1, but all it will print is the title - [system] won't give him the full record.
Confusion at the interface seems to be extremely important, and it raises questions about how people begin to use, and learn to use, these bibliographic systems.
Another category of barriers comprises those aspects of a system that users do not seem to like. System performance is again noted here, and the source of the problem cannot be determined without extensive information on typical system performance levels. One user complained that the long-format results of the IBIS system don't provide an abstract. The same user complained of not being able to switch from a short-format display to a long-format display without backing up through an intermediate screen: "You have to go back and do the whole thing over again. There should be an easier way to get back there." Also in IBIS, the same user didn't like having to remember the numbers of previous items of interest. He would obviously prefer to have this information available simultaneously with the display of the record: "Lucky I remember the numbers I want so I don't have to go around in loops. I hate that I can't see what I'm doing along the way."
One user objected to the way the CARL UnCover system prompts for address information, apparently interpreting this prompt as an invasion of privacy. Another user would like to be able to limit the search (in IBIS) to specific years. Another user stated that because the IEEE system only covers 1988 and later, users must also rely on bound indices, journals, and the photocopy machines for complete subject coverage -- this seems to represent a sort of fragmentation of the collection. One user had hit enter to begin a search, realized he had made a typographical error on a search term, and was frustrated by the lack of a search interrupt mechanism in the user interface.
The quantity and quality of resources were also observed to be barriers to use. Examples include waiting in line at the reference desk for assistance; one user also noted that the print quality of the IEEE full text system is lower than that of a photocopy of an article from a journal. One user began reading a journal article while he waited for search results to download to his disk; he preferred to download to disk and print out elsewhere in order to avoid using the relatively low-quality dot matrix printer attached to the INSPEC system. Again, system performance issues appear here, although we cannot determine from this information whether the quality or quantity of resources supporting these systems influences their performance.
How do users get help? What do they do when they're stuck?
We organized our analysis of situations in which users seek help by the source of help selected by the user. Users most frequently rely on the information available to them on the screen. Occasionally, they may try the help function, if one is available; however, these systems do not usually have well developed help systems. As a result, the on-screen information did not appear to provide assistance to users when they were truly confused about what to do: "He is trying to decide what to do from the first menu (do keyword search or search by journal title), is sitting and looking at screen for some time." The on-screen information may be useful more as a mnemonic device representing the various options available from a particular screen. We observed only two cases where users sought out and actually talked with the Grainger reference librarians. In both cases, the users had what appeared to be successful interactions with the reference librarians. In one other case, the user was unable to talk to a reference librarian because it was after 10 PM and the reference desk is not staffed at that time. One user, frustrated and ready to leave because Engineering Index was not working, asked another user what that user was using and then went back to a terminal to try the Wilson Indexes (librarians were on duty at the reference desk at that time). We observed only one instance of a user trying to get vocabulary help on one of these systems: an Illinet Online user using the Library of Congress subject heading search module.
What content is sought? (What journals, fields, formats?)
It isn't really possible to draw conclusions about which journals are sought out by users of these systems: this information is hard to collect observationally, the user sample is too small, and the users we observed were less focused on specific journals than on information related to specific subjects. This sort of data would be better collected through transaction logging and analysis of the transaction logs for frequencies of journals retrieved, items reviewed, items printed, etc.
The fields, or disciplines, represented by these users included Computer Engineering, Electrical Engineering, other types of engineering (where subject terms included concrete and water pollution control), biology, Library and Information Science, and Computer Science.
Most of the users were using the on-line systems to search for articles from journals, and in a few cases, they also looked at papers from conferences. One user performed a periodical search on the IEEE system, but soon backed up to do a keyword search: it seems likely that this user had made a mistake. Most users displayed search results in a longer format that included an abstract. In one notable exception, the user was frustrated that the Wilson Index citations in the long form did not include abstracts.
Users of the IEEE full text system relied upon review of the citations and abstracts for the information they needed to decide whether it was worth the effort (finding another CD and placing it in the CD drive) to bring up and print the full text document image. Most of the time, users of this system did not examine the full text version of the document before printing it. No users tried to read the full text article on the screen. In a few cases, users paged through the article, pausing at tables or graphs, before deciding to print it. This range of behavior is probably because it is impossible to read the text of the article when you are zoomed out far enough to see an entire page; if you zoom in to where you can see the words, you can see only half of the page, and the print resolution is still fuzzy enough to make reading from the screen unpleasant. The tables and figures, though, seemed clear enough for some users to get an understanding of the details of the paper without having to read all the words.
The users of the IEEE full text system were intent upon getting to the full text versions of the articles that they felt were interesting based upon the citations they had examined. Most often (generalizing to all of the systems) people were actually retrieving the abstracts associated with the items in the retrieval sets. Pictures were only useful to users of WWW browsers; tables and figures were useful to those who reviewed the full text images from the IEEE full text system. It was also common for users to examine the contents of a retrieval set by looking at just a title list, which they could then use to get to the abstracts. When reviewing citations in detail, users overwhelmingly preferred the long format including the abstract when it was available. The short format citations were rarely used. Illinet Online users looked at holdings data frequently.
When and why is full text retrieved?
The answer to the when question seems to be whenever it's available. We can only use the observations of users of the IEEE full text system to address this question. The general pattern of use on the IEEE system was for the user to enter one or more subject terms, submit the search, and then perform a fairly careful review of the list of article titles, linking to the citations with the most relevant or interesting titles (a popular variation was for the user to simply page through the citations, one after the other, without surfacing to the list of article titles). After review of the citation, which usually focused on the abstract, the user would choose to retrieve the full text image of the article. Once the image was on the screen, the article was invariably printed, usually without review of the on-screen document image.
By listening to unsolicited comments and through the debriefings at the conclusion of some of the observations, we learned that users perceive time and cost benefits to using the IEEE full text system. The time benefit is that they don't have to find out whether the particular journal issue is available on campus, go locate it, and then photocopy it; printing from the workstation saves many steps and ultimately a lot of time. The cost benefit is that the user can get a hard copy of the article for free, at only slightly lower quality, compared to the $0.05-per-page photocopy charge. It's important to keep in mind that in most cases the users decided whether to print based upon examination of a traditional citation with abstract, not upon examination of the full text image.
When do people print? Do they print pieces or entire articles?
We generalized this question a bit to include activities other than printing on paper. We believe that activities such as emailing or downloading search results, citations, etc., are merely different forms of making a record of the search to take away. Full text printing was obviously popular on the IEEE full text system. Printing of citations or abstracts was done fairly often (including simultaneous printing of full text and citations by one user). An INSPEC user was searching for the same topic across all the years represented by the INSPEC system and was downloading all of the search results to diskette for later evaluation. One user of the Wilson Indexes emailed search results to himself. One user of Engineering Index tried to email results, but email was not an option available to him at that point in the search.
In terms of printing full text, there are not really any options available in selecting what to print on the IEEE full text system. It is possible to print a single page, but no user was observed to do this. (In one case, when the printer had run out of paper in the middle of printing a long article, the user printed the last half of the paper using the print page function in order to avoid printing the first half of the paper twice.)
What will people do with the material they've retrieved?
This type of information is difficult to glean using an unobtrusive mode of observation. Users of the IEEE full text system were almost certainly collecting copies of articles to place in their personal collections. One IEEE user, in addition to printing the full text of the journal articles, was also printing the long form of the citation. When asked about this, she replied that she used the printed copies of the citations as a condensed reference to the articles she had on hand.
A common next step is to take the citations retrieved from an on-line system and use Illinet Online to determine whether the item is available. We did not actually follow users from an on-line bibliographic system to the OPAC, but we did observe that users on occasion relocated from the bibliographic tool to the OPAC terminals.
How satisfied are users? What do they see as advantages?
Users made comments that specifically identified aspects of the current systems that they liked and presumably find beneficial. One user, who otherwise made only critical comments, mentioned that he liked the search history feature in IBIS; this user would probably find such a feature beneficial in any system. One user mentioned using Illinet Online from his office frequently; presumably, remote access to other bibliographic systems would be similarly beneficial. The IEEE full text system has the obvious attraction of free prints of articles for the subset of items available. One user preferred the INSPEC system to the IEEE system and was willing to forego the free printing of articles in favor of INSPEC's inclusion of sources "... that are not on the IEEE system": he traded cost and convenience against more complete coverage of his discipline.
How much do people understand about the system and its use? (How do they conceptualize the system?)
There are actually two different questions posed here:
Can the patrons use the systems effectively?
Do the patrons understand the intent of the systems design, including the nature and extent of the coverage of the database?
Both questions are difficult to answer in simple terms. We will describe some of the behaviors we observed to provide a sense of the variation in system use. In some cases, the users seemed to be making effective use of the system because they were finding materials that suited their stated purpose. Two of the users were engaged in directed but fairly open-scoped searches to get a sense of some general research area; they seemed to be locating materials of relevance to them. Two other users were obviously meandering through the information space provided by the systems: one of these users was switching search terms widely, using engineering terms at one point and art terms at another. Neither of these users took any results away with him. However, it may be unfair to classify these sessions as ineffective, because comments each of these users made indicated that they were first-time users of these systems, and exploration may have been the motivation for their activities. In other cases, users entered several search terms and combinations of search terms that were obviously variations on a theme; their intent was rather tightly focused, and they seemed to take few items away with them at the conclusion of their sessions.
We were unable to gather significant amounts of information related to the second question. Only one user offered information about it, when asked in general about problems with using the system. He felt unsure of his own level of understanding of the system and of his skills with it, which made him unsure that he was making the best possible use of it. This comment suggests a lack of clear understanding of the system, its coverage, and how to maximize one's use of it.
Another possible indication of problems with users' mental models is that a particular user was likely to repeat the same search strategy or pattern, with little or no modification, during the entire session.
What work tasks is the system supporting?
The users observed seemed to have a variety of reasons for embarking upon their information quests. In the earlier observations, we did not ask questions explicitly designed to elicit this information, though in some cases the users volunteered it. In some of the later observations, we explicitly asked what had brought them to the library that day. One way to characterize the answers is as the information need that motivated the user to begin searching an on-line bibliographic system. Direct responses we received to this sort of question were (1) looking for an explanation for recently acquired data that was behaving in a non-ideal fashion; (2) looking for journals that might accept a recent paper that seemed to fit in a field somewhat removed from his own; (3) looking for papers related to her research work in order to get a feel for the field; and (4) looking for a job on the Internet.
In some cases, the user told us the specific work task that the current search was supporting. Among the responses received were (1) preparing to submit a journal article; and (2) finishing a thesis. In general, these sorts of tasks seem consistent with the focus group and interview data obtained earlier. Those data suggested that different types of users (undergraduates, graduates, and faculty) had different motivations and different styles of seeking information. Almost all of the users observed were graduate students.
Discussion of the Grainger Observations
There are five main areas for discussion resulting from this study. First are the major implications for system design that we can draw from the data collected so far. Our observations of bibliographic tool use suggest that library users rely most heavily upon the information presented on the screen for help, hints, and suggestions about how to proceed; rarely do they attempt to use the built-in help systems. The quality of the help systems embedded within these systems varies widely. Illinet Online, for example, does not really have a well-developed help system; at the other end of the spectrum, OCLC FirstSearch has an extensive one. On some systems, help is available only from a subset of the screens. The contents of the help screens are frequently written not in the language of the user but in the language of the system builder. The design of the Illinois Digital Library could benefit from incorporating ideas from research into networked on-line help, reference, and tutorial systems (see, for example, Barrett, 1989).
Another area of concern is how the population of potential users will be introduced to the system and, once they begin using it, how they will gain familiarity and confidence with it. The initial concern is with how users are introduced to existing systems: how are the tools currently located at Grainger introduced into people's lives? How does the nature of that introduction affect their ability to use and conceptualize these systems? How effective are the methods in use today? Are these methods appropriate for systems that most users only touch occasionally? The findings of the NASA/DoD Aerospace Knowledge Diffusion Research Project (Pinelli et al., 1994) suggest that engineering students at the University of Illinois rely more upon personal collections of information than upon the collections organized by or available via the library. That study also shows that less than one third of students in engineering and the sciences receive specific training in the use of bibliographic resources in their disciplines. To compensate for what may be larger systemic issues, another goal for the Illinois Digital Library might be maximization of mutual intelligibility: that is, the design should cue the user as to the system's purpose, its expectations in its current state, and the actions and responses currently available to the user. If possible, the design should also allow the system to accept signals from the user regarding the user's current understanding of the unfolding session, perhaps by allowing the user to ask why the system has responded as it has, or why it is presenting particular information now. Such an exchange of cues on the fly might help clarify the intent of the system design and thereby refine the user's conceptualization of the system.
Third are the management issues that emerge when considering use of bibliographic tools. The most obvious issue concerns the reliability and availability of the systems themselves. The on-line version of Engineering Index, in particular, seems to be unreliable; this condition was evident during the Winter 1994-1995 semester as well as the Summer and Fall 1995 terms. Recurring problems of this magnitude raise the question: how central are these systems to the academic library as an institution? If the digital library we build today is the model of the future, is the academic library as an institution ready or willing to make the adjustments needed to place such systems in a truly central place? If the academic library commits to greater digitization of humanity's knowledge, it must also acknowledge and address the associated responsibilities of reliability, availability, and ease or transparency of use, which are largely moot points in the book-oriented repositories of today. These are not merely properties of technology; they also imply changes in the roles and responsibilities of the staff working within the institution. While libraries have traditionally been service organizations, they have not been computer-based service organizations. What steps must the library take to prepare for these new responsibilities? Will library usage patterns change so that less use is made of the physical library and more use is made of the collection via remote access? If so, how will users get assistance? Will the library ever close? Will its users allow it to close?
Fourth, the bibliographic tools that our subjects used were, with only one exception, based upon the classic information retrieval model in which a user's query is matched to a representation of a document. The IEEE/IEE Publications Ondisc system is still grounded in the classic model: its differentiating feature, full-text images of articles, primarily adds convenience, because the user does not have to locate the cited item on a shelf and photocopy the article manually. World Wide Web browsers, however, seem qualitatively different from the other systems, perhaps for the following reasons. First, a WWW browser is not really a bibliographic tool as instantiated today: searching mechanisms are primitive and the body of information lacks organization. Second, WWW browsers have hypertext capabilities which the bibliographic tools we focused on do not. This qualitative difference was underscored by the fact that our coding scheme had to be expanded to let us code user activities such as selecting links and even scrolling. At this point there is little functional overlap between the bibliographic and WWW tools, and with little functional overlap we can expect to find little overlap in the sorts of things a user can do with each type of system. In the future, these functional distinctions should begin to fade, and the patterns of usage of bibliographic tools with embedded links can be expected to merge as well.
Finally, some comments about the methods used, their strengths and limitations, and how they could be modified to be more informative. The first notable strength of the methods used in this study was that these were observations of real users performing real searches; as a result of watching users in situ, we observed several unanticipated types of system usage. Limitations include the fact that conducting, transcribing, and analyzing the data from these observations is very time consuming, and that unobtrusive methods fail to gather data concerning users' motivations, cognitive states, or demographic characteristics. As the study unfolded, we came to recognize the importance of the debriefing process for eliciting information about the user's task, motivations, and use of search results. Using a comprehensive interview guide for the debriefings from the outset would have enabled us to gather more data on these questions. Several changes to the methods could be considered for future rounds of observations. For example, if the systems can collect and produce transaction log data, the importance of real-time note taking is reduced and the processes of session transcription and analysis might be improved. It might also be wise to adjust these methods before applying them to new systems such as the Illinois Digital Library, since there appear to be important systemic differences between newer systems such as the DL or WWW browsers and the bibliographic retrieval systems observed here. Another modification would be to use these experiences as a basis for generating an observation check-off form, which could support more frequent, if leaner (in terms of richness of content), observations of users in situ.
This might allow other observers, such as library staff, to collect useful observational data on an ongoing basis, reducing the burden of note taking and transcription while still yielding data amenable to analysis. The designers of such a process would need to evaluate the trade-off between an increase in the number of cases and the collection of leaner data.
Barrett, Edward, ed. The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information. Cambridge, Massachusetts: MIT Press, 1989.
Bates, Marcia. The design of browsing and berrypicking techniques for the online search interface, Online Review, 13, 5, 1989, pp. 407-424.
Pinelli, Thomas E., Rebecca O. Barclay, Laura M. Hecht, and John M. Kennedy. The technical communication practices of engineering and science students: results of the phase 3 academic surveys, NASA/DoD Aerospace Knowledge Diffusion Research Project: Report Number 27. December, 1994.
Implications for methods
(A few notes; not intended to be part of this write-up.)
This study was under-designed; it was intended to be somewhat informal. Lessons learned include:
- Many of these questions cannot be answered by unobtrusive observation alone.
- An interview guide with structured and unstructured components is needed: structured components for demographics, tasks, and motivations; unstructured components for questions arising from events observed during the session, such as HCI difficulties, system misbehavior, or unexpected user behavior patterns.
- Making these changes to the methods will generate somewhat different results.
- The revised approach could be applied by us to many different types of systems in the future.
- There are sharp differences between Mosaic / WWW on the one hand and all other search systems on the other.
Frequencies associated with each code as applied to the observations forming the basis of this study. Numbers in bold brackets represent totals of this code and all of its related subdivisions. In some cases, the number of instances of code use may not match the number of observations due to (1) incomplete or missing data or (2) application of codes across multiple search sessions performed by the same user.