

Database study
The purpose of this study was to check users’ knowledge of the terms used to describe a database application. 18 users participated in a quiz-style dialogue in which the investigator pointed to parts of the display and asked for the term by which it was described in the instruction material, before performing 2 retrieval tasks and a transfer task. Users were also asked to compare the database application to the spreadsheet application investigated in the previous study.

No.    Term/Task            Question
1      File                 Could you please describe what a database file is?
2      Field                Do you know what one of these columns is called?
                            Can you describe what a field is?
3      Record               Do you know what one of these lines in the file is called?
                            Can you describe what a record is?
4      Entry                Do you know what one of those cells is called?
                            How would you describe what an entry is?
5      Diff. field/record   How would you describe the difference between fields and records?
6-10   Data Types           What different types of data can you have?
11-14  Query Language       You have used a query language to retrieve information from files. Can you remember what the query language consists of?
15-17  Addresses            What do the commands FROM, SELECT, WHERE refer to?

No.    Task                 Description
18     Task 1               You want a list of all customers' names and postal addresses for a mailshot.
19     Task 2               You want a list of all people whose payments are overdue, the name of a contact person and their phone number, so that you can start chasing outstanding payments.
20     Task 3 (TT)          You want to find out which customers have purchased how much from you, and list them by the size of their purchases.
21     Difference DB/SS     Could you describe the difference between the database and the spreadsheet?

Table 20: Database tasks
8.1 Procedure

This study was not directly based on any of the observation scenarios suggested by Young (1983), but tried to illuminate the relationship between the semantic knowledge that users hold on the one hand, and their performance on the other. Semantic knowledge was elicited by asking users questions about the system, and subsequently observing them using the system (the Open Access database). The study therefore consisted of three parts:

1. The investigator (the author) asked the users 17 questions about the database (see Table 20), checking whether they knew the basic terminology and the commands of the query language. All the terms had been introduced in the tutorials, and were explained in the instruction material given to users for this application. Users were allowed to view an example file while answering these questions.
2. Users were then presented with three tasks, for which they had to construct and execute a query on a given database file. The third task, a transfer task, required sorting the retrieved records in descending order.
3. Users were asked to describe the difference between a database and a spreadsheet.
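The three retrieval tasks can be illustrated as SQL-style queries of the kind the Open Access query language resembles. The sketch below uses Python's sqlite3 module; the CUSTOMERS table and its field names are hypothetical stand-ins for the example file used in the study, and Open Access's own syntax differed in detail.

```python
# Hedged sketch of the three retrieval tasks as SQL queries, run via
# Python's sqlite3. Table and field names are hypothetical stand-ins
# for the study's example file.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CUSTOMERS (
    NAME TEXT, ADDRESS TEXT, CONTACT TEXT, PHONE TEXT,
    OVERDUE INTEGER, PURCHASES REAL)""")
conn.executemany(
    "INSERT INTO CUSTOMERS VALUES (?, ?, ?, ?, ?, ?)",
    [("Acme Ltd", "1 High St", "A. Smith", "555-0101", 1, 1200.50),
     ("Bolt plc", "2 Low Rd", "B. Jones", "555-0102", 0, 300.00)])

# Task 1: all customers' names and postal addresses for a mailshot
task1 = conn.execute("SELECT NAME, ADDRESS FROM CUSTOMERS").fetchall()

# Task 2: overdue payments, with contact person and phone number
task2 = conn.execute(
    "SELECT NAME, CONTACT, PHONE FROM CUSTOMERS WHERE OVERDUE = 1").fetchall()

# Task 3 (transfer task): customers listed by size of purchase, largest first
task3 = conn.execute(
    "SELECT NAME, PURCHASES FROM CUSTOMERS ORDER BY PURCHASES DESC").fetchall()

print(task2)  # [('Acme Ltd', 'A. Smith', '555-0101')]
```

Task 1 exercises only SELECT and FROM; Task 2 adds a WHERE condition; Task 3 requires overriding the default ascending sort order, which is what made it a transfer task.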
8.2 Results
18 users participated in this study. The verbal protocols of the observations on this scenario are provided in Transcripts 114-127, and an overview of the results is given in Table 21. The findings for each group of tasks are given below.

8.2.1 Terms

All users could give some description of a file or could identify files in the system. 60% of users recalled the term field, or knew that it referred to the columns in a file. Half of the users recalled the term record, or knew that it referred to the rows in a file.

8.2.2 Data Types

Most users could identify text/alphanumeric and dates as data types. 10 users identified either numerical or decimal, but not both, which suggests that they used either term for numbers in a generic sense. 8 users identified binary choice as a data type.
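The five data types mentioned here, and the distinctions users failed to draw, can be sketched as Python values; the field names below are hypothetical, chosen only to illustrate one value of each type.

```python
# Hedged sketch of the five data types discussed in the study, as Python
# values. Field names are hypothetical; the point is the distinctions
# users were expected to draw (integer vs. decimal, text vs. binary choice).
from datetime import date

record = {
    "NAME":      "Acme Ltd",        # text/alphanumeric
    "LAST_PAID": date(1987, 5, 1),  # date
    "ITEMS":     12,                # numerical (integer)
    "BALANCE":   349.99,            # decimal
    "OVERDUE":   True,              # binary choice (YES/NO, TRUE/FALSE)
}

# A generic "numbers" category conflates these two distinct types:
is_int = isinstance(record["ITEMS"], int)        # True
is_dec = isinstance(record["BALANCE"], float)    # True

# Binary data, though displayed as letters, is not text:
is_bool = isinstance(record["OVERDUE"], bool)    # True
```

Users who identified only "numerical" or only "decimal", and who subsumed TRUE/FALSE under text, were effectively collapsing these five types into the everyday categories of numbers, words, and dates.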

8.2.3 Query Language/Addresses

Most users could recall the commands FROM and SELECT, and knew that they addressed files and fields. About half could recall the commands WHERE and ORDER, and knew that WHERE was used to specify conditions.

8.2.4 Tasks

All but one user was able to complete the first task without assistance, but only seven managed to complete the second. The problem most users encountered was specifying the condition WHERE OVERDUE = TRUE: most simply selected the field OVERDUE, and only specified the condition after prompting from the investigator (e.g. Subject 8, Transcript-119; Subject 9, Transcript-120).
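The error can be made concrete with a small sqlite3 sketch: selecting the OVERDUE field merely displays its value for every record, whereas the task required a WHERE condition that filters the records retrieved. The table and field names are hypothetical.

```python
# Hedged sketch of the Task 2 error: selecting a field vs. specifying a
# condition on it. Table and field names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMERS (NAME TEXT, OVERDUE INTEGER)")
conn.executemany("INSERT INTO CUSTOMERS VALUES (?, ?)",
                 [("Acme Ltd", 1), ("Bolt plc", 0)])

# What most users did: select the field with no condition -> every record
all_rows = conn.execute("SELECT NAME, OVERDUE FROM CUSTOMERS").fetchall()

# What the task required: a WHERE condition restricting the result set
overdue = conn.execute(
    "SELECT NAME FROM CUSTOMERS WHERE OVERDUE = 1").fetchall()

print(len(all_rows), len(overdue))  # 2 1
```

The first query answers "what is each customer's overdue status?", the second "which customers are overdue?"; the users' error was to treat the two as the same question.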

The third task was a transfer task, which required users to list the retrieved records in descending order (the system default for sorting numerical or decimal fields was ascending). Only one user managed to complete the task without help or prompting.
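The transfer element can be isolated in a short sqlite3 sketch: sorting defaults to ascending order, so listing customers by size of purchase requires the explicit DESC keyword. Names and values are hypothetical.

```python
# Hedged sketch of the Task 3 transfer element: the sort default is
# ascending, so descending order must be requested explicitly.
# Table and field names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMERS (NAME TEXT, PURCHASES REAL)")
conn.executemany("INSERT INTO CUSTOMERS VALUES (?, ?)",
                 [("Acme Ltd", 1200.5), ("Bolt plc", 300.0), ("Cog Co", 875.0)])

default_order = conn.execute(
    "SELECT NAME FROM CUSTOMERS ORDER BY PURCHASES").fetchall()       # ascending
largest_first = conn.execute(
    "SELECT NAME FROM CUSTOMS ORDER BY PURCHASES DESC".replace(
        "CUSTOMS", "CUSTOMERS")).fetchall()                           # descending

print(default_order[0][0], largest_first[0][0])  # Bolt plc Acme Ltd
```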

8.2.5 Additional observations

At the end of the session, users were asked to compare database and spreadsheet applications. Only one user (Subject 14, Transcript-124) provided an answer which summed up the functionality of the two systems accurately. About half the users mentioned that the spreadsheet could be used for calculations, and the database for selective retrieval.

8.3 Discussion and Conclusions

8.3.1 The effectiveness of the scenario

It was originally planned not to make the example file available until users were asked to perform the retrieval tasks. The first two subjects in this study could not answer any of the first 10 questions and asked if they could look at a file. In the interest of eliciting more than "no", the investigator allowed them to do so, and offered this option to the other users at the beginning of the session. All users loaded the file during the first 5 questions, confirming Payne's (1991b) observation about users' dependency on the display.

The average length of the verbal protocols elicited in this study was 3/4 of a page. Half an hour was scheduled for each session, but most users completed their session in about 20 minutes. The sessions were easy to transcribe and score (this could have been done during the session by an additional observer). By eliciting semantic knowledge as well as observing users working on actual tasks, the scenario provides an opportunity (a) to illuminate the difference between users' knowledge and their competence; and (b) to gain some appreciation of the importance of certain features of the system image for guiding the user through the interaction.

8.3.2 Evidence of users’ models

The most interesting finding from users' comparison of the database and spreadsheet applications pertains to the spreadsheet rather than the database application investigated in this study. Whereas only two users mentioned calculation as an important feature when asked to explain the spreadsheet in the previous study (see Chapter 7), 10 of the 15 users who answered this question mentioned calculation to contrast it with the database. Here, most users described the main functionality of both applications fairly accurately (storage and retrieval of information for the database), rather than talking about surface characteristics. The visual structure of the displays for both applications is very similar (rows and columns), and thus offers little ground for distinguishing between the two. It is very likely that users drew on their users' models of the spreadsheet application when they started using the database, in the vein of an analogy (UC1 → UC2).

The discussion of analogy and metaphor in 4.2.2 (see also Table 10) revealed the importance of identifying and representing differences between source and target models accurately. The similarity of the applications at the surface should create the potential for confusing the two (see also Norman's (1983) observations in 4.1.3); at the same time, it may create a greater need to identify differences between the source and target model, and to represent them in both models. This means that not only the user's model of the new application, but also the model that is the source of the mapping, are refined during the process (UC1 → UC2 ↔ UC1 → UC1*).

Whilst their users' models have become more refined as far as functionality is concerned, half of the users could not identify two of the five data types in the example file, even though they had been introduced to them, and had entered and retrieved all the different types of data in their previous exercises. As in the spreadsheet study, it seems that many users still have a general knowledge and experience (UW) representation of numbers and text, as discussed in 7.2.3. Anything composed of letters, including binary data (YES/NO or TRUE/FALSE), is subsumed under text, and all numbers are just that (not differentiated into integers and decimals). The data type DATE, which is composed of a sequence of numbers, was recognised by most users as a separate data type, but this is consistent with general knowledge. Despite the instruction and practice they had undergone, it seems that users did not refine their model from general knowledge and experience into a user's model (UW → UC).
