Explain Evaluation Paradigm and types of evaluation
1 Answer

Evaluation paradigms


1. "quick and dirty" evaluations

  • A "quick and dirty" evaluation is a common practice in which designers informally get feedback from users or consultants to confirm that their ideas are in line with users' needs and are liked.

  • "Quick and dirty" evaluations can be done at any stage and the emphasis is on fast input rather than carefully documented findings.

  • This approach is often called "quick and dirty" because it is meant to be done in a short space of time.

  • For example, early in design, developers may meet informally with users to get feedback on ideas for a new product. At later stages, similar meetings may occur to try out an idea for an icon or to check whether a graphic is liked.

2. Usability testing

  • Usability testing involves measuring typical users' performance on carefully prepared tasks that are typical of those for which the system was designed.

  • Users' performance is generally measured in terms of number of errors and time to complete the task.

  • As the users perform these tasks, they are observed and recorded on video, and their interactions with the software are logged. This observational data is used to calculate performance times, identify errors, and help explain why the users did what they did (a minimal sketch of analyzing such a log follows this list).

  • User satisfaction questionnaires and interviews are also used to elicit users' opinions.
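
To make the performance measures above concrete, here is a minimal sketch in Python of how a logged interaction sequence might be turned into a completion time and an error count for one participant. The log format and event names are purely hypothetical, not taken from any particular logging tool.

```python
# Minimal sketch (hypothetical log format): each entry is
# (seconds since start of session, event type) for a single participant.
interaction_log = [
    (0.0, "task_start"),
    (4.2, "click"),
    (9.7, "error"),        # e.g. wrong menu item selected
    (15.3, "click"),
    (21.8, "error"),
    (30.5, "task_complete"),
]

# Completion time is the span from task start to task completion;
# the error count is simply the number of "error" events in between.
start = next(t for t, e in interaction_log if e == "task_start")
end = next(t for t, e in interaction_log if e == "task_complete")
errors = sum(1 for _, e in interaction_log if e == "error")

print(f"Completion time: {end - start:.1f} s, errors: {errors}")
# Completion time: 30.5 s, errors: 2
```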

3. Field studies

In product design, field studies can be used to:

  • Help identify opportunities for new technology;

  • Determine requirements for design;

  • Facilitate the introduction of technology; and

  • Evaluate technology.

Qualitative techniques such as interviews, observation, participant observation, and ethnography are used in field studies. In content analysis, for example, the data is classified into content categories, whereas in discourse analysis the use of words and phrases is examined.

4. Predictive evaluation

  • In predictive evaluations, experts apply their knowledge of typical users, often guided by heuristics, to predict usability problems. The key feature of predictive evaluation is that users need not be present, which makes the process quick, relatively inexpensive, and thus attractive to companies; but it has limitations.

  • Usability guidelines were designed primarily for evaluating screen-based products. With the advent of a range of new interactive products, this original set of heuristics has been found insufficient.

Techniques or Types of evaluation


There are many evaluation techniques and they can be categorized in various ways.

1. Observing users

  • Observation techniques help to identify needs leading to new types of products and help to evaluate prototypes. Notes, audio, video, and interaction logs are well-known ways of recording observations, and each has benefits and drawbacks.
  • Drawback: analyzing the data can be difficult, particularly when large quantities of video data are collected or when several different types of data must be integrated to tell the story.

2. Asking users

Asking users what they think of a product (whether it does what they want, whether they like it, whether the aesthetic design appeals, whether they had problems using it, and whether they want to use it again) is an obvious way of getting feedback. Interviews and questionnaires are the main techniques for doing this. The questions asked can be unstructured or tightly structured. They can be asked of a few people or of hundreds. Interview and questionnaire techniques are also being developed for use with email and the web.

3. Asking experts

Software inspections and reviews are long-established techniques for evaluating software code and structure. Experts step through tasks, role-playing typical users, and identify problems. Developers like this approach because it is usually relatively inexpensive and quick to perform compared with laboratory and field evaluations that involve users. In addition, experts frequently suggest solutions to problems.

4. User testing

  • Measuring user performance to compare two or more designs has been the bedrock of usability testing.
  • These tests are usually conducted in controlled settings and involve typical users performing typical, well-defined tasks. Data is collected so that performance can be analyzed. Generally the time taken to complete a task, the number of errors made, and the navigation path through the product are recorded.
  • Means and standard deviations are commonly used to report the results; a minimal sketch of this kind of summary follows this list.
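
As an illustration of how such results might be summarized, the sketch below (Python, with invented timing data) reports the mean and standard deviation of task-completion times for two hypothetical designs.

```python
# Minimal sketch (invented data): summarizing task-completion times with
# means and standard deviations to compare two designs in a user test.
from statistics import mean, stdev

# Completion times in seconds for the same task on two hypothetical designs.
design_a = [48.2, 51.7, 44.9, 60.3, 47.1]
design_b = [39.5, 42.8, 37.2, 45.6, 40.1]

for name, times in (("Design A", design_a), ("Design B", design_b)):
    print(f"{name}: mean = {mean(times):.1f} s, SD = {stdev(times):.1f} s")
```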

5. Modeling users' task performance

There have been various attempts to model human-computer interaction so as to predict the efficiency and problems associated with different designs at an early stage without building elaborate prototypes. These techniques are successful for systems with limited functionality such as telephone systems.
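
One widely cited example of this kind of model is the keystroke-level model, which predicts an expert user's task time by summing standard times for elementary operators such as keystrokes, pointing, and mental preparation. The sketch below is only an illustration: the operator times are approximate figures commonly quoted in the HCI literature, and the task sequence is invented.

```python
# Minimal sketch: a keystroke-level-model-style additive time prediction.
# Operator times are approximate figures quoted in the HCI literature.
OPERATOR_TIMES = {
    "K": 0.2,   # press a key (skilled typist)
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # move hand between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predicted_time(operators):
    """Predict task time by summing the time of each operator in sequence."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, point at a field, home to the keyboard, type 7 digits.
task = ["M", "P", "H"] + ["K"] * 7
print(f"Predicted completion time: {predicted_time(task):.2f} s")  # 4.25 s
```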
