Module 8 - HCI


CHAPTER 8

EVALUATION APPROACHES
8.1 USABILITY TESTING
Usability Testing Leads to the Right Products:

• Usability testing is the practice of testing how easy a design is to use with a
group of representative users.
• It usually involves observing users as they attempt to complete tasks and can be done for different types of designs.
• It is often conducted repeatedly, from early development until a product’s
release.
• Through usability testing, you can find design flaws you might otherwise
overlook.
• When you watch how test users behave while they try to execute tasks, you’ll get
vital insights into how well your design/product works.
• Then, you can leverage these insights to make improvements.
Objectives of a usability test:

1) Determine whether testers can complete tasks successfully and independently.

2) Assess their performance and mental state as they try to complete tasks, to see
how well your design works.

3) See how much users enjoy using it.

4) Identify problems and their severity.

5) Find solutions.
• There are different methods for usability testing. Which one you
choose depends on your product and where you are in your design
process.
Best Practices for Usability Testing

• Usability Testing is an Iterative Process:


• To make usability testing work best, you should:
1. Plan
2. Set user tasks
3. Recruit testers
4. Facilitate/Moderate testing
1) Plan –

a) Define what you want to test. Ask yourself questions about your design/product.
What aspect(s) of it do you want to test? You can form a hypothesis from each
answer. With a clear hypothesis, you'll know the exact aspect you want to test.

b) Decide how to conduct your test – e.g., remotely. Define the scope of what to
test (e.g., navigation) and stick to it throughout the test. When you test aspects
individually, you’ll eventually build a broader view of how well your design works
overall.
2) Set user tasks –

a) Prioritize the tasks that meet your objectives (e.g., completing a checkout). Set no
more than five tasks per participant and allow a 60-minute time frame per session.

b) Clearly define tasks with realistic goals.

c) Create scenarios where users can try to use the design naturally. That means
letting them get to grips with it on their own rather than directing them with
instructions.
3) Recruit testers –

• Know who your users are as a target group.

• Use screening questionnaires (e.g., Google Forms) to find suitable candidates.

• You can advertise and offer incentives.

• You can also find contacts through community groups, etc.

• Even if you test with only five users, you can still reveal around 85% of the core usability issues (see the worked formula below).
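The "5 users find about 85% of problems" figure comes from the commonly cited formula found(n) = 1 - (1 - L)^n, where L is the chance that a single participant exposes any given problem (about 31% in Nielsen and Landauer's data). A minimal sketch of the arithmetic, assuming that published 31% rate, which your own studies may not match:

# Proportion of usability problems uncovered with n test users,
# using found(n) = 1 - (1 - L)^n. L = 0.31 is the average per-user
# discovery rate reported by Nielsen and Landauer; treat it as an
# assumption, not a constant of your own product.
def problems_found(n_users, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> about {problems_found(n):.0%} of problems found")
# With L = 0.31, five users uncover roughly 84-85% of the problems.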
4) Facilitate/Moderate testing –

• Set up testing in a suitable environment. Observe and interview users. Notice issues. See if users
fail to see things, go in the wrong direction or misinterpret rules. When you record usability
sessions, you can more easily count the number of times users become confused. Ask users to
think aloud and tell you how they feel as they go through the test.

• From this, you can check whether the designer's mental model is accurate: does what you think
users can do with your design match what these test users show?

• If you choose remote testing, you can moderate via video-call tools such as Google Hangouts, or
run unmoderated tests. Dedicated remote-testing software supports both moderated and
unmoderated sessions and adds tools such as heatmaps.
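As a rough illustration of how recorded, think-aloud sessions can be tallied, here is a minimal sketch; the participants, task names and event categories are invented for the example:

from collections import Counter

# Invented observation log: one record per notable event seen in a
# recorded session (confusion, wrong path taken, task completed, ...).
observations = [
    {"participant": "P1", "task": "checkout", "event": "confused"},
    {"participant": "P1", "task": "checkout", "event": "wrong_path"},
    {"participant": "P2", "task": "checkout", "event": "confused"},
    {"participant": "P3", "task": "checkout", "event": "completed"},
    {"participant": "P3", "task": "search", "event": "completed"},
]

# Count how often each kind of event occurred across all sessions,
# e.g. how many times participants became confused.
counts = Counter(entry["event"] for entry in observations)
for event, n in counts.most_common():
    print(f"{event}: {n}")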
8.2 FIELD STUDIES
• Field studies are research activities that take place in the user’s context rather than in
your office or lab.

• The range of possible field-study methods and activities is very wide.

• Field studies also vary a lot in terms of how the researcher interacts (or doesn’t) with
participants.

• Some field studies are purely observational (the researcher is a “fly on the wall”), some
are interviews in which the questions evolve as understanding increases, and some
involve prototype feature exploration or demonstration of pain points in existing
systems.
Examples of field studies include:

• Flexible user tests in the field, which combine usability testing with adaptive
interviews. Interviewing people about their tasks and challenges gives you very
rich information. In an adaptive interview, you refine the questions you ask as you
learn.

• Customer visits can help you better understand usability issues that arise in
particular industry or business contexts or those that appear at a certain scale.
• Direct observation is useful for conducting design research into user processes,
for instance, to help create natural task flows for subsequent paper prototypes.

• Direct observation is also great for learning user vocabulary, understanding
businesses' interaction with customers, and discovering common workarounds,
for example by listening in on support calls, watching people moving through
amusement parks, or observing sales staff and customers in stores.
• Ethnographic research situates you in the users’ context as a member of the
group. Group research allows you to gain insight into mental models and social
situations that can help products and services fit into people’s lives. This type of
research is particularly helpful when your target audience lives in a culture
different from yours.

• Contextual inquiry is a method that structures and combines many of these field-study activities.
8.3 ANALYTICAL EVALUATION
• Analytical evaluations differ from empirical evaluations in that analytical
evaluations do not include user observations.

• Reviewers, most often experts, rely on data and quantitative criteria when
conducting evaluations.

• Internal and external financial auditors, prototype developers and business process analysts all conduct analytic evaluations.

• Available analytic evaluation methods focus on determining how close data values come to benchmark parameters.
Goals and Objectives:
• Regardless of the method used, the goal of an analytical evaluation is to establish
relationships between actual and benchmark data to determine whether
variations exist.

• For example, financial auditors use analytic evaluation methods during the
planning stages of an audit.

• The objectives are to identify variations in relationships, such as unusual transactions, ratios and trends that indicate financial data requires greater scrutiny, a longer audit time frame and procedures that are more detailed.
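At its core this is a comparison of actual values against benchmark values, flagging anything that deviates by more than an agreed tolerance. A minimal sketch of that check; the metric names, figures and 10% tolerance are invented for illustration:

# Compare actual figures against benchmarks and flag large deviations.
benchmarks = {"gross_margin": 0.42, "days_receivable": 45.0, "expense_ratio": 0.18}
actuals = {"gross_margin": 0.40, "days_receivable": 61.0, "expense_ratio": 0.19}
TOLERANCE = 0.10  # flag anything more than 10% away from the benchmark

for metric, expected in benchmarks.items():
    actual = actuals[metric]
    deviation = abs(actual - expected) / expected
    verdict = "needs greater scrutiny" if deviation > TOLERANCE else "within tolerance"
    print(f"{metric}: actual {actual}, benchmark {expected}, "
          f"deviation {deviation:.1%} -> {verdict}")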
Methods for Analytical Evaluation
1. Cognitive Walkthrough Method
2. Heuristic Evaluation
3. Point-Factor Method
Cognitive Walkthrough Method:
• Software developers commonly use cognitive walkthrough evaluations in the
design phase of development.
• The goal is to identify strengths and weaknesses in a prototype design and to see
how users will understand it.
• Data sources include a user interface mock-up, a user profile that assumes a
specific knowledge level, task lists and action sequence diagrams.
• A cognitive walkthrough starts with analysis of the steps and actions required to
accomplish a task, and system responses to user actions.
• Evaluators, typically designers and developers, then walk through the steps as a
group, gathering usability data along the way.
• Analysis determines whether tasks or actions require a redesign.
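One common way to run the walkthrough is to ask a fixed set of questions at every step of the action sequence and treat each "no" answer as a potential redesign item. A minimal sketch of that bookkeeping; the task steps, questions and answers are hypothetical:

# Hypothetical walkthrough record: for each step in the action
# sequence, evaluators answer the standard walkthrough questions.
walkthrough = {
    "Open the settings menu": {
        "Will users try to achieve the right effect?": "yes",
        "Will users notice the correct action is available?": "yes",
        "Will users see progress after the action?": "yes",
    },
    "Tap 'Change password'": {
        "Will users try to achieve the right effect?": "yes",
        "Will users notice the correct action is available?": "no",
        "Will users see progress after the action?": "yes",
    },
}

# Every "no" marks a step that may require a redesign.
for step, answers in walkthrough.items():
    problems = [question for question, answer in answers.items() if answer == "no"]
    if problems:
        print(f"Redesign candidate: {step}")
        for question in problems:
            print(f"  failed check: {question}")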
Heuristic Evaluation:
• Unlike the team approach used in a cognitive walkthrough, a heuristic evaluation
is a series of independent evaluations.
• It is useful in analyzing operational processes, developing standard operating
procedures, and writing an instruction manual.
• Data sources include established guidelines and performance measurements.
• During the evaluation, two or three analysts compare current procedures against
pre-established guidelines or principles, with each looking for and ranking a
specific type of issue, such as unsafe, erroneous, or duplicate/redundant actions.
• A post-evaluation meeting and analysis determines which instructions require
modification.
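Because the evaluations are carried out independently, the post-evaluation meeting usually merges the findings and ranks them by severity. A minimal sketch of that aggregation; the evaluators, issues and 0-4 severity scale are invented for illustration:

from collections import defaultdict

# Invented independent findings: (evaluator, issue, severity 0-4).
findings = [
    ("analyst_A", "duplicate data entry on intake form", 3),
    ("analyst_B", "duplicate data entry on intake form", 4),
    ("analyst_A", "unsafe default on the delete action", 4),
    ("analyst_C", "ambiguous wording in error messages", 2),
]

# Group ratings by issue so the meeting can discuss them together.
by_issue = defaultdict(list)
for _evaluator, issue, severity in findings:
    by_issue[issue].append(severity)

# Rank by mean severity to decide which instructions to modify first.
ranked = sorted(by_issue.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
for issue, ratings in ranked:
    mean = sum(ratings) / len(ratings)
    print(f"{mean:.1f}  {issue}  (rated by {len(ratings)} evaluator(s))")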
Point-Factor Method:
• Point-factor evaluations are common in job evaluations.
• Goals typically focus on ranking different jobs within a company and establishing
a pay-grade or structure.
• Data sources include role profiles, job descriptions and a numerical ranking
system.
• In a point-factor evaluation, reviewers, who most often are human resources
staff members, identify and break the key elements of each job into separate
components.
• Evaluators then compare these factors to role profiles and allocate points
according to the skills, expertise or level of difficulty of each specific job.
• Most often, the more demanding a job is, the higher the point value and the
higher its pay grade.
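The underlying arithmetic is a straightforward sum: allocate points per factor, total them, and map the total to a pay grade. A minimal sketch; the jobs, factor scores and grade bands are invented for illustration:

# Invented point allocations per factor for two jobs (each factor scored 1-10).
jobs = {
    "support technician": {"skills": 5, "expertise": 4, "difficulty": 4},
    "systems architect": {"skills": 9, "expertise": 9, "difficulty": 8},
}

def pay_grade(total_points):
    # Illustrative grade bands; real schemes define their own cut-offs.
    if total_points >= 24:
        return "Grade C (highest)"
    if total_points >= 15:
        return "Grade B"
    return "Grade A"

for job, factors in jobs.items():
    total = sum(factors.values())
    print(f"{job}: {total} points -> {pay_grade(total)}")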
