-
CONCEPTUAL DESIGN (MODEL)
-
Interface Metaphor (physical entity)
- Conceptualizing what users are doing
- A conceptual model instantiated at the interface
- Visualizing an operation
-
Choosing a metaphor
- Understand functionality
- Identify potential problem areas
- Generate metaphors
- Evaluate suitability of metaphors
-
Advantages
- makes it easier to learn new systems
- helps users better understand the underlying conceptual model
- supports a diverse set of users
- can strengthen innovation
-
Disadvantages
- constrains the conceptual design to the metaphor
- limits users to understanding the system only in terms of the metaphor
- a poor design may be carried over as the metaphor
- relying on metaphors might hinder designers from coming up with new conceptual models
- may sometimes break conventional and cultural rules
-
Interaction Types
-
Instructing
-
tell the system what to do
- e.g. typing in commands
- e.g. selecting from menus
-
Conversing
-
have a dialog with the system
- menu-based dialogues
- text-based dialogues
- virtual agents
-
Manipulating
- manipulating objects (e.g. moving, selecting, opening, closing them)
- builds on users' experience with real-world objects (affordances)
-
Exploring
- moving through virtual or physical environments
- e.g. VR
-
Responding
- the system takes the initiative to alert, describe, or show the user something of interest
- triggered by relevance to time or context
-
Interface Types
- Types of input and output methods
- Interface used by the users to support the interaction
- Choose the most appropriate or a combination
-
ENVISIONMENT
- make ideas visible and externalize thoughts
- represent design work
- occurs throughout development
-
Different representations
-
Sketches
- Ideas and thoughts can be quickly visualized
-
Advantages
- Quick, timely, inexpensive, disposable and plentiful
- Allow quick testing of new ideas during brainstorming
- Reduce attachment to a design
What to include depends on the purpose
- Basic elements: people, objects
- Context, user view, snapshot
-
Storyboards
- Sequence of actions or events
-
User Journey
- 3-7 steps
- each picture labelled with one short description
- context of interaction is visible
- appropriate level of detail
-
Wireframes (e.g. Wireflow)
- Single screen or interaction page
- Plan the layout and interaction patterns
- Different levels of detail
-
Prototypes
-
Low-fidelity
- uses a medium unlike that of the final product
- capture early design thinking
- quick and easy to produce
-
High-fidelity
- similar in look and feel to the anticipated final product
- detailed evaluation of the main design elements
-
Paper prototypes
- Advantages
- produced quickly
- enables non-technical people to interact easily with the design team
- flexible - can be 'redesigned' on the spot
-
Faking interaction
- Wizard of Oz (lo-fi)
- a human produces the system's responses to the user's input, rather than the software itself
- Video prototype
- how the prototype is 'used' in real-life
- Early stage - fake interaction
- Later stage - communicate what product looks like and can do
- Focus on information to be conveyed
- limited by imagination, time and materials
-
Compromises
- Horizontal - wide range of functions but little detail
- Vertical - lots of detail for only a few functions
-
Computer Prototyping tools
-
Fidelity of Prototype
- Level of details and functionality built into a prototype
-
Low-fidelity
- Limited functionality and interactivity
- Examples: Paper prototype
-
High-fidelity
- Close resemblance to the final design
- High functionality and interactivity
- Examples: Digital prototypes
-
What are prototyping tools?
-
Tools developed for the sole purpose of prototyping
- Code-based
- Code-free
-
Software Prototyping Tools
-
Certain degree of coding required
-
Web UIs
- HTML5 with a lot of libraries
- Three.js
-
User interface builders
- Visual Studio, XCode, Visual Basic
-
Finished design can be used for final implementation
-
Processing.org
- A programming IDE for prototyping
-
Supports many libraries
- Video, Audio, Network, Animation, Vision, ML
- Based on Java; a Python Mode add-on also exists (see the example sketch below)
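- Example: a minimal sketch, assuming Processing's Python Mode add-on is installed (the core IDE is Java-based); it simply draws a circle that follows the mouse:

    # Runs inside the Processing IDE with Python Mode installed.
    def setup():
        size(400, 400)       # window size in pixels

    def draw():
        background(255)      # clear to white every frame
        fill(100, 150, 255)
        ellipse(mouseX, mouseY, 40, 40)  # circle follows the cursor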
-
Comparison
-
Designing Tools
- Tools allow you to design within them or import from other software
- Different tools, different range of fidelity
-
Software suitable for creating wireframes and mockups
- Balsamiq
- AdobeXD
-
Linking to create Clickthroughs
- Prototype that links multiple screens together via hotspots
- Hotspot: an area that the user can interact with
-
Moving from paper to digital prototype
- Upload existing images
- Add hotspots
-
Sharing the prototype
-
Purpose
-
Collaboration
- Add team members to project (cloud-based tools)
- Edit and comment on design
-
Presentation and testing
- User participants and stakeholders
- View and use prototype
-
Different types of shareable/export formats
- Web Link
- PDF file with hyperlinks
- View in iOS/Android phones
- HTML files
-
Choosing a Prototype Tool
-
Fidelity
- Layout and navigation design
- Visual design and micro-interactions
-
Ease of collaboration (Teamwork support)
- Ease of picking up the tool
- Number of people on the same project
- Platform - Mac/Windows/Cloud
-
Integration with workflow
- Import and export previous work
- Asset libraries
-
Costs
- Free/Trial
- Subscription
-
Physical Prototypes
-
What are physical prototypes?
-
Mostly focused on electronic products
- Wearable technology
- Tangible UI
-
Same principle - test out ideas quickly
-
Resources to support development
-
Physical computing kits
- Build and code prototypes and devices using electronics
- Arduino
- Open-source electronics platform based on easy-to-use hardware and software
- The toolkit comprises two parts
- Arduino board
- Arduino IDE - program sketch to board
- Sketch - Unit of code
- BBC micro:bit
- Similar to Arduino
- Connect external components via the edge connector
- Used to teach programming in schools - Scratch, Python (see the MicroPython sketch below)
- MaKey MaKey
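- Example: a minimal MicroPython sketch for the BBC micro:bit (one of the Python options mentioned above); it shows a happy face while button A is held:

    # Flash to a BBC micro:bit using a MicroPython editor.
    from microbit import display, button_a, Image

    while True:
        if button_a.is_pressed():
            display.show(Image.HAPPY)
        else:
            display.clear()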
-
Rapid fabrication
- Computer aided production tools
- 3D printers (additive manufacturing)
- Laser cutters (subtractive manufacturing)
- Helps to quickly fabricate high quality physical prototypes
- Easy to modify and change
-
Introducing Evaluation
-
Ethics
- Inform participants about their rights during the study
-
Protect participants during study
- Physical or emotional endangerment
- Privacy of participants
- Ethics approval must be obtained before study is conducted.
-
University Human Ethics Policy
-
Ethics approval helps to
- protect the welfare, rights, dignity and safety of research participants
- protect researchers' rights to conduct legitimate investigation
- protect the University's reputation for research conducted and sponsored by it.
- Minimize the potential for claims of negligence made against individual researchers and the University
- Human research
- Research conducted with or about people, their biological materials or information.
- It covers activities including:
- taking part in surveys, interviews, or focus groups
- undergoing psychological, physiological or medical testing or treatment
- being observed by researchers
- accessing personal documents or other materials
- collection and use of biological materials
- access to personal information as part of an existing published or unpublished source.
- Before the Session
- Don't waste the user's time
- make sure experiment is designed well
- be prepared
- Make users feel comfortable
- communicate that only the system is being tested, not the user
- indicate that the software may have problems
- inform that they can stop at any time
- Maintain privacy
- tell the user that results will be anonymized (if applicable)
- Inform the user
- explain what is being recorded (video, audio, data logging, etc.)
- answer user's questions (but avoid bias)
- Do not coerce users
- obtain informed consent
- During the Session
- Don't waste the user's time
- do not ask to perform unnecessary tasks
- Make users feel comfortable
- give early success experience (pre-trials)
- keep a relaxed atmosphere
- sufficient breaks (e.g. coffee breaks)
- hand out test tasks one at a time
- do not show displeasure
- avoid disruptions
- stop the test if the participant shows discomfort
- Maintain privacy
- external people should not be present
- After the Session
- Make users feel comfortable
- thank the user and inform they have helped
- Provide additional information if necessary
- answer any other remaining questions user had
- e.g. something that could have led to a bias
- Maintain privacy
- report the data without compromising privacy
- only share audio-visual data with express permission
- store all the data in a secure location
- university has a dedicated research data storage
- Research Computing Optimized Storage (RCOS)
-
Main steps in Evaluation
- 1. Establish aims of evaluation
- 2. Select evaluation methods. Good to have combination of participant (with users) and non-participant methods (without users).
- 3. Carry out non-participant methods first.
- 4. Use results from non-participant methods to plan participant testing
- 5. Plan session, recruit participants and setup equipment
- 6. Carry out evaluation
- 7. Analyze results, document and report
-
Selecting and Combining Methods
- Use a combination of methods to obtain richer understanding of users and product
- Controlled - test hypothesis about specific features
- Uncontrolled - insight to people's experience of interacting with technology in the context of daily life
-
Examples
- Combination of usability testing in labs combined with field studies
- Cognitive walkthrough to test run the prototype before actual usability testing in the lab
-
When to Evaluate?
-
During Iterative Design - Check whether
- the design matches the requirements
- there are problems with the design
-
Before deployment - For Acceptance Testing
- Does the system meet expected performance
-
Continuous Evaluation after Deploying
- "Performance beta"
-
Continuous evaluation
- In the wild, bug reports, field studies
-
Where to Evaluate?
-
Usability lab
-
Testing room constructed for usability testing
- Instrumented
- camera, microphones, data recording, etc.
- Separate observation room
- connected by one-way mirror
-
Benefits
- Controlled situation
- Ideal to study one precise aspect
- Many equipment available
- Only option if real location is dangerous or remote
-
Problems
- Does not represent a natural situation
- Hard to generalize results
- Research lab
-
Naturalistic setting
-
Observation occurs in realistic setting
- Real life
- Workplace / Home
- In-situ
-
Benefits
- More realistic (e.g. external effects)
- Situation and behavior more natural
- Better suited for long-term studies
- Well-suited for user experience studies
-
Problems
- Hard to arrange and run
- Time consuming
- Task is difficult to control
- Environment is difficult to control (e.g. distractions)
- Remote study
-
What to Evaluate?
-
Conceptual model
- Focus is on (standard) usability issues
- product is close to final / feature rich
- comparative results
-
Early and subsequent prototypes of a new system
- get early feedback on a design
- Low-fidelity prototype
- can fix design issues in advance
-
Final product
- How the product works for new markets / user groups
- Existing product, already evaluated for one market
-
Why Evaluate?
-
To judge system features / functionality
- Does it facilitate users' tasks and match their requirements?
- Does it offer the right features?
-
To judge effects on users
- How easy is the system to learn and use?
- How do users feel about the system?
-
To discover unforeseen problems
- What unexpected / confusing situations come up?
-
To compare your solution against competitors
- Important for marketing / sales department
-
EVALUATION WITHOUT USERS
-
Inspections
-
Heuristic Evaluation
- a review guided by a set of heuristics
- A small set of evaluators examines the interface and judges its compliance with recognized usability principles.
-
Original heuristics - Nielsen's ten usability heuristics, derived empirically from an analysis of 249 usability problems.
- Number of Evaluators
- On average, five evaluators identify 75-80 percent of usability problems
- Choice of Heuristics
- Should depend on goals of the evaluations
- It is suggested to use category-specific heuristics that apply to a specific class of product as a supplement to the general heuristics.
- Can tailor original heuristics with other design guidelines, market research and requirements documents for this purpose.
- How to conduct a Heuristic Evaluation
- Briefing session to tell experts what to do
- Evaluation period of 1-2 hours in which
- Each expert works separately
- Take one pass to get a feel for the product
- Take a second pass to focus on specific features
- Debriefing session in which experts work together to prioritize problems.
-
Benefits
- Few ethical and practical issues to consider because users not involved
- Best experts have knowledge of application domain and users
-
Problems
- Can be difficult and expensive to find experts
- Many trivial problems and false alarms are often identified
- Experts have biases
-
Cognitive Walkthrough
- Involves stepping through a pre-planned scenario, noting potential problems
- Focus on ease of learning
- Designer presents an aspect of the design and usage scenarios
- Expert is told the assumptions about user population, context of use, task details.
- One or more experts walk through the design prototype with the scenario
-
How to conduct a Cognitive Walkthrough
- UX researchers walk through the action sequences for each task.
- As they do this, answer the following questions:
- Will the correct action be sufficiently evident to the user?
- Will the user notice that the correct action is available?
- Will the user associate and interpret the response from the action correctly?
- Record problems.
-
Benefits
- Can be done without users
- Considers users' task
- Quick and inexpensive to apply
-
Problems
- Limited by skills of the evaluator
- Labor intensive - answering and discussing questions may take a long time.
-
Analytics
-
Web Analytics
-
A form of interaction logging that analyzes users' activities on a website (a minimal log-analysis sketch follows this list)
- Total number of people
- Length of stay
- Which content/pages are visited
- Outcomes can be used to improve their design
- When designs don't meet users' needs, users will not return to the site (one-time users)
- Example - SparkPlus
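- Example: a minimal Python sketch of the kind of summary web analytics produce, run over a hypothetical visit log (the file name and the visitor/page/seconds columns are assumptions for illustration):

    import csv
    from collections import Counter

    def summarize(log_path="visits.csv"):
        """Summarize a hypothetical visit log with columns: visitor, page, seconds."""
        visitors, page_hits, total_seconds, rows = set(), Counter(), 0, 0
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                visitors.add(row["visitor"])
                page_hits[row["page"]] += 1
                total_seconds += int(row["seconds"])
                rows += 1
        print("Total visitors:", len(visitors))
        print("Average time per page view (s):", total_seconds / rows if rows else 0)
        print("Most visited pages:", page_hits.most_common(3))

    # summarize()  # run against a real log file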
-
Learning Analytics
- Web analytics applied to field of education
- Learners' activity in massive open online courses (MOOCs) and Open Education Resources (OERs).
-
A/B Testing
- A large-scale experiment
- Offers another way to evaluate a website, application, or app running on a mobile device
- Often used for evaluating changes in design on social media applications
- Compares how two groups of users perform on two versions of a design (see the sketch below)
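- Example: a minimal Python sketch of how an A/B comparison might be summarized - two user groups see versions A and B, and a hand-computed two-proportion z-test indicates whether the difference in conversion rate is likely more than chance (the counts are invented for illustration):

    import math

    def two_proportion_z(successes_a, n_a, successes_b, n_b):
        """z statistic for the difference between two conversion rates."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se

    # Hypothetical data: version A vs. version B of a sign-up page.
    z = two_proportion_z(successes_a=120, n_a=1000, successes_b=150, n_b=1000)
    print(f"z = {z:.2f}")  # |z| > 1.96 suggests a significant difference at the 5% level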
-
Model
- Predictive models evaluate a system without users being present.
-
Fitts' Law
- Time taken to hit a screen target depends on the distance to the target and the size of the target (see the sketch below).
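- Example: a minimal Python sketch of the Shannon formulation MT = a + b * log2(D/W + 1); the constants a and b are illustrative placeholders, since in practice they are fitted empirically to pointing data:

    import math

    def fitts_movement_time(distance, width, a=0.1, b=0.15):
        """Predicted time (s) to hit a target of a given width at a given distance.
        a and b are placeholder constants; real values come from regression
        on measured pointing data."""
        index_of_difficulty = math.log2(distance / width + 1)
        return a + b * index_of_difficulty

    # Larger, closer targets are predicted to be faster to hit.
    print(fitts_movement_time(distance=800, width=20))   # small, far target
    print(fitts_movement_time(distance=200, width=100))  # large, near target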
-
EVALUATION WITH USERS (1) - Usability Testing
-
Usability Testing
- Involves recording performance of typical users doing typical tasks
- Users are observed and timed
- Data is recorded on video, and key presses are logged
- User satisfaction is evaluated using questionnaires and interviews
-
Team roles during testing
- All members are encouraged to participate in the evaluations
-
Facilitator
- Person in the lab together with participant
- Responsibilities
- Plan and execute session
- Set up lab for session
- Responsible for putting participant at ease during session
- Must have people skills
-
Prototype executor
- Person to 'execute' the prototype and move it through its paces as users interact
- Only if you are using a low-fidelity prototype
- Must have thorough technical knowledge of how design works
- Must keep a poker face and not speak a single word during the session.
-
Quantitative data Collector
- Records quantitative measures such as
- Time to complete task
- Number and type of errors per task
- Number of errors per unit of time
- Number of navigations to online help or manuals
- Record into a spreadsheet directly (see the logging sketch below)
- Tools - stopwatch and counter.
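- Example: a minimal Python sketch of recording task times and error counts straight into a CSV 'spreadsheet'; the file and column names are illustrative:

    import csv
    import time

    def log_task(writer, participant, task):
        """Time one task and prompt the observer for the error count."""
        start = time.perf_counter()
        input(f"Participant {participant}, task '{task}' - press Enter when done...")
        seconds = time.perf_counter() - start
        errors = int(input("Number of errors observed: "))
        writer.writerow([participant, task, round(seconds, 1), errors])

    with open("usability_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["participant", "task", "seconds", "errors"])
        log_task(writer, participant="P01", task="Book a ticket")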
-
Qualitative data Collector
- Observation notes
- Critical incidents
- Think aloud comments
-
Supporting Actors
- Optional
- If part of the setting or task requires participant to interact with someone.
- Manage the props needed in the evaluation (other than the prototype execution).
- Example: Call client on the telephone.
-
Tasks during session
- Representative, frequent and critical tasks that apply to the key work role and user class represented by each participant.
- Prepare corresponding task description and UX target metrics to guide data collection and compare observed results.
- Test conditions should be the same for every participant.
-
Task description
- what to do, no hints about how to do
-
Recruiting Participants
- Find representative users (usually outside your team and outside the project organization)
-
Recruitment methods and screening
- People around you - spouses, children, friends.
- Post ads in public spaces
- Announcements at meetings of user groups and professional organizations if the group matches your user class needs
- Temporary employment agencies.
-
Number of participants
- Schedule for testing
- Availability of participants
- Costs of running tests
- Famous rule of thumb
- 3 - 5 participants
- typically 5-10 participants
- OR: Test until no new insights are gained
-
Planning the Session
-
If it is in the lab, configure the lab to your needs.
- Computer / Device
- Placement of participants, facilitator and executor
- Set up hardware, e.g. eye-trackers, timers, counters etc.
-
Determine length of session for one participant
- Typical length: 30 mins to 120 mins
- Strategies to manage long sessions
- Warn participants in advance
- Schedule breaks between tasks - exercise, toilet break, refreshments.
- Prepare food and water in advance to keep participant at ease
-
Prepare necessary paperwork
- Informed consent (important)
- Formal and signed permission given to UX professional by participants to use data gathered within stipulated limits.
- Preparation for informed consent begins with institutional review board (IRB) / Ethics Approval Committee
- Evaluators / Project manager to prepare application.
- USYD Ethics Application
- Participant Information Statement (PIS)
- Participant Consent Forms (PCF)
- advertisements, letters and emails seeking participants.
- interview or focus group questions / themes
- letters of support or permission from organizations assisting in the research in any way
- external research declarations (for researchers not affiliated with the University)
- Participants must read both PIS and PCF before session
- Allow participants / guardians to ask questions
- Participants / guardian must sign PCF before session
- Prepare two copies for the session
- One for participant to keep
- One for submission
- Other data collection forms
- Non-disclosure Agreements (NDAs)
- If required by developer or customer organizations to protect intellectual property (IP) contained in design.
- Must be included during signing of PCF.
- Questionnaires
- If your evaluation plan includes administration of one.
- SUS - System Usability Scale (scoring sketch below)
- Usefulness, Satisfaction and Ease of Use (USE) questionnaire
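- Example: a minimal Python sketch of standard SUS scoring (ten items rated 1-5; odd items contribute score-1, even items contribute 5-score, and the sum is multiplied by 2.5 to give 0-100); the sample responses are made up:

    def sus_score(responses):
        """Compute the System Usability Scale score (0-100) from ten
        responses on a 1-5 scale, in questionnaire order."""
        assert len(responses) == 10
        total = 0
        for i, r in enumerate(responses, start=1):
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5

    # Hypothetical responses from one participant to the ten SUS items.
    print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0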
-
On the Big Day
-
Before Session
- invite participant into the lab
- offer refreshments and paperwork
- explain details of study and check for questions
- participant to complete requested forms
- [Optional] interview participant to check responses on questionnaire
-
During Session
- hand out task (one at a time)
- Encourage participant to think aloud
- Describe their actions and why
- Stop test if participant is in distress
- To help or not to help participant during the task
- Depends on purpose of test
- Guide users if questions are asked
-
After Session
- post-session probes (follow-up questions), if any
- debrief participant - answer remaining questions that the participant has and what you will do next
- thank the participant and give token of appreciation for their time
- prepare for the next participant.
-
EVALUATION WITH USERS (2) - Experiments
-
Usability Testing vs. Experiments
- Usability testing is applied experimentation
- Developers check that the system is usable by the intended user population by collecting data about participants' performance on prescribed tasks
- Experiments test hypotheses to discover new knowledge by investigating the relationship between two or more variables.
-
Experiments
-
Basics
- test hypothesis
- predict the relationship between two or more variables
- independent variable is manipulated by the researcher
- dependent variable influenced by the independent variable
- typical experimental designs have one or two independent variables
- validated statistically and replicable
-
Designs
- When dealing with human subjects, need to determine which participants to involve in which conditions of an experiment.
- Experience in participating in one condition will affect performance of those participants if asked to participate in another condition.
- Example hypothesis: using multimedia materials in class will improve students' learning.
-
Different-participants design (between-subjects design)
- Participants are allocated randomly to the experimental conditions, so that different participants perform in different conditions.
- Need many participants to minimize individual differences (in experience and expertise); perform pre-testing to identify participants that differ strongly.
-
Same-participants design (within-subjects design)
- All participants perform in all conditions
- Must perform counter-balancing to reduce ordering effects (see the allocation sketch after this list).
- Ordering effects - learning from a previous task affects performance on a subsequent task.
-
Matched-participants design (pair-wise design)
- Participants are matched in pairs based on certain user characteristics such as expertise and gender.
- Each pair is then randomly allocated to the experimental conditions.
- Matching may not consider other important variables that could influence the results.
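- Example: a minimal Python sketch of the allocation ideas above - random assignment for a between-subjects design and simple counterbalancing of condition order for a within-subjects design (participant IDs and condition names are invented):

    import random
    from itertools import permutations

    participants = [f"P{i:02d}" for i in range(1, 13)]
    conditions = ["multimedia", "text-only"]

    # Between-subjects: each participant is randomly assigned to one condition.
    random.shuffle(participants)
    half = len(participants) // 2
    between = {"multimedia": participants[:half], "text-only": participants[half:]}

    # Within-subjects: every participant does all conditions; cycle through the
    # possible orders to counterbalance ordering effects.
    orders = list(permutations(conditions))
    within = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

    print(between)
    print(within)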
-
How to Experiments
- 1. Determine goals, explore the questions then formulate hypothesis
- 2. Design experiment, define experimental variables
- 3. Choose subjects
- 4. Run pilot experiment
- 5. Iteratively improve experiment design
- 6. Run experiment
- 7. Interpret results to accept or reject hypothesis
-
Crowdsourcing
-
The Internet can be used to recruit participants and run large-scale experiments
- Amazon Mechanical Turk (MTurk)
- Turkers - workers who complete human intelligence tasks (HITs) for payment
-
EVALUATION WITH USERS (3) - Field Study
-
Field Studies
- Done in natural settings
- "In the wild" is a term for prototypes being used freely in natural settings
- Seek to understand what users do naturally and how technology impacts them
-
Field studies are used in product design to
- identify opportunities for new technology
- determine design requirements
- design how best to introduce new technology
- evaluate technology in use
- Range from a few minutes to longitudinal studies (few years)
-
Data collection that is obtrusive but informative
- Self-reports of problems encountered as they occur
- Interval logging triggered by smartphone notifications
- Logging software - monitor frequency / patterns of daily activities
-
Conundrums with Field Studies
-
Informing participants that they are studied
- Knowledge of study would make people conscious about how they behave.
- Without informing people of their participation, how do you get their consent for participation?
-
Privacy of participants
- Studies done in people's homes will always be intrusive
- Agreement between participant and researcher on the activities that can or cannot be recorded.
-
What to do with the prototype?
- Event of breakdown
- Security arrangements if deployed in public spaces
-
Example: pain monitoring
- requires special permission and raises privacy issues
- Expert critiques - experts use their knowledge of users and technology to review software usability; critiques can be formal or informal
- Interaction logging - a variety of user actions can be recorded automatically by software (key presses, time spent searching a web page, use of help systems); unobtrusive provided the system's performance is not affected; large volumes can be logged automatically, then explored and analyzed; ethical issue - observing users without their knowledge