Table of usability levels
Start of the table by Merja, Christoph, and Jan
This table is a graphical presentation that helps organise and focus the purposes and methods of different kinds of usability testing. The three levels form a continuum; they are not mutually exclusive, and the methods mentioned in them are often mixed (mixing methods is actually preferred, since it gives better means to take into account the multifaceted aspects of users’ technology-mediated work), but each level has a different emphasis.
The naming of the levels does not imply that we follow Activity Theory thoroughly, but some ideas are taken from AT approaches to interaction design, CSCL, and CSCW, since these approaches take better account of the embedded and embodied nature of all human activity, whether it is mediated by computers or by any other artefacts (and signs). The table below is intended to help define the method selection for usability tests and the division of work between pedagogical and professional partners in designing and executing the tests. However, the usability group (T 2.4) will be available to provide help, support, and design guidelines for the tests if needed.
In brief, we consider this necessary background information for grasping what is going on and why we (the T 2.4 usability group) will ask you to follow certain procedures in relation to the usability of the KP-Lab tools.
A figure of the levels is given here, before the more detailed description:
Figure 1: Rough visualisation of the levels
Ideas for the development of the table:
- I think it is a good idea to separate the column. I would suggest a different wording: ‘Research/evaluation approach’ for (e.g. heuristic evaluation, ethnographic research, design experiment) and ‘Data-collection methods’ for what you call techniques. It comes closer to the terminology used by the pedagogical partners. (Crina)
- The column “General methods used“ could be divided into research/evaluation methods (e.g. heuristic evaluation, ethnographic research, design experiment) on the one hand and data-collection techniques (such as questionnaire, observation, log-file analysis, interviews) on the other hand, as data-collection techniques can usually be used on all three levels.
- Alternatively, we could list after the table those methods that can be used at all three levels, i.e., questionnaire, observation, log-file analysis, interviews. This could make the table easier to read.
- The naming of the levels is still open. The ones that are there now are just what were suggested in the last virtual meeting. Please include your suggestions for more descriptive naming of the levels.
- Your ideas here or directly into the table…
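Of the data-collection techniques mentioned above, log-file analysis is the one that is typically automated. As a minimal sketch of what such an analysis could look like, the snippet below counts usage events per tool from a log excerpt; the log format, tool names, and event names are assumptions for illustration, since the actual KP-Lab log format is not specified here.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical log excerpt: timestamp, user, tool, event.
# The real KP-Lab tools' log format may differ; this is only an assumed shape.
SAMPLE_LOG = """\
2007-03-01T09:15:00,userA,SharedSpace,open_document
2007-03-01T09:16:30,userB,SharedSpace,add_comment
2007-03-01T09:17:10,userA,SharedSpace,edit_document
2007-03-01T09:20:05,userB,Chat,send_message
"""

def count_events(log_text):
    """Count how often each (tool, event) pair occurs in the log."""
    counts = Counter()
    for row in csv.reader(StringIO(log_text)):
        _timestamp, _user, tool, event = row
        counts[(tool, event)] += 1
    return counts

if __name__ == "__main__":
    # Print a simple frequency summary, e.g. for a usability report.
    for (tool, event), n in sorted(count_events(SAMPLE_LOG).items()):
        print(f"{tool}: {event} x{n}")
```

Counts like these only show what users did, not why; in practice they would be combined with interviews or observation, in line with the mixed-methods emphasis above.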
|Level||General purpose||Characteristics||General methods used||Partner involvement|
|Activity design||The aims at this level are: - to find out what the utility of the tools is in an embedded, authentic, motive-oriented activity in the field; - to find out to what extent and how the tools provide support for knowledge practices. Possible sub-aims: to find out the tools’ ability to provide means for knowledge practices or for a change of knowledge practices. (Comment: this does not mean that knowledge practices are necessarily designed; emergent knowledge practices are also possible.)||Broad, long-term scale, i.e., tries to take into account the historicity of the users’ environment (context), i.e., the social background and institutional/organisational setting, and to take into account the groups’/teams’ existing and emerging working and knowledge practices. This level is closely tied to the pedagogical and professional research focuses.||New methods explored by WP8, 10, 2 and 3, in development, e.g.: * “Trialogical usability criteria” combined with the trialogical checklist and AT checklist ideas * Assessment of Activity-Technology Fit * Stimulated recall||- Pedagogical and professional partners. - Individuals from WP2 and WP3 interested in developing new evaluation methods.|
|Interaction design||The aims at this level are: - to find out the usability of the tools in some concise (individual or collaborative) activity, in a setting that is specially organised for that research case and context; - to collect and provide input for designers regarding the improvement of the tools, by finding out how well the tools fit users’ tasks and goals, what hinders their use, and what problems they have in understanding what can be done and how it can be done in relation to their tasks.||- Short-term focus, i.e., snapshot-like tests including tasks that users supposedly do. - Focus is on how the users can execute the tasks related to their work. - The broader context is only implicitly involved in the tests. This means that the activity level influences the information about the users’ tasks and goals. However, sometimes the basic information from the related scenarios is enough as a context provider for selecting the tasks for the users. - Most often tests take into account one user interacting with the system, but they can be extended to a group or team interacting with the system. - Both real and controlled settings are possible.||The methods used at this level are mostly mixed, i.e., combining different methods to get the most out of the tests. Methods include existing but also innovative usability methods, such as: * Think-aloud techniques and groupware walkthroughs * User tests in controlled or real settings * Usability surveys, e.g. by the common questionnaire developed by T2.4 * Act-storming and other explorative techniques||- Through the WKs’ contacts, the appropriate pedagogical, professional, and technological partners collaboratively design the testing trials so that these tests and evaluations can take into account the information needs of both technical and pedagogical partners. The type of evaluation and method depends on the actual questions at stake, the maturity of the tools to be tested, and time and resource constraints. Support is provided by T 2.4 (the usability group).|
|Interface design||The aims at this level are: - to identify GUI and functionality problems, problems of understanding elements in the interface, problems in the execution logic of the tasks or in system feedback to users, and inconsistencies in the execution of functions or in the elements presented in the GUI; - to provide information to the designers for the development and improvement of features and functions; - to find out whether the requirements are met.||- Tasks can be acquired from the scenarios related to the tested tool or part of the tool. They can also be simulated tasks, based on the requirements. - The tests can also be executed with different kinds of mock-ups and prototypes. Often these tests are done before tools are released to real use, to ensure that the interface provides clear enough affordances about its use, the meaning of the elements in it, the organisation of the elements on the screen, etc. - Most often tests are conducted with one user, but multiple users can also be involved. - Tests are conducted in controlled settings.||Usually done by using existing (typical/traditional) usability methods, in controlled settings. The methods can be applied to mock-ups and prototypes alike and include for example: * Function checklists * Heuristic evaluation * Rapid prototyping * Controlled user tests * Usability inspections * Focus-group sessions * Cognitive walkthroughs and think-aloud protocols||- Mostly technical partners. At this level the tests are carried out mostly by technical partners, since the testing phase is often fast and a response is needed quickly. Furthermore, it is often the designers who know what the needs for the tests are. - Pedagogical partners or WKs are invited to participate, but their involvement should be discussed with the designers.|
Some explanations of the methods explored at the Activity design level:
- Assessment of Activity-Technology Fit: The main idea is to assess the activities the users are actually involved in and then to check whether the tools provide the functionalities/affordances needed, but also to check how the tools are repurposed or integrated into the users’ tool ecologies.
- Stimulated recall: The 'stimulated recall' method can be used with videotaped observations for investigating student groups using, e.g., SSpA. The procedure starts by observing a group in an authentic task setting and videotaping the activity. You do not need to plan in advance where to focus your observation if you capture the core activity on video; therefore the method suits explorative usability analysis well, where you are not sure what will emerge from the usage of the tool. Afterwards, the video is watched together with the user group in a researcher-led discussion about the process and the experiences; various issues can be discussed based on what comes out of watching the video. That discussion is also video- or audio-taped and can be used as data for analysis and as a source for further design decisions.
This page is a category under: usability, and under Category Of Recommendations