Thanks to all participants for a good and productive meeting!
Present: Christoph, Crina, Jan, Klass, Kari, Liisa B., Minna, Merja
A productive discussion on what we mean by the different levels of usability.
We clarified some terminology in the old image.
NEW TABLE OF THE LEVELS
We ended up creating new terminology and felt that we have come to an understanding of the meaning of the levels. We agreed on redoing the table, adding the general purpose of each level, the rationale that describes it, and the methods often used at each level for testing usability. We agreed that “technical usability” is not what we deal with in T 2.4; that is something the technical partners define and do in tool development. We agreed on updating the co-design glossary after we have redefined the table, so that the terms used in the definitions are themselves correct.
Glossary to be updated. The three levels form a continuum; they are not exclusive, and the methods mentioned in them are often mixed, but the emphasis differs. A graphical presentation will follow to make them clearer.
We named the three levels as follows (descriptions preliminary/DRAFT):
A. Activity design (purpose: the tools’ support for knowledge practices and/or the tools’ ability to provide means for knowledge practices or for changing knowledge practices). Takes into account as fully as possible the real place of use of the tools, with users doing real tasks in their “natural” environment, including long-term (sustained) use. Takes into account social and organisational aspects as well as groups and teams. We explore new methods at this level, for example the “trialogical usability criteria” and combining the trialogical checklist with AT checklist ideas. Testing and evaluation is executed by the pedagogical partners and is closely tied to their research.
B. Interaction design (purpose: how well the tools fit users’ tasks and goals; finding out what hinders users’ use of the tools, and problems in their grasping/understanding of what can be done, and how, in relation to their tasks). Varied methods (often mixed) drawn from existing (traditional/typical) usability methods, including a common questionnaire that should be used in all “cases” where KP-Lab tools are used. The common questionnaire is adapted to fit the tools used, and the results are reported using a template, both to get an overview of the KP-Lab tools in use, with their good and bad points, and to help designers improve them further. The tasks can be pre-defined, but the knowledge of the actual use setting and the reasons for use come from the level above. Testing at this level can be controlled or take place in the users’ actual environment; it often involves one person interacting with the system, though we increasingly try to take groups using a system into account. The broader scope of the users’ environment (e.g. social and organisational aspects) is still not considered. Pedagogical and technical partners need some cooperation at this level; contact occurs through the WKs. Guidelines are given by T 2.4. The focus is more on short-term use, sometimes like snapshots of use.
C. Information design (purpose: to find GUI and function problems; problems in grasping/understanding elements in the interface; problems in the execution logic of tasks or in system feedback to users; inconsistencies in the execution of functions or in the elements presented in the GUI; and to support the development and improvement of features and functions in the basic GUI). The tasks do not have to be informed by actual users’ tasks; scenarios are enough, but they should simulate the general tasks the tools are envisioned to make possible. These tests are often done before tools are released to real use, to ensure that the tools can be used, for example to see that all the requirements are provided in a way that users can perform and understand. Testing is mostly executed by the technical partners but can be outsourced to the pedagogical partners in order to recruit real users for the tests. Usually done with existing (typical/traditional) usability methods, in controlled settings.
Some work is to be done before the GA as a basis for continuing at the GA; this work will continue in the KP-Lab wiki and/or Plone.
To me the explanation of the three levels is clear; it gives a good view of what we envision for the usability studies, in a comprehensible way. But, of course, I was present at the VM Usability meeting in which these ideas were discussed. I was thinking that, when presenting this to the GA, it should be done in a structured way, so that the public (which is very heterogeneous) can understand it quickly and easily. Presenting this material in the table is an option. Structuring it by a number of keywords might also help, for example: Purpose, Level of complexity (?), Level of theory it relates to, What it involves, Methods used, Who does it.
Something like this... These are just suggestions; I hope they are useful. Best, Crina
Sounds really good to me, Crina! Since that's how the first draft of the table was: the "keywords" were presented as columns :-) Let's see if Jan and Christoph agree on this; the deadline for them to put it forward is today :-) I hope it will then be easier to comment directly on the table...