Pedagogical usability can be defined as the applicability of the KP-Lab tools in actual educational contexts, e.g. in a course or study process with certain goals and practices. Pedagogical usability can vary considerably depending on the context. It can be fully evaluated only through empirical research in which multifaceted data and experiences are collected from the educational processes in which the tools are used. Pedagogical usability is investigated in research cases.

A central issue for co-design is whether we can define some criteria for pedagogical usability that are generic and could be applied to tool design regardless of the pedagogical approach and varying contexts. For instance, in educational settings the tools are usually used only occasionally or temporarily, and the users are not necessarily very competent in technology or do not have the time and interest to learn to use the technology as such. Should one generic pedagogical usability criterion then be that starting to use the basic functionalities should be very easy, even though the system provides advanced features if needed later?


Back to the Co-Evolutionary Design Glossary


If we need the special concept of PEDAGOGICAL usability, there obviously are other types of usability as well - what are they?

Is this concept also suitable for the professional cases in WP10, or should we apply some other term for them (professional usability, work practice usability etc.)?

--Minna Lakkala, 16-Jan-2008


Hi Minna and all,

Thanks Minna for starting this defining work. First, the different levels of usability that we have been talking about since the first year of the project can easily be seen from this graph:

Otherwise I think your definition comes very close to what has been called the ("trialogical") criteria for evaluation, which we have started to define and describe, along with testing the kinds of methods that would be most suitable for this kind of evaluation/testing that takes the context tightly into account. It is also, as I mentioned in the meeting, partly defined in the description of T2.4.2: "Definition of criteria and procedures for usability testing in relation to trialogical practices, taking into account different contexts of use. It is carried out in close cooperation with WP3, including joint meetings and review of reports".

I would prefer not to use "Pedagogical Usability" at all, for the reason you mentioned above: it calls for defining professional usability etc. And I do not see it as specific to pedagogical cases. I see it as holding aspects that go beyond what is usually defined as usability, i.e., testing how usable tools are for the users who use them. However, there is a general tendency at the moment in usability research to move forward from traditional usability to a "usability" that takes the context more into account and emphasises the aspects of social interaction that affect whether something feels usable or not, thus mixing in what Christoph has been bringing forward as relations that also need to be defined, namely the relation between utility/usefulness and usability.

Very best Merja

--merja, 17-Jan-2008


Oh yes, one more thing: I do not fully agree with what you described, that we should sort of take for granted that KP-Lab tools are used sporadically or that their users are casual users because these kinds of tools are not used often - isn't that something we try to avoid? That is, we want to create the tools, or the KP-Lab environment, as something that would support and promote use extending over long periods of time, so that the KP-Lab environment stays part of the users' lives and gradually the users are able to learn the more advanced features as well. This does not mean that we should skip the aspects that make tools "easy to use", since even in long-term use the actual use might occur only every now and then. My point is that I do not want to give the impression here that we have dropped the idea that KP-Lab tools would promote deeper belonging and involvement in their use. Sorry, have to go...

--merja, 17-Jan-2008


I'm of the opinion that we don't need the concept 'pedagogical usability'. We need usability methods that capture practices in different contexts like the classroom, distance education, the workplace and so on. And these methods should reflect the trialogical practices (with reference to the Trialogical Checklist for design and evaluation).

Here in Oslo, if we are going to do a usability experiment/trial of SS, we will probably use the trialogical checklist as far as possible. We will probably also be inspired by the Activity Checklist (Kaptelinin and Nardi) and the Activity Oriented Design Method (Mwanza). So far I see two problems with these approaches:

1. They are not very detailed, and I would like to see how far we can take them before we have to turn to other usability techniques.

2. Historicity. Trialogical practices and Activity Theory both emphasize historical processes. And if we are going to use the Trialogical or Activity checklist, we must somehow be able to capture the history of the practices we are dealing with. This might mean that we need usability trials with longer iterations/phases, or that we interview and study the teachers' previous practices. Something like that :-)

--Jan Dolonen, 17-Jan-2008


Hi Jan, it seems that you are doing something quite similar to what we have been trying to do. However, I think you emphasise more the AT notion of history - I mean a sort of longer, deeper aspect of historicity. The other difference I can see (from what I know, or think I know, so far of what you do) is that we have also tried to take into account those "features/needs" that seem to arise from nearly all cases/scenarios and that have been increasingly mentioned in newer approaches to usability and design. Both of us use the trialogical checklist and the AT checklist - I wonder, should we collaborate on these, or should we keep developing them separately and see what they end up as?

Very Best Merja

--merja, 18-Jan-2008


The purpose of evaluation research on KP-Lab tools is to help the design process. From my perspective it is used to refine and formulate high-level requirements (from usability trials) and driving objectives (from design-based studies performed in WPs 8, 9 and 10). It will involve working closely with users and gathering feedback on their opinions of the system.

The evaluation of KP-Lab tools is a complex enterprise since it involves multiple interacting layers to take into account, i.e., theory, practice and technology. According to the classic evaluation literature in CSCL, complete assessment employs both summative and formative evaluation. Summative evaluation is the most common. It takes place at the end of, for instance, a course, is used for evaluative purposes and commonly measures student satisfaction. Formative evaluation takes place while a course or tool use is ongoing. It is the indispensable part of assessment that provides a way for us to continuously monitor actualized knowledge practices or our teaching practices.

Formative evaluation of learning environments can be split into two hierarchical levels. The top level is evaluation of learning goals and how well the environment supports these goals. Below this level is evaluation of the use of the environment, or its usability.

Taking this hierarchical view of formative evaluation for complex tools is useful for separating user difficulties that result from usability issues from difficulties that relate to knowledge practices (or learning in the traditional sense). Errors in usability impact how students perceive and learn. Faulkner (2000) uses Jordan's (1998) classification of errors in usability to distinguish between major, minor and fatal errors. Major and minor errors do not prevent a user from completing a task, but they do affect efficiency and attitude; fatal errors prevent users from completing a task. All three types have serious effects on the users' knowledge practices and further muddy the waters when it comes to assessing, for instance, a group of students' gains from a tool. If they cannot complete the task, how can they be assessed?

Notess (2002) adds that “Online learning leaves many students frustrated or unenthusiastic. The good news is that concepts and processes for addressing these shortfalls in learner experience can be found in the field of usability.” Additionally, the newness of, for instance, the KP-Lab shared space demands some usability testing. For these reasons much evaluation needs to be done on the usability front to assess whether the current and future designs are usable.

As you may see, evaluation occurs on these different levels, which necessitate several distinct but complementary research methods (with their respective timeframes) for conducting, analyzing and reporting data from trials. Pedagogical usability, as you have stated in your definition, concerns more the formative (or deeper) evaluation of KP-Lab tools. Methods such as guided or unguided cognitive walkthroughs fit better with the purposes of usability studies (shorter timeframe and less context sensitive). On the other hand, the internal evaluation of KP-Lab tools and the longitudinal pedagogical WP studies in which the aim is to monitor the actualized use and affordances of KP-Lab tools (i.e., the extent to which the users appropriate the tools in authentic educational or working contexts in a way that matches theoretical expectations -> driving objectives) fit better under the term formative (pedagogical) evaluation. In this latter use, the Trialogical Checklist and the historicity of the context under investigation come into play. I think that both Merja's and Jan's views have to be combined, since it is important to know the history of our users in order to understand why they are using the tools in a particular way, complemented with data on how they develop ownership of the tools.

Finally, I think generic conclusions regarding the latter aspect of the evaluative results can be obtained by aggregating them for each priority area, since the priority areas encompass cases that focus on similar knowledge practices.

--PatrickSins, 23-Jan-2008
