This page contains the starting base for collaborative development of the common questionnaire form, as well as the template for reporting the usability tests. The template is likewise a base to be developed together until agreement on its form is reached.

Before commenting on the questionnaire or the template, please read the description below, which provides background to the questionnaire and template as well as basic information on the possible usability tests in relation to tool development.

Common background info

How the different tests are seen, so far, in "tool development":

Testing & evaluation of functions, mock-ups, tools, etc. (blue arrows indicate a gray transitory area)

Figure 1.

Within the levels we have to make a distinction:
The lowest part in Figure 1, i.e., the technical aspect. Requests for these kinds of tests will come from the tech. partners (or through the tech. partners; in the case of SSp it will be announced by Liisa B. or Merja B., and in the case of Change Lab, Multimedia annotation and Mapit it has not yet been specified who will act as the contact person). The guidelines for these kinds of tests will be very specific, and those who participate will, frankly and bluntly put, do as they are told. This means that the participants will not have to think about the purpose of the test, the tasks of the test, the organisation or the reporting.

The next levels (in Figure 1) are lab testing a) & b) and field trial b): tests of mock-ups/prototypes (e.g. paper prototypes, semi-functional, functional) should be initiated by the requirements-elicitor sub-section of a WK or by the WK chair. Therefore, the purpose, methodology, task arrangements, etc. will come from the WK or its sub-section. Potential end-users for the test are sought among the participants of the particular WK. It is therefore not the sole responsibility of the ped. partners alone.

The level of field trial a) is the responsibility of those ped. partners that want to participate in developing the usability methods towards context-sensitive testing and evaluation, something we have called experimental testing. Here the purpose of the test is more tied to the pedagogical aspect, since it tries to test and evaluate the tools' support for trialogical practices from aspects specified by the ped. partners that conduct these experiments.

Last but not least, the evaluation using "trialogical evaluation criteria" is the one where WP3, WP2 and WP13 cooperate in developing usability criteria from the design principles, criteria that experts could use in evaluating the developed tools from the trialogical aspect.

For the fall

First, the tools or parts of tools are developed on different timescales, so it is not possible to give a detailed general account of the phases that the tools and "parts of tools/modules" will go through. These phases can be seen from the WK schedule. This is also why it is important that the initiative for the tests comes from the requirements elicitor or WK chair.
For Shared Space the estimated time by which the tool integration will be done is around the end of November (the schedule of the tools will be uploaded/linked here when available).
For the field trials, the SSp version M24 will be available sometime in February/March 2008 (estimate), during which the field-trial usability testing as well as the experimental usability tests can be conducted.

Table 1 below presents one kind of list of different methods that can be seen to suit the different phases. However, it is only an example, since the actual method chosen depends on the purpose of the test, the state of the available tool (paper prototype, semi-functional, independent tool before integration, tool after integration, etc.) and the resources available (i.e., end-users for the test, schedules of the institutions/universities, etc.).

Summative testing has so far been seen to belong more to the design approach called waterfall, in which an end state is assumed to be in sight. Within the KP-Lab co-design approach the only sort of end state that can be seen is the end of the 5 years; until then we see iterations. After the field trials, which use an external release of the KP-Lab tools, we could say that the tests conducted during these field trials together produce a combined summary, i.e., some kind of summative test.

Table 1. A classification of some usability evaluation methods according to the five dimensions by Wixon and Wilson (1997)[1].
See links to brief descriptions of different usability testing methods at the end of this page [2].


We see three kinds of potential questions: background questions, pre- and post-questions, and context/purpose-dependent questions.

Background questionnaires

(Template for commenting)
These are questions that try to find out the end-users' background in terms of technological knowledge, habitual use of technologies and widgets/gadgets, etc.
These questions serve a double purpose:
a) They help the GUI design process by revealing what kinds of affordances and ways of executing operations (tasks) could be familiar to the end-users
b) They help to interpret the test results: it makes a difference whether users are very familiar with what SSp offers or not at all (putting it bluntly)

Pre- and post-questionnaires

These questions can be of two kinds:
a) as they were in spring 2007, i.e. made from the pedagogical perspective but containing some questions that might also produce information about the usability of the tools; b) including also questions that deal explicitly with usability.
Thus the questionnaire would serve two purposes:
i) to acquire information for the ped. partners (case a); ii) to acquire information for the design and development of the tools regarding usability issues (case b)
The idea would then be that the end-users get one questionnaire, not two separate ones, but the answers are then distributed for analysis and interpretation according to the purpose, i.e., pedagogical or usability.

Purpose/context related questionnaires

These are questions that serve a specific need. For example, in field trials a usability test's purpose may concentrate on a specific issue (let's say the ease of use of the contextual chat) and need a specific questionnaire for it; e.g. some particular kind of information may be needed before or after the actual test, and this information is collected by a questionnaire.

Library formation

The idea of library formation is to have a place where the questionnaires and executed tests can be retrieved, i.e. reused. The library would thus act as a repository, memory and example base of what has been used, so that refining and future use would be easier.
Some general metadata has therefore been thought to help the understanding of the use potential, meaning and purpose of the questionnaires and test examples in the library.
Questionnaire parts could include descriptions of:
  • Purpose of the test (what the answers are thought to contribute to),
  • Context (as well as timing: before, during, after use, etc.),
  • Analysis methods for the answers (this might be needed for those who have not used such a questionnaire, so that they can understand how many resources analysing the results might take, e.g., extensive Excel-sheet processing, in-depth interpretation of open questions (discourse analysis, for example), etc. A full description is not meant, but a brief indication and a link for further reading if such is available).
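As an illustration only, the metadata fields above could be captured as a small structured record per library entry. The class and field names below are assumptions for this sketch, not an agreed KP-Lab schema:

```python
# Sketch of per-questionnaire metadata for the library; all names are
# illustrative assumptions, not an agreed schema.
from dataclasses import dataclass

@dataclass
class QuestionnaireEntry:
    title: str
    purpose: str               # what the answers are thought to contribute to
    context: str               # timing: before, during or after use, etc.
    analysis_methods: list     # brief indications, e.g. "Excel-sheet processing"
    further_reading: str = ""  # optional link for the analysis methods

# Hypothetical example entry
entry = QuestionnaireEntry(
    title="Background questions on technology familiarity",
    purpose="Help interpret usability results against prior tool experience",
    context="Before the first use of the tool",
    analysis_methods=["Excel-sheet processing of closed questions"],
)
```

A record like this keeps the descriptions brief, as intended, while still letting a future test organiser judge the analysis effort involved.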

Reporting tests

(Template for commenting)
Why deal with this already now?
First, some partners have conducted, or will soon conduct, paper-prototyping tests, and it is good to already have a template that can be used for these kinds of tests.
Second, when the template is ready and agreed, it is easier to estimate the time and resources needed for reporting a test (since reporting also takes time, and the form of the template makes a difference).
Third, knowing what needs to be reported helps to plan the test as well as to keep track of issues that have to be taken into account in the reporting (the template can act as a reminder, a checklist).

[1] Wixon, D. & Wilson, C. (1997). The usability engineering framework for product design and evaluation. In M. Helander, T. K. Landauer & P. Prabhu (Eds.), Handbook of human-computer interaction (2nd ed., pp. 653-688). Elsevier Science.

Links to brief descriptions of usability testing methods:

1. Work-shops:
Requirements/ functions meeting /GUI design

2. Heuristics:
Using heuristics

3. Prototyping:
Rapid prototyping

4. Pluralistic walkthroughs:
Riihiaho walkthrough
Groupware walkthroughs

5. User testing ("lab")
Using users

6. Field experiment (Close co-operation with WP8 and WP3 - "evaluation of the practices").
General methods:
Observing: observing in usability tests is somewhat different from observing in ethnography, for example. We will see if we can find a template or a list of points that are usually kept in mind when observation is done for usability-related issues.

Link to the note takers template (more fitted to lab-testing)

Template base for background questionnaire - for commenting

The explanatory text at the start and end can be left out of these questions when they are used for acquiring information from the users for the design of the tools, or for acquiring information on the level of familiarity with already existing tools to help the interpretation of the usability test results.
These can also be used as part of the pre-questions, to combine different questions to be filled in at one time by the users (students).
These can be modified to meet the needs for background information. From the tech. point of view, however, the information on familiarity with the so-called Web 2.0 related "tools" helps the design of the GUI, and it also seems to be the kind of "tool" familiarity that helps the use of SSp.

These kinds of background questions are usually asked only once; thus they do not need a post-test counterpart.

Background Questions of technology familiarity for xxx course / participants, xxx
These questions are one part of the study/research/test/xxx. More detailed information on the project is available on the third/xx page of this document. The answers will be collected and analyzed, the results will be reported according to good research practices, and the identity of respondents will be kept confidential.
First, write your name in the following empty space if you give your consent to use the xxx produced during the xxx for research purposes. Second, answer the questions below and return the document, including your answers, to xxxx.

I give my consent to use the answers, xxx (whatever else is related to the common questionnaires) for research purposes.

Name: Date: xx.xx.200x

Thank you!

(The purpose of these questions is to gain insight into the background of the types of users we use in tests and who might also be the end-users.)

Please answer the following questions: (Background of users study/interest/work area)

  1. What is your main study topic?
  2. In what institute/university are you studying?
  3. How many years have you been studying there?
  4. Do you use computers in your “daily studying”?
    • a. How often:
      • daily
      • 3-5 times a week
      • a couple of times a week
      • once a week
      • once a month
      • less often
  5. Have you participated in courses that offer computer based or computer related materials and/or methods of working?
    • a. If yes, what methods do you use to hand in your course assignments?
      • by email
      • printed on paper
      • in electronic format (CD/DVD)
      • into an e-learning platform (specify the name of the platform and in what way you hand the documents into it: by uploading, by giving a link to your assignment, or by doing the assignments using the platform)
  6. Does your study place offer training for using any of the software you are required to use?
  7. If you are working, do you use computers in your workplace? (These questions are for students, since it makes a big difference whether they are or have been working in a related field, or have just used particular programs in their work, be it summer work or part-time work.)
    • a. If yes, what kind of software do you use there?
      • If yes, did you get training in your workplace to use the software?
      • What kind of training (help files, manuals, training sessions (virtual, face-to-face, ...))?

(Background of users personal equipment and uses of tools in relations to the KP-Lab tools envisioned to be used)

  1. Do you own a computer?
  2. Do you have a personal website?
  3. Do you have a personal "blog"?
    • a. If yes, how often do you add entries to your blog?
  4. Are you familiar with the term "wiki"?
  5. Have you ever used a wiki for personal use?
  6. Do you use email for personal communication with friends and/or family?
  7. Do you use instant messenger software (e.g. MSN Messenger, ICQ, GoogleTalk, ...)?
    • a. If yes, which one?
  8. Have you ever used an internet calling service such as Skype?
    • a. If yes, how often do you use it?
  9. Are you familiar with "flickr"?
    • a. If yes, do you use it for
      • viewing images?
      • sharing images?
      • posting your own images online?
  10. Are you familiar with "YouTube"?
    • a. If yes, do you use it for
      • viewing videos?
      • sharing links to videos?
  11. Are you familiar with social bookmarking (e.g. ...)?
  12. Have you used services for collaborative writing (e.g. Google Docs/Spreadsheets, etc.)?
    • a. If yes, what for?
    • b. If yes, how often?

RESEARCH ABOUT xxx AND xxx IN Xxx Institute/university

The Institute/university and some of its xxx will participate in an international research and development project studying advanced knowledge practices, collaborative learning and working practices in higher education and workplaces during the years 2006-2011. This Knowledge Practices Laboratory project (KP-Lab) is coordinated by the University of Helsinki. The aim of the project is to develop new tools based on modern web technology on an open-source basis, as well as to foster good practices of learning and collaborative working.
The purpose of the study is to investigate xxxx. (Short explanation). The results are used to xxx. By participating in the research you can contribute to the development of the tools developed by KP-Lab and xxx.
If you have any questions concerning the research or the tool development, please contact researcher xxx.

Template for reporting usability test results - for commenting:

Use the following checklist to ensure that the required elements are reported. Use the ones that are relevant for your testing purpose, method and setting.
NOTE: this works as a checklist; thus, descriptions can be very brief. As mentioned above, not everything mentioned has to be described. We will also produce an example when we get the paper-prototype testing reported.
NOTE: this template is meant to be used for reporting all kinds of tests, not only questionnaires. In addition, how the ped. partners report their own questionnaires or research reports is up to them. This template concentrates on usability test reporting.

Title Page

  • Enter KP-Lab Logo or Name
    • Identify the report as: KP-Lab internal document, KP-Lab report, KP-Lab xxx
    • Name the product and version that was tested
    • Who led the test
    • When the test was conducted
    • Date the report was prepared
    • Who prepared the report
    • Contact name(s) for questions and/or clarifications
    • Enter phone number
    • Enter email address
    • Enter mailing or postal address

Executive Summary
  • Provide a brief high-level overview of the test (including the purpose of the test)
    • Name the product
    • List of method(s), including the number and type of participants and tasks (if tasks are used)
    • Results in main points, e.g. a bullet list (this is needed for being able to get the main results without reading the full report; this is seen as important, since the reports serve different purposes and sometimes a fast overview is needed)
  • Full Product Description
    • Formal product name and release or version
    • Describe what parts of the product were evaluated
    • The user population for which the product is intended
    • Any groups with special needs
    • Brief description of the environment in which it should be used (this means the context of use of the product/tool, e.g., is it an education product used in primary school, higher education, etc., or maybe a research tool used in the field (and then what the field could be), or in an office)
    • The type of user work that is supported by the product

Test Objectives

  • State the objectives for the test and any areas of specific interest
    • Functions and components with which the user directly and indirectly interacted
    • Reason for focusing on a product subset


  • Participants
    • The total number of participants tested
    • Segmentation of user groups tested, if more than one
    • Key characteristics and capabilities of the user group (this info may have been obtained through the background questionnaires, so it can simply be referred to here, e.g. linked to the description of the results of the background questionnaires)
    • How participants were selected; whether they had the essential characteristics
    • Differences between the participant sample and the user population
    • Description of groups with special needs
    • Table of participant (row) characteristics (columns)

Context of Product Use in the Test

  • Any known differences between the evaluated context and the expected context of use


  • Describe the task scenarios for testing
    • Explain why these tasks were selected
    • Describe the source of these tasks
    • Include any task data given to the participants
    • Completion or performance criteria established for each task

Test Facility

  • Describe the setting, and type of space in which the evaluation was conducted
    • Detail any relevant features or circumstances which could affect the results (e.g. the users did not possess the main characteristics of the end-user group, perhaps because the right kind of end-users were not available; there was a breakdown of the server, which disrupted the test for a while and created unnecessary tension; it was found out during the test that some of the end-users knew about the product although the particular test required that the users had not seen or heard about the product beforehand; etc.)

Participant’s Computing Environment

  • Computer configuration, including model, OS version, required libraries or settings, browser name and version, and relevant plug-in names and versions. (This means telling, e.g., what browsers and computers the users were using in the test. In field trials this information is not known by the technical partners. For example, in one of the tests during spring 2007 one of the users was at home using SSp, so she was asked what she used: Internet Explorer 6 and Mozilla Firefox, on a Compaq Presario with Windows XP and an IBM ThinkPad with Windows XP. If all of this cannot be found out, then it cannot, but it is good to try to get the information. Plug-ins can refer, for example, to browser add-ons (in Firefox these are found in the Tools menu). Sometimes it is necessary to know whether some plug-ins are on, because they might change or prevent some functions.)
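As a sketch only, the environment information above could be recorded per participant in a structured form. The record type and field names are hypothetical; the example values are the spring 2007 ones mentioned in the text:

```python
# Hypothetical record of one participant's computing environment;
# the class and field names are illustrative, not a KP-Lab format.
from dataclasses import dataclass, field

@dataclass
class ComputingEnvironment:
    participant_id: str
    computer_model: str
    os_version: str
    browsers: list                               # browser names and versions used
    plugins: list = field(default_factory=list)  # relevant add-ons, if known

env = ComputingEnvironment(
    participant_id="P1",  # anonymised identifier (assumption)
    computer_model="Compaq Presario",
    os_version="Windows XP",
    browsers=["Internet Explorer 6", "Mozilla Firefox"],
)
```

Leaving `plugins` empty when unknown matches the note above: if the information cannot be obtained, the field simply stays unfilled.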

Display Devices (report if relevant for the particular test)

  • If screen-based, screen size, resolution, and color setting
  • If print-based, the media size and print resolution
  • If visual interface elements can vary in size, specify the size(s) used in the test

Audio Devices (report if relevant for the particular test)

  • If used, specify relevant settings or values for the audio bits, volume, etc.
Manual Input Devices (report if relevant for the particular test)
  • If used, specify the make and model of devices used in the test

Test Administrator Tools

  • If a questionnaire was used, describe or specify it here
  • Describe any hardware or software used to control the test or to record data

Experimental Design

  • Describe the logical design of the test
  • Define independent variables and control variables
    • Describe the measures for which data were recorded (the scale/scope of the recorded data, if relevant for the particular test).


  • Operational definitions of measures
  • Operational definitions of independent variables or control variables
    • Time limits on tasks
    • Policies and procedures for interaction between tester(s) and subjects
      • Sequence of events from greeting the participants to dismissing them
      • Non-disclosure agreements, form completion, warm-ups, pre-task training, and debriefing
      • Verify that the participants knew and understood their rights as human subjects
      • Specify steps followed to execute the test sessions and record data
      • Number and roles of people who interacted with the participants during the test session
      • Specify if other individuals were present in the test environment
      • State whether participants were paid

Participant General Instructions

  • Instructions given to the participants (here or in Appendix)
    • Instructions on how participants were to interact with any other persons present

Participant Task Instructions

  • Task instruction summary

Usability Metrics (if used; the metrics below are examples)

  • Metrics for effectiveness
  • Metrics for efficiency
  • Metrics for satisfaction
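For illustration, these three metric types are often operationalised as a task completion rate (effectiveness), goals achieved per unit of time (efficiency) and a standardised questionnaire score such as SUS (satisfaction). The sketch below shows one common way to compute them; the function names, and the choice of SUS, are assumptions rather than anything agreed for KP-Lab:

```python
# Common example computations for the three metric types; illustrative only.

def effectiveness(completed_tasks, attempted_tasks):
    """Task completion rate as a percentage."""
    return 100.0 * completed_tasks / attempted_tasks

def time_based_efficiency(task_times, task_success):
    """Mean of success/time over tasks: goals achieved per second."""
    return sum(s / t for s, t in zip(task_success, task_times)) / len(task_times)

def sus_score(responses):
    """System Usability Scale: 10 items rated 1-5; odd items score
    (x - 1), even items (5 - x); the sum is scaled by 2.5 to 0-100."""
    total = sum((x - 1) if i % 2 == 1 else (5 - x)
                for i, x in enumerate(responses, start=1))
    return total * 2.5
```

Whichever metrics are chosen, the report should state their operational definitions, as required in the Experimental Design section above.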

Data Analysis (report only if relevant)

  • Quantitative data analysis
  • Qualitative data analysis
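For the quantitative part, the analysis often starts with simple descriptive statistics, e.g. over task completion times. A minimal sketch, with invented example data:

```python
# Descriptive statistics over hypothetical task completion times (seconds);
# the data values are invented for illustration.
import statistics

task_times = [95, 110, 87, 140, 102]

summary = {
    "n": len(task_times),
    "mean": statistics.mean(task_times),
    "median": statistics.median(task_times),
    "stdev": statistics.stdev(task_times),
}
```

Reporting such a summary alongside the raw data makes it easier for readers of the report to judge the spread of the results without redoing the analysis.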

Presentation of the Results

  • From quantitative data analysis
  • From qualitative data analysis (descriptive and clarifying presentation of the results)
  • Summary

Appendices (if any)

  • Custom Questionnaires, if used
  • Participant General Instructions
  • Participant Task Instructions
    • Release Notes

This page is a category under: usability and under Category Of Recommendations


This page (revision-33) last changed on 18:24 25-Mar-2017 by merja.
