
Critical Incident Technique

Alternative names for this method: (No other names known)

Summary

End users are asked to identify specific incidents which they experienced personally and which had an important effect on the final outcome. The emphasis is on incidents rather than vague opinions. The context of the incident may also be elicited. Data from many users is collected and summarised.

Expected benefits

The CIT is an open-ended, retrospective method of finding out what users feel are the critical features of the software being evaluated. It is more flexible than a questionnaire or survey and is recommended in situations where the only alternative would be to develop a questionnaire or survey from scratch. Because it focuses on user behaviour, it can be used where video recording is not practicable, so long as the inherent bias of retrospective judgement is understood.

When is it applicable?

For this method to be effective there should be at least a working prototype: something that respondents can use and gain experience with. You can also apply it to a production version to start the process of re-design.

What training, equipment, licences do you need to have?

Some experience of Content Analysis is useful when you are summarising the data from many users.

Description of method

Planning beforehand

Define the activity you intend to study, and get access to the users as soon as possible after they have finished the activity. In a lab study, this means after the testing has finished but before any de-briefing takes place; in a naturalistic study, it means soon after the user has used the software under investigation, and if possible in the same environment in which they used it.

Running the method

You can do CIT either by conducting an interview or by having the users fill out a paper form. The user is asked to work through the three stages described below, in order:

  • focus on an incident which had a strong positive influence on the result of the interaction and describe the incident
  • describe what led up to the incident
  • describe how the incident helped the successful completion of the interaction

It is usual to request two or three such incidents, but at least one should be elicited. When this has been done, the procedure is repeated, but now the user is asked to focus on incidents which had a strong negative influence on the result of the interaction and to follow the same formula to place the incidents in context. There will be some variation in the number of positive and negative incidents users report.

It is usual to start with a positive incident in order to set a constructive tone with the user.

If the context is well understood, or time is short, the method may be stripped down so that the user carries out only the first stage: describing the positive and negative critical incidents.

In an interview the user can be redirected if they reply with generalities rather than tying themselves to a specific incident. This is more difficult to control with a written form, so ensure that the introductory instructions are clear.
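
If you administer the form digitally rather than on paper, the three-stage prompts can be scripted directly. The following is a minimal sketch in Python, assuming a console session; the names (Incident, STAGE_PROMPTS, run_cit_session) and the exact prompt wording are illustrative assumptions, not part of any standard CIT tooling.

```python
from dataclasses import dataclass

# Prompt wording paraphrases the three stages described above.
STAGE_PROMPTS = [
    "Focus on a specific incident that had a strong {valence} influence "
    "on the result of the interaction, and describe it.",
    "Describe what led up to the incident.",
    "Describe how the incident {effect} the completion of the interaction.",
]

@dataclass
class Incident:
    valence: str            # "positive" or "negative"
    description: str        # stage 1: the incident itself
    antecedents: str        # stage 2: what led up to it
    effect_on_outcome: str  # stage 3: how it affected the outcome

def elicit(valence: str, effect: str) -> Incident:
    # str.format ignores placeholders a prompt does not use.
    answers = [input(prompt.format(valence=valence, effect=effect) + "\n> ")
               for prompt in STAGE_PROMPTS]
    return Incident(valence, *answers)

def run_cit_session(max_per_valence: int = 3) -> list[Incident]:
    """Elicit positive incidents first (to set a constructive tone), then negative."""
    incidents = []
    for valence, effect in (("positive", "helped"), ("negative", "hindered")):
        for _ in range(max_per_valence):
            incidents.append(elicit(valence, effect))
            if input("Another incident? (y/n) ").strip().lower() != "y":
                break
    return incidents
```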

A very short variant, sometimes used on feedback forms, is to ask the two questions:

  • What is the best thing about this software/web site, and why?
  • What do you think needs most improving with this software/web site, and why?

Analysing the outputs

When you have gathered a sufficient quantity of data you should be able to categorise the incidents and produce a relative importance weighting for each category: some incidents will be reported frequently and others rarely.

For a summative evaluation, you should collect enough critical incidents to enable you to make statements such as "x percent of the users found feature y in context z helpful/unhelpful."

For a formative evaluation, you should collect enough contextual data around each incident so that the designers can place the critical incidents in scenarios or use cases.

The basic procedure for the analyst is card-sorting: write each incident on a card, and place the cards in piles according to themes. See the Content Analysis method for more detail.
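
As an illustration of the tallying that follows the manual card sort, the sketch below assumes the cards have already been sorted into themed piles and computes the frequency weighting and a percentage statement of the kind mentioned above. The theme names and data are invented for the example.

```python
from collections import Counter

# Cards already sorted into themed piles by the analyst; theme names
# and valences are invented purely for illustration.
sorted_cards = [
    ("search results unclear", "negative"),
    ("undo saved my work", "positive"),
    ("search results unclear", "negative"),
    ("search results unclear", "negative"),
    ("undo saved my work", "positive"),
]
n_users = 5  # assume each card came from a different user in this toy example

counts = Counter(theme for theme, _ in sorted_cards)
valence_of = {theme: valence for theme, valence in sorted_cards}

for theme, count in counts.most_common():
    judgement = "helpful" if valence_of[theme] == "positive" else "unhelpful"
    print(f"{100 * count / n_users:.0f}% of users found '{theme}' {judgement}")
```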

Reporting the results

Although it is tempting to assume that incidents which are reported by many people are more important than those which are reported by few, this need not be the case. One or two users may pinpoint important issues that others have missed. So don't depend entirely on the frequency of occurrence; examine the consequences of the incidents as well.
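
One way to act on this advice is to record a consequence rating alongside each incident and rank themes by frequency and severity together, rather than by frequency alone. The scoring scheme below is an assumed convention for illustration, not a standard CIT formula.

```python
# Assumed convention: severity 1 = cosmetic ... 4 = task failure.
# frequency * severity is an illustrative score, so a rare but severe
# incident can outrank a frequent but trivial one.
themes = {
    "search results unclear": {"frequency": 3, "severity": 1},
    "data lost on timeout":   {"frequency": 1, "severity": 4},
}

ranked = sorted(themes.items(),
                key=lambda item: item[1]["frequency"] * item[1]["severity"],
                reverse=True)
for theme, s in ranked:
    print(f"{theme}: score {s['frequency'] * s['severity']}")
# "data lost on timeout" (score 4) ranks above "search results unclear" (score 3)
```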

Variants on the above

The 'short variant' has been noted above. Some researchers have tried developing an app which the user can summon as soon as they experience a critical incident, so as to cut down the interval between incident and reporting. This may not be as helpful as it sounds, since the essence of CIT is that it is a retrospective method: the user should be able to stand back and reflect on the entirety of their experience. Another variant is to video record the entire session and then replay the video with the user, allowing them to comment on incidents as they happened. However, this can take a long time: replay may take three or four times as long as the original session.

Quality control

The most important thing about CIT is that the respondents have experience of using the software or web site in a real (or close to real) context, not just reviewing it or watching someone else use it. Users should be encouraged to be as specific about the incidents as possible, and not to make general statements for which no cause can easily be deduced: CIT should uncover the causes of users' experiences. If you are getting a lot of vague, unclassifiable statements, ask whether you are sampling the real end users of the product. It is also possible that you will get some extremely superficial remarks (e.g. about specific shades of colour or wording). In this case, you may not be reaching users who depend on the product and for whom the product is important. A question such as 'How important is this product for you?' is a useful adjunct to help you interpret the outputs.

What next?

CIT may be used to construct use-case scenarios, or to develop lists of problems which need to be fixed.

More information

In print

The original article is:

Flanagan, J.C. (1954). The Critical Incident Technique. Psychological Bulletin, 51(4), pp. 327-358.

See also:

Carlisle, K. E. (1986). Analyzing Jobs and Tasks. Englewood Cliffs, NJ: Educational Technology Publications, Inc.

Online

Fivars, G. & Fitzpatrick, R. (2001). Critical Incident Technique Bibliography. Retrieved from www.apa.org/pubs/databases/psycinfo/cit.

Gogan, J., McLaughlin, M-D. & Thomas, D. (2014). Critical Incident Technique in the Basket. In: Proceedings of the International Conference on Information Systems (ICIS 2014). Retrieved from pdfs.semanticscholar.org/5253/11340add46601fd42f4e75fb53a0f1aab565.pdf.

Other methods that could be used instead

Alternative methods of acquiring this kind of data are subjective questionnaires or surveys; the CIT is sometimes used as a precursor to designing a questionnaire. Insofar as the behaviour of users is being examined, video recordings may also be used as an alternative, in which case user comments are not required (but see 'Variants on the above').

History of this page and contributors

Adapted by jk from UsabilityNet, 2018-09-03.
