Conference paper

An Item Response Theory (IRT) approach to check correspondence between cut-off scores and maximal test information in French pilot selection

Nadine Matton 1, Michel Veldhuis 2, Stéphane Vautier 2
1 IHM Aero - ENAC - Programme transverse IHM Aéronautique
ENAC - Ecole Nationale de l'Aviation Civile
Abstract: Cognitive ability test scores are widely used in aviation selection procedures for hiring pilot trainees or ATC trainees (e.g., Damos, 1996; Burke, Hobson & Linsky, 1997; Martinussen & Torjussen, 1998; Sommer, Olbrich & Arendasy, 2004; Matton, Vautier & Raufaste, 2009). In Europe, the theory implicitly used in this context is Classical Test Theory (CTT; Gulliksen, 1950; Lord & Novick, 1968). Under this theory, the observed score variable (Y), usually the sum of elementary item scores, is construed as the sum of a true score variable (T) and an error variable (E): Y = T + E. In CTT, measurement precision is generally assessed through reliability indexes. Considering the scores of a group of participants, the reliability of a test is defined as the proportion of true variance (var(T)) to observed variance (var(Y)). Reliability cannot be computed directly (as T is a latent variable) and can only be estimated under certain hypotheses (e.g., when two tests are assumed to be parallel, reliability can be estimated as the correlation between the two score variables). Moreover, in CTT, reliability is assumed to be constant regardless of the score level.

In a more modern psychometric framework, item response theory (IRT; Rasch, 1960; Birnbaum, 1968), the focus is on the response to each item rather than on the test as a whole. Measurement precision is assessed through the amount of information provided by each item, with the idea that this information depends on the respondent's level of ability, defined as the latent psychological dimension assessed by the test. The key idea in IRT is that the probability of success on an item depends on the respondent's level of ability. IRT models generally assume an S-shaped relationship (see Figure 1, left panel) governed by one, two, or three parameters that characterize the item (e.g., difficulty and discrimination parameters).
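The S-shaped item curve described above can be sketched with the two-parameter logistic (2PL) model, a standard IRT formulation; the specific parameter values below are illustrative assumptions, not taken from the paper:

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: probability of a correct
    response at ability theta, given discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Two illustrative items (assumed parameter values):
# a gently sloped, easy item vs. a steep, hard item.
p_easy = icc_2pl(0.0, a=0.5, b=-1.0)   # well above 0.5 for an average respondent
p_hard = icc_2pl(0.0, a=2.0, b=1.0)    # well below 0.5 for an average respondent

# At theta equal to the difficulty b, the success probability is exactly 0.5:
# this is the inflexion point of the S-shaped curve.
print(icc_2pl(-1.0, a=0.5, b=-1.0))  # 0.5
```

Shifting b moves the curve left or right along the theta axis (easier or harder item), while increasing a steepens the curve around the inflexion point, as the abstract describes.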
Classically, the difficulty corresponds to the location of the inflexion point of the curve (the further left this point lies on the theta axis, the easier the item), and the discrimination corresponds to the steepness of the curve at the inflexion point (the steeper the curve, the more discriminating the item). The information given by an item is defined as the precision of measurement of the estimated ability and depends on the item parameters as well as on the ability level (see Figure 1, right panel). It is inversely related to the standard error of the ability estimate.
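Under the same 2PL assumption, the item information function is I(theta) = a² · P(theta) · (1 − P(theta)), which peaks where P = 0.5, i.e. at theta = b; the parameter values below are illustrative, not from the paper:

```python
import math

def icc_2pl(theta, a, b):
    """2PL success probability at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P).
    Largest where P = 0.5, i.e. at theta = b."""
    p = icc_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def standard_error(theta, a, b):
    """Standard error of the ability estimate from this item alone:
    inversely related to the square root of the information."""
    return 1.0 / math.sqrt(item_information(theta, a, b))

a, b = 1.5, 0.0                        # assumed discrimination and difficulty
grid = [x / 10.0 for x in range(-40, 41)]
peak = max(grid, key=lambda t: item_information(t, a, b))
print(peak)                             # 0.0 -- information peaks at theta = b
print(item_information(b, a, b))        # a^2 / 4 = 0.5625
```

Summing such item information curves across a test yields the test information function; its maximum marks the ability range where the test measures most precisely, which is what the paper compares with the selection cut-off scores.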

Contributor: Laurence Porte
Submitted on: Friday, 18 July 2014 - 15:32:57
Last modified on: Tuesday, 19 October 2021 - 11:02:49
Long-term archiving on: Thursday, 20 November 2014 - 18:12:20


Publisher files allowed on an open archive


  • HAL Id : hal-01022484, version 1



Nadine Matton, Michel Veldhuis, Stéphane Vautier. An Item Response Theory (IRT) approach to check correspondence between cut-off scores and maximal test information in French pilot selection. EAAP 2010, 29th Conference of the European Association for Aviation Psychology, Sep 2010, Budapest, Hungary. pp 147-151. ⟨hal-01022484⟩


