
Item response theory

In psychometrics, item response theory (IRT), also known as latent trait theory, strong true score theory, or modern mental test theory, is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. Unlike simpler alternatives for creating scales and evaluating questionnaire responses, IRT does not assume that each item is equally difficult. This distinguishes it from, for instance, Likert scaling, in which 'All items are assumed to be replications of each other or in other words items are considered to be parallel instruments' (p. 197). By contrast, item response theory treats the difficulty of each item (summarized by its item characteristic curve, or ICC) as information to be incorporated in scaling items.

IRT is based on the application of related mathematical models to testing data. Because it is often regarded as superior to classical test theory, it is the preferred method for developing scales in the United States, especially when optimal decisions are demanded, as in so-called high-stakes tests such as the Graduate Record Examination (GRE) and the Graduate Management Admission Test (GMAT).

The name item response theory reflects the theory's focus on the item, as opposed to the test-level focus of classical test theory. IRT thus models the response of each examinee of a given ability to each item in the test. The term item is generic, covering all kinds of informative items: multiple-choice questions with correct and incorrect responses, statements on questionnaires that allow respondents to indicate a level of agreement (a rating or Likert scale), patient symptoms scored as present or absent, or diagnostic information in complex systems.

IRT is based on the idea that the probability of a correct/keyed response to an item is a mathematical function of person and item parameters.
(The expression “a mathematical function of person and item parameters” is analogous to Kurt Lewin’s equation B = f(P, E), which asserts that behavior is a function of the person in their environment.) The person parameter is construed as (usually) a single latent trait or dimension. Examples include general intelligence or the strength of an attitude. Parameters on which items are characterized include their difficulty (known as 'location' for their location on the difficulty range); discrimination (slope or correlation), representing how steeply the rate of success of individuals varies with their ability; and a pseudoguessing parameter, characterising the (lower) asymptote at which even the least able persons will score due to guessing (for instance, 25% for pure chance on a multiple choice item with four possible responses).
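One standard model that combines exactly these three parameters is the three-parameter logistic (3PL) model, in which the probability of a correct response for a person of ability θ on an item with difficulty b, discrimination a, and pseudo-guessing parameter c is P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))). The sketch below is a minimal illustration of this function; the function name and the parameter values are illustrative assumptions, not drawn from any real test or library.

```python
import math

def item_response_probability(theta, a=1.0, b=0.0, c=0.25):
    """Three-parameter logistic (3PL) item response function:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    theta: person ability; a: discrimination (slope);
    b: difficulty (location); c: pseudo-guessing (lower asymptote).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Tracing the item characteristic curve (ICC): the probability rises
# from the guessing floor c toward 1, most steeply near theta = b.
# c = 0.25 mirrors pure chance on a four-option multiple-choice item.
for theta in (-3, -1, 0, 1, 3):
    print(f"theta = {theta:+d}  P(correct) = {item_response_probability(theta):.3f}")
```

Note the role of each parameter: setting a = 0 flattens the curve entirely (the item then tells us nothing about ability), while larger values of a make the item discriminate more sharply between persons just below and just above the difficulty b.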

[ "Social psychology", "Statistics", "Econometrics", "Developmental psychology", "test", "Person-fit analysis", "Patient-Reported Outcomes Measurement Information System", "Fatigue Item Bank", "Mokken scale", "Polytomous Rasch model" ]