TY - JOUR
T1 - Problems with “percent correct” in conditional discrimination tasks
AU - Iversen, Iver H.
N1 - Iversen, I. H. (2016). Problems with “percent correct” in conditional discrimination tasks. European Journal of Behavior Analysis, 17(1), 69–80. https://doi.org/10.1080/15021149.2016.1139368
PY - 2016/2/16
Y1 - 2016/2/16
AB - Literature on conditional discrimination tasks indicates that interpretation of data depends on assumptions about what constitutes evidence of performance accuracy and change. According to one interpretation, performance after a procedural intervention (e.g., introduction of new stimuli in an identity matching-to-sample task) is compared to baseline performance before the intervention; if a decrease in performance is evident, then the conclusion is drawn that the intervention produced a deficit in performance. According to a different interpretation, performance from an intervention is compared not to baseline but to chance level; if performance is significantly different from chance level after the intervention, the conclusion is drawn that the intervention did not produce a deficit in performance. Evidence for presence or absence of stimulus control or concepts is extracted from such data depending on the method of comparison. In many cases, the intervention may produce a decrease in accuracy from a baseline of 90–100% to the 60–80% range, which may be significantly different from baseline but also significantly different from the 50% chance level for two-choice tasks. Thus, different, if not opposite, conclusions might be drawn from the same set of data depending on the method of analysis (e.g., a change from a baseline of near 90% correct to 70% correct after the intervention is either a performance deficit or not depending on the method of analysis). Interpretations of results from conditional discrimination tasks may profitably be clarified when data are presented more objectively as percent stimulus control rather than as percent correct.
KW - Conditional discrimination tasks
KW - data analysis
KW - "percent correct"
KW - stimulus control
DO - 10.1080/15021149.2016.1139368
M3 - Article
SN - 1502-1149
VL - 17
SP - 69
EP - 80
JO - European Journal of Behavior Analysis
JF - European Journal of Behavior Analysis
IS - 1
ER -