CARD Research Team Publishes Study on Comparison of Data Collection Methods in a Behavioral Intervention Program

Los Angeles, CA - February 08, 2010 A Center for Autism and Related Disorders, Inc. (CARD) research study, “Comparison of Data Collection Methods in a Behavioral Intervention Program,” finds that collecting data on all trials versus only the first trial in a block of ten trials during discrete trial teaching shows no difference in the identification of mastery or maintenance of skills. The Journal of Applied Behavior Analysis (JABA) has published the findings in its current issue.

It’s common for ABA providers to collect data on all trials conducted with children during their behavioral intervention programs. However, a group of ABA providers has started using what is called a “cold probe” data collection method, which involves collecting data only on the first trial conducted for each of the children’s lessons. The idea is that the first trial is a good test of how the child is doing because it follows a period of no practice and no feedback on performance. All decisions about whether the child has mastered skills, and has maintained this mastery over time, are made based on first-trial data.

This study compared whether conclusions regarding mastery and maintenance of skills would differ if decisions were based on only the first trial versus on all 10 trials of data collected. Eleven children with autism participated in the study. During teaching sessions, which were 10 trials in length, data were collected on all trials. Data were then graphed as percentage correct based on the first trial only and based on all 10 trials, and the graphs were compared to determine when decisions about mastery would be made. The mastery criterion was defined as three consecutive sessions above 80% correct responding (all-trials condition) or three consecutive sessions of 100% correct responding (first-trial condition).
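
To illustrate how such mastery decisions could be computed from raw trial records, the sketch below (hypothetical Python with made-up data, not the study's analysis code) applies the two criteria described above, three consecutive sessions above 80% correct for the all-trials method and three consecutive sessions of 100% correct for the first-trial method, to the same per-session trial data.

```python
# Illustrative sketch only (not part of the published study): given per-session
# trial records, find the session at which mastery would be declared under each
# data collection method described above.

def percent_correct(trials):
    """Percentage of correct responses in a session's trials."""
    return 100.0 * sum(trials) / len(trials)

def session_of_mastery(sessions, scorer, criterion, run_length=3):
    """Return the 1-indexed session at which `run_length` consecutive
    sessions meet `criterion`, or None if mastery is never reached."""
    run = 0
    for i, trials in enumerate(sessions, start=1):
        if criterion(scorer(trials)):
            run += 1
            if run == run_length:
                return i
        else:
            run = 0
    return None

# Hypothetical data: each inner list is one 10-trial session (1 = correct).
sessions = [
    [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
]

# All-trials method: three consecutive sessions above 80% correct.
all_trials = session_of_mastery(
    sessions, scorer=percent_correct, criterion=lambda p: p > 80)

# First-trial method: three consecutive sessions in which the first (and only
# scored) trial is correct, i.e., 100% correct responding.
first_trial = session_of_mastery(
    sessions, scorer=lambda t: 100.0 * t[0], criterion=lambda p: p == 100)

print(all_trials, first_trial)
```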

Using the all-trials and first-trial methods, the mean numbers of sessions to mastery were 7.45 and 7.77, respectively, indicating little difference between the data collection methods. The mean percentages of correct responding on the target skills during maintenance probes for all children were 95.23% and 97% for the all-trials and first-trial methods, respectively, again indicating no differentiation between the data collection methods. In summary, results suggested no difference in the number of sessions in which participants displayed mastery-level performance or in the percentage of correct responding during maintenance probes, regardless of whether the all-trials or the first-trial data collection method was used.

These results differ slightly from those of previous research by Cummings and Carr (2009), who noted that mastery was identified in fewer sessions using the first-trial data collection method. However, it should be noted that in the current investigation data collection continued on the remaining nine trials during the first-trial condition (which did not occur in the study by Cummings and Carr). Thus, the current analysis also compared the two data collection procedures within each target skill.

"This additional analysis allowed us to identify that, had the alternative data collection method been used to identify mastery for each target, mastery would have been suggested earlier in more cases when the all-trials method was applied (43% of targets) than when the first-trial method was applied (18% of targets). Thus, the first-trial method was a slightly more conservative measure of length of time to achieve mastery-level performance in the current study. Despite this finding, the current results should be interpreted with caution since they might have been an artifact of using a mastery criterion of greater than 80% during the all-trials condition. Given that this mastery criterion is less stringent than the 100% criterion used for the first-trial condition, it is not surprising that mastery would be suggested sooner in more cases using a lower criterion level,” says CARD Researcher, Adel Najdowski, PhD, BCBA-D.

The above-80% criterion was used in the current investigation because it is a suggested criterion for evaluating response mastery (Anderson, Taras, & Cannon, 1996). Nevertheless, future research could compare the identification of mastery-level performance using a range of criterion levels (e.g., comparing mastery at 80%, 90%, and 100% during all trials to 100% during the first trial) to determine the impact of different criterion levels on evidence of mastery. Future research should also evaluate the extent to which collecting data on all trials, or only a subset of trials, decreases the time requirements associated with implementing discrete trial teaching programs.

"Considering the results of this study combined with the results reported by Cummings and Carr (2009), it appears as though first-trial data collection might be a promising option for assessing behavior change during DTI. However, additional research is needed to evaluate the utility of this data collection procedure," adds Najdowski.

Questions regarding this study should be directed to Dr. Jonathan Tarbox, CARD Director of Research at j.tarbox@centerforautism.com or 818.345.2345.



About the Center for Autism and Related Disorders, Inc. (CARD):

CARD is committed to science as the most objective and reliable approach to evaluating treatment for autism. CARD’s mission is to conduct empirical research on the assessment and treatment of autism and to disseminate CARD’s research findings and derived technology through publication and education of professionals and the public. While the primary focus of CARD’s research is ABA-based methods of assessment and treatment, CARD’s overall approach to research includes any topic which may hold promise for producing information that could improve the lives of individuals with autism.

In addition, CARD maintains a reputation as one of the world’s largest and most experienced organizations effectively treating children with autism, Asperger’s Syndrome, PDD-NOS, and related disorders. Following the principles of Applied Behavior Analysis (ABA), CARD develops individualized treatment plans for children worldwide.

For more information about CARD, visit www.centerforautism.com.

For more information about the CARD Research department, visit www.centerforautism.com/autism_research.
