Dynamic testing: Can a robot as tutor be of help in assessing children’s potential for learning?

Abstract

This study examined whether computerized dynamic testing utilizing a robot would lead to different patterns in children’s (aged 6–9 years) potential for learning and strategy use when solving series‐completion tasks. The robot, in a “Wizard of Oz” setting, provided instructions and prompts during dynamic testing. Dynamic training resulted in greater accuracy and more correctly placed pieces at the post‐test than repeated testing alone. Moreover, children who were dynamically trained appeared to use more heuristic strategies at the post‐test than their peers who were not trained. In general, observations showed that children were excited to work with the robot. All in all, the study revealed that computerized dynamic testing by means of a robot has much potential for tapping into children’s potential for learning and strategy use. The implications of using a robot in educational assessment are discussed further below.

KEYWORDS

computer, dynamic testing, educational assessment, inductive reasoning, robot, series completion

INTRODUCTION

The recent, considerable development of new educational technologies, involving the use of seamless technology (Liu et al., 2014), tablets, and even robots, has triggered research into the effects of implementing these materials in educational settings (André et al., 2014; Mubin, Stevens, Shahid, Al Mahmud, & Dong, 2013). Research has focused, for example, on the use of robots in education as an instructional tool for transmitting knowledge (Belpaeme, Kennedy, Ramachandran, Scassellati, & Tanaka, 2018; Chin, Hong, & Chen, 2014). In educational settings, robots can be classified based on how they are used: as a tool (technology aid), peer, tutor, or novice (Belpaeme et al., 2018; Mubin et al., 2013). The use of (personalized) robot peers or tutors in educational learning and assessment procedures has gained attention in recent years (e.g., Baxter, Ashurst, Read, Kennedy, & Belpaeme, 2017; Belpaeme et al., 2013; Belpaeme et al., 2018; Benitti, 2012; Hong, Huang, Hsu, & Shen, 2016). One form of educational assessment where the use of robots may be particularly interesting is dynamic testing.

Whereas conventional, static test procedures are characterized by testing without providing the testee with any form of feedback, dynamic testing is based on the assumption that test outcomes resulting from a scaffolded feedback procedure or intervention are more likely to provide a good indication of a person’s level of cognitive functioning than conventional, static test scores. The primary aims of research in dynamic testing have been to examine progression in cognitive abilities following training between test session(s), to consider behavior related to the individual’s potential for learning, and to gain insight into learning processes at the moment they occur (Elliott, Grigorenko, & Resing, 2010; Resing, Touw, Veerbeek, & Elliott, 2017). Dynamic test procedures differ from static ones because in a dynamic test situation testees are given (guided) instruction enabling them to show individual differences in progress when solving equivalent tasks.

The aim of the current study was to investigate whether a computerized one‐on‐one dynamic test administered by a tutor robot could allow for systematic and controlled investigation of dynamic testing outcomes. In doing so, we sought to examine the effects of receiving instruction and training from a robot on children’s changes in performance across test sessions. A major difficulty in undertaking highly interactive forms of assessment is that the assessor must try to fully engage with the child while also recording each step in the process in detail. A key advantage of computerized testing is that it may be possible to register every task‐solving step taken by the child, which would provide examiners with the opportunity to analyze the sequence of these steps. This would offer valuable information about the child’s learning progression during the dynamic process (e.g., Resing & Elliott, 2011). Computer‐assisted instruction provided by a personalized robot may also offer promising new possibilities for dynamic testing. These include using more flexible approaches to task solving, using more adaptive scaffolding procedures, and, consequently, creating a more authentic assessment environment (Huang, Wu, Chu, & Hwang, 2008; Khandelwal, 2006). Therefore, the one‐on‐one tutor robot in the present study, which had an appearance attractive to children, was designed to detect the children’s task‐solving steps, provide hints for solving the tasks, record children’s responses to that assistance in detail, and react adaptively to children’s solving behavior.
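The study does not report implementation code for this procedure. Purely as an illustration, a graduated‐prompts loop of the kind described above might be structured as follows; all function names, prompt texts, and logging fields here are hypothetical, not the study’s actual materials:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical graduated prompts, ordered from general (metacognitive)
# to specific (modelling the solution); the actual study materials differ.
PROMPTS = [
    "Look at the whole series. What changes from puppet to puppet?",
    "Compare the pieces of the first two puppets. Which pieces repeat?",
    "The body pieces follow a repeating pattern. Which colour comes next?",
    "Watch me place the first piece; now you try the following ones.",
]

def robot_say(text: str) -> None:
    """Stand-in for the robot's speech output."""
    print(f"[robot] {text}")

@dataclass
class ItemLog:
    """Per-item record of every attempt and hint given."""
    item_id: int
    attempts: List[str] = field(default_factory=list)
    prompts_given: int = 0
    solved: bool = False

def run_training_item(item_id: int,
                      get_answer: Callable[[], str],
                      is_correct: Callable[[str], bool]) -> ItemLog:
    """Let the child attempt an item, escalating one prompt level after
    each incorrect attempt, and log each step for later analysis."""
    log = ItemLog(item_id)
    for prompt in [None] + PROMPTS:  # the first attempt is unprompted
        if prompt is not None:
            robot_say(prompt)
            log.prompts_given += 1
        answer = get_answer()        # the child's piece placements
        log.attempts.append(answer)
        if is_correct(answer):
            log.solved = True
            robot_say("Well done!")
            break
    return log
```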

DISCUSSION

The present study focused on the potential of using a pre-programmed table‐top robot in a Wizard of Oz setting as an educational assistant and training tool for primary school children. In line with previous studies on children’s seriation and analogical reasoning skills (e.g., Freund & Holling, 2011; Resing & Elliott, 2011; Stevenson et al., 2013), our study showed that task performance generally improved when children were tested twice, but that the degree of progression varied, depending on whether or not children were dynamically trained on the task by the robot (e.g., Campione & Brown, 1987; Passig et al., 2016; Resing et al., 2011, 2012, 2017). Children who were dynamically tested and trained by the robot showed significantly greater progression, both in their accuracy of task solving and in the more fine‐grained number of correctly placed puzzle pieces, than children who were only statically tested by the robot. We believe we can safely conclude that the intervention provided to the children by our friendly table‐top robot led to these differences in progression, because the same tasks and instructions were tested and positively evaluated in other studies (e.g., Resing & Elliott, 2011; Resing et al., 2017; Veerbeek et al., 2019). Of course, a future study in which a second control group solves the items used in both training sessions in a static, unguided way would provide the extra information necessary to further confirm this conclusion. Past dynamic testing research with children in these three conditions already provides further support for this conclusion (e.g., Resing, 1993, 2000). Another useful direction for future studies concerns investigating the potential advantages of robot‐administered as opposed to computerized or human‐administered dynamic tests, to examine whether robot‐administered dynamic testing has benefits beyond those of human and computerized testing.

Interestingly, children’s progression increased, whereas the number of prompts they needed did not decrease from the first to the second training. This could be partially due to the difficulty level of the series‐completion items; they were deliberately designed to be rather difficult. Additional or longer training periods might result in children showing further progression in task solving, as well as needing fewer prompts or scaffolds while being trained. Children showed large individual differences in the number of prompts they needed, and further research with a revised design might provide more information here. Another potential reason might be that the children experienced the robot as such a pleasant companion that they wanted to continue receiving prompts and scaffolds from it.

The scaffolding and graduated‐prompts principles behind the training given by the robot were specifically designed to tap into children’s zone of proximal development (Serholt & Barendregt, 2016; Vygotsky, 1978). When we explored the variation in progression in task solving in relation to the outcomes, large individual differences were detected. The data regarding learner groups are, of course, rather speculative but, considering the small subgroups of children, promising, and they highlight the potential added value of individualized forms of dynamic testing, in particular with computerized robot technology. Outcomes of an extended study will be needed to corroborate these preliminary findings.

The current study shows that the dynamic training provided by the robot also differentially influenced children’s behavioral strategy use, as measured by the time children needed to actually start solving each task item. Unexpectedly, trained children made less use of an analytical strategy after training than their peers who did not receive training; the untrained children appeared to use an analytical strategy more frequently during the post‐test. Nevertheless, our first, global inspection of the log files revealed that trained children placed the puppet blocks more systematically: they first selected small piles of identical blocks, for example, three green ones for the body of the puppet; then assembled these three pieces into the body; and finally placed that block on the puppet frame. Untrained children, by contrast, frequently seemed to use quick trial‐and‐error behavior or solved the puzzle piece by piece. Perhaps the unexpected findings regarding the increase in heuristic strategy use by the trained children reflect familiarity with the task, as a result of which these children required less preparation time. This finding underlines that we cannot rely solely on reaction time data in relation to children’s behavioral strategy use (e.g., Kossowska & Nȩcka, 1994); future research with a larger sample size will be necessary to substantiate our findings and inferences with regard to children’s strategy use. Step‐by‐step analysis of children’s task‐solving sequences would be one possible option (e.g., Resing et al., 2017; Veerbeek et al., 2019).
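As a purely illustrative sketch of what such step‐by‐step analysis of logged placements could look like, the following distinguishes grouping, piece‐by‐piece, and trial‐and‐error behavior; the event fields and thresholds are our own assumptions, not the study’s coding scheme:

```python
from typing import List, NamedTuple

class Move(NamedTuple):
    """Simplified stand-in for one logged placement event."""
    colour: str      # colour of the placed block
    cell: int        # target position on the puppet frame
    retracted: bool  # whether the block was later removed again

def classify_strategy(moves: List[Move]) -> str:
    """Illustrative heuristics: grouping shows long runs of same-coloured
    placements with few retractions; trial-and-error shows many retractions."""
    if not moves:
        return "no data"
    retraction_rate = sum(m.retracted for m in moves) / len(moves)
    longest_run = run = 1
    for prev, cur in zip(moves, moves[1:]):
        run = run + 1 if cur.colour == prev.colour else 1
        longest_run = max(longest_run, run)
    if retraction_rate > 0.3:          # threshold chosen for illustration
        return "trial-and-error"
    if longest_run >= 3:               # e.g., three body pieces in a row
        return "grouping (systematic)"
    return "piece-by-piece"

# Example: three green body pieces placed consecutively, then a yellow one.
demo = [Move("green", 3, False), Move("green", 4, False),
        Move("green", 5, False), Move("yellow", 0, False)]
print(classify_strategy(demo))  # -> "grouping (systematic)"
```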

The results further support our idea that subgroups can be discerned that differ on the basis of their changing strategy use, particularly in the case of the trained children, in combination with information regarding the number of prompts children need during training and their progression in accuracy and strategy use. The findings lend further support to the idea that dynamic testing outcomes can be helpful for educational assessors, because they provide process information regarding inter‐ and intra‐individual variability in children’s use of strategies when learning to solve tasks.

In the current study, the robot provided prompts to the child when needed, but these were not yet optimally tailored to the at times very idiosyncratic mistakes that children occasionally made during training. Further research is necessary to ensure that the robots of tomorrow provide highly sophisticated and differentiated interaction responses in assessment contexts. With regard to the cognitive domain studied here, future research should be geared to the fine‐tuning of prompts and dynamic scaffolds, their adaptation to specific groups of children, the examination of specific, systematic task analyses, and consideration of the patterns of mistakes and idiosyncratic ways of processing that children show when solving cognitive tasks (e.g., Granott, 2005; Khandelwal, 2006; Renninger & Granott, 2005). In the future, for example, the robot could be programmed to allow more variation and flexibility in its preprogrammed scenarios for providing feedback and instruction to individual children. Although we are aware that realizing all these requirements is a challenge, these developments should provide exciting possibilities for obtaining further insight into children’s differing learning paths during dynamic testing, or in relation to instruction in the classroom. Although the robot still had some obvious limitations, such as repeating instructions in exactly the same way, and although it was operated in a “Wizard of Oz” setting, children interacted with the robot freely, for example, providing it with feedback, and were highly responsive and motivated to work with it, even after all the assessment and training sessions. The vast majority of children did not even seem to notice that the examiner was seated at the back of the room.

A particular complication of dynamic testing, in particular when individual strategy patterns and changes are the focus of assessment, is that detailed study of children’s processing, including their responses to training, can easily result in an overload of information (derived from spoken, written, or videotaped sources) that is too complex and time‐consuming to interpret and report. A personalized robot teaching assistant could certainly help to overcome this difficulty, especially if it were able to visually track the tangible materials children freely place on the table. We think this is a key and unique aspect of using robotics in psychological and educational assessment, because both the development and the education of higher cognitive abilities have their origin in young children’s sensory‐motor activities (e.g., Timms, 2016), and the robot, in combination with the materials developed here, matches these activities well. We anticipated and found that such technology can assist us in assessing and examining task‐solving processes in more detail, thereby enabling us to inspect in depth more of the information processing that takes place during the course of training, one of the key elements of process‐oriented dynamic testing (Elliott, Grigorenko, & Resing, 2010; Jeltova et al., 2011; Sternberg & Grigorenko, 2002). As most empirical studies that discuss the effects of robots as teaching tools involve learning closely related to the field of robotics, our findings have significant potential and should open further opportunities for the broader field of learning complex reasoning skills (Benitti, 2012).
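Purely as an illustration of how such process data could be kept manageable, the robot’s software might write one machine‐readable record per task‐solving event, so that sessions can be replayed and aggregated rather than hand‐coded from video; the schema below is hypothetical:

```python
import json
import time
from typing import TextIO

def log_event(stream: TextIO, child_id: str, session: int,
              event: str, **details) -> None:
    """Append one JSON record per task-solving step (e.g., a piece placed,
    a prompt given, an item solved) to a newline-delimited log file."""
    record = {
        "t": time.time(),     # timestamp of the event
        "child": child_id,
        "session": session,
        "event": event,
        **details,            # free-form fields per event type
    }
    stream.write(json.dumps(record) + "\n")

# Example usage during a session:
with open("session_log.jsonl", "a", encoding="utf-8") as f:
    log_event(f, "child_017", 2, "piece_placed", colour="green", cell=3)
    log_event(f, "child_017", 2, "prompt_given", level=1)
    log_event(f, "child_017", 2, "item_solved", item=5, prompts=1)
```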

We are aware that much effort in terms of both hardware and software development will be necessary before educational robots are ready to assist teachers and educational psychologists in the classroom of tomorrow (e.g., Timms, 2016). We think, however, that the results of the current study reveal that even a simplified version of a real robot, as a result of its instructive teaching and patience, can stimulate children in learning to solve complex reasoning tasks, with a meaningful impact on cognitive growth (Mubin et al., 2013). We noted that the children enjoyed the testing periods very much and were eager to work with the robot during all assessment sessions. It would be valuable to study whether children in the control condition also learned from assessment by the robot, as they too were eager to leave the classroom for a next session with it. An extension of the study design with a focus on the novelty aspect of a robot‐administered dynamic test will therefore be necessary in the future, investigating the effects of repeated interactions with the robot as possible influences on the outcomes. Possible examples of such influences include being distracted by the robot or the magnitude of the cognitive load posed by robot‐administered tasks. The focus of the current study was on quantitative analysis of children’s (cognitive) changes brought about by being assessed by a robot. Future studies could focus more on qualitative analyses, for instance, by analyzing qualitative differences in children’s approaches to solving tasks.

The merits of using a robot as an assistant in dynamic testing are, of course, intriguing. Earlier studies (Resing & Elliott, 2011; Resing et al., 2017) have already highlighted the benefits of using an electronic console for dynamic testing. Our study replicated the potential of electronic technology for dynamic testing but also introduced the robot as a helpful co‐assessor, whereby the children could freely play with the tangibles, organizing and moving them. The robot proved to be an enjoyable dynamic companion, mostly because it possessed both verbal and nonverbal interaction qualities, with, for the moment, the examiner acting as the Wizard of Oz in the background. Earlier research also showed that the use of a preprogrammed computerized interface for offering the prompts and scaffolds has no discernible negative consequences when compared with prompts and scaffolds provided by an examiner (Stevenson et al., 2011; Tzuriel & Shamir, 2002). Because the task prompts and scaffolds remained the same across studies, we think that these earlier findings are generalizable to the outcomes of the current study. Nevertheless, it will be necessary to compare an assistant‐robot assessor with both a human and a (2D) computer administration of the dynamic test, to further validate the added value of robot‐administered dynamic testing. Our recommendation for future studies would be to continue to explore the possibilities of preprogrammed robot instructions to further reveal the learning processes unfolding during dynamic testing. This would further open the way to tailored assessment of individual children’s potential for learning (Clabaugh, Ragusa, Sha, & Matarić, 2015; Granott, 2005) and a more sophisticated understanding of children’s differential development, in ways that can directly impact their learning.

About KSRA

The Kavian Scientific Research Association (KSRA) is a non-profit research organization, founded in December 2013, that provides research and educational services. Its members first formed a virtual group on the Viber social network; this group became the core of the Kavian Scientific Association, with these members as founders. These individuals, led by Professor Siavosh Kaviani, decided to launch a scientific/research association with an emphasis on education.

The KSRA research association, as a non-profit research firm, is committed to providing research services in the field of knowledge. The main beneficiaries of this association are public or private knowledge-based companies, students, researchers, professors, universities, and industrial and semi-industrial centers around the world.

Our main services are based on education for all people across the world. We want to integrate research and education, and we believe education is a fundamental human right, so our services are concentrated on inclusive education.

The KSRA team partners with under-served local communities around the world to improve access to, and the quality of, knowledge-based education; to amplify and augment learning programs where they exist; and to create new opportunities for e-learning where traditional education systems are lacking or non-existent.

FULL Paper PDF file:

Dynamic testing: Can a robot as a tutor be of help in assessing children's potential for learning?

Bibliography

Authors

Wilma C.M. Resing, Developmental and Educational Psychology, Leiden University, Leiden, the Netherlands
Merel Bakker, Instruction Psychology and Technology, Catholic University Leuven, Leuven, Belgium
Julian G. Elliott, School of Education, Durham University, Durham, UK
Bart Vogelaar, Developmental and Educational Psychology, Leiden University, Leiden, the Netherlands

Year

2019

Title

Dynamic testing: Can a robot as a tutor be of help in assessing children’s potential for learning?

Published in

Journal of Computer Assisted Learning, published by John Wiley & Sons Ltd

DOI

10.1111/jcal.12358

PDF reference and original file: Click here

 


Nasim Gazerani was born in 1983 in Arak. She holds a Master's degree in Software Engineering from UM University of Malaysia.


Professor Siavosh Kaviani was born in 1961 in Tehran. He held a professorship and holds a Ph.D. in Software Engineering from the QL University of Software Development Methodology, as well as an honorary Ph.D. from the University of Chelsea.


Somayeh Nosrati was born in 1982 in Tehran. She holds a Master's degree in artificial intelligence from Khatam University of Tehran.